# ask-ai
**User:** Can I have my own custom knowledge base that the LLM generates its responses from? If yes, how do I do it?
**AI Assistant:** Thanks for reaching out, Vivek. I'm an AI assistant and I'm here to help. Give me a moment while I research this topic.

Yes, you can have a custom knowledge base that the LLM uses to generate responses. This is typically implemented through a method called Retrieval-Augmented Generation (RAG). With RAG, the model can search your private knowledge base for information to accurately answer user queries. LiveKit provides examples of RAG implementations, such as:

- **LlamaIndex RAG**: A voice AI agent that uses LlamaIndex for RAG to answer questions from a knowledge base.
- **LiveKit Docs RAG**: An agent that can answer questions about LiveKit with lookups against the docs website.

You can implement RAG by installing any Python package or adding custom code to your agent to connect to any database or API needed for your knowledge base. If you need further guidance on setting this up, feel free to ask!

Sources: External data and RAG | LiveKit Docs | Recipes and examples for voice AI and more | LiveKit Docs
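To make the RAG idea concrete, here is a minimal sketch of the retrieval side using LlamaIndex. The `./data` folder and the sample query are placeholders for your own knowledge base, and LlamaIndex defaults to OpenAI for embeddings and the LLM, so an API key is assumed; the full voice-agent wiring lives in the LlamaIndex RAG example mentioned above.

```python
# Minimal sketch of the retrieval half of RAG with LlamaIndex
# (assumes `pip install llama-index`; ./data and the sample query are
# placeholders for your own documents and questions).
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

# Load your private documents and build a vector index over them.
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)

# At question time, retrieve the relevant passages and let the LLM answer
# grounded in them instead of its general training data.
query_engine = index.as_query_engine()
answer = query_engine.query("What does our product's refund policy say?")
print(answer)
```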
**User:** It would work in a voice conversation with the AI as well, right? Any example of this?
**AI Assistant:** Absolutely! Voice conversations with AI are totally possible, and I've got a good example for you. The "LlamaIndex RAG" implementation is a voice AI agent that uses LlamaIndex's Retrieval-Augmented Generation (RAG) to pull answers from a custom knowledge base. Want to check it out? Here's the link: LlamaIndex RAG Example.

LlamaIndex is super handy: it's a framework that connects custom data to language models, which makes building a voice conversation AI on top of your own knowledge base much easier. Curious to learn more? I'm here to help, so just ask!

Sources: External data and RAG | LiveKit Docs | RAG | LiveKit Docs
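For the voice side, the pattern in LiveKit's RAG examples is to look up the knowledge base right before each LLM turn. Below is a rough sketch assuming the livekit-agents 0.x `VoicePipelineAgent` API with its `before_llm_cb` hook and the Silero, Deepgram, and OpenAI plugins; treat the names and signatures as assumptions and follow the linked LlamaIndex RAG example for the authoritative version.

```python
# Rough sketch only: assumes the livekit-agents 0.x VoicePipelineAgent API
# and its before_llm_cb hook, as used in LiveKit's RAG examples. Check the
# linked example for the exact, current code.
from livekit.agents import llm
from livekit.agents.pipeline import VoicePipelineAgent
from livekit.plugins import deepgram, openai, silero


def retrieve_context(question: str) -> str:
    # Placeholder for your knowledge-base lookup, e.g. the LlamaIndex
    # query_engine.query(question) call shown earlier.
    return "…relevant passages from your knowledge base…"


async def enrich_with_rag(agent: VoicePipelineAgent, chat_ctx: llm.ChatContext):
    # Runs right before each LLM turn: take the user's latest utterance,
    # look it up, and inject the retrieved text as extra context.
    user_question = chat_ctx.messages[-1].content
    chat_ctx.append(
        role="system",
        text=f"Answer using only this knowledge base context:\n{retrieve_context(user_question)}",
    )


# Constructed inside your agent worker's entrypoint; start it with
# agent.start(...) as shown in the voice agent quickstart.
agent = VoicePipelineAgent(
    vad=silero.VAD.load(),
    stt=deepgram.STT(),
    llm=openai.LLM(),
    tts=openai.TTS(),
    before_llm_cb=enrich_with_rag,
)
```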
**User:** Let's say I want to create an AI that talks to people and answers their queries based only on the custom knowledge base we created, nothing outside of it. It will be voice communication. How much will it cost me for 24 hours of conversation in a month?
**AI Assistant:** Hey there! I totally get that you're trying to understand the pricing for voice AI conversations. The truth is, pinpointing an exact cost isn't straightforward: it depends on a few factors that can push the price up or down.

Mind if I help you figure this out? If you could share a bit more about what you're looking to do, that would be awesome. I'm curious about:

- How many people or calls you're expecting each month
- About how long you expect conversations to typically last
- Any features or integrations you're planning to use

The more details you can share, the better I can help you find exactly what you need and make sure we get you the most budget-friendly solution possible. Sound good?
**User:** There can be around 50 people who talk to the AI. All will be separate calls, around 10-15 minutes each.
**AI Assistant:** Hey there! Thanks for sharing your project details. Let me break down the costs for your 50-user voice AI conversations in a way that's clear and friendly.

Let's talk pricing components:

- **Connection minutes**: You're looking at 50 users chatting for about 15 minutes each, which means roughly 750 total connection minutes. The exact cost depends on the LiveKit plan you choose, a bit like picking the right phone plan for your needs.
- **Bandwidth usage**: This gets a bit technical, but it depends on the audio quality. You can calculate total bandwidth with this formula: `(bitrate in Bps × duration in seconds) / 1024³ × number of participants`. The cost is based on how much data you send.
- **Transcoding minutes**: If your voice AI needs transcoding, factor in those minutes: `transcoding minutes × cost per transcoding minute`.
- **Additional costs**: Don't forget about external services like Deepgram or OpenAI, and server resources. LiveKit recommends 4 cores and 8 GB of memory for every 25 concurrent sessions, so your voice AI has a comfy, powerful workspace.

Want to nail down the exact cost? You'll need to pick a specific LiveKit pricing plan and do some quick calculations based on its rates for connection minutes, bandwidth, and transcoding (see the worked sketch below). I'd be happy to walk you through the pricing page or help you crunch those numbers. Just say the word!

Sources: Understanding LiveKit Cloud Pricing | Deploying to production | LiveKit Docs
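To make the math tangible, here is a back-of-the-envelope calculation for this exact scenario (50 separate calls of about 15 minutes each). Every unit price below is a placeholder, not a real LiveKit, Deepgram, or OpenAI rate; substitute the numbers from each provider's pricing page.

```python
# Back-of-the-envelope estimate for 50 separate calls of ~15 minutes each.
# Every unit price below is a PLACEHOLDER, not a real LiveKit/Deepgram/OpenAI
# rate: plug in the figures from the actual pricing pages.

calls_per_month = 50
minutes_per_call = 15                      # upper end of the 10-15 minute range
connection_minutes = calls_per_month * minutes_per_call   # 750, as noted above
# (if the agent itself is billed as a second participant, double this)

# Bandwidth, using the formula quoted above:
# (bitrate in Bps x duration in seconds) / 1024^3 x number of participants
audio_bitrate_bps = 32_000                 # ~32 kbps Opus audio (assumption)
bitrate_Bps = audio_bitrate_bps / 8
duration_s = calls_per_month * minutes_per_call * 60
participants = 2                           # caller + agent
bandwidth_gb = (bitrate_Bps * duration_s) / (1024 ** 3) * participants

# Placeholder unit prices.
price_per_connection_minute = 0.0004
price_per_gb = 0.12

livekit_estimate = (connection_minutes * price_per_connection_minute
                    + bandwidth_gb * price_per_gb)

print(f"connection minutes: {connection_minutes}")
print(f"bandwidth: {bandwidth_gb:.2f} GB")
print(f"rough LiveKit cost: ${livekit_estimate:.2f}/month "
      f"(STT, TTS, and LLM usage billed separately)")
```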
**User:** No cool features as of now, apart from maybe something like: if the user interrupts the AI, the AI stops responding and listens to the user.
**AI Assistant:** Hey there! Great question about interruption handling. I totally understand wanting a more natural conversation flow with the AI. Let me break down how we can make this work in a way that feels smooth and intuitive.

- `allow_interruptions`: Think of this as a conversational switch: you can either keep it open (letting interruptions happen) or turn it off completely.
- `interrupt_speech_duration`: This helps make sure interruptions are real, intentional moments, not just background noise.
- `interrupt_min_words`: Another way to distinguish between accidental and purposeful interruptions.

The coolest part? There's an event called `agent_speech_interrupted` that basically tells the system "Hey, someone wants to jump in!", so the AI can gracefully pause and start listening. A rough configuration sketch follows below.

Sounds like something you're excited about implementing? I'd be happy to walk you through the details and help you set this up. Just let me know what specific questions you have!

Sources: VoicePipelineAgent | LiveKit Docs
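Here is what those settings look like in code. This is a sketch assuming the livekit-agents 0.x `VoicePipelineAgent` constructor and the event name described above; confirm the exact parameter names and defaults against the VoicePipelineAgent docs before relying on them.

```python
# Sketch of the interruption settings above, assuming the livekit-agents 0.x
# VoicePipelineAgent constructor and event names; verify parameters and
# defaults against the VoicePipelineAgent docs.
from livekit.agents.pipeline import VoicePipelineAgent
from livekit.plugins import deepgram, openai, silero

agent = VoicePipelineAgent(
    vad=silero.VAD.load(),
    stt=deepgram.STT(),
    llm=openai.LLM(),
    tts=openai.TTS(),
    allow_interruptions=True,        # let callers barge in while the agent speaks
    interrupt_speech_duration=0.5,   # seconds of detected speech before it counts
    interrupt_min_words=0,           # optionally require N transcribed words first
)


@agent.on("agent_speech_interrupted")
def on_interrupted(*args):
    # The pipeline stops the agent's speech and returns to listening;
    # hook in here only if you want to log or react to interruptions.
    print("caller interrupted the agent")
```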
**User:** I think I can use the LiveKit free plan for now, and for the LLM, what if I use Gemini?
**AI Assistant:** I don't have the answer you're looking for. You could also try asking your question:

- in one of the other Slack channels, or
- to https://deepwiki.com/livekit/livekit_composite, which is trained on all LiveKit source code

If you find the answer, please post it here to help others!
**User:** How much will it cost?

> Additional costs: Don't forget about external services like Deepgram or OpenAI, and server resources. LiveKit recommends 4 cores and 8 GB of memory for every 25 concurrent sessions.