# ask-ai
**User:**
I have a non-technical question. In the LiveKit free plan I see 5,000 WebRTC participant minutes included. Is that counted per participant, and does it renew every month on the same free plan? Can you provide an example with 2 or 3 participants showing how these minutes are calculated each month, including agent participation?
**Assistant:**
Thanks for reaching out, Mitesh. I'm an AI assistant and I'm here to help. Give me a moment while I research this topic.
Thanks so much for reaching out! I'd love to help you understand how WebRTC participant minutes work in the LiveKit free plan. These minutes are counted based on the total time each participant spends connected to LiveKit Cloud through the LiveKit SDK. The free plan includes 5,000 participant minutes, which you can use across all your projects, and these minutes refresh every month.

To give you a clearer picture:

• A session with 2 participants (including the agent) that lasts 30 minutes uses 60 participant minutes (30 minutes × 2 participants).
• A session with 3 participants that lasts 20 minutes also uses 60 participant minutes (20 minutes × 3 participants).

If anything's still unclear or if you'd like me to walk you through more examples, just let me know. I'm here to help!

Sources: Quotas and limits | LiveKit Docs | Docs::Home | LiveKit Pricing | A Tale of Two Protocols: WebRTC vs. HLS for Live Streaming
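To make the arithmetic concrete, here's a tiny sketch; the function name is just for illustration, not a LiveKit API:

```typescript
// Participant minutes accrue per connected participant, and agents are
// metered like any other participant.
function participantMinutes(sessionMinutes: number, participantCount: number): number {
  return sessionMinutes * participantCount;
}

console.log(participantMinutes(30, 2)); // 60 -> 1 user + 1 agent for 30 min
console.log(participantMinutes(20, 3)); // 60 -> 3 participants for 20 min
console.log(5000 / participantMinutes(30, 2)); // ~83 such 2-person sessions fit in one month's free quota
```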
**User:**
If I use the open-source version and host the LiveKit server locally or on a personal server, does this limit still hold true, given that I'm no longer connecting through LiveKit Cloud? Also, can you help me understand the benefits of LiveKit Cloud (apart from the analytics) over hosting it locally or on a personal server?
**Assistant:**
I'm glad you asked about the 5,000 participant minute limit! Just to clarify, this limit doesn't apply if you're hosting the LiveKit server locally or on your own personal server; it's specific to LiveKit Cloud's free plan. When you choose to self-host, you're using the open-source version of LiveKit, which is completely free and doesn't have any participant minute quotas or metering.

Now, if you're curious about the benefits of LiveKit Cloud compared to self-hosting, here are some key advantages that might help you decide:

• Zero Operational Overhead: LiveKit Cloud takes care of deployment, scaling, and maintenance, so you can put your energy into building your application without worrying about the backend.
• Global Edge Network: Your users connect to the nearest server, ensuring minimal latency thanks to a globally distributed infrastructure.
• Unlimited Scalability: The Cloud supports up to 100,000 participants per room by default, which is quite a leap from the ~3,000 limit on self-hosted servers.
• High Reliability: A guaranteed 99.99% uptime with automatic failover and no single point of failure, so your service stays smooth and dependable.
• Built-in Analytics: You get comprehensive dashboards to monitor usage and performance, something that's not available if you're self-hosting.
• Seamless Integration: The APIs and SDKs are identical, making it easy to switch between self-hosted and cloud deployments whenever you need.

If you'd like, I'm here to provide more details on any of these points or help you figure out what's best for your situation!

Sources: Docs::Home | Quotas and limits | LiveKit Docs | LiveKit Pricing | Announcing LiveKit Cloud | delay-handling/raw_data.txt
**User:**
What kind of VAD is supported in the open-source version, and is it on par with the cloud version?
**Assistant:**
I'm glad you asked! In the open-source LiveKit stack, voice activity detection runs in the agents framework rather than in the media server itself: it uses Silero VAD as its main VAD model, along with a custom LiveKit turn detector to help with smoother turn-taking. These same models are used with LiveKit Cloud, so the core VAD functionality is consistent across both versions.

That said, LiveKit Cloud comes with some extra perks like improved noise cancellation, easy integration with plugins, and infrastructure optimizations that boost voice AI performance. These enhancements make it a great choice for production-scale deployments, while the open-source version offers you the flexibility to self-host and customize as you wish.

If you'd like, I'd be happy to share more details about VAD customization or help you explore deployment options. Just let me know!

Sources: livekit-agents/livekit/agents/voice/audio_recognition.py | Speech & Audio Processing | livekit/livekit_composite | DeepWiki | Voice activity detection (VAD) parameters | LiveKit Docs | How does end-of-utterance detection work in conversations? | plugins/silero/src/vad.ts
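If it helps, here's a minimal sketch of loading Silero VAD in a Node agent, assuming the agents-js Silero plugin (`@livekit/agents-plugin-silero`); treat the exact names as version-dependent and check the plugin docs:

```typescript
// Load the open-source Silero VAD model inside an agent worker.
// The same model runs whether the agent connects to a self-hosted
// LiveKit server or to LiveKit Cloud.
import { VAD } from '@livekit/agents-plugin-silero';

export async function prewarm(): Promise<VAD> {
  const vad = await VAD.load();
  // Hand the loaded VAD to your agent's voice pipeline/session so it can
  // gate speech-to-text on detected speech segments.
  return vad;
}
```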
**User:**
I'd like to understand what you mean by "improved noise cancellation". Can you be more specific, and what is the tech behind it?
**Assistant:**
I'm excited to share that the improved noise cancellation in LiveKit Cloud is powered by AI models licensed from Krisp. These are designed to remove background noise and boost audio quality in real time, making your experience much smoother. Here's a quick overview of the models:

• Standard Noise Cancellation (NC): This removes common background noises like traffic, typing, or music, helping your voice come through clearly.
• Background Voice Cancellation (BVC): This one is great for suppressing other human voices in the background, which is perfect if you're using voice AI apps that need precise speech-to-text transcription and turn detection.

To keep your privacy intact and ensure minimal delay, these models process audio locally, either on the client or server side. For instance, BVC can be activated in agent code to clean up incoming audio before it reaches speech-to-text models, while on the frontend, standard NC tidies up outgoing audio before it's sent out.

If you'd like, I'd be more than happy to walk you through how to enable these features or even help you test them with your setup. Just let me know!

Sources: Docs::Home | Enhanced noise cancellation | LiveKit Docs | SIP overview | LiveKit Docs
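As a frontend illustration, here's a minimal sketch of attaching the Krisp filter to the local microphone track with `livekit-client`, assuming the `@livekit/krisp-noise-filter` package from LiveKit's enhanced noise cancellation docs (it requires a LiveKit Cloud project, since the models are licensed):

```typescript
import { LocalAudioTrack, Room, RoomEvent, Track } from 'livekit-client';
import { KrispNoiseFilter } from '@livekit/krisp-noise-filter';

const room = new Room();

// Once the microphone track is published, attach the Krisp processor so
// outgoing audio is denoised locally, before it ever leaves the browser.
room.on(RoomEvent.LocalTrackPublished, async (publication) => {
  const track = publication.track;
  if (publication.source === Track.Source.Microphone && track instanceof LocalAudioTrack) {
    await track.setProcessor(KrispNoiseFilter());
  }
});
```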
**User:**
Are there any open-source alternatives available (apart from Krisp) that can be used in the frontend to filter noise at the source, using BVC or an equivalent?
**Assistant:**
Thank you for reaching out! At the moment, there aren't any open-source alternatives to Krisp that offer Background Voice Cancellation (BVC) or similar features directly on the frontend. The open-source WebRTC implementation does include basic noise suppression (the `noiseSuppression` constraint), but it doesn't remove background voices and isn't as effective as the AI-powered models Krisp provides.

If Background Voice Cancellation is a must-have for you, it's currently available through LiveKit's backend agent processing using Python or Node.js. Another option is to explore external open-source libraries like RNNoise, or deep learning models available on platforms like Hugging Face. Just a heads-up: these would need some testing and integration on your end to see how well they work for BVC.

Please feel free to reach out if you'd like some guidance on implementing BVC on the backend or if you want to dive deeper into noise suppression options. I'm here to help!

Sources: components/room.tsx | examples/src/comprehensive_test.ts | AI voice agents | LiveKit Docs | Noise & echo cancellation | LiveKit Docs | main.py
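For reference, the built-in option looks like this; it uses the standard `getUserMedia` constraint, and the same flag can also be passed through `livekit-client` capture options:

```typescript
// Browser built-in noise suppression via standard getUserMedia constraints.
// It attenuates steady background noise (fans, hum) but will not remove
// competing human voices the way BVC does.
async function captureWithBuiltInNoiseSuppression(): Promise<MediaStream> {
  return navigator.mediaDevices.getUserMedia({
    audio: {
      noiseSuppression: true,
      echoCancellation: true,
      autoGainControl: true,
    },
  });
}

// Equivalent capture options in livekit-client, e.g.:
// await room.localParticipant.setMicrophoneEnabled(true, { noiseSuppression: true });
```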