# ask-ai
s
I have a question about LiveKit deployed on a Kubernetes cluster. If I have a node with 16 vCPU and 32 GB RAM (or more), and most of those resources are given to one pod, how many users can one room in that LiveKit pod easily manage?
t
Thanks for reaching out, Nouman. I'm an AI assistant and I'm here to help. Give me a moment while I research this topic.
The number of users a single LiveKit pod with 16 vCPUs and 32 GB RAM can support depends on the use case:
• Interactive meetings: the system can handle up to 150 participants (publishers and subscribers) in a large meeting setup before reaching CPU limits.
• Livestreaming: for scenarios with 1 publisher and up to 3,000 subscribers, the pod can manage this load effectively.
Performance depends on factors like video quality, bitrate, and network conditions. If you have a specific use case in mind, I can provide more tailored guidance. Sources: Benchmarking | LiveKit Docs | Going beyond a single-core
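One more note, since you mentioned giving most of the node's resources to a single pod: that sizing usually goes into the chart's `values.yaml` as resource requests and limits. Here's a minimal sketch; the numbers are hypothetical and the `resources` block assumes standard Kubernetes/Helm conventions, so adapt it to your chart:

```yaml
# Hypothetical sizing for a 16 vCPU / 32 GB node mostly dedicated to one LiveKit pod.
# Leave some headroom for the kubelet, the OS, and other daemonsets on the node.
resources:
  requests:
    cpu: "14"
    memory: 28Gi
  limits:
    cpu: "15"
    memory: 30Gi
```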
s
with 720p resolution
for live streaming
t
For livestreaming at 720p resolution, a LiveKit pod with 16 vCPUs and 32 GB RAM can comfortably support around 3,000 subscribers. This number comes from benchmark data where the system was running at about 92% CPU utilization in similar situations. Just wanted to share this so you have a clear idea of what to expect! Sources: Benchmarking | LiveKit Docs | Infrastructure & Deployment | livekit/livekit_composite | DeepWiki
s
No, I want to target 75% CPU utilization.
t
For livestreaming at 720p resolution with a target CPU utilization of 75%, a LiveKit pod with 16 vCPUs and 32 GB RAM can support an estimated 2,400-2,500 subscribers. This estimate comes from benchmark data that showed 3,000 subscribers at 92% CPU usage. If you want to get exact numbers for your specific setup, you can try LiveKit's load testing tool (`lk load-test`). It lets you simulate subscribers and see how your system performs in a controlled way. Just let me know if you'd like a hand with setting it up; I'm here to help! Sources: Benchmarking | LiveKit Docs | README.md
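For transparency, that 2,400-2,500 range is just a back-of-envelope scaling of the benchmark figure, assuming CPU usage grows roughly linearly with subscriber count:

$$ N_{75\%} \approx 3000 \times \frac{75}{92} \approx 2445 \ \text{subscribers} $$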
s
pc ERROR: 2025/08/07 08:06:17 dropping candidate with ufrag zjSyoeEcWVbmfLMg because it doesn't match the current ufrags (repeated 10 times)
pc ERROR: 2025/08/07 08:06:18 dropping candidate with ufrag DBBvTGgExyEWgUHz because it doesn't match the current ufrags (repeated 10 times)
pc ERROR: 2025/08/07 08:06:18 dropping candidate with ufrag MAwzPBnNfnDzJPQb because it doesn't match the current ufrags (repeated 10 times)
could not connect Sub 2130: could not establish signal connection
t
The error "dropping candidate with ufrag because it doesn't match the current ufrags" pops up when there's a mismatch in the username fragment (ufrag) during ICE candidate handling. This usually means the ufrag values in the ICE candidates don’t line up with the current session’s ufrag in the SDP. Here’s how you can fix it: 1. Make sure the ufrag values in the offer and answer SDP match up. You can use LiveKit’s
lksdp.ExtractICECredential
method to pull out and check these credentials. 2. Double-check that the ICE candidates being exchanged have the right ufrag for the current session. 3. Take a look at your SDP processing logic to confirm that ICE credentials are properly patched during updates or restarts. If you want a hand debugging this or need help with any of these steps, just let me know!
s
This issue comes up when the number of users is high.
t
The error "dropping candidate with ufrag because it doesn't match the current ufrags" can pop up when there's a high user load, usually because the ICE credentials (ufrag and pwd) don’t line up during ICE Trickle or ICE Restart. This often happens due to race conditions or some hiccups in how SDP is handled. Here’s how you can tackle it: 1. Verify ICE Credentials Consistency: Make sure the same ufrag and pwd are used consistently across all SDP fragments and ICE candidates. 2. Handle ICE Trickle Correctly: Use the right methods like
HandleICETrickleSDPFragment
to check and add candidates that have matching ufrags. 3. Manage State During ICE Restart: After an ICE restart, reset connection details so old and new candidates don’t get mixed up. 4. Review SDP Parsing: Double-check that SDP fragments are parsed and marshaled correctly to keep ICE credentials consistent. Since high user load can make these issues more noticeable, it’s a good idea to have solid logging and validation in place to catch and fix inconsistencies early on. If you want, I’m here to help you debug or walk through these fixes anytime! Sources: CHANGELOG.md | sdp/sdp_test.go
s
yes
t
To help you debug and fix those ufrag mismatch errors when you have a lot of users, here are some friendly steps to try:
1. Check Your SDP Handling: Make sure the `ice-ufrag` and `ice-pwd` values in your SDP fragments match up with the remote description. Keeping your SDP updates in sync can really help avoid those mismatches.
2. Try an ICE Restart: When you spot mismatches, re-initialize the connections with fresh `ice-ufrag` and `ice-pwd` values. It often clears things up.
3. Look at Your Trickle ICE Settings: Adjust how often you send Trickle ICE candidates to keep everything nicely synchronized.
4. Enhance Error Handling: It's a good idea to log detailed info about any `ice-ufrag` mismatches and add retry logic with exponential backoff to handle failed negotiations smoothly.
5. Keep Shared State in Sync: Make sure shared state like the remote description is properly synchronized to avoid race conditions.
If you're still running into trouble, feel free to share your error logs, how you're handling SDP, and details about your load testing. I'm here to help you get this sorted!
s
I used Helm to deploy it on the k8s cluster.
t
To help you sort out those ufrag mismatch errors in your LiveKit setup on Kubernetes with Helm, especially when things get busy with lots of users, here's a friendly step-by-step guide (there's a `values.yaml` sketch after this list):
1. Check Ingress Configuration: Double-check that your Ingress service is set up right in your `values.yaml` file, making sure ICE credentials are handled consistently across all instances.
2. Scale Ingress Appropriately: Think about upping the `replicaCount` or turning on horizontal pod autoscaling (HPA) to keep up with the traffic when loads spike.
3. Verify Load Balancer Configuration: Make sure your load balancer is ready to handle WebRTC traffic and ICE connections, with TLS termination set up properly.
4. Review TURN Server Configuration: Take a look at your TURN servers to confirm they're correctly set up for NAT traversal and that TLS is configured right for TURN traffic.
5. Allocate Sufficient Resources: Check that your LiveKit pods have enough resource requests and limits to smoothly handle a high number of users.
6. Monitor Logs and Metrics: Use your monitoring tools to keep an eye on CPU, memory, and connection stats, and peek into logs to spot any error patterns.
7. Verify Network Policies: Make sure your firewall rules and Kubernetes network policies allow all the necessary ports and protocols for WebRTC to work properly.
If you'd like a hand digging deeper, feel free to share your `values.yaml` configuration (especially the Ingress and TURN parts) and any error logs you've got. I'm here to help!
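To make steps 2 and 5 a bit more concrete, here's a minimal `values.yaml` sketch for the livekit-server chart. The key names (`replicaCount`, `autoscaling`, `livekit.rtc.*`) are assumptions based on common chart conventions rather than something verified against your chart version, so please check them against the chart's default values.yaml:

```yaml
# Sketch only - verify key names against your chart version before applying.
replicaCount: 2                      # more than one server pod gives headroom under load

autoscaling:                         # HPA settings, if your chart exposes them
  enabled: true
  minReplicas: 2
  maxReplicas: 6
  targetCPUUtilizationPercentage: 75 # matches the 75% CPU target discussed above

livekit:
  rtc:
    use_external_ip: true            # so ICE candidates advertise reachable addresses
    tcp_port: 7881                   # ICE/TCP fallback
    port_range_start: 50000          # UDP range; must be open in firewalls/NetworkPolicies
    port_range_end: 60000

# For resource requests/limits, see the sizing sketch earlier in this thread.
```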
s
turnc ERROR: 2025/08/07 09:03:57 Fail to refresh permissions: write tcp4 68.183.67.207:50408->104.248.102.167:443: write: broken pipe
t
The error "Fail to refresh permissions: write tcp4 broken pipe" usually pops up when the TCP connection to the TURN server gets interrupted or fails while refreshing permissions. This can happen because of network hiccups, TURN server setup issues, or problems on the client side. Here’s how you can tackle it: 1. Check TURN Server Logs: Take a peek at the TURN server logs to spot any errors like authentication problems or network glitches. 2. Run Connection Tests: Try out LiveKit's Connection Test utility to help pinpoint connectivity issues, including WebRTC and TURN server links. 3. Verify TURN Configuration: Double-check that your TURN server is set up correctly with the right ports (like 3478 for UDP, 443 for TLS) and valid TLS certificates. 4. Check Client-Side Configuration: Make sure your client is using the correct TURN servers and that the
iceTransportPolicy
is set properly (for example, "relay" if you’re using TURN-only connections). 5. Update LiveKit: Keep your LiveKit version up to date, since newer versions often include fixes for TURN-related issues. If you’re still stuck or want a hand, feel free to share your TURN server setup, client settings, or any extra error logs. We’re here to help!
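And for step 3, here's a minimal sketch of the embedded TURN settings inside the `livekit` section of `values.yaml`. The keys (`turn.enabled`, `domain`, `tls_port`, `udp_port`, `external_tls`) follow the LiveKit server config format as I understand it; treat the values as placeholders and verify them against the config reference for your version:

```yaml
# Sketch only - placeholder values; verify against your LiveKit version's config reference.
livekit:
  turn:
    enabled: true
    domain: turn.example.com   # hypothetical; must match your TURN TLS certificate
    tls_port: 443              # TURN over TLS, the port seen in the broken-pipe error
    udp_port: 3478             # plain TURN over UDP
    external_tls: true         # set when a load balancer terminates TLS in front of the pod
```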