# ask-ai
s
Hello team, I have a LiveKit Cloud app with voice-only rooms. Some users report that the other party cannot hear them, and the number of affected users is not small. When I checked the call statistics on LiveKit Cloud, I saw that all of these calls had bandwidth values below 1 MB. What do you suggest I do to improve this?
t
Thanks for reaching out, Osman. I'm an AI assistant and I'm here to help. Give me a moment while I research this topic.
Hey there! To help with the low bandwidth issues in your voice-only rooms, here are some tips you can try out:

1. Enable DTX for Audio: Make sure Opus DTX (Discontinuous Transmission) is turned on. It reduces the bitrate when the microphone is quiet or silent, which can really save bandwidth.
2. Simulcast and Stream Allocation: LiveKit gives priority to audio streams when managing bandwidth. Double-check your room settings to make sure they're set up just right for voice-only use.
3. Monitor and Adjust Room Settings: Use LiveKit's logging tools or browser tools like `webrtc-internals` to keep an eye on bandwidth and packet loss; this can help spot trouble spots (a minimal monitoring sketch follows below).
4. Network Simulation and Testing: Try simulating low-bandwidth conditions with tools like Network Link Conditioner. It's a great way to see how your app performs when the network isn't at its best.
5. Bandwidth Calculation: Keep track of bandwidth use per participant to make sure everything matches what you expect.

I hope these tips make things smoother for your voice-only rooms! If you want to dive deeper into any of these or need a hand, just let me know.

Sources: Voice Assistants | livekit/livekit_composite | DeepWiki | An introduction to WebRTC Simulcast | CHANGELOG.md | LiveKit Cloud Pricing | livekit/livekit_composite | DeepWiki | Bringing Zoom's end-to-end optimizations to WebRTC
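For point 3, here is a minimal monitoring sketch using the JavaScript SDK (livekit-client). It only listens for LiveKit's built-in connection-quality events; `wsUrl` and `token` are placeholders, and the alerting logic is an illustrative assumption rather than part of the SDK.

```typescript
// Minimal sketch: watch per-participant connection quality in a voice-only room.
// Assumes the livekit-client JS SDK; `wsUrl` and `token` are placeholders you supply.
import { Room, RoomEvent, ConnectionQuality, Participant } from 'livekit-client';

async function watchQuality(wsUrl: string, token: string): Promise<Room> {
  const room = new Room();

  // Fired whenever LiveKit's estimate of a participant's connection quality changes.
  room.on(
    RoomEvent.ConnectionQualityChanged,
    (quality: ConnectionQuality, participant: Participant) => {
      console.log(`connection quality for ${participant.identity}: ${quality}`);
      if (quality === ConnectionQuality.Poor) {
        // Hook this into your own metrics so you can correlate it with
        // "the other party cannot hear me" reports.
        console.warn(`${participant.identity} is on a degraded connection`);
      }
    },
  );

  await room.connect(wsUrl, token);
  return room;
}
```

Pairing these events with the aggregate bandwidth numbers you already see in LiveKit Cloud should show whether the problem is a handful of poor networks or something systemic.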
s
Could you be more specific about “Enable DTX for Audio”?
t
Hey there! To get Opus DTX (Discontinuous Transmission) up and running for audio in LiveKit Cloud, here's a quick guide depending on the SDK you're using:

• JavaScript:
  `const track = await createLocalAudioTrack();`
  `await room.localParticipant.publishTrack(track, { dtx: true });`
• Swift:
  `let options = AudioPublishOptions(dtx: true)`
• Flutter:
  `final options = AudioPublishOptions(dtx: true);`
• Android:
  `val options = AudioTrackPublishOptions(dtx = true)`
• Ingress: When you're setting up an Ingress configuration, make sure `disable_dtx` is set to `false` in the `IngressAudioEncodingOptions`:
  `disable_dtx: false`

If you hit any bumps or want a hand with adding this to your app, just give me a shout!

Sources: src/room/track/options.ts | Sources/LiveKit/Types/Options/AudioPublishOptions.swift | lib/src/proto/livekit_models.pb.dart | livekit-android-sdk/src/main/java/io/livekit/android/room/participant/Local... | livekit-protocol/livekit/protocol/models.py
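Putting the JavaScript pieces together, a self-contained sketch might look like this. It assumes the livekit-client SDK; `wsUrl` and `token` are placeholders, and recent SDK versions may already default `dtx` (and `red`) to true, so setting them explicitly mainly documents the intent.

```typescript
// Sketch: connect to a voice-only room and publish the microphone with Opus DTX enabled.
// Assumes the livekit-client JS SDK; `wsUrl` and `token` are placeholders you supply.
import { Room, createLocalAudioTrack } from 'livekit-client';

async function joinVoiceRoom(wsUrl: string, token: string): Promise<Room> {
  const room = new Room();
  await room.connect(wsUrl, token);

  // Capture the microphone with basic voice processing enabled.
  const micTrack = await createLocalAudioTrack({
    echoCancellation: true,
    noiseSuppression: true,
  });

  // Publish with DTX so silence costs almost no bandwidth; RED adds redundancy on lossy links.
  await room.localParticipant.publishTrack(micTrack, {
    dtx: true,
    red: true,
  });

  return room;
}
```

If some callers are stuck below the bitrate you expect even with DTX on, the connection-quality monitoring from the earlier reply is the quickest way to see whether their networks, rather than your publish settings, are the bottleneck.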
s
Could you be more specific about “*Simulcast and Stream Allocation*”?
t
I don't have the answer you're looking for. You could also try asking your question:
• in one of the other Slack channels, or
• to https://deepwiki.com/livekit/livekit_composite, which is trained on all LiveKit source code.
If you find the answer, please post it here to help others!
s
Can you be more specific about "Simulcast"? What settings do I need to configure?