I'm glad you're looking into handling concurrent egress requests with your self-hosted LiveKit setup! To make this process smoother, you can take advantage of the built-in load-balancing and resource management features of the egress service. Let me walk you through the key steps:
• Redis Configuration: The egress service relies on Redis Pub/Sub for communication and load balancing between the egress workers and the LiveKit server. It’s important to double-check that the Redis address in your egress configuration matches the Redis address used by your LiveKit server (see the config sketch after this list).
• Deploy Multiple Egress Workers: By deploying multiple egress workers, you enable automatic load balancing and better distribution of egress requests. Each worker will assess its current load to determine if it can take on a new request, which helps keep things running smoothly (a deployment sketch follows this list).
• Resource Allocation: Keep in mind that different egress job types require different amounts of resources. For instance, TrackEgress jobs are relatively lightweight, while room composite jobs are considerably more resource-intensive. It’s a good idea to allocate at least 4 CPUs and 4 GB of memory per egress instance to ensure reliable performance (these figures are reflected in the deployment sketch below).
• Autoscaling: To keep up with demand, consider autoscaling your egress workers. The livekit_egress_available Prometheus metric indicates capacity to accept new requests, and actual CPU utilization is another practical scaling signal. Either way, the goal is to have enough instances ready to handle new requests as they come in (a CPU-based autoscaler sketch follows this list).
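As a reference point, here is a minimal sketch of how the Redis settings might line up across the two config files. The key names follow the publicly documented config layouts, but the hostnames, ports, and URLs are placeholders, so treat this as an illustration rather than a drop-in config:

```yaml
# livekit.yaml (LiveKit server)
redis:
  address: my-redis.internal:6379   # example host:port

# egress.yaml (egress service) - must point at the same Redis instance
api_key: <your-api-key>
api_secret: <your-api-secret>
ws_url: wss://your-livekit-host     # example URL for your LiveKit server
redis:
  address: my-redis.internal:6379   # identical to the server's redis address
```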
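If you happen to run on Kubernetes, a sketch like the following covers both the multiple-worker and the resource-allocation points in one place. The replica count, volume setup, and the EGRESS_CONFIG_FILE mount are illustrative assumptions based on the public livekit/egress image; adjust them to your environment:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: livekit-egress
spec:
  replicas: 3                        # multiple workers enable automatic load balancing
  selector:
    matchLabels:
      app: livekit-egress
  template:
    metadata:
      labels:
        app: livekit-egress
    spec:
      containers:
        - name: egress
          image: livekit/egress:latest
          env:
            - name: EGRESS_CONFIG_FILE
              value: /etc/egress/config.yaml   # config mounted from a Secret below
          volumeMounts:
            - name: egress-config
              mountPath: /etc/egress
          resources:
            requests:
              cpu: "4"               # room composite egress is resource-intensive
              memory: 4Gi
            limits:
              cpu: "4"
              memory: 4Gi
      volumes:
        - name: egress-config
          secret:
            secretName: egress-config
```

Running the workers as identical replicas is what lets the Redis-based load balancing work: each instance independently decides, based on its own load, whether to accept the next request.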
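For autoscaling, a plain CPU-based HorizontalPodAutoscaler is the simplest starting point, since egress work is CPU-heavy; wiring the livekit_egress_available metric into the autoscaler would additionally require a Prometheus metrics adapter, which is not shown here. The replica counts and threshold below are placeholders:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: livekit-egress
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: livekit-egress
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 60     # scale out before workers saturate
```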
Following these steps should help you manage concurrent egress requests effectively in your self-hosted environment. If you have any questions or need further assistance, I’m here to help!