when self hosting how to handle concurrent egress ...
# ask-ai
when self hosting how to handle concurrent egress requests
Thanks for reaching out, Akash. I'm an AI assistant and I'm here to help. Give me a moment while I research this topic.
I'm glad you're looking into handling concurrent egress requests with your self-hosted LiveKit setup! You can take advantage of the built-in load-balancing and resource management features of the egress service. Let me walk you through the key steps:
• Redis Configuration: The egress service relies on Redis Pub/Sub to communicate and balance load between the LiveKit server and egress workers. Double-check that the Redis address in your egress configuration matches the Redis address used by your LiveKit server (see the config sketch below).
• Deploy Multiple Egress Workers: Running several workers enables automatic load balancing and better distribution of egress requests. Each worker assesses its current load before accepting a new request, so jobs are spread across instances (see the deployment sketch below).
• Resource Allocation: Different egress job types require different amounts of resources. TrackEgress jobs are relatively lightweight, while room composite jobs need considerably more power. Allocate at least 4 CPUs and 4 GB of memory to each egress instance to ensure reliable performance.
• Autoscaling: To keep up with demand, autoscale on the `livekit_egress_available` Prometheus metric; each worker reports its availability based on its current CPU utilization, so you'll have instances ready to handle new requests as they come in.
Following these steps should help you manage concurrent egress requests effectively in your self-hosted environment. If you have any questions or need further assistance, I'm here to help!
Sources: Universal Egress | Self-hosting the Egress Service | LiveKit Docs | Docs::Home | pkg/server/server_rpc.go
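To make the Redis point concrete, here is a minimal sketch of the egress worker's config.yaml. The key names follow the livekit/egress README as I recall it, and every value is a placeholder, so verify both against the egress version you deploy:

```yaml
# Minimal egress config.yaml sketch -- all values are placeholders.
log_level: info
api_key: <api-key>                  # must match a key pair configured on the LiveKit server
api_secret: <api-secret>
ws_url: wss://livekit.example.com   # your LiveKit server's URL
redis:
  address: redis.example.com:6379   # must be the SAME Redis instance the LiveKit server uses
prometheus_port: 9090               # exposes metrics such as livekit_egress_available
```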
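And a sketch of running multiple workers with the recommended resources, shown here as a Kubernetes Deployment. The livekit/egress image is the official one, but the Deployment and ConfigMap names, the EGRESS_CONFIG_FILE environment variable, and the mount path are assumptions to adapt to however you actually deploy (Docker Compose, plain VMs, etc.):

```yaml
# Deployment sketch: several egress workers, each with the recommended resources.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: livekit-egress              # hypothetical name
spec:
  replicas: 3                       # more workers = more concurrent egress jobs
  selector:
    matchLabels:
      app: livekit-egress
  template:
    metadata:
      labels:
        app: livekit-egress
    spec:
      containers:
        - name: egress
          image: livekit/egress:latest
          env:
            - name: EGRESS_CONFIG_FILE      # assumed env var; check the egress README
              value: /etc/egress/config.yaml
          resources:
            requests:
              cpu: "4"                      # at least 4 CPUs per instance
              memory: 4Gi                   # and 4 GB of memory
          volumeMounts:
            - name: egress-config
              mountPath: /etc/egress
      volumes:
        - name: egress-config
          configMap:
            name: livekit-egress-config     # hypothetical ConfigMap holding config.yaml
```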
is there any limit on how many concurrent egress requests self hosted livekit egress can handle
I'm glad you asked about the limits on self-hosted LiveKit egress! There isn't a fixed cap on the number of concurrent egress requests; it depends on your system's resources and the types of egress jobs you're running:
• TrackEgress Jobs: These are quite lightweight, and a single instance can comfortably handle hundreds of concurrent jobs since they don't require transcoding.
• Room Composite or Web Egress Jobs: These tend to be far more resource-intensive because of Chrome and transcoding, usually needing around 2-6 CPUs per job.
For each egress instance, we recommend at least 4 CPUs and 4 GB of memory. If you need to handle more concurrent requests, scale out by adding more egress workers and making sure each has enough resources. You can also set up autoscaling with Prometheus metrics so your system adjusts dynamically to the workload (a sketch follows below). If you'd like, I'm here to help guide you through setting that up!
Sources: Docs::Home | Self-hosting the Egress Service | LiveKit Docs
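For the autoscaling piece, one lightweight option is a Prometheus rule on the `livekit_egress_available` metric mentioned above, wired into whatever scaler you use (an alert-driven scaler, KEDA's Prometheus trigger, and so on). This is only a sketch: it assumes Prometheus already scrapes the workers' metrics port, that the metric reports 1 per worker able to accept a new request, and the threshold of 2 is arbitrary:

```yaml
# Prometheus rule sketch: flag when spare egress capacity runs low.
groups:
  - name: livekit-egress-capacity
    rules:
      - alert: EgressCapacityLow
        # Fires when fewer than 2 workers report themselves available;
        # treat it as the signal to add egress instances.
        expr: sum(livekit_egress_available) < 2
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Low spare LiveKit egress capacity; consider scaling up the egress workers."
```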