silly-baker-11446
09/10/2025, 6:23 PM

tall-airline-16756
09/10/2025, 6:25 PM
1. Deployment: Run the livekit/egress Docker image. For smooth performance, please allocate at least 4 CPUs and 4 GB of memory per instance. Don't forget to include the SYS_ADMIN capability in your ECS task definition to support Chrome sandboxing. Using the awsvpc network mode helps with fine-grained traffic control. You can provide configuration via the EGRESS_CONFIG_FILE or EGRESS_CONFIG_BODY environment variables.
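As a sketch, the deployment settings above might look like this in a CloudFormation task definition (resource names, sizes, and the image tag are illustrative; note that SYS_ADMIN is not available on Fargate, so this assumes the EC2 launch type):

```yaml
# Illustrative ECS task definition fragment (CloudFormation YAML).
# SYS_ADMIN is not supported on Fargate, so this assumes the EC2 launch type.
EgressTaskDefinition:
  Type: AWS::ECS::TaskDefinition
  Properties:
    NetworkMode: awsvpc            # fine-grained traffic control via ENIs
    RequiresCompatibilities: [EC2]
    ContainerDefinitions:
      - Name: egress
        Image: livekit/egress:latest
        Cpu: 4096                  # 4 vCPUs
        Memory: 4096               # 4 GB
        LinuxParameters:
          Capabilities:
            Add: [SYS_ADMIN]       # required for Chrome sandboxing
        Environment:
          - Name: EGRESS_CONFIG_FILE
            Value: /etc/egress/config.yaml   # hypothetical mount path
```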
2. Configuration: The Egress service needs a YAML configuration file that includes your api_key, api_secret, ws_url (e.g. <LIVEKIT_SERVER_URL>), Redis connection info, and storage settings for where your output uploads will go. These can be set either as environment variables or directly within the configuration file.
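A minimal config.yaml along those lines might look like this (all values are placeholders; double-check the key names against the config reference for your egress version):

```yaml
# Minimal egress configuration sketch; all values are placeholders.
api_key: <API_KEY>
api_secret: <API_SECRET>
ws_url: <LIVEKIT_SERVER_URL>
redis:
  address: my-redis.internal:6379   # hypothetical Redis endpoint
```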
3. Recording Integration: You can set up the Egress service to work with both S3 and local storage by specifying the right storage settings in your config file, such as bucket names, access keys, and local file paths. This flexibility helps you manage recordings just the way you need.
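For example, S3 settings might be added to the same config file like this (the bucket name and credentials are hypothetical; verify the exact keys in your egress version's config reference):

```yaml
# Hypothetical S3 storage settings for the egress config file.
s3:
  access_key: <AWS_ACCESS_KEY_ID>
  secret: <AWS_SECRET_ACCESS_KEY>
  region: us-east-1
  bucket: my-egress-recordings
```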
4. Resource Requirements: For production environments, we recommend allocating at least 4 CPUs and 4 GB of memory per instance. Using host networking with 10 Gbps or higher connectivity will ensure good performance. To keep up with demand, you can autoscale based on the livekit_egress_available Prometheus metric, which helps maintain enough capacity for incoming jobs.
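One way to surface that autoscaling signal is a Prometheus alerting rule (the threshold, group name, and labels below are illustrative) that fires when no egress workers are free:

```yaml
# Illustrative Prometheus rule: fire when no egress instance is free,
# as a scale-out signal for the ECS service.
groups:
  - name: egress-capacity
    rules:
      - alert: EgressCapacityLow
        expr: sum(livekit_egress_available) < 1
        for: 2m
        labels:
          severity: warning
```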
5. ALB Integration: Good news: LiveKit Egress doesn't require any special ALB routing rules. It exchanges Protobuf messages with the LiveKit server over Redis rather than accepting inbound HTTP traffic, so standard networking setups should work just fine.
If you have any more questions or need help with anything else, please feel free to ask. I’m here to help!
Sources: egress/values.yaml | Self-hosting the Egress Service | LiveKit Docs | aws-ecs/cloudformation.yaml | Docs::Home