# ask-ai
s
What is the recommended resource amount for deploying a voice-service - I'm still using Livekits managed service but am just deploying the Agent on my python app
t
Thanks for reaching out, Amir. I'm an AI assistant and I'm here to help. Give me a moment while I research this topic.
Hey there! If you're looking to deploy a voice service using LiveKit's managed service with the Agent on your Python app, here's a friendly rundown of the resource recommendations:

• CPU and Memory: A good starting point is 4 CPU cores and 8GB of memory per worker. This setup typically handles around 10-25 concurrent sessions, depending on the plugins you're using.
• Number of Workers: For about 25 concurrent sessions, 1 worker should do the trick. If you're aiming for 100 sessions, you might want to scale up to 4-8 workers.
• Networking: Your workers just need outbound WebSocket connections to LiveKit servers. You can also optionally set up a health check endpoint at `<HEALTH_CHECK_ENDPOINT>` to keep an eye on things.
• Environment Variables: Make sure to configure `LIVEKIT_URL`, `LIVEKIT_API_KEY`, `LIVEKIT_API_SECRET`, and any plugin-specific keys like `OPENAI_API_KEY`.

If you want, I can share more details about deployment platforms like Kubernetes or Render.com, or help you with configuring plugins. Just let me know!

Sources: Deploying to production | LiveKit Docs | Deployment and scaling | LiveKit Docs
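One practical tip: validate those environment variables at startup so a misconfigured worker fails fast instead of mid-session. Here's a minimal sketch; the variable names are the ones from the list above, but the `require_env` helper is just an illustration, not part of the LiveKit SDK:

```python
import os


def require_env(names):
    """Return the requested environment variables, raising if any are missing."""
    missing = [n for n in names if not os.environ.get(n)]
    if missing:
        raise RuntimeError(f"Missing required environment variables: {', '.join(missing)}")
    return {n: os.environ[n] for n in names}
```

You'd call `require_env(["LIVEKIT_URL", "LIVEKIT_API_KEY", "LIVEKIT_API_SECRET"])` once, before starting the worker, and pass any plugin keys (like `OPENAI_API_KEY`) in the same list.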
s
Tell me more about the HEALTH_CHECK_ENDPOINT
t
The HEALTH_CHECK_ENDPOINT for LiveKit worker deployment helps keep an eye on the health and availability of your worker instances. Here's a quick rundown:

• Default Endpoint: It listens on `<HEALTH_CHECK_ENDPOINT>` (the default port is 8081).
• Purpose: While optional, it's a great idea to use it in production to make sure your workers are up and running smoothly.
• Configuration: You can easily change the port using the `port` option in the `WorkerOptions` settings.
• Usage: This endpoint is super handy for monitoring, load balancing, or autoscaling your setup in production.

If you'd like, I'm here to help you set it up or show you how to make the most of it in your deployment. Just let me know!

Sources: Deployment and scaling | LiveKit Docs | Deploying to production | LiveKit Docs | livekit-agents/livekit/agents/worker.py
s
Can you show me code for setting it up
t
Here's a quick example to help you set up the HEALTH_CHECK_ENDPOINT for a LiveKit worker in Python:

```python
from livekit.agents import WorkerOptions, cli


async def entrypoint(ctx):
    # Your job entrypoint logic here
    pass


# Custom worker configuration with a different health check port
opts = WorkerOptions(
    entrypoint_fnc=entrypoint,
    port=8082,  # Change the health check port to 8082
)

if __name__ == "__main__":
    # Runs the worker and serves the health check on the configured port
    cli.run_app(opts)
```
Just a couple of things to keep in mind:

• The default health check port is 8081, but you can easily switch it up using the `port` parameter in `WorkerOptions`.
• Once you set the port, you'll find the health check running at `http://localhost:<port>/`.

If you'd like, I can also share examples of how to configure this in a Kubernetes deployment. Just let me know!

Sources: livekit-agents/livekit/agents/worker.py | ingress/templates/deployment.yaml
s
Does the livekit agent need to be open to requests from the public? Which service sends the request to the Agent ?
t
Hey there! Just so you know, the LiveKit agent doesn’t need to be open to public requests. It chats with the LiveKit server through a WebSocket connection, so there’s no need to expose any inbound ports. When a user joins a room, the server sends the request to an available worker, which then runs the agent code. So, all the communication flows smoothly behind the scenes. If you have any questions or want me to explain anything further, just give me a shout—I’m here to help! Sources: Deployment and scaling | LiveKit Docs | Worker lifecycle | LiveKit Docs
s
Do I need to open a port for the livekit server to be able to contact my Agent service?
t
Good news! You don’t need to open any ports for the LiveKit server to reach your Agent service. The Agent service (worker) actually makes an outbound WebSocket connection to the LiveKit server, so there’s no need to worry about inbound ports on your end. If you have any more questions or want me to explain anything else, just let me know—I’m here to help! Sources: Deployment and scaling | LiveKit Docs | Deploying to production | LiveKit Docs | Ports and firewall | LiveKit Docs