how to deploy LiveKit agents on a FastAPI backend...
# ask-ai
how to deploy LiveKit agents on a FastAPI backend?
Thanks for reaching out, Vinod. I'm an AI assistant and I'm here to help. Give me a moment while I research this topic.
To deploy LiveKit agents on a FastAPI backend, you can follow these general steps:

1. Create a `Dockerfile` for your FastAPI application that ends with a command like `CMD ["python", "main.py", "start"]` (see the sketch after this list).
2. Deploy the application to a platform that supports container orchestration, such as Kubernetes or Render.com. LiveKit agents are designed to work well in such environments.

For example, in the avatar_agents setup, a FastAPI application serves as a dispatcher for avatar workers, with endpoints for launching workers in specific rooms. The dispatcher uses a `WorkerLauncher` class to manage worker processes.
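For step 1, a minimal `Dockerfile` sketch might look like the following. The base image, the `requirements.txt`, and the `main.py` file name are assumptions for illustration; only the final `CMD` form comes from the step above.

```dockerfile
# Minimal sketch; base image and file layout are assumptions, adjust to your project.
FROM python:3.11-slim

WORKDIR /app

# Install Python dependencies (a requirements.txt is assumed to exist).
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code, including the agent entrypoint (main.py).
COPY . .

# Start the agent worker in production mode, as described in step 1.
CMD ["python", "main.py", "start"]
```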
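For the dispatcher pattern, here is a rough FastAPI sketch. It is not the actual avatar_agents code: the `WorkerLauncher` below is a simplified stand-in that spawns one worker subprocess per room, and the `worker.py connect --room` invocation assumes an agent entrypoint that supports the agents CLI's room-connect mode.

```python
import subprocess
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()


class LaunchRequest(BaseModel):
    room_name: str


class WorkerLauncher:
    """Simplified stand-in for a worker launcher: one subprocess per room."""

    def __init__(self) -> None:
        self._procs: dict[str, subprocess.Popen] = {}

    def launch(self, room_name: str) -> None:
        proc = self._procs.get(room_name)
        if proc is not None and proc.poll() is None:
            return  # a worker for this room is already running
        # Assumption: worker.py is an agent entrypoint that can be told to
        # join a specific room when started this way.
        self._procs[room_name] = subprocess.Popen(
            ["python", "worker.py", "connect", "--room", room_name]
        )


launcher = WorkerLauncher()


@app.post("/launch")
def launch_worker(req: LaunchRequest) -> dict:
    """Endpoint for launching an avatar worker in a specific room."""
    launcher.launch(req.room_name)
    return {"status": "launched", "room": req.room_name}
```

With something like this, a `POST /launch` with a JSON body such as `{"room_name": "my-room"}` would start a dedicated worker process for that room.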
Additionally, LiveKit agents use WebSocket connections to register with the LiveKit server and accept incoming jobs, so they do not require exposing inbound ports to the public internet. You may optionally expose a private health check endpoint (default: http://0.0.0.0:8081/). If you need further assistance or have specific requirements, feel free to share more details!

Sources: Deploying to production | LiveKit Docs
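As a follow-up, here is a minimal sketch of the `main.py` worker entrypoint that the `CMD` above would run, assuming the livekit-agents Python SDK; the agent logic itself is left as a placeholder.

```python
from livekit.agents import JobContext, WorkerOptions, cli


async def entrypoint(ctx: JobContext):
    # Connect to the room assigned by the incoming job.
    await ctx.connect()
    # ... agent logic goes here (placeholder) ...


if __name__ == "__main__":
    # `python main.py start` runs the worker in production mode: it registers
    # with the LiveKit server over WebSocket (using LIVEKIT_URL, LIVEKIT_API_KEY,
    # and LIVEKIT_API_SECRET from the environment) and serves the health check
    # endpoint mentioned above (port 8081 by default).
    cli.run_app(WorkerOptions(entrypoint_fnc=entrypoint))
```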