# ask-ai
k
how to change the health check server port
t
Thanks for reaching out, Hyeonsu. I'm an AI assistant and I'm here to help. Give me a moment while I research this topic.
You can change the health check server port in LiveKit by updating the relevant port setting in your configuration. The exact setting depends on which LiveKit component you are using:
• LiveKit Agents: Set the `port` parameter in your `WorkerOptions` configuration. The default is `8081`. (See the sketch after this list.)
• LiveKit Ingress Service: Set the `health_port` field in your YAML configuration or Helm chart values. Example: `health_port: 7888`.
• LiveKit Egress Service: Set the `health_port` field in your configuration. Example: `health_port: 8080`.
• LiveKit SIP Service: Set the `health_port` field in your configuration. Example: `health_port: 8080`.
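For the Agents case, here's a minimal sketch using the usual `cli.run_app` entrypoint pattern; `8082` is an arbitrary example value, not a default:

```python
from livekit.agents import JobContext, WorkerOptions, cli

async def entrypoint(ctx: JobContext):
    # Your agent logic; connect to the room once the job is accepted.
    await ctx.connect()

if __name__ == "__main__":
    cli.run_app(
        WorkerOptions(
            entrypoint_fnc=entrypoint,
            port=8082,  # health check server port (default: 8081)
        )
    )
```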
The health check endpoint will be available at `http://0.0.0.0:<your_port>/` after you update the configuration.
Sources: pkg/config/service.go | pkg/config/config.go | Deployment and scaling | LiveKit Docs | livekit-agents/livekit/agents/worker.py | Deploying to production | LiveKit Docs
k
what to do when the worker doesn't receive job requests after a few requests?
t
1. Check connection and registration: First, make sure your worker is properly connected. Confirm it has successfully registered with the LiveKit server by looking for a "registered worker" log entry.
2. Verify configuration: Double-check that your `wsURL`, `apiKey`, and `apiSecret` are all set correctly. If you're using explicit dispatch, give your worker type and agent name a quick once-over too.
3. Monitor worker availability: If your worker's load goes beyond the `load_threshold`, it stops receiving new jobs until the load drops. Check your `load_fnc` and `load_threshold` settings (see the sketch after this list).
4. Review retry logic: The worker will try to reconnect multiple times if a connection fails. Keep an eye out for repeated connection errors in the logs.
5. Check job distribution: The server assigns jobs round-robin, so if a worker doesn't accept a job in time, the job is passed along to another worker. Check the server logs for any assignment issues.
6. Troubleshoot:
◦ Scan worker logs for error messages or timeouts
◦ Make sure your network can reach the LiveKit server
◦ Keep an eye on your worker's load status
◦ Review server-side logs for job distribution issues
◦ Try the `simulate_job` method to confirm job reception
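For step 3, here's a minimal sketch of tuning worker availability. It assumes a recent livekit-agents release where `load_fnc` receives the `Worker` instance and returns a load value between 0.0 and 1.0; `psutil` and the `0.8` threshold are illustrative choices, not library defaults:

```python
import psutil  # illustrative choice; any 0.0-1.0 load measure works

from livekit.agents import JobContext, Worker, WorkerOptions, cli

async def entrypoint(ctx: JobContext):
    await ctx.connect()

def my_load_fnc(worker: Worker) -> float:
    # Report current CPU utilization as the worker's load (0.0-1.0).
    return psutil.cpu_percent(interval=None) / 100.0

if __name__ == "__main__":
    cli.run_app(
        WorkerOptions(
            entrypoint_fnc=entrypoint,
            load_fnc=my_load_fnc,
            load_threshold=0.8,  # stop accepting new jobs above 80% load
        )
    )
```

For step 6, `simulate_job` lives on the `Worker` class; its exact signature varies between releases, so check `livekit-agents/livekit/agents/worker.py` in your installed version before calling it.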
If you're still stuck after these steps, it's worth digging deeper into the server configuration and network setup.
Sources: Deploying to production | LiveKit Docs | README.md | livekit-agents/livekit/agents/worker.py | Deployment and scaling | LiveKit Docs