white-cpu-42259
08/19/2025, 8:21 AM

tall-airline-16756
08/19/2025, 8:21 AM

tall-airline-16756
08/19/2025, 8:22 AM

white-cpu-42259
08/19/2025, 8:25 AM

tall-airline-16756
08/19/2025, 8:26 AM

def custom_load_fnc(worker):
    load = _DefaultLoadCalc.get_load(worker)
    print(f"Current worker load: {load}")
    return load

opts = WorkerOptions(
    entrypoint_fnc=your_entrypoint,
    load_fnc=custom_load_fnc,
)
1. Adjust the Load Threshold: If you notice CPU spikes are causing the worker to be unavailable, you can bump up the load threshold a bit:
opts = WorkerOptions(
    entrypoint_fnc=your_entrypoint,
    load_threshold=0.9,  # raised from the default of 0.75
)
2. Try a Custom Load Function: Another option is to create a custom load function that looks at active jobs instead of CPU usage:
def compute_load(worker):
    # Base the load on active jobs instead of CPU
    return min(len(worker.active_jobs) / 10, 1.0)

opts = WorkerOptions(
    entrypoint_fnc=your_entrypoint,
    load_fnc=compute_load,
    load_threshold=0.9,
)
3. Check Your WebSocket Connection: Make sure your worker keeps a stable WebSocket connection so it can receive job requests smoothly. Keep an eye on the logs for any connection hiccups (there's a small logging sketch after this list).
4. Running in Development Mode? If you're testing things out, running in development mode sets the load threshold to infinity, so you won't run into load-based availability issues.
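If it helps with points 3 and 4, here's a minimal sketch of how you could surface those connection logs and run in dev mode. It assumes your agent lives in a file called agent.py and that the worker logs under the "livekit.agents" logger name; adjust both for your setup:

import logging

from livekit.agents import JobContext, WorkerOptions, cli

# Surface connection and status logs from the worker
# ("livekit.agents" logger name is an assumption; check your SDK version)
logging.basicConfig(level=logging.INFO)
logging.getLogger("livekit.agents").setLevel(logging.DEBUG)

async def your_entrypoint(ctx: JobContext):
    # Placeholder entrypoint; replace with your real agent logic
    await ctx.connect()

if __name__ == "__main__":
    # `python agent.py dev` runs with dev defaults (load threshold = infinity);
    # `python agent.py start` runs with production defaults
    cli.run_app(WorkerOptions(entrypoint_fnc=your_entrypoint))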
Hopefully, these tips help you get to the bottom of the intermittent job request problem. If you want to chat more or need any clarification, just let me know—I’m here to help!
Sources: Deploying to production | LiveKit Docs | livekit-agents/livekit/agents/worker.py | Worker options | LiveKit Docs

white-cpu-42259
08/19/2025, 8:27 AM

tall-airline-16756
08/19/2025, 8:28 AM

2. Check Worker Status: Make sure the worker is reporting WS_AVAILABLE. It's important it's not stuck in a draining state.
3. Check Job Availability: Double-check that jobs are actually being created and sent to your worker. If you can, verify this from the server side too.
4. Debug Request Handling: Try adding some logging to your request handler to confirm that job requests are coming in and being processed:
async def debug_request_fnc(req: JobRequest):
    print(f"Received job request: {req.job.id}")
    await req.accept()  # default behavior

opts = WorkerOptions(
    entrypoint_fnc=your_entrypoint,
    request_fnc=debug_request_fnc,  # add logging
)
5. Check Server Connectivity: Double-check that your LIVEKIT_URL, LIVEKIT_API_KEY, and LIVEKIT_API_SECRET are correct, and that nothing's blocking the WebSocket connection on your network (there's a quick connectivity check after this list).
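If you want a quick way to confirm those three variables outside of the worker, here's a minimal sketch using the livekit-api package. It just lists rooms with the same environment variables the worker uses, so if this fails the problem is almost certainly credentials or network rather than the worker itself (the exact client calls are from memory, so double-check them against your livekit-api version):

import asyncio

from livekit import api

async def main():
    # LiveKitAPI reads LIVEKIT_URL, LIVEKIT_API_KEY and LIVEKIT_API_SECRET
    # from the environment
    lkapi = api.LiveKitAPI()
    try:
        rooms = await lkapi.room.list_rooms(api.ListRoomsRequest())
        print(f"Connected OK, {len(rooms.rooms)} active room(s)")
    finally:
        await lkapi.aclose()

asyncio.run(main())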
If you’re comfortable sharing your worker logs, especially around startup and status updates, I’d be happy to take a closer look and help you get to the bottom of this.
Sources: livekit-agents/livekit/agents/worker.py | Deploying to production | LiveKit Docs | Worker options | LiveKit Docs

white-cpu-42259
08/19/2025, 8:29 AM

tall-airline-16756
08/19/2025, 8:30 AM

1. Verify Worker Status: Check that the worker's status is WS_AVAILABLE. If it's showing WS_FULL, it won't get any new jobs.
2. Check WebSocket Connection: Double-check that the WebSocket connection is steady and that the worker is sending heartbeat messages every 30 seconds to keep things alive.
3. Agent Name Configuration: If you've set an agent_name in your WorkerOptions, jobs need to be dispatched explicitly to that agent. So, confirm that jobs are going to the right agent name (see the sketch after this list).
4. Worker Type Match: Make sure the worker type matches the job type you're expecting (like JT_ROOM or JT_PUBLISHER).
5. Monitor Job Request Logs: Take a look at the worker’s logs to see if it’s receiving any job request messages. If not, the server might not be sending jobs to this worker.
6. Server-Side Configuration: Check your LiveKit server settings to ensure it’s set up to dispatch jobs to workers, and that other workers aren’t taking all the available jobs.
7. Health Check Endpoint: Try using the worker’s health check endpoint (like `http://worker-host:port/worker`) to see its current status, active jobs, and configuration.
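For points 3, 4, and 6, here's a minimal sketch of what the worker-side settings and an explicit dispatch could look like. The names my-agent and my-room are placeholders, your_entrypoint is the same placeholder used above, and the dispatch call assumes you're using the livekit-api package, so treat the exact imports as something to verify against your SDK version:

from livekit import api
from livekit.agents import WorkerOptions, WorkerType

# Worker side: with agent_name set, this worker only receives jobs that are
# explicitly dispatched to "my-agent" (placeholder name)
opts = WorkerOptions(
    entrypoint_fnc=your_entrypoint,  # your existing entrypoint function
    agent_name="my-agent",
    worker_type=WorkerType.ROOM,  # JT_ROOM; use WorkerType.PUBLISHER for JT_PUBLISHER
)

# Server side: explicitly dispatch that agent into a room
async def dispatch_my_agent():
    lkapi = api.LiveKitAPI()  # uses LIVEKIT_URL / LIVEKIT_API_KEY / LIVEKIT_API_SECRET
    try:
        await lkapi.agent_dispatch.create_dispatch(
            api.CreateAgentDispatchRequest(agent_name="my-agent", room="my-room")
        )
    finally:
        await lkapi.aclose()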
If you’ve gone through these and things still aren’t working, the issue might be with the server’s dispatch setup or network connectivity. Feel free to reach out if you want me to help you dig deeper or clarify anything!
Sources: protobufs/livekit_agent.proto | livekit-agents/livekit/agents/worker.py | Anatomy of an Agent | LiveKit Docs | Worker options | LiveKit Docs

white-cpu-42259
08/19/2025, 8:31 AM

tall-airline-16756
08/19/2025, 8:31 AM

def debug_load_fnc(worker):
    load = worker._worker_load  # grab the current load value (internal attribute)
    print(f"Current worker load: {load}")
    return load

opts = WorkerOptions(
    entrypoint_fnc=your_entrypoint,
    load_fnc=debug_load_fnc,
    load_threshold=0.9,  # temporarily bump up the threshold
)
1. Debug How Requests Are Handled: Adding some logging here can help confirm job requests are coming through:
async def debug_request_fnc(req: JobRequest):
    print(f"Received job request: {req.job.id}")
    await req.accept()  # accept the job

opts = WorkerOptions(
    entrypoint_fnc=your_entrypoint,
    request_fnc=debug_request_fnc,  # logging added here
)
2. Check the Health Endpoint: You can also peek at the worker's health check endpoint to see what's going on:

# Just open this in your browser, or use curl:
curl http://worker-host:port/worker
Don't forget to swap out your_entrypoint with your real entrypoint function. If anything's unclear or you want more examples, just let me know; I'm happy to help!

white-cpu-42259
08/19/2025, 8:33 AM

tall-airline-16756
08/19/2025, 8:33 AM