# ask-ai
w
I am using the Python server SDK. The process sometimes receives a new job request, but other times it fails to receive job requests even though nothing has changed.
t
Thanks for reaching out, Nirmal. I'm an AI assistant and I'm here to help. Give me a moment while I research this topic.
Hey there! Could you share a bit more about your setup and what's going on? For example:
• Do you see any error messages or logs when the job request doesn't come through?
• What's your worker's load threshold and how is the connection set up?
• Are you using a custom request handler function? If so, does it explicitly accept or reject jobs?
Knowing these details will really help us figure out what's happening and get you sorted out.
Sources: livekit-agents/livekit/agents/ipc/job_proc_executor.py | Anatomy of an Agent | LiveKit Docs | Worker options | LiveKit Docs
w
1. I don't see any error message; the logs are below:
2025-08-19 13:46:40,200 - DEBUG asyncio - Using selector: KqueueSelector
2025-08-19 13:46:40,201 - DEV livekit.agents - Watching /Users/Startup
2025-08-19 13:46:40,728 - DEBUG asyncio - Using selector: KqueueSelector
2025-08-19 13:46:40,730 - INFO livekit.agents - starting worker {"version": "1.2.6", "rtc-version": "1.0.12"}
2025-08-19 13:46:40,731 - INFO livekit.agents - starting inference executor
2025-08-19 13:46:40,734 - INFO livekit.agents - initializing process {"pid": 16355, "inference": true}
2025-08-19 13:46:41,254 - DEBUG livekit.agents - initializing inference runner {"runner": "lk_end_of_utterance_multilingual", "pid": 16321, "inference": true}
None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
2025-08-19 13:46:43,173 - DEBUG livekit.agents - inference runner initialized {"runner": "lk_end_of_utterance_multilingual", "elapsed_time": 1.919292915998085, "pid": 16321, "inference": true}
2025-08-19 13:46:43,173 - DEBUG asyncio - Using selector: KqueueSelector {"pid": 16321, "inference": true}
2025-08-19 13:46:43,174 - INFO livekit.agents - process initialized {"pid": 16321, "inference": true, "elapsed_time": 2.44}
2025-08-19 13:46:43,442 - INFO livekit.agents - registered worker {"id": "AW_Vgx8z6YuNsXA", "url": "URL", "region": "India South", "protocol": 16}
2. I am using LiveKit Cloud and all the thresholds are default values.
3. No, I am not using any custom request handler function.
t
It sounds like the intermittent issue with your worker not receiving job requests might be tied to load thresholds or connection stability. No worries, let's walk through some steps together to troubleshoot and hopefully fix this:
1. Keep an Eye on CPU Usage: Try adding some logging to monitor your worker's CPU utilization. If it goes over the default 70% load threshold, that could be why your worker stops accepting jobs. Here's a quick example:
```python
from livekit.agents import WorkerOptions
# _DefaultLoadCalc is a private helper in livekit.agents.worker (per the cited worker.py);
# its import path and signature may vary between versions.
from livekit.agents.worker import _DefaultLoadCalc

def custom_load_fnc(worker):
    # Log the same CPU-based load the default calculation uses
    load = _DefaultLoadCalc.get_load(worker)
    print(f"Current worker load: {load}")
    return load

opts = WorkerOptions(
    entrypoint_fnc=your_entrypoint,
    load_fnc=custom_load_fnc,
)
```
2. Adjust the Load Threshold: If you notice CPU spikes are causing the worker to be unavailable, you can bump up the load threshold a bit:
```python
opts = WorkerOptions(
    entrypoint_fnc=your_entrypoint,
    load_threshold=0.9,  # raised from the default 0.7
)
```
3. Try a Custom Load Function: Another option is to create a custom load function that looks at active jobs instead of CPU usage:
```python
def compute_load(worker):
    # Base the load on the number of active jobs instead of CPU usage
    return min(len(worker.active_jobs) / 10, 1.0)

opts = WorkerOptions(
    entrypoint_fnc=your_entrypoint,
    load_fnc=compute_load,
    load_threshold=0.9,
)
```
4. Check Your WebSocket Connection: Make sure your worker keeps a stable WebSocket connection so it can receive job requests smoothly. Keep an eye on the logs for any connection hiccups.
5. Running in Development Mode? If you're testing things out, running in development mode sets the load threshold to infinity, so you won't run into load-based availability issues (see the sketch after this message).
Hopefully these tips help you get to the bottom of the intermittent job request problem. If you want to chat more or need any clarification, just let me know, I'm here to help!
Sources: Deploying to production | LiveKit Docs | livekit-agents/livekit/agents/worker.py | Worker options | LiveKit Docs
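Following up on step 5, here's a minimal sketch of how a worker is typically launched with the CLI runner, just to make the dev-versus-start distinction concrete; the entrypoint is a placeholder for your own function:
```python
from livekit.agents import JobContext, WorkerOptions, cli


async def entrypoint(ctx: JobContext):
    # Placeholder: connect to the room assigned by the job and run your agent logic here
    await ctx.connect()


if __name__ == "__main__":
    # `python agent.py dev`   -> development mode (load threshold effectively disabled)
    # `python agent.py start` -> production mode (the default load_threshold applies)
    cli.run_app(WorkerOptions(entrypoint_fnc=entrypoint))
```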
w
I have a CPU load monitor; the load is definitely not going above 0.1%.
t
Since your CPU load is really low, it seems like the issue might be connected to worker registration, status updates, or how jobs are routed. Let's walk through some steps to help figure this out and get things running smoothly:
1. Verify Worker Registration: Take a look at your logs to see if there are messages confirming the worker is registered, including its ID, URL, and region. Also, watch out for any frequent reconnections or WebSocket hiccups.
2. Monitor Worker Status: Make sure the worker is regularly sending status updates and shows up as `WS_AVAILABLE`. It's important that it's not stuck in a draining state.
3. Check Job Availability: Double-check that jobs are actually being created and sent to your worker. If you can, verify this from the server side too.
4. Debug Request Handling: Try adding some logging to your request handler to confirm that job requests are coming in and being processed:
```python
from livekit.agents import JobRequest, WorkerOptions

async def debug_request_fnc(req: JobRequest):
    # Log every incoming job request, then accept it (the default behavior)
    print(f"Received job request: {req.job.id}")
    await req.accept()

opts = WorkerOptions(
    entrypoint_fnc=your_entrypoint,
    request_fnc=debug_request_fnc,  # add logging around job requests
)
```
5. Check Server Connectivity: Double-check that your `LIVEKIT_URL`, `LIVEKIT_API_KEY`, and `LIVEKIT_API_SECRET` are correct, and that nothing's blocking the WebSocket connection on your network (a quick sanity check is sketched below).
If you're comfortable sharing your worker logs, especially around startup and status updates, I'd be happy to take a closer look and help you get to the bottom of this.
Sources: livekit-agents/livekit/agents/worker.py | Deploying to production | LiveKit Docs | Worker options | LiveKit Docs
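As a quick sanity check for step 5, here's a small sketch using only the standard library; it just confirms the credentials the worker reads at startup are present, without printing their values:
```python
import os

# Confirm the credentials the worker reads at startup are present
# (values are intentionally not printed to avoid leaking secrets).
for var in ("LIVEKIT_URL", "LIVEKIT_API_KEY", "LIVEKIT_API_SECRET"):
    print(f"{var}: {'set' if os.getenv(var) else 'MISSING'}")
```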
w
I have already shared the worker logs in a previous message; they clearly say "registered worker".
t
It looks like your worker is registered but isn't receiving job requests yet. Let's walk through a few things you can check to get this sorted out:
1. Verify Worker Status: Make sure your worker's status updates show `WS_AVAILABLE`. If it's showing `WS_FULL`, it won't get any new jobs.
2. Check WebSocket Connection: Double-check that the WebSocket connection is steady and that the worker is sending heartbeat messages every 30 seconds to keep things alive.
3. Agent Name Configuration: If you've set an `agent_name` in your `WorkerOptions`, the worker only receives jobs that are explicitly dispatched to it, so confirm that jobs are being dispatched to that agent name (see the sketch after this message).
4. Worker Type Match: Make sure the worker type matches the job type you're expecting (like `JT_ROOM` or `JT_PUBLISHER`).
5. Monitor Job Request Logs: Take a look at the worker's logs to see if it's receiving any job request messages. If not, the server might not be sending jobs to this worker.
6. Server-Side Configuration: Check your LiveKit server settings to ensure it's set up to dispatch jobs to workers, and that other workers aren't taking all the available jobs.
7. Health Check Endpoint: Try using the worker's health check endpoint (like `http://worker-host:port/worker`) to see its current status, active jobs, and configuration.
If you've gone through these and things still aren't working, the issue might be with the server's dispatch setup or network connectivity. Feel free to reach out if you want me to help you dig deeper or clarify anything!
Sources: protobufs/livekit_agent.proto | livekit-agents/livekit/agents/worker.py | Anatomy of an Agent | LiveKit Docs | Worker options | LiveKit Docs
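To make point 3 concrete: when an `agent_name` is set, a job only reaches the worker if it's explicitly dispatched. Here's a rough sketch of creating a dispatch with the LiveKit API client; the agent name and room are placeholders, and it assumes the `livekit-api` package with credentials available in the environment:
```python
import asyncio

from livekit import api  # provided by the livekit-api package


async def main():
    # Reads LIVEKIT_URL / LIVEKIT_API_KEY / LIVEKIT_API_SECRET from the environment
    lkapi = api.LiveKitAPI()
    try:
        # "my-agent" must match the agent_name set in your WorkerOptions;
        # "my-room" is the room the job should be created for (both placeholders)
        dispatch = await lkapi.agent_dispatch.create_dispatch(
            api.CreateAgentDispatchRequest(agent_name="my-agent", room="my-room")
        )
        print("created dispatch:", dispatch)
    finally:
        await lkapi.aclose()


asyncio.run(main())
```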
w
Can you share Python code for it?
t
Here's some Python code that might help you debug the worker's status and how it's handling jobs:
1. Keep an Eye on Worker Load: You can add this snippet to log the load value the worker is reporting (which determines whether it shows as available):
```python
from livekit.agents import WorkerOptions

def debug_load_fnc(worker):
    # _worker_load holds the last computed load value
    # (private attribute, so it may change between livekit-agents versions)
    load = worker._worker_load
    print(f"Current worker load: {load}")
    return load

opts = WorkerOptions(
    entrypoint_fnc=your_entrypoint,
    load_fnc=debug_load_fnc,
    load_threshold=0.9,  # temporarily bump up the threshold
)
```
2. Debug How Requests Are Handled: Adding some logging here can help confirm job requests are coming through:
```python
from livekit.agents import JobRequest, WorkerOptions

async def debug_request_fnc(req: JobRequest):
    # Log the incoming job request, then accept it
    print(f"Received job request: {req.job.id}")
    await req.accept()

opts = WorkerOptions(
    entrypoint_fnc=your_entrypoint,
    request_fnc=debug_request_fnc,  # logging added here
)
```
3. Check the Health Endpoint: You can also peek at the worker's health check endpoint to see what's going on:
```
# Open this in your browser or fetch it with curl
http://worker-host:port/worker
```
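If you'd rather check it from Python than a browser, here's a tiny sketch using only the standard library; the host is a placeholder, and 8081 is assumed as the worker's default HTTP port, so adjust it if yours differs:
```python
import urllib.request

# Placeholder address: replace with your worker's actual host and HTTP port
url = "http://worker-host:8081/worker"

with urllib.request.urlopen(url, timeout=5) as resp:
    print(resp.status)
    print(resp.read().decode())
```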
Don't forget to swap out `your_entrypoint` with your real entrypoint function. If anything's unclear or you want more examples, just let me know, I'm happy to help!
w
Can you share documentation links for it?
t
Hey there! I've gathered some handy documentation links for you about the Python Server SDK worker status and job handling:
• Worker Lifecycle Documentation: This one walks you through worker registration, job requests, job lifecycle, and session management.
• Job Lifecycle Documentation: Here you'll find details on the entrypoint function, job context, and session management.
• Worker Options Documentation: This explains WorkerOptions parameters like load functions, worker status, and how to configure job handling.
• Deployment Documentation: Covers everything about load balancing, worker availability, and autoscaling in production.
I hope these help you get the info you need! If anything's unclear or you want to dive deeper into a specific topic, just let me know, I'm here to help.
Sources: Deploying to production | LiveKit Docs | Worker lifecycle | LiveKit Docs | Job lifecycle | LiveKit Docs | Worker options | LiveKit Docs