red-television-3815
07/26/2025, 12:51 PM
2025-07-26 12:47:06,558 - DEBUG asyncio - Using selector: EpollSelector
2025-07-26 12:47:06,571 - DEV livekit.agents - Watching /workspace/LocalVoice
Created recordings directory at: /workspace/LocalVoice/recordings
2025-07-26 12:47:12,641 - DEBUG asyncio - Using selector: EpollSelector
2025-07-26 12:47:12,662 - INFO livekit.agents - starting worker {"version": "1.2.2", "rtc-version": "1.0.12"}
2025-07-26 12:47:12,662 - INFO livekit.agents - preloading plugins {"packages": ["livekit.plugins.openai", "livekit.plugins.playai", "livekit.plugins.cartesia", "livekit.plugins.deepgram", "livekit.plugins.elevenlabs", "livekit.plugins.silero", "livekit.plugins.assemblyai", "livekit.plugins.google", "livekit.plugins.turn_detector", "av"]}
2025-07-26 12:47:12,669 - INFO livekit.agents - starting inference executor
2025-07-26 12:47:19,543 - INFO livekit.agents - initializing process {"pid": 22507, "inference": true}
Created recordings directory at: /workspace/LocalVoice/recordings
2025-07-26 12:47:19,861 - DEBUG livekit.agents - initializing inference runner {"runner": "lk_end_of_utterance_en", "pid": 22507, "inference": true}
2025-07-26 12:47:29,553 - INFO livekit.agents - killing process {"pid": 22507, "inference": true}
2025-07-26 12:47:29,554 - ERROR livekit.agents - worker failed
Traceback (most recent call last):
File "/workspace/miniconda3/envs/loc/lib/python3.11/asyncio/tasks.py", line 500, in wait_for
return fut.result()
^^^^^^^^^^^^
File "/workspace/miniconda3/envs/loc/lib/python3.11/site-packages/livekit/agents/ipc/channel.py", line 47, in arecv_message
return _read_message(await dplx.recv_bytes(), messages)
^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/miniconda3/envs/loc/lib/python3.11/site-packages/livekit/agents/utils/aio/duplex_unix.py", line 35, in recv_bytes
len_bytes = await self._reader.readexactly(4)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/miniconda3/envs/loc/lib/python3.11/asyncio/streams.py", line 750, in readexactly
await self._wait_for_data('readexactly')
File "/workspace/miniconda3/envs/loc/lib/python3.11/asyncio/streams.py", line 543, in _wait_for_data
await self._waiter
asyncio.exceptions.CancelledError
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/workspace/miniconda3/envs/loc/lib/python3.11/site-packages/livekit/agents/cli/_run.py", line 79, in _worker_run
await worker.run()
File "/workspace/miniconda3/envs/loc/lib/python3.11/site-packages/livekit/agents/worker.py", line 387, in run
await self._inference_executor.initialize()
File "/workspace/miniconda3/envs/loc/lib/python3.11/site-packages/livekit/agents/ipc/supervised_proc.py", line 169, in initialize
init_res = await asyncio.wait_for(
^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/miniconda3/envs/loc/lib/python3.11/asyncio/tasks.py", line 502, in wait_for
raise exceptions.TimeoutError() from exc
TimeoutError
tall-airline-16756
07/26/2025, 12:52 PM
You can give the worker more time to initialize by raising initialize_process_timeout in your WorkerOptions:
from livekit.agents import WorkerOptions

opts = WorkerOptions(
    # Increase the worker init timeout to 30 seconds; your log shows the
    # inference process being killed roughly 10 seconds after "initializing process".
    initialize_process_timeout=30.0,
)
• Check Network Connectivity: Ensure that your application can connect to the LiveKit server and any external APIs it depends on. Network issues can cause timeouts during startup.
• Monitor System Resources: Verify that your system has sufficient memory and CPU resources. High resource usage can lead to initialization delays.
• Verify LiveKit Server Configuration: Ensure that the required environment variables, such as LIVEKIT_URL, are correctly set and accessible (a quick pre-flight sketch follows this list).
• Test External API Connections: If your application uses external APIs, test these connections separately to ensure they are responsive and not causing delays.
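For example, here is a small pre-flight sketch you could run before starting the worker; it assumes the standard LIVEKIT_URL, LIVEKIT_API_KEY, and LIVEKIT_API_SECRET variables:
import os

# Hedged sketch: confirm the usual LiveKit worker environment variables are set,
# so a missing value doesn't surface later as an initialization timeout.
required = ("LIVEKIT_URL", "LIVEKIT_API_KEY", "LIVEKIT_API_SECRET")
missing = [name for name in required if not os.environ.get(name)]
if missing:
    raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")
print("LiveKit configuration found for", os.environ["LIVEKIT_URL"])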
If the issue persists after trying these steps, let me know, and I can assist further.
Sources: livekit-agents/livekit/agents/llm/mcp.py
red-television-3815
07/26/2025, 1:01 PM
turn_detection=MultilingualModel(),
2025-07-26 12:59:41,211 - DEBUG livekit.agents - received user transcript {"user_transcript": "Hey, what's up?", "language": "english", "pid": 32663, "job_id": "AJ_idGDMG4ihsxM"}
2025-07-26 12:59:44,213 - ERROR livekit.agents - Error in _bounce_eou_task
Traceback (most recent call last):
File "/workspace/miniconda3/envs/loc/lib/python3.11/asyncio/tasks.py", line 500, in wait_for
return fut.result()
^^^^^^^^^^^^
File "/workspace/miniconda3/envs/loc/lib/python3.11/site-packages/livekit/agents/ipc/job_proc_lazy_main.py", line 101, in do_inference
inf_resp = await fut
^^^^^^^^^
asyncio.exceptions.CancelledError
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/workspace/miniconda3/envs/loc/lib/python3.11/site-packages/livekit/agents/utils/log.py", line 16, in async_fn_logs
return await fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/miniconda3/envs/loc/lib/python3.11/site-packages/livekit/agents/voice/audio_recognition.py", line 358, in _bounce_eou_task
end_of_turn_probability = await turn_detector.predict_end_of_turn(chat_ctx)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/miniconda3/envs/loc/lib/python3.11/site-packages/livekit/plugins/turn_detector/multilingual.py", line 62, in predict_end_of_turn
return await super().predict_end_of_turn(chat_ctx, timeout=timeout)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/miniconda3/envs/loc/lib/python3.11/site-packages/livekit/plugins/turn_detector/base.py", line 230, in predict_end_of_turn
result = await asyncio.wait_for(
^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/miniconda3/envs/loc/lib/python3.11/asyncio/tasks.py", line 502, in wait_for
raise exceptions.TimeoutError() from exc
TimeoutError {"pid": 32663, "job_id": "AJ_idGDMG4ihsxM"}
tall-airline-16756
07/26/2025, 1:02 PM
You can give end-of-turn prediction more time to respond by passing a larger timeout:
probability = await turn_detector.predict_end_of_turn(chat_ctx, timeout=10)
• Check Remote Inference Configuration: Quick check - make sure your LIVEKIT_REMOTE_EOT_URL is set up correctly and that your remote server is responding (see the reachability sketch after this list). Sometimes it's just a small connection hiccup!
• Ensure Model Files Are Downloaded: Let's make sure all your model files are ready to go. Just run this simple command: python agent.py download-files
• Monitor System Resources: Don't stress - the model is pretty lightweight, needing less than 500MB of RAM and running smoothly on your CPU. Just do a quick check to ensure you've got enough free resources.
• Fallback Behavior: The good news? This model is super smart and can handle timeouts gracefully. If something doesn't work perfectly, it'll automatically fall back to local inference or provide a default value.
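If you want to rule out the remote path quickly, here's a minimal reachability sketch; it assumes LIVEKIT_REMOTE_EOT_URL points at a plain HTTP endpoint you control, so adjust as needed:
import os
import urllib.request

# Hedged sketch: check whether a remote end-of-turn server is configured and reachable.
url = os.environ.get("LIVEKIT_REMOTE_EOT_URL")
if not url:
    print("LIVEKIT_REMOTE_EOT_URL is not set - the turn detector will run locally.")
else:
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            print(f"Remote EOT server answered with HTTP {resp.status}")
    except Exception as exc:
        print(f"Remote EOT server unreachable: {exc}")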
Give these steps a try, and if you're still seeing any issues, I'm right here to help you out! We'll get this sorted together. 😊
Sources: livekit-agents/livekit/agents/voice/agent_activity.py | LiveKit turn detector plugin | LiveKit Docs
red-television-3815
07/26/2025, 1:34 PM
tall-airline-16756
07/26/2025, 1:35 PM
Here's how to tell whether the turn detector is running locally or using remote inference:
• Check the Environment Variable: Look for the LIVEKIT_REMOTE_EOT_URL environment variable in your system setup. If it's there, the model will chat with a remote server for inference.
• Observe the Behavior:
◦ If LIVEKIT_REMOTE_EOT_URL is not set or is empty, the model will happily run locally and load its language configs during startup.
◦ If LIVEKIT_REMOTE_EOT_URL is set, the model will reach out to the remote server for inference and threshold checks.
• Inspect the Code: The _remote_inference_url() function is your behind-the-scenes detective. If it returns None, that means local inference mode is in action (there's a tiny sketch of this decision right after the list).
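As a rough mental model (a sketch, not the plugin's actual code), the decision comes down to something like this:
import os

# Hedged sketch of the local-vs-remote decision described above: the plugin keys
# off the LIVEKIT_REMOTE_EOT_URL environment variable.
def remote_inference_url() -> str | None:
    url = os.environ.get("LIVEKIT_REMOTE_EOT_URL", "").strip()
    return url or None  # None -> local inference, a URL -> remote inference

mode = "remote" if remote_inference_url() else "local"
print(f"The turn detector would use {mode} inference.")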
If you decide you'd rather keep things local and want to disable remote inference, just unset or remove the LIVEKIT_REMOTE_EOT_URL variable, and your model will run locally without missing a beat!
Sources: livekit-agents/livekit/agents/voice/agent_activity.py | LiveKit turn detector plugin | LiveKit Docs