stale-gpu-14856
10/01/2025, 7:23 PM
brave-printer-20093
10/01/2025, 9:55 PM
crooked-dawn-90821
10/02/2025, 12:46 PM
many-hair-70963
10/02/2025, 2:17 PM
kind-branch-59377
10/02/2025, 7:37 PM
quaint-waitress-91864
10/02/2025, 8:06 PM
alert-honey-38248
10/03/2025, 6:44 AM
brave-island-45242
10/03/2025, 7:27 PM
many-forest-60185
10/04/2025, 9:33 PM
llm="openai/gpt-4.1-mini" : livekit.agents._exceptions.APIStatusError: Error proxying completions: provider: azure model: gpt-4.1-mini-provisioned, message: POST "<https://agent-gateway.cognitiveservices.azure.com/openai/v1/chat/completions>": 400 Bad Request {
"message": "Missing required parameter: 'response_format.json_schema'.",
"type": "invalid_request_error",
"param": "response_format.json_schema",
"code": "missing_required_parameter"
}
green-tent-71322
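For context on the 400 above: OpenAI-compatible endpoints reject a `response_format` whose `type` is `"json_schema"` when the `json_schema` object itself is missing. A minimal sketch of a well-formed payload follows; the schema name and fields are made up for illustration, not taken from this thread.

```typescript
// Hypothetical rule being enforced by the server: any request with
// response_format.type === "json_schema" must also carry a
// response_format.json_schema object, or it answers 400
// "missing_required_parameter".
type ResponseFormat = {
  type: "json_schema";
  json_schema: {
    name: string;                    // identifier for the schema
    strict?: boolean;                // ask the model to follow it exactly
    schema: Record<string, unknown>; // a standard JSON Schema document
  };
};

const responseFormat: ResponseFormat = {
  type: "json_schema",
  json_schema: {
    name: "agent_reply",             // made-up name for this sketch
    strict: true,
    schema: {
      type: "object",
      properties: { answer: { type: "string" } },
      required: ["answer"],
      additionalProperties: false,
    },
  },
};

// Client-side guard that surfaces the mistake before the HTTP round trip:
export function validateResponseFormat(rf: {
  type?: string;
  json_schema?: unknown;
}): void {
  if (rf.type === "json_schema" && rf.json_schema === undefined) {
    throw new Error("Missing required parameter: 'response_format.json_schema'");
  }
}

validateResponseFormat(responseFormat); // does not throw
```

If the schema is injected by a proxy or gateway (as the Azure gateway in the error suggests), the missing field may be dropped on that hop rather than in your own code.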
10/06/2025, 2:54 AM
rough-gpu-50664
10/06/2025, 11:59 PM
busy-restaurant-25888
10/07/2025, 1:00 PM
session = AgentSession[UserData](
userdata=userdata,
vad=ctx.proc.userdata["vad"],
stt="assemblyai/universal-streaming",
llm="openai/gpt-4.1-mini",
tts="cartesia/sonic-2:6f84f4b8-58a2-430c-8c79-688dad597532",
turn_detection=MultilingualModel()
)
helpful-lizard-52428
10/09/2025, 2:04 AM
crooked-dawn-90821
10/11/2025, 5:59 PM
refined-scientist-34781
10/16/2025, 12:39 PM
worried-knife-36498
10/18/2025, 11:36 AM
rough-gpu-50664
10/21/2025, 9:25 PM
narrow-engineer-85614
10/24/2025, 6:48 PM
bulky-pager-62731
10/29/2025, 8:23 AM
modern-restaurant-53740
10/30/2025, 3:05 AM
gentle-traffic-46145
10/31/2025, 2:25 PM
rapid-van-16677
11/02/2025, 5:49 AM
quick-gpu-24854
11/04/2025, 11:17 AM
• session.generateReply() - I can hear it speaking
• then I can converse with it as expected
But when deployed on Fly.io, it behaves like this:
• starts with session.generateReply() - I can hear it speaking - GOOD ✅
• then as soon as I start speaking, I get this error:
2025-11-04T11:06:36Z app[d8d4295f266538] fra [info]2025-11-04T11:06:36.590Z [uncaughtException] Error [ERR_IPC_CHANNEL_CLOSED]: Channel closed
2025-11-04T11:06:36Z app[d8d4295f266538] fra [info] at target.send (node:internal/child_process:753:16)
2025-11-04T11:06:36Z app[d8d4295f266538] fra [info] at InferenceProcExecutor.doInference (file:///app/node_modules/@livekit/agents/dist/ipc/inference_proc_executor.js:60:15)
2025-11-04T11:06:36Z app[d8d4295f266538] fra [info] at #doInferenceTask (file:///app/node_modules/@livekit/agents/dist/ipc/job_proc_executor.js:63:50)
2025-11-04T11:06:36Z app[d8d4295f266538] fra [info] at ChildProcess.<anonymous> (file:///app/node_modules/@livekit/agents/dist/ipc/job_proc_executor.js:49:58)
2025-11-04T11:06:36Z app[d8d4295f266538] fra [info] at ChildProcess.emit (node:events:531:35)
2025-11-04T11:06:36Z app[d8d4295f266538] fra [info] at emit (node:internal/child_process:949:14)
2025-11-04T11:06:36Z app[d8d4295f266538] fra [info] at process.processTicksAndRejections (node:internal/process/task_queues:91:21)
I’ve spent a lot of time debugging this and have no clue what’s going wrong.
Any ideas? Thanks!!!
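Not a fix, but context on the crash above: ERR_IPC_CHANNEL_CLOSED means `send()` was called on a child process whose IPC channel had already closed, i.e. the inference subprocess died before the job process tried to hand it work. On small Fly.io machines a common (but here unconfirmed) culprit is the OOM killer taking out the inference process once audio starts flowing. A defensive-send sketch follows; the `IpcTarget` interface and `safeSend` helper are mine, though `connected` and `send()` are real Node `ChildProcess` members.

```typescript
// Minimal shape of the Node ChildProcess members this sketch relies on.
// `connected` flips to false once the IPC channel closes; calling send()
// after that point is what raises ERR_IPC_CHANNEL_CLOSED.
interface IpcTarget {
  connected: boolean;
  send(message: object): boolean;
}

// Hypothetical wrapper: drop the message instead of crashing the parent
// with an uncaughtException when the child is already gone.
export function safeSend(child: IpcTarget, message: object): boolean {
  if (!child.connected) {
    console.warn("IPC channel closed; dropping message", message);
    return false;
  }
  return child.send(message);
}
```

Independently of any guard, checking `fly logs` for the child process exiting (or an OOM kill event) just before the stack trace would confirm or rule out this theory.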
----------
This is my new voice.AgentSession():
export function createAgentSession({
vad,
userData,
}: {
vad: silero.VAD;
userData: UserContext;
}): voice.AgentSession {
return new voice.AgentSession({
stt: createSTT(),
llm: createLLM(),
tts: createTTS(),
turnDetection: new livekit.turnDetector.MultilingualModel(),
vad,
voiceOptions: VOICE_OPTIONS,
userData,
});
}
elegant-businessperson-51313
11/05/2025, 11:18 AM
white-postman-10482
11/11/2025, 9:37 PM
future-continent-10147
11/12/2025, 11:31 AM
gentle-traffic-46145
11/12/2025, 7:26 PM
better-house-57730
11/16/2025, 4:05 PM
eu-central and using:
• Livekit Inference - Deepgram >
• Livekit Inference - Gemini >
Are these going through EU endpoints, or do we have to set up plugins with regional params (like location for VertexAI)?
I'm not asking from a compliance standpoint, but about latency: if LiveKit Agents run in Europe while Deepgram/Gemini/Eleven inference still goes to the US, wouldn't it be better latency-wise to keep the LiveKit Agents in the US anyway?
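One way to answer the latency question empirically instead of guessing region placement: probe each provider endpoint from both candidate regions and compare the averages. A rough sketch below; the probe is passed in as a function, and whatever URL you call is a placeholder for the endpoint your agent actually hits.

```typescript
// Average wall-clock time of an async probe over a few samples. Run the
// same probe, e.g.
//   () => fetch("https://example-provider-endpoint/", { method: "HEAD" })
// from each deployment region and compare the returned numbers.
export async function measureLatencyMs(
  probe: () => Promise<unknown>,
  samples = 5,
): Promise<number> {
  let totalMs = 0;
  for (let i = 0; i < samples; i++) {
    const start = performance.now();
    await probe().catch(() => undefined); // a failed probe still counts its elapsed time
    totalMs += performance.now() - start;
  }
  return totalMs / samples;
}
```

Note that for a voice pipeline the number that matters is the full round trip user → agent → provider → agent → user, so the agent region that minimizes the sum of both hops can differ from the one closest to the provider alone.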