creamy-tiger-14805 (09/10/2025, 2:32 PM):

tall-airline-16756 (09/10/2025, 2:32 PM):

tall-airline-16756 (09/10/2025, 2:33 PM):
The `agent_name` field is being deprecated in deployment and creation requests. Although you might still see this field in some API responses, it’s no longer fully supported or used in the LiveKit Cloud dashboard or operational tools. Instead, the system mainly uses the `agent_id` to identify agents.
When you delete and redeploy the agent, it can reset some settings, which might temporarily fix the problem. But to keep things running smoothly, it’s best to rely on the `agent_id` rather than the `agent_name` for tracking your agents.
If you’d like, feel free to share more about your setup or any other questions you have—I’m here to help you work through this!
Sources: livekit/livekit_cloud_agent.twirp.go | Deploy and scale agents on LiveKit Cloud | cmd/lk/agent.go | Deploying to production | LiveKit Docs

creamy-tiger-14805 (09/10/2025, 2:34 PM):

tall-airline-16756 (09/10/2025, 2:34 PM):
`agent_name` is indeed necessary when placing an outbound call using dispatch. It enables explicit agent dispatch, which is essential for telephony integrations. Without specifying it, the system defaults to automatic dispatch behavior, which unfortunately doesn’t support targeted dispatching for outbound calls.
For example, you can easily use the LiveKit CLI to dispatch an agent with a specific name like this:
```shell
lk dispatch create \
  --new-room \
  --agent-name outbound-caller \
  --metadata '+15105550100'
```
Likewise, when using the API, you’ll want to specify `agent_name` in the `CreateAgentDispatchRequest` to programmatically create a dispatch.
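If you’re scripting this instead of using the CLI, here’s a minimal Python sketch using the `livekit-api` package (the room name and metadata below are placeholders):

```python
import asyncio

from livekit import api


async def main():
    # Reads LIVEKIT_URL, LIVEKIT_API_KEY, and LIVEKIT_API_SECRET from the environment
    lkapi = api.LiveKitAPI()
    dispatch = await lkapi.agent_dispatch.create_dispatch(
        api.CreateAgentDispatchRequest(
            agent_name="outbound-caller",  # must match the worker's agent_name
            room="outbound-room",          # placeholder room name
            metadata="+15105550100",       # passed through to the agent, e.g. the number to dial
        )
    )
    print("created dispatch:", dispatch)
    await lkapi.aclose()


asyncio.run(main())
```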
If you’d like, I’m here to help walk you through the setup or answer any other questions you have!
Sources: Agents telephony integration | LiveKit Docs | Making calls using SIP | LiveKit Docs | Dispatching agents | LiveKit Docs | Agent dispatch | LiveKit Docs

creamy-tiger-14805 (09/10/2025, 2:36 PM):

tall-airline-16756 (09/10/2025, 2:37 PM):
This may come down to how `agent_name` is being managed during deployment. When you set `agent_name` in `WorkerOptions`, it activates explicit dispatch, which means automatic assignment is turned off and the agent needs to be manually added to a room. However, if the `agent_name` isn’t being applied as expected, this could be due to:
• A mismatch in the configuration during deployment.
• Possible hiccups in the deployment process that might not be correctly registering the `agent_name`.
• Settings that might have been overlooked, especially in telephony scenarios where explicit dispatch is necessary.
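For reference, explicit dispatch is enabled on the worker side by setting `agent_name` in `WorkerOptions`. A minimal sketch (the entrypoint body and agent name are placeholders):

```python
from livekit import agents


async def entrypoint(ctx: agents.JobContext):
    await ctx.connect()
    # ... start your AgentSession here ...


if __name__ == "__main__":
    agents.cli.run_app(
        agents.WorkerOptions(
            entrypoint_fnc=entrypoint,
            # Setting agent_name turns off automatic dispatch; the worker
            # only joins rooms it is explicitly dispatched to.
            agent_name="outbound-caller",
        )
    )
```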
Could you please double-check that the `agent_name` is properly set in your `WorkerOptions` and that there aren’t any conflicts during deployment? If you’re still running into issues, don’t hesitate to reach out—I’m here to help you troubleshoot this further!
Sources: Agents telephony integration | LiveKit Docs | SIP troubleshooting guide | LiveKit Docs | Accepting incoming calls | LiveKit Docs | Making calls using SIP | LiveKit Docs | Dispatching agents | LiveKit Docs

creamy-tiger-14805 (09/10/2025, 2:38 PM):
“the agent_name is properly set in your WorkerOptions”?

tall-airline-16756 (09/10/2025, 2:39 PM):
Here’s how to confirm that `agent_name` is set correctly in your `WorkerOptions`, using the HTTP health check endpoint provided by the worker:
1. First, start your LiveKit agent worker with the `agent_name` specified in `WorkerOptions`.
2. Next, open the HTTP server endpoint (usually `http://localhost:8081/worker`).
3. Then, take a look at the JSON response and find the `agent_name` field.
The response will show some helpful metadata about the worker, like this:
```json
{
  "agent_name": "my-telephony-agent",
  "worker_type": "JT_ROOM",
  "active_jobs": 0,
  "sdk_version": "1.0.0",
  "project_type": "python"
}
```
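If you’d like to script this check, here’s a minimal Python sketch (assuming the worker exposes the `/worker` endpoint on the default debug port shown above):

```python
import json
import urllib.request

# Query the worker's status endpoint and pull out the agent_name field
with urllib.request.urlopen("http://localhost:8081/worker") as resp:
    info = json.load(resp)

print("agent_name:", info.get("agent_name"))
```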
This way, you can easily confirm if the `agent_name` is set just right. If you have any questions or need a hand with anything else, feel free to ask—I’m here to help!
Sources: Making calls using SIP | LiveKit Docs | SIP troubleshooting guide | LiveKit Docs | Dispatching agents | LiveKit Docs | Accepting incoming calls | LiveKit Docs

creamy-tiger-14805 (09/10/2025, 3:37 PM):

tall-airline-16756 (09/10/2025, 3:37 PM):
The `extra_instructions` field in the `AgentFalseInterruptionEvent` class is being deprecated. This field used to help provide extra instructions during false interruption handling, but now LiveKit has made things easier by automatically managing the resumption of interrupted speech.
So, when a false interruption happens—like a brief pause or interruption without meaningful input—the system will pause the agent’s speech. If no new input comes in within the `false_interruption_timeout` window, the agent will smoothly resume speaking on its own. This is controlled by the `resume_false_interruption` option in `AgentSession`, which is turned on by default to keep things seamless for you.
This means you no longer need to manually handle false interruptions using `extra_instructions`. The system’s automatic resume feature takes care of it, making your development experience simpler. If you’d like, I’m here to help you update your code to fit this new approach—just let me know!
Sources: livekit-agents/livekit/agents/voice/agent_activity.py | Agents 0.x migration guide | LiveKit Docs

creamy-tiger-14805 (09/11/2025, 3:16 AM):

tall-airline-16756 (09/11/2025, 3:17 AM):
You can configure this in `AgentSession` by enabling the `resume_false_interruption` option and specifying a `false_interruption_timeout`. Here’s a simple example to guide you:
```python
from livekit.agents import Agent, AgentSession, JobContext
from livekit.plugins import cartesia, deepgram, openai, silero


async def entrypoint(ctx: JobContext):
    session = AgentSession(
        vad=silero.VAD.load(),
        llm=openai.LLM(model="gpt-4o-mini"),
        stt=deepgram.STT(),
        tts=cartesia.TTS(),
        false_interruption_timeout=1.0,  # Wait 1 second before resuming
        resume_false_interruption=True,  # Enable auto-resume
    )
    await session.start(agent=Agent(instructions="You are a helpful assistant."), room=ctx.room)
```
This setup helps your agent automatically pick up speech again after a short pause, as long as no new input comes in during the timeout. If you have any questions or want to explore this further, I’m here to help!
Sources: livekit-plugins/livekit-plugins-aws/livekit/plugins/aws/experimental/realti...

creamy-tiger-14805 (09/11/2025, 3:28 AM):
tall-airline-16756 (09/11/2025, 3:28 AM):
To set up Google STT, first install the plugin:

```shell
pip install "livekit-agents[google]~=1.2"
```
1. Make sure to enable the Cloud Speech-to-Text API in your Google Cloud project.
2. Then, create an instance of Google STT in your code like this:
```python
from livekit.plugins import google

stt = google.STT()
```
3. Next, set up authentication:
• If you’re using Google AI (including the Gemini API), just set the `GOOGLE_API_KEY` environment variable with your API key.
• If you’re working with Vertex AI, you’ll need to set the `GOOGLE_APPLICATION_CREDENTIALS` environment variable to point to your service account key file.
Just a quick note: if you’re using the Gemini API, the same `GOOGLE_API_KEY` works for Google STT as well, so no extra key is needed. But for Vertex AI, that service account key file is required.
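For example, here’s a minimal sketch of wiring up the credentials in code before creating the STT instance (the key value and file path are placeholders; in practice you’d usually export these in your shell instead):

```python
import os

from livekit.plugins import google

# Gemini API: a single key covers both LLM and STT usage (placeholder value)
os.environ.setdefault("GOOGLE_API_KEY", "your-gemini-api-key")

# Vertex AI instead? Point to a service account key file:
# os.environ.setdefault("GOOGLE_APPLICATION_CREDENTIALS", "/path/to/service-account.json")

stt = google.STT()
```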
If you have any questions or need a hand with the setup, feel free to reach out—I’m here to help!
Sources: Google integration guide | LiveKit Docs | livekit-plugins/livekit-plugins-google/livekit/plugins/google/beta/gemini_t... | Gemini Live API integration guide | LiveKit Docs | Google AI and LiveKit | LiveKit Docs

creamy-tiger-14805 (09/11/2025, 3:59 AM):
```python
class MeragiInboundAgent(Agent):
    def __init__(self,
                 customer_name: str,
                 chat_ctx=None,
                 dial_info=dict[str, Any]):
        self.__name__ = "meragi-inbound-agent"
        super().__init__(
            instructions=INSTRUCTIONS.replace("{{customer_name}}", customer_name),
            stt=deepgram.STT(),
            llm=google.LLM(model="gemini-2.5-flash-lite"),
            tts=elevenlabs.TTS(voice_id="H8bdWZHK2OgZwTN7ponr"),
            turn_detection=MultilingualModel(),
            # llm=openai.realtime.RealtimeModel(model="gpt-4o-mini-realtime-preview", voice="marin")
        )
        self.dial_info = dial_info
        self.customer_name = customer_name
```
My agent doesn’t say the customer name.

tall-airline-16756 (09/11/2025, 3:59 AM):
It looks like the placeholder `{{customer_name}}` in your instructions isn’t being replaced with the actual customer name. To help you fix this, you’ll want to make sure that the placeholder gets explicitly replaced during initialization. Here’s an example:
```python
class MeragiInboundAgent(Agent):
    def __init__(self,
                 customer_name: str,
                 chat_ctx=None,
                 dial_info: dict[str, Any] | None = None):  # annotate, rather than defaulting to the type itself
        self.__name__ = "meragi-inbound-agent"
        # Replace the placeholder before the instructions are passed along
        instructions = INSTRUCTIONS.replace("{{customer_name}}", customer_name)
        super().__init__(
            instructions=instructions,
            stt=deepgram.STT(),
            llm=google.LLM(model="gemini-2.5-flash-lite"),
            tts=elevenlabs.TTS(voice_id="H8bdWZHK2OgZwTN7ponr"),
            turn_detection=MultilingualModel(),
        )
        self.dial_info = dial_info
        self.customer_name = customer_name
```
This way, the placeholder `{{customer_name}}` will be swapped out with the real name (like “John”) before the instructions get passed along to the agent. If you have any questions or need a hand with anything else, just let me know—I’m here to help!

creamy-tiger-14805 (09/11/2025, 5:03 AM):

tall-airline-16756 (09/11/2025, 5:05 AM):
1. Dispatch Configuration Mismatch: Check that the `agent_name` in `WorkerOptions` matches the dispatch rule or SIP configuration. If there’s a mismatch, the agent might not join the room and won’t be able to speak.
2. Name Not Passed to LLM Context: It’s a good idea to verify that the hardcoded name is explicitly included in the system message or instructions sent to the LLM. For example, you can use `chatCtx.addMessage({ role: 'system', content: "The customer's name is Jayden." })`.
3. Agent Logic Bypasses Name Usage: Sometimes, if the agent is set up to dynamically fetch user data (like through tools), it might ignore the hardcoded name. In that case, updating the prompt to clearly say, "You already know the customer’s name is Jayden. Greet them by name," can help guide the agent.
4. TTS or Audio Issues: It’s worth checking if the TTS engine is properly configured and that the agent’s audio track is being published in the room. Sometimes, misconfigurations here can lead to no audible speech.
5. Prompt vs. Identity Confusion: Remember, the `agent_name` is an internal identifier and doesn’t control what the agent actually says. Make sure the spoken name is included as part of the LLM’s response, not just the agent’s identity.
To help debug, try logging the LLM prompt and response to confirm the name is included. If you see the name in the response but don’t hear it, the issue is likely with TTS or audio setup. Please feel free to reach out if you’d like me to assist you further—I’m here to help!
Sources: examples/src/restaurant_agent.ts | src/components/playground/Playground.tsx | livekit-agents/livekit/agents/worker.py | Anam virtual avatar integration guide | LiveKit Docs | Community & Support | livekit/livekit_composite | DeepWiki

creamy-tiger-14805 (09/11/2025, 5:13 AM):
`chatCtx.addMessage({ role: 'system', content: "The customer's name is Jayden." })`
how to do this?

tall-airline-16756 (09/11/2025, 5:14 AM):
To use `chatCtx.addMessage` to add a system message in LiveKit, here’s a simple way to do it step-by-step:
1. First, create a `ChatContext` instance if you haven’t done so already:

```typescript
const chatCtx = new llmlib.ChatContext();
```

2. Next, add your system message to the context like this:

```typescript
chatCtx.addMessage({
  role: 'system',
  content: 'The customer\'s name is Jayden.'
});
```

3. Finally, pass the updated `chatCtx` to the LLM when you generate a response:

```typescript
const stream = llm.chat({ chatCtx });
```
This way, the system message becomes part of the LLM’s context and helps guide its responses. If you have any questions or want me to walk you through it, just let me know—I’m here to help!
Sources: agents/src/llm/chat_context.test.ts