red-accountant-48215
08/20/2025, 12:07 PM
tall-airline-16756
08/20/2025, 12:07 PM
You can make sure the chat context update has finished before your next action by using the await keyword like this:
await agent.update_chat_ctx(updated_chat_ctx)
# Now you can safely perform your next action
The framework also has some handy synchronization tools, like futures and timeouts, to help everything run smoothly. For instance, real-time models like OpenAI’s have a 5-second timeout to avoid getting stuck. Plus, chat context updates are coordinated with ongoing speech and other agent tasks.
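If you want an extra guard of your own on top of that, you can wrap the awaited update in a timeout. This is just a minimal sketch using asyncio; the 5-second value echoes the realtime-model timeout mentioned above and is illustrative rather than required:
import asyncio

try:
    # Wait for the chat context update, but give up after 5 seconds
    await asyncio.wait_for(agent.update_chat_ctx(updated_chat_ctx), timeout=5.0)
except asyncio.TimeoutError:
    print("Chat context update timed out")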
If you have any questions or want me to walk you through anything else, just let me know—I’m here to help!
Sources: livekit-plugins/livekit-plugins-aws/livekit/plugins/aws/experimental/realti...
red-accountant-48215
08/20/2025, 12:11 PM
red-accountant-48215
08/20/2025, 12:12 PM
tall-airline-16756
08/20/2025, 12:12 PM
# Check if the message already exists before adding it
existing_message = chat_ctx.get_by_id(message_id)
if existing_message is None:
    chat_ctx.add_message(
        role="user",
        content="Your message content",
        id=message_id
    )
• Use the add_message() method: This simply adds the message to the end of the context, super straightforward:
chat_ctx.add_message(
    role="user",
    content="Your message content"
)
• Maintain Chronological Order: If you want to keep things nicely ordered by time, you can insert the message exactly where it belongs using `insert()` (there’s a short sketch of building message_item after this list):
chat_ctx.insert(message_item)
• Merge Contexts: When you’re combining different chat contexts and want to avoid duplicates, this merge approach works well:
existing_ids = {item.id for item in chat_ctx.items}
for item in other_chat_ctx.items:
    if item.id not in existing_ids:
        idx = chat_ctx.find_insertion_index(created_at=item.created_at)
        chat_ctx.items.insert(idx, item)
        existing_ids.add(item.id)
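For the insert() option above, message_item needs to be a chat item object you build yourself. Here is a minimal sketch of one way to do that; it assumes ChatMessage can be imported from livekit.agents.llm and accepts a list for content, so double-check the constructor against your installed version:
from livekit.agents.llm import ChatMessage

# Assumption: id and created_at get sensible defaults if not provided
message_item = ChatMessage(role="user", content=["Your message content"])

# insert() places the item according to its created_at timestamp
chat_ctx.insert(message_item)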
These handy methods should help you add the last message without overwriting anything important. If you want me to walk you through any of these or have more questions, just let me know—I’m here to help!
Sources: livekit-agents/livekit/agents/voice/agent_activity.py | Pipeline nodes and hooks | LiveKit Docs | complex-agents/personal_shopper/personal_shopper.py
tall-airline-16756
08/20/2025, 12:13 PM
When you call session.generate_reply(), the agent’s speech is automatically added to the chat context (chat_ctx) as soon as the speech generation finishes. To help you know exactly when it’s been added, here are a few friendly ways you can keep an eye on it:
• Listen for the conversation_item_added event: This event fires whenever something new is added to the chat context. You can set up a listener to catch when the agent’s speech pops in:
def on_conversation_item_added(event):
    if event.item.role == "assistant":
        print(f"Agent speech added to chat_ctx: {event.item.text_content}")

session.on("conversation_item_added", on_conversation_item_added)
• Use the SpeechHandle callbacks: The generate_reply() method gives you a SpeechHandle. You can attach a callback to know exactly when the speech is done:
handle = session.generate_reply(user_input="say hello.")
handle.add_done_callback(lambda _: print("Speech generation complete - should be in chat_ctx now"))
await handle
• Keep an eye on the SpeechHandle state: You can also check the handle’s properties to see if the speech finished or got interrupted:
handle = session.generate_reply(user_input="say hello.")
await handle
if handle.interrupted:
    print("Speech was interrupted - partial content added to chat_ctx")
else:
    print("Speech completed normally - full content added to chat_ctx")
If you have any questions or want a hand with setting these up, just let me know—I’m here to help!
Sources: livekit-agents/livekit/agents/voice/agent_activity.py | Agent speech and audio | LiveKit Docs
red-accountant-48215
08/20/2025, 12:14 PM
.wait_for_playout()
tall-airline-16756
08/20/2025, 12:15 PM
• Awaiting wait_for_playout(): This blocks your code until the speech has finished playing:
handle = session.generate_reply(user_input="say hello.")
await handle.wait_for_playout()
# Code here will run only after the speech is finished
• Callbacks: These don’t block your code, so other stuff can keep running while the speech is happening. You can add a callback to kick off something once the speech wraps up:
handle = session.generate_reply(user_input="say hello.")
handle.add_done_callback(lambda _: print("Speech done"))
# Code here runs right away, and the callback triggers when speech finishes
Both ways are waiting on the same event (the speech finishing), but it really depends on whether you want to handle things synchronously or asynchronously.
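To make the contrast concrete, here is a minimal sketch that keeps doing other work while the speech plays and only blocks at the point where the finished speech actually matters (do_other_work is a hypothetical placeholder for your own coroutine):
handle = session.generate_reply(user_input="say hello.")

# Non-blocking: this runs while the speech is still playing
await do_other_work()  # hypothetical placeholder for your own async work

# Blocking: wait only where the speech must be finished
await handle.wait_for_playout()
If anything’s unclear or you want to dive deeper, just let me know—I’m happy to help!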
Sources: examples/voice_agents/multi_agent.py | Agent speech | LiveKit Docs
red-accountant-48215
08/20/2025, 12:16 PM
await self.session.generate_reply(
    user_input="Proceed with the session by asking me which method to use."
).wait_for_playout()
self.request_method()
Why is request_method being triggered before the end of the first speech?
tall-airline-16756
08/20/2025, 12:17 PM
The reason request_method gets triggered before the first speech finishes is that .wait_for_playout() waits for the whole assistant turn to wrap up. That includes not just the speech playback but also any tool executions and their responses. This means function tools like request_method can actually start running while the initial speech is still playing.
If you want to make sure the speech part finishes before a specific tool runs, you can use RunContext.wait_for_playout() right inside the tool itself:
@function_tool
async def my_tool(self, ctx: RunContext):
    # Wait for the assistant's spoken response prior to running this tool
    await ctx.wait_for_playout()
    # Now execute your tool logic
This way, the speech for that step will be done before your tool kicks in. Feel free to ask if you'd like me to explain more or help out with anything else!
Sources: livekit-agents/livekit/agents/voice/agent_activity.py | Agent speech and audio | LiveKit Docs
red-accountant-48215
08/20/2025, 12:22 PM
tall-airline-16756
08/20/2025, 12:23 PM
To use RunContext, just include it as a parameter in your function tool definitions. The LiveKit framework will automatically provide the RunContext whenever the tool is called. Here’s a quick example:
from livekit.agents import function_tool, RunContext

@function_tool
async def my_tool(self, context: RunContext):
    # Access the session
    session = context.session
    # Access user data
    userdata = context.userdata
    # Access the speech handle
    speech_handle = context.speech_handle
    # Access function call info
    function_call = context.function_call

    # Example: wait for the current speech to complete
    await context.wait_for_playout()
    return "Speech completed"
The RunContext gives you handy access to the current session, user data, speech handle, and function call info. This way, you can easily manage speech, handle interruptions, and work with the session state.
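As one example of handling an interruption this way, here is a minimal sketch of a tool that checks the speech handle after playout (the tool name and return strings are placeholders, not part of the LiveKit API):
from livekit.agents import function_tool, RunContext

@function_tool
async def report_playback(self, context: RunContext):
    # Let the current speech finish (or get cut off) before checking its state
    await context.wait_for_playout()
    if context.speech_handle.interrupted:
        return "The last response was cut off early"
    return "The last response played in full"
If anything’s unclear or you want to dive deeper, just let me know—I’m here to help!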
Sources: Tool definition and use | LiveKit Docs | livekit-agents/livekit/agents/voice/events.py | realtime/openai-realtime-drive-thru.py
red-accountant-48215
08/20/2025, 12:23 PM
tall-airline-16756
08/20/2025, 12:24 PM
RunContext is made to be used only within function tools, so it won’t work outside of them. It’s created behind the scenes by the LiveKit framework when a function tool runs, and it needs things like the session, speech handle, and function call—which are only available in that setting.
If you’re looking to get to session data or use its features outside a function tool, no worries! You can use the AgentSession object directly. For example, you can call session methods like session.say(), session.generate_reply(), or access session.userdata elsewhere in your agent code, like inside the Agent.on_enter() method.
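Here’s a minimal sketch of that pattern inside an Agent subclass (the class name and instruction strings are just placeholders):
from livekit.agents import Agent

class MyAgent(Agent):
    def __init__(self):
        super().__init__(instructions="You are a helpful assistant.")

    async def on_enter(self):
        # Outside a function tool, reach the session through self.session
        userdata = self.session.userdata
        await self.session.generate_reply(instructions="Greet the user and offer to help.")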
Feel free to reach out if you want a hand with grabbing specific session data or setting this up in your agent—I’m here to help!
Sources: Tool definition and use | LiveKit Docs | README.md