mysterious-agent-76276
08/10/2025, 8:57 AM
I'm using RealtimeModel for STT+LLM, while TTS is being handled by a separate model. I want to update the context/instructions before they're passed down to the LLM. How do I achieve that without sacrificing the low latency, and without changing my pipeline? What are the exact parameters I should pass to the model and the AgentSession to achieve this?

tall-airline-16756
08/10/2025, 8:57 AM
To update the context or instructions for a RealtimeModel in your STT+LLM pipeline without slowing things down, here are a couple of easy ways to do it:
• Update instructions: use the update_instructions method to tweak the system prompt without having to restart your session:
await realtime_session.update_instructions(new_instructions)
• Update chat context: use the update_chat_ctx method to send updated context to the model on the fly:
await realtime_session.update_chat_ctx(new_chat_ctx)
Both of these work in real-time, so you’ll keep that nice, low latency. Here are a few tips to keep things running smoothly:
• Only update the parts of the context that actually changed to keep things light.
• Make sure your context updates are serialized efficiently.
• Try not to update too often to avoid slowing down performance.
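The "only update the parts that actually changed" tip above can be sketched in plain Python. The dict-based context and the diff_context helper below are illustrative stand-ins, not LiveKit API:

```python
# Sketch: compute a minimal delta between the previous and the new chat
# context, so each context update carries only the entries that changed.
# diff_context() and the dict shape are hypothetical, not LiveKit API.

def diff_context(old: dict[str, str], new: dict[str, str]) -> dict[str, str]:
    """Return only the entries that were added or modified."""
    return {k: v for k, v in new.items() if old.get(k) != v}

old_ctx = {"system": "You are a helpful agent.", "fact:city": "Berlin"}
new_ctx = {"system": "You are a helpful agent.", "fact:city": "Munich",
           "fact:weather": "sunny"}

# Only the changed or new entries would need to be serialized and sent.
delta = diff_context(old_ctx, new_ctx)
```

Here delta contains only the "fact:city" and "fact:weather" entries, keeping each update small.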
So, use update_instructions when you want to change system-level instructions, and update_chat_ctx for updates to the conversation history or context. If you want me to walk you through it or have any questions, just let me know—I'm here to help!

mysterious-agent-76276
08/10/2025, 8:59 AM
Can I call update_instructions in my tts_node? If so, would it cause another LLM call, or would it apply on the next conversation?
tall-airline-16756
08/10/2025, 8:59 AM
Yes, you can use update_instructions inside your tts_node to tweak the text before it gets turned into speech. Don't worry, this won't trigger another LLM call—it just updates the text for the current turn.
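That "current turn only" text tweak can be modeled as a streaming transform over the incoming text chunks. This self-contained asyncio sketch uses a plain async generator in place of a real tts_node; nothing in it is LiveKit API:

```python
import asyncio
from typing import AsyncIterable

async def tweak_text(text: AsyncIterable[str]) -> AsyncIterable[str]:
    # Transform each chunk before it would reach the TTS engine. This only
    # affects the current turn's speech; it never triggers another LLM call.
    async for chunk in text:
        yield chunk.replace("LLM", "language model")

async def collect() -> list[str]:
    # Stand-in for the streaming LLM output a tts_node would receive.
    async def source():
        for part in ["The LLM ", "responded."]:
            yield part
    return [c async for c in tweak_text(source())]

chunks = asyncio.run(collect())
```

A real tts_node would forward the transformed stream to the default TTS step instead of collecting it into a list.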
If you'd like those updated instructions to stick around for future conversations, you'll want to update the agent's instructions outside of the tts_node. Feel free to ask if you want me to explain this more!

mysterious-agent-76276
08/10/2025, 9:00 AM
How do I get changes made in my tts_node to stick around for future conversations?
tall-airline-16756
08/10/2025, 9:01 AM
To update instructions from your tts_node and make sure those changes stick around for future chats, you can use the update_instructions method. Here's a quick example:
from typing import AsyncIterable
from livekit.agents import Agent, ModelSettings

class CustomAgent(Agent):
    async def tts_node(self, text: AsyncIterable[str], model_settings: ModelSettings):
        # Update the agent's instructions, then defer to the default TTS behavior.
        await self.update_instructions("new instructions")
        return Agent.default.tts_node(self, text, model_settings)
This will update the agent’s instructions and, if you’re running in realtime mode, it’ll also refresh the instructions for the current session. Just a heads-up though—whether these updates stick across different sessions or future conversations depends on how your agent’s state is handled. If you want, I’m here to help with more details!
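Since persistence across sessions depends on your own state handling, one common approach is to keep the instructions in an external store keyed by user. The in-memory dict below stands in for whatever store (Redis, a database, ...) your deployment actually uses; none of it is LiveKit API:

```python
# Sketch: persist updated instructions outside the agent so a future
# session can reload them. The dict is a stand-in for a real store.

instruction_store: dict[str, str] = {}

def save_instructions(user_id: str, instructions: str) -> None:
    instruction_store[user_id] = instructions

def load_instructions(user_id: str, default: str) -> str:
    return instruction_store.get(user_id, default)

# After calling update_instructions, also write the text to the store:
save_instructions("user-123", "Always answer in French.")

# A new session for the same user starts from the saved text:
restored = load_instructions("user-123", default="You are a helpful agent.")
```

On session start you would pass the restored text as the agent's initial instructions.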
Sources: examples/voice_agents/llamaindex-rag/query_engine.py

mysterious-agent-76276
08/10/2025, 9:12 AM
Does await self.update_instructions("new instructions") completely override the original instructions, or does it append to them?
tall-airline-16756
08/10/2025, 9:13 AM
When you call await self.update_instructions("new instructions"), it actually replaces the existing instructions instead of adding to them. Here's the scoop:
• If there's already an instruction message in the chat, it updates that message with your new instructions.
• If it can't find an instruction message and add_if_missing=True (which is the default), it creates a brand-new system message with your instructions right at the start of the chat.
So basically, it either swaps out the old instructions or adds a fresh system message, but it won’t tack on to what’s already there. If anything’s unclear or you want to dive deeper, just let me know—I’m here to help!
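The replace-or-insert behavior described above can be modeled in a few lines of plain Python. The list-of-dicts chat and this update_instructions stand-in mimic the described semantics only; they are not LiveKit's real ChatContext API:

```python
# Model of the described semantics: replace an existing system message,
# or insert one at the start of the chat when add_if_missing=True.

def update_instructions(chat: list[dict], instructions: str,
                        add_if_missing: bool = True) -> None:
    for msg in chat:
        if msg["role"] == "system":
            msg["content"] = instructions  # replaces, never appends
            return
    if add_if_missing:
        chat.insert(0, {"role": "system", "content": instructions})

# Existing system message: its content is swapped out entirely.
chat = [{"role": "system", "content": "old rules"},
        {"role": "user", "content": "hi"}]
update_instructions(chat, "new rules")

# No system message yet: a fresh one is inserted at the start.
empty_chat = [{"role": "user", "content": "hi"}]
update_instructions(empty_chat, "new rules")
```

Either way the old instruction text is gone afterwards; nothing is concatenated.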
Sources: examples/voice_agents/restaurant_agent.py

mysterious-agent-76276
08/10/2025, 9:25 AM
Does the tts_node get called on every system message?
tall-airline-16756
08/10/2025, 9:25 AM
The tts_node runs whenever the agent produces a response that needs to be spoken out loud, not once per system message. It's a key part of the STT-LLM-TTS pipeline: whenever the agent generates text that should be voiced, the tts_node steps in to handle the text-to-speech conversion.
Feel free to ask if you want me to explain anything more or if you need extra details—I’m here to help!
Sources: livekit-agents/livekit/agents/telemetry/trace_types.py | Pipeline nodes and hooks | LiveKit Docs