lemon-match-18357
06/26/2025, 12:58 AM
tall-airline-16756
06/26/2025, 12:58 AM
tall-airline-16756
06/26/2025, 12:58 AM
lemon-match-18357
06/26/2025, 12:59 AM
tall-airline-16756
06/26/2025, 1:00 AM
• You can set your system prompt by passing an initial chat context through the chat_ctx parameter when you're initializing a VoicePipelineAgent. Here's a quick example that might help:

from livekit.agents import llm
from livekit.agents.pipeline import VoicePipelineAgent
from livekit.plugins import cartesia, deepgram, google, silero

# seed the conversation with a system message
initial_ctx = llm.ChatContext().append(
    role="system",
    text="You are a helpful voice AI assistant.",
)

agent = VoicePipelineAgent(
    vad=silero.VAD.load(),
    stt=deepgram.STT(),
    llm=google.LLM(),
    tts=cartesia.TTS(),
    chat_ctx=initial_ctx,  # the agent starts from this context
)
• Wanna modify your chat context before it hits the Google LLM? The before_llm_cb callback is your friend! Check out this neat trick for keeping your context tidy by trimming it down to the last 15 messages:

async def truncate_context(assistant: VoicePipelineAgent, chat_ctx: llm.ChatContext):
    # keep only the 15 most recent messages before each LLM call
    if len(chat_ctx.messages) > 15:
        chat_ctx.messages = chat_ctx.messages[-15:]

agent = VoicePipelineAgent(
    ...
    llm=google.LLM(),
    before_llm_cb=truncate_context,
)
• Heads up - in version 1.0 of the framework, before_llm_cb got an upgrade: it's been replaced by the llm_node hook on your Agent, which gives you even more flexibility for tweaking your chat context (see the sketch just below).
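If you're on 1.0, here's a rough sketch of how that same trimming could look as an llm_node override. MyAgent, the 15-item limit, and the items-slicing approach are just illustrative assumptions - double-check the 1.0 ChatContext API for the exact helpers:

from livekit.agents import Agent, llm

class MyAgent(Agent):
    # sketch only: in 1.0 you override llm_node on your Agent subclass to
    # adjust the chat context right before the LLM call, then hand off to
    # the default implementation
    async def llm_node(self, chat_ctx: llm.ChatContext, tools, model_settings):
        # assumption: chat_ctx.items holds the conversation history in 1.0
        if len(chat_ctx.items) > 15:
            chat_ctx.items[:] = chat_ctx.items[-15:]
        async for chunk in Agent.default.llm_node(self, chat_ctx, tools, model_settings):
            yield chunk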
If you're already doing everything right but still not seeing the results you expect, here are a few things to double-check:
• Make sure your system message is formatted just right when you're adding it to chat_ctx.
• Take a peek at your before_llm_cb or llm_node (if you're on version 1.0) to make sure it's not doing anything sneaky with your context - the quick logging trick below can help here.
• Verify that the Google LLM is playing nice with your chat_ctx parameter.
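For that middle check, one quick trick on 0.x is a throwaway callback that just prints whatever is about to go to the LLM - debug_ctx is a hypothetical name, not part of the framework, and you'd wire it in via before_llm_cb exactly like truncate_context above:

async def debug_ctx(assistant: VoicePipelineAgent, chat_ctx: llm.ChatContext):
    # print every message the LLM is about to receive, system prompt included
    for msg in chat_ctx.messages:
        print(msg.role, msg.content)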
I'm totally here if you need any more help or want to chat through this! Don't hesitate to reach out. 😊
Sources: Agents 0.x migration guide | LiveKit Docs | VoicePipelineAgent | LiveKit Docs
lemon-match-18357
06/26/2025, 1:07 AM
tall-airline-16756
06/26/2025, 1:07 AM