# ask-ai
l
I switched from OpenAI to Google models in my voice agent and now it seems like it's not getting the system message.
t
Thanks for reaching out, Tigran. I'm an AI assistant and I'm here to help. Give me a moment while I research this topic.
Thank you for reaching out! Could you provide more details about the issue you're experiencing? For example:
• Are you seeing any specific error messages or unexpected behavior?
• How are you passing the system message to the Google model in your voice agent?
• Did you make any other changes besides switching from OpenAI to Google models?
This information will help us better understand and address the problem.
Sources: GitHub::livekit::agents | examples/voice_agents/llamaindex-rag/retrieval.py
l
No error; we pass the system prompt by appending to _chat_ctx_ in before_llm_callback. No other changes.
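For reference, the setup described above presumably looks something like the following. This is a minimal sketch against the 0.x `VoicePipelineAgent` API; the callback name and the prompt text are placeholders, not the user's actual code:

```python
from livekit.agents import llm
from livekit.agents.pipeline import VoicePipelineAgent
from livekit.plugins import cartesia, deepgram, google, silero

SYSTEM_PROMPT = "You are a helpful voice AI assistant."  # placeholder

async def add_system_prompt(agent: VoicePipelineAgent, chat_ctx: llm.ChatContext):
    # Append the system message to the chat context right before
    # each LLM inference, instead of setting it at agent creation.
    chat_ctx.append(role="system", text=SYSTEM_PROMPT)

agent = VoicePipelineAgent(
    vad=silero.VAD.load(),
    stt=deepgram.STT(),
    llm=google.LLM(),
    tts=cartesia.TTS(),
    before_llm_cb=add_system_prompt,
)
```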
t
Thanks for the details! System prompts and chat contexts can be tricky, so let me break down how they work with Google models in LiveKit's framework:
• System prompts are set by appending them to the `chat_ctx` parameter when you're initializing a `VoicePipelineAgent`. Here's a quick example:
```python
from livekit.agents import llm
from livekit.agents.pipeline import VoicePipelineAgent
from livekit.plugins import cartesia, deepgram, google, silero

# Seed the chat context with the system message at initialization
initial_ctx = llm.ChatContext().append(
    role="system",
    text="You are a helpful voice AI assistant.",
)

agent = VoicePipelineAgent(
    vad=silero.VAD.load(),
    stt=deepgram.STT(),
    llm=google.LLM(),
    tts=cartesia.TTS(),
    chat_ctx=initial_ctx,
)
```
• Want to modify your chat context before it hits the Google LLM? The `before_llm_cb` callback is the hook for that. Here's a pattern for keeping your context tidy by trimming it to the last 15 messages:
```python
async def truncate_context(assistant: VoicePipelineAgent, chat_ctx: llm.ChatContext):
    # Keep only the 15 most recent messages. Note that if the system
    # message sits at the start of the context, naive truncation like
    # this will eventually drop it.
    if len(chat_ctx.messages) > 15:
        chat_ctx.messages = chat_ctx.messages[-15:]

agent = VoicePipelineAgent(
    ...,  # vad/stt/tts as in the example above
    llm=google.LLM(),
    before_llm_cb=truncate_context,
)
```
• Heads up: in version 1.0 of the framework, `before_llm_cb` was replaced by `llm_node`, which gives you even more flexibility for modifying your chat context (a sketch of an `llm_node` override follows this message).
If you're already doing all of this but still not seeing the results you expect, here are a few things to double-check:
• Make sure your system message is formatted correctly when you add it to `chat_ctx`.
• Review your `before_llm_cb` or `llm_node` (if you're on version 1.0) to make sure it isn't altering your context in unexpected ways.
• Verify that the Google LLM is actually receiving your `chat_ctx` parameter.
I'm here if you need any more help! 😊
Sources: Agents 0.x migration guide | LiveKit Docs | VoicePipelineAgent | LiveKit Docs
l
What is AFC? Why does it say "AFC enabled"?
t
Hey there! I totally get that you're looking for more clarity about this "AFC enabled" message. Would you mind sharing a bit more about where you're seeing it?
• Are you spotting it in a specific area of the LiveKit platform, or maybe in some logs?
• Is it popping up when you're working with a particular feature?
The more details you can give me, the better I can help you figure this out!
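For context: in logs from Google's `google-genai` Python SDK (which LiveKit's Google plugin typically uses under the hood), "AFC" stands for Automatic Function Calling, and lines like "AFC is enabled with max remote calls: 10" come from that SDK rather than from LiveKit itself. As a hedged sketch, assuming the current `google-genai` API (the model name here is just an example), AFC can be explicitly disabled like this:

```python
from google import genai
from google.genai import types

client = genai.Client()  # assumes GOOGLE_API_KEY is set in the environment

response = client.models.generate_content(
    model="gemini-2.0-flash",  # example model name
    contents="Hello!",
    config=types.GenerateContentConfig(
        # Explicitly disable automatic function calling (AFC); the SDK
        # otherwise enables it by default and logs the "AFC is enabled"
        # message.
        automatic_function_calling=types.AutomaticFunctionCallingConfig(
            disable=True,
        ),
    ),
)
print(response.text)
```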