astonishing-gpu-67869
09/10/2025, 9:20 AM
The `on_user_turn_completed` node or the `llm_node`? Can I do RAG in both?

tall-airline-16756
09/10/2025, 9:21 AM
The `on_user_turn_completed` node is actually triggered before the `llm_node`. This is great because it gives you a chance to tweak the chat context or user message before the LLM starts processing it.
You can use Retrieval-Augmented Generation (RAG) in both of these nodes, depending on what fits your needs best:
• `on_user_turn_completed`: This is perfect for adding context before the LLM kicks in. For example, you can pull in relevant info (say, via vector search) and add it to the chat context early on. Because retrieval happens before the LLM call, the model sees the extra context from the start and you avoid the latency of an external tool call mid-response.
• `llm_node`: This is where you can handle all the LLM-specific logic, like working with custom LLM providers or reshaping the context in more advanced ways. You can also do RAG here by adjusting the chat context right before the LLM generates its response (see the sketch below).
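To make this concrete, here's a minimal sketch of both options. It's illustrative only: `my_vector_store` is a hypothetical retrieval helper standing in for your own retrieval layer, and exact signatures can vary by SDK version, so double-check against the current docs:

```python
from livekit.agents import Agent, ChatContext, ModelSettings, llm


class MyAgent(Agent):
    # Option 1: inject retrieved context as soon as the user's turn ends,
    # before the LLM node runs.
    async def on_user_turn_completed(
        self, turn_ctx: ChatContext, new_message: llm.ChatMessage
    ) -> None:
        # `my_vector_store` is a hypothetical retrieval helper (vector search, etc.).
        results = await my_vector_store.search(new_message.text_content)
        turn_ctx.add_message(role="assistant", content=f"Relevant context:\n{results}")

    # Option 2: reshape the chat context inside the LLM step itself,
    # then delegate to the default implementation.
    async def llm_node(
        self,
        chat_ctx: ChatContext,
        tools: list[llm.FunctionTool],
        model_settings: ModelSettings,
    ):
        # ...adjust chat_ctx here (e.g. append retrieved documents)...
        async for chunk in Agent.default.llm_node(self, chat_ctx, tools, model_settings):
            yield chunk
```

The first option front-loads retrieval so the LLM already has the extra context when it starts generating; the second keeps retrieval alongside any other LLM-specific customization you need in the same place.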
If you’d like to see some concrete examples and get more detailed guidance, I highly recommend checking out the LiveKit RAG examples. They’re super helpful!
Sources: Pipeline nodes and hooks | LiveKit Docs | External data and RAG | LiveKit Docs | Agents v0.x migration guide - Node.js | LiveKit Docs | agents/src/voice/agent.ts | rag/rag_handler.py

damp-waiter-77572
09/10/2025, 9:34 AM

tall-airline-16756
09/10/2025, 9:34 AM
To integrate the `on_user_turn_completed` and `llm_node` functions into your code, here's a friendly guide on where to place them:
• `on_user_turn_completed`: You'll want to add this function to either the `IntroAgent` or `StoryAgent` class, depending on where you'd like to tweak the chat context or user message before the LLM processes it. For instance, defining it as an asynchronous method within the `IntroAgent` class is a great way to handle pre-LLM context injection.
• `llm_node`: This one fits nicely into the `StoryAgent` class, especially if you're aiming to centralize all LLM-related logic, like custom LLM providers or advanced context restructuring. It works well as an asynchronous method that adjusts the chat context before the LLM crafts its response (see the skeleton below).
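As a rough skeleton (assuming both classes extend `Agent`, with method bodies elided), the placement could look like this:

```python
from livekit.agents import Agent, ChatContext, ModelSettings, llm


class IntroAgent(Agent):
    # Pre-LLM context injection lives with the agent that owns the turn.
    async def on_user_turn_completed(
        self, turn_ctx: ChatContext, new_message: llm.ChatMessage
    ) -> None:
        ...  # e.g. look up and append relevant context here


class StoryAgent(Agent):
    # Centralized LLM-specific logic: custom providers, context restructuring, etc.
    async def llm_node(
        self,
        chat_ctx: ChatContext,
        tools: list[llm.FunctionTool],
        model_settings: ModelSettings,
    ):
        # ...restructure chat_ctx here, then fall through to the default...
        async for chunk in Agent.default.llm_node(self, chat_ctx, tools, model_settings):
            yield chunk
```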
If you'd like, I'd be happy to provide specific code snippets to show exactly how you can integrate these functions into your existing classes. Just let me know!

damp-waiter-77572
09/10/2025, 9:42 AM

tall-airline-16756
09/10/2025, 9:43 AM
Yes, it's possible to define the `on_user_turn_completed` and `llm_node` functions outside of the classes and call them when needed. That said, there are a couple of important things to keep in mind:
• `on_user_turn_completed`: This function usually ties into the lifecycle of an agent (like `IntroAgent` or `StoryAgent`). If you decide to define it externally, you'll want to make sure you pass the necessary context (such as `ChatContext`, `RunContext`, or `new_message`) explicitly when you call it.
• `llm_node`: This one is designed to work within the LLM processing flow. If it's defined outside, just ensure it has access to all the required parameters: `chat_ctx`, `tools`, and `model_settings`.
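If you do go the external route, a minimal sketch might look like the following. The names `inject_rag_context` and `my_vector_store` are hypothetical; the point is simply that every piece of context the helper needs is passed in explicitly:

```python
from livekit.agents import Agent, ChatContext, llm


# Standalone helper: it receives everything it needs as parameters.
async def inject_rag_context(turn_ctx: ChatContext, new_message: llm.ChatMessage) -> None:
    # `my_vector_store` is a hypothetical retrieval layer.
    results = await my_vector_store.search(new_message.text_content)
    turn_ctx.add_message(role="assistant", content=f"Relevant context:\n{results}")


class IntroAgent(Agent):
    async def on_user_turn_completed(
        self, turn_ctx: ChatContext, new_message: llm.ChatMessage
    ) -> None:
        # The agent hook stays thin and just delegates to the shared helper.
        await inject_rag_context(turn_ctx, new_message)
```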
As sketched above, a helpful pattern is to define these as standalone asynchronous functions and then invoke them from the appropriate methods of your agents. Would you like me to share a fuller example of how to set this up?

damp-waiter-77572
09/10/2025, 9:47 AM

tall-airline-16756
09/10/2025, 9:47 AM
In most cases, defining the `on_user_turn_completed` and `llm_node` functions within the relevant agent classes (like `IntroAgent` or `StoryAgent`) tends to work well. Here's why:
• Encapsulation: Keeping these functions inside the classes means they have direct access to the agent’s context, state, and lifecycle, so you won’t have to pass around extra parameters.
• Readability: This approach makes your code clearer and easier to maintain since the logic stays closely connected to the agent’s behavior.
• Flexibility: It also lets you override or customize these functions for specific agents without impacting others, which is super handy.
That said, if you find yourself needing to reuse these functions across several agents, defining them externally as utility functions and then calling them from within your agent methods is a smart way to keep things reusable and maintainable.
If you'd like, I'd be happy to walk you through detailed examples of both approaches. Just let me know!

damp-waiter-77572
09/10/2025, 9:49 AM

tall-airline-16756
09/10/2025, 9:49 AM
The recommended approach is to define the `on_user_turn_completed` and `llm_node` functions inside the relevant agent classes, like `IntroAgent` or `StoryAgent`. This way, you get a few great benefits:
• Direct Access: These functions can easily tap into the agent's context, state, and lifecycle without needing extra parameters, which keeps things smooth.
• Encapsulation: Keeping the logic within the specific agent helps your code stay modular and much easier to maintain over time.
• Customization: You have the flexibility to tailor how these functions behave for each agent, perfectly fitting your needs.
If you'd like, I’d be happy to walk you through an example of how to define and use these functions right within the agent classes. Just let me know!