# ask-ai
p
I seem to get hallucination in the conversation. E.g., at the start of the conversation the assistant has already confirmed the identity of the customer_name, but midway through the conversation, after the verification and account-status part, the assistant asked the human again if they are speaking with customer_name. It's having state memory loss.
t
Thanks for reaching out, Bryner. I'm an AI assistant and I'm here to help. Give me a moment while I research this topic.
To help avoid repeated questions or the feeling that we've forgotten what we talked about earlier in your support chats, here are some handy strategies we use:

- **Context Preservation:** We make sure the conversation history travels along with your chat using the `chat_ctx` parameter. This way, if you get connected to a new agent, they can easily catch up on what you've already shared.
- **Model Context Protocol (MCP):** MCP helps us keep track of the conversation across different sessions, so you don't have to repeat yourself and the chat feels continuous.
- **Interruption Handling:** We're on the lookout for any interruptions and handle them smoothly to keep the conversation flowing naturally.
- **Inactivity Detection:** We check if you're still there, so we don't repeat questions unnecessarily if you've stepped away for a bit.

Using these approaches, we aim to make your support experience as smooth and hassle-free as possible.

Sources: Workflows | LiveKit Docs | Model Context Protocol Integration | livekit/livekit_composite | DeepWiki | Turn detection and interruptions | LiveKit Docs | Voice Assistants | livekit/livekit_composite | DeepWiki
p
give an example
t
Here's a friendly example of how to keep track of important details during a customer support chat.

**Scenario:** A customer reaches out about an issue with their account. Later on, they bring up something related.

Without keeping context:

- Agent: "Could you please confirm your account name again?"
- Customer: "I already told you that earlier."

With context kept in mind:

- Agent: "Got it, you mentioned earlier that your account name is 'customer_name'. Let's keep going and get this sorted out for you."

By using the `chat_ctx` parameter or Model Context Protocol (MCP), the agent remembers what the customer shared, so they don't have to repeat themselves, making the whole experience smoother and more pleasant.
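In code, the "context kept" version could look something like this. It's a minimal sketch assuming the `ChatContext` API from livekit-agents v1.x; the message contents are purely illustrative:

```python
from livekit.agents import ChatContext

# Pre-seed the context with what the customer already confirmed,
# so later turns don't ask for it again.
chat_ctx = ChatContext.empty()
chat_ctx.add_message(role="user", content="My account name is customer_name.")
chat_ctx.add_message(
    role="assistant",
    content="Thanks, I've verified the account for customer_name.",
)

# Any agent constructed with chat_ctx=chat_ctx starts with this history.
```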
p
Isn't `llm = openai.LLM(model=llm_model, temperature=llm_temperature)` already the default that uses `chat_ctx`? I use the OpenAI LLM.
t
Just a heads-up: OpenAI's LLM doesn't automatically keep track of your conversation history using `chat_ctx`. You'll want to set it up explicitly when you're initializing an agent or task, so your chat stays connected and makes sense. Here's a quick example:
```python
from livekit.agents import Agent, ChatContext

class TechnicalSupportAgent(Agent):
    def __init__(self, chat_ctx: ChatContext):
        # Hand the accumulated history to the base Agent so this agent
        # picks up where the previous one left off.
        super().__init__(
            instructions="You are a technical support specialist.",
            chat_ctx=chat_ctx,
        )
```
So, if you're working with OpenAI's LLM, just make sure to pass `chat_ctx` during initialization. That way, your conversations will flow smoothly and remember what you've talked about before.

Sources: Workflows | LiveKit Docs
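For instance, a handoff that carries the history forward might look like this. It's a hypothetical sketch: it assumes the `function_tool` decorator, the handoff-by-returning-an-agent pattern, and the `Agent.chat_ctx` property from livekit-agents v1.x, and `FrontDeskAgent` is a made-up name:

```python
from livekit.agents import Agent, function_tool

class FrontDeskAgent(Agent):  # hypothetical first agent in the flow
    @function_tool
    async def transfer_to_support(self):
        """Called when the customer needs technical support."""
        # Returning a new agent from a tool performs the handoff;
        # passing self.chat_ctx carries the full history along, so the
        # next agent doesn't re-ask anything.
        return TechnicalSupportAgent(chat_ctx=self.chat_ctx)
```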
p
I'm not doing a handoff; I'm doing only one session for the agent.
t
Hey there! If you're working within a single session for the agent and not doing any handoffs, the context should naturally stick around throughout the session without needing to explicitly use `chat_ctx`. But if you're seeing some memory hiccups or the same questions popping up again during the session, it might be related to how the session is managed or how the LLM is set up. Here are a few friendly tips to keep the context flowing smoothly in a single session:

- Make sure the conversation history is being passed correctly to the LLM with every interaction.
- Double-check the LLM's settings, like the `temperature` or other parameters, to see if they match what you're aiming for.
- Confirm that the input to the LLM includes the full conversation history so far; that's key for keeping context intact.

If it's still acting up, don't worry, it might just need a bit more digging into your setup. You've got this!
p
How do I ensure the conversation history is being passed correctly, and confirm the input to the LLM includes the full conversation history?
t
Hey there! To make sure the conversation history is passed along correctly and included in the input to the LLM, here's a simple way to do it:

1. **Initialize a ChatContext:** Start by creating a `ChatContext` object to keep track of the conversation history.
2. **Add messages to the context:** Use the `add_message` method to add past messages, making sure to specify who said what (like "user" or "assistant") and the content.
3. **Pass the context to the agent:** When you kick off a new agent session, just pass your prepared `ChatContext` to the agent's constructor via the `chat_ctx` parameter.

Here's a quick example:
```python
from livekit.agents import Agent, ChatContext

# Initialize an empty ChatContext
chat_ctx = ChatContext.empty()

# Add prior messages to the context
chat_ctx.add_message(role="user", content="Hello, I need help.")
chat_ctx.add_message(role="assistant", content="How can I assist you today?")

# Create the agent with the pre-seeded context
agent = Agent(
    instructions="Your instructions here",
    llm=...,  # e.g. openai.LLM(...)
    chat_ctx=chat_ctx,
)

# Start the session with this agent ("session" is an AgentSession
# created earlier, inside an async entrypoint)
await session.start(agent=agent)
```
By following these steps, you’ll make sure the LLM gets the full conversation history, so it can respond in a way that really makes sense. If you have any questions or need a hand, just let me know!
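To go further and confirm at runtime exactly what reaches the LLM each turn, one option is to log the context from inside the pipeline. This is a minimal sketch assuming the overridable `llm_node` hook from livekit-agents v1.x; the `DebuggableAgent` class and logger name are made up for illustration:

```python
import logging

from livekit.agents import Agent, ChatContext, FunctionTool, ModelSettings

logger = logging.getLogger("context-debug")  # hypothetical logger name

class DebuggableAgent(Agent):  # hypothetical agent for illustration
    async def llm_node(
        self,
        chat_ctx: ChatContext,
        tools: list[FunctionTool],
        model_settings: ModelSettings,
    ):
        # Log every message the LLM is about to receive this turn,
        # then fall through to the default LLM step.
        for item in chat_ctx.items:
            if item.type == "message":
                logger.info("LLM input [%s]: %s", item.role, item.text_content)
        return Agent.default.llm_node(self, chat_ctx, tools, model_settings)
```

If the log shows the full history on every turn, the context is reaching the model, and any "forgetting" is an LLM behavior issue rather than a plumbing one.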
p
Is it guaranteed that, by default, all chat history will be passed to the LLM every turn?
t
Just a heads-up: OpenAI's LLM doesn't automatically remember all the chat history with each turn. If you want it to keep track of the conversation, you'll need to pass the previous chat using the `chat_ctx` parameter in the `Agent` constructor. Otherwise, it treats every turn like a brand new chat.

Sources: Workflows | LiveKit Docs | Testing and evaluation | LiveKit Docs
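If you want to double-check what the session itself has accumulated, you can also dump its history at runtime. A small sketch, assuming the `AgentSession.history` property and its `to_dict()` serializer from livekit-agents v1.x:

```python
import json

# "session" is the running AgentSession. If something the LLM seems to
# have "forgotten" shows up here, the history itself is intact and the
# issue lies in how it's delivered to the model.
print(json.dumps(session.history.to_dict(), indent=2))
```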