polite-dog-12152
08/29/2025, 5:30 AM
tall-airline-16756
08/29/2025, 5:30 AM
tall-airline-16756
08/29/2025, 5:31 AM
The KeyError: 'question' pops up because the LangGraph workflow is trying to read a 'question' key from its state, but that key was never set. This usually means you just need to make sure the 'question' key is part of the initial state you pass into the LangGraph workflow.
Here’s a simple way to fix it:
1. Make sure the 'question' key is part of your LangGraph State definition.
2. Inside your entrypoint function, pull the 'question' out of ctx.job.metadata. Using .get() is a handy trick here, since it lets you set a default if 'question' isn’t there.
3. Before you hand off to the LangGraph workflow, initialize the state dictionary with the 'question' you grabbed and any other keys you need. There’s a sketch of all three steps below.
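A minimal sketch of those three steps, assuming your compiled graph is named graph; the State fields and the JSON-in-metadata format are illustrative assumptions, not requirements:

import json
from typing import TypedDict

from livekit.agents import JobContext

class State(TypedDict):
    question: str  # the key the workflow reads

async def entrypoint(ctx: JobContext):
    # ctx.job.metadata is a plain string; this sketch assumes it carries JSON
    metadata = json.loads(ctx.job.metadata or "{}")
    # .get() supplies a fallback so a missing 'question' can't raise KeyError
    initial_state: State = {"question": metadata.get("question", "")}
    # hand the fully initialized state to your compiled LangGraph workflow
    result = await graph.ainvoke(initial_state)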
If you want me to walk you through any of these steps or if something’s still tricky, just give me a shout—I’m here to help!
Sources: livekit-plugins/livekit-plugins-langchain/livekit/plugins/langchain/__init_... | LangChain integration guide | LiveKit Docs | avatars/tavus/tavus.py
polite-dog-12152
08/29/2025, 5:33 AM
tall-airline-16756
08/29/2025, 5:33 AM
LLMAdapter takes care of everything behind the scenes: it turns LiveKit chat messages into a dictionary with a "messages" key that holds a list of message objects. Here’s a little example to make it clearer:
from langchain_core.messages import AIMessage, HumanMessage, SystemMessage

state = {
    "messages": [
        SystemMessage(content="You are a helpful assistant.", id="sys_1"),
        HumanMessage(content="How does photosynthesis work?", id="usr_1"),
        AIMessage(content="Photosynthesis is the process by which plants convert light energy...", id="asst_1"),
    ]
}
Here’s what the LLMAdapter does automatically for you:
• It converts system and developer role messages into SystemMessage
• It converts user role messages into HumanMessage
• It converts assistant role messages into AIMessage
To get this rolling, just wrap your LangGraph workflow with the LLMAdapter when you create an AgentSession, as in the sketch below. If anything’s unclear or you want some more examples, just give me a shout. I’m here to help!
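A minimal sketch of that wiring, assuming graph is your compiled LangGraph workflow:

from livekit.agents import AgentSession
from livekit.plugins import langchain

session = AgentSession(
    # the adapter converts the chat context into {"messages": [...]} for the graph
    llm=langchain.LLMAdapter(graph),
)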
Sources: LangChain integration guide | LiveKit Docs | examples/voice_agents/llamaindex-rag/chat_engine.py
polite-dog-12152
08/29/2025, 5:37 AM
tall-airline-16756
08/29/2025, 5:37 AM
polite-dog-12152
08/29/2025, 5:39 AM
tall-airline-16756
08/29/2025, 5:40 AM
1. Define State: Declare your graph State with a messages key annotated with add_messages to keep track of the conversation history. Here’s a quick example:
from typing import Annotated, TypedDict
from langchain_core.messages import BaseMessage
from langgraph.graph.message import add_messages

class State(TypedDict):
    messages: Annotated[list[BaseMessage], add_messages]
2. Use LLMAdapter: Wrap your compiled graph with LLMAdapter so it handles input conversion for you automatically. Check this out:
from livekit.agents import Agent
from livekit.plugins import langchain

agent = Agent(
    instructions="",
    llm=langchain.LLMAdapter(graph),  # graph is your compiled StateGraph
)
3. Initialize State: Make sure LLMAdapter initializes the state with a messages key. This avoids a KeyError when nodes try to access state["messages"].
4. Handle User Input: Use session.generate_reply(user_input="...") to feed the user’s input into your graph.
5. Access State Safely: Inside your nodes, you can safely read state["messages"], since the adapter makes sure it’s always there. There’s a node sketch right after this list.
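A minimal node and graph sketch under those assumptions; the chatbot node and its echo reply are illustrative only:

from langchain_core.messages import AIMessage
from langgraph.graph import END, START, StateGraph

def chatbot(state: State):
    # safe lookup: the adapter always populates state["messages"]
    last = state["messages"][-1]
    # add_messages appends whatever we return to the history
    return {"messages": [AIMessage(content=f"You said: {last.content}")]}

builder = StateGraph(State)
builder.add_node("chatbot", chatbot)
builder.add_edge(START, "chatbot")
builder.add_edge("chatbot", END)
graph = builder.compile()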
Following these steps will help your state graph handle user input without a hitch. If you want me to walk you through any part or share more examples, just let me know—I’m happy to help!
Sources: tests/test_agent_session.py | LangChain integration guide | LiveKit Docs
polite-dog-12152
08/30/2025, 7:09 AM
tall-airline-16756
08/30/2025, 7:10 AM
1. Update Options: Check that the update_options method is properly updating the STT and TTS components. It should dynamically adjust settings like language, voice, and speech engine for all active streams.
2. Language Switching Logic: Make sure the _switch_language method is handling language-specific settings correctly, for instance applying Hindi-specific configurations only when language_code is "hi-IN".
3. Function Tools: The @function_tool decorators for switching languages seem spot on. These let the LLM trigger language changes on the fly; there’s a sketch after this list.
4. Real-Time Updates: If you’re working with real-time STT (like OpenAI or Deepgram), just confirm that the new language settings kick in during ongoing sessions without needing a restart.
5. Error Handling: Your use of CustomException for error handling is solid! Just be sure all exceptions are logged with enough detail to make debugging easier.
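A hedged sketch of such a tool; switch_to_hindi and the shared stt instance are illustrative, and the exact update_options keyword arguments depend on which STT plugin you use:

from livekit.agents import function_tool
from livekit.plugins import deepgram

stt = deepgram.STT(language="en-US")  # the same instance you pass to AgentSession

@function_tool
async def switch_to_hindi() -> str:
    """Switch speech recognition to Hindi."""
    # pushes the new language to active streams without a restart
    stt.update_options(language="hi-IN")
    return "Switched to Hindi."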
If you’re running into any specific hiccups—like parameters not updating or errors while switching languages—feel free to share! I’m here to help with some targeted troubleshooting.
Sources: README.md | livekit-plugins/livekit-plugins-baseten/livekit/plugins/baseten/stt.py | Gladia integration guide | LiveKit Docs
polite-dog-12152
08/30/2025, 7:11 AM
tall-airline-16756
08/30/2025, 7:12 AM
1. Console Testing: Run your agent locally in console mode:
python myagent.py console
This is a great way to check the basics without needing to connect to a LiveKit server.
2. Integration Testing: Start your agent in development mode by running:
python myagent.py dev
Then hop into the LiveKit room using a client SDK or the Agents Playground to see how it interacts in a real scenario.
3. Automated Testing: You can write Python tests using the LiveKit testing framework. Here’s a simple example:
import pytest
from livekit.agents import AgentSession

@pytest.mark.asyncio
async def test_agent():
    async with AgentSession() as session:
        await session.start(MyAgent())
        result = await session.run(user_input="Hello")
        assert "Hello" in result.response
4. Agent-to-Agent Testing: You can have one agent test another by turning on AGENT participants in your room setup.
5. Load Testing: Use the LiveKit CLI to simulate multiple rooms and see how your setup performs:
lk perf agent-load-test --rooms 5 --agent-name test-agent --duration 5m
If you want a hand setting up any of these, just let me know—I’m here to help!
Sources: README.md | README.md | README.md | Vision Agent Quickstart | LiveKit Docs | Testing and evaluation | LiveKit Docs
polite-dog-12152
08/30/2025, 7:44 AM
tall-airline-16756
08/30/2025, 7:45 AM
1. Interactive Testing: Start your agent in development mode by running:
python myagent.py dev
Next, hop into the LiveKit room using a client SDK or the Agents Playground. Try asking the agent to switch languages (like saying "Switch to Spanish") and see if it confirms the change and replies in the new language.
2. Automated Testing: You can also write tests using the LiveKit testing framework. For example:
import pytest
from livekit.agents import AgentSession

@pytest.mark.asyncio
async def test_language_switch():
    async with AgentSession() as session:
        await session.start(MyAgent())
        result = await session.run(user_input="Switch to Spanish")
        assert "Spanish" in result.response
3. Agent-to-Agent Testing: Another neat trick is to have one agent simulate user interactions and check how the other agent responds. Just set up the room to include AGENT participants for this.
4. Performance Metrics: Don’t forget to enable metrics collection! It helps you keep an eye on things like latency, speech recognition accuracy, and how smooth the language switch feels for users. A sketch follows below.
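A minimal metrics sketch using the metrics_collected event; the handler name is illustrative:

from livekit.agents import AgentSession, metrics

session = AgentSession(...)  # your usual STT/LLM/TTS configuration

@session.on("metrics_collected")
def on_metrics(ev):
    # ev.metrics holds per-turn latency and usage figures
    metrics.log_metrics(ev.metrics)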
If you want a hand setting any of this up or writing specific tests, just let me know—I’m here to help!
Sources: livekit-plugins/livekit-plugins-gladia/README.md | Vision Agent Quickstart | LiveKit Docs | Gladia integration guide | LiveKit Docs | README.md | Testing and evaluation | LiveKit Docs