polite-dog-12152
07/26/2025, 9:07 AM

tall-airline-16756
07/26/2025, 9:08 AM

1. Install and Use the LangChain Plugin

pip install "livekit-plugins-langchain~=1.1"
• Use LangGraph workflows within an AgentSession by wrapping them with the `LLMAdapter`:
from langgraph.graph import StateGraph

from livekit.agents import AgentSession, Agent
from livekit.plugins import langchain

# Define your LangGraph workflow
def create_workflow():
    workflow = StateGraph(...)
    # Add your nodes and edges
    return workflow.compile()

# Use the workflow as an LLM
session = AgentSession(
    llm=langchain.LLMAdapter(
        graph=create_workflow()
    ),
    # ... stt, tts, vad, turn_detection, etc.
)
2. Complete LangGraph Agent Example

Here is a working example:
import logging
from typing import TypedDict

from langchain.chat_models import init_chat_model
from langchain_core.messages import BaseMessage
from langgraph.graph import START, StateGraph

from livekit.agents import Agent, AgentSession, JobContext
from livekit.plugins import deepgram, langchain, silero
from livekit.plugins.turn_detector.multilingual import MultilingualModel


class State(TypedDict):
    messages: list[BaseMessage]


def create_graph() -> StateGraph:
    openai_llm = init_chat_model(model="openai:gpt-4o")

    def chatbot_node(state: State):
        return {"messages": [openai_llm.invoke(state["messages"])]}

    builder = StateGraph(State)
    builder.add_node("chatbot", chatbot_node)
    builder.add_edge(START, "chatbot")
    return builder.compile()


async def entrypoint(ctx: JobContext):
    graph = create_graph()
    agent = Agent(
        instructions="",
        llm=langchain.LLMAdapter(graph),
    )
    session = AgentSession(
        vad=ctx.proc.userdata["vad"],
        stt=deepgram.STT(model="nova-3", language="multi"),
        tts=deepgram.TTS(),
        turn_detection=MultilingualModel(),
    )
    await session.start(agent=agent, room=ctx.room)
3. Adding RAG Functionality

To implement Agentic RAG, integrate RAG capabilities using function tools. For example:
import pickle

from livekit.agents import llm
from livekit.plugins import openai


class RAGHandler:
    def __init__(self, index_path, data_path):
        # AnnoyIndex is the vector-index helper from the RAG example code
        self._annoy_index = AnnoyIndex.load(index_path)
        with open(data_path, "rb") as f:
            self._paragraphs_by_uuid = pickle.load(f)

    def start(self, agent):
        @agent.fnc_ctx.ai_callable()
        async def enrich_with_rag(code: int):
            await self._handle_thinking(agent)
            await self._enrich_with_rag(agent, agent.chat_ctx)
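The handler above elides the retrieval step itself. As a rough, library-free sketch of what a lookup like `_enrich_with_rag` boils down to (the function names and dict layout here are illustrative, not LiveKit or Annoy API):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def retrieve_paragraph(query_embedding: list[float], paragraphs: list[dict]) -> str:
    """Return the stored paragraph whose embedding is closest to the query."""
    best = max(paragraphs, key=lambda p: cosine_similarity(p["embedding"], query_embedding))
    return best["text"]
```

In practice Annoy replaces the brute-force `max` with an approximate nearest-neighbor index, but the contract is the same: embed the user's question, fetch the closest stored paragraph, and hand that text to the LLM.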
4. Key Integration Points
• The LLMAdapter converts LiveKit chat context to LangChain messages, enabling seamless integration.
• Combine LangGraph workflows with RAG functionality by injecting relevant documents or using function tools.
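For the "injecting relevant documents" route, the essence is just prepending retrieved text to the turn before it reaches the LLM. A minimal sketch (the prompt wording is my own, not from the LiveKit docs):

```python
def inject_context(user_message: str, retrieved: list[str]) -> str:
    """Prepend retrieved paragraphs so the LLM can ground its answer in them."""
    context = "\n\n".join(f"[doc {i + 1}] {p}" for i, p in enumerate(retrieved))
    return (
        "Use the following context to answer.\n"
        f"{context}\n\n"
        f"Question: {user_message}"
    )
```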
Let me know if you need further clarification or assistance with specific parts of the integration.
Sources: LangChain integration guide | LiveKit Docs | examples/voice_agents/llamaindex-rag/query_engine.py | agent_extensions/agent_extensions/rag/handler.py | rag/rag_handler.py

polite-dog-12152
07/26/2025, 9:32 AM

tall-airline-16756
07/26/2025, 9:34 AM

1. Configure HTTP Client Timeouts

http_client=httpx.AsyncClient(
    timeout=httpx.Timeout(connect=15.0, read=5.0, write=5.0, pool=5.0),
    follow_redirects=True,
    limits=httpx.Limits(
        max_connections=50,
        max_keepalive_connections=50,
        keepalive_expiry=120,
    ),
)
2. Handle API Timeout Exceptions

No worries! We can catch those tricky timeout moments and handle them gracefully:
try:
    ...  # your connection / session initialization call goes here
except asyncio.TimeoutError as e:
    logger.error(f"Timeout during connection/initialization: {e}")
    raise APITimeoutError("Timeout connecting to or initializing session") from e
3. Implement Retry Logic

Sometimes things just need a second chance. Here's how we can give our system that opportunity:
try:
    ...  # the model call being retried goes here
except ModelTimeoutException as mte:
    logger.warning(f"Model timeout error: {mte}\nAttempting to recover...")
    await self._restart_session(mte)
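Beyond a single recovery attempt, a generic retry wrapper with exponential backoff often helps smooth over transient timeouts. A minimal sketch in plain asyncio (not a LiveKit API; the attempt count and delays are illustrative):

```python
import asyncio
import random

async def call_with_retries(make_call, max_attempts: int = 3, base_delay: float = 0.5):
    """Retry an async call on timeout, doubling the delay between attempts."""
    for attempt in range(1, max_attempts + 1):
        try:
            return await make_call()
        except asyncio.TimeoutError:
            if attempt == max_attempts:
                raise  # out of attempts: let the caller handle it
            # exponential backoff with a little jitter to avoid retry stampedes
            await asyncio.sleep(base_delay * 2 ** (attempt - 1) + random.uniform(0, 0.1))
```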
4. Provide User Feedback

Keep things friendly by letting users know what's happening:
import random

thinking_messages = [
    "Let me look that up...",
    "One moment while I check...",
    "I'll find that information for you...",
]

async def enrich_with_rag():
    await agent.say(random.choice(thinking_messages))
    # Perform RAG lookup...
5. Monitor and Optimize

- Set timeout values that make sense for your system
- Use connection pooling to keep things smooth
- Keep an eye on those logs to spot any patterns

6. Graceful Timeout Handling

Always have a backup plan for when things don't go quite right:
def _on_timeout() -> None:
    if not fut.done():
        fut.set_exception(
            llm.RealtimeError(
                "generate_reply timed out waiting for generation_created event."
            )
        )
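Tying the backup plan together: one simple pattern is to race the generation against a deadline and fall back to a canned reply instead of leaving the user hanging. A sketch in plain asyncio (the fallback wording and function names are just examples):

```python
import asyncio

async def generate_with_fallback(
    generate,
    timeout: float = 5.0,
    fallback: str = "Sorry, that took longer than expected.",
) -> str:
    """Await a generation coroutine with a deadline; return a fallback reply on timeout."""
    try:
        return await asyncio.wait_for(generate(), timeout)
    except asyncio.TimeoutError:
        return fallback
```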
I hope these tips help you smooth out those timeout bumps! Feel free to reach out if you need any more advice. We've got this! 😊

Sources: livekit-agents/livekit/agents/__init__.py | agent_extensions/agent_extensions/rag/handler.py | Egress & Media Processing | livekit/livekit_composite | DeepWiki

polite-dog-12152
07/26/2025, 9:49 AM

tall-airline-16756
07/26/2025, 9:50 AM

I know how confusing asyncio.exceptions.InvalidStateError can be when you're working with LiveKit. Let me break down what's happening and share some friendly solutions that should help smooth things out.

1. AudioSource Frame Capture Errors

Sometimes your audio frame queue can get a bit overwhelmed – no worries! An easy fix is to give it some extra breathing room when you're creating your `AudioSource`:
audio_source = rtc.AudioSource(
    sample_rate=sample_rate,
    num_channels=channels,
    queue_size_ms=2000,  # Think of this like giving your audio some extra personal space
)
By bumping up queue_size_ms, you're basically telling your system, "Hey, take your time processing these audio frames!" A larger value like 2000ms gives you more buffer and helps prevent those pesky capture failures.

2. Future State Management Errors

Another common hiccup happens when you're trying to set a result on a future that's already done its job. No stress – here's a super clean way to handle it:
import contextlib
import asyncio

with contextlib.suppress(asyncio.InvalidStateError):
    future.set_result(value)
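To see why the suppression matters: a future can only be resolved once, so a small guard like this (an illustrative helper, not part of LiveKit) makes repeated completion attempts harmless:

```python
import asyncio
import contextlib

def safe_set_result(fut: asyncio.Future, value) -> bool:
    """Resolve a future if still pending; report whether this call set the result."""
    if fut.done():
        return False
    with contextlib.suppress(asyncio.InvalidStateError):
        fut.set_result(value)
        return True
    return False  # lost a race: something else completed it between the check and the set
```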
This trick is super handy in LiveKit for things like:

- Wrapping up speech handles
- Finishing audio playback
- Managing inference requests
- Finalizing run results

Best Practices

- For audio streaming: Start with a queue size between 1000-2000ms if you're processing audio continuously.
- When handling futures: Use that contextlib.suppress() method to keep things smooth.
- Stay updated: Keep your LiveKit SDK current and keep an eye on your application logs.

Ran into any other tricky spots? I'm here to help you troubleshoot! Feel free to reach out if you need more guidance. 😊

Sources: Media Samples & Testing | livekit/livekit_composite | DeepWiki | Resolving "InvalidState - failed to capture frame" errors in AudioSource | livekit-agents/livekit/agents/ipc/proc_client.py

polite-dog-12152
07/26/2025, 9:54 AM

tall-airline-16756
07/26/2025, 9:55 AM

1. Use the Correct Imports

from livekit.agents import (
    Agent,
    AgentSession,
    RoomInputOptions,
    RoomOutputOptions,
    llm,
)
from livekit.plugins import openai
2. Example Voice Agent Setup

Check out this simple way to create your voice agent:
from livekit.agents import AgentSession, Agent
from livekit.plugins import openai

session = AgentSession(
    llm=openai.realtime.RealtimeModel(
        voice="coral"
    )
)

await session.start(
    room=ctx.room,
    agent=Agent(instructions="You are a helpful voice AI assistant.")
)
3. Verify Installation

Just to make sure everything's smooth, run this:
pip install livekit-agents
4. Check Documentation

Pro tip: Always peek at the latest LiveKit docs to stay up to date with any changes. If you're still scratching your head, I'm here to help! Just give me a shout, and we'll figure this out together. 🤝

Sources: livekit-agents/livekit/agents/resources/__init__.py | agent/requirements.txt