average-architect-28889
07/26/2025, 8:38 AM
tall-airline-16756
07/26/2025, 8:38 AM
You can pass prior chat history through the chat_ctx parameter to the Agent constructor. Here's how you can do it:
from livekit.agents import ChatContext, Agent
from livekit.agents.multimodal import MultimodalAgent
# Create a chat context with chat history
chat_ctx = ChatContext()
chat_ctx.append(
    text="Context about the user: you are talking to a software engineer who's building voice AI applications. "
    "Greet the user with a friendly greeting and ask how you can help them today.",
    role="assistant",
)
# Pass the chat context to the agent
agent = MultimodalAgent(
    model=model,  # your realtime model instance
    chat_ctx=chat_ctx,
)
For custom agents, you can also set the chat context in the constructor:
from livekit.agents import ChatContext, Agent
class HelpfulAssistant(Agent):
    def __init__(self, chat_ctx: ChatContext):
        super().__init__(
            instructions="You are a helpful voice AI assistant.",
            chat_ctx=chat_ctx,
        )
Additionally, you can update the chat context during runtime using agent.update_chat_ctx()
or during agent handoffs by passing the chat context to the new agent.
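For example, here's a minimal runtime-update sketch (the user_name variable and the surrounding function are hypothetical):
async def on_user_identified(agent: Agent, user_name: str):
    # Copy the current context, add a message, then swap it in
    new_ctx = agent.chat_ctx.copy()
    new_ctx.add_message(role="assistant", content=f"The user's name is {user_name}.")
    await agent.update_chat_ctx(new_ctx)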
Let me know if you need further details!
Sources: Workflows | LiveKit Docs | agent.py | Agents 0.x migration guide | LiveKit Docs | livekit-agents/livekit/agents/llm/chat_context.py
average-architect-28889
07/26/2025, 9:04 AM
tall-airline-16756
07/26/2025, 9:05 AM
Method 1: Built-in EOU Model
You can enable the end-of-utterance (EOU) model by passing it to your AgentSession:
from livekit.plugins import turn_detector
session = AgentSession(
    # other parameters...
    turn_detection=turn_detector.EOUModel(),
)
Before you get started, just a quick tip – make sure to download those model weights first:
python my_agent.py download-files
Method 2: Custom Turn Detector Wrapper
Got some more complex needs? No worries! Here's how you can create a custom wrapper that gives you a bit more flexibility:
from livekit.plugins.turn_detector.multilingual import MultilingualModel
from livekit.agents import llm
class CustomTurnDetector:
    def __init__(self, userdata=None):
        self._model = MultilingualModel()
        self.last_eou_probability = None
        self.userdata = userdata

    async def predict_end_of_turn(self, chat_ctx: llm.ChatContext) -> float:
        # Delegate to the underlying model, then record and log the probability
        probability = await self._model.predict_end_of_turn(chat_ctx)
        self.last_eou_probability = probability
        print(f"EOU Probability: {probability * 100:.3f}%")
        return probability
# Use in AgentSession
session = AgentSession(
    # other parameters...
    turn_detection=CustomTurnDetector(userdata),
)
JavaScript/TypeScript Integration
For my JavaScript and TypeScript friends, here's how you can integrate the EOUModel:
import { EOUModel } from '@livekit/agents-plugin-silero/turnDetector';
const eouModel = new EOUModel(0.15); // unlikelyThreshold parameter
// The model provides methods like:
// - predictEndOfTurn(chatCtx): Promise<number>
// - supportsLanguage(language?: string): boolean
Configuration Options
Want to fine-tune your turn detection? You've got options:
session = AgentSession(
    turn_detection="vad",  # or a custom model
    min_endpointing_delay=0.5,
    max_endpointing_delay=6.0,
)
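On the Python side there's a similar threshold knob — a minimal sketch, assuming the parameter is named unlikely_threshold (mirroring the JS unlikelyThreshold shown above):
from livekit.plugins.turn_detector.multilingual import MultilingualModel

session = AgentSession(
    # unlikely_threshold is an assumption here, mirroring the JS example above
    turn_detection=MultilingualModel(unlikely_threshold=0.15),
)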
Just between us, the EOU model is pretty smart – it looks at the last four conversation turns to make predictions and dynamically adjusts those VAD silence timeouts. Pretty cool, right? Need any more help or have questions? I'm totally here for you – just give me a shout! 😊
average-architect-28889
07/26/2025, 9:15 AM
tall-airline-16756
07/26/2025, 9:16 AM
Here's a complete example that loads user data from job metadata and seeds the agent's chat context with it:
import json  # needed for json.loads below

from livekit import agents
from livekit.agents import Agent, ChatContext, AgentSession
class Assistant(Agent):
    def __init__(self, chat_ctx: ChatContext):
        super().__init__(chat_ctx=chat_ctx, instructions="You are a helpful voice AI assistant.")
async def entrypoint(ctx: agents.JobContext):
    # Example: load user-specific data from job metadata
    metadata = json.loads(ctx.job.metadata)
    user_name = metadata["user_name"]
    await ctx.connect()
    session = AgentSession(
        # other parameters like stt, llm, tts, etc.
    )
    initial_ctx = ChatContext()
    initial_ctx.add_message(role="assistant", content=f"The user's name is {user_name}.")
    await session.start(
        room=ctx.room,
        agent=Assistant(chat_ctx=initial_ctx),
        # other options like room_input_options
    )
    await session.generate_reply(
        instructions="Greet the user by name and offer your assistance."
    )
Alternative Approach: Here's another cool way you can pass context variables directly to the Agent constructor:
from livekit.agents import Agent, AgentSession, JobContext
from livekit.plugins import deepgram, openai, silero

class ContextAgent(Agent):
    def __init__(self, context_vars=None):
        instructions = """
        You are a helpful agent. The user's name is {name}.
        They are {age} years old and live in {city}.
        """
        if context_vars:
            instructions = instructions.format(**context_vars)
        super().__init__(
            instructions=instructions,
            stt=deepgram.STT(),
            llm=openai.LLM(model="gpt-4o"),
            tts=openai.TTS(),
            vad=silero.VAD.load(),
        )

async def entrypoint(ctx: JobContext):
    context_variables = {
        "name": "Shayne",
        "age": 35,
        "city": "Toronto",
    }
    session = AgentSession()
    await session.start(
        agent=ContextAgent(context_vars=context_variables),
        room=ctx.room,
    )
This approach is super flexible – it lets you customize your agent's context dynamically based on specific user or task details. Pretty neat, right? If you need any more help understanding this, just give me a shout! 😊
Sources: External data and RAG | LiveKit Docs | complex-agents/nutrition-assistant/agent.py | Vision | LiveKit Docs
average-architect-28889
07/26/2025, 9:31 AM
tall-airline-16756
07/26/2025, 9:32 AM
Method 1: Pass Tools to the Agent Constructor
from livekit.agents.llm import function_tool
# Create tools from your external functions
external_tools = [
    function_tool(your_external_function, name="function_name", description="Function description"),
]

# Pass tools to the Agent constructor
agent = YourAgent(tools=external_tools)
Method 2: Update Tools After Agent Creation
This is like giving your agent some new superpowers on the fly:
import logging
import random

logger = logging.getLogger(__name__)

async def entrypoint(ctx: JobContext):
    session = AgentSession()
    agent = AddFunctionAgent()  # your Agent subclass

    async def _random_number() -> int:
        num = random.randint(0, 100)
        logger.info(f"random_number called: {num}")
        return num

    await agent.update_tools(
        agent.tools
        + [function_tool(_random_number, name="random_number", description="Get a random number")]
    )

    await session.start(
        agent=agent,
        room=ctx.room,
    )
Method 3: Define Tools Within the Agent Class
This method is perfect if you want to keep everything neat and tidy inside your Agent class:
from livekit.agents import Agent, RunContext, function_tool

class FunctionAgent(Agent):
    def __init__(self):
        super().__init__(
            instructions="Your instructions here",
            # other parameters...
        )

    @function_tool
    async def your_tool_wrapper(self, context: RunContext):
        # Call your external class/function here
        result = await your_external_class.some_method()
        return result
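Since these tool methods receive a RunContext, here's a minimal sketch of reading shared state from it (the userdata dict and the "city" key are hypothetical):
from livekit.agents import RunContext, function_tool

@function_tool()
async def get_user_city(context: RunContext) -> str:
    """Return the city stored for the current user."""
    # context.userdata exposes whatever you passed to AgentSession(userdata=...)
    return context.userdata.get("city", "unknown")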
Key Points:
- Functions defined within an Agent using the @function_tool decorator are automatically accessible to the LLM.
- The Agent constructor accepts a tools parameter for passing external function tools.
- You can use the function_tool helper to create tools from regular functions.
- Functions take in a RunContext object, which provides access to the current agent state.
If you're looking for the most flexible approach, I'd totally recommend Method 2. Just create your function tools from external functions and add them to the agent using update_tools(). Got questions? I'm all ears! Feel free to reach out if anything needs more explanation. 😊
Sources: Agents v0.x migration guide | LiveKit Docs | livekit-agents/livekit/agents/voice/run_result.py | tool_calling/call_function_tool.py
average-architect-28889
07/26/2025, 9:45 AM
tall-airline-16756
07/26/2025, 9:46 AM
1. Automatic Registration with @function_tool
Define tools right on your Agent class and they register automatically:
from typing import Any

from livekit.agents import function_tool, Agent, RunContext
class MyAgent(Agent):
    @function_tool()
    async def lookup_weather(self, context: RunContext, location: str) -> dict[str, Any]:
        """Look up weather information for a given location."""
        return {"weather": "sunny", "temperature_f": 70}
2. Manual Registration
You've got some flexible options here too! Let me walk you through them:
- Via Constructor: Super easy - just pass tools directly when creating the agent:
class AgentA(Agent):
    def __init__(self):
        super().__init__(
            tools=[lookup_user],
            # other parameters...
        )
- Via the update_tools() Method: Want to add or swap tools on the fly? No problem! (See the sketch after this list for swapping a single tool.)
# Add a tool
await agent.update_tools(agent.tools + [tool_a])
# Replace all tools
await agent.update_tools([tool_a, tool_b])
- For External Tools: Got tools from other sources? We've got you covered:
# Register with the agent
# Note: _tools is a private attribute, so this is a workaround
if hasattr(agent, '_tools') and isinstance(agent._tools, list):
    agent._tools.extend(tools)
    logger.info(f"Registered {len(tools)} MCP tools with agent")
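And building on update_tools(), here's a minimal sketch of swapping out a single tool while keeping the rest (old_tool and replacement_tool are hypothetical):
# Keep everything except old_tool, then append the replacement
new_tools = [t for t in agent.tools if t is not old_tool]
await agent.update_tools(new_tools + [replacement_tool])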
Key Points:
- Tools with @function_tool register automatically - how cool is that?
- Tools passed via the tools parameter play nicely with automatically registered ones.
- The update_tools() method lets you refresh your tool set completely.
- You can always peek at your current tools using agent.tools before making changes.
Got questions? I'm all ears! Feel free to ask if anything needs more explanation. 😊
Sources: Tool definition and use | LiveKit Docs | mcp_client/agent_tools.py | livekit-agents/livekit/agents/voice/agent_activity.py
average-architect-28889
07/26/2025, 10:07 AM
tall-airline-16756
07/26/2025, 10:08 AM
session.start
Think of this method like setting up a new workspace - there are some key things you'll want to have ready:
- Must-Haves:
  - room: Your LiveKit room (like your virtual meeting space).
  - agent: Your digital assistant who knows what to do.
- Nice-to-Haves:
  - room_input_options: How you want to handle incoming interactions.
  - room_output_options: How you want to manage your audio and other outputs.
Here's a friendly example of how this might look:
await session.start(
    room=ctx.room,
    agent=Assistant(tools=[get_weather, get_median_home_price, search_web, tell_joke]),
    room_input_options=RoomInputOptions(close_on_disconnect=False),
    room_output_options=RoomOutputOptions(audio_enabled=True),
)
Adding Your Agent's Superpowers (Tools)
There are several cool ways to give your agent new skills:
1. Using the Magic @function_tool Decorator
You can easily teach your agent new tricks right in its class:
from typing import Any

from livekit.agents import function_tool, Agent, RunContext

class MyAgent(Agent):
    @function_tool()
    async def lookup_weather(self, context: RunContext, location: str) -> dict[str, Any]:
        """Look up weather information for a given location."""
        return {"weather": "sunny", "temperature_f": 70}
2. Handing Tools Directly to Your Agent
Just like giving a new gadget to a friend:
class AgentA(Agent):
    def __init__(self):
        super().__init__(
            tools=[lookup_user],
            # other cool settings...
        )
3. Updating Tools on the Fly
Your agent can learn new skills anytime:
# Add a tool
await agent.update_tools(agent.tools + [new_tool])
# Or completely change toolset
await agent.update_tools([tool_a, tool_b])
4. Creating Standalone Tools
You can also create independent tools:
@function_tool()
async def lookup_user(context: RunContext, user_id: str) -> dict:
    """Look up a user's information by ID."""
    return {"name": "John Doe", "email": "john.doe@example.com"}
5. For the Tech Wizards: Raw Schemas
For those who love diving deep, you can create super-custom tools:
weather_schema = {
    "name": "get_weather",
    "description": "Retrieve the current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "The city to get the weather for"},
        },
        "required": ["city"],
    },
}

@function_tool(raw_schema=weather_schema)
async def get_weather(raw_arguments: dict[str, Any]) -> dict[str, Any]:
    city = raw_arguments["city"]
    return {"temperature": 70, "city": city}
Quick Takeaways:
- Tools with @function_tool register automatically. How convenient! 🎉
- You can mix and match how you add tools.
- The update_tools() method is like your agent's skill upgrade button.
- Your function's description helps the agent understand its new skill.
Need more help? I'm right here! 👋
Sources: livekit-agents/livekit/agents/llm/tool_context.py | Tool definition and use | LiveKit Docs
average-architect-28889
07/26/2025, 10:10 AM
tall-airline-16756
07/26/2025, 10:11 AM
To run a text-only session, disable audio on both input and output and enable text/transcription instead:
session = AgentSession(
    llm=openai.LLM(model="gpt-4o-mini"),
)
await session.start(
    agent=MyAgent(),
    room=ctx.room,
    room_input_options=RoomInputOptions(text_enabled=True, audio_enabled=False),
    room_output_options=RoomOutputOptions(transcription_enabled=True, audio_enabled=False),
)
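By the way, if you want to feed a text turn to the session programmatically (handy for testing), generate_reply accepts a user_input string — a quick sketch:
# Drive the session with a text message instead of audio
await session.generate_reply(user_input="What's the weather in Tokyo?")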
Got questions? I'm here to help! Feel free to ask about any of these options, and I'll break it down in a way that makes total sense. 😊
Sources: Sources/LiveKit/Types/Options/RoomOptions.swift | livekit-android-sdk/src/main/java/io/livekit/android/RoomOptions.kt | examples/voice_agents/weather_agent.py | src/room/defaults.ts