microscopic-airport-67304 (08/18/2025, 4:05 PM)
tall-airline-16756 (08/18/2025, 4:06 PM)
Step 1: Define the Weather Function

from typing import Annotated

import aiohttp

from livekit.agents.llm import FunctionContext, TypeInfo, ai_callable


class WeatherFunctionContext(FunctionContext):
    @ai_callable(description="Get the current weather in a given location")
    async def get_weather(
        self,
        location: Annotated[str, TypeInfo(description="The city and state, e.g., San Francisco, CA")],
    ) -> str:
        """Get the current weather for a given location."""
        url = f"https://wttr.in/{location}?format=j1"
        async with aiohttp.ClientSession() as session:
            async with session.get(url) as response:
                if response.status == 200:
                    weather_data = await response.json()
                    # wttr.in's j1 format nests the description one level deeper:
                    # current_condition[0].weatherDesc[0].value
                    description = weather_data["current_condition"][0]["weatherDesc"][0]["value"]
                    return f"The weather in {location} is {description}."
        raise Exception(f"Failed to get weather for {location}")
Step 2: Create the Agent with Anthropic LLM
from livekit.agents import Agent, AgentSession, JobContext
from livekit.plugins.anthropic import LLM


class WeatherAgent(Agent):
    def __init__(self) -> None:
        super().__init__(
            instructions="You are a weather assistant. Use the get_weather function to provide accurate weather information.",
            llm=LLM(),
        )

    async def on_enter(self):
        self.session.generate_reply()


async def entrypoint(ctx: JobContext):
    # Connect to the room before starting the session.
    await ctx.connect()
    session = AgentSession()
    await session.start(
        agent=WeatherAgent(),
        room=ctx.room,
    )
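To actually run this entrypoint, hand it to a worker via the standard livekit.agents CLI helpers. A minimal sketch; it assumes your LiveKit URL and API credentials are set in the environment:

from livekit.agents import WorkerOptions, cli

if __name__ == "__main__":
    # Start a worker that runs `entrypoint` for each dispatched job.
    cli.run_app(WorkerOptions(entrypoint_fnc=entrypoint))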
Step 3: Use the Function in the LLM
When someone asks about the weather, the LLM recognizes that it needs to call the get_weather function and shares the result with the user.
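Under the hood, Anthropic's Messages API signals this with a tool_use content block, which the plugin maps back to your Python method. You never build this yourself; the shape below is shown only for illustration, with made-up values:

# Illustrative tool_use block from Anthropic's Messages API:
tool_call = {
    "type": "tool_use",
    "id": "toolu_01...",  # truncated, illustrative
    "name": "get_weather",  # matches the ai_callable method
    "input": {"location": "New York, NY"},
}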
Step 4: Handle Function Results
The LLM then uses the info from the function call to craft a helpful response, like this:
The weather in New York is partly cloudy.
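The string your function returns travels back to the model as a tool_result block tied to the same call id. Again, the plugin constructs this for you; the shape is illustrative:

tool_result = {
    "type": "tool_result",
    "tool_use_id": "toolu_01...",  # must match the tool_use id above
    "content": "The weather in New York is partly cloudy.",
}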
This example shows you how to smoothly integrate function calling with an Anthropic LLM in LiveKit, making your agents even smarter and more useful.
Sources: livekit-agents/livekit/agents/voice/run_result.py | examples/pydantic_models_to_grammar_examples.py
microscopic-airport-67304 (08/18/2025, 4:06 PM)
tall-airline-16756 (08/18/2025, 4:07 PM)
from typing import Annotated

import aiohttp

from livekit.agents import llm
from livekit.agents.pipeline import VoicePipelineAgent
from livekit.plugins.anthropic import LLM


# Define a function context with the tool
class WeatherFnc(llm.FunctionContext):
    @llm.ai_callable()
    async def get_weather(
        self,
        location: Annotated[str, llm.TypeInfo(description="Location to get weather for")],
    ):
        """Get the current weather for a given location."""
        # Simulate an API call to get weather data.
        # <WEATHER_API_URL> is a placeholder; substitute a real weather endpoint.
        url = f"<WEATHER_API_URL>/{location}?format=%C+%t"
        async with aiohttp.ClientSession() as session:
            async with session.get(url) as response:
                if response.status == 200:
                    weather_data = await response.text()
                    return f"The weather in {location} is {weather_data}."
                else:
                    raise Exception(f"Failed to get weather data, status code: {response.status}")


# Initialize the Anthropic LLM (Anthropic model ids are dated)
anthropic_llm = LLM(model="claude-3-sonnet-20240229")

# Create the function context
weather_ctx = WeatherFnc()

# Initialize the agent with the LLM and function context
agent = VoicePipelineAgent(
    llm=anthropic_llm,
    fnc_ctx=weather_ctx,
    # Add other necessary components like STT, TTS, VAD, etc.
)
This little example shows you how the get_weather function is wrapped with ai_callable(), so the LLM knows it can use it. When someone asks about the weather, the LLM calls this function to fetch the latest weather info and includes it right in the reply.
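For context, here's a sketch of how that agent could be started inside a job entrypoint, using the 0.x pipeline API with connection and participant handling simplified:

from livekit.agents import AutoSubscribe, JobContext

async def entrypoint(ctx: JobContext):
    # Connect to the room and wait for a user before starting the pipeline.
    await ctx.connect(auto_subscribe=AutoSubscribe.AUDIO_ONLY)
    participant = await ctx.wait_for_participant()
    agent.start(ctx.room, participant)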
If you want me to walk you through any part of this or need more examples, just let me know—I’m here to help!