rough-finland-1269
02/27/2025, 9:29 PM

gray-monkey-83525
02/27/2025, 9:29 PM
Using llm_callable in a MultimodalAgent involves the agent's function calling capabilities. However, the specific setup for llm_callable isn't explicitly detailed in the provided data sources. You might want to explore the function context and callable functions as described in the LiveKit function context documentation.
Would you like more detailed guidance on how to implement function calling within a MultimodalAgent?
Sources:
- LiveKit function context documentation

rough-finland-1269
02/27/2025, 9:34 PM

gray-monkey-83525
02/27/2025, 9:34 PM
To use llm_callable in a MultimodalAgent, you can define a function within a FunctionContext and mark it as callable by the LLM using the @llm.ai_callable decorator. Here's a step-by-step example:
1. Define a FunctionContext: create a class that inherits from llm.FunctionContext and define your callable function within it.
2. Use the @llm.ai_callable decorator: this marks the function as available to the LLM. You can provide a description and argument types using Python's type annotations.
3. Integrate with MultimodalAgent: pass the function context to the MultimodalAgent when initializing it.
Here's a Python example:
import aiohttp
from typing import Annotated
from livekit.agents import llm
from livekit.agents.multimodal import MultimodalAgent

# Define a class that inherits from llm.FunctionContext
class AssistantFnc(llm.FunctionContext):
    # Use the llm.ai_callable decorator to mark this function as callable
    @llm.ai_callable()
    async def get_weather(
        self,
        location: Annotated[str, llm.TypeInfo(description="The location to get the weather for")],
    ):
        """Called when the user asks about the weather. This function will return the weather for the given location."""
        url = f"https://wttr.in/{location}?format=%C+%t"
        async with aiohttp.ClientSession() as session:
            async with session.get(url) as response:
                if response.status == 200:
                    weather_data = await response.text()
                    return f"The weather in {location} is {weather_data}."
                else:
                    raise Exception(f"Failed to get weather data, status code: {response.status}")

# Initialize the function context
fnc_ctx = AssistantFnc()

# Initialize the MultimodalAgent with the function context
multimodal_agent = MultimodalAgent(model=your_model, fnc_ctx=fnc_ctx)
In this example, the get_weather function is defined as callable by the LLM, allowing it to retrieve weather information based on a location provided by the user.
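To make the mechanism behind the decorator concrete, here is a minimal, self-contained sketch of the pattern that a function-calling registry like FunctionContext relies on: the decorator records each function's name, docstring, and parameter names so the LLM can be told what tools exist and how to call them. Note that FunctionRegistry and ai_callable below are illustrative stand-ins, not the actual livekit.agents API.

```python
import inspect
from typing import Callable

# Hypothetical registry illustrating how a decorator can collect the
# metadata an LLM needs for function calling. This is NOT the livekit
# implementation, just the general pattern.
class FunctionRegistry:
    def __init__(self) -> None:
        self.functions: dict[str, dict] = {}

    def ai_callable(self) -> Callable:
        def wrapper(fn: Callable) -> Callable:
            # Record the function's name, docstring, and parameter names
            self.functions[fn.__name__] = {
                "description": inspect.getdoc(fn),
                "parameters": list(inspect.signature(fn).parameters),
            }
            return fn
        return wrapper

registry = FunctionRegistry()

@registry.ai_callable()
def get_weather(location: str) -> str:
    """Return the current weather for the given location."""
    return f"The weather in {location} is sunny."

# The registry now holds the metadata that would be sent to the LLM so it
# can decide when and with which arguments to invoke get_weather.
print(registry.functions["get_weather"])
```

The docstring doubles as the tool description shown to the model, which is why the real decorator encourages descriptive docstrings and typed, annotated parameters.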
Would you like more information on any specific part of this setup?
Sources:
- Function calling with Voice Agents

shy-article-62777
03/10/2025, 8:58 AM

tall-airline-16756
03/10/2025, 8:58 AM