# ask-ai
d
what is the best way to organize functional tools outside of an agent class
t
Thanks for reaching out, Theo. I'm an AI assistant and I'm here to help. Give me a moment while I research this topic.
To organize functional tools outside of an agent class, you can follow these approaches:

• Define tools outside the agent class: Create tools as standalone functions and provide them to multiple agents. For example, you can define a tool using the `@function_tool` decorator and then include it in the toolset of different agents.
• Create tools programmatically: Use `function_tool` as a function rather than a decorator to dynamically create tools based on specific needs.
• Utilize the `RunContext` parameter: This allows you to access the current session, agent, and state, making it easier to share tools between multiple agents.

Let me know if you need further clarification or examples!

Sources: Tool definition and use | LiveKit Docs
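As a library-free sketch of the first approach, the pattern looks like this: a factory builds a standalone tool function, and two agents share the same instance. (`make_lookup_tool` and `MiniAgent` below are illustrative stand-ins, not LiveKit APIs.)

```python
# Library-free sketch: a tool defined outside any agent class and
# shared between two agents. MiniAgent is a stand-in, not livekit.agents.Agent.

def make_lookup_tool(table: dict):
    """Factory returning a standalone tool function bound to a data source."""
    def lookup(key: str) -> str:
        return table.get(key, "not found")
    return lookup

class MiniAgent:
    def __init__(self, tools):
        # Index tools by function name, mirroring a toolset
        self.tools = {t.__name__: t for t in tools}

    def call(self, name: str, *args):
        return self.tools[name](*args)

# One tool instance, provided to multiple agents
shared_lookup = make_lookup_tool({"fr": "French", "de": "German"})

support_agent = MiniAgent(tools=[shared_lookup])
sales_agent = MiniAgent(tools=[shared_lookup])

print(support_agent.call("lookup", "fr"))  # French
print(sales_agent.call("lookup", "de"))    # German
```

The point is that the tool owns no agent state, so any number of agents can include it in their toolset.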
d
so it will always need to get passed RunContext?
t
No, `RunContext` is not always required for tools in LiveKit. While tools support a special `context` argument (`RunContext`) to access the current session, function call, and other details, it is optional. The system is designed to handle cases where `RunContext` is not provided, so you can create tools that do not rely on it if your use case does not require access to session-specific information. Let me know if you need further clarification!

Sources: Tool definition and use | LiveKit Docs
d
give me an example of creating tools programmatically
t
Here is an example of creating tools programmatically in LiveKit:

```python
from livekit.agents import Agent, function_tool, RunContext

class Assistant(Agent):
    def _set_profile_field_func_for(self, field: str):
        async def set_value(context: RunContext, value: str):
            # custom logic to set input
            return f"field {field} was set to {value}"

        return set_value

    def __init__(self):
        super().__init__(
            tools=[
                function_tool(
                    self._set_profile_field_func_for("phone"),
                    name="set_phone_number",
                    description="Call this function when the user has provided their phone number.",
                ),
                function_tool(
                    self._set_profile_field_func_for("email"),
                    name="set_email",
                    description="Call this function when the user has provided their email.",
                ),
                # ... other tools ...
            ],
            # instructions, etc ...
        )
```
In this example, tools are dynamically created using the `function_tool` function. This allows you to reuse the same logic for different tools by passing specific parameters (e.g., "phone" or "email"). Let me know if you need further assistance!

Sources: Tool definition and use | LiveKit Docs
d
give an example without an agent class
t
Currently, the examples provided in the documentation focus on creating tools programmatically within the context of an agent class. If you are looking to create tools entirely outside of an agent class, you can still use the `function_tool` function to define tools independently. Here's a simplified example:
```python
from livekit.agents import function_tool, RunContext

# Define a tool programmatically
async def set_value(context: RunContext, field: str, value: str):
    # Custom logic to set input
    return f"Field {field} was set to {value}"

# Create the tool using function_tool
set_value_tool = function_tool(
    set_value,
    name="set_value_tool",
    description="Set a specific field to a given value.",
)

# The tool can now be used independently or passed to an agent if needed.
```

This approach allows you to define tools without embedding them in an agent class. Let me know if you need further clarification or additional examples!

Sources: Tool definition and use | LiveKit Docs
d
is this the same as adding a decorator to a function?
t
Using `function_tool` as a decorator and as a function in LiveKit achieves similar outcomes but differs in flexibility and use cases:

• As a decorator: This is a straightforward way to define a tool directly on a function. For example:

```python
@function_tool()
async def lookup_user(context: RunContext, user_id: str) -> dict:
    """Look up a user's information by ID."""
    return {"name": "John Doe", "email": "john.doe@example.com"}
```

• As a function: This approach allows you to create tools programmatically at runtime, offering more flexibility. For example:

```python
function_tool(
    self._set_profile_field_func_for("phone"),
    name="set_phone_number",
    description="Call this function when the user has provided their phone number.",
)
```

The function approach is particularly useful when you need to dynamically create multiple tools from the same base function, customize their names or descriptions, or load them from external sources. Let me know if you need further clarification!

Sources: Tool definition and use | LiveKit Docs
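Under the hood, the two forms are the same operation: in Python, `@decorator` over `f` is just `f = decorator(f)`. A stand-in sketch makes this concrete (the `function_tool` below is a simplified imitation for illustration, not the real LiveKit implementation):

```python
# Stand-in sketch, not LiveKit's actual function_tool: it only attaches
# metadata, to show that the decorator form and the function form coincide.

def function_tool(fn=None, *, name=None, description=None):
    def wrap(f):
        f.tool_name = name or f.__name__
        f.tool_description = description or f.__doc__
        return f
    # Called as a decorator factory -> return the wrapper;
    # called as a plain function with fn -> wrap immediately.
    return wrap(fn) if fn is not None else wrap

@function_tool(name="lookup_user")
async def decorated(user_id: str) -> dict:
    """Look up a user's information by ID."""
    return {"id": user_id}

async def base(user_id: str) -> dict:
    """Look up a user's information by ID."""
    return {"id": user_id}

# The "programmatic" form is just the decorator applied manually:
programmatic = function_tool(base, name="lookup_user")

assert decorated.tool_name == programmatic.tool_name == "lookup_user"
```

So the answer above holds: the results are equivalent; the function form just gives you a hook to vary the name, description, or wrapped function at runtime.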
d
would one cause more latency than the other
t
The documentation does not specify any differences in latency between using `function_tool` as a decorator versus as a function. Both approaches run once at tool-creation time and register the same kind of tool, so the method of creation does not affect per-call latency; any differences would come from the tool's implementation rather than how it was registered. If you have specific performance concerns, I recommend testing both approaches in your environment to measure any differences. Let me know if you'd like further assistance!
d
what is the most likely cause of issues with tools being slow or incorrect? the prompting?
t
The documentation does not provide specific details about the causes of tools being slow or incorrect in LiveKit. However, potential factors could include:

• Prompting: Poorly designed prompts or incomplete instructions can lead to incorrect or suboptimal tool behavior.
• Tool implementation: Inefficient code or logic within the tool itself can cause delays or errors.
• External dependencies: If the tool relies on external APIs or services, their performance can impact the tool's speed and accuracy.
• System resources: Limited computational resources or high system load can also contribute to latency.

If you suspect prompting is the issue, refining the tool's description and instructions can often improve its performance. Let me know if you'd like help troubleshooting further!
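One way to separate slow tool code from slow prompting is to time each tool call. A minimal sketch of a timing wrapper for async tools (the `timed` decorator and `slow_lookup` tool are illustrative, not part of LiveKit):

```python
import asyncio
import time

def timed(tool):
    """Wrap an async tool so each call logs its wall-clock duration."""
    async def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return await tool(*args, **kwargs)
        finally:
            elapsed_ms = (time.perf_counter() - start) * 1000
            print(f"{tool.__name__} took {elapsed_ms:.1f} ms")
    wrapper.__name__ = tool.__name__
    return wrapper

@timed
async def slow_lookup(user_id: str) -> dict:
    await asyncio.sleep(0.05)  # stand-in for an external API call
    return {"id": user_id}

result = asyncio.run(slow_lookup("42"))
print(result)  # {'id': '42'}
```

If the logged durations are small but the agent still feels slow or picks the wrong tool, the bottleneck is more likely the prompting (tool names, descriptions, instructions) or the model itself.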
d
should the tool use vary between different models
t
Yes, tool usage can vary between different models in LiveKit. The framework provides options to control how models use tools through parameters like `tool_choice`, which can be set to:

• auto: Let the model decide when to use tools (default).
• required: Force the model to use tools.
• none: Disable tool usage entirely.

Additionally, some integrations (e.g., xAI Grok and Perplexity) support a `parallel_tool_calls` parameter, allowing models to make multiple tool calls simultaneously. These settings allow you to tailor tool usage based on the capabilities and behavior of different models. Let me know if you'd like further details or examples!

Sources: xAI LLM integration guide | LiveKit Docs | Perplexity LLM integration guide | LiveKit Docs | Amazon Bedrock LLM integration guide | LiveKit Docs
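The semantics of those three `tool_choice` values can be sketched in a few lines (this `plan_step` dispatcher is purely illustrative; the real parameter is handled inside the LiveKit LLM integrations):

```python
# Illustrative sketch of tool_choice semantics, not LiveKit code.

def plan_step(tool_choice: str, model_wants_tool: bool) -> str:
    """Decide whether the next step is a tool call or a plain reply."""
    if tool_choice == "none":
        return "reply"        # tools disabled entirely
    if tool_choice == "required":
        return "call_tool"    # the model must use a tool
    # "auto": defer to the model's own decision
    return "call_tool" if model_wants_tool else "reply"

print(plan_step("none", True))       # reply
print(plan_step("required", False))  # call_tool
print(plan_step("auto", True))       # call_tool
```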