# ask-ai
b
Everyone, when using
Copy code
from livekit.plugins import openai
to dynamically register tools, haven't you run into some strange issues? I've encountered two odd problems and would like to share them:
1. "Please ensure that the number of function response parts is equal to the number of function call parts of the function call turn."
2. When tools are present and `tool_choice` is set to `auto`, if I ask a question outside of the tools, the model always replies that it doesn't have that capability.
I'd like to know whether this is an issue with the official plugin, or if I'm misusing it. I've followed the official examples exactly.
t
Thanks for reaching out, zhuyu. I'm an AI assistant and I'm here to help. Give me a moment while I research this topic.
I'd love to help you get this sorted out! Could you please share a bit more about your setup and how you're using things? For example:
• Are you working with the latest version of the LiveKit SDK and the OpenAI plugin?
• Would you be comfortable sharing the exact code snippet or configuration where the error popped up?
• Which specific tools are you dynamically registering, and how are they being used in your project?
Having these details will really help us understand whether the issue is coming from the plugin itself or how it's being implemented. Looking forward to your response!
b
I’m using the following versions:
Copy code
livekit-agents==1.2.2
livekit-plugins-openai==1.2.2
I've configured a total of 5 tools: two are webhook tools, for which I implemented the request interfaces myself, and the rest are function tools that return JSON. I configure them dynamically. Here's my code:
Copy code
# Imports assumed for this excerpt (verify against your livekit-agents version);
# ToolFactory and OPENROUTER_LLM_CONFIG are project-specific helpers defined elsewhere.
from typing import Any
from collections.abc import AsyncIterable, Coroutine

from livekit.agents import Agent, ModelSettings, RunContext, function_tool, llm
from livekit.agents.llm import FunctionTool, RawFunctionTool
from livekit.plugins import openai


def build_dynamic_tool_v2(tool: dict):
    """
    Build a LiveKit-compatible tool for function / webhook / client tool types.
    """
    func_name = tool.get("name")
    if not func_name:
        raise ValueError("func_name is required for build_dynamic_tool_v2")

    func_description = tool.get("description")
    if not func_description:
        raise ValueError("func_description is required for build_dynamic_tool_v2")

    # get strategy
    strategy = ToolFactory.get_strategy(tool.get("type", "function"))
    raw_schema = strategy.construct_raw_schema(tool, {})

    async def handler(raw_arguments: dict[str, Any], context: RunContext):
        # support dynamic_variable assignment
        assignments = tool.get("assignments", [])
        for a in assignments:
            dyn_var = a.get("dynamic_variable")
            if dyn_var and dyn_var not in raw_arguments:
                raw_arguments[dyn_var] = a.get("value_path", "")

        logger.info("Executing tool %s with arguments: %s", func_name, raw_arguments)
        result = await strategy.execute(tool, raw_arguments)
        return result

    # return LiveKit function_tool
    return function_tool(
        handler,
        raw_schema=raw_schema
    )

class MinimaxAgent(Agent):

    def __init__(self, instructions, tools, llm_component):
        super().__init__(instructions=instructions, tools=tools, llm=llm_component)

    def llm_node(
        self,
        chat_ctx: llm.ChatContext,
        tools: list[FunctionTool | RawFunctionTool],
        model_settings: ModelSettings,
    ) -> (
        AsyncIterable[llm.ChatChunk | str]
        | Coroutine[Any, Any, AsyncIterable[llm.ChatChunk | str]]
        | Coroutine[Any, Any, str]
        | Coroutine[Any, Any, llm.ChatChunk]
        | Coroutine[Any, Any, None]
    ):
        # Note: this override ignores the incoming model_settings and always forces tool_choice="auto".
        return super().llm_node(chat_ctx, tools, ModelSettings(tool_choice="auto"))

    llm_component = openai.LLM(
        base_url=OPENROUTER_LLM_CONFIG["base_url"],
        api_key=OPENROUTER_LLM_CONFIG["api_key"],
        model=OPENROUTER_LLM_CONFIG["model"],
        temperature=OPENROUTER_LLM_CONFIG["temperature"],
        tool_choice="auto",
        parallel_tool_calls=False
    )
@bumpy-architect-64868
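For context on how these pieces fit together, here is a minimal, hypothetical usage sketch: `tool_definitions` and the sample entries are made-up placeholders, while `build_dynamic_tool_v2`, `MinimaxAgent`, and `llm_component` come from the snippet above.
```
# Sketch only: tool_definitions and the example entries are hypothetical;
# build_dynamic_tool_v2, MinimaxAgent and llm_component are from the snippet above.
tool_definitions = [
    {"name": "get_weather", "description": "Look up the current weather", "type": "function"},
    # ... webhook tools would carry type="webhook" plus their request configuration
]

dynamic_tools = [build_dynamic_tool_v2(t) for t in tool_definitions]

agent = MinimaxAgent(
    instructions="You are a helpful assistant. Answer freely and call tools only when needed.",
    tools=dynamic_tools,
    llm_component=llm_component,
)
```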
t
Thank you so much for sharing the details about your setup and code! I really appreciate the clarity you've provided. Based on what you've shared, I'd love to offer some insights and friendly recommendations to help you navigate these issues:
1. Error: "Please ensure that the number of function response parts is equal to the number of function call parts of the function call turn"
This error usually pops up when there's a mismatch between the function calls the model makes and the responses coming back from your tools. Here are a few tips to smooth things out:
• Make sure every function call the model initiates has exactly one matching response from the corresponding tool. This includes carefully handling streamed arguments and wrapping up responses when the `finish_reason` is received.
• Try turning on the `LK_OPENAI_DEBUG` environment variable to log the flow of function calls and responses (a minimal sketch follows below). It's a great way to double-check that everything is lining up correctly.
• Take a look at the `ChatCompletionChunk` stream to confirm all `tool_call` events are properly finalized with a `finish_reason` of "tool_calls" before sending back a response.
2. Behavior with `tool_choice="auto"`
When you set `tool_choice` to `auto`, the model decides on its own whether to use the tools based on the conversation. If you notice it skipping tools or acting unexpectedly, here are some things to consider:
• Ensure your tool descriptions and parameters are clear and well-defined. This really helps the model understand when it should bring a tool into the conversation.
• If you want tool usage to be more predictable, you might try setting `tool_choice="required"` to make sure the model uses at least one tool, or specify a particular function to call.
Your dynamic tool registration logic looks solid from what I can see! Just a couple of friendly reminders to double-check:
• Confirm that the `raw_schema` for each tool is correctly built and matches the expected input/output format.
• Make sure the `handler` function is processing all necessary arguments and returning a valid response.
• Verify that the `strategy.execute` method is properly handling the tool logic and returning the results you expect.
If you're still running into challenges, please don't hesitate to share any specific logs or debug outputs. I'm here to help you troubleshoot further and get everything working smoothly!
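As a minimal sketch of the suggestions above: it relies on the `LK_OPENAI_DEBUG` environment variable named in the reply (the exact value it expects is an assumption) and reuses the `openai.LLM` constructor arguments from the user's snippet, with `tool_choice="required"` swapped in for comparison. The model name is a placeholder.
```
import os

from livekit.plugins import openai

# Enable verbose logging of the function call / response flow
# (LK_OPENAI_DEBUG is the variable named above; "1" is an assumed truthy value).
os.environ["LK_OPENAI_DEBUG"] = "1"

# Same constructor shape as in the user's snippet, but forcing at least one
# tool call per turn to compare against tool_choice="auto".
llm_forced = openai.LLM(
    model="gpt-4o-mini",        # placeholder model name
    tool_choice="required",     # "auto" lets the model decide; "required" forces a tool call
    parallel_tool_calls=False,
)
```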
Thank you so much for sharing the details of your setup and code! I really appreciate you taking the time to provide this information. Based on what you've shared, here are a few key points that might help address the issues you're facing:
• Error: "Please ensure that the number of function response parts is equal to the number of function call parts of the function call turn": This error usually happens when there's a mismatch between the function calls made by the model and the responses from the tools. To resolve this, please make sure that each function call results in exactly one response, and that they follow the correct order. For easier debugging, you can enable the `LK_OPENAI_DEBUG` environment variable to see detailed logs of the function call and response flow.
• Dynamic Tool Registration: Your method of dynamically registering tools using `@function_tool` is spot on! Just be sure to update the tool list dynamically with `agent.update_tools()` before making any LLM requests (see the sketch below). Also, double-check that each tool name and schema is unique and properly defined to avoid any conflicts.
• Tool Choice Behavior: When `tool_choice` is set to `auto`, the model decides on its own whether to use the tools based on the conversation context. If you want to ensure a tool is always used, you might want to set `tool_choice` to `required`, which forces at least one tool call.
Please feel free to reach out if you have any more questions or if there's anything else you'd like to explore in your setup. I'm here to help!
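A rough illustration of the `agent.update_tools()` suggestion above, assuming the method name and `await`-style usage are as described in the reply (worth verifying against the installed livekit-agents version); `build_dynamic_tool_v2` comes from the user's earlier snippet and `tool_definitions` is a hypothetical list of tool config dicts.
```
# Sketch only: update_tools() usage is assumed from the reply above and may differ by version.
async def refresh_agent_tools(agent, tool_definitions: list[dict]) -> None:
    tools = [build_dynamic_tool_v2(t) for t in tool_definitions]
    # Replace the agent's tool list before the next LLM request is made.
    await agent.update_tools(tools)
```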
b
I’d like help with the second issue. The screenshot I attached contains the logs of the problem. My issue is that after I dynamically registered tools, when I ask the model something outside of the tools’ scope, the model is restricted and responds that it cannot fulfill the request. What I actually want is for the model to be able to freely decide whether to trigger a tool or just reply directly. @bumpy-architect-64868
t
I'd love to help you with your second issue, where the model says it can't fulfill requests outside the tools' scope. You can actually configure the model to freely choose whether to trigger a tool or reply directly, and this is managed by the `tool_choice` parameter in the LiveKit OpenAI plugin. Here's a friendly guide to get you started:
1. Set `tool_choice` to `auto`: This lets the model decide on its own whether to use a tool or provide a direct response based on what it understands from the input and available tools. This is the default setting, but you can also specify it explicitly when you initialize the LLM:
Copy code
from livekit.plugins import openai

llm_component = openai.LLM(
    model="gpt-4o-mini",
    tool_choice="auto",  # Allows the model to decide
    parallel_tool_calls=False
)
2. Make Sure Tool Descriptions Are Clear: The model depends on the tool descriptions to know when to use them. Clear and specific descriptions help avoid unnecessary tool calls and make the interaction smoother.
3. Override Per Request (Optional): If you want to tweak this behavior for certain requests, you can override the `tool_choice` parameter right in the `chat()` method for that particular conversation (a rough sketch follows below).
With these steps, your model should feel confident deciding when to invoke a tool or just reply directly. If anything feels unclear or you'd like more help, please don't hesitate to reach out. I'm here for you!
Sources: Telnyx LLM integration guide | LiveKit Docs | livekit-plugins/livekit-plugins-openai/livekit/plugins/openai/llm.py | OpenAI LLM integration guide | LiveKit Docs | xAI LLM integration guide | LiveKit Docs | Together AI LLM integration guide | LiveKit Docs
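For the per-request override in step 3, here is a hedged sketch. It assumes `LLM.chat()` in livekit-agents 1.2.x accepts `tool_choice` as a keyword argument, as the reply above states; `llm_component` and `dynamic_tools` refer to the user's earlier snippet, `chat_ctx` is supplied by the caller, and the exact signature and accepted values should be verified against the installed version.
```
# Sketch only: assumes LLM.chat() accepts a per-call tool_choice, as described above.
async def ask_without_tools(llm_component, chat_ctx, dynamic_tools):
    stream = llm_component.chat(
        chat_ctx=chat_ctx,
        tools=dynamic_tools,
        tool_choice="none",   # "auto" lets the model decide; "required" forces a tool call
    )
    async for chunk in stream:
        print(chunk)          # streamed llm.ChatChunk objects
    await stream.aclose()
```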
b
I have already provided my code above. I configured `auto`, but it didn't take effect; instead, the model still restricts itself to the tools' scope. I checked all the documentation but found no answers to this issue, and no one in the community seems to have encountered the same problem. Could you please carefully read my earlier messages and help me solve this? @bumpy-architect-64868 @flaky-orange-20484 @bitter-noon-73951 @boundless-airport-97849
t
I don't have the answer you're looking for. You could also try asking your question: • in one of the other Slack channels or • to https://deepwiki.com/livekit/livekit_composite which is trained on all LiveKit source code If you find the answer, please post it here to help others!
b
Thank you for your help @bumpy-architect-64868