brainy-continent-98791
09/11/2025, 9:07 AM
When using from livekit.plugins import openai to dynamically register tools, have you run into any strange issues?
I’ve encountered two odd problems and would like to share them:
1. "Please ensure that the number of function response parts is equal to the number of function call parts of the function call turn."
2. When tools are present and tool_choice is set to auto, if I ask a question outside of the tools, the model always replies that it doesn’t have that capability.
I’d like to know whether this is an issue with the official plugin, or if I’m misusing it. I’ve followed the official examples exactly.
tall-airline-16756
09/11/2025, 9:07 AM
tall-airline-16756
09/11/2025, 9:08 AM
brainy-continent-98791
09/11/2025, 9:14 AM
livekit-agents==1.2.2
livekit-plugins-openai==1.2.2
I’ve configured 5 tools in total. Two of them are webhook tools, for which I implemented the request interfaces myself; the other tools are all function_tools that return JSON. I register all of them dynamically.
Here’s my code:
def build_dynamic_tool_v2(tool: dict):
    """
    Build compatible LiveKit Tool for Function / Webhook / Client
    """
    func_name = tool.get("name")
    if not func_name:
        raise ValueError("func_name is required for build_dynamic_tool_v2")
    func_description = tool.get("description")
    if not func_description:
        raise ValueError("func_description is required for build_dynamic_tool_v2")

    # get strategy
    strategy = ToolFactory.get_strategy(tool.get("type", "function"))
    raw_schema = strategy.construct_raw_schema(tool, {})

    async def handler(raw_arguments: dict[str, Any], context: RunContext):
        # support dynamic_variable assignment
        assignments = tool.get("assignments", [])
        for a in assignments:
            dyn_var = a.get("dynamic_variable")
            if dyn_var and dyn_var not in raw_arguments:
                raw_arguments[dyn_var] = a.get("value_path", "")
        logger.info("Executing tool %s with arguments: %s", func_name, raw_arguments)
        result = await strategy.execute(tool, raw_arguments)
        return result

    # return LiveKit function_tool
    return function_tool(
        handler,
        raw_schema=raw_schema
    )


class MinimaxAgent(Agent):
    def __init__(self, instructions, tools, llm_component):
        super().__init__(instructions=instructions, tools=tools, llm=llm_component)

    def llm_node(
        self,
        chat_ctx: llm.ChatContext,
        tools: list[FunctionTool | RawFunctionTool],
        model_settings: ModelSettings,
    ) -> (
        AsyncIterable[llm.ChatChunk | str]
        | Coroutine[Any, Any, AsyncIterable[llm.ChatChunk | str]]
        | Coroutine[Any, Any, str]
        | Coroutine[Any, Any, llm.ChatChunk]
        | Coroutine[Any, Any, None]
    ):
        return super().llm_node(chat_ctx, tools, ModelSettings(tool_choice="auto"))


llm_component = openai.LLM(
    base_url=OPENROUTER_LLM_CONFIG["base_url"],
    api_key=OPENROUTER_LLM_CONFIG["api_key"],
    model=OPENROUTER_LLM_CONFIG["model"],
    temperature=OPENROUTER_LLM_CONFIG["temperature"],
    tool_choice="auto",
    parallel_tool_calls=False
)
brainy-continent-98791
09/11/2025, 9:14 AM
brainy-continent-98791
09/11/2025, 9:15 AM
tall-airline-16756
09/11/2025, 9:15 AM
finish_reason is received.
• Try turning on the LK_OPENAI_DEBUG environment variable to log the flow of function calls and responses. It’s a great way to double-check that everything is lining up correctly (a short sketch follows after these bullets).
• Take a look at the ChatCompletionChunk stream to confirm all tool_call events are properly finalized with a finish_reason of "tool_calls" before sending back a response.
2. Behavior with tool_choice="auto"
When you set tool_choice to auto, the model decides on its own whether to use the tools based on the conversation. If you notice it skipping tools or acting unexpectedly, here are some things to consider:
• Ensure your tool descriptions and parameters are clear and well-defined. This really helps the model understand when it should bring a tool into the conversation.
• If you want the tool usage to be more predictable, you might try setting tool_choice="required" to make sure the model uses at least one tool, or specify a particular function to call (a sketch follows below).
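For instance, here’s a minimal sketch that reuses the openai.LLM call from your snippet; "required" follows the OpenAI-style tool_choice values, so treat it as an assumption to verify against your installed plugin version:

from livekit.plugins import openai

llm_component = openai.LLM(
    base_url=OPENROUTER_LLM_CONFIG["base_url"],
    api_key=OPENROUTER_LLM_CONFIG["api_key"],
    model=OPENROUTER_LLM_CONFIG["model"],
    temperature=OPENROUTER_LLM_CONFIG["temperature"],
    tool_choice="required",  # force at least one tool call per turn
    parallel_tool_calls=False
)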
Your dynamic tool registration logic looks solid from what I can see! Just a couple of friendly reminders to double-check:
• Confirm that the raw_schema for each tool is correctly built and matches the expected input/output format (see the sketch after this list).
• Make sure the handler function is processing all necessary arguments and returning a valid response.
• Verify that the strategy.execute method is properly handling the tool logic and returning the results you expect.
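For reference, a raw_schema passed to function_tool normally follows the OpenAI-style function definition. Here’s a minimal sketch; the weather tool and its parameters are purely made up for illustration:

raw_schema = {
    "name": "get_weather",  # hypothetical tool name
    "description": "Look up the current weather for a given city.",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name, e.g. Berlin"},
        },
        "required": ["city"],
    },
}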
If you’re still running into challenges, please don’t hesitate to share any specific logs or debug outputs. I’m here to help you troubleshoot further and get everything working smoothly!
tall-airline-16756
09/11/2025, 9:15 AM
• Turn on the LK_OPENAI_DEBUG environment variable to see detailed logs of the function call and response flow.
• Dynamic Tool Registration: Your method of dynamically registering tools using @function_tool is spot on! Just be sure to update the tool list dynamically with agent.update_tools() before making any LLM requests (see the sketch after this list). Also, double-check that each tool name and schema is unique and properly defined to avoid any conflicts.
• Tool Choice Behavior: When tool_choice is set to auto, the model decides on its own whether to use the tools based on the conversation context. If you want to ensure a tool is always used, you might want to set tool_choice to required, which forces at least one tool call.
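Following up on the update_tools() note above, here’s a minimal sketch of refreshing the tool list at runtime; build_dynamic_tool_v2 and tool_configs come from your own code, and awaiting agent.update_tools() is an assumption to double-check against your installed livekit-agents version:

# Rebuild the LiveKit tools from the dynamic config and swap them in
# before the next LLM request is made.
new_tools = [build_dynamic_tool_v2(cfg) for cfg in tool_configs]
await agent.update_tools(new_tools)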
Please feel free to reach out if you have any more questions or if there's anything else you'd like to explore in your setup. I'm here to help!
brainy-continent-98791
09/11/2025, 9:23 AM
tall-airline-16756
09/11/2025, 9:24 AM
tool_choice parameter in the LiveKit OpenAI plugin.
Here’s a friendly guide to get you started:
1. Set tool_choice to `auto`: This lets the model decide on its own whether to use a tool or provide a direct response based on what it understands from the input and available tools. This is the default setting, but you can also specify it explicitly when you initialize the LLM:
from livekit.plugins import openai

llm_component = openai.LLM(
    model="gpt-4o-mini",
    tool_choice="auto",  # Allows the model to decide
    parallel_tool_calls=False
)
2. Make Sure Tool Descriptions Are Clear: The model depends on the tool descriptions to know when to use them. Clear and specific descriptions help avoid unnecessary tool calls and make the interaction smoother.
3. Override Per Request (Optional): If you want to tweak this behavior for certain requests, you can override the tool_choice parameter right in the chat() method for that particular conversation (a sketch follows below).
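Here’s a minimal sketch of that per-request override; chat_ctx and tools stand in for your existing chat context and tool list, and the exact keyword arguments of chat() are an assumption to verify against your installed plugin version:

# Inside an async function: force a tool call for this one turn only,
# leaving the LLM's default tool_choice ("auto") untouched elsewhere.
stream = llm_component.chat(
    chat_ctx=chat_ctx,
    tools=tools,
    tool_choice="required",
)
async for chunk in stream:
    ...  # handle each ChatChunk as it streams back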
With these steps, your model should feel confident deciding when to invoke a tool or just reply directly. If anything feels unclear or you’d like more help, please don’t hesitate to reach out—I’m here for you!
Sources: Telnyx LLM integration guide | LiveKit Docs | livekit-plugins/livekit-plugins-openai/livekit/plugins/openai/llm.py | OpenAI LLM integration guide | LiveKit Docs | xAI LLM integration guide | LiveKit Docs | Together AI LLM integration guide | LiveKit Docs
brainy-continent-98791
09/11/2025, 9:28 AM
tall-airline-16756
09/11/2025, 9:28 AM
brainy-continent-98791
09/11/2025, 9:29 AM