aloof-scooter-47164
09/11/2025, 1:41 AM
tool_execution_started_cb=tool_execution_started_cb,
tool_execution_completed_cb=tool_execution_completed_cb,
These are part of the agent_activity.py file. I want to be able to intercept when we're going to start a function tool call.
tall-airline-16756
09/11/2025, 1:42 AM
Those are the tool_execution_started_cb and tool_execution_completed_cb callbacks! You can easily override them by providing your own functions when you call perform_tool_executions. These callbacks are a great way to tap into the tool execution process and handle events just the way you need.
Here’s a simple example to get you started:
def my_custom_tool_started(fnc_call):
    print(f"Starting to execute: {fnc_call.name}")

def my_custom_tool_completed(out):
    if out.fnc_call_out:
        print(f"Tool {out.fnc_call.name} completed with output: {out.fnc_call_out.output}")
exe_task, tool_output = perform_tool_executions(
    session=self._session,
    speech_handle=speech_handle,
    tool_ctx=tool_ctx,
    tool_choice=model_settings.tool_choice,
    function_stream=fnc_stream,
    tool_execution_started_cb=my_custom_tool_started,
    tool_execution_completed_cb=my_custom_tool_completed,
)
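To see what those two hooks do end to end, here is a framework-free sketch of the started/completed callback flow. run_tools, the dict-shaped calls, and the callback signatures are illustrative stand-ins, not the real perform_tool_executions internals:

```python
# Illustrative stand-in for a tool-execution loop with started/completed hooks.
# Not the real livekit-agents implementation, just the callback pattern.
def run_tools(calls, tools, started_cb=None, completed_cb=None):
    outputs = []
    for call in calls:
        if started_cb:
            started_cb(call)  # fires right before the tool runs
        result = tools[call["name"]](**call["arguments"])
        out = {"call": call, "output": result}
        if completed_cb:
            completed_cb(out)  # fires with the tool's output
        outputs.append(out)
    return outputs

log = []
outs = run_tools(
    [{"name": "add", "arguments": {"a": 2, "b": 3}}],
    {"add": lambda a, b: a + b},
    started_cb=lambda c: log.append("start:" + c["name"]),
    completed_cb=lambda o: log.append("done:" + str(o["output"])),
)
print(log)  # ['start:add', 'done:5']
```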
With these callbacks, you can log important events, update your UI, or handle outputs exactly how you want. If anything’s unclear or you’d like me to walk you through it step-by-step, just let me know—I’m here to help!
Sources: examples/voice_agents/silent_function_call.py | AI Agents Framework | livekit/livekit_composite | DeepWiki | plugins/openai/src/llm.ts
aloof-scooter-47164
09/11/2025, 1:43 AM
perform_tool_executions is not a method I handle myself, right?
tall-airline-16756
09/11/2025, 1:44 AM
That's right: perform_tool_executions is handled internally by the framework, not something you call yourself. Instead, you can define your tools as FunctionTool and listen for events like function_tools_executed, which gives you info about the tools that ran and their results. I'm here to help if you'd like me to walk you through how to use these features!
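To make the event-driven route concrete, here is a minimal framework-free sketch of the listener pattern. ToolEventBus and the payload dict are hypothetical stand-ins; the real AgentSession event API and payload types may differ:

```python
# Tiny stand-in for an event emitter with on()/emit(), illustrating how a
# handler for "function_tools_executed" would observe completed tool runs.
class ToolEventBus:
    def __init__(self):
        self._handlers = {}

    def on(self, event, handler):
        self._handlers.setdefault(event, []).append(handler)

    def emit(self, event, payload):
        for handler in self._handlers.get(event, []):
            handler(payload)

bus = ToolEventBus()
seen = []
# The handler receives info about the tool that ran and its result
bus.on("function_tools_executed", lambda ev: seen.append(ev["name"]))
bus.emit("function_tools_executed", {"name": "get_weather", "output": "sunny"})
print(seen)  # ['get_weather']
```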
Sources: livekit-agents/livekit/agents/voice/agent_activity.py | agents/src/llm/tool_context.ts
aloof-scooter-47164
09/11/2025, 1:44 AM
tall-airline-16756
09/11/2025, 1:45 AM
The hook for that is the tool_execution_started_cb callback. It is triggered right when a tool execution begins, giving you the chance to observe or respond to the tool call in real time.
That said, since perform_tool_executions is handled internally, you won't be able to pass this callback directly unless you have access to the internal pipeline. But there is another way: you can override the llm_node method in your custom agent class. This lets you peek into the raw llm.ChatChunk as it's produced by the LLM, including any tool calls before they actually run.
If you'd like, I'd be happy to walk you through how to implement the llm_node override. Just let me know!
aloof-scooter-47164
09/11/2025, 1:47 AM
tall-airline-16756
09/11/2025, 1:47 AM
To detect tool calls, you can override the llm_node method in your custom agent class. This way, you'll be able to process each ChatChunk that the LLM emits and spot when a tool call happens.
Here's a friendly example to guide you:
from livekit.agents import Agent, ChatChunk
from livekit.plugins import openai, deepgram, silero

class FunctionAgent(Agent):
    def __init__(self):
        super().__init__(
            instructions="You are a helpful assistant.",
            stt=deepgram.STT(),
            llm=openai.LLM(model="gpt-4o"),
            tts=openai.TTS(),
            vad=silero.VAD.load(),
            allow_interruptions=True,
        )

    async def llm_node(self, chat_ctx, tools, model_settings):
        tool_call_detected = False
        # Stream responses from the default LLM node
        async for chunk in super().llm_node(chat_ctx, tools, model_settings):
            # Check if this is a ChatChunk with a tool call
            if isinstance(chunk, ChatChunk) and chunk.delta and chunk.delta.tool_calls:
                if not tool_call_detected:
                    # Say a message once when the first tool call is detected
                    self.session.say("Sure, I'll check that for you.")
                    tool_call_detected = True
            yield chunk  # Forward the chunk to the next stage
This approach lets you:
• Spot tool calls right as they happen by looking at chunk.delta.tool_calls.
• Take custom actions, like logging or gently notifying users, whenever a tool call is found.
• Keep the default LLM behavior intact by forwarding chunks with yield.
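The forward-and-observe pattern above can be distilled to a plain async-generator sketch. The dict chunks here stand in for llm.ChatChunk and are not the real types:

```python
import asyncio

async def source():
    # Stand-in for the default llm_node stream: plain text chunks plus one
    # chunk flagged as a tool call.
    for item in [{"text": "hi"}, {"tool_calls": ["get_weather"]}, {"text": "!"}]:
        yield item

async def tap(stream, on_first_tool_call):
    """Forward every chunk unchanged, firing a callback on the first tool call."""
    fired = False
    async for chunk in stream:
        if chunk.get("tool_calls") and not fired:
            on_first_tool_call(chunk)
            fired = True
        yield chunk  # forward unchanged, like the llm_node override

events = []

async def main():
    return [c async for c in tap(source(), lambda c: events.append("tool call"))]

chunks = asyncio.run(main())
print(events)  # ['tool call']
```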
If you'd like, I'm here to help you with any part of this implementation; just let me know!
Sources: agent.py | Docs::Agents | Modifying LLM output before TTS | LiveKit Docs
aloof-scooter-47164
09/11/2025, 1:50 AM
tall-airline-16756
09/11/2025, 1:51 AM
Not seeing tool calls come through in your llm_node override can be a bit frustrating. Let's walk through some common reasons this might happen and how you can address them:
• LLM Response Does Not Contain Tool Calls: First, please check that your LLM is set up to use tools. Make sure the tool_choice parameter is correctly configured and that your tools are properly registered. If tool_choice is set to "none", the LLM won't generate any tool calls.
• Stream Parsing Logic: Keep in mind that the llm_node override needs to handle streaming incrementally. Tool calls come in chunks: the first chunk includes the tool name and ID, and the following chunks stream the arguments. Double-check that your parsing logic accounts for this pattern.
• Missing or Empty tool_calls Field: If the chunk.delta.tool_calls field is missing or empty, detection won't work. It's a good idea to verify that the LLM is indeed generating tool calls based on your inputs and the tools available.
• Tool Registration: Lastly, ensure your tools are properly registered and passed into the llm_node. Without registered tools, the LLM won't include tool calls in its output.
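The incremental pattern in the second bullet can be sketched without the framework: the first delta carries the call id and name, and later deltas append argument text. The dict deltas below are simplified stand-ins for the real chunk types:

```python
import json

def accumulate_tool_calls(chunks):
    """Merge streamed tool-call deltas into complete calls keyed by id."""
    calls = {}
    for delta in chunks:
        call = calls.setdefault(delta["id"], {"name": "", "arguments": ""})
        if delta.get("name"):  # the first chunk names the tool
            call["name"] = delta["name"]
        call["arguments"] += delta.get("arguments", "")
    # parse the accumulated JSON argument strings once the stream is done
    return {
        cid: {"name": c["name"], "arguments": json.loads(c["arguments"])}
        for cid, c in calls.items()
    }

stream = [
    {"id": "call_1", "name": "get_weather", "arguments": '{"loc'},
    {"id": "call_1", "arguments": 'ation": "Tokyo"}'},
]
merged = accumulate_tool_calls(stream)
print(merged)  # {'call_1': {'name': 'get_weather', 'arguments': {'location': 'Tokyo'}}}
```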
To help you debug, you might find it useful to log the incoming chunks inside your llm_node override. This way, you can see exactly what's coming through and check the structure:
async for chunk in super().llm_node(chat_ctx, tools, model_settings):
    print(chunk)  # Log the chunk to inspect its content
    if isinstance(chunk, ChatChunk) and chunk.delta and chunk.delta.tool_calls:
        print("Tool call detected:", chunk.delta.tool_calls)
    yield chunk
Please feel free to reach out if you'd like me to help you troubleshoot further; I'm here to support you!
aloof-scooter-47164
09/11/2025, 1:53 AM
tall-airline-16756
09/11/2025, 1:54 AM
Here are a few things to check when tool calls aren't coming through:
• Tool Registration: Make sure your tools are registered and passed in via the tools parameter.
• Verify Tool Availability: The tool_choice parameter should be set to "auto" (which is the default) so the model can invoke tools whenever needed. Also, double-check that tools are sent to the OpenAI Realtime API using the session update mechanism.
• Stream Parsing: Tool calls come through as tool_calls deltas in the LLM response stream. Be sure your llm_node override is set up to handle these chunks properly. It's a great idea to log the incoming chunks to see their structure and content clearly.
• Connection Stability: A stable connection to the OpenAI Realtime API is key. Connection hiccups can interfere with detecting tool calls, so keeping an eye on that helps a lot.
To help you debug, here's a handy snippet to log the incoming chunks in your llm_node override:
async for chunk in super().llm_node(chat_ctx, tools, model_settings):
    print(chunk)  # Inspect chunk content
    if isinstance(chunk, ChatChunk) and chunk.delta and chunk.delta.tool_calls:
        print("Tool call detected:", chunk.delta.tool_calls)
    yield chunk
If you have any questions or need a hand with this, please don't hesitate to reach out. I'm here to help!
Sources: livekit-plugins/livekit-plugins-openai/livekit/plugins/openai/stt.py | agent.py | examples/src/basic_tool_call_agent.ts