wooden-beard-26644 [08/18/2025, 5:14 PM]

tall-airline-16756 [08/18/2025, 5:14 PM]

tall-airline-16756 [08/18/2025, 5:15 PM]
You can override the llm_node method in your agent class. This lets you watch the response chunks for tool calls and skip speaking when they appear.
Here’s a simple way to think about it:
• Override the llm_node method so you can track tool calls with a flag.
• Whenever a tool call shows up in a response chunk, stop the agent from speaking that part.
• Once the tool call is done, let the agent speak as usual.
If you’d like me to walk you through the details or if anything’s unclear, just give me a shout!

wooden-beard-26644 [08/18/2025, 5:15 PM]

tall-airline-16756 [08/18/2025, 5:16 PM]
```python
from livekit.agents import Agent
from livekit.agents.llm import ChatChunk
from livekit.plugins import deepgram, openai, silero


class FunctionAgent(Agent):
    """A LiveKit agent that uses MCP tools but doesn't speak on tool calls."""

    def __init__(self):
        super().__init__(
            instructions="You are a helpful assistant.",
            stt=deepgram.STT(),
            llm=openai.LLM(model="gpt-4"),
            tts=openai.TTS(),
            vad=silero.VAD.load(),
            allow_interruptions=True,
        )

    async def llm_node(self, chat_ctx, tools, model_settings):
        """Keep an eye out for tool calls and stay silent when they happen."""
        tool_call_detected = False
        async for chunk in super().llm_node(chat_ctx, tools, model_settings):
            if isinstance(chunk, ChatChunk) and chunk.delta and chunk.delta.tool_calls:
                # Skip speaking while a tool call is streaming in
                tool_call_detected = True
                continue
            if tool_call_detected:
                # The tool-call chunks have passed; resume speaking as usual
                tool_call_detected = False
            yield chunk
```
Quick rundown:
• The llm_node method watches each response chunk to spot tool calls.
• We use a flag (tool_call_detected) to remember when a tool call happens.
• When it does, the agent skips speaking that part of the response.
If you want me to walk you through this or help with anything else, just let me know!

wooden-beard-26644 [08/18/2025, 5:18 PM]

tall-airline-16756 [08/18/2025, 5:18 PM]
To import ChatChunk into your LiveKit agent setup, here’s how you can do it:
For Python:

```python
from livekit.agents.llm import ChatChunk
```

For JavaScript/TypeScript:

```typescript
import { ChatChunk } from '@livekit/agents-llm';
```
Just make sure you’ve got the right LiveKit package added to your project. If you run into any hiccups or want a hand with anything else, I’m here to help!
Sources: agents/src/llm/index.ts | livekit-agents/livekit/agents/llm/__init__.py

wooden-beard-26644 [08/18/2025, 5:37 PM]

tall-airline-16756 [08/18/2025, 5:37 PM]
This is a known quirk of the current_speech.done() method: sometimes it returns False even when the agent isn’t actually speaking. Here’s why that can happen:
1. Pending Tasks: The underlying Future that tracks whether the speech is done might still be waiting on some tasks or generation steps to finish.
2. Interrupted Speech: If the speech got interrupted, it might take a moment before it officially marks itself as done because it’s still cleaning up active tasks.
3. Unmarked Completion: The _mark_done() method, which updates the done() status once everything finishes or is interrupted, hasn’t been called yet.
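The lifecycle above can be modeled with a tiny stand-in (FakeSpeechHandle is a hypothetical simplification, not the real LiveKit class): done() just reads an internal Future, which only resolves once _mark_done() is called.

```python
import asyncio


class FakeSpeechHandle:
    """Hypothetical, simplified model of how done() tracks an internal Future."""

    def __init__(self) -> None:
        self._done_fut: asyncio.Future = asyncio.get_running_loop().create_future()

    def done(self) -> bool:
        # Reflects the Future's state, not whether audio is audibly playing
        return self._done_fut.done()

    def _mark_done(self) -> None:
        # Called internally once playout and cleanup actually finish
        if not self._done_fut.done():
            self._done_fut.set_result(None)


async def main():
    handle = FakeSpeechHandle()
    before = handle.done()   # still False: completion hasn't been marked yet
    handle._mark_done()
    after = handle.done()    # True only after _mark_done()
    return before, after


before, after = asyncio.run(main())
print(before, after)  # False True
```

This is why polling done() right after audio stops can still return False: the Future flips only when the internal bookkeeping catches up.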
Here’s how you can handle this smoothly:
• First, check whether there’s an active speech handle before you look at done():

```python
if agent_session.current_speech is None:
    # Agent isn't speaking
    pass
```
• If you need to wait for the speech to finish, just do this:

```python
await agent_session.current_speech.wait_for_playout()
```
• And if you want to clear any lingering tasks that might keep the speech active, try this:

```python
if agent_session.current_speech:
    agent_session.current_speech.interrupt()
```
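Putting the checks together, a small helper can combine the None check and done() into one call (agent_is_speaking is a hypothetical name, not a LiveKit API; the stub classes below exist only to exercise it without a real AgentSession):

```python
def agent_is_speaking(session) -> bool:
    # Speaking only if there's a live, unfinished speech handle
    speech = session.current_speech
    return speech is not None and not speech.done()


# Minimal stubs standing in for an AgentSession and its speech handle
class StubSpeech:
    def __init__(self, finished: bool) -> None:
        self._finished = finished

    def done(self) -> bool:
        return self._finished


class StubSession:
    def __init__(self, speech) -> None:
        self.current_speech = speech


idle = agent_is_speaking(StubSession(None))
talking = agent_is_speaking(StubSession(StubSpeech(finished=False)))
finished = agent_is_speaking(StubSession(StubSpeech(finished=True)))
print(idle, talking, finished)  # False True False
```

Wrapping both conditions avoids the trap of calling done() on a None handle, and keeps the "is the agent speaking?" question in one place.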
Feel free to reach out if you want me to explain anything further or help out with this!
Sources: livekit-agents/livekit/agents/voice/agent_session.py