# ask-ai
w
Is there any way to have the agent not speak if an LLM response had a tool call?
t
Thanks for reaching out, Alex. I'm an AI assistant and I'm here to help. Give me a moment while I research this topic.
If you want your agent to stay quiet whenever an LLM response includes a tool call, a good way is to override the `llm_node` method in your agent class. This lets you keep an eye on the response chunks for any tool calls and skip speaking when they pop up. Here's a simple way to think about it:

- Override the `llm_node` method so you can track tool calls with a flag.
- Whenever a tool call shows up in a response chunk, stop the agent from speaking that part.
- Once the tool call is done, let the agent speak as usual.

If you'd like me to walk you through the details or if anything's unclear, just give me a shout!
w
Can you give me a code example of how to do that?
t
Here’s a simple example of how you can set up the agent so it stays quiet whenever a tool call happens:
```python
from livekit.agents import Agent
from livekit.agents.llm import ChatChunk
from livekit.plugins import deepgram, openai, silero


class FunctionAgent(Agent):
    """A LiveKit agent that uses MCP tools but doesn't speak on tool calls."""

    def __init__(self):
        super().__init__(
            instructions="You are a helpful assistant.",
            stt=deepgram.STT(),
            llm=openai.LLM(model="gpt-4"),
            tts=openai.TTS(),
            vad=silero.VAD.load(),
            allow_interruptions=True,
        )

    async def llm_node(self, chat_ctx, tools, model_settings):
        """Keep an eye out for tool calls and stay silent when they happen."""
        tool_call_detected = False
        async for chunk in super().llm_node(chat_ctx, tools, model_settings):
            # Chunks can be plain strings or ChatChunk objects, and delta may be
            # empty, so guard before reading tool_calls.
            is_tool_call_chunk = (
                isinstance(chunk, ChatChunk)
                and chunk.delta is not None
                and chunk.delta.tool_calls
            )
            if is_tool_call_chunk:
                tool_call_detected = True
                # Skip speaking if a tool call is detected
                continue
            if tool_call_detected:
                # First chunk after the tool call: reset the flag
                tool_call_detected = False
            yield chunk
```
Quick rundown:

- The `llm_node` method watches each response chunk to spot tool calls.
- We use a flag (`tool_call_detected`) to remember when a tool call happens.
- When it does, the agent skips speaking that part of the response.

If you want me to walk you through this or help with anything else, just let me know!
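For context, here's a minimal sketch of how an agent like this is usually wired into a worker entrypoint; the `entrypoint` function and the bare `AgentSession()` are placeholders, so adapt them to your own setup:

```python
from livekit import agents
from livekit.agents import AgentSession, JobContext, WorkerOptions


async def entrypoint(ctx: JobContext):
    # Connect to the room, then run FunctionAgent inside an AgentSession
    await ctx.connect()
    session = AgentSession()
    await session.start(agent=FunctionAgent(), room=ctx.room)


if __name__ == "__main__":
    agents.cli.run_app(WorkerOptions(entrypoint_fnc=entrypoint))
```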
w
How do I import ChatChunk?
t
Hey there! To bring `ChatChunk` into your LiveKit agent setup, here's how you can do it.

For Python:

```python
from livekit.agents.llm import ChatChunk
```

For JavaScript/TypeScript, `ChatChunk` lives in the `llm` namespace of the `@livekit/agents` package:

```typescript
import { llm } from '@livekit/agents';

// refer to it as llm.ChatChunk
```

Just make sure you've got the right LiveKit package added to your project. If you run into any hiccups or want a hand with anything else, I'm here to help!

Sources: agents/src/llm/index.ts | livekit-agents/livekit/agents/llm/__init__.py
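If it helps to see the import in use, here's a small standalone check (the `is_tool_call_chunk` helper name is just for illustration) that mirrors the test from the `llm_node` override above:

```python
from livekit.agents.llm import ChatChunk


def is_tool_call_chunk(chunk) -> bool:
    """Return True when a streamed chunk carries tool calls."""
    return (
        isinstance(chunk, ChatChunk)
        and chunk.delta is not None
        and bool(chunk.delta.tool_calls)
    )
```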
w
current_speech.done() returns False even when the agent isn't speaking
t
Hey there! Just a quick heads-up about the `current_speech.done()` method: sometimes it might return `False` even when the agent isn't actually speaking. Here's why that can happen:

1. Pending tasks: the underlying `Future` that tracks whether the speech is done might still be waiting on some tasks or generation steps to finish.
2. Interrupted speech: if the speech got interrupted, it might take a moment before it officially marks itself as done, because it's still cleaning up active tasks.
3. Unmarked completion: the `_mark_done()` method, which updates the `done()` status once everything's finished or interrupted, hasn't been called yet.

Here's how you can handle this smoothly:

- First, check whether there's an active speech handle before you look at `done()`:
```python
if agent_session.current_speech is None:
    # Agent isn't speaking
    pass
```

- If you need to wait for the speech to finish, just do this:

```python
await agent_session.current_speech.wait_for_playout()
```

- And if you want to clear any lingering tasks that might keep the speech active, try this:

```python
if agent_session.current_speech:
    agent_session.current_speech.interrupt()
```

Feel free to reach out if you want me to explain anything further or help out with this!

Sources: livekit-agents/livekit/agents/voice/agent_session.py
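If it's useful, here's a small pair of helpers that combine those checks; the names `agent_is_speaking` and `wait_until_idle` are just for illustration, and they assume `current_speech` returns a speech handle or `None` as in the snippets above:

```python
from livekit.agents import AgentSession


def agent_is_speaking(session: AgentSession) -> bool:
    """Treat the agent as speaking only if there's an active, unfinished speech handle."""
    speech = session.current_speech
    return speech is not None and not speech.done()


async def wait_until_idle(session: AgentSession) -> None:
    """Wait for any in-flight speech to fully play out."""
    speech = session.current_speech
    if speech is not None and not speech.done():
        await speech.wait_for_playout()
```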