# ask-ai
@millions-hair-24956 When I tell the bot to do a tool call, it gives weird output showing the tool call. How do I make sure it doesn't say that?
To keep the bot from speaking tool call details aloud, you can use the `before_tts_cb` callback to filter out unwanted text before it is sent to the text-to-speech system. This callback lets you modify or strip out specific parts of the text that should not be spoken. According to the LiveKit documentation, you can define a `before_tts_cb` function that processes the text before it is synthesized, removing or altering any tool call information. Would you like more details on how to implement this?

Sources:
- LiveKit documentation
@millions-hair-24956 show me an example
Certainly! Here's an example of how you can implement the `before_tts_cb` callback to filter out tool call details from the text before it is sent to the text-to-speech system:
```python
from typing import AsyncIterable

from livekit.agents.pipeline import VoicePipelineAgent
from livekit.plugins import cartesia, deepgram, openai, silero


def before_tts_cb(assistant: VoicePipelineAgent, text: str | AsyncIterable[str]):
    if isinstance(text, str):
        # Remove tool call details from the text
        return text.replace("tool_call_details", "")
    else:
        async def process_stream():
            async for chunk in text:
                # Remove tool call details from each chunk
                yield chunk.replace("tool_call_details", "")

        return process_stream()


# Initialize the VoicePipelineAgent with the before_tts_cb
agent = VoicePipelineAgent(
    vad=silero.VAD.load(),
    stt=deepgram.STT(),
    llm=openai.LLM(),
    tts=cartesia.TTS(),
    before_tts_cb=before_tts_cb,
)

# Start the agent for a particular room and participant
agent.start(room, participant)
```
In this example, the `before_tts_cb` function checks whether the text is a string or an `AsyncIterable`. It then removes any occurrences of "tool_call_details" from the text. You can replace "tool_call_details" with the specific text or pattern you want to filter out. Would you like more details on this implementation?

Sources:
- LiveKit documentation
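For illustration, here is a sketch of a more robust filter that uses a regular expression instead of a fixed substring, assuming the tool call text your LLM emits follows a recognizable pattern. The `TOOL_CALL_PATTERN` and `strip_tool_calls` names are hypothetical, not part of the LiveKit API:

```python
import re
from typing import AsyncIterable

from livekit.agents.pipeline import VoicePipelineAgent

# Hypothetical pattern: adjust it to match whatever tool-call text your LLM actually emits.
TOOL_CALL_PATTERN = re.compile(r"<tool_call>.*?</tool_call>", re.DOTALL)


def strip_tool_calls(text: str) -> str:
    # Drop anything matching the tool-call pattern before it reaches TTS.
    return TOOL_CALL_PATTERN.sub("", text)


def before_tts_cb(assistant: VoicePipelineAgent, text: str | AsyncIterable[str]):
    if isinstance(text, str):
        return strip_tool_calls(text)

    async def process_stream():
        async for chunk in text:
            # Note: a pattern that spans two streamed chunks will not be caught here;
            # buffer chunks first if your tool-call markup can be split mid-stream.
            yield strip_tool_calls(chunk)

    return process_stream()
```

A cleaner long-term fix is usually to prompt the LLM not to narrate its tool calls at all, and treat the callback as a safety net.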
@millions-hair-24956 what does this mean? {"message": "Error in _stream_synthesis_task\nTraceback (most recent call last):\n File \"/Users/sethiankho/Code/finosu/webapp/python/omnibot/venv/lib/python3.13/site-packages/livekit/agents/utils/log.py\", line 16, in async_fn_logs\n return await fn(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/Users/sethiankho/Code/finosu/webapp/python/omnibot/venv/lib/python3.13/site-packages/livekit/agents/pipeline/agent_output.py\", line 273, in _stream_synthesis_task\n async for seg in tts_source:\n ...<9 lines>...\n tts_stream.push_text(seg)\n File \"/Users/sethiankho/Code/finosu/webapp/python/omnibot/venv/lib/python3.13/site-packages/livekit/agents/utils/aio/itertools.py\", line 47, in tee_peer\n item = await iterator.__anext__()\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/Users/sethiankho/Code/finosu/webapp/python/omnibot/venv/lib/python3.13/site-packages/livekit/agents/pipeline/pipeline_agent.py\", line 1102, in _llm_stream_to_str_generator\n async for chunk in stream:\n ...<7 lines>...\n yield content\n File \"/Users/sethiankho/Code/finosu/webapp/python/omnibot/venv/lib/python3.13/site-packages/livekit/agents/llm/llm.py\", line 246, in __anext__\n raise exc from None\n File \"/Users/sethiankho/Code/finosu/webapp/python/omnibot/venv/lib/python3.13/site-packages/livekit/agents/llm/llm.py\", line 156, in _main_task\n return await self._run()\n ^^^^^^^^^^^^^^^^^\n File \"/Users/sethiankho/Code/finosu/webapp/python/omnibot/venv/lib/python3.13/site-packages/livekit/plugins/openai/llm.py\", line 736, in _run\n raise APIConnectionError(retryable=retryable) from e\nlivekit.agents._exceptions.APIConnectionError: Connection error.", "level": "ERROR", "name": "livekit.agents.pipeline", "pid": 57349, "job_id": "AJ_YKQHpE4q8N5S", "timestamp": "2025-02-27T203653.736714+00:00"}
The error message indicates an `APIConnectionError` occurred during the `_stream_synthesis_task` in the LiveKit pipeline. This error typically arises when there is a connection issue with the API service, in this case most likely OpenAI. The traceback shows that the error was raised in the `_run` method of the OpenAI LLM plugin, suggesting a failure to establish or maintain a connection to the OpenAI service. To resolve this, check your network connection, ensure that your API keys are correct, and verify that the OpenAI service is operational. If the issue persists, consider implementing retry logic to handle transient connection errors. Would you like more details on troubleshooting this error?

Sources:
- https://livekit-users.slack.com/archives/C07FY8WHGPM/p1732345136.594869
- https://livekit-users.slack.com/archives/C07FY8WHGPM/p1727414645.070779
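For the suggested retry logic, a minimal generic sketch of exponential backoff with jitter around an async call is shown below. This is plain Python, not a LiveKit API; `with_retries` and the wrapped coroutine are placeholders:

```python
import asyncio
import random


async def with_retries(coro_factory, max_attempts: int = 3, base_delay: float = 1.0):
    """Retry an async operation with exponential backoff and jitter."""
    for attempt in range(1, max_attempts + 1):
        try:
            return await coro_factory()
        except Exception:  # narrow this to the connection errors you expect
            if attempt == max_attempts:
                raise
            delay = base_delay * (2 ** (attempt - 1)) + random.uniform(0, 0.5)
            await asyncio.sleep(delay)


# Example usage (placeholder coroutine):
# result = await with_retries(lambda: call_that_may_hit_a_transient_connection_error())
```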