creamy-tiger-14805
08/29/2025, 5:04 AM
tall-airline-16756
08/29/2025, 5:05 AM
1. Subclass the Agent class.
2. Override the tts_node method to check, based on the LLM output, whether TTS should run.
3. If you want, you can also tweak the llm_node
method to help prepare or signal this decision.
4. Return either a modified or an empty stream from tts_node to control audio synthesis.
This way, you can flexibly skip TTS when certain conditions are met. If you'd like help walking through the implementation, just let me know!
Sources: livekit-agents/livekit/agents/voice/generation.py | Pipeline nodes and hooks | LiveKit Docs
creamy-tiger-14805
08/29/2025, 5:19 AM
tts_node, any examples for the same?
tall-airline-16756
08/29/2025, 5:20 AM
To customize tts_node in LiveKit, subclass the Agent class and override its tts_node method. This gives you the flexibility to tweak the text before it reaches the TTS engine, modify the audio output, or even plug in a custom TTS engine.
Here are a few examples to get you started:
Example 1: Simple Text Replacement
Sources: pipeline-tts/tts_node.py | Agent speech and audio | LiveKit Docs | examples/voice_agents/structured_output.py
creamy-tiger-14805
08/29/2025, 8:07 AM
tall-airline-16756
08/29/2025, 8:08 AM