# ask-ai
a
The text, after being tokenized, is not directly sent to the TTS. It waits to receive the next response as well before sending.
t
Thanks for reaching out, Aditya. I'm an AI assistant and I'm here to help. Give me a moment while I research this topic.
LiveKit does not send each tokenized text unit to the TTS engine as soon as it arrives. Instead, it buffers the tokens and segments them, by default on a sentence basis, to produce more natural, smoother speech output. This means the system waits for the next response(s) until a meaningful segment, such as a complete sentence, is available before pushing that batch to TTS for synthesis. The buffering is typically governed by a configurable minimum word count (for example, 8 words) or by sentence boundaries.

If you need more immediate TTS output, LiveKit supports streaming synthesis, where audio segments are processed and sent as they are generated. You can control this behavior by configuring the tokenizer (word-based or sentence-based) and by using flush operations, which push buffered text to TTS regardless of whether a complete sentence has formed. The overall flow is configurable, letting you balance low latency against natural-sounding speech depending on your requirements. If you have a specific TTS provider, use case, or configuration in mind, I can provide more targeted guidance.

Sources: Live Conversations with AI using ChatGPT and WebRTC | livekit-agents/livekit/agents/voice/agent_activity.py | agents/src/tts/tts.ts | AI voice agents | LiveKit Docs
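To make the buffering idea concrete, here is a minimal, standalone Python sketch. It is not LiveKit's actual implementation; the `SentenceBuffer` class and its `push`/`flush` methods are invented purely for illustration. It only shows the behavior described above: streamed text accumulates until a sentence boundary (subject to a minimum word count), and a flush forces out whatever is pending.

```python
import re

class SentenceBuffer:
    """Illustrative only: accumulate streamed text and emit sentence-sized
    segments, roughly mirroring the buffer-then-segment behavior described above."""

    def __init__(self, min_words: int = 8):
        self.min_words = min_words  # avoid emitting tiny fragments like "Hi."
        self._pending = ""

    def push(self, text: str) -> list[str]:
        """Add a streamed chunk; return any segments that are ready for TTS."""
        self._pending += text
        segments = []
        # Look for sentence-ending punctuation followed by whitespace.
        while (m := re.search(r"[.!?]\s", self._pending)):
            candidate = self._pending[: m.end()].strip()
            if len(candidate.split()) >= self.min_words:
                segments.append(candidate)
                self._pending = self._pending[m.end():]
            else:
                break  # too short: keep buffering until more text arrives
        return segments

    def flush(self) -> list[str]:
        """Emit whatever is buffered, complete sentence or not."""
        pending, self._pending = self._pending.strip(), ""
        return [pending] if pending else []


buf = SentenceBuffer(min_words=4)
print(buf.push("Hello, how are "))       # [] -- no sentence boundary yet
print(buf.push("you today? I'm fine."))  # ['Hello, how are you today?']
print(buf.flush())                       # ["I'm fine."] -- forced out early
```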
a
Give me an example of this.
t
Hey there! Let me walk you through how LiveKit handles text-to-speech buffering. Think of it like pacing a smooth conversation, but in code.

• *Default (buffered by sentence or word count):*

```js
// JavaScript example for sentence-based buffering
const stream = tts.stream();
stream.pushText("Hello, how are you?");          // Not sent yet -- still buffering.
stream.pushText(" I'm fine, thanks for asking."); // Now there's a full sentence,
// so the segment is handed to TTS, producing natural-sounding speech.
```

• *Forcing immediate output (flush):*

```js
// Push several short fragments
stream.pushText("This is an");
stream.pushText(" example");
// Need audio right now? Flush the buffer:
stream.flush(); // All buffered text goes straight to TTS
```

• *Equivalent pattern in Python, ending the input stream:*

```python
# For engines using async tokenization streams
word_stream.push_text("This is the first part of the message.")
word_stream.push_text(" And here is the second part.")
word_stream.end_input()  # Sends all completed token batches to TTS
```

This approach lets you fine-tune how the speech sounds: let it flow naturally in full sentences, or push it out quickly. The `flush()` method is your friend when you want to minimize waiting time. Hope this helps make your TTS experience super smooth!

Sources: plugins/elevenlabs/src/tts.ts | tests/test_tts.py
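As a complement to the sentence-based examples above, here is a small standalone Python sketch of the word-count-based alternative mentioned earlier. Again, this is not LiveKit's API; `chunk_by_word_count` is a made-up helper that just illustrates grouping incoming tokens into chunks of at least N words, with a final flush for whatever remains.

```python
from collections.abc import Iterable, Iterator

def chunk_by_word_count(tokens: Iterable[str], min_words: int = 8) -> Iterator[str]:
    """Illustrative only: group a stream of text tokens into chunks of at
    least `min_words` words, the word-based alternative to sentence buffering."""
    words: list[str] = []
    for token in tokens:
        words.extend(token.split())
        if len(words) >= min_words:
            yield " ".join(words)
            words = []
    if words:  # final flush: emit whatever is left over
        yield " ".join(words)


llm_tokens = ["LiveKit buffers ", "tokenized text ", "before handing ",
              "it to the ", "TTS engine for ", "smoother speech."]
for chunk in chunk_by_word_count(llm_tokens, min_words=4):
    print(repr(chunk))
# 'LiveKit buffers tokenized text'
# 'before handing it to the'
# 'TTS engine for smoother speech.'
```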