tall-airline-16756
09/12/2025, 9:58 PM
• For ElevenLabs TTS: Use the chunk_length_schedule parameter to set your desired chunk sizes. Here's a simple example:
from livekit.plugins import elevenlabs
tts_instance = elevenlabs.TTS(
chunk_length_schedule=[50, 60, 70, 80] # Reduced chunk sizes for lower latency
)
By default, the chunk sizes are larger, but tweaking them like this can help lower latency and improve responsiveness.
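To build intuition for what a schedule like [50, 60, 70, 80] means, here is a rough, self-contained sketch of schedule-driven chunking (this is illustrative, not the ElevenLabs implementation, and the function name is made up): text is buffered until it reaches the current scheduled length, then emitted, advancing to the next schedule entry, with the last entry repeating.

```python
from typing import Iterable, Iterator, List

def schedule_chunks(text_pieces: Iterable[str], schedule: List[int]) -> Iterator[str]:
    """Yield chunks whose sizes follow `schedule`; the last entry repeats.

    Smaller early entries mean the first chunks are emitted sooner,
    which is why lowering the schedule values reduces time-to-first-audio.
    """
    buffer = ""
    idx = 0
    for piece in text_pieces:
        buffer += piece
        target = schedule[min(idx, len(schedule) - 1)]
        while len(buffer) >= target:
            yield buffer[:target]
            buffer = buffer[target:]
            idx += 1
            target = schedule[min(idx, len(schedule) - 1)]
    if buffer:
        yield buffer  # flush any remainder at end of stream
```

With a schedule of [10, 20], a 25-character input produces a 10-character chunk quickly, then the remaining 15 characters once the stream ends.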
• For other TTS plugins (like Rime): Chunk sizes are based on sentence boundaries. Here are a few ways you can manage this:
• Keep sentences shorter in your LLM output.
• Adjust punctuation within the text stream to influence chunking.
• Use a StreamAdapter combined with a tokenizer such as blingfire.SentenceTokenizer for more control over how text is segmented.
Here’s a quick example to illustrate:
from livekit.agents.tts import StreamAdapter
from livekit.agents.tokenize import blingfire
from livekit.plugins import rime

tts = rime.TTS(model="arcana", speaker="astra")
tokenizer = blingfire.SentenceTokenizer()
# Wrap the TTS so text is segmented into sentences before synthesis
tts_wrapped = StreamAdapter(tts=tts, sentence_tokenizer=tokenizer)
stream = tts_wrapped.stream()
• Custom Chunking: If you want full control, you can override the tts_node method in a custom Agent class and implement your own chunking logic. This approach works across all TTS providers.
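As a sketch of the kind of chunking logic you might place inside a tts_node override, here is a provider-agnostic helper that buffers streamed text and yields it in sentence-aligned chunks once a minimum size is reached. The function name, threshold, and sentence-ending set are illustrative assumptions, not part of the LiveKit API; inside a custom Agent you would wrap the incoming text stream with something like this before handing it to the default synthesis path.

```python
from typing import AsyncIterable, AsyncIterator

SENTENCE_ENDINGS = (".", "!", "?")

async def chunk_text(
    text: AsyncIterable[str], min_chunk_len: int = 50
) -> AsyncIterator[str]:
    """Buffer streamed text deltas and yield sentence-aligned chunks.

    A chunk is emitted once the buffer has reached `min_chunk_len`
    characters and ends at a sentence boundary; any remainder is
    flushed when the input stream ends.
    """
    buffer = ""
    async for delta in text:
        buffer += delta
        if len(buffer) >= min_chunk_len and buffer.rstrip().endswith(SENTENCE_ENDINGS):
            yield buffer
            buffer = ""
    if buffer.strip():
        yield buffer  # flush whatever is left at end of stream
```

Tuning min_chunk_len trades latency (smaller chunks reach the TTS sooner) against prosody (larger chunks give the voice more context per request).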
Please feel free to reach out if you’d like me to walk you through any of these steps or if you have any other questions. I’m here to help!
Sources: ElevenLabs integration guide | LiveKit Docs | examples/other/text-to-speech/rime_tts.py | pipeline-tts/short_replies_only.py | Docs::Agents | Pipeline nodes and hooks | LiveKit Docs