able-jackal-84140 (07/14/2025, 5:35 AM)
tall-airline-16756 (07/14/2025, 5:35 AM)
tall-airline-16756 (07/14/2025, 5:36 AM)
able-jackal-84140 (07/14/2025, 5:36 AM)
tall-airline-16756 (07/14/2025, 5:37 AM)
from livekit.agents import tokenize
from livekit.plugins.google import tts
# For Google TTS (example): a smaller min_sentence_len flushes text to the TTS sooner
custom_tokenizer = tokenize.basic.SentenceTokenizer(min_sentence_len=5)
tts_service = tts.TTS(tokenizer=custom_tokenizer)
• JavaScript: set your custom buffer when initializing the tokenizer:
// Replace YOUR_CUSTOM_COUNT with your chosen buffered word count
const tokenizer = new tokenize.basic.SentenceTokenizer(undefined, YOUR_CUSTOM_COUNT);
• Pro tip: If you want to make a universal change, you can directly modify the BUFFERED_WORDS_COUNT constant in the TTS plugin's source code; this sets the buffer size globally for that provider.
Just a heads up—buffer size is all about finding that sweet spot. A lower value means less delay but might sound a bit choppy, while a higher value gives you smoother, more natural speech but adds a bit more waiting time.
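If it helps to see that tradeoff concretely, here's a minimal sketch reusing the Google TTS example above; the min_sentence_len values are purely illustrative, not recommendations:
from livekit.agents import tokenize
from livekit.plugins.google import tts

# Lower latency: short sentences get flushed to the TTS sooner,
# but playback can sound a little choppier between chunks.
snappy_tts = tts.TTS(tokenizer=tokenize.basic.SentenceTokenizer(min_sentence_len=5))

# Smoother speech: longer buffered sentences sound more natural,
# at the cost of a bit more delay before the first audio.
smooth_tts = tts.TTS(tokenizer=tokenize.basic.SentenceTokenizer(min_sentence_len=20))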
Curious about specifics for your TTS provider or programming language? Just ask, and I'll help you pinpoint the exact file, function, or parameter you need!
Sources: livekit-plugins/livekit-plugins-elevenlabs/livekit/plugins/elevenlabs/tts.p... | agents/src/tokenize/token_stream.ts
able-jackal-84140 (07/14/2025, 5:39 AM)
tall-airline-16756 (07/14/2025, 5:39 AM)
able-jackal-84140 (07/14/2025, 5:46 AM)
tall-airline-16756 (07/14/2025, 5:48 AM)
from livekit.plugins.elevenlabs import tts
eleven_tts = tts.TTS(
    streaming_latency=3,  # latency optimization level (0-4), not seconds
    chunk_length_schedule=[80, 120, 200, 260],  # chunk sizes in characters
    auto_mode=True,  # enables lowest latency when you send full sentences
    # ...other options
)
Pro tip: more aggressive streaming_latency settings or smaller chunk sizes get you audio faster, but keep in mind the speech might sound a bit less smooth.
• For Hume AI TTS:
from livekit.plugins.hume import tts

hume_tts = tts.TTS(
    instant_mode=True,  # enables ultra-low latency streaming
    # ...other options
)
• For Google Cloud TTS:
from livekit.plugins.google import tts

google_tts = tts.TTS(
    use_streaming=True,  # enables streaming synthesis
    # ...
)
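Whichever provider you pick, the configured TTS plugs into your agent the same way. Here's a minimal sketch assuming the 1.x AgentSession API (the AgentSession part isn't from the snippets above, so double-check it against your agents version); the option values are borrowed from the ElevenLabs example:
from livekit.agents import AgentSession
from livekit.plugins import elevenlabs

# Pass the configured TTS straight into the voice pipeline; any provider's TTS drops in the same way.
session = AgentSession(
    tts=elevenlabs.TTS(streaming_latency=3, auto_mode=True),
    # stt=..., llm=..., vad=...  # the rest of your pipeline
)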
Beyond these, other providers have their own special tricks, like Rime's reduce_latency=True or Inworld's sample rate and bit rate settings. Each one's a bit different!
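For instance, a rough sketch for Rime might look like this (the import path just mirrors the other plugins above, so verify it against your installed livekit-plugins-rime version):
from livekit.plugins.rime import tts

rime_tts = tts.TTS(
    reduce_latency=True,  # Rime's lower-latency mode mentioned above
    # ...other options
)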
Curious about the specifics for your particular setup? Just shoot me a message about your provider or programming language, and I'll help you nail down the perfect configuration. 😊
Sources: TTSOptions | LiveKit Agents | Hume TTS integration guide | LiveKit Docs | ElevenLabs integration guide | LiveKit Docs | livekit-plugins/livekit-plugins-hume/livekit/plugins/hume/tts.py