able-jackal-84140
07/14/2025, 7:13 AM
WARNING livekit.agents - SynthesizeStream: handling multiple segments in a single instance is deprecated. Please create a new SynthesizeStream instance for each segment. Most TTS plugins now use pooled WebSocket connections via ConnectionPool.
I am flushing the text to send it faster to the TTS.
tall-airline-16756
07/14/2025, 7:14 AM
Using a single SynthesizeStream instance for multiple text segments is now deprecated in the LiveKit agents framework. Instead, you should create a new SynthesizeStream instance for each segment. Most TTS plugins now use pooled WebSocket connections via ConnectionPool for efficiency.
To resolve the warning:
• Create a new SynthesizeStream for each segment.
• After pushing text and flushing, call end_input() and clean up the stream (e.g., with aclose() if available).
• Consider using an async with context manager for automatic resource cleanup if supported by your SDK.
Example pattern:
for segment in segments:
    async with tts.stream() as stream:
        stream.push_text(segment)
        stream.flush()
        stream.end_input()
        async for audio in stream:
            # Handle audio frames
            pass
Creating a new stream per segment and handling cleanup as shown will avoid the deprecation warning and offer the best resource management and compatibility with the current LiveKit agents framework.
Further details and best practices include batching short segments, always cleaning up resources, and using context managers where possible.
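One way to apply the batching advice above is to merge very short segments before synthesis, so each new SynthesizeStream carries enough text to justify its setup cost. Here is a minimal plain-Python sketch; batch_segments and the min_chars threshold are illustrative names, not part of the LiveKit API:

```python
def batch_segments(segments, min_chars=20):
    # Merge very short segments into their neighbors so each synthesis
    # request (and thus each new stream) carries a reasonable amount of text.
    batched, buf = [], ""
    for seg in segments:
        buf += seg
        if len(buf) >= min_chars:
            batched.append(buf)
            buf = ""
    if buf:
        # Keep any trailing text that never reached the threshold.
        batched.append(buf)
    return batched
```

Each batched string would then be synthesized with its own per-segment stream as in the example above.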
Sources: livekit-agents/livekit/agents/tts/stream_adapter.py
able-jackal-84140
07/14/2025, 7:16 AM
tall-airline-16756
07/14/2025, 7:17 AM
It sounds like you're using flush() to send text to a SynthesizeStream instance, and I want to help you understand some important updates. 😊
• Just a heads up – every time you flush after the first segment, you'll see some warning logs popping up.
• The current method might not take full advantage of the cool connection pooling and optimizations in newer TTS plugins.
• We're looking ahead, and future versions might not support handling multiple segments in the same stream instance.
Quick summary: Your code is working right now, but to keep things smooth and avoid any potential hiccups, we recommend creating a fresh SynthesizeStream instance for each text segment. It'll save you from those pesky warnings!
Need help figuring out the best approach for your specific use case? I'm all ears and happy to walk you through it! 👍
Sources: tests/test_tts.py | examples/src/tts.ts
able-jackal-84140
07/14/2025, 7:22 AM
tall-airline-16756
07/14/2025, 7:23 AM
You can call pushText(text) and then flush() for each chunk you receive. This method basically streams your audio the moment each segment is ready.
Quick heads up: While this approach is awesome for getting immediate audio and keeping delays minimal, it's technically marked as "deprecated" for future updates. The latest framework suggestions recommend creating a new SynthesizeStream for each segment. But don't worry - for real-time, low-latency situations, using flush() on a single stream is still totally supported and used by several TTS providers.
Here's the simple breakdown:
1. Call stream.pushText(textChunk) as soon as a new chunk arrives
2. Call stream.flush() right after each chunk
3. Process each audio frame instantly for immediate playback
3. Process each audio frame instantly for immediate playback
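The push-and-flush call order from those steps can be sketched as below. FakeSynthesizeStream is a stand-in stub used only to keep the example self-contained; the real LiveKit stream comes from a TTS plugin and also yields audio frames. Method names follow the Python plugins (push_text/flush); the TypeScript SDK spells them pushText/flush.

```python
import asyncio

# Stand-in for a TTS stream, only to illustrate the call order.
class FakeSynthesizeStream:
    def __init__(self):
        self.calls = []

    def push_text(self, text):
        self.calls.append(("push_text", text))

    def flush(self):
        self.calls.append(("flush", None))

async def stream_chunks(stream, chunks):
    # Steps 1-2: push each chunk as soon as it arrives, then flush so the
    # provider starts synthesizing right away instead of waiting for more text.
    for chunk in chunks:
        stream.push_text(chunk)
        stream.flush()
    # Step 3 (consuming audio frames for playback) would run concurrently
    # in a separate task iterating over the stream.

stream = FakeSynthesizeStream()
asyncio.run(stream_chunks(stream, ["Hello, ", "world."]))
```

Keeping the flush right next to the push is what gets audio flowing per chunk rather than per full utterance.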
If you're all about future-proofing, you might want to create a new stream per segment. But if speed is your priority, stick with the flush() method I just explained.
Got any questions about how to make this work smoothly for your specific project? I'm here to help you figure it out!
Sources: tests/fake_tts.py | agents/src/tts/stream_adapter.ts | SynthesizeStream | LiveKit Agents