able-jackal-84140
07/10/2025, 5:47 PM

tall-airline-16756
07/10/2025, 5:47 PM

tall-airline-16756
07/10/2025, 5:48 PM
Use room.localParticipant.streamText({ topic }) to open a writable stream and send text chunks as they become available. Recipients register a handler such as room.registerTextStreamHandler('my-topic', ...), which processes and displays each chunk so the UI updates incrementally.
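The same flow in Python, as a minimal sketch — the call above is from the JS SDK; this assumes the livekit Python SDK's text-stream API (stream_text / register_text_stream_handler) and an already-connected rtc.Room:
import asyncio
from livekit import rtc

async def send_answer_chunks(room: rtc.Room, chunks) -> None:
    # Open a writable stream on a topic, push each chunk, then close
    writer = await room.local_participant.stream_text(topic="my-topic")
    for chunk in chunks:
        await writer.write(chunk)
    await writer.aclose()  # signals receivers that the stream is complete

def handle_text_stream(reader, participant_identity: str) -> None:
    # Handlers must not block, so consume the stream in a background task
    async def consume() -> None:
        async for chunk in reader:
            print(chunk, end="", flush=True)  # append to the UI incrementally
    asyncio.create_task(consume())

# After connecting the room:
#   room.register_text_stream_handler("my-topic", handle_text_stream)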
• Partial LLM/AI Responses: For use cases such as streaming responses from an LLM backend, read and send each new chunk as the backend generates it—either with LiveKit's text streams or with a custom server-client streaming protocol. On the client side, append each incoming chunk to the display so the answer builds up before it is complete (see the sketch after this list).
• Delays and Intermediate Responses: During processing lags, you can send interim static or dynamic messages (e.g., “Checking...”, “Still working on it”) or play audio to maintain engagement until the response is ready.
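A sketch of the partial-response pattern, using the same text-stream API as above — stream_llm_answer is a hypothetical helper, and llm_deltas stands in for an async iterator of text deltas from your LLM backend:
from livekit import rtc

async def stream_llm_answer(room: rtc.Room, llm_deltas) -> None:
    # Forward each model delta into a text stream the moment it arrives
    writer = await room.local_participant.stream_text(topic="llm-answer")
    async for delta in llm_deltas:  # e.g. chunks from your LLM backend
        if delta:
            await writer.write(delta)
    await writer.aclose()  # the client now knows the answer is complete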
Text streams in LiveKit are delivered in real time only to participants connected at the moment of sending; they are not persisted. Multiple streams on separate topics can run concurrently. These techniques keep users engaged by showing visible progress while backend processing continues.
Let me know if you need example code or details for a specific use case.
Sources: Sending text | LiveKit Docs | Live Conversations with AI using ChatGPT and WebRTC | tools/server/tests/utils.py | VoiceAgent/Chat/Receive/TranscriptionStreamReceiver.swift | delay-handling/README.mdoc

able-jackal-84140
07/10/2025, 6:02 PM

tall-airline-16756
07/10/2025, 6:03 PM
• Static Message (pick a random pre-written phrase):
import random
from livekit.agents.pipeline import VoicePipelineAgent

thinking_messages = [
    "Let me look that up...",
    "One moment while I check...",
    "I'll find that information for you...",
]

async def enrich_with_rag(agent: VoicePipelineAgent, chat_ctx):
    # Speak a canned phrase so the user isn't left waiting in silence
    await agent.say(random.choice(thinking_messages))
    # ...perform lookup...
• Dynamic LLM-Generated Message (create a contextual waiting reply):
from livekit.agents import llm

async def enrich_with_rag(agent, chat_ctx):
    # One-off context asking the LLM for a short filler line
    thinking_ctx = llm.ChatContext().append(
        role="system",
        text="Generate a very short message to indicate that we're looking up the answer in the docs",
    )
    # Speak the streamed reply without adding it to the conversation history
    thinking_stream = agent._llm.chat(chat_ctx=thinking_ctx)
    await agent.say(thinking_stream, add_to_chat_ctx=False)
    # ...perform lookup...
• Audio File Playback (play a pre-recorded audio prompt):
async def enrich_with_rag(agent, chat_ctx):
    # play_wav_once is a helper from the delay-handling example (sketched below);
    # ctx is the agent job's JobContext
    await play_wav_once("let_me_check_that.wav", ctx.room)
    # ...perform lookup...
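For reference, here's a minimal sketch of what a play_wav_once helper can look like — the real one lives in the delay-handling example; this version assumes a 16-bit PCM WAV file and publishes it directly with livekit.rtc:
import wave
from livekit import rtc

async def play_wav_once(filename: str, room: rtc.Room) -> None:
    with wave.open(filename, "rb") as wav:
        sample_rate = wav.getframerate()
        num_channels = wav.getnchannels()
        source = rtc.AudioSource(sample_rate, num_channels)
        track = rtc.LocalAudioTrack.create_audio_track("filler-audio", source)
        publication = await room.local_participant.publish_track(track)
        # Feed the file to the source in 10 ms frames
        samples_per_frame = sample_rate // 100
        while True:
            data = wav.readframes(samples_per_frame)
            if not data:
                break
            frame = rtc.AudioFrame(
                data=data,
                sample_rate=sample_rate,
                num_channels=num_channels,
                samples_per_channel=len(data) // (2 * num_channels),
            )
            await source.capture_frame(frame)
        await source.wait_for_playout()  # let buffered audio finish playing
        await room.local_participant.unpublish_track(publication.sid)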
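However you generate the filler, the usual place to hook an enrich_with_rag variant into a VoicePipelineAgent (agents 0.x) is the before_llm_cb hook, which receives the agent and chat context before each LLM call — a sketch with placeholder plugin instances:
from livekit.agents.pipeline import VoicePipelineAgent

# vad, stt, model, and tts stand in for your configured plugin instances
agent = VoicePipelineAgent(
    vad=vad,
    stt=stt,
    llm=model,
    tts=tts,
    before_llm_cb=enrich_with_rag,  # runs before every LLM inference
)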
These methods are like little digital nudges that say, "I'm on it!" They help keep your users feeling connected and reassured, no matter how long something might take. Pick the approach that feels most natural for your specific situation. 👍
Sources: agent_extensions/agent_extensions/rag/handler.py | livekit-plugins/livekit-plugins-google/livekit/plugins/google/beta/realtime... | External data and RAG | LiveKit Docs

able-jackal-84140
07/10/2025, 6:03 PM