# ask-ai
a
how can i stream some messages or responses while the user is waiting?
t
Thanks for reaching out, Aditya. I'm an AI assistant and I'm here to help. Give me a moment while I research this topic.
You can stream partial messages or responses to users while they wait using LiveKit's text streaming features:

• Incremental Text Streaming: Use `room.localParticipant.streamText({ topic })` to open a writable stream and send text chunks as they become available. Recipients register a handler such as `room.registerTextStreamHandler('my-topic', ...)`, which processes each chunk and updates the UI incrementally.
• Partial LLM/AI Responses: For use cases such as streaming responses from an LLM backend, read and send new chunks as the backend generates content, either with LiveKit's text streams or with a custom server-client streaming protocol. On the client side, append each incoming chunk to the display before the full answer arrives (see the sketch below).
• Delays and Intermediate Responses: During processing lags, send interim static or dynamic messages (e.g., "Checking...", "Still working on it") or play audio to keep the user engaged until the response is ready.

Text streams in LiveKit are delivered in real time only to currently connected users and are not persisted. Multiple streams can be open concurrently. These methods create an engaging experience by showing visible progress while backend processing continues. Let me know if you need example code or details for a specific use case.

Sources: Sending text | LiveKit Docs | Live Conversations with AI using ChatGPT and WebRTC | tools/server/tests/utils.py | VoiceAgent/Chat/Receive/TranscriptionStreamReceiver.swift | delay-handling/README.mdoc
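Here's a minimal sketch of the first two points using LiveKit's Python SDK, assuming the snake_case counterparts of the JS calls quoted above (`stream_text`, `register_text_stream_handler`); the topic name `ai-answer`, the `llm_chunks` iterable, and the reader/writer details reflect my reading of the `livekit.rtc` text stream API, so verify them against the current docs:

```python
import asyncio

from livekit import rtc

# Sender: open a text stream and push chunks as the backend produces them.
async def stream_llm_answer(room: rtc.Room, llm_chunks):
    # llm_chunks is any async iterable of partial strings from your LLM backend.
    writer = await room.local_participant.stream_text(topic="ai-answer")
    async for chunk in llm_chunks:
        await writer.write(chunk)  # delivered to listeners as soon as it's sent
    await writer.aclose()  # marks the end of the stream

# Receiver: append each incoming chunk so the UI updates before the full answer.
def on_ai_answer(reader: rtc.TextStreamReader, participant_identity: str):
    async def consume():
        async for chunk in reader:
            print(chunk, end="", flush=True)  # replace with your UI update
    asyncio.create_task(consume())

# Register once after connecting to the room:
# room.register_text_stream_handler("ai-answer", on_ai_answer)
```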
a
Give me code for the third point
t
Hey there! Let me walk you through some super helpful ways to keep your users in the loop while they're waiting. Think of it like letting a friend know you're working on their request. 😊

Static Message Example (send a random predefined message):
```python
import random

# VoicePipelineAgent lives in livekit.agents.pipeline in the 0.x agents SDK.
from livekit.agents.pipeline import VoicePipelineAgent

# Short filler lines to speak while the real lookup runs.
thinking_messages = [
    "Let me look that up...",
    "One moment while I check...",
    "I'll find that information for you...",
]

async def enrich_with_rag(agent: VoicePipelineAgent):
    # Say a random filler message so the user hears something immediately.
    await agent.say(random.choice(thinking_messages))
    # ...perform lookup...
```
Dynamic LLM-Generated Message (create a contextual waiting reply):
```python
from livekit.agents import llm
from livekit.agents.pipeline import VoicePipelineAgent

async def enrich_with_rag(agent: VoicePipelineAgent):
    # Ask the LLM for a short, context-appropriate "please wait" line.
    thinking_ctx = llm.ChatContext().append(
        role="system",
        text="Generate a very short message to indicate that we're looking up the answer in the docs",
    )
    # Use the agent's public llm property rather than the private _llm attribute.
    thinking_stream = agent.llm.chat(chat_ctx=thinking_ctx)
    # Speak the generated filler without adding it to the conversation history.
    await agent.say(thinking_stream, add_to_chat_ctx=False)
    # ...perform lookup...
```
Audio File Playback (play a pre-recorded audio prompt):
```python
async def enrich_with_rag(ctx):
    # ctx is the agent's JobContext; play_wav_once is a helper (see sketch below).
    await play_wav_once("let_me_check_that.wav", ctx.room)
    # ...perform lookup...
```
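Since `play_wav_once` isn't a LiveKit built-in, here's one minimal way to implement it with the `livekit.rtc` publish API, assuming a 16-bit PCM WAV file; the track name and the 10 ms chunk size are arbitrary choices of mine:

```python
import wave

from livekit import rtc

async def play_wav_once(file_path: str, room: rtc.Room) -> None:
    """Publish a WAV file as an audio track, play it once, then unpublish."""
    with wave.open(file_path, "rb") as wav:
        sample_rate = wav.getframerate()
        num_channels = wav.getnchannels()

        # Create an audio source and publish it to the room as a local track.
        source = rtc.AudioSource(sample_rate, num_channels)
        track = rtc.LocalAudioTrack.create_audio_track("filler-audio", source)
        options = rtc.TrackPublishOptions(source=rtc.TrackSource.SOURCE_MICROPHONE)
        publication = await room.local_participant.publish_track(track, options)

        # Feed the file to the source in 10 ms chunks of 16-bit PCM samples.
        samples_per_chunk = sample_rate // 100
        data = wav.readframes(samples_per_chunk)
        while data:
            frame = rtc.AudioFrame(
                data=data,
                sample_rate=sample_rate,
                num_channels=num_channels,
                samples_per_channel=len(data) // (2 * num_channels),
            )
            await source.capture_frame(frame)
            data = wav.readframes(samples_per_chunk)

        await room.local_participant.unpublish_track(publication.sid)
```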
These methods are like little digital nudges that say, "I'm on it!" They help keep your users feeling connected and reassured, no matter how long something might take. Pick the approach that feels most natural for your specific situation. 👍

Sources: agent_extensions/agent_extensions/rag/handler.py | livekit-plugins/livekit-plugins-google/livekit/plugins/google/beta/realtime... | External data and RAG | LiveKit Docs
a
Give me the code for the third point