fierce-jordan-44682
08/29/2025, 3:37 AM

tall-airline-16756
08/29/2025, 3:38 AM

tall-airline-16756
08/29/2025, 3:38 AM

fierce-jordan-44682
08/29/2025, 3:39 AM

tall-airline-16756
08/29/2025, 3:39 AM
• Stream the Text: Use the stream_text() method in Python or streamText() in Node.js to send text bit by bit. Here's a quick Node.js example (a Python counterpart follows the list):
```javascript
// Node.js example
const writer = await room.localParticipant.streamText({ topic: 'my-topic' });
await writer.write('Hello');
await writer.write(' world');
await writer.close();
```
• Keep Audio Streaming Going: Audio streaming runs independently. Just use AudioSource and LocalAudioTrack to publish your audio to the room like this (a sketch of feeding frames into the source also follows the list):
```python
# Python example
from livekit.rtc import AudioSource, LocalAudioTrack

audio_source = AudioSource(sample_rate=44100, num_channels=1)
track = LocalAudioTrack.create_audio_track("mic", audio_source)
await room.local_participant.publish_track(track)
```
• Sync Text and Audio: On the backend, you can handle text and audio streams together smoothly. For instance, TextAudioSynchronizer helps align the text with the audio playback.
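For the first bullet, here's what the Python side might look like. This is just a minimal sketch assuming the livekit-rtc stream_text() API, whose writer exposes write() and aclose():

```python
# Python sketch: stream text on a custom topic piece by piece.
# Assumes livekit-rtc's stream_text() writer with write()/aclose().
writer = await room.local_participant.stream_text(topic="my-topic")
await writer.write("Hello")
await writer.write(" world")
await writer.aclose()
```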
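For the audio bullet, keep in mind that publishing the track only wires it up; you still need to push PCM frames into the AudioSource. A minimal sketch, assuming 16-bit PCM and using silence as placeholder data:

```python
import numpy as np
from livekit.rtc import AudioFrame

# Sketch: one 10 ms frame of silence (441 samples at 44.1 kHz).
# In a real app you'd loop, filling frame.data with PCM from your pipeline.
frame = AudioFrame.create(sample_rate=44100, num_channels=1, samples_per_channel=441)
np.frombuffer(frame.data, dtype=np.int16)[:] = 0  # silence placeholder
await audio_source.capture_frame(frame)
```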
This way, your text input streams along as extra context while your audio keeps flowing without a hitch. If you want me to walk you through more details or examples, just give me a shout!
Sources: livekit-rtc/livekit/rtc/participant.py | packages/livekit-rtc/src/participant.ts | examples/basic_text_stream/src/main.rs | Text-to-speech (TTS) integrations | LiveKit Docs | agents/src/multimodal/agent_playout.ts

fierce-jordan-44682
08/29/2025, 3:40 AM

tall-airline-16756
08/29/2025, 3:40 AM
streamText in LiveKit by itself doesn't automatically trigger a response from an LLM. It's really just a way to send or stream text bit by bit between people in a room.
That said, if you send the text to the lk.chat topic and there's a LiveKit Agent set up to listen and respond to that topic, then yes, it can trigger an LLM response as part of how the agent works. If you'd rather avoid that, you could use a custom topic (like my-topic) instead of lk.chat.
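To make that concrete, here's a quick Python sketch; the topic names are just illustrations, and it assumes send_text() takes a topic keyword as in the docs' send examples:

```python
# Sketch: the topic decides whether a listening agent reacts.
await room.local_participant.send_text("hi agent", topic="lk.chat")    # agent may respond
await room.local_participant.send_text("just data", topic="my-topic")  # agent ignores this
```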
Feel free to ask if you want me to explain anything more or help out with this!
Sources: Sending text | LiveKit Docs | src/room/participant/LocalParticipant.ts | Agents v0.x migration guide - Node.js | LiveKit Docs

fierce-jordan-44682
08/29/2025, 3:41 AM

tall-airline-16756
08/29/2025, 3:42 AM
The sendText method in LiveKit doesn't automatically trigger an LLM response. It's mainly used to send text between participants or to an AI agent that's listening on a specific topic (like lk.chat).
If you have an AI agent set up to handle messages on the lk.chat topic, it might pass the text along to an LLM to craft a response. But this really depends on how that agent is configured. So, if you want to avoid triggering the LLM, just make sure to send your text to a custom topic that the agent isn't watching.
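Of course, a custom topic is only useful if something reads it. Here's a minimal Python receive-side sketch, assuming livekit-rtc's register_text_stream_handler():

```python
import asyncio

# Sketch: collect full messages arriving on a custom topic.
async def read_stream(reader, participant_identity):
    text = await reader.read_all()
    print(f"{participant_identity} sent: {text}")

# The registered handler is synchronous, so hand reading off to a task.
def on_text_stream(reader, participant_identity):
    asyncio.create_task(read_stream(reader, participant_identity))

room.register_text_stream_handler("my-topic", on_text_stream)
```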
Feel free to reach out if you want me to explain anything further or if you need a hand with this!
Sources: Sending text | LiveKit Docs | Text and transcriptions | LiveKit Docs | livekit-plugins/livekit-plugins-openai/livekit/plugins/openai/realtime/real...

fierce-jordan-44682
08/29/2025, 3:52 AM

tall-airline-16756
08/29/2025, 3:53 AM
• Send the Text: Use sendText() or streamText() to send your text data to the lk.chat topic. Here's an example:
```javascript
// Using sendText
const text = "Hello, this is a test message.";
await room.localParticipant.sendText(text, { topic: 'lk.chat' });

// Using streamText
const streamWriter = await room.localParticipant.streamText({ topic: 'lk.chat' });
await streamWriter.write("Hello ");
await streamWriter.write("from LiveKit");
await streamWriter.close();
```
• Configure the Agent: Make sure your agent is ready to accept text input by turning on text_enabled=True in the input options. Like this:
```python
from livekit.agents import RoomInputOptions

# Inside your agent entrypoint, where `session` and `ctx` already exist:
await session.start(
    agent=MyAgent(),
    room=ctx.room,
    room_input_options=RoomInputOptions(text_enabled=True, audio_enabled=False),
)
```
• Handle Text in the Agent: Your agent will take care of any text sent to lk.chat. You can add your own custom logic there to handle the text without involving an LLM (see the sketch after this list).
• Receive Agent Output: The agent's responses come through the lk.transcription topic. Just register a handler to catch and process these responses like so:
```javascript
// The callback must be async so it can iterate the stream with `for await`.
room.registerTextStreamHandler('lk.transcription', async (reader, participantInfo) => {
  for await (const chunk of reader) {
    console.log(`Response chunk: ${chunk}`);
  }
});
```
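As mentioned in the "Handle Text in the Agent" step, here's one way the MyAgent class from the configuration example could intercept text without producing an LLM reply. This is only a sketch, assuming the Agents 1.x on_user_turn_completed hook and StopResponse exception:

```python
from livekit.agents import Agent, StopResponse
from livekit.agents.llm import ChatContext, ChatMessage

class MyAgent(Agent):
    def __init__(self):
        super().__init__(instructions="You handle incoming text directly.")

    # Assumption: this hook runs after each completed user turn (Agents 1.x).
    async def on_user_turn_completed(self, turn_ctx: ChatContext, new_message: ChatMessage):
        print(f"Received: {new_message.text_content}")  # your custom logic here
        raise StopResponse()  # skip LLM generation for this turn
```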
And that’s it! This setup lets you stream text to the agent smoothly without triggering any LLM processing unless you want it to.
Sources: Realtime data | LiveKit Docs | Sending text | LiveKit Docs | Agents 0.x migration guide | LiveKit Docs | Agents v0.x migration guide - Node.js | LiveKit Docs | examples/other/chat-stream-receiver.py