# ask-ai
**User:** If I want to have a textbox on the frontend that allows the user to type things, to provide additional context for the agent, but I don't want it to invoke an LLM call, how do I do it?
**Assistant:** Thanks for reaching out, Mok. I'm an AI assistant and I'm here to help. Give me a moment while I research this topic.
If you want to add a textbox on the frontend for user input without immediately triggering an LLM call, you can set up a standard text input and decide exactly when to send the input. Here's one way to get started:

• React: Use a controlled input field that updates local state and only sends the data when the user explicitly hits a button or submits the form (see the sketch after this message).

Sources: complex-agents/nova-sonic/nova-sonic-form-agent/components/livekit/chat/cha... | examples/demo/index.html | lib/src/ui/prejoin/text_input.dart | Text and transcriptions | LiveKit Docs
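A minimal sketch of that controlled-input pattern (plain React; the component and the `sendToAgent` callback are illustrative placeholders, not LiveKit APIs):

```tsx
import { useState } from 'react';

// Typing only updates local state; nothing is sent (and no LLM call is
// made) until the user explicitly clicks the button.
function ContextBox({ sendToAgent }: { sendToAgent: (text: string) => void }) {
  const [context, setContext] = useState('');

  return (
    <div>
      <textarea
        value={context}
        onChange={(e) => setContext(e.target.value)}
        placeholder="Add extra context for the agent..."
      />
      <button onClick={() => sendToAgent(context)}>Send context</button>
    </div>
  );
}
```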
**User:** But what if I want it to continuously stream to the backend, so that the audio can still work but the text is sent as extra context?
**Assistant:** Hey there! If you want to keep streaming text input to the backend while also maintaining audio streaming, LiveKit's text streaming API works great alongside audio streaming. Here's a simple way to do it:

• Stream Text to the Backend: You can use the `stream_text()` method in Python or `streamText()` in Node.js to send text bit by bit. Here's a quick example:

```js
// Node.js example
const writer = await room.localParticipant.streamText({ topic: 'my-topic' });
await writer.write('Hello');
await writer.write(' world');
await writer.close();
```
• Keep Audio Streaming Going: Audio streaming runs independently. Just use `AudioSource` and `LocalAudioTrack` to publish your audio to the room like this:

```python
# Python example (AudioSource and LocalAudioTrack come from the livekit.rtc module)
audio_source = AudioSource(sample_rate=44100, num_channels=1)
track = LocalAudioTrack.create_audio_track("mic", audio_source)
await room.local_participant.publish_track(track)
```
• Sync Text and Audio: On the backend, you can handle the text and audio streams together; for instance, `TextAudioSynchronizer` helps align the text with the audio playback.

This way, your text input streams along as extra context while your audio keeps flowing without a hitch. If you want me to walk you through more details or examples, just give me a shout!

Sources: livekit-rtc/livekit/rtc/participant.py | packages/livekit-rtc/src/participant.ts | examples/basic_text_stream/src/main.rs | Text-to-speech (TTS) integrations | LiveKit Docs | agents/src/multimodal/agent_playout.ts
**User:** But by using streamText, would that cause the LLM to react to the text? Meaning, would that trigger a response from the LLM?
**Assistant:** Just to clarify: using `streamText` in LiveKit by itself doesn't automatically trigger a response from an LLM. It's really just a way to send or stream text bit by bit between participants in a room. That said, if you send the text to the `lk.chat` topic and there's a LiveKit Agent set up to listen and respond on that topic, then yes, it can trigger an LLM response as part of how the agent works. If you'd rather avoid that, you could use a custom topic (like `my-topic`) instead of `lk.chat`; see the sketch after this message. Feel free to ask if you want me to explain anything more or help out with this!

Sources: Sending text | LiveKit Docs | src/room/participant/LocalParticipant.ts | Agents v0.x migration guide - Node.js | LiveKit Docs
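A minimal sketch of the custom-topic route (assuming a connected `room` from the client SDK; `my-topic` is just the placeholder name from the answer above):

```ts
// Stream extra context on a custom topic. An agent that only listens on
// 'lk.chat' will never route this text to its LLM.
const writer = await room.localParticipant.streamText({ topic: 'my-topic' });
await writer.write('User prefers concise answers.');
await writer.close();
```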
**User:** If I were to use sendText, would that trigger an LLM response then?
**Assistant:** Hey there! Just a quick heads-up: the `sendText` method in LiveKit doesn't automatically trigger an LLM response. It's mainly used to send text between participants, or to an AI agent that's listening on a specific topic (like `lk.chat`). If you have an AI agent set up to handle messages on the `lk.chat` topic, it might pass the text along to an LLM to craft a response, but this really depends on how that agent is configured. So, if you want to avoid triggering the LLM, just make sure to send your text to a custom topic that the agent isn't watching, as in the sketch below. Feel free to reach out if you want me to explain anything further or if you need a hand with this!

Sources: Sending text | LiveKit Docs | Text and transcriptions | LiveKit Docs | livekit-plugins/livekit-plugins-openai/livekit/plugins/openai/realtime/real...
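Same idea with `sendText`, again using the placeholder `my-topic` from earlier (assumes a connected `room`):

```ts
// One-shot send on a custom topic the agent isn't watching.
await room.localParticipant.sendText('User is on the checkout page.', {
  topic: 'my-topic',
});
```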
**User:** How do I make use of data tracks to stream text data to the agent without invoking the LLM?
**Assistant:** If you want to stream text data to a LiveKit agent without calling an LLM, you can do this easily using text streams (which are sent over LiveKit's data channel rather than a media track). Here's a simple way to get it done:

• Send Text to the Agent: Just use `sendText()` or `streamText()` to send your text data to the `lk.chat` topic. Here's an example:

```js
// Using sendText
const text = "Hello, this is a test message.";
await room.localParticipant.sendText(text, { topic: 'lk.chat' });

// Using streamText
const streamWriter = await room.localParticipant.streamText({ topic: 'lk.chat' });
await streamWriter.write("Hello ");
await streamWriter.write("from LiveKit");
await streamWriter.close();
```
• Configure the Agent: Make sure your agent is ready to accept text input by turning on `text_enabled=True` in the input options, like this:

```python
await session.start(
    agent=MyAgent(),
    room=ctx.room,
    room_input_options=RoomInputOptions(text_enabled=True, audio_enabled=False)
)
```
• Handle Text in the Agent: Your agent will take care of any text sent to `lk.chat`. You can add your own custom logic here to handle the text without involving an LLM.

• Receive Agent Output: The agent's responses come through the `lk.transcription` topic. Just register a handler to catch and process these responses like so:

```js
// The callback is async so `for await` can iterate the incoming chunks.
room.registerTextStreamHandler('lk.transcription', async (reader, participantInfo) => {
  for await (const chunk of reader) {
    console.log(`Response chunk: ${chunk}`);
  }
});
```
And that's it! This setup lets you stream text to the agent smoothly without triggering any LLM processing unless you want it to.

Sources: Realtime data | LiveKit Docs | Sending text | LiveKit Docs | Agents 0.x migration guide | LiveKit Docs | Agents v0.x migration guide - Node.js | LiveKit Docs | examples/other/chat-stream-receiver.py
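If you instead go the custom-topic route from the earlier answers, here is a rough sketch of the receiving side, assuming the `@livekit/rtc-node` SDK and the placeholder `my-topic` name (the handler registration mirrors the `lk.transcription` example above; `readAll` collects the whole stream rather than iterating chunk by chunk):

```ts
import { Room } from '@livekit/rtc-node';

const room = new Room();
await room.connect(process.env.LIVEKIT_URL!, process.env.LIVEKIT_TOKEN!);

// Collect extra context on the custom topic. Nothing here touches an LLM
// unless you explicitly forward the text into your agent's session.
room.registerTextStreamHandler('my-topic', async (reader, participantInfo) => {
  const text = await reader.readAll();
  console.log(`Extra context from ${participantInfo.identity}: ${text}`);
});
```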