ambitious-dusk-46857
08/12/2025, 4:12 AM
tall-airline-16756
08/12/2025, 4:13 AM
The text_only.py agent is made just for text-only chats in LiveKit, but to get it up and running smoothly in the LiveKit Playground, there are a few things to keep in mind:
• Make sure you start the agent with the right command: python text_only.py dev.
• Double-check that your environment variables (LIVEKIT_URL, LIVEKIT_API_KEY, LIVEKIT_API_SECRET) are all set correctly in your .env file (there’s a quick check sketch right after this list).
• Take a look at your LiveKit Playground settings to be sure it’s set up for text interactions. If you’re running it yourself, don’t forget to enable the chat feature in NEXT_PUBLIC_APP_CONFIG.
• Also, confirm the agent is subscribed to the lk.chat topic to receive text input and is publishing to the lk.transcription topic for text output.
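If you want a quick way to verify those variables before launching, here’s a minimal sketch, assuming you load the .env file with python-dotenv (the way the LiveKit examples usually do):
# check_env.py - illustrative sanity check for the LiveKit credentials
import os
from dotenv import load_dotenv  # assumes python-dotenv is installed

load_dotenv()  # reads the .env file from the current directory

required = ("LIVEKIT_URL", "LIVEKIT_API_KEY", "LIVEKIT_API_SECRET")
missing = [name for name in required if not os.getenv(name)]
if missing:
    raise SystemExit(f"Missing environment variables: {', '.join(missing)}")
print("All LiveKit environment variables are set.")
Run it from the same folder as your .env before starting the agent.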
If you’re still running into trouble, no worries! Just let me know any error messages you’re seeing, whether you’re using the hosted or self-hosted playground, and any changes you’ve made to the agent or playground setup. I’m here to help!
Sources: examples/other/chat-stream-receiver.py | Text and transcriptions | LiveKit Docs
ambitious-dusk-46857
08/12/2025, 4:14 AM
Also, confirm the agent is subscribed to the lk.chat topic to receive text input and is publishing to the lk.transcription topic for text output.
tall-airline-16756
08/12/2025, 4:14 AM
To try out the text_only.py agent, here’s what you need to do:
1. Start the agent in development mode by running: python text_only.py dev.
2. Open up the LiveKit Playground in your browser. If you’re using the hosted version, just head over to your configured playground URL.
3. Jump into the chat interface in the playground to send your text and see the agent’s responses.
Just a quick reminder: make sure your environment variables (LIVEKIT_API_KEY, LIVEKIT_API_SECRET, LIVEKIT_URL) are all set up correctly so the agent can connect smoothly to the LiveKit server.
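And if you’d rather test without the playground UI, here’s a small illustrative sketch that sends a message to the agent on the lk.chat topic from a Python script. The URL and token are placeholders, and it assumes the Python SDK’s send_text text-stream API:
import asyncio
from livekit import rtc

async def main():
    room = rtc.Room()
    # Placeholders: use your own LiveKit URL and a token for a test participant
    await room.connect("wss://your-livekit-url", "your-access-token")

    # Send a text message on the lk.chat topic that the text-only agent listens on
    await room.local_participant.send_text("Hello, agent!", topic="lk.chat")

    await asyncio.sleep(2)  # give the agent a moment to respond
    await room.disconnect()

asyncio.run(main())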
Sources: Agents playground | LiveKit Docs | Text and transcriptions | LiveKit Docs | livekit-agents/livekit/agents/voice/chat_cli.py | .env.example
ambitious-dusk-46857
08/12/2025, 4:15 AM
Also, confirm the agent is subscribed to the lk.chat topic to receive text input and is publishing to the lk.transcription topic for text output.
tall-airline-16756
08/12/2025, 4:16 AM
To receive text input and send text output, the agent needs to subscribe to the lk.chat topic and publish to the lk.transcription topic. Here’s a quick checklist to help you out:
1. First, double-check that the agent is initialized with RoomInputOptions(text_enabled=True). This turns on text input and subscribes to the lk.chat topic.
2. Next, confirm that RoomOutputOptions(transcription_enabled=True) is set so the agent can publish to the lk.transcription topic (there’s a short Python sketch of both options just after the frontend example in step 5).
3. Take a look at the agent’s code to see if methods like registerTextStreamHandler for lk.chat and publishTranscription for lk.transcription are in place.
4. It’s also a good idea to turn on logging in the agent. This way, you can keep an eye on when messages come in on lk.chat and when transcriptions go out to lk.transcription.
5. On the frontend side, set up a text stream handler to check that messages are being received on both topics. Here’s a handy example:
room.registerTextStreamHandler('lk.transcription', async (reader, participantInfo) => {
  const message = await reader.readAll();
  console.log(`New message from ${participantInfo.identity}: ${message}`);
});
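For steps 1 and 2, here’s a minimal sketch of what that setup can look like in the agent’s entrypoint. The instructions string and the OpenAI LLM plugin are just example choices, and the option names follow the text and transcriptions docs:
from livekit import agents
from livekit.agents import Agent, AgentSession, RoomInputOptions, RoomOutputOptions
from livekit.plugins import openai  # example LLM plugin; swap in whichever you use

async def entrypoint(ctx: agents.JobContext):
    await ctx.connect()

    session = AgentSession(llm=openai.LLM(model="gpt-4o-mini"))
    await session.start(
        agent=Agent(instructions="You are a helpful text-only assistant."),
        room=ctx.room,
        # text_enabled=True subscribes the session to text input on lk.chat
        room_input_options=RoomInputOptions(text_enabled=True, audio_enabled=False),
        # transcription_enabled=True publishes text output on lk.transcription
        room_output_options=RoomOutputOptions(transcription_enabled=True, audio_enabled=False),
    )

if __name__ == "__main__":
    agents.cli.run_app(agents.WorkerOptions(entrypoint_fnc=entrypoint))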
Following these steps should give you a clear picture of your agent’s subscription and publishing status. If you run into any bumps, I’m here to help!
Sources: Text and transcriptions | LiveKit Docs | packages/core/src/observables/dataChannel.ts | agents/src/constants.ts
ambitious-dusk-46857
08/12/2025, 4:17 AM
tall-airline-16756
08/12/2025, 4:18 AM
Here’s an example of how you can listen for text streams on both the lk.chat and lk.transcription topics:
from livekit import rtc

class TextStreamHandlerImpl:
    async def on_text_received(self, reader: rtc.TextStreamReader, participant_identity: str):
        """Handle incoming text streams for both lk.chat and lk.transcription topics"""
        async for chunk in reader:
            print(f"Received text chunk from {participant_identity}: {chunk}")
            # Feel free to add more processing here

async def main(room_name: str):
    # ... your existing connection setup ...

    # Let's create an instance of our handler
    text_handler = TextStreamHandlerImpl()

    # Now, we'll register handlers for both topics
    room.register_text_stream_handler(
        'lk.chat',
        lambda reader, participant_identity: text_handler.on_text_received(reader, participant_identity)
    )
    room.register_text_stream_handler(
        'lk.transcription',
        lambda reader, participant_identity: text_handler.on_text_received(reader, participant_identity)
    )

    # ... and then continue with the rest of your code ...
This snippet listens for text streams on both topics and prints out the messages it receives. You can easily build on this by adding whatever extra logic you need. Hope this helps!
Sources: Text and transcriptions | LiveKit Docs | examples/other/text_only.py
ambitious-dusk-46857
08/12/2025, 4:21 AM
tall-airline-16756
08/12/2025, 4:21 AM
That runtime warning shows up because the coroutine returned by on_text_received wasn’t properly awaited. No worries though, just make sure all your asynchronous operations are awaited. Here’s a fixed version of the code for you:
import asyncio
from livekit import rtc

class TextStreamHandlerImpl:
    async def on_text_received(self, reader: rtc.TextStreamReader, participant_identity: str):
        try:
            async for chunk in reader:
                print(f"Received text chunk from {participant_identity}: {chunk}")
        except Exception as e:
            print(f"Error reading text stream: {e}")

async def main():
    # Initialize and connect to the room
    room = rtc.Room()
    # Assume you have your LiveKit URL and token
    await room.connect("Your LiveKit URL", "Your Token")

    # Create an instance of the handler
    text_handler = TextStreamHandlerImpl()

    # Register the text stream handler
    room.register_text_stream_handler(
        'lk.chat',
        lambda reader, participant_identity: asyncio.create_task(text_handler.on_text_received(reader, participant_identity))
    )

    # Keep the connection alive
    while True:
        await asyncio.sleep(1)

if __name__ == "__main__":
    asyncio.run(main())
Here’s what we changed:
• Used asyncio.create_task so the coroutine is actually scheduled on the event loop instead of being created and never awaited.
• Wrapped everything inside asyncio.run to handle the event loop smoothly.
With these tweaks, you should be all set—no more runtime warnings, and your text streams will be handled just right. If you hit any other bumps, feel free to ask!