How can I send a message as an agent that is just ...
# ask-ai
m
How can I send a message as an agent that is just text and will not be played as audio to the user? Ideally the message would remain in the `lk.transcription` topic.
t
Thanks for reaching out, Harry. I'm an AI assistant and I'm here to help. Give me a moment while I research this topic.
I'm happy to help you with sending a text message as an agent that stays within the `lk.transcription` topic and isn't played as audio. Here's a simple way to set it up:

1. Disable Audio Output: When you set up the `AgentSession`, set `audio_enabled=False` in `RoomOutputOptions`. This way, the agent won't publish any audio tracks and will send only text to `lk.transcription`.
2. Send Text via `lk.transcription`: With audio disabled, the agent's text responses will automatically go through the `lk.transcription` topic as text-only messages.
3. Frontend Handling: Make sure your frontend listens to the `lk.transcription` topic by using `registerTextStreamHandler`. Any messages without the `lk.transcribed_track_id` attribute will be text-only agent responses.

If you'd like, I'm here to provide more details or examples to make the implementation smoother for you. Just let me know!

Sources: Text and transcriptions | LiveKit Docs | How to Set Up Speech-to-Text Input with Text-Only Output in LiveKit Agents | examples/other/translation/multi-user-translator.py
I'd be happy to guide you through sending a text message as an agent that stays within the `lk.transcription` topic without being played as audio. Here's a simple way to set it up:

1. Disable Audio Output: When you start the `AgentSession`, pass `RoomOutputOptions` with `audio_enabled=False` to `session.start()`. This step is key because it makes sure the agent only sends text to `lk.transcription` and doesn't publish any audio tracks.
```python
from livekit.agents import AgentSession, RoomInputOptions, RoomOutputOptions
from livekit.plugins import openai

session = AgentSession(llm=openai.LLM())
await session.start(
    agent=MyAgent(),  # your Agent subclass
    room=ctx.room,    # room from the entrypoint's JobContext
    room_input_options=RoomInputOptions(audio_enabled=False),
    room_output_options=RoomOutputOptions(audio_enabled=False, transcription_enabled=True),
)
```
2. Frontend Handling: On the frontend side, please make sure you listen to the `lk.transcription` topic using `registerTextStreamHandler`. This way, messages that don't have the `lk.transcribed_track_id` attribute are recognized as text-only agent responses.
```javascript
room.registerTextStreamHandler('lk.transcription', async (reader, participantInfo) => {
  const message = await reader.readAll();
  if (reader.info.attributes['lk.transcribed_track_id']) {
    console.log(`Transcription with audio: ${message}`);
  } else {
    console.log(`Text-only agent message: ${message}`);
  }
});
```
By following these steps and disabling `audio_enabled`, your agent will send messages as text-only within the `lk.transcription` topic, without creating any audio output. If you have any questions along the way, feel free to ask; I'm here to help!
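Once that configuration is in place, anything the agent "says" reaches the user as text only. As a rough sketch (assuming the `session` set up above has already been started), a text-only reply can be triggered like this:

```python
# With audio output disabled, neither call publishes an audio track;
# the text is still streamed to the frontend on the lk.transcription topic.
await session.say("Hi! This message reaches you as text only.")

# Or let the LLM compose the reply instead of using fixed text:
session.generate_reply(instructions="Greet the user and offer to help.")
```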
m
I've done those things. Can you show me an example of how to Send Text via `lk.transcription`?
t
Thanks for reaching out! I wanted to share a little insight about the `lk.transcription` topic. It's actually a read-only channel that LiveKit Agents use to send transcriptions to the frontend, so it's not meant for sending user input. If you want to send text input to the agent, the best way is to use the `lk.chat` topic. Here's a quick example to help you out:

JavaScript Example:
```javascript
const text = 'Hello, how are you today?';
await room.localParticipant.sendText(text, {
  topic: 'lk.chat',
});
```
Flutter Example:

```dart
await lp.sendText(text, options: sdk.SendTextOptions(topic: 'lk.chat'));
```
If you're also interested in receiving messages from `lk.transcription`, just let me know! I'd be happy to provide an example for that too. We're here to help you every step of the way.

Sources: lib/controllers/app_ctrl.dart | Text and transcriptions | LiveKit Docs | Docs::Agents
m
That's not what I want. I am looking to send text as the agent, to the user. Perhaps the `chat_context.add_message` method is best?
t
To send text as the agent, you can use the `chat_context.add_message` method to update the agent's conversational memory. Just a heads-up: this won't broadcast the message to participants. Here's a simple example to guide you:
```python
chat_ctx.add_message(
    role="assistant",
    content="Hello, how can I help you today?"
)
```
If you'd like the agent to actually "speak" the message to participants, you can use `session.say()` instead:
```python
await session.say("Hello, how can I help you today?")
```
By default, `session.say()` also updates the chat context for you. But if you prefer to handle the context yourself, you can turn off this feature like this:
```python
await session.say("Hello!", add_to_chat_ctx=False)
chat_ctx.add_message(role="assistant", content="Hello!")
```
Please feel free to reach out if you’d like me to explain anything further or if you want more examples—I’m here to help! Sources: livekit-agents/livekit/agents/voice/agent_activity.py | Testing and evaluation | LiveKit Docs | Testing and evaluation | LiveKit Docs | complex-agents/drive-thru/test_agent.py | Agent speech and audio | LiveKit Docs
m
Sounds good. You said that `say` will automatically add the message to the chat context. Will it also add the message to the `lk.transcription` topic?
t
Yes, when you use `session.say`, the message is automatically added to the `lk.transcription` topic in real time. This way, the text gets published to the `lk.transcription` text stream and stays in sync with the audio playback if audio is enabled. If you have any questions or need me to explain anything further, please don't hesitate to ask; I'm here to help!

Sources: How to Set Up Speech-to-Text Input with Text-Only Output in LiveKit Agents | Text and transcriptions | LiveKit Docs | Text and transcriptions | LiveKit Docs | VoiceAgent/Chat/Receive/TranscriptionStreamReceiver.swift
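If you'd like the frontend to render that text as it streams in (rather than waiting for the complete message), the same handler can consume the reader incrementally. A small sketch using the JS SDK's incremental text stream reader, building on the earlier example:

```javascript
room.registerTextStreamHandler('lk.transcription', async (reader, participantInfo) => {
  let message = '';
  // Chunks arrive in real time, in sync with audio playback when audio is enabled.
  for await (const chunk of reader) {
    message += chunk;
    console.log(`Partial agent message: ${message}`);
  }
  console.log(`Final agent message: ${message}`);
});
```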