lively-terabyte-73999
07/09/2025, 12:47 PM

tall-airline-16756
07/09/2025, 12:48 PM
To publish custom audio from your agent, the flow is:
1. Create an AudioSource with the appropriate sample rate and number of channels.
2. Create a LocalAudioTrack using the AudioSource.
3. Publish the LocalAudioTrack to the room's local participant.
4. Continuously generate and feed audio frames to the AudioSource for publishing.
Here is a code example illustrating this process in Python:
import numpy as np
from livekit import rtc

# Create an AudioSource with the desired sample rate and channel count
sample_rate = 48000
num_channels = 1
audio_source = rtc.AudioSource(sample_rate, num_channels)

# Create a LocalAudioTrack backed by the AudioSource
track_name = "example-track"
local_audio_track = rtc.LocalAudioTrack.create_audio_track(track_name, audio_source)

# Connect and publish the track (run inside an async function; `token` is your access token)
room = rtc.Room()
await room.connect("ws://localhost:7880", token)
publication = await room.local_participant.publish_track(local_audio_track, rtc.TrackPublishOptions())

# Generate 10 ms frames (480 samples at 48 kHz) of a 440 Hz sine wave
samples_per_channel = 480
audio_frame = rtc.AudioFrame.create(sample_rate, num_channels, samples_per_channel)
audio_data = np.frombuffer(audio_frame.data, dtype=np.int16)

total_samples = 0  # running sample count keeps the sine wave phase-continuous across frames
while True:
    time = (total_samples + np.arange(samples_per_channel)) / sample_rate
    sine_wave = (2 ** 15 - 1) * np.sin(2 * np.pi * 440 * time)
    np.copyto(audio_data, sine_wave.astype(np.int16))
    await audio_source.capture_frame(audio_frame)
    total_samples += samples_per_channel
This example continuously generates a 440 Hz sine wave and publishes it to the room. Adjust the audio generation logic to suit your AI agent's audio output.
Make sure you have the LiveKit Python SDK installed (pip install livekit) and configured.
Sources: Publishing tracks | LiveKit Docs | Audio and video | LiveKit Docs | examples/publish-wav/index.ts

lively-terabyte-73999
07/09/2025, 12:49 PM

tall-airline-16756
07/09/2025, 12:51 PM
import { AudioFrame, AudioSource, LocalAudioTrack, TrackPublishOptions, TrackSource } from '@livekit/rtc-node';

// Set up audio track
const source = new AudioSource(sampleRate, channels);
const track = LocalAudioTrack.createAudioTrack('audio', source);
const options = new TrackPublishOptions();
options.source = TrackSource.SOURCE_MICROPHONE;
await room.localParticipant.publishTrack(track, options);
// Send audio frames
const frame = new AudioFrame(
  buffer.slice(written, written + frameSize),
  sampleRate,
  channels,
  Math.trunc(frameSize / channels),
);
await source.captureFrame(frame);
The magic happens here: we create an AudioSource, turn it into a local audio track, and then send those audio frames to the room. Just swap out buffer, written, and frameSize with your specific audio data, and you're all set! Feel free to ask if you need any more guidance. 😊
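By the way, if it helps to see how buffer, written, and frameSize usually fit together, here's a rough sketch of a complete send loop (reusing source, sampleRate, and channels from above; the Int16Array input and the 100 ms chunk size are just assumptions):

// Sketch: chunk interleaved 16-bit PCM into AudioFrames and publish them.
// `pcmData` is hypothetical; decode it from a wav file or your TTS output.
const frameDurationMs = 100; // assumed chunk size
const frameSize = (sampleRate / 1000) * frameDurationMs * channels;
let written = 0;
while (written < pcmData.length) {
  const chunk = pcmData.slice(written, written + frameSize); // slice clamps at the end
  const frame = new AudioFrame(chunk, sampleRate, channels, Math.trunc(chunk.length / channels));
  await source.captureFrame(frame);
  written += chunk.length;
}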
Sources: packages/livekit-rtc/README.md

lively-terabyte-73999
07/09/2025, 1:16 PM

tall-airline-16756
07/09/2025, 1:17 PM
room.registerTextStreamHandler('my-topic', (reader, participantIdentity) => {
  // Process the text stream
});
Basically, if no handler is registered for a topic, the stream gets politely ignored, hence the debug message you're seeing.
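If you want to actually read what comes in, the handler can be async; here's a minimal sketch, assuming your SDK version's TextStreamReader exposes the readAll() helper (recent JS SDKs do):

room.registerTextStreamHandler('my-topic', async (reader) => {
  // readAll() buffers the entire incoming stream into a single string
  const text = await reader.readAll();
  console.log('received text:', text);
});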
Pro tip: make sure the topic you register matches the topic you use when sending text streams. It's like making sure you're speaking the same language:
await room.localParticipant.sendText('Hello, world!', { topic: 'my-topic' });
Sources: src/room/Room.ts | packages/livekit-rtc/src/room.ts | livekit-rtc/livekit/rtc/room.py | Sources/LiveKit/Core/Room+DataStream.swift | README.md

lively-terabyte-73999
07/09/2025, 1:40 PM

tall-airline-16756
07/09/2025, 1:40 PM
For topic names, pick something short and descriptive, like chat-messages, ai-agent-input, or transcription. This helps keep everything organized and easy to understand.
One quick heads up: make sure you're not accidentally registering multiple handlers for the same topic; registering a second handler for a topic that already has one throws an error.
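If your code might run twice (reconnect logic, hot reloads, and so on), a simple guard helps; this sketch just assumes registering a duplicate topic throws, which is what the JS SDKs do:

// Guarded registration: a second handler on the same topic raises an error.
try {
  room.registerTextStreamHandler('chat-messages', async (reader) => {
    const text = await reader.readAll();
    // ... hand the text to your chat UI or agent input (hypothetical routing)
  });
} catch (err) {
  console.warn('A handler for chat-messages is already registered:', err);
}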
Have a specific idea brewing? I'd be totally happy to brainstorm the perfect topic name with you!
Sources: packages/livekit-rtc/src/room.ts | Sending text | LiveKit Docs | Runtime/Scripts/DataStream.cs | livekit-android-sdk/src/main/java/io/livekit/android/room/datastream/incomi...