# ask-ai
m
@ctx.room.on("track_subscribed")
def on_track_subscribed(track: rtc.Track, publication: rtc.RemoteTrackPublication, participant: rtc.RemoteParticipant):
    if track.kind == rtc.TrackKind.KIND_AUDIO:
        logger.warning("====================Track log=====================")
        logger.info("track subscribed: %s from participant %s (%s)", publication.sid, participant.sid, participant.identity)
        stream = rtc.AudioStream(track)
Does this track_subscribed event give only the local participant's audio? I can't get the agent's audio.
t
Thanks for reaching out, Rajadurai. I'm an AI assistant and I'm here to help. Give me a moment while I research this topic.
The track_subscribed event in LiveKit gives you audio tracks from remote participants only, not from the local participant. This event fires when the local participant subscribes to a remote participant's track, so you can then use that audio for rendering or processing. If you're not hearing the agent's audio, it's a good idea to double-check that the agent's audio track is being published properly and that the local participant is subscribed to it. Please feel free to reach out if you'd like any help troubleshooting this; I'm here to support you! Sources: Docs::Home | livekit/src/room/mod.rs | src/room/Room.ts
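To make the remote-vs-local split concrete, here is a purely illustrative sketch (the event names follow the livekit Python SDK's Room events; the classify_track_event helper itself is not part of any SDK):

```python
# Illustrative helper only: maps LiveKit room event names to the side of the
# room they describe. A handler registered for "track_subscribed" will never
# see the local participant's (the agent's) own audio.

def classify_track_event(event_name: str) -> str:
    # Emitted for REMOTE participants' tracks.
    remote_events = {"track_published", "track_subscribed", "track_unsubscribed"}
    # Emitted for the LOCAL participant's (e.g. the agent's) tracks.
    local_events = {"local_track_published", "local_track_unpublished"}
    if event_name in remote_events:
        return "remote"
    if event_name in local_events:
        return "local"
    return "unknown"
```

This is why subscribing handlers alone can't observe the agent's own outgoing audio: the local side is reached through room.local_participant instead.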
m
best way to get agent and remote participant tracks
t
I'm happy to help you get both agent and remote participant tracks in LiveKit, and the best approach really depends on what you're aiming to do: • For AI Agents: You can use the useVoiceAssistant hook. This handy hook automatically grabs the agent's audio and video tracks, including those from avatar workers. Here's a quick example:
const { agent, audioTrack, videoTrack, state } = useVoiceAssistant();
• For Remote Participants: You might want to use useTracks or useParticipantTracks to access tracks from all participants or just specific ones. Check out these examples:
// All camera and mic tracks in the room
const allTracks = useTracks([Track.Source.Camera, Track.Source.Microphone]);

// Specific participant's tracks
const participantTracks = useParticipantTracks(
  [Track.Source.Camera, Track.Source.Microphone],
  participant.identity
);
If you'd like, I’m here to walk you through these or answer any questions you have about implementing them! Sources: Virtual avatar integrations | LiveKit Docs | packages/core/src/observables/track.ts
m
I'm using python SDK
t
Hi there! To help you get both agent (local) and remote participant tracks using the Python SDK in LiveKit, here's a friendly step-by-step guide: • Access Remote Participant Tracks: You can listen for the track_subscribed event to handle tracks as they come in:
@room.on("track_subscribed")
def on_track_subscribed(track: rtc.Track, publication: rtc.RemoteTrackPublication, participant: rtc.RemoteParticipant):
    if track.kind == rtc.TrackKind.KIND_AUDIO:
        audio_stream = rtc.AudioStream(track)
        asyncio.create_task(process_audio(audio_stream))
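The process_audio coroutine in the snippet above is left undefined. Here is a hedged sketch, assuming AudioStream is an async iterator of AudioFrameEvent objects (as in the livekit Python SDK); the frame_energy helper and the threshold are illustrative, not SDK APIs:

```python
import logging
import struct

logger = logging.getLogger(__name__)

def frame_energy(pcm: bytes) -> float:
    """Mean absolute amplitude of 16-bit little-endian PCM samples."""
    n = len(pcm) // 2
    samples = struct.unpack(f"<{n}h", pcm[: n * 2])
    return sum(abs(s) for s in samples) / max(n, 1)

async def process_audio(audio_stream):
    # AudioStream yields AudioFrameEvent; event.frame.data holds the raw PCM.
    async for event in audio_stream:
        pcm = bytes(event.frame.data)
        if frame_energy(pcm) > 500:  # arbitrary "audio activity" threshold
            logger.info("audio activity on subscribed track")
```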
• Access Agent (Local) Tracks: For managing the agent's own tracks, use room.local_participant. Here's how you can publish a local audio track:
source = rtc.AudioSource(SAMPLE_RATE, NUM_CHANNELS)
track = rtc.LocalAudioTrack.create_audio_track("example-track", source)
await room.local_participant.publish_track(track)
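Publishing the track is only half the job: nobody hears the agent until PCM is pushed into the AudioSource. A hedged sketch follows; make_sine_pcm is an illustrative generator (not an SDK function), and the commented outline uses rtc.AudioFrame.create and source.capture_frame from the livekit Python SDK:

```python
import math
import struct

SAMPLE_RATE = 48000
NUM_CHANNELS = 1

def make_sine_pcm(freq_hz: float, n_samples: int, sample_rate: int = SAMPLE_RATE) -> bytes:
    """n_samples of 16-bit mono PCM for a sine tone at freq_hz."""
    amp = int(32767 * 0.3)  # keep headroom below full scale
    return struct.pack(
        f"<{n_samples}h",
        *(int(amp * math.sin(2 * math.pi * freq_hz * i / sample_rate))
          for i in range(n_samples)),
    )

# Capture-loop outline (needs a connected room, so shown as comments):
#   samples_per_frame = SAMPLE_RATE // 100             # 10 ms frames
#   frame = rtc.AudioFrame.create(SAMPLE_RATE, NUM_CHANNELS, samples_per_frame)
#   # ...copy PCM such as make_sine_pcm(440.0, samples_per_frame) into frame.data...
#   await source.capture_frame(frame)
```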
• Iterate Over Participants: If you want to go through all remote participants and their tracks, this snippet will help:
for participant in room.remote_participants.values():
    for publication in participant.track_publications.values():
        if publication.track:
            track = publication.track
            # Process the track
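That loop can be wrapped into a small helper. A sketch assuming the livekit Python SDK attribute names (remote_participants, track_publications, .track, .kind); KIND_AUDIO stands in for rtc.TrackKind.KIND_AUDIO so the helper has no hard SDK dependency:

```python
KIND_AUDIO = 1  # stand-in for rtc.TrackKind.KIND_AUDIO

def collect_audio_tracks(room) -> dict:
    """Map each remote participant's identity to its subscribed audio tracks."""
    tracks_by_identity = {}
    for participant in room.remote_participants.values():
        audio_tracks = [
            pub.track
            for pub in participant.track_publications.values()
            # pub.track is None until the track is actually subscribed.
            if pub.track is not None and pub.track.kind == KIND_AUDIO
        ]
        if audio_tracks:
            tracks_by_identity[participant.identity] = audio_tracks
    return tracks_by_identity
```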
If you have any questions or need a hand with implementation, please don't hesitate to ask; I'm here to help! Sources: Inside a session | LiveKit Docs | livekit-rtc/livekit/rtc/__init__.py | Receiving and publishing tracks | LiveKit Docs | Virtual avatar integrations | LiveKit Docs