# LiveKit Audio Track Subscription Issue - Help Request

## Problem Description

We have a LiveKit voice agent that is not receiving audio from the user's microphone. The agent can speak (TTS works), but it cannot hear the user (STT shows no activity). The logs show no "Track subscribed: audio" events.

## Current Implementation

### Frontend (Next.js with LiveKit React Components)

**VoiceUIBall.tsx component:**

```typescript
<LiveKitRoom
  room={room}
  token={connectionDetails.token}
  serverUrl={connectionDetails.serverUrl}
  connect={true}
  audio={{
    echoCancellation: false,
    noiseSuppression: false,
    autoGainControl: false
  }}
  video={false}
  onConnected={() => {
    console.log('LiveKit connected with audio enabled')
    // We check if audio tracks are automatically published
    if (room.localParticipant) {
      const audioTracks = room.localParticipant.audioTrackPublications
      console.log('Audio track publications:', audioTracks.size)
      // If not published, we manually publish
      if (audioTracks.size === 0) {
        navigator.mediaDevices.getUserMedia({
          audio: {
            echoCancellation: false,
            noiseSuppression: false,
            autoGainControl: false
          }
        }).then(async (stream) => {
          const audioTrack = stream.getAudioTracks()[0]
          if (audioTrack) {
            await room.localParticipant.publishTrack(audioTrack)
          }
        })
      }
    }
  }}
>
  <VoiceUIBallInner ... />
</LiveKitRoom>
```
### Backend (Python LiveKit Agent)

**voice_agent.py:**

```python
from livekit import rtc
from livekit.agents import (
    Agent,
    AgentSession,
    AutoSubscribe,
    JobContext,
    RoomInputOptions,
    RoomOutputOptions,
)

# In the entrypoint function:
await session.start(
    agent=briseis_agent,
    room=ctx.room,
    room_input_options=RoomInputOptions(
        # We tried adding this but got an error:
        # auto_subscribe=AutoSubscribe.AUDIO_ONLY,  # TypeError: unexpected keyword argument
    ),
    room_output_options=RoomOutputOptions(
        audio_enabled=True,
        transcription_enabled=True
    ),
)

# We have event handlers but they never fire for audio tracks:
@ctx.room.on("track_subscribed")
def on_track_subscribed(track: rtc.Track, publication: rtc.RemoteTrackPublication, participant: rtc.RemoteParticipant):
    logger.info(f"Track subscribed: {track.kind} from {participant.identity}, source: {publication.source}")
    if track.kind == "audio":
        logger.info(f"Audio track subscribed! Muted: {publication.muted}, Subscribed: {publication.subscribed}")
```
## Questions

1. **Frontend Audio Publishing**
   - Is our approach correct for publishing the user's microphone track in Next.js?
   - Should we use the `audio` prop on `LiveKitRoom` or manually publish tracks?
   - Are we missing any configuration for proper audio track publishing?
2. **Backend Audio Subscription**
   - How do we configure the Python agent to automatically subscribe to participant audio tracks?
   - Where should `AutoSubscribe` be configured? We tried `RoomInputOptions` but it doesn't accept `auto_subscribe`.
   - Is audio subscription automatic, or do we need to manually subscribe to tracks?
3. **Track Subscription Flow**
   - What is the correct flow for audio track subscription between frontend and backend?
   - Should the agent automatically subscribe to all audio tracks, or do we need explicit configuration?
   - Why might the "track_subscribed" event not be firing even though the frontend publishes audio?

## What We've Tried

- Setting `audio={true}` and `audio={object}` on the LiveKitRoom component
- Manually publishing the microphone track after connection
- Adding `AutoSubscribe.AUDIO_ONLY` to `RoomInputOptions` (resulted in a TypeError)
- Checking track publications on both frontend and backend

## Expected Behavior

1. User clicks the voice button and allows microphone access
2. Frontend publishes the microphone track to the LiveKit room
3. Python agent automatically subscribes to the audio track
4. Agent receives the "track_subscribed" event and can process user speech
5. STT processes user speech and the agent responds

## Actual Behavior

1. User allows the microphone and the track appears to be published (frontend logs show success)
2. Agent never receives the "track_subscribed" event
3. No audio is processed by STT
4. Agent can speak but cannot hear the user

Any guidance on the correct implementation for audio track subscription would be greatly appreciated!
Thanks for reaching out, Andrew. I'm an AI assistant and I'm here to help. Give me a moment while I research this topic.
Here are the steps and clarifications to address your issues with audio track publishing and subscription in LiveKit:

1. **Frontend Audio Publishing**
   - To publish the user's microphone track, you can use `room.localParticipant.setMicrophoneEnabled(true)` for basic microphone access. This is simpler and ensures the microphone track is published automatically.
   - If you need more control, you can manually publish tracks using `localParticipant.publishTrack()`. Your current implementation of manually publishing the microphone track after connection is valid, but make sure the track is not already being published automatically by the `audio` prop on `LiveKitRoom`.
   - Ensure that the `audio` prop on `LiveKitRoom` is correctly configured. If you are manually publishing tracks, you can set `audio={false}` to avoid conflicts.
2. **Backend Audio Subscription**
   - Backend agents automatically subscribe to all published tracks by default. You do not need to configure `AutoSubscribe` unless you want to limit subscriptions.
   - `AutoSubscribe` is configured during the agent's connection setup rather than in `RoomInputOptions`, which is why you got the TypeError. For example, you can pass `auto_subscribe=AutoSubscribe.SUBSCRIBE_ALL` or `AutoSubscribe.AUDIO_ONLY` when connecting the agent (typically as an argument to `ctx.connect()` in the entrypoint).
   - Ensure that your `track_subscribed` event handler is correctly implemented. The provided example in your backend code looks correct, but verify that the agent is properly connected to the room and that the event handler is registered before the frontend publishes its track.
3. **Track Subscription Flow**
   - The correct flow is:
     1. The frontend publishes the user's microphone track using `setMicrophoneEnabled(true)` or `publishTrack()`.
     2. The backend agent automatically subscribes to the published track unless configured otherwise.
     3. The `track_subscribed` event fires on the backend, allowing you to process the audio track.
   - If the `track_subscribed` event is not firing, verify the following:
     - The frontend is successfully publishing the audio track; check the logs to confirm.
     - The backend agent is properly connected to the room and has not disabled automatic subscription.
     - There are no errors or misconfigurations in the room setup that might prevent track subscription.

By following these steps, you should be able to resolve the issue and ensure that the backend agent can subscribe to and process the user's audio track.

Sources: Publishing tracks | LiveKit Docs | Stream music over WebRTC using React and WebAudio | Audio and video | LiveKit Docs | packages/react/etc/components-react.api.md
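
The publish/auto-subscribe/event flow described in step 3 can be sketched with a toy stand-in room (stdlib only; `Room`, `publish_track`, and the callback wiring here are hypothetical simplifications, not the LiveKit API). The point it illustrates: `track_subscribed` fires only after a track is actually published and auto-subscription is in effect, so if the frontend never publishes, the handler never runs.

```python
from typing import Callable

class Room:
    """Toy stand-in for a LiveKit room: auto-subscribes to every published track."""
    def __init__(self) -> None:
        self._handlers: dict[str, list[Callable]] = {}
        self.auto_subscribe = True  # agents subscribe to all tracks by default

    def on(self, event: str, handler: Callable) -> None:
        # Register a callback for an event, like @ctx.room.on("track_subscribed")
        self._handlers.setdefault(event, []).append(handler)

    def publish_track(self, kind: str, identity: str) -> None:
        # The agent only sees the track if auto-subscription is enabled.
        if self.auto_subscribe:
            for handler in self._handlers.get("track_subscribed", []):
                handler(kind, identity)

subscribed = []
room = Room()
room.on("track_subscribed", lambda kind, who: subscribed.append((kind, who)))

room.publish_track("audio", "user-123")  # the frontend publishes the mic track
print(subscribed)                        # [('audio', 'user-123')]
```

Note that the handler is registered before the track is published; in the simulation (as in a real room), an event delivered before any handler exists is simply lost.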
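
One subtle point worth double-checking in the handler itself (an assumption about the Python SDK: `track.kind` is an `rtc.TrackKind` enum value rather than a string, so `track.kind == "audio"` would never match): comparing an enum member against a plain string silently yields `False`. A stdlib sketch of the pitfall, with `TrackKind` as a stand-in for the real `rtc.TrackKind`:

```python
from enum import Enum

# Stand-in for rtc.TrackKind in the LiveKit Python SDK
# (hypothetical member values -- check the real enum in your SDK version).
class TrackKind(Enum):
    KIND_AUDIO = 1
    KIND_VIDEO = 2

kind = TrackKind.KIND_AUDIO

print(kind == "audio")               # False: an enum member never equals a plain string
print(kind == TrackKind.KIND_AUDIO)  # True: compare against the enum member instead
```

If this assumption holds for your SDK version, the `if track.kind == "audio":` branch in your handler would be skipped even once `track_subscribed` starts firing.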