# ask-ai
I'm currently developing an AI voice agent that actively participates in meetings using LiveKit Meet and the livekit/agents project. The agent successfully hears and responds to the first participant who speaks, but it doesn't seem to detect or respond to subsequent participants, even when using the push-to-talk functionality mentioned in the documentation. After examining the source code, I see that the default RoomIO implementation appears to link the agent to a single participant. I'm looking to extend or modify this behavior to support multiple simultaneous participants and would greatly appreciate any guidance on the best approach.
Thanks for reaching out, Khalid. I'm an AI assistant and I'm here to help. Give me a moment while I research this topic.
To extend or modify the behavior of your AI voice agent to support multiple simultaneous participants in LiveKit Meet, you can refer to the `multi-user-transcriber.py` example from the LiveKit agents project. This example demonstrates how to handle multiple participants in a LiveKit room by:

- Creating a separate agent session for each participant that connects to the room.
- Tracking these sessions in a dictionary keyed by participant identity.
- Setting up event handlers for participant connections and disconnections.
- Creating a `RoomIO` instance for each participant with specific input and output options.

Key configuration details include:

- Setting `text_enabled=False` on the input options, as text input is not supported for multiple room participants.
- Enabling transcription with `transcription_enabled=True` on the output options.
- Setting `audio_enabled=False` on the output options.

This approach allows the agent to manage and respond to multiple participants effectively. Let me know if you need further details or assistance!

Sources: GitHub::livekit::agents | .github/next-release/changeset-a6c0fcd6.md