# ask-ai
w
Does the Agent session automatically disconnect from there being no conversation detected by the VAD?
t
Thanks for reaching out, Arthur. I'm an AI assistant and I'm here to help. Give me a moment while I research this topic.
Just a heads-up: the Agent session doesn't automatically disconnect when the VAD detects no conversation. However, LiveKit offers a `user_away_timeout` option in the `AgentSession` settings. It sets the user's state to "away" after a period of silence (15 seconds by default), and you can customize this timeout or disable it entirely by setting `user_away_timeout=None`. When the user's state switches to "away", a `user_state_changed` event fires. That event is the place to add your own logic, such as checking in with the user or closing the session if they don't respond after a few attempts. Just keep in mind that automatic disconnection isn't enabled by default; you need to implement it yourself using event handlers. Sources: livekit-agents/livekit/agents/voice/agent_activity.py | Events and error handling | LiveKit Docs
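The pattern described above can be sketched in plain Python. This is a hypothetical simulation of the away-timeout logic, not the LiveKit API: the `AwayTracker` class and its methods are invented for illustration; only the `user_away_timeout` name and the 15-second default come from the answer.

```python
import asyncio

class AwayTracker:
    """Hypothetical stand-in for the behavior described above: after
    `user_away_timeout` seconds without speech, the user's state flips
    to "away" and a state-changed callback fires. Not the LiveKit code."""

    def __init__(self, user_away_timeout=15.0, on_state_changed=None):
        self.user_away_timeout = user_away_timeout  # None disables the timer
        self.on_state_changed = on_state_changed or (lambda old, new: None)
        self.state = "listening"
        self._timer = None

    def _set_state(self, new_state):
        if new_state != self.state:
            old, self.state = self.state, new_state
            self.on_state_changed(old, new_state)  # the "event" firing

    def user_spoke(self):
        """Call whenever VAD detects speech: resets the pending away timer."""
        if self._timer is not None:
            self._timer.cancel()
        self._set_state("speaking")
        if self.user_away_timeout is not None:
            loop = asyncio.get_running_loop()
            self._timer = loop.call_later(
                self.user_away_timeout, self._set_state, "away"
            )

async def demo():
    events = []
    tracker = AwayTracker(
        user_away_timeout=0.05,  # shortened from the 15 s default for the demo
        on_state_changed=lambda old, new: events.append((old, new)),
    )
    tracker.user_spoke()
    await asyncio.sleep(0.1)  # silence longer than the timeout -> "away"
    return events

print(asyncio.run(demo()))  # [('listening', 'speaking'), ('speaking', 'away')]
```

The key design point, as in the real session, is that each detected speech segment cancels and re-arms the timer, so "away" only fires after a full uninterrupted silence of `user_away_timeout` seconds.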
w
Why is that event only being fired once? It fires, I talk (which should reset it), and then when I stop talking again for the specified time, the event should be emitted again. Is this not the case?
t
Hey there! The `user_state_changed` event fires whenever the user's speaking state changes, based on what the Voice Activity Detection (VAD) module detects. However, it won't fire repeatedly for back-to-back speech segments unless there's enough silence between them to signal an "end of speech." This is controlled by the `min_endpointing_delay` setting, which sets the minimum silence duration required before the user's state switches back to "listening." If a pause is shorter than this delay (500 ms by default), the VAD doesn't treat it as the end of speech, so the state stays "speaking" and no new `user_state_changed` event is triggered. To catch multiple `user_state_changed` events, make sure there's enough silence (longer than `min_endpointing_delay`) between speech segments, and tune this parameter to fit your use case if needed. Sources: Events and error handling | LiveKit Docs | AI voice agents | LiveKit Docs | Improving voice AI's turn detection with transformers | livekit-agents/livekit/agents/voice/audio_recognition.py