rich-monitor-70665
05/15/2025, 2:35 PM tall-airline-16756
05/15/2025, 2:35 PMturn_detection
parameter in llm=openai.realtime.RealtimeModel()
can be configured with the following options:
• *Server VAD (default):*
```
turn_detection=TurnDetection(
    type="server_vad",
    threshold=0.5,
    prefix_padding_ms=300,
    silence_duration_ms=500,
    create_response=True,
    interrupt_response=True,
)
```
• *Semantic VAD:*
```
turn_detection=TurnDetection(
    type="semantic_vad",
    eagerness="auto",  # can also be "low", "medium", or "high"
    create_response=True,
    interrupt_response=True,
)
```
For `semantic_vad`, the `eagerness` property can be set to:
• `auto` (default) - equivalent to `medium`
• `low` - lets users take their time speaking
• `medium` - a balanced approach
• `high` - chunks audio as soon as possible
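As an illustration, here is how one of these configurations might be wired into the model (a minimal sketch, assuming the `livekit-agents` OpenAI plugin is installed; the `TurnDetection` import path is an assumption on my part, so check the linked docs for the exact location):

```python
from livekit.plugins import openai
# Assumption: TurnDetection is re-exported from the OpenAI SDK's realtime session types.
from openai.types.beta.realtime.session import TurnDetection

# Semantic VAD: the model infers from the content of speech whether the
# user has finished their turn, rather than relying on silence alone.
model = openai.realtime.RealtimeModel(
    turn_detection=TurnDetection(
        type="semantic_vad",
        eagerness="low",          # give users time to finish their thought
        create_response=True,     # auto-generate a reply after each turn
        interrupt_response=True,  # allow the user to barge in mid-response
    )
)
```

Swapping `eagerness` to `"high"` would make the agent respond as soon as possible, at the cost of occasionally cutting users off.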
If you want to set the default behavior explicitly, use the `server_vad` configuration shown above.
Sources: OpenAI Realtime API integration guide | LiveKit Docs