busy-monitor-72069
09/10/2025, 10:26 AM

tall-airline-16756
09/10/2025, 10:26 AM

tall-airline-16756
09/10/2025, 10:27 AM
The modalities setting isn't available in @livekit/rtc-node or livekit-server-sdk. To access and configure modalities, you'll want to use the LiveKit Agents SDK for Node.js.
Here’s a friendly example of how you can set up modalities in the Agents SDK:
import * as openai from '@livekit/agents-plugin-openai';

const model = new openai.realtime.RealtimeModel({
  modalities: ['text', 'audio'], // enables both text and audio responses
  instructions: 'Please provide brief and informative responses.',
  voice: 'alloy',
  turnDetection: { type: 'server_vad' }, // server-side voice activity detection
  temperature: 0.8,
  maxResponseOutputTokens: 2048,
});
You have two options for the `modalities` parameter:
• `['text', 'audio']`: This enables both text and audio responses, which is the default setting.
• `['text']`: This option gives you text-only responses, and if you want audio, you can add external TTS.
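If you go the text-only route, here's a rough sketch of what pairing the model with an external TTS plugin could look like. Note the plugin name (`@livekit/agents-plugin-elevenlabs`) and its default constructor are assumptions here — swap in whichever TTS plugin you actually have installed:

```typescript
// Sketch only: text-only realtime model plus a separate TTS plugin.
// The ElevenLabs plugin name and no-arg constructor are assumptions —
// substitute the TTS provider you use in your project.
import * as openai from '@livekit/agents-plugin-openai';
import * as elevenlabs from '@livekit/agents-plugin-elevenlabs';

// The realtime model produces text responses only.
const model = new openai.realtime.RealtimeModel({
  modalities: ['text'],
  instructions: 'Please provide brief and informative responses.',
});

// A separate TTS instance synthesizes audio from the model's text output.
const tts = new elevenlabs.TTS();
```

This split can be handy when you want a different voice than the realtime API offers, or finer control over audio synthesis.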
If you’re not using the Agents SDK, you won’t be able to access modalities just yet. But no worries — I’m here to help if you’d like guidance on setting this up or have any other questions. Just let me know!
Sources: web/src/components/modalities-selector.tsx | README.md | Adjusting Model Parameters | LiveKit Docs | Azure OpenAI Realtime API and LiveKit | LiveKit Docs | OpenAI Realtime API integration guide | LiveKit Docs