narrow-egg-50997
03/26/2025, 6:38 PM

tall-airline-16756
03/26/2025, 6:38 PM
• Text content: Accessible through the text property for accumulated text content, or the text_stream for streaming text content asynchronously.
• Audio content: Accessible through the audio property for accumulated audio content, or the audio_stream for streaming audio content asynchronously.
The content type is specified by the content_type property, which can be set to either 'text' or 'audio'.
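To make the shape concrete, here is a minimal self-contained sketch of consuming these properties. The names content_type, text, text_stream, and audio_stream come from the description above; the ContentItem type, the chunks helper, and the audio frame type are stand-ins for illustration, not the real plugin's types.

```typescript
// Minimal sketch (not the real SDK types): a content item shaped like the
// properties described above, with content_type selecting which stream to read.
type ContentItem = {
  content_type: 'text' | 'audio';
  text_stream: AsyncIterable<string>;      // streaming text chunks
  audio_stream: AsyncIterable<Uint8Array>; // streaming audio frames (hypothetical frame type)
};

// Helper to build a small async stream of text chunks for the demo.
async function* chunks(...parts: string[]): AsyncIterable<string> {
  for (const p of parts) yield p;
}

// Consume the appropriate stream based on content_type; the accumulated
// result mirrors what the `text` property would hold once streaming finishes.
async function collectText(item: ContentItem): Promise<string> {
  if (item.content_type !== 'text') throw new Error('not a text item');
  let accumulated = '';
  for await (const chunk of item.text_stream) accumulated += chunk;
  return accumulated;
}

// Usage: a text item whose stream yields two chunks.
const demoItem: ContentItem = {
  content_type: 'text',
  text_stream: chunks('Hello, ', 'world'),
  audio_stream: (async function* () {})(),
};
collectText(demoItem).then((text) => console.log(text)); // prints "Hello, world"
```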
Sources: livekit.plugins.openai.realtime API documentation

narrow-egg-50997
03/26/2025, 6:39 PM

tall-airline-16756
03/26/2025, 6:40 PM
You can use the say method of the VoicePipelineAgent class. This method allows you to play speech through the voice assistant.
The say method accepts the following parameters:
• `source`: A string, an LLMStream, or an AsyncIterable<string>.
• `allowInterruptions`: A boolean that defaults to true.
• `addToChatCtx`: A boolean that defaults to true.
Here is the method signature:
say(source, allowInterruptions?, addToChatCtx?): Promise<SpeechHandle>
This method returns a Promise that resolves to a SpeechHandle object. Additionally, audio content is handled through streams, with methods like _handle_response_audio_delta and _handle_response_audio_transcript_delta processing audio frames and transcripts for streaming.
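The signature above can be sketched as a small self-contained example. SpeechHandle and the agent class here are stand-ins written for illustration, not the real @livekit/agents exports, and the LLMStream source variant is omitted since it is a library type; the parameter names and defaults follow the list above.

```typescript
// Stand-in for the library's SpeechHandle; only illustrates the return shape.
interface SpeechHandle {
  id: string;
}

// Collect an async-iterable source into a single string.
async function collect(src: AsyncIterable<string>): Promise<string> {
  let out = '';
  for await (const chunk of src) out += chunk;
  return out;
}

// Hypothetical agent mirroring the say() signature described above.
class SketchVoicePipelineAgent {
  async say(
    source: string | AsyncIterable<string>,
    allowInterruptions: boolean = true, // default per the docs above
    addToChatCtx: boolean = true,       // default per the docs above
  ): Promise<SpeechHandle> {
    // A real agent would synthesize and play audio; here we just resolve a handle.
    const text = typeof source === 'string' ? source : await collect(source);
    return { id: `speech-${text.length}` };
  }
}

// Usage: play a fixed utterance that cannot be interrupted.
const agent = new SketchVoicePipelineAgent();
agent.say('Hello there', false).then((handle) => console.log(handle.id));
```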
Sources: VoicePipelineAgent | LiveKit Agents | livekit.plugins.openai.realtime API documentation