To send a message from the frontend to the LLM inside a backend VoicePipelineAgent, and stream the response back, the flow works as follows:
• User audio is transmitted from the frontend to LiveKit's edge network via WebRTC.
• The audio is routed to your backend agent over low-latency connections.
• The agent uses the Agents framework integration to relay the audio to OpenAI's model via WebSocket.
• Speech from OpenAI is streamed back through WebSocket to the agent and then relayed to the user via WebRTC.
The framework provides a `chat` method that returns an `LLMStream` object, which can be used to push text and receive LLM responses in a streaming fashion.
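The streaming pattern above can be sketched with a minimal, stdlib-only mock. Note that `MockLLM` and its `LLMStream` class here are illustrative stand-ins, not the framework's actual implementation; they only mirror the shape of the API described above (a `chat` call that returns immediately, with response chunks consumed via async iteration):

```python
import asyncio
from typing import AsyncIterator, Iterable

class LLMStream:
    # Illustrative stand-in for the framework's LLMStream:
    # an async iterator that yields response chunks as they arrive.
    def __init__(self, chunks: Iterable[str]) -> None:
        self._chunks = list(chunks)

    def __aiter__(self) -> AsyncIterator[str]:
        return self._gen()

    async def _gen(self) -> AsyncIterator[str]:
        for chunk in self._chunks:
            await asyncio.sleep(0)  # yield control, simulating network latency
            yield chunk

class MockLLM:
    # Illustrative stand-in for the LLM: chat() returns a stream
    # immediately; chunks are produced as the model generates them.
    def chat(self, prompt: str) -> LLMStream:
        return LLMStream(["Hello", ", ", "world", "!"])

async def main() -> str:
    stream = MockLLM().chat("greet the user")
    parts = []
    async for chunk in stream:  # consume tokens incrementally
        parts.append(chunk)
    return "".join(parts)

print(asyncio.run(main()))  # prints "Hello, world!"
```

In the real agent, each chunk would typically be forwarded to the TTS stage (or to the frontend over a data channel) as it arrives, rather than buffered into a single string.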
Sources:
LiveKit Docs