# ask-ai
a
Explain to me the code flow: how are the events received from STT passed to the LLM and then to the TTS?
t
Thanks for reaching out, Aditya. I'm an AI assistant and I'm here to help. Give me a moment while I research this topic.
Could you provide more details about your setup or the specific context? For example, are you using a particular LiveKit SDK or integration, and do you have specific components or code snippets in mind? This will help in providing a more accurate explanation.
a
It's general.
t
Hey there! Let me break down how LiveKit moves events from speech to text, through the LLM, and back to speech.

- **STT node**: incoming audio is transcribed into text by the `stt_node()` function.
- **LLM node**: the transcribed text is passed to the language model through the `llm_node()` function, which generates a response.
- **TTS node**: the response text is synthesized back into speech by the `tts_node()` function.

Here's how it all comes together behind the scenes:

1. The system shifts into "thinking mode" while it prepares a response.
2. It runs LLM inference by calling `perform_llm_inference()` with the agent's `llm_node`.
3. The LLM output is split into two streams using `tee`.
4. If audio output is enabled, the text is converted to speech by calling `perform_tts_inference()` with the agent's `tts_node`.
5. The system waits for a go-ahead before playing the response back.

Because each of these nodes can be overridden, developers can customize the pipeline at any point.

Sources: Voice pipeline nodes | LiveKit Docs | GitHub::livekit::agents | livekit-agents/livekit/agents/voice/agent_activit...
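The flow above can be sketched independently of LiveKit's real classes as three async stages piped together, with the LLM output fanned out to two consumers. Everything here is illustrative: the stage names mirror the docs (`stt_node`, `llm_node`, `tts_node`), but the signatures and the `tee_stream` helper are hypothetical stand-ins, not the actual livekit-agents API.

```python
import asyncio
from typing import AsyncIterator

# Illustrative stand-ins for the pipeline nodes (hypothetical signatures):
# each stage consumes one async stream and produces the next.

async def stt_node(audio: AsyncIterator[bytes]) -> AsyncIterator[str]:
    # Pretend each audio chunk transcribes to one piece of text.
    async for chunk in audio:
        yield chunk.decode()

async def llm_node(text: AsyncIterator[str]) -> AsyncIterator[str]:
    # Pretend the LLM emits one response token per input token.
    async for token in text:
        yield f"reply({token})"

async def tts_node(text: AsyncIterator[str]) -> AsyncIterator[bytes]:
    # Pretend each response token synthesizes to one audio frame.
    async for token in text:
        yield token.encode()

async def tee_stream(src: AsyncIterator, n: int = 2):
    # Minimal async 'tee': fan one stream out to n readers via queues,
    # mirroring step 3 where the LLM output is split into two streams.
    queues = [asyncio.Queue() for _ in range(n)]

    async def pump():
        async for item in src:
            for q in queues:
                q.put_nowait(item)
        for q in queues:
            q.put_nowait(None)  # sentinel: end of stream

    asyncio.create_task(pump())

    async def reader(q: asyncio.Queue):
        while (item := await q.get()) is not None:
            yield item

    return [reader(q) for q in queues]

async def main():
    async def mic():  # fake audio source standing in for the user's mic
        for chunk in (b"hello", b"world"):
            yield chunk

    text = stt_node(mic())                       # audio -> text
    llm_out = llm_node(text)                     # text -> response tokens
    transcript_stream, tts_in = await tee_stream(llm_out)  # split output
    transcript = [t async for t in transcript_stream]
    audio_out = [f async for f in tts_node(tts_in)]        # tokens -> audio
    return transcript, audio_out

transcript, audio_out = asyncio.run(main())
print(transcript)  # ['reply(hello)', 'reply(world)']
print(audio_out)   # [b'reply(hello)', b'reply(world)']
```

The key design point the real pipeline shares with this sketch is that every stage is a streaming transform, so downstream work (speech synthesis) can start before upstream work (LLM generation) has finished.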