enough-sunset-14907
06/06/2025, 1:50 AM
tall-airline-16756
06/06/2025, 1:50 AM
tall-airline-16756
06/06/2025, 1:51 AM
```python
from livekit.agents import ModelSettings, llm, FunctionTool, Agent
from typing import AsyncIterable

async def llm_node(
    self,
    chat_ctx: llm.ChatContext,
    tools: list[FunctionTool],
    model_settings: ModelSettings
) -> AsyncIterable[llm.ChatChunk]:
    # Insert custom preprocessing here
    async for chunk in Agent.default.llm_node(self, chat_ctx, tools, model_settings):
        # Insert custom postprocessing here
        yield chunk
```
2. Streaming TTS Node: LiveKit provides a StreamAdapter class that enables streaming TTS even if the underlying TTS implementation doesn't natively support it. This adapter uses a sentence tokenizer to split text for incremental synthesis. For example:
```python
from livekit import rtc
from livekit.agents import ModelSettings, Agent
from typing import AsyncIterable

async def tts_node(
    self,
    text: AsyncIterable[str],
    model_settings: ModelSettings
) -> AsyncIterable[rtc.AudioFrame]:
    # Insert custom text processing here
    async for frame in Agent.default.tts_node(self, text, model_settings):
        # Insert custom audio processing here
        yield frame
```
The StreamAdapter class handles the connection between the LLM output and TTS by:
- Breaking text into sentences using a sentence tokenizer
- Synthesizing each sentence individually
- Streaming the audio frames as they become available

This setup allows real-time streaming of both the LLM's text output and the corresponding synthesized speech, creating a seamless and responsive experience.

Sources: Voice pipeline nodes | LiveKit Docs | Tree View: | dse-workbench
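For reference, a minimal sketch of wrapping a non-streaming TTS engine in StreamAdapter might look like the following. The choice of the openai plugin, the basic SentenceTokenizer, and the exact keyword arguments are assumptions based on common LiveKit examples, not taken from the docs excerpt above:
```python
from livekit.agents import tokenize, tts
from livekit.plugins import openai  # assumption: any non-streaming TTS plugin could be used here

# Wrap a TTS engine that lacks native streaming in a StreamAdapter.
# The adapter splits incoming text into sentences and synthesizes each
# sentence as soon as it is complete, streaming the resulting audio frames.
streaming_tts = tts.StreamAdapter(
    tts=openai.TTS(),  # hypothetical choice of underlying TTS
    sentence_tokenizer=tokenize.basic.SentenceTokenizer(),
)

# The wrapped TTS can then be passed wherever a streaming TTS is expected,
# e.g. when constructing the agent session (assumption: 1.x-style API):
# session = AgentSession(tts=streaming_tts, ...)
```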
enough-sunset-14907
06/06/2025, 1:54 AM
tall-airline-16756
06/06/2025, 1:54 AM