prehistoric-printer-73777
08/15/2025, 9:51 PM

tall-airline-16756
08/15/2025, 9:52 PM

To hook into and print the LLM response in LiveKit, you can override the llm_node method in your custom Agent class. This way, you get to peek at and handle the LLM's output before it moves along the pipeline.
Here’s a friendly example to get you started:
import logging

from dotenv import load_dotenv
from livekit.agents import JobContext, WorkerOptions, cli
from livekit.agents.voice import Agent, AgentSession
from livekit.plugins import deepgram, openai, silero

logger = logging.getLogger("llm-response-logger")
logger.setLevel(logging.INFO)

load_dotenv()

class PrintingAgent(Agent):
    def __init__(self):
        super().__init__(
            instructions="You are a helpful agent.",
            stt=deepgram.STT(),
            llm=openai.LLM(),
            tts=openai.TTS(),
            vad=silero.VAD.load(),
        )

    async def on_enter(self):
        # Kick off an initial greeting when the agent joins the session.
        self.session.generate_reply()

    async def llm_node(self, chat_ctx, tools, model_settings=None):
        async def process_stream():
            async with self.llm.chat(chat_ctx=chat_ctx, tools=tools, tool_choice=None) as stream:
                async for chunk in stream:
                    if chunk is None:
                        continue
                    # Extract the text content from the chunk, if any
                    content = getattr(chunk.delta, "content", None) if hasattr(chunk, "delta") else str(chunk)
                    if content:
                        logger.info(f"LLM Response: {content}")
                    # Pass the chunk along unchanged so the rest of the pipeline (e.g. TTS) still gets it
                    yield chunk

        return process_stream()

async def entrypoint(ctx: JobContext):
    session = AgentSession()
    await session.start(
        agent=PrintingAgent(),
        room=ctx.room,
    )

if __name__ == "__main__":
    cli.run_app(WorkerOptions(entrypoint_fnc=entrypoint))
Quick rundown:
• The PrintingAgent class takes charge by overriding the llm_node method.
• Inside, the process_stream async generator walks through each chunk of the LLM's output.
• We grab the content from each chunk and log it, so you can see what the LLM is saying.
• Then we yield those chunks so the rest of the pipeline keeps running smoothly.
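By the way, if you'd rather not call self.llm.chat() yourself, you can delegate to the stock implementation via Agent.default.llm_node() and just tap the stream as it passes through. Here's a minimal sketch of that variant; the consolidated end-of-turn log line is my own addition, and it assumes the same chunk/delta shape as above:

    # Inside PrintingAgent, replacing the llm_node override above:
    async def llm_node(self, chat_ctx, tools, model_settings):
        full_response = ""
        # Delegate to the default pipeline step and observe each chunk as it streams by
        async for chunk in Agent.default.llm_node(self, chat_ctx, tools, model_settings):
            content = getattr(getattr(chunk, "delta", None), "content", None)
            if content:
                full_response += content
            yield chunk
        # Log the complete response once the turn has finished streaming
        logger.info(f"Full LLM response: {full_response}")

This version also makes it easy to log the whole response in one line instead of chunk by chunk, which can be handier when you're skimming logs.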
With this setup, you'll see the LLM's response printed to the console before it gets passed to the TTS component. If you want to tweak anything or have questions, just give me a shout. I'm here to help!

prehistoric-printer-73777
08/15/2025, 9:57 PM

tall-airline-16756
08/15/2025, 9:58 PM

To end the call for all participants, you can use the delete_room method from the LiveKit API to wrap things up for everyone. Here's a simple example to get you started:
import logging

from livekit import api
from livekit.agents import get_job_context

logger = logging.getLogger("end-call")

async def end_call():
    # Only proceed when we're running inside an active job
    ctx = get_job_context()
    if ctx is None:
        return
    try:
        await ctx.api.room.delete_room(
            api.DeleteRoomRequest(room=ctx.room.name)
        )
    except Exception as e:
        logger.info(f"Error ending call: {e}")
Here's how it might look inside an Agent class:

from livekit import api
from livekit.agents import Agent, RunContext, function_tool, get_job_context

class MyAgent(Agent):
    @function_tool()
    async def end_call(self, ctx: RunContext):
        """End the call by deleting the room"""
        logger.info("Ending call")
        # Reach the server API through the job context rather than the RunContext
        job_ctx = get_job_context()
        await job_ctx.api.room.delete_room(api.DeleteRoomRequest(room=job_ctx.room.name))
Just a few things to keep in mind (there's a combined sketch after this list):
• Use try-except blocks to catch errors and log them, so you're not left guessing what went wrong.
• Make sure the function only runs when there's a valid job context available; it helps avoid unexpected issues.
• Logging helpful info makes debugging and monitoring a breeze.
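Putting those together, here's a small sketch of a guarded version of the tool that also says goodbye before hanging up. The farewell line is just an illustration, and it assumes session.say() is available in your livekit-agents version:

import logging

from livekit import api
from livekit.agents import Agent, RunContext, function_tool, get_job_context

logger = logging.getLogger("end-call")

class MyAgent(Agent):
    @function_tool()
    async def end_call(self, ctx: RunContext):
        """End the call by deleting the room"""
        # Illustrative farewell so the caller isn't cut off abruptly (assumes session.say())
        await ctx.session.say("Thanks for calling. Goodbye!")
        job_ctx = get_job_context()
        if job_ctx is None:
            # Not running inside a job; nothing to tear down
            return
        try:
            await job_ctx.api.room.delete_room(
                api.DeleteRoomRequest(room=job_ctx.room.name)
            )
        except Exception:
            logger.exception("Error ending call")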
If you have any questions or want me to walk you through anything else, just let me know—I’m here to help!
Sources: Agents telephony integration | LiveKit Docs | complex-agents/drive-thru/frontend/components/livekit/agent-control-bar/age... | Job lifecycle | LiveKit Docs