witty-kangaroo-6889
08/18/2025, 1:57 AM

tall-airline-16756
08/18/2025, 1:58 AM
You can use the setLogExtension method. Here’s a quick example to get you started:
setLogExtension((level: LogLevel, msg: string, context: object) => {
  const enhancedContext = { ...context, timeStamp: Date.now() };
  if (level >= LogLevel.debug) {
    console.log(level, msg, enhancedContext);
  }
});
On top of that, you can also collect metrics to get a better picture of those LLM interactions:
@session.on("metrics_collected")
def _on_metrics_collected(ev: MetricsCollectedEvent):
    metrics.log_metrics(ev.metrics)
Using these methods, you’ll be able to log everything LiveKit sends to the LLM and all the responses it gets back. If you have any questions or want to dive deeper, just let me know!
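One thing to keep in mind: "metrics_collected" fires for STT, TTS, and end-of-utterance metrics too, not only the LLM. If you only care about LLM traffic, you can filter on the metric type. Here's a stdlib-only sketch of that filtering idea; the LLMMetrics and TTSMetrics classes below are local stand-ins for the ones livekit.agents.metrics delivers, so treat the field names as illustrative:

```python
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("llm-logger")

# Local stand-ins for the metric classes that arrive on "metrics_collected";
# in livekit-agents these come from livekit.agents.metrics.
@dataclass
class LLMMetrics:
    duration: float
    total_tokens: int

@dataclass
class TTSMetrics:
    duration: float

def on_metrics_collected(m):
    # Only log LLM metrics; other pipeline stages emit on the same event.
    if isinstance(m, LLMMetrics):
        msg = f"LLM took {m.duration:.2f}s, {m.total_tokens} tokens"
        logger.info(msg)
        return msg
    return None

on_metrics_collected(LLMMetrics(duration=1.2, total_tokens=314))  # logged
on_metrics_collected(TTSMetrics(duration=0.4))                    # ignored
```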
Sources: README.md | Capturing metrics | LiveKit Docs

witty-kangaroo-6889
08/18/2025, 2:00 AM

tall-airline-16756
08/18/2025, 2:00 AM
import logging
import asyncio

from dotenv import load_dotenv
from livekit.agents import JobContext, WorkerOptions, cli
from livekit.agents.voice import Agent, AgentSession
from livekit.plugins import openai, deepgram, silero
from livekit.agents.metrics import LLMMetrics

logger = logging.getLogger("llm-logger")
logger.setLevel(logging.INFO)

load_dotenv()

class LLMMetricsAgent(Agent):
    def __init__(self) -> None:
        super().__init__(
            instructions="""You are a helpful agent.""",
            stt=deepgram.STT(),
            llm=openai.LLM(),
            tts=openai.TTS(),
            vad=silero.VAD.load(),
        )

        # The LLM emits "metrics_collected" from a synchronous callback,
        # so schedule the async handler as a task instead of awaiting it.
        def sync_wrapper(metrics: LLMMetrics):
            asyncio.create_task(self.on_metrics_collected(metrics))

        self.llm.on("metrics_collected", sync_wrapper)

    async def on_metrics_collected(self, metrics: LLMMetrics) -> None:
        logger.info(f"LLM Metrics - Duration: {metrics.duration} seconds")
        logger.info(f"LLM Metrics - Tokens: {metrics.total_tokens}")
        logger.info(f"LLM Metrics - Request ID: {metrics.request_id}")

async def entrypoint(ctx: JobContext):
    session = AgentSession()
    await session.start(
        agent=LLMMetricsAgent(),
        room=ctx.room,
    )

if __name__ == "__main__":
    cli.run_app(WorkerOptions(entrypoint_fnc=entrypoint))
Here’s what this code does for you:
• Creates an LLMMetricsAgent class that keeps track of and logs key LLM interaction metrics
• Implements an on_metrics_collected method to capture and neatly log those metrics as they come in
• Defines an entrypoint function that kicks off the agent session
• Uses the OpenAI LLM plugin to handle the processing
Feel free to tweak the logging by adding more metrics or changing the format to match what you need. If you have any questions or want to chat more about customizing this, just let me know—I’m here to help!
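If you're wondering why the code wraps the async handler in sync_wrapper: event emitters like llm.on invoke their callbacks synchronously, so the callback has to hand the coroutine off to the running event loop rather than await it. Here's a minimal stdlib-only sketch of that pattern, with the emitter reduced to a plain function call standing in for LiveKit's:

```python
import asyncio

results = []

async def on_event(value: str) -> None:
    # The real work lives in an async handler.
    results.append(f"handled {value}")

def sync_wrapper(value: str) -> None:
    # Synchronous callback: schedule the coroutine on the running loop.
    asyncio.get_running_loop().create_task(on_event(value))

async def main() -> None:
    sync_wrapper("metrics")  # an emitter would call this synchronously
    await asyncio.sleep(0)   # yield once so the scheduled task runs

asyncio.run(main())
print(results)  # -> ['handled metrics']
```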
Sources: pipeline-llm/openai_llm.py

witty-kangaroo-6889
08/18/2025, 2:13 AM

tall-airline-16756
08/18/2025, 2:13 AM
import logging
import asyncio

from dotenv import load_dotenv
from livekit.agents import JobContext, WorkerOptions, cli
from livekit.agents.voice import Agent, AgentSession
from livekit.plugins import openai, deepgram, silero
from livekit.agents.metrics import LLMMetrics

logger = logging.getLogger("llm-logger")
logger.setLevel(logging.INFO)

load_dotenv()

class LLMMetricsAgent(Agent):
    def __init__(self) -> None:
        super().__init__(
            instructions="""You are a helpful agent.""",
            stt=deepgram.STT(),
            llm=openai.LLM(),
            tts=openai.TTS(),
            vad=silero.VAD.load(),
        )

        # The LLM emits "metrics_collected" from a synchronous callback,
        # so schedule the async handler as a task instead of awaiting it.
        def sync_wrapper(metrics: LLMMetrics):
            asyncio.create_task(self.on_metrics_collected(metrics))

        self.llm.on("metrics_collected", sync_wrapper)

    async def on_metrics_collected(self, metrics: LLMMetrics) -> None:
        logger.info(f"LLM Metrics - Duration: {metrics.duration} seconds")
        logger.info(f"LLM Metrics - Tokens: {metrics.total_tokens}")
        logger.info(f"LLM Metrics - Request ID: {metrics.request_id}")

    async def send_to_llm(self, text: str):
        # Custom helper, not part of the Agent API: call it from your own
        # workflow wherever you want the LLM round trip logged.
        logger.info(f"Text sent to LLM: {text}")
        response = await self.llm.generate(text)
        logger.info(f"Response text from LLM: {response}")
        return response

async def entrypoint(ctx: JobContext):
    session = AgentSession()
    await session.start(
        agent=LLMMetricsAgent(),
        room=ctx.room,
    )

if __name__ == "__main__":
    cli.run_app(WorkerOptions(entrypoint_fnc=entrypoint))
With this update, you’ll have logs for:
• The exact text sent to the LLM, via logger.info(f"Text sent to LLM: {text}")
• The response text you get back from the LLM, via logger.info(f"Response text from LLM: {response}")
If you want me to help with anything else or add more tweaks, just let me know—I’m here to help!

witty-kangaroo-6889
08/18/2025, 2:15 AM

tall-airline-16756
08/18/2025, 2:16 AM
The send_to_llm method isn’t spelled out in the docs, but usually it’s used within the agent’s workflow whenever some text needs to be sent to the LLM for processing. This might happen during user chats, while carrying out tasks, or at other moments the agent’s logic decides are the right time.
If you have a particular situation or workflow you’re thinking about, just let me know! I’d be happy to help clear things up or guide you through it.
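For concreteness, here's a stdlib-only sketch of where a helper like send_to_llm could sit in a workflow. The LLM call is stubbed out with a local function, and the send_to_llm name just mirrors the custom helper above rather than any documented LiveKit API:

```python
import asyncio
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("llm-logger")

async def fake_llm_generate(text: str) -> str:
    # Stand-in for the real LLM call.
    return f"echo: {text}"

async def send_to_llm(text: str) -> str:
    # Log the outbound text, call the model, log the reply.
    logger.info("Text sent to LLM: %s", text)
    response = await fake_llm_generate(text)
    logger.info("Response text from LLM: %s", response)
    return response

async def handle_user_turn(user_text: str) -> str:
    # One natural place to invoke the helper: right after a user
    # utterance has been transcribed and needs an LLM response.
    return await send_to_llm(user_text)

reply = asyncio.run(handle_user_turn("hello"))
print(reply)  # -> echo: hello
```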