strong-tomato-79321
08/15/2025, 7:19 AM
tall-airline-16756
08/15/2025, 7:19 AM
LLMMetrics events. Each tool call creates its own metrics event, so by adding up the duration from these events along with the main call duration, you’ll get the full picture. Here’s a simple way to do it:
• Keep an eye out for MetricsCollectedEvent to collect all LLMMetrics instances during the session.
• Add up the duration from each LLMMetrics event, including those from tool calls.
Here’s an example to help you get started:
from livekit.agents import metrics, MetricsCollectedEvent

# Initialize a usage collector and a list for per-call LLM metrics
usage_collector = metrics.UsageCollector()
llm_metrics = []

# Listen for metrics events
@session.on("metrics_collected")
def _on_metrics_collected(ev: MetricsCollectedEvent):
    usage_collector.collect(ev.metrics)
    # Keep LLM metrics (including tool-call round trips) for later summing
    if isinstance(ev.metrics, metrics.LLMMetrics):
        llm_metrics.append(ev.metrics)

# After the session ends, get the summary
summary = usage_collector.get_summary()

# Manually sum the durations from all LLM metrics, including tool calls
total_duration = sum(m.duration for m in llm_metrics)
print(f"Total call duration including tool calls: {total_duration:.2f} seconds")
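If you want to sanity-check the summing logic outside a live session, here’s a small self-contained sketch. The stub classes below are stand-ins I made up to mimic the shape of the real metric objects (the real ones come from livekit.agents); only the duration field matters for this check:

```python
from dataclasses import dataclass

# Hypothetical stand-ins for the livekit.agents metric classes
@dataclass
class LLMMetricsStub:
    duration: float  # seconds spent in this LLM call

@dataclass
class TTSMetricsStub:
    duration: float  # non-LLM metrics should be ignored by the sum

def total_llm_duration(collected: list) -> float:
    # Mirror the isinstance filter from the handler above
    return sum(m.duration for m in collected if isinstance(m, LLMMetricsStub))

# One main LLM call plus two tool-call round trips, and one TTS event to ignore
collected = [
    LLMMetricsStub(1.2),
    LLMMetricsStub(0.4),
    LLMMetricsStub(0.6),
    TTSMetricsStub(2.0),
]
print(round(total_llm_duration(collected), 2))  # 2.2
```

The key point the sketch demonstrates: only LLMMetrics instances contribute to the total, so TTS or STT events collected in the same session don’t inflate the number.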
This way, you’ll capture both the LLM response time and any extra time spent on tool calls during the session. If you have any questions or want to dive deeper, just let me know—I’m here to help!
Sources: Capturing metrics | LiveKit Docs | pkg/stats/monitor.go
strong-tomato-79321
08/15/2025, 7:21 AM
tall-airline-16756
08/15/2025, 7:21 AM
metrics_collected event. This event gives you a MetricsCollectedEvent object that holds all the metrics data you need. Here’s a quick example to show you how it works:
from livekit.agents import AgentSession, MetricsCollectedEvent

# Initialize the session
session = AgentSession()

# Define a handler for metrics collection
@session.on("metrics_collected")
def handle_metrics(event: MetricsCollectedEvent):
    # Access the metrics object
    metrics_data = event.metrics
    # Log or process the metrics
    print(f"Received metrics: {metrics_data}")

# Start the session as usual; the handler runs whenever metrics arrive
The metrics you'll get include things like:
• Completion Tokens: Number of tokens generated by the LLM.
• Prompt Tokens: Number of tokens in the input prompt.
• Total Tokens: Combined tokens from completion and prompt.
• Tokens per Second: Rate of token generation.
• Timestamp: When the metrics were recorded.
• Request ID: Unique identifier for the request.
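To make those fields concrete, here’s a self-contained sketch that aggregates token counts across several metrics events. The LLMMetricsStub dataclass and summarize helper are illustrative stand-ins I’ve invented; the field names are assumed to match the list above, so double-check them against your SDK version:

```python
from dataclasses import dataclass

# Hypothetical stand-in for livekit.agents.metrics.LLMMetrics,
# limited to the fields discussed above (assumed names)
@dataclass
class LLMMetricsStub:
    request_id: str
    timestamp: float
    completion_tokens: int
    prompt_tokens: int
    total_tokens: int
    tokens_per_second: float

def summarize(events):
    # Aggregate token counts across all collected LLM metrics events
    return {
        "completion_tokens": sum(e.completion_tokens for e in events),
        "prompt_tokens": sum(e.prompt_tokens for e in events),
        "total_tokens": sum(e.total_tokens for e in events),
    }

events = [
    LLMMetricsStub("req-1", 1723705140.0, 120, 300, 420, 45.0),
    LLMMetricsStub("req-2", 1723705165.0, 80, 500, 580, 52.3),  # e.g. a tool-call turn
]
print(summarize(events))
# {'completion_tokens': 200, 'prompt_tokens': 800, 'total_tokens': 1000}
```

Since each tool call produces its own event with its own request_id, summing across events like this gives you session-level totals rather than just the last call’s numbers.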
This way, you can easily monitor and work with the metrics during your tool calls. If you have any questions or want to dive deeper, just let me know—I’m here to help!