late-helmet-58096 - 09/10/2025, 6:35 AM
tall-airline-16756 - 09/10/2025, 6:36 AM
tall-airline-16756 - 09/10/2025, 6:36 AM
You can subscribe to the metrics_collected event in your agent code to capture and log metrics as they happen. For example, in Python:
from livekit.agents import metrics

@session.on("metrics_collected")
def _on_metrics_collected(ev: metrics.MetricsCollectedEvent):
    metrics.log_metrics(ev.metrics)  # Logs metrics as they are collected
• View Runtime Logs: You can easily stream logs in real time using the LiveKit CLI:
lk agent logs
• Forward Logs to External Services: To send runtime logs to Datadog, set the DATADOG_TOKEN as a secret:
lk agent update-secrets --secrets "DATADOG_TOKEN=your-client-token"
• Aggregate Metrics: The UsageCollector helper class is great for summarizing metrics during a session:
usage_collector = metrics.UsageCollector()

@session.on("metrics_collected")
def _on_metrics_collected(ev: metrics.MetricsCollectedEvent):
    usage_collector.collect(ev.metrics)

summary = usage_collector.get_summary()
logger.info(f"Usage: {summary}")
• Advanced Monitoring: You might also consider integrating with OpenTelemetry for trace-based observability if you want to go deeper.
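To make the OpenTelemetry suggestion concrete, here is a minimal sketch (not from the thread) of wiring an OTLP trace exporter; it assumes the opentelemetry-sdk and OTLP HTTP exporter packages are installed, that a collector is listening on the endpoint shown, and that your livekit-agents version provides the set_tracer_provider helper:

```python
# Sketch: route agent traces to an OTLP collector (endpoint is an assumption).
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

from livekit.agents.telemetry import set_tracer_provider

provider = TracerProvider()
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:4318/v1/traces"))
)
# Hand the provider to livekit-agents so its spans flow to your backend.
set_tracer_provider(provider)
```

Call this once at startup, before the agent session begins, so spans from the session are captured from the first turn.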
If you have any questions or would like me to walk you through any of these steps, please don’t hesitate to ask—I’m here to help!
Sources: Logs, metrics, and telemetry | LiveKit Docs | examples/src/realtime_agent.ts | Log collection | LiveKit Docs | metrics/send-metrics-to-3p/metrics_server/README.md
late-helmet-58096 - 09/10/2025, 6:39 AM
tall-airline-16756 - 09/10/2025, 6:40 AM
You can use the OnParticipantConnected and OnParticipantDisconnected events to record when the agent joins and leaves the room. For instance, in Go:
room := lksdk.ConnectToRoom(url, lksdk.ConnectInfo{...}, &lksdk.RoomCallback{
    OnParticipantConnected: func(rp *lksdk.RemoteParticipant) {
        if rp.Kind() == lksdk.ParticipantAgent {
            agentJoinedAt = time.Now()
        }
    },
    OnParticipantDisconnected: func(rp *lksdk.RemoteParticipant) {
        if rp.Kind() == lksdk.ParticipantAgent {
            agentLeftAt = time.Now()
            duration := agentLeftAt.Sub(agentJoinedAt)
            log.Printf("Agent standing time: %v", duration)
        }
    },
})
• Use Prometheus Metrics: You can enable Prometheus scraping to expose session duration metrics such as session_duration_ms, which are helpful for observability.
• Aggregate Metrics: The UsageCollector class is useful for summarizing metrics and correlating them with timestamps to derive session durations.
• Cloud Billing: You can also check LiveKit Cloud's agent session minutes to confirm standing time, as it reflects active connection time.
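The join/leave bookkeeping from the Go snippet above can also be sketched as a small stdlib-only Python helper; AgentTimer is an illustrative name, not a LiveKit API:

```python
# Illustrative sketch: record when the agent participant joins and leaves,
# then report standing time, mirroring the Go callbacks above.
import datetime

class AgentTimer:
    def __init__(self):
        self.joined_at = None
        self.left_at = None

    def on_join(self, when: datetime.datetime) -> None:
        self.joined_at = when

    def on_leave(self, when: datetime.datetime) -> datetime.timedelta:
        self.left_at = when
        return self.left_at - self.joined_at

timer = AgentTimer()
timer.on_join(datetime.datetime(2025, 9, 10, 6, 36, 0))
duration = timer.on_leave(datetime.datetime(2025, 9, 10, 6, 41, 30))
print(duration.total_seconds())  # 330.0
```

In a real agent you would call on_join/on_leave from the participant-connected and participant-disconnected callbacks, filtered to the agent participant as in the Go example.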
If you'd like, I’m here to help you with more details or guide you through implementing any of these methods—just let me know!
Sources: cmd/lk/perf.go | pkg/telemetry/prometheus/rooms.go | Capturing metrics | LiveKit Docs | Capturing metrics | LiveKit Docs | Docs::Home
late-helmet-58096 - 09/10/2025, 8:58 AM
tall-airline-16756 - 09/10/2025, 8:59 AM
It sounds like output_audio_tokens shows as 0 in your UsageSummary. This usually happens because of how the UsageCollector gathers metrics. While you can find audio token details (like outputTokenDetails.audioTokens) in the raw RealtimeModelMetrics events from the Gemini Live API and GPT Realtime API, the UsageCollector isn't always set up to include these in the final summary.
Here are a couple of steps that might help:
• Double-check that your UsageCollector is properly processing and summing the outputTokenDetails.audioTokens values from the metrics events.
• If you're working with the JavaScript version, you may need to extend the UsageSummary interface with fields like llmInputAudioTokens and llmOutputAudioTokens, and update the collect method to include these values in the totals.
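In Python terms, the aggregation the second bullet describes looks like this stdlib-only sketch; the event and field names mirror the thread, but this is a mock, not the actual UsageCollector:

```python
# Sketch of an extended collector that also folds audio tokens into the
# summary. RealtimeMetricsEvent/ExtendedSummary are illustrative mocks.
from dataclasses import dataclass

@dataclass
class RealtimeMetricsEvent:
    input_tokens: int = 0
    output_tokens: int = 0
    output_audio_tokens: int = 0  # e.g. outputTokenDetails.audioTokens

@dataclass
class ExtendedSummary:
    llm_input_tokens: int = 0
    llm_output_tokens: int = 0
    llm_output_audio_tokens: int = 0  # the field a stock summary may omit

class ExtendedCollector:
    def __init__(self):
        self.summary = ExtendedSummary()

    def collect(self, ev: RealtimeMetricsEvent) -> None:
        self.summary.llm_input_tokens += ev.input_tokens
        self.summary.llm_output_tokens += ev.output_tokens
        # The key addition: include audio tokens in the running totals.
        self.summary.llm_output_audio_tokens += ev.output_audio_tokens

collector = ExtendedCollector()
collector.collect(RealtimeMetricsEvent(input_tokens=10, output_tokens=40, output_audio_tokens=25))
collector.collect(RealtimeMetricsEvent(input_tokens=5, output_tokens=20, output_audio_tokens=15))
print(collector.summary.llm_output_audio_tokens)  # 40
```

The same shape applies in TypeScript: add the audio-token fields to the summary type and accumulate them inside collect.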
Please don't hesitate to reach out if you'd like some more detailed help with updating the UsageCollector, or if you run into any other questions—I'm here to support you!
Sources: plugins/google/src/beta/realtime/realtime_api.ts | livekit-plugins/livekit-plugins-google/livekit/plugins/google/beta/realtime...
late-helmet-58096 - 09/10/2025, 9:54 AM
tall-airline-16756 - 09/10/2025, 9:54 AM
The UsageCollector in LiveKit is designed to collect LLM output audio tokens when using RealtimeModelMetrics. To make sure you're capturing this data, here's what you can do:
1. Use a real-time LLM, like openai.realtime.RealtimeModel(), in your agent setup.
2. Attach the UsageCollector to the metrics_collected event of your agent session.
3. When you need it, retrieve the summary using get_summary().
Here's an example in Python to guide you:
usage_collector = metrics.UsageCollector()

@session.on("metrics_collected")
def on_metrics(ev: metrics.MetricsCollectedEvent):
    usage_collector.collect(ev.metrics)

# Later, access the collected audio tokens
summary = usage_collector.get_summary()
print(f"Output audio tokens: {summary.llm_output_audio_tokens}")
If you have any questions or need a bit more help, please don’t hesitate to reach out—I’m here for you!
Sources: livekit-plugins/livekit-plugins-aws/livekit/plugins/aws/experimental/realti...
late-helmet-58096 - 09/10/2025, 1:25 PM
tall-airline-16756 - 09/10/2025, 1:26 PM
The llm_output_audio_tokens field in UsageSummary shows 0 because it only counts audio tokens generated directly by the LLM itself. If your agent uses a typical pipeline where the LLM generates text and a separate TTS system converts that text to speech, those audio tokens won't appear in llm_output_audio_tokens.
To see values greater than zero in llm_output_audio_tokens, please make sure that:
• You're using a real-time LLM model that natively produces audio, such as RealtimeModel configured for audio output.
• The model is set up to output audio directly, rather than relying on a separate TTS system.
If your agent relies on TTS, the spoken output won't show up in llm_output_audio_tokens, but you can track TTS usage through tts_characters_count.
If anything is unclear or you’d like more help, just let me know—I’m here to support you!
Sources: livekit-plugins/livekit-plugins-anthropic/livekit/plugins/anthropic/llm.py
late-helmet-58096 - 09/10/2025, 1:29 PM
tall-airline-16756 - 09/10/2025, 1:30 PM
It's completely normal for llm_output_audio_tokens and tts_characters_count to be zero. The UsageCollector still keeps track of the other metrics as usual. Here's a breakdown of how it works:
• LLM Metrics: The collector sums llm_prompt_tokens, llm_prompt_cached_tokens, and llm_completion_tokens from LLMMetrics events, regardless of the audio token values.
• TTS Metrics: Even if tts_characters_count is zero, the collector may still record tts_audio_duration if audio was produced another way. If no TTS processing happened, both values simply stay at zero.
• Zero Handling: The UsageCollector initializes all fields in UsageSummary at zero and adds up metrics as they arrive, so zero values are perfectly normal and don't cause any issues.
In your situation, seeing zero llm_output_audio_tokens and tts_characters_count likely means no audio was generated directly by the LLM or via TTS. Other metrics like llm_prompt_tokens and llm_completion_tokens still give a good picture of the LLM's usage.
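The zero-handling described above can be sketched with a stdlib-only mock; Summary and collect here are illustrative stand-ins, not the LiveKit classes:

```python
# Sketch: fields start at zero and events are summed as-is, so all-zero
# audio fields accumulate harmlessly alongside the text-token counters.
from dataclasses import dataclass

@dataclass
class Summary:
    llm_prompt_tokens: int = 0
    llm_completion_tokens: int = 0
    llm_output_audio_tokens: int = 0
    tts_characters_count: int = 0

def collect(summary: Summary, **metrics: int) -> None:
    # Add each reported metric onto the matching summary field.
    for name, value in metrics.items():
        setattr(summary, name, getattr(summary, name) + value)

s = Summary()
collect(s, llm_prompt_tokens=120, llm_completion_tokens=48)  # text-only turn
collect(s, llm_prompt_tokens=80, llm_completion_tokens=30, llm_output_audio_tokens=0)
print(s.llm_prompt_tokens, s.llm_output_audio_tokens)  # 200 0
```

Nothing special happens when a field stays at zero: it was initialized to zero and no event contributed to it, which is exactly the text-plus-TTS-absent case described above.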
If you have any more questions or want me to walk you through these metrics further, just let me know—I’m here to help!
Sources: livekit-plugins/livekit-plugins-anthropic/livekit/plugins/anthropic/llm.py | Logs, metrics, and telemetry | LiveKit Docs