# ask-ai
**User:** How can I get logs and metrics for realtime agents?
**Assistant:** Thanks for reaching out, biberim45. I'm an AI assistant and I'm here to help. Give me a moment while I research this topic.

To access logs and metrics for realtime agents in LiveKit, here's a simple guide you can follow:

• **Subscribe to metrics events:** Use the `metrics_collected` event in your agent code to capture and log metrics as they happen. For example, in Python:

```python
from livekit.agents import metrics

@session.on("metrics_collected")
def _on_metrics_collected(ev: metrics.MetricsCollectedEvent):
    metrics.log_metrics(ev.metrics)  # logs the collected metrics
```
• **View runtime logs:** Stream logs in real time using the LiveKit CLI:

```shell
lk agent logs
```
• **Forward logs to external services:** To send runtime logs to Datadog, set `DATADOG_TOKEN` as a secret:

```shell
lk agent update-secrets --secrets "DATADOG_TOKEN=your-client-token"
```
• **Aggregate metrics:** The `UsageCollector` helper class summarizes metrics over a session:

```python
import logging

from livekit.agents import metrics

logger = logging.getLogger("agent")

usage_collector = metrics.UsageCollector()

@session.on("metrics_collected")
def _on_metrics_collected(ev: metrics.MetricsCollectedEvent):
    usage_collector.collect(ev.metrics)

# Later, e.g. in a shutdown callback:
summary = usage_collector.get_summary()
logger.info(f"Usage: {summary}")
```
• **Advanced monitoring:** Consider integrating with OpenTelemetry for trace-based observability if you want to go deeper.

If you have any questions or would like me to walk you through any of these steps, please don't hesitate to ask.

Sources: Logs, metrics, and telemetry | LiveKit Docs | examples/src/realtime_agent.ts | Log collection | LiveKit Docs | metrics/send-metrics-to-3p/metrics_server/README.md
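The subscribe-and-aggregate pattern above can be exercised without the LiveKit SDK installed. Here is a minimal sketch that assumes only a session object dispatching `metrics_collected` events to registered handlers; the `Session` class below is a hypothetical stand-in for illustration, not the LiveKit API:

```python
from collections import defaultdict

class Session:
    """Hypothetical stand-in for an event-emitting agent session."""
    def __init__(self):
        self._handlers = defaultdict(list)

    def on(self, event):
        # Decorator that registers a handler for the named event
        def register(fn):
            self._handlers[event].append(fn)
            return fn
        return register

    def emit(self, event, payload):
        for fn in self._handlers[event]:
            fn(payload)

session = Session()
totals = defaultdict(int)

@session.on("metrics_collected")
def _on_metrics(metrics):
    # Accumulate every numeric field across events
    for key, value in metrics.items():
        totals[key] += value

# Simulate two metrics events as plain dicts
session.emit("metrics_collected", {"llm_prompt_tokens": 100, "llm_completion_tokens": 20})
session.emit("metrics_collected", {"llm_prompt_tokens": 50, "llm_completion_tokens": 5})

print(dict(totals))  # {'llm_prompt_tokens': 150, 'llm_completion_tokens': 25}
```

In the real SDK the handler receives a typed event rather than a dict, but the wiring is the same: register a handler, accumulate, read the totals at shutdown.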
**User:** How can I get an agent's standing time (duration) in a room?
**Assistant:** To measure an agent's standing time in a room, here are some methods you can try:

• **Track participant lifecycle events:** Use the SDK's `OnParticipantConnected` and `OnParticipantDisconnected` callbacks to record when the agent joins and leaves the room. For instance, in Go (note that `lksdk.ConnectToRoom` returns both the room and an error):

```go
var agentJoinedAt, agentLeftAt time.Time

room, err := lksdk.ConnectToRoom(url, lksdk.ConnectInfo{...}, &lksdk.RoomCallback{
    OnParticipantConnected: func(rp *lksdk.RemoteParticipant) {
        if rp.Kind() == lksdk.ParticipantAgent {
            agentJoinedAt = time.Now()
        }
    },
    OnParticipantDisconnected: func(rp *lksdk.RemoteParticipant) {
        if rp.Kind() == lksdk.ParticipantAgent {
            agentLeftAt = time.Now()
            duration := agentLeftAt.Sub(agentJoinedAt)
            log.Printf("Agent standing time: %v", duration)
        }
    },
})
if err != nil {
    log.Fatal(err)
}
defer room.Disconnect()
```
• **Use Prometheus metrics:** Enable Prometheus scraping to get session-duration metrics such as `session_duration_ms`, which can be helpful for observability.

• **Aggregate metrics:** The `UsageCollector` class can summarize metrics and, matched with timestamps, help derive session durations.

• **Cloud billing:** LiveKit Cloud's agent session minutes reflect active connection time and can confirm standing time.

If you'd like more detail on implementing any of these methods, just let me know!

Sources: cmd/lk/perf.go | pkg/telemetry/prometheus/rooms.go | Capturing metrics | LiveKit Docs | Capturing metrics | LiveKit Docs | Docs::Home
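Whatever SDK you use, the duration math is the same once you record join/leave timestamps. A minimal Python sketch, assuming your connect/disconnect callbacks hand you a participant identity and an event time (the class and method names here are illustrative, not a LiveKit API):

```python
from datetime import datetime, timedelta

class StandingTimeTracker:
    """Accumulates per-participant time-in-room across joins and leaves."""
    def __init__(self):
        self._joined_at = {}   # identity -> join timestamp
        self._totals = {}      # identity -> accumulated duration

    def on_connected(self, identity, when):
        self._joined_at[identity] = when

    def on_disconnected(self, identity, when):
        joined = self._joined_at.pop(identity, None)
        if joined is not None:
            self._totals[identity] = self._totals.get(identity, timedelta()) + (when - joined)

    def standing_time(self, identity):
        return self._totals.get(identity, timedelta())

tracker = StandingTimeTracker()
t0 = datetime(2024, 1, 1, 12, 0, 0)
tracker.on_connected("agent", t0)
tracker.on_disconnected("agent", t0 + timedelta(minutes=5))
tracker.on_connected("agent", t0 + timedelta(minutes=10))   # agent rejoins
tracker.on_disconnected("agent", t0 + timedelta(minutes=12))
print(tracker.standing_time("agent"))  # 0:07:00
```

The accumulation also handles an agent that reconnects several times: each join/leave pair adds to the running total.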
**User:** I'm now using the Gemini Live API and the GPT Realtime API. How can I get metrics? I got `Usage: UsageSummary(llm_prompt_tokens=26418, llm_prompt_cached_tokens=0, llm_input_audio_tokens=171, llm_completion_tokens=443, llm_output_audio_tokens=0, tts_characters_count=0, tts_audio_duration=0.0, stt_audio_duration=0.0)`, but why do I see the output audio tokens as 0?
**Assistant:** I understand it can be confusing when `output_audio_tokens` shows as 0 in your `UsageSummary`. This usually comes down to how the `UsageCollector` gathers metrics. While the raw `RealtimeModelMetrics` events from the Gemini Live API and the GPT Realtime API include audio token details (such as `outputTokenDetails.audioTokens`), the `UsageCollector` may not be set up to include these in the final summary.

A couple of steps that might help:

• Double-check that your `UsageCollector` is actually summing `outputTokenDetails.audioTokens` from the metrics events.

• If you're working with the JavaScript version, you can extend the `UsageSummary` interface with fields like `llmInputAudioTokens` and `llmOutputAudioTokens`, and update the `collect` method to include these values in the totals.

Let me know if you'd like more detailed help updating the `UsageCollector`, or if you run into any other questions.

Sources: plugins/google/src/beta/realtime/realtime_api.ts | livekit-plugins/livekit-plugins-google/livekit/plugins/google/beta/realtime...
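That second bullet can be sketched in Python terms as a hand-rolled collector that also sums audio tokens. The field names below (`output_token_details`, `audio_tokens`) are illustrative, modeled on the shape of realtime metrics events, not copied from the SDK:

```python
from dataclasses import dataclass, field

@dataclass
class TokenDetails:
    text_tokens: int = 0
    audio_tokens: int = 0

@dataclass
class RealtimeMetricsEvent:
    """Illustrative stand-in for a realtime model metrics event."""
    input_tokens: int = 0
    output_tokens: int = 0
    output_token_details: TokenDetails = field(default_factory=TokenDetails)

@dataclass
class ExtendedUsageSummary:
    llm_prompt_tokens: int = 0
    llm_completion_tokens: int = 0
    llm_output_audio_tokens: int = 0   # the field the stock summary left at zero

class ExtendedUsageCollector:
    def __init__(self):
        self._summary = ExtendedUsageSummary()

    def collect(self, ev: RealtimeMetricsEvent) -> None:
        self._summary.llm_prompt_tokens += ev.input_tokens
        self._summary.llm_completion_tokens += ev.output_tokens
        # The extension: fold the audio-token detail into the summary
        self._summary.llm_output_audio_tokens += ev.output_token_details.audio_tokens

    def get_summary(self) -> ExtendedUsageSummary:
        return self._summary

collector = ExtendedUsageCollector()
collector.collect(RealtimeMetricsEvent(100, 40, TokenDetails(text_tokens=10, audio_tokens=30)))
collector.collect(RealtimeMetricsEvent(50, 20, TokenDetails(text_tokens=5, audio_tokens=15)))
print(collector.get_summary().llm_output_audio_tokens)  # 45
```

The same approach works in TypeScript: add the extra fields to the summary interface and fold the token details into the totals inside `collect`.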
**User:** How can I check whether `UsageCollector` handles LLM output audio tokens, and how can I extend it to collect this data?
**Assistant:** The `UsageCollector` in LiveKit is designed to collect LLM output audio tokens when the agent reports `RealtimeModelMetrics`. To capture this data:

1. Use a realtime LLM, such as `openai.realtime.RealtimeModel()`, in your agent setup.
2. Attach the `UsageCollector` to the `metrics_collected` event of your agent session.
3. Retrieve the summary with `get_summary()` when you need it.

For example, in Python:
```python
usage_collector = metrics.UsageCollector()

@session.on("metrics_collected")
def _on_metrics_collected(ev: metrics.MetricsCollectedEvent):
    usage_collector.collect(ev.metrics)

# Later, access the collected audio tokens
summary = usage_collector.get_summary()
print(f"Output audio tokens: {summary.llm_output_audio_tokens}")
```
If you have any questions or need more help, please don't hesitate to reach out. Sources: livekit-plugins/livekit-plugins-aws/livekit/plugins/aws/experimental/realti...
**User:** Agent: Test, Room: Test_room_42172ea3-70f6-4d16-be8c-ec6fb87e073a, Usage: `UsageSummary(llm_prompt_tokens=26418, llm_prompt_cached_tokens=0, llm_input_audio_tokens=171, llm_completion_tokens=443, llm_output_audio_tokens=0, tts_characters_count=0, tts_audio_duration=0.0, stt_audio_duration=0.0)`. This is my `get_summary()` result. Why do I see `llm_output_audio_tokens=0`? The agent actually talked to me, but `llm_output_audio_tokens` did not increase.
**Assistant:** Thanks for your question! The `llm_output_audio_tokens` field in `UsageSummary` only counts audio tokens generated directly by the LLM itself. If your agent uses the typical pipeline where the LLM generates text and a separate TTS system converts that text to speech, those audio tokens won't appear in `llm_output_audio_tokens`.

To see values greater than zero in `llm_output_audio_tokens`, make sure that:

• You're using a realtime LLM that natively produces audio, such as `RealtimeModel` configured for audio output.

• The model is set up to output audio directly, rather than relying on a separate TTS stage.

If your agent relies on TTS, the spoken output won't show up in `llm_output_audio_tokens`, but you can track TTS usage through `tts_characters_count`. If anything is unclear or you'd like more help, just let me know!

Sources: livekit-plugins/livekit-plugins-anthropic/livekit/plugins/anthropic/llm.py
**User:** I use `RealtimeModel`, and this is my summary result: Agent: Test, Room: Test_room_42172ea3-70f6-4d16-be8c-ec6fb87e073a, Usage: `UsageSummary(llm_prompt_tokens=26418, llm_prompt_cached_tokens=0, llm_input_audio_tokens=171, llm_completion_tokens=443, llm_output_audio_tokens=0, tts_characters_count=0, tts_audio_duration=0.0, stt_audio_duration=0.0)`. As you can see, `tts_characters_count=0` and `llm_output_audio_tokens=0`. How can I calculate AI usage counts?
**Assistant:** Happy to explain how AI usage counts are calculated when `llm_output_audio_tokens` and `tts_characters_count` are zero. The `UsageCollector` still tracks the other metrics as usual:

• **LLM metrics:** It sums `llm_prompt_tokens`, `llm_prompt_cached_tokens`, and `llm_completion_tokens` from `LLMMetrics` events, regardless of the audio token values.

• **TTS metrics:** Even if `tts_characters_count` is zero, `tts_audio_duration` may still be recorded if audio was produced in other ways. If no TTS processing happened, both values simply stay at zero.

• **Zero handling:** The `UsageCollector` initializes all fields in `UsageSummary` to zero and sums metrics additively, so zero values are perfectly normal and don't cause any issues.

In your situation, zero `llm_output_audio_tokens` and `tts_characters_count` likely means no audio was attributed to either direct LLM output or TTS in the collected metrics. Other fields such as `llm_prompt_tokens` and `llm_completion_tokens` still give a good picture of the LLM's usage.

If you'd like me to walk you through these metrics further, just let me know!

Sources: livekit-plugins/livekit-plugins-anthropic/livekit/plugins/anthropic/llm.py | Logs, metrics, and telemetry | LiveKit Docs
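To turn a summary like the one above into a single usage figure, weight each field by a per-unit rate. A minimal sketch using the numbers from the `UsageSummary` in this conversation and hypothetical prices (the rates below are placeholders, not real provider pricing):

```python
# Fields taken from the UsageSummary shown in the conversation above
usage = {
    "llm_prompt_tokens": 26418,
    "llm_prompt_cached_tokens": 0,
    "llm_input_audio_tokens": 171,
    "llm_completion_tokens": 443,
    "llm_output_audio_tokens": 0,
    "tts_characters_count": 0,
}

# Hypothetical per-unit rates in USD; substitute your provider's real pricing
rates = {
    "llm_prompt_tokens": 0.000002,
    "llm_prompt_cached_tokens": 0.000001,
    "llm_input_audio_tokens": 0.00001,
    "llm_completion_tokens": 0.000008,
    "llm_output_audio_tokens": 0.00002,
    "tts_characters_count": 0.000015,
}

total_tokens = sum(v for k, v in usage.items() if k.endswith("tokens"))
cost = sum(usage[k] * rates[k] for k in usage)
print(f"total tokens: {total_tokens}, estimated cost: ${cost:.4f}")
```

Because every field defaults to zero, the zero-valued entries (`llm_output_audio_tokens`, `tts_characters_count`) simply contribute nothing; the formula stays correct whether or not the agent produced audio.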