# ask-ai
c
I am using openai's realtime model for llm and I am getting the below error:

```
  await self._job_entrypoint_fnc(job_ctx)
File "/Users/rashmi/workspace/livekit/agent-starter-python/src/agent.py", line 170, in entrypoint
  agent=Assistant(),
        ^^^^^^^^^^^
File "/Users/rashmi/workspace/livekit/agent-starter-python/src/agent.py", line 59, in __init__
  self.llm.on("metrics_collected", sync_wrapper)
```
t
Thanks for reaching out, Rashmi. I'm an AI assistant and I'm here to help. Give me a moment while I research this topic.
It looks like the error you're running into is tied to event handling and how it works with OpenAI's realtime model in LiveKit. Here are some steps that should help you sort it out:

1. Use the Correct Metrics Type: Make sure you're using `RealtimeModelMetrics` instead of `LLMMetrics`. `RealtimeModelMetrics` is built specifically for OpenAI's Realtime API and covers metrics like `ttft`, `duration`, and `tokens_per_second`.
2. Proper Event Listener Setup: Attach your event listener to the LLM instance once it's fully initialized. For example, setting it up in the `__init__` method usually does the trick.
3. Async Task Creation: Use `asyncio.create_task` inside your `sync_wrapper` function to collect metrics asynchronously. This way, you won't block the main thread (see the sketch just below).
4. Ensure Correct Initialization: Double-check that your LLM is initialized with all the necessary parameters, like `model`, `api_key`, and any other options needed for the OpenAI Realtime API.

Give these a try, and hopefully that clears things up! If you want me to walk you through any part or if something's still unclear, just let me know; I'm here to help.

Sources: livekit-plugins/livekit-plugins-google/livekit/plugins/google/beta/realtime... | metrics/metrics_vad.py
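Here is a minimal sketch combining steps 2 and 3. It assumes the handler is registered on the realtime session created via `RealtimeModel.session()` (the session, not the model, is the event emitter, as in the examples later in this thread), and the hypothetical `setup_metrics` helper stands in for wherever this lives in your agent's entrypoint so it runs inside an active event loop:

```python
import asyncio

from livekit.agents.metrics import RealtimeModelMetrics
from livekit.plugins import openai


async def setup_metrics() -> None:
    model = openai.realtime.RealtimeModel()
    rt_session = model.session()  # the session object is the event emitter

    async def collect_metrics(metrics: RealtimeModelMetrics) -> None:
        # Slower work (logging, persistence, ...) goes here, off the callback path.
        print(f"ttft={metrics.ttft:.2f}s duration={metrics.duration:.2f}s")

    def sync_wrapper(metrics: RealtimeModelMetrics) -> None:
        # Event callbacks are synchronous; schedule the async collector instead of awaiting it.
        asyncio.create_task(collect_metrics(metrics))

    rt_session.on("metrics_collected", sync_wrapper)
```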
c
what is the pip command to import RealtimeModelMetrics?
t
To get started with `RealtimeModelMetrics` using OpenAI's realtime model in LiveKit, you'll want to install the OpenAI plugin for LiveKit Agents. Just run this pip command:

```shell
pip install "livekit-agents[openai]~=1.2"
```

This will bring in everything you need, including `RealtimeModelMetrics`, to connect smoothly with OpenAI's Realtime API. If you have any questions or need a hand, feel free to ask; I'm here to help!

Sources: OpenAI Realtime API integration guide | LiveKit Docs | livekit-plugins/livekit-plugins-google/livekit/plugins/google/beta/realtime...
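Note that pip only installs the package; the class itself is then imported in your Python code. As used in the examples later in this thread, the import comes from the livekit-agents core metrics module:

```python
# RealtimeModelMetrics ships with livekit-agents itself (not the OpenAI plugin)
from livekit.agents.metrics import RealtimeModelMetrics
```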
c
What all metrics can I emit using RealtimeModelMetrics?
t
Hey there! Just wanted to share some info about the `RealtimeModelMetrics` in LiveKit for OpenAI's realtime model. It gives you these handy metrics:

• `timestamp`: When the response was created.
• `request_id`: The unique ID for your request.
• `duration`: How long the response took from start to finish (in seconds).
• `ttft` (Time to First Token): Time until the first audio token arrives (in seconds). If it's -1, that means no audio token was sent.
• `cancelled`: A simple yes/no if the request got cancelled.
• `input_tokens`: How many input tokens were used, including both text and audio.
• `output_tokens`: The number of output tokens sent back, also including text and audio.
• `total_tokens`: The total tokens involved in the response.
• `tokens_per_second`: How many tokens are generated each second.

There's also a detailed breakdown for:

• `input_token_details`: This covers `audio_tokens`, `text_tokens`, `image_tokens` (which is 0 for now), `cached_tokens`, and `cached_tokens_details`.
• `output_token_details`: This includes `text_tokens`, `audio_tokens`, and `image_tokens` (also 0 at the moment).

All these metrics pop up once a response finishes, through the `metrics_collected` event. If you want to dive deeper or have any questions, just let me know; I'm here to help!

Sources: livekit-plugins/livekit-plugins-aws/livekit/plugins/aws/experimental/realti... | AI Agents Framework | livekit/livekit_composite | DeepWiki
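As a quick illustration of the fields above, here is a small sketch of a hypothetical `report` helper reading the top-level counters and the nested token breakdowns (assuming the metrics arrive via a `metrics_collected` handler, as shown in the examples below):

```python
from livekit.agents.metrics import RealtimeModelMetrics


def report(metrics: RealtimeModelMetrics) -> None:
    # Top-level counters
    print(
        f"{metrics.request_id}: {metrics.total_tokens} tokens "
        f"({metrics.tokens_per_second:.1f} tok/s, ttft={metrics.ttft:.2f}s)"
    )
    # Nested per-modality breakdowns
    print(
        f"  input  audio/text: {metrics.input_token_details.audio_tokens}"
        f"/{metrics.input_token_details.text_tokens} "
        f"(cached: {metrics.input_token_details.cached_tokens})"
    )
    print(
        f"  output audio/text: {metrics.output_token_details.audio_tokens}"
        f"/{metrics.output_token_details.text_tokens}"
    )
```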
c
can you give an example of how to use RealtimeModelMetrics with a realtime openai agent?
t
Here's an example of how you can use `RealtimeModelMetrics` with a real-time OpenAI agent in LiveKit:

```python
from livekit.agents.metrics import RealtimeModelMetrics
from livekit.plugins import openai

# Instantiate the real-time model and open a session
# (the session, not the model, emits events)
model = openai.realtime.RealtimeModel()
rt_session = model.session()


# Define a callback function to handle metrics
def on_metrics_collected(metrics: RealtimeModelMetrics):
    print(f"Request {metrics.request_id} took {metrics.duration:.2f}s")
    print(f"TTFT: {metrics.ttft:.2f}s, Tokens/s: {metrics.tokens_per_second:.2f}")
    print(f"Input: {metrics.input_tokens}, Output: {metrics.output_tokens}")


# Attach the listener to the session's metrics_collected event
rt_session.on("metrics_collected", on_metrics_collected)
```

This example walks you through how to:

• Create the real-time model and a session from it.
• Set up a listener for the `metrics_collected` event.
• Work with the `RealtimeModelMetrics` object to get useful info like `duration`, `ttft`, and token counts.

If you have any questions or want me to explain anything more, just let me know; I'm here to help!

Sources: livekit-agents/livekit/agents/metrics/base.py | plugins/openai/src/realtime/realtime_model.ts | metrics/metrics_llm.py
c
what about in a pipeline?
t
To use `RealtimeModelMetrics` in a pipeline with OpenAI's realtime model, you can listen for the `metrics_collected` event that the session emits. Here's an example to get you started:

```python
from livekit.agents.metrics import RealtimeModelMetrics
from livekit.plugins import openai

# Create a RealtimeModel instance
realtime_model = openai.realtime.RealtimeModel()

# Create a session (the session is the event emitter)
session = realtime_model.session()


# Listen for metrics
@session.on("metrics_collected")
def on_metrics(metrics: RealtimeModelMetrics):
    print(f"Request ID: {metrics.request_id}")
    print(f"Duration: {metrics.duration}s")
    print(f"TTFT: {metrics.ttft}s")
    print(f"Input tokens: {metrics.input_tokens}")
    print(f"Output tokens: {metrics.output_tokens}")
    print(f"Tokens per second: {metrics.tokens_per_second}")
    # Dive into the detailed token breakdown
    print(f"Input audio tokens: {metrics.input_token_details.audio_tokens}")
    print(f"Output audio tokens: {metrics.output_token_details.audio_tokens}")
```

This snippet shows you how to:

• Create a session from the realtime model.
• Attach a listener to catch the `metrics_collected` event.
• Access and work with the `RealtimeModelMetrics` object, including the detailed token breakdown.

For a full voice pipeline driven by `AgentSession`, see the sketch below. If anything's unclear or you want to dive deeper, just let me know; I'm here to help!

Sources: livekit-plugins/livekit-plugins-aws/livekit/plugins/aws/experimental/realti... | OpenAI Realtime API integration guide | LiveKit Docs
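In a typical voice pipeline you usually don't create the realtime session yourself; `AgentSession` drives the realtime model and re-emits metrics on the session as a `MetricsCollectedEvent`. Here is a minimal sketch of that pattern, assuming the `MetricsCollectedEvent`, `metrics.log_metrics`, and `metrics.UsageCollector` helpers exported by `livekit.agents` (adapt it to your existing entrypoint):

```python
from livekit import agents
from livekit.agents import Agent, AgentSession, MetricsCollectedEvent, metrics
from livekit.agents.metrics import RealtimeModelMetrics
from livekit.plugins import openai


async def entrypoint(ctx: agents.JobContext):
    # The realtime model handles speech in/out and the LLM in one component
    session = AgentSession(llm=openai.realtime.RealtimeModel())

    usage_collector = metrics.UsageCollector()

    @session.on("metrics_collected")
    def _on_metrics_collected(ev: MetricsCollectedEvent):
        metrics.log_metrics(ev.metrics)      # structured log for any metric type
        usage_collector.collect(ev.metrics)  # aggregate usage over the session
        if isinstance(ev.metrics, RealtimeModelMetrics):
            print(
                f"realtime ttft={ev.metrics.ttft:.2f}s, "
                f"tokens/s={ev.metrics.tokens_per_second:.2f}"
            )

    await session.start(
        room=ctx.room,
        agent=Agent(instructions="You are a helpful assistant."),
    )
    await ctx.connect()
```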