# ask-ai
d
@millions-hair-24956 I see the `llm` class emits metrics:
```python
self._llm.emit("metrics_collected", metrics)
```
How do I catch and print them in a VoicePipelineAgent?
g
To catch and print metrics emitted by the `llm` class in a VoicePipelineAgent, you can subscribe to the `metrics_collected` event:
1. Subscribe to the event: use the agent's `on` method to listen for `metrics_collected`.
2. Log the metrics: define a callback function that logs or prints the metrics when the event is triggered.
Here's a code snippet to illustrate this:
```python
from livekit.agents.pipeline import VoicePipelineAgent
from livekit.plugins import openai, deepgram, silero

# Initialize the agent
agent = VoicePipelineAgent(
    vad=silero.VAD.load(),
    stt=deepgram.STT(),
    llm=openai.LLM(),
    tts=openai.TTS(),
)

# Define a callback function to handle the metrics
def on_metrics_collected(metrics):
    print(f"Metrics collected: {metrics}")

# Subscribe to the metrics_collected event
agent.on("metrics_collected", on_metrics_collected)

# Start the agent (assuming room and participant are defined)
agent.start(room, participant)
```
This sets up a listener for the `metrics_collected` event and prints the metrics whenever they are emitted by the `llm` class. Would you like more details on any specific part of this process?
Sources:
- VoicePipelineAgent
- Capturing metrics
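If you want formatted output and session totals rather than raw prints, a minimal sketch along these lines may help, assuming your livekit-agents version ships the `metrics.log_metrics` helper and `metrics.UsageCollector` in `livekit.agents.metrics` (check your installed version):
```python
from livekit.agents import metrics

# aggregates usage (tokens, TTS characters, STT audio duration) over the session
usage_collector = metrics.UsageCollector()

@agent.on("metrics_collected")
def _on_metrics_collected(mtrcs: metrics.AgentMetrics):
    # pretty-prints each metrics event through the agents logger
    metrics.log_metrics(mtrcs)
    usage_collector.collect(mtrcs)

# e.g. at shutdown, read the aggregated totals
def log_usage():
    summary = usage_collector.get_summary()
    print(f"Usage summary: {summary}")
```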
d
@millions-hair-24956 I added prints:
```python
metrics = LLMMetrics(
    timestamp=time.time(),
    request_id=request_id,
    ttft=ttft,
    duration=duration,
    cancelled=self._task.cancelled(),
    label=self._llm._label,
    completion_tokens=usage.completion_tokens if usage else 0,
    prompt_tokens=usage.prompt_tokens if usage else 0,
    total_tokens=usage.total_tokens if usage else 0,
    tokens_per_second=usage.completion_tokens / duration if usage else 0.0,
    error=None,
)
print("AFTER METTICS LLM METRICS", metrics)
self._llm.emit("metrics_collected", metrics)
I see info like AFTER METTICS LLM METRICS LLMMetrics(request_id='557cbc79-c20c-4daa-b8fc-586623d9b4e1', timestamp=1740489787.082572, ttft=0.5034498330205679, duration=1.0604407500941306, label='livekit.plugins.openai.llm.LLM', cancelled=False, completion_tokens=46, prompt_tokens=71, total_tokens=117, tokens_per_second=43.37818967812844, error=None) I also have in my entrypoint:
```python
@assistant.on("metrics_collected")
def _on_metrics_collected(mtrcs: metrics.AgentMetrics):
    print("metrics_collected caught", mtrcs)
```
But `_on_metrics_collected` doesn't show LLM metrics; it shows VAD and STT metrics only.
g
It seems likely that the `metrics_collected` event is being emitted, but your callback might not be capturing the LLM metrics specifically. This could be due to the way the event is handled or the specific type of metrics being emitted. To ensure your callback captures all metric types, including LLM metrics, check the following:
1. Ensure correct event subscription: make sure the `@assistant.on("metrics_collected")` decorator is set up to listen for all types of metrics, including LLM metrics.
2. Check metric types: in your callback function, add a check to differentiate between metric types. This will help you confirm whether LLM metrics are being emitted but not printed due to filtering or other logic.
Here's an updated version of your callback function that includes a type check:
```python
@assistant.on("metrics_collected")
def _on_metrics_collected(mtrcs: metrics.AgentMetrics):
    if isinstance(mtrcs, metrics.LLMMetrics):
        print("LLM metrics collected:", mtrcs)
    else:
        print("Other metrics collected:", mtrcs)
```
This prints LLM metrics separately from other metric types, helping you verify whether they are being emitted and captured correctly. Would you like more details on any specific part of this process?
Sources:
- https://livekit-users.slack.com/archives/C07FY8WHGPM/p1730233417.503479
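If it helps, here's a slightly fuller sketch that dispatches on each concrete type; the `STTMetrics`, `TTSMetrics`, and `VADMetrics` class names and the fields used below (`duration`, `ttfb`) are assumptions based on the `livekit.agents.metrics` module in recent 0.x releases:
```python
from livekit.agents import metrics

@assistant.on("metrics_collected")
def _on_metrics_collected(mtrcs: metrics.AgentMetrics):
    # dispatch on the concrete metrics type; field names are assumptions,
    # verify them against your installed livekit-agents version
    if isinstance(mtrcs, metrics.LLMMetrics):
        print(f"LLM: ttft={mtrcs.ttft:.3f}s tokens/s={mtrcs.tokens_per_second:.1f}")
    elif isinstance(mtrcs, metrics.STTMetrics):
        print(f"STT: duration={mtrcs.duration:.3f}s")
    elif isinstance(mtrcs, metrics.TTSMetrics):
        print(f"TTS: ttfb={mtrcs.ttfb:.3f}s")
    elif isinstance(mtrcs, metrics.VADMetrics):
        pass  # VAD metrics fire continuously; usually too noisy to print
    else:
        print("Other metrics collected:", mtrcs)
```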
d
@millions-hair-24956 I see LLM metrics coming only when the user speaks
g
It seems likely that LLM metrics are only generated and emitted when there is user input, because the LLM is invoked as part of processing that input. In a typical VoicePipelineAgent setup, the LLM is called to generate a response based on the user's speech, which is transcribed by the STT (speech-to-text) component. LLM metrics are therefore collected and emitted only when the LLM is actively used, which happens when the user speaks.
To ensure LLM metrics are captured whenever the LLM is used, verify that the LLM is indeed being invoked as expected during user interactions. If you want to capture metrics even when the LLM is not directly responding to user speech, you need to trigger the LLM from other parts of your application logic. Would you like more details on how to ensure LLM metrics are captured in different scenarios?
Sources:
- https://docs.livekit.io/agents/build/metrics/#LLM
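As a sketch of invoking the LLM outside of user speech, assuming your VoicePipelineAgent exposes its underlying LLM as `assistant.llm` and that `ChatContext().append(role=..., text=...)` matches your livekit-agents version, you could drive a completion directly; the LLM instance should then emit `metrics_collected` just as it does for speech-driven turns:
```python
from livekit.agents import llm

async def trigger_llm(assistant) -> None:
    # hypothetical one-off prompt, unrelated to any user speech
    chat_ctx = llm.ChatContext().append(role="user", text="Say hello in one sentence.")
    stream = assistant.llm.chat(chat_ctx=chat_ctx)
    # consume the stream so the request runs to completion and metrics fire
    async for _chunk in stream:
        pass
```
A pattern like `await assistant.say(assistant.llm.chat(chat_ctx=chat_ctx))` should similarly exercise the LLM (and speak the result), since `say` accepts an LLMStream in the 0.x pipeline agent.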