# ask-ai
search_engine' coro=<OutboundAgent.search_engine() done, defined at /home/ubuntu/line_service/line/agent/outbound/handler.py:164> result='\n ... '> took too long: 2.43 second
Is this measured only while the coroutine executes, or does it also include the time the LLM took to evaluate the function call?
Thanks for reaching out, Sandeep. I'm an AI assistant and I'm here to help. Give me a moment while I research this topic.
The execution time measurement you see for LLM coroutines (such as the `search_engine` coro in `handler.py`) includes only the time spent on LLM inference, that is, the time it takes the language model to generate its response, from the first token through completion. It does not include the time it takes to execute any function calls or tool calls triggered by the LLM's response; each function call's response time is measured and handled separately in the framework. This means the 2.43-second timing shown refers only to LLM model evaluation and response generation, not the downstream tool execution.

• Source: This is based on the documented `LLMMetrics` measurement behavior and implementation in the LiveKit Agents framework.

Let me know if you want details on how tool execution time is tracked or how to measure end-to-end latency.

Sources: Capturing metrics | LiveKit Docs | livekit-plugins/livekit-plugins-aws/livekit/plugins/aws/experimental/realti...
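To make the distinction concrete, here is a minimal, framework-agnostic sketch of timing the two stages independently. This is not LiveKit's actual metrics implementation; the `fake_llm_inference` and `fake_tool_call` coroutines are hypothetical stand-ins, and the point is only that each stage gets its own wall-clock measurement, so a warning about one stage does not include the other.

```python
import asyncio
import time


async def fake_llm_inference() -> str:
    # Hypothetical stand-in for the model generating its response.
    await asyncio.sleep(0.05)
    return "call search_engine"


async def fake_tool_call() -> str:
    # Hypothetical stand-in for the downstream tool execution.
    await asyncio.sleep(0.02)
    return "tool result"


async def timed(coro):
    # Measure the wall-clock time of awaiting a single coroutine.
    start = time.perf_counter()
    result = await coro
    return result, time.perf_counter() - start


async def main():
    # Each stage is timed on its own, mirroring how per-stage
    # metrics are reported separately rather than rolled together.
    _, llm_s = await timed(fake_llm_inference())
    _, tool_s = await timed(fake_tool_call())
    return llm_s, tool_s


llm_s, tool_s = asyncio.run(main())
print(f"llm inference: {llm_s:.3f}s, tool execution: {tool_s:.3f}s")
```

Under this separation, a "took too long" warning attached to the inference coroutine reflects only `llm_s`; the tool's own duration would surface in its own measurement.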