aloof-scooter-47164
09/10/2025, 9:12 PM
{"message": "Agent state changed from: listening to speaking", "level": "INFO", "name": "agent", "room": "492b0ce9-97d8-44cc-8d0b-a07d7a20214f_So2CP", "agent": "", "pid": 100, "job_id": "AJ_bQjNigvfFuLw", "timestamp": "2025-09-10T21:05:21.951536+00:00"}
{"message": "RealtimeModel metrics", "level": "INFO", "name": "livekit.agents", "room": "492b0ce9-97d8-44cc-8d0b-a07d7a20214f_So2CP", "agent": "", "ttft": -1.0, "input_tokens": 2532, "cached_input_tokens": 2432, "output_tokens": 167, "total_tokens": 2699, "tokens_per_second": 70.66, "pid": 100, "job_id": "AJ_bQjNigvfFuLw", "timestamp": "2025-09-10T21:05:23.883600+00:00"}
{"message": "rotate_segment called while previous segment is still being rotated", "level": "WARNING", "name": "livekit.agents", "room": "492b0ce9-97d8-44cc-8d0b-a07d7a20214f_So2CP", "agent": "", "pid": 100, "job_id": "AJ_bQjNigvfFuLw", "timestamp": "2025-09-10T21:05:27.052640+00:00"}
Conversation item added from assistant: Un momento por favor, voy a buscar el resultado del partido de Argentina de ayer.. interrupted: False
- text: Un momento por favor, voy a buscar el resultado del partido de Argentina de ayer. (English: "One moment please, I'm going to look up the result of yesterday's Argentina match.")
{"message": "Agent state changed from: speaking to listening", "level": "INFO", "name": "agent", "room": "492b0ce9-97d8-44cc-8d0b-a07d7a20214f_So2CP", "agent": "", "pid": 100, "job_id": "AJ_bQjNigvfFuLw", "timestamp": "2025-09-10T21:05:27.053218+00:00"}
{"message": "Agent state changed from: listening to thinking", "level": "INFO", "name": "agent", "room": "492b0ce9-97d8-44cc-8d0b-a07d7a20214f_So2CP", "agent": "", "pid": 100, "job_id": "AJ_bQjNigvfFuLw", "timestamp": "2025-09-10T21:05:27.054774+00:00"}
The agent has a rule to say a thinking phrase while running tool calls (as suggested by the OpenAI cookbook), but could it be that we always stay in the thinking state here instead of going to speaking?
tall-airline-16756
09/10/2025, 9:12 PM
tall-airline-16756
09/10/2025, 9:12 PM
session.generate_reply()
If you’d like, I’m here to help with any details or guide you through implementing these enhancements. Just let me know!
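The filler-phrase-then-tool-call flow being discussed can be sketched outside LiveKit as a plain asyncio turn handler. Note that speak, run_tool, and handle_turn below are illustrative stand-ins for session.generate_reply()/session.say() and a real tool, not LiveKit APIs:

```python
import asyncio
from typing import List

async def speak(transcript: List[str], text: str) -> None:
    # Stand-in for session.say()/generate_reply(): record what the agent voices.
    transcript.append(text)

async def run_tool(name: str) -> str:
    # Stand-in for a slow tool call (e.g. the sports-score lookup in the logs).
    await asyncio.sleep(0)
    return f"result of {name}"

async def handle_turn(transcript: List[str]) -> None:
    # 1. Announce the filler phrase before the tool call, per the agent's rule.
    await speak(transcript, "Un momento por favor...")  # "One moment please..."
    # 2. The agent sits in the "thinking" state while the tool runs.
    result = await run_tool("lookup_score")
    # 3. Speak the real answer once the tool returns, moving back to "speaking".
    result_text = result
    await speak(transcript, result_text)

transcript: List[str] = []
asyncio.run(handle_turn(transcript))
```

After the run, transcript holds the filler phrase followed by the tool result, mirroring the speaking → thinking → speaking sequence in the log lines above.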
Sources: livekit-agents/livekit/agents/voice/agent_activity.py | External data and RAG | LiveKit Docs
aloof-scooter-47164
09/10/2025, 9:13 PM
aloof-scooter-47164
09/10/2025, 9:13 PM
tall-airline-16756
09/10/2025, 9:14 PM
You can use the agent_state_changed
event, which is triggered whenever the agent changes state, including when it goes into "thinking" mode right before any tool execution starts.
Here’s a simple way to set this up:
• Use the agent_state_changed
event: Listen for the state change to "thinking" and then trigger your notification. For example:
from livekit.agents import AgentStateChangedEvent

@session.on("agent_state_changed")
def on_agent_state_changed(event: AgentStateChangedEvent):
    if event.new_state == "thinking":
        notify_external_system("Agent is about to process input and may run tools.")
• Monitor the lk.agent.state
attribute: On the frontend, you can watch this participant attribute to respond to state changes as they happen.
participant.on('attributesChanged', () => {
  const agentState = participant.attributes['lk.agent.state'];
  if (agentState === 'thinking') {
    sendNotificationToSystem();
  }
});
Using these approaches will help you notify any external systems or devices just before the tool execution starts. If you’d like, I’m here to guide you through the implementation or answer any questions you might have!
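As a self-contained illustration of the first approach, the same listen-for-"thinking" wiring can be exercised against a minimal in-memory emitter. AgentStateTracker and the notified list below are hypothetical stand-ins, not LiveKit classes:

```python
from typing import Callable, List

class AgentStateTracker:
    """Tiny in-memory emitter mimicking the agent_state_changed event."""

    def __init__(self) -> None:
        self.state = "listening"
        self.history: List[str] = []
        self._listeners: List[Callable[[str, str], None]] = []

    def on_state_changed(self, cb: Callable[[str, str], None]) -> None:
        # Register a callback fired on every state transition.
        self._listeners.append(cb)

    def set_state(self, new_state: str) -> None:
        old, self.state = self.state, new_state
        self.history.append(f"{old}->{new_state}")
        for cb in self._listeners:
            cb(old, new_state)

tracker = AgentStateTracker()
notified: List[str] = []
# Only react when the agent enters "thinking", i.e. just before tools run.
tracker.on_state_changed(
    lambda old, new: notified.append(new) if new == "thinking" else None
)

# Replay the transitions from the log lines above.
tracker.set_state("speaking")   # filler phrase is voiced
tracker.set_state("listening")
tracker.set_state("thinking")   # tool call runs here; external system notified
tracker.set_state("speaking")   # final reply
```

The same callback shape carries over to the real session.on("agent_state_changed") handler shown above; only the event source changes.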
Sources: Events and error handling | LiveKit Docs | livekit-agents/livekit/agents/voice/agent_activity.py | External data and RAG | LiveKit Docs
aloof-scooter-47164
09/10/2025, 9:16 PM
tall-airline-16756
09/10/2025, 9:16 PM