# ask-ai
p
when do i know that the whole AGENT message / response is finished? So the main text and potential tool calls are executed?
t
Thanks for reaching out, Nikolaus. I'm an AI assistant and I'm here to help. Give me a moment while I research this topic.
Hey there! Just so you know, an AGENT message or response is considered fully complete when both the main text and any related tool calls have been handled. Here's a simple way to think about it:
• Main Text Completion: The main text is wrapped up once it's added to the chat, signaling that the primary response is done.
• Tool Calls Completion: After the main text, any tool calls run their course; they're done when every tool has been processed or events confirm that everything finished.
• Final Completion: The whole response is fully complete when both the main text and the tool calls are done, along with any follow-up steps or messages.
If you want me to walk you through any part of this or have questions, just let me know. I'm here to help!
Sources: agents/src/pipeline/pipeline_agent.ts | tests/test_agent_session.py
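The three stages above can be sketched as a small, self-contained Python model. This is purely illustrative: `run_response`, `run_tool`, and the event names are made up for the sketch and are not part of the actual agents API.

```python
import asyncio

async def run_tool(call: str) -> str:
    # Stand-in for real tool execution (network call, function run, etc.).
    await asyncio.sleep(0)
    return f"{call}: ok"

async def run_response(main_text: str, tool_calls: list) -> dict:
    """Hypothetical model of the completion lifecycle described above."""
    events = []
    # 1. Main text completion: the primary response is committed to the chat.
    events.append("main_text_done")
    # 2. Tool calls completion: every pending tool call runs to the end.
    results = await asyncio.gather(*(run_tool(c) for c in tool_calls))
    events.append("tool_calls_done")
    # 3. Final completion: both parts finished, so the whole response is done.
    events.append("response_done")
    return {"text": main_text, "tool_results": results, "events": events}
```

The point of the sketch is the ordering: "response done" only fires after both the text and all tool calls have completed.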
p
then show me the event(s) that i need to listen to, in order to know when the assistant message is finished (just the LLM part)
t
To know when the assistant message (just the LLM part) is finished, you can listen for the `ResponseTextDoneEvent`. This event fires right when the LLM has wrapped up its response. Here's a quick example to show you how it works:
```javascript
// Listen for the ResponseTextDoneEvent
agent.addEventListener('ResponseTextDoneEvent', (event) => {
    console.log('LLM response completed:', event.text);
});
```
If you want, I’m here to help with any other questions or details on how to set this up!
p
i mean in the agents python code. not the client js code
t
In the agent's Python code, the completion of the assistant message (the LLM part) happens when a few things come together:
• The `_handle_response_done` handler, which closes all message channels and signals that the generation is wrapped up.
• The `_mark_generation_done()` method, which explicitly marks the generation as finished.
• A message with `role="assistant"` gets added to the chat context and conversation items.
All these steps work together to make sure the LLM response is complete. If you'd like more details or examples, just let me know!
Sources: examples/drive-thru/test_agent.py
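The steps above can be wired together in a minimal, self-contained sketch. This is a hypothetical stand-in, not the agents framework's actual classes: `MiniAgent`, its `on`/`_emit` event plumbing, and the `"generation_done"` event name are invented for illustration; only the `_mark_generation_done` name and the `role="assistant"` chat-context step mirror the description above.

```python
class MiniAgent:
    """Hypothetical sketch of hooking into 'generation done' in Python."""

    def __init__(self):
        self._handlers = {}   # event name -> list of callbacks
        self.chat_ctx = []    # stand-in for the agent's chat context

    def on(self, event, handler):
        # Register a callback for a named event.
        self._handlers.setdefault(event, []).append(handler)

    def _emit(self, event, *args):
        # Invoke every callback registered for this event.
        for handler in self._handlers.get(event, []):
            handler(*args)

    def _mark_generation_done(self, text):
        # Add the finished assistant message to the chat context,
        # then notify listeners that the LLM part is complete.
        msg = {"role": "assistant", "content": text}
        self.chat_ctx.append(msg)
        self._emit("generation_done", msg)
```

Usage would look like `agent.on("generation_done", callback)`; once `_mark_generation_done` runs, the callback receives the completed assistant message.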