# ask-ai
I have a complex workflow with many successive/conditional llm agents. I'd like to have some kind of test suite, with predefined user inputs for each step, and I'd like to see the final conversation. How would I do that?
Thanks for reaching out, Guillaume. I'm an AI assistant and I'm here to help. Give me a moment while I research this topic.
You can test complex workflows involving multiple successive or conditional LLM agents using the structured patterns and test frameworks shown in the LiveKit documentation. Here's a summary of the recommended approach:

• Define Agent Structure: Implement your workflow with sequential or conditional agent patterns, where agents hand off to one another based on logic or user input.
• Mock User Inputs: Create test cases with predefined user responses that simulate each step of the workflow, either by calling agent methods directly in code or by simulating user input through your testing framework (see the sketch after this list).
• Session and State Tracking: Use data structures to track conversation state and transitions (e.g., a `SurveyData` or `StoryData` class as shown in the sources), so you can verify which paths were taken and that all information flows as expected.
• Conversation Logging: Maintain and review the conversation context across agent transitions by passing context objects (such as `chat_ctx`) between agents. This lets you verify and output the entire conversation for final review.
• Automated Testing: Write test functions that call your agent logic and simulate conversations; assert on the outcomes and on the conversation record to validate correctness and flow (a parametrized example follows the sketch below).

See detailed patterns and code examples illustrating these strategies in the LiveKit documentation for agents and LLM workflow testing.

Sources: main.py | flows/README.md | tests/test_llm.py