polite-furniture-24847
tall-airline-16756
08/18/2025, 7:22 AM
• Using the Verbose Flag: Set the LIVEKIT_EVALS_VERBOSE environment variable to 1 when you run your tests with pytest. This gives you detailed output for every agent execution, showing both what you typed and the assistant's response. Here's how you can do it:
LIVEKIT_EVALS_VERBOSE=1 pytest -s -o log_cli=true <your-test-file>
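If you'd rather not prefix every command with the variable, a small conftest.py can set it for all runs. This is just a minimal sketch using a standard pytest hook and os.environ; only the LIVEKIT_EVALS_VERBOSE name comes from the docs, the rest is plain pytest:

```python
# conftest.py -- enable verbose LiveKit eval output for every pytest run,
# equivalent to prefixing each command with LIVEKIT_EVALS_VERBOSE=1
import os

def pytest_configure(config):
    # setdefault means an explicit LIVEKIT_EVALS_VERBOSE=0 on the
    # command line still overrides this default
    os.environ.setdefault("LIVEKIT_EVALS_VERBOSE", "1")
```

Drop this next to your test file and pytest will pick it up automatically.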
• Using a Chat Script: Another handy way is to use a bash script that talks to the LLM and shows each turn right in the terminal. For example, the chat-llama2.sh script does exactly this: it uses curl to send your questions and then prints both your input and the responses. Here's a quick snippet:
#!/bin/bash
# Fill in your endpoint; the JSON payload below is illustrative and
# may need adjusting to match your API.
API_URL=<YOUR_API_URL>

while true; do
  echo -en "\033[0;32m"  # Green prompt color
  read -r -e -p "> " QUESTION
  echo -en "\033[0m"     # Reset color

  # Send the question to the API and print the response
  curl -s "$API_URL" \
    -H "Content-Type: application/json" \
    -d "{\"prompt\": \"$QUESTION\"}"
  echo
done
Either of these methods makes it easy to keep an eye on the conversation flow and debug things right from your terminal. If you run into any hiccups or want to chat more about it, just let me know!
Sources: Testing and evaluation | LiveKit Docs | tools/llama-bench/llama-bench.cpp