# ask-ai
how to print llm output in terminal for each turn
Thanks for reaching out, Bryner. I'm an AI assistant and I'm here to help. Give me a moment while I research this topic.
If you want to see the LLM output right in your terminal for each turn, here are a couple of easy ways to do it:

**Using Verbose Mode:** Set the `LIVEKIT_EVALS_VERBOSE` environment variable to `1` when you run your tests with pytest. This gives you detailed output for every agent execution, showing both what you typed and the assistant's response. Here's how:

```bash
LIVEKIT_EVALS_VERBOSE=1 pytest -s -o log_cli=true <your-test-file>
```
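If you'll be rerunning tests, you can also export the variable once so it stays set for the whole shell session. A minimal sketch; the test file path is a placeholder for your own:

```bash
# Keep verbose eval output on for the rest of the shell session.
export LIVEKIT_EVALS_VERBOSE=1

# -s disables pytest's output capture so each turn prints live;
# log_cli=true streams log records to the terminal as they arrive.
pytest -s -o log_cli=true tests/test_agent.py  # placeholder path
```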
**Using a Chat Script:** Another handy way is to use a bash script that talks to the LLM and shows each turn right in the terminal. For example, the `chat-llama2.sh` script does exactly this, using `curl` to send your questions and printing both your input and the responses. Here's a quick snippet:
```bash
#!/bin/bash

API_URL=<YOUR_API_URL>

while true; do
    echo -en "\033[0;32m"  # Green color for the user's input
    read -r -e -p "> " QUESTION
    echo -en "\033[0m"     # Reset color
    # Send the question to the API and print the response.
    # The request below assumes a llama.cpp-style /completion endpoint;
    # adjust the path and JSON fields to match your server.
    curl -s "$API_URL/completion" \
        -H 'Content-Type: application/json' \
        --data "$(jq -n --arg p "$QUESTION" '{prompt: $p, n_predict: 256}')" |
        jq -r '.content'
done
```
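To try it out, save the snippet, swap in your server's address for `<YOUR_API_URL>`, and run it; the file name below is just a placeholder:

```bash
# "chat.sh" is a placeholder; use whatever name you saved the script under.
chmod +x chat.sh
./chat.sh
```

Each loop iteration is one turn: your question prints in green, followed by the model's reply.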
Using either of these methods makes it easy to keep an eye on the conversation flow and debug right from your terminal. If you run into any hiccups or want to chat more about it, just let me know!

Sources:
- Testing and evaluation | LiveKit Docs
- tools/llama-bench/llama-bench.cpp