# ask-ai
m
How can I check the whole prompt that was passed to the LLM for a user query? Do we have any way to check it? I want it for debugging and monitoring.
t
Thanks for reaching out, Gaurav. I'm an AI assistant and I'm here to help. Give me a moment while I research this topic.
I'd be happy to help you check the full prompt passed to the LLM for debugging and monitoring! You can do this by enabling verbose prompt logging or using specific API endpoints. Here's how:

• Enable verbose prompt logging: If you're working with `llama.cpp`, you can turn on the `verbose_prompt` flag in `main.cpp` or simply run the binary with the `--verbose-prompt` flag. This logs the entire prompt along with tokenization details and token IDs, giving you a clear picture.
• Use the `/apply-template` API endpoint: When `llama.cpp` is running in server mode, you can call the `/apply-template` endpoint to see the full prompt before generation. This is especially handy for debugging chat templates and tool formatting (see the sketch after this list).
• LiveKit Agents: Full prompt logging isn't directly available here, but user input and context messages are captured in OpenTelemetry traces and event logs. If you need more detail, you can extend the logging accordingly.

If you'd like, I can guide you through detailed steps for enabling verbose logging or using the API based on your specific setup. Just let me know!

Sources: VoicePipelineAgent | LiveKit Docs | tools/main/main.cpp | livekit-plugins/livekit-plugins-openai/livekit/plugins/openai/llm.py
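For reference, here's a minimal sketch of calling `/apply-template` from Python. It assumes a `llama-server` instance listening on `localhost:8080` (the default port), and the response field name may vary by llama.cpp version, so verify it against your build:

```python
# Minimal sketch: ask a running llama-server for the fully templated prompt
# it would build for a set of chat messages. Assumes the server listens on
# localhost:8080 (the default); adjust the URL for your deployment.
import json
import urllib.request

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "How do I check the whole prompt?"},
]

req = urllib.request.Request(
    "http://localhost:8080/apply-template",
    data=json.dumps({"messages": messages}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    body = json.loads(resp.read())

# The rendered prompt string; the field name ("prompt" here) is what recent
# llama.cpp builds return, but double-check it against your version.
print(body["prompt"])
```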
m
I am using a LiveKit agent with an Azure OpenAI LLM. Isn't there any logging technique for it?
t
Hi there! To enable logging of full prompts in LiveKit Agents with the Azure OpenAI LLM, you can use the `LK_OPENAI_DEBUG` environment variable. Setting it to `1` turns on debug-level logging, which captures the full prompt context, function context, and tool choices during chat completion requests.

Here's how you can do it: just set the following environment variables when you start your LiveKit Agent process:

```
LK_OPENAI_DEBUG=1 AZURE_OPENAI_API_KEY=your_key AZURE_OPENAI_ENDPOINT=your_endpoint OPENAI_API_VERSION=2024-10-01-preview python your_agent_script.py
```

This way, you'll be able to see the complete prompts sent to the Azure OpenAI LLM (a sketch of wiring this up from inside the script follows below). If you have any questions or need a hand with this setup, please don't hesitate to reach out. I'm here to help!
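If it's more convenient to set this up inside the agent script rather than the shell, here's a minimal sketch. The logger name `livekit.plugins.openai` and the `LLM.with_azure()` helper are assumptions based on the plugin cited in the sources above, so double-check both against your installed version:

```python
# Minimal sketch, assuming the LK_OPENAI_DEBUG mechanism described above.
import logging
import os

# Set the flag before the plugin module is imported so it is picked up.
os.environ["LK_OPENAI_DEBUG"] = "1"

# Make sure debug-level records are actually shown. The logger name below
# is an assumption based on the plugin's package path; adjust if needed.
logging.basicConfig(level=logging.INFO)
logging.getLogger("livekit.plugins.openai").setLevel(logging.DEBUG)

from livekit.plugins import openai as lk_openai

# with_azure() reads AZURE_OPENAI_API_KEY, AZURE_OPENAI_ENDPOINT and
# OPENAI_API_VERSION from the environment, so export them as shown above
# (or pass them explicitly as keyword arguments).
llm = lk_openai.LLM.with_azure()
```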