# ask-ai
**User:**
How can I view what is passed to the OpenAI Chat Completion API from within the Agent class? Can I override the `llm_node` and intercept what is being passed?

**AI Assistant:**
Thanks for reaching out, Daniel. I'm an AI assistant and I'm here to help. Give me a moment while I research this topic.
I'd love to help you see exactly what's being sent to the OpenAI Chat Completion API! You can do this by overriding the `llm_node` method in your custom Agent class. That way, you can inspect the `chat_ctx`, `tools`, and `model_settings` before the API call happens. Here's a simple example to guide you:
```python
from livekit.agents import Agent, ModelSettings, llm


class MyCustomAgent(Agent):
    async def llm_node(
        self,
        chat_ctx: llm.ChatContext,
        tools: list[llm.FunctionTool],
        model_settings: ModelSettings,
    ):
        # Log the full data being passed to the LLM
        print("Chat Context:", chat_ctx)
        print("Tools:", tools)
        print("Model Settings:", model_settings)

        # Delegate to the default implementation
        async for chunk in Agent.default.llm_node(self, chat_ctx, tools, model_settings):
            yield chunk
```
Also, if you'd like a more detailed look, you can enable debug logging by setting the environment variable `LK_OPENAI_DEBUG=1`. This will give you insight into key parts of the request, like the converted `chat_ctx`, the `tool_choice`, and the function calling context. And if you'd rather take full control of the LLM interaction, you can skip the default implementation and call `self.llm.chat()` directly inside `llm_node`, as in the sketch below.
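Here's a rough sketch of that approach. It assumes `self.llm` is the LLM instance configured on the session and that its `chat()` method accepts `chat_ctx` and `tools` keyword arguments and returns an async-iterable stream; double-check the exact signature against your livekit-agents version:

```python
from livekit.agents import Agent, ModelSettings, llm


class DirectLLMAgent(Agent):
    async def llm_node(
        self,
        chat_ctx: llm.ChatContext,
        tools: list[llm.FunctionTool],
        model_settings: ModelSettings,
    ):
        # Inspect or rewrite the request here before it is sent
        print("Chat Context:", chat_ctx)

        # Bypass Agent.default.llm_node and talk to the LLM directly.
        # (Assumed signature: chat_ctx/tools keyword arguments.)
        stream = self.llm.chat(chat_ctx=chat_ctx, tools=tools)
        async for chunk in stream:
            yield chunk
```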
Please feel free to reach out if you'd like me to walk you through that or if you have any other questions; I'm here to help!

Sources: Docs::Agents | livekit-plugins/livekit-plugins-openai/livekit/plugins/openai/llm.py | pipeline-llm/replacing_llm_output.py | Modifying LLM output before TTS | LiveKit Docs | Agents 0.x migration guide | LiveKit Docs
**User:**
Where can I set the `LK_OPENAI_DEBUG` variable?

**AI Assistant:**
You can set the `LK_OPENAI_DEBUG` environment variable just like the other environment variables LiveKit uses, such as `OPENAI_API_KEY`. Here are a few options to get you started:

- **In a `.env` or `.env.local` file**, simply add this line:

  ```
  LK_OPENAI_DEBUG=1
  ```

- **Directly in your shell**, run this command:

  ```bash
  export LK_OPENAI_DEBUG=1
  ```

- **In a Docker Compose file**, include it under the `environment` section of your service:

  ```yaml
  services:
    app:
      environment:
        - LK_OPENAI_DEBUG=1
  ```
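You can also set it programmatically if you're launching the agent from Python and would rather not touch shell or container config. A minimal sketch, with the assumption that the variable is read when the plugin makes its requests, so it's safest to set it early, before the session starts:

```python
import os

# Enable the LiveKit OpenAI plugin's request debug logging
os.environ["LK_OPENAI_DEBUG"] = "1"
```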
If you'd like, I'm here to help you set this up in your specific environment; just let me know!

Sources: livekit-plugins/livekit-plugins-openai/README.md | README.md