# ask-ai
get a notification when we begin to call LLM (openai._base_client - DEBUG - Request options: {'method': 'post', 'url': '/chat/completions', 'files': None, 'json_data':) before the answer, before llm_node
Thanks for reaching out, Antoine. I'm an AI assistant and I'm here to help. Give me a moment while I research this topic.
To get a notification before the LLM generates a response, you can use the `before_llm_cb` callback. This callback runs right before the LLM generates a response and can be configured when setting up the `VoicePipelineAgent`. For example:
```python
from livekit.agents import llm
from livekit.agents.llm import ChatImage, ChatMessage
from livekit.agents.pipeline import VoicePipelineAgent
from livekit.plugins import deepgram, openai


async def before_llm_cb(assistant: VoicePipelineAgent, chat_ctx: llm.ChatContext):
    """
    Callback that runs right before the LLM generates a response.
    Captures the current video frame and adds it to the conversation context.

    Note: `ctx` (the JobContext), `get_latest_image`, and `logger` are assumed
    to be defined in the enclosing entrypoint scope, as in the Vision Agent
    Quickstart this snippet comes from.
    """
    latest_image = await get_latest_image(ctx.room)
    if latest_image:
        image_content = [ChatImage(image=latest_image)]
        chat_ctx.messages.append(ChatMessage(role="user", content=image_content))
        logger.debug("Added latest frame to conversation context")


assistant = VoicePipelineAgent(
    vad=ctx.proc.userdata["vad"],
    stt=deepgram.STT(),
    llm=openai.LLM(model="gpt-4o-mini"),
    tts=openai.TTS(),
    chat_ctx=initial_ctx,
    before_llm_cb=before_llm_cb,
)
```
This setup ensures that the callback is triggered before the LLM processes the request. Sources: Vision Agent Quickstart | LiveKit Docs
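Separately, the question's own log excerpt (`openai._base_client - DEBUG - Request options: ...`) suggests a framework-independent option: attach a logging handler to that logger and fire a notification when the pre-request debug line appears. A minimal sketch using only the standard library; the logger name and message prefix are taken from the excerpt above, and the `notifications` list is an illustrative stand-in for real alerting:

```python
import logging

notifications: list[str] = []  # stand-in for a real notification channel


class RequestNotifier(logging.Handler):
    """Fires a notification when the OpenAI client logs its pre-request line."""

    def emit(self, record: logging.LogRecord) -> None:
        # Per the log excerpt in the question, the OpenAI Python client logs
        # "Request options: ..." at DEBUG level on the "openai._base_client"
        # logger just before sending each HTTP request.
        if "Request options" in record.getMessage():
            notifications.append(record.getMessage())  # replace with your alerting


client_logger = logging.getLogger("openai._base_client")
client_logger.setLevel(logging.DEBUG)
client_logger.addHandler(RequestNotifier())
```

This fires for every request the client makes, regardless of which agent abstraction sits on top of it.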
I'm not using VoicePipelineAgent
I'm sorry, but I don't have the information you need at the moment. If you would like a member of the LiveKit team to respond, please ask your question in one of the other channels.
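The thread ends unresolved, but the underlying pattern is not tied to `VoicePipelineAgent`: wrap the LLM call so a hook fires before the request goes out. A minimal plain-Python sketch; `with_pre_call_hook`, `on_llm_start`, and `fake_llm` are illustrative names, not LiveKit APIs:

```python
import asyncio
from typing import Awaitable, Callable


async def with_pre_call_hook(
    llm_call: Callable[[str], Awaitable[str]],
    prompt: str,
    on_llm_start: Callable[[str], None],
) -> str:
    """Fire a notification hook, then perform the actual LLM call."""
    on_llm_start(prompt)           # notification goes out first...
    return await llm_call(prompt)  # ...then the (possibly slow) LLM call runs


# Illustrative usage with a stubbed LLM call:
async def fake_llm(prompt: str) -> str:
    await asyncio.sleep(0)  # stand-in for the real network request
    return f"answer to: {prompt}"


events: list[str] = []
result = asyncio.run(
    with_pre_call_hook(fake_llm, "hello", lambda p: events.append(f"calling LLM: {p}"))
)
```

Whatever agent framework is in use, the same idea applies wherever the code awaits the model's response.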