# ask-ai
r
How do I add the tool call results in chat context? What role should I use? Provide an example
t
Thanks for reaching out, Taresh. I'm an AI assistant and I'm here to help. Give me a moment while I research this topic.
To add tool call results to the chat context in LiveKit, you should use the ChatRole.TOOL role. The recommended approach is to use the ChatMessage.createToolFromFunctionResult method, which formats the tool/function call output properly for the chat context. Example:
Copy code
import { ChatMessage, ChatRole } from '@livekit/agents';

// Assume you have a CallableFunctionResult object named functionResult
const toolResultMessage = ChatMessage.createToolFromFunctionResult(functionResult);

// Add the tool result to the chat context
callContext.addExtraChatMessage(toolResultMessage);
This will create a message with the TOOL role, carrying your tool call result or error. This role is specifically designed for representing tool/function call outputs and ensures correct integration with the LLM system. Sources: examples/src/outbound.ts | AgentCallContext | LiveKit Agents
r
search in python agents sdk and use python api
t
Hey there! Let me walk you through how to add tool call results to your chat context using the Python LiveKit Agents SDK. It's super easy – you'll want to use the "tool" role by creating a FunctionCallOutput object. I'll show you a quick example that should make this crystal clear:
Copy code
from livekit.agents.llm import ChatContext, FunctionCallOutput

# Create a chat context
chat_ctx = ChatContext()

# Add a tool call result
tool_result = FunctionCallOutput(
    call_id="tool_call_123",  # Must match the original function call ID
    name="search_calendar",   # Name of the function that was called
    output="Found 3 meetings scheduled for today: 9am standup, 2pm review, 4pm planning",
    is_error=False            # True if failed
)
chat_ctx.items.append(tool_result)
Pro tip: The "tool" role is always the way to go for tool results. If you need to insert multiple results at once, you can use chat_ctx.insert(). The best part? The SDK takes care of formatting for your LLM provider, so you don't have to worry about the technical details. Easy peasy! Sources: livekit-plugins/livekit-plugins-google/livekit/plugins/google/beta/realtime...
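If it helps to see how the output ties back to the call that produced it, here's a rough sketch that records both items. It assumes FunctionCall is importable from livekit.agents.llm alongside FunctionCallOutput and takes call_id, name, and arguments fields, so it's worth double-checking against the SDK version you're running:
Copy code
from livekit.agents.llm import ChatContext, FunctionCall, FunctionCallOutput

chat_ctx = ChatContext()

# The original call (assumed fields: call_id, name, arguments as a JSON string)
call = FunctionCall(
    call_id="tool_call_123",
    name="search_calendar",
    arguments='{"date": "today"}',
)

# The matching output; call_id links it back to the call above
result = FunctionCallOutput(
    call_id="tool_call_123",
    name="search_calendar",
    output="Found 3 meetings scheduled for today",
    is_error=False,
)

chat_ctx.items.append(call)
chat_ctx.items.append(result)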
r
how do I manually insert the tool call output to the chat context?
t
Hey there! Let me walk you through how to manually add tool call output to your chat context using the Python LiveKit Agents SDK. It's actually pretty straightforward – think of it like adding a note to a conversation.
Copy code
from livekit.agents.llm import ChatContext, FunctionCallOutput

# Create or use your existing chat context
chat_ctx = ChatContext()

# Create the tool output
output = FunctionCallOutput(
    call_id="tool_call_456",  # Must match function call id
    name="my_function",
    output="Result of the function call.",
    is_error=False
)

# Append to the context
chat_ctx.items.append(output)
Pro tip: If you're juggling multiple outputs and want to keep things in order, you can use chat_ctx.insert([output1, output2, ...]). This helps make sure everything lands in the right sequence. Pretty cool, right?
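Here's a minimal sketch of that pattern with two outputs. It assumes chat_ctx.insert() accepts a list as described above (worth verifying on your SDK version), and the function names are made up purely for illustration:
Copy code
from livekit.agents.llm import ChatContext, FunctionCallOutput

chat_ctx = ChatContext()

# Two hypothetical tool results, named only for illustration
output1 = FunctionCallOutput(
    call_id="tool_call_1",
    name="lookup_weather",
    output="Sunny, 22°C",
    is_error=False,
)
output2 = FunctionCallOutput(
    call_id="tool_call_2",
    name="lookup_traffic",
    output="Light traffic on the usual route",
    is_error=False,
)

# insert() adds both items to the context, preserving their order
chat_ctx.insert([output1, output2])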
r
can I modify the function call output after it has been added to the chat context? I want to manually insert it after it has been already added
t
I don't have the answer you're looking for. You could also try asking your question:
• in one of the other Slack channels, or
• to https://deepwiki.com/livekit/livekit_composite, which is trained on all LiveKit source code
If you find the answer, please post it here to help others!
r
how do I tell the turn_detector which language is the input speech?