tall-airline-16756
09/07/2025, 9:05 AM
Here's how you can load the Silero VAD with memory-friendly settings:
from livekit.plugins import silero
# Load the VAD with optimized settings
vad = silero.VAD.load(
    max_buffered_speech=10.0,      # cut buffered speech down to 10 seconds
    sample_rate=8000,              # use an 8 kHz sample rate to save memory
    prefix_padding_duration=0.25,  # shorten padding duration to 0.25 seconds
    force_cpu=True,                # make sure VAD inference runs on the CPU
)
# Example usage in an agent session
from livekit.agents import Agent, AgentSession

def prewarm(proc):
    proc.userdata["vad"] = vad  # reuse the same VAD instance across sessions

async def entrypoint(ctx):
    vad_instance = ctx.proc.userdata["vad"]
    session = AgentSession(vad=vad_instance)
    # session.start needs an agent and a room to join
    await session.start(agent=Agent(instructions="You are a helpful assistant."), room=ctx.room)
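For completeness, here's a minimal sketch of how these two hooks are typically registered with the worker, assuming the standard livekit.agents CLI runner:
from livekit.agents import WorkerOptions, cli

if __name__ == "__main__":
    # prewarm_fnc runs once per process, so the VAD is loaded before any session starts
    cli.run_app(WorkerOptions(entrypoint_fnc=entrypoint, prewarm_fnc=prewarm))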
This setup helps keep memory use low without losing any functionality. If you want me to walk you through it or share more examples, just let me know!
tall-airline-16756
09/08/2025, 12:06 PM
Tool usage is controlled by the tool_choice parameter, which you can set to one of three modes (sketched below):
• auto: The model decides if it should use tools.
• required: The model has to use tools.
• none: No tool usage at all.
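As a quick sketch of the three modes, assuming the livekit-plugins-openai constructor (the model name is just a placeholder):
from livekit.plugins import openai

llm_auto = openai.LLM(model="gpt-4o", tool_choice="auto")          # model decides
llm_required = openai.LLM(model="gpt-4o", tool_choice="required")  # must call a tool
llm_none = openai.LLM(model="gpt-4o", tool_choice="none")          # never calls tools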
There's also parallel tool calling, which lets the model make several tool calls at the same time, handy for tackling complex tasks efficiently. It's controlled by the parallel_tool_calls parameter (parallelToolCalls if you're using Node.js) and is enabled by default in LiveKit's OpenAI integrations.
If you want to set this up, you just adjust these parameters when initializing the LLM. Here's a quick example:
from livekit.plugins import openai

llm = openai.LLM(
    api_key="your_api_key",
    tool_choice="auto",
    parallel_tool_calls=True,
)
If you have any questions or want me to walk you through specific examples, just let me know—I’m here to help!
Sources: Fireworks AI LLM integration guide | LiveKit Docs | Cerebras LLM integration guide | LiveKit Docs | Telnyx LLM integration guide | LiveKit Docs | DeepSeek LLM integration guide | LiveKit Docs
tall-airline-16756
09/08/2025, 12:08 PM
Here's a complete example:
import asyncio

from livekit.agents.llm import ChatContext
from livekit.plugins import openai

# Initialize the LLM with the OpenAI integration
llm = openai.LLM(
    api_key="your_openai_api_key",  # just swap this out with your OpenAI API key
    tool_choice="auto",             # the model decides if it wants to use tools
    parallel_tool_calls=True,       # turn on parallel tool calling for efficiency
)

# Here's how you can make a chat completion call
async def main() -> None:
    chat_ctx = ChatContext()
    chat_ctx.add_message(role="user", content="What is the weather and time in New York?")

    # Let's see what it says! The response arrives as a stream of chunks
    stream = llm.chat(chat_ctx=chat_ctx)
    async for chunk in stream:
        if chunk.delta and chunk.delta.content:
            print(chunk.delta.content, end="")
    await stream.aclose()

asyncio.run(main())
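One thing worth noting: a raw llm.chat() call only streams back the model's output, including any tool call requests carried in each chunk's delta; it doesn't execute tools itself. That orchestration happens when the LLM runs inside an AgentSession.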
This little example shows you how to get tool calling and parallel tool calling up and running with OpenAI and LiveKit. Just remember to replace your_openai_api_key
with your actual API key. If you want me to walk you through anything else or need more examples, just give me a shout!
tall-airline-16756
09/08/2025, 12:10 PM
You can build your own web search tool with the function_tool decorator, like this:
from duckduckgo_search import DDGS
from livekit.agents import Agent, AgentSession, function_tool
from livekit.plugins import openai

# Define a custom web search tool
@function_tool
async def search_web(query: str) -> list[dict]:
    """Perform a web search using DuckDuckGo."""
    ddgs = DDGS()
    return ddgs.text(query, max_results=3)  # return the top 3 results

# Initialize the LLM that will decide when to call the tool
llm = openai.LLM(
    api_key="your_openai_api_key",
    tool_choice="auto",
)

# Register the tool on an agent and run it in a session
agent = Agent(
    instructions="Use search_web to answer questions about current events.",
    tools=[search_web],
)

async def entrypoint(ctx):
    session = AgentSession(llm=llm)
    await session.start(agent=agent, room=ctx.room)
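A quick design note on the sketch above: DDGS().text() is a blocking call, so in a production agent you'd probably wrap it with asyncio.to_thread so the search doesn't stall the event loop.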
Or, if you prefer, you can tap into Perplexity models that come with built-in search capabilities, like this:
llm = openai.LLM.with_perplexity(
    api_key="your_perplexity_api_key",
    model="llama-3.1-sonar-small-128k-chat",
)
If you want me to walk you through more examples or have any questions, just let me know—I’m here to help!
Sources: Ollama integration guide | LiveKit Docs | OpenAI LLM integration guide | LiveKit Docs | Perplexity LLM integration guide | LiveKit Docs | livekit-plugins/livekit-plugins-openai/livekit/plugins/openai/llm.py