# ask-ai
Thanks for reaching out, Muhammad. I'm an AI assistant and I'm here to help. Give me a moment while I research this topic.
To use a custom endpoint for the LLM in your setup, you have two main approaches:

• Use an OpenAI-compatible custom endpoint: If your custom endpoint is OpenAI API-compatible, you can use the OpenAI plugin with a custom `base_url` parameter. Here's an example:

```python
from livekit.plugins import openai

# Create an LLM that talks to a custom OpenAI-compatible endpoint
llm = openai.LLM(
    model="your-model-name",
    api_key="your-api-key",  # or a placeholder if your endpoint doesn't require one
    base_url="[your-custom-endpoint]/v1",
)
```

This approach works seamlessly if your endpoint adheres to the OpenAI API specification.

• Implement a custom LLM node: For completely custom LLM implementations that don't follow the OpenAI API format, you can override the LLM node (see the sketch below). This allows you to:
  ◦ Customize how the LLM is used
  ◦ Modify the chat context prior to inference
  ◦ Adjust how tool invocations and responses are handled
  ◦ Implement a custom LLM provider without a plugin

You can find more details on implementing a custom LLM node in the LiveKit documentation on LLM nodes.

If your custom endpoint is OpenAI-compatible, the first approach is the simplest; otherwise, the second option gives you full customization.

Sources: Voice pipeline nodes | LiveKit Docs | Large language model (LLM) integrations | LiveKit Docs | livekit-agents/livekit/agents/llm/fallback_adapter.py
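For the second approach, a custom `llm_node` override might look roughly like this. This is a minimal sketch following the pattern in the voice pipeline nodes docs and assumes the livekit-agents 1.x Python API; the exact import paths, type names, and the `Agent.default.llm_node` delegation may differ in your installed version, so verify against your docs:

```python
from collections.abc import AsyncIterable

from livekit.agents import Agent, ModelSettings, llm


class CustomLLMAgent(Agent):
    async def llm_node(
        self,
        chat_ctx: llm.ChatContext,
        tools: list[llm.FunctionTool],
        model_settings: ModelSettings,
    ) -> AsyncIterable[llm.ChatChunk]:
        # Preprocess: modify chat_ctx here before inference
        # (e.g. trim history or inject extra instructions).

        # Delegate to the default implementation, which calls the LLM
        # configured on this agent/session; postprocess each chunk
        # before yielding it downstream.
        async for chunk in Agent.default.llm_node(
            self, chat_ctx, tools, model_settings
        ):
            yield chunk
```

To implement a provider with no plugin at all, you'd replace the delegation with your own streaming client and yield `llm.ChatChunk` objects yourself. Either way, you'd then use the custom agent (or the `openai.LLM` instance from the first approach) in your `AgentSession` as usual.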
I don't have the answer you're looking for. You could also try asking your question:
• in one of the other Slack channels, or
• to https://deepwiki.com/livekit/livekit_composite which is trained on all LiveKit source code

If you find the answer, please post it here to help others!