# ask-ai

@millions-hair-24956 how do i point my voicepipelineagent on livekit cloud to use my self hosted deepgram
It seems likely that you can configure the `VoicePipelineAgent` to use a self-hosted Deepgram instance by pointing the Deepgram STT plugin at your server's endpoint. According to the VoicePipelineAgent documentation, you can customize the models used by the agent, including the STT model. Here's a basic example of how you might set up the `VoicePipelineAgent` with a custom Deepgram STT endpoint:
```python
from livekit.agents import VoicePipelineAgent
from livekit.agents.llm import ChatContext
from livekit.plugins import deepgram, openai, silero

# Initial chat context for the LLM (customize the system prompt as needed)
initial_ctx = ChatContext().append(
    role="system",
    text="You are a helpful voice assistant.",
)

# Point the Deepgram STT plugin at your self-hosted server
# (the endpoint parameter name may vary by plugin version)
stt = deepgram.STT(model="your-custom-model", api_url="http://your-deepgram-server")

agent = VoicePipelineAgent(
    vad=silero.VAD.load(),
    stt=stt,
    llm=openai.LLM(),
    tts=openai.TTS(),
    chat_ctx=initial_ctx,
    allow_interruptions=True,
    interrupt_speech_duration=0.5,
    interrupt_min_words=0,
    min_endpointing_delay=0.5,
)

# `room` and `participant` come from the agent's job context
agent.start(room, participant)
```
Would you like more details on setting up the Deepgram server or configuring the agent?

Sources:
- VoicePipelineAgent documentation
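One note: Deepgram's streaming STT is served over WebSockets, so depending on the plugin version, the custom endpoint may need a `ws://` or `wss://` scheme rather than `http://`. If you need to derive one from an HTTP base URL, a small stdlib helper works (this helper is illustrative, not part of the LiveKit API):

```python
from urllib.parse import urlparse, urlunparse

def http_to_ws(url: str) -> str:
    """Map an http(s) base URL to its ws(s) equivalent; other schemes pass through."""
    parsed = urlparse(url)
    scheme = {"http": "ws", "https": "wss"}.get(parsed.scheme, parsed.scheme)
    return urlunparse(parsed._replace(scheme=scheme))
```

For example, `http_to_ws("http://your-deepgram-server")` yields `"ws://your-deepgram-server"`, which you could pass as the STT endpoint if the plugin expects a WebSocket URL.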
@millions-hair-24956 help me configure the agent
To configure the `VoicePipelineAgent` to use your self-hosted Deepgram, you need to set up the agent with the correct STT model and endpoint. Here's a step-by-step guide:

1. **Install required packages**: Ensure you have the necessary LiveKit and Deepgram packages installed.
2. **Configure Deepgram STT**: Set up the Deepgram STT with your self-hosted server's URL.
3. **Set up the VoicePipelineAgent**: Use the `VoicePipelineAgent` class to integrate the VAD, STT, LLM, and TTS components.

Here's an example configuration:
```python
from livekit.agents import VoicePipelineAgent
from livekit.agents.llm import ChatContext
from livekit.plugins import deepgram, openai, silero

# Load the Silero voice-activity detector
vad = silero.VAD.load()

# Configure Deepgram STT with your self-hosted endpoint
# (the endpoint parameter name may vary by plugin version)
stt = deepgram.STT(
    model="your-custom-model",
    api_url="http://your-deepgram-server",
)

# Initial chat context for the LLM
initial_ctx = ChatContext().append(
    role="system",
    text="You are a helpful voice assistant.",
)

# Set up the agent
agent = VoicePipelineAgent(
    vad=vad,
    stt=stt,
    llm=openai.LLM(),
    tts=openai.TTS(),
    chat_ctx=initial_ctx,
    allow_interruptions=True,
    interrupt_speech_duration=0.5,
    interrupt_min_words=0,
    min_endpointing_delay=0.5,
)

# Start the agent for the connected room and participant
# (`room` and `participant` come from the agent's job context)
agent.start(room, participant)
```
Make sure to replace `"your-custom-model"` and `"http://your-deepgram-server"` with your actual Deepgram model and server URL. Would you like more detailed instructions on any specific part of this setup?

Sources:
- https://docs.livekit.io/sip/examples/sip-participant-attributes/
- https://docs.livekit.io/agents/quickstarts/voice-agent/