fierce-answer-55580
08/29/2025, 1:28 PM

tall-airline-16756
08/29/2025, 1:30 PM
08/29/2025, 1:30 PMpip install "livekit-agents[openai]~=1.2"
2. Set Up Your Environment Variables:
Pop your DeepSeek API key into a .env file like this:
DEEPSEEK_API_KEY=<your-deepseek-api-key>
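For reference, here's a minimal sketch of what loading that .env file involves. In practice you'd use python-dotenv's load_dotenv() or the LiveKit CLI's env handling; the load_env helper below is a hypothetical, stdlib-only illustration of the idea, not part of the SDK:

```python
import os

def load_env(path=".env"):
    """Minimal .env loader: parse KEY=VALUE lines and export them.
    (A stand-in for python-dotenv's load_dotenv(), for illustration only.)"""
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blanks, comments, and malformed lines
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # Don't clobber variables already set in the real environment
            os.environ.setdefault(key.strip(), value.strip())

# After loading, the plugin can pick the key up via:
# load_env()
# api_key = os.getenv("DEEPSEEK_API_KEY")
```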
3. Get the DeepSeek LLM Ready:
Here’s the code to set up the LLM:

from livekit.plugins import openai

deepseek_llm = openai.LLM.with_deepseek(
    model="deepseek-chat",  # Default model
    temperature=0.7,
    parallel_tool_calls=True,
    tool_choice="auto",
)

Or, if you want to specify the DeepSeek base URL explicitly, try this:
import os

llm = openai.LLM(
    base_url="https://api.deepseek.com/v1",
    model="deepseek-chat",
    api_key=os.getenv("DEEPSEEK_API_KEY"),
)
4. Plug It Into a VoicePipelineAgent:
Use the LLM you just set up in your voice agent pipeline like so:
agent = VoicePipelineAgent(
    vad=ctx.proc.userdata["vad"],
    stt=openai.STT.with_groq(),  # Or another STT provider
    llm=deepseek_llm,
    tts=openai.TTS(),  # Or another TTS provider
    chat_ctx=initial_ctx,
)
5. Other Options:
Just so you know, DeepSeek models also work with providers like Cerebras, Groq, and others that support OpenAI-compatible APIs.
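Since all of these providers speak the OpenAI wire protocol, switching between them is mostly a matter of base_url and API key. Here's a small sketch of that idea; the endpoint URLs below are the providers' documented OpenAI-compatible bases as I understand them, so double-check them against each provider's docs before relying on them:

```python
import os

# Assumed OpenAI-compatible endpoints; verify against each provider's docs.
PROVIDERS = {
    "deepseek": ("https://api.deepseek.com/v1", "DEEPSEEK_API_KEY"),
    "groq": ("https://api.groq.com/openai/v1", "GROQ_API_KEY"),
    "cerebras": ("https://api.cerebras.ai/v1", "CEREBRAS_API_KEY"),
}

def llm_kwargs(provider: str, model: str) -> dict:
    """Build the keyword arguments you'd pass to openai.LLM(...)."""
    base_url, key_var = PROVIDERS[provider]
    return {"base_url": base_url, "model": model, "api_key": os.getenv(key_var)}

# e.g. openai.LLM(**llm_kwargs("deepseek", "deepseek-chat"))
```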
If you hit any bumps or want a hand with any of these steps, just give me a shout—I’m here to help!
Sources: chain-of-thought-tts/agent.py | Cerebras and LiveKit | LiveKit Docs | Large language model (LLM) integrations | LiveKit Docs | DeepSeek LLM integration guide | LiveKit Docs | Building an Automated IVR Menu Caller | LiveKit Docs

fierce-answer-55580
08/29/2025, 1:47 PM