# ask-ai
**Viren:**
I'm using GroqLLM as a base class and writing my custom logic inside it. The problem is that the LLM first generates all the text, and only then is it handed to TTS through the AgentSession pipeline. I want generation to stream simultaneously with TTS. Can you give me sample code?
**AI assistant:**
Thanks for reaching out, Viren. I'm an AI assistant and I'm here to help. Give me a moment while I research this topic.
To stream text-to-speech (TTS) alongside a Groq LLM, you can use the following setup. First, install the Groq plugin:
```shell
pip install "livekit-agents[groq]~=1.0"
```
Then, add your Groq API key to your `.env` file:
```shell
GROQ_API_KEY=<your groq api key>
```
Here is a sample code snippet for initializing an `AgentSession` with Groq components:
```python
from livekit.agents import AgentSession
from livekit.plugins import groq

# ...

# in your entrypoint function
session = AgentSession(
    stt=groq.STT(
        model="whisper-large-v3-turbo",
        language="en",
    ),
    llm=groq.LLM(
        model="llama3-8b-8192",
    ),
    tts=groq.TTS(
        model="playai-tts",
        voice="Arista-PlayAI",
    ),
    # ... vad, turn_detection, etc.
)
```
This setup configures a complete voice AI pipeline using Groq's services for speech-to-text, language-model inference, and text-to-speech. Let me know if you need further assistance!

Sources: Groq integration guide | LiveKit Docs
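The key point behind the pipeline above is that TTS can begin as soon as the LLM yields its first text chunk, rather than after the full completion is assembled. Here is a minimal, library-free sketch of that streaming idea using plain asyncio async generators; `fake_llm_stream` and `mock_tts` are hypothetical stand-ins, not LiveKit or Groq APIs:

```python
import asyncio


async def fake_llm_stream():
    # Hypothetical stand-in for a streaming LLM: yields text chunks
    # as they are produced instead of returning one final string.
    for chunk in ["Hello", ", ", "world", "!"]:
        await asyncio.sleep(0)  # simulate per-chunk generation latency
        yield chunk


async def mock_tts(chunks):
    # Stand-in for a TTS consumer: it receives each chunk as soon as
    # it is yielded, so synthesis could start on the first chunk.
    spoken = []
    async for chunk in chunks:
        spoken.append(chunk)  # a real TTS would synthesize audio here
    return "".join(spoken)


async def main():
    return await mock_tts(fake_llm_stream())


print(asyncio.run(main()))  # -> Hello, world!
```

Because the consumer iterates the generator directly, no intermediate buffer holds the complete response; this is the same producer/consumer shape a streaming LLM-to-TTS pipeline relies on.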
e
But I have my custom class for LLM
t
I'm sorry, but I don't have the information you need at the moment. If you would like a member of the LiveKit team to respond, please ask your question in one of the other channels.
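An editorial note on the unanswered question: if a custom LLM subclass accumulates the whole completion into a string before returning it, downstream TTS cannot start until generation has finished. Keeping the override an async generator that re-yields each chunk immediately preserves streaming. A schematic, library-free sketch of the two patterns; all class and method names here are hypothetical illustrations, not the LiveKit or Groq API:

```python
import asyncio


class BaseLLM:
    # Hypothetical base class whose chat() is a streaming async generator.
    async def chat(self, prompt):
        for token in ["The ", "answer ", "is ", "42."]:
            await asyncio.sleep(0)  # simulate per-token generation
            yield token


class BufferingLLM(BaseLLM):
    # Anti-pattern: collects the full text first, so a TTS consumer
    # only sees output after generation has completely finished.
    async def chat(self, prompt):
        full = "".join([t async for t in super().chat(prompt)])
        yield full  # one big chunk, delivered at the very end


class StreamingLLM(BaseLLM):
    # Preferred: apply custom logic per chunk and re-yield immediately,
    # so a downstream consumer can act on the first chunk.
    async def chat(self, prompt):
        async for token in super().chat(prompt):
            yield token.upper()  # custom per-chunk logic goes here


async def demo():
    return [t async for t in StreamingLLM().chat("hi")]


print(asyncio.run(demo()))  # -> ['THE ', 'ANSWER ', 'IS ', '42.']
```

The same shape applies when subclassing a real plugin LLM: keep the streaming contract of the base class intact and insert custom logic inside the loop, not after it.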