# ask-ai
m
I am using Deepgram TTS and I keep getting error 429 Too Many Requests. Even though I only have one connection, the backend says 19 websockets are concurrently open.
t
Thanks for reaching out, Usama. I'm an AI assistant and I'm here to help. Give me a moment while I research this topic.
The 429 Too Many Requests error with Deepgram TTS may occur due to connection pooling issues or improper connection reuse. Here are steps to address this:
• Check connection pooling: Ensure the connection pool is correctly sized and configured to reuse connections efficiently. Avoid creating a new websocket for each request.
• Inspect connection management: Verify that connections are properly closed when no longer needed and not left hanging open.
• Review request patterns: If multiple requests are made in quick succession, ensure the pool size is sufficient and implement retry logic with back-off to handle rate limits (see the sketch after this list).
These steps should help resolve the issue. Let me know if you need further assistance or clarification.
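As a rough illustration of the back-off point above (not taken from the thread), a generic async retry helper might look like the sketch below. The `with_backoff` name is illustrative, and the broad `except Exception` should be narrowed to whatever error your client actually raises for HTTP 429:

```python
import asyncio
import random

async def with_backoff(make_request, *, max_attempts=5, base_delay=1.0):
    """Retry an async operation with exponential back-off plus jitter.

    make_request is any zero-argument callable returning an awaitable,
    e.g. a wrapper around a Deepgram TTS request that raises when the
    service answers 429 Too Many Requests.
    """
    for attempt in range(max_attempts):
        try:
            return await make_request()
        except Exception as exc:  # narrow this to the rate-limit error your client raises
            if attempt == max_attempts - 1:
                raise
            # exponential back-off with a little jitter to avoid thundering herds
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.5)
            print(f"rate limited ({exc}); retrying in {delay:.1f}s")
            await asyncio.sleep(delay)
```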
m
What changes would that require in the code?
t
Hey! Could you tell me a bit about how your code is handling websocket connections for Deepgram TTS right now? For instance:
• Are you using a connection pool? If yes, how have you set it up?
• How do you open and close connections in your code?
• Do you reuse connections, or create new ones every time?
Knowing these details will really help me give you spot-on suggestions to fix the issue. Thanks!
Sources: livekit-plugins/livekit-plugins-cartesia/livekit/plugins/cartesia/tts.py
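On the "how do you close connections" point above, here is a minimal sketch of tearing a TTS instance down explicitly when your app shuts down, assuming the `aclose()` cleanup method exposed by LiveKit plugin TTS objects; the `shutdown` hook shown is illustrative:

```python
from livekit.plugins import deepgram

tts = deepgram.TTS(model="aura-2-andromeda-en", api_key="YOUR_DEEPGRAM_API_KEY")

async def shutdown() -> None:
    # Release the TTS websocket(s) explicitly instead of leaving them open;
    # aclose() is the async cleanup hook on LiveKit plugin TTS instances.
    await tts.aclose()
```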
m
```python
tts=deepgram.TTS(
    model="aura-2-andromeda-en",
    api_key="YOUR_DEEPGRAM_API_KEY",  # use your own Deepgram key; don't hard-code real keys
    encoding="linear16",
    sample_rate=24000,
),
```
This is the code I am using.
t
Hey there! To help you tackle that 429 Too Many Requests error, a great approach is to use connection pooling and make sure connections get reused properly. Here’s a friendly example of how you can set up Deepgram TTS with connection pooling:
```python
from livekit.plugins import deepgram
from livekit.agents import utils  # ConnectionPool lives in the agents SDK utils module
import aiohttp

# Initialize Deepgram TTS
tts = deepgram.TTS(
    model="aura-2-andromeda-en",
    api_key="YOUR_DEEPGRAM_API_KEY",
)

# The plugin already manages a websocket connection pool internally; this
# re-creates it with custom settings. Note that _pool, _connect_ws and
# _close_ws are private attributes and may change between plugin versions.
tts._pool = utils.ConnectionPool[aiohttp.ClientWebSocketResponse](
    connect_cb=tts._connect_ws,
    close_cb=tts._close_ws,
    max_session_duration=3600,
    mark_refreshed_on_get=False,
)

# Prewarm connections so the first request doesn't pay the connection cost
tts.prewarm()
```
This way, your connections get reused smoothly, which helps a lot. Also, keep an eye on your request patterns and consider adding retry logic with back-off to gracefully handle those rate limits. If you want, I'm here to help you through it; just let me know!
Sources: livekit-plugins/livekit-plugins-deepgram/livekit/plugins/deepgram/stt.py
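One more thing worth checking, given the original report of 19 concurrent websockets from what should be a single connection: a common culprit (an assumption here, not confirmed in the thread) is constructing a new `deepgram.TTS` instance per request or per session. A minimal sketch of the reuse pattern; `shared_tts` and `handle_request` are illustrative names and `YOUR_DEEPGRAM_API_KEY` is a placeholder:

```python
from livekit.plugins import deepgram

# Create ONE TTS instance at process startup and reuse it for every request.
# Each deepgram.TTS instance maintains its own websocket connections, so
# constructing a new instance per request/session multiplies open sockets.
shared_tts = deepgram.TTS(
    model="aura-2-andromeda-en",
    api_key="YOUR_DEEPGRAM_API_KEY",
    encoding="linear16",
    sample_rate=24000,
)

def handle_request(text: str):
    # hypothetical request handler: always use the shared instance here,
    # never a fresh deepgram.TTS(...) inline
    return shared_tts.synthesize(text)
```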
m
What is utils? Where do I import it from?