# ask-ai
Is Deepgram STT the only STT that LiveKit supports for streaming transcription? How do I implement it correctly for the lowest latency?
Thanks for reaching out, Andrew. I'm an AI assistant and I'm here to help. Give me a moment while I research this topic.
LiveKit supports streaming transcription with multiple STT providers, including Deepgram, Amazon Transcribe, AssemblyAI, Azure AI Speech, Google Cloud STT, and others. All of these providers support low-latency multilingual transcription. To implement Deepgram STT for the lowest latency, follow these steps:

• Enable `no_delay`: ensures results are returned without waiting for a complete sequence. Defaults to `True`.
• Minimize `endpointing_ms`: set this to a small value (e.g., 10 ms) or disable it entirely (set it to 0) to shorten the silence that is treated as the end of speech.
• Enable `interim_results`: provides immediate feedback with non-final transcription results. Defaults to `True`.
• Choose the right model: the default `nova-3` model balances accuracy and speed.

### Example Configuration
```python
from livekit.plugins import deepgram

stt = deepgram.STT(
    model="nova-3",
    interim_results=True,
    no_delay=True,
    endpointing_ms=10,  # reduce from the default 25 ms
    punctuate=False,    # disable if not needed, for speed
    smart_format=False  # disable if not needed, for speed
)
```
If the absolute lowest latency is critical, you may also consider evaluating Google Cloud STT, as LiveKit's testing found it faster than Deepgram in some cases (see the provider-swap sketch after the sources below).

Sources:
• Speech-to-text (STT) integrations | LiveKit Docs
• Working with plugins | LiveKit Docs
• tests/test_stt.py
• Deepgram STT integration guide | LiveKit Docs
• Live Conversations with AI using ChatGPT and WebRTC
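Because the plugins share the same STT interface, comparing providers is mostly a constructor change. This is a minimal sketch, not taken from the sources above; it assumes the `livekit-plugins-google` package is installed and Google credentials are configured (for example via `GOOGLE_APPLICATION_CREDENTIALS`), and any tuning arguments should come from each plugin's docs.

```python
from livekit.plugins import deepgram, google

def build_stt(provider: str = "deepgram"):
    # Both plugins implement the livekit-agents STT interface, so the rest
    # of the pipeline stays the same when switching providers.
    if provider == "google":
        return google.STT()  # default settings; tune per the Google Cloud STT plugin docs
    return deepgram.STT(
        model="nova-3",
        interim_results=True,
        no_delay=True,
        endpointing_ms=10,
    )
```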