# ask-ai
I created an AI Agent using python with the VoicePipelineAgent. Show me the code that sends transcriptions to the frontend client
To send transcriptions to the frontend client using the `VoicePipelineAgent`, you can use the `STTSegmentsForwarder` class to forward transcriptions. Here's a basic example of how to achieve this:

1. **Set up the `STTSegmentsForwarder`:** This class forwards STT transcriptions to the client, which is useful for client-side rendering.
2. **Listen for transcription events:** Use the LiveKit SDKs to receive the transcription events in your frontend.

Here's a code snippet to illustrate the setup:
```python
import asyncio

from livekit import rtc
from livekit.agents import stt
from livekit.agents.transcription import STTSegmentsForwarder
from livekit.plugins import deepgram  # any STT plugin works; Deepgram is one example

async def transcribe_track(participant, track, room):
    # Forwards STT segments to the frontend over the room's transcription protocol
    stt_forwarder = STTSegmentsForwarder(room=room, participant=participant, track=track)
    stt_impl = deepgram.STT()
    stt_stream = stt_impl.stream()

    # Push the track's audio frames into the STT stream in the background
    async def _push_audio():
        async for frame_ev in rtc.AudioStream(track):
            stt_stream.push_frame(frame_ev.frame)
    push_task = asyncio.create_task(_push_audio())

    async for ev in stt_stream:
        stt_forwarder.update(ev)  # sends the interim/final segment to the client
        if ev.type == stt.SpeechEventType.INTERIM_TRANSCRIPT:
            print(ev.alternatives[0].text, end="")
        elif ev.type == stt.SpeechEventType.FINAL_TRANSCRIPT:
            print("\n ->", ev.alternatives[0].text)
```
In your frontend, you can listen for the `RoomEvent.TranscriptionReceived` event to collect and display the transcriptions. Would you like more details on integrating this with a specific frontend framework?

Sources:
- LiveKit Transcriptions
- STTSegmentsForwarder API