creamy-tiger-14805
08/29/2025, 10:07 AM
tall-airline-16756
08/29/2025, 10:07 AM
tall-airline-16756
08/29/2025, 10:08 AM
• RoomEvent.TranscriptionReceived (Deprecated): You can listen for this event in the LiveKit client SDKs to get transcription segments as they happen. Just a heads up, this method is deprecated and will be removed in future versions.
• Text Streams (Recommended): Transcripts are shared as text streams on the topic lk.transcription. You can use the registerTextStreamHandler method to listen to these streams programmatically (see the sketch after this list).
• Full Transcript History (Agent Side): If you’re building an agent, you can access the entire conversation history through the session.history property to save transcripts once the session ends.
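If you’d like a concrete starting point, here’s a minimal Python sketch of the recommended text-stream approach. It assumes an already-connected rtc.Room called room, and that the Python SDK’s snake_case equivalent of registerTextStreamHandler (register_text_stream_handler) hands your callback a TextStreamReader plus the sender’s identity, so treat the exact names as something to verify against your installed version:

import asyncio
from livekit import rtc

# Assumes `room` is an already-connected rtc.Room instance.
async def _read_transcription(reader: rtc.TextStreamReader, participant_identity: str):
    # Collect the full segment once the stream completes; you can also
    # iterate the reader chunk by chunk with `async for`.
    text = await reader.read_all()
    print(f"Transcription from {participant_identity}: {text}")

def handle_transcription(reader: rtc.TextStreamReader, participant_identity: str):
    # Handlers should return quickly, so hand the async work off to a task.
    asyncio.create_task(_read_transcription(reader, participant_identity))

# lk.transcription is the topic LiveKit uses for transcription text streams.
room.register_text_stream_handler("lk.transcription", handle_transcription)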
Would you like me to walk you through how to set up any of these methods in a particular SDK? I’m happy to help!
Sources: Transcriptions | LiveKit Docs | livekit-android-test/src/test/java/io/livekit/android/room/RoomTranscriptio... | livekit-compose-components/src/main/java/io/livekit/android/compose/state/t... | Text and transcriptions | LiveKit Docs | livekit-agents/livekit/agents/voice/room_io/room_io.py
creamy-tiger-14805
08/29/2025, 10:10 AM
tall-airline-16756
08/29/2025, 10:11 AM
You can access the full conversation history through the session.history property. It’s super handy for saving everything once your session wraps up. Here’s a quick example to show you how it works:
import json

async def write_transcript():
    # ctx (the JobContext) and session (the AgentSession) come from your agent's entrypoint.
    filename = f"/tmp/transcript_{ctx.room.name}.json"
    with open(filename, 'w') as f:
        json.dump(session.history.to_dict(), f, indent=2)

# Add a shutdown callback to save the transcript when the session ends
ctx.add_shutdown_callback(write_transcript)
This little snippet saves the whole conversation, including all the AI’s responses, into a neat JSON file. If you have any questions or want to tweak it, just give me a shout—I’m here to help!
creamy-tiger-14805
08/29/2025, 1:17 PM
tall-airline-16756
08/29/2025, 1:17 PM
creamy-tiger-14805
08/29/2025, 1:20 PM
tall-airline-16756
08/29/2025, 1:20 PM
• Participants: Each participant has a unique identity and a server-generated sid. They can have details like name, state (think JOINING, JOINED, ACTIVE, DISCONNECTED), kind (like STANDARD, AGENT, SIP), and permissions. In the SDKs, you’ll find two types: LocalParticipant (that’s you!) and RemoteParticipant (everyone else in the room).
• Tracks: Tracks are streams of info—audio, video, or data—published by participants. They come with properties like sid, kind (audio, video, or data), source (Camera, Microphone, ScreenShare, etc.), name, plus whether they’re muted or subscribed. You can easily subscribe or unsubscribe from tracks whenever you want (see the sketch after this list).
• Track Details: There’s more! Tracks include things like width and height for video, mime_type, and encryption settings. Plus, video layers have quality levels (LOW, MEDIUM, HIGH) with bitrate and spatial layer info to keep things smooth.
• Active Speakers: The room even gives you a list of who’s currently speaking, so you can spot active speakers in real time—super handy!
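To make that concrete, here’s a small Python sketch that reads these properties with the rtc SDK. It assumes an already-connected rtc.Room called room; the attribute names (remote_participants keyed by identity, track_publications keyed by track sid, the active_speakers_changed event) are what I’d expect from the Python SDK, so double-check them against your installed version:

from livekit import rtc

# Assumes `room` is an already-connected rtc.Room instance.

# Walk every remote participant and the tracks they have published.
for identity, participant in room.remote_participants.items():
    print(f"{identity} (sid={participant.sid}, name={participant.name})")
    for sid, publication in participant.track_publications.items():
        print(
            f"  track {sid}: kind={publication.kind}, source={publication.source}, "
            f"muted={publication.muted}, subscribed={publication.subscribed}"
        )

# React to active-speaker updates as they arrive.
@room.on("active_speakers_changed")
def on_active_speakers(speakers: list[rtc.Participant]):
    print("Active speakers:", [p.identity for p in speakers])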
If you want a hand accessing or setting up any of these features in the Python SDK, just give me a shout—I’m here to help!
Sources: Rooms, participants, and tracks | LiveKit Docs | protobufs/livekit_models.proto | test/mock/test_data.dart | frontend/lib/Debug.tsx