# ask-ai
c
How do I fetch transcripts from the room? I mean, I can see the transcripts in the playground.
t
Thanks for reaching out, Subham. I'm an AI assistant and I'm here to help. Give me a moment while I research this topic.
If you want to grab transcripts from a room, here are a few ways you can do it:

- **Realtime Transcription Events (Frontend):** You can listen to the `RoomEvent.TranscriptionReceived` event in the LiveKit client SDKs to get transcription segments as they happen. Just a heads up: this method is deprecated and will be removed in future versions.
- **Text Streams (Recommended):** Transcripts are shared as text streams on the topic `lk.transcription`. You can use the `registerTextStreamHandler` method to listen to these streams programmatically (there's a Python sketch below the sources).
- **Full Transcript History (Agent Side):** If you're building an agent, you can access the entire conversation history through the `session.history` property to save transcripts once the session ends.

Would you like me to walk you through how to set up any of these methods in a particular SDK? I'm happy to help!

Sources: Transcriptions | LiveKit Docs | livekit-android-test/src/test/java/io/livekit/android/room/RoomTranscriptio... | livekit-compose-components/src/main/java/io/livekit/android/compose/state/t... | Text and transcriptions | LiveKit Docs | livekit-agents/livekit/agents/voice/room_io/room_io.py
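Here's a minimal sketch of the recommended text stream approach using the Python `livekit.rtc` SDK. The topic name `lk.transcription` and the `register_text_stream_handler` / `read_all` calls follow the LiveKit docs, but verify the exact signatures against your SDK version:

```python
import asyncio

from livekit import rtc

async def handle_transcription(reader: rtc.TextStreamReader, participant_identity: str):
    # Wait for the stream to close, then read the full transcription segment
    text = await reader.read_all()
    print(f"{participant_identity}: {text}")

def register_transcription_handler(room: rtc.Room) -> None:
    # Stream handlers should return quickly, so dispatch the async work to a task
    room.register_text_stream_handler(
        "lk.transcription",
        lambda reader, identity: asyncio.create_task(
            handle_transcription(reader, identity)
        ),
    )
```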
c
How do I do the last one in the Python SDK?
t
Hey there! If you want to grab the full transcript history using the Python SDK, you can simply use the `session.history` property. It's super handy for saving everything once your session wraps up. Here's a quick example to show you how it works:
```python
import json

# Assumes this runs inside an agent entrypoint, where `ctx` is the
# JobContext and `session` is the AgentSession.
async def write_transcript():
    filename = f"/tmp/transcript_{ctx.room.name}.json"
    with open(filename, "w") as f:
        json.dump(session.history.to_dict(), f, indent=2)

# Add a shutdown callback to save the transcript when the session ends
ctx.add_shutdown_callback(write_transcript)
```
This little snippet saves the whole conversation, including all the AI’s responses, into a neat JSON file. If you have any questions or want to tweak it, just give me a shout—I’m here to help!
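If it helps to see where that snippet lives, here's a minimal sketch of the surrounding entrypoint, assuming the livekit-agents 1.x API; the `AgentSession` plugin configuration and the agent instructions are placeholders you'd fill in:

```python
import json

from livekit import agents
from livekit.agents import Agent, AgentSession

async def entrypoint(ctx: agents.JobContext):
    session = AgentSession(
        # stt=..., llm=..., tts=...  (your plugin configuration goes here)
    )

    async def write_transcript():
        filename = f"/tmp/transcript_{ctx.room.name}.json"
        with open(filename, "w") as f:
            json.dump(session.history.to_dict(), f, indent=2)

    # Register before starting so the transcript is saved even if the
    # job shuts down early.
    ctx.add_shutdown_callback(write_transcript)

    await session.start(room=ctx.room, agent=Agent(instructions="You are a helpful assistant."))
    await ctx.connect()
```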
c
What other data can we fetch from the room or session?
t
Hey there! You can grab all sorts of data from a room or session in LiveKit, such as:

- **Room Information:** Things like Room ID, name, metadata, when it was created, max participants allowed, how many are currently in, number of publishers, recording status, server details, and connection state.
- **Participant and Track Data:** Info about remote participants, the local participant, who's speaking, and details on track publications (audio, video, and data tracks).
- **Session Statistics:** Real-time WebRTC stats like packet loss, bitrate, jitter, and RTT to keep you in the loop.
- **Event Data:** Live updates on participant events (joining, leaving, metadata changes), track events (publishing, subscribing, muting/unmuting), and data/chat events (messages, transcription, SIP signals).
- **Additional Features:** Managing end-to-end encryption and tweaking data channel options to fit your needs.

There's a small Python sketch below the sources showing how to read some of these. If you want a hand with accessing any specific data or need help using this with the Python SDK, just give me a shout; I'm here to help!

Sources: Analytics API | LiveKit Docs | Sources/LiveKit/Core/Room.swift | livekit/src/room/mod.rs | Connecting to LiveKit | LiveKit Docs
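As a concrete illustration, here's a minimal sketch that reads a few of these fields with the Python `livekit.rtc` SDK. The property and event names follow the published API, but double-check them against your SDK version:

```python
from livekit import rtc

def inspect_room(room: rtc.Room) -> None:
    # Room information
    print("name:", room.name)
    print("metadata:", room.metadata)
    print("connection state:", room.connection_state)

    # Participant data
    print("local participant:", room.local_participant.identity)
    for identity, participant in room.remote_participants.items():
        print("remote participant:", identity, "sid:", participant.sid)

    # Event data: subscribe to live updates as they happen
    @room.on("participant_connected")
    def on_participant_connected(p: rtc.RemoteParticipant):
        print("joined:", p.identity)
```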
c
"Participant and Track Data: Info about remote participants, the local participant, who's speaking, and details on track publications (audio, video, and data tracks)." Give me more information on this.
t
Hey there! Here's a friendly rundown of participant and track data in LiveKit to help you get comfortable with it:

- **Participants:** Every participant has a unique `identity` and a server-generated `sid`. They can have details like `name`, `state` (think JOINING, JOINED, ACTIVE, DISCONNECTED), `kind` (like STANDARD, AGENT, SIP), and permissions. In the SDKs, you'll find two types: `LocalParticipant` (that's you!) and `RemoteParticipant` (everyone else in the room).
- **Tracks:** Tracks are streams of info (audio, video, or data) published by participants. They come with properties like `sid`, `kind` (audio, video, or data), `source` (Camera, Microphone, ScreenShare, etc.), `name`, plus whether they're `muted` or `subscribed`. You can easily subscribe or unsubscribe from tracks whenever you want.
- **Track Details:** There's more! Tracks include things like `width` and `height` for video, `mime_type`, and encryption settings. Plus, video layers have quality levels (LOW, MEDIUM, HIGH) with bitrate and spatial layer info to keep things smooth.
- **Active Speakers:** The room even gives you a list of who's currently speaking, so you can spot active speakers in real time. Super handy!

If you want a hand accessing or setting up any of these features in the Python SDK, just give me a shout; I'm here to help! (There's a small Python sketch below the sources to get you started.)

Sources: Rooms, participants, and tracks | LiveKit Docs | protobufs/livekit_models.proto | test/mock/test_data.dart | frontend/lib/Debug.tsx
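For a concrete starting point, here's a minimal sketch that walks the participant and track data with the Python `livekit.rtc` SDK. The names mirror the published API (`remote_participants`, `track_publications`, the `active_speakers_changed` event), but treat it as illustrative and verify against your SDK version:

```python
from livekit import rtc

def dump_participants(room: rtc.Room) -> None:
    # Remote participants are keyed by identity; the local participant
    # is available separately as room.local_participant.
    for participant in room.remote_participants.values():
        print(f"{participant.identity} (sid={participant.sid}, name={participant.name!r})")
        for pub in participant.track_publications.values():
            # kind: audio or video; source: camera, microphone, screen share, ...
            print(
                f"  track {pub.sid}: kind={pub.kind} source={pub.source} "
                f"muted={pub.muted} subscribed={pub.subscribed}"
            )

def watch_active_speakers(room: rtc.Room) -> None:
    # The server pushes an updated active-speaker list whenever it changes
    @room.on("active_speakers_changed")
    def on_active_speakers_changed(speakers: list[rtc.Participant]):
        print("speaking now:", [s.identity for s in speakers])
```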