boundless-battery-65494
08/11/2025, 4:13 PM

tall-airline-16756
08/11/2025, 4:14 PM

boundless-battery-65494
08/11/2025, 4:17 PM

tall-airline-16756
08/11/2025, 4:18 PM

boundless-battery-65494
08/11/2025, 4:21 PM

tall-airline-16756
08/11/2025, 4:22 PM

1. Sending Files
You can use the sendFile method to send your files over a specific topic like this:
const file = document.querySelector('input[type="file"]').files[0];
const info = await room.localParticipant.sendFile(file, {
  mimeType: file.type,
  topic: 'file-streaming',
  onProgress: (progress) => {
    console.log(`Sending file: ${Math.ceil(progress * 100)}%`);
  }
});
console.log(`File sent with stream ID: ${info.id}`);
2. Retrieving Files
Then, you can register a byte stream handler to grab and handle the files like this:
room.registerByteStreamHandler('file-streaming', async (reader, participantInfo) => {
  const { mimeType, id } = reader.info;

  // Option 1: Process chunks as they come in
  for await (const chunk of reader) {
    console.log(`Received chunk: ${chunk.byteLength} bytes`);
  }

  // Option 2: Or wait until the whole file is ready
  // (pick one option; the stream can only be consumed once)
  const blob = new Blob(await reader.readAll(), { type: mimeType });
  const downloadUrl = URL.createObjectURL(blob);
  // From here, you can save it or show it to your users
});
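If you go with Option 1 and consume the stream chunk by chunk, you will need to reassemble the pieces yourself before building a Blob. Here is a minimal sketch; concatChunks is a helper name I made up, not part of the LiveKit SDK:

```javascript
// Concatenate the Uint8Array chunks from a byte stream into one buffer.
// concatChunks is a hypothetical helper, not a LiveKit API.
function concatChunks(chunks) {
  const total = chunks.reduce((sum, c) => sum + c.byteLength, 0);
  const out = new Uint8Array(total);
  let offset = 0;
  for (const chunk of chunks) {
    out.set(chunk, offset);
    offset += chunk.byteLength;
  }
  return out;
}
```

With this, the chunk loop can push each chunk into an array and call concatChunks(chunks) once the stream ends to get the complete file contents.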
If you’d like a hand setting up storage options like AWS S3, Azure, or GCP, or if you want to chat about error handling and tracking upload progress, just let me know. I’m here to help!

boundless-battery-65494
08/11/2025, 4:37 PM

tall-airline-16756
08/11/2025, 4:37 PM

boundless-battery-65494
08/11/2025, 4:47 PM

tall-airline-16756
08/11/2025, 4:48 PM

1. Transcribe Audio
const transcribeTrack = async (track) => {
  const audioStream = new AudioStream(track);
  const sttStream = new STT({ sampleRate: 48000 }).stream();

  const sendTask = async () => {
    for await (const frame of audioStream) {
      sttStream.pushFrame(frame);
    }
  };

  const recvTask = async () => {
    for await (const event of sttStream) {
      if (event.type === STT.SpeechEventType.FINAL_TRANSCRIPT) {
        console.log(event.alternatives[0].text); // Here’s where you can store the transcript
      }
    }
  };

  await Promise.all([sendTask(), recvTask()]);
};
2. Synchronize Text and Audio
Next, use the TextAudioSynchronizer to keep your audio and text perfectly in sync:
const synchronizer = new TextAudioSynchronizer(defaultTextSyncOptions);
synchronizer.on('textUpdated', async (text) => {
  console.log('Text updated:', text.text); // This is where you can save the updated text
});
3. Store Transcripts
You can save your transcripts in memory, a file, or even a database—whichever works best for you:
let transcripts = [];
function storeTranscript(side, text) {
  transcripts.push({
    timestamp: new Date().toISOString(),
    side,
    text
  });
}
// Example: Save to a file
const fs = require('fs');
fs.writeFileSync('transcripts.json', JSON.stringify(transcripts, null, 2));
4. Persist Data
When you’re ready for production, it’s a good idea to use a database like MongoDB or PostgreSQL to keep your transcripts safe and organized.
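As a stepping stone toward a database, you can buffer segments in memory and flush them in batches through a single persist callback; in production that callback would wrap your MongoDB or PostgreSQL insert. A minimal sketch under that assumption, where TranscriptStore and persist are hypothetical names of mine, not LiveKit APIs:

```javascript
// Buffer transcript segments and flush them in batches via a user-supplied
// persist callback (e.g. a database bulk insert). Hypothetical sketch,
// not part of the LiveKit SDK.
class TranscriptStore {
  constructor(persist) {
    this.persist = persist; // async (entries) => void, e.g. a DB bulk insert
    this.buffer = [];
  }

  add(side, text) {
    this.buffer.push({ timestamp: new Date().toISOString(), side, text });
  }

  // Flush all buffered entries; call periodically and once at call end.
  async flush() {
    if (this.buffer.length === 0) return 0;
    const batch = this.buffer.splice(0, this.buffer.length);
    await this.persist(batch);
    return batch.length;
  }
}
```

For example, with the pg library the callback could run a parameterized bulk INSERT; the store itself stays storage-agnostic, which makes it easy to test with a fake callback.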
If you want a hand with any part of this—maybe setting up storage or handling errors—just let me know! I’m here to help.
Sources: agents/src/multimodal/agent_playout.ts

boundless-battery-65494
08/11/2025, 4:52 PM

tall-airline-16756
08/11/2025, 4:52 PM

• LiveKit defines the types TranscriptionSegment and Transcription, which you can find in the Node.js RTC package.
• You can easily connect with speech-to-text services such as Deepgram, Google, and Groq through the Agents framework’s plugin system.
• There are also helpful examples and documentation for using transcription with the Agents framework.
If you have any questions or want a hand with setting this up, just give me a shout—I’m here to help!
Sources: packages/livekit-rtc/src/index.ts | Transcriptions | LiveKit Docs | Transcription Applications | livekit/livekit_composite | DeepWiki | plugins/openai/src/realtime/api_proto.ts