# sdk-web
  • a

    average-knife-77348

    09/24/2025, 5:02 AM
    Hello, can we get support for voice speed in the Node SDK? I can see it in Python, but somehow it is missing in Node. Node: https://github.com/livekit/agents-js/blob/main/plugins/elevenlabs/src/tts.ts#L26-L31 Python: https://github.com/livekit/agents/blob/main/livekit-plugins/livekit-plugins-elevenlabs/livekit/plugins/elevenlabs/tts.py#L57-L62 Thanks!!
  • g

    great-oil-61627

    09/25/2025, 5:11 AM
    I'm trying to use https://docs.livekit.io/reference/components/react/hook/usetranscriptions/ and noticed that I always get the agent participant info no matter who is speaking. I have also tried
    Copy code
    const handleTranscription = (segments, participant, publication) => {
      console.log('Transcription received:', segments, participant?.identity);
    };

    room.on(RoomEvent.TranscriptionReceived, handleTranscription);
    and
    Copy code
    room.registerTextStreamHandler('lk.transcription', async (reader, participantInfo) => {
      const message = await reader.readAll();
      if (reader.info.attributes['lk.transcribed_track_id']) {
        console.log(`New transcription from ${participantInfo.identity}: ${message}`);
      } else {
        console.log(`New message from ${participantInfo.identity}: ${message}`);
      }
    });
    but I always get the same identity, which seems to suggest that whatever is responsible for setting it in the backend is doing it wrong. The Android client does not have this issue. Any guesses?
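For reference, this is roughly the client-side mapping I'd expect to be able to do once the attribute points at the right track; a hedged sketch with my own names (`TrackOwnerMap` and `resolveSpeaker` are not SDK APIs):

```typescript
// Hypothetical helper, not part of the LiveKit SDK: map each published track SID
// (collected from TrackPublished events) to the identity that published it, then
// use the `lk.transcribed_track_id` attribute to look up the actual speaker.
type TrackOwnerMap = Map<string, string>; // trackSid -> participant identity

function resolveSpeaker(
  owners: TrackOwnerMap,
  transcribedTrackId: string | undefined,
  fallbackIdentity: string,
): string {
  // Fall back to whatever identity the handler reported when the attribute
  // is missing or unknown (which is the buggy case described above).
  if (transcribedTrackId !== undefined && owners.has(transcribedTrackId)) {
    return owners.get(transcribedTrackId)!;
  }
  return fallbackIdentity;
}
```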
  • p

    polite-oil-10264

    09/25/2025, 1:59 PM
    The background blur using the LiveKit track processor is leaving the corners of the body not fully blurred. After checking, I found that `@mediapipe/tasks-vision` is not on the latest version. Can I ask if anyone has tried updating the package by forking it, and whether it worked correctly for them?
  • h

    high-cat-84203

    09/25/2025, 5:28 PM
    Hi all, I have a hybrid mobile app where we don’t have access to the native source code and can’t embed the LiveKit SDK. We already have a LiveKit agent running, and we’d like to add real-time chat (text + voice) support inside the app. What’s the recommended way to integrate LiveKit in this case? Can we rely on the web components/agent-starter-embed or SIP bridging to achieve this without modifying the app code? Not sure which is the right channel. Thanks in advance!
  • j

    jolly-doctor-73636

    09/26/2025, 7:40 AM
    @dry-elephant-14928 Can you please let us know the ETA for a fix to this (https://github.com/livekit/python-sdks/issues/499)? I hope it is done ASAP. Our release to prod is blocked due to performance issues caused by downgrading to 1.0.12.
  • p

    polite-oil-10264

    09/26/2025, 1:58 PM
    Is there a way to set the audio output device before joining the session? For video and audio input there are hooks available, but not for audio output.
  • h

    helpful-dress-37672

    09/26/2025, 2:20 PM
    Is there an explanation of which RAG strategy LiveKit used to create Hailey, Samuel, and the other agents on their website? Since LiveKit has published multiple approaches on their GitHub, it's difficult to determine whether it's the RAG approach from the LiveKit docs, the LlamaIndex RAG example, or another.
  • b

    big-memory-52028

    09/27/2025, 8:08 AM
    Hi everyone! I just released version 0.1.4 of my Elixir SDK for LiveKit: https://github.com/alexfilatov/livekit-elixir-sdk A big thanks to Adrian Borowski for contributing to this release 🙌 Would love feedback from anyone trying it out! 🎉🎉🎉
  • m

    magnificent-jelly-58282

    09/30/2025, 7:28 AM
    I am trying to set up my first React starter agent, and I am being prompted for `NEXT_PUBLIC_APP_CONFIG_ENDPOINT` after I enter `lk app create --template agent-starter-react`. Where do I get this?
  • f

    full-baker-91538

    10/09/2025, 10:15 AM
    We are using version 0.2 and getting many occurrences of initiation timeouts on LiveKit itself. Has anybody else encountered such a thing? There was a huge spike on Oct 1 and then again yesterday.
  • l

    loud-byte-53435

    10/09/2025, 10:23 AM
    We’re building a doctor-consultation telehealth application using React.js, with LiveKit for video calling.

    Here’s our current implementation flow (standalone):
    - We do not manually create LiveKit rooms; a room is created when a user initiates a call.
    - We use LiveKit webhooks (primarily `room_started` and participant-related events) to:
      - Update our `callLog` table with the generated `roomId`.
      - Track and store participant join/leave events.
      - Maintain a complete call record for audit and analytics purposes.

    Our requirement: whenever a patient books an appointment, we insert a record into the `callLog` table containing the `appointmentId` and the scheduled appointment time. At this stage the `roomId` is not yet available, as the LiveKit room has not been created. Once the scheduled appointment time arrives and either the patient or the doctor initiates the call, a LiveKit room is created. At that point, we need to update the existing `callLog` record with the `roomId` corresponding to the `appointmentId`.

    We’re considering two potential approaches:

    Option 1:
    - After the LiveKit room is auto-created when a call is initiated, update the room metadata with the `appointmentId`.
    - Use a `RoomMetadataChanged` event to detect this and call our backend API to update the `callLog` table with the `roomId` corresponding to the existing `appointmentId`.

    Note: I have currently implemented the first two steps of Option 1 in our existing flow.

    Option 2:
    - Instead of auto room creation, we manually create the LiveKit room in the backend when the appointment time arrives or when the first participant initiates the call.
    - While creating the room, we set the metadata with the `appointmentId`.
    - Use the `room_started` webhook from LiveKit to update our `callLog` table with both `appointmentId` and `roomId`.

    What we’re looking for:
    - Is there a recommended or best-practice approach for this kind of mapping (appointment ↔️ room)?
    - Are there any potential pitfalls with relying on metadata update events vs. using the `room_started` webhook?
    - Is there a better approach to keep appointment and room in sync?

    Any guidance or suggestions would be really helpful 🙏 Thanks in advance!
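To make Option 1 concrete, the metadata round-trip we have in mind looks like this (a sketch; the JSON shape and helper names are our own convention, not a LiveKit one):

```typescript
// We'd store the appointmentId as JSON in the room metadata when the room is
// created or updated, e.g. '{"appointmentId":"appt_123"}', and parse it back
// out in the RoomMetadataChanged handler / room_started webhook.
function buildRoomMetadata(appointmentId: string): string {
  return JSON.stringify({ appointmentId });
}

function extractAppointmentId(metadata: string | undefined): string | null {
  if (!metadata) return null;
  try {
    const parsed = JSON.parse(metadata);
    return typeof parsed.appointmentId === "string" ? parsed.appointmentId : null;
  } catch {
    // Metadata may be empty or non-JSON for rooms we didn't create.
    return null;
  }
}
```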
  • c

    colossal-airport-19101

    10/11/2025, 8:54 AM
    Can someone help me with an issue in my egress_ended webhook? Sometimes when I try to fetch room metadata, I get "room not found"—it looks like the room is being deleted before I can access its metadata. Here’s the relevant part of my code:
    Copy code
    export async function POST(req: NextRequest) {
      // Read the raw body and auth header needed by the webhook receiver
      const body = await req.text();
      const authHeader = req.headers.get('Authorization') ?? '';
      const { event, egressInfo } = await receiver.receive(body, authHeader, true);

      if (event === 'egress_ended') {
        // Fetch metadata before the room is deleted
        const { metadata } = await GetRoom({ roomname: egressInfo.roomName });
        await DeleteRoom({ room_id: egressInfo.roomName });
      }

      return new NextResponse('ok', { status: 200 });
    }
    Earlier, getting metadata before deleting the room worked fine. Any suggestions on why this is failing now, or how I can make sure metadata is always available before deletion? cc: @orange-nightfall-56903
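One workaround I'm considering is retrying the metadata fetch a few times before deleting, since room teardown can race with the webhook delivery (a sketch; `fetchMetadata` stands in for my `GetRoom` call, and the retry helper is hypothetical):

```typescript
// Hypothetical retry wrapper: attempt to read room metadata a few times with a
// short delay between attempts, and only then proceed to deletion.
// Returns null if the room is already gone after all attempts.
async function fetchMetadataWithRetry(
  fetchMetadata: () => Promise<string>,
  attempts = 3,
  delayMs = 200,
): Promise<string | null> {
  for (let i = 0; i < attempts; i++) {
    try {
      return await fetchMetadata();
    } catch {
      // e.g. "room not found"; wait briefly and try again
      if (i < attempts - 1) await new Promise((r) => setTimeout(r, delayMs));
    }
  }
  return null; // room was deleted before we could read its metadata
}
```

An alternative is to avoid the race entirely by capturing the metadata earlier (e.g. when starting the egress) so the webhook handler never needs to read it from a possibly-deleted room.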
  • c

    colossal-airport-19101

    10/11/2025, 8:55 AM
    Sometimes it works, sometimes it throws an error that the room is not found.
  • c

    crooked-zebra-83774

    10/14/2025, 10:43 AM
    I've noticed performance improvements in the Meet app in https://github.com/livekit-examples/meet/commit/f13f8df08eb03853e67c795392acfa5964120c5a. Is there any discussion around whether this works? Is there a plan to implement this in the SDK later?
  • h

    happy-noon-83297

    10/19/2025, 11:41 AM
    What is the best way to measure the end-to-end round-trip latency between the client and server in a LiveKit WebRTC setup, including the time for audio transmission and server processing?
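One approach worth considering is an application-level ping: send a timestamped message over a reliable data channel, have the server/agent echo it back, and record `Date.now() - sentAt` on receipt. The transport wiring is app-specific; the summary helper below is my own sketch:

```typescript
// Summarize round-trip samples (in milliseconds) collected from echoed pings.
// Each sample should be measured as receiveTime - sendTime for one ping.
function summarizeRtt(samplesMs: number[]): { min: number; max: number; avg: number } {
  if (samplesMs.length === 0) throw new Error("no samples");
  const min = Math.min(...samplesMs);
  const max = Math.max(...samplesMs);
  const avg = samplesMs.reduce((a, b) => a + b, 0) / samplesMs.length;
  return { min, max, avg };
}
```

Note this measures end-to-end application latency (network + server processing), which is different from the media-path RTT reported in WebRTC stats.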
  • t

    thankful-airport-13432

    10/20/2025, 1:39 PM
    Hello everyone, I'm trying to use the LiveKit client worker to decode a video track coming from an iOS app. Between two Node.js endpoints everything works fine, but not between iOS and Node.js. Hoping not to violate any group policies, here are some code snippets. Can you tell me if this is the wrong approach?

    Node.js:
    Copy code
    this.cryptoWorker.postMessage({
      kind: 'init',
      data: {
        keyProviderOptions: {
          sharedKey: false,
          ratchetWindowSize: 0,
          failureTolerance: 0,
          keyringSize: 1,
          ratchetSalt: new Uint8Array(0),
          discardFrameWhenCryptorNotReady: true,
        },
        loglevel: 'trace',
      },
    });

    const encoder = new TextEncoder();
    const raw = encoder.encode(cipherKey.value);
    const material = await crypto.subtle.importKey('raw', raw, 'HKDF', false, ['deriveKey']);

    this.cryptoWorker.postMessage({
      kind: 'setKey',
      data: { key: raw, keyIndex: 0, isPublisher: publisher, participantIdentity: callsign },
    });
    this.cryptoWorker.postMessage({
      kind: 'enable',
      data: { enabled: true, participantIdentity: callsign },
    });
    When I have the track:
    Copy code
    this.cryptoWorker.postMessage({
      kind: 'updateCodec',
      data: {
        participantIdentity: this.callsign || '',
        trackId: receiver.track.id,
        codec: receiver.track.kind === 'audio' ? this.audio_codec : this.video_codec,
      },
    });

    const { readable, writable } = receiver.createEncodedStreams();
    this.cryptoWorker.postMessage(
      {
        kind: 'decode',
        data: {
          participantIdentity: this.callsign || '',
          trackId: receiver.track.id,
          isReuse: false,
          readableStream: readable,
          writableStream: writable,
        },
      },
      [readable, writable],
    );
    In Swift:
    Copy code
    private var peerConnection: LKRTCPeerConnection?
    var audioCryptor: LKRTCFrameCryptor?
    var videoCryptor: LKRTCFrameCryptor?

    keyProvider = self.setupKeyProvider(key: data, partecipantId: self.callsign)

    private func hkdfSHA256(ikm: Data, salt: Data, info: Data = Data(), outLen: Int = 32) -> Data {
        let inputKey = SymmetricKey(data: ikm)
        let derived = HKDF<SHA256>.deriveKey(
            inputKeyMaterial: inputKey,
            salt: Data(),
            info: Data(),
            outputByteCount: outLen
        )
        return derived.withUnsafeBytes { Data($0) }
    }

    private func setupKeyProvider(key: Data, partecipantId: String) -> LKRTCFrameCryptorKeyProvider {
        let salt = Data() // unused since ratchetWindowSize = 0
        let provider = LKRTCFrameCryptorKeyProvider(
            ratchetSalt: salt,
            ratchetWindowSize: 0,
            sharedKeyMode: false,
            uncryptedMagicBytes: Data(),
            failureTolerance: 0,
            keyRingSize: 1,
            discardFrameWhenCryptorNotReady: true
        )
        let derivedKey = hkdfSHA256(ikm: key, salt: salt)
        provider.setKey(derivedKey, with: 0, forParticipant: partecipantId)
        return provider
    }

    if isVideoEnabled {
        let localVideoTrack = try makeVideoTrack()
        guard let sender = peerConnection!.add(localVideoTrack.track, streamIds: ["video0"]) else {
            throw MapError.generic(message: "Error creating local video track")
        }
        guard let cryptor = LKRTCFrameCryptor(
            factory: factory,
            rtpSender: sender,
            participantId: callsign,
            algorithm: .aesGcm,
            keyProvider: keyProvider
        ) else {
            throw MapError.generic(message: "Error creating video frame cryptor")
        }
        videoCryptor = cryptor
        videoCryptor?.enabled = true
        videoCryptor?.delegate = self
        self.localVideoTrack = localVideoTrack
        try restartCapture(width: 640, height: 480, frameRate: 30)
    }
    The keys are identical on both sides. Thank you so much for any help you can give me. Gianmassimo
  • m

    melodic-hairdresser-73631

    10/27/2025, 2:30 PM
    Hello all. We are using room composite egress and the rooms are getting recorded. The recording is available in S3, and we can see the egress in the LiveKit dashboard. But the `useIsRecording` hook consistently returns `false`, rendering it unusable for showing the recording status in our UI. Has anyone experienced this? Should we even be using this hook for egress-based recordings?
  • a

    able-traffic-75285

    10/30/2025, 5:50 AM
    Hi all! I have a question about the `prioritizePerformance` method. Why does it reduce publishing quality? If I understand the idea correctly, the `LocalTrackCpuConstrained` event fires when the user is having performance trouble. Shouldn't it reduce only the received video, not the published video?
  • s

    square-pizza-32556

    10/31/2025, 1:37 PM
    python main.py
    Copy code
    Traceback (most recent call last):
      File "C:\Users\Office\Desktop\New folder\main.py", line 132, in <module>
        from livekit.agents import (
      File "C:\Users\Office\AppData\Local\Programs\Python\Python312\Lib\site-packages\livekit\agents\__init__.py", line 23, in <module>
        from . import cli, inference, ipc, llm, metrics, stt, tokenize, tts, utils, vad, voice
      File "C:\Users\Office\AppData\Local\Programs\Python\Python312\Lib\site-packages\livekit\agents\inference\__init__.py", line 3, in <module>
        from .tts import TTS, TTSModels
      File "C:\Users\Office\AppData\Local\Programs\Python\Python312\Lib\site-packages\livekit\agents\inference\tts.py", line 13, in <module>
        from .. import tokenize, tts, utils
      File "C:\Users\Office\AppData\Local\Programs\Python\Python312\Lib\site-packages\livekit\agents\tokenize\__init__.py", line 1, in <module>
        from . import basic, blingfire, utils
      File "C:\Users\Office\AppData\Local\Programs\Python\Python312\Lib\site-packages\livekit\agents\tokenize\blingfire.py", line 7, in <module>
        from livekit import blingfire
      File "C:\Users\Office\AppData\Local\Programs\Python\Python312\Lib\site-packages\livekit\blingfire\__init__.py", line 18, in <module>
        import lk_blingfire as _cext
    ModuleNotFoundError: No module named 'lk_blingfire'
    I am getting this issue. Any idea how to sort it out?
  • h

    happy-noon-83297

    10/31/2025, 3:20 PM
    Hi all, I’m running into an issue with echo cancellation in a chat app built on LiveKit’s WebRTC stack. When AEC is enabled and users are not wearing headphones, the microphone input gets heavily suppressed whenever it overlaps with AI-generated audio output from the speakers. At lower playback volumes, some overlap remains audible, but as volume increases, the mic signal is almost completely muted. Is there a way to adjust AEC parameters or configure LiveKit/WebRTC to reduce this over-suppression while still maintaining effective echo cancellation?
  • s

    stale-oyster-66644

    11/04/2025, 6:48 AM
    hi everyone! quick question: has anyone tried implementing the Gemini Live API's multimodal screen-sharing with LiveKit? I'm trying to build something to share my screen with my Tavus avatar (using LiveKit realtime + ElevenLabs), but running into some issues. If anyone has an example they could share, that would be great! Thank you so much!
  • r

    rapid-ocean-58394

    11/05/2025, 7:52 AM
    Hi, since yesterday (or maybe longer) I get a weird error using the JS client SDK 2.15.11 and Safari 17.2.1. It occurs during room connect, and the room doesn't connect anymore. The problem wasn't there before and does not occur in Chrome-based browsers. I connect to your cloud server.
  • c

    crooked-zebra-83774

    11/06/2025, 1:29 PM
    Hi, if we are doing only 1:1 calls, is there any need for or benefit from simulcast/dynacast? Or will any necessary degradation be handled by the WebRTC layer without the overhead?
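For context, this is how we'd test turning both off, assuming livekit-client's `publishDefaults.simulcast` and `dynacast` room options (worth double-checking the exact shape against the docs):

```typescript
// Sketch of room options for a strict 1:1 call with simulcast disabled.
// With a single subscriber, WebRTC congestion control still adapts the
// bitrate of the single encoding; simulcast mainly pays off when different
// subscribers need different layers.
const roomOptions = {
  publishDefaults: {
    simulcast: false, // publish a single encoding; saves encoder CPU
  },
  dynacast: false, // dynacast only helps when layers can be paused per-subscriber
};
```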
  • a

    ancient-garage-36777

    11/06/2025, 4:28 PM
    Hi team, is there a way I can "replay" web recordings with transcripts in the Next.js web kit? It would be useful for annotating/reviewing call scripts.
  • w

    worried-shampoo-64319

    11/06/2025, 7:28 PM
    Hi, I was curious to get any pointers on how to troubleshoot encoding performance for video, specifically screensharing (I suspect this is where my issue is). I'm trying to configure a screenshare at 1080p and 60 fps, but I can't seem to get more than ~25 fps. Based on the `RTCOutboundRtpStreamStats`, no reason is given for `qualityLimitationReason`, yet the stream still runs at only 5-25 fps. In `setScreenShareEnabled` I've passed the desired resolution and framerate to `resolution` in `ScreenShareCaptureOptions`. I've also set the `maxFramerate` and tried multiple bitrates across a wide range in the provided `TrackPublishOptions`, as well as trying H.264, VP8, and VP9, all with similar results. I've been testing this on Windows with Chrome. Thanks!
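For reference, the options I'm passing look roughly like this (field names are from livekit-client's `ScreenShareCaptureOptions` and `TrackPublishOptions` as I understand them; the `contentHint: 'motion'` line is an experiment of mine, since the default hint for screenshare prefers resolution over framerate):

```typescript
// Sketch of the capture + publish options under test. The exact bitrate is
// one of several values tried, not a recommendation.
const captureOptions = {
  resolution: { width: 1920, height: 1080, frameRate: 60 },
  contentHint: "motion" as const, // experiment: bias the encoder toward framerate
};

const publishOptions = {
  videoCodec: "vp9" as const,
  screenShareEncoding: {
    maxBitrate: 5_000_000, // one of the bitrates tried
    maxFramerate: 60,
  },
};
```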
  • b

    bored-motorcycle-22550

    11/10/2025, 6:26 PM
    https://github.com/livekit/client-sdk-flutter/issues/914 This is impacting our usage of VP9 with the LiveKit Flutter SDK. We cannot request a desired resolution for the remote video on simulcast. VP8 works fine, but in VP9, desired resolution is ignored.
  • s

    sparse-flag-66749

    11/11/2025, 4:09 AM
    Hey team 👋 We're running a telemedicine platform with video calling + screensharing, and we've been troubleshooting some quality issues that cropped up recently: jitter and what seems like packet loss during screenshare sessions.

    Context: We made some changes to our LiveKit config a while back (removed explicit simulcast layers, adjusted bitrates) and immediately started getting complaints about screenshare quality: text becoming illegible, compression artifacts, etc. We've since implemented separate publish options for screenshare vs. camera video:
    Copy code
    // Screenshare config
    {
      maxBitrate: 1_500_000, // 1.5 Mbps
      maxFramerate: 15,
      degradationPreference: 'maintain-resolution',
      videoSimulcastLayers: [VideoPresets.h1080, VideoPresets.h720]
    }
    This has helped with the resolution/compression issues, but we're still seeing some jitter and occasional packet loss that affects the experience.

    Question: What's the best way to minimize jitter and packet loss for screenshare specifically? Are there any:
    - Client-side publish settings we should tweak?
    - Server-side config optimizations for self-hosted deployments?
    - Network-related settings (buffer sizes, RED/FEC for video, etc.)?
    - Simulcast layer strategies that work better for screenshare?

    We're using React components from @livekit/components-react and publishing screenshare with useTrackToggle. Currently on the VP9 codec. Any guidance would be really appreciated! Happy to share more details about our setup if needed. cc: @refined-appointment-81829 @echoing-kitchen-90064 @tall-belgium-91876 @best-notebook-41164
  • r

    rapid-ocean-58394

    11/11/2025, 7:32 AM
    Question: I change the metadata of the local participant. It fires `ParticipantMetadataChanged` locally, but not remotely. Then I get this error locally: LiveKitError: Request to update local metadata timed out. In my JWT the following is set:
    Copy code
    {
       ...,
       video: { 
         ...,
         canUpdateOwnMetadata: true,
       },
       metadata: "{ chair: 'v', subscribed: [] }"
    }
    We connect to the LiveKit cloud server. What am I doing wrong?
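For completeness, here is the claim shape I believe the server should be issuing, sketched as a plain object (in livekit-server-sdk these would normally be set via `AccessToken`; the room name here is hypothetical):

```typescript
// Sketch of the intended JWT claims: canUpdateOwnMetadata must sit inside the
// `video` grant, and the participant metadata claim must be a string
// (JSON-encoded here, since my app stores structured data in it).
const videoGrant = {
  roomJoin: true,
  room: "my-room", // hypothetical room name
  canUpdateOwnMetadata: true,
};

const claims = {
  video: videoGrant,
  metadata: JSON.stringify({ chair: "v", subscribed: [] }),
};
```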
  • p

    prehistoric-art-66005

    11/11/2025, 9:41 AM
    Hi, can I rejoin the same LiveKit room after refreshing the browser, using the same token? The agent is self-hosted, and the client SDK is React. Thanks! cc: @refined-appointment-81829 @echoing-kitchen-90064
  • f

    flaky-fish-58284

    11/12/2025, 8:30 AM
    hi team. We are trying to connect 3 participants to 1 room from 2 browsers. Is it possible to connect to the same room from 1 browser with 2 participants, where 1 is active and the other is "on hold"? We need separate audio devices for the 2 participants connecting from the same browser. We tried unsubscribing, we tried setEnabled, we tried setVolume, all without effect: both of the 2 connections from the same browser tab kept playing audio after we called these. Any idea if this is possible?