# server-api
  • t

    tall-terabyte-97980

    06/13/2025, 1:37 AM
    (Quick verbal PR) Also here: https://docs.livekit.io/agents/build/workflows/#context-preservation it suggests that there is a property you can access through
    self.session.chat_ctx
    which does not exist; instead it has
    self.session._chat_ctx
    . What does exist, though, is
    self.chat_ctx
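    For reference, the pattern the page describes works with the property that does exist on the agent. A simplified sketch (ConsentAgent and SurveyAgent are made-up names, and I'm assuming the usual function_tool handoff):
    Copy code
    from livekit.agents import Agent, function_tool

    class SurveyAgent(Agent):
        def __init__(self, chat_ctx=None):
            super().__init__(
                instructions="Run the survey, using the earlier conversation for context.",
                chat_ctx=chat_ctx,  # carry the prior history into the new agent
            )

    class ConsentAgent(Agent):
        def __init__(self):
            super().__init__(instructions="Collect consent, then hand off.")

        @function_tool()
        async def consent_given(self):
            # self.chat_ctx exists on the Agent; self.session.chat_ctx (as the docs show)
            # does not, only self.session._chat_ctx does.
            return SurveyAgent(chat_ctx=self.chat_ctx)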
  • m

    millions-winter-96505

    06/14/2025, 2:49 AM
    I called the "Mute/unmute a Participant's Track" API. It muted the member, but I could still hear the member talking. Why is this?
  • s

    straight-smartphone-65394

    06/15/2025, 1:37 AM
    Hello all, I use LocalParticipant to publish a track to a specific room, and the room is set to autoSubscribe = true. How can I stop/remove this track from the room for all participants? Thank you so much
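    For reference, the publish side looks roughly like this, and I'm assuming unpublish_track is the way to remove it for everyone (a sketch, not tested; the track and source names are placeholders):
    Copy code
    from livekit import rtc

    async def publish_then_remove(room: rtc.Room, source: rtc.AudioSource):
        # publish a local track into the room
        track = rtc.LocalAudioTrack.create_audio_track("my_track", source)
        publication = await room.local_participant.publish_track(
            track, rtc.TrackPublishOptions(source=rtc.TrackSource.SOURCE_MICROPHONE)
        )

        # ...later: unpublishing removes the track, so subscribed participants stop receiving it
        await room.local_participant.unpublish_track(publication.sid)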
    r
    • 2
    • 2
  • t

    thousands-night-79795

    06/16/2025, 9:46 AM
    Can anybody please clarify why the latest published "livekit-api" Python package version is 1.0.2? I see GitHub already has v1.0.9.
    r
    • 2
    • 1
  • p

    proud-boots-10188

    06/16/2025, 6:47 PM
    Hi! We're working on a voice agent with an Android client. The solution requires interrupting the agent via VAD on the edge (on the client side, not the backend). We'd like to assign an ID to each audio frame we send to the client, so the client can track which frames were played or skipped and inform the backend to keep the dialog state consistent. Do you have an idea how we could use WebRTC headers in the audio stream to send the ID of each audio frame?
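    In case the SDKs don't expose RTP header extensions, a fallback we're considering is sending the frame IDs out-of-band on a data topic next to the audio (a rough sketch; the topic name and payload shape are ours):
    Copy code
    import json
    from livekit import rtc

    async def announce_frame(room: rtc.Room, frame_id: int, timestamp_ms: int):
        # The Android client correlates these IDs with the audio it actually plays
        # and reports played/skipped frames back so the backend can keep dialog state consistent.
        payload = json.dumps({"frame_id": frame_id, "ts_ms": timestamp_ms}).encode()
        await room.local_participant.publish_data(payload, reliable=True, topic="audio-frame-ids")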
    • 1
    • 1
  • f

    fast-plastic-44892

    06/18/2025, 12:38 PM
    Hello! SIPDispatchRuleInfo includes the information about the room_config. This works fine when creating a room. Is there any way to update this SDR? Looking at SIPDispatchRuleUpdate, it does not include any way to edit the room configuration. https://docs.livekit.io/sip/api/#sipdispatchruleupdate I've only been able to change the room configuration by deleting the SDR and recreating it. In the UI there is a way to edit it through the JSON Editor, which looks like this:
    sip-dispatch-rule-info.json:
    {
      "sipDispatchRuleId": "SDR_XXXXXX",
      "rule": { "dispatchRuleIndividual": { "roomPrefix": "call-" } },
      "trunkIds": ["ST_XXXXX"],
      "name": "NAME",
      "roomConfig": { "agents": [{ "agentName": "test-agent" }] }
    }
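    The delete-and-recreate workaround, scripted out, looks roughly like this (a sketch with the Python server SDK; I'm assuming these request types are exposed under livekit.api, and the IDs/names are placeholders):
    Copy code
    from livekit import api

    async def recreate_dispatch_rule_with_agent():
        async with api.LiveKitAPI() as lkapi:  # reads LIVEKIT_URL / API key / secret from env
            # delete the existing rule (placeholder ID)
            await lkapi.sip.delete_sip_dispatch_rule(
                api.DeleteSIPDispatchRuleRequest(sip_dispatch_rule_id="SDR_XXXXXX")
            )
            # recreate it with the desired room configuration
            await lkapi.sip.create_sip_dispatch_rule(
                api.CreateSIPDispatchRuleRequest(
                    name="NAME",
                    trunk_ids=["ST_XXXXX"],
                    rule=api.SIPDispatchRule(
                        dispatch_rule_individual=api.SIPDispatchRuleIndividual(room_prefix="call-")
                    ),
                    room_config=api.RoomConfiguration(
                        agents=[api.RoomAgentDispatch(agent_name="test-agent")]
                    ),
                )
            )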
    • 1
    • 1
  • l

    late-businessperson-58140

    06/18/2025, 3:05 PM
    Hello everyone, I've successfully integrated LiveKit with Tavus, but I'm facing an issue where the video doesn't appear sometimes. I'm currently investigating this, but it still occurs intermittently. I have a client demo coming up and need to present my Tavus agent. Could anyone please help me troubleshoot or guide me on how to resolve this issue?
  • h

    hallowed-bear-57167

    06/19/2025, 11:36 AM
    Hello, I have integrated LiveKit with a custom TTS. The TTS returns raw audio chunks, and I notice a slight echo, usually on the last chunk. I think this may be a problem with the flush. When I save the audio clips locally and play them, I do not notice the same echo. Additionally, the echo does not appear when I am working outside of LiveKit (handling the audio response myself).
    p
    • 2
    • 2
  • m

    many-helmet-9770

    06/20/2025, 11:32 AM
    Hi folks! Getting this error "sha256 checksum of body does not match" while trying to parse a received webhook's content. Here is my webhook:
    Copy code
    const receiver = new WebhookReceiver(process.env.LIVEKIT_API_KEY!, process.env.LIVEKIT_API_SECRET!);
    
    // Middleware to handle 'application/webhook+json' content type
    const webhookJsonParser = express.json({
      type: ['application/json', 'application/webhook+json']
    });
    
    router.post('/lk-webhook', webhookJsonParser, async (req, res) => {
        try {
            console.log('Received webhook with content type:', req.get('content-type'));
            console.log('Authorization:', req.get('Authorization'));
    
            const event = await receiver.receive(req.body, req.get('Authorization'));
            console.log('Webhook event:', event);
            res.status(200).json({ message: 'ok' });
        } catch (ex) {
            console.error('Error processing webhook:', ex);
            res.status(500).json({
                message: 'Failed to handle webhook',
                error: ex instanceof Error ? ex.message : 'Unknown error'
            });
        }
    });
    I triple-checked my API key; the one in my env and the one I used when creating the webhook are the same.
    • 1
    • 1
  • l

    limited-address-91743

    06/20/2025, 3:45 PM
    Hello, I am trying to use Krisp on Flutter web, but when the application requests information from the LiveKit server API at /settings, it returns 404 page not found. I tried to find something in the server API documentation but found nothing. Any suggestions on how to fix this?
  • p

    proud-boots-10188

    06/27/2025, 12:23 AM
    Hi guys! We love what you do and are considering migrating to your stack. But our Android client requires a WebSocket connection and does not work well with WebRTC. Are there any plans to support this like pipecat does: https://github.com/pipecat-ai/pipecat/tree/main/examples/websocket/client https://github.com/pipecat-ai/pipecat/issues/1749#issuecomment-2854893512
    r
    • 2
    • 1
  • m

    miniature-spring-29693

    06/27/2025, 5:29 PM
    Hello there, I'm creating a LiveKit room using the server SDK and setting the departure timeout to 180 seconds. I am doing this after ensuring the room does not exist (I call room delete with a 5s timeout before calling this). Strangely, when I check the room properties with the
    lk room list
    CLI command, I see that the departure timeout is 20 seconds. I tried this with different names and I keep getting 20 seconds for the departure timeout.
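    For reference, the creation call looks roughly like this (simplified; the room name is a placeholder):
    Copy code
    from livekit import api

    async def create_room_with_timeouts():
        async with api.LiveKitAPI() as lkapi:  # reads LIVEKIT_URL / API key / secret from env
            room = await lkapi.room.create_room(
                api.CreateRoomRequest(
                    name="my-room",
                    departure_timeout=180,  # seconds to keep the room after a participant departs
                )
            )
            print(room.departure_timeout)  # expecting 180, but lk room list shows 20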
    ➕ 1
    👀 1
    t
    • 2
    • 1
  • m

    miniature-spring-29693

    06/27/2025, 5:29 PM
    image.png
    ➕ 1
  • b

    boundless-branch-10227

    06/29/2025, 10:02 PM
    I'm having the same issue ^ Anyone know what's up with this?
    t
    • 2
    • 1
  • b

    busy-fish-13948

    06/30/2025, 12:12 PM
    Any Django devs here? Could you chime in on this async-to-sync implementation of the livekit-api functions, please?
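    Concretely, I mean wrapping the async livekit-api calls for use from a sync Django view, roughly like this (a sketch using asgiref's async_to_sync; the room name is a placeholder):
    Copy code
    from asgiref.sync import async_to_sync
    from django.http import JsonResponse
    from livekit import api

    async def _list_participants(room_name: str):
        # livekit-api is async; open and close the client inside the coroutine
        async with api.LiveKitAPI() as lkapi:
            res = await lkapi.room.list_participants(
                api.ListParticipantsRequest(room=room_name)
            )
        return [p.identity for p in res.participants]

    def participants_view(request):
        # async_to_sync runs the coroutine from Django's sync request cycle
        identities = async_to_sync(_list_participants)("my-room")
        return JsonResponse({"participants": identities})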
  • p

    purple-leather-32430

    07/01/2025, 10:13 AM
    Hey guys, a couple of questions: 1. How can I know via webhook whether the call/meeting was inbound or outbound? 2. How can I know the call/meeting end reason? I could not find it in the docs, but I received on the egress_ended event detail: "End reason: Source ended". What are the other possible options? Thanks!
    t
    • 2
    • 1
  • l

    lemon-elephant-62047

    07/04/2025, 7:21 PM
    Hi devs! I'm using LiveKit Cloud and want to prevent participants from joining non-existent rooms. My goal: • Host clicks "Create Room" → room is created. • Participant clicks "Join Room" → only joins if the room exists. • Prevent auto room creation when
    roomCreate: false
    . Issue: Even with
    roomCreate: false
    , participants still auto-create rooms on join. Also, there's no way to check if a room already exists since
    RoomServiceClient
    isn’t available on Cloud. Right now I’m tracking room names manually on the backend. Is there a better way to handle this on LiveKit Cloud? Thanks!
    l
    • 2
    • 1
  • n

    numerous-whale-53652

    07/06/2025, 6:01 PM
    How can I use user_away_timeout? I get: No parameter named "user_away_timeout" (basedpyright, reportCallIssue), and the hover shows (function) user_away_timeout: Unknown
    l
    t
    • 3
    • 7
  • b

    breezy-action-29677

    07/09/2025, 4:16 PM
    Hello, is it possible to start an outbound call from Twilio (so without LiveKit), and once the user replies, make that call join an existing LiveKit room? The room to join has a random component to it. We know the room name before starting the Twilio call.
    t
    r
    • 3
    • 2
  • f

    few-analyst-46

    07/10/2025, 4:04 AM
    Hello. I'm using the Python server SDK; how can I get the remote participant's IP address?
    d
    • 2
    • 4
  • f

    full-hydrogen-48173

    07/11/2025, 3:06 AM
    Hey guys, I want to know how to get information about dispatched SIP calls using the Python server SDK. I want to get this information into a Python object (a dictionary or a Pydantic class). Has anybody done it?
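    What I have in mind is roughly this (a sketch; I'm assuming the SIP details are exposed as participant attributes with sip.* keys, which should be verified against the current docs):
    Copy code
    from dataclasses import dataclass
    from livekit import api

    @dataclass
    class SipCallInfo:
        identity: str
        call_id: str | None
        phone_number: str | None
        trunk_phone_number: str | None

    async def get_sip_calls(room_name: str) -> list[SipCallInfo]:
        async with api.LiveKitAPI() as lkapi:  # reads LIVEKIT_URL / API key / secret from env
            res = await lkapi.room.list_participants(
                api.ListParticipantsRequest(room=room_name)
            )
        calls = []
        for p in res.participants:
            attrs = dict(p.attributes)  # SIP participants carry sip.* attributes
            if any(k.startswith("sip.") for k in attrs):
                calls.append(SipCallInfo(
                    identity=p.identity,
                    call_id=attrs.get("sip.callID"),
                    phone_number=attrs.get("sip.phoneNumber"),
                    trunk_phone_number=attrs.get("sip.trunkPhoneNumber"),
                ))
        return calls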
  • b

    bulky-fish-79784

    07/12/2025, 10:00 PM
    Is there any boilerplate or GitHub repo with a FastAPI/LiveKit backend and React as the frontend?
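    The kind of thing I'm after is roughly a token endpoint like this on the FastAPI side, with the React app calling it before connecting (a sketch; the env var names are mine):
    Copy code
    import os
    from fastapi import FastAPI
    from livekit import api

    app = FastAPI()

    @app.get("/token")
    def get_token(room: str, identity: str):
        # Mint a join token for the React client; it then connects with the LiveKit JS SDK.
        token = (
            api.AccessToken(os.environ["LIVEKIT_API_KEY"], os.environ["LIVEKIT_API_SECRET"])
            .with_identity(identity)
            .with_grants(api.VideoGrants(room_join=True, room=room))
            .to_jwt()
        )
        return {"url": os.environ["LIVEKIT_URL"], "token": token}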
  • c

    chilly-camera-39410

    07/13/2025, 5:09 AM
    Hello, I noticed an error: when publishing a video track, my Python client crashes immediately and prints
    Copy code
    [bad_optional_access.cc : 39] RAW: Bad optional access
    [1]    93326 abort      python mre.py
    And here is also a minimal reproduction example
    Copy code
    import livekit
    import livekit.api
    import livekit.rtc
    import google.genai
    import google.genai.live
    import asyncio
    import numpy
    import cv2
    import base64
    import av
    import time
    import json
    import os
    import websockets_proxy
    
    new_loop = asyncio.new_event_loop()
    asyncio.set_event_loop(new_loop)
    
    async def getLiveKitAPI():
        return livekit.api.LiveKitAPI("https://www.xiaokang00010.top:6212", "token_here", "token_here")
    
    userToken = livekit.api.AccessToken(
            "token_here", "token_here").with_identity(
            'user').with_name("Jerry Chou").with_grants(livekit.api.VideoGrants(room_join=True, room="testroom")).to_jwt()
    
    botToken = livekit.api.AccessToken(
            "token_here", "token_here").with_identity(
            'model').with_name("Awwa").with_grants(livekit.api.VideoGrants(room_join=True, room="testroom")).to_jwt()
    
        # livekit api is in this file, so we can't put this logic into createRtSession
    async def f():
        await (await getLiveKitAPI()).room.create_room(create=livekit.api.CreateRoomRequest(name="testroom", empty_timeout=10*60, max_participants=2))
    
    asyncio.get_event_loop().run_until_complete(f())
    
    print("User token: ", userToken)
    print("Bot token: ", botToken)
    
    class MRE:
        def __init__(self, name = "Gemini"):
            self.name = name
            
        
        async def chatRealtime(self):
            buffer = ''
            while True:
                async for response in self.llmSession.receive():
                    if response.text is None:
                        # a turn is finished
                        break
                    print(f"Recved {len(response.text)}")
                    buffer += response.text
                print("End of turn ", buffer, self.llmSession._ws.close_code, self.llmSession._ws.close_reason)
                buffer = ''
        
            
        async def start(self, loop = new_loop):
            if os.getenv("HTTP_PROXY"):
                proxy = websockets_proxy.Proxy.from_url(os.getenv("HTTP_PROXY"))
                def fake_connect(*args, **kwargs):
                    return websockets_proxy.proxy_connect(*args, proxy=proxy, **kwargs)
                google.genai.live.connect = fake_connect
                
            print("Preparing for launch...")
            client = google.genai.Client(http_options={'api_version': 'v1alpha'})
            model_id = "gemini-2.0-flash-exp"
            config = {"response_modalities": ["TEXT"]}
            self.llmPreSession = client.aio.live.connect(model=model_id, config=config)
            self.llmSession: google.genai.live.AsyncSession = await self.llmPreSession.__aenter__()
            self.chatRoom = livekit.rtc.Room(loop)
            
            asyncio.ensure_future(self.chatRealtime())
            
            @self.chatRoom.on("track_subscribed")
            def on_track_subscribed(track: livekit.rtc.Track, publication: livekit.rtc.RemoteTrackPublication, participant: livekit.rtc.RemoteParticipant):
                print(f"track subscribed: {publication.sid}")
                if track.kind == livekit.rtc.TrackKind.KIND_VIDEO:
                    print('running video stream...')
                    asyncio.ensure_future(self.receiveVideoStream(
                        livekit.rtc.VideoStream(track)))
                elif track.kind == livekit.rtc.TrackKind.KIND_AUDIO:
                    print('running voice activity detection...')
                    asyncio.ensure_future(
                        self.forwardAudioStream(livekit.rtc.AudioStream(track), publication.mime_type))
                    
            @self.chatRoom.on("track_unsubscribed")
            def on_track_unsubscribed(track: livekit.rtc.Track, publication: livekit.rtc.RemoteTrackPublication, participant: livekit.rtc.RemoteParticipant):
                print(f"track unsubscribed: {publication.sid}")
    
            @self.chatRoom.on("participant_connected")
            def on_participant_connected(participant: livekit.rtc.RemoteParticipant):
                print(f"participant connected: {
                    participant.identity} {participant.sid}")
    
            @self.chatRoom.on("participant_disconnected")
            def on_participant_disconnected(participant: livekit.rtc.RemoteParticipant):
                print(f"participant disconnected: {participant.sid} {participant.identity}")
    
                self.terminateSession()
    
            @self.chatRoom.on("connected")
            def on_connected() -> None:
                print("connected")
                    
            
            print("Connecting to LiveKit...")
            await self.chatRoom.connect("wss://www.xiaokang00010.top:6212", botToken)
            print("Connected to LiveKit.")
            # publish track
            # audio
            # audioSource = livekit.rtc.AudioSource(
            #     48000, 1)
            # self.broadcastAudioTrack = livekit.rtc.LocalAudioTrack.create_audio_track(
            #     "stream_track", audioSource)
            # publication_audio = await self.chatRoom.local_participant.publish_track(
            #     self.broadcastAudioTrack, livekit.rtc.TrackPublishOptions(source=livekit.rtc.TrackSource.SOURCE_MICROPHONE, red=False))
            # import threading
            # self.audioBroadcastingThread = threading.Thread(
            #     target=self.runBroadcastingLoop, args=(audioSource,))
            # self.audioBroadcastingThread.start()
            
            video_source = livekit.rtc.VideoSource(640, 480)
            self.broadcastVideoTrack = livekit.rtc.LocalVideoTrack.create_video_track(
                "stream_track", video_source)
            publication_video = await self.chatRoom.local_participant.publish_track(
                self.broadcastVideoTrack, livekit.rtc.TrackPublishOptions(source=livekit.rtc.TrackSource.SOURCE_CAMERA, red=False))
            print("Published video track.")
                    
            print("Waiting for participants to join...")
            while True:
                if self.llmSession:
                    print("Test", self.llmSession._ws.close_code, self.llmSession._ws.close_reason)
                await asyncio.sleep(1)
    
    
        def runBroadcastingLoop(self, audioSource) -> None:
            """
            Start the loop for broadcasting missions.
    
            Returns:
                None
            """
            print('starting broadcasting loop')
            new_loop = asyncio.new_event_loop()
            new_loop.run_until_complete(self.broadcastAudioLoop(audioSource))
    
    
        def generateEmptyAudioFrame(self) -> livekit.rtc.AudioFrame:
            """
            Generate an empty audio frame.
    
            Returns:
                livekit.rtc.AudioFrame: empty audio frame
            """
            amplitude = 32767  # for 16-bit audio
            samples_per_channel = 480  # 10ms at 48kHz
            time = numpy.arange(samples_per_channel) / \
                48000
            total_samples = 0
            audio_frame = livekit.rtc.AudioFrame.create(
                48000, 1, samples_per_channel)
            audio_data = numpy.frombuffer(audio_frame.data, dtype=numpy.int16)
            time = (total_samples + numpy.arange(samples_per_channel)) / \
                48000
            wave = numpy.int16(0)
            numpy.copyto(audio_data, wave)
            # logger.Logger.log('done1')
            return audio_frame
    
            
        async def receiveVideoStream(self, stream: livekit.rtc.VideoStream):
            async for frame in stream:
                img = frame.frame.convert(
                    livekit.rtc.VideoBufferType.BGRA).data.tobytes()
                img_np = numpy.frombuffer(img, dtype=numpy.uint8).reshape(
                    frame.frame.height,
                    frame.frame.width,
                    4
                )
                # convert to jpeg
                # resize the image so as to save the token
                scaler = frame.frame.width / 1280
                new_width, new_height = (int(
                    frame.frame.width // scaler), int(frame.frame.height // scaler))
                cv2.resize(img_np, (new_width, new_height))
    
                encoded, buffer = cv2.imencode('.jpg', img_np)
                
                await self.llmSession.send({"data": base64.b64encode(buffer.tobytes()).decode(), "mime_type": "image/jpeg"})
                
            
        async def forwardAudioStream(self, stream: livekit.rtc.AudioStream, mime_type: str):
            frames = 0
            last_sec = time.time()
            last_sec_frames = 0
            limit_to_send = 100
            data_chunk = b''
            async for frame in stream:
                last_sec_frames += 1
                frames += 1
                avFrame = av.AudioFrame.from_ndarray(numpy.frombuffer(frame.frame.remix_and_resample(16000, 1).data, dtype=numpy.int16).reshape(frame.frame.num_channels, -1), layout='mono', format='s16')
                data_chunk += avFrame.to_ndarray().tobytes()
                if frames % limit_to_send == 0:
                    await self.llmSession.send({"data": data_chunk, "mime_type": "audio/pcm"})
                
                    data_chunk = b''
                        
                if time.time() - last_sec > 1:
                    last_sec = time.time()
                    print(f"forwardAudioStream: last second: {last_sec_frames} frames, num_channels: {frame.frame.num_channels}, sample_rate: {frame.frame.sample_rate}, limit_to_send: {limit_to_send}")
                    last_sec_frames = 0
                    
                    
        async def broadcastAudioLoop(self, source: livekit.rtc.AudioSource, frequency: int = 1000):
            print('broadcasting audio...')
            
            while True:
                await source.capture_frame(self.generateEmptyAudioFrame())
                
        
    mre = MRE()
    
    asyncio.ensure_future(mre.start(new_loop))
    
    new_loop.run_forever()
    My livekit version and OS information
    Copy code
    ❯ pip freeze | grep livekit
    livekit==1.0.11
    livekit-api==1.0.3
    livekit-protocol==1.0.4
    ❯ uname -a
    Darwin Yoimiyas-MacBook-Air.local 25.0.0 Darwin Kernel Version 25.0.0: Mon Jun 30 22:07:46 PDT 2025; root:xnu-12377.0.132.0.2~69/RELEASE_ARM64_T8103 arm64
    Since I could not find any helpful suggestions in the official GitHub repo issues and PRs, I first thought it was a misconfiguration in my own code. However, it reoccurs consistently on both my Linux server and my M1 MacBook Air, so I think it might be a bug.
    • 1
    • 1
  • s

    swift-fireman-54201

    07/14/2025, 10:26 AM
    How do I paginate participants in Node.js? const res = await roomService.listParticipants(roomName) does not paginate the participants.
    r
    • 2
    • 1
  • l

    little-activity-13208

    07/15/2025, 3:41 PM
    Hey guys, I'm having an issue with room metadata updates and webhooks. The code below updates the room metadata, but the webhook event I receive still has the old metadata:
    Copy code
    await ctx.api.room.update_room_metadata(
        api.UpdateRoomMetadataRequest(
            room=ctx.room.name,
            metadata=json.dumps(updated_metadata)
        )
    )

    # await fetch_until_success(lambda room_info: room_info.metadata == json.dumps(updated_metadata))
    print("waiting")
    await asyncio.sleep(5)
    print("done waiting")

    await ctx.api.room.delete_room(
        api.DeleteRoomRequest(
            room=ctx.room.name,
        )
    )
    Fetching the room from the API directly, however, does show the updated metadata; the code below confirms the two metadata values are identical:
    Copy code
    async def fetch_room_info():
        """Fetch room information from the LiveKit server"""
        ctx = get_job_context()
        room_info = await ctx.api.room.list_rooms(
            api.ListRoomsRequest(names=[ctx.room.name])
        )
        return room_info.rooms[0]
    
    async def fetch_until_success(end_condition: Callable[[Any], bool], max_retries: int = 10):
        """Fetch room information from the LiveKit server until the end condition is met"""
        for _ in range(max_retries):
            room_info = await fetch_room_info()
            print(f"Room info: {room_info}")
            if end_condition(room_info):
                print(f"End condition met: {room_info}")
                return room_info, True
            await asyncio.sleep(1)
        return None, False
    Any help would be appreciated, thanks!
  • f

    fierce-zoo-69908

    07/15/2025, 3:42 PM
    Scenario: an inbound call is coming in. I want to dispatch an agent based on the phone number, with context-specific metadata. Can I use server APIs to add an agent participant based on some custom logic from a webhook? https://docs.livekit.io/home/server/webhooks/
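    Something like this is what I have in mind, triggered from the webhook handler (a sketch with the Python server SDK, assuming the agent is registered for explicit dispatch; the agent name and metadata are placeholders):
    Copy code
    import json
    from livekit import api

    async def dispatch_agent_for_call(room_name: str, caller_number: str):
        # Called from our webhook handler once we know the caller's number.
        async with api.LiveKitAPI() as lkapi:  # reads LIVEKIT_URL / API key / secret from env
            await lkapi.agent_dispatch.create_dispatch(
                api.CreateAgentDispatchRequest(
                    agent_name="inbound-agent",  # placeholder: agent using explicit dispatch
                    room=room_name,
                    metadata=json.dumps({"caller": caller_number}),
                )
            )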
    • 1
    • 2
  • w

    wide-tailor-66933

    07/16/2025, 5:24 AM
    Copy code
    async def yeah():
        async with api.LiveKitAPI(url=cfg.LIVEKIT_URL, api_key=cfg.LIVEKIT_API_KEY, api_secret=cfg.LIVEKIT_API_SECRET) as lkapi:
            rooms_response = await lkapi.room.list_rooms(api.ListRoomsRequest())
        container = {}
        for room in rooms_response.rooms:
            container[room.name] = room.num_participants
            print(room.name, room.num_participants)
        return container
    Why does the number of participants fluctuate, usually changing from 0 to 2 or 2 to 0? Any reason why? My application relies on this to check for empty rooms, and it turns out the API response fluctuates a lot. From my simple observation, the first 10 rooms always return nonzero and the rest are zero. If I query with names,
    api.ListRoomsRequest(names=["room-123"])
    , this one never fluctuates
  • h

    hundreds-dawn-90573

    07/16/2025, 6:03 AM
    Hey everyone, I want to know if it is possible to stream the audio from a WhatsApp call to a LiveKit room and back to WhatsApp. WhatsApp's calling API works with WebRTC: they send us an
    sdp_offer
    and we can create an SDP answer and send them a request to accept the call. I just wanted to know if it is possible to create a
    webrtc
    connection that can accept the
    sdp_offer
    from WhatsApp and stream audio back and forth between the caller and the agent (LiveKit agent). I don't have much experience with WebRTC, so I'm just asking if anyone has done something like this before. Thanks.
  • m

    many-helmet-9770

    07/16/2025, 6:58 AM
    Hi everyone! We're doing some integrations between telephony systems and LiveKit. We receive audio in Opus format from the telephony systems, but while passing it to LiveKit we have to decode it to raw PCM, and on the way back we again receive raw PCM from LiveKit and encode it to Opus to send to the telephony systems. The problem is that the encoding and decoding operations take a lot of resources. Is there any possibility to pass the audio to LiveKit in Opus format? I see LiveKit uses WebRTC to transfer audio from the client to its server, and in WebRTC the codec is negotiated in the SDP; so far I believe LiveKit specifies raw PCM in that negotiation. Is there any possibility to use Opus in the negotiation and transfer audio in Opus over WebRTC? If this is not available yet, we're ready to dedicate our time to contributing if guidance is provided, since this is crucial for us. Also, a curious question: why is audio transferred as raw PCM over WebRTC? Wouldn't it be more efficient to transfer audio in a compressed format like Opus? Thank you!
  • c

    cuddly-eye-42352

    07/17/2025, 10:36 AM
    Hello, we are working with LiveKit and I'm facing some issues with participant permissions. Can I force a participant to listen only to a specific data channel topic, or at least disable their permission to subscribe to data channels? Something like
    canSubscribeData
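    What I've found so far is the server-side participant permission update (a sketch with the Python server SDK; I'm not sure a per-topic subscribe permission exists, so this only toggles data publishing, and the identity is a placeholder):
    Copy code
    from livekit import api

    async def restrict_data(room_name: str, identity: str):
        async with api.LiveKitAPI() as lkapi:  # reads LIVEKIT_URL / API key / secret from env
            await lkapi.room.update_participant(
                api.UpdateParticipantRequest(
                    room=room_name,
                    identity=identity,
                    permission=api.ParticipantPermission(
                        can_subscribe=True,
                        can_publish=True,
                        can_publish_data=False,  # stops them sending data; topic filtering would still be app-side
                    ),
                )
            )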