# docs-feedback
  • d

    dry-musician-70452

    09/23/2025, 11:06 PM
    set the channel description: Use this channel to report bugs and offer feedback for LiveKit's documentation resources. Reports will be investigated by the LiveKit Developer Products team.
  • d

    dazzling-apartment-68328

    09/24/2025, 5:18 PM
    Hi LiveKit Agent team! I used the "Voice AI quickstart" a few days ago and I have some feedback on that experience and guide.
    + Great that LK has put this together
    + Cool that some things are generated for my project with my LK variables in them: env keys, iOS Xcode project, etc.
    + Starter projects exist, like iOS
    Room for improvements:
    • I started out trying the Node.js guide, but too many things didn't work out at the end and I didn't get it working, so I switched to the Python setup flow, which worked much better. It seems more "matured". (I know the Node.js one is newer.)
    • Example in the Node.js flow that cost me a fair amount of time: in the guide at Agent Code there is an example of a "node.js" main agent file, but I think all the other example code looks for agent.js? A seasoned developer might handle that, but a semi-dev or a light dev can easily get stuck there.
    • (Added edit:) Please verify whether the files should be .ts or .js; I think I ran into some issues there, either in the quickstart example code or in the starter-node repo.
    I switched to the Python flow:
    • I did not know which of the agent.py files in the starter package or the quickstart guide was the best/newest. Ideally they should be the same/synced, I think.
    • Env vars: the quick guide does not mention the concept of LiveKit secrets. I was only familiar with env vars before this project. Important, I think: the Deploy section at the end should include a step about setting the LK secrets, otherwise the app won't work at "lk agent deploy", and then a non-super-dev might lose interest in the whole exploration. It's a tricky issue, since the agent will work in dev and start, but fail in deploy.
    • Env vars: it seems all env vars are required, even though I do not use all of them due to the realtime path, Cartesia for example. Things break if I use an empty var value, but work if I enter a random string such as "ABCD".
    • The last chapter in the guide, "Deploy to LiveKit Cloud", only mentions "lk agent create", which I should only do the first time; later deploys should be "lk agent deploy"? I suggest that is communicated there. Perhaps a small deploy ritual: run a check on your .env.local to look for syntax errors, update secrets, and if new env variables were added, a secrets override is needed (it took me a while to realize that).
    I understand LiveKit has 1000 things to tackle, but it seems LK has bet the farm on AI live voice, and this type of hello-world wizard is a great initiative that can benefit from some curling-level sandpapering. I recommend user testing by sitting next to a semi-dev trying to walk through this quickstart.
    I now have a working realtime chat assistant that saves some parts of the conversation to my Supabase table and can edit it when I tell it to. Nice! Keep up the good work, LK
    🙏🏼 1
    🙏 4
    👀 1
  • f

    flaky-scooter-85113

    09/26/2025, 12:19 PM
    The Agent Warm Transfer page is ... hard to understand. After a good amount of time torturing ask-ai and deep-wiki, I settled on something like the following steps -
    - announce transfer request
    - mute the customer audio (background music?)
    - generate access-token to enable a transfer-agent to join the transfer-room
    - create transfer-room
    - sip-call the recipient, and move their sip-participant into the transfer-room
      - if it fails
        - reenable customer audio
        - close transfer-room
        - announce failure
    - create transfer-agent
        - instructions
          - provide a summary of the customer conversation, confirm or fail the transfer following discussion with the sip-participant
        - tools
          - transfer-confirmed 
            - move the sip-participant into the original customer room
            - reenable customer audio
            - close transfer-room and transfer-agent
            - remove the original agent
          - transfer-failed
            - announce failure to customer
            - reenable customer audio
            - announce transfer failure
            - close transfer-room and transfer-agent
    - create transfer-session, and start it with the transfer-agent and transfer-room
    I think this is pretty much there, but I suspect it's not 100% complete. Note: I've used my own terms here • the transfer-agent is the agent organising the transfer ("supervisor agent" in your terms). Posted because that Warm Transfer page needs some reworking. Ideally a JS example too. [Updated with minor formatting adjustment]
    🙏🏼 1
    👀 1
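For reference, the step order above (including the failure path) can be sketched as plain asyncio orchestration. All helper names here are hypothetical placeholders, not LiveKit APIs; in a real implementation each would call the LiveKit server APIs (room service, SIP service, agent dispatch). The sketch only captures the control flow described in the message.

```python
import asyncio

# Hypothetical placeholders standing in for real LiveKit operations.
async def announce(room: str, message: str) -> None:
    print(f"[{room}] {message}")

async def set_customer_audio(room: str, enabled: bool) -> None:
    pass  # mute/unmute the customer, possibly playing hold music

async def create_room(name: str) -> str:
    return name

async def delete_room(name: str) -> None:
    pass

async def sip_call_into_room(recipient: str, room: str) -> bool:
    # True if the recipient's SIP participant joined the transfer room.
    return recipient != "unreachable"

async def warm_transfer(customer_room: str, recipient: str) -> bool:
    """Control flow from the steps above: announce, hold, dial, then
    either brief the recipient in the transfer room or roll back."""
    await announce(customer_room, "Transferring you now, please hold.")
    await set_customer_audio(customer_room, enabled=False)
    transfer_room = await create_room(f"{customer_room}-transfer")
    if not await sip_call_into_room(recipient, transfer_room):
        # Failure path: restore the customer and clean up.
        await set_customer_audio(customer_room, enabled=True)
        await delete_room(transfer_room)
        await announce(customer_room, "Sorry, the transfer failed.")
        return False
    # Here the transfer-agent ("supervisor agent") would summarise the call;
    # on transfer-confirmed it moves the SIP participant into customer_room,
    # unmutes the customer, closes the transfer room, and removes the old agent.
    await set_customer_audio(customer_room, enabled=True)
    await delete_room(transfer_room)
    return True
```

The rollback branch is the part the docs page glosses over; keeping it in one function makes the cleanup obligations explicit.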
  • b

    bored-finland-56176

    09/30/2025, 6:45 AM
    Hi LiveKit team, I've been setting up SIP integration and wanted to share some feedback on terminology that initially confused me. The term "trunk" is used for both inbound and outbound configurations, but they work quite differently: Inbound trunk - essentially a configuration that tells LiveKit which numbers to expect calls from. My external provider trunk forwards calls to LiveKit's SIP URI. Outbound trunk - an actual SIP trunk connection where LiveKit actively initiates calls through the provider. When I first set this up, I thought I needed to somehow "connect two trunks" for inbound calls. It took some trial and error to understand that the inbound "trunk" is really just configuration metadata, not a trunk in the traditional SIP sense. A few ideas that might help: • Consider different terminology for inbound vs outbound (maybe "Phone Number Configuration" vs "SIP Trunk")? • Alternatively, make the direction more prominent in the UI with visual indicators • Perhaps add a brief tooltip explaining the distinction Just a suggestion for consideration - the product itself works excellently once you understand the setup flow. Sharing this in case other users have similar initial confusion. Thanks for the great work on LiveKit!
    👀 1
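For reference, the inbound "trunk" described above is indeed just a configuration object listing the numbers to accept. A sketch of the shape shown in the SIP docs (field names should be checked against the current CreateSIPInboundTrunk reference, as they may differ):

```json
{
  "trunk": {
    "name": "Inbound trunk for my number",
    "numbers": ["+15105550100"]
  }
}
```

This is passed to something like `lk sip inbound create inbound-trunk.json`, whereas an outbound trunk additionally needs the provider's `address` and auth credentials, since LiveKit originates the call — which is exactly the asymmetry the message points out.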
  • t

    tall-lamp-50063

    09/30/2025, 12:53 PM
    I just wanted to say thanks for the DeepWiki. #C088ZNU7QQ5 recommended it to me when it couldn’t answer a question, and it’s become my go-to place whenever I have a LiveKit question. I find it super helpful.
    🎉 4
  • n

    nice-vr-96469

    10/02/2025, 5:01 PM
    The livekit_composite repo has a typo in the worst place: my agents consistently try to find a
    knowledge_guidance.md
    file that doesn't exist. I created a PR to fix this, just needs to be approved: https://github.com/livekit/livekit_composite/pull/54
    👀 1
  • d

    dazzling-apartment-68328

    10/06/2025, 12:56 PM
    Hey LK, suggestion of the day: if LK thinks the technology (MCP) works well enough, maybe show users in a simple way how to connect their IDE (Cursor, VS Code, etc.) to the LiveKit docs properly, and/or to a LiveKit MCP? Suggested placement: first page of the LiveKit docs: "You can access the LK docs in various ways: this webpage, a link to the knowledge base from inside your IDE, or a connection to our MCP." In Cursor, for example, the user goes to Settings → Docs, or Settings → MCP
    👍 1
    👀 1
  • q

    quick-lizard-90069

    10/07/2025, 9:31 AM
    Hey, could someone guide me to an example of a custom LLM node in TypeScript? I'm looking to integrate the AI SDK, but I don't see any documentation on the stream that should be returned by the node. Any guidance or pointers towards the right resources would be helpful. What I already looked at: the LiveKit nodes documentation, LiveKit code examples, the LiveKit agent LLM implementation, and the LiveKit Google plugin.
    👀 1
  • w

    wooden-pilot-72063

    10/09/2025, 7:21 AM
    Hi team, I spent a lot of time trying to figure out how to listen for chat conversation data in SwiftUI (user transcript and agent speech transcript). It would be great to have an example of this, or if someone could point me to the API reference on how to do it.
    👀 1
  • r

    rapid-noon-12288

    10/12/2025, 4:08 PM
    Hi, the Telephony docs are missing info on how to delete the room after the call is hung up. When the user hangs up, the room stays active and redials lead to weird behavior.
    import asyncio
    import aiohttp
    from functools import wraps
    from livekit import agents, api, rtc
    
    async def entrypoint(ctx: agents.JobContext):
        # ... create and start the AgentSession here as `session` ...
    
        @ctx.room.on("participant_attributes_changed")
        @asyncio_create_task
        async def participant_attributes_changed(
            changed_attributes: dict, participant: rtc.Participant
        ):
            if changed_attributes.get("sip.callStatus") == "hangup":
                session.shutdown()
                ctx.shutdown()
                try:
                    await ctx.api.room.delete_room(
                        api.DeleteRoomRequest(room=ctx.room.name)
                    )
                except aiohttp.ServerDisconnectedError:
                    pass
    
    
    def asyncio_create_task(fn):
        # Wrap an async handler so it can be registered as a sync event callback.
        @wraps(fn)
        def wrapper(*args, **kwargs):
            return asyncio.create_task(fn(*args, **kwargs))
    
        return wrapper
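The `asyncio_create_task` helper in the snippet above is a general pattern for using async functions as synchronous event callbacks; a minimal standalone demonstration with no LiveKit involved:

```python
import asyncio
from functools import wraps

def asyncio_create_task(fn):
    # Turn an async handler into a sync callback that schedules a task
    # on the running event loop, as in the snippet above.
    @wraps(fn)
    def wrapper(*args, **kwargs):
        return asyncio.create_task(fn(*args, **kwargs))
    return wrapper

results = []

@asyncio_create_task
async def on_event(value):
    results.append(value)

async def main():
    task = on_event("hangup")  # sync call site, returns an asyncio.Task
    await task

asyncio.run(main())
# results == ["hangup"]
```

Note that `asyncio.create_task` requires a running loop, so the decorated callback must only be invoked while the loop is running (true for room event handlers inside an agent entrypoint).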
  • n

    nutritious-apple-28097

    10/12/2025, 5:22 PM
    Hello, we're looking for where we can make a request for a HIPAA BAA to get signed and it's nowhere to be found on the website or docs. Who do I get in touch with for this?
  • a

    adamant-plumber-44781

    10/15/2025, 5:01 PM
    Hey, the docs make it sound like when you set up CloudWatch you'd get the same logs out as
    lk agent log
    but the logs hitting CloudWatch are much more pared down. Is there something causing this limitation?
    👀 1
  • n

    narrow-pencil-46368

    10/16/2025, 3:53 PM
    Not sure if this is the best place, but it seems like the deeplearning.ai course doesn’t work. I checked other courses and they seem to work but not the LiveKit one.
    👀 1
  • m

    modern-stone-67508

    10/16/2025, 4:43 PM
    Accepting Calls to Any Phone Number Not Working
    Hi everyone, I'm trying to set up a SIP trunk with Twilio and LiveKit Cloud. The trunk config in LiveKit Cloud, and also the docs, say:
    You can configure an inbound trunk to accept incoming calls to any phone number by setting the numbers parameter to an empty string or wildcard character, for example, *
    But when I set it to an empty string or * the SIP call does not work. It only works when I set it to a specific number. I think the docs are wrong in this case? https://docs.livekit.io/sip/trunk-inbound/#accepting-calls-to-any-phone-number
    👀 1
  • d

    dazzling-apartment-68328

    10/17/2025, 5:08 PM
    Hello, it would be great to have documentation on real-time video processing, e.g. in combination with AI agents. It seems it's mentioned that this is possible (by default every 1-3 frames are processed), but I can't find examples of how to do it. My use case is that I would like to look for certain objects in the live feed, and while they are visible, have those frames modified by my app (point an arrow at them in the frame or similar). A lot of the documentation is about the voice capabilities now, and I understand that, but it would be great to leverage the realtime functionality in the video tracks/streams too. Thanks. https://docs.livekit.io/home/client/tracks/raw-tracks/
  • w

    worried-knife-36498

    10/18/2025, 10:45 AM
    Just wanted to say I love the docs. They're really great; I also love the way you've done recipes and integrated them into the docs. Overall the SDK is very good, simple and straightforward. Is there some way I can learn more about the WebRTC internals and Pion? I.e. the stuff one learns when onboarding at your company, the nitty gritty? Very interested in this tech.
    👀 1
  • m

    microscopic-bear-1284

    10/21/2025, 5:47 PM
    Not sure if this is an already known thing, though it appears there are a couple of broken links in the docs for the JS client SDK. The links to the example demo app and RPC demo look to be using relative links, so they don't work outside of the GitHub repo. I threw together this PR with one potential way to handle it, though I'm not sure it's the way y'all would like it handled.
    👀 1
  • s

    swift-photographer-84935

    10/24/2025, 10:46 AM
    Hey everyone 👋, I could use some help with a migration task. I'm a bit new to development, and I've been building a Next.js project that currently uses VAPI as the voice assistant. It handles real-time conversations with users and follows certain workflows; the main configurations come from the VAPI Dashboard (assistant.json and workflows.json). The project is also connected to Gemini and Firebase, and it's used for an interview process where the AI assistant talks to the user, asks questions, and processes responses in real time. Now I want to replace VAPI with LiveKit Voice AI but keep everything else in the system (Gemini, Firebase, workflows, etc.) working exactly as it does now. Since I'm not sure how to set up LiveKit for this or what needs to change in my current code, I'd really appreciate some help with:
    1. Understanding how VAPI is currently integrated in the codebase.
    2. Creating a clear plan for migrating from VAPI to LiveKit Voice AI.
    3. Guidance on what code/config updates are needed so LiveKit can handle the same real-time voice interactions and variable handling as VAPI.
    Thanks a lot in advance 🙏 Any examples, documentation links, or step-by-step help would be amazing!
    👀 1
  • d

    dry-musician-70452

    10/30/2025, 8:15 PM
    I think the doc is wrong here. Looking at the OpenAI realtime plugin code, the default turn_detection is actually
    semantic_vad
    . https://docs.livekit.io/agents/models/realtime/plugins/openai/ https://github.com/livekit/agents/blob/8dbfce1fdd2027ce497025e9bbbe0386d359eee9/li[…]livekit-plugins-openai/livekit/plugins/openai/realtime/utils.py
    👀 1
  • t

    tall-lamp-50063

    11/03/2025, 1:36 PM
    The DeepWiki hasn’t been working for me since maybe Thursday. Seeing a bunch of 404 errors from devin.ai in the console.
  • c

    colossal-airport-32984

    11/04/2025, 3:12 PM
    > https://docs.livekit.io/agents/ops/deployment/#regions
    > Currently, LiveKit Cloud deploys all agents to us-east (N. Virginia). More regions are coming soon.
    > https://docs.livekit.io/home/cloud/region-pinning/
    > Region pinning restricts network traffic to a specific geographical region. Use this feature to comply with local telephony regulations or data residency requirements.
    On protocol-based region pinning, the docs say:
    > When pinning is enabled, if the initial connection is routed to a server outside the allowed regions, the request is rejected.
    If the server is deployed in us-east and pinned to asia, I am unclear from the docs what happens to the network traffic. Read literally, it would presumably mean all traffic is rejected. Could you help me understand how data would be resident in asia when the server is in us-east? Does it mean that any data persisted at rest is held in asia while data in use and in transit stays in us-east?
    👀 1
  • f

    flaky-scooter-85113

    11/06/2025, 11:56 AM
    https://docs.livekit.io/agents/start/telephony/#voicemail-detection ^ TypeScript 'Voicemail Detector' sample - this actually errors in
    ctx.waitForPlayout()
    .
    👀 1
  • t

    tall-lamp-50063

    11/09/2025, 2:19 PM
    The docs for
    LocalParticipant.streamText
    in client-sdk-js are ambiguous.
    sendText
    docs say to consider using
    streamText
    (link), but
    streamText
    has no public docs and says it might go away (link). So, what am I supposed to be using for long texts?
    👀 1
  • b

    better-house-57730

    11/10/2025, 10:23 AM
    Hi folks, I've been deep-diving into the Python plugins and found that several Python plugin docs are published twice under different paths:
    • https://docs.livekit.io/reference/python/livekit/plugins/...
    • https://docs.livekit.io/reference/python/v1/livekit/plugins/...
    A concrete example is the ElevenLabs plugin:
    • Old: https://docs.livekit.io/reference/python/livekit/plugins/elevenlabs/
    • v1: https://docs.livekit.io/reference/python/v1/livekit/plugins/elevenlabs/
    The two pages don't match exactly (constructor signatures / types / deprecation notes), and it's not clear which one is canonical. The v1 docs seem to match the current Agents integration docs and the package on PyPI, and they are typically much more complete. Opened a GH issue here: https://github.com/livekit/python-sdks/issues/528 Thanks a lot guys!
    👍 1
    👀 1