powerful-kitchen-37205
10/20/2025, 3:41 PM
1. Participant List (lk room participants list) shows the Agent is successfully subscribed to the Bridge's audio track (muted: false).
2. Health Check is OK: /health endpoint returns "ok."
Confirmed Troubleshooting
1. Firewall: UDP ports 50000-60000 and TCP ports 7880/7881 are open on the CentOS firewall (firewalld); see the command sketch after this list.
2. NAT/IP: livekit.yaml is configured with use_external_ip: true and the server's public IP is explicitly set in node_ip to handle NAT.
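For reference, a sketch of firewalld commands matching the rules described above (default zone assumed; verification only, not a confirmed setup):

    firewall-cmd --list-ports                          # should include 50000-60000/udp, 7880-7881/tcp
    firewall-cmd --permanent --add-port=50000-60000/udp
    firewall-cmd --permanent --add-port=7880-7881/tcp
    firewall-cmd --reload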
Request
The signaling is correct, and the ports are open, yet no media flows. This suggests an issue with the LiveKit server's UDP routing/forwarding on the CentOS platform. Any guidance on specific server-side logging or configuration unique to WebRTC media flow troubleshooting would be greatly appreciated.
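A minimal livekit.yaml sketch of the setup described above, with debug logging enabled for media-flow troubleshooting (<public-ip> is a placeholder; everything else is assumed default):

    rtc:
      tcp_port: 7881
      port_range_start: 50000
      port_range_end: 60000
      use_external_ip: true
      node_ip: <public-ip>   # placeholder for the server's public IP
    logging:
      level: debug           # should surface ICE candidate and media negotiation detail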
freezing-teacher-57973
10/23/2025, 10:05 AM
rtc:
  udp_port: 7882-7892
  use_external_ip: false
  node_ip: 172.xxx.xx.xx   # public IP of the load balancer
This configuration works well for all clients, both those running inside the cluster and those outside, but internal clients also use the load balancer's public IP to connect to the server. This adds unnecessary inbound/outbound network cost, and there is a potential latency impact from the additional network hops (I am not experiencing this at the moment).
So my question is: how can I make the server advertise both the internal and public IP addresses, so that internal clients choose the internal IP address and public clients choose the public one? There is an old GitHub issue about this too.
I am also a bit reluctant to enable TURN due to the latency impact. It took me quite a while to bring the latency down to an acceptable level, and I don't want to lose those gains by using TURN. Any input will be very helpful. Thanks in advance!
https://github.com/livekit/livekit/issues/1898
red-park-73886
10/25/2025, 7:57 PM
strong-flag-7078
10/26/2025, 4:50 PM
strong-flag-7078
10/26/2025, 4:51 PM
wonderful-carpet-2571
10/27/2025, 9:53 AM
room": "<redacted>", "participant": "<redacted>", "connID": "CO_Uz7oj2ERVjr2", "error": "request canceled"}
2025-10-27 09:10:33.905Z github.com/livekit/livekit-server/pkg/routing.(*LocalRouter).StartParticipantSignalWithNodeID
2025-10-27 09:10:33.905Z     /workspace/pkg/routing/localrouter.go:113
2025-10-27 09:10:33.905Z github.com/livekit/livekit-server/pkg/routing.(*RedisRouter).StartParticipantSignal
2025-10-27 09:10:33.905Z     /workspace/pkg/routing/redisrouter.go:169
2025-10-27 09:10:33.905Z github.com/livekit/livekit-server/pkg/service.(*RTCService).startConnection
2025-10-27 09:10:33.905Z     /workspace/pkg/service/rtcservice.go:607
2025-10-27 09:10:33.905Z github.com/livekit/livekit-server/pkg/service.(*RTCService).ServeHTTP
2025-10-27 09:10:33.905Z     /workspace/pkg/service/rtcservice.go:277
2025-10-27 09:10:33.905Z net/http.(*ServeMux).ServeHTTP
2025-10-27 09:10:33.905Z     /usr/local/go/src/net/http/server.go:2822
2025-10-27 09:10:33.905Z github.com/urfave/negroni/v3.(*Negroni).UseHandler.Wrap.func1
2025-10-27 09:10:33.905Z     /go/pkg/mod/github.com/urfave/negroni/v3@v3.1.1/negroni.go:59
2025-10-27 09:10:33.905Z github.com/urfave/negroni/v3.HandlerFunc.ServeHTTP
2025-10-27 09:10:33.905Z     /go/pkg/mod/github.com/urfave/negroni/v3@v3.1.1/negroni.go:33
2025-10-27 09:10:33.905Z github.com/urfave/negroni/v3.middleware.ServeHTTP
2025-10-27 09:10:33.905Z     /go/pkg/mod/github.com/urfave/negroni/v3@v3.1.1/negroni.go:51
2025-10-27 09:10:33.905Z net/http.HandlerFunc.ServeHTTP
2025-10-27 09:10:33.905Z     /usr/local/go/src/net/http/server.go:2294
2025-10-27 09:10:33.905Z github.com/livekit/livekit-server/pkg/service.(*APIKeyAuthMiddleware).ServeHTTP
2025-10-27 09:10:33.905Z     /workspace/pkg/service/auth.go:107
2025-10-27 09:10:33.905Z github.com/urfave/negroni/v3.middleware.ServeHTTP
2025-10-27 09:10:33.905Z     /go/pkg/mod/github.com/urfave/negroni/v3@v3.1.1/negroni.go:51
2025-10-27 09:10:33.905Z github.com/livekit/livekit-server/pkg/service.RemoveDoubleSlashes
2025-10-27 09:10:33.905Z     /workspace/pkg/service/utils.go:49
2025-10-27 09:10:33.905Z github.com/urfave/negroni/v3.HandlerFunc.ServeHTTP
2025-10-27 09:10:33.905Z     /go/pkg/mod/github.com/urfave/negroni/v3@v3.1.1/negroni.go:33
2025-10-27 09:10:33.905Z github.com/urfave/negroni/v3.middleware.ServeHTTP
2025-10-27 09:10:33.905Z     /go/pkg/mod/github.com/urfave/negroni/v3@v3.1.1/negroni.go:51
2025-10-27 09:10:33.905Z github.com/rs/cors.(*Cors).ServeHTTP
2025-10-27 09:10:33.905Z     /go/pkg/mod/github.com/rs/cors@v1.11.1/cors.go:324
2025-10-27 09:10:33.905Z github.com/urfave/negroni/v3.middleware.ServeHTTP
2025-10-27 09:10:33.905Z     /go/pkg/mod/github.com/urfave/negroni/v3@v3.1.1/negroni.go:51
2025-10-27 09:10:33.905Z github.com/urfave/negroni/v3.(*Recovery).ServeHTTP
2025-10-27 09:10:33.905Z     /go/pkg/mod/github.com/urfave/negroni/v3@v3.1.1/recovery.go:210
2025-10-27 09:10:33.905Z github.com/urfave/negroni/v3.middleware.ServeHTTP
2025-10-27 09:10:33.905Z     /go/pkg/mod/github.com/urfave/negroni/v3@v3.1.1/negroni.go:51
2025-10-27 09:10:33.905Z github.com/urfave/negroni/v3.(*Negroni).ServeHTTP
2025-10-27 09:10:33.905Z     /go/pkg/mod/github.com/urfave/negroni/v3@v3.1.1/negroni.go:111
2025-10-27 09:10:33.905Z net/http.serverHandler.ServeHTTP
2025-10-27 09:10:33.905Z     /usr/local/go/src/net/http/server.go:3301
2025-10-27 09:10:33.905Z net/http.(*conn).serve
2025-10-27 09:10:33.905Z     /usr/local/go/src/net/http/server.go:2102
2025-10-27 09:10:34.002Z 2025-10-27T09:10:34.002Z ERROR livekit routing/localrouter.go:113 could not handle new participant {"room": "<redacted>", "participant": "<redacted>", "connID": "CO_z8zbfBe3n5o9", "error": "request canceled"}
I'm running livekit server on a beefy machine with the limits below, but we still end up throttling. Is there a way to horizontally scale the RTC server? My deployment never hits the resource limits, yet it still starts to fail once we have about 250-300 rooms open concurrently (see the sketch after the resource block below).
resources:
  requests:
    cpu: 13500m      # ~13.5 cores
    memory: 55Gi     # ~55 GiB
  limits:
    cpu: 15000m      # full 16 cores
    memory: 62Gi     # full 64 GiB
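One avenue worth checking (a sketch, not a confirmed fix): LiveKit supports multi-node deployments coordinated through Redis, so several server instances can share the room load. The address below is a placeholder:

    redis:
      address: redis-host:6379   # placeholder; all nodes pointing at one Redis enables distributed routing

Each room still lives on a single node, but different rooms land on different nodes, so a 250-300-room load can spread horizontally.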
rapid-van-16677
10/28/2025, 4:28 PM
some-lamp-76309
10/28/2025, 6:52 PM
fast-hairdresser-13908
10/29/2025, 11:44 AM
billions-refrigerator-97088
10/30/2025, 2:59 PM
bitter-grass-45470
10/31/2025, 6:52 AM
input in Langfuse. Following this example: https://github.com/livekit/agents/blob/main/examples/voice_agents/langfuse_trace.py
Is anyone using Langfuse? How do traces look for you?
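For context, the linked example reads Langfuse credentials from the environment; a sketch of that setup with placeholder values (variable names follow Langfuse's conventions; double-check the script for the exact ones it reads):

    LANGFUSE_PUBLIC_KEY=pk-lf-...     # placeholder
    LANGFUSE_SECRET_KEY=sk-lf-...     # placeholder
    LANGFUSE_HOST=https://cloud.langfuse.com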
green-optician-80808
11/01/2025, 1:17 PM
green-parrot-24267
11/03/2025, 3:57 AM
prehistoric-printer-73777
11/03/2025, 8:33 AM
brainy-shoe-64693
11/03/2025, 11:25 AM
white-oxygen-36812
11/03/2025, 4:37 PM
Could not connect to room: could not establish signal connection: Websocket got closed during a (re)connection attempt
It seems they never reached the LiveKit server, likely due to some firewall or network protection.
We use Cloudflare on our domain.
Any recommendation?
sparse-coat-14874
11/04/2025, 1:43 PM
chilly-balloon-11273
11/06/2025, 1:41 AM
brainy-shoe-64693
11/07/2025, 10:42 AM
ancient-pizza-45206
11/08/2025, 6:53 AM
important-eve-8964
11/08/2025, 6:02 PM
invalid_argument: missing rule error, even though I believe I am following the documented structure.
The Setup & Issue
1. Command Execution:
~/project# lk sip dispatch create dispatch-rule.json
Using default project [--project_name]
twirp error invalid_argument: missing rule
2. dispatch-rule.json content (the docs' shape is sketched after it for comparison):
{
  "dispatch_rule": {
    "rule": {
      "dispatchRuleIndividual": {
        "roomPrefix": "call-"
      }
    }
  }
}
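For comparison, a hedged sketch of the shape shown in LiveKit's dispatch-rule docs, where rule sits at the top level without a dispatch_rule wrapper (a guess at the cause of "missing rule", not a confirmed fix):

{
  "rule": {
    "dispatchRuleIndividual": {
      "roomPrefix": "call-"
    }
  }
}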
Troubleshooting Attempted
I have already reviewed and tried solutions from the official documentation and related GitHub issues:
• LiveKit Docs: Checked the dispatch-rule format and setup guide (specifically https://docs.livekit.io/sip/dispatch-rule).
• GitHub Issue: Referenced https://github.com/livekit/livekit/issues/3789, which discusses dispatch rule creation, but its solution hasn't resolved my specific "missing rule" error.
Has anyone encountered this specific twirp error invalid_argument: missing rule when using the lk sip dispatch create command?
Thanks in advance for any insights!
busy-cricket-46386
11/08/2025, 6:15 PM
INVITE but rejects it seconds later.
• Key Log Line: The call is closed with status 486 and reason "flood". This points to an internal rate-limiting mechanism being triggered in the LiveKit SIP component.
Log snippet:
livekit-sip | 2025-11-08T18:02:56.280Z INFO sip sip/inbound.go:782 Closing inbound call {"nodeID": "...", "callID": "SCL_bpEBHK2W5LmX", "fromIP": "...", "fromHost": "...", "status": 486, "reason": "flood"}
My Setup Context
• LiveKit Components: Self-hosted LiveKit Server, Agent, and SIP.
• Call Origin: An inbound call originated from Twilio (which then connects to my SIP endpoint).
• Error Cause: The log suggests the call is being rejected for perceived "flooding," even though it is a single legitimate call attempt.
Is there a specific configuration or environment variable in the LiveKit SIP service that controls the inbound call rate-limiting (e.g., calls per second from a single IP)?
Or is there any other solution for this?
Thanks!
few-hamburger-81561
11/09/2025, 10:46 PM
boundless-battery-65494
11/10/2025, 1:02 PM
alert-farmer-13558
11/11/2025, 6:58 AM
stocky-salesclerk-58931
11/11/2025, 7:29 AM
inbound_trunk.json file with the correct IP addresses and numbers, along with the dispatch rule and agent configuration. On the other side, my Asterisk setup is properly configured.
When I originate a call from Asterisk, it keeps ringing continuously. However, on the LiveKit side my agent gets triggered and starts generating streams after the call starts.
In the PCAP and SIP data there is no 200 OK / ACK message, and from the Asterisk side the call keeps ringing.
I'm not able to pinpoint the exact issue, and I'm confused: if the SIP session isn't accepted, how is the agent getting triggered?
kind-engineer-18120
11/11/2025, 8:07 AM
delightful-planet-44653
11/12/2025, 5:21 AM
boundless-afternoon-2110
11/12/2025, 10:46 PM
lk) on one server. I need to launch two agents on one server without using Docker. Each agent belongs to its own project (in the cloud or self-hosted, it doesn't matter). I added both projects to lk (the LiveKit CLI), but I can't enable both at the same time: only one can be set as default. Therefore, I need either two lk setups or two simultaneously active projects.
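Two hedged sketches of how this is commonly handled (project names, URLs, and keys are placeholders): the lk CLI accepts a --project flag per command, and agents read the standard LIVEKIT_* environment variables, so each agent process can point at its own project:

    # per-command project selection (assumes both projects were added with lk project add)
    lk room list --project project-a
    lk room list --project project-b

    # terminal 1: agent for project A
    LIVEKIT_URL=wss://project-a.livekit.cloud LIVEKIT_API_KEY=... LIVEKIT_API_SECRET=... python agent_a.py dev
    # terminal 2: agent for project B
    LIVEKIT_URL=wss://project-b.livekit.cloud LIVEKIT_API_KEY=... LIVEKIT_API_SECRET=... python agent_b.py dev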
fancy-oil-75837
11/14/2025, 6:39 AM
LIVEKIT_URL: http://livekit-server:7880
const participant = await sipClient.createSipParticipant(
  trunkId,
  phoneNumber,
  roomName,
  sipParticipantOptions
);
Following are the parameters:
Creating SIP participant with parameters:
trunkId: ST_8JTfsYKn3nmS
phoneNumber: +918126131525
roomName: support-room-ui
options: {
  "participantIdentity": "agent-ui",
  "participantName": "agent-ui",
  "krispEnabled": true,
  "waitUntilAnswered": true,
  "hidePhoneNumber": false
}
The error I am getting:
Error message: twirp error unknown: update room failed: URL was not provided
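"URL was not provided" suggests some component never received its server URL. A hedged sketch of one thing to double-check (not a confirmed fix): the Node server SDK's SipClient takes the URL explicitly in its constructor, and a self-hosted SIP service also needs ws_url set in its own config:

    // livekit-server-sdk for Node; apiKey/apiSecret are placeholders
    const { SipClient } = require('livekit-server-sdk');
    const sipClient = new SipClient('http://livekit-server:7880', apiKey, apiSecret);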