billions-lawyer-8588
08/16/2025, 8:34 PM
lk perf agent-load-test --rooms 50 --agent-name load-testing-agent --echo-speech-delay 5s --duration 3m
I am getting this log -
Participant disconnected, rp:agent-AJ_SvVdCt5kmnB3/PA_nx5reg427Vgw
What does this mean?
tall-airline-16756
08/16/2025, 8:34 PM
tall-airline-16756
08/16/2025, 8:34 PM
billions-lawyer-8588
08/16/2025, 8:42 PM
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x2 addr=0x0 pc=0x101e37d04]
goroutine 27422 [running]:
github.com/livekit/livekit-cli/v2/pkg/loadtester.(*LoadTestRoom).onParticipantDisconnected(0x14003be0000, 0x14006646a80)
        github.com/livekit/livekit-cli/v2/pkg/loadtester/agentloadtester.go:284 +0xf4
created by github.com/livekit/server-sdk-go/v2.(*Room).OnParticipantDisconnect in goroutine 25693
        github.com/livekit/server-sdk-go/v2@v2.9.3-0.20250809171724-6ac93dd428bf/room.go:850 +0x144
tall-airline-16756
08/16/2025, 8:43 PM
This error, panic: runtime error: invalid memory address or nil pointer dereference, pops up when the code tries to use a method or field on something that's actually nil (or null). In your case, it's happening inside the onParticipantDisconnected function in agentloadtester.go. The trouble starts when it tries to call rp.Identity() without first checking if rp or r.firstParticipant is nil.
Basically, r.firstParticipant hasn't been set up yet (it's nil), so when the code tries to access Identity() on it, the panic occurs.
Here's a simple fix: you can tweak the onParticipantDisconnected function to check for nil values before doing anything:
func (r *LoadTestRoom) onParticipantDisconnected(rp *lksdk.RemoteParticipant) {
    log.Printf("Participant disconnected, rp:%v/%v", rp.Identity(), rp.SID())
    if r.firstParticipant != nil && rp.Identity() == r.firstParticipant.Identity() {
        r.stop()
    }
}
What’s new here?
• We added a check to make sure r.firstParticipant isn't nil before calling Identity() on it.
• For extra safety, you might also want to check if rp is nil before using it (see the sketch just below).
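If you want that extra guard as well, here's a minimal sketch of the same callback with a nil check on rp added up front; the rest of the logic stays the same:
func (r *LoadTestRoom) onParticipantDisconnected(rp *lksdk.RemoteParticipant) {
    // Bail out early if the SDK ever hands us a nil participant.
    if rp == nil {
        return
    }
    log.Printf("Participant disconnected, rp:%v/%v", rp.Identity(), rp.SID())
    // Only stop the room when the participant that left is the first one we tracked.
    if r.firstParticipant != nil && rp.Identity() == r.firstParticipant.Identity() {
        r.stop()
    }
}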
With this little change, your app won't crash because of nil pointer issues in that callback. Hope this helps you get things running smoothly!
billions-lawyer-8588
08/16/2025, 8:51 PM
tall-airline-16756
08/16/2025, 8:52 PM
To force connections through a TURN relay, you can set:
connectOpts.rtcConfig = { iceTransportPolicy: 'relay' };
1. Add TURN servers to your whitelist (for example, *.turn.livekit.cloud on TCP/443 if you're using LiveKit Cloud).
2. Run the LiveKit Connection Tester to help spot any connectivity issues.
Usually, this error clears up once your network can reach the TURN server again or when firewall settings are updated to allow TURN traffic. Switching from UDP to TCP/TLS transport can also make a difference sometimes.
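In case it helps to see where that option goes, here's a rough sketch using the JS client SDK (livekit-client), passing the connect options directly to room.connect; the URL and token below are placeholders:
import { Room } from 'livekit-client';

// Placeholders: use your own LiveKit server URL and access token.
const serverUrl = 'wss://your-project.livekit.cloud';
const accessToken = '<your-access-token>';

const room = new Room();
// iceTransportPolicy: 'relay' forces media through TURN instead of trying direct UDP paths.
await room.connect(serverUrl, accessToken, {
  rtcConfig: { iceTransportPolicy: 'relay' },
});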
Sources: Firewall Tips | Authentication | LiveKit Docs
billions-lawyer-8588
08/16/2025, 8:58 PM
tall-airline-16756
08/16/2025, 8:58 PM
Use lk load-test to simulate the number of publishers and subscribers you want. For example:
lk load-test --video-publishers 150 --subscribers 150 --duration 1m
1. Analyze Load Test Results: Keep an eye on these important metrics:
• Packet Loss: If it’s over 5%, that might mean your system is overloaded.
• Bitrate: Check the total and average bitrate per subscriber to understand the data flow.
• Latency: High latency could point to network or server issues.
• CPU Utilization: CPU usage above 80% usually means your resources are getting stretched.
2. Monitor Kubernetes Pods: Use kubectl top pod <pod-name> to see CPU and memory usage. If CPU stays above 80%, it's a sign you might need to scale up.
3. Compare Against Baseline: It helps to have a baseline with expected loads so you can spot any unusual changes.
4. Analyze Grafana Dashboards: If you have them, these dashboards are great for real-time insights on participants, CPU, and memory usage.
5. Adjust Resources: If your pods are struggling, consider scaling your Kubernetes deployment or tweaking resource limits (example commands just below).
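As a rough illustration of that last step, scaling usually comes down to commands like these; livekit-server is just a placeholder for whatever your deployment is actually named:
# Add more pods (scale out); adjust the deployment name and replica count to your setup.
kubectl scale deployment livekit-server --replicas=5
# Raise per-pod limits (scale up).
kubectl set resources deployment livekit-server --limits=cpu=4,memory=8Gi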
If you notice high packet loss, latency, or CPU usage, scaling up or adjusting resources is usually the way to go. Feel free to reach out if you want help digging into specific metrics or setting things up!
billions-lawyer-8588
08/16/2025, 8:59 PM
tall-airline-16756
08/16/2025, 8:59 PM
1. Monitor Resource Usage: Tools like kubectl top are super handy to check CPU and memory use during the test. If you notice high usage, it might be time to think about scaling.
2. Check How Load Is Balanced: Make sure the work is spread out evenly across your pods so none of them get overwhelmed.
3. Look Over System Metrics: Things like packet loss, latency, and error rates tell you a lot. Low packet loss and steady latency mean things are running smoothly.
4. Compare with Benchmarks: LiveKit's benchmarks (for example, a 4-core, 8GB machine can handle 10-25 concurrent voice agents) are a helpful reference to see how your setup stacks up; there's a quick sizing example after this list.
5. Tweak Your Scaling: If needed, you can add more pods (scale out) or give your pods more resources (scale up) to keep things running well.
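For a rough back-of-the-envelope example using those benchmark numbers: the 50-room agent test from earlier would land somewhere around two to five such 4-core/8GB nodes (50 / 25 = 2 at the optimistic end, 50 / 10 = 5 at the conservative end), so treat any single-node result as a starting point and let your own measurements decide.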
If you want a hand digging into specific metrics or need tips on adjusting your Kubernetes setup, just let me know—I’m here to help!