# ask-ai
b
While load testing the agent with this command:
```
lk perf agent-load-test --rooms 50 --agent-name load-testing-agent --echo-speech-delay 5s --duration 3m
```
I am getting this log:
```
Participant disconnected, rp:agent-AJ_SvVdCt5kmnB3/PA_nx5reg427Vgw
```
What does this mean?
t
Thanks for reaching out, Harsh. I'm an AI assistant and I'm here to help. Give me a moment while I research this topic.
Hey there! That log entry saying "Participant disconnected, rp:agent-AJ_SvVdCt5kmnB3/PA_nx5reg427Vgw" means that someone with the ID "agent-AJ_SvVdCt5kmnB3/PA_nx5reg427Vgw" has left the room. There are a few reasons why this might happen:
• Server-Initiated Disconnect: Sometimes the server might remove a participant, or the whole room could have been deleted.
• Network Issues: Connectivity hiccups can cause someone to drop off.
• Duplicate Identity: If another participant joins using the same ID, the first one gets disconnected.
• Session Timeout: If the participant was inactive for a while, their session might time out.
• Manual Disconnection: They might have chosen to disconnect themselves.
• Room Closure: If everyone leaves, the room might close automatically.
If you want to dig a bit deeper, here’s what you can do:
1. Take a look at the agent logs to see if they give any clues about why the disconnect happened.
2. Check if this disconnection was expected as part of your test setup.
3. Peek at the server logs for any related error messages.
4. Make sure each agent has a unique identity to avoid mix-ups.
Just so you know, this kind of log is pretty normal during load testing and usually isn’t something to worry about unless it happens a lot or at unexpected times.
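If it helps, here’s a rough Go sketch of how you could watch these disconnects yourself using the LiveKit Go SDK’s room callbacks. Treat it as illustrative only: the URL, keys, and room name are placeholders, not values from your setup.
```go
// Illustrative sketch: join a room as an observer and log every remote-participant
// disconnect. Assumes github.com/livekit/server-sdk-go/v2; all credentials below
// are placeholders.
package main

import (
	"log"

	lksdk "github.com/livekit/server-sdk-go/v2"
)

func main() {
	room, err := lksdk.ConnectToRoom("wss://your-project.livekit.cloud", lksdk.ConnectInfo{
		APIKey:              "your-api-key",    // placeholder
		APISecret:           "your-api-secret", // placeholder
		RoomName:            "load-test-room",  // placeholder
		ParticipantIdentity: "observer",
	}, &lksdk.RoomCallback{
		OnParticipantDisconnected: func(rp *lksdk.RemoteParticipant) {
			log.Printf("participant left: %s (%s)", rp.Identity(), rp.SID())
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	defer room.Disconnect()
	select {} // keep observing until the process is killed
}
```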
b
What does this error mean?
```
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x2 addr=0x0 pc=0x101e37d04]

goroutine 27422 [running]:
github.com/livekit/livekit-cli/v2/pkg/loadtester.(*LoadTestRoom).onParticipantDisconnected(0x14003be0000, 0x14006646a80)
	github.com/livekit/livekit-cli/v2/pkg/loadtester/agentloadtester.go:284 +0xf4
created by github.com/livekit/server-sdk-go/v2.(*Room).OnParticipantDisconnect in goroutine 25693
	github.com/livekit/server-sdk-go/v2@v2.9.3-0.20250809171724-6ac93dd428bf/room.go:850 +0x144
```
t
Hey there! That error message, `panic: runtime error: invalid memory address or nil pointer dereference`, pops up when the code tries to use a method or field on something that’s actually nil (or null). In your case, it’s happening inside the `onParticipantDisconnected` function in `agentloadtester.go`. The trouble starts when it tries to call `rp.Identity()` without first checking if `rp` or `r.firstParticipant` are nil. Basically, `r.firstParticipant` hasn’t been set up yet (it’s nil), so when the code tries to access `Identity()` on it, the panic occurs. Here’s a simple fix: you can tweak the `onParticipantDisconnected` function to check for nil values before doing anything:
```go
func (r *LoadTestRoom) onParticipantDisconnected(rp *lksdk.RemoteParticipant) {
    log.Printf("Participant disconnected, rp:%v/%v", rp.Identity(), rp.SID())
    if r.firstParticipant != nil && rp.Identity() == r.firstParticipant.Identity() {
        r.stop()
    }
}
```
What’s new here?
• We added a check to make sure `r.firstParticipant` isn’t nil before calling `Identity()` on it.
• For extra safety, you might also want to check if `rp` is nil before using it.
With this little change, your app won’t crash because of nil pointer issues in that callback. Hope this helps you get things running smoothly!
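If you want that extra guard as well, here’s a minimal sketch of the more defensive variant. It assumes the same `LoadTestRoom` struct and fields as the snippet above, so treat it as illustrative rather than the exact upstream code:
```go
// Defensive variant: guard both rp and r.firstParticipant before touching them.
// Sketch only; assumes the same LoadTestRoom fields shown above.
func (r *LoadTestRoom) onParticipantDisconnected(rp *lksdk.RemoteParticipant) {
	if rp == nil {
		return // nothing to log or compare without a participant handle
	}
	log.Printf("Participant disconnected, rp:%v/%v", rp.Identity(), rp.SID())
	if r.firstParticipant != nil && rp.Identity() == r.firstParticipant.Identity() {
		r.stop()
	}
}
```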
b
What does this error tell me - turnc ERROR: 2025/08/16 14:08:39 Fail to refresh permissions: all retransmissions failed for V6/T0KhWlvTY3MWb
t
Hey there! That error message, "turnc ERROR: Fail to refresh permissions: all retransmissions failed", basically means there’s a hiccup connecting to the TURN (Traversal Using Relays around NAT) server. In other words, the requests to refresh permissions with the TURN server didn’t get through, even after trying multiple times. Here’s what usually causes this:
• Network Infrastructure Issues: Sometimes firewalls or NAT devices block UDP traffic or the packets that follow.
• TURN Server Connectivity Problems: The TURN server might be unreachable, or the connection to it could be unstable.
What you can do to fix it:
1. Check if your network can connect to the TURN server.
2. Make sure your firewalls allow TURN traffic (a quick reachability sketch follows at the end of this answer).
3. Try forcing the connection through the TURN relay (on LiveKit Cloud the relay is also reachable over TCP/TLS on port 443, which can get around firewall blocks) by configuring:
```
connectOpts.rtcConfig = { iceTransportPolicy: 'relay' };
```
4. Add TURN servers to your whitelist (for example, `*.turn.livekit.cloud` on TCP/443 if you’re using LiveKit Cloud).
5. Run the LiveKit Connection Tester to help spot any connectivity issues.
Usually, this error clears up once your network can reach the TURN server again or when firewall settings are updated to allow TURN traffic. Switching from UDP to TCP/TLS transport can also make a difference sometimes.
Sources: Firewall Tips | Authentication | LiveKit Docs
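For step 2, here’s a minimal Go sketch that just checks whether a TLS handshake on TCP/443 with the TURN host succeeds from inside your network. The hostname is a placeholder; substitute your project’s actual TURN endpoint:
```go
// Quick reachability probe for a TURN/TLS endpoint on TCP 443.
// The hostname is a placeholder; use your project's TURN host.
package main

import (
	"crypto/tls"
	"fmt"
	"net"
	"time"
)

func main() {
	host := "example.turn.livekit.cloud" // placeholder
	addr := net.JoinHostPort(host, "443")

	conn, err := tls.DialWithDialer(&net.Dialer{Timeout: 5 * time.Second}, "tcp", addr, &tls.Config{ServerName: host})
	if err != nil {
		fmt.Printf("cannot reach %s: %v\n", addr, err)
		return
	}
	defer conn.Close()
	fmt.Printf("TLS handshake with %s succeeded\n", addr)
}
```
If the handshake fails here but works from another network, the firewall on the failing network is the likely culprit.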
b
While performing a load test, how do I understand the results? How would we know if the pods on Kubernetes are able to handle the load?
t
Here’s a simple way to interpret your load test results and see if your Kubernetes pods can handle the load:
1. Run the Load Test: Start by using the LiveKit CLI command `lk load-test` to simulate the number of publishers and subscribers you want. For example:
```
lk load-test --video-publishers 150 --subscribers 150 --duration 1m
```
2. Analyze Load Test Results: Keep an eye on these important metrics:
• Packet Loss: If it’s over 5%, that might mean your system is overloaded.
• Bitrate: Check the total and average bitrate per subscriber to understand the data flow.
• Latency: High latency could point to network or server issues.
• CPU Utilization: CPU usage above 80% usually means your resources are getting stretched.
3. Monitor Kubernetes Pods: Use `kubectl top pod <pod-name>` to see CPU and memory usage (a programmatic sketch follows after this list). If CPU stays above 80%, it’s a sign you might need to scale up.
4. Compare Against Baseline: It helps to have a baseline with expected loads so you can spot any unusual changes.
5. Analyze Grafana Dashboards: If you have them, these dashboards are great for real-time insights on participants, CPU, and memory usage.
6. Adjust Resources: If your pods are struggling, consider scaling your Kubernetes deployment or tweaking resource limits.
If you notice high packet loss, latency, or CPU usage, scaling up or adjusting resources is usually the way to go. Feel free to reach out if you want help digging into specific metrics or setting things up!
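If you’d rather sample pod usage programmatically while the test runs (instead of re-running `kubectl top pod` by hand), here’s a rough Go sketch against the Kubernetes metrics API. It assumes metrics-server is installed and your kubeconfig is in the default location; the namespace and label selector are placeholders for your deployment:
```go
// Sketch: sample pod CPU/memory from the Kubernetes metrics API during a load test.
// Assumes metrics-server is running in the cluster; namespace and label selector
// below are placeholders.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
	metricsclient "k8s.io/metrics/pkg/client/clientset/versioned"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	mc, err := metricsclient.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Sample every 15s for roughly 3 minutes, matching a --duration 3m test run.
	for i := 0; i < 12; i++ {
		opts := metav1.ListOptions{LabelSelector: "app=livekit-server"} // placeholder selector
		pods, err := mc.MetricsV1beta1().PodMetricses("livekit").List(context.Background(), opts) // "livekit" namespace is a placeholder
		if err != nil {
			panic(err)
		}
		for _, pm := range pods.Items {
			for _, c := range pm.Containers {
				fmt.Printf("%s/%s cpu=%s mem=%s\n", pm.Name, c.Name, c.Usage.Cpu().String(), c.Usage.Memory().String())
			}
		}
		time.Sleep(15 * time.Second)
	}
}
```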
b
I am performing an agent load test.
t
Here’s a friendly guide to help you interpret the results of your agent load test and see if your Kubernetes pods are up for the challenge:
1. Get to Know Your Load Test Results: The agent load test gives you some important stats like:
• Agent Dispatched At: When the agent was sent off to the room.
• Agent Joined: Whether the agent successfully made it into the room.
• Agent Join Delay: How long it took for the agent to join after being dispatched.
• Agent Track Subscribed: Whether the agent subscribed to the user’s audio track.
• Echo Track Published: Whether the echo track got published successfully.
If you see agents dispatched and joined properly, with tracks subscribed, that’s a great sign your system can handle the load.
2. Keep an Eye on Your Kubernetes Pod Resources: Tools like `kubectl top` are super handy to check CPU and memory use during the test. If you notice high usage, it might be time to think about scaling.
3. Check How Load Is Balanced: Make sure the work is spread out evenly across your pods so none of them get overwhelmed.
4. Look Over System Metrics: Things like packet loss, latency, and error rates tell you a lot. Low packet loss and steady latency mean things are running smoothly.
5. Compare with Benchmarks: LiveKit’s benchmarks (for example, a 4-Core, 8GB machine can handle 10-25 concurrent voice agents) are a helpful reference to see how your setup stacks up.
6. Tweak Your Scaling: If needed, you can add more pods (scale out) or give your pods more resources (scale up) to keep things running well.
If you want a hand digging into specific metrics or need tips on adjusting your Kubernetes setup, just let me know; I’m here to help!