# ask-ai
I’m running a local worker and having a local participant join. It was working fine until recently. Since yesterday, when a participant joins, the worker prints the following log:

```
session_id: 11111 :: pid(35761) :: ffi_event_callback :: ERROR :: livekit::rtc_engine::rtc_session:503 - signal_event taking too much time: Answer(SessionDescription { r#type: "answer", sdp: "..." })
```

The SDP carried in the answer (escaped `\r\n` line breaks expanded for readability):

```
v=0
o=- 9154060974659867878 1752209019 IN IP4 0.0.0.0
s=-
t=0 0
a=msid-semantic:WMS *
a=fingerprint:sha-256 70:9D:38:0E:7A:F2:A1:15:42:E7:3C:FB:42:A9:B2:04:4B:0D:DD:3B:BE:ED:EB:EB:B3:FA:82:74:9A:E0:A8:9D
a=ice-lite
a=extmap-allow-mixed
a=group:BUNDLE 0 1
m=video 9 UDP/TLS/RTP/SAVPF 100
c=IN IP4 0.0.0.0
a=setup:active
a=mid:0
a=ice-ufrag:RuZWNghjKQSGjJmc
a=ice-pwd:pKDAxVQxPfqaPzScnHEQLGcEEjlgXqth
a=rtcp-mux
a=rtcp-rsize
a=rtpmap:100 VP8/90000
a=rtcp-fb:100 transport-cc
a=rtcp-fb:100 ccm fir
a=rtcp-fb:100 nack
a=rtcp-fb:100 nack pli
a=extmap:9 urn:ietf:params:rtp-hdrext:sdes:mid
a=extmap:11 urn:ietf:params:rtp-hdrext:sdes:repaired-rtp-stream-id
a=extmap:4 http://www.ietf.org/id/draft-holmer-rmcat-transport-wide-cc-extensions-01
a=extmap:10 urn:ietf:params:rtp-hdrext:sdes:rtp-stream-id
a=extmap:12 https://aomediacodec.github.io/av1-rtp-spec/#dependency-descriptor-rtp-header-extension
a=rid:f recv
a=rid:h recv
a=rid:q recv
a=simulcast:recv f;h;q
a=recvonly
m=application 9 UDP/DTLS/SCTP webrtc-datachannel
c=IN IP4 0.0.0.0
a=setup:active
a=mid:1
a=sendrecv
a=sctp-port:5000
a=max-message-size:65535
a=ice-ufrag:RuZWNghjKQSGjJmc
a=ice-pwd:pKDAxVQxPfqaPzScnHEQLGcEEjlgXqth
```

What could be the cause of this?
Thanks for reaching out, 최윤지. I'm an AI assistant and I'm here to help. Give me a moment while I research this topic.
The error `signal_event taking too much time: Answer(SessionDescription ...)` indicates that your LiveKit worker is taking longer than 10 seconds to handle the signaling event that processes the participant's answer in the WebRTC negotiation. This usually points to a performance bottleneck or contention within the worker process, or potentially a synchronization/blocking issue in your environment.

Recommended troubleshooting steps:

- Review system resource usage (CPU, memory) on the worker machine during these events.
- Check the logs for additional errors and warnings immediately before and after the timeout; look for repeated patterns, or "failed to handle signal" errors.
- Isolate whether this is client-specific, server-side, or load-related by noting whether it happens under certain conditions (e.g., number of participants, room size, etc.).
- Verify all LiveKit components are using compatible versions.
- Monitor the timing and flow of the offer/answer exchange and check for network or signaling delays.

This behavior commonly results from system resource contention, negotiation queue state issues, or thread pool/resource exhaustion. If you are able to share more about recent changes in your environment (e.g., updates, configuration changes, workload changes), or if you have observed a pattern tied to particular clients or timings, this will help further pinpoint the cause.

Sources: livekit/src/room/mod.rs | Community & Support | livekit/livekit_composite | DeepWiki
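Since the warning means a signaling callback sat unprocessed for too long, one quick local check is to measure event-loop lag directly in the worker process. The sketch below is a minimal, hypothetical watchdog, assuming a Python asyncio-based worker; the names `watch_event_loop`, `LAG_WARN`, and the thresholds are illustrative, not part of the LiveKit SDK:

```python
# Hypothetical event-loop watchdog: detects synchronous work that blocks
# asyncio callbacks (the kind of stall that delays signal_event handling).
# Names and thresholds here are illustrative, not LiveKit APIs.
import asyncio
import time

LAG_WARN = 0.25  # report stalls longer than 250 ms
stalls: list[float] = []  # recorded stall durations, for inspection


async def watch_event_loop(interval: float = 0.1) -> None:
    """Sleep in short intervals and compare actual vs. expected wake-up time."""
    loop = asyncio.get_running_loop()
    while True:
        start = loop.time()
        await asyncio.sleep(interval)
        lag = loop.time() - start - interval  # extra time beyond the sleep
        if lag > LAG_WARN:
            stalls.append(lag)
            print(f"event loop blocked for ~{lag:.3f}s")


async def main() -> None:
    watchdog = asyncio.create_task(watch_event_loop())
    await asyncio.sleep(0.2)   # normal async operation: no stall reported
    time.sleep(0.6)            # simulated blocking call inside the loop
    await asyncio.sleep(0.2)   # yield so the watchdog can measure the stall
    watchdog.cancel()


asyncio.run(main())
```

If the watchdog fires around the time a participant joins, look for synchronous work (file I/O, model loading, heavy CPU) running inside async handlers and move it off the loop, e.g. with `asyncio.to_thread` or an executor. Asyncio's built-in debug mode (`loop.slow_callback_duration`) offers a similar check without custom code.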