# ask-ai
  • witty-yak-62929 (05/30/2025, 8:51 AM)

    I am using OpenAI TTS ("gpt-4o-mini-tts"). TTSMetrics reports streamed: No. How do I make it stream?
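TTSMetrics reporting streamed: No usually means each synthesis request is produced in one shot. If the endpoint itself doesn't stream, one common workaround is to segment long text into sentence-sized chunks and synthesize them as they complete. A toy stdlib sketch of such a chunker (the function name and the 20-character threshold are illustrative, not a LiveKit API):

```python
import re

def sentence_chunks(text: str, min_len: int = 20):
    """Split text into sentence-sized chunks so each piece can be sent
    to a non-streaming TTS endpoint as soon as it is complete."""
    parts = re.split(r"(?<=[.!?])\s+", text.strip())
    buf = ""
    for part in parts:
        buf = f"{buf} {part}".strip() if buf else part
        if len(buf) >= min_len:  # flush once the chunk is long enough
            yield buf
            buf = ""
    if buf:
        yield buf  # trailing remainder

chunks = list(sentence_chunks("Hello there. This is a longer sentence for TTS. Bye."))
print(chunks)
```

Each yielded chunk would be handed to the TTS as soon as it is ready, so playback can start before the full text is synthesized.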
  • colossal-rose-18633 (05/30/2025, 9:10 AM)

    I have deployed the LiveKit frontend using https://github.com/livekit-examples/agent-deployment.git, and the STT-LLM-TTS code is deployed on an EC2 server, but the two are not connecting.
  • steep-balloon-41261 (05/30/2025, 9:36 AM)

    This message was deleted.
  • helpful-machine-32005 (05/30/2025, 9:39 AM)

    Where does SYS_ADMIN come from?
    Copy code
    docker run --rm \
      --cap-add SYS_ADMIN \
      -e EGRESS_CONFIG_FILE=/out/config.yaml \
      -v ~/livekit-egress:/out \
      livekit/egress
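SYS_ADMIN isn't a value to fetch from anywhere: it is the name of a Linux kernel capability, granted to the container by the `--cap-add SYS_ADMIN` flag itself (the egress image needs it, e.g. for Chrome's sandbox). A docker-compose equivalent of the command above might look like this sketch (the service name is arbitrary):

```yaml
# docker-compose.yml equivalent of the docker run command above.
# SYS_ADMIN is a Linux capability name, not a credential to obtain.
services:
  egress:
    image: livekit/egress
    cap_add:
      - SYS_ADMIN
    environment:
      - EGRESS_CONFIG_FILE=/out/config.yaml
    volumes:
      - ~/livekit-egress:/out
```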
  • brief-vase-33757 (05/30/2025, 10:20 AM)

    Copy code
    Traceback (most recent call last):
      File "/Users/sandeep/projects/agntv1/lib/python3.10/site-packages/livekit/agents/stt/stt.py", line 246, in _main_task
        return await self._run()
      File "/Users/sandeep/projects/agntv1/lib/python3.10/site-packages/livekit/plugins/deepgram/stt.py", line 562, in _run
        ws = await self._connect_ws()
      File "/Users/sandeep/projects/agntv1/lib/python3.10/site-packages/livekit/plugins/deepgram/stt.py", line 622, in _connect_ws
        ws = await asyncio.wait_for(
      File "/opt/homebrew/Cellar/python@3.10/3.10.17_1/Frameworks/Python.framework/Versions/3.10/lib/python3.10/asyncio/tasks.py", line 445, in wait_for
        return fut.result()
      File "/Users/sandeep/projects/agntv1/lib/python3.10/site-packages/aiohttp/client.py", line 1409, in send
        return self._coro.send(arg)
      File "/Users/sandeep/projects/agntv1/lib/python3.10/site-packages/aiohttp/client.py", line 1021, in _ws_connect
        raise WSServerHandshakeError(
    aiohttp.client_exceptions.WSServerHandshakeError: 400, message='Invalid response status',
    url='wss://api.deepgram.com/v1/listen?model=nova-3&punctuate=true&smart_format=true&no_delay=true&interim_results=true&encoding=linear16&vad_events=true&sample_rate=16000&channels=1&endpointing=25&filler_words=true&profanity_filter=false&numerals=false&mip_opt_out=false&language=hi'
    {"pid": 51196, "job_id": "AJ_DPoiADppLTSW"}
    close event: type='close' error=STTError(type='stt_error', timestamp=1748600166.066468, label='livekit.plugins.deepgram.stt.STT',
    error=WSServerHandshakeError(RequestInfo(
      url=URL('wss://api.deepgram.com/v1/listen?model=nova-3&punctuate=true&smart_format=true&no_delay=true&interim_results=true&encoding=linear16&vad_events=true&sample_rate=16000&channels=1&endpointing=25&filler_words=true&profanity_filter=false&numerals=false&mip_opt_out=false&language=hi'),
      method='GET',
      headers=<CIMultiDictProxy('Host': 'api.deepgram.com', 'Authorization': 'Token cf89eed70e555301638f2f263d1c78a6518099d6', 'Upgrade': 'websocket', 'Connection': 'Upgrade', 'Sec-WebSocket-Version': '13', 'Sec-WebSocket-Key': 'HSgA5yKBhcQpD8BV7tTqbg==', 'Accept': '*/*', 'Accept-Encoding': 'gzip, deflate', 'User-Agent': 'Python/3.10 aiohttp/3.11.18')>,
      real_url=URL('wss://api.deepgram.com/v1/listen?model=nova-3&punctuate=true&smart_format=true&no_delay=true&interim_results=true&encoding=linear16&vad_events=true&sample_rate=16000&channels=1&endpointing=25&filler_words=true&profanity_filter=false&numerals=false&mip_opt_out=false&language=hi')),
      (), status=400, message='Invalid response status',
      headers=<CIMultiDictProxy('Content-Type': 'application/json', 'dg-error': 'Bad Request', 'Vary': 'origin, access-control-request-method, access-control-request-headers', 'Vary': 'accept-encoding', 'Access-Control-Allow-Credentials': 'true', 'Access-Control-Expose-Headers': 'dg-model-name,dg-model-uuid,dg-char-count,dg-request-id,dg-error', 'Content-Encoding': 'gzip', 'dg-request-id': '9817948c-84cd-43c7-a6fe-41022d6bee6e', 'Transfer-Encoding': 'chunked', 'Date': 'Fri, 30 May 2025 10:16:05 GMT')>),
      recoverable=False)
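A 400 on the handshake above often means the query string combines options the selected model doesn't accept (here it is worth checking against Deepgram's model matrix whether nova-3 supports language=hi). A small stdlib sketch for pulling the parameters out of the failing URL so they can be compared against the docs (the URL below is a shortened copy of the one in the traceback):

```python
from urllib.parse import urlsplit, parse_qs

# Shortened copy of the websocket URL from the failing handshake above.
url = ("wss://api.deepgram.com/v1/listen?model=nova-3&punctuate=true"
       "&smart_format=true&language=hi&sample_rate=16000&channels=1")

# Flatten the single-valued query parameters into a plain dict for inspection.
params = {k: v[0] for k, v in parse_qs(urlsplit(url).query).items()}
print(params["model"], params["language"])
```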
  • enough-sunset-14907 (05/30/2025, 10:20 AM)

    I have launched a sandbox LiveKit application and it's running fine. How do I integrate this setup on my server?
  • happy-angle-72232 (05/30/2025, 10:25 AM)

    When communicating using the SIP protocol via TCP, my telephony provider is asking for a media server IP to whitelist. Where can I find the media server IP?
  • steep-balloon-41261 (05/30/2025, 10:30 AM)

    This message was deleted.
  • rich-monitor-70665 (05/30/2025, 10:46 AM)

    In the Python SDK, when creating a room token, I want to reduce the time before a room is considered disconnected after all participants have left.
  • little-article-83676 (05/30/2025, 10:46 AM)

    Explain the VAD config options to a layman.
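In layman's terms, VAD options are mostly three kinds of knobs: how confident a frame must be before it counts as speech, how long speech must last before it "counts", and how long silence must last before a segment is cut off. A toy frame-based sketch of those knobs (illustrative only, not the Silero implementation; frame values stand in for per-frame speech probabilities):

```python
def detect_speech(frames, threshold=0.5, min_speech_frames=2, min_silence_frames=2):
    """Toy frame-based VAD. 'frames' is a list of per-frame speech
    probabilities in [0, 1]. Returns (start, end) index pairs.
    threshold          -> confidence needed for a frame to count as speech
    min_speech_frames  -> ignore speech blips shorter than this
    min_silence_frames -> keep a segment open until this much silence passes
    """
    segments, start, silence = [], None, 0
    for i, p in enumerate(frames):
        if p >= threshold:
            if start is None:
                start = i          # speech begins
            silence = 0
        elif start is not None:
            silence += 1
            if silence >= min_silence_frames:
                end = i - silence + 1  # end of the speech run, before silence
                if end - start >= min_speech_frames:
                    segments.append((start, end))
                start, silence = None, 0
    if start is not None and len(frames) - start >= min_speech_frames:
        segments.append((start, len(frames)))  # segment running at the end
    return segments

print(detect_speech([0.1, 0.9, 0.8, 0.1, 0.1, 0.9, 0.9, 0.9]))
```

Raising the threshold makes the detector stricter; raising the silence window makes it more patient with pauses mid-sentence.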
  • rich-monitor-70665 (05/30/2025, 10:47 AM)

    I mean RoomConfiguration(empty_timeout). (edited)
  • rich-monitor-70665 (05/30/2025, 10:48 AM)

    I mean RoomConfiguration(empty_timeout) in the Python SDK. (edited)
  • strong-furniture-4150 (05/30/2025, 10:49 AM)

    Can someone suggest the best TTS and STT for a product?
  • abundant-father-33863 (05/30/2025, 10:57 AM)

    I'm using a JSON file, and I need to receive all the SIP headers coming in on my inbound request. How can I read them in my AI agent, where I'm getting the SIP participant?
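Depending on how the inbound trunk is configured, SIP headers can be mapped onto the SIP participant's attributes, which the agent can then read. A stdlib sketch of filtering such a map, assuming (hypothetically) the headers were mapped under a `sip.h.` prefix; the attribute names below are made up for illustration:

```python
def sip_headers(attributes: dict) -> dict:
    """Pick SIP-header-derived entries out of a participant attribute map.
    Assumes headers were mapped into attributes under a 'sip.h.' prefix
    (the exact prefix depends on your trunk configuration)."""
    prefix = "sip.h."
    return {k[len(prefix):]: v for k, v in attributes.items() if k.startswith(prefix)}

# Hypothetical attribute map as an agent might see on a SIP participant:
attrs = {
    "sip.callID": "abc123",
    "sip.h.X-Customer-Id": "42",
    "sip.h.X-Campaign": "spring",
}
print(sip_headers(attrs))
```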
  • little-yacht-32020 (05/30/2025, 11:23 AM)

    Subject: Persistent "failed to retrieve region info" error with Python SDK
    Hello LiveKit peeps, I am encountering a persistent connection error when using the Python SDK and I'm hoping you can help, as I have exhausted all troubleshooting steps.
    The problem: when I run a Python script to connect to my LiveKit Cloud project, it consistently fails with the error livekit.rtc.room.ConnectError: engine: signal failure: failed to retrieve region info: error decoding response body: expected value at line 1 column 1.
    Project URL: wss://twindersagarstrip-xyhjwf0v.livekit.cloud
    Here is what I've discovered:
    * Browser connection works: I can connect to my project room (avatar_room_a) successfully from the same machine using the web app at meet.livekit.io. This proves my network path and credentials are fundamentally correct.
    * CLI token generation works: I can use the lk CLI tool to generate valid tokens for my project.
    * Python connection fails consistently: the Python script fails with the region info error under all of the following conditions:
      * Running on my local Windows machine.
      * Running inside a brand new, clean Gitpod cloud container (Linux).
      * Using a token generated by the Python AccessToken class (this sometimes gives a 401 Unauthorized instead, suggesting the SDK's token signing is also incompatible with my setup).
      * Using a known-good token generated by the lk CLI.
      * Connecting to the main project URL.
      * Connecting directly to the regional URL (wss://eu-central-1.livekit.cloud).
    * Things I've ruled out:
      * Local firewalls (the error persists even with Windows Firewall completely disabled).
      * Local proxies (no proxy environment variables are set).
      * A corrupted Python environment (the error persists in a fresh venv inside Gitpod).
    Since the browser (using the JS SDK) can connect but the Python SDK cannot, even from a pristine cloud environment, this strongly suggests a specific bug or incompatibility between the Python SDK's connection process and the cloud backend for my project. Here is the traceback from the last attempt in Gitpod, using a valid token generated by the CLI:
    Copy code
    ((venv) ) gitpod /workspace/sagar (main) $ python avatar_a_controller.py
    --- Full Avatar Control Script (using CLI token) ---
    Attempting to connect to <wss://twindersagarstrip-xyhjwf0v.livekit.cloud>...
    livekit::rtc_engine:392:livekit::rtc_engine - failed to connect: Signal(RegionError("error decoding response body: expected value at line 1 column 1")), retrying... (1/3)
    An error occurred: engine: signal failure: failed to retrieve region info: error decoding response body: expected value at line 1 column 1
    Traceback (most recent call last):
      File "/workspace/sagar/avatar_a_controller.py", line 39, in main
        await room.connect(LIVEKIT_URL, token)
      File "/workspace/sagar/venv/lib/python3.12/site-packages/livekit/rtc/room.py", line 394, in connect
        raise ConnectError(cb.connect.error)
    livekit.rtc.room.ConnectError: engine: signal failure: failed to retrieve region info: error decoding response body: expected value at line 1 column 1
    Could you please provide any insight into what might be causing this? Thank you for your help. 🙏🙏🙏
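To separate the region-info failure from token-signing issues, it can help to mint a minimal LiveKit-style JWT by hand and diff its claims against what the `lk` CLI emits. A stdlib sketch (HS256 over an iss/sub plus `video` grant claim set; the credentials and room name below are placeholders, not real keys):

```python
import base64, hashlib, hmac, json, time

def b64url(data: bytes) -> str:
    # JWT uses unpadded URL-safe base64.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_token(api_key: str, api_secret: str, identity: str, room: str) -> str:
    """Minimal HS256 JWT with a LiveKit-style 'video' grant, for comparing
    against tokens produced by the lk CLI (use placeholder credentials)."""
    header = {"alg": "HS256", "typ": "JWT"}
    now = int(time.time())
    claims = {
        "iss": api_key,    # API key
        "sub": identity,   # participant identity
        "nbf": now,
        "exp": now + 3600,
        "video": {"room": room, "roomJoin": True},
    }
    signing_input = (
        f"{b64url(json.dumps(header).encode())}.{b64url(json.dumps(claims).encode())}"
    )
    sig = hmac.new(api_secret.encode(), signing_input.encode(), hashlib.sha256).digest()
    return f"{signing_input}.{b64url(sig)}"

token = make_token("APIdemo", "secretdemo", "tester", "avatar_room_a")
print(token)
```

Paste the output into a JWT debugger next to a CLI-generated token: if the claim layouts match but one is rejected with a 401, the problem is likely key/secret mismatch rather than the signing code.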
  • steep-balloon-41261 (05/30/2025, 11:24 AM)

    This message was deleted.
  • adamant-airport-69140 (05/30/2025, 11:24 AM)

    Can I change the TTS for the Gemini Live API?
  • many-ram-27523 (05/30/2025, 11:51 AM)

    I am unable to hear what the remote participant is saying, even after being subscribed to the remote audio track. I am using the livekit_client Flutter package only.
    Copy code
    import 'dart:async';
    import 'dart:convert';
    import 'dart:io';

    import 'package:flutter/material.dart';
    import 'package:fluttertoast/fluttertoast.dart';
    import 'package:livekit_client/livekit_client.dart' as livekit;
    import 'package:permission_handler/permission_handler.dart';
    import 'package:uuid/uuid.dart';

    import '../../models/avatar_model.dart';
    import '../../services/cheercast_api_service.dart';
    import '../../theme/theme.dart';

    class VideoCallScreen extends StatefulWidget {
      final AvatarModel avatar;
      final String mode;
      final String? outfit;
      final String? topic;
      final String? subtopic;
      final String? level;

      const VideoCallScreen({
        super.key,
        required this.avatar,
        required this.mode,
        this.outfit,
        this.subtopic,
        this.topic,
        this.level,
      });

      @override
      State<VideoCallScreen> createState() => _VideoCallScreenState();
    }

    class _VideoCallScreenState extends State<VideoCallScreen> {
      final CheercastApiService _apiService = CheercastApiService();
      livekit.Room? _room;
      livekit.RemoteVideoTrack? _remoteVideoTrack;
      livekit.RemoteAudioTrack? _remoteAudioTrack;
      livekit.LocalVideoTrack? _localVideoTrack;
      bool _isLoading = true;
      String? _sessionId;
      Timer? _timer;
      int _callDuration = 0;
      dynamic _socket;
      final bool _canSubscribe = false;
      String? agentSid;

      @override
      void initState() {
        super.initState();
        _startVideoCall();
      }

      void _startTimer() {
        _timer = Timer.periodic(const Duration(seconds: 1), (_) {
          setState(() {
            _callDuration++;
          });
        });
      }

      String _formatDuration(int seconds) {
        final minutes = (seconds / 60).floor().toString().padLeft(2, '0');
        final secs = (seconds % 60).toString().padLeft(2, '0');
        return '$minutes:$secs';
      }

      Future<void> _requestPermission() async {
        var status = await Permission.bluetooth.request();
        if (status.isPermanentlyDenied) {
          print('Bluetooth Permission disabled');
        }
        status = await Permission.bluetoothConnect.request();
        if (status.isPermanentlyDenied) {
          print('Bluetooth Connect Permission disabled');
        }
        status = await Permission.microphone.request();
        if (status.isPermanentlyDenied) {
          print('Microphone Permission disabled');
        }
        status = await Permission.camera.request();
        if (status.isPermanentlyDenied) {
          print('Camera Permission disabled');
        }
        return;
      }

      Future<void> _startVideoCall() async {
        try {
          final sessionId = const Uuid().v4();
          /// TODO: Replace this with real backend userId when available
          const int userId = 21511;
          print("Getting to response");
          final group = await _apiService.getChatGroup(
            userId: userId,
            avatarCode: widget.avatar.avatarCode,
          );
          final channel = group['channel'];
          final socket = await _apiService.startSocket(
            chatGroupChannel: channel,
            sessionId: sessionId,
            userId: userId,
          );
          await _requestPermission();
          livekit.LiveKitClient.initialize();
          socket.stream.listen((data) async {
            print("Received data: $data");
            print("Received data type: ${data.runtimeType}");
            final received = jsonDecode(data);
            if (received['code'] != 200) {
              Fluttertoast.showToast(
                msg: received['message'] ?? 'An error occurred',
                toastLength: Toast.LENGTH_SHORT,
                gravity: ToastGravity.BOTTOM,
              );
              Navigator.pop(context);
            } else {
              if (received['data']['event'] == 'on_start_streaming') {
                final wssUrl = received['data']['metadata']['wss'];
                final token = received['data']['metadata']['token'];
                final room = livekit.Room(
                  roomOptions: const livekit.RoomOptions(adaptiveStream: false),
                );
                room.events.listen((event) async {
                  print("This is the event in room: $event");
                  if (event is livekit.TrackSubscribedEvent) {
                    if (event.track is livekit.RemoteVideoTrack) {
                      setState(() {
                        _remoteVideoTrack = event.track as livekit.RemoteVideoTrack;
                      });
                      print("Video track is muted: ${_remoteVideoTrack?.muted}");
                    } else if (event.track is livekit.RemoteAudioTrack) {
                      setState(() {
                        _remoteAudioTrack = event.track as livekit.RemoteAudioTrack;
                      });
                      _remoteAudioTrack?.enable();
                      _remoteAudioTrack?.mediaStreamTrack.enabled = true;
                      _remoteAudioTrack?.start();
                      await room.setSpeakerOn(true, forceSpeakerOutput: true);
                      await setSpeakerphoneOn(true);
                      print(
                        "Audio track found : ${_remoteAudioTrack?.isActive} && ${_remoteAudioTrack?.muted} ",
                      );
                    }
                  }
                });
                await room.prepareConnection(wssUrl, token);
                try {
                  // _localVideoTrack =
                  //     await livekit.LocalVideoTrack.createCameraTrack();
                  // final localAudioTrack = await livekit.LocalAudioTrack.create();
                  // await room.localParticipant?.publishVideoTrack(_localVideoTrack!);
                  // await room.localParticipant?.publishAudioTrack(localAudioTrack);
                  await room.localParticipant?.setCameraEnabled(true);
                  await room.localParticipant?.setMicrophoneEnabled(true);
                  await room.connect(
                    wssUrl,
                    token,
                    fastConnectOptions: livekit.FastConnectOptions(
                      camera: livekit.TrackOption(
                        enabled: true,
                        // track: _localVideoTrack,
                      ),
                      microphone: livekit.TrackOption(
                        enabled: true,
                        // track: localAudioTrack,
                      ),
                    ),
                  );
                } catch (error) {
                  print('Could not publish video/audio, error: $error');
                }
                final audioOuts = await livekit.Hardware.instance.audioOutputs();
                final audioIns = await livekit.Hardware.instance.audioInputs();
                print("Audio track outs: $audioOuts");
                print(
                  "Audio track selected out: ${room.selectedAudioOutputDeviceId} && ${_remoteAudioTrack?.isActive} && ${_remoteAudioTrack?.prevStats} && ${await _remoteAudioTrack?.getReceiverStats()}",
                );
                print("Audio track ins: $audioIns");
                setState(() {
                  _room = room;
                  _isLoading = false;
                  _sessionId = sessionId;
                });
              }
            }
          });
          await Future.delayed(const Duration(milliseconds: 350), () {
            final payload = {
              'event': 'start_video_call',
              'id': sessionId,
              'data': {
                'mode': widget.mode,
                if (widget.outfit != null) 'outfit': widget.outfit,
                if (widget.topic != null) 'topic': widget.topic,
                if (widget.subtopic != null) 'subtopic': widget.subtopic,
                if (widget.level != null) 'level': widget.level,
              },
            };
            socket.sink.add(jsonEncode(payload));
            print("Sent video call event");
            _startTimer();
          });
          setState(() {
            _socket = socket;
          });
        } catch (e, stacktrace) {
          debugPrint('Video Call Error: $e');
          debugPrintStack(stackTrace: stacktrace, label: "Video Call Stacktrace");
          if (mounted) {
            ScaffoldMessenger.of(context).showSnackBar(
              SnackBar(content: Text('Failed to start video call: $e')),
            );
            Navigator.pop(context);
          }
        }
      }

      setSpeakerphoneOn(bool speakerOn) async {
        if (Platform.isIOS) {
          if (livekit.Hardware.instance.speakerOn == false &&
              livekit.Hardware.instance.preferSpeakerOutput) {
            await livekit.Hardware.instance.setSpeakerphoneOn(false);
          }
        }
        await livekit.Hardware.instance.setSpeakerphoneOn(speakerOn);
        print("Speaker set to $speakerOn");
      }

      Future<void> _endCall() async {
        try {
          _timer?.cancel();
          _socket?.sink.add(
            jsonEncode({'event': 'stop_video_call', 'id': _sessionId}),
          );
          _room?.localParticipant?.setCameraEnabled(false);
          _room?.localParticipant?.setMicrophoneEnabled(false);
          await _room?.disconnect();
          _socket?.sink.close();
          _socket = null;
        } catch (e) {
          debugPrint('Error ending video call: $e');
        } finally {
          if (mounted) Navigator.pop(context);
        }
      }

      @override
      void dispose() {
        _timer?.cancel();
        _socket?.sink.add(
          jsonEncode({'event': 'stop_video_call', 'id': _sessionId}),
        );
        _room?.localParticipant?.setCameraEnabled(false);
        _room?.localParticipant?.setMicrophoneEnabled(false);
        _room?.disconnect();
        _socket?.sink.close();
        _socket = null;
        super.dispose();
      }

      Widget _buildVideo() {
        return Stack(
          children: [
            Positioned.fill(
              child: _remoteVideoTrack != null
                  ? livekit.VideoTrackRenderer(_remoteVideoTrack!)
                  : Center(
                      child: Text(
                        'Waiting for ${widget.avatar.name} to join...',
                        style: const TextStyle(
                          color: Colors.white70,
                          fontSize: 18,
                        ),
                      ),
                    ),
            ),
            // if (_localVideoTrack != null)
            //   Positioned(
            //     right: 10,
            //     top: 10,
            //     width: 120,
            //     height: 160,
            //     child: livekit.VideoTrackRenderer(_localVideoTrack!),
            //   ),
            Positioned(
              top: 16,
              left: 16,
              child: Container(
                padding: const EdgeInsets.symmetric(horizontal: 12, vertical: 6),
                decoration: BoxDecoration(
                  color: Colors.black45,
                  borderRadius: BorderRadius.circular(8),
                ),
                child: Text(
                  _formatDuration(_callDuration),
                  style: const TextStyle(color: Colors.white, fontSize: 16),
                ),
              ),
            ),
          ],
        );
      }

      @override
      Widget build(BuildContext context) {
        return Scaffold(
          backgroundColor: Colors.black,
          appBar: AppBar(
            title: Text('Video Call - ${widget.avatar.name}'),
            backgroundColor: AppTheme.primaryColor,
            actions: [
              IconButton(icon: const Icon(Icons.call_end), onPressed: _endCall),
            ],
          ),
          body: _isLoading
              ? const Center(child: CircularProgressIndicator())
              : _buildVideo(),
        );
      }
    }
  • steep-balloon-41261 (05/30/2025, 11:52 AM)

    This message was deleted.
  • steep-balloon-41261 (05/30/2025, 12:12 PM)

    This message was deleted.
  • early-afternoon-97327 (05/30/2025, 12:57 PM)

    Can I use the same number for inbound and outbound?
  • steep-balloon-41261 (05/30/2025, 1:47 PM)

    This message was deleted.
  • little-traffic-72206 (05/30/2025, 1:56 PM)

    Does DeleteRoom emit an event?
  • astonishing-kilobyte-42234 (05/30/2025, 2:15 PM)

    If I pay for LiveKit enterprise, will I be able to deploy the mesh network on my own infra?
  • steep-balloon-41261 (05/30/2025, 3:26 PM)

    This message was deleted.
  • miniature-nail-35141 (05/30/2025, 4:12 PM)

    @icy-policeman-64898 Can I pass metadata to the agent when connecting using the livekit-frontend example?
  • steep-balloon-41261 (05/30/2025, 4:17 PM)

    This message was deleted.
  • glamorous-byte-32596 (05/30/2025, 4:45 PM)

    Can I connect to Twilio using only the account SID and auth token, not a username and password?
  • ancient-hospital-67205 (05/30/2025, 4:46 PM)

    Do I need to create a LiveKit SIP trunk per phone number? One trunk for inbound and one for outbound per phone number? Or one inbound trunk per phone number and only one outbound trunk for all phone numbers?
  • creamy-judge-56458 (05/30/2025, 5:02 PM)

    Is it possible to inject context from outside into a room during a session, through the API or directly to the agent? In Python.
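Context is commonly pushed into a live session as a data message that the agent's data handler parses (published via the server API or from another participant). A stdlib sketch of one such application-level envelope; the `{"type": ..., "content": ...}` shape is a convention of this example, not a LiveKit API:

```python
import json

def make_context_payload(kind: str, content: dict) -> bytes:
    """Encode a context-injection message to publish as room data.
    The envelope shape is an application-level convention for this
    sketch, not something LiveKit prescribes."""
    return json.dumps({"type": kind, "content": content}).encode("utf-8")

def handle_data(payload: bytes) -> dict:
    """What an agent-side data handler would do with the received bytes."""
    msg = json.loads(payload.decode("utf-8"))
    assert msg["type"] == "context"
    return msg["content"]

# Round-trip: sender encodes, agent decodes and applies the context.
payload = make_context_payload("context", {"user_tier": "pro", "notes": "prefers Hindi"})
print(handle_data(payload))
```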