# ask-ai
m
I am trying to record the caller and agent separately. Here is my code for recording the caller:
```python
caller_req = lk_api.ParticipantEgressRequest(
    room_name=ctx.room.name,
    identity=caller_room_identity,
    screen_share=False,
    file_outputs=[lk_api.EncodedFileOutput(
        file_type=lk_api.EncodedFileType.OGG,
        filepath=file_path_caller,
        s3=lk_api.S3Upload(
            bucket=S3_RECORDING_BUCKET_NAME,
            region=S3_RECORDING_REGION,
            access_key=S3_RECORDING_ACCESS_KEY,
            secret=S3_RECORDING_SECRET_KEY,
        ),
    )],
)
await lkapi.egress.start_participant_egress(caller_req)
```
However, I get the following error:
```
livekit.api.twirp_client.TwirpError: TwirpError(code=invalid_argument, message=no supported codec is compatible with all outputs, status=400)
```
What is the issue?
t
Thanks for reaching out, Daniel. I'm an AI assistant and I'm here to help. Give me a moment while I research this topic.
I'm sorry to hear you're encountering the error "no supported codec is compatible with all outputs." This usually happens because the audio and video codecs chosen for encoding aren't compatible with every output configuration in your egress request. LiveKit checks codec compatibility across all outputs, and if it can't find a common codec, this error pops up. Here are some tips to help you fix this:
• Make sure all the outputs in your egress request support the same codec. For instance, double-check that the container you're using (like OGG) works with every output you've specified.
• If you're setting codecs explicitly, confirm that the codec (such as OPUS for audio) is accepted by all your outputs.
• Try using a preset configuration (like H264_720P_30), which comes with a set of codecs known to work well together.
• Review your output setup to ensure everything is consistent across outputs, like S3 and other file destinations.
If you'd like, I'm here to help walk you through adjusting your configuration to get things running smoothly. Just let me know!
Sources: pkg/config/pipeline.go | livekit/livekit_egress.pb.go | Docs::Home
m
I am not setting any codecs
I am using LiveKit in combination with Telnyx
t
Thanks for reaching out! Could you share a bit more about how you're using Telnyx with LiveKit? For example:
• Are you recording SIP calls that are routed through Telnyx?
• Are you using Telnyx for audio streams, video streams, or both?
Understanding your setup a little better will really help me assist you more effectively. Looking forward to your details!
Sources: lib/livekit/proto/livekit_egress_pb.rb | livekit-api/src/services/egress.rs | livekit/livekit_egress.twirp.go | HD voice for SIP | LiveKit Docs
m
This is what I have now:
```python
lkapi = lk_api.LiveKitAPI()

caller_req = lk_api.ParticipantEgressRequest(
    room_name=ctx.room.name,
    identity=caller_room_identity,
    screen_share=False,
    file_outputs=[lk_api.EncodedFileOutput(
        file_type=lk_api.EncodedFileType.MP4,
        filepath=file_path_caller,
        s3=lk_api.S3Upload(
            endpoint=S3_RECORDING_ENDPOINT,
            bucket=S3_RECORDING_BUCKET_NAME,
            region=S3_RECORDING_REGION,
            access_key=S3_RECORDING_ACCESS_KEY,
            secret=S3_RECORDING_SECRET_KEY,
        ),
    )],
)
print("Caller egress request...")
await lkapi.egress.start_participant_egress(caller_req)
print("Caller egress request done")
```
I am using Supabase for storage (locally) in combination with ngrok. However, I am not receiving any requests from LiveKit to my Supabase instance.
t
I'd love to understand a bit more about how you're using Supabase and ngrok in your setup so I can assist you better. For instance:
• Are you using Supabase as an S3-compatible storage backend?
• How exactly does ngrok fit into your configuration?
If you could share a few more details about your storage setup, it would really help me get to the root of the issue and find the best solution for you.
Sources: test/config-sample.yaml
m
• Yes
• Because I run Supabase locally for development
t
To help you configure LiveKit's `ParticipantEgressRequest` for S3-compatible storage backends like Supabase, here are some important steps to follow:
• Make sure to set the `s3` field in your output configuration (for example, in `file_outputs`) with these details:
Sources: lib/livekit/proto/livekit_egress_pb.rb | pkg/config/storage.go | protobufs/livekit_egress.proto | Egress API | LiveKit Docs
m
I am setting the `s3` field (see above!)
t
Since you're already setting the `s3` field, here are some common tips and troubleshooting steps that might help when using LiveKit's `ParticipantEgressRequest` with S3-compatible storage like Supabase:
• Ensure the correct endpoint: double-check that your `endpoint` is correctly formatted (for example, replace `<SUPABASE_STORAGE_ENDPOINT>` with your actual Supabase storage endpoint) and that it's accessible via ngrok if you're working locally.
• Force path style: it's often helpful to set `force_path_style=True` in the `S3Upload` configuration, since many S3-compatible services expect path-style addressing.
• Check permissions: make sure your access key and secret key have write permissions for the bucket you're targeting in Supabase.
• Inspect `EgressInfo`: if the egress doesn't go through, the `EgressInfo` response can provide detailed error messages and codes.
• Test connectivity: try tools like `curl` or the AWS CLI (configured with your custom endpoint) to confirm the endpoint is reachable and your credentials work as expected.
Please feel free to reach out if you'd like help debugging further or if you have any specific error messages you want to share. I'm here to help!
Sources: Egress output types | LiveKit Docs | livekit-protocol/livekit/protocol/egress.py | Community & Support | livekit/livekit_composite | DeepWiki | Egress & Media Processing | livekit/livekit_composite | DeepWiki | livekit/livekit_egress.pb.go
m
what's usually the latency until I have the recordings in my storage bucket?
t
I totally understand how important it is to know when your LiveKit egress recordings will show up in your S3-compatible storage bucket. The timing varies based on things like the size of the file, your network conditions, and how your storage backend is set up. While there isn't a precise timeframe, the upload starts right after the recording finishes, and if there are any hiccups, retries happen according to settings like `max_retries` and `retry_delay`. To keep an eye on everything, you can use webhooks to track the `egress_ended` event and see exactly when the upload wraps up. If you'd like, I'm here to help you set up those webhooks or chat about ways to make the upload process even smoother. Just let me know!
Sources: test/file.go | Quotas and limits | LiveKit Docs
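As a sketch of that webhook approach: the helper below assumes LiveKit posts a JSON body whose `event` field is `egress_ended` and whose `egressInfo.fileResults[].location` entries hold the final storage URLs. That payload shape is an assumption based on LiveKit's protobuf-to-JSON webhook format, so check it against the webhook docs for your server version; the sample IDs and URL are invented:

```python
import json

def uploaded_locations(body: str) -> list[str]:
    """Return the storage locations reported by an egress_ended webhook event."""
    payload = json.loads(body)
    if payload.get("event") != "egress_ended":
        return []  # ignore unrelated events (room_started, track_published, ...)
    info = payload.get("egressInfo", {})
    return [f.get("location", "") for f in info.get("fileResults", [])]

# Invented sample payload to exercise the helper:
sample = json.dumps({
    "event": "egress_ended",
    "egressInfo": {
        "egressId": "EG_hypothetical",
        "fileResults": [{
            "filename": "caller.mp4",
            "location": "https://example.supabase.co/storage/v1/s3/recordings/caller.mp4",
        }],
    },
})
print(uploaded_locations(sample))
# → ['https://example.supabase.co/storage/v1/s3/recordings/caller.mp4']
```

In production the body should also be verified against the `Authorization` header (the server SDKs ship a webhook receiver for this) before being trusted.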
m
Here is my code:
```python
lkapi = lk_api.LiveKitAPI()

caller_req = lk_api.ParticipantEgressRequest(
    room_name=ctx.room.name,
    identity=caller_room_identity,
    screen_share=False,
    file_outputs=[lk_api.EncodedFileOutput(
        file_type=lk_api.EncodedFileType.MP4,
        filepath=file_path_caller,
        s3=lk_api.S3Upload(
            endpoint=S3_RECORDING_ENDPOINT,
            bucket=S3_RECORDING_BUCKET_NAME,
            region=S3_RECORDING_REGION,
            access_key=S3_RECORDING_ACCESS_KEY,
            secret=S3_RECORDING_SECRET_KEY,
        ),
    )],
)
print("Caller egress request...")
res = await lkapi.egress.start_participant_egress(caller_req)
print("Caller egress request response:", res)
if res.error:
    logger.error(f"Caller egress request failed: {res.error}")
    raise Exception(f"Caller egress request failed: {res.error}")
print("Caller egress request done")
```
Why is this not working?
t
Hi there! Here are some common reasons why your `ParticipantEgressRequest` might not be working, and some tips to help you debug them:
• Incorrect S3 configuration: make sure the `S3Upload` object has the correct `endpoint`, `bucket`, `region`, `access_key`, and `secret`. It's always good to double-check these values for accuracy.
• Authentication issues: verify that your LiveKit API client is initialized with the right `apiKey` and `apiSecret`.
• Participant activity: make sure the participant with the given `identity` is actually in the room and actively publishing media like camera, mic, or screen share.
• Inspect `EgressInfo`: if the request doesn't go through, check the `EgressInfo` response for the `error` and `error_code` fields. These can give you clues about what might be going wrong.
• Backup storage: sometimes if the main S3 upload fails, the system saves the file locally as a fallback. You can check whether `backup_storage_used` is set to `true` in the `EgressInfo`.
If you'd like, I'm here to help you interpret any error messages or walk you through further debugging steps. Just let me know!
Sources: test/edge.go | packages/livekit-server-sdk/src/EgressClient.ts | livekit-api/src/services/egress.rs | Egress & Media Processing | livekit/livekit_composite | DeepWiki | Storage & Configuration | livekit/livekit_composite | DeepWiki