<@U084APK40NS> I am getting this error when using ...
# ask-ai
b
@millions-hair-24956 I am getting this error when using screen sharing:

```
index-69addefe5c531e10.js:1 Error starting screen share: rm: cannot publish track when not connected
    at ai.<anonymous> (http://localhost:3000/_next/static/chunks/3dbccb2b-9aae2d566ede3894.js:1:285921)
    at Generator.next (<anonymous>)
    at a (http://localhost:3000/_next/static/chunks/3dbccb2b-9aae2d566ede3894.js:1:66282)
```

```js
const tracks = useTracks();

useEffect(() => {
  if (roomState === ConnectionState.Connected) {
    localParticipant.setCameraEnabled(config.settings.inputs.camera);
    localParticipant.setMicrophoneEnabled(config.settings.inputs.mic);
    localParticipant.setScreenShareEnabled(true)
      .then(() => {
        console.log('Screen sharing started');
      })
      .catch((error) => {
        console.error('Error starting screen share:', error);
      });
    console.log(`Config: ${JSON.stringify(config)}`);
    console.log('Screen Settings:', config.settings.inputs.screen);
  }
}, [config, localParticipant, roomState]);

const agentVideoTrack = tracks.find(
  (trackRef) =>
    trackRef.publication.kind === Track.Kind.Video && trackRef.participant.isAgent
);
const localTracks = tracks.filter(
  ({ participant }) => participant instanceof LocalParticipant
);
const localVideoTrack = localTracks.find(
  ({ source }) => source === Track.Source.Camera
);
const localMicTrack = localTracks.find(
  ({ source }) => source === Track.Source.Microphone
);
const localScreenTrack = localTracks.find(
  ({ source }) => source === Track.Source.ScreenShare
);

console.log(`Local Screen Share tracks: ${localScreenTrack}`);
console.log(`Local Tracks: ${JSON.stringify(localTracks)}`);
```
g
Sure thing! I'm looking through the available data to answer your question. Hold on a second...
b
@millions-hair-24956 but this `useEffect(() => { if (roomState === ConnectionState.Connected) { ... } })` says that the room is connected
g
It seems likely that the `useEffect` is correctly checking for the `Connected` state, but there might be a timing issue or a problem with how `roomState` is being updated. Ensure that `roomState` accurately reflects the connection status by logging it before the `if` condition. Additionally, verify that `localParticipant` is correctly initialized and associated with the room. If `roomState` is indeed `Connected` but the error persists, check whether any asynchronous operations or state updates are affecting the timing of when `setScreenShareEnabled(true)` is called. Consider adding more detailed logging to track the sequence of events and confirm that the connection is fully established before attempting to publish the screen share track. Would you like more detailed guidance on debugging this issue or managing connection states?

Sources:
- https://livekit-users.slack.com/archives/C07FYCB7FRQ/p1728674567.746139
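The race described above (attempting to publish before the connection handshake has actually completed) is language-agnostic. A minimal, dependency-free Python sketch of the pattern, using a toy `Room` class (hypothetical, not the LiveKit SDK) to show why gating on a completion event is safer than a possibly stale state snapshot:

```python
import asyncio

class Room:
    """Toy stand-in for a realtime room whose connection completes asynchronously."""
    def __init__(self):
        self.connected = asyncio.Event()
        self.published = []

    async def connect(self):
        await asyncio.sleep(0.01)  # simulate the network handshake
        self.connected.set()

    async def publish_screen_share(self):
        if not self.connected.is_set():
            raise RuntimeError("cannot publish track when not connected")
        self.published.append("screen_share")

async def main():
    room = Room()
    asyncio.create_task(room.connect())

    # Publishing immediately fails: the call raced the handshake.
    try:
        await room.publish_screen_share()
    except RuntimeError as e:
        print("early publish failed:", e)

    # Waiting on the connection event guarantees the publish succeeds.
    await room.connected.wait()
    await room.publish_screen_share()
    print("published:", room.published)

asyncio.run(main())
```

The same idea applies in the React code above: make sure the effect only fires after the connection is genuinely established, rather than on a state value that may lag behind the actual connection.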
b
```python
async def start_track_egress_and_close(lkapi: api.LiveKitAPI, request: api.TrackEgressRequest):
    try:
        egress_info = await lkapi.egress.start_track_egress(request)
        logger.info(f"egress_info: {egress_info}")
    except Exception as e:
        logger.error(f"Error starting track egress: {e}")
    finally:
        await lkapi.aclose()

@ctx.room.on("track_subscribed")
def on_track_subscribed(
    track: rtc.Track,
    publication: rtc.TrackPublication,
    participant: rtc.RemoteParticipant,
):
    fileName = f"livekit/{ctx.room.name}/"
    print(f"subscribed to track {track.sid} and kind {track.kind} from {participant.identity}")
    if publication.source == rtc.TrackSource.SOURCE_SCREENSHARE:
        print("This is a screen share track.")
        fileName += "screen.mp4"
    elif publication.source == rtc.TrackSource.SOURCE_CAMERA:
        print("This is a camera video track.")
        fileName += "video.mp4"
    elif publication.source == rtc.TrackSource.SOURCE_MICROPHONE:
        print("This is a microphone audio track.")
        fileName += "audio.wav"

    if track.kind == rtc.TrackKind.KIND_AUDIO:
        asyncio.create_task(transcribe_track(participant, track))
    # elif track.kind == rtc.TrackKind.KIND_VIDEO:
    #     asyncio.create_task(handle_video(track))
    # else:
    #     print("track.kind is not audio and is ", track.kind)

    track_sid = track.sid
    logger.info(f"track_subscribed: {track_sid}")
    request = api.TrackEgressRequest(
        room_name=ctx.room.name,
        track_id=track_sid,
        file=api.DirectFileOutput(
            filepath=fileName,
            # filepath="livekit/" + ctx.room.name + track_sid + ".mp4",
            # filepath=f"livekit/{ctx.room.name}/{track_sid}.mp4",
            s3=api.S3Upload(
                access_key=os.getenv("AWS_ACCESS_KEY_ID"),
                secret=os.getenv("AWS_SECRET_ACCESS_KEY"),
                bucket=os.getenv("S3_BUCKET_NAME"),
            ),
        ),
    )
    lkapi = api.LiveKitAPI()
    asyncio.create_task(start_track_egress_and_close(lkapi, request))
```

@millions-hair-24956 The files are being stored as .webm and .wav.ogg; I want .wav and .mp4 only.
g
It seems likely that the issue with file formats arises from the default behavior of the `TrackEgress` API, which exports tracks in their native format. To ensure that audio is exported as `.wav` and video as `.mp4`, you may need to explicitly specify the desired output format in your egress request. According to the Track Egress documentation, the `TrackEgress` API exports tracks without transcoding, meaning the output format depends on the track's codec. If you require specific formats like `.wav` or `.mp4`, consider using `TrackCompositeEgress`, which allows transcoding to desired formats. You can specify the output format by setting the `fileType` in the `EncodedFileOutput` to `MP4` for video, and ensure the audio is processed to `.wav` if necessary. Would you like more detailed guidance on configuring `TrackCompositeEgress` for specific output formats?

Sources:
- Track Composite Egress
- Track Egress documentation
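To separate the naming question from the transcoding question, here is a hypothetical, dependency-free helper (not part of the LiveKit SDK) that maps a track's source to the intended filepath. Note the extension only expresses intent: with plain `TrackEgress` the real container still follows the track's codec (hence `.webm`/`.ogg`), so getting actual `.wav`/`.mp4` files requires an egress mode that transcodes, as described above.

```python
# Hypothetical mapping from track source to the desired output filename.
# The extension alone does not force a container; transcoding does.
SOURCE_TO_FILENAME = {
    "screen_share": "screen.mp4",
    "camera": "video.mp4",
    "microphone": "audio.wav",
}

def egress_filepath(room_name: str, source: str) -> str:
    """Return 'livekit/<room>/<name>' for a known source, else raise ValueError."""
    try:
        return f"livekit/{room_name}/{SOURCE_TO_FILENAME[source]}"
    except KeyError:
        raise ValueError(f"unsupported track source: {source}") from None

print(egress_filepath("demo-room", "microphone"))  # livekit/demo-room/audio.wav
```

Centralizing the mapping like this keeps the `track_subscribed` handler free of the `if`/`elif` chain and makes it obvious which sources are recorded at all.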
b
@millions-hair-24956 Can you write updated code?
g
If you're looking for further help, please follow up on one of the other channels available for community support!