# cloud
better-house-11852
You can use BufferCapturer, which is pretty simple to use.
important-psychiatrist-73895
@better-house-11852 thanks! Are there any examples of using it?
better-house-11852
I don't think I have an example of this, but you can:
1. let track = LocalVideoTrack.createBufferTrack(...)
2. Start repeatedly calling track.capture(...)
3. localParticipant.publishVideoTrack(track: track)
Be sure to capture at least one frame before publishing.
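Roughly, it comes together like this (untested sketch; exact signatures can vary between SDK versions, and the feed function is just a stand-in for wherever your frames come from):

```swift
import LiveKit
import CoreMedia

// 1. Create a buffer-backed local video track. The factory also accepts
//    name/source/options parameters, omitted here.
let track = LocalVideoTrack.createBufferTrack()

// 2. Repeatedly feed frames. `sampleBuffer` is whatever CMSampleBuffer you
//    already have in hand, e.g. from your own capture pipeline.
func feed(_ sampleBuffer: CMSampleBuffer) {
    track.capture(sampleBuffer)
}

// 3. After at least one frame has been captured, publish the track.
//    `localParticipant` is assumed to be room.localParticipant of a connected Room.
localParticipant.publishVideoTrack(track: track)
```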
important-psychiatrist-73895
@better-house-11852 thank you, this worked for video! Is there an equivalent for audio? Or do I need to create something similar to BufferCapturer, but for audio?
Also, I made a change locally to use the track source to determine the forScreenShare flag instead of hardcoding it to true. It's not a huge issue/bug, really, but I was wondering: a) do you think this is correct, and b) should I make a PR for it?
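For concreteness, the change is basically this one-liner (sketch; the actual property and enum names in the SDK may differ):

```swift
// Sketch: derive the flag from the track's source instead of hardcoding `true`.
// `source` here stands for the Track.Source the capturer was created for.
let forScreenShare = (source == .screenShareVideo)
```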
better-house-11852
Oh, nice find. I think I should put it in BufferCaptureOptions.
Buffer capturing isn't always screen share (probably like your use case).
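Something like this shape, maybe (purely hypothetical sketch of what moving the flag into the options could look like, not the actual SDK code):

```swift
// Hypothetical: let callers set the flag explicitly, with a non-screen-share default.
public struct BufferCaptureOptions {
    public let isScreenShare: Bool

    public init(isScreenShare: Bool = false) {
        self.isScreenShare = isScreenShare
    }
}
```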
What format of audio do you want to feed in?
important-psychiatrist-73895
@better-house-11852 it would be raw samples on iOS, which I think is PCM?
Overall, my use case is that I want access to the unencoded video and audio (for local recording). Ideally, I'd directly control the AVCaptureSession for video/audio myself and feed the data straight into the LiveKit session, rather than having it managed by the WebRTC library.
Alternatively, for audio: is there some way to get access to the raw unencoded audio from the AVCaptureSession? That would work for our use case too, I think. Not as ideal, but workable.
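To illustrate, the shape I'm after is roughly this (rough sketch; the local-recording side is elided, and the LiveKit calls assume the BufferCapturer approach from above):

```swift
import AVFoundation
import LiveKit

// Own the AVCaptureSession ourselves and tee every unencoded frame to
// (a) a local recorder and (b) the LiveKit buffer track.
final class CaptureController: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    private let session = AVCaptureSession()
    private let videoOutput = AVCaptureVideoDataOutput()
    let track = LocalVideoTrack.createBufferTrack()

    func start() throws {
        guard let device = AVCaptureDevice.default(for: .video) else { return }
        let input = try AVCaptureDeviceInput(device: device)
        if session.canAddInput(input) { session.addInput(input) }
        videoOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "capture.video"))
        if session.canAddOutput(videoOutput) { session.addOutput(videoOutput) }
        session.startRunning()
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // (a) hand the raw frame to a local recorder (e.g. AVAssetWriter), elided here
        // (b) feed the same unencoded frame into the LiveKit track
        track.capture(sampleBuffer)
    }
}
```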
better-house-11852
You can select the input device, but unfortunately there is no way to feed in audio samples at the moment. I think I can implement this, but it will take time since it's a lot of work.
important-psychiatrist-73895
@better-house-11852 instead of feeding in audio samples, is there some way to access the audio data in a callback or something, before it gets encoded? That would also work for our use case.
better-house-11852
@important-psychiatrist-73895 unfortunately, it's not possible at the moment. Audio capturing sits pretty deep inside WebRTC, and I'm not aware of a way to hook into it currently. The only way to feed in custom audio is to somehow create a virtual audio device and select that as the input to the SDK. I have a strong interest in implementing this, since it would open up many use cases, but it will take some time.