# ask-ai
**User:** LiveKit has video tracks, right? How do I, say, show a local image on the agent's video track?
**AI:** Thanks for reaching out, Jan. I'm an AI assistant and I'm here to help. Give me a moment while I research this topic.
To show a local image on an agent's video track in LiveKit, you can use the `VirtualBackground` processor from the `@livekit/track-processors` library. Here's how you can do it:

1. Import the required dependencies:
   ```js
   import { VirtualBackground } from '@livekit/track-processors';
   import { createLocalVideoTrack } from 'livekit-client';
   ```
2. Create a local video track:
   ```js
   const videoTrack = await createLocalVideoTrack();
   ```
3. Set up the virtual background processor:
   ```js
   const virtualBackground = VirtualBackground('path/to/your/image.jpg');
   ```
4. Apply the processor to the video track:
   ```js
   await videoTrack.setProcessor(virtualBackground);
   ```
5. Publish the track:
   ```js
   await room.localParticipant.publishTrack(videoTrack);
   ```

You can also dynamically update the background image:

```js
await virtualBackground.updateTransformerOptions({
  imagePath: 'path/to/new/image.jpg',
});
```

Ensure your browser supports WebGL 2 and OffscreenCanvas, and that the `@mediapipe/tasks-vision` library is available for background segmentation. Sources: src/transformers/BackgroundTransformer.ts | lib/CameraSettings.tsx
**User:** No, I mean not from the frontend. I want to play some video from the backend side (Python).
**AI:** If you want to play a video or image on a video track from the backend using Python, the LiveKit Python SDK has you covered. Let me walk you through it step by step:

1. Set up a media streamer: You can use a `MediaFileStreamer` helper class (defined below) to decode video or image frames with PyAV. For example, here's how you'd stream a video file:
```python
from livekit import rtc
import av

class MediaFileStreamer:
    def __init__(self, media_file):
        self._video_container = av.open(media_file)

    async def stream_video(self):
        for frame in self._video_container.decode(video=0):
            yield rtc.VideoFrame(
                width=frame.width,
                height=frame.height,
                type=rtc.VideoBufferType.RGBA,
                # Convert to RGBA so the bytes match the declared buffer type
                data=frame.to_ndarray(format="rgba").tobytes(),
            )
```
2. Create and publish tracks: Next, create your video and audio tracks and publish them to the room:

```python
video_source = rtc.VideoSource(width=1280, height=720)
video_track = rtc.LocalVideoTrack.create_video_track("video", video_source)
await room.local_participant.publish_track(video_track)
```
3. Synchronize audio and video: To keep everything in sync, use the `AVSynchronizer` (a fuller wiring sketch follows this answer):

```python
av_sync = rtc.AVSynchronizer(audio_source, video_source, video_fps=30)
```
4. Stream an image: Want to stream a static image instead? Tweak the `stream_video` method so it keeps sending the same frame over and over:
```python
async def stream_image(image_path):
    img = ...  # Load image as an RGBA numpy array
    while True:
        yield rtc.VideoFrame(
            width=img.shape[1],
            height=img.shape[0],
            type=rtc.VideoBufferType.RGBA,
            data=img.tobytes(),
        )
        await asyncio.sleep(0.1)
Don't forget to manage your resources by closing tracks and synchronizers when you’re done. If you have any questions or want me to clarify anything, just let me know—I’m here to help!
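For context on step 3 above, here is a minimal sketch of how the `AVSynchronizer` might tie the pieces together. The `push()`/`aclose()` names are assumptions based on recent `livekit-rtc` releases; check them against the version you have installed:

```python
# Sketch only: wiring a decoder to AVSynchronizer. Verify the
# AVSynchronizer keyword arguments and push()/aclose() methods against
# your installed livekit-rtc version.
from livekit import rtc

async def publish_media(room: rtc.Room, streamer) -> None:
    audio_source = rtc.AudioSource(sample_rate=48000, num_channels=1)
    video_source = rtc.VideoSource(width=1280, height=720)

    video_track = rtc.LocalVideoTrack.create_video_track("video", video_source)
    audio_track = rtc.LocalAudioTrack.create_audio_track("audio", audio_source)
    await room.local_participant.publish_track(video_track)
    await room.local_participant.publish_track(audio_track)

    av_sync = rtc.AVSynchronizer(
        audio_source=audio_source, video_source=video_source, video_fps=30
    )
    try:
        async for frame in streamer.stream_video():
            await av_sync.push(frame)  # assumed API: timestamps frames for sync
    finally:
        await av_sync.aclose()  # release synchronizer buffers
```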
**User:** Can you give me an example script?
**AI:** Here's a simple example script to play a video file on a video track from the backend using Python:
```python
import asyncio
from livekit import api, rtc
import av

class MediaFileStreamer:
    def __init__(self, media_file):
        self._video_container = av.open(media_file)

    async def stream_video(self):
        for frame in self._video_container.decode(video=0):
            yield rtc.VideoFrame(
                width=frame.width,
                height=frame.height,
                type=rtc.VideoBufferType.RGBA,
                data=frame.to_ndarray(format="rgba").tobytes(),
            )

async def main():
    # Just swap these out with your LiveKit server details
    LIVEKIT_URL = "your_livekit_url"
    API_KEY = "your_api_key"
    API_SECRET = "your_api_secret"
    ROOM_NAME = "your_room_name"
    VIDEO_FILE = "path_to_your_video.mp4"

    # Connect to the room (tokens are minted with livekit.api, not livekit.rtc)
    room = rtc.Room()
    token = (
        api.AccessToken(API_KEY, API_SECRET)
        .with_identity("backend")
        .with_grants(api.VideoGrants(room_join=True, room=ROOM_NAME))
        .to_jwt()
    )
    await room.connect(LIVEKIT_URL, token)

    # Create video track
    streamer = MediaFileStreamer(VIDEO_FILE)
    video_source = rtc.VideoSource(width=1280, height=720)
    video_track = rtc.LocalVideoTrack.create_video_track("video", video_source)
    await room.local_participant.publish_track(video_track)

    # Stream video frames
    async for frame in streamer.stream_video():
        await video_source.push(frame)

    # Clean up and disconnect
    await room.disconnect()

if __name__ == "__main__":
    asyncio.run(main())
```
Just replace the placeholders (`your_livekit_url`, `your_api_key`, `your_api_secret`, `your_room_name`, and `path_to_your_video.mp4`) with your actual LiveKit server info and the path to your video file. If you have any questions or want me to walk you through any part, just let me know. I'm here to help!
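A note on pacing: the loop above pushes frames as fast as PyAV can decode them, so a clip would play back too quickly. A minimal sketch of pacing by presentation timestamp, assuming PyAV's `frame.time` property (seconds as a float) and a hypothetical `push` callable standing in for the frame-delivery step:

```python
# Sketch: pace decoded frames by their presentation timestamps so the
# video plays at its native speed. frame.time is PyAV's pts expressed
# in seconds; `push` is a placeholder for whatever delivers the frame.
import asyncio
import time

async def stream_video_paced(container, push):
    start = time.monotonic()
    for frame in container.decode(video=0):
        if frame.time is not None:
            # Sleep until this frame's presentation time relative to start
            delay = frame.time - (time.monotonic() - start)
            if delay > 0:
                await asyncio.sleep(delay)
        await push(frame)
```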
**User:** I just want to stream an image. Can you change the example script?
**AI:** Here's an updated example script that streams a static image on a video track from the backend using Python:
```python
import asyncio
from livekit import api, rtc
import numpy as np
from PIL import Image

class ImageStreamer:
    def __init__(self, image_path):
        self.image = Image.open(image_path).convert("RGBA")
        self.image_data = np.array(self.image)

    async def stream_image(self):
        while True:
            yield rtc.VideoFrame(
                width=self.image_data.shape[1],
                height=self.image_data.shape[0],
                type=rtc.VideoBufferType.RGBA,
                data=self.image_data.tobytes(),
            )
            await asyncio.sleep(0.1)  # Stream at 10 FPS

async def main():
    # Replace with your LiveKit server details
    LIVEKIT_URL = "your_livekit_url"
    API_KEY = "your_api_key"
    API_SECRET = "your_api_secret"
    ROOM_NAME = "your_room_name"
    IMAGE_FILE = "path_to_your_image.png"

    # Connect to the room
    room = rtc.Room()
    token = (
        api.AccessToken(API_KEY, API_SECRET)
        .with_identity("backend")
        .with_grants(api.VideoGrants(room_join=True, room=ROOM_NAME))
        .to_jwt()
    )
    await room.connect(LIVEKIT_URL, token)

    # Create video track
    streamer = ImageStreamer(IMAGE_FILE)
    video_source = rtc.VideoSource(width=streamer.image.width, height=streamer.image.height)
    video_track = rtc.LocalVideoTrack.create_video_track("video", video_source)
    await room.local_participant.publish_track(video_track)

    # Stream image frames
    async for frame in streamer.stream_image():
        await video_source.push(frame)

    # Close resources
    await room.disconnect()

if __name__ == "__main__":
    asyncio.run(main())
```
Just swap out the placeholders (`your_livekit_url`, `your_api_key`, `your_api_secret`, `your_room_name`, and `path_to_your_image.png`) with your actual LiveKit server info and the path to your image file. If you have any questions or want me to walk you through any part, just let me know. I'm here to help!
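One detail worth noting about the `await asyncio.sleep(0.1)` pacing in the streamer: sleeping a fixed interval per frame drifts over time, because the per-frame work adds to each cycle. A small sketch of deadline-based pacing, using only plain asyncio (no LiveKit-specific APIs beyond what's shown above):

```python
# Sketch: schedule each frame against a fixed timeline instead of
# sleeping a constant interval, so per-frame work doesn't accumulate
# into drift. `make_frame` is a placeholder callable that builds (or
# reuses) an rtc.VideoFrame.
import asyncio
import time

async def paced_frames(make_frame, fps: float = 10.0):
    interval = 1.0 / fps
    start = time.monotonic()
    n = 0
    while True:
        yield make_frame()
        n += 1
        delay = (start + n * interval) - time.monotonic()
        if delay > 0:
            await asyncio.sleep(delay)
```

You would feed the yielded frames to the source exactly as in the script above.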
**User:** There's no push method for rtc.VideoSource.
**AI:** You're right that `rtc.VideoSource` doesn't have a `push` method. No worries though! You can use the `capture_frame` method to stream an image instead. Here's an updated example script:
```python
import asyncio
from livekit import api, rtc
from PIL import Image
import numpy as np

# Set dimensions for the video
WIDTH = 640
HEIGHT = 480

async def main():
    # Replace with your LiveKit server details
    LIVEKIT_URL = "your_livekit_url"
    API_KEY = "your_api_key"
    API_SECRET = "your_api_secret"
    ROOM_NAME = "your_room_name"
    IMAGE_FILE = "path_to_your_image.png"

    # Connect to the room
    room = rtc.Room()
    token = (
        api.AccessToken(API_KEY, API_SECRET)
        .with_identity("backend")
        .with_grants(api.VideoGrants(room_join=True, room=ROOM_NAME))
        .to_jwt()
    )
    await room.connect(LIVEKIT_URL, token)

    # Create VideoSource and LocalVideoTrack
    source = rtc.VideoSource(WIDTH, HEIGHT)
    track = rtc.LocalVideoTrack.create_video_track("image-track", source)
    await room.local_participant.publish_track(track)

    # Load the image
    image = Image.open(IMAGE_FILE).convert("RGBA").resize((WIDTH, HEIGHT))
    image_data = np.array(image)

    # Stream the image
    async def stream_image():
        while True:
            frame = rtc.VideoFrame(
                width=WIDTH,
                height=HEIGHT,
                type=rtc.VideoBufferType.RGBA,
                data=image_data.tobytes(),
            )
            source.capture_frame(frame)
            await asyncio.sleep(0.1)  # Stream at 10 FPS

    await stream_image()

    # Clean up and disconnect (not reached while stream_image loops forever)
    await room.disconnect()

if __name__ == "__main__":
    asyncio.run(main())
```
Just swap out the placeholders (`your_livekit_url`, `your_api_key`, `your_api_secret`, `your_room_name`, and `path_to_your_image.png`) with your actual LiveKit server info and the path to your image file. If you hit any snags or want to chat more about it, I'm here to help!
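If you want the published image to show up like a normal camera feed in client UIs, you can also pass publish options. `rtc.TrackPublishOptions` and `rtc.TrackSource.SOURCE_CAMERA` exist in the Python SDK, though it's worth double-checking the names against your installed version. Reusing `room` and `track` from the script above:

```python
# Sketch: tag the image track as a camera source so client layouts
# treat it like a regular webcam feed. Verify the option and enum
# names against your livekit-rtc version.
options = rtc.TrackPublishOptions(source=rtc.TrackSource.SOURCE_CAMERA)
await room.local_participant.publish_track(track, options)
```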
**User:** It works, but I got a warning that the memory usage is high:

```
2025-08-07 14:13:25,011 - WARNING livekit.agents - process memory usage is high {"memory_usage_mb": 536.0, "memory_warn_mb": 500, "memory_limit_mb": 0, "pid": 47285, "job_id": "AJ_USMFLRaZRutB"}
```
**AI:** Hey there! To help with that high memory usage warning, you can tweak your script a bit by managing resources properly and cutting down on memory overhead. Some tips:

1. Release resources properly: Close any video streams or buffers once you're done with them to free up memory. A quick example:
```python
video_stream = rtc.VideoStream(track)
try:
    async for event in video_stream:
        # Process video frames
        pass
finally:
    await video_stream.aclose()  # Frees memory buffers and video decoder instances
```
2. Optimize memory usage: If you're working with buffers, try to optimize them so you don't end up storing duplicate data. Like this:
```python
buffer = ReplayBuffer(
    capacity=1000,
    storage_device="cpu",  # Keeps data on CPU to save GPU memory
    optimize_memory=True   # Helps avoid duplicating state data
)
```
Also, keep an eye on your system’s memory use, and if you can, try lowering the video stream’s resolution or frame rate. If you want, I’m here to help with anything else you need! Sources: Vision Agent Quickstart | LiveKit Docs | lerobot/common/datasets/compute_stats.py
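Before reworking the streaming loop, it can help to confirm where the growth actually comes from. A small standard-library sketch (nothing LiveKit-specific) that periodically reports the biggest Python-level allocation sites:

```python
# Sketch: snapshot Python allocations on a timer. Note that tracemalloc
# only sees Python-level allocations; native buffers inside the LiveKit
# FFI layer won't show up here.
import asyncio
import tracemalloc

async def report_top_allocations(interval_s: float = 30.0, top_n: int = 5):
    tracemalloc.start()
    while True:
        await asyncio.sleep(interval_s)
        snapshot = tracemalloc.take_snapshot()
        for stat in snapshot.statistics("lineno")[:top_n]:
            print(stat)
```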
**User:** Integrate your suggestion above into the previous code.
**AI:** Hey there! Here's the updated script with memory optimization and proper resource management all set up for you:
```python
import asyncio
from livekit import api, rtc
from PIL import Image
import numpy as np

# Set dimensions for the video
WIDTH = 640
HEIGHT = 480

async def main():
    # Replace with your LiveKit server details
    LIVEKIT_URL = "your_livekit_url"
    API_KEY = "your_api_key"
    API_SECRET = "your_api_secret"
    ROOM_NAME = "your_room_name"
    IMAGE_FILE = "path_to_your_image.png"

    # Connect to the room
    room = rtc.Room()
    token = (
        api.AccessToken(API_KEY, API_SECRET)
        .with_identity("backend")
        .with_grants(api.VideoGrants(room_join=True, room=ROOM_NAME))
        .to_jwt()
    )
    await room.connect(LIVEKIT_URL, token)

    # Create VideoSource and LocalVideoTrack
    source = rtc.VideoSource(WIDTH, HEIGHT)
    track = rtc.LocalVideoTrack.create_video_track("image-track", source)
    await room.local_participant.publish_track(track)

    # Load the image
    image = Image.open(IMAGE_FILE).convert("RGBA").resize((WIDTH, HEIGHT))
    image_data = np.array(image)

    # Stream the image
    async def stream_image():
        while True:
            frame = rtc.VideoFrame(
                width=WIDTH,
                height=HEIGHT,
                type=rtc.VideoBufferType.RGBA,
                data=image_data.tobytes(),
            )
            source.capture_frame(frame)
            await asyncio.sleep(0.1)  # Stream at 10 FPS

    try:
        await stream_image()
    finally:
        # Properly release resources
        await source.aclose()
        await track.aclose()
        await room.disconnect()

if __name__ == "__main__":
    asyncio.run(main())
```
This version makes sure to close up resources like the video source and track nicely when you're done, so your memory stays happy. If you want me to tweak anything else, just give me a shout!
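One practical note: `stream_image()` never returns on its own, so the `finally` block only runs if the coroutine is cancelled. A sketch of running the loop as a task you can stop cleanly, using plain asyncio with the `stream_image` coroutine from above standing in as a placeholder:

```python
# Sketch: run the infinite streaming loop as a task and cancel it on a
# deadline (or any shutdown signal you prefer). Cancellation is what
# makes the try/finally cleanup in main() actually execute.
import asyncio

async def run_for(seconds: float, stream_image) -> None:
    task = asyncio.create_task(stream_image())
    try:
        await asyncio.sleep(seconds)
    finally:
        task.cancel()
        try:
            await task
        except asyncio.CancelledError:
            pass  # expected on cancellation
```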
**User:** How do I add the buffer optimization to the above?

```python
buffer = ReplayBuffer(
    capacity=1000,
    storage_device="cpu",  # Keeps data on CPU to save GPU memory
    optimize_memory=True   # Helps avoid duplicating state data
)
```
**AI:** `ReplayBuffer` is usually used in reinforcement learning or similar scenarios, so it isn't really a fit for streaming an image on a video track. But if you're aiming to save memory when sending the same frames repeatedly, a simple buffer that reuses the frame data instead of recreating it each time can help. Here's one way to add that kind of buffer optimization to your script:
```python
import asyncio
from livekit import api, rtc
from PIL import Image
import numpy as np

# Set dimensions for the video
WIDTH = 640
HEIGHT = 480

class FrameBuffer:
    def __init__(self, frame_data):
        self.frame_data = frame_data

    def get_frame(self):
        return self.frame_data

async def main():
    # Replace with your LiveKit server details
    LIVEKIT_URL = "your_livekit_url"
    API_KEY = "your_api_key"
    API_SECRET = "your_api_secret"
    ROOM_NAME = "your_room_name"
    IMAGE_FILE = "path_to_your_image.png"

    # Connect to the room
    room = rtc.Room()
    token = (
        api.AccessToken(API_KEY, API_SECRET)
        .with_identity("backend")
        .with_grants(api.VideoGrants(room_join=True, room=ROOM_NAME))
        .to_jwt()
    )
    await room.connect(LIVEKIT_URL, token)

    # Create VideoSource and LocalVideoTrack
    source = rtc.VideoSource(WIDTH, HEIGHT)
    track = rtc.LocalVideoTrack.create_video_track("image-track", source)
    await room.local_participant.publish_track(track)

    # Load the image once and create a buffer
    image = Image.open(IMAGE_FILE).convert("RGBA").resize((WIDTH, HEIGHT))
    image_data = np.array(image).tobytes()
    buffer = FrameBuffer(image_data)

    # Stream the image using the buffer
    async def stream_image():
        while True:
            frame = rtc.VideoFrame(
                width=WIDTH,
                height=HEIGHT,
                type=rtc.VideoBufferType.RGBA,
                data=buffer.get_frame(),
            )
            source.capture_frame(frame)
            await asyncio.sleep(0.1)  # Stream at 10 FPS

    try:
        await stream_image()
    finally:
        # Properly release resources
        await source.aclose()
        await track.aclose()
        await room.disconnect()

if __name__ == "__main__":
    asyncio.run(main())
```
Here, the `FrameBuffer` class holds onto the image data you load once, then reuses it for every frame. This keeps memory usage low and skips doing the same work over and over. If anything's unclear or you want to dive deeper, just let me know. I'm happy to help!
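Since the bytes never change, one could go a step further and build the `rtc.VideoFrame` itself only once, then hand the same object to `capture_frame` each tick. This assumes the SDK copies the pixel data on capture rather than retaining the Python object, which is worth verifying. Reusing `WIDTH`, `HEIGHT`, `buffer`, and `source` from the script above:

```python
# Sketch: construct the frame once and reuse it. Assumes capture_frame
# copies the pixel data into the native layer on each call (verify
# against your livekit-rtc version before relying on this).
frame = rtc.VideoFrame(
    width=WIDTH,
    height=HEIGHT,
    type=rtc.VideoBufferType.RGBA,
    data=buffer.get_frame(),
)

async def stream_image_reused():
    while True:
        source.capture_frame(frame)  # same frame object every tick
        await asyncio.sleep(0.1)
```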
**User:** Well, it doesn't help even if I release resources when the room is closed.

```
2025-08-07 15:08:45,469 - WARNING livekit.agents - process memory usage is high {"memory_usage_mb": 502.765625, "memory_warn_mb": 500, "memory_limit_mb": 0, "pid": 77059, "job_id": "AJ_CpqPv8ymG6au"}
2025-08-07 15:08:50,471 - WARNING livekit.agents - process memory usage is high {"memory_usage_mb": 505.15625, "memory_warn_mb": 500, "memory_limit_mb": 0, "pid": 77059, "job_id": "AJ_CpqPv8ymG6au"}
2025-08-07 15:08:55,473 - WARNING livekit.agents - process memory usage is high {"memory_usage_mb": 505.34375, "memory_warn_mb": 500, "memory_limit_mb": 0, "pid": 77059, "job_id": "AJ_CpqPv8ymG6au"}
2025-08-07 15:09:00,474 - WARNING livekit.agents - process memory usage is high {"memory_usage_mb": 505.375, "memory_warn_mb": 500, "memory_limit_mb": 0, "pid": 77059, "job_id": "AJ_CpqPv8ymG6au"}
2025-08-07 15:09:05,476 - WARNING livekit.agents - process memory usage is high {"memory_usage_mb": 505.546875, "memory_warn_mb": 500, "memory_limit_mb": 0, "pid": 77059, "job_id": "AJ_CpqPv8ymG6au"}
2025-08-07 15:09:10,477 - WARNING livekit.agents - process memory usage is high {"memory_usage_mb": 509.59375, "memory_warn_mb": 500, "memory_limit_mb": 0, "pid": 77059, "job_id": "AJ_CpqPv8ymG6au"}
```

It seems the memory keeps going up. Maybe something needs to be done with `self._video_source.capture_frame(frame)`.
Hey there! To help with the rising memory usage when using
capture_frame
, here are some friendly tips to optimize things and avoid memory leaks: 1. Release Resources Properly: Make sure to close the
VideoSource
instance after you're done by calling
await video_source.aclose()
. This helps free up memory buffers and other resources. 2. Use Try/Finally for Cleanup: It’s a good idea to wrap your streaming code in a
try/finally
block so cleanup happens even if something unexpected occurs. Here’s an example:
```python
try:
    while True:
        frame = rtc.VideoFrame(
            width=WIDTH,
            height=HEIGHT,
            type=rtc.VideoBufferType.RGBA,
            data=buffer.get_frame(),
        )
        source.capture_frame(frame)
        await asyncio.sleep(0.1)  # Stream at 10 FPS
finally:
    await source.aclose()
```
3. Avoid retaining frames: Process and discard frames as soon as you can. Holding onto frames longer than necessary can lead to memory issues.
4. Handle continuous streaming: Use a loop with suitable pauses, and make sure you can stop the loop when you're done streaming.

Following these steps should help keep your memory usage steady during streaming. If you want me to explain anything further or tweak your script, just let me know. I'm here to help! Sources: Vision Agent Quickstart | LiveKit Docs | Vision Agent Quickstart | LiveKit Docs | livekit-rtc/livekit/rtc/video_source.py | Audio and video | LiveKit Docs | Receiving and publishing tracks | LiveKit Docs
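For reference, a small standard-library sketch for watching memory from inside the process; `ru_maxrss` reports the *peak* resident set size (kilobytes on Linux, bytes on macOS), which complements the Python-level `tracemalloc` view shown earlier by also covering native allocations:

```python
# Sketch: periodically log peak RSS (Unix-only; the resource module is
# not available on Windows). Units differ by platform, hence the check.
import asyncio
import resource
import sys

async def log_peak_rss(interval_s: float = 5.0):
    scale = 1 if sys.platform == "darwin" else 1024  # bytes vs kilobytes
    while True:
        peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss * scale
        print(f"peak rss: {peak / (1024 * 1024):.1f} MiB")
        await asyncio.sleep(interval_s)
```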
**User:** Well, the `while True` loop itself makes the memory balloon. If I'm having a conversation for 10 minutes, it won't be closed, right?
**AI:** You're right that a continuous `while True` loop can cause memory issues over time, especially if your session runs for a long while. To help with this, you can switch to a more controlled streaming approach:

1. Use a timer or frame counter: Instead of looping forever, use a timer or frame counter to reset or manage the streaming periodically.
2. Release resources periodically: Close and recreate the `VideoSource` or other resources regularly to keep memory usage in check.
3. Example implementation:
```python
import asyncio
from livekit import api, rtc
from PIL import Image
import numpy as np

# Set dimensions for the video
WIDTH = 640
HEIGHT = 480

async def main():
    # Replace with your LiveKit server details
    LIVEKIT_URL = "your_livekit_url"
    API_KEY = "your_api_key"
    API_SECRET = "your_api_secret"
    ROOM_NAME = "your_room_name"
    IMAGE_FILE = "path_to_your_image.png"

    # Connect to the room
    room = rtc.Room()
    token = (
        api.AccessToken(API_KEY, API_SECRET)
        .with_identity("backend")
        .with_grants(api.VideoGrants(room_join=True, room=ROOM_NAME))
        .to_jwt()
    )
    await room.connect(LIVEKIT_URL, token)

    # Create VideoSource and LocalVideoTrack
    source = rtc.VideoSource(WIDTH, HEIGHT)
    track = rtc.LocalVideoTrack.create_video_track("image-track", source)
    await room.local_participant.publish_track(track)

    # Load the image
    image = Image.open(IMAGE_FILE).convert("RGBA").resize((WIDTH, HEIGHT))
    image_data = np.array(image).tobytes()

    # Stream the image with a frame counter
    frame_count = 0
    max_frames = 6000  # Example: Reset after 10 minutes at 10 FPS

    try:
        while frame_count < max_frames:
            frame = rtc.VideoFrame(
                width=WIDTH,
                height=HEIGHT,
                type=rtc.VideoBufferType.RGBA,
                data=image_data,
            )
            source.capture_frame(frame)
            frame_count += 1
            await asyncio.sleep(0.1)  # Stream at 10 FPS

            # Reset resources periodically
            if frame_count == max_frames:
                frame_count = 0
                await source.aclose()
                source = rtc.VideoSource(WIDTH, HEIGHT)

    finally:
        # Properly release resources
        await source.aclose()
        await track.aclose()
        await room.disconnect()

if __name__ == "__main__":
    asyncio.run(main())
```
This script resets the `VideoSource` after a set number of frames to help prevent memory buildup. Feel free to tweak the `max_frames` value to fit your use case. And if you want to chat more or need a hand with anything else, just let me know!
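One caveat with this reset approach (and a likely reason it changes little, as the next message shows): the recreated `VideoSource` is never attached to the already-published track, so the track keeps referencing the old source. If you do want periodic resets, you would also have to unpublish and republish. A hedged sketch; `unpublish_track`/`publish_track` are real SDK methods, but verify the exact signatures against your installed `livekit-rtc` version:

```python
# Sketch: replace the source AND the published track together, since a
# recreated VideoSource is not attached to an already-published track.
from livekit import rtc

async def reset_video(room: rtc.Room, old_track: rtc.LocalVideoTrack,
                      width: int, height: int):
    # Drop the old publication so remote participants detach cleanly
    await room.local_participant.unpublish_track(old_track.sid)

    # Build a fresh source/track pair and publish it
    new_source = rtc.VideoSource(width, height)
    new_track = rtc.LocalVideoTrack.create_video_track("image-track", new_source)
    await room.local_participant.publish_track(new_track)
    return new_source, new_track
```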
**User:** The solution didn't work:

```
2025-08-07 15:23:45,373 - WARNING livekit.agents - process memory usage is high {"memory_usage_mb": 507.546875, "memory_warn_mb": 500, "memory_limit_mb": 0, "pid": 84925, "job_id": "AJ_Eh7xb2MXWXWu"}
2025-08-07 15:23:50,375 - WARNING livekit.agents - process memory usage is high {"memory_usage_mb": 507.65625, "memory_warn_mb": 500, "memory_limit_mb": 0, "pid": 84925, "job_id": "AJ_Eh7xb2MXWXWu"}
2025-08-07 15:23:55,375 - WARNING livekit.agents - process memory usage is high {"memory_usage_mb": 515.71875, "memory_warn_mb": 500, "memory_limit_mb": 0, "pid": 84925, "job_id": "AJ_Eh7xb2MXWXWu"}
2025-08-07 15:24:00,377 - WARNING livekit.agents - process memory usage is high {"memory_usage_mb": 515.796875, "memory_warn_mb": 500, "memory_limit_mb": 0, "pid": 84925, "job_id": "AJ_Eh7xb2MXWXWu"}
2025-08-07 15:24:01,704 - DEBUG voc_livekit.plugins.static_avatar - Resetting avatar video track after 1000 frames {"pid": 84925, "job_id": "AJ_Eh7xb2MXWXWu"}
2025-08-07 15:24:05,379 - WARNING livekit.agents - process memory usage is high {"memory_usage_mb": 536.03125, "memory_warn_mb": 500, "memory_limit_mb": 0, "pid": 84925, "job_id": "AJ_Eh7xb2MXWXWu"}
2025-08-07 15:24:10,380 - WARNING livekit.agents - process memory usage is high {"memory_usage_mb": 540.328125, "memory_warn_mb": 500, "memory_limit_mb": 0, "pid": 84925, "job_id": "AJ_Eh7xb2MXWXWu"}
2025-08-07 15:24:15,381 - WARNING livekit.agents - process memory usage is high {"memory_usage_mb": 542.6875, "memory_warn_mb": 500, "memory_limit_mb": 0, "pid": 84925, "job_id": "AJ_Eh7xb2MXWXWu"}
2025-08-07 15:24:20,383 - WARNING livekit.agents - process memory usage is high {"memory_usage_mb": 546.734375, "memory_warn_mb": 500, "memory_limit_mb": 0, "pid": 84925, "job_id": "AJ_Eh7xb2MXWXWu"}
2025-08-07 15:24:25,384 - WARNING livekit.agents - process memory usage is high {"memory_usage_mb": 550.78125, "memory_warn_mb": 500, "memory_limit_mb": 0, "pid": 84925, "job_id": "AJ_Eh7xb2MXWXWu"}
2025-08-07 15:24:30,386 - WARNING livekit.agents - process memory usage is high {"memory_usage_mb": 550.84375, "memory_warn_mb": 500, "memory_limit_mb": 0, "pid": 84925, "job_id": "AJ_Eh7xb2MXWXWu"}
2025-08-07 15:24:35,388 - WARNING livekit.agents - process memory usage is high {"memory_usage_mb": 550.859375, "memory_warn_mb": 500, "memory_limit_mb": 0, "pid": 84925, "job_id": "AJ_Eh7xb2MXWXWu"}
2025-08-07 15:24:40,390 - WARNING livekit.agents - process memory usage is high {"memory_usage_mb": 550.875, "memory_warn_mb": 500, "memory_limit_mb": 0, "pid": 84925, "job_id": "AJ_Eh7xb2MXWXWu"}
2025-08-07 15:24:45,391 - WARNING livekit.agents - process memory usage is high {"memory_usage_mb": 550.890625, "memory_warn_mb": 500, "memory_limit_mb": 0, "pid": 84925, "job_id": "AJ_Eh7xb2MXWXWu"}
2025-08-07 15:24:47,463 - DEBUG voc_livekit.plugins.static_avatar - Resetting avatar video track after 1000 frames {"pid": 84925, "job_id": "AJ_Eh7xb2MXWXWu"}
2025-08-07 15:24:50,393 - WARNING livekit.agents - process memory usage is high {"memory_usage_mb": 573.0625, "memory_warn_mb": 500, "memory_limit_mb": 0, "pid": 84925, "job_id": "AJ_Eh7xb2MXWXWu"}
2025-08-07 15:24:55,395 - WARNING livekit.agents - process memory usage is high {"memory_usage_mb": 573.265625, "memory_warn_mb": 500, "memory_limit_mb": 0, "pid": 84925, "job_id": "AJ_Eh7xb2MXWXWu"}
2025-08-07 15:25:00,396 - WARNING livekit.agents - process memory usage is high {"memory_usage_mb": 574.859375, "memory_warn_mb": 500, "memory_limit_mb": 0, "pid": 84925, "job_id": "AJ_Eh7xb2MXWXWu"}
2025-08-07 15:25:01,217 - DEBUG livekit.agents - stream closed {"participant": "identity-UexH", "source": "SOURCE_MICROPHONE", "pid": 84925, "job_id": "AJ_Eh7xb2MXWXWu"}
```
**AI:** I don't have the answer you're looking for. You could also try asking your question:

- in one of the other Slack channels, or
- at https://deepwiki.com/livekit/livekit_composite, which is trained on all LiveKit source code.

If you find the answer, please post it here to help others!
**User:** Can I use AvatarRunner instead?