# ask-ai
r
I am using the iOS vision demo, but when I use screen share, it cannot see my screen. The video track doesn't seem to be working with Gemini Live vision. Why isn't this working?
t
Thanks for reaching out, Tech. I'm an AI assistant and I'm here to help. Give me a moment while I research this topic.
To address the issue with screen sharing in the iOS vision demo and its integration with Gemini Live, please ensure the following steps are completed:

1. Broadcast Upload Extension: Add a Broadcast Upload Extension target to your Xcode project to enable screen sharing outside the app, and make sure the app and the extension share the same app group.
2. Permissions: Verify that the necessary permissions (e.g., camera, microphone, and screen sharing) are added to the `Info.plist` files for both the app and the extension.
3. Screen Share API: Use the LiveKit screen share API by calling `setScreenShareEnabled` on the local participant in your Swift code (see the sketch after this list).
4. Gemini Live integration: Confirm that the backend agent is correctly integrated with Gemini Live and that real-time video input is enabled in the agent configuration.
5. Testing: Test the app on a physical iOS device, since simulators may not support all screen-sharing features, and monitor the logs for errors related to the Broadcast Extension.

If the issue persists, please let me know which specific step is causing trouble, and I can provide further assistance.
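Here is a minimal sketch of step 3, assuming the current LiveKit Swift SDK naming (`Room`, `RoomOptions`, `ScreenShareCaptureOptions`, and the `setScreenShare(enabled:)` convenience, which is the Swift spelling of `setScreenShareEnabled`). The server URL and token are placeholders, and option names may differ slightly between SDK versions:

```swift
import LiveKit

// Sketch only: enable screen share from the app side of the vision demo.
// Verify the names below against the SDK version your demo pins.
func startScreenShare() async {
    // Ask the SDK to capture the screen through the Broadcast Upload Extension,
    // which is required to share anything outside the app itself.
    let roomOptions = RoomOptions(
        defaultScreenShareCaptureOptions: ScreenShareCaptureOptions(
            useBroadcastExtension: true
        )
    )
    let room = Room(roomOptions: roomOptions)

    do {
        // Placeholder URL and token -- substitute your own LiveKit server URL
        // and a valid access token for the room the agent joins.
        try await room.connect(url: "wss://your-project.livekit.cloud", token: "<token>")

        // Publishes the screen-share video track. On iOS this brings up the
        // system broadcast picker so the user can tap "Start Broadcast".
        try await room.localParticipant.setScreenShare(enabled: true)
    } catch {
        print("Screen share failed: \(error)")
    }
}
```

The `useBroadcastExtension` flag matters here: without a Broadcast Upload Extension, iOS only allows capturing the app's own content, not the whole device screen.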
r
Audio Clip (2025-08-06 202600).m4a
Why? It’s not working
t
Hey, could you share a bit more about what happens when you try to use screen sharing? For example:
• Do you see the screen sharing dialog pop up?
• Are there any error messages, or does the app crash?
• Is the Broadcast Upload Extension set up and turned on?
Knowing these details will really help me figure out what's going on and find the best fix for you.
r
When I use the demo and try the video feature, it can see my face and the floor and whatever. But when I do screen share and hit Start Broadcast, it starts and everything seems set up, yet when I ask the AI "Can you see my screen? What do you see?", it gives incorrect answers. So it's obvious it is not reading the screen. I don't know if the video stream isn't being set or what, but why isn't it working?
t
I don't have the answer you're looking for. You could also try asking your question:
• in one of the other Slack channels, or
• at https://deepwiki.com/livekit/livekit_composite, which is trained on all LiveKit source code.
If you find the answer, please post it here to help others!
r
What you're saying makes no sense. I'm using the vision demo. It should just work. OK, when I hit Start Broadcast it starts the broadcast, but the video stream obviously isn't working, because when I ask "Do you see my screen? What's on my screen?", it says random stuff.
Hello?