# helpdesk
b
Goal: always provide a high enough resolution that text stays easily readable; reduce FPS rather than resolution.
---
1. Approach: leave LiveKit's default screen sharing simulcast behavior.
Capture window with resolution: 1728x1001px (in browser) and 2200x1400px (true px captured). The automatic simulcast layers are:
```json
[
    {
        "rid": "q",
        "scaleResolutionDownBy": 2,
        "maxBitrate": 150000,
        "maxFramerate": 3
    },
    {
        "rid": "h",
        "scaleResolutionDownBy": 1,
        "maxBitrate": 1500000,
        "maxFramerate": 15
    }
]
```
RESULT
Playback with resolution: 1341x831px --> “h” layer selected
Playback with resolution: 1260x786px --> “q” layer selected
So the automatic switching between the 2 qualities is working well. However, we don’t want the resolution to drop to half, as that is not good for sharing text documents. We prefer reducing the FPS down to 2 and keeping the resolution high. Therefore I have added an override for the simulcast layers in the next approach.
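The switching behavior above can be pictured with a small sketch. This is an illustrative heuristic only, not the SDK's or SFU's actual selection code: pick the lowest layer whose scaled-down height still covers the playback height.

```typescript
// Illustrative only: a plausible layer-selection heuristic, NOT the actual
// LiveKit/SFU implementation.
interface SimulcastLayer {
  rid: string;
  scaleResolutionDownBy: number;
}

function selectLayer(
  layers: SimulcastLayer[], // ordered from lowest ("q") to highest quality
  capturedHeight: number,   // encoder input height in px
  playbackHeight: number    // rendered element height in px
): string {
  for (const layer of layers) {
    // pick the first (lowest) layer that still covers the playback size
    if (capturedHeight / layer.scaleResolutionDownBy >= playbackHeight) {
      return layer.rid;
    }
  }
  // nothing covers the playback size: fall back to the highest layer
  return layers[layers.length - 1].rid;
}
```

Under this heuristic, with the default layers above (`q` at half resolution, `h` at full), a small playback element lands on `q` and a larger one on `h`, which matches the automatic switching seen here.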
2. Approach: provide a single manual simulcast layer to make sure the resolution stays high.
Capture window with resolution: 1728x1001px (in browser) and 2200x1400px (true px captured via webrtc-internals media source). Based on the manual config, the simulcast layers are:
```json
[
    {
        "rid": "q",
        "scaleResolutionDownBy": 1.2,
        "maxBitrate": 250000,
        "maxFramerate": 2,
        "priority": "medium",
        "networkPriority": "medium"
    },
    {
        "rid": "h",
        "scaleResolutionDownBy": 1,
        "maxBitrate": 1500000,
        "maxFramerate": 15
    }
]
```
RESULT
Playback 1 with resolution: 1773x1108px --> “q” layer selected ⚠️
Playback 2 with resolution: 1260x786px --> “q” layer selected
Intuitively I would expect the first playback to go to `h`, because the window being captured is smaller than the one being played back. However, because it’s a high-density display, the actual content being captured has a higher resolution than the logical pixels. So staying on the `q` layer is more or less expected, because the playback size is closer to the `q` layer than it is to the `h` layer, which is fine.
Note: do mind, though, that the playback sizes are in logical px, so the real px values should be about double that size, as it’s also a high-density display. Therefore it would be more or less expected to use the higher-quality stream here too.
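For reference, a manual layer like the “q” one above would be supplied via publish options, roughly as in this sketch. The preset dimensions are hypothetical stand-ins, and while `VideoPreset`, `setScreenShareEnabled`, and `screenShareSimulcastLayers` come from livekit-client, the exact constructor signature should be checked against your SDK version:

```typescript
import { Room, VideoPreset } from 'livekit-client';

// Hypothetical dimensions; mirrors the manual "q" layer above
// (roughly scaleResolutionDownBy 1.2, 250 kbps, 2 fps).
const lowFpsHighResLayer = new VideoPreset(1440, 834, 250_000, 2, 'medium');

async function startScreenShare(room: Room) {
  // publish options on setScreenShareEnabled override the simulcast layers
  await room.localParticipant.setScreenShareEnabled(true, undefined, {
    screenShareSimulcastLayers: [lowFpsHighResLayer],
  });
}
```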
3. Approach: provide multiple manual simulcast layers to make sure a higher frame rate is used if possible.
Capture window with resolution: 1728x1001px (in browser) and 2200x1400px (true px captured via webrtc-internals media source). Based on the manual config, the simulcast layers are:
```json
[
    {
        "rid": "q",
        "scaleResolutionDownBy": 1.2,
        "maxBitrate": 250000,
        "maxFramerate": 2
    },
    {
        "rid": "h",
        "scaleResolutionDownBy": 1.2,
        "maxBitrate": 750000,
        "maxFramerate": 6
    },
    {
        "rid": "f",
        "scaleResolutionDownBy": 1,
        "maxBitrate": 1500000,
        "maxFramerate": 15
    }
]
```
RESULT
Playback 1 with resolution: 1773x1108px --> “q” layer selected
Playback 2 with resolution: 1260x786px --> “q” layer selected
Even though the `h` quality has the same resolution as the `q` quality, I never get a switch to `h`. No matter what I do, it always stays on `q`, even though my network connection could easily handle more. I think this must somehow be related to the `adaptiveStream` logic being confused because both `h` and `q` have the same resolution, and no logic is applied to the different bitrate or frame rate settings on the layers.
Conclusion
I see 2 problems with the current logic that may be improved.
1. Captured px use real screen px, which means a captured stream’s px count is way higher than the logical size we’re capturing on a high-DPI display. However, I have the feeling the choice of the simulcast layer for playback is based on the logical pixels. Therefore, if you capture a window on a high-DPI screen and play it back on the exact same screen, the adaptive stream logic will likely select a lower-res layer, which is not expected.
2. If you have multiple simulcast layers with the same resolution but different bitrates and frame rates, the adaptive stream logic gets confused.
I’d love to get some feedback on those thoughts. I also wouldn’t rule out that I’m getting something wrong, as it’s a tricky topic.
After spending some more time with it, I saw that you have some code in place to address the high-DPI issue. However, my 15" MacBook Pro M1 returns `window.devicePixelRatio = 2`, which will fall back to `1`, which I think isn’t quite right.
Shouldn’t the statement here be `if (devicePixelRatio >= 2)`, or are you excluding devices with exactly `2` on purpose?
@dry-elephant-14928 or @polite-kilobyte-67570 do you have any input on the topic?
thanks for the info @fancy-wire-61616
if the server is picking a lower layer because of congestion
No it isn’t the server is idling around just waiting for something to do 😄
🙏🏽 1
e
For the pixelRatio, we chose that >2 value in a recent change to avoid changing default bandwidth consumption in desktop browsers, aiming only at mobile, where the pixel density is even higher. For your purposes, have you tried `adaptiveStream.pixelDensity = 'screen'`? That will use the device pixel ratio directly.
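For reference, `pixelDensity` is set on the room's adaptive stream settings; a minimal sketch:

```typescript
import { Room } from 'livekit-client';

// adaptiveStream accepts a settings object instead of a plain boolean;
// pixelDensity: 'screen' makes the SDK request layer dimensions based on
// physical pixels (element size multiplied by window.devicePixelRatio).
const room = new Room({
  adaptiveStream: { pixelDensity: 'screen' },
});
```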
🙇🏽 1
b
For your purposes, have you tried `adaptiveStream.pixelDensity = 'screen'`? That will use the device pixel ratio directly.
Yes, I saw that. However, I like the reasoning of not going 3x or higher on mobile devices. I might have to do a `pixelDensity` check myself and use `screen` as long as it’s `<= 2`.
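That check could be sketched like this (my reading of the idea above; the cap of 2 for denser displays is an assumption, not SDK behavior):

```typescript
// Use the real device pixel ratio ('screen') for ratios up to 2, but cap
// denser mobile-style displays (3x and up) at 2 to limit bandwidth.
// The cap value of 2 is an assumption, not something the SDK prescribes.
function pixelDensitySetting(devicePixelRatio: number): number | 'screen' {
  return devicePixelRatio <= 2 ? 'screen' : 2;
}
```

The result could then be passed as `adaptiveStream: { pixelDensity: pixelDensitySetting(window.devicePixelRatio) }` when creating the room.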
we chose that >2 value in a recent change to avoid changing default bandwidth consumption in desktop browsers
I don’t fully understand that move, as a) on desktop browsers bandwidth usually isn’t that big of a problem, and b) especially on desktop, people often share text files / documents, which need a high resolution to be readable.
e
It might be worth at that point passing the pixel density value directly then, to have control over the value used. For the bandwidth thing, it's more of a cost decision for the server side, iirc, since bumping up what layers users see by default is a big bump in cost. We can reexamine the default values when dz gets back.
🙇🏽 1
b
👍 thanks for the details
p
hi @best-parrot-43500, trying to follow up on this. When specifying `pixelDensity` yourself, the only problem remaining is the confusion between lower and higher framerates with the same resolution when using adaptiveStream, right?
b
@polite-kilobyte-67570 I guess yes, even though from a logical standpoint I’d still prefer using `if (devicePixelRatio >= 2)` over `if (devicePixelRatio > 2)`
p
got it, this was a conscious decision in order to not alter the previous default behaviour in unexpected ways for desktop devices. We’ll discuss your feedback internally, but the main idea behind the change was to improve experience on mobile devices with devicePixelRatio > 2, where the resolution pulled in was just too low to have a decent experience when using the default settings.
b
This was a conscious decision in order to not alter the previous default behavior in unexpected ways for desktop devices
Okay, I can fully understand that. Unexpected behavior / breaking change management is a huge topic and I fully understand your more defensive stance there.
🙌 1
@polite-kilobyte-67570 do you have any updates on the automatic selection of layers with the same resolution?
p
confirmed that we are currently only selecting the lowest layer on the server side in this scenario. In case you are able to use VP8 for the screen share, it will automatically construct temporal layers for you that should have the same effect as what you’re looking for. E.g.
```typescript
screenShareSimulcastLayers: [ScreenSharePresets.h720],
screenShareEncoding: ScreenSharePresets.h1080.encoding
```
in your options should result in two spatial layers with three temporal layers each. If you were to specify 16 as the original framerate, the other temporal layers would have 8 and 4 respectively.
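The framerate arithmetic described there (each temporal layer halving the one above it) can be sketched as:

```typescript
// Each additional VP8 temporal layer halves the framerate of the layer
// above it, e.g. a 16 fps base with 3 temporal layers gives 16, 8 and 4 fps.
function temporalFramerates(baseFps: number, layerCount: number): number[] {
  return Array.from({ length: layerCount }, (_, i) => baseFps / 2 ** i);
}
```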
👍 1
b
This sounds interesting. However, do the temporal layers also support dynacast, only sending the full 16fps (in this example) if at least one person consumes them?
f
Dynacast is only for spatial layers, David. Temporal layers cannot be turned off individually with VP8. So, as long as a spatial layer is consumed by some subscriber, dynacast will enable that spatial layer, and all temporal layers in that spatial layer will be enabled. However, note that the encoder could choose to turn off a temporal layer if it is hitting some constraint (CPU or upstream congestion).
b
hmm okay thanks for the input @fancy-wire-61616
p
@best-parrot-43500 we merged a PR regarding this into main. Once that is released you should be able to use your original setup with equally sized spatial layers, too.
b
nice @polite-kilobyte-67570 thanks! 👍
@polite-kilobyte-67570 coming back to this once more. Can you tell me in which cases the lower quality layer would be chosen? I would expect that to happen if there are bandwidth issues / congestion, correct?
p
yeah, what the PR addresses is the “maximum” layer for the adaptive stream dimension mapping before any bandwidth constraints kick in additionally.
b
perfect 👌
@polite-kilobyte-67570 Sorry for digging out this super old thread, but it seems like a rounding step is missing in the current `adaptive screen video dimension calculation` implementation. I’ve created an issue on GitHub. Do you want me to find the right spot and create a PR?
p
oh, I was under the impression this was fixed already in https://github.com/livekit/client-sdk-js/pull/846. are you using the latest client version?
b
I thought I had checked, but somehow I was still on `1.12.3`. Let me check with `1.13.2`.
Looking good, seems to be fixed now 👍
🙌 1
I think I can’t close / reject the issue on GitHub. I’m sorry, you’ll have to do that.
p
no worries, will do that!