Playing DASH video on a website depending on the size of the container (not the bandwidth)

I have a video in different resolutions (1200x900, 800x600, 400x300, 200x150) and a DASH manifest. I tried to embed the video on a responsive webpage using the Shaka or Video.js player. This works so far, but the rendition that gets played depends more on the bandwidth than on the size of the container it is playing in. For example:
Container size: 800x600, low bandwidth -> Playing the video 200x150 -> Ok
Container size: 1200x900, high bandwidth -> Playing the video 1200x900 -> Ok
Container size: 200x150, high bandwidth -> Playing the video 1200x900 -> Not ok, because it's not necessary
I want to prevent the last case because it increases my traffic, and on some devices/browsers the downscaling of the video looks really horrible.
This happens with both Shaka and Video.js. How can I tell the video player not to use a bigger rendition than the size of its container? Or is there another player which is able to do that?

I'm not sure how to achieve this using those players, but dash.js will certainly do it using the player setting limitBitrateByPortal.
The documentation is horrible, but search http://cdn.dashjs.org/latest/jsdoc/module-Settings.html#~AbrSettings__anchor for limitBitrateByPortal.
An example of how to use it is available at https://reference.dashif.org/dash.js/latest/samples/advanced/settings.html.
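As a minimal sketch of how enabling it might look, assuming a dash.js 3.x/4.x settings layout and a hypothetical manifest URL:

var video = document.querySelector('video');
var player = dashjs.MediaPlayer().create();
player.updateSettings({
    streaming: {
        abr: {
            limitBitrateByPortal: true,                // don't select renditions larger than the video element
            usePixelRatioInLimitBitrateByPortal: true  // optionally account for high-DPI displays
        }
    }
});
player.initialize(video, 'https://example.com/video.mpd', true); // hypothetical manifest URL

With this enabled, the ABR logic stops selecting renditions wider than the element, so a 200x150 container should no longer pull the 1200x900 stream.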

Related

MP4 with single video track and multiple resolutions

I have an MP4 which concatenates video from two separate camera streams of different resolutions. This plays back correctly in VLC, but Chrome and Windows Media Player fail to handle the change in resolution: the second half of the video is totally distorted.
Tools which read an MP4 file and provide technical data all report a single AVC1 video track of resolution #1.
ffprobe shows:
Duration: 00:00:34.93, start: 0.000000, bitrate: 825 kb/s
Stream #0:0(und): Video: h264 (High) (avc1), yuv420p, 2688x1520 (Resolution #1), 823 kb/s, SAR 189:190 DAR 15876:9025, 13.42 fps, 15 tbr, 90k tbn, 30 tbc
Deconstructing the MP4 (using onlinemp4parser.com and gpac.github.io/mp4box.js/test) shows one track atom (trak) which contains one media atom (mdia). Further down, inside the sample description atom (stsd), there are two AVC1 entries, and they describe the two resolutions correctly, e.g.:
trak -> mdia -> minf -> stbl -> stsd
AVC1 (showing resolution #1)
AVC1 (showing resolution #2)
These tools also show resolution #1 under the track header (tkhd).
Can someone comment on how the final playback resolution is determined? Or why VLC is able to correctly read the second AVC1 block out of the sample description? Is there a different way to express the change in resolution between the samples that browsers will interpret correctly?
The best way I've found to do this is to ensure the SPS/PPS (and VPS for H.265) NAL units precede every I-frame. This still isn't always perfect, because I assume the players just don't expect to handle the video parameters changing mid-stream. If anyone knows of a better way, please feel free to share.
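One hedged way to get those repeated parameter sets is to re-encode with libx264 and ask the encoder to emit SPS/PPS before every keyframe (a sketch, not the only route; the filenames are placeholders):

ffmpeg -i input.mp4 -c:v libx264 -x264-params repeat-headers=1 -c:a copy output.mp4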
Here's a quick reference of how some players handle this as of 12/5/2022:
Chrome:
Regular playback is good; seeking initially shows corruption, then resolves.
Firefox:
Regular playback does not resize the player on the transition, and the rightmost and bottom rows of pixels are stretched to fill the remainder. Seeking shows corruption that will eventually resolve itself, but the player still doesn't resize.
Edge:
Regular playback shows only the top-left corner on the transition, but will eventually correct itself. Seeking shows corruption, the size corrects itself, then playback just jumps to the end and remains corrupted.
Media Player:
Regular playback is good; seeking shows corruption (note you must click the spot you want to jump to rather than sliding the handle).
VLC is good in pretty much all cases. It's just much more resilient than most other players.

How to create YouTube-like seekbar preview images for HTML video

For many of the videos on YouTube, if one hovers over the seekbar, a small image pops up showing the frame at about that place in the video.
Is there some way to create this if using an HTML video element?
The thumbnails are typically contained in a separate media stream or 'track' that is created on the server side and delivered as part of the streamed video.
The client downloads this stream and when a user seeks, it displays the thumbnail image that is closest to the time the user is seeking to.
You can see a good example of how the player handles this with the dash.js reference player:
https://reference.dashif.org/dash.js/latest/samples/thumbnails/thumbnails.html
Generating the thumbnails on the fly in the browser would require the video to be delivered and decoded, and a frame rendered, at the point the user is seeking to. That is typically too much to do in the time available to be practical for streamed videos.
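For a plain HTML video element without a thumbnail track, the usual workaround is a server-generated sprite sheet. A minimal sketch, assuming a hypothetical thumbs.jpg laid out as a 10-column grid of 160x90 thumbnails, one per second of video, plus seekbar/preview elements of your own:

const video = document.querySelector('video');
const seekbar = document.querySelector('#seekbar');  // hypothetical seekbar element
const preview = document.querySelector('#preview');  // hypothetical absolutely-positioned popup div

const THUMB_W = 160, THUMB_H = 90, COLS = 10, SECS_PER_THUMB = 1;

seekbar.addEventListener('mousemove', (e) => {
  const rect = seekbar.getBoundingClientRect();
  // Map the cursor position to a time, then to a thumbnail index.
  const time = ((e.clientX - rect.left) / rect.width) * video.duration;
  const index = Math.floor(time / SECS_PER_THUMB);
  // Shift the sprite sheet so the matching tile shows through the popup.
  const x = (index % COLS) * THUMB_W;
  const y = Math.floor(index / COLS) * THUMB_H;
  preview.style.width = THUMB_W + 'px';
  preview.style.height = THUMB_H + 'px';
  preview.style.background = `url(thumbs.jpg) -${x}px -${y}px`;
  preview.style.left = e.clientX + 'px';
});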

How to set framerate on uv4l with external usb camera

I'm using an external USB camera plugged into my Raspberry Pi 3B+. Since I'm not using raspicam but the UVC driver, I can't just set the framerate in the config file like when you are using raspicam.
Is there a way to set it somewhere?
v4l2-ctl --set-parm=30
seems like it should change the fps; you also need to specify the device with
--device=
With
v4l2-ctl --set-fmt-video=width=1024,height=640
you can change the resolution. The changes, however, seem not to affect the video. The fps setting seems to have no effect at all, so the MJPEG stream stutters a lot, while WebRTC works great even when it's set to 5, for example. Changing the resolution only seems to upscale the image with zero quality improvement.
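Put together, a hedged example (assuming the camera enumerates as /dev/video0 and offers an MJPEG format):

v4l2-ctl --device=/dev/video0 --set-fmt-video=width=1024,height=640,pixelformat=MJPG --set-parm=30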

Why did 9gag migrate GIFs to HTML5 video

http://9gag.com/gif used to show the animations as GIFs; now they are HTML5 videos. What is the reasoning behind such a decision?
The reason is simply that video compresses better than GIF in many cases, particularly when the GIF is of some size or length.
Additionally, video can be streamed, which affects traffic and how soon display can start (almost right away); GIFs have to be loaded completely before they can be shown (or they will be shown slowly and progressively).
Now that most browsers are able to show video natively, video becomes a viable and desirable alternative to animated GIFs.

Rendering video on HTML5 CANVAS takes huge amount of CPU

I am using an HTML5 canvas for rendering video, but the rendering is taking a huge amount of CPU. I am using GtkLauncher (with WebKit 1.8.0) for rendering the video on the canvas.
Can someone please throw some light on this? Is video rendering on canvas not efficient for embedded systems?
Also, I would like to know whether there is a way in the HTML5 video tag to learn the video frame rate before I actually start to render the data on the canvas. I would need to know this because I would have to set the timer (used for drawing the video frames) to that same frame rate.
Thanks and Regards,
Souvik
Most likely the video rendering is not hardware-accelerated and needs to
decode in software
resize in software
You did not give system details, so this is just a guess. By poking at browser internals you can dig out the truth.
The video frame rate cannot be known beforehand, and in theory it can vary within one source. However, if you host the file yourself you can pre-extract this information using tools like ffmpeg/ffprobe and transfer the number in a side channel (e.g. using AJAX / JSON).
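A minimal sketch of that side-channel approach, assuming a hypothetical video-info.json (for example {"fps": 25}, produced from ffprobe's r_frame_rate field at upload time) served next to the video:

const video = document.querySelector('video');
const canvas = document.querySelector('canvas');
const ctx = canvas.getContext('2d');

// Fetch the pre-extracted frame rate, then draw frames onto the canvas at that rate.
fetch('video-info.json')  // hypothetical side-channel file
  .then((res) => res.json())
  .then(({ fps }) => {
    video.play();
    setInterval(() => {
      ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
    }, 1000 / fps);
  });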