I have an MP4 which concatenates video from two separate camera streams of different resolutions. It plays back correctly in VLC, but Chrome and Windows Media Player fail to handle the change in resolution - the second half of the video is totally distorted.
Tools which read an MP4 file and provide technical data all report a single AVC1 video track of resolution #1.
ffprobe shows:
Duration: 00:00:34.93, start: 0.000000, bitrate: 825 kb/s
Stream #0:0(und): Video: h264 High avc1, yuv420p, 2688x1520 (Resolution #1) , 823 kb/s, SAR 189:190 DAR 15876:9025, 13.42 fps, 15 tbr, 90k tbn, 30 tbc
Deconstructing the MP4 (using onlinemp4parser.com and gpac.github.io/mp4box.js/test) shows one track atom (trak) containing one media atom (mdia). Further down, inside the sample description atom (stsd) within the sample table, there are two AVC1 entries, and they describe the two resolutions correctly.
e.g.
trak -> mdia -> minf -> stbl -> stsd
AVC1 (showing resolution #1)
AVC1 (showing resolution #2)
These tools also show resolution #1 under the track header (tkhd).
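For reference, here's roughly how I'm reading the sample entries out with mp4box.js in the browser. The box-tree path below (moov.traks[0].mdia.minf.stbl.stsd.entries) is my assumption from poking at the parsed file rather than documented API, and the file name is a placeholder:
var mp4boxfile = MP4Box.createFile();
mp4boxfile.onReady = function (info) {
    // The track list reports a single video track at resolution #1...
    console.log('track:', info.tracks[0].video);
    // ...but the parsed box tree shows two sample entries under stsd
    // (path assumed from inspecting the parsed structure).
    var entries = mp4boxfile.moov.traks[0].mdia.minf.stbl.stsd.entries;
    entries.forEach(function (entry) {
        console.log(entry.type, entry.width + 'x' + entry.height);
    });
};
fetch('concatenated.mp4')                      // placeholder file name
    .then(function (res) { return res.arrayBuffer(); })
    .then(function (buf) {
        buf.fileStart = 0;                     // mp4box.js expects a fileStart offset
        mp4boxfile.appendBuffer(buf);
        mp4boxfile.flush();
    });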
Can someone comment on how the final playback resolution is determined? Or why VLC is able to correctly read the second AVC1 block out of the sample description? Is there a different way to express the change in resolution between the samples that browsers will interpret correctly?
The best way I've found to do this is by ensuring the SPS/PPS (and VPS for H.265) NAL units precede every I-frame. This still isn't always perfect because I assume the players just don't expect the video parameters to change mid-stream. If anyone knows of a better way, please feel free to share.
Here's a quick reference of how some players handle this as of 12/5/2022:
Chrome:
Regular playback good, seeking initially shows corruption then resolves.
Firefox:
Regular playback does not resize the player at the transition; the rightmost and bottom pixels of the video are stretched to fill the remainder. Seeking shows corruption that eventually resolves, but the player still doesn't resize.
Edge:
Regular playback shows only the top-left corner at the transition but eventually corrects itself. Seeking shows corruption, the size corrects itself, then playback jumps to the end and remains corrupted.
Media Player:
Regular playback good, seeking shows corruption (note you must click the spot you want to jump to rather than dragging the slider).
VLC is good in pretty much all cases. It's just much more resilient than most other players.
Related
I'm working on a short video in Blender, and the first few strips I imported were fine. I then imported another strip (and also tried re-recording it and importing that). For the new strip, the video is much shorter than the audio.
In fact, I did a quick calculation, and the video is exactly 8 times shorter (and faster).
I have looked it up, and the advice is to match the framerate in the settings, but doing that throws the other strips out of sync.
I was having the same issue. Changing the framerate seems to align everything to the new framerate but throws the other strips out of sync. However, if you add a Speed Control effect to the video strip, turn on 'Stretch to input strip length' in that effect, and stretch the video strip to match the audio strip, they line up. This person explains it better than I do: https://www.youtube.com/watch?v=-MZ4VXgEdzo
I have a video in different resolutions (1200x900, 800x600, 400x300, 200x150) and a DASH manifest. I tried to embed the video on a responsive webpage using the Shaka or Video.js player. This works so far, but the variant that gets played depends more on the bandwidth than on the size of the container it is playing in, for example:
Container size 800x600, low bandwidth -> plays the 200x150 video -> OK
Container size 1200x900, high bandwidth -> plays the 1200x900 video -> OK
Container size 200x150, high bandwidth -> plays the 1200x900 video -> not OK, because it's unnecessary
I want to prevent the last case because it increases my traffic and on some devices/browsers the downscaling of the video is really horrible.
This happens with both Shaka and Video.js. How can I tell the player not to use a video bigger than the size of its container? Or is there any other player which is able to do that?
Not sure how to achieve this using those players, but dash.js will certainly do it using the player option limitBitrateByPortal.
The documentation is horrible, but search http://cdn.dashjs.org/latest/jsdoc/module-Settings.html#~AbrSettings__anchor for limitBitrateByPortal.
An example of how to use it is available at https://reference.dashif.org/dash.js/latest/samples/advanced/settings.html.
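In short, it ends up looking something like this (the element ID and manifest URL here are placeholders, and the exact settings path may differ between dash.js versions):
var player = dashjs.MediaPlayer().create();
// Cap the selected representation at the size of the video element
// instead of choosing purely by available bandwidth.
player.updateSettings({
    streaming: {
        abr: {
            limitBitrateByPortal: true
        }
    }
});
player.initialize(document.querySelector('#videoPlayer'), 'manifest.mpd', true);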
I have a problem regarding a video that is not displayed correctly in the video.js player:
http://www.ulrichbangert.de/kakteen/zeitraffer_vjs.php?Idx=10
As you can see from the page source, the dimensions of the player are set to 640x480. The video has the same dimensions, which I verified by loading it into my local player and checking the properties. But at the left and right of the video there is a gap of a few pixels. The poster is displayed correctly without these gaps, which results in an ugly skip when the player switches from the video to the poster. The poster image is the last frame of the video.
Other videos like this one
http://www.ulrichbangert.de/orchid/zeitraffer.php?Idx=1
are playing fine without a skip but I can't find any difference between these and the faulty one.
My browser is Firefox 23.0.1, so the OGV version of the video is used.
Can anybody help?
Best regards - Ulrich
I am testing HTML5's video API. The plan is to play a video with an effect applied, like making it black and white. I have a video element and a canvas working together using a buffer: I take the current video frame and copy it to the scratch buffer, where I can process it. The problem is the rate at which it runs.
The Video API of HTML5 has the 'timeupdate' event. I tried using this to have the handler process frames, once for every frame, but it runs at a slower rate than the video.
Any ideas to speed up processing frames?
You can get much more frequent redraws by using requestAnimationFrame to determine when to update your canvas, rather than relying on timeupdate, which only updates every 200-250ms. It's definitely not enough for frame-accurate animation. requestAnimationFrame will update at most every 16ms (approx 60fps), but the browser will throttle it as necessary and sync with video buffer draw calls. It's pretty much exactly what you want for this sort of thing.
Even with higher frame rates, processing video frames with a 2D canvas is going to be pretty slow. For one thing, you're processing every pixel sequentially in the CPU, running Javascript. The other problem is that you're copying around a lot of memory. There's no way to directly access pixels in a video element. Instead, you have to copy the whole frame into a canvas first. Then, you have to call getImageData, which not only copies the whole frame a second time, but it also has to allocate the whole block of memory again, since it creates a new ImageData every time. Would be nice if you could copy into an existing buffer, but you can't.
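To make those two points concrete, here is a minimal sketch of the requestAnimationFrame plus 2D canvas approach (the element IDs are placeholders); note it still does all the per-pixel work on the CPU:
var video = document.getElementById('myvideo');
var canvas = document.getElementById('mycanvas');
var ctx = canvas.getContext('2d');

function drawFrame() {
    if (!video.paused && !video.ended) {
        // Copy the current video frame into the canvas...
        ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
        // ...then copy it again into an ImageData buffer for processing.
        var frame = ctx.getImageData(0, 0, canvas.width, canvas.height);
        var data = frame.data;
        for (var i = 0; i < data.length; i += 4) {
            // Quick grayscale: average the R, G and B channels.
            var gray = (data[i] + data[i + 1] + data[i + 2]) / 3;
            data[i] = data[i + 1] = data[i + 2] = gray;
        }
        ctx.putImageData(frame, 0, 0);
    }
    requestAnimationFrame(drawFrame);
}
requestAnimationFrame(drawFrame);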
It turns out you can do extremely fast image processing with WebGL. I've written a library called Seriously.js for exactly this purpose. Check out the wiki for a FAQ and tutorial. There's a Hue/Saturation plugin you can use - just drop the saturation to -1 to get your video to grayscale.
The code will look something like this:
var composition = new Seriously();
var effect = composition.effect('hue-saturation');
var target = composition.target('#mycanvas');
effect.source = '#myvideo';
effect.saturation = -1;
target.source = effect;
composition.go();
The big down side of using WebGL is that not every browser or computer will support it - Internet Explorer is out, as is any machine with old or weird video drivers. Most mobile browsers don't support it. You can get good stats on it here and here. But you can get very high frame rates on pretty large videos, even with much more complex effects.
(There is also a small issue with a browser bug that, oddly enough, shows up in both Chrome and Firefox. Your canvas will often be one frame behind the video, which is only an issue if the video is paused, and is most egregious if you're skipping around. The only workaround seems to be to keep forcing updates, even if your video is paused, which is less efficient. Please feel free to vote those tickets up so they get some attention.)
I am writing a simple video messenger-like application, and therefore I need to get frames of some compromise size so they fit into the available bandwidth while keeping the captured image undistorted.
To retrieve frames I am using the QTCaptureVideoPreviewOutput class, and I am successfully getting frames in the didOutputVideoFrame callback. (I need raw frames, mostly because I am using a custom encoder, so I just want to get "raw bitmaps".)
The problem is that for these new iSight cameras I am getting literally huge frames.
Luckily, these classes for capturing raw frames (QTCaptureVideoPreviewOutput) provide the setPixelBufferAttributes method, which allows me to specify what kind of frames I would like to get. If I am lucky enough to guess a frame size that the camera supports, I can specify it and QTKit will switch the camera into that mode. If I am unlucky, I get a blurred image (because it was stretched/shrunk) and, most likely, non-proportional.
I have searched through lists.apple.com and stackoverflow.com, and the answer is "Apple currently does not provide functionality to retrieve the camera's native frame sizes". Well, nothing I can do about that.
Maybe I should offer the most common frame sizes in the settings and let the user try them to see what works? But what are these common frame sizes? Where could I get a list of the frame dimensions that UVC cameras usually generate?
For testing my application I am using a UVC-compliant camera, but not an iSight. I assume not every user has an iSight either, and I am sure even different iSight models have different frame dimensions.
Or maybe I should switch the camera to its default mode, generate a few frames, see what sizes it produces, and at least have some proportions? This feels like a real hack and doesn't seem natural. And the image is most likely going to be blurred again.
Could you please help me: how have you dealt with this issue? I am sure I am not the first one to face it. What approach would you choose?
Thank you,
James
You are right, iSight camera produces huge frames. However, I doubt you can switch the camera to a different mode by setting pixel buffer attributes. More likely you set the mode of processing the frames in the QTCaptureVideoPreviewOutput. Take a look at QTCaptureDecompressedVideoOutput if you have not done it yet.
We also use the sample buffer to get the frame size. So, I would not say it's a hack.
A more natural way would be to make your own QuickTime component that implements your custom encoding algorithm. In that case QuickTime would be able to use it inside QTCaptureMovieFileOutput during the capture session. It would be the proper way, but also a hard one.