I'm working on a short video with Blender, and the first few strips I imported were fine. I then imported another strip (and also tried re-recording it and importing that). For the new strip, the video is much shorter than the audio.
In fact, I did a quick calculation, and the video is exactly 8 times shorter (and correspondingly faster).
I have looked this up, and the usual advice is to match the frame rate in the project settings, but doing that throws the other strips out of sync.
I was having the same issue. Changing the project frame rate seems to align everything to that frame rate but throws the other strips out of sync. Instead, add a Speed Control effect to the video strip, turn on 'Stretch to input strip length' in that effect, and stretch the video strip to match the audio strip; they then line up. This video explains it better than I can: https://www.youtube.com/watch?v=-MZ4VXgEdzo
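If you'd rather fix the clip itself than work around it in the sequencer, another option is to re-encode the new recording to a constant frame rate that matches your project before importing it. A rough ffmpeg sketch, assuming the clip is called clip.mp4 and the project runs at 24 fps (substitute your own file name, frame rate and quality settings):

# Re-encode the problem clip to a constant 24 fps so Blender reads its length correctly.
# "clip.mp4", 24 fps and CRF 18 are assumptions - adjust them to your project.
ffmpeg -i clip.mp4 -vf "fps=24" -c:v libx264 -crf 18 -c:a copy clip_24fps.mp4

The re-encoded clip should then import at the correct length without touching the project frame rate.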
I have an MP4 which concatenates video from two separate camera streams of different resolutions. This plays back correctly in VLC, but Chrome and Windows Media Player fail to handle the change in resolution: the second half of the video is completely distorted.
Tools which read an MP4 file and provide technical data all report a single AVC1 video track of resolution #1.
ffprobe shows:
Duration: 00:00:34.93, start: 0.000000, bitrate: 825 kb/s
Stream #0:0(und): Video: h264 (High) (avc1), yuv420p, 2688x1520 (resolution #1), 823 kb/s, SAR 189:190 DAR 15876:9025, 13.42 fps, 15 tbr, 90k tbn, 30 tbc
Deconstructing the MP4 (using onlinemp4parser.com and gpac.github.io/mp4box.js/test) shows one track atom (trak) which contains one media atom (mdia). Further down, inside the sample description atom (stsd) of the sample table, there are two AVC1 entries, and they describe the two resolutions correctly.
e.g.
trak -> mdia -> minf -> stbl -> stsd
AVC1 (showing resolution #1)
AVC1 (showing resolution #2)
These tools also show resolution #1 under the track header (tkhd).
Can someone comment on how the final playback resolution is determined? Or why VLC is able to correctly read the second AVC1 block out of the sample description? Is there a different way to express the change in resolution between the samples that browsers will interpret correctly?
The best way I've found to handle this is to ensure that the SPS/PPS (and VPS for H.265) NAL units precede every I-frame; if anyone knows of a better way, please feel free to share. Even this isn't always perfect, presumably because players simply don't expect the video parameters to change mid-stream.
Here's a quick reference of how some players handle this as of 12/5/2022:
Chrome: Regular playback is good; seeking initially shows corruption, then resolves.
Firefox: Regular playback does not resize the player on the transition, and the rightmost and bottom pixels of the video are stretched to fill the remainder. Seeking shows corruption that eventually resolves, but the player still doesn't resize.
Edge: Regular playback shows only the top-left corner on the transition but eventually corrects itself. Seeking shows corruption, the size corrects itself, then playback jumps to the end and remains corrupted.
Media Player: Regular playback is good; seeking shows corruption (note that you must click the spot you want to jump to rather than dragging the slider).
VLC is good in pretty much all cases. It's just much more resilient than most other players.
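If the per-player quirks above aren't acceptable, a blunt workaround is to stop expressing the resolution change at all: re-encode the concatenated file so the whole track uses a single resolution, leaving only one AVC1 entry in the stsd. A hedged ffmpeg sketch, assuming the combined file is input.mp4 and you want everything at resolution #1 (2688x1520); how cleanly ffmpeg decodes a file with two stsd entries can depend on the build, so verify the output:

# Scale everything to one resolution and re-encode, so the output carries a single
# AVC1 sample description. File names and CRF are assumptions.
ffmpeg -i input.mp4 -vf "scale=2688:1520,setsar=1" -c:v libx264 -crf 20 -movflags +faststart output.mp4

This trades some quality and encoding time for a file that every browser will treat as an ordinary single-resolution stream.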
I have some code that uses CreateJS/EaselJS to create a MovieClip containing a Tween that contains an MP4 video. MovieClip has a method called 'gotoAndPlay' that you can use to move the playhead to a certain frame number on the timeline. When I use this method to change the play position, the tweens work, but not the Tween that contains the MP4 movie: that object does not load, resulting in a blank video tag on the page, except for the first play-through of the clip. Once the MP4 video has been played, it won't play again if the position is set to it through gotoAndPlay. Any ideas on how to fix this, or what might be going wrong?
In ActionScript animations, FLV movies can be locked to the timeline. But in HTML Canvas animations, MP4 movies are not really fully-fledged "Animate" objects. They look the same for the most part but the integration is not as tight as in Flash.
Since the videos exist outside of the Canvas, you'll need to use jQuery or JavaScript to address them. This can be done by using the Code Snippets in the HTML5 Canvas - Components - Video folder.
As an advance warning, "seeking" to different locations in an MP4 video the way you described is not as reliable as it was in Flash. Browsers like Internet Explorer don't handle seeking well and may even crash. If frame-by-frame accuracy is important, you may get the best visual results by avoiding the video component and converting your movie to an actual MovieClip in Animate CC, though this will increase your file size significantly.
http://9gag.com/gif used to show the animations as GIFs; now they are HTML5 videos. What is the reasoning behind that decision?
The reason is simply that video compresses better than GIF in many cases, particularly when the GIF is large or long.
Additionally, video can be streamed, which affects traffic and lets display start almost right away; GIFs have to be loaded completely before they can be shown (or they will be shown slowly and progressively).
Now that most browsers are able to play video natively, video has become a viable and preferable alternative to animated GIFs.
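For reference, the conversion itself is a one-liner with ffmpeg, and the size difference is usually obvious. A small sketch, assuming the source is input.gif; the scale filter rounds odd dimensions down to even values (required for yuv420p H.264), and +faststart moves the index to the front of the file so playback can begin while it is still downloading:

# Convert an animated GIF to an H.264 MP4; usually a large size reduction.
ffmpeg -i input.gif -vf "scale=trunc(iw/2)*2:trunc(ih/2)*2" -pix_fmt yuv420p -movflags +faststart output.mp4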
I have here a rendered .mov video file with the raw codec at 10 frames per second. The video shows a camera that rotates around a house. If I open this file with QuickTime Player, I can move around the house by dragging the mouse over the video. It's like an interactive video.
Now I want to embed this functionality in my website with JavaScript. The problem is that I want to use HTML5 video, so I have to convert the .mov file into .avi or .mp4.
My problem is that if I do that, the video lags when I drag the mouse over it. Even if I just play it, it lags. How can I convert this video so that I get the same quality as the original?
Thanks in advance,
conansc
You could try using a GOP length of 1 (also known as using all I-frames). This makes it easier to play backwards. But you might need to just turn it into a series of still images, like JPEGs, and swap them to the screen as needed. Video formats are meant to be played forwards, at normal speed.
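Both options can be produced with ffmpeg. A rough sketch, assuming the source is house.mov; the file names and CRF value are placeholders:

# Option 1: intra-only MP4 (GOP length 1), which scrubs backwards far better than a long-GOP encode.
ffmpeg -i house.mov -c:v libx264 -g 1 -crf 18 -pix_fmt yuv420p house_intra.mp4

# Option 2: dump every frame to a JPEG and swap the images from JavaScript instead of seeking a video.
mkdir -p frames
ffmpeg -i house.mov -qscale:v 2 frames/frame_%04d.jpg

If the intra-only MP4 still stutters when scrubbing, the JPEG route usually gives the smoothest dragging, at the cost of more requests and bandwidth.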
I've had theft problems outside my house, so I set up a simple webcam to capture a frame every second with Dorgem (http://dorgem.sf.net).
Dorgem does offer a feature to use motion detection to only capture frames where something is moving on the screen. The problem is that the motion detection algorithm it uses is extremely sensitive. It goes off because of variations in color between successive shots on my cheap webcam, and it also goes off because the trees in front of the house are blowing in the wind. Additionally, the front of my house is a high traffic area so there is also a large number of legitimately captured frames.
Using Dorgem's motion detection, I still average capturing 2800 of the 3600 frames taken every hour. This is too much for me to search through to find where the interesting activity is.
I wish I could reposition the camera so that it would only capture the areas I'm interested in, which would make motion detection simpler; however, that is not an option for me.
Because my camera has a fixed position and every picture frames the same area in front of my house, I think I should be able to scan the images, figure out which ones have motion in some interesting region of the frame, and throw out all the others.
For example: if there's a change at pixel 320,240, then someone has stepped in front of my house and I want to see that frame; but if there's a change at pixel 1,1, it's just the trees blowing in the wind and the frame can be discarded.
I've looked at pdiff, a tool for finding diffs in sets of pictures, but it seems to be also focused on diffing the entire picture, rather than a specific region of it:
http://pdiff.sourceforge.net/
I've also looked at phash, a tool for calculating a hash based on human perception of an image, but it seems too complex:
http://www.phash.org/
I suppose I could implement it as a shell script, using ImageMagick's mogrify -crop to cherry-pick the region of each image I'm interested in, then running pdiff on those crops and using the result to pick out the interesting frames.
Any thoughts? ideas? existing tools?
Cropping and then using pdiff seems like the best choice to me.
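Here's a rough sketch of that approach as a shell script. It assumes the captures are JPEGs named frame*.jpg in the current directory and that the region of interest is a 200x200 box at offset +300+200; it uses ImageMagick's convert and compare (compare -metric AE reports the number of differing pixels), but perceptualdiff could be substituted for the comparison step:

#!/bin/sh
# Copy frames whose region of interest differs noticeably from the previous frame.
# REGION and THRESHOLD are assumptions - tune them for your camera and scene.
REGION="200x200+300+200"   # width x height + x offset + y offset of the interesting area
THRESHOLD=500              # differing-pixel count above which a frame is kept
mkdir -p interesting
prev=""
for f in frame*.jpg; do
    convert "$f" -crop "$REGION" +repage /tmp/cur.png
    if [ -n "$prev" ]; then
        # compare -metric AE prints the number of differing pixels on stderr
        diff=$(compare -metric AE /tmp/prev.png /tmp/cur.png null: 2>&1)
        if [ "${diff%.*}" -gt "$THRESHOLD" ] 2>/dev/null; then
            cp "$f" interesting/
        fi
    fi
    mv /tmp/cur.png /tmp/prev.png
    prev="$f"
done

Anything that lands in the interesting/ directory is a frame where the chosen region changed more than the threshold; everything else can be ignored.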