By default, browsers without APNG support will display the first frame of the sequence. Is there a way of choosing which frame is used as the fallback? In most cases the last frame would make more sense to me than the first.
You can't specify which frame is used as the fallback, but you can exclude the first frame from the animation so that APNG-aware browsers skip it. That first frame can then contain whatever you want to show to viewers without APNG support.
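In APNG terms, "skipping the first frame" means not placing an fcTL chunk before the default image's IDAT: APNG decoders then start the animation at the first fdAT frame, while non-APNG decoders still display the IDAT image as the fallback. A minimal sketch in plain Python (no third-party libraries) to check which case a given file falls into:

    import struct

    def default_image_is_animated(path):
        # Return True if an fcTL chunk appears before the first IDAT,
        # i.e. the default image is part of the animation. If False,
        # APNG decoders skip it and it serves only as the fallback.
        with open(path, "rb") as f:
            assert f.read(8) == b"\x89PNG\r\n\x1a\n"  # PNG signature
            while True:
                header = f.read(8)
                if len(header) < 8:
                    return False          # malformed: no IDAT found
                length, ctype = struct.unpack(">I4s", header)
                if ctype == b"fcTL":
                    return True           # first frame is animated
                if ctype == b"IDAT":
                    return False          # first frame is fallback-only
                f.seek(length + 4, 1)     # skip chunk data and CRC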
Old-school computer graphics sometimes produced animations (cycles and fades) without actually redrawing anything to video memory, purely by updating the color palette.
Is it possible to do this in an animated GIF? That is, can you optimise (reduce the file size of) the GIF by providing only a single frame of (significant) raster content, and have each (delayed) animation frame update colour values in the (global) palette instead?
The short answer is no.
According to the existing standard, a frame's local palette applies only to that frame's own image data: a frame must supply new raster data for its palette to be used, so a palette-only update is not possible.
One possible workaround is to define your own GIF Application Extension block (as Netscape did for its looping extension) to store the additional palettes and their time delays. Those extension blocks would appear after the frames whose data they affect.
The downside of this approach is that no one except your own decoder would support the palette cycling, unless your block type somehow becomes a new de-facto standard.
Nevertheless, your handcrafted GIFs would remain valid for all other GIF decoders (just without any palette cycling), as the standard requires decoders to silently ignore Application Extensions with identifiers unknown to them.
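For illustration only, here is a minimal Python sketch of how such a block could be serialized. The GIF89a layout (0x21 0xFF introducer, 11-byte identifier/auth-code header, length-prefixed data sub-blocks, 0x00 terminator) is from the standard; the "PALCYCLE" identifier and the payload format are invented for this example, and only your own decoder would understand them:

    def application_extension(identifier, auth, payload):
        # Build a GIF89a Application Extension block. identifier must be
        # exactly 8 bytes and auth exactly 3 bytes (Netscape's looping
        # extension uses b"NETSCAPE" + b"2.0").
        assert len(identifier) == 8 and len(auth) == 3
        out = bytearray(b"\x21\xff\x0b" + identifier + auth)
        for i in range(0, len(payload), 255):
            chunk = payload[i:i + 255]
            out.append(len(chunk))   # sub-block length prefix
            out += chunk
        out.append(0x00)             # block terminator
        return bytes(out)

    # Hypothetical palette-cycling payload: one delay byte (1/100 s)
    # followed by 256 RGB triples. This layout is made up.
    payload = bytes([10]) + bytes(768)
    block = application_extension(b"PALCYCLE", b"1.0", payload)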
I am trying to use PhantomJS image capture to take a screenshot of the page in the browser.
Each time I run the image capture function, the dimensions of the image are slightly different. For example, one run gives 1400x5185; if I open the same URL a few hours later, I get 1399x5185 or 1400x5186.
I have tried cropping from the top-left corner, but then the pixels are slightly skewed.
Note: the content of the page is always constant.
How do I always ensure that I get the same image dimensions without cropping away pixels?
Something probably changes on the page; otherwise there is no reason for PhantomJS to render different images.
You should check the differences between the images in detail. Ads are a likely culprit when they are not uniformly sized. Once you have identified the changing DOM elements, you can use casper.evaluate() to access the DOM and remove or hide those elements before capturing the screenshot.
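One quick way to pin down what changes is to diff two captures with Pillow; the file names below are placeholders, and cropping to the common size works around the one-pixel jitter:

    from PIL import Image, ImageChops  # pip install Pillow

    a = Image.open("capture1.png").convert("RGB")
    b = Image.open("capture2.png").convert("RGB")

    # difference() needs equal sizes, so crop both to the overlap.
    w, h = min(a.width, b.width), min(a.height, b.height)
    diff = ImageChops.difference(a.crop((0, 0, w, h)), b.crop((0, 0, w, h)))

    # Bounding box of all differing pixels, or None if identical.
    print(diff.getbbox())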
You could also fix the viewport size, to 1920x1080 for example, using casper.viewport(). If the page only scrolls vertically, then only the y-dimension should change; if you want to be sure, set the viewport size to 1400x5187.
I've been looking all over the place for a solution to this but haven't had success. I have a Sketchflow project and I want every Screen to scale to the browser resolution at runtime, that is, every element of the current Layout should scale to fit the screen.
Do you want the objects themselves to get bigger to fill the screen, or to spread out? For objects to get bigger, you can wrap the whole thing in a Viewbox.
I have an APNG animation with several frames, and some of the frames are exactly the same. Each of those frames carries its own image data in the file, where they could simply reuse the image data of an earlier identical frame and save some space.
Does the APNG specification provide a way to reuse the same image data for multiple frames?
If the duplicate frames have no different frames between them, you can eliminate all but one and increase the delay of the remaining frame accordingly. If there are different frames between them, then no: APNG has no capability of reusing image data across frames.
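The bookkeeping when collapsing a run of identical frames is just adding up their fcTL delays, which are (delay_num, delay_den) fractions. A sketch of that arithmetic, assuming you parse the fcTL chunks yourself:

    from fractions import Fraction

    def merge_delays(delays):
        # Sum the (delay_num, delay_den) pairs of the duplicate frames;
        # per the APNG spec a delay_den of 0 is treated as 100.
        total = sum(Fraction(num, den or 100) for num, den in delays)
        num, den = total.numerator, total.denominator
        # Both fields are 16-bit in the fcTL chunk.
        assert num <= 0xFFFF and den <= 0xFFFF, "rescale the fraction"
        return num, den

    # Three identical frames of 1/10 s each become one 3/10 s frame.
    print(merge_delays([(1, 10), (1, 10), (1, 10)]))  # (3, 10)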
I am writing a simple video-messenger-like application, so I need to get frames of some compromise size: small enough to fit into the available bandwidth, yet with the captured image not distorted.
To retrieve frames I am using the QTCaptureVideoPreviewOutput class, and I am successfully getting frames in the didOutputVideoFrame callback. (I need raw frames, mostly because I am using a custom encoder, so I just want to get "raw bitmaps".)
The problem is that for these new iSight cameras I am getting literally huge frames.
Luckily, these classes for capturing raw frames (QTCaptureVideoPreviewOutput) provide the method setPixelBufferAttributes, which lets you specify what kind of frames you would like to get. If I am lucky enough to guess a frame size that the camera supports, I can specify it and QTKit will switch the camera into that mode. If I am unlucky, I get a blurred image (because it was stretched or shrunk) that is most likely non-proportional as well.
I have been searching through lists.apple.com and stackoverflow.com, and the answer is "Apple currently does not provide functionality to retrieve the camera's native frame sizes". Well, nothing I can do about that.
Maybe I should offer the most common frame sizes in the settings, and the user has to try them to see what works for him? But what are these common frame sizes? Where could I get a list of the frame dimensions that UVC cameras usually generate?
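Something like the sketch below is what I have in mind; the size list is only my guess at modes UVC webcams commonly support, not a guarantee:

    # Rough guess at commonly supported UVC frame sizes (not guaranteed).
    COMMON_SIZES = [
        (160, 120), (176, 144), (320, 240), (352, 288),
        (640, 480), (800, 600), (1024, 768),
        (1280, 720), (1280, 1024), (1600, 1200), (1920, 1080),
    ]

    def best_fit(max_pixels, native_aspect):
        # Largest common size that fits the bandwidth budget and keeps
        # the camera's native aspect ratio, so frames are not distorted.
        fits = [(w, h) for w, h in COMMON_SIZES
                if w * h <= max_pixels and abs(w / h - native_aspect) < 1e-3]
        return max(fits, key=lambda s: s[0] * s[1], default=None)

    print(best_fit(500_000, 4 / 3))  # -> (800, 600)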
For testing my application I am using a UVC-compliant camera, but not an iSight. I assume not every user has an iSight either, and I am sure that even different iSight models have different frame dimensions.
Or maybe I should switch the camera to its default mode, generate a few frames, see what sizes it produces, and at least get the right proportions that way? This looks like a real hack and doesn't seem natural. And the image is most likely going to be blurred again.
Could you please tell me how you have dealt with this issue? I am sure I am not the first to face it. What approach would you choose?
Thank you,
James
You are right, the iSight camera produces huge frames. However, I doubt you can switch the camera to a different mode by setting pixel buffer attributes; more likely you are setting how the frames are processed in the QTCaptureVideoPreviewOutput. Take a look at QTCaptureDecompressedVideoOutput if you have not done so yet.
We also use the sample buffer to get the frame size. So, I would not say it's a hack.
A more natural way would be to write your own QuickTime component that implements your custom encoding algorithm. In that case QuickTime would be able to use it inside QTCaptureMovieFileOutput during the capture session. That would be the proper, but also the hard, way.