Can I duplicate a frame of an APNG animated image and have no noticeable increase in the file size?

I have an APNG animation with several frames, and it happens that some frames are exactly the same. Each of those frames carries its own image data in the file, when they could simply reuse the image data of an identical earlier frame and save some space.
Does anyone familiar with APNG know whether the specification provides a way to reuse the same image data for multiple frames?

If the duplicate frames are consecutive (no different frames between them), you can eliminate all but one and extend the frame delay of the remaining one to cover the removed frames. If there are different frames between them, then no: APNG has no mechanism for reusing the image data of an earlier frame.
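If the duplicates are consecutive, the merge itself is mechanical. Here is a minimal sketch of that step, assuming the animation has already been decoded by some APNG tool into frames of raw image bytes plus a per-frame delay; the Frame shape and the hashing choice are illustrative, not part of any particular library:

// Collapse runs of identical consecutive frames by summing their delays,
// so the surviving frame is shown for the combined duration.
import { createHash } from "node:crypto";

interface Frame { imageData: Uint8Array; delayMs: number; }

function collapseDuplicateRuns(frames: Frame[]): Frame[] {
  const out: Frame[] = [];
  let prevHash = "";
  for (const frame of frames) {
    const hash = createHash("sha256").update(frame.imageData).digest("hex");
    if (out.length > 0 && hash === prevHash) {
      // Same pixels as the frame we just kept: extend its delay instead of
      // storing the image data again.
      out[out.length - 1].delayMs += frame.delayMs;
    } else {
      out.push({ ...frame });
      prevHash = hash;
    }
  }
  return out;
}

Re-encoding the reduced frame list is then left to whatever APNG assembler you already use.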

3 vs 2 VkSwapchain images?

So a Vulkan swapchain basically has a pool of images, user-defined in number, that it allocates when it is created, and images from the pool cycle like this:
1. An unused image is acquired by the program from the swapchain
2. The image is rendered by the program
3. The image is delivered for presentation to the surface.
4. The image is returned to the pool.
Given that, of what advantage is having 3 images in the swapchain rather than 2?
(I'm asking because I received a BestPractice validation complaint that I was using 2 rather than 3.)
With 2 images, isn't it the case that one image is being presented (step 3) while the other is being rendered (step 2), and then they alternate those roles back and forth?
With 3 images, isn't it the case that one of the three will always be idle, particularly if the loop is locked to the refresh rate anyway?
I'm probably missing something.
So what happens if one image is being presented, one has finished being rendered by the GPU, and you have more work to render for later presentation? Well, if you're rendering to a swapchain image, that GPU work cannot start until the image currently being presented has been released.
Double buffering will therefore stall the GPU whenever the time to present a frame exceeds the time to render a frame. Yes, triple buffering will do the same if the render time is consistently shorter than the present time, but if your frame time is right on the edge of the present time, double buffering has a greater chance of wasting GPU time.
Of course, the downside is latency: triple buffering means that the image you display will be seen one frame later than with double buffering.
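To make the "right on the edge" case concrete, here is a toy timing model rather than Vulkan code: the vsync interval, the alternating render times, and the simplified rule that an image frees up one refresh after it is shown are all assumptions made for illustration. It counts how long the GPU sits waiting for a free swapchain image with 2 versus 3 images:

// Toy model of a vsync-locked (FIFO-style) swapchain.
const VSYNC_MS = 16.7;

// Simulate pushing `renderTimesMs` frames through a swapchain with `imageCount`
// images; return the total time the GPU spends waiting for an image.
function gpuStallMs(imageCount: number, renderTimesMs: number[]): number {
  let gpuFreeAt = 0;                    // when the GPU finishes its current frame
  let lastPresentTick = 0;              // vsync tick of the previously shown frame
  const imageReleaseAt: number[] = [];  // per frame: when its swapchain image frees up
  let stall = 0;

  renderTimesMs.forEach((renderMs, i) => {
    // The GPU must wait for the image used `imageCount` frames ago to be released.
    const imageReady = i >= imageCount ? imageReleaseAt[i - imageCount] : 0;
    const start = Math.max(gpuFreeAt, imageReady);
    stall += start - gpuFreeAt;
    const end = start + renderMs;

    // Present at the first vsync tick after rendering finishes, and strictly
    // after the previous frame's tick (frames are shown in order, one per tick).
    let tick = Math.ceil(end / VSYNC_MS) * VSYNC_MS;
    tick = Math.max(tick, lastPresentTick + VSYNC_MS);
    lastPresentTick = tick;

    imageReleaseAt.push(tick + VSYNC_MS);  // busy while displayed for one interval
    gpuFreeAt = end;
  });

  return stall;
}

// Render times that hover just around the refresh interval (jitter around ~16 ms).
const frames = Array.from({ length: 200 }, (_, i) => (i % 2 === 0 ? 14 : 18));
console.log("2 images, GPU stalled for", gpuStallMs(2, frames).toFixed(1), "ms");
console.log("3 images, GPU stalled for", gpuStallMs(3, frames).toFixed(1), "ms");

With these made-up numbers the two-image run spends far more time stalled than the three-image run, which mostly just pays the small per-frame idle that the vsync lock forces anyway.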

iPad frame max size is not enough

I'm developing an iPad application for 2D drawing.
I need a UIView.frame size of 4000x4000, but if I set a frame of size 4000x4000 the application crashes after a memory warning.
Right now I'm using a 1600x1000 frame size, and the user can add new objects (rectangles) to the frame. The user can also translate the frame along the x and y axes using a pan gesture in order to see or add new objects.
Do you have any suggestions? How can I tackle this problem?
Thanks
Well, I would suggest what video games have used for a long time: a tiled LOD mechanism, where tiles are rendered at higher resolution only when you zoom in toward them, and at a lower resolution when you are zoomed out.
If the drawing is based on shapes (rectangles, points, lines, or anything that can be represented by simple vector data), there is no reason to create a UIView for the entire size of the drawing. You just redraw the currently visible region as the user pans across the drawing, using the stored vector data. There is no persistent bitmapped representation of the drawing.
If you are using bitmap data for drawing (i.e. a Photoshop type of app), then you'll likely need a mechanism that caches off-screen data in secondary storage and loads it back onto the screen as the user pans across it. In either case, the UIView only needs to be as big as the physical screen.
Sorry, I don't have any iOS code examples for any of this; take it as a high-level abstraction and work from there.
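As a rough, framework-agnostic sketch of the vector-data approach (the Rect and Shape types and the drawRect callback are illustrative assumptions, not iOS API): keep the drawing as a list of shapes and, on every pan, redraw only the shapes that intersect the visible rectangle, translated into view coordinates.

interface Rect { x: number; y: number; width: number; height: number; }
interface Shape extends Rect { color: string; }

const intersects = (a: Rect, b: Rect): boolean =>
  a.x < b.x + b.width && b.x < a.x + a.width &&
  a.y < b.y + b.height && b.y < a.y + a.height;

// Redraw the part of the 4000x4000 "document" that is currently on screen.
function redrawVisible(
  shapes: Shape[],
  viewport: Rect,                              // in document coordinates
  drawRect: (r: Rect, color: string) => void,  // draws in view coordinates
): void {
  for (const s of shapes) {
    if (!intersects(s, viewport)) continue;    // off-screen: skip it entirely
    drawRect(
      { x: s.x - viewport.x, y: s.y - viewport.y, width: s.width, height: s.height },
      s.color,
    );
  }
}

Only the viewport-sized area is ever backed by pixels; the 4000x4000 extent exists only as coordinates in the shape list.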
Sounds like you want to be using UIScrollView.

About animating frame by frame with sprite files

I used to animate my CCSprites by iterating through 30 image files (rather big ones), and for each file I changed the CCSprite's texture to that image.
Someone told me that was not efficient and I should use spritesheets instead. But, can I ask why is this not efficient exactly?
There are two parts to this question:
Memory. OpenGL ES requires textures to have widths and heights that are powers of 2, e.g. 64x128, 256x1024, 512x512, etc. If an image doesn't comply, Cocos2D will automatically resize it to fit those dimensions by padding it with extra transparent space. With successive images being loaded in, you waste more and more memory. By using a sprite sheet, you already have all the images tightly packed to reduce that wastage.
Speed. Related to the above, it takes time to load an image and resize it. By calling 'load' only once, you speed the entire process up.
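To put rough numbers on the memory point, here is a sketch; the 300x200 frame size and the 2048x1024 sheet are assumed purely for illustration (only the 30-frame count comes from the question):

const BYTES_PER_PIXEL = 4;  // RGBA8888

// Smallest power of two >= n.
const nextPow2 = (n: number): number => 2 ** Math.ceil(Math.log2(n));

const frameW = 300, frameH = 200, frameCount = 30;

// Each frame loaded as its own texture gets padded up to power-of-two sides (512x256).
const paddedPerFrame = nextPow2(frameW) * nextPow2(frameH) * BYTES_PER_PIXEL;
const separateTotal = paddedPerFrame * frameCount;

// One sheet packs all 30 frames into a single power-of-two texture
// (a 2048x1024 sheet holds a 6x5 grid of 300x200 frames).
const sheetTotal = 2048 * 1024 * BYTES_PER_PIXEL;

console.log(`30 separate textures: ${(separateTotal / 1024 / 1024).toFixed(1)} MiB`);  // ~15.0
console.log(`One packed sheet:     ${(sheetTotal / 1024 / 1024).toFixed(1)} MiB`);     // 8.0

On top of the savings, the sheet is one file read and one texture upload instead of thirty.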

QTKit capture: what frame size to use?

I am writing a simple video messenger-like application, so I need frames of some compromise size: small enough to fit into the available bandwidth, but with the captured image not distorted.
To retrieve frames I am using the QTCaptureVideoPreviewOutput class, and I am successfully getting frames in the didOutputVideoFrame callback. (I need raw frames, mostly because I am using a custom encoder, so I just would like to get "raw bitmaps".)
The problem is that for these new iSight cameras I am getting literally huge frames.
Luckily, these classes for capturing raw frames (QTCaptureVideoPreviewOutput) provide the setPixelBufferAttributes method, which allows me to specify what kind of frames I would like to get. If I am lucky enough to guess a frame size that the camera supports, I can specify it and QTKit will switch the camera into that mode. If I am unlucky, I get a blurred image (because it was stretched/shrunk) and, most likely, a non-proportional one.
I have been searching through lists.apple.com and stackoverflow.com, and the answer is "Apple currently does not provide functionality to retrieve a camera's native frame sizes". Well, nothing I can do about that.
Maybe I should offer the most common frame sizes in the settings, and the user has to try them to see what works for him? But what are these common frame sizes? Where could I get a list of the frame dimensions that UVC cameras usually generate?
For testing my application I am using a UVC-compliant camera, but not an iSight. I assume not every user is using an iSight either, and I am sure that even between models iSight cameras have different frame dimensions.
Or maybe I should switch the camera to its default mode, generate a few frames, see what sizes it produces, and then at least I would have some proportions? This looks like a real hack and doesn't seem natural. And the image is most likely going to be blurred again.
Could you please help me: how have you dealt with this issue? I am sure I am not the first one to face it. What approach would you choose?
Thank you,
James
You are right, the iSight camera produces huge frames. However, I doubt you can switch the camera to a different mode by setting pixel buffer attributes. More likely you are setting how the frames are processed in the QTCaptureVideoPreviewOutput. Take a look at QTCaptureDecompressedVideoOutput if you have not done so yet.
We also use the sample buffer to get the frame size, so I would not say it's a hack.
A more natural way would be to write your own QuickTime component that implements your custom encoding algorithm. In that case QuickTime would be able to use it inside QTCaptureMovieFileOutput during the capture session. That would be the proper, but also the hard, way.
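One way to combine these ideas, sketched outside of any capture framework: measure the native frame size once (for example from the first sample buffer you receive), then derive the smaller sizes you request so they keep the same aspect ratio and the scaled frames stay proportional. The target widths below are illustrative, not a list of modes any camera is guaranteed to support.

interface FrameSize { width: number; height: number; }

function proportionalSizes(native: FrameSize, targetWidths: number[]): FrameSize[] {
  const aspect = native.width / native.height;
  return targetWidths
    .filter(w => w <= native.width)  // never upscale
    .map(w => ({ width: w, height: Math.round(w / aspect / 2) * 2 }));  // keep height even
}

// Example: a camera whose default mode reports 1280x1024 frames.
console.log(proportionalSizes({ width: 1280, height: 1024 }, [320, 640, 800, 1280]));
// -> 320x256, 640x512, 800x640, 1280x1024, all sharing the native 5:4 aspect ratio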

HTML5 Large canvas

I've noticed that when dynamically creating a large canvas (6400x6400), quite a lot of the time nothing gets drawn on it, whereas with a small canvas it works 100% of the time. However, as I don't know any better, I have no choice but to try to get the large canvas working correctly.
thisObj.oMapCanvas = jQuery( document.createElement('canvas') ).attr('width', 6400).attr('height', 6400).css('border','1px solid green').prependTo( thisObj.oMapLayer ).get(0);
// getContext and then drawing stuff here...
The purpose of the canvas is simply to draw a line between two nodes (images), which are inside a div container that can be dragged around (a viewport, I think people call it).
What I think may be happening is that resizing the canvas empties it, and that is interfering with the context drawing, since, as I said, it works every time when the canvas is a lot smaller.
Has anyone experienced this before and/or know of any possible solutions?
That is an enormous canvas. 6400 x 6400 x 4 bytes per pixel is 156 MB, and your implementation may need to allocate two or more buffers of that size for double buffering, or allocate video memory of that size as well. It's going to take a while to allocate and clear all that memory, and such an allocation isn't guaranteed to succeed. Is there a reason you need such an enormous canvas? You could instead try sizing your canvas to be only as large as necessary to draw the line between those two divs, or you could try using SVG instead of a canvas.
Another possibility would be to divide your canvas up into large tiles, and only render the tiles that are actually visible on the screen. Google Maps does this with images, loading only the images for the portion of the map that is currently visible (plus some extra on each side, so that when you scroll you won't need to wait for it to render), maintaining the illusion of an enormous canvas while really only rendering something a bit bigger than the window.
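A minimal sketch of the "only as large as necessary" suggestion: size the canvas to the bounding box of the two nodes and draw the line in canvas-local coordinates. How the two node elements and their container are obtained is assumed here, not taken from the asker's markup.

function connectNodes(container: HTMLElement, a: HTMLElement, b: HTMLElement): void {
  // Node centres relative to the draggable container.
  const centre = (el: HTMLElement) => ({
    x: el.offsetLeft + el.offsetWidth / 2,
    y: el.offsetTop + el.offsetHeight / 2,
  });
  const p1 = centre(a);
  const p2 = centre(b);

  // Bounding box of the two points, padded a little for the line width.
  const pad = 2;
  const left = Math.min(p1.x, p2.x) - pad;
  const top = Math.min(p1.y, p2.y) - pad;

  const canvas = document.createElement("canvas");
  canvas.width = Math.abs(p1.x - p2.x) + pad * 2;
  canvas.height = Math.abs(p1.y - p2.y) + pad * 2;
  canvas.style.position = "absolute";
  canvas.style.left = `${left}px`;
  canvas.style.top = `${top}px`;
  container.prepend(canvas);

  // Draw in coordinates relative to the canvas, not the container.
  const ctx = canvas.getContext("2d")!;
  ctx.strokeStyle = "green";
  ctx.beginPath();
  ctx.moveTo(p1.x - left, p1.y - top);
  ctx.lineTo(p2.x - left, p2.y - top);
  ctx.stroke();
}

A canvas a few hundred pixels on a side allocates kilobytes to a few megabytes instead of ~156 MB, so the intermittent failures become far less likely.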
Most browsers that implement HTML5 are still in early beta - so it's quite likely they are still working the bugs out.
However, the resolution of the canvas you are trying to create is very high, much higher than most people's monitors can even display. Is there a reason you need it quite so large? Why not restrict the draggable area to something more in line with typical display resolutions?
I had the same problem! I was trying to use a big canvas to connect some divs. Eventually I gave up and drew the line using JavaScript (I drew my line using little images as pixels; I did it with divs first, but in IE the divs came out too big).