About animating frame by frame with sprite files - objective-c

I used to animate my CCSprites by iterating through 30 image files (rather big ones), changing the CCSprite's texture to each file in turn.
Someone told me that was not efficient and that I should use sprite sheets instead. But can I ask why, exactly, this is not efficient?

There are two parts to this answer:
Memory. OpenGL ES requires textures to have widths and heights that are powers of 2, e.g. 64x128, 256x1024, 512x512. If an image doesn't comply, Cocos2D will automatically resize it to fit those dimensions by padding it with extra transparent space. With each successive image loaded in, you are constantly wasting more and more memory. By using a sprite sheet, you already have all the images tightly packed, which minimizes the waste.
Speed. Related to the above: it takes time to load an image and resize it. By doing the load only once, you speed the entire process up.
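To make the difference concrete, here is a minimal sketch of the sprite-sheet approach in cocos2d-iphone (2.x-era API), assuming a TexturePacker-style sheet run.png with a run.plist that names the frames run_01.png through run_30.png; all of those file and frame names are placeholders. The sheet is read from disk and uploaded to the GPU once, and each animation step only switches texture coordinates within that one texture:

    // Load the packed sheet once: one file read, one texture upload.
    [[CCSpriteFrameCache sharedSpriteFrameCache]
        addSpriteFramesWithFile:@"run.plist"];

    NSMutableArray *frames = [NSMutableArray arrayWithCapacity:30];
    for (int i = 1; i <= 30; i++) {
        NSString *name = [NSString stringWithFormat:@"run_%02d.png", i];
        [frames addObject:[[CCSpriteFrameCache sharedSpriteFrameCache]
                              spriteFrameByName:name]];
    }

    CCSprite *sprite = [CCSprite spriteWithSpriteFrameName:@"run_01.png"];
    // Each frame change swaps texture coordinates, not the texture itself.
    CCAnimation *anim = [CCAnimation animationWithSpriteFrames:frames
                                                         delay:1.0f / 30.0f];
    [sprite runAction:[CCRepeatForever actionWithAction:
                          [CCAnimate actionWithAnimation:anim]]];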

Related

How to improve MTKView rendering when using MPSImageScale and MTLBlitCommandEncoder

TL;DR: From within my MTKView's delegate drawInMTKView: method, part of my rendering pass involves adding an MPSImageBilinearScale performance shader and zero or more MTLBlitCommandEncoder requests for generateMipmapsForTexture. Is that a smart thing to do from within drawInMTKView:, which happens on the main thread? Do either of them block the main thread while running or are they only being encoded and then executed later and entirely on the GPU?
Longer Version:
I'm playing around with Metal within the context of an imaging application. I use Core Image to load an image and apply filters. The output image is displayed as a 2D plane in a Metal view with a single texture. This works, but to improve performance I wanted to experiment with Core Image's ability to render out smaller tiles at a time. Each tile is rendered into its own IOSurface.
On each render pass, I check if there are any tiles that have been recently rendered. For each rendered tile (which is now an IOSurface), I create a Metal texture from a CVMetalTextureCache that is backed by the surface.
I then use a scaling MPS to copy from the tile-texture into the "master" texture. If a tile was copied over, I issue a blit command to generate the mipmaps on the master texture.
What I'm seeing is that if my master texture is quite large, then generating the mipmaps can take "a bit of time". The same is true if I have a lot of tiles. This appears to be blocking the main thread, because my FPS drops significantly. (The MTKView is running at the standard 60 fps.)
If I play around with tile sizes, I can improve performance in some areas but degrade it in others. For example, increasing the tile size that Core Image renders creates fewer tiles, and thus fewer mipmap generations and blits, but at the cost of Core Image taking longer to render each region.
If I decrease the size of my "master" texture, mipmap generation goes faster since only the dirty textures are updated, but there appears to be a lower bound on how small I should make the master texture, because if I make it too small I need to pass a large number of textures to the fragment shader. (And it looks like that limit might be 128?)
What's not entirely clear to me is how much of this I can move off the main thread while still using MTKView. If part of the rendering pass is going to block the main thread, then I'd prefer to move it to a background thread so that UI elements (like sliders and checkboxes) remain fully responsive.
Or maybe this isn't the right strategy in the first place? Is there a better way to display really large images in Metal other than tiling? (i.e.: Images larger than Metal's texture size limit of 16384?)
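For reference, a minimal sketch of the encoding pattern described above, assuming commandQueue, tileTexture, and masterTexture are placeholder properties on the delegate. Encoding these commands is CPU work on whichever thread calls drawInMTKView:, but the MPS kernel and the mipmap blit themselves execute later on the GPU, after the command buffer is committed:

    - (void)drawInMTKView:(MTKView *)view {
        id<MTLCommandBuffer> cb = [self.commandQueue commandBuffer];

        // Scale the freshly rendered tile into the master texture.
        // (In real code you would likely create the kernel once, not per frame.)
        MPSImageBilinearScale *scale =
            [[MPSImageBilinearScale alloc] initWithDevice:view.device];
        [scale encodeToCommandBuffer:cb
                       sourceTexture:self.tileTexture
                  destinationTexture:self.masterTexture];

        // Regenerate mipmaps for the now-dirty master texture.
        id<MTLBlitCommandEncoder> blit = [cb blitCommandEncoder];
        [blit generateMipmapsForTexture:self.masterTexture];
        [blit endEncoding];

        // ... encode the usual render pass for view.currentDrawable here ...

        [cb presentDrawable:view.currentDrawable];
        [cb commit]; // returns immediately; nothing above waits on GPU work
    }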

NSImageView with high-resolution image causes extreme slowdown when resizing the window

I am creating a simple photo filter app for OS X and I am displaying a photo in an NSImageView (actually two photos on top of each other with two NSImageViews, but the question applies to a single view too). Everything works great, but when I try to resize the window that contains the NSImageViews, the window (which also resizes the NSImageViews) resizes very slowly, at less than 1 fps, which really hurts the user experience. I want window resizing to be as smooth as possible. When I disable resizing of the image views, the window resizes smoothly, so the cause of the slowdown is those NSImageViews.
I'm loading 20-megapixel images from my DSLR. When I scale them down to a reasonable size for the screen (e.g. 1024x768), they scale smoothly, so the problem is the way NSImageView renders the images. I assume it re-renders the full 20 MP image every time it needs to redraw into whatever the target frame of the view is.
How can I make NSImageView rescale more smoothly? Should I feed it a scaled-down version of my images? I'd rather not, since this is a photo editing app that also targets Retina screens, so the viewport can be quite large; it's my last resort. Other than scaling down, how can I make NSImageView resize faster?
I believe part of the solution you are looking for is in NSImage's representations. You can add multiple representations to an image with addRepresentation:, and I believe some intelligent selection is done when drawing. In your case, you would add both representations (the scaled-down and the full-resolution bitmap) to the NSImage. I strongly suspect drawRect: would then pick the low-resolution version. I would also make sure "scale up or down" is selected in the NSImageView, because the default is scale-down only, which may force your full-resolution image to be used most of the time. There is some discussion of "matching" in Apple's documentation under "Setting the Image Representation Selection Criteria" for NSImage, although at first sight this may not be sufficient.
Then, whenever you need to do something with the full image, you request the full-resolution representation by going through the representations ([NSImage representations] returns an array of NSImageRep).
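As an untested sketch of that idea, assuming scaledRep and fullRep are NSBitmapImageRep instances you have prepared elsewhere (say, a ~1024x768 downsample and the original 20 MP bitmap):

    // One NSImage holding both a screen-sized and a full-resolution bitmap;
    // AppKit picks the best-matching representation for the destination rect.
    NSImage *image = [[NSImage alloc] initWithSize:fullRep.size];
    [image addRepresentation:scaledRep]; // placeholder: small, screen-sized rep
    [image addRepresentation:fullRep];   // placeholder: full 20 MP rep

    imageView.imageScaling = NSImageScaleProportionallyUpOrDown; // "scale up or down"
    imageView.image = image;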

Optimizing tiled maps in cocos2d-iphone

My cocos2d-iphone game has tiled maps. The tileset textures are rather big: I have around 5 tilesets and each one is 2048x2048 (Retina).
My maps are around 80x80. They have around 8 layers, and each layer obviously uses one tileset.
The frame rate falls (it drops to around 30 sometimes; I know 30 is rather acceptable, but still, I want 50+).
So given that the textures are huge, I can't afford many layers (since each one loads one of these textures).
So how about I divide my tileset textures into much smaller tilesets (like 1024x1024 each)? That will allow me to use many more layers for my maps, right?
Are there any other tips for huge retina display tile maps?
A 2048x2048 texture with 32-bit color takes 16 MB (!) of memory. Five times that is 80 MB of memory just for the textures. Ouch! For a tilemap that is relatively tiny (80x80), that's an enormous amount of texture memory.
The first order of optimization would be to use PVR textures, if you really can't reduce the number of tilesets or the images within them. You lose some image quality, but memory consumption goes down dramatically, and the rendering performance of PVR textures is a lot better. Of course, while working with Tiled you'll still edit the (presumably) PNG texture, which you then convert to PVR for use in the project, for example using TexturePacker.
8 tile layers can be pretty hefty, but it depends on how you use them and how many tiles of each layer are actually drawn. Try this: set all but one layer to visible = NO, then turn the layers back on one by one and see how that affects the framerate.
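A quick way to run that experiment, sketched against the cocos2d-iphone 2.x API (the map reference and the layer name are placeholders):

    CCTMXTiledMap *map = self.tileMap; // however you hold on to your map
    for (CCNode *child in [map children]) {
        if ([child isKindOfClass:[CCTMXLayer class]]) {
            child.visible = NO; // hide every tile layer
        }
    }
    // Re-enable one layer per run and watch the FPS counter.
    [[map layerNamed:@"ground"] setVisible:YES];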
Finally, you should know that Cocos2D's tilemap implementation is utterly inefficient past a certain number of tiles. There have been attempts to improve the tilemap renderer; HKTMXTiledMap, for example, may be worth a shot.
I had the same problem. My solution was simply to convert the .png files to .pvr.ccz; both the file size and the in-memory footprint drop dramatically. Here are my steps:
1. Use TexturePacker to convert the tileset files (PNG files) to pvr.ccz. Make sure it's a 1:1 mapping (same size, no rotation, no border padding, no trim...), and the output should be the same size (e.g. 2048x2048).
2. Open your .tmx file and change the PNG file path to your pvr.ccz file (see the snippet after these steps).
That's it! It works for cocos2d-x in my case. Before the change my game took 106 MB of memory, and only ~90 MB after the change, and that was with only 1 tileset texture converted.
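For what it's worth, step 2 amounts to a one-line edit of the tileset's image source inside the .tmx file; the file names and sizes here are hypothetical:

    <tileset firstgid="1" name="tiles" tilewidth="32" tileheight="32">
        <image source="tiles.pvr.ccz" width="2048" height="2048"/>
    </tileset>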

SpriteSheet, AtlasSprite, Sprite and optimization

I'm developing an iPhone Cocos2D game and reading about optimization. Some say to use a spritesheet whenever possible; others say to use AtlasSprite whenever possible; and others say plain Sprite is fine.
I don't get the "whenever possible": when can each one be used, and when can't it?
Also, what is the best case for each type?
My game will typically use 100 sprites in a grid, with about 5 types of sprites, plus some other single sprites. What is the best setup for that? Guidelines for deciding in general cases would help too.
Here's what you need to know about spritesheets vs. sprites, generally.
A spritesheet is just a bunch of images put together into one big image, plus a separate file with image-location data (i.e. image 1 starts at coordinate 0,0 with a size of 100,100; image 2 starts at coordinate 100,0; etc.).
The advantage here is that loading textures (sprites) is a pretty I/O- and memory-allocation-intensive operation, and a sheet lets you do it once rather than continually. If you keep loading individual textures during gameplay, you may get lag.
The second advantage is memory optimization. If you're using transparent PNGs for your images, there may be a lot of blank pixels, and a packer can remove those and shrink your textures well below what individual images would take. Good for both disk space and memory concerns. (TexturePacker is the tool I use for the latter.)
So, generally, I'd say it's always a good idea to use a sprite sheet, unless you have non-transparent sprites.
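For the 100-sprites-in-a-grid case, a rough cocos2d-iphone sketch (the sheet, plist, and frame names are all hypothetical) would put every grid sprite into one CCSpriteBatchNode, so the whole grid renders as a single draw call against a single texture:

    [[CCSpriteFrameCache sharedSpriteFrameCache]
        addSpriteFramesWithFile:@"tiles.plist"];

    // All children of the batch node share its texture and draw in one call.
    CCSpriteBatchNode *batch = [CCSpriteBatchNode batchNodeWithFile:@"tiles.png"];
    [self addChild:batch];

    for (int row = 0; row < 10; row++) {
        for (int col = 0; col < 10; col++) {
            uint32_t type = arc4random_uniform(5); // 5 sprite types
            NSString *name = [NSString stringWithFormat:@"tile_%u.png", type];
            CCSprite *s = [CCSprite spriteWithSpriteFrameName:name];
            s.position = ccp(col * 32.0f, row * 32.0f);
            [batch addChild:s]; // frames must come from the batch's texture
        }
    }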

HTML5 Large canvas

I've noticed that when dynamically creating a large canvas (6400x6400), quite a lot of the time nothing gets drawn on it, whereas with a small canvas it works 100% of the time. However, since I don't know any better, I have no choice but to try to get the large canvas working correctly.
thisObj.oMapCanvas = jQuery(document.createElement('canvas'))
    .attr('width', 6400)
    .attr('height', 6400)
    .css('border', '1px solid green')
    .prependTo(thisObj.oMapLayer)
    .get(0);
// getContext and then drawing stuff here...
The purpose of the canvas is to simply draw a line between two nodes (images), which are within a div container that can be dragged around (viewport I think people call them).
What I "think" may be happening is that on a canvas resize it emptys the canvas, and that is interfering with the context drawing, as like I said previously it works all the time when the canvas is alot smaller.
Has anyone experienced this before and/or know any possible solutions?
That is an enormous canvas. 6400 x 6400 x 4 bytes per pixel is 156 MB, and your implementation may need to allocate two or more buffers of that size for double buffering, or video memory of that size as well. It's going to take a while to allocate and clear all that memory, and such a large allocation is not guaranteed to succeed. Is there a reason you need such an enormous canvas? You could instead size your canvas to be only as large as necessary to draw the line between those two divs, or you could try using SVG instead of a canvas.
Another possibility would be to divide your canvas up into large tiles, and only render the tiles that are actually visible on screen. Google Maps does this with images: it only loads images for the portion of the map that is currently visible (plus some extra on each side of the screen, so that when you scroll you won't need to wait for rendering), maintaining the illusion of an enormous canvas while really only rendering something a bit bigger than the window.
Most browsers that implement HTML5 are still in early beta - so it's quite likely they are still working the bugs out.
However, the resolution of the canvas you are trying to create is very high, much higher than most people's monitors can even display. Is there a reason you need it quite so large? Why not restrict the draggable area to something more in line with typical display resolutions?
I had the same problem! I was trying to use a big canvas to connect some divs. Eventually I gave up and drew the line using JavaScript (I drew my line using little images as pixels; I did it with divs first, but in IE the divs came out too big).