When to use CCSpriteBatchNode?

In Cocos2d I will be playing an animation. The animation has about 12 frames, and each frame is rather big; in fact, the -hd version of each frame is huge.
Anyway, first I created it by putting all 12 frames into one texture using Zwoptex. The texture is about 2048x2048.
This is so I can animate a CCSprite in a CCSpriteBatchNode using that texture.
But I seem to be getting a level 2 memory warning.
Now that I think of it, I don't think CCSpriteBatchNode is supposed to be used for one sprite. I guess it is only useful if you want to draw lots of sprites that use the same texture.
So I want to know: Should I animate the sprite frame by frame (no huge texture)? Or is it possible to use that huge texture but in a different way?

You are right about CCSpriteBatchNode.
CCSpriteBatchNode is a batching node: if it contains children, it draws them all in one single OpenGL call (often known as a "batch draw"); without a CCSpriteBatchNode, a draw call is issued as many times as there are children (sprites).
A CCSpriteBatchNode can reference one and only one texture (one image file, one texture atlas), i.e. a sprite sheet such as one created by Zwoptex. Only CCSprites that are contained in that texture can be added to the CCSpriteBatchNode. All CCSprites added to a CCSpriteBatchNode are drawn in one OpenGL ES draw call. If the CCSprites are not added to a CCSpriteBatchNode, then an OpenGL ES draw call is needed for each one, which is less efficient.
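For reference, the intended use is many sprites sharing one atlas. A minimal sketch (the file and frame names here are hypothetical):
// register the frames from a Zwoptex-generated sheet
[[CCSpriteFrameCache sharedSpriteFrameCache] addSpriteFramesWithFile:@"atlas.plist"];
// one batch node bound to the atlas texture
CCSpriteBatchNode *batch = [CCSpriteBatchNode batchNodeWithFile:@"atlas.png"];
[self addChild:batch];
// every child is drawn in the same GL call
for (int i = 0; i < 50; i++) {
    CCSprite *s = [CCSprite spriteWithSpriteFrameName:@"enemy.png"];
    s.position = ccp(arc4random() % 480, arc4random() % 320);
    [batch addChild:s];
}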
In your scenario you don't have to use CCSpriteBatchNode, since only one sprite (and one texture) is rendered at any given time, so there is nothing to batch.
So I want to know: Should I animate the sprite frame by frame (no huge texture)? Or is it possible to use that huge texture but in a different way?
It doesn't matter: you will be loading the 2048x2048 texture either way. The issue to ponder is why a single 2048x2048 texture is giving you a level 2 warning. How many such textures are you loading? By the way, 2048x2048 textures are only supported on the iPhone 3GS / iPod touch 3G and above (which is fine).
In case you are loading many textures (which seems likely to me), you need to program some logic to unload textures when they are no longer necessary.
Do look at the following methods:
[[CCSpriteFrameCache sharedSpriteFrameCache] removeSpriteFramesFromFile:fileName];
[[CCSpriteFrameCache sharedSpriteFrameCache] removeUnusedSpriteFrames];
[[CCTextureCache sharedTextureCache] removeUnusedTextures];
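A typical place to call these (an assumption about your code structure, written for non-ARC as was usual with cocos2d 1.x) is when a scene is torn down:
- (void)dealloc {
    // release the frames loaded from this level's sheet, then purge anything unreferenced
    [[CCSpriteFrameCache sharedSpriteFrameCache] removeSpriteFramesFromFile:@"level1.plist"]; // hypothetical file name
    [[CCSpriteFrameCache sharedSpriteFrameCache] removeUnusedSpriteFrames];
    [[CCTextureCache sharedTextureCache] removeUnusedTextures];
    [super dealloc];
}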
As far as animation is concerned, you can create a CCAnimation and use that, or (depending on your scenario) you can use setDisplayFrame: with a CCSpriteFrame.
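A minimal sketch of the CCAnimation route, assuming the cocos2d 1.x API and that your Zwoptex sheet names the frames frame1.png through frame12.png (hypothetical names; sprite is your CCSprite):
// collect the frames that were registered from the Zwoptex plist
NSMutableArray *frames = [NSMutableArray arrayWithCapacity:12];
for (int i = 1; i <= 12; i++) {
    NSString *name = [NSString stringWithFormat:@"frame%d.png", i];
    [frames addObject:[[CCSpriteFrameCache sharedSpriteFrameCache] spriteFrameByName:name]];
}
// 12 fps playback; CCAnimate drives the sprite through the frames
CCAnimation *anim = [CCAnimation animationWithFrames:frames delay:1.0f/12.0f];
[sprite runAction:[CCAnimate actionWithAnimation:anim]];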

Related

Animating retina images

I'm trying to animate some images. The images are working well on non-retina iPads, but their retina counterparts are slow and the animations will not cycle through at the specified rate. The code I'm using is below, with the method called every 1/25th of a second. This method appears to perform better than UIViewAnimations.
// (tail of the timer callback; the method signature is omitted in the original)
if (counter < 285) {
    NSString *file = [[NSBundle mainBundle] pathForResource:[NSString stringWithFormat:@"Animation HD1.2 png sequence/file_HD1.2_%d", counter] ofType:@"png"];
    @autoreleasepool {
        UIImage *someImage = [UIImage imageWithContentsOfFile:file];
        falling.image = someImage;
    }
    counter++;
} else {
    NSLog(@"Timer invalidated");
    [timer invalidate];
    timer = nil;
    counter = 1;
}
}
I realise there are a lot of images, but the performance is the same for animations with fewer frames. Like I said, the non-retina animations work well. Each image above is about 90KB. Am I doing something wrong, or is this simply a limitation of the iPad? To be honest, I find it hard to believe that it couldn't handle something like this when it can handle the likes of complex 3D games, so I imagine I'm doing something wrong. Any help would be appreciated.
EDIT 1:
From the answers below, I have edited my code, but to no avail. Executing the code below results in the device crashing.
In viewDidLoad:
NSString *fileName;
myArray = [[NSMutableArray alloc] init];
for (int i = 1; i < 285; i++) {
    fileName = [NSString stringWithFormat:@"Animation HD1.2 png sequence/HD1.2_%d.png", i];
    [myArray addObject:[UIImage imageNamed:fileName]];
    NSLog(@"Loaded image: %d", i);
}
falling.userInteractionEnabled = NO;
falling.animationImages = myArray;
falling.animationDuration = 11.3;
falling.animationRepeatCount = 1;
falling.contentMode = UIViewContentModeCenter;
The animation method:
-(void)triggerAnimation {
    [falling startAnimating];
}
First of all, animation performance on the retina iPad is notoriously choppy. That said, there are a few things you could do to make sure you're getting the best performance for your animation (in no particular order).
Preloading the images - As some others have mentioned, your animation speed suffers when each image must be read from disk before it can be drawn. If you use UIImageView's animation properties, this preloading will be taken care of automatically.
Using the right image type - Despite the advantage in file size, using JPEGs instead of PNGs will slow your animation down significantly. PNGs are less compressed and are easier for the system to decompress. Also, Apple has significantly optimized the iOS system for reading and drawing PNG images.
Reducing Blending - If at all possible, try and remove any transparency from your animation images. Make sure there is no alpha channel in your images even if it seems completely opaque. You can verify by opening the image in Preview and opening the inspector. By reducing or removing these transparent pixels, you eliminate extra rendering passes the system has to do when displaying the image. This can make a significant difference.
Using a GPU-backed animation - Your current method of using a timer to animate the image is not recommended for optimal performance. By not using UIViewAnimation or CAAnimation you are forcing the CPU to do most of the animation work. Many of the animation techniques in Core Animation and UIViewAnimation are optimized and backed by OpenGL, which uses the GPU to process images and animate. Graphics processing is what the GPU is made for, and by utilizing it you will maximize your animation performance (a sketch follows these tips).
Avoiding pixel misalignment - Make sure your animation images are at the right size on screen when displaying them. If you are stretching your image while animating or using an incorrect frame, the system has to do more work to process each frame. Also, using whole numbers for any frame or point values will avoid the anti-aliasing that occurs when the system has to position an image on a fractional pixel.
Be wary of shadows and rounded corners - CALayer has lots of easy ways to create shadows and rounded corners, but if you are moving these layers in animations, the system will often redraw the layer in each frame of the animation. This is the case when specifying a shadow using the shadowOffset property (using UILabel's shadow properties will not render every frame). Also, borders and using masksToBounds and clipsToBounds will be more performance intensive than just using an image editor to crop the actual asset.
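As a sketch of the GPU-backed approach mentioned above: assuming you already hold the preloaded UIImages in an array (called frames here, a hypothetical name), you can hand the whole sequence to Core Animation and let it drive the frame swaps:
#import <QuartzCore/QuartzCore.h>
// build an array of CGImages from the preloaded UIImages
NSMutableArray *cgFrames = [NSMutableArray arrayWithCapacity:[frames count]];
for (UIImage *img in frames) {
    [cgFrames addObject:(id)img.CGImage];
}
CAKeyframeAnimation *anim = [CAKeyframeAnimation animationWithKeyPath:@"contents"];
anim.values = cgFrames;
anim.duration = 11.3;
anim.repeatCount = 1;
anim.calculationMode = kCAAnimationDiscrete; // hard cuts between frames, no cross-fade
[falling.layer addAnimation:anim forKey:@"frameAnimation"];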
There are a few things to notice here:
If "falling" is a UIImageView, make sure its content mode says something like "center" and not some sort of scaling (make sure your images fit it, of course).
Other than that, as @FogleBird said, test whether your device has enough memory to preload all the images; if not, try to at least preload the data by creating NSData objects with the image files (see the sketch after these points).
Your use of @autoreleasepool is not very useful: you end up creating an autorelease pool that does a single thing - release a reference to an already retained object - so there is no memory gain, just a performance loss.
If anything, you should have wrapped the file-name formatting code, and considering this method is called by an NSTimer, it is already wrapped in an autorelease pool anyway.
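If memory turns out to be too tight to hold 284 decoded UIImages, a middle-ground sketch (the imageData array name is hypothetical) preloads only the raw file data and decodes per frame:
// preload raw PNG data once; much smaller in memory than decoded bitmaps
NSMutableArray *imageData = [NSMutableArray arrayWithCapacity:284];
for (int i = 1; i < 285; i++) {
    NSString *path = [[NSBundle mainBundle] pathForResource:[NSString stringWithFormat:@"HD1.2_%d", i] ofType:@"png"];
    [imageData addObject:[NSData dataWithContentsOfFile:path]];
}
// in the timer callback: decode from memory instead of hitting the file system
falling.image = [UIImage imageWithData:[imageData objectAtIndex:counter - 1]];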
Just wanted to point out - when you are creating the NSString with the image name - what is the "Animation HD1.2 png sequence/HD1.2_%d.png"?
It looks like you are trying to put a path there; try just the image name, e.g. "HD1.2_%d.png".

If vertex array count > 1000, glDrawArrays becomes slow?

I have a painting app. Mouse event coordinates are stored in a vertex array, which is then drawn to the screen. My code structure looks like this:
// I get mouse event coordinates and store them to VertexArray
glPushMatrix();
//some new matrix settings
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, framebuffer);
glClear(GL_COLOR_BUFFER_BIT);
//now I draw first full size textured quad and later I draw vertexArray
glDrawArrays(.....);
//and now I draw a second full-size textured quad on top of the first quad and what has been drawn from the vertex array
glPopMatrix();
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
//immediately after that I draw FBO to screen:
glBindTexture(GL_TEXTURE_2D, fbTexture);
//Code for drawing textured quad
glBindTexture(GL_TEXTURE_2D, 0);
So everything is redrawn every time a new mouse event coordinate is registered. And if there are more than 1000 coordinates, drawing becomes really slow. Where could my problem be? I think 1000 vertices is not much for OpenGL.
It's not the number of vertices; it's how you're sending them.
First, you never defined "really slow"; people often mistakenly think that a change from 400fps to 300fps is "slow". It's not. It only represents a render-time increase from 2.5ms per frame to 3.3ms, a change of less than a single millisecond. Non-trivial, but probably not something to be too concerned over.
It's always important to measure performance in terms of render time, not FPS.
That being said, your main problem is that you're drawing a single quad at a time, each one coming from a separate glDrawArrays command. That's not necessarily a good thing, especially if you change state between drawing commands (like binding a texture and so forth).
If you're doing that, then you need to find ways to avoid it. What you want to do is render a lot of quads with one draw call. This means you have to use the same texture for all of them.
The common solution to this problem is to make a larger texture that has multiple images in different locations. This is commonly called a "texture atlas" (Google that for the details). Each quad would have texture coordinates for the particular image it renders. Text is often drawn in such a way, where each letter (glyph) is stored in the same texture.
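A rough sketch of the batching idea, in the same fixed-function, client-side-array style as your code (atlasTexture and the coordinates are made up):
#include <string.h> // for memcpy
#define MAX_QUADS 1024
// interleaved x,y,u,v; two triangles (6 vertices) per quad
static GLfloat quads[MAX_QUADS * 6 * 4];
static int quadCount = 0;
// append one quad whose image occupies [u0,v0]..[u1,v1] in the atlas
static void appendQuad(float x, float y, float w, float h,
                       float u0, float v0, float u1, float v1)
{
    GLfloat v[24] = {
        x,     y,     u0, v0,
        x + w, y,     u1, v0,
        x + w, y + h, u1, v1,
        x,     y,     u0, v0,
        x + w, y + h, u1, v1,
        x,     y + h, u0, v1,
    };
    memcpy(quads + quadCount * 24, v, sizeof(v));
    quadCount++;
}
// later, when drawing: one texture bind and one draw call for every quad
glBindTexture(GL_TEXTURE_2D, atlasTexture);
glVertexPointer(2, GL_FLOAT, 4 * sizeof(GLfloat), quads);
glTexCoordPointer(2, GL_FLOAT, 4 * sizeof(GLfloat), quads + 2);
glDrawArrays(GL_TRIANGLES, 0, quadCount * 6);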

See what files are in the frame cache

I have many plists in my game, one for each level.
I am using this to unload the previous frame cache:
for (int i = 1; i < stage; i++)
    [[CCSpriteFrameCache sharedSpriteFrameCache] removeSpriteFramesFromFile:[NSString stringWithFormat:@"candys%i.plist", i]];
but after a while it seems that the game becomes a little bit slower.
I am also loading the images in real time, like this:
sprite = [CCSprite spriteWithSpriteFrameName:[NSString stringWithFormat:@"candy%i.png", 1]];
where candy1 is a sprite in a sprite sheet in the cache, and it is being loaded in REAL time, meaning many times a second.
Can that be bad? Does it hit memory to get the sprite from the sprite sheet many times a second? Do I have to pre-define it?
Many thanks.
You don't want to load/unload individual sprite frames. A sprite frame references a texture, and usually this will be a texture atlas which many different sprite frames use. The sprite frame itself is maybe 16 bytes of data; the texture may be up to 16 megabytes.
So unless you remove the entire texture atlas and all the associated sprite frames, all you'll get is reduced performance, because you're frequently deallocating and reloading sprite frames. If you do that multiple times per second, you're wasting a lot of time just loading and unloading sprite frames.
Rule of thumb: load your entire scene up front, keep everything in memory until scene ends. Only if the entire scene doesn't fit into memory at once should you consider unloading/reloading of objects and data.
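A minimal sketch of that pattern in a scene, reusing the plist naming and the stage variable from your question (and assuming the cocos2d 1.x onEnter/onExit hooks):
- (void)onEnter
{
    [super onEnter];
    // load this level's sheet once, up front
    [[CCSpriteFrameCache sharedSpriteFrameCache] addSpriteFramesWithFile:[NSString stringWithFormat:@"candys%i.plist", stage]];
}
- (void)onExit
{
    // drop the frames, then purge any textures nothing references anymore
    [[CCSpriteFrameCache sharedSpriteFrameCache] removeSpriteFramesFromFile:[NSString stringWithFormat:@"candys%i.plist", stage]];
    [[CCTextureCache sharedTextureCache] removeUnusedTextures];
    [super onExit];
}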

CCLiquid effect on a specific area of CCSprite?

Developing an iPhone game with Cocos2d-iphone. I have a huge sprite and I want to apply a CCLiquid (or any other liquid-wave-like effect) on it.
However, the image is huge, so it consumes a lot of memory (not to mention that I have many other big elements during gameplay).
Well, I figured I could try to "only apply the liquid effect on the area that is visible by the player" (dimensions of such area being 480x320). That could help a lot.
I already got a CGRect representing the area of the CCSprite that should be affected. However, how would I actually apply the effect only within such area? Any ideas?
You could manually create a CCSprite from a sprite frame and set the boundaries of that frame to your CGRect, then run the effect on the resulting CCSprite. Essentially, your original CCSprite image acts like a larger texture atlas from which you specify a small portion to be the actual frame of your sprite. If you layer this new sprite on top of the larger one in the exact same position, it will appear to be part of the larger sprite, but only the small CGRect portion will be affected by your code.
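A sketch of that approach, assuming cocos2d 1.x grid actions (the wave parameters are made up; bigSprite stands in for your huge sprite):
// carve the visible 480x320 region out of the big sprite's texture
CGRect visible = CGRectMake(0, 0, 480, 320); // your computed CGRect
CCSprite *patch = [CCSprite spriteWithTexture:bigSprite.texture rect:visible];
patch.anchorPoint = ccp(0, 0);
patch.position = ccp(0, 0); // align it over the same area of bigSprite
[self addChild:patch z:1];
// run the liquid effect on the small patch only
[patch runAction:[CCLiquid actionWithWaves:4 amplitude:20.0f
                                      grid:ccg(16, 12) duration:3.0f]];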

Drawing a line using OpenGL ES 2.0 and the iPhone touchscreen

This is the super-simple version of the question I posted earlier (which I think was too complicated).
How do I draw a line in OpenGL ES 2.0, using a stroke on the touch screen as a reference?
For example, if I draw a square with my finger on the screen, I want it to be drawn on the screen with OpenGL.
I have tried researching a lot but no luck so far.
(I only know how to draw objects that already have fixed vertex arrays; I have no idea how to draw one with a constantly changing array, nor how to implement that.)
You should use vertex buffer objects (VBOs) as the backing OpenGL structure for your vertex data. Then, the gesture must be converted to a series of positions (I don't know how that happens on your platform). These positions must then be pushed to the VBO with glBufferSubData if the existing VBO is large enough or glBufferData if the existing VBO is too small.
Using VBOs to draw lines or any other OpenGL shape is easy and many tutorials exist to accomplish it.
update
Based on your other question, you seem to be almost there! You already create VBOs like I mentioned but they are probably not large enough. The current size is sizeof(Vertices) as specified in glBufferData(GL_ARRAY_BUFFER, sizeof(Vertices), Vertices, GL_STATIC_DRAW);
You need to change the size given to glBufferData to something large enough to hold all the original vertices plus those added later. You should also use GL_STREAM_DRAW as the last argument (read up on the function).
To add new vertices, use something like this (note that both the offset and the size arguments of glBufferSubData are in bytes, assuming 3 floats per vertex):
glBufferSubData(GL_ARRAY_BUFFER, current_nb_vertices*3*sizeof(float), nb_vertices_to_add*3*sizeof(float), newVertices);
current_nb_vertices += nb_vertices_to_add;
//...
// drawing lines
glDrawArrays(GL_LINE_STRIP, 0, current_nb_vertices);
You don't need the indices in the element array to draw lines.
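Putting the pieces together, a sketch of the stream-draw pattern (the buffer bound, attribute index, and variable names are illustrative, not from your code):
#define MAX_VERTICES 10000 // generous upper bound for one stroke
GLuint vbo;
GLsizei current_nb_vertices = 0;
// once, at setup: allocate the full-size buffer with no data yet
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, MAX_VERTICES * 3 * sizeof(GLfloat), NULL, GL_STREAM_DRAW);
// on each touch event: append the new vertices at the end
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferSubData(GL_ARRAY_BUFFER,
                current_nb_vertices * 3 * sizeof(GLfloat),
                nb_vertices_to_add * 3 * sizeof(GLfloat),
                newVertices);
current_nb_vertices += nb_vertices_to_add;
// when drawing: point the position attribute at the VBO and draw the stroke
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glVertexAttribPointer(positionAttrib, 3, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(positionAttrib);
glDrawArrays(GL_LINE_STRIP, 0, current_nb_vertices);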