How to efficiently show many Images? (iPhone programming) - objective-c

In my application I needed something like a particle system so I did the following:
While the application initializes, I load a UIImage:
laserImage = [UIImage imageNamed:@"laser.png"];
UIImage *laserImage is declared in the interface of my controller. Now, every time I need a new particle, this code makes one:
// add a new laser image
UIImageView *newLaser = [[UIImageView alloc] initWithImage:laserImage];
[newLaser setTag:[model.lasers count]-9];
[newLaser setBounds:CGRectMake(0, 0, 17, 1)];
[newLaser setOpaque:YES];
[self.view addSubview:newLaser];
[newLaser release];
Note that the images are only 17px × 1px, and model.lasers is an internal array that keeps all the calculations separate from the graphical output. So in my main drawing loop I set each UIImageView's position to the calculated position from my model.lasers array:
for (int i = 0; i < [model.lasers count]; i++) {
    [[self.view viewWithTag:i+10] setCenter:[[model.lasers objectAtIndex:i] pos]];
}
I offset the tags by 10 because the default tag is 0 and I don't want to move every view that still has the default tag.
The animation looks fine with about 10-20 images, but it gets really slow at around 60. So my question is: is there any way to optimize this without starting over in OpenGL ES?

As jeff7 and FenderMostro said, you're using the high-level API (UIKit), and you'd get better performance from the lower-level APIs, either Core Animation or OpenGL (cocos2d is built on top of OpenGL).
Your best option would be to use CALayers instead of UIImageViews: get a CGImageRef from your UIImage and set it as the contents of these layers.
Also, you might want to keep a pool of CALayers and reuse them by hiding/showing them as necessary. 60 CALayers of 17×1 pixels is not much; I've been doing it with hundreds of them without needing extra optimization.
This way, the images will already be decompressed and available in video memory. When using UIKit, everything goes through the CPU, not to mention that UIViews are pretty heavy objects to create.
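For illustration, a minimal sketch of that pooling approach (kMaxLasers, the laserLayers array, and the CATransaction to suppress implicit animations are assumptions layered on top of the original code):
#define kMaxLasers 60
// Build the pool once, e.g. in viewDidLoad; keep laserLayers in an ivar.
NSMutableArray *laserLayers = [[NSMutableArray alloc] initWithCapacity:kMaxLasers];
for (int i = 0; i < kMaxLasers; i++) {
    CALayer *layer = [CALayer layer];
    layer.bounds = CGRectMake(0, 0, 17, 1);
    layer.contents = (id)laserImage.CGImage; // decoded once, shared by every layer
    layer.hidden = YES;                      // pooled until a laser needs it
    [self.view.layer addSublayer:layer];
    [laserLayers addObject:layer];
}
// In the drawing loop: position only the layers in use, with implicit
// animations disabled so the layers jump instead of easing.
[CATransaction begin];
[CATransaction setDisableActions:YES];
for (int i = 0; i < [model.lasers count]; i++) {
    CALayer *layer = [laserLayers objectAtIndex:i];
    layer.hidden = NO;
    layer.position = [[model.lasers objectAtIndex:i] pos];
}
[CATransaction commit];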

Seems like you're trying to code a game using the UIKit API, which is not really suitable for that purpose. You expend the device's resources every time you allocate a UIView, and object creation is costly, which incurs slowdowns. You might get the performance you want by dropping down to Core Animation, which is really good at drawing hundreds of images in a limited time frame, although it would still be much better to use OpenGL or an engine like cocos2d.

A UIImageView is made to display a single image OR multiple images. So instead of creating a new UIImageView every time, consider creating just the new image and handing it to an existing UIImageView, as sketched below.
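A rough sketch of that reuse (laserView is a hypothetical UIImageView already added to the view hierarchy; frame1/frame2 are hypothetical UIImages):
// Swap the image instead of allocating a whole new view:
laserView.image = laserImage;
// Or let UIImageView cycle through multiple images itself:
laserView.animationImages = [NSArray arrayWithObjects:frame1, frame2, nil];
laserView.animationDuration = 0.5; // seconds for one full cycle
[laserView startAnimating];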

I'd recommend starting over using OpenGL ES; there is an excellent framework called cocos2d for iPhone that can make this type of programming very easy and fast. From a quick look at your code, your lasers can be remodeled as CCSprites, which are an easy way to move images around a scene, among many other things.
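For a sense of how little code that takes, a hedged cocos2d sketch (assuming laser.png is in the bundle and self is a CCLayer):
// Create a sprite from the image and drop it into the scene.
CCSprite *laser = [CCSprite spriteWithFile:@"laser.png"];
laser.position = ccp(100, 160);
[self addChild:laser];
// Moving it on each tick is just a property set (or a CCMoveTo action):
laser.position = ccp(110, 160);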

Related

How do I draw a grid of images in Cocoa and export it to a PDF?

I'm making a tool that will pull data from a .csv and create a grid of images with captions [like "This"] in Cocoa, then export that to a PDF. I do not need to actually display the view, just save a file. As a complete beginner to drawing programmatically, I have some questions about the process:
What class should I use? I'm assuming NSView, but like I said I've never done this before so I'm not sure.
Do I need to specify the pixel coordinates for every single object, or can I make each object relative to another in some way?
How do I create separate pages for the view?
Keep in mind that I read the Apple guides, and while it had some helpful tidbits, overall it was unusually hard for me to comprehend. If someone could explain in layman's terms what I need to know it would be very appreciated! Thank you in advance.
Have a look at NSCollectionView
Overview
The NSCollectionView class displays an array of content as a grid of
views. The views are specified using the NSCollectionViewItem class,
which makes loading nibs containing the view easy, and supports
bindings.
There are lots of tutorials.
Including:
Cocoa Programming L42 - NSCollectionView
And
Apple's own quick guide to Collection Views
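If it helps, a minimal bindings-free sketch of the pattern (the GridItem nib, frame, and peopleArray are assumptions, not from the question):
NSCollectionView *collectionView = [[NSCollectionView alloc] initWithFrame:frame];
// The prototype item is instantiated once per model object.
NSCollectionViewItem *prototype = [[NSCollectionViewItem alloc] initWithNibName:@"GridItem" bundle:nil];
[collectionView setItemPrototype:prototype];
[collectionView setMaxNumberOfColumns:2];
[collectionView setContent:peopleArray]; // one item view per object in the array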
And maybe also look at NSDocuments
Overview
The NSDocument abstract class defines the interface for OS X
documents. A document is an object that can internally represent data
displayed in a window and that can read data from and write data to a
file or file package. Documents create and manage one or more window
controllers and are in turn managed by a document controller.
Documents respond to first-responder action messages to save, revert,
and print their data.
Conceptually, a document is a container for a body of information
identified by a name under which it is stored in a disk file. In this
sense, however, the document is not the same as the file but is an
object in memory that owns and manages the document data. In the
context of AppKit, a document is an instance of a custom NSDocument
subclass that knows how to represent internally, in one or more
formats, persistent data that is displayed in windows.
A document can read that data from a file and write it to a file. It
is also the first-responder target for many menu commands related to
documents, such as Save, Revert, and Print. A document manages its
window’s edited status and is set up to perform undo and redo
operations. When a window is closing, the document is asked before the
window delegate to approve the closing.
NSDocument is one of the triad of AppKit classes that establish an
architectural basis for document-based apps (the others being
NSDocumentController and NSWindowController).
Figured it out a few days ago, thought I'd come back to answer for anyone else with the same question.
What class should I use? I'm assuming NSView, but like I said I've never done this before so I'm not sure.
NSView is in fact the class I used to draw each page.
Do I need to specify the pixel coordinates for every single object, or can I make each object relative to another in some way?
I did end up specifying the pixel coordinates for each image on the grid (plus its caption), but it was easy to calculate where they should be placed once I learned the size of an 8.5 × 11 inch page in points. The next challenge was drawing them in a for loop rather than having to explicitly declare each possible NSRect. Here's my code in drawRect:
// Declared elsewhere: constants for horizontal/vertical spacing,
// the width/height for an image, and a value for what row the image
// should be drawn on
for (int i = 0; i < [_people count]; i++) {
    float horizontalPoint = 0.0; // What column should the image be in?
    if (i % 2 != 0) { // Is i odd? (i.e. should the image be in the right column?)
        horizontalPoint += (imageWidth + horizontalSpace); // Push it to the right
    }
    NSRect imageRect = NSMakeRect(horizontalSpace + horizontalPoint,
                                  verticalSpace + verticalPoint,
                                  imageWidth, imageHeight);
    // Draw the image with imageRect
    if (i % 2 != 0) { // Is i odd? (i.e. is the current row fully drawn?)
        verticalPoint = (imageRect.origin.y + imageRect.size.height); // Push the row down
    }
}
I do realize that I could've coded that more efficiently (e.g. making a BOOL for i % 2 != 0), but I was rushing the whole project because my friend who needed it was on a deadline.
How do I create separate pages for the view?
With some googling, I came up with this SO answer. However, this wasn't going to work unless I had one big view with all the pages concatenated together. I came up with a way to do just that:
// Get an array of arrays containing 1-6 JANPerson objects each, using data from a parsed-in .csv
NSArray *paginatedPeople = [JANGridView paginatedPeople:people];
int pages = [JANGridView numberOfPages:people];
// Create a custom JANFlippedView (just an NSView subclass overriding isFlipped to YES)
// This will hold all of our pages, so the height should be the # of pages * the size of one page
JANFlippedView *view = [[JANFlippedView alloc] initWithFrame:NSMakeRect(0, 0, 612, 792 * pages)];
for (int i = 0; i < [paginatedPeople count]; i++) { // Iterate through each page
    // Create a custom JANGridView with an array of people to draw on a grid
    JANGridView *gridView = [[JANGridView alloc] initWithFrame:NSMakeRect(0, 0, 612, 792) people:paginatedPeople[i]];
    // Push the view's frame down by 792 points for each page drawn already
    // and add it to the main view
    gridView.frame = NSMakeRect(0, 792 * i, gridView.frame.size.width, gridView.frame.size.height);
    [view addSubview:gridView];
}
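For the actual save-to-PDF step, AppKit's printing machinery can then paginate that big view; a hedged sketch (the output path is illustrative):
NSPrintInfo *printInfo = [NSPrintInfo sharedPrintInfo];
NSPrintOperation *op = [NSPrintOperation PDFOperationWithView:view
                                                   insideRect:[view bounds]
                                                       toPath:@"/tmp/grid.pdf"
                                                    printInfo:printInfo];
[op setShowsPrintPanel:NO];
[op setShowsProgressPanel:NO];
[op runOperation];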
I apologize if this is hard to understand for anybody; I'm better at talking through my process than writing! I welcome anyone to ask for help if there's something unclear, or edit if they can make it better.
NSView; so it's a Mac app?
CGPointMake returns a point with the specified coordinates, i.e. it places an image at a specific spot on the screen, e.g.
layer.position = CGPointMake([self view].bounds.size.width / 2, [self view].bounds.size.height / 3);
(This example is oriented around Core Animation, i.e. moving objects on screen, hence the layer attribute, so please don't take it too literally.)
Also, this line
layer.bounds = CGRectMake(100, 100, 1000, 1000);
specifies a rectangle's boundaries (rectangles can be filled with images and custom data using a bridge, I believe, like this):
UIImage *image2 = [[UIImage alloc] initWithContentsOfFile:[[NSBundle mainBundle] pathForResource:@"flogo@2x" ofType:@"png"]];
layer.contents = (__bridge id)image2.CGImage;
Also, I believe the Core Graphics rect-drawing calls, combined with coordinates (x, y, width, height), can draw custom rectangles as in your image.
But hopefully you catch my drift with drawing and substituting images. The Core Graphics framework will probably be used here. (My whole answer used Core Animation as a reference.)

Memory warning and crash (ARC) - how to identify why it's happening?

I've started to use ARC recently and since then I blame it for every single memory problem. :) Perhaps you could help me better understand what I'm doing wrong.
My current project uses Core Graphics a lot - chart drawing, views filled with thumbnails and so on. I believe there would be no problem with manual memory management, except maybe a few zombies... But as of now, the application simply crashes every time I try to either create a lot of thumbnails or redraw a slightly more complicated chart.
While profiling with Instruments I can see an awfully high value in resident memory, as well as in dirty memory. Heap analysis shows rather alarming irregular growth...
When drawing just a few thumbnails, resident memory grows by about 200 MB. When everything is drawn, memory drops back to almost the same value as before. However, with a lot of thumbnails, resident memory climbs above 400 MB and that obviously crashes the app. I've tried to limit the number of thumbnails drawn at the same time (NSOperationQueue and its maxConcurrentOperationCount), but since releasing that much memory seems to take a bit more time, it didn't really solve the issue.
Right now my app basically doesn't work, as the real data involves a lot of complicated charts = a lot of thumbnails.
Every thumbnail is created with this code I got from around here: (category on UIImage)
+ (void)beginImageContextWithSize:(CGSize)size
{
    if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)]) {
        if ([[UIScreen mainScreen] scale] == 2.0) {
            UIGraphicsBeginImageContextWithOptions(size, YES, 2.0);
        } else {
            UIGraphicsBeginImageContext(size);
        }
    } else {
        UIGraphicsBeginImageContext(size);
    }
}
+ (void)endImageContext
{
    UIGraphicsEndImageContext();
}
+ (UIImage*)imageFromView:(UIView*)view
{
    [self beginImageContextWithSize:[view bounds].size];
    BOOL hidden = [view isHidden];
    [view setHidden:NO];
    [[view layer] renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    [self endImageContext];
    [view setHidden:hidden];
    return image;
}
+ (UIImage*)imageFromView:(UIView*)view scaledToSize:(CGSize)newSize
{
    UIImage *image = [self imageFromView:view];
    if ([view bounds].size.width != newSize.width ||
        [view bounds].size.height != newSize.height) {
        image = [self imageWithImage:image scaledToSize:newSize];
    }
    return image;
}
+ (UIImage*)imageWithImage:(UIImage*)image scaledToSize:(CGSize)newSize
{
    [self beginImageContextWithSize:newSize];
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    [self endImageContext];
    return newImage;
}
Is there some other way that wouldn't eat so much memory, or is something really wrong with the code when using ARC?
The other place where the memory warning + crash happens is when any view is redrawn too many times. It doesn't need to be quick, just repeated many times. Memory stacks up until the app crashes, and I'm not able to find anything really responsible for it. (I can see growing resident/dirty memory in VM Tracker and heap growth in the Allocations instrument.)
My question basically is: how do I find out why this is even happening? My understanding is that when an object has no owner, it's released ASAP. My inspection of the code suggests a lot of objects are not released at all, even though I don't see any reason for that to happen. I don't know about any retain cycles...
I've read through the Transitioning to ARC Release Notes, bbum's article about heap analysis, and probably a dozen others. Does heap analysis somehow differ with and without ARC? I can't seem to do anything useful with its output.
Thank you for any ideas.
UPDATE: (so nobody has to read through all the comments, and to keep my promise)
By carefully going through my code and adding @autoreleasepool where it made sense, memory consumption got lower. The biggest problem was calling UIGraphicsBeginImageContext from a background thread. After fixing that (see @Tammo Freese's answer for details), deallocation occurred soon enough to keep the app from crashing.
My second crash (caused by repeatedly redrawing the same chart) was completely solved by adding CGContextFlush(context) at the end of my drawing method. Shame on me.
A small warning for anyone trying to do something similar: use OpenGL. Core Graphics is not quick enough for animating big drawings, especially not on an iPad 3 (the first one with a Retina display).
To answer your question: Identifying problems with memory warnings and crashes with ARC basically works like before with manual retain-release (MRR). ARC uses retain, release and autorelease just like MRR, it only inserts the calls for you, and has some optimizations in place that should even lower the memory consumption in some cases.
Regarding your problem:
In the screenshot of Instruments you posted, there are allocation spikes visible. In most cases I encountered so far, these spikes were caused by autoreleased objects hanging around too long.
1. You mentioned that you use NSOperationQueue. If you override -[NSOperation main], make sure that you wrap the whole content of the method in @autoreleasepool { ... }. An autorelease pool may already be in place, but it is not guaranteed (and even if there is one, it may be around for longer than you think).
2. If 1. has not helped and you have a loop that processes the images, wrap the inner part of the loop in @autoreleasepool { ... } so that temporary objects are cleaned up immediately (see the sketch after the code below).
3. You mentioned that you use NSOperationQueue. Since iOS 4, drawing to a graphics context in UIKit is thread-safe, but if the documentation is right, UIGraphicsBeginImageContext should still only be called on the main thread! Update: The docs now state that since iOS 4, the function can be called from any thread, so the following is actually unnecessary! To be on the safe side, you can create the context with CGBitmapContextCreate and retrieve the image with CGBitmapContextCreateImage. Something along these lines:
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(NULL, width, height, 8, 0, colorSpace, kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);
// draw to the context here
CGImageRef newCGImage = CGBitmapContextCreateImage(context);
CGContextRelease(context);
UIImage *result = [UIImage imageWithCGImage:newCGImage scale:scale orientation:UIImageOrientationUp];
CGImageRelease(newCGImage);
return result;
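To illustrate points 1 and 2, a sketch of where the pools go (the method and property names are illustrative; imageFromView:scaledToSize: is the category method from the question):
- (void)main // in an NSOperation subclass
{
    @autoreleasepool {
        for (UIView *chartView in self.chartViews) {
            @autoreleasepool {
                UIImage *thumb = [UIImage imageFromView:chartView
                                           scaledToSize:CGSizeMake(100, 75)];
                [self storeThumbnail:thumb]; // hypothetical helper
            } // autoreleased temporaries die here, not when the queue drains
        }
    }
}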
So, nothing you are doing relative to memory management (there is none!) looks improper. However, you mention using NSOperationQueue. Those UIGraphics... calls are marked as not thread-safe, but others have stated they are as of iOS 4 (I cannot find a definitive answer to this, but I recall that this is true).
In any case, you should not be calling these class methods from multiple threads. You could create a serial dispatch queue and feed all the work through it to ensure single-threaded usage.
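Something along these lines (the queue label and surrounding variables are illustrative):
// One serial queue guarantees the UIGraphics... calls never run concurrently.
static dispatch_queue_t renderQueue;
static dispatch_once_t onceToken;
dispatch_once(&onceToken, ^{
    renderQueue = dispatch_queue_create("com.example.thumbnail-render", DISPATCH_QUEUE_SERIAL);
});
dispatch_async(renderQueue, ^{
    UIImage *thumb = [UIImage imageFromView:chartView scaledToSize:thumbSize];
    dispatch_async(dispatch_get_main_queue(), ^{
        imageView.image = thumb; // hand results back to UIKit on the main thread
    });
});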
What's missing here, of course, is what you do with the images after using them. It's possible you are retaining them in some way that is not obvious. Here are some tricks:
in any of your classes that use lots of images, add a -dealloc method that just logs its name and some identifier (see the sketch after this list).
you can try to add such a dealloc to UIImage to do the same.
try to drive your app using the simplest possible setup - fewest images etc. - so you can verify that the images and their owners are in fact getting dealloc'd.
when you want to make sure something is released, set the ivar or property to nil.
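For the first two tricks, the logging dealloc is just this (under ARC there is no [super dealloc] call):
- (void)dealloc
{
    // If this never prints, something is still retaining the object.
    NSLog(@"dealloc %@ <%p>", NSStringFromClass([self class]), self);
}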
I converted a 100 file project to ARC last summer and it worked perfectly out of the box. I have converted several open source projects to ARC as well with only one problem when I improperly used bridging. The technology is rock solid.
This is not an answer to your question but I was trying to solve similar problems long before ARC was introduced.
Recently I was working on an application that cached images in memory and released them all after receiving a memory warning. This worked fine as long as I used the application at a normal speed (no crazy tapping). But when I started to generate a lot of events and many images started to load, the application did not manage to get the memory warning in time and crashed.
I once wrote a test application that created many autoreleased objects after tapping a button. I was able to tap faster (and create objects) than the OS managed to release the memory. The memory slowly increased, so after a significant time, or simply by using bigger objects, I could reliably crash the application and cause the device to reboot (looks really effective ;)). I checked that using Instruments, which unfortunately affected the test and made everything slower, but I suppose this is true also when not using Instruments.
On another occasion I was working on a bigger project that is quite complex and has a lot of UI created from code. It also does a lot of string processing, and nobody cared to use release - there were a few thousand autorelease calls when I last checked. So after 5 minutes of fairly heavy usage, this application was crashing and rebooting the device.
If I'm correct, the OS logic responsible for actually deallocating memory is not fast enough, or does not have a high enough priority, to save an application from crashing when a lot of memory operations are performed. I never confirmed these suspicions, and I don't know how to solve this problem other than simply reducing allocated memory.

How do I properly use CCSpriteFrameCache and CCSpriteBatchNode?

I do not understand what I do exactly when I add a CCSpriteFrameCache or CCSpriteBatchNode to my cocos2d application. Can somebody please explain the following points (it would be helpful if you could explain a few; please write the corresponding letter in front of your answer according to which question you are answering):
[all questions imply the achievement of best performance and lowest memory-use]
a) Is it crucial to create spritesheets for every single layer? (For example: Menu - own spritesheet, GameLayer - own spritesheet...)
b) Can somebody explain why I have to add sprites to the batch node, and what a batch node generally is?
b1) So, why can't I just do something like:
[[CCSpriteFrameCache sharedSpriteFrameCache] addSpriteFramesWithFile:@"menusprites.plist"];
CCSpriteBatchNode *spriteSheet = [CCSpriteBatchNode batchNodeWithFile:@"menusprites.png"];
[self addChild:spriteSheet];
And then just add sprites to my layer by calling
CCSprite *mySprite = [CCSprite spriteWithSpriteFrameName:@""];
[self addChild:mySprite];
without adding them to the batch node ? Because from what I understand it works like this :
I add my spritesheet with all the sprites on it to the screen. My app then goes into the plist and looks for the coordinates of the sprite I want to display and then places it on the screen. So why should I call
[spriteSheet addChild:mySprite];
?
c) How do I then get rid of the spritesheet for memory purposes when I do not need it anymore?
a) It is best to create as few spritesheets (CCSpriteBatchNodes) as possible. Sprite batching reduces draw calls, and draw calls are expensive. Still, every batch node itself costs one draw call, so you want to use as few as possible; the ultimate goal is to keep draw calls as low as possible.
b) The CCSpriteBatchNode renders all of its children in one go, in one batched draw call. That's why you need to add sprites to the batch node so it can render them all together. Only sprites using the same texture as the batch node can be added to a batch node, because you can only batch draw from the same texture. Whenever the engine has to switch from one texture to another, it issues a new draw call.
b1) You can't do this because the batch node renders its children. If you add the sprites to any other node, each sprite draws itself, which means one additional draw call per sprite. And the sprite batch node has nothing to do.
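Put together, the usual pattern looks like this (using the file names from the question; playbutton.png is an illustrative frame name):
// The frames and the batch node must come from the same texture atlas.
[[CCSpriteFrameCache sharedSpriteFrameCache] addSpriteFramesWithFile:@"menusprites.plist"];
CCSpriteBatchNode *batch = [CCSpriteBatchNode batchNodeWithFile:@"menusprites.png"];
[self addChild:batch];
CCSprite *sprite = [CCSprite spriteWithSpriteFrameName:@"playbutton.png"];
[batch addChild:sprite]; // rendered inside the batch's single draw call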
c) The CCSpriteBatchNode is just a regular node. You can remove it from the scene like any other node. The texture and sprite frames are cached in the CCTextureCache and CCSpriteFrameCache singleton classes. If you want to remove the textures and sprite frames from memory, you have to do it through the cache classes.
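For example, to drop one sheet's frames and any textures that are no longer referenced (a sketch using the cocos2d cache APIs):
[[CCSpriteFrameCache sharedSpriteFrameCache] removeSpriteFramesFromFile:@"menusprites.plist"];
[[CCTextureCache sharedTextureCache] removeUnusedTextures];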
a) no
b) batchNode increases your performance when you need to draw many sprites at the same time; with a small number of sprites (10, 20, etc.) I don't think you will notice any performance increase. batchNode is much faster because OpenGL only has to draw the batch node itself to show all its content; otherwise OpenGL draws each of your objects separately. That is, if you have 500, 600, 700 sprites, the draw() and visit() methods are called for each one; if they are all placed in the batchNode, there is only one draw() call and one visit() call.
c) you can purge cached data manually to force memory freeing by calling these methods:
[CCTextureCache purgeSharedTextureCache];
[CCSpriteFrameCache purgeSharedSpriteFrameCache];
[CCAnimationCache purgeSharedAnimationCache];

Frame synchronization with AVPlayer

I'm having an issue syncing external content in a CALayer with an AVPlayer at high precision.
My first thought was to lay out an array of frames (equal to the number of frames in the video) within a CAKeyframeAnimation and sync with an AVSynchronizedLayer. However, upon stepping through the video frame-by-frame, it appears that AVPlayer and Core Animation redraw on different cycles, as there is a slight (but noticeable) delay between them before they sync up.
Short of processing and displaying through Core Video, is there a way to accurately sync with an AVPlayer on the frame level?
Update: February 5, 2012
So far the best way I've found to do this is to pre-render through AVAssetExportSession coupled with AVVideoCompositionCoreAnimationTool and a CAKeyframeAnimation.
I'm still very interested in learning of any real-time ways to do this, however.
What do you mean by 'high precision'?
Although the docs claim that an AVAssetReader is not designed for real-time usage, in practice I have had no problems reading video in real time using it (cf. https://stackoverflow.com/a/4216161/42961). The returned frames come with a 'presentation timestamp' which you can fetch using CMSampleBufferGetPresentationTimeStamp.
You'll want one part of the project to be the 'master' timekeeper here. Assuming your CALayer animation is quick to compute and doesn't involve potentially blocking things like disk access, I'd use that as the master time source. When you need to draw content (e.g. in the draw selector of your UIView subclass), read currentTime from the CALayer animation, proceed through the AVAssetReader's video frames using copyNextSampleBuffer until CMSampleBufferGetPresentationTimeStamp returns >= currentTime, draw the frame, and then draw the CALayer animation content over the top.
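A rough sketch of that stepping logic (the reader setup is omitted; readerOutput and currentTime are assumptions):
// Advance through frames until we catch up with the animation clock.
CMSampleBufferRef sample = NULL;
while ((sample = [readerOutput copyNextSampleBuffer]) != NULL) {
    CMTime pts = CMSampleBufferGetPresentationTimeStamp(sample);
    if (CMTimeGetSeconds(pts) >= currentTime) {
        break; // this is the frame to draw beneath the CALayer content
    }
    CFRelease(sample); // too early; discard and keep reading
    sample = NULL;
}
if (sample) {
    // draw the video frame, then the CALayer animation content on top
    CFRelease(sample);
}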
If your player is using an AVURLAsset, did you load it with the precise duration flag set? I.e. something like:
NSDictionary *options = [NSDictionary dictionaryWithObject:[NSNumber numberWithBool:YES] forKey:AVURLAssetPreferPreciseDurationAndTimingKey];
AVURLAsset *urlAsset = [AVURLAsset URLAssetWithURL:aUrl options:options];

How to use CCSpriteBatchNode properly?

I add my sprite frames to CCSpriteFrameCache. Then I create a CCSpriteBatchNode with my desired image file.
This is what I don't quite understand:
When I make a CCSprite, if I want to take advantage of the CCSpriteBatchNode, do I need to initialize the CCSprite with [CCSprite spriteWithBatchNode:rect:]? But if that's the case, I don't see how I am taking advantage of CCSpriteFrameCache to get the frames, since I would now be making the rect manually.
So I guess I use [CCSprite spriteWithSpriteFrameName:] and then I add this sprite to the batch node. But I am still unsure.
You should use:
CCSprite *sp = [CCSprite spriteWithSpriteFrameName:@"monster.png"];
The .plist that you specified in the SpriteFrameCache will take care of the frames for you.
Then you create the sprite and add to the batch.
If you create the batchnode with a file called "myArt.png", you CAN ONLY add a sprite to it that is contained inside "myArt.png".
Hope it helps!
According to what I've learned of cocos2d, SpriteFrameCache and SpriteBatchNode achieve the same result but are used differently, and you may notice a slight performance difference if your game is very big...
CCSpriteFrameCache loads your frames, to be looked up by name according to the .plist file it was given. The atlas associated with the .plist has to be added to the project as well, or else the frames will be requested but nothing will be found to draw. The .plist is like an address for where each image is located inside the atlas.
The good part of CCSpriteFrameCache is that the code is neater and smaller than the CCSpriteBatchNode method, at the cost that every frame drawn this way goes to the atlas and is drawn separately.
CCSpriteBatchNode, on the other hand, loads the atlas and renders everything in one draw call. This is efficient because it reduces the number of draws that have to be issued per frame. The only difficulty here is that you need to do the math for the rectangle of each sprite in the atlas. Say your atlas contains 2 actions of a character, the atlas image file is 1024x1024, and each sprite is 128x128; you would do the math to get each rectangle for the whole jump action, for example. (This is where a .plist comes in handy, to avoid doing such math.)
The code gets complicated, as you can see, but it issues only one draw call, making it your best option performance-wise.
Another way to use CCSpriteBatchNode is with several static sprites: you do just one draw call for those multiple static images or sprites.
If you need example code just ask, I would be more than happy to provide it.
Update: adding a link for SpriteBatchNode and an example of my own.
SpriteBatchNode:
Example using SpriteBatchNode with Ray Wenderlich
I believe in this guy, and I have learned a lot of Cocos2d from his tutorials. I would suggest you read his other tutorials too.
In a nutshell, CCSpriteBatchNode is the exact same process we use below with the CCSpriteFrameCache; the ONLY difference is that you add the sprite as a child of the CCSpriteBatchNode and not the layer, BUT you do add the CCSpriteBatchNode to the layer.
This is the hard concept that newcomers to Cocos2d get tangled up in.
SpriteFrameCache:
I couldn't find a good example of SpriteFrameCache, so here is a simple one.
//By doing this your sprites are now in the cache, ready to be used
//via the names declared in the .plist file.
-(void) loadingSprites:(NSString*) plistName {
    [[CCSpriteFrameCache sharedSpriteFrameCache] addSpriteFramesWithFile:plistName];
}
-(id)initGameLayer {
    //CCSprite accepts a CCSpriteFrame; your image is now ready to be displayed.
    //However it is not drawn yet. (framename is the name declared in the .plist)
    CCSpriteFrame *frame = [[CCSpriteFrameCache sharedSpriteFrameCache] spriteFrameByName:framename];
    CCSprite *mySprite = [CCSprite spriteWithSpriteFrame:frame];
    //Set a position if desired:
    //20 pixels to the right and 0 pixels to the top.
    mySprite.position = CGPointMake(20, 0);
    //Now the image has been drawn, making 1 draw call.
    [self addChild:mySprite];
}
It is worth pointing out that CCSpriteBatchNode makes just 1 draw call; HOWEVER, all the sprites added to the batch node have to be part of the same sprite atlas.
Using SpriteFrameCache alone is easier and simpler, but every child added to the layer means +1 draw call. (This is the downside: performance.)
So if you add 10 sprites to the layer with SpriteFrameCache, you will have 10 draw calls.
However, if you instead add those 10 sprites to a CCSpriteBatchNode and just add that CCSpriteBatchNode to the layer, you will have the same 10 sprites on screen but only ONE draw call. Hence the performance difference (for the better) will be significant in larger games.
Hope it helps, Cheers!