I have an iPad app (>30 views/pages), and each view has a unique background.
The problem:
What's the best (most memory-friendly) way to set the background?
Is there a better way than adding a UIImageView "backgroundView" as a subview?
Version 1:
[[UIImage alloc] initWithData:imageData];
which seems to be problematic with the Retina switch
Version 2:
self.layer.contents = (id)image.CGImage;
Version 3:
UIImage *image = [UIImage imageWithContentsOfFile:fileLocation];
Version 2 seems to work fine. Maybe someone can tell me what the best approach is, and why ;)
Thank you,
Alex
CGImage is problematic with Retina ... version 3 is the most memory-friendly!
In version 2, you generate a new image object which you have to release manually if you don't use ARC. Version 3 uses an autoreleased object.
Both versions are equally memory-friendly.
I'd prefer version 3 because you don't have to do anything yourself to free the memory.
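For illustration, a minimal sketch of version 3 (the file name is hypothetical):

NSString *path = [[NSBundle mainBundle] pathForResource:@"background-1" ofType:@"png"];
UIImage *background = [UIImage imageWithContentsOfFile:path]; // autoreleased, not cached
backgroundView.image = background; // the UIImageView holding the background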
You could also use [UIImage imageNamed:@"image-name.png"], which also generates an autoreleased object.
If you want it as memory-friendly as possible, you should consider using PVR images, as those are natively supported by the graphics hardware.
Best,
Flo
Related
I have two PNGs in a Mac project, normal and @2x. Xcode combines these into a single TIFF with the @2x at index 0 and the @1x at index 1.
What is the suggested approach to get the appropriate image as CGImageRef version (for use with Quartz) for the current display scale?
I can get the image manually via CGImageSource:
NSBundle *mainBundle = [NSBundle mainBundle];
NSURL *URL = [mainBundle URLForResource:@"Canvas-Bkgd-Tile" withExtension:@"tiff"];
CGImageSourceRef source = CGImageSourceCreateWithURL((__bridge CFURLRef)(URL), NULL);
_patternImage = CGImageSourceCreateImageAtIndex(source, 1, NULL); // index 1 is @1x, index 0 is @2x
CFRelease(source);
I also found this to be working, but I am not certain that this will return the Retina version on a Retina display:
NSImage *patternImage = [NSImage imageNamed:@"Canvas-Bkgd-Tile.tiff"];
_patternImage = [patternImage CGImageForProposedRect:NULL context:nil hints:nil];
CGImageRetain(_patternImage); // retain image, because NSImage goes away
An acceptable answer to this question either provides a solution for getting the appropriate CGImage from a combined multi-resolution TIFF, explains why the second approach here works, or says what changes are required.
I am opting to answer "why the second approach here works".
In one of the WWDC videos published since 2010, they said that:
+[NSImage imageNamed:] chooses the best image representation object available for the current display.
So chances are that you are calling this class method from within a locked focus context (e.g. within a drawRect: method or similar), or maybe you actually called lockFocus yourself. Anyway, the result is that you get the most suitable image. But only when calling +[NSImage imageNamed:].
EDIT: Found it here:
http://adcdownload.apple.com//wwdc_2012/wwdc_2012_session_pdfs/session_213__introduction_to_high_resolution_on_os_x.pdf
Search for the keyword "best" in the slides: "NSImage automatically chooses best representation […]".
So, your second version will return the Retina version on a Retina display; you can be certain of it, as it is advertised in the documentation[*].
[*] This will only work if you provide valid artwork.
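If you would rather not depend on an implicit focus context, here is a sketch of one way to request the representation for a screen's backing scale explicitly (assuming OS X 10.7+; the CTM hint tells NSImage the scale you intend to draw at):

NSImage *patternImage = [NSImage imageNamed:@"Canvas-Bkgd-Tile.tiff"];
CGFloat scale = [[NSScreen mainScreen] backingScaleFactor];
NSAffineTransform *transform = [NSAffineTransform transform];
[transform scaleBy:scale];
NSDictionary *hints = [NSDictionary dictionaryWithObject:transform forKey:NSImageHintCTM];
CGImageRef cgImage = [patternImage CGImageForProposedRect:NULL context:nil hints:hints];
CGImageRetain(cgImage); // keep it alive past the NSImage, as in the question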
Instruments is telling me that a lot of memory is being allocated when I rapidly set the image of a UIImageView in my app. I have a UIImageView that changes its image every frame in my game. When profiled with zombie checking in Instruments, the app seems to be constantly gaining live bytes at an enormous rate. Is there a way I can deallocate the UIImageView's current image to stop this? I am using ARC.
My code to assign the UIImageView's image is as follows:
aPlanet.image = [UIImage imageNamed:tempPlanetName];
Where aPlanet is the UIImageView and tempPlanetName is the name of the image. This is called every frame.
The [UIImage imageNamed:] method loads the image and adds the newly created UIImage object to the autorelease pool. To get rid of this problem you should use:
NSString *imgPath = [[NSBundle mainBundle] pathForResource:@"imageName" ofType:@"png"];
aPlanet.image = [[UIImage alloc] initWithContentsOfFile:imgPath];
If you are using ARC, you don't need to bother about releasing this newly allocated UIImage object created with initWithContentsOfFile:.
When you use [UIImage imageNamed:], it will load and cache that image file. This is intended for the reuse of icons and other image resources that are used more than once in your application.
Apart from it seeming somewhat unusual to update an image view with a new image every frame, you should look into alternative means of loading images that you will not need more than once, or, even if you do, when you need more control over their lifecycle.
For example, have a look at [UIImage imageWithContentsOfFile:] (documented in the Apple Developer Library reference). The documentation explicitly states that this method does not cache the image contents.
I hope this helps you, but with a file load on every frame I doubt your performance will be good enough with this approach; that is probably the topic of a different question should the need arise.
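If the set of frame images is known up front, another sketch (the image names and the lookup dictionary are hypothetical) is to decode each file once and reuse the UIImage objects, avoiding both the imageNamed: cache and per-frame disk reads:

// Build the lookup table once, e.g. in viewDidLoad (ARC):
NSMutableDictionary *planetImages = [NSMutableDictionary dictionary];
for (NSString *name in [NSArray arrayWithObjects:@"planet-0", @"planet-1", nil]) {
    NSString *path = [[NSBundle mainBundle] pathForResource:name ofType:@"png"];
    [planetImages setObject:[[UIImage alloc] initWithContentsOfFile:path] forKey:name];
}

// Then, every frame, swap in an already-decoded image:
aPlanet.image = [planetImages objectForKey:tempPlanetName];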
I am developing a complete application using the "page-based storyboard" template.
But whenever I turn a page, I see through Instruments that the amount of allocated memory only increases and never decreases, until a crash occurs.
It crashes when trying on an iPad device, too.
To simplify and try to find the problem, I created a test app using the same template, opting for ARC, only loading the images of the pages I use and changing nothing in the original Apple code. Even in this simple application the problem occurs.
I think the problem is that all pages stay allocated, like in this issue:
PageViewController: How to release ViewControllers added to it?
but I am using a storyboard, so where is:
PageView *pView = [[PageView alloc] init];
I have:
MWViewController *dataViewController = [storyboard instantiateViewControllerWithIdentifier:@"MWDataViewController"];
I tried adding autorelease, but it had no effect.
The problem I was having is that I was using a background image on all pages, and the imageNamed: method caches images, which makes the memory footprint grow. I used the UIImage initWithContentsOfFile: method instead, and my footprint stayed mostly flat.
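A sketch of that fix (the file name and the image view outlet are hypothetical), placed wherever the page's view is configured:

// Load the per-page background without going through the imageNamed: cache (ARC):
NSString *path = [[NSBundle mainBundle] pathForResource:@"page-background" ofType:@"png"];
self.backgroundImageView.image = [[UIImage alloc] initWithContentsOfFile:path];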
I need to differentiate between Retina and normal screens in my iPhone app, similar to this:
#if TARGET_OS_IPHONE_VERSION < 3
NSString *uniquePath = [[NSHomeDirectory() stringByAppendingPathComponent:@"Documents"] stringByAppendingPathComponent:@"close.png"];
UIImage *image = [UIImage imageWithContentsOfFile: uniquePath];
#endif
#if TARGET_OS_IPHONE_VERSION >= 4
NSString *uniquePath = [[NSHomeDirectory() stringByAppendingPathComponent:@"Documents"] stringByAppendingPathComponent:@"close@2x.png"];
UIImage *image = [UIImage imageWithContentsOfFile: uniquePath];
#endif
Any idea?
No, you don't. Not with UIImage; it does that for you.
You can't do it with the preprocessor. You could define your own symbol, but I'm not sure what you'd do then. Somehow tell Apple that different versions of the app work on different devices?
It's better to do this at run time. Look at the UIScreen scale property.
Of course, normally you don't need to do this, as the other answer says. Most UIKit functions will add the @2x for you. There are some corner cases where you need to know, which is when the scale property comes into play.
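For illustration, a minimal runtime check (a sketch; the scale property exists on iOS 4+, so the respondsToSelector: guard covers older systems):

if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)] &&
    [[UIScreen mainScreen] scale] >= 2.0) {
    // Retina display
} else {
    // non-Retina display
}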
Preprocessor directives are resolved at compile time. In your example, this means the compiler will not decide between those blocks of code when it's running on an iPhone; it will decide when you build your application. So unless you're building a different application for each platform, you'll have to determine this at runtime.
I'm sure there's a way to find out which version of iOS/iPhone you're running on. Just do that when you initialize your app, or whenever you need this code, and then use an if/else there.
You don't need this. First of all, the preprocessor cannot help detect a Retina screen, since directives are resolved at compile time. But anyway, you don't need it, because of Apple's naming convention. Simply having a "close.png" in your project will be enough:
NSString *uniquePath = [[NSHomeDirectory() stringByAppendingPathComponent:@"Documents"] stringByAppendingPathComponent:@"close.png"];
UIImage *image = [UIImage imageWithContentsOfFile: uniquePath];
I think you just want this, though:
UIImage *image = [UIImage imageNamed:@"close.png"];
If close.png (and the double-size close@2x.png) is in your project when you build it, the second sample is the one to use. Test it; you will see that a Retina phone displays the @2x file.
In my application I needed something like a particle system, so I did the following:
While the application initializes, I load a UIImage:
laserImage = [UIImage imageNamed:@"laser.png"];
UIImage *laserImage is declared in the interface of my controller. Now, every time I need a new particle, this code makes one:
// add a new laser image
UIImageView *newLaser = [[UIImageView alloc] initWithImage:laserImage];
[newLaser setTag:[model.lasers count]-9];
[newLaser setBounds:CGRectMake(0, 0, 17, 1)];
[newLaser setOpaque:YES];
[self.view addSubview:newLaser];
[newLaser release];
Please note that the images are only 17px * 1px, and model.lasers is an internal array that keeps all the calculations separate from the graphical output. So, in my main drawing loop, I set all the UIImageViews' positions to the calculated positions in my model.lasers array:
for (int i = 0; i < [model.lasers count]; i++) {
[[self.view viewWithTag:i+10] setCenter:[[model.lasers objectAtIndex:i] pos]];
}
I incremented the tags by 10 because the default tag is 0, and I don't want to move all the views that have the default tag.
The animation looks fine with about 10-20 images but gets really slow with about 60 images. So my question is: is there any way to optimize this without starting over in OpenGL ES?
As jeff7 and FenderMostro said, you're using the high-level API (UIKit), and you'd get better performance using the lower-level APIs, either Core Animation or OpenGL (cocos2d is built on top of OpenGL).
Your best option would be to use CALayers instead of UIImageViews, get a CGImageRef from your UIImage and set it as the contents for these layers.
Also, you might want to keep a pool of CALayers and reuse them by hiding/showing as necessary. 60 CALayers of 17*1 pixels is not much, I've been doing it with hundreds of them without needing extra optimization.
This way, the images will already be decompressed and available in video memory. When using UIKit, everything goes through the CPU, not to mention the creation of UIViews, which are pretty heavy objects.
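A sketch of the CALayer approach described above (assuming laserImage is the UIImage from the question, and QuartzCore.framework is linked with #import <QuartzCore/QuartzCore.h>; the plain (id) cast matches the question's non-ARC code, under ARC it would need a __bridge cast):

// Create once per particle, ideally drawing from a reuse pool:
CALayer *laserLayer = [CALayer layer];
laserLayer.bounds = CGRectMake(0, 0, 17, 1);
laserLayer.contents = (id)laserImage.CGImage; // shared, already-decoded bitmap
[self.view.layer addSublayer:laserLayer];

// In the drawing loop, move it without the implicit animation layers get by default:
[CATransaction begin];
[CATransaction setDisableActions:YES];
laserLayer.position = [[model.lasers objectAtIndex:i] pos];
[CATransaction commit];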
It seems like you're trying to code a game using the UIKit API, which is not really suitable for this kind of purpose. You are expending the device's resources whenever you allocate a UIView, which incurs slowdowns because object creation is costly. You might be able to get the performance you want by dropping down to Core Animation, which is really good at drawing hundreds of images in a limited time frame, although it would still be much better to use OpenGL or an engine like cocos2d.
A UIImageView is made to display a single image OR an animated series of images. So, instead of creating a new UIImageView every time, you should consider creating a new image and adding it to an existing UIImageView instead.
I'd recommend starting over using OpenGL ES; there is an excellent framework called cocos2d for iPhone that can make this type of programming very easy and fast. From a quick look at your code, your lasers could be remodeled as CCSprites, which are an easy way to move images around a scene, among many other things.