App crashes when using __bridge for CoreGraphics gradient on ARC

I'm creating an application for iOS 5 and I'm drawing some gradients. I've always used the following gradient code before ARC, but now it no longer works on my device (it still works on the simulator) when I use it several times, so I suspect it's a memory management issue. Anyway, here's the code:
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGFloat locations[] = { 0.0, 1.0 };
NSArray *colors = [NSArray arrayWithObjects:(__bridge id)startColor, (__bridge id)endColor, nil];
CGGradientRef gradient = CGGradientCreateWithColors(colorSpace, (__bridge CFArrayRef) colors, locations);
CGPoint startPoint = CGPointMake(CGRectGetMidX(rect), CGRectGetMinY(rect));
CGPoint endPoint = CGPointMake(CGRectGetMidX(rect), CGRectGetMaxY(rect));
CGContextSaveGState(context);
CGContextAddRect(context, rect);
CGContextClip(context);
CGContextDrawLinearGradient(context, gradient, startPoint, endPoint, 0);
CGContextRestoreGState(context);
CGGradientRelease(gradient);
Originally there were no __bridge casts; I added them as suggested by Xcode. What exactly is causing the problem?

I ran into this exact same issue. I have resorted to using CGGradientCreateWithColorComponents which solves the problem for me. You have to convert your NSArray of CGColorRefs to an array of CGFloats.
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGFloat locations[2] = { 0.0, 1.0 };
CGFloat components[8] = { 0.909, 0.909, 0.909, 1.0,   // Start color
                          0.698, 0.698, 0.698, 1.0 }; // End color
CGGradientRef gradient = CGGradientCreateWithColorComponents(colorSpace, components,
                                                             locations, 2);
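If your start and end colors come in as CGColorRefs, the conversion itself is straightforward. Here's a rough sketch, assuming both colors are plain RGBA colors in the device RGB space (startColor and endColor are the variables from the question):
// Sketch: copy the RGBA components of two CGColorRefs into one flat array.
// Assumes both colors have exactly 4 components (RGBA); check
// CGColorGetNumberOfComponents() first if that isn't guaranteed.
CGFloat components[8];
const CGFloat *startComps = CGColorGetComponents(startColor);
const CGFloat *endComps = CGColorGetComponents(endColor);
for (int i = 0; i < 4; i++) {
    components[i]     = startComps[i];
    components[i + 4] = endComps[i];
}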

Your issue might be with the lifetime of the startColor, etc. variables. I'm guessing that these might be CGColorRefs created via UIColor's -CGColor method at some point above the code you've listed.
As I describe in this answer, unless you explicitly retain those CGColorRefs, they may go away after the UIColor that generated them has been deallocated. Given that you never use the UIColors again after you've extracted the CGColorRefs from them, ARC may decide to deallocate these UIColors before you've had a chance to use the CGColorRefs. I've seen object lifetimes differ between the Simulator and actual devices, so this could explain the crash on one but not the other.
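For illustration, explicitly retaining the colors might look like this (a sketch; the UIColor expressions stand in for however startColor and endColor are actually created):
CGColorRef startColor = CGColorRetain([[UIColor lightGrayColor] CGColor]); // keep alive past the UIColor
CGColorRef endColor = CGColorRetain([[UIColor darkGrayColor] CGColor]);
// ... build and draw the gradient ...
CGColorRelease(startColor);
CGColorRelease(endColor);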
My solution to this has been to use an immediate cast to id, like in the following:
NSArray *colors = [NSArray arrayWithObjects:(id)[color1 CGColor],
(id)[color2 CGColor], nil];
where the compiler does the right thing as far as transferring ownership of the CGColorRefs.
There's also the possibility that your NSArray is being deallocated early, in which case the following code might make sure it hangs around long enough:
NSArray *colors = [NSArray arrayWithObjects:(__bridge id)startColor, (__bridge id)endColor, nil];
CFArrayRef colorArray = (__bridge_retained CFArrayRef)colors;
CGGradientRef gradient = CGGradientCreateWithColors(colorSpace, colorArray, locations);
CFRelease(colorArray);
This bridges the NSArray across to Core Foundation, leaving behind a CFArrayRef with a retain count of at least 1. You can then use that in your gradient creation, where the gradient will hopefully keep a reference to it, and release it manually when done.
However, Bringo's suggestion to work entirely within Core Graphics' C API for this might be the easiest way to go here. I just thought I'd explain a potential source of your problems, in case you run into something similar in the future.

Related

Working CIAreaHistogram / CIHistogramDisplayFilter example for NSImage

Somehow I cannot figure out how to get an actual, meaningful histogram image from an NSImage input using the CIAreaHistogram and CIHistogramDisplayFilter filters.
I read Apple's "Core Image Filter Reference" and the relevant posts here on SO, but whatever I try I get no meaningful output.
Here's my code so far:
- (void)testHist3:(NSImage *)image {
    CIContext *context = [[NSGraphicsContext currentContext] CIContext];
    NSBitmapImageRep *rep = [image bitmapImageRepresentation];
    CIImage *ciImage = [[CIImage alloc] initWithBitmapImageRep:rep];
    ciImage = [CIFilter filterWithName:@"CIAreaHistogram" keysAndValues:kCIInputImageKey, ciImage, @"inputExtent", ciImage.extent, @"inputScale", [NSNumber numberWithFloat:1.0], @"inputCount", [NSNumber numberWithFloat:256.0], nil].outputImage;
    ciImage = [CIFilter filterWithName:@"CIHistogramDisplayFilter" keysAndValues:kCIInputImageKey, ciImage, @"inputHeight", [NSNumber numberWithFloat:100.0], @"inputHighLimit", [NSNumber numberWithFloat:1.0], @"inputLowLimit", [NSNumber numberWithFloat:0.0], nil].outputImage;
    CGImageRef cgImage2 = [context createCGImage:ciImage fromRect:ciImage.extent];
    NSImage *img2 = [[NSImage alloc] initWithCGImage:cgImage2 size:ciImage.extent.size];
    NSLog(@"Histogram image: %@", img2);
    self.histImage = img2;
}
What I get is a 64x100 image with zero representations (=invisible). If I create the CI context with
CIContext *context = [[CIContext alloc] init];
then the resulting image is grey, but at least it does have a representation:
Histogram image: <NSImage 0x6100002612c0 Size={64, 100} Reps=(
"<NSCGImageSnapshotRep:0x6100002620c0 cgImage=<CGImage 0x6100001a1880>>" )>
The input image is a 1024x768 JPEG image.
I have little experience with Core Image or Core Graphics, so the mistake might be with the conversion back to NSImage... any ideas?
Edit 2016-10-26: With rickster's very comprehensive answer I was able to make a lot of progress.
Indeed it was the inputExtent parameter that was messing up my result. Supplying a CIVector there solved the problem. I found that you cannot leave that to the default either; I don't know what the default value is, but it is not the input image's full size. (I found that out by running an image and a mirrored version of it through the filter; I got different histograms.)
Edit 2016-10-28:
So, I've got a working, displayable histogram now; my next step will be to figure out how the "intermediate" histogram (the 256x1 pixel image coming out of the filter) can contain the actual histogram information even though all but the last pixel are always (0, 0, 0, 0).
I presume the [image bitmapImageRepresentation] in your code is a local category method that's roughly equivalent to (NSBitmapImageRep *)image.representations[0]? Otherwise, first make sure that you're getting the right input.
Next, it looks like you're passing the raw output of ciImage.extent into your filter parameters — given that said parameter expects a CIVector object and not a CGRect struct, you're probably borking the input to your filter at run time. You can get a bit more useful diagnostics for such problems by using the dictionary-based filter methods filterWithName:withInputParameters: or imageByApplyingFilter:withInputParameters: — that way, if you try to pass something that isn't a proper object (such as a bare CGRect), you'll get an error at compile time instead of silently broken behavior at run time. The latter method also gives you an easy way to go straight from input image to output image, or to chain filters, without creating intermediary CIFilter objects and setting the input image on each.
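For example, wrapping the extent in a CIVector and going through the dictionary-based API might look like this (a sketch based on the ciImage from your code):
CIVector *extentVector = [CIVector vectorWithCGRect:ciImage.extent];
CIImage *hist = [ciImage imageByApplyingFilter:@"CIAreaHistogram"
                           withInputParameters:@{ @"inputExtent": extentVector,
                                                  @"inputCount": @256,
                                                  @"inputScale": @1.0 }];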
A related tip: most of the parameters you're passing are the default values for those filters, so you can pass only the values you need:
CIImage *hist = [inputImage imageByApplyingFilter:@"CIAreaHistogram"
                              withInputParameters:@{ @"inputCount": @256 }];
CIImage *outputImage = [hist imageByApplyingFilter:@"CIHistogramDisplayFilter"
                                withInputParameters:nil];
Finally, you might still get an almost-all-gray image out of CIHistogramDisplayFilter depending on what your input image looks like, because all of the histogram bins may have very small bars; that's what I see for the standard Lenna test image. Increasing the value for kCIInputScaleKey can help with that.
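For instance (a sketch; the scale value here is an arbitrary assumption chosen to make small bins visible):
CIImage *hist = [inputImage imageByApplyingFilter:@"CIAreaHistogram"
                              withInputParameters:@{ @"inputCount": @256,
                                                     kCIInputScaleKey: @10.0 }];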
Also, you don't need to go through CGImage to get from CIImage to NSImage — create an NSCIImageRep instead and AppKit will automatically manage a CIContext behind the scenes when it comes time to render the image for display/output.
// input from NSImage
NSBitmapImageRep *inRep = [nsImage bitmapImageRepresentation];
CIImage *inputImage = [[CIImage alloc] initWithBitmapImageRep:inRep];
CIImage *outputImage = // filter, rinse, repeat
// output to NSImage
NSCIImageRep *outRep = [NSCIImageRep imageRepWithCIImage: outputImage];
NSImage *outNSImage = [[NSImage alloc] init];
[outNSImage addRepresentation: outRep];

Memory issues with ARC on iOS and Mac

I am trying to mirror the screen of my Mac to my iPhone. I have this method in my Mac app delegate to capture the screen into a base64 string.
- (NSString *)baseString {
    CGImageRef screen = CGDisplayCreateImage(displays[0]);
    CGFloat w = CGImageGetWidth(screen);
    CGFloat h = CGImageGetHeight(screen);
    NSImage *image = [[NSImage alloc] initWithCGImage:screen size:(NSSize){w, h}];
    [image lockFocus];
    NSBitmapImageRep *bitmapRep = [[NSBitmapImageRep alloc] initWithFocusedViewRect:NSMakeRect(0, 0, w, h)];
    [bitmapRep setCompression:NSTIFFCompressionJPEG factor:.3];
    [image unlockFocus];
    NSData *imageData = [bitmapRep representationUsingType:NSJPEGFileType properties:_options];
    NSString *base64String = [imageData base64EncodedStringWithOptions:0];
    image = nil;
    bitmapRep = nil;
    imageData = nil;
    return base64String;
}
After that I send it to the iPhone and present it in a UIImageView.
The delay between screenshots is 40 milliseconds. Everything works as expected as long as there is enough memory. After a minute of streaming the Mac starts swapping and uses 6 GB of RAM; the iOS app's memory usage also grows linearly. By the time the iOS app reaches 90 MB of RAM, the Mac is at 6 GB.
Even if I stop streaming, the memory is not released.
I'm using ARC in both projects. Would it make any difference if I migrated to manual reference counting?
I also tried an @autoreleasepool {...} block, but it didn't help.
Any ideas?
EDIT
My iOS code is here:
NSString *message = [NSString stringWithFormat:@"data:image/png;base64,%@", base64];
NSURL *url = [NSURL URLWithString:message];
NSData *imageData = [NSData dataWithContentsOfURL:url];
UIImage *ret = [UIImage imageWithData:imageData];
self.image.image = ret;
You have a serious memory leak. The docs for CGDisplayCreateImage clearly state:
The caller is responsible for releasing the image created by calling CGImageRelease.
Update your code with a call to:
CGImageRelease(screen);
I'd add that just after creating the NSImage.
We can't help with your iOS memory leaks since you didn't post your iOS code, but I see a big memory leak in your Mac code.
You are calling a Core Foundation function, CGDisplayCreateImage. Core Foundation objects are not managed by ARC. If a Core Foundation function has "Create" (or "copy") in the name then it follows the "create rule" and you are responsible for releasing the returned CF object when you are done with it.
Some CF objects have special release calls. For those that don't, just call CFRelease. CGImageRef has a special release call, CGImageRelease().
You need a corresponding call to CGImageRelease(screen), probably after the call to initWithCGImage.
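For illustration, the start of the method with the release added might look like this (a sketch of the code from the question):
CGImageRef screen = CGDisplayCreateImage(displays[0]);
CGFloat w = CGImageGetWidth(screen);
CGFloat h = CGImageGetHeight(screen);
NSImage *image = [[NSImage alloc] initWithCGImage:screen size:(NSSize){w, h}];
CGImageRelease(screen); // balances CGDisplayCreateImage per the create rule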

Memory leak in image

I'm using the profiler in Xcode to determine whether I have any memory leaks. I didn't have this leak with Xcode 4, but with Xcode 5 I have this one.
I'm trying to set an image for the tab bar item of my UIViewController, and the profiler flags this line:
image = [[UIImage alloc] initWithContentsOfFile:imgPath]; <<=== Leak : 9.1%
This is the relevant part of my code, and I don't understand why it leaks. What's the best way to resolve this issue?
NSString *imgPath;
UIImage *image;
IBNewsViewController *newsView = [[IBNewsViewController alloc] initWithURL:[tvLocal urlFlux] title:@"News" isEmission:NO];
[newsView setTitle:@"News"];
imgPath = [[NSBundle mainBundle] pathForResource:@"news" ofType:@"png"];
image = [[UIImage alloc] initWithContentsOfFile:imgPath]; <<=== Leak : 9.1%
newsView.tabBarItem.image = image;
[image release];
image = nil;
UINavigationController* navNew = [[UINavigationController alloc] initWithRootViewController:newsView];
[newsView release];
newsView = nil;
EDIT:
There is no leak on iOS 6.
Why does it leak on iOS 7?
You should switch to the autoreleasing imageNamed: method. This has the added benefit of system-level caching of the image.
NSString *imgPath;
UIImage *image;
IBNewsViewController *newsView = [[IBNewsViewController alloc] initWithURL:[tvLocal urlFlux] title:@"News" isEmission:NO];
[newsView setTitle:@"News"];
image = [UIImage imageNamed:@"news"];
newsView.tabBarItem.image = image;
UINavigationController* navNew = [[UINavigationController alloc] initWithRootViewController:newsView];
[newsView release];
newsView = nil;
To make life easier on yourself, I'd switch your project to use ARC so you have less to worry about with regard to memory management.
Replace this line
image = [[UIImage alloc] initWithContentsOfFile:imgPath];
With
image = [UIImage imageWithContentsOfFile:imgPath];
and check whether the leak is still reported.
First, switch to ARC. There is no single thing you can do on iOS that will more improve your code and remove whole classes of memory problems with a single move.
Beyond that, the code above does not appear to have a leak itself. That suggests that the actual mistake is elsewhere. There are several ways this could happen:
You're leaking the IBNewsViewController somewhere else
IBNewsViewController messes with its tabBarItem incorrectly and leaks that
You're leaking the UINavigationController somewhere else
You're retaining the tabBarItem.image somewhere else and failing to release it
Those are the most likely that I would hunt for. If you're directly accessing ivars, that can often cause these kinds of mistakes. You should use accessors everywhere except in init and dealloc. (This is true in ARC, but is absolutely critical without ARC.)
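For example (a sketch under manual reference counting; the class and property names here are hypothetical, not taken from your project):
@interface NewsViewController : UIViewController
@property (nonatomic, retain) UIImage *tabImage;
@end

@implementation NewsViewController

- (void)updateTabImage:(UIImage *)newImage {
    // The synthesized setter releases the old value and retains the new one.
    self.tabImage = newImage;
    // Assigning the ivar directly (_tabImage = newImage;) would skip that
    // bookkeeping and is a common source of leaks or over-releases under MRC.
}

- (void)dealloc {
    [_tabImage release];
    [super dealloc];
}

@end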
Leak detection is not perfect. There are all kinds of "abandoned" memory that may not appear to be a leak. I often recommend using Heapshot (now "Generation") analysis to see what other objects may be abandoned; that may give you a better insight into this leak.
Why the difference between iOS 6 and iOS 7? I suspect you have the same problem on iOS 6, but it doesn't look like a "leak" there, possibly because something that was caching the image was removed in iOS 7. The cache pointer may make it look like it's not a leak to Instruments.
Speaking of which, do make sure to run the static analyzer. It can help you find problems.
And of course, switch to ARC.

Non-CALayer animation framework for arbitrary objects and properties

I've searched and searched and can't seem to find either a way to use CoreAnimation to animate properties on objects of custom classes or a 3rd party framework to accomplish the task. Can anyone shed some light on the subject?
My particular use case is that I wish to animate a property which gets passed as an OpenGL uniform on each draw.
Two fine options I've discovered:
PRTween: https://github.com/domhofmann/PRTween
and
POP (by Facebook): https://github.com/facebook/pop
PRTween wins on simplicity...
[PRTween tween:someObject property:@"myProp" from:1 to:0 duration:3];
POP has some interesting animation curves developed for the FB Paper app. It's a bit more convoluted for custom props but gives you more flexibility w/o further digging:
POPSpringAnimation *anim = [POPSpringAnimation animation];
POPAnimatableProperty *prop = [POPAnimatableProperty propertyWithName:@"com.foo.radio.volume" initializer:^(POPMutableAnimatableProperty *prop) {
    // read value
    prop.readBlock = ^(id obj, CGFloat values[]) {
        values[0] = [obj volume];
    };
    // write value
    prop.writeBlock = ^(id obj, const CGFloat values[]) {
        [obj setVolume:values[0]];
    };
    // dynamics threshold
    prop.threshold = 0.01;
}];
anim.property = prop;
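To actually drive the animation you would then set the target values and attach it to your object, roughly like this (a sketch; someObject stands in for whatever owns the volume property):
anim.fromValue = @(0.0);
anim.toValue = @(1.0);
[someObject pop_addAnimation:anim forKey:@"volume"];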

How to get [UIImage imageWithContentsOfFile:] and High Res Images working

As many people have reported, there seems to be a bug in the Apple SDK for the Retina display: imageWithContentsOfFile does not automatically load the 2x images.
I stumbled onto a nice post describing how to write a function that detects the UIScreen scale factor and loads the low- or high-res image accordingly ( http://atastypixel.com/blog/uiimage-resolution-independence-and-the-iphone-4s-retina-display/ ), but that solution loads the 2x image with the image's scale factor still set to 1.0, which results in a 2x image scaled up by 2 (so four times bigger than it should look).
imageNamed seems to load low- and high-res images correctly, but it is not an option for me.
Does anybody have a solution for loading low/high-res images without relying on the automatic loading of imageNamed or imageWithContentsOfFile? (Or, eventually, a way to make imageWithContentsOfFile work correctly?)
OK, the actual solution was found by Michael here:
http://atastypixel.com/blog/uiimage-resolution-independence-and-the-iphone-4s-retina-display/
He figured out that UIImage has an initializer that also takes a scale factor as input (I guess the only method where you can set the scale factor yourself):
-[UIImage initWithCGImage:scale:orientation:]
This seems to work great: you can load your high-res images yourself and simply set the scale factor to 2.0.
The problem with imageWithContentsOfFile is that since it currently does not work properly, we can't trust it even once it's fixed (because some users will still have an older iOS on their devices).
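A minimal sketch of that approach, using the equivalent class method (the resource name here is a hypothetical example):
NSString *path = [[NSBundle mainBundle] pathForResource:@"background@2x" ofType:@"png"];
UIImage *raw = [UIImage imageWithContentsOfFile:path];
UIImage *image = [UIImage imageWithCGImage:raw.CGImage
                                     scale:2.0
                               orientation:UIImageOrientationUp];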
We just ran into this here at work.
Here is my work-around that seems to hold water:
NSString *imgFile = ...path to your file;
NSData *imgData = [[NSData alloc] initWithContentsOfFile:imgFile];
UIImage *img = [[UIImage alloc] initWithData:imgData];
imageWithContentsOfFile works properly (loading @2x images with the correct scale) from iOS 4.1 onwards.
Enhancing Lisa Rossellis's answer to keep retina images at desired size (not scaling them up):
NSString *imagePath = ...Path to your image
UIImage *image = [UIImage imageWithData:[NSData dataWithContentsOfFile:imagePath] scale:[UIScreen mainScreen].scale];
I've developed a drop-in workaround for this problem.
It uses method swizzling to replace the behavior of the "imageWithContentsOfFile:" method of UIImage.
It works fine on iPhones/iPods pre/post retina.
Not sure about the iPad.
Hope this is of help.
#import <objc/runtime.h>

@implementation NSString (LoadHighDef)

/** If self is the path to an image, returns the nominal path to the high-res variant of that image */
- (NSString *)stringByInsertingHighResPathModifier {
    NSString *path = [self stringByDeletingPathExtension];

    // We determine whether a device modifier is present and, in case it is, where
    // the "split position" at which the "@2x" token is to be added lies
    NSArray *deviceModifiers = [NSArray arrayWithObjects:@"~iphone", @"~ipad", nil];
    NSInteger splitIdx = [path length];
    for (NSString *modifier in deviceModifiers) {
        if ([path hasSuffix:modifier]) {
            splitIdx -= [modifier length];
            break;
        }
    }

    // We insert the "@2x" token in the string at the proper position; if no
    // device modifier is present the token is added at the end of the string
    NSString *highDefPath = [NSString stringWithFormat:@"%@@2x%@", [path substringToIndex:splitIdx], [path substringFromIndex:splitIdx]];

    // We possibly add the extension, if there is any extension at all
    NSString *ext = [self pathExtension];
    return [ext length] > 0 ? [highDefPath stringByAppendingPathExtension:ext] : highDefPath;
}

@end
@implementation UIImage (LoadHighDef)

/* Upon loading this category, the implementation of "imageWithContentsOfFile:" is exchanged with the implementation
 * of our custom "imageWithContentsOfFile_custom:" method, whereby we replace and fix the behavior of the system selector. */
+ (void)load {
    Method originalMethod = class_getClassMethod([UIImage class], @selector(imageWithContentsOfFile:));
    Method replacementMethod = class_getClassMethod([UIImage class], @selector(imageWithContentsOfFile_custom:));
    method_exchangeImplementations(replacementMethod, originalMethod);
}

/** This method works just like the system "imageWithContentsOfFile:", but it loads the high-res version of the image
 * instead of the default one in case the device's screen is high-res and the high-res variant of the image is present.
 *
 * We assume that the original "imageWithContentsOfFile:" implementation properly sets the "scale" factor upon
 * loading a "@2x" image (this is its behavior as of OS 4.0.1).
 *
 * Note: The "imageWithContentsOfFile_custom:" invocations in this code are not recursive calls by virtue of
 * method swizzling. In fact, the original UIImage implementation of "imageWithContentsOfFile:" gets called.
 */
+ (UIImage *)imageWithContentsOfFile_custom:(NSString *)imgName {
    // If high-res is supported by the device...
    UIScreen *screen = [UIScreen mainScreen];
    if ([screen respondsToSelector:@selector(scale)] && [screen scale] >= 2.0) {
        // then we look for the high-res version of the image first
        UIImage *hiDefImg = [UIImage imageWithContentsOfFile_custom:[imgName stringByInsertingHighResPathModifier]];
        // If such a high-res version exists, we return it.
        // The scale factor will be correctly set because once you give imageWithContentsOfFile:
        // the full hi-res path it properly takes it into account
        if (hiDefImg != nil)
            return hiDefImg;
    }
    // If the device does not support high-res, or it does but there is
    // no high-res variant of imgName, we return the base version
    return [UIImage imageWithContentsOfFile_custom:imgName];
}

@end
[UIImage imageWithContentsOfFile:] doesn't load @2x graphics if you specify an absolute path.
Here is a solution:
- (UIImage *)loadRetinaImageIfAvailable:(NSString *)path {
    NSString *retinaPath = [[path stringByDeletingLastPathComponent] stringByAppendingPathComponent:[NSString stringWithFormat:@"%@@2x.%@", [[path lastPathComponent] stringByDeletingPathExtension], [path pathExtension]]];
    if ([UIScreen mainScreen].scale == 2.0 && [[NSFileManager defaultManager] fileExistsAtPath:retinaPath] == YES)
        return [[[UIImage alloc] initWithCGImage:[[UIImage imageWithData:[NSData dataWithContentsOfFile:retinaPath]] CGImage] scale:2.0 orientation:UIImageOrientationUp] autorelease];
    else
        return [UIImage imageWithContentsOfFile:path];
}
Credit goes to Christof Dorner for his simple solution (which I modified and pasted here).