CGImage of UIImage returns NULL - Objective-C

I created a function for splitting an image into multiple images, but when I take the CGImage of the UIImage, the CGImage returns NULL.
NSArray* splitImage(UIImage* image, NSUInteger pieces) {
    NSLog(@"width: %f, %zu", image.size.width, CGImageGetWidth(image.CGImage));
    NSLog(@"%@", image.CGImage); // logs (null) — image.CGImage returns NULL
    NSMutableArray* tempArray = [[NSMutableArray alloc] initWithCapacity:pieces];
    CGFloat piecesSize = image.size.height / pieces;
    for (NSUInteger i = 0; i < pieces; i++) {
        // take into account retina displays
        CGRect subFrame = CGRectMake(0, i * piecesSize * image.scale, image.size.width * image.scale, piecesSize * image.scale);
        CGImageRef newImage = CGImageCreateWithImageInRect(image.CGImage, subFrame);
        UIImage* finalImage = [UIImage imageWithCGImage:newImage];
        CGImageRelease(newImage);
        [tempArray addObject:finalImage];
    }
    NSArray* finalArray = [NSArray arrayWithArray:tempArray];
    [tempArray release];
    return finalArray;
}

In my case the UIImage was backed by a CIImage, so I rebuilt it from a CGImage rendered through a CIContext:
CIImage *ciImage = image.CIImage;
CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef ref = [context createCGImage:ciImage fromRect:ciImage.extent];
UIImage *newImage = [UIImage imageWithCGImage:ref];
CGImageRelease(ref); // createCGImage returns a +1 reference, so release it
And now newImage.CGImage is not nil.

The CGImage property will return nil if the UIImage was created from another image source, such as an IOSurface or a CIImage. To get around this in that particular case, I can create a CGImage from an IOSurface using the C function below, then convert that to a UIImage.
UICreateCGImageFromIOSurface(IOSurfaceRef surface);
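A more general version of that workaround (my own sketch, not from the original answer; normalizedImage is a hypothetical helper name) checks the backing store and re-renders through Core Image only when needed:
// Returns a CGImage-backed UIImage, re-rendering through Core Image if necessary.
UIImage *normalizedImage(UIImage *image) {
    if (image.CGImage != NULL) {
        return image; // already backed by a CGImage
    }
    CIImage *ciImage = image.CIImage;
    if (ciImage == nil) {
        return nil; // backed by neither a CGImage nor a CIImage
    }
    CIContext *context = [CIContext contextWithOptions:nil];
    CGImageRef ref = [context createCGImage:ciImage fromRect:ciImage.extent];
    UIImage *result = [UIImage imageWithCGImage:ref scale:image.scale orientation:image.imageOrientation];
    CGImageRelease(ref); // createCGImage returns a +1 reference
    return result;
}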

Convert CIImage to UIImage with this, and cgImage will not be nil:
func convert(cmage: CIImage) -> UIImage {
    let context = CIContext(options: nil)
    let cgImage = context.createCGImage(cmage, from: cmage.extent)!
    let image = UIImage(cgImage: cgImage)
    return image
}

What you did just creates pieces with zero width; you should use two nested for loops to define the width and height of your pieces. A code sample like this (not tested yet, but it should work):
for (int height = 0; height < image.size.height; height += piecesSize) {
    for (int width = 0; width < image.size.width; width += piecesSize) {
        CGRect subFrame = CGRectMake(width, height, piecesSize, piecesSize);
        CGImageRef newImage = CGImageCreateWithImageInRect(image.CGImage, subFrame);
        UIImage *finalImage = [UIImage imageWithCGImage:newImage];
        CGImageRelease(newImage);
        [tempArray addObject:finalImage];
    }
}
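If the source is a Retina (@2x) image, the crop rects presumably need the same scale factor the question applies. A scale-aware variation of the loop (my sketch, untested):
CGFloat scale = image.scale; // 2.0 on Retina displays
for (int y = 0; y < image.size.height * scale; y += piecesSize * scale) {
    for (int x = 0; x < image.size.width * scale; x += piecesSize * scale) {
        CGRect subFrame = CGRectMake(x, y, piecesSize * scale, piecesSize * scale);
        CGImageRef newImage = CGImageCreateWithImageInRect(image.CGImage, subFrame);
        // preserve the scale so the pieces display at the right point size
        UIImage *finalImage = [UIImage imageWithCGImage:newImage scale:scale orientation:UIImageOrientationUp];
        CGImageRelease(newImage);
        [tempArray addObject:finalImage];
    }
}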

This happens in some cases when we try to crop an image. I found a solution; try this, maybe it can help you:
NSData *imageData = UIImageJPEGRepresentation(yourImage, 0.9);
newImage = [UIImage imageWithData:imageData];
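Note that JPEG re-encoding is lossy; if exact pixel data matters, the same round-trip should also work with PNG (my variation, untested):
NSData *imageData = UIImagePNGRepresentation(yourImage);
newImage = [UIImage imageWithData:imageData];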

Related

Objective-C: Memory leak with CGDataProviderCopyData

With the help of Instruments, it is shown that CGDataProviderCopyData is using too much memory. How can I fix the issue?
-(UIImage*)imageNamed:(NSString*)name {
    UIImage *uiimage = [UIImage imageNamed:name];
    CGImageRef originalImage = [uiimage CGImage];
    CFDataRef imageData = CGDataProviderCopyData(CGImageGetDataProvider(originalImage));
    CGDataProviderRef imageDataProvider = CGDataProviderCreateWithCFData(imageData);
    CFRelease(imageData);
    CGImageRef image = CGImageCreate(CGImageGetWidth(originalImage),
                                     CGImageGetHeight(originalImage),
                                     CGImageGetBitsPerComponent(originalImage),
                                     CGImageGetBitsPerPixel(originalImage),
                                     CGImageGetBytesPerRow(originalImage),
                                     CGImageGetColorSpace(originalImage),
                                     CGImageGetBitmapInfo(originalImage),
                                     imageDataProvider,
                                     CGImageGetDecode(originalImage),
                                     CGImageGetShouldInterpolate(originalImage),
                                     CGImageGetRenderingIntent(originalImage));
    CGDataProviderRelease(imageDataProvider);
    return [UIImage imageWithCGImage:image];
}
Finally, I was able to solve this. There were actually a few extra, unnecessary steps that were causing the memory leaks. Here is the updated function:
-(UIImage*)imageNamed:(NSString*)name {
    UIImage *uiimage = [UIImage imageNamed:name];
    CGImageRef originalImage = [uiimage CGImage];
    CGImageRef image = CGImageCreate(CGImageGetWidth(originalImage),
                                     CGImageGetHeight(originalImage),
                                     CGImageGetBitsPerComponent(originalImage),
                                     CGImageGetBitsPerPixel(originalImage),
                                     CGImageGetBytesPerRow(originalImage),
                                     CGImageGetColorSpace(originalImage),
                                     CGImageGetBitmapInfo(originalImage),
                                     CGImageGetDataProvider(originalImage),
                                     CGImageGetDecode(originalImage),
                                     CGImageGetShouldInterpolate(originalImage),
                                     CGImageGetRenderingIntent(originalImage));
    return [UIImage imageWithCGImage:image];
}
CGImageGetDataProvider(originalImage) returns a CGDataProviderRef, which is what the 8th parameter of CGImageCreate requires. The earlier steps of copying the image data and then creating a CGDataProviderRef from that copy were unnecessary.
The problem is that you never release image.
Update your code as follows:
-(UIImage*)imageNamed:(NSString*)name {
    UIImage *uiimage = [UIImage imageNamed:name];
    CGImageRef originalImage = [uiimage CGImage];
    CFDataRef imageData = CGDataProviderCopyData(CGImageGetDataProvider(originalImage));
    CGDataProviderRef imageDataProvider = CGDataProviderCreateWithCFData(imageData);
    CFRelease(imageData);
    CGImageRef image = CGImageCreate(CGImageGetWidth(originalImage),
                                     CGImageGetHeight(originalImage),
                                     CGImageGetBitsPerComponent(originalImage),
                                     CGImageGetBitsPerPixel(originalImage),
                                     CGImageGetBytesPerRow(originalImage),
                                     CGImageGetColorSpace(originalImage),
                                     CGImageGetBitmapInfo(originalImage),
                                     imageDataProvider,
                                     CGImageGetDecode(originalImage),
                                     CGImageGetShouldInterpolate(originalImage),
                                     CGImageGetRenderingIntent(originalImage));
    CGDataProviderRelease(imageDataProvider);
    UIImage *result = [UIImage imageWithCGImage:image];
    CGImageRelease(image);
    return result;
}

How to control memory usage when applying CIFilters?

When I apply CIFilters to images the memory usage keeps growing and I don't know what to do.
I've tried everything I could:
using @autoreleasepool:
- (UIImage *)applySepiaToneTo:(UIImage *)img //Sepia
{
    @autoreleasepool
    {
        CIImage *ciimageToFilter = [CIImage imageWithCGImage:img.CGImage];
        CIFilter *sepia = [CIFilter filterWithName:@"CISepiaTone"
                                     keysAndValues:kCIInputImageKey, ciimageToFilter,
                                                   @"inputIntensity", @1.0, nil];
        return [self retrieveFilteredImageWithFilter:sepia];
    }
}
- (UIImage *)retrieveFilteredImageWithFilter:(CIFilter *)filtro
{
    @autoreleasepool
    {
        CIImage *ciimageFiltered = [filtro outputImage];
        CGImageRef cgimg = [_context createCGImage:ciimageFiltered
                                          fromRect:[ciimageFiltered extent]];
        UIImage *filteredImage = [UIImage imageWithCGImage:cgimg];
        CGImageRelease(cgimg);
        return filteredImage;
    }
}
I'm also downsizing the image to be filtered and doing the filtering in a background thread:
- (void)filterWasSelected:(NSNotification *)notification
{
    self.darkeningView.alpha = 0.5;
    self.darkeningView.userInteractionEnabled = YES;
    [self.view bringSubviewToFront:self.darkeningView];
    [self.activityIndic startAnimating];
    [self.view bringSubviewToFront:self.activityIndic];
    int indice = [notification.object intValue];
    __block NSArray *returnObj;
    __block UIImage *auxUiimage;
    if (choosenImage.size.width == 1280 || choosenImage.size.height == 1280)
    {
        UIImageView *iv;
        if (choosenImage.size.width >= choosenImage.size.height)
        {
            float altura = (320 * choosenImage.size.height) / choosenImage.size.width;
            iv = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 320, altura)];
            iv.image = choosenImage;
        }
        else
        {
            float largura = (choosenImage.size.width * 320) / choosenImage.size.height;
            iv = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, largura, 320)];
            iv.image = choosenImage;
        }
        UIGraphicsBeginImageContextWithOptions(iv.bounds.size, YES, 0.0);
        [iv.layer renderInContext:UIGraphicsGetCurrentContext()];
        auxUiimage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
    }
    else
        auxUiimage = choosenImage;
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0), ^{
        if (artisticCollection)
            returnObj = [self.filterCoordinator setupFilterArtisticType:indice toImage:auxUiimage];
        else
            returnObj = [self.filterCoordinator setupFilterOldOrVintageType:indice toImage:auxUiimage];
        dispatch_async(dispatch_get_main_queue(), ^{
            self.darkeningView.alpha = 0.3;
            self.darkeningView.userInteractionEnabled = NO;
            [self.activityIndic stopAnimating];
            [self.view bringSubviewToFront:stageBackground];
            [self.view bringSubviewToFront:stage];
            [self.view bringSubviewToFront:self.filtersContainerView];
            [self.view bringSubviewToFront:self.framesContainerView];
            [self.view bringSubviewToFront:self.colorsContainerView];
            if (returnObj)
            {
                auxUiimage = [returnObj firstObject];
                NSLog(@"filtered image width = %f and height = %f", auxUiimage.size.width, auxUiimage.size.height);
                returnObj = nil;
                choosenImageContainer.image = auxUiimage;
            }
        });
    });
}
I've also tried creating the context using the contextWithEAGLContext: method; nothing changed.
I've researched a lot, including Stack Overflow, and found nothing.
Until I place the image in the image view (the image comes from the photo album), I'm only using 23 MB of memory; when I apply a filter, usage jumps to 51 MB and does not come down. If I continue to apply other filters, the memory usage only grows.
There's no leaking in my app; I've checked in Instruments.
Also, the bringSubviewToFront calls are not responsible; I've checked.
It's in the creation of the CIImage followed by the creation of the CIFilter object.
I know that in the process of applying the filter, data is loaded into memory, but how do I clean up the memory after applying the filter?
Is there any secret that I'm not aware of? Please help.
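One pattern that may help (a sketch of my own, not from the original thread; FilterHelper is a hypothetical class name): create the CIContext once and reuse it for every render, and wrap each filter pass in @autoreleasepool so intermediates are drained promptly. Assumes ARC.
@interface FilterHelper : NSObject
- (UIImage *)applyFilter:(NSString *)filterName toImage:(UIImage *)img;
@end

@implementation FilterHelper {
    CIContext *_sharedContext; // created once, reused for every render
}
- (instancetype)init {
    if ((self = [super init])) {
        _sharedContext = [CIContext contextWithOptions:nil];
    }
    return self;
}
- (UIImage *)applyFilter:(NSString *)filterName toImage:(UIImage *)img {
    @autoreleasepool {
        CIImage *input = [CIImage imageWithCGImage:img.CGImage];
        CIFilter *filter = [CIFilter filterWithName:filterName];
        [filter setValue:input forKey:kCIInputImageKey];
        CGImageRef cgimg = [_sharedContext createCGImage:filter.outputImage
                                                fromRect:filter.outputImage.extent];
        UIImage *result = [UIImage imageWithCGImage:cgimg];
        CGImageRelease(cgimg); // createCGImage returns a +1 reference
        return result;
    }
}
@end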

UIImage from CIImage - Data length is zero?

I'm using an AVCaptureVideoDataOutput along with its delegate method to manipulate video frames. In the delegate method, I am using the sampleBuffer to create a CIImage, and from here I crop the CIImage, convert it to a UIImage and display it. Unfortunately, I need to determine the file-size of this new UIImage, but it's returning 0. The code works, the image is cropped beautifully, everything. I just don't see why it has no data!
Why might this be? Relevant code follows:
//In delegate method, given sampleBuffer...
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CFDictionaryRef attachments = CMCopyDictionaryOfAttachments(kCFAllocatorDefault, sampleBuffer, kCMAttachmentMode_ShouldPropagate);
CIImage *ciImage = [[CIImage alloc] initWithCVPixelBuffer:pixelBuffer options:(NSDictionary *)attachments];
...
dispatch_async(dispatch_get_main_queue(), ^(void) {
    CGRect rect = [self drawFaceBoxesForFeatures:features forVideoBox:clap orientation:curDeviceOrientation];
    CIImage *cropped = [ciImage imageByCroppingToRect:rect];
    UIImage *image = [[UIImage alloc] initWithCIImage:cropped];
    NSData *data = UIImageJPEGRepresentation(image, 1);
    NSLog(@"Image size is %lu", (unsigned long)data.length); //returns 0???
    [imageView setImage:image];
    [image release];
});
I had the same problem, but with simple filtered images.
I stumbled upon this and it solved the issue; after this, I was able to save my image.
CGSize size = self.originalImage.size;
CGRect rect;
rect.origin = CGPointZero;
rect.size = size;
UIGraphicsBeginImageContext(size);
[[UIImage imageWithCIImage:self.filteredImage] drawInRect:rect];
UIImage * image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
NSData * jpegData = UIImageJPEGRepresentation(image, 1.0);
But I only needed these two lines in the "ImageContext".

Render a single UIImage and rotation has to be applied

I have rendered a few UIImage objects using CGContextDrawImage, but when I apply a rotation to the image, it is not applied and the view disappears.
Code:
-(void)renderImage:(ItemView *)array
{
    NSArray *selectedImages = self.slideView.selectedView.subviews;
    CGSize combinedSize = CGSizeMake(0, 0);
    for (int i = 0; i < [selectedImages count]; ++i) {
        CGSize sourceSize = [selectedImages[i] size];
        NSLog(@"sdfsd %f %f ", sourceSize.width, sourceSize.height);
        combinedSize.width = MAX(combinedSize.width, sourceSize.width);
        combinedSize.height += sourceSize.height;
    }
    UIGraphicsBeginImageContextWithOptions(self.view.frame.size, NO, 0);
    CGContextRef context = UIGraphicsGetCurrentContext();
    {
        CGContextTranslateCTM(context, 0, 768);
        CGContextScaleCTM(context, 1, -1);
        for (int i = 0; i < [selectedImages count]; ++i) {
            UIImageView *imageview = selectedImages[i];
            UIImage *sourceImage = imageview.image;
            CGContextSaveGState(context);
            float radians1 = atan2(imageview.transform.a, imageview.transform.b);
            CGFloat angle = [(NSNumber *)[imageview valueForKeyPath:@"layer.transform.rotation.z"] floatValue];
            printf("\n radians %f", radians1);
            printf("\n angle %f", angle);
            CGContextRotateCTM(context, radians(angle));
            CGContextDrawImage(context, imageview.frame, imageview.image.CGImage);
            CGContextRestoreGState(context);
            //CGContextDrawImage(context, imageview.frame, imageview.image);
        }
    }
    UIImage *combinedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    UIImageWriteToSavedPhotosAlbum(combinedImage, nil, nil, nil);
}
Thanks. Any help would be appreciated.
Let's say you have an array of source images:
NSArray *sourceImages = ...;
// each element of sourceImages is a UIImage
Now you want to take the first count images from the array and concatenate them vertically into a new image. Start by figuring out the size of the combined image:
CGSize combinedSize = CGSizeMake(0, 0);
for (int i = 0; i < count; ++i) {
CGSize sourceSize = [sourceImages[i] size];
combinedSize.width = MAX(combinedSize.width, sourceSize.width);
combinedSize.height += sourceSize.height;
}
Next, create a graphics context of that size:
UIGraphicsBeginImageContextWithOptions(combinedSize, NO, 0); {
Now draw each component image into the graphics context at the appropriate position:
CGFloat y = 0;
for (int i = 0; i < count; ++i) {
UIImage *sourceImage = sourceImages[i];
[sourceImage drawAtPoint:CGPointMake(0, y)];
y += sourceImage.size.height;
}
Finally, create the combined image from the context and dispose of the context:
}
UIImage *combinedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Read Drawing and Printing Guide for iOS - Drawing and Creating Images for more information.
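Assembled into a single method, the pieces above might read like this (my consolidation of the answer's fragments; concatenateImages:count: is a name I've chosen):
- (UIImage *)concatenateImages:(NSArray *)sourceImages count:(NSUInteger)count
{
    // Combined size: the widest source image by the sum of all heights.
    CGSize combinedSize = CGSizeMake(0, 0);
    for (NSUInteger i = 0; i < count; ++i) {
        CGSize sourceSize = [sourceImages[i] size];
        combinedSize.width = MAX(combinedSize.width, sourceSize.width);
        combinedSize.height += sourceSize.height;
    }
    UIGraphicsBeginImageContextWithOptions(combinedSize, NO, 0);
    // Draw each image directly below the previous one.
    CGFloat y = 0;
    for (NSUInteger i = 0; i < count; ++i) {
        UIImage *sourceImage = sourceImages[i];
        [sourceImage drawAtPoint:CGPointMake(0, y)];
        y += sourceImage.size.height;
    }
    UIImage *combinedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return combinedImage;
}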
I cracked it. @rob mayoff: thanks so much for helping me out.
Code:
//code to render group of images excluding the image which has been selected in a particular view
-(void)renderImage:(ItemView *)selectedItem
{
    NSArray *selectedImages = self.slideView.selectedView.subviews;
    int selectedItemIndex = [selectedImages indexOfObject:selectedItem];
    if (selectedItem.image == nil) {
        [selectedItem loadImageFromFile];
    }
    // first set
    UIGraphicsBeginImageContextWithOptions(self.slideView.selectedView.frame.size, NO, 0);
    for (int i = 0; i < selectedItemIndex; ++i) {
        if ([selectedImages[i] isKindOfClass:[ItemView class]]) {
            ItemView *imageview = (ItemView *)selectedImages[i];
            CGAffineTransform transform = imageview.transform;
            imageview.transform = CGAffineTransformIdentity;
            UIImage *image = [[UIImage alloc] initWithContentsOfFile:imageview.filename];
            UIImageView *renderImageView = [[UIImageView alloc] initWithImage:image]; //[imageview copy];
            renderImageView.frame = imageview.frame;
            renderImageView.bounds = imageview.bounds;
            imageview.transform = transform;
            renderImageView.transform = transform;
            renderImageView.center = CGPointMake(imageview.frame.size.width/2, imageview.frame.size.height/2);
            UIView *view = [[UIView alloc] initWithFrame:renderImageView.frame];
            view.backgroundColor = [UIColor clearColor];
            [view addSubview:renderImageView];
            [renderImageView release];
            [image release];
            imageview.image = nil;
            UIGraphicsBeginImageContext(imageview.frame.size);
            CGContextRef context1 = UIGraphicsGetCurrentContext();
            [view.layer renderInContext:context1];
            UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
            UIGraphicsEndImageContext();
            [view release];
            [result drawAtPoint:imageview.frame.origin];
        }
    }
    UIImage *combinedImage1 = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    //UIImageWriteToSavedPhotosAlbum(combinedImage1, nil, nil, nil);
    if (groupBgImage1) {
        [groupBgImage1 removeFromSuperview];
        [groupBgImage1 release];
        groupBgImage1 = nil;
    }
    groupBgImage1 = [[UIImageView alloc] initWithImage:combinedImage1];
    [self.slideView.selectedView insertSubview:groupBgImage1 belowSubview:selectedItem];
    // second set
    UIGraphicsBeginImageContextWithOptions(self.slideView.selectedView.frame.size, NO, 0);
    for (int i = selectedItemIndex + 1; i < [selectedImages count]; ++i) {
        if ([selectedImages[i] isKindOfClass:[ItemView class]]) {
            ItemView *imageview = (ItemView *)selectedImages[i];
            CGAffineTransform transform = imageview.transform;
            imageview.transform = CGAffineTransformIdentity;
            UIImage *image = [[UIImage alloc] initWithContentsOfFile:imageview.filename];
            UIImageView *renderImageView = [[UIImageView alloc] initWithImage:image]; //[imageview copy];
            renderImageView.frame = imageview.frame;
            renderImageView.bounds = imageview.bounds;
            imageview.transform = transform;
            renderImageView.transform = transform;
            renderImageView.center = CGPointMake(imageview.frame.size.width/2, imageview.frame.size.height/2);
            UIView *view = [[UIView alloc] initWithFrame:renderImageView.frame];
            view.backgroundColor = [UIColor clearColor];
            [view addSubview:renderImageView];
            [renderImageView release];
            [image release];
            imageview.image = nil;
            UIGraphicsBeginImageContext(imageview.frame.size);
            CGContextRef context1 = UIGraphicsGetCurrentContext();
            [view.layer renderInContext:context1];
            UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
            UIGraphicsEndImageContext();
            [view release];
            [result drawAtPoint:imageview.frame.origin];
        }
    }
    UIImage *combinedImage2 = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    UIImageWriteToSavedPhotosAlbum(combinedImage2, nil, nil, nil);
    if (groupBgImage2) {
        [groupBgImage2 removeFromSuperview];
        [groupBgImage2 release];
        groupBgImage2 = nil;
    }
    groupBgImage2 = [[UIImageView alloc] initWithImage:combinedImage2];
    [self.slideView.selectedView insertSubview:groupBgImage2 aboveSubview:selectedItem];
}

NSImage doesn't scale

I'm developing a quick app in which I have a method that should rescale a @2x image to a regular one. The problem is that it doesn't :(
Why?
-(BOOL)createNormalImage:(NSString*)inputRetinaImagePath {
    NSImage *inputRetinaImage = [[NSImage alloc] initWithContentsOfFile:inputRetinaImagePath];
    NSSize size = NSZeroSize;
    size.width = inputRetinaImage.size.width * 0.5;
    size.height = inputRetinaImage.size.height * 0.5;
    [inputRetinaImage setSize:size];
    NSLog(@"%f", inputRetinaImage.size.height);
    NSBitmapImageRep *imgRep = [[inputRetinaImage representations] objectAtIndex:0];
    NSData *data = [imgRep representationUsingType:NSPNGFileType properties:nil];
    NSString *outputFilePath = [[inputRetinaImagePath substringToIndex:inputRetinaImagePath.length - 7] stringByAppendingString:@".png"];
    NSLog(@"Normal version file path: %@", outputFilePath);
    [data writeToFile:outputFilePath atomically:NO];
    return true;
}
You have to be very wary of the size attribute of an NSImage. It doesn't necessarily refer to the bitmapRepresentation's pixel dimensions; it could refer to the displayed size, for example. An NSImage may have a number of bitmapRepresentations for use at different output sizes.
Likewise, changing the size attribute of an NSImage does nothing to alter the bitmapRepresentations.
So what you need to do is work out the size you want your output image to be, and then draw a new image at that size using a bitmapRepresentation from the source NSImage.
Getting that size depends on how you have obtained your input image and what you know about it. For example, if you are confident that your input image has only one bitmapImageRep you can use this type of thing (as a category on NSImage)
- (NSSize)pixelSize
{
    NSBitmapImageRep *bitmap = [[self representations] objectAtIndex:0];
    return NSMakeSize(bitmap.pixelsWide, bitmap.pixelsHigh);
}
Even if you have a number of bitmapImageReps, the first one should be the largest one, and if that is the size that your Retina image was created at, it should be the Retina size you are after.
When you have worked out your final size, you can make the image:
- (NSImage *)resizeImage:(NSImage *)sourceImage size:(NSSize)size
{
    NSRect targetFrame = NSMakeRect(0, 0, size.width, size.height);
    NSImage *targetImage = nil;
    NSImageRep *sourceImageRep = [sourceImage bestRepresentationForRect:targetFrame
                                                                context:nil
                                                                  hints:nil];
    targetImage = [[NSImage alloc] initWithSize:size];
    [targetImage lockFocus];
    [sourceImageRep drawInRect:targetFrame];
    [targetImage unlockFocus];
    return targetImage;
}
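For example, halving a Retina image could then look like this (a usage sketch combining the pixelSize category and resizeImage: above):
NSSize pixels = [inputRetinaImage pixelSize];
NSSize halfSize = NSMakeSize(pixels.width * 0.5, pixels.height * 0.5);
NSImage *normalImage = [self resizeImage:inputRetinaImage size:halfSize];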
update
Here is a more elaborate version of a pixel-size-getting category on NSImage. Let's assume nothing about the image: how many imageReps it has, or whether it has any bitmapImageReps. This will return the largest pixel dimensions it can find. If it can't find bitmapImageRep pixel dimensions, it will use whatever else it can get, most likely bounding-box dimensions (used by EPS and PDFs).
NSImage+PixelSize.h
#import <Cocoa/Cocoa.h>
#import <QuartzCore/QuartzCore.h>
@interface NSImage (PixelSize)
- (NSInteger)pixelsWide;
- (NSInteger)pixelsHigh;
- (NSSize)pixelSize;
@end
NSImage+PixelSize.m
#import "NSImage+PixelSize.h"
@implementation NSImage (PixelSize)
- (NSInteger)pixelsWide
{
    /*
     returns the pixel width of the NSImage.
     Selects the largest bitmapRep by preference.
     If there is no bitmapRep, returns the largest size reported by any imageRep.
     */
    NSInteger result = 0;
    NSInteger bitmapResult = 0;
    for (NSImageRep *imageRep in [self representations]) {
        if ([imageRep isKindOfClass:[NSBitmapImageRep class]]) {
            if (imageRep.pixelsWide > bitmapResult)
                bitmapResult = imageRep.pixelsWide;
        } else {
            if (imageRep.pixelsWide > result)
                result = imageRep.pixelsWide;
        }
    }
    if (bitmapResult) result = bitmapResult;
    return result;
}
- (NSInteger)pixelsHigh
{
    /*
     returns the pixel height of the NSImage.
     Selects the largest bitmapRep by preference.
     If there is no bitmapRep, returns the largest size reported by any imageRep.
     */
    NSInteger result = 0;
    NSInteger bitmapResult = 0;
    for (NSImageRep *imageRep in [self representations]) {
        if ([imageRep isKindOfClass:[NSBitmapImageRep class]]) {
            if (imageRep.pixelsHigh > bitmapResult)
                bitmapResult = imageRep.pixelsHigh;
        } else {
            if (imageRep.pixelsHigh > result)
                result = imageRep.pixelsHigh;
        }
    }
    if (bitmapResult) result = bitmapResult;
    return result;
}
- (NSSize)pixelSize
{
    return NSMakeSize(self.pixelsWide, self.pixelsHigh);
}
@end
You would #import "NSImage+PixelSize.h" in your current file to make it accessible.
With this image category and the resize: method, you would modify your method thus:
//size.width = inputRetinaImage.size.width*0.5;
//size.height = inputRetinaImage.size.height*0.5;
size.width = inputRetinaImage.pixelsWide*0.5;
size.height = inputRetinaImage.pixelsHigh*0.5;
//[inputRetinaImage setSize:size];
NSImage* outputImage = [self resizeImage:inputRetinaImage size:size];
//NSBitmapImageRep *imgRep = [[inputRetinaImage representations] objectAtIndex: 0];
NSBitmapImageRep *imgRep = [[outputImage representations] objectAtIndex: 0];
That should fix things for you (proviso: I haven't tested it on your code)
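Pulled together, the modified method might read like this (my consolidation of the edits above, under the same untested proviso):
-(BOOL)createNormalImage:(NSString*)inputRetinaImagePath {
    NSImage *inputRetinaImage = [[NSImage alloc] initWithContentsOfFile:inputRetinaImagePath];
    NSSize size = NSZeroSize;
    size.width = inputRetinaImage.pixelsWide * 0.5;   // pixel, not point, dimensions
    size.height = inputRetinaImage.pixelsHigh * 0.5;
    NSImage *outputImage = [self resizeImage:inputRetinaImage size:size];
    NSBitmapImageRep *imgRep = [[outputImage representations] objectAtIndex:0];
    NSData *data = [imgRep representationUsingType:NSPNGFileType properties:nil];
    NSString *outputFilePath = [[inputRetinaImagePath substringToIndex:inputRetinaImagePath.length - 7] stringByAppendingString:@".png"];
    return [data writeToFile:outputFilePath atomically:NO];
}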
I modified the script I use to downscale my images for you :)
-(BOOL)createNormalImage:(NSString*)inputRetinaImagePath {
    NSImage *inputRetinaImage = [[NSImage alloc] initWithContentsOfFile:inputRetinaImagePath];
    //determine new size
    NSBitmapImageRep *bitmapImageRep = [[inputRetinaImage representations] objectAtIndex:0];
    NSSize size = NSMakeSize(bitmapImageRep.pixelsWide * 0.5, bitmapImageRep.pixelsHigh * 0.5);
    NSLog(@"size = %@", NSStringFromSize(size));
    //get CGImageRef
    CGImageSourceRef source = CGImageSourceCreateWithData((__bridge CFDataRef)[inputRetinaImage TIFFRepresentation], NULL);
    CGImageRef oldImageRef = CGImageSourceCreateImageAtIndex(source, 0, NULL);
    CGImageAlphaInfo alphaInfo = CGImageGetAlphaInfo(oldImageRef);
    if (alphaInfo == kCGImageAlphaNone) alphaInfo = kCGImageAlphaNoneSkipLast;
    // Build a bitmap context
    CGContextRef bitmap = CGBitmapContextCreate(NULL, size.width, size.height, 8, 4 * size.width, CGImageGetColorSpace(oldImageRef), alphaInfo);
    // Draw into the context; this scales the image
    CGContextDrawImage(bitmap, CGRectMake(0, 0, size.width, size.height), oldImageRef);
    // Get an image from the context
    CGImageRef newImageRef = CGBitmapContextCreateImage(bitmap);
    //this does not work in my test.
    NSString *outputFilePath = [[inputRetinaImagePath substringToIndex:inputRetinaImagePath.length - 7] stringByAppendingString:@".png"];
    //but this does!
    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *docsDirectory = [paths objectAtIndex:0];
    NSString *newfileName = [docsDirectory stringByAppendingFormat:@"/%@", [outputFilePath lastPathComponent]];
    CFURLRef url = (__bridge CFURLRef)[NSURL fileURLWithPath:newfileName];
    CGImageDestinationRef destination = CGImageDestinationCreateWithURL(url, kUTTypePNG, 1, NULL);
    CGImageDestinationAddImage(destination, newImageRef, nil);
    if (!CGImageDestinationFinalize(destination)) {
        NSLog(@"Failed to write image to %@", newfileName);
    }
    CFRelease(destination);
    // release the CF/CG objects created above (ARC does not manage these)
    CGImageRelease(newImageRef);
    CGContextRelease(bitmap);
    CGImageRelease(oldImageRef);
    CFRelease(source);
    return true;
}