Can anyone help me find the memory leak in the code below, which adjusts the brightness of an image?
+(NSImage *)brightness:(NSImage *)image andLevel:(int)level
{
    CGImageSourceRef source = CGImageSourceCreateWithData((CFDataRef)[image TIFFRepresentation], NULL);
    CGImageRef img = CGImageSourceCreateImageAtIndex(source, 0, NULL);
    NSSize size = image.size;
    CGRect rect = CGRectMake(0.0f, 0.0f, size.width, size.height);

    // Get the bitmap data from the receiver's CGImage
    CFDataRef dataref = CGDataProviderCopyData(CGImageGetDataProvider(img));
    // Get the bytes from the bitmap image
    UInt8 *data = (UInt8 *)CFDataGetBytePtr(dataref);
    // Get the length
    int length = CFDataGetLength(dataref);

    // Perform the brightness operation on the pixels
    for (int index = 0; index < length; index += 1)
    {
        for (int i = 0; i < 3; i++)
        {
            //printf("This pixel is: %d", data[index + i]);
            if (data[index + i] + level < 0)
            {
                data[index + i] = 0;
            }
            else
            {
                if (data[index + i] + level > 255)
                {
                    data[index + i] = 255;
                }
                else
                {
                    data[index + i] += level;
                }
            }
        }
    }

    // Take the image attributes
    size_t width = CGImageGetWidth(img);
    size_t height = CGImageGetHeight(img);
    size_t bitsPerComponent = CGImageGetBitsPerComponent(img);
    size_t bitsPerPixel = CGImageGetBitsPerPixel(img);
    size_t bytesPerRow = CGImageGetBytesPerRow(img);
    CGColorSpaceRef colorspace = CGImageGetColorSpace(img);
    CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(img);

    // Wrap the modified bytes in a new data provider
    CFDataRef newData = CFDataCreate(NULL, data, length);
    CGDataProviderRef provider = CGDataProviderCreateWithCFData(newData);

    // Get the image out of this raw data
    CGImageRef newImg = CGImageCreate(width, height, bitsPerComponent, bitsPerPixel, bytesPerRow, colorspace, bitmapInfo, provider, NULL, true, kCGRenderingIntentDefault);

    // Prepare the NSImage from the raw data
    NSImage *newImage = [[NSImage alloc] initWithSize:size];
    // To make the drawing appear on the image instead of on the screen
    [newImage lockFocus];
    // Draw the image into the current graphics context
    CGContextDrawImage([[NSGraphicsContext currentContext] graphicsPort], *(CGRect *)&rect, newImg);
    [newImage unlockFocus];

    // Done with everything, so release the references
    CFRelease(source);
    CFRelease(img);
    CFRelease(dataref);
    CFRelease(colorspace);
    CFRelease(newData);
    CFRelease(provider);

    return [newImage autorelease];
}
You’ve forgotten to release newImg, which you’ve obtained via a Create function. Also, you shouldn’t release colorSpace since you haven’t obtained it via a Create or Copy function and you haven’t retained it.
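A minimal sketch of how the cleanup section would look with those two fixes applied (everything above it unchanged):

// ... done with everything, so release the references
CFRelease(source);
CFRelease(img);
CFRelease(dataref);
// colorspace came from CGImageGetColorSpace (a Get, not a Create/Copy), so it must not be released
CFRelease(newData);
CFRelease(provider);
CGImageRelease(newImg); // was missing: newImg was obtained from CGImageCreate
return [newImage autorelease];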
Replace this line:
NSImage* newImage = [[NSImage alloc] initWithSize:size];
with
NSImage* newImage = [[[NSImage alloc] initWithSize:size] autorelease];
and replace return [newImage autorelease]; with return newImage;
I'm not 100% sure about this, but give it a try; I hope it helps.
:)
Possible Duplicate:
An iPhone library for shape recognition via the camera
I am an iPhone mobile apps developer and am looking for image recognition libraries to integrate into one of my apps, which should work like Google Goggles.
I tried https://www.iqengines.com/ and downloaded its iOS SDK, and it worked well, but it isn't free. I am searching for an open-source SDK.
Also, how useful is OpenCV for accomplishing this? Some sources point to OpenCV.
Please let me know if anyone has come across this.
Thanks
Yes, OpenCV works on iOS and will provide you with a good library of tools. You can either build your own .framework (a bit tiresome) or download one off the internet.
To build one, refer to this OpenCV guide.
If you want to download a pre-built framework, head over there.
After that, you should be able to build computer vision software on iOS. Be careful, though: image processing can take a lot of power and memory.
OpenCV has its own C++ classes for images, so you will probably need to convert them back and forth to UIImage for input and display.
I'll leave here this piece of code I use for NSImage; it should give you enough to adapt the code for UIImage.
//
// NSImage+OpenCV.h
//
#import <AppKit/AppKit.h>
@interface NSImage (NSImage_OpenCV)

+(NSImage*)imageWithCVMat:(const cv::Mat&)cvMat;
-(id)initWithCVMat:(const cv::Mat&)cvMat;

@property(nonatomic, readonly) cv::Mat CVMat;
@property(nonatomic, readonly) cv::Mat CVGrayscaleMat;

@end
And
//
// NSImage+OpenCV.mm
//
#import "NSImage+OpenCV.h"
static void ProviderReleaseDataNOP(void *info, const void *data, size_t size)
{
    return;
}

@implementation NSImage (NSImage_OpenCV)

-(CGImageRef)CGImage
{
    CGContextRef bitmapCtx = CGBitmapContextCreate(NULL /*data - pass NULL to let CG allocate the memory*/,
                                                   [self size].width,
                                                   [self size].height,
                                                   8 /*bitsPerComponent*/,
                                                   0 /*bytesPerRow - CG will calculate it for you if it's allocating the data. This might get padded out a bit for better alignment*/,
                                                   [[NSColorSpace genericRGBColorSpace] CGColorSpace],
                                                   kCGBitmapByteOrder32Host | kCGImageAlphaPremultipliedFirst);

    [NSGraphicsContext saveGraphicsState];
    [NSGraphicsContext setCurrentContext:[NSGraphicsContext graphicsContextWithGraphicsPort:bitmapCtx flipped:NO]];
    [self drawInRect:NSMakeRect(0, 0, [self size].width, [self size].height) fromRect:NSZeroRect operation:NSCompositeCopy fraction:1.0];
    [NSGraphicsContext restoreGraphicsState];

    CGImageRef cgImage = CGBitmapContextCreateImage(bitmapCtx);
    CGContextRelease(bitmapCtx);

    return cgImage;
}
-(cv::Mat)CVMat
{
    CGImageRef imageRef = [self CGImage];
    CGColorSpaceRef colorSpace = CGImageGetColorSpace(imageRef);
    CGFloat cols = self.size.width;
    CGFloat rows = self.size.height;
    cv::Mat cvMat(rows, cols, CV_8UC4); // 8 bits per component, 4 channels

    CGContextRef contextRef = CGBitmapContextCreate(cvMat.data,      // Pointer to backing data
                                                    cols,            // Width of bitmap
                                                    rows,            // Height of bitmap
                                                    8,               // Bits per component
                                                    cvMat.step[0],   // Bytes per row
                                                    colorSpace,      // Colorspace
                                                    kCGImageAlphaNoneSkipLast |
                                                    kCGBitmapByteOrderDefault); // Bitmap info flags

    CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), imageRef);
    CGContextRelease(contextRef);
    CGImageRelease(imageRef);

    return cvMat;
}
-(cv::Mat)CVGrayscaleMat
{
    CGImageRef imageRef = [self CGImage];
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
    CGFloat cols = self.size.width;
    CGFloat rows = self.size.height;
    cv::Mat cvMat = cv::Mat(rows, cols, CV_8UC1); // 8 bits per component, 1 channel

    CGContextRef contextRef = CGBitmapContextCreate(cvMat.data,      // Pointer to backing data
                                                    cols,            // Width of bitmap
                                                    rows,            // Height of bitmap
                                                    8,               // Bits per component
                                                    cvMat.step[0],   // Bytes per row
                                                    colorSpace,      // Colorspace
                                                    kCGImageAlphaNone |
                                                    kCGBitmapByteOrderDefault); // Bitmap info flags

    CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), imageRef);
    CGContextRelease(contextRef);
    CGColorSpaceRelease(colorSpace);
    CGImageRelease(imageRef);

    return cvMat;
}
+ (NSImage *)imageWithCVMat:(const cv::Mat&)cvMat
{
    return [[[NSImage alloc] initWithCVMat:cvMat] autorelease];
}
- (id)initWithCVMat:(const cv::Mat&)cvMat
{
    NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.elemSize() * cvMat.total()];

    CGColorSpaceRef colorSpace;
    if (cvMat.elemSize() == 1)
    {
        colorSpace = CGColorSpaceCreateDeviceGray();
    }
    else
    {
        colorSpace = CGColorSpaceCreateDeviceRGB();
    }

    CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);

    CGImageRef imageRef = CGImageCreate(cvMat.cols,                 // Width
                                        cvMat.rows,                 // Height
                                        8,                          // Bits per component
                                        8 * cvMat.elemSize(),       // Bits per pixel
                                        cvMat.step[0],              // Bytes per row
                                        colorSpace,                 // Colorspace
                                        kCGImageAlphaNone | kCGBitmapByteOrderDefault, // Bitmap info flags
                                        provider,                   // CGDataProviderRef
                                        NULL,                       // Decode
                                        false,                      // Should interpolate
                                        kCGRenderingIntentDefault); // Intent

    NSBitmapImageRep *bitmapRep = [[NSBitmapImageRep alloc] initWithCGImage:imageRef];

    // Initialize the receiver itself instead of allocating a second NSImage
    // (the original version leaked the alloc'ed receiver under MRC)
    if ((self = [self init]))
    {
        [self addRepresentation:bitmapRep];
    }
    [bitmapRep release]; // the image retains the rep; releasing here avoids a leak

    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);

    return self;
}
@end
(source)
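If you need the UIImage equivalent, initWithCVMat: translates almost directly. Here's an untested sketch with UIKit substitutions; the method name UIImageFromCVMat: is just illustrative:

// Hypothetical UIImage counterpart of initWithCVMat: (same CGImageCreate plumbing)
- (UIImage *)UIImageFromCVMat:(const cv::Mat&)cvMat
{
    NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.elemSize() * cvMat.total()];
    CGColorSpaceRef colorSpace = (cvMat.elemSize() == 1)
        ? CGColorSpaceCreateDeviceGray()
        : CGColorSpaceCreateDeviceRGB();
    CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
    CGImageRef imageRef = CGImageCreate(cvMat.cols, cvMat.rows, 8, 8 * cvMat.elemSize(),
                                        cvMat.step[0], colorSpace,
                                        kCGImageAlphaNone | kCGBitmapByteOrderDefault,
                                        provider, NULL, false, kCGRenderingIntentDefault);
    UIImage *image = [UIImage imageWithCGImage:imageRef]; // autoreleased
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);
    return image;
}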
I have an NSImageView which gets its image from an NSOpenPanel. That works great.
Now, how can I take that NSImage, halve its size, and save it in the same format and the same directory as the original?
If you can help at all with anything I'd appreciate it, thanks.
Check the ImageCrop sample project from Matt Gemmell:
http://mattgemmell.com/source/
It's a nice example of how to resize / crop images.
Finally you can use something like this to save the result (dirty sample):
// Write to TIFF
[[resultImg TIFFRepresentation] writeToFile:@"/Users/Anne/Desktop/Result.tif" atomically:YES];

// Write to JPG
NSData *imageData = [resultImg TIFFRepresentation];
NSBitmapImageRep *imageRep = [NSBitmapImageRep imageRepWithData:imageData];
NSDictionary *imageProps = [NSDictionary dictionaryWithObject:[NSNumber numberWithFloat:0.9] forKey:NSImageCompressionFactor];
imageData = [imageRep representationUsingType:NSJPEGFileType properties:imageProps];
[imageData writeToFile:@"/Users/Anne/Desktop/Result.jpg" atomically:NO];
Since NSImage objects are immutable you will have to:
1. Create a Core Graphics context the size of the new image.
2. Draw the NSImage into the CGContext. It should automatically scale it for you.
3. Create an NSImage from that context.
4. Write out the new NSImage.
Don't forget to release any temporary objects you allocated.
There are definitely other options, but this is the first one that came to mind.
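For example, a minimal sketch of those steps under manual reference counting (using lockFocus to supply the drawing context; the method name and the fixed 0.5 factor are just illustrative):

- (NSImage *)halfSizedCopyOfImage:(NSImage *)src
{
    NSSize newSize = NSMakeSize(src.size.width * 0.5, src.size.height * 0.5);
    NSImage *result = [[[NSImage alloc] initWithSize:newSize] autorelease];

    [result lockFocus];
    // Drawing into the smaller rect scales the source automatically
    [src drawInRect:NSMakeRect(0, 0, newSize.width, newSize.height)
           fromRect:NSZeroRect
          operation:NSCompositeCopy
           fraction:1.0];
    [result unlockFocus];

    return result;
}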
+(NSImage*) resize:(NSImage*)aImage scale:(CGFloat)aScale
{
    NSImageView* kView = [[NSImageView alloc] initWithFrame:NSMakeRect(0, 0, aImage.size.width * aScale, aImage.size.height * aScale)];
    [kView setImageScaling:NSImageScaleProportionallyUpOrDown];
    [kView setImage:aImage];

    NSRect kRect = kView.frame;
    NSBitmapImageRep* kRep = [kView bitmapImageRepForCachingDisplayInRect:kRect];
    [kView cacheDisplayInRect:kRect toBitmapImageRep:kRep];
    [kView release]; // the view is no longer needed once the rep is cached

    NSData* kData = [kRep representationUsingType:NSJPEGFileType properties:nil];
    return [[[NSImage alloc] initWithData:kData] autorelease];
}
Here is a specific implementation
-(NSImage*)resizeImage:(NSImage*)input by:(CGFloat)factor
{
    NSSize size = NSZeroSize;
    size.width = input.size.width * factor;
    size.height = input.size.height * factor;

    NSImage *ret = [[NSImage alloc] initWithSize:size];
    [ret lockFocus];
    NSAffineTransform *transform = [NSAffineTransform transform];
    [transform scaleBy:factor];
    [transform concat];
    [input drawAtPoint:NSZeroPoint fromRect:NSZeroRect operation:NSCompositeCopy fraction:1.0];
    [ret unlockFocus];

    return [ret autorelease];
}
Keep in mind that this is pixel based; with HiDPI the backing scale must be taken into account. It is simple to obtain (as an NSView method):
-(CGFloat)pixelScaling
{
    NSRect pixelBounds = [self convertRectToBacking:self.bounds];
    return pixelBounds.size.width / self.bounds.size.width;
}
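For example, one way to combine the two when a pixel-exact result is wanted (an assumption about the intended use; method names as defined above):

CGFloat scaling = [self pixelScaling];                         // e.g. 2.0 on a Retina display
NSImage *resized = [self resizeImage:image by:0.5 * scaling];  // half size in points, full size in pixels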
Apple has source code for downscaling and saving images found here
http://developer.apple.com/library/mac/#samplecode/Reducer/Introduction/Intro.html
Here is some code that makes more extensive use of Core Graphics than the other answers. It's written according to hints in Mark Thalman's answer to this question.
This code downscales an NSImage based on a target width. It's somewhat rough, but still useful as an extra sample documenting how to draw an NSImage into a CGContext and how to write the contents of a CGBitmapContext and CGImage into a file.
You may want to add extra error checking; I didn't need it for my use case.
- (void)generateThumbnailForImage:(NSImage*)image atPath:(NSString*)newFilePath forWidth:(int)width
{
    CGSize size = CGSizeMake(width, image.size.height * (float)width / (float)image.size.width);
    CGColorSpaceRef rgbColorspace = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGImageAlphaPremultipliedLast;
    CGContextRef context = CGBitmapContextCreate(NULL, size.width, size.height, 8, size.width * 4, rgbColorspace, bitmapInfo);

    NSGraphicsContext *graphicsContext = [NSGraphicsContext graphicsContextWithGraphicsPort:context flipped:NO];
    [NSGraphicsContext setCurrentContext:graphicsContext];
    [image drawInRect:NSMakeRect(0, 0, size.width, size.height) fromRect:NSMakeRect(0, 0, image.size.width, image.size.height) operation:NSCompositeCopy fraction:1.0];

    CGImageRef outImage = CGBitmapContextCreateImage(context);
    CFURLRef outURL = (CFURLRef)[NSURL fileURLWithPath:newFilePath];
    CGImageDestinationRef outDestination = CGImageDestinationCreateWithURL(outURL, kUTTypeJPEG, 1, NULL);
    CGImageDestinationAddImage(outDestination, outImage, NULL);

    if (!CGImageDestinationFinalize(outDestination))
    {
        NSLog(@"Failed to write image to %@", newFilePath);
    }

    CFRelease(outDestination);
    CGImageRelease(outImage);
    CGContextRelease(context);
    CGColorSpaceRelease(rgbColorspace);
}
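Usage is then a one-liner (the path and width here are placeholders):

[self generateThumbnailForImage:sourceImage atPath:@"/tmp/thumb.jpg" forWidth:256];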
To resize an image:
- (NSImage *)scaleImage:(NSImage *)anImage newSize:(NSSize)newSize
{
    NSImage *sourceImage = anImage;
    if (![sourceImage isValid])
    {
        return nil;
    }

    // Return the original if the size already matches or the requested size is degenerate
    // (the original condition combined these with &&, which could never be true)
    if ((anImage.size.width == newSize.width && anImage.size.height == newSize.height) || newSize.width <= 0 || newSize.height <= 0)
    {
        return anImage;
    }

    NSRect oldRect = NSMakeRect(0.0, 0.0, anImage.size.width, anImage.size.height);
    NSRect newRect = NSMakeRect(0, 0, newSize.width, newSize.height);
    NSImage *newImage = [[NSImage alloc] initWithSize:newSize];

    [newImage lockFocus];
    [sourceImage drawInRect:newRect fromRect:oldRect operation:NSCompositeCopy fraction:1.0];
    [newImage unlockFocus];

    return newImage;
}
I am using this function to create screenshots of my iPad app. I am using the Sparrow Framework in my project; SPDisplayObject uses OpenGL ES based rendering.
@implementation SPDisplayObject (ScreenshotFromSPDisplayObject)

- (UIImage *)getImageScreenshot
{
    int WIDTH = 1024;
    int HEIGHT = 768;
    CGSize size = CGSizeMake(WIDTH, HEIGHT);

    // Create a buffer for the pixels
    GLuint bufferLength = size.width * size.height * 4;
    GLubyte *buffer = (GLubyte *)malloc(bufferLength);

    // Read pixels from OpenGL
    glReadPixels(0, 0, size.width, size.height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

    // Make a data provider from the buffer
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer, bufferLength, NULL);

    // Configure the image
    int bitsPerComponent = 8;
    int bitsPerPixel = 32;
    int bytesPerRow = 4 * size.width;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
    CGImageRef iref = CGImageCreate(size.width, size.height, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);

    uint32_t *pixels = (uint32_t *)malloc(bufferLength);
    CGContextRef context = CGBitmapContextCreate(pixels, WIDTH, HEIGHT, 8, WIDTH * 4, CGImageGetColorSpace(iref), kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);

    CGContextTranslateCTM(context, 0, size.height);
    CGContextScaleCTM(context, 1.0, -1.0);
    CGContextDrawImage(context, CGRectMake(0.0, 0.0, size.width, size.height), iref);

    UIImage* screenshot = [UIImage imageWithCGImage:CGBitmapContextCreateImage(context)];
    UIGraphicsEndImageContext();

    // Free memory
    CGColorSpaceRelease(colorSpaceRef);
    CGDataProviderRelease(provider);
    CGImageRelease(iref);
    CGContextRelease(context);
    free(buffer);
    free(pixels);

    return screenshot;
}

@end
I use it like this from a UIViewController:

@interface
UIImageView *screenShot;
UIImage *tempImage;

-(void)deactivePage
{
    // Attach the screenshot
    tempImage = [self.stage getImageScreenshot];
    screenShot = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 1024, 768)];
    screenShot.image = tempImage;
    [self.view addSubview:screenShot];
}

- (void)dealloc
{
    screenShot.image = nil;
    [screenShot removeFromSuperview];
    [screenShot release];
    [super dealloc];
}
The UIViewController is released and deallocated approximately 5 seconds after deactivePage is called.
The screenshot is used for a view transition.
Taking screenshots works like a charm, but with every screenshot my app grows by around 10 MB, so I can do this around 15 times before the app crashes.
So where is the leak? I'm stuck. :-(
In the getImageScreenshot function you do this:
UIImage* screenshot = [UIImage imageWithCGImage:CGBitmapContextCreateImage(context)];
which creates a CGImageRef and then creates an autoreleased UIImage from it.
What happens here is that this CGImageRef remains alive and is never released, so it's leaking.
What you should do, instead, is this:
CGImageRef myCGImage = CGBitmapContextCreateImage(context);
UIImage* screenshot = [UIImage imageWithCGImage:myCGImage];
CGImageRelease(myCGImage);
Have you tried looking at it with Instruments (Leaks or Heapshots)? You should see these CGImageRef objects still alive.
Also, I don't see where you release tempImage in the UIViewController when it's going down.
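If the controller is meant to own tempImage, one MRC pattern would be to retain it on assignment and balance that in dealloc (a sketch; the actual property declarations aren't shown in the question):

// in deactivePage: take ownership explicitly
tempImage = [[self.stage getImageScreenshot] retain];

// and balance it in dealloc
- (void)dealloc
{
    [tempImage release];
    screenShot.image = nil;
    [screenShot removeFromSuperview];
    [screenShot release];
    [super dealloc];
}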
I'm trying to save a multiply-masked image to disk.
The code below explains how I do it. maskImageWithStroke masks a picture twice: the first time to give an irregular stroke to a texture, then I re-mask it with a bigger mask to get a stroke from it. It's the best way I found to get an irregular stroke around my texture.
Now, for the sake of the iPhone and its performance, I'd like to generate all the masked textures and save them to disk. I'll use those files in another app to load pre-masked textures. The masking process is perfect on screen, but when saving the file I get a black image with the irregular stroke BUT without the texture in it... How can I change that?
I'm calling the method masImageWihStrokandSaveToFile to launch the masking + saving process.
-(void)masImageWihStrokandSaveToFile:(UIImage *)image withMask:(UIImage *)maskImage imageIndex:(NSInteger)i
{
    //mask image with stroke
    //[self maskImageWithStroke:image withMask:maskImage];

    // Save masked image to disk
    // Create paths to output images
    NSString *pngPath = [NSHomeDirectory() stringByAppendingPathComponent:[NSString stringWithFormat:@"Documents/Test_%d.png", i]];
    NSString *jpgPath = [NSHomeDirectory() stringByAppendingPathComponent:@"Documents/Test.jpg"];

    // Write a UIImage to JPEG with minimum compression (best quality)
    // The value 'image' must be a UIImage object
    // The value '1.0' represents image compression quality as a value from 0.0 to 1.0
    [UIImageJPEGRepresentation([self maskImageWithStroke:image withMask:maskImage], 1.0) writeToFile:jpgPath atomically:YES];

    // Write image to PNG
    [UIImagePNGRepresentation([self maskImageWithStroke:image withMask:maskImage]) writeToFile:pngPath atomically:YES];

    // Let's check to see if files were successfully written...
    NSError *error;
    NSFileManager *fileMgr = [NSFileManager defaultManager];

    // Point to the Documents directory
    NSString *documentsDirectory = [NSHomeDirectory() stringByAppendingPathComponent:@"Documents"];

    // Write out the contents of the Documents directory to the console
    NSLog(@"Documents directory: %@", [fileMgr contentsOfDirectoryAtPath:documentsDirectory error:&error]);
}
- (UIImage *)maskImageWithStroke:(UIImage *)image withMask:(UIImage *)maskImage
{
    UIImage *maskedImage = [self maskImage:image withMask:maskImage];

    //****** Add a Stroke - Begin *******//
    // Grow the actual mask
    UIImage *biggerMaskImage = [maskImage scaleToSize:CGSizeMake(310, 310)];

    // Now add a stroke to the image
    //UIImage *bottomImage = [UIImage imageNamed:@"bottom.png"];
    //UIImage *image = [UIImage imageNamed:@"top.png"];
    CGSize newSize = CGSizeMake(310, 310);
    UIGraphicsBeginImageContext(newSize);

    // Use existing opacity as is
    //[biggerMaskImage drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    //[biggerMaskImage drawInRect:CGRectMake(0, 0, 410, 410)];
    [maskImage drawInRect:CGRectMake(0, 0, 310, 310)];

    // Apply supplied opacity
    //[retImage drawInRect:CGRectMake(0, 0, newSize.width, newSize.height) blendMode:kCGBlendModeNormal alpha:0.8];
    [maskedImage drawInRect:CGRectMake(0, 0, 300, 300) blendMode:kCGBlendModeNormal alpha:1];

    UIImage *maskedStrokedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    //****** Add a Stroke - End *******//

    //****** Remask - Begin ******//
    maskedStrokedImage = [self maskImage:maskedStrokedImage withMask:biggerMaskImage];
    //****** Remask - End ******//

    return maskedStrokedImage;
}
/*************Mask Images ***************/
CGImageRef CopyImageAndAddAlphaChannel(CGImageRef sourceImage)
{
    CGImageRef retVal = NULL;
    size_t width = CGImageGetWidth(sourceImage);
    size_t height = CGImageGetHeight(sourceImage);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    CGContextRef offscreenContext = CGBitmapContextCreate(NULL, width, height,
                                                          8, 0, colorSpace, kCGImageAlphaPremultipliedFirst);
    if (offscreenContext != NULL)
    {
        CGContextDrawImage(offscreenContext, CGRectMake(0, 0, width, height), sourceImage);
        retVal = CGBitmapContextCreateImage(offscreenContext);
        CGContextRelease(offscreenContext);
    }

    CGColorSpaceRelease(colorSpace);
    return retVal;
}
// masker method
- (UIImage *)maskImage:(UIImage *)image withMask:(UIImage *)maskImage
{
    CGImageRef maskRef = maskImage.CGImage;
    CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskRef),
                                        CGImageGetHeight(maskRef),
                                        CGImageGetBitsPerComponent(maskRef),
                                        CGImageGetBitsPerPixel(maskRef),
                                        CGImageGetBytesPerRow(maskRef),
                                        CGImageGetDataProvider(maskRef), NULL, false);

    CGImageRef sourceImage = [image CGImage];
    CGImageRef imageWithAlpha = sourceImage;
    // Add an alpha channel for images that don't have one (i.e. GIF, JPEG, etc.);
    // this does however have a computational cost
    if (CGImageGetAlphaInfo(sourceImage) == kCGImageAlphaNone)
    {
        imageWithAlpha = CopyImageAndAddAlphaChannel(sourceImage);
    }

    CGImageRef masked = CGImageCreateWithMask(imageWithAlpha, mask);
    CGImageRelease(mask);

    // Release imageWithAlpha if it was created by CopyImageAndAddAlphaChannel
    if (sourceImage != imageWithAlpha)
    {
        CGImageRelease(imageWithAlpha);
    }

    UIImage *retImage = [UIImage imageWithCGImage:masked];
    //UIImage* retImage = [UIImage imageWithCGImage:maskedTransparent];
    CGImageRelease(masked);

    return retImage;
}
I'm trying to draw a standard NSImage in white instead of black. The following works fine for drawing the image in black in the current NSGraphicsContext:
NSImage* image = [NSImage imageNamed:NSImageNameEnterFullScreenTemplate];
[image drawInRect:r fromRect:NSZeroRect operation:NSCompositeSourceOver fraction:1.0];
I expected NSCompositeXOR to do the trick, but no. Do I need to go down the complicated [CIFilter filterWithName:@"CIColorInvert"] path? I feel like I must be missing something simple.
The Core Image route would be the most reliable. It's actually not very complicated; I've posted a sample below. If you know none of your images will be flipped then you can remove the transform code. The main thing to be careful of is that the conversion from NSImage to CIImage can be expensive performance-wise, so you should ensure you cache the CIImage if possible and don't re-create it during each drawing operation.
CIImage* ciImage = [[CIImage alloc] initWithData:[yourImage TIFFRepresentation]];
if ([yourImage isFlipped])
{
    CGRect cgRect = [ciImage extent];
    CGAffineTransform transform;
    transform = CGAffineTransformMakeTranslation(0.0, cgRect.size.height);
    transform = CGAffineTransformScale(transform, 1.0, -1.0);
    ciImage = [ciImage imageByApplyingTransform:transform];
}

CIFilter* filter = [CIFilter filterWithName:@"CIColorInvert"];
[filter setDefaults];
[filter setValue:ciImage forKey:@"inputImage"];
CIImage* output = [filter valueForKey:@"outputImage"];
[output drawAtPoint:NSZeroPoint fromRect:NSRectFromCGRect([output extent]) operation:NSCompositeSourceOver fraction:1.0];
Note: release/retain memory management is left as an exercise; the code above assumes garbage collection.
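Under manual reference counting, the only object created with alloc above is the CIImage, so one option (an assumption, since the original targets garbage collection) is simply:

CIImage* ciImage = [[[CIImage alloc] initWithData:[yourImage TIFFRepresentation]] autorelease];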
If you want to render the image at an arbitrary size, you could do the following:
NSSize imageSize = NSMakeSize(1024,768); //or whatever size you want
[yourImage setSize:imageSize];
[yourImage lockFocus];
NSBitmapImageRep* bitmap = [[NSBitmapImageRep alloc] initWithFocusedViewRect:NSMakeRect(0, 0, imageSize.width, imageSize.height)];
[yourImage unlockFocus];
CIImage* image = [CIImage imageWithData:[bitmap TIFFRepresentation]];
Here is a solution using Swift 5.1, somewhat based on the solutions above. Note that I am not caching the images, so this likely isn't the most efficient; my primary use case is flipping small monochrome images in toolbar buttons based on whether the current color scheme is light or dark.
import os
import AppKit
import Foundation
public extension NSImage {
    func inverted() -> NSImage {
        guard let cgImage = self.cgImage(forProposedRect: nil, context: nil, hints: nil) else {
            os_log(.error, "Could not create CGImage from NSImage")
            return self
        }

        let ciImage = CIImage(cgImage: cgImage)
        guard let filter = CIFilter(name: "CIColorInvert") else {
            os_log(.error, "Could not create CIColorInvert filter")
            return self
        }

        filter.setValue(ciImage, forKey: kCIInputImageKey)
        guard let outputImage = filter.outputImage else {
            os_log(.error, "Could not obtain output CIImage from filter")
            return self
        }

        guard let outputCgImage = outputImage.toCGImage() else {
            os_log(.error, "Could not create CGImage from CIImage")
            return self
        }

        return NSImage(cgImage: outputCgImage, size: self.size)
    }
}

fileprivate extension CIImage {
    func toCGImage() -> CGImage? {
        let context = CIContext(options: nil)
        if let cgImage = context.createCGImage(self, from: self.extent) {
            return cgImage
        }
        return nil
    }
}
Just one note: I found that the CIColorInvert filter isn't always reliable. For example, if you invert back an image that was inverted in Photoshop, the CIFilter will produce a much lighter image. As far as I understood, this happens because of differences in gamma between CIFilter (gamma is 1) and images that came from other sources.
While I was looking for ways to change the gamma value for CIFilter, I found a note that there's a bug in CIContext: changing its gamma from the default 1 produces unpredictable results.
Regardless, there's another solution to invert an NSImage that always produces correct results: inverting the pixels of an NSBitmapImageRep.
I'm reposting the code from etutorials.org (http://bit.ly/Y6GpLn):
// srcImageRep is the NSBitmapImageRep of the source image
int n = [srcImageRep bitsPerPixel] / 8;  // Bytes per pixel
int w = [srcImageRep pixelsWide];
int h = [srcImageRep pixelsHigh];
int rowBytes = [srcImageRep bytesPerRow];
int i;

NSImage *destImage = [[NSImage alloc] initWithSize:NSMakeSize(w, h)];
NSBitmapImageRep *destImageRep = [[[NSBitmapImageRep alloc]
    initWithBitmapDataPlanes:NULL
                  pixelsWide:w
                  pixelsHigh:h
               bitsPerSample:8
             samplesPerPixel:n
                    hasAlpha:[srcImageRep hasAlpha]
                    isPlanar:NO
              colorSpaceName:[srcImageRep colorSpaceName]
                 bytesPerRow:rowBytes
                bitsPerPixel:0] autorelease]; // 0 lets AppKit derive it (the original passed NULL, a type mismatch)

unsigned char *srcData = [srcImageRep bitmapData];
unsigned char *destData = [destImageRep bitmapData];

// Invert every byte; note that with an alpha channel present this inverts alpha too
for (i = 0; i < rowBytes * h; i++)
    *(destData + i) = 255 - *(srcData + i);

[destImage addRepresentation:destImageRep];
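For completeness, srcImageRep can be obtained from an NSImage the same way as in the earlier saving example:

NSBitmapImageRep *srcImageRep = [NSBitmapImageRep imageRepWithData:[sourceImage TIFFRepresentation]];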