I have the following function that opens an image, scales it and saves it to another file.
- (void)writeFileToIcon:(NSString *)fullPath :(NSString *)finalPath :(NSSize)outputSize
{
    NSData *dataToWrite;
    NSBitmapImageRep *rep;
    rep = [NSBitmapImageRep imageRepWithData:[[self scaleImage:[[NSImage alloc] initWithContentsOfFile:fullPath] toSize:outputSize] TIFFRepresentation]];
    dataToWrite = [rep representationUsingType:NSPNGFileType properties:nil];
    [dataToWrite writeToFile:finalPath atomically:YES];
}
- (NSImage *)scaleImage:(NSImage *)image toSize:(NSSize)targetSize
{
    if ([image isValid])
    {
        NSSize imageSize = [image size];
        float width = imageSize.width;
        float height = imageSize.height;
        float targetWidth = targetSize.width;
        float targetHeight = targetSize.height;
        float scaleFactor = 0.0;
        float scaledWidth = targetWidth;
        float scaledHeight = targetHeight;
        NSPoint thumbnailPoint = NSZeroPoint;
        if (!NSEqualSizes(imageSize, targetSize))
        {
            float widthFactor = targetWidth / width;
            float heightFactor = targetHeight / height;
            if (widthFactor < heightFactor)
            {
                scaleFactor = widthFactor;
            }
            else
            {
                scaleFactor = heightFactor;
            }
            scaledWidth = width * scaleFactor;
            scaledHeight = height * scaleFactor;
            if (widthFactor < heightFactor)
            {
                thumbnailPoint.y = (targetHeight - scaledHeight) * 0.5;
            }
            else if (widthFactor > heightFactor)
            {
                thumbnailPoint.x = (targetWidth - scaledWidth) * 0.5;
            }
            NSImage *newImage = [[NSImage alloc] initWithSize:NSMakeSize(scaledWidth, scaledHeight)];
            [newImage lockFocus];
            NSRect thumbnailRect;
            thumbnailRect.origin = NSZeroPoint;
            thumbnailRect.size.width = scaledWidth;
            thumbnailRect.size.height = scaledHeight;
            [image drawInRect:thumbnailRect
                     fromRect:NSZeroRect
                    operation:NSCompositeSourceOver
                     fraction:1.0];
            [newImage unlockFocus];
            return newImage;
        }
        return nil;
    }
    return nil;
}
However, each time this function is called, memory usage climbs higher (up to 5 GB after 1000 calls).
The issue seems to be the drawInRect: call, which takes a lot of memory (according to the analyzer) but does not release it.
How can I "ask" ARC to release it?
Thanks.
One may need to look at the whole code to find the problem, but one idea: under ARC you cannot call "release" on objects, but if you set the pointer to an object to nil, the object will be released (unless other strong references to it exist somewhere).
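For example, here is a minimal sketch (the loop and helper names are hypothetical) of dropping a strong reference as soon as it is no longer needed:
for (NSString *path in imagePaths)
{
    NSImage *image = [[NSImage alloc] initWithContentsOfFile:path];
    [self processImage:image]; // hypothetical helper
    image = nil; // under ARC this releases the image, provided nothing else retains it
}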
I suggest you trace through your code and make sure you don't hold on to objects you no longer need. If your code is well encapsulated and structured, this shouldn't happen.
If your code is well designed, though, there is a chance that this amount of memory is actually needed (unlikely, but hard to say without more details). If that is the case, let the system manage the memory; it will release the objects when appropriate. Beyond that, try to optimize wherever you can if memory usage is a concern.
Off-topic: these long nested ifs with multiple return points within the method are not a very good idea; I suggest you restructure your code slightly. If you write clearer code, you'll have more control over it, and you will find solutions to problems faster.
Are you calling this from a loop or without returning to the main event loop? Adding an explicit @autoreleasepool might help.
- (void)writeFileToIcon:(NSString *)fullPath :(NSString *)finalPath :(NSSize)outputSize
{
    @autoreleasepool {
        NSBitmapImageRep *rep = [NSBitmapImageRep imageRepWithData:[[self scaleImage:[[NSImage alloc] initWithContentsOfFile:fullPath] toSize:outputSize] TIFFRepresentation]];
        NSData *dataToWrite = [rep representationUsingType:NSPNGFileType properties:nil];
        [dataToWrite writeToFile:finalPath atomically:YES];
    }
}
Theoretically this isn't necessary, as code compiled with ARC short-circuits the autorelease pool in some circumstances. However, you may be defeating that optimization here somehow.
Note that it's generally better to do this in the place where the memory allocation logically becomes the problem. So the for loop where you call this method would be a better place for the @autoreleasepool.
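As a minimal sketch (assuming a caller that loops over a list of file paths; the names here are hypothetical), draining a pool once per iteration keeps the peak footprint bounded:
// Hypothetical caller: drain an autorelease pool every iteration so the
// temporary image, TIFF data, and PNG data are freed before the next file.
for (NSString *path in imagePaths)
{
    @autoreleasepool {
        NSString *iconPath = [[path stringByDeletingPathExtension] stringByAppendingPathExtension:@"png"];
        [self writeFileToIcon:path :iconPath :NSMakeSize(32.0, 32.0)];
    }
}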
My guess is your issue is related to caching in the image classes, but that could be wrong. What does appear to improve matters:
- (void)writeFileToIcon:(NSString *)fullPath :(NSString *)finalPath :(NSSize)outputSize
{
    // wrap in autorelease pool to localise any use of this by the image classes
    @autoreleasepool
    {
        NSImage *dstImage = [self scaleImageFile:fullPath toSize:outputSize];
        NSBitmapImageRep *rep = [[NSBitmapImageRep alloc] initWithData:[dstImage TIFFRepresentation]];
        NSData *dataToWrite = [rep representationUsingType:NSPNGFileType properties:nil];
        [dataToWrite writeToFile:finalPath atomically:YES];
    }
}
- (NSImage *)scaleImageFile:(NSString *)fullPath toSize:(NSSize)targetSize
{
    NSImageRep *srcImageRep = [NSImageRep imageRepWithContentsOfFile:fullPath];
    if (srcImageRep == nil)
        return nil;

    NSSize imageSize = NSMakeSize(srcImageRep.pixelsWide, srcImageRep.pixelsHigh);
    NSSize scaledSize;
    NSPoint thumbnailPoint = NSZeroPoint;
    NSRect thumbnailRect;

    if (!NSEqualSizes(imageSize, targetSize))
    {
        // your existing scale calculation
        ...
        scaledSize = NSMakeSize(scaledWidth, scaledHeight);
    }
    else
        scaledSize = imageSize;

    srcImageRep.size = scaledSize;

    NSImage *newImage = [[NSImage alloc] initWithSize:scaledSize];
    [newImage lockFocus];
    thumbnailRect.origin = NSZeroPoint;
    thumbnailRect.size = scaledSize;
    [srcImageRep drawInRect:thumbnailRect];
    [newImage unlockFocus];
    return newImage;
}
This uses NSImageRep, which in this case appears to reduce the memory footprint. On a sample run using full-screen desktop images scaled to 32x32, the above hovered around 16 MB while the original NSImage-based version steadily grew to 32 MB. YMMV of course.
HTH
Related
I am using QBImagePicker to allow multiple image upload. It works fine for up to 25 images, but more than that and the app will quit due to memory pressure while uploading. I would like to allow unlimited image upload, and am uncertain how to do so in a way where memory would not be an issue (i.e. perhaps clearing memory after each save). Here is my method to save images (which is called from a loop within the main QBImagePickerController method to save all the selected images):
- (void)saveTheImage:(UIImage *)image fileName:(NSString *)name width:(CGFloat)width height:(CGFloat)height quality:(CGFloat)quality extension:(int)fileNumberExtension
{
    UIImage *resizedImage = [self resizeImage:image width:width height:height]; // a simple method I have to resize the image sent from the picker
    NSData *data = UIImageJPEGRepresentation(resizedImage, quality); // save as a JPEG
    NSString *fileName = [NSString stringWithFormat:@"%@%d", name, fileNumberExtension]; // set the filename
    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *documentsDirectory = [paths objectAtIndex:0]; // will be saved in Documents
    NSString *tempPath = [documentsDirectory stringByAppendingPathComponent:fileName]; // with the filename given
    // create a block operation to save
    NSBlockOperation *saveOp = [NSBlockOperation blockOperationWithBlock:^{
        [data writeToFile:tempPath atomically:YES];
    }];
    NSOperationQueue *queue = [[NSOperationQueue alloc] init];
    [queue addOperation:saveOp];
}
Thanks in advance!
EDIT
My method to resize the image:
- (UIImage *)resizeImage:(UIImage *)image width:(CGFloat)width height:(CGFloat)height
{
    UIImage *resizedImage;
    CGSize size = CGSizeMake(width, height);
    UIGraphicsBeginImageContextWithOptions(size, NO, 0.0f);
    [image drawInRect:CGRectMake(0, 0, width, height)];
    resizedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return resizedImage;
}
EDIT 2
Additional methods:
- (void)imagePickerController:(QBImagePickerController *)imagePickerController didSelectAssets:(NSArray *)assets
{
    for (int i = 0; i < assets.count; i++)
    {
        ALAssetRepresentation *rep = [[assets objectAtIndex:i] defaultRepresentation];
        CGImageRef iref = [rep fullResolutionImage];
        UIImage *pickedImage = [UIImage imageWithCGImage:iref scale:[rep scale] orientation:(UIImageOrientation)[rep orientation]];
        int fileNumberExtension = [self getHighestImageNumber] + 1; // new images all have a higher file name
        // set the ratio (width of image is 294)
        CGFloat ratio = pickedImage.size.width / 294;
        CGFloat newHeight = pickedImage.size.height / ratio;
        if (newHeight < 430) // image is too wide
        {
            [self saveTheImage:pickedImage fileName:@"img" width:294 height:newHeight quality:0.8f extension:fileNumberExtension];
        }
        else // if the image is too narrow
        {
            // set the ratio (height of image is 430)
            CGFloat ratio = pickedImage.size.height / 430;
            CGFloat newWidth = pickedImage.size.width / ratio;
            [self saveTheImage:pickedImage fileName:@"img" width:newWidth height:430 quality:0.8f extension:fileNumberExtension];
        }
        [self saveTheImage:pickedImage fileName:@"thm" width:78 height:78 quality:0.0f extension:fileNumberExtension]; // save the thumbnail
    }
    [self dismissImagePickerController];
}

- (void)dismissImagePickerController
{
    [self dismissViewControllerAnimated:YES completion:nil];
}

- (void)addImageClicked
{
    QBImagePickerController *imagePickerController = [[QBImagePickerController alloc] init];
    imagePickerController.delegate = self;
    imagePickerController.allowsMultipleSelection = YES;
    imagePickerController.maximumNumberOfSelection = 20; // allow up to 20 photos at once
    imagePickerController.filterType = QBImagePickerControllerFilterTypePhotos;
    UINavigationController *navigationController = [[UINavigationController alloc] initWithRootViewController:imagePickerController];
    [self presentViewController:navigationController animated:YES completion:nil];
}
Solved this issue by using @autoreleasepool around the for loop in this method:
- (void) imagePickerController:(QBImagePickerController *)imagePickerController didSelectAssets:(NSArray *)assets
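A minimal sketch of that fix (the per-asset body is elided; it is the same resize-and-save work shown above):
- (void)imagePickerController:(QBImagePickerController *)imagePickerController didSelectAssets:(NSArray *)assets
{
    for (int i = 0; i < assets.count; i++)
    {
        @autoreleasepool {
            // ...same per-asset resize-and-save work as above...
            // The pool drains at the end of every iteration, so each
            // decoded full-resolution image is freed before the next one.
        }
    }
    [self dismissImagePickerController];
}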
This thread was very useful.
You have a memory leak. Leaks usually don't happen because ARC takes care of releasing for you (every time you finish using an image, it gets cleared from memory). However, not all objects are governed by ARC: some Core Foundation types (like CGColorSpaceRef) need to be freed manually.
You can check this by running Static Analysis in Xcode. In the top menu bar, select Product -> Analyze. If there are places where you need to free your objects, it will tell you.
To free an object, do:
CGColorSpaceRelease(ref); //where ref is a CGColorSpaceRef.
CGImageRelease(iref); //where iref is a CGImageRef.
or the corresponding method that pertains to your object.
When I apply CIFilters to images the memory usage keeps growing and I don't know what to do.
I've tried everything I could:
using @autoreleasepool:
- (UIImage *)applySepiaToneTo:(UIImage *)img // Sepia
{
    @autoreleasepool
    {
        CIImage *ciimageToFilter = [CIImage imageWithCGImage:img.CGImage];
        CIFilter *sepia = [CIFilter filterWithName:@"CISepiaTone"
                                     keysAndValues:kCIInputImageKey, ciimageToFilter,
                                                   @"inputIntensity", @1.0, nil];
        return [self retrieveFilteredImageWithFilter:sepia];
    }
}

- (UIImage *)retrieveFilteredImageWithFilter:(CIFilter *)filtro
{
    @autoreleasepool
    {
        CIImage *ciimageFiltered = [filtro outputImage];
        CGImageRef cgimg = [_context createCGImage:ciimageFiltered
                                          fromRect:[ciimageFiltered extent]];
        UIImage *filteredImage = [UIImage imageWithCGImage:cgimg];
        CGImageRelease(cgimg);
        return filteredImage;
    }
}
I'm also downsizing the image to be filtered and doing the filtering in a background thread:
- (void)filterWasSelected:(NSNotification *)notification
{
    self.darkeningView.alpha = 0.5;
    self.darkeningView.userInteractionEnabled = YES;
    [self.view bringSubviewToFront:self.darkeningView];
    [self.activityIndic startAnimating];
    [self.view bringSubviewToFront:self.activityIndic];
    int indice = [notification.object intValue];
    __block NSArray *returnObj;
    __block UIImage *auxUiimage;
    if (choosenImage.size.width == 1280 || choosenImage.size.height == 1280)
    {
        UIImageView *iv;
        if (choosenImage.size.width >= choosenImage.size.height)
        {
            float altura = (320 * choosenImage.size.height) / choosenImage.size.width;
            iv = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 320, altura)];
            iv.image = choosenImage;
        }
        else
        {
            float largura = (choosenImage.size.width * 320) / choosenImage.size.height;
            iv = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, largura, 320)];
            iv.image = choosenImage;
        }
        UIGraphicsBeginImageContextWithOptions(iv.bounds.size, YES, 0.0);
        [iv.layer renderInContext:UIGraphicsGetCurrentContext()];
        auxUiimage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
    }
    else
        auxUiimage = choosenImage;

    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0), ^{
        if (artisticCollection)
            returnObj = [self.filterCoordinator setupFilterArtisticType:indice toImage:auxUiimage];
        else
            returnObj = [self.filterCoordinator setupFilterOldOrVintageType:indice toImage:auxUiimage];

        dispatch_async(dispatch_get_main_queue(), ^{
            self.darkeningView.alpha = 0.3;
            self.darkeningView.userInteractionEnabled = NO;
            [self.activityIndic stopAnimating];
            [self.view bringSubviewToFront:stageBackground];
            [self.view bringSubviewToFront:stage];
            [self.view bringSubviewToFront:self.filtersContainerView];
            [self.view bringSubviewToFront:self.framesContainerView];
            [self.view bringSubviewToFront:self.colorsContainerView];
            if (returnObj)
            {
                auxUiimage = [returnObj firstObject];
                NSLog(@"filtered image width = %f and height = %f", auxUiimage.size.width, auxUiimage.size.height);
                returnObj = nil;
                choosenImageContainer.image = auxUiimage;
            }
        });
    });
}
I've also tried creating the context using the contextWithEAGLContext: method; nothing changed.
I've researched a lot, including Stack Overflow, and found nothing.
Until I place the image in the image view (the image comes from the photo album) I'm only using 23 MB of memory; when I apply a filter, usage jumps to 51 MB and does not come down. If I continue to apply other filters, memory usage only grows.
There's no leaking in my app; I've checked in Instruments.
Also, the bringSubviewToFront calls are not responsible; I've checked.
The growth happens at the creation of the CIImage, followed by the creation of the CIFilter object.
I know that in the process of applying the filter data is loaded into memory, but how do I clean up that memory after applying the filter?
Is there any secret that I'm not aware of? Please help.
I'm drawing my graph view using UIBezierPath methods and Core Text. I use the addQuadCurveToPoint:controlPoint: method to draw curves on the graph. I also use CATiledLayer to render a graph with a large data set on the x axis. I draw the whole graph in an image context, and in my view's drawRect: method I draw this image into the whole view. Following is my code.
- (void)drawImage
{
    UIGraphicsBeginImageContextWithOptions(self.frame.size, NO, 0.0);
    // Draw curves
    [self drawDiagonal];
    UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
    [screenshot retain];
    UIGraphicsEndImageContext();
}

- (void)drawRect:(CGRect)rect
{
    NSLog(@"Draw in rect with bounds: %@", NSStringFromCGRect(rect));
    [screenshot drawInRect:self.frame];
}
However, in the screenshot the curves drawn between two points are not smooth. I've also set "Renders with edge antialiasing" to YES in my Info.plist. Please see the screenshot.
We'd have to see how you construct the UIBezierPath, but in my experience the key issue for smooth curves is whether the slope of the line between a curve segment's control point and its end point equals the slope between the next segment's start point and its control point. I find it easier to draw generally smooth curves using addCurveToPoint rather than addQuadCurveToPoint, so that I can adjust the starting and ending control points to satisfy this criterion.
To illustrate this point, the way I usually draw UIBezierPath curves is to have an array of points on the curve, the angle the curve should take at each point, and the "weight" of the addCurveToPoint control points (i.e. how far out the control points should be). I then use those parameters to dictate the second control point of one segment of the UIBezierPath and the first control point of the next segment. So, for example:
@interface BezierPoint : NSObject
@property CGPoint point;
@property CGFloat angle;
@property CGFloat weight;
@end

@implementation BezierPoint

- (id)initWithPoint:(CGPoint)point angle:(CGFloat)angle weight:(CGFloat)weight
{
    self = [super init];
    if (self)
    {
        self.point = point;
        self.angle = angle;
        self.weight = weight;
    }
    return self;
}

@end
And then, an example of how I use that:
- (void)loadBezierPointsArray
{
    // clearly, you'd do whatever is appropriate for your chart.
    // this is just an unclosed loop, but it illustrates the idea.
    CGPoint startPoint = CGPointMake(self.view.frame.size.width / 2.0, 50);
    _bezierPoints = [NSMutableArray arrayWithObjects:
                     [[BezierPoint alloc] initWithPoint:CGPointMake(startPoint.x, startPoint.y)
                                                  angle:M_PI_2 * 0.05
                                                 weight:100.0 / 1.7],
                     [[BezierPoint alloc] initWithPoint:CGPointMake(startPoint.x + 100.0, startPoint.y + 70.0)
                                                  angle:M_PI_2
                                                 weight:70.0 / 1.7],
                     [[BezierPoint alloc] initWithPoint:CGPointMake(startPoint.x, startPoint.y + 140.0)
                                                  angle:M_PI
                                                 weight:100.0 / 1.7],
                     [[BezierPoint alloc] initWithPoint:CGPointMake(startPoint.x - 100.0, startPoint.y + 70.0)
                                                  angle:M_PI_2 * 3.0
                                                 weight:70.0 / 1.7],
                     [[BezierPoint alloc] initWithPoint:CGPointMake(startPoint.x + 10.0, startPoint.y + 10)
                                                  angle:0.0
                                                 weight:100.0 / 1.7],
                     nil];
}

- (CGPoint)calculateForwardControlPoint:(NSUInteger)index
{
    BezierPoint *bezierPoint = _bezierPoints[index];
    return CGPointMake(bezierPoint.point.x + cosf(bezierPoint.angle) * bezierPoint.weight,
                       bezierPoint.point.y + sinf(bezierPoint.angle) * bezierPoint.weight);
}

- (CGPoint)calculateReverseControlPoint:(NSUInteger)index
{
    BezierPoint *bezierPoint = _bezierPoints[index];
    return CGPointMake(bezierPoint.point.x - cosf(bezierPoint.angle) * bezierPoint.weight,
                       bezierPoint.point.y - sinf(bezierPoint.angle) * bezierPoint.weight);
}

- (UIBezierPath *)bezierPath
{
    UIBezierPath *path = [UIBezierPath bezierPath];
    BezierPoint *bezierPoint = _bezierPoints[0];
    [path moveToPoint:bezierPoint.point];
    for (NSInteger i = 1; i < [_bezierPoints count]; i++)
    {
        bezierPoint = _bezierPoints[i];
        [path addCurveToPoint:bezierPoint.point
                controlPoint1:[self calculateForwardControlPoint:i - 1]
                controlPoint2:[self calculateReverseControlPoint:i]];
    }
    return path;
}
When I render this into a UIImage (using the code below), I don't see any softening of the image, though admittedly the images are not identical. (I'm comparing the image rendered by capture against one captured manually as a screen snapshot, by pressing the power and home buttons on my physical device at the same time.)
If you're seeing some softening, I would suggest renderInContext (as shown below). I also wonder if you're writing the image as JPG (which is lossy); maybe try PNG if you used JPG.
- (void)drawBezier
{
    UIBezierPath *path = [self bezierPath];
    CAShapeLayer *oval = [[CAShapeLayer alloc] init];
    oval.path = path.CGPath;
    oval.strokeColor = [UIColor redColor].CGColor;
    oval.fillColor = [UIColor clearColor].CGColor;
    oval.lineWidth = 5.0;
    oval.strokeStart = 0.0;
    oval.strokeEnd = 1.0;
    [self.view.layer addSublayer:oval];
}

- (void)capture
{
    UIGraphicsBeginImageContextWithOptions(self.view.frame.size, NO, 0.0);
    CGContextRef context = UIGraphicsGetCurrentContext();
    [self.view.layer renderInContext:context];
    UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // save the image
    NSData *data = UIImagePNGRepresentation(screenshot);
    NSString *documentsPath = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES)[0];
    NSString *imagePath = [documentsPath stringByAppendingPathComponent:@"image.png"];
    [data writeToFile:imagePath atomically:YES];

    // send it to myself so I can look at the file
    NSURL *url = [NSURL fileURLWithPath:imagePath];
    UIActivityViewController *controller = [[UIActivityViewController alloc] initWithActivityItems:@[url]
                                                                             applicationActivities:nil];
    [self presentViewController:controller animated:YES completion:nil];
}
I have an NSImageView which I get an image for from an NSOpenPanel. That works great.
Now, how can I take that NSImage, halve its size, and save it in the same format and the same directory as the original?
If you can help at all with anything I'd appreciate it, thanks.
Check the ImageCrop sample project from Matt Gemmell:
http://mattgemmell.com/source/
It's a nice example of how to resize/crop images.
Finally you can use something like this to save the result (dirty sample):
// Write to TIFF
[[resultImg TIFFRepresentation] writeToFile:@"/Users/Anne/Desktop/Result.tif" atomically:YES];

// Write to JPG
NSData *imageData = [resultImg TIFFRepresentation];
NSBitmapImageRep *imageRep = [NSBitmapImageRep imageRepWithData:imageData];
NSDictionary *imageProps = [NSDictionary dictionaryWithObject:[NSNumber numberWithFloat:0.9] forKey:NSImageCompressionFactor];
imageData = [imageRep representationUsingType:NSJPEGFileType properties:imageProps];
[imageData writeToFile:@"/Users/Anne/Desktop/Result.jpg" atomically:NO];
Since NSImage objects are immutable you will have to:
1. Create a Core Graphics context the size of the new image.
2. Draw the NSImage into the CGContext; it should automatically scale it for you.
3. Create an NSImage from that context.
4. Write out the new NSImage.
Don't forget to release any temporary objects you allocated. There are definitely other options, but this is the first one that came to mind; a sketch of these steps follows.
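A minimal sketch of those steps (a hypothetical helper, written MRC-style to match the rest of this answer; error checking omitted):
// Sketch only: scales srcImage into a new NSImage at half the size.
- (NSImage *)halfSizedImage:(NSImage *)srcImage
{
    NSSize newSize = NSMakeSize(srcImage.size.width / 2.0, srcImage.size.height / 2.0);

    // 1. Create a Core Graphics context the size of the new image.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(NULL, newSize.width, newSize.height,
                                                 8, 0, colorSpace, kCGImageAlphaPremultipliedLast);

    // 2. Draw the NSImage into the CGContext; it is scaled to fit the rect.
    NSGraphicsContext *gc = [NSGraphicsContext graphicsContextWithGraphicsPort:context flipped:NO];
    [NSGraphicsContext saveGraphicsState];
    [NSGraphicsContext setCurrentContext:gc];
    [srcImage drawInRect:NSMakeRect(0, 0, newSize.width, newSize.height)
                fromRect:NSZeroRect
               operation:NSCompositeCopy
                fraction:1.0];
    [NSGraphicsContext restoreGraphicsState];

    // 3. Create an NSImage from that context.
    CGImageRef cgImage = CGBitmapContextCreateImage(context);
    NSImage *newImage = [[NSImage alloc] initWithSize:newSize];
    [newImage addRepresentation:[[[NSBitmapImageRep alloc] initWithCGImage:cgImage] autorelease]];

    // 4. Release the temporaries; the caller writes out newImage
    //    (e.g. via TIFFRepresentation, as in the answer above).
    CGImageRelease(cgImage);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    return [newImage autorelease];
}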
+ (NSImage *)resize:(NSImage *)aImage scale:(CGFloat)aScale
{
    NSImageView *kView = [[NSImageView alloc] initWithFrame:NSMakeRect(0, 0, aImage.size.width * aScale, aImage.size.height * aScale)];
    [kView setImageScaling:NSImageScaleProportionallyUpOrDown];
    [kView setImage:aImage];
    NSRect kRect = kView.frame;
    NSBitmapImageRep *kRep = [kView bitmapImageRepForCachingDisplayInRect:kRect];
    [kView cacheDisplayInRect:kRect toBitmapImageRep:kRep];
    NSData *kData = [kRep representationUsingType:NSJPEGFileType properties:nil];
    return [[NSImage alloc] initWithData:kData];
}
Here is a specific implementation
- (NSImage *)resizeImage:(NSImage *)input by:(CGFloat)factor
{
    NSSize size = NSZeroSize;
    size.width = input.size.width * factor;
    size.height = input.size.height * factor;
    NSImage *ret = [[NSImage alloc] initWithSize:size];
    [ret lockFocus];
    NSAffineTransform *transform = [NSAffineTransform transform];
    [transform scaleBy:factor];
    [transform concat];
    [input drawAtPoint:NSZeroPoint fromRect:NSZeroRect operation:NSCompositeCopy fraction:1.0];
    [ret unlockFocus];
    return [ret autorelease];
}
Keep in mind that this is pixel-based; with HiDPI the backing scale must be taken into account. It is simple to obtain:
- (CGFloat)pixelScaling
{
    NSRect pixelBounds = [self convertRectToBacking:self.bounds];
    return pixelBounds.size.width / self.bounds.size.width;
}
Apple has source code for downscaling and saving images found here
http://developer.apple.com/library/mac/#samplecode/Reducer/Introduction/Intro.html
Here is some code that makes more extensive use of Core Graphics than the other answers. It's made according to hints in Mark Thalman's answer to this question.
This code downscales an NSImage based on a target image width. It's somewhat nasty, but still useful as an extra sample documenting how to draw an NSImage in a CGContext and how to write the contents of a CGBitmapContext and a CGImage to a file.
You may want to add extra error checking. I didn't need it for my use case.
- (void)generateThumbnailForImage:(NSImage *)image atPath:(NSString *)newFilePath forWidth:(int)width
{
    CGSize size = CGSizeMake(width, image.size.height * (float)width / (float)image.size.width);
    CGColorSpaceRef rgbColorspace = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGImageAlphaPremultipliedLast;
    CGContextRef context = CGBitmapContextCreate(NULL, size.width, size.height, 8, size.width * 4, rgbColorspace, bitmapInfo);
    NSGraphicsContext *graphicsContext = [NSGraphicsContext graphicsContextWithGraphicsPort:context flipped:NO];
    [NSGraphicsContext setCurrentContext:graphicsContext];
    [image drawInRect:NSMakeRect(0, 0, size.width, size.height) fromRect:NSMakeRect(0, 0, image.size.width, image.size.height) operation:NSCompositeCopy fraction:1.0];

    CGImageRef outImage = CGBitmapContextCreateImage(context);
    CFURLRef outURL = (CFURLRef)[NSURL fileURLWithPath:newFilePath];
    CGImageDestinationRef outDestination = CGImageDestinationCreateWithURL(outURL, kUTTypeJPEG, 1, NULL);
    CGImageDestinationAddImage(outDestination, outImage, NULL);
    if (!CGImageDestinationFinalize(outDestination))
    {
        NSLog(@"Failed to write image to %@", newFilePath);
    }

    CFRelease(outDestination);
    CGImageRelease(outImage);
    CGContextRelease(context);
    CGColorSpaceRelease(rgbColorspace);
}
To resize an image:
- (NSImage *)scaleImage:(NSImage *)anImage newSize:(NSSize)newSize
{
    NSImage *sourceImage = anImage;
    if ([sourceImage isValid])
    {
        // return the original if the size already matches or the requested size is invalid
        if ((anImage.size.width == newSize.width && anImage.size.height == newSize.height) || newSize.width <= 0 || newSize.height <= 0) {
            return anImage;
        }
        NSRect oldRect = NSMakeRect(0.0, 0.0, anImage.size.width, anImage.size.height);
        NSRect newRect = NSMakeRect(0, 0, newSize.width, newSize.height);
        NSImage *newImage = [[NSImage alloc] initWithSize:newSize];
        [newImage lockFocus];
        [sourceImage drawInRect:newRect fromRect:oldRect operation:NSCompositeCopy fraction:1.0];
        [newImage unlockFocus];
        return newImage;
    }
    return nil;
}
I'm trying to draw a standard NSImage in white instead of black. The following works fine for drawing the image in black in the current NSGraphicsContext:
NSImage* image = [NSImage imageNamed:NSImageNameEnterFullScreenTemplate];
[image drawInRect:r fromRect:NSZeroRect operation:NSCompositeSourceOver fraction:1.0];
I expected NSCompositeXOR to do the trick, but no. Do I need to go down the complicated [CIFilter filterWithName:@"CIColorInvert"] path? I feel like I must be missing something simple.
The Core Image route would be the most reliable. It's actually not very complicated, I've posted a sample below. If you know none of your images will be flipped then you can remove the transform code. The main thing to be careful of is that the conversion from NSImage to CIImage can be expensive performance-wise, so you should ensure you cache the CIImage if possible and don't re-create it during each drawing operation.
CIImage *ciImage = [[CIImage alloc] initWithData:[yourImage TIFFRepresentation]];
if ([yourImage isFlipped])
{
    CGRect cgRect = [ciImage extent];
    CGAffineTransform transform;
    transform = CGAffineTransformMakeTranslation(0.0, cgRect.size.height);
    transform = CGAffineTransformScale(transform, 1.0, -1.0);
    ciImage = [ciImage imageByApplyingTransform:transform];
}
CIFilter *filter = [CIFilter filterWithName:@"CIColorInvert"];
[filter setDefaults];
[filter setValue:ciImage forKey:@"inputImage"];
CIImage *output = [filter valueForKey:@"outputImage"];
[output drawAtPoint:NSZeroPoint fromRect:NSRectFromCGRect([output extent]) operation:NSCompositeSourceOver fraction:1.0];
Note: release/retain memory management is left as an exercise, the code above assumes garbage collection.
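Regarding the caching advice above, a hypothetical lazy-cache pattern (the ivar and accessor names are assumptions, not part of the original answer) could look like:
// Build the CIImage once and reuse it across draws; reset _cachedCIImage
// to nil whenever yourImage changes so the cache is rebuilt.
- (CIImage *)cachedCIImage
{
    if (_cachedCIImage == nil)
        _cachedCIImage = [[CIImage alloc] initWithData:[yourImage TIFFRepresentation]];
    return _cachedCIImage;
}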
If you want to render the image at an arbitrary size, you could do the following:
NSSize imageSize = NSMakeSize(1024,768); //or whatever size you want
[yourImage setSize:imageSize];
[yourImage lockFocus];
NSBitmapImageRep* bitmap = [[NSBitmapImageRep alloc] initWithFocusedViewRect:NSMakeRect(0, 0, imageSize.width, imageSize.height)];
[yourImage unlockFocus];
CIImage* image = [CIImage imageWithData:[bitmap TIFFRepresentation]];
Here is a solution using Swift 5.1, somewhat based on the solutions above. Note that I am not caching the images, so this likely isn't the most efficient; my primary use case is to flip small monochrome images in toolbar buttons based on whether the current color scheme is light or dark.
import os
import AppKit
import Foundation
public extension NSImage {
    func inverted() -> NSImage {
        guard let cgImage = self.cgImage(forProposedRect: nil, context: nil, hints: nil) else {
            os_log(.error, "Could not create CGImage from NSImage")
            return self
        }

        let ciImage = CIImage(cgImage: cgImage)
        guard let filter = CIFilter(name: "CIColorInvert") else {
            os_log(.error, "Could not create CIColorInvert filter")
            return self
        }

        filter.setValue(ciImage, forKey: kCIInputImageKey)
        guard let outputImage = filter.outputImage else {
            os_log(.error, "Could not obtain output CIImage from filter")
            return self
        }

        guard let outputCgImage = outputImage.toCGImage() else {
            os_log(.error, "Could not create CGImage from CIImage")
            return self
        }

        return NSImage(cgImage: outputCgImage, size: self.size)
    }
}

fileprivate extension CIImage {
    func toCGImage() -> CGImage? {
        let context = CIContext(options: nil)
        if let cgImage = context.createCGImage(self, from: self.extent) {
            return cgImage
        }
        return nil
    }
}
Just one note: I found that the CIColorInvert filter isn't always reliable. For example, if you want to invert back an image that was inverted in Photoshop, the CIFilter will produce a much lighter image. As far as I understood, this happens because of differences in gamma value between the CIFilter (gamma is 1) and images that came from other sources.
While I was looking for ways to change the gamma value for the CIFilter, I found a note that there's a bug in CIContext: changing its gamma value from the default 1 produces unpredictable results.
Regardless, there's another solution for inverting an NSImage which always produces correct results: inverting the pixels of an NSBitmapImageRep.
I'm reposting the code from etutorials.org (http://bit.ly/Y6GpLn):
// srcImageRep is the NSBitmapImageRep of the source image
int n = [srcImageRep bitsPerPixel] / 8; // bytes per pixel
int w = [srcImageRep pixelsWide];
int h = [srcImageRep pixelsHigh];
int rowBytes = [srcImageRep bytesPerRow];
int i;

NSImage *destImage = [[NSImage alloc] initWithSize:NSMakeSize(w, h)];
NSBitmapImageRep *destImageRep = [[[NSBitmapImageRep alloc]
        initWithBitmapDataPlanes:NULL
                      pixelsWide:w
                      pixelsHigh:h
                   bitsPerSample:8
                 samplesPerPixel:n
                        hasAlpha:[srcImageRep hasAlpha]
                        isPlanar:NO
                  colorSpaceName:[srcImageRep colorSpaceName]
                     bytesPerRow:rowBytes
                    bitsPerPixel:0] autorelease];

unsigned char *srcData = [srcImageRep bitmapData];
unsigned char *destData = [destImageRep bitmapData];

// Invert every byte; note this also inverts the alpha channel
// if the image has one.
for ( i = 0; i < rowBytes * h; i++ )
    *(destData + i) = 255 - *(srcData + i);

[destImage addRepresentation:destImageRep];