Draw standard NSImage inverted (white instead of black) - objective-c

I'm trying to draw a standard NSImage in white instead of black. The following works fine for drawing the image in black in the current NSGraphicsContext:
NSImage* image = [NSImage imageNamed:NSImageNameEnterFullScreenTemplate];
[image drawInRect:r fromRect:NSZeroRect operation:NSCompositeSourceOver fraction:1.0];
I expected NSCompositeXOR to do the trick, but no. Do I need to go down the complicated [CIFilter filterWithName:@"CIColorInvert"] path? I feel like I must be missing something simple.

The Core Image route would be the most reliable. It's actually not very complicated; I've posted a sample below. If you know none of your images will be flipped, you can remove the transform code. The main thing to be careful of is that the conversion from NSImage to CIImage can be expensive performance-wise, so ensure you cache the CIImage if possible and don't re-create it during each drawing operation.
CIImage* ciImage = [[CIImage alloc] initWithData:[yourImage TIFFRepresentation]];
if ([yourImage isFlipped])
{
    CGRect cgRect = [ciImage extent];
    CGAffineTransform transform;
    transform = CGAffineTransformMakeTranslation(0.0, cgRect.size.height);
    transform = CGAffineTransformScale(transform, 1.0, -1.0);
    ciImage = [ciImage imageByApplyingTransform:transform];
}
CIFilter* filter = [CIFilter filterWithName:@"CIColorInvert"];
[filter setDefaults];
[filter setValue:ciImage forKey:@"inputImage"];
CIImage* output = [filter valueForKey:@"outputImage"];
[output drawAtPoint:NSZeroPoint fromRect:NSRectFromCGRect([output extent]) operation:NSCompositeSourceOver fraction:1.0];
Note: release/retain memory management is left as an exercise; the code above assumes garbage collection.
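As a minimal sketch of the caching advice above (assuming a hypothetical view or controller with an image property and an _invertedCIImage instance variable; those names are placeholders, not part of the original answer):
- (CIImage *)invertedCIImage
{
    if (_invertedCIImage == nil)
    {
        CIImage *ciImage = [[CIImage alloc] initWithData:[[self image] TIFFRepresentation]];
        CIFilter *filter = [CIFilter filterWithName:@"CIColorInvert"];
        [filter setDefaults];
        [filter setValue:ciImage forKey:@"inputImage"];
        _invertedCIImage = [filter valueForKey:@"outputImage"]; // built once, reused on each draw
    }
    return _invertedCIImage;
}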
If you want to render the image at an arbitrary size, you could do the following:
NSSize imageSize = NSMakeSize(1024,768); //or whatever size you want
[yourImage setSize:imageSize];
[yourImage lockFocus];
NSBitmapImageRep* bitmap = [[NSBitmapImageRep alloc] initWithFocusedViewRect:NSMakeRect(0, 0, imageSize.width, imageSize.height)];
[yourImage unlockFocus];
CIImage* image = [CIImage imageWithData:[bitmap TIFFRepresentation]];

Here is a solution using Swift 5.1, somewhat based on the above solutions. Note that I am not caching the images, so it likely isn't the most efficient, as my primary use case is to invert small monochrome images in toolbar buttons based on whether the current color scheme is light or dark.
import os
import AppKit
import Foundation
public extension NSImage {
    func inverted() -> NSImage {
        guard let cgImage = self.cgImage(forProposedRect: nil, context: nil, hints: nil) else {
            os_log(.error, "Could not create CGImage from NSImage")
            return self
        }
        let ciImage = CIImage(cgImage: cgImage)
        guard let filter = CIFilter(name: "CIColorInvert") else {
            os_log(.error, "Could not create CIColorInvert filter")
            return self
        }
        filter.setValue(ciImage, forKey: kCIInputImageKey)
        guard let outputImage = filter.outputImage else {
            os_log(.error, "Could not obtain output CIImage from filter")
            return self
        }
        guard let outputCgImage = outputImage.toCGImage() else {
            os_log(.error, "Could not create CGImage from CIImage")
            return self
        }
        return NSImage(cgImage: outputCgImage, size: self.size)
    }
}

fileprivate extension CIImage {
    func toCGImage() -> CGImage? {
        let context = CIContext(options: nil)
        if let cgImage = context.createCGImage(self, from: self.extent) {
            return cgImage
        }
        return nil
    }
}
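For the toolbar use case mentioned above, usage might look something like this (a sketch only; toolbarButton and the asset name are placeholders, not from the original answer):
// Hypothetical usage: pick the plain or inverted icon based on the current appearance.
let icon = NSImage(named: "ToolbarIcon")! // placeholder asset name
let isDark = NSApp.effectiveAppearance.bestMatch(from: [.darkAqua, .aqua]) == .darkAqua
toolbarButton.image = isDark ? icon.inverted() : icon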

Just one note: I found that the CIColorInvert filter isn't always reliable. For example, if you want to invert back an image that was inverted in Photoshop, the CIFilter will produce a much lighter image. As far as I understand, this happens because of the difference in gamma value between CIFilter (gamma is 1) and images that came from other sources.
While I was looking for ways to change the gamma value for CIFilter, I found a note that there's a bug in CIContext: changing its gamma value from the default 1 will produce unpredictable results.
Regardless, there's another solution for inverting an NSImage which always produces the correct results: inverting the pixels of an NSBitmapImageRep.
I'm reposting the code from etutorials.org (http://bit.ly/Y6GpLn):
// srcImageRep is the NSBitmapImageRep of the source image
int n = [srcImageRep bitsPerPixel] / 8; // Bytes per pixel
int w = [srcImageRep pixelsWide];
int h = [srcImageRep pixelsHigh];
int rowBytes = [srcImageRep bytesPerRow];
int i;
NSImage *destImage = [[NSImage alloc] initWithSize:NSMakeSize(w, h)];
NSBitmapImageRep *destImageRep = [[[NSBitmapImageRep alloc]
    initWithBitmapDataPlanes:NULL
                  pixelsWide:w
                  pixelsHigh:h
               bitsPerSample:8
             samplesPerPixel:n
                    hasAlpha:[srcImageRep hasAlpha]
                    isPlanar:NO
              colorSpaceName:[srcImageRep colorSpaceName]
                 bytesPerRow:rowBytes
                bitsPerPixel:0] autorelease];
unsigned char *srcData = [srcImageRep bitmapData];
unsigned char *destData = [destImageRep bitmapData];
for ( i = 0; i < rowBytes * h; i++ )
    *(destData + i) = 255 - *(srcData + i);
[destImage addRepresentation:destImageRep];
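One way to obtain srcImageRep for the snippet above and use the result (a sketch; sourceImage is a placeholder, and the loop assumes a non-planar, 8-bits-per-sample representation):
// Hypothetical setup for the snippet above.
NSBitmapImageRep *srcImageRep =
    [NSBitmapImageRep imageRepWithData:[sourceImage TIFFRepresentation]];
// ... run the inversion loop above ...
// destImage now holds the inverted pixels and can be drawn like any other NSImage.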


Huge memory usage despite ARC

I have the following function that opens an image, scales it and saves it to another file.
-(void)writeFileToIcon:(NSString *)fullPath :(NSString *)finalPath :(NSSize)outputSize
{
    NSData *dataToWrite;
    NSBitmapImageRep *rep;
    rep = [NSBitmapImageRep imageRepWithData:[[self scaleImage:[[NSImage alloc] initWithContentsOfFile:fullPath] toSize:outputSize] TIFFRepresentation]];
    dataToWrite = [rep representationUsingType:NSPNGFileType properties:nil];
    [dataToWrite writeToFile:finalPath atomically:YES];
}
- (NSImage *)scaleImage:(NSImage *)image toSize:(NSSize)targetSize
{
    if ([image isValid])
    {
        NSSize imageSize = [image size];
        float width = imageSize.width;
        float height = imageSize.height;
        float targetWidth = targetSize.width;
        float targetHeight = targetSize.height;
        float scaleFactor = 0.0;
        float scaledWidth = targetWidth;
        float scaledHeight = targetHeight;
        NSPoint thumbnailPoint = NSZeroPoint;
        if (!NSEqualSizes(imageSize, targetSize))
        {
            float widthFactor = targetWidth / width;
            float heightFactor = targetHeight / height;
            if (widthFactor < heightFactor)
            {
                scaleFactor = widthFactor;
            }
            else
            {
                scaleFactor = heightFactor;
            }
            scaledWidth = width * scaleFactor;
            scaledHeight = height * scaleFactor;
            if (widthFactor < heightFactor)
            {
                thumbnailPoint.y = (targetHeight - scaledHeight) * 0.5;
            }
            else if (widthFactor > heightFactor)
            {
                thumbnailPoint.x = (targetWidth - scaledWidth) * 0.5;
            }
            NSImage *newImage = [[NSImage alloc] initWithSize:NSMakeSize(scaledWidth, scaledHeight)];
            [newImage lockFocus];
            NSRect thumbnailRect;
            thumbnailRect.origin = NSZeroPoint;
            thumbnailRect.size.width = scaledWidth;
            thumbnailRect.size.height = scaledHeight;
            [image drawInRect:thumbnailRect
                     fromRect:NSZeroRect
                    operation:NSCompositeSourceOver
                     fraction:1.0];
            [newImage unlockFocus];
            return newImage;
        }
        return nil;
    }
    return nil;
}
However, each time this function is called the memory usage gets higher (up to 5 GB after 1000 calls).
The issue is the drawInRect: call, which seems to take a lot of memory (according to the analyser) but does not release it.
How can I "ask" ARC to release it?
Thanks.
One may need to look at the whole code to find the problem. One idea follows, though: under ARC you cannot call "release" on objects, but if you set the pointer to the object to "nil", the object will be released (unless other strong references to that object exist somewhere).
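For example, a minimal sketch of releasing by dropping the strong reference (the names here are placeholders):
NSImage *scaled = [self scaleImage:sourceImage toSize:outputSize];
// ... use scaled ...
scaled = nil; // under ARC the image can be released here, assuming no other strong references exist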
I suggest you track your code and make sure you don't hold objects you don't need anymore. If your code is well encapsulated and structured, this shouldn't happen.
If your code is well designed, though, there is the chance that this amount of memory is actually needed (unlikely, but don't know without more details). If this would be the case, then let the system manage the memory, it will release the objects when it is appropriate. This, and try to make optimizations somewhere if memory usage is a concern.
Off-topic: these long nested ifs with multiple return points within the method are not a very good idea; I suggest you restructure your code slightly. If you write clearer code, you'll have more control over it, and you will find solutions to problems faster.
Are you calling this from a loop or without returning to the main event loop? Adding an explicit @autoreleasepool might help.
-(void)writeFileToIcon:(NSString *)fullPath :(NSString *)finalPath :(NSSize)outputSize
{
    @autoreleasepool {
        NSBitmapImageRep *rep = [NSBitmapImageRep imageRepWithData:[[self scaleImage:[[NSImage alloc] initWithContentsOfFile:fullPath] toSize:outputSize] TIFFRepresentation]];
        NSData *dataToWrite = [rep representationUsingType:NSPNGFileType properties:nil];
        [dataToWrite writeToFile:finalPath atomically:YES];
    }
}
Theoretically, this isn't necessary as code compiled with ARC short circuits the autoreleasepool in some circumstances. However, you may be defeating that optimization here somehow.
Note that it's generally better to do this in the place where the memory allocation becomes the problem logically. So your for loop where you call this method would be a better place for the @autoreleasepool.
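A sketch of that caller-side placement, assuming a hypothetical loop over source paths (the paths array and output size are placeholders):
// Hypothetical caller: drain temporaries once per iteration.
for (NSString *srcPath in imagePaths)
{
    @autoreleasepool
    {
        NSString *dstPath = [srcPath stringByAppendingPathExtension:@"png"];
        [self writeFileToIcon:srcPath :dstPath :NSMakeSize(512, 512)];
    }
}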
My guess is your issue is related to caching in the image classes, but that could be wrong. What does appear to improve matters:
-(void)writeFileToIcon:(NSString *)fullPath :(NSString *)finalPath :(NSSize)outputSize
{
    // wrap in autorelease pool to localise any use of this by the image classes
    @autoreleasepool
    {
        NSImage *dstImage = [self scaleImageFile:fullPath toSize:outputSize];
        NSBitmapImageRep *rep = [[NSBitmapImageRep alloc] initWithData:[dstImage TIFFRepresentation]];
        NSData *dataToWrite = [rep representationUsingType:NSPNGFileType properties:nil];
        [dataToWrite writeToFile:finalPath atomically:YES];
    }
}
- (NSImage *)scaleImageFile:(NSString *)fullPath toSize:(NSSize)targetSize
{
    NSImageRep *srcImageRep = [NSImageRep imageRepWithContentsOfFile:fullPath];
    if (srcImageRep == nil)
        return nil;
    NSSize imageSize = NSMakeSize(srcImageRep.pixelsWide, srcImageRep.pixelsHigh);
    NSSize scaledSize;
    NSPoint thumbnailPoint = NSZeroPoint;
    NSRect thumbnailRect;
    if (!NSEqualSizes(imageSize, targetSize))
    {
        // your existing scale calculation
        ...
        scaledSize = NSMakeSize(scaledWidth, scaledHeight);
    }
    else
        scaledSize = imageSize;
    srcImageRep.size = scaledSize;
    NSImage *newImage = [[NSImage alloc] initWithSize:scaledSize];
    [newImage lockFocus];
    thumbnailRect.origin = NSZeroPoint;
    thumbnailRect.size = scaledSize;
    [srcImageRep drawInRect:thumbnailRect];
    [newImage unlockFocus];
    return newImage;
}
This uses NSImageRep, which in this case appears to reduce the memory footprint. On a sample run using full-screen desktop images scaled to 32x32, the above hovered around 16 MB while the original NSImage-based version steadily grew to 32 MB. YMMV of course.
HTH

Scale Up NSImage and Save

I would like to scale up an image that's 64px to make it 512px (even if it's blurry or pixelated).
I'm using this to get the image from my NSImageView and save it:
NSData *customimageData = [[customIcon image] TIFFRepresentation];
NSBitmapImageRep *customimageRep = [NSBitmapImageRep imageRepWithData:customimageData];
customimageData = [customimageRep representationUsingType:NSPNGFileType properties:nil];
NSString* customBundlePath = [[NSBundle mainBundle] pathForResource:@"customIcon" ofType:@"png"];
[customimageData writeToFile:customBundlePath atomically:YES];
I've tried setSize:, but it still saves at 64px.
Thanks in advance!
You can't use the NSImage's size property as it bears only an indirect relationship to the pixel dimensions of an image representation. A good way to resize pixel dimensions is to use the drawInRect method of NSImageRep:
- (BOOL)drawInRect:(NSRect)rect
Draws the entire image in the specified rectangle, scaling it as needed to fit.
Here is a image resize method (creates a new NSImage at the pixel size you want).
- (NSImage*) resizeImage:(NSImage*)sourceImage size:(NSSize)size
{
    NSRect targetFrame = NSMakeRect(0, 0, size.width, size.height);
    NSImage* targetImage = nil;
    NSImageRep *sourceImageRep = [sourceImage bestRepresentationForRect:targetFrame
                                                                context:nil
                                                                  hints:nil];
    targetImage = [[NSImage alloc] initWithSize:size];
    [targetImage lockFocus];
    [sourceImageRep drawInRect:targetFrame];
    [targetImage unlockFocus];
    return targetImage;
}
It's from a more detailed answer I gave here: NSImage doesn't scale
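Putting it together with your save code, a sketch for the 64px to 512px case (outputPath is a placeholder):
// Hypothetical usage: scale up to 512x512, then write out as PNG.
NSImage *bigImage = [self resizeImage:[customIcon image] size:NSMakeSize(512, 512)];
NSBitmapImageRep *rep = [NSBitmapImageRep imageRepWithData:[bigImage TIFFRepresentation]];
NSData *pngData = [rep representationUsingType:NSPNGFileType properties:nil];
[pngData writeToFile:outputPath atomically:YES];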
Another resize method that works is the NSImage method drawInRect:fromRect:operation:fraction:respectFlipped:hints
- (void)drawInRect:(NSRect)dstSpacePortionRect
fromRect:(NSRect)srcSpacePortionRect
operation:(NSCompositingOperation)op
fraction:(CGFloat)requestedAlpha
respectFlipped:(BOOL)respectContextIsFlipped
hints:(NSDictionary *)hints
The main advantage of this method is the hints NSDictionary, which gives you some control over interpolation. This can yield widely differing results when enlarging an image. The value for the NSImageHintInterpolation key is an NSImageInterpolation enum that can take one of five values:
enum {
    NSImageInterpolationDefault = 0,
    NSImageInterpolationNone = 1,
    NSImageInterpolationLow = 2,
    NSImageInterpolationMedium = 4,
    NSImageInterpolationHigh = 3
};
typedef NSUInteger NSImageInterpolation;
Using this method there is no need for the intermediate step of extracting an imageRep, NSImage will do the right thing...
- (NSImage*) resizeImage:(NSImage*)sourceImage size:(NSSize)size
{
    NSRect targetFrame = NSMakeRect(0, 0, size.width, size.height);
    NSImage* targetImage = [[NSImage alloc] initWithSize:size];
    [targetImage lockFocus];
    [sourceImage drawInRect:targetFrame
                   fromRect:NSZeroRect        // portion of source image to draw
                  operation:NSCompositeCopy   // compositing operation
                   fraction:1.0               // alpha (transparency) value
             respectFlipped:YES               // coordinate system
                      hints:@{NSImageHintInterpolation:
                                [NSNumber numberWithInt:NSImageInterpolationLow]}];
    [targetImage unlockFocus];
    return targetImage;
}

Correct crop of CIGaussianBlur

As I noticed, when CIGaussianBlur is applied to an image, the image's corners get blurred so that it looks smaller than the original. So I figured out that I need to crop it correctly to avoid having transparent edges on the image. But how do I calculate how much I need to crop depending on the blur amount?
Example:
Original image:
Image with 50 inputRadius of CIGaussianBlur (blue color is background of everything):
Take the following code as an example...
CIContext *context = [CIContext contextWithOptions:nil];
CIImage *inputImage = [[CIImage alloc] initWithImage:image];
CIFilter *filter = [CIFilter filterWithName:@"CIGaussianBlur"];
[filter setValue:inputImage forKey:kCIInputImageKey];
[filter setValue:[NSNumber numberWithFloat:5.0f] forKey:@"inputRadius"];
CIImage *result = [filter valueForKey:kCIOutputImageKey];
CGImageRef cgImage = [context createCGImage:result fromRect:[result extent]];
This results in the images you provided above. But if I instead use the original image's rect to create the CGImage from the context, the resulting image is the desired size.
CGImageRef cgImage = [context createCGImage:result fromRect:[inputImage extent]];
There are two issues. The first is that the blur filter samples pixels outside the edges of the input image. These pixels are transparent. That's where the transparent pixels come from.
The trick is to extend the edges before you apply the blur filter. This can be done by a clamp filter e.g. like this:
CIFilter *affineClampFilter = [CIFilter filterWithName:@"CIAffineClamp"];
CGAffineTransform xform = CGAffineTransformMakeScale(1.0, 1.0);
[affineClampFilter setValue:[NSValue valueWithBytes:&xform
                                           objCType:@encode(CGAffineTransform)]
                     forKey:@"inputTransform"];
This filter extends the edges infinitely and eliminates the transparency. The next step would be to apply the blur filter.
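To continue the sketch (this step isn't shown here, but it matches the approach in the next answer): feed the clamped output into the blur filter, then crop back to the original extent when creating the CGImage:
[affineClampFilter setValue:inputImage forKey:kCIInputImageKey];
CIFilter *blurFilter = [CIFilter filterWithName:@"CIGaussianBlur"];
[blurFilter setValue:affineClampFilter.outputImage forKey:kCIInputImageKey];
[blurFilter setValue:@5.0f forKey:@"inputRadius"];
CIImage *outputImage = blurFilter.outputImage;
// When rendering, crop to [inputImage extent] rather than [outputImage extent].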
The second issue is a bit weird. Some renderers produce a bigger output image for the blur filter and you must adapt the origin of the resulting CIImage by some offset e.g. like this:
CGImageRef cgImage = [context createCGImage:outputImage
                                   fromRect:CGRectOffset([inputImage extent],
                                                         offset, offset)];
The software renderer on my iPhone needs three times the blur radius as the offset. The hardware renderer on the same iPhone does not need any offset at all. Maybe you could deduce the offset from the size difference of the input and output images, but I did not try...
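For completeness, if you wanted to try that extent-difference idea, an untested sketch might look like this (it only makes sense when the blurred output has a finite extent, i.e. without the clamp step):
// Untested: per-side expansion of the blurred extent relative to the input.
CGFloat offset = (CGRectGetWidth([outputImage extent]) -
                  CGRectGetWidth([inputImage extent])) / 2.0;
CGImageRef cgImage = [context createCGImage:outputImage
                                   fromRect:CGRectOffset([inputImage extent],
                                                         offset, offset)];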
To get a nice blurred version of an image with hard edges you first need to apply a CIAffineClamp to the source image, extending its edges out and then you need to ensure that you use the input image's extents when generating the output image.
The code is as follows:
CIContext *context = [CIContext contextWithOptions:nil];
UIImage *image = [UIImage imageNamed:@"Flower"];
CIImage *inputImage = [[CIImage alloc] initWithImage:image];
CIFilter *clampFilter = [CIFilter filterWithName:@"CIAffineClamp"];
[clampFilter setDefaults];
[clampFilter setValue:inputImage forKey:kCIInputImageKey];
CIFilter *blurFilter = [CIFilter filterWithName:@"CIGaussianBlur"];
[blurFilter setValue:clampFilter.outputImage forKey:kCIInputImageKey];
[blurFilter setValue:@10.0f forKey:@"inputRadius"];
CIImage *result = [blurFilter valueForKey:kCIOutputImageKey];
CGImageRef cgImage = [context createCGImage:result fromRect:[inputImage extent]];
UIImage *blurredImage = [[UIImage alloc] initWithCGImage:cgImage scale:image.scale orientation:UIImageOrientationUp];
CGImageRelease(cgImage);
Note this code was tested on iOS. It should be similar on OS X (substituting NSImage for UIImage).
I saw some of the solutions and wanted to recommend a more modern one, based off some of the ideas shared here:
private lazy var coreImageContext = CIContext() // Re-use this.

func blurredImage(image: CIImage, radius: CGFloat) -> CGImage? {
    let blurredImage = image
        .clampedToExtent()
        .applyingFilter(
            "CIGaussianBlur",
            parameters: [
                kCIInputRadiusKey: radius,
            ]
        )
        .cropped(to: image.extent)
    return coreImageContext.createCGImage(blurredImage, from: blurredImage.extent)
}
If you need a UIImage afterward, you can of course get it like so:
let image = UIImage(cgImage: cgImage)
... For those wondering, the reason for returning a CGImage is (as noted in the Apple documentation):
Due to Core Image's coordinate system mismatch with UIKit, this filtering approach may yield unexpected results when displayed in a UIImageView with "contentMode". Be sure to back it with a CGImage so that it handles contentMode properly.
If you need a CIImage you could return that, but in this case if you're displaying the image, you'd probably want to be careful.
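A usage sketch, assuming a hypothetical UIImage called sourceImage and an imageView to display the result in:
// Hypothetical usage of blurredImage(image:radius:) above.
if let ciInput = CIImage(image: sourceImage),
   let cgOutput = blurredImage(image: ciInput, radius: 8) {
    imageView.image = UIImage(cgImage: cgOutput)
}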
This works for me :)
CIContext *context = [CIContext contextWithOptions:nil];
CIImage *inputImage = [[CIImage alloc] initWithImage:image];
CIFilter *blurFilter = [CIFilter filterWithName:@"CIGaussianBlur"];
[blurFilter setDefaults];
[blurFilter setValue:inputImage forKey:@"inputImage"];
CGFloat blurLevel = 20.0f; // Set blur level
[blurFilter setValue:[NSNumber numberWithFloat:blurLevel] forKey:@"inputRadius"]; // set value for blur level
CIImage *outputImage = [blurFilter valueForKey:@"outputImage"];
CGRect rect = inputImage.extent; // Create Rect
rect.origin.x += blurLevel; // and set custom params
rect.origin.y += blurLevel; //
rect.size.height -= blurLevel*2.0f; //
rect.size.width -= blurLevel*2.0f; //
CGImageRef cgImage = [context createCGImage:outputImage fromRect:rect]; // Then apply new rect
imageView.image = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);
Here is the Swift 5 version of blurring the image. Set the clamp filter to its defaults so you won't need to supply a transform.
func applyBlurEffect() -> UIImage? {
    let context = CIContext(options: nil)
    let imageToBlur = CIImage(image: self)
    let clampFilter = CIFilter(name: "CIAffineClamp")!
    clampFilter.setDefaults()
    clampFilter.setValue(imageToBlur, forKey: kCIInputImageKey)
    // The CIAffineClamp filter is setting your extent as infinite, which then confounds your context.
    // Try saving off the pre-clamp extent CGRect, and then supplying that to the context initializer.
    let inputImageExtent = imageToBlur!.extent
    guard let currentFilter = CIFilter(name: "CIGaussianBlur") else {
        return nil
    }
    currentFilter.setValue(clampFilter.outputImage, forKey: kCIInputImageKey)
    currentFilter.setValue(10, forKey: "inputRadius")
    guard let output = currentFilter.outputImage, let cgimg = context.createCGImage(output, from: inputImageExtent) else {
        return nil
    }
    return UIImage(cgImage: cgimg)
}
Here is a Swift version:
func applyBlurEffect(image: UIImage) -> UIImage {
    let context = CIContext(options: nil)
    let imageToBlur = CIImage(image: image)
    let blurfilter = CIFilter(name: "CIGaussianBlur")
    blurfilter!.setValue(imageToBlur, forKey: "inputImage")
    blurfilter!.setValue(5.0, forKey: "inputRadius")
    let resultImage = blurfilter!.valueForKey("outputImage") as! CIImage
    let cgImage = context.createCGImage(resultImage, fromRect: resultImage.extent)
    let blurredImage = UIImage(CGImage: cgImage)
    return blurredImage
}
See below two implementations for Xamarin (C#).
1) Works for iOS 6
public static UIImage Blur(UIImage image)
{
    using(var blur = new CIGaussianBlur())
    {
        blur.Image = new CIImage(image);
        blur.Radius = 6.5f;
        using(CIImage output = blur.OutputImage)
        using(CIContext context = CIContext.FromOptions(null))
        using(CGImage cgimage = context.CreateCGImage (output, new RectangleF(0, 0, image.Size.Width, image.Size.Height)))
        {
            return UIImage.FromImage(cgimage);
        }
    }
}
2) Implementation for iOS 7
The approach shown above no longer works properly on iOS 7 (at least at the moment, with Xamarin 7.0.1), so I decided to add cropping another way (the measures may depend on the blur radius).
private static UIImage BlurImage(UIImage image)
{
    using(var blur = new CIGaussianBlur())
    {
        blur.Image = new CIImage(image);
        blur.Radius = 6.5f;
        using(CIImage output = blur.OutputImage)
        using(CIContext context = CIContext.FromOptions(null))
        using(CGImage cgimage = context.CreateCGImage (output, new RectangleF(0, 0, image.Size.Width, image.Size.Height)))
        {
            return UIImage.FromImage(Crop(CIImage.FromCGImage(cgimage), image.Size.Width, image.Size.Height));
        }
    }
}
private static CIImage Crop(CIImage image, float width, float height)
{
    var crop = new CICrop
    {
        Image = image,
        Rectangle = new CIVector(10, 10, width - 20, height - 20)
    };
    return crop.OutputImage;
}
Try this: pass the input's extent to -createCGImage:fromRect: as the rect parameter:
-(UIImage *)gaussianBlurImageWithRadius:(CGFloat)radius {
    CIContext *context = [CIContext contextWithOptions:nil];
    CIImage *input = [CIImage imageWithCGImage:self.CGImage];
    CIFilter *filter = [CIFilter filterWithName:@"CIGaussianBlur"];
    [filter setValue:input forKey:kCIInputImageKey];
    [filter setValue:@(radius) forKey:kCIInputRadiusKey];
    CIImage *output = [filter valueForKey:kCIOutputImageKey];
    CGImageRef imgRef = [context createCGImage:output
                                      fromRect:input.extent];
    UIImage *outImage = [UIImage imageWithCGImage:imgRef
                                            scale:UIScreen.mainScreen.scale
                                      orientation:UIImageOrientationUp];
    CGImageRelease(imgRef);
    return outImage;
}

Get the rotated image from UIImageView

I created an application in which I can rotate, resize, and translate an image using gestures. Then I need to get the image from the UIImageView. I found this piece of code somewhere on Stack Overflow. Although a similar question is answered here, it requires the angle as input. The same person wrote a better solution elsewhere, which I'm using. But it has a problem: it often returns a blank image, or a truncated image (often cut off at the top). So there is something wrong with the code and it requires some changes. My problem is that I'm new to Core Graphics and badly stuck on this problem.
UIGraphicsBeginImageContext(imgView.image.size);
CGContextRef context = UIGraphicsGetCurrentContext();
CGAffineTransform transform = imgView.transform;
transform = CGAffineTransformScale(transform, 1.0, -1.0);
CGContextConcatCTM(context, transform);
CGContextDrawImage(context, CGRectMake(0, 0, imgView.image.size.width, imgView.image.size.height), imgView.image.CGImage);
UIImage *newRotatedImage = [UIImage imageWithCGImage: CGBitmapContextCreateImage(context)];
UIGraphicsEndImageContext();
EDIT 1.1
Thanks for the sample code, but again it has a problem. Let me explain in more detail: I'm using gestures for scaling, translating and resizing the image via the image view, so all of this data is saved in the transform property of the image view. I found another method in Core Image, so I changed my code to:
CGRect bounds = CGRectMake(0, 0, imgTop.size.width, imgTop.size.height);
CIImage *ciImage = [[CIImage alloc] initWithCGImage:imageView.image.CGImage options:nil];
CGAffineTransform transform = imgView.transform;
ciImage = [ciImage imageByApplyingTransform:transform];
return [UIImage imageWithCIImage:ciImage] ;
Now I'm getting a squeezed, wrongly sized, mirrored image. Sorry for disturbing you again. Can you guide me on how to get the proper image using the image view's transform in Core Image?
CIImage *ciImage = [[CIImage alloc] initWithCGImage:fximage.CGImage options:nil];
CGAffineTransform transform = fxobj.transform;
float angle = atan2(transform.b, transform.a);
transform = CGAffineTransformRotate(transform, - 2 * angle);
ciImage = [ciImage imageByApplyingTransform:transform];
UIImage *screenfxImage = [UIImage imageWithCIImage:ciImage];
Do remember to add the line transform = CGAffineTransformRotate(transform, - 2 * angle); because the rotation direction is opposite.
I created an Objective-C class just for this sort of thing. You can check it out on GitHub: ANImageBitmapRep. Here's how you would do rotation:
ANImageBitmapRep * ibr = [myImage image];
[ibr rotate:anAngle];
UIImage * rotated = [ibr image];
Note that here, anAngle is in radians.
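If your gesture code tracks the angle in degrees, convert it first, e.g. (degreesFromGesture is a placeholder):
CGFloat anAngle = degreesFromGesture * M_PI / 180.0; // radians, as -rotate: expects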
Here is the link to the documentation:
http://developer.apple.com/library/mac/#documentation/graphicsimaging/reference/CoreImageFilterReference/Reference/reference.html
Sample code to rotate an image:
CIImage *inputImage = [[CIImage alloc] initWithImage:currentImage];
CIFilter *controlsFilter = [CIFilter filterWithName:@"CIAffineTransform"];
[controlsFilter setValue:inputImage forKey:kCIInputImageKey];
[controlsFilter setValue:[NSNumber numberWithFloat:slider.value] forKey:@"inputAngle"];
CIImage *displayImage = controlsFilter.outputImage;
UIImage *finalImage = [UIImage imageWithCIImage:displayImage];
CIContext *context = [CIContext contextWithOptions:nil];
if (displayImage == nil || finalImage == nil) {
    // We did not get output image. Let's display the original image itself.
    photoEditView.image = currentImage;
}
else {
    CGImageRef imageRef = [context createCGImage:displayImage fromRect:displayImage.extent];
    photoEditView.image = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
}
context = nil;
[inputImage release];
I created a sample app to do this (minus the scaling part) in Objective-C. If anybody is interested, you can download it here: https://github.com/gene-code/coregraphics-drawing/tree/master/coregraphics-drawing/test

Help finding memory leak

Can anyone help me find the memory leak in the code below, which adjusts the brightness of an image?
+(NSImage *)brightness:(NSImage *)image andLevel:(int)level
{
    CGImageSourceRef source = CGImageSourceCreateWithData((CFDataRef)[image TIFFRepresentation], NULL);
    CGImageRef img = CGImageSourceCreateImageAtIndex(source, 0, NULL);
    NSSize size = image.size;
    CGRect rect = CGRectMake(0.0f, 0.0f, size.width, size.height);
    // getting bitmap data from receiver's CGImage
    CFDataRef dataref = CGDataProviderCopyData(CGImageGetDataProvider(img));
    // getting bytes from bitmap image
    UInt8 *data = (UInt8 *)CFDataGetBytePtr(dataref);
    // getting length
    int length = CFDataGetLength(dataref);
    // Perform operation on pixels
    for (int index = 0; index < length; index += 1)
    {
        // Go For BRIGHTNESS
        for (int i = 0; i < 3; i++)
        {
            //printf("This pixel is:%d",data[index + i]);
            if (data[index+i] + level < 0)
            {
                data[index+i] = 0;
            }
            else
            {
                if (data[index+i] + level > 255)
                {
                    data[index+i] = 255;
                }
                else
                {
                    data[index+i] += level;
                }
            }
        }
    }
    // .. Take image attributes
    size_t width = CGImageGetWidth(img);
    size_t height = CGImageGetHeight(img);
    size_t bitsPerComponent = CGImageGetBitsPerComponent(img);
    size_t bitsPerPixel = CGImageGetBitsPerPixel(img);
    size_t bytesPerRow = CGImageGetBytesPerRow(img);
    // .. Do the pixel manipulation
    CGColorSpaceRef colorspace = CGImageGetColorSpace(img);
    CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(img);
    CFDataRef newData = CFDataCreate(NULL, data, length);
    CGDataProviderRef provider = CGDataProviderCreateWithCFData(newData);
    // .. Get the Image out of this raw data
    CGImageRef newImg = CGImageCreate(width, height, bitsPerComponent, bitsPerPixel, bytesPerRow, colorspace, bitmapInfo, provider, NULL, true, kCGRenderingIntentDefault);
    // .. Prepare the image from raw data
    NSImage *newImage = [[NSImage alloc] initWithSize:size];
    // To make the drawing appear on the image instead of on the screen
    [newImage lockFocus];
    // Draws an image into a graphics context.
    CGContextDrawImage([[NSGraphicsContext currentContext] graphicsPort], *(CGRect *)&rect, newImg);
    [newImage unlockFocus];
    // .. done with all, so release the references
    CFRelease(source);
    CFRelease(img);
    CFRelease(dataref);
    CFRelease(colorspace);
    CFRelease(newData);
    CFRelease(provider);
    return [newImage autorelease];
}
You've forgotten to release newImg, which you obtained via a Create function. Also, you shouldn't release colorspace, since you haven't obtained it via a Create or Copy function and you haven't retained it.
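Concretely, the cleanup section at the end of the method might instead look like this (a sketch of the two changes just described):
// Release only what was obtained via Create/Copy functions.
CFRelease(source);
CFRelease(img);
CFRelease(dataref);
CFRelease(newData);
CFRelease(provider);
CFRelease(newImg);   // was missing: newImg came from CGImageCreate
// Do not release colorspace: CGImageGetColorSpace follows the Get rule.
return [newImage autorelease];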
Replace the following code lines:
NSImage* newImage = [[NSImage alloc] initWithSize:size];
with
NSImage* newImage = [[[NSImage alloc] initWithSize:size] autorelease];
and this one: return [newImage autorelease]; with return newImage;
I'm not 100% sure about this but give it a try, hope it might help.
:)