Applying Core Image filters - app crashes - Objective-C

I use the following code to apply a few types of image filters. (There are three more 'editImage' functions, for brightness, saturation and contrast, all sharing a common completeEditingUsingOutputImage: method.) I use a slider to vary their values.
If I work with any one of them individually, it works fine. As soon as I make two function calls on two different filters, the app crashes.
EDIT: didReceiveMemoryWarning is called. I watched the memory allocations in Instruments, and after each edit the allocation increases by around 15 MB.
The crash happens during
CGImageRef cgimg = [context createCGImage:outputImage fromRect:outputImage.extent];
Moreover, if the body of the completeEditingUsingOutputImage: method is inlined into the individual functions, I can work with two types of filters without crashing. As soon as I call the third one, the app crashes.
(The filters and the context are declared as instance variables and initialized in the init method.)
- (UIImage *)editImage:(UIImage *)imageToBeEdited tintValue:(float)tint
{
    CIImage *image = [[CIImage alloc] initWithImage:imageToBeEdited];
    NSLog(@"in edit Image:\ncheck image: %@\ncheck value: %f", image, tint);
    [tintFilter setValue:image forKey:kCIInputImageKey];
    [tintFilter setValue:[NSNumber numberWithFloat:tint] forKey:@"inputAngle"];
    CIImage *outputImage = [tintFilter outputImage];
    NSLog(@"check output image: %@", outputImage);
    return [self completeEditingUsingOutputImage:outputImage];
}
- (UIImage *)completeEditingUsingOutputImage:(CIImage *)outputImage
{
    CGImageRef cgimg = [context createCGImage:outputImage fromRect:outputImage.extent];
    NSLog(@"check cgimg: %@", cgimg);
    UIImage *newImage = [UIImage imageWithCGImage:cgimg];
    NSLog(@"check newImage: %@", newImage);
    CGImageRelease(cgimg);
    return newImage;
}
EDIT: Using these filters on a reduced-size image works now, but it would still be good to know why some memory was not being released before.

Add this line at the top of the completeEditingUsingOutputImage: method:
CIContext *context = [CIContext contextWithOptions:nil];
Also, this is how to get the CIImage:
CIImage *outputImage = [tintFilter valueForKey:@"outputImage"];
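Putting both suggestions together, the method would look roughly like this (a sketch only; creating a context on every call trades speed for memory, so it is worth measuring for slider-driven edits):
- (UIImage *)completeEditingUsingOutputImage:(CIImage *)outputImage
{
    // Create the context locally instead of keeping it in an instance variable
    CIContext *context = [CIContext contextWithOptions:nil];
    CGImageRef cgimg = [context createCGImage:outputImage fromRect:outputImage.extent];
    UIImage *newImage = [UIImage imageWithCGImage:cgimg];
    CGImageRelease(cgimg); // we created cgimg, so we must release it
    return newImage;
}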

Related

Can't apply inputSharpness on UIImage

I'm using a bunch of CIFilters in my app to adjust brightness, saturation, etc., and they work fine. I'm having some issues with inputSharpness, though: if I touch the sharpness slider, the picture just disappears. Relevant code:
UIImage *aUIImage = [imageView image];
CGImageRef aCGImage = aUIImage.CGImage;
aCIImage = [CIImage imageWithCGImage:aCGImage];
//Create context
context = [CIContext contextWithOptions:nil];
sharpFilter = [CIFilter filterWithName:@"CIAttributeTypeScalar" keysAndValues:@"inputImage", aCIImage, nil];
....
- (IBAction)sharpSliderChanged:(id)sender
{
    // Set filter value
    [sharpFilter setValue:[NSNumber numberWithFloat:sharpSlider.value] forKey:@"inputSharpness"];
    // Convert CIImage to UIImage
    outputImage = [sharpFilter outputImage];
    CGImageRef cgimg = [context createCGImage:outputImage fromRect:[outputImage extent]];
    newUIImage = [UIImage imageWithCGImage:cgimg];
    CGImageRelease(cgimg);
    // Add image to imageView
    [imageView setImage:newUIImage];
}
I've read a post with a similar question; there, a possible solution was to add a category for the UIImage effect you want to provide. The only difference here is that instead of one of the CIColorControls parameters, you should use inputSharpness with the CISharpenLuminance filter.
Back to your question: it seems from your comments that you have a problem with how you initialize your filter. I took a look at the official documentation, and I would use CISharpenLuminance during the initialization phase instead. It is only available in iOS 6, though.
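A minimal sketch of that initialization (using the documented CISharpenLuminance filter name and its inputSharpness key; 0.4 is just a starting value):
CIFilter *sharpFilter = [CIFilter filterWithName:@"CISharpenLuminance"];
[sharpFilter setValue:aCIImage forKey:kCIInputImageKey];
[sharpFilter setValue:[NSNumber numberWithFloat:0.4f] forKey:@"inputSharpness"];
CIImage *outputImage = [sharpFilter outputImage];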
EDIT
Like I said, if you want to stick with Core Image, the feature you want is available on iOS 6 only. If you want to be compatible with iOS 5, I can recommend a third-party library: Brad Larson's GPUImage.
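With GPUImage, the equivalent would look something like this (a sketch based on that project's published API; names may differ between versions):
GPUImageSharpenFilter *gpuSharpen = [[GPUImageSharpenFilter alloc] init];
gpuSharpen.sharpness = sharpSlider.value; // GPUImage documents a range of roughly -4.0 to 4.0
UIImage *sharpened = [gpuSharpen imageByFilteringImage:aUIImage];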

Objective-C imageWithCGImage memory leak

I want to save all my photos from assets to some folder. I do this in a loop:
ALAssetRepresentation *representation = [asset defaultRepresentation];
CGImageRef imageRef = [representation fullResolutionImage];
ALAssetOrientation orientation = [representation orientation];
UIImage *image = [UIImage imageWithCGImage:imageRef scale:1.0 orientation:(UIImageOrientation)orientation];
CGFloat compressionQuality = 1.0;
NSData *imageData = [NSData dataWithData:UIImageJPEGRepresentation(image, compressionQuality)];
[imageData writeToFile:path atomically:YES];
CGImageRelease(imageRef);
I have Automatic Reference Counting enabled, and this code is inside an autorelease pool, yet it leaks CGImageRef objects. If I call the release twice,
CGImageRelease(imageRef);
CGImageRelease(imageRef);
there is no memory leak. Why? Can anybody help me?
This is one unbelievable bug in iOS. Apparently, when you create a UIImage using the imageWithCGImage: method, it retains the original CGImageRef, which never gets released even if you release the UIImage itself (e.g. set it to nil under ARC)! So you have to release it explicitly, like this:
UIImage *image = [UIImage imageWithCGImage:imageRef scale:1.0 orientation:(UIImageOrientation)orientation];
CGImageRelease(imageRef);
...
CGImageRelease(image.CGImage);
image = nil; // once you are done with it
Cost me a full day of digging around until I came across this question, which actually contained the answer. Who can I send the bill to at Apple for all the time I spent debugging this unspeakable bug?
CORRECTION: This was not an iOS bug; it was my own stupid mistake. At some point I had "hijacked" UIImage's dealloc method through a private category to do some debugging and forgotten about it. This is a WRONG thing to do, since dealloc on the actual object never gets called in that case. So the end result is expected: UIImage didn't get a chance to do all the housekeeping it's supposed to do when being deallocated. NEVER EVER override dealloc through a private category.
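For reference, once that category was removed, the loop from the question behaves correctly with a single release per image. A sketch (assuming ARC, that the assets have been collected into an array, and that path is computed per asset):
for (ALAsset *asset in assets) {
    @autoreleasepool {
        ALAssetRepresentation *representation = [asset defaultRepresentation];
        CGImageRef imageRef = [representation fullResolutionImage];
        ALAssetOrientation orientation = [representation orientation];
        UIImage *image = [UIImage imageWithCGImage:imageRef
                                             scale:1.0
                                       orientation:(UIImageOrientation)orientation];
        NSData *imageData = UIImageJPEGRepresentation(image, 1.0);
        [imageData writeToFile:path atomically:YES];
        CGImageRelease(imageRef); // one release, as in the original code
    }
}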

Applying CIFilter destroys data from .BMP File

I seem to be tying myself up in knots trying to read into all of the different ways you can represent images in a Cocoa app for OSX.
My app reads in an image, applies CIFilters to it and then saves the output. Until this morning, this worked fine for all of the images that I've thrown at it. However, I've found some BMP files that produce empty, transparent images as soon as I try to apply any CIFilter to them.
One such image is one of the adverts from the Dragon Age 2 loader (I was just testing my app on random images this morning): http://www.johnwordsworth.com/wp-content/uploads/2011/08/hires_en.bmp
Specifically, my code does the following:
1. Load a CIImage using imageWithCGImage (the same problem occurs with initWithContentsOfURL).
2. Apply a number of CIFilters to the CIImage, all the while storing the current image in my AIImage container class.
3. Preview the image by adding an NSCIImageRep to an NSImage.
4. Save the image using NSBitmapImageRep / initWithCIImage and then representationUsingType.
This process works with 99% of the files I've thrown at it (all JPGs, PNGs, TIFFs so far), just not with certain BMP files. If I skip step 2, the preview and saved image come out OK. However, if I turn step 2 on, the image produced is always blank and transparent.
The code is quite large, but here are what I believe to be the relevant snippets...
AIImage Loading Method
CGImageSourceRef imageSource = CGImageSourceCreateWithURL((CFURLRef)[NSURL fileURLWithPath:imagePath], nil);
CGImageRef imageRef = CGImageSourceCreateImageAtIndex(imageSource, 0, nil);
CFDictionaryRef dictionaryRef = CGImageSourceCopyPropertiesAtIndex(imageSource, 0, nil);
self.imageProperties = [NSMutableDictionary dictionaryWithDictionary:((NSDictionary *)dictionaryRef)];
self.imageData = [CIImage imageWithCGImage:imageRef];
AIImageResize Method
NSAffineTransform *transform = [NSAffineTransform transform];
[transform scaleXBy:(targetSize.width / sourceRect.size.width) yBy:(targetSize.height / sourceRect.size.height)];
CIFilter *transformFilter = [CIFilter filterWithName:@"CIAffineTransform"];
[transformFilter setValue:transform forKey:@"inputTransform"];
[transformFilter setValue:currentImage forKey:#"inputImage"];
currentImage = [transformFilter valueForKey:#"outputImage"];
aiImage.imageData = currentImage;
CIImagePreview Method
NSCIImageRep *imageRep = [[NSCIImageRep alloc] initWithCIImage:ciImage];
NSImage *nsImage = [[[NSImage alloc] initWithSize:ciImage.extent.size] autorelease];
[nsImage addRepresentation:imageRep];
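The save step (step 4) is not shown above, but it is essentially the following (a sketch; savePath and the PNG file type stand in for whatever the app actually uses):
NSBitmapImageRep *bitmapRep = [[NSBitmapImageRep alloc] initWithCIImage:ciImage];
NSData *fileData = [bitmapRep representationUsingType:NSPNGFileType properties:nil];
[fileData writeToFile:savePath atomically:YES];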
Thanks for looking. Any advice would be greatly appreciated.

Get pixel colour from a Webcam

I am trying to get the colour of a pixel from the image displayed by the webcam, to see how it changes over time.
My current solution consumes a LOT of CPU; it works and gives me the correct answer, but I am not 100% sure I am doing this correctly, or whether I could cut some steps out.
- (IBAction)addFrame:(id)sender
{
    // Get the most recent frame.
    // This must be done in a @synchronized block because the delegate method
    // that sets the most recent frame is not called on the main thread.
    CVImageBufferRef imageBuffer;
    @synchronized (self) {
        imageBuffer = CVBufferRetain(mCurrentImageBuffer);
    }
    if (imageBuffer) {
        // Create an NSImage and add it to the movie
        // I think I can remove some steps here, but not sure where.
        NSCIImageRep *imageRep = [NSCIImageRep imageRepWithCIImage:[CIImage imageWithCVImageBuffer:imageBuffer]];
        NSSize n = {320, 160};
        //NSImage *image = [[[NSImage alloc] initWithSize:[imageRep size]] autorelease];
        NSImage *image = [[[NSImage alloc] initWithSize:n] autorelease];
        [image addRepresentation:imageRep];
        CVBufferRelease(imageBuffer);
        NSBitmapImageRep *raw_img = [NSBitmapImageRep imageRepWithData:[image TIFFRepresentation]];
        NSLog(@"image width is %f", [image size].width);
        NSColor *color = [raw_img colorAtX:1279 y:120];
        float colourValue = [color greenComponent] + [color redComponent] + [color blueComponent];
        [graphView setXY:10 andY:200 * colourValue / 3];
        NSLog(@"%0.3f", colourValue);
    }
}
Any help is appreciated and I am happy to try other ideas.
Thanks guys.
There are a couple of ways that this could be made more efficient. Take a look at the imageFromSampleBuffer: method in this Tech Q&A, which presents a cleaner way of getting from a CVImageBufferRef to an image (the sample uses a UIImage, but it's practically identical for an NSImage).
You can also pull the pixel values straight out of the CVImageBufferRef without any conversion. Once you have the base address of the buffer, you can calculate the offset of any pixel and read the values from there.
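A sketch of that direct approach (assuming the CVImageBufferRef is a pixel buffer in 32-bit BGRA format; the coordinates are placeholders):
CVPixelBufferLockBaseAddress(imageBuffer, 0);
uint8_t *base = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
size_t x = 160, y = 120;
const uint8_t *pixel = base + y * bytesPerRow + x * 4; // 4 bytes per BGRA pixel
float colourValue = (pixel[0] + pixel[1] + pixel[2]) / 255.0f; // B + G + R
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);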

UITableViewCell's imageView fit to 40x40

I use the same big images in a tableView and a detailView.
I need the imageView to fit into 40x40 when the image is shown in the tableView, but to stretch across half of the screen in the detailView. I played with several properties but got no positive result:
[cell.imageView setBounds:CGRectMake(0, 0, 50, 50)];
[cell.imageView setClipsToBounds:NO];
[cell.imageView setFrame:CGRectMake(0, 0, 50, 50)];
[cell.imageView setContentMode:UIViewContentModeScaleAspectFill];
I am using SDK 3.0 with the built-in "Cell Objects in Predefined Styles".
I put Ben's code as an extension in my NS-Extensions file so that I can tell any image to make a thumbnail of itself, as in:
UIImage *bigImage = [UIImage imageNamed:@"yourImage.png"];
UIImage *thumb = [bigImage makeThumbnailOfSize:CGSizeMake(50,50)];
Here is the .h file:
@interface UIImage (PhoenixMaster)
- (UIImage *)makeThumbnailOfSize:(CGSize)size;
@end
and then in the NS-Extensions.m file:
@implementation UIImage (PhoenixMaster)

- (UIImage *)makeThumbnailOfSize:(CGSize)size
{
    UIGraphicsBeginImageContextWithOptions(size, NO, UIScreen.mainScreen.scale);
    // draw scaled image into thumbnail context
    [self drawInRect:CGRectMake(0, 0, size.width, size.height)];
    UIImage *newThumbnail = UIGraphicsGetImageFromCurrentImageContext();
    // pop the context
    UIGraphicsEndImageContext();
    if (newThumbnail == nil)
        NSLog(@"could not scale image");
    return newThumbnail;
}

@end
I cache a thumbnail version since using large images scaled down on the fly uses too much memory.
Here's my thumbnail code:
- (UIImage *)thumbnailOfSize:(CGSize)size {
    if (self.previewThumbnail)
        return self.previewThumbnail; // return the cached thumbnail
    UIGraphicsBeginImageContext(size);
    // draw scaled image into thumbnail context
    [self.preview drawInRect:CGRectMake(0, 0, size.width, size.height)];
    UIImage *newThumbnail = UIGraphicsGetImageFromCurrentImageContext();
    // pop the context
    UIGraphicsEndImageContext();
    if (newThumbnail == nil)
        NSLog(@"could not scale image");
    self.previewThumbnail = newThumbnail;
    return self.previewThumbnail;
}
Just make sure you properly clear the cached thumbnail if you change your original image (self.preview in my case).
I have mine wrapped in a UIView and use this code:
imageView.contentMode = UIViewContentModeScaleAspectFit;
imageView.autoresizingMask = UIViewAutoresizingFlexibleWidth |UIViewAutoresizingFlexibleHeight;
[self addSubview:imageView];
imageView.frame = self.bounds;
(self is the wrapper UIView, with the dimensions I want - I use AsyncImageView).
I thought Ben Lachman's suggestion of generating thumbnails in advance rather than on the fly was smart, so I adapted his code to handle a whole array and to be more portable (no hard-coded property names).
- (NSArray *)arrayOfThumbnailsOfSize:(CGSize)size fromArray:(NSArray *)original {
    NSMutableArray *temp = [NSMutableArray arrayWithCapacity:[original count]];
    for (UIImage *image in original) {
        UIGraphicsBeginImageContext(size);
        [image drawInRect:CGRectMake(0, 0, size.width, size.height)];
        UIImage *thumb = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        [temp addObject:thumb];
    }
    return [NSArray arrayWithArray:temp];
}
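Usage would be along these lines (originalImages being a hypothetical array of UIImages):
NSArray *thumbs = [self arrayOfThumbnailsOfSize:CGSizeMake(40, 40) fromArray:originalImages];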
You might be able to use this:
yourTableViewController.rowImage = [UIImage imageNamed:@"yourImage.png"];
and/or
cell.image = yourTableViewController.rowImage;
And if your images are already 40x40, you shouldn't have to worry about setting bounds and such. But I'm also new to this, so I wouldn't know; I haven't played around with table view row/cell images much.
Hope this helps.
I was able to make this work using Interface Builder and a table view cell: you can set an image view's "Mode" property to "Aspect Fit". I'm not sure how to do this programmatically.
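(For what it's worth, the programmatic equivalent of Interface Builder's "Aspect Fit" should be the contentMode property; a one-line sketch:)
cell.imageView.contentMode = UIViewContentModeScaleAspectFit;
cell.imageView.clipsToBounds = YES; // keep the scaled image inside the cell's image view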
Try setting UIImageView.autoresizesSubviews and/or UIImageView.contentStretch.