I'm using a number of CIFilter filters in my app to adjust brightness, saturation, etc., and they work fine. I'm having an issue with inputSharpness: if I touch the sharpness slider, the picture just disappears. Relevant code:
UIImage *aUIImage = [imageView image];
CGImageRef aCGImage = aUIImage.CGImage;
aCIImage = [CIImage imageWithCGImage:aCGImage];
//Create context
context = [CIContext contextWithOptions:nil];
sharpFilter = [CIFilter filterWithName:@"CIAttributeTypeScalar" keysAndValues: @"inputImage", aCIImage, nil];
....
- (IBAction)sharpSliderChanged:(id)sender
{
//Set filter value
[sharpFilter setValue:[NSNumber numberWithFloat:sharpSlider.value] forKey:@"inputSharpness"];
//Convert CIImage to UIImage
outputImage = [sharpFilter outputImage];
CGImageRef cgimg = [context createCGImage:outputImage fromRect:[outputImage extent]];
newUIImage = [UIImage imageWithCGImage:cgimg];
CGImageRelease(cgimg);
//add image to imageView
[imageView setImage:newUIImage];
}
I've read a post with a similar question, where a possible solution was to add a category for the UIImage effect you want to provide. The only difference here is that instead of one of the CIColorControls parameters, you should use the CISharpenLuminance filter and its inputSharpness parameter.
Back to your question: it seems from your comments that the problem is in how you initialize your filter. I took a look at the official documentation, and I would use CISharpenLuminance during the initialization phase instead. Note that it is only available on iOS 6, though.
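For example, a minimal sketch of that initialization, assuming you keep the rest of your slider code as is (CISharpenLuminance exposes the inputSharpness key you are already setting):
//Create the sharpen filter with a valid filter name (sketch, iOS 6+)
sharpFilter = [CIFilter filterWithName:@"CISharpenLuminance"
                         keysAndValues:kCIInputImageKey, aCIImage, nil];
//The existing slider callback can then keep setting inputSharpness on it
[sharpFilter setValue:[NSNumber numberWithFloat:sharpSlider.value] forKey:@"inputSharpness"];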
EDIT
Like I said, if you want to stick with Core Image, the filter you want is available on iOS 6 only. If you need to be compatible with iOS 5, I can recommend a third-party library: Brad Larson's GPUImage.
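For instance, a hedged sketch of the same sharpen adjustment with GPUImage (the class and property names are GPUImage's; the image and slider variables follow the question's code):
#import "GPUImage.h"
//Sharpen with GPUImage instead of Core Image (works on iOS 5)
GPUImageSharpenFilter *sharpenFilter = [[GPUImageSharpenFilter alloc] init];
sharpenFilter.sharpness = sharpSlider.value; //GPUImage expects roughly -4.0 ... 4.0
UIImage *result = [sharpenFilter imageByFilteringImage:aUIImage];
[imageView setImage:result];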
I am taking a snapshot of the contents of an NSView, applying a CIFilter, and placing the result back into the view. If the CIFilter is a form of blur, such as CIBoxBlur or CIGaussianBlur, the filtered result is slightly smaller than the original. As I am doing this iteratively, the result becomes increasingly small, which I want to avoid.
The issue is alluded to here, albeit in a slightly different context (Quartz Composer). Apple's FunHouse demo app applies a Gaussian blur without the image shrinking, but I haven't yet worked out how it does this (it seems to be using OpenGL, which I am not familiar with).
Here is the relevant part of the code (inside an NSView subclass):
NSImage* background = [[NSImage alloc] initWithData:[self dataWithPDFInsideRect:[self bounds]]];
CIContext* context = [[NSGraphicsContext currentContext] CIContext];
CIImage* ciImage = [background ciImage];
CIFilter *filter = [CIFilter filterWithName:@"CIGaussianBlur"
                              keysAndValues:kCIInputImageKey, ciImage,
                                            @"inputRadius", [NSNumber numberWithFloat:10.0], nil];
CIImage *result = [filter valueForKey:kCIOutputImageKey];
CGImageRef cgImage = [context createCGImage:result
fromRect:[result extent]];
NSImage* newBackground = [[NSImage alloc] initWithCGImage:cgImage size:background.size];
If I try a color-changing filter such as CISepiaTone, which is not shifting pixels around, the shrinking does not occur.
I am wondering if there is a quick fix that doesn't involve diving into openGL?
They're actually not shrinking the image; they're expanding it (I think by 7 pixels around all edges), and the default UIView 'scale To View' setting makes it look like it's been shrunk.
Crop your CIImage with:
CIImage *cropped=[output imageByCroppingToRect:CGRectMake(0, 0, view.bounds.size.width*scale, view.bounds.size.height*scale)];
where view is the NSView you drew into (with its original bounds) and scale is your [[UIScreen mainScreen] scale].
You probably want to clamp your image before using the blur:
- (CIImage *)imageByClampingToExtent {
    CIFilter *clamp = [CIFilter filterWithName:@"CIAffineClamp"];
    [clamp setValue:[NSAffineTransform transform] forKey:@"inputTransform"];
    [clamp setValue:self forKey:@"inputImage"];
    return [clamp valueForKey:@"outputImage"];
}
Then blur, and then crop to the original extent. You'll get non-transparent edges this way.
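Roughly, a sketch of that clamp → blur → crop sequence (inputImage and blurRadius are placeholder names, not from the snippets above):
//Clamp so the blur has defined pixels beyond the edges, then crop back to the original extent
CIImage *clamped = [inputImage imageByClampingToExtent]; //the category above
CIFilter *blur = [CIFilter filterWithName:@"CIGaussianBlur"];
[blur setValue:clamped forKey:kCIInputImageKey];
[blur setValue:[NSNumber numberWithFloat:blurRadius] forKey:@"inputRadius"];
CIImage *blurred = [blur valueForKey:kCIOutputImageKey];
CIImage *final = [blurred imageByCroppingToRect:[inputImage extent]];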
@BBC_Z's solution is correct.
Although I find it more elegant to crop according to the image rather than to the view.
You can also cut away the useless blurred edges:
// Crop transparent edges from blur
resultImage = [resultImage imageByCroppingToRect:(CGRect){
.origin.x = blurRadius,
.origin.y = blurRadius,
.size.width = originalCIImage.extent.size.width - blurRadius*2,
.size.height = originalCIImage.extent.size.height - blurRadius*2
}];
I use the following code to apply a few types of image filters. (There are three more 'editImage' functions, for brightness, saturation, and contrast, all sharing a common completeEditingUsingOutputImage: method.) I use a slider to vary their values.
If I work with any of them individually, it works fine. As soon as I make two function calls on two different filters, the app crashes.
EDIT: didReceiveMemoryWarning is called. Watching the memory allocations with the Allocations/Leaks instrument, I see that after each edit the memory allocation increases by around 15 MB.
The crash happens during
CGImageRef cgimg = [context createCGImage:outputImage fromRect:outputImage.extent];
Moreover, if the instructions of the completeEditingUsingOutputImage: method are put into the individual functions, I am able to work with two types of filters without crashing. As soon as I call the third one, the app crashes.
(filters and context have been declared as instance variables and initialized in the init method)
- (UIImage *)editImage:(UIImage *)imageToBeEdited tintValue:(float)tint
{
    CIImage *image = [[CIImage alloc] initWithImage:imageToBeEdited];
    NSLog(@"in edit Image:\ncheck image: %@\ncheck value: %f", image, tint);
    [tintFilter setValue:image forKey:kCIInputImageKey];
    [tintFilter setValue:[NSNumber numberWithFloat:tint] forKey:@"inputAngle"];
    CIImage *outputImage = [tintFilter outputImage];
    NSLog(@"check output image: %@", outputImage);
    return [self completeEditingUsingOutputImage:outputImage];
}
- (UIImage *)completeEditingUsingOutputImage:(CIImage *)outputImage
{
    CGImageRef cgimg = [context createCGImage:outputImage fromRect:outputImage.extent];
    NSLog(@"check cgimg: %@", cgimg);
    UIImage *newImage = [UIImage imageWithCGImage:cgimg];
    NSLog(@"check newImage: %@", newImage);
    CGImageRelease(cgimg);
    return newImage;
}
EDIT: using these filters on a reduced-size image works now, but still, it would be good to know why some memory was not being released before.
Add this line at the very top of the completeEditingUsingOutputImage: method:
CIContext *context = [CIContext contextWithOptions:nil];
Also, this is how to get the CIImage:
CIImage *outputImage = [tintFilter valueForKey:@"outputImage"];
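Putting those two changes together, the method would look roughly like this (a sketch based on the question's code, not a drop-in guarantee):
- (UIImage *)completeEditingUsingOutputImage:(CIImage *)outputImage
{
    //Create a fresh context per edit so its intermediate buffers can be released afterwards
    CIContext *context = [CIContext contextWithOptions:nil];
    CGImageRef cgimg = [context createCGImage:outputImage fromRect:outputImage.extent];
    UIImage *newImage = [UIImage imageWithCGImage:cgimg];
    CGImageRelease(cgimg);
    return newImage;
}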
I seem to be tying myself up in knots trying to read into all of the different ways you can represent images in a Cocoa app for OSX.
My app reads in an image, applies CIFilters to it and then saves the output. Until this morning, this worked fine for all of the images that I've thrown at it. However, I've found some BMP files that produce empty, transparent images as soon as I try to apply any CIFilter to them.
One such image is one of the adverts from the Dragon Age 2 loader (I was just testing my app on random images this morning): http://www.johnwordsworth.com/wp-content/uploads/2011/08/hires_en.bmp
Specifically, my code does the following.
1. Load a CIImage using imageWithCGImage (the same problem occurs with initWithContentsOfURL).
2. Apply a number of CIFilters to the CIImage, all the while storing the current image in my AIImage container class.
3. Preview the image by adding an NSCIImageRep to an NSImage.
4. Save the image using NSBitmapImageRep / initWithCIImage and then representationUsingType.
This process works with 99% of the files I've thrown at it (all JPGs, PNGs, TIFFs so far), just not with certain BMP files. If I skip step 2, the preview and saved image come out OK. However, if I turn step 2 on, the image produced is always blank and transparent.
The code is quite large, but here are what I believe to be the relevant snippets...
AIImage Loading Method
CGImageSourceRef imageSource = CGImageSourceCreateWithURL((CFURLRef)[NSURL fileURLWithPath:imagePath], nil);
CGImageRef imageRef = CGImageSourceCreateImageAtIndex(imageSource, 0, nil);
CFDictionaryRef dictionaryRef = CGImageSourceCopyPropertiesAtIndex(imageSource, 0, nil);
self.imageProperties = [NSMutableDictionary dictionaryWithDictionary:((NSDictionary *)dictionaryRef)];
self.imageData = [CIImage imageWithCGImage:imageRef];
AIImageResize Method
NSAffineTransform *transform = [NSAffineTransform transform];
[transform scaleXBy:(targetSize.width / sourceRect.size.width) yBy:(targetSize.height / sourceRect.size.height)];
CIFilter *transformFilter = [CIFilter filterWithName:@"CIAffineTransform"];
[transformFilter setValue:transform forKey:@"inputTransform"];
[transformFilter setValue:currentImage forKey:@"inputImage"];
currentImage = [transformFilter valueForKey:@"outputImage"];
aiImage.imageData = currentImage;
CIImagePreview Method
NSCIImageRep *imageRep = [[NSCIImageRep alloc] initWithCIImage:ciImage];
NSImage *nsImage = [[[NSImage alloc] initWithSize:ciImage.extent.size] autorelease];
[nsImage addRepresentation:imageRep];
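For completeness, step 4 (saving) is along these lines; this is a hedged reconstruction of that step rather than my exact code, and outputPath is a placeholder:
NSBitmapImageRep *bitmapRep = [[NSBitmapImageRep alloc] initWithCIImage:ciImage];
NSData *pngData = [bitmapRep representationUsingType:NSPNGFileType properties:[NSDictionary dictionary]];
[pngData writeToFile:outputPath atomically:YES];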
Thanks for looking. Any advice would be greatly appreciated.
I have an application that pulls images from an NSURL. Is it possible to inform the application that they are retina ('@2x') versions (the images are of retina resolution)? I currently have the following, but the images appear pixelated on the higher-resolution displays:
NSURL *url = [NSURL URLWithString:self.imageURL];
NSData *data = [NSData dataWithContentsOfURL:url];
UIImage *image = [UIImage imageWithData:data];
self.pictureImageView.image = image;
You need to rescale the UIImage before adding it to the image view.
NSURL *url = [NSURL URLWithString:self.imageURL];
NSData *data = [NSData dataWithContentsOfURL:url];
UIImage *image = [UIImage imageWithData:data];
CGFloat screenScale = [UIScreen mainScreen].scale;
if (image.scale != screenScale)
image = [UIImage imageWithCGImage:image.CGImage scale:screenScale orientation:image.imageOrientation];
self.pictureImageView.image = image;
It's best to avoid hard-coding the scale value, thus the UIScreen call. See Apple’s documentation on UIImage’s scale property for more information about why this is necessary.
It’s also best to avoid using NSData’s -dataWithContentsOfURL: method (unless your code is running on a background thread), as it uses a synchronous network call which cannot be monitored or cancelled. You can read more about the pains of synchronous networking and the ways to avoid it in this Apple Technical Q&A.
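For example, a hedged sketch of loading the image asynchronously instead, using NSURLConnection's completion-handler API (available since iOS 5; the property names follow the question's code):
NSURLRequest *request = [NSURLRequest requestWithURL:[NSURL URLWithString:self.imageURL]];
[NSURLConnection sendAsynchronousRequest:request
                                   queue:[NSOperationQueue mainQueue]
                       completionHandler:^(NSURLResponse *response, NSData *data, NSError *error) {
    if (data && !error) {
        //Build the UIImage at the screen's scale so it isn't treated as 1x
        UIImage *image = [UIImage imageWithData:data];
        CGFloat screenScale = [UIScreen mainScreen].scale;
        if (image.scale != screenScale)
            image = [UIImage imageWithCGImage:image.CGImage scale:screenScale orientation:image.imageOrientation];
        self.pictureImageView.image = image;
    }
}];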
Try using imageWithData:scale: (iOS 6 and later)
NSData *imageData = [NSData dataWithContentsOfURL:url];
UIImage *image = [UIImage imageWithData:imageData scale:[[UIScreen mainScreen] scale]];
You need to set the scale on the UIImage.
UIImage* img = [[UIImage alloc] initWithData:data];
CGFloat screenScale = [UIScreen mainScreen].scale;
if (screenScale != img.scale) {
img = [UIImage imageWithCGImage:img.CGImage scale:screenScale orientation:img.imageOrientation];
}
The documentation says to be careful to construct all your UIImages at the same scale, otherwise you might get weird display issues where things show at half size, double size, half resolution, et cetera. To avoid all that, load all UIImages at retina resolution. Bundle resources will be loaded at the correct scale automatically; for UIImages constructed from URL data, you need to set the scale yourself.
Just to add to this: in the same situation, what I did specifically was the following, and it works like a charm.
double scaleFactor = [UIScreen mainScreen].scale;
NSLog(@"Scale Factor is %f", scaleFactor);
if (scaleFactor == 1.0) {
    [cell.videoImageView setImageWithURL:[NSURL URLWithString:regularThumbnailURLString]];
} else if (scaleFactor == 2.0) {
    [cell.videoImageView setImageWithURL:[NSURL URLWithString:retinaThumbnailURLString]];
}
The @2x convention is just a convenient way of loading images from the application bundle.
If you want to show an image on a retina display, you have to make it 2x bigger:
Image size: 100x100
View size: 50x50
Edit: I think if you're loading images from a server, the best solution would be to add an additional parameter (e.g. scale) and return images of the appropriate size:
www.myserver.com/get_image.php?image_name=img.png&scale=2
You can obtain the scale using [[UIScreen mainScreen] scale].
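A quick sketch of building such a URL (the get_image.php endpoint and its parameters are just the hypothetical example above):
//Ask the server for an image sized for this screen's scale
NSInteger scale = (NSInteger)[[UIScreen mainScreen] scale];
NSString *urlString = [NSString stringWithFormat:
    @"http://www.myserver.com/get_image.php?image_name=img.png&scale=%ld", (long)scale];
NSURL *imageURL = [NSURL URLWithString:urlString];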
To tell the iPhone programmatically that a particular image is retina, you can do something like this:
UIImage *img = [self getImageFromDocumentDirectory];
img = [UIImage imageWithCGImage:img.CGImage scale:2 orientation:img.imageOrientation];
In my case, the TabBarItem image was dynamic, i.e. it was downloaded from a server, so iOS could not identify it as retina. The above code snippet worked for me like a charm.
I need to horizontally flip some video I'm previewing and capturing. À la iChat, I have a webcam and want it to appear as though the user is looking into a mirror.
I'm previewing QuickTime video in a QTCaptureView. My capturing is done frame-by-frame (for reasons I won't get into) with something like:
imageRep = [NSCIImageRep imageRepWithCIImage: [CIImage imageWithCVImageBuffer: frame]];
image = [[NSImage alloc] initWithSize: [imageRep size]];
[image addRepresentation: imageRep];
[movie addImage: image forDuration: someDuration withAttributes: someAttributes];
Any tips?
Nothing like resurrecting an old question. Anyway, I came here and almost found what I was looking for thanks to Brian Webster, but if anyone is looking for the wholesale solution, try this after setting your class as the delegate of the QTCaptureView instance:
- (CIImage *)view:(QTCaptureView *)view willDisplayImage:(CIImage *)image {
//mirror image across y axis
return [image imageByApplyingTransform:CGAffineTransformMakeScale(-1, 1)];
}
You could do this by taking the CIImage you're getting from the capture and running it through a Core Image filter to flip the image around. You would then pass the resulting image into your image rep rather than the original one. The code would look something like:
CIImage* capturedImage = [CIImage imageWithCVImageBuffer:buffer];
NSAffineTransform* flipTransform = [NSAffineTransform transform];
CIFilter* flipFilter;
CIImage* flippedImage;
[flipTransform scaleByX:-1.0 y:1.0]; //horizontal flip
flipFilter = [CIFilter filterWithName:@"CIAffineTransform"];
[flipFilter setValue:flipTransform forKey:@"inputTransform"];
[flipFilter setValue:capturedImage forKey:@"inputImage"];
flippedImage = [flipFilter valueForKey:@"outputImage"];
imageRep = [NSCIImageRep imageRepWithCIImage:flippedImage];
...
Try this!
It will apply filters to the capture view, but not to the output video.
- (IBAction)Vibrance:(id)sender
{
CIFilter *CIVibrance = [CIFilter filterWithName:@"CIVibrance" keysAndValues:
                        @"inputAmount", [NSNumber numberWithDouble:2.0f],
                        nil];
mCaptureView.contentFilters = [NSArray arrayWithObject:CIVibrance];
}
By the way, you can apply any of the filters from this reference: https://developer.apple.com/library/mac/#documentation/graphicsimaging/reference/CoreImageFilterReference/Reference/reference.html