CIFilter gaussianBlur and boxBlur are shrinking the image - how to avoid the resizing? - objective-c

I am taking a snapshot of the contents of an NSView, applying a CIFilter, and placing the result back into the view. If the CIFilter is a form of blur, such as CIBoxBlur or CIGaussianBlur, the filtered result is slightly smaller than the original. As I am doing this iteratively, the result becomes progressively smaller, which I want to avoid.
The issue is alluded to here, albeit in a slightly different context (Quartz Composer). Apple's FunHouse demo app applies a Gaussian blur without the image shrinking, but I haven't yet worked out how that app does it (it seems to be using OpenGL, which I am not familiar with).
Here is the relevant part of the code (inside an NSView subclass):
NSImage* background = [[NSImage alloc] initWithData:[self dataWithPDFInsideRect:[self bounds]]];
CIContext* context = [[NSGraphicsContext currentContext] CIContext];
CIImage* ciImage = [background ciImage];
CIFilter *filter = [CIFilter filterWithName:@"CIGaussianBlur"
                              keysAndValues:kCIInputImageKey, ciImage,
                                            @"inputRadius", [NSNumber numberWithFloat:10.0], nil];
CIImage *result = [filter valueForKey:kCIOutputImageKey];
CGImageRef cgImage = [context createCGImage:result
                                   fromRect:[result extent]];
NSImage* newBackground = [[NSImage alloc] initWithCGImage:cgImage size:background.size];
If I try a color-changing filter such as CISepiaTone, which does not shift pixels around, the shrinking does not occur.
I am wondering if there is a quick fix that doesn't involve diving into OpenGL?

They're actually not shrinking the image; they're expanding it (I think by 7 pixels around all edges), and the default UIView 'scale to fill' behavior makes it look like it's been shrunk.
Crop your CIImage with:
CIImage *cropped = [output imageByCroppingToRect:CGRectMake(0, 0, view.bounds.size.width * scale, view.bounds.size.height * scale)];
where view is the NSView you drew into and scale is your [[UIScreen mainScreen] scale].

You probably want to clamp your image before using the blur:
- (CIImage *)imageByClampingToExtent {
    CIFilter *clamp = [CIFilter filterWithName:@"CIAffineClamp"];
    [clamp setValue:[NSAffineTransform transform] forKey:@"inputTransform"];
    [clamp setValue:self forKey:@"inputImage"];
    return [clamp valueForKey:@"outputImage"];
}
Then blur, and then crop to the original extent. You'll get non-transparent edges this way.
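For instance, a minimal sketch of that clamp-blur-crop pipeline (assuming the imageByClampingToExtent category above is in scope; the radius value is illustrative):
CIImage *input = [background ciImage];
CIImage *clamped = [input imageByClampingToExtent]; // edges extended outward indefinitely
CIFilter *blur = [CIFilter filterWithName:@"CIGaussianBlur"
                            keysAndValues:kCIInputImageKey, clamped,
                                          @"inputRadius", [NSNumber numberWithFloat:10.0], nil];
CIImage *blurred = [blur valueForKey:kCIOutputImageKey];
// Crop back to the original (pre-clamp) extent so the output matches the source size
CIImage *result = [blurred imageByCroppingToRect:[input extent]];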

@BBC_Z's solution is correct.
However, I find it more elegant to crop according to the image rather than the view.
You can also cut away the useless blurred edges:
// Crop transparent edges from blur
resultImage = [resultImage imageByCroppingToRect:(CGRect){
    .origin.x = blurRadius,
    .origin.y = blurRadius,
    .size.width = originalCIImage.extent.size.width - blurRadius * 2,
    .size.height = originalCIImage.extent.size.height - blurRadius * 2
}];

Related

Xcode making a pdf, trying to round corners

I am making a PDF in an iPad app. I can make the PDF, but I want to add a picture with a rounded-corner border. For example, to achieve the effect I want on the border of a simple view item, I use the following code.
self.SaveButtonProp.layer.cornerRadius = 8.0f;
self.SaveButtonProp.layer.masksToBounds = YES;
self.SaveButtonProp.layer.borderColor = [[UIColor blackColor] CGColor];
self.SaveButtonProp.layer.borderWidth = 1.0f;
With the PDF, I am using the following method to add the picture with the border:
CGContextRef currentContext = UIGraphicsGetCurrentContext();
UIImage *demoImage = [UIImage imageWithData:Image];
UIColor *borderColor = [UIColor blackColor];
CGRect rectFrame = CGRectMake(20, 125, 200, 200);
[demoImage drawInRect:rectFrame];
CGContextSetStrokeColorWithColor(currentContext, borderColor.CGColor);
CGContextSetLineWidth(currentContext, 2);
CGContextStrokeRect(currentContext, rectFrame);
How do I round the corners?
Thanks
While drawing, you can set clipping masks. For example, it's relatively easy to create a Bezier path in the shape of a rounded rectangle and apply it as a clipping mask to your graphics context. Everything drawn subsequently will be clipped.
If you want to remove the clipping mask later (for example, because you have an image with rounded corners but follow it with other elements), you'll have to save the graphics state first, then apply your clipping mask, and restore the graphics state when you're done with your rounded corners.
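As a minimal sketch using the rectFrame and demoImage from your code (the corner radius value is illustrative):
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextSaveGState(ctx); // so the clip can be undone afterwards
UIBezierPath *clipPath = [UIBezierPath bezierPathWithRoundedRect:rectFrame cornerRadius:8.0f];
[clipPath addClip]; // everything drawn from here on is clipped to the rounded rect
[demoImage drawInRect:rectFrame];
CGContextRestoreGState(ctx); // subsequent drawing is no longer clipped
// Stroke the border along the same rounded path instead of CGContextStrokeRect
clipPath.lineWidth = 2.0f;
[[UIColor blackColor] setStroke];
[clipPath stroke];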
You can see actual code that comes pretty close to what I think you need here:
UIImage with rounded corners
You can use a method like this to render any UIView/UIImageView into PDF NSData:
UIImageView *imageView = [[UIImageView alloc] initWithImage:image];
NSData *data = [self makePDFfromView:imageView];
Method:
- (NSData *)makePDFfromView:(UIView *)view
{
    NSMutableData *pdfData = [NSMutableData data];
    UIGraphicsBeginPDFContextToData(pdfData, view.bounds, nil);
    UIGraphicsBeginPDFPage();
    CGContextRef pdfContext = UIGraphicsGetCurrentContext();
    [view.layer renderInContext:pdfContext];
    UIGraphicsEndPDFContext();
    return pdfData;
}
Maybe you can adapt this code to help with your problem.

Cropping out a face using CoreImage

I need to crop out a face (or multiple faces) from a given image and use the cropped face image elsewhere. I am using CIDetectorTypeFace from Core Image. The problem is that the new UIImage containing just the detected face needs to be bigger, as the hair or the lower jaw gets cut off. How do I increase the size of initWithFrame:faceFeature.bounds?
Sample code I am using:
CIImage* image = [CIImage imageWithCGImage:staticBG.image.CGImage];
CIDetector* detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                          context:nil
                                          options:[NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh forKey:CIDetectorAccuracy]];
NSArray* features = [detector featuresInImage:image];
for (CIFaceFeature* faceFeature in features)
{
    UIView* faceView = [[UIView alloc] initWithFrame:faceFeature.bounds];
    faceView.layer.borderWidth = 1;
    faceView.layer.borderColor = [[UIColor redColor] CGColor];
    [staticBG addSubview:faceView];
    // cropping the face
    CGImageRef imageRef = CGImageCreateWithImageInRect([staticBG.image CGImage], faceFeature.bounds);
    [resultView setImage:[UIImage imageWithCGImage:imageRef]];
    CGImageRelease(imageRef);
}
Note: the red frame that I made to show the detected face region does not match the cropped image at all. Maybe I am not displaying the frame correctly, but since I do not need to show the frame (I really just need the cropped face), I am not worrying about it much.
Not sure, but you could try:
CGRect biggerRectangle = CGRectInset(faceFeature.bounds,
                                     someNegativeCGFloatToIncreaseSizeForXAxis,
                                     someNegativeCGFloatToIncreaseSizeForYAxis);
CGImageRef imageRef = CGImageCreateWithImageInRect([staticBG.image CGImage], biggerRectangle);
https://developer.apple.com/library/mac/#documentation/graphicsimaging/reference/CGGeometry/Reference/reference.html#//apple_ref/c/func/CGRectInset
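One caveat (my addition, not part of the original answer): with negative insets, the expanded rect can extend past the image edges, so it is worth intersecting it with the image bounds before cropping. A sketch with illustrative inset values:
// Expand the face rect by 40pt horizontally and 60pt vertically (values are illustrative)
CGRect biggerRectangle = CGRectInset(faceFeature.bounds, -40.0f, -60.0f);
// Clamp to the image so CGImageCreateWithImageInRect doesn't receive an out-of-bounds rect
CGRect imageBounds = CGRectMake(0, 0,
                                CGImageGetWidth(staticBG.image.CGImage),
                                CGImageGetHeight(staticBG.image.CGImage));
biggerRectangle = CGRectIntersection(biggerRectangle, imageBounds);
CGImageRef imageRef = CGImageCreateWithImageInRect(staticBG.image.CGImage, biggerRectangle);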

Generate scaled image from off-screen NSView

I have a sequence of off-screen NSViews in a Cocoa application, which are used to compose a PDF for printing. The views are not in an NSWindow, or visible in any way.
I'd like to be able to generate thumbnail images of each view, exactly as the PDF would look, but scaled down to fit a certain pixel size (constrained to a width or height). This needs to be as fast as possible, so I'd like to avoid rendering to PDF and then converting to raster and scaling; I'd like to go directly to raster.
At the moment I'm doing:
NSBitmapImageRep *bitmapImageRep = [pageView bitmapImageRepForCachingDisplayInRect:pageView.bounds];
[pageView cacheDisplayInRect:pageView.bounds toBitmapImageRep:bitmapImageRep];
NSImage *image = [[NSImage alloc] initWithSize:bitmapImageRep.size];
[image addRepresentation:bitmapImageRep];
This approach is working well, but I can't work out how to apply scaling to the NSView before rendering into the bitmapImageRep. I want to avoid using scaleUnitSquareToSize:, because, as I understand it, that only changes the bounds, not the frame, of the NSView.
Any suggestions on the best way of doing this?
This is what I ended up doing, which works perfectly. We draw directly into an NSBitmapImageRep, but scale the context explicitly using CGContextScaleCTM beforehand. graphicsContext.graphicsPort gives you the handle on the CGContextRef for the NSGraphicsContext.
NSView *pageView = [self viewForPageIndex:pageIndex];
float scale = width / pageView.bounds.size.width; // width is the desired thumbnail width, supplied by the caller
float height = scale * pageView.bounds.size.height;
NSRect targetRect = NSMakeRect(0.0, 0.0, width, height);
NSBitmapImageRep *bitmapRep;
bitmapRep = [[NSBitmapImageRep alloc] initWithBitmapDataPlanes:nil
                                                    pixelsWide:targetRect.size.width
                                                    pixelsHigh:targetRect.size.height
                                                 bitsPerSample:8
                                               samplesPerPixel:4
                                                      hasAlpha:YES
                                                      isPlanar:NO
                                                colorSpaceName:NSCalibratedRGBColorSpace
                                                  bitmapFormat:0
                                                   bytesPerRow:(4 * targetRect.size.width)
                                                   bitsPerPixel:32];
[NSGraphicsContext saveGraphicsState];
NSGraphicsContext *graphicsContext = [NSGraphicsContext graphicsContextWithBitmapImageRep:bitmapRep];
[NSGraphicsContext setCurrentContext:graphicsContext];
CGContextScaleCTM(graphicsContext.graphicsPort, scale, scale);
[pageView displayRectIgnoringOpacity:pageView.bounds inContext:graphicsContext];
[NSGraphicsContext restoreGraphicsState];
NSImage *image = [[NSImage alloc] initWithSize:bitmapRep.size];
[image addRepresentation:bitmapRep];
return image;
How about using scaleUnitSquareToSize: and then passing a smaller rect into bitmapImageRepForCachingDisplayInRect: and cacheDisplayInRect:toBitmapImageRep:?
So, if you downscale by a factor of 2, you'd pass a rect with half the width and height.

NSImage losing quality upon writeToFile

Basically, I'm trying to create a program for batch image processing that will resize every image and add a border around the edge (the border will be made up of images as well). I haven't gotten to that implementation yet, and it's beyond the scope of my question, but I mention it because even with a great answer here, I may still be taking the wrong approach to get there, and any help in recognizing that would be greatly appreciated. Anyway, here's my question:
Question:
Can I take the existing code I have below and modify it to save higher-quality images to file than it currently outputs? I literally spent 10+ hours trying to figure out what I was doing wrong; "secondaryImage" drew the high-quality resized image into the custom view, but everything I tried in order to save the file resulted in an image of substantially lower quality (not so much pixelated, just noticeably more blurry). Finally, I found some code in Apple's "Reducer" example (at the end of ImageReducer.m) that locks focus and gets an NSBitmapImageRep from the current view. This made a substantial increase in image quality; however, the output from Photoshop doing the same thing is a bit clearer. It looks like the image drawn to the view is of the same quality that's saved to file, and both are below the quality of the same image resized to 50% in Photoshop. Is it even possible to get higher-quality resized images than this?
Aside from that, how can I modify the existing code to control the quality of the image saved to file? Can I change the compression and pixel density? I'd appreciate any help with either modifying my code or pointing me toward good examples or tutorials (preferably the latter). Thanks so much!
- (void)drawRect:(NSRect)rect {
    // Getting source image
    NSImage *image = [[NSImage alloc] initWithContentsOfFile:@"/Users/TheUser/Desktop/4.jpg"];
    // Setting NSRect, which is how resizing is done in this example. Is there a better way?
    NSRect halfSizeRect = NSMakeRect(0, 0, image.size.width * 0.5, image.size.height * 0.5);
    // Sort of used as an offscreen image or palette to draw onto; in the future I will use it to group several images into one.
    NSImage *secondaryImage = [[NSImage alloc] initWithSize:halfSizeRect.size];
    [secondaryImage lockFocus];
    [[NSGraphicsContext currentContext] setImageInterpolation:NSImageInterpolationHigh];
    [image drawInRect:halfSizeRect fromRect:NSZeroRect operation:NSCompositeSourceOver fraction:1.0];
    [secondaryImage unlockFocus];
    [secondaryImage drawInRect:halfSizeRect fromRect:NSZeroRect operation:NSCompositeSourceOver fraction:1.0];
    // Trying to add image quality options; does this usage even affect the final image?
    NSBitmapImageRep *bip = nil;
    bip = [[NSBitmapImageRep alloc] initWithBitmapDataPlanes:NULL
                                                  pixelsWide:secondaryImage.size.width
                                                  pixelsHigh:secondaryImage.size.height
                                               bitsPerSample:8
                                             samplesPerPixel:4
                                                    hasAlpha:YES
                                                    isPlanar:NO
                                              colorSpaceName:NSDeviceRGBColorSpace
                                                 bytesPerRow:0
                                                bitsPerPixel:0];
    [secondaryImage addRepresentation:bip];
    // Four lines below are from the aforementioned "ImageReducer.m"
    NSSize size = [secondaryImage size];
    [secondaryImage lockFocus];
    NSBitmapImageRep *bitmapImageRep = [[NSBitmapImageRep alloc] initWithFocusedViewRect:NSMakeRect(0, 0, size.width, size.height)];
    [secondaryImage unlockFocus];
    NSDictionary *prop = [NSDictionary dictionaryWithObject:[NSNumber numberWithFloat:1.0] forKey:NSImageCompressionFactor];
    NSData *outputData = [bitmapImageRep representationUsingType:NSJPEGFileType properties:prop];
    [outputData writeToFile:@"/Users/TheUser/Desktop/4_halfsize.jpg" atomically:NO];
    // Release from memory
    [image release];
    [secondaryImage release];
    [bitmapImageRep release];
    [bip release];
}
I'm not sure why you are round-tripping to and from the screen. That could affect the result, and it's not needed.
You can accomplish all of this using CGImage and CGBitmapContext, using the resultant image to draw to the screen if needed. I've used those APIs and had good results (but I do not know how they compare to your current approach).
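A minimal sketch of that CGBitmapContext route (my own illustration, not the answerer's code):
// Resize a CGImage to half size entirely off-screen; no view or window round trip
static CGImageRef CreateHalfSizeImage(CGImageRef source)
{
    size_t width = CGImageGetWidth(source) / 2;
    size_t height = CGImageGetHeight(source) / 2;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(NULL, width, height, 8, 0, colorSpace,
                                                 kCGImageAlphaPremultipliedLast);
    CGContextSetInterpolationQuality(context, kCGInterpolationHigh);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), source);
    CGImageRef resized = CGBitmapContextCreateImage(context);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    return resized; // caller is responsible for releasing
}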
Another note: render at a higher quality for the intermediate, then resize and reduce to 8 bits per channel for the version you write out. This will not make a significant difference now, but it will (in most cases) once you introduce filtering.
Finally, one of those "Aha!" moments! I tried using the same code on a high-quality .tif file, and the resultant image was 8 times smaller (in dimensions) rather than the 50% I'd told it to be. When I tried displaying it without any rescaling, it wound up still 4 times smaller than the original, when it should have displayed at the same height and width. It turned out the way I was taking the NSSize from the imported image was wrong. Previously, it read:
NSRect halfSizeRect = NSMakeRect(0, 0, image.size.width * 0.5, image.size.height * 0.5);
Where it should be:
NSBitmapImageRep *imageRep = [NSBitmapImageRep imageRepWithData: [image TIFFRepresentation]];
NSRect halfSizeRect = NSMakeRect(0, 0, [imageRep pixelsWide]/2, [imageRep pixelsHigh]/2);
Apparently it has something to do with DPI and that jazz, so I needed to get the correct size from the NSBitmapImageRep rather than from image.size. With this change, I was able to save at a quality nearly indistinguishable from Photoshop's.

Flipping QuickTime preview & capture

I need to horizontally flip some video I'm previewing and capturing. À la iChat, I have a webcam and want it to appear as though the user is looking into a mirror.
I'm previewing QuickTime video in a QTCaptureView. My capturing is done frame by frame (for reasons I won't get into) with something like:
imageRep = [NSCIImageRep imageRepWithCIImage:[CIImage imageWithCVImageBuffer:frame]];
image = [[NSImage alloc] initWithSize:[imageRep size]];
[image addRepresentation:imageRep];
[movie addImage:image forDuration:someDuration withAttributes:someAttributes];
Any tips?
Nothing like resurrecting an old question. Anyway, I came here and almost found what I was looking for thanks to Brian Webster, but if anyone is looking for the wholesale solution, try this after setting your class as the delegate of the QTCaptureView instance:
- (CIImage *)view:(QTCaptureView *)view willDisplayImage:(CIImage *)image {
    // mirror the image across the y axis
    return [image imageByApplyingTransform:CGAffineTransformMakeScale(-1, 1)];
}
You could do this by taking the CIImage you're getting from the capture and running it through a Core Image filter to flip the image around. You would then pass the resulting image into your image rep rather than the original one. The code would look something like:
CIImage *capturedImage = [CIImage imageWithCVImageBuffer:buffer];
NSAffineTransform *flipTransform = [NSAffineTransform transform];
CIFilter *flipFilter;
CIImage *flippedImage;
[flipTransform scaleXBy:-1.0 yBy:1.0]; // horizontal flip
flipFilter = [CIFilter filterWithName:@"CIAffineTransform"];
[flipFilter setValue:flipTransform forKey:@"inputTransform"];
[flipFilter setValue:capturedImage forKey:@"inputImage"];
flippedImage = [flipFilter valueForKey:@"outputImage"];
imageRep = [NSCIImageRep imageRepWithCIImage:flippedImage];
...
Try this! It will apply filters to the capture view, but not to the output video.
- (IBAction)Vibrance:(id)sender
{
    CIFilter *CIVibrance = [CIFilter filterWithName:@"CIVibrance" keysAndValues:
                            @"inputAmount", [NSNumber numberWithDouble:2.0f],
                            nil];
    mCaptureView.contentFilters = [NSArray arrayWithObject:CIVibrance];
}
By the way, you can apply any of the filters from this reference: https://developer.apple.com/library/mac/#documentation/graphicsimaging/reference/CoreImageFilterReference/Reference/reference.html