Cropping out a face using CoreImage - objective-c

I need to crop out a face (or multiple faces) from a given image and use the cropped face image elsewhere. I am using CIDetectorTypeFace from CoreImage. The problem is that the new UIImage containing just the detected face needs to be bigger, as the hair or the lower jaw gets cut off. How do I increase the size of initWithFrame:faceFeature.bounds?
Sample code I am using:
CIImage *image = [CIImage imageWithCGImage:staticBG.image.CGImage];
CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                          context:nil
                                          options:[NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh forKey:CIDetectorAccuracy]];
NSArray *features = [detector featuresInImage:image];
for (CIFaceFeature *faceFeature in features)
{
    UIView *faceView = [[UIView alloc] initWithFrame:faceFeature.bounds];
    faceView.layer.borderWidth = 1;
    faceView.layer.borderColor = [[UIColor redColor] CGColor];
    [staticBG addSubview:faceView];
    // cropping the face
    CGImageRef imageRef = CGImageCreateWithImageInRect([staticBG.image CGImage], faceFeature.bounds);
    [resultView setImage:[UIImage imageWithCGImage:imageRef]];
    CGImageRelease(imageRef);
}
Note: The red frame that I made to show the detected face region does not match the cropped-out image at all. Maybe I am not displaying the frame right, but since I do not need to show the frame (I really need the cropped-out face), I am not worrying about it much.

Not sure, but you could try passing negative inset values to CGRectInset, which grows the rect around its center:
CGRect biggerRectangle = CGRectInset(faceFeature.bounds, someNegativeCGFloatToIncreaseSizeForXAxis, someNegativeCGFloatToIncreaseSizeForYAxis);
CGImageRef imageRef = CGImageCreateWithImageInRect([staticBG.image CGImage], biggerRectangle);
https://developer.apple.com/library/mac/#documentation/graphicsimaging/reference/CGGeometry/Reference/reference.html#//apple_ref/c/func/CGRectInset

Related

CGAffineTransformMakeRotation bug when masking

I have a bug when masking a rotated image. The bug wasn't present on iOS 8 (even when it was built with the iOS 9 SDK), and it still isn't present on non-retina iOS 9 iPads (iPad 2). I don't have any more retina iPads that are still on iOS 8, and in the meantime, I've also updated the build to support both 32bit and 64bit architectures. Point is, I can't be sure if the update to iOS 9 brought the bug, or the change to the both architectures setting.
The nature of the bug is this. I'm iterating through a series of images, rotating each by an angle determined by the number of segments I want to get out of the picture, and masking the picture on each rotation to get a different part of it. Everything works fine EXCEPT when the source image is rotated by M_PI, 2*M_PI, 3*M_PI or 4*M_PI (or multiples, i.e. 5*M_PI, 6*M_PI, etc.). When it's rotated by 1-3*M_PI, the resulting images are incorrectly masked: the parts that should be transparent end up black. When it's rotated by 4*M_PI, masking produces a nil image, thus crashing the application in the last step (where I'm adding the resulting image to an array).
I'm using CIImage for rotation, and CGImage masking for performance reasons (this combination showed best performance), so I would prefer keeping this choice of methods if at all possible.
UPDATE: The code shown below is run on a background thread.
Here is the code snippet:
CIContext *context = [CIContext contextWithOptions:[NSDictionary dictionaryWithObject:[NSNumber numberWithBool:NO] forKey:kCIContextUseSoftwareRenderer]];
for (int i = 0; i < [src count]; i++) {
    // for every image in the source array
    CIImage *newImage;
    if (IS_RETINA) {
        newImage = [CIImage imageWithCGImage:[[src objectAtIndex:i] CGImage]];
    } else {
        CIImage *tImage = [CIImage imageWithCGImage:[[src objectAtIndex:i] CGImage]];
        newImage = [tImage imageByApplyingTransform:CGAffineTransformMakeScale(0.5, 0.5)];
    }
    float angle = angleOff * M_PI / 180.0;
    // angleOff is just the initial angle offset, if I want to start cutting from a different start point
    for (int j = 0; j < nSegments; j++) {
        // for the given number of circle slices (segments)
        CIImage *ciResult = [newImage imageByApplyingTransform:CGAffineTransformMakeRotation(angle + ((2 * M_PI / (2 * nSegments)) + (2 * M_PI / nSegments) * j))];
        // the way the angle is calculated is specific to how the image is used later. In any case,
        // the bug happens when the resulting angle is M_PI, 2*M_PI, 3*M_PI or 4*M_PI
        CGPoint center = CGPointMake([ciResult extent].origin.x + [ciResult extent].size.width / 2, [ciResult extent].origin.y + [ciResult extent].size.width / 2);
        for (int k = 0; k < [src count]; k++) {
            // this iteration is also application-specific (it controls how much of the image is
            // masked each time), but the bug happens irrespective of this iteration
            CGSize dim = [[masks objectAtIndex:k] size];
            if (IS_RETINA && (floor(NSFoundationVersionNumber) > NSFoundationVersionNumber_iOS_7_1)) {
                dim = CGSizeMake(dim.width * 2, dim.height * 2);
            } // this correction was needed after iOS 7 was introduced, not sure why :)
            CGRect newSize = CGRectMake(center.x - dim.width / 2, center.y + ((circRadius + orbitWidth * (k + 1)) * scale - dim.height), dim.width, dim.height);
            // the calculation of the new size is specific to the application, I don't find it relevant.
            CGImageRef imageRef = [context createCGImage:ciResult fromRect:newSize];
            CGImageRef maskRef = [[masks objectAtIndex:k] CGImage];
            CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskRef),
                                               CGImageGetHeight(maskRef),
                                               CGImageGetBitsPerComponent(maskRef),
                                               CGImageGetBitsPerPixel(maskRef),
                                               CGImageGetBytesPerRow(maskRef),
                                               CGImageGetDataProvider(maskRef),
                                               NULL,
                                               YES);
            CGImageRef masked = CGImageCreateWithMask(imageRef, mask);
            UIImage *result = [UIImage imageWithCGImage:masked];
            CGImageRelease(imageRef);
            CGImageRelease(masked);
            CGImageRelease(mask); // fixed: originally released the undeclared variable "maska"
            [temps addObject:result];
        }
    }
}
I would be eternally grateful for any tips anyone might have. This bug has me puzzled beyond words :)

resizing uiimage is flipping it?

I am trying to resize a UIImage. I take the image, set its width to 640 (for example), work out the scale factor that required, and apply the same factor to the image height.
Somehow the image sometimes flips: even if it was a portrait image, it becomes landscape.
I am probably overlooking something here...
//scale 'newImage'
CGRect screenRect = CGRectMake(0, 0, 640.0, newImage.size.height* (640/newImage.size.width) );
UIGraphicsBeginImageContext(screenRect.size);
[newImage drawInRect:screenRect];
UIImage *scaled = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
OK, I have the answer, but it's not here.
When I took the image from the asset library, I took it with:
CGImageRef imageRef = result.defaultRepresentation.fullResolutionImage;
This flips the image. Now I use:
CGImageRef imageRef = result.defaultRepresentation.fullScreenImage;
and the image is just fine.

CIFilter gaussianBlur and boxBlur are shrinking the image - how to avoid the resizing?

I am taking a snapshot of the contents of an NSView, applying a CIFilter, and placing the result back into the view. If the CIFilter is a form of blur, such as CIBoxBlur or CIGaussianBlur, the filtered result is slightly smaller than the original. As I am doing this iteratively, the result becomes increasingly small, which I want to avoid.
The issue is alluded to here, albeit in a slightly different context (Quartz Composer). Apple's FunHouse demo app applies a Gaussian blur without the image shrinking, but I haven't yet worked out how it does this (it seems to use OpenGL, which I am not familiar with).
Here is the relevant part of the code (inside an NSView subclass)
NSImage *background = [[NSImage alloc] initWithData:[self dataWithPDFInsideRect:[self bounds]]];
CIContext *context = [[NSGraphicsContext currentContext] CIContext];
CIImage *ciImage = [background ciImage];
CIFilter *filter = [CIFilter filterWithName:@"CIGaussianBlur"
                              keysAndValues:kCIInputImageKey, ciImage,
                                            @"inputRadius", [NSNumber numberWithFloat:10.0], nil];
CIImage *result = [filter valueForKey:kCIOutputImageKey];
CGImageRef cgImage = [context createCGImage:result
                                   fromRect:[result extent]];
NSImage *newBackground = [[NSImage alloc] initWithCGImage:cgImage size:background.size];
If I try a color-changing filter such as CISepiaTone, which does not shift pixels around, the shrinking does not occur.
I am wondering if there is a quick fix that doesn't involve diving into OpenGL?
They're actually not shrinking the image; they're expanding it (I think by 7 pixels around all edges), and the view's default scale-to-fit behavior makes it look like it's been shrunk.
Crop your CIImage with:
CIImage *cropped = [output imageByCroppingToRect:CGRectMake(0, 0, view.bounds.size.width * scale, view.bounds.size.height * scale)];
where view is the original NSView you drew into and scale is your [[UIScreen mainScreen] scale].
You probably want to clamp your image before using the blur:
- (CIImage *)imageByClampingToExtent {
    CIFilter *clamp = [CIFilter filterWithName:@"CIAffineClamp"];
    [clamp setValue:[NSAffineTransform transform] forKey:@"inputTransform"];
    [clamp setValue:self forKey:@"inputImage"];
    return [clamp valueForKey:@"outputImage"];
}
Then blur, and then crop to the original extent. You'll get non-transparent edges this way.
@BBC_Z's solution is correct.
Although I find it more elegant to crop according to the image rather than the view.
And you can cut away the useless blurred edges:
// Crop transparent edges from blur
resultImage = [resultImage imageByCroppingToRect:(CGRect){
    .origin.x = blurRadius,
    .origin.y = blurRadius,
    .size.width = originalCIImage.extent.size.width - blurRadius * 2,
    .size.height = originalCIImage.extent.size.height - blurRadius * 2
}];

Image Cropping API for iOS

Is there any image-cropping API for Objective-C that crops images dynamically in an Xcode project? Please suggest some tricks or techniques for cropping camera images on the iPhone.
You can use the simple code below to crop an image. You have to pass the image and a CGRect, which is the cropping area. Here, I crop the image so that I get the center part of the original image and the returned image is square.
// Returns largest possible centered cropped image.
- (UIImage *)centerCropImage:(UIImage *)image
{
    // Use the smallest side length as the crop square length
    CGFloat squareLength = MIN(image.size.width, image.size.height);
    // Center the crop area
    CGRect clippedRect = CGRectMake((image.size.width - squareLength) / 2, (image.size.height - squareLength) / 2, squareLength, squareLength);
    // Crop logic
    CGImageRef imageRef = CGImageCreateWithImageInRect([image CGImage], clippedRect);
    UIImage *croppedImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    return croppedImage;
}
EDIT - Swift Version
let imageView = UIImageView(image: image)
imageView.contentMode = .scaleAspectFill
imageView.clipsToBounds = true
All these solutions seem quite complicated, and many of them actually degrade the quality of the image.
You can do it much more simply using UIImageView's out-of-the-box methods.
Objective-C
self.imageView.contentMode = UIViewContentModeScaleAspectFill;
[self.imageView setClipsToBounds:YES];
[self.imageView setImage:img];
This will crop your image based on the dimensions you've set for your UIImageView (I've called mine imageView here).
It's that simple and works much better than the other solutions.
You can use the CoreGraphics framework to crop images dynamically.
Here is an example of dynamic image cropping. I hope it will be helpful for you.
- (void)drawMaskLineSegmentTo:(CGPoint)ptTo withMaskWidth:(CGFloat)maskWidth inContext:(NXMaskDrawContext)context {
    if (context == nil)
        return;
    if (context.count <= 0) {
        [context addObject:[NSValue valueWithCGPoint:ptTo]];
        return;
    }
    CGPoint ptFrom = [context.lastObject CGPointValue];

    UIGraphicsBeginImageContext(self.maskImage.size);
    [self.maskImage drawInRect:CGRectMake(0, 0, self.maskImage.size.width, self.maskImage.size.height)];
    CGContextRef graphicsContext = UIGraphicsGetCurrentContext();
    CGContextSetBlendMode(graphicsContext, kCGBlendModeCopy);
    CGContextSetRGBStrokeColor(graphicsContext, 1, 1, 1, 1);
    CGContextSetLineWidth(graphicsContext, maskWidth);
    CGContextSetLineCap(graphicsContext, kCGLineCapRound);
    CGContextMoveToPoint(graphicsContext, ptFrom.x, ptFrom.y);
    CGContextAddLineToPoint(graphicsContext, ptTo.x, ptTo.y);
    CGContextStrokePath(graphicsContext);
    self.maskImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    UIGraphicsBeginImageContext(self.displayableMaskImage.size);
    [self.displayableMaskImage drawInRect:CGRectMake(0, 0, self.displayableMaskImage.size.width, self.displayableMaskImage.size.height)];
    graphicsContext = UIGraphicsGetCurrentContext();
    CGContextSetBlendMode(graphicsContext, kCGBlendModeCopy);
    CGContextSetStrokeColorWithColor(graphicsContext, self.displayableMaskColor.CGColor);
    CGContextSetLineWidth(graphicsContext, maskWidth);
    CGContextSetLineCap(graphicsContext, kCGLineCapRound);
    CGContextMoveToPoint(graphicsContext, ptFrom.x, ptFrom.y);
    CGContextAddLineToPoint(graphicsContext, ptTo.x, ptTo.y);
    CGContextStrokePath(graphicsContext);
    self.displayableMaskImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    [context addObject:[NSValue valueWithCGPoint:ptTo]];
}
Xcode 5, iOS 7, and 4-inch screen example: here is an open-source example, SimpleImageCropEditor (project zip and source code example). You can load the image crop editor as a modal view controller and reuse it. Look at the code and please leave constructive comments on whether this example code answers the question "Image Cropping API for iOS".
The example Objective-C source code demonstrates the use of UIImagePickerController, @protocol, UIActionSheet, UIScrollView, UINavigationController, MFMailComposeViewController, and UIGestureRecognizer.

Image Context produces artifacts when compositing UIImages

I'm trying to overlay a custom semi-transparent image over a base image. The overlay image is stretchable and created like this:
[[UIImage imageNamed:@"overlay.png"] stretchableImageWithLeftCapWidth:5.0 topCapHeight:5.0]
Then I pass that off to a method that overlays it onto the background image for a button:
- (void)overlayImage:(UIImage *)overlay forState:(UIControlState)state {
    UIImage *baseImage = [self backgroundImageForState:state];
    CGRect frame = CGRectZero;
    frame.size = baseImage.size;
    // create a new image context
    UIGraphicsBeginImageContext(baseImage.size);
    // get context
    CGContextRef context = UIGraphicsGetCurrentContext();
    // clear context
    CGContextClearRect(context, frame);
    // draw images
    [baseImage drawInRect:frame];
    [overlay drawInRect:frame]; // blendMode:kCGBlendModeNormal alpha:1.0];
    // get UIImage
    UIImage *overlaidImage = UIGraphicsGetImageFromCurrentImageContext();
    // clean up context
    UIGraphicsEndImageContext();
    [self setBackgroundImage:overlaidImage forState:state];
}
The resulting overlaidImage looks mostly correct, it is the correct size, the alpha is blended correctly, etc. however it has vertical artifacts/noise.
(example at http://acaciatreesoftware.com/img/UIImage-artifacts.png)
I tried clearing the context first and then turning off PNG compression, which reduces the artifacts somewhat (and eliminates them on non-stretched images, I think).
Does anyone know a method for drawing stretchable UIImages without this sort of artifacting?
So the answer is: Don't do this. Instead you can paint your overlay procedurally. Like so:
- (void)overlayWithColor:(UIColor *)overlayColor forState:(UIControlState)state {
    UIImage *baseImage = [self backgroundImageForState:state];
    CGRect frame = CGRectZero;
    frame.size = baseImage.size;
    // create a new image context
    UIGraphicsBeginImageContext(baseImage.size);
    // get context
    CGContextRef context = UIGraphicsGetCurrentContext();
    // draw background image
    [baseImage drawInRect:frame];
    // overlay color
    CGContextSetFillColorWithColor(context, [overlayColor CGColor]);
    CGContextSetBlendMode(context, kCGBlendModeSourceAtop);
    CGContextFillRect(context, frame);
    // get UIImage
    UIImage *overlaidImage = UIGraphicsGetImageFromCurrentImageContext();
    // clean up context
    UIGraphicsEndImageContext();
    [self setBackgroundImage:overlaidImage forState:state];
}
Are you being too miserly with your original image, forcing it to stretch rather than shrink? I've found the best results with images that match the target aspect ratio and are reduced in size. Might not solve your problem, though.