CIBloom makes image smaller (CGImage)

I'm using CIBloom to blur some text inside a CGImage, but the CIBloom filter seems to be making my image very slightly smaller.
The effect gets worse as the filter kernel is made bigger. I half expected this, but I need a way to turn it off, or a formula to resize the result so that each letter is exactly the size it originally was. I'm leaving margins on my source image on purpose.
Before CIBloom filter:
After CIBloom filter:
The code that applies the filter:
CGImageRef cgImageRef = CGBitmapContextCreateImage( tex->cgContext ) ;
CIImage *cii = [CIImage imageWithCGImage:cgImageRef] ;
CIFilter *filter = [CIFilter filterWithName:@"CIBloom"] ; //CIGaussianBlur
[filter setValue:cii forKey:kCIInputImageKey];
[filter setValue:[NSNumber numberWithFloat:1.1f] forKey:@"inputIntensity"];
// Large radius makes result EVEN SMALLER:
//[filter setValue:[NSNumber numberWithFloat:100.f] forKey:@"inputRadius"];
CIContext *ciContext = [CIContext contextWithOptions:nil];
CIImage *ciResult = [filter valueForKey:kCIOutputImageKey];
CGImageRef cgIRes = [ciContext createCGImage:ciResult fromRect:[ciResult extent]];
//Clear old tex
tex->clearBacking( 0, 0, 0, 1 ) ;
// Drop CIImage result into texture raw data
CGContextDrawImage( tex->cgContext, CGRectMake(0, 0, tex->w, tex->h), cgIRes ) ;
I like the CIBloom filter, but I need the results to be the same size as my original image, not downsampled.

Well, the extra space added to the image for the default kernel size of 10 is 21x28.
For K=10, 21 px is added to the left and right sides, and 28 px to the top and bottom. Given that the backing is 768x1024 px, the scale-up produces an image of 810x1080.
Playing around with this, I came up with the following:
CGContextDrawImage( tex->cgContext, CGRectMake(-30, -36, tex->w+2*30, tex->h+2*36), cgIRes ) ;
This is very approximate (there must be a formula for it) and introduces some distortion, but it makes the image nearly the same size it originally was, without introducing borders.

The bloom effect changes the position and size of your original image, so you need to compensate for that.
In your case, change the following line:
CGImageRef cgIRes = [ciContext createCGImage:ciResult fromRect:[ciResult extent]];
into
CGImageRef cgIRes = [ciContext createCGImage:ciResult fromRect:[cii extent]];
Hope it helps.
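Putting the answer together with the asker's original code, a minimal sketch of the corrected render step might look like the following (it reuses the asker's `tex` fields; the only substantive change is rendering from `[cii extent]` instead of `[ciResult extent]`, plus releasing the intermediate CGImages):

```objc
CGImageRef cgImageRef = CGBitmapContextCreateImage(tex->cgContext);
CIImage *cii = [CIImage imageWithCGImage:cgImageRef];

CIFilter *filter = [CIFilter filterWithName:@"CIBloom"];
[filter setValue:cii forKey:kCIInputImageKey];
[filter setValue:[NSNumber numberWithFloat:1.1f] forKey:@"inputIntensity"];

CIContext *ciContext = [CIContext contextWithOptions:nil];
CIImage *ciResult = [filter valueForKey:kCIOutputImageKey];

// Render only the *input* extent, not the filter's grown output extent,
// so the result has the same pixel dimensions as the source.
CGImageRef cgIRes = [ciContext createCGImage:ciResult fromRect:[cii extent]];

tex->clearBacking(0, 0, 0, 1);
CGContextDrawImage(tex->cgContext, CGRectMake(0, 0, tex->w, tex->h), cgIRes);

CGImageRelease(cgIRes);
CGImageRelease(cgImageRef);
```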

Related

Resizing UIImage is flipping it?

I am trying to resize a UIImage. I take the image, set its width to 640 (for example), work out the scale factor I had to use, and apply it to the image height as well.
Somehow the image sometimes flips: even if it was a portrait image, it becomes landscape.
I am probably overlooking something here.
//scale 'newImage'
CGRect screenRect = CGRectMake(0, 0, 640.0, newImage.size.height* (640/newImage.size.width) );
UIGraphicsBeginImageContext(screenRect.size);
[newImage drawInRect:screenRect];
UIImage *scaled = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
OK, I have the answer, but it's not in that code.
When I took the image from the asset library, I took it with:
CGImageRef imageRef = result.defaultRepresentation.fullResolutionImage;
This flips the image. Now I use:
CGImageRef imageRef = result.defaultRepresentation.fullScreenImage;
and the image is just fine.
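If you do need the full-resolution pixels rather than the screen-sized `fullScreenImage`, a possible alternative (a sketch, not from the original post) is to apply the asset's orientation yourself. This relies on the fact that `ALAssetRepresentation`'s `orientation` values are defined to line up with `UIImageOrientation`:

```objc
ALAssetRepresentation *rep = result.defaultRepresentation;

// fullResolutionImage returns the raw pixels with no rotation applied,
// so pass the representation's orientation along when wrapping it.
CGImageRef imageRef = rep.fullResolutionImage;
UIImage *upright = [UIImage imageWithCGImage:imageRef
                                       scale:rep.scale
                                 orientation:(UIImageOrientation)rep.orientation];
```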

Cropping out a face using CoreImage

I need to crop out a face (or multiple faces) from a given image and use the cropped face image elsewhere. I am using CIDetectorTypeFace from Core Image. The problem is that the new UIImage containing just the detected face needs to be bigger, as the hair or the lower jaw gets cut off. How do I increase the size of faceFeature.bounds?
Sample code I am using:
CIImage* image = [CIImage imageWithCGImage:staticBG.image.CGImage];
CIDetector* detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                          context:nil
                                          options:[NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh
                                                                              forKey:CIDetectorAccuracy]];
NSArray* features = [detector featuresInImage:image];
for(CIFaceFeature* faceFeature in features)
{
UIView* faceView = [[UIView alloc] initWithFrame:faceFeature.bounds];
faceView.layer.borderWidth = 1;
faceView.layer.borderColor = [[UIColor redColor] CGColor];
[staticBG addSubview:faceView];
// cropping the face
CGImageRef imageRef = CGImageCreateWithImageInRect([staticBG.image CGImage], faceFeature.bounds);
[resultView setImage:[UIImage imageWithCGImage:imageRef]];
CGImageRelease(imageRef);
}
Note: the red frame I made to show the detected face region does not match the cropped-out image at all. Maybe I am not displaying the frame right, but since I do not need to show the frame (I only need the cropped-out face), I am not worrying about it much.
Not sure, but you could try:
CGRect biggerRectangle = CGRectInset(faceFeature.bounds, someNegativeCGFloatToIncreaseSizeForXAxis, someNegativeCGFloatToIncreaseSizeForYAxis);
CGImageRef imageRef = CGImageCreateWithImageInRect([staticBG.image CGImage], biggerRectangle);
https://developer.apple.com/library/mac/#documentation/graphicsimaging/reference/CGGeometry/Reference/reference.html#//apple_ref/c/func/CGRectInset
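Building on that suggestion, one refinement (a sketch; the 25% padding is an illustrative value, not from the original answer) is to intersect the inset rect with the image bounds, so `CGImageCreateWithImageInRect` never receives a rect that falls outside the bitmap when a face sits near an edge:

```objc
// Negative insets grow the rect; here by 25% of the face size per side.
CGFloat padX = -faceFeature.bounds.size.width  * 0.25f;
CGFloat padY = -faceFeature.bounds.size.height * 0.25f;
CGRect bigger = CGRectInset(faceFeature.bounds, padX, padY);

// Clamp to the image so the crop rect stays inside the bitmap.
CGRect imageBounds = CGRectMake(0, 0,
                                CGImageGetWidth(staticBG.image.CGImage),
                                CGImageGetHeight(staticBG.image.CGImage));
CGRect safeRect = CGRectIntersection(bigger, imageBounds);

CGImageRef faceRef = CGImageCreateWithImageInRect(staticBG.image.CGImage, safeRect);
[resultView setImage:[UIImage imageWithCGImage:faceRef]];
CGImageRelease(faceRef);
```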

CIFilter gaussianBlur and boxBlur are shrinking the image - how to avoid the resizing?

I am taking a snapshot of the contents of an NSView, applying a CIFilter, and placing the result back into the view. If the CIFilter is a form of blur, such as CIBoxBlur or CIGaussianBlur, the filtered result is slightly smaller than the original. As I am doing this iteratively, the result becomes increasingly small, which I want to avoid.
The issue is alluded to here, albeit in a slightly different context (Quartz Composer). Apple's FunHouse demo app applies a Gaussian blur without the image shrinking, but I haven't yet worked out how it does so (it seems to use OpenGL, which I am not familiar with).
Here is the relevant part of the code (inside an NSView subclass):
NSImage* background = [[NSImage alloc] initWithData:[self dataWithPDFInsideRect:[self bounds]]];
CIContext* context = [[NSGraphicsContext currentContext] CIContext];
CIImage* ciImage = [background ciImage];
CIFilter *filter = [CIFilter filterWithName:@"CIGaussianBlur"
                              keysAndValues:kCIInputImageKey, ciImage,
                                            @"inputRadius", [NSNumber numberWithFloat:10.0], nil];
CIImage *result = [filter valueForKey:kCIOutputImageKey];
CGImageRef cgImage = [context createCGImage:result
fromRect:[result extent]];
NSImage* newBackground = [[NSImage alloc] initWithCGImage:cgImage size:background.size];
If I try a color-changing filter such as CISepiaTone, which does not shift pixels around, the shrinking does not occur.
Is there a quick fix that doesn't involve diving into OpenGL?
They're actually not shrinking the image, they're expanding it (I think by 7 pixels around all edges), and the default scale-to-fit behavior of the view makes it look like it's been shrunk.
Crop your CIImage with:
CIImage *cropped=[output imageByCroppingToRect:CGRectMake(0, 0, view.bounds.size.width*scale, view.bounds.size.height*scale)];
where view.bounds is the original bounds of the NSView that you drew into, and scale is [UIScreen mainScreen].scale.
You probably want to clamp your image before using the blur:
- (CIImage *)imageByClampingToExtent {
    CIFilter *clamp = [CIFilter filterWithName:@"CIAffineClamp"];
    [clamp setValue:[NSAffineTransform transform] forKey:@"inputTransform"];
    [clamp setValue:self forKey:@"inputImage"];
    return [clamp valueForKey:@"outputImage"];
}
Then blur, and then crop to the original extent. You'll get non-transparent edges this way.
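The clamp-then-blur-then-crop pipeline described above can be sketched as a single helper (an illustrative function of my own naming, not from the thread). CIAffineClamp extends the border pixels outward indefinitely, so the blur has real data to sample past the edges; cropping back to the input's extent then restores the original size without transparent fringes:

```objc
CIImage *BlurredSameSize(CIImage *input, CGFloat radius) {
    // Extend the edge pixels to infinity so the blur never samples
    // outside the image.
    CIFilter *clamp = [CIFilter filterWithName:@"CIAffineClamp"];
    [clamp setValue:[NSAffineTransform transform] forKey:@"inputTransform"];
    [clamp setValue:input forKey:kCIInputImageKey];

    CIFilter *blur = [CIFilter filterWithName:@"CIGaussianBlur"];
    [blur setValue:[clamp valueForKey:kCIOutputImageKey] forKey:kCIInputImageKey];
    [blur setValue:[NSNumber numberWithDouble:radius] forKey:@"inputRadius"];

    // The clamped image has infinite extent, so cropping back to the
    // original extent is mandatory, not optional.
    return [[blur valueForKey:kCIOutputImageKey] imageByCroppingToRect:[input extent]];
}
```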
@BBC_Z's solution is correct.
Although I find it more elegant to crop according to the image rather than the view.
And you can cut away the useless blurred edges:
// Crop transparent edges from blur
resultImage = [resultImage imageByCroppingToRect:(CGRect){
    .origin.x = blurRadius,
    .origin.y = blurRadius,
    .size.width = originalCIImage.extent.size.width - blurRadius * 2,
    .size.height = originalCIImage.extent.size.height - blurRadius * 2
}];

NSImage losing quality upon writeToFile

Basically, I'm trying to create a program for batch image processing that will resize every image and add a border around the edge (the border will be made up of images as well). Although I have yet to get to that implementation, and that's beyond the scope of my question, I ask it because even if I get a great answer here, I still may be taking the wrong approach to get there, and any help in recognizing that would be greatly appreciated. Anyway, here's my question:
Question:
Can I take the existing code I have below and modify it to save higher-quality images to file than it currently outputs? I spent 10+ hours trying to figure out what I was doing wrong; secondaryImage drew the high-quality resized image into the custom view, but everything I tried in order to save the file resulted in an image of substantially lower quality (not so much pixelated, just noticeably more blurry). Finally, I found some code in Apple's "Reducer" example (at the end of ImageReducer.m) that locks focus and gets an NSBitmapImageRep from the current view. This made a substantial increase in image quality; however, Photoshop's output for the same operation is a bit clearer. It looks like the image drawn to the view is of the same quality that's saved to file, and both are below Photoshop's quality for the same image resized to 50%. Is it even possible to get higher-quality resized images than this?
Aside from that, how can I modify the existing code to control the quality of the image saved to file? Can I change the compression and pixel density? I'd appreciate any help, whether it's modifying my code or pointing me toward good examples or tutorials (preferably the latter). Thanks so much!
- (void)drawRect:(NSRect)rect {
// Getting source image
NSImage *image = [[NSImage alloc] initWithContentsOfFile: @"/Users/TheUser/Desktop/4.jpg"];
// Setting NSRect, which is how resizing is done in this example. Is there a better way?
NSRect halfSizeRect = NSMakeRect(0, 0, image.size.width * 0.5, image.size.height * 0.5);
// Sort of used as an offscreen image or palate to do drawing onto; the future I will use to group several images into one.
NSImage *secondaryImage = [[NSImage alloc] initWithSize: halfSizeRect.size];
[secondaryImage lockFocus];
[[NSGraphicsContext currentContext] setImageInterpolation: NSImageInterpolationHigh];
[image drawInRect: halfSizeRect fromRect: NSZeroRect operation: NSCompositeSourceOver fraction: 1.0];
[secondaryImage unlockFocus];
[secondaryImage drawInRect: halfSizeRect fromRect: NSZeroRect operation: NSCompositeSourceOver fraction: 1.0];
// Trying to add image quality options; does this usage even affect the final image?
NSBitmapImageRep *bip = nil;
bip = [[NSBitmapImageRep alloc] initWithBitmapDataPlanes:NULL pixelsWide: secondaryImage.size.width pixelsHigh: secondaryImage.size.height bitsPerSample:8 samplesPerPixel:4 hasAlpha:YES isPlanar:NO colorSpaceName:NSDeviceRGBColorSpace bytesPerRow:0 bitsPerPixel:0];
[secondaryImage addRepresentation: bip];
// Four lines below are from aforementioned "ImageReducer.m"
NSSize size = [secondaryImage size];
[secondaryImage lockFocus];
NSBitmapImageRep *bitmapImageRep = [[NSBitmapImageRep alloc] initWithFocusedViewRect:NSMakeRect(0, 0, size.width, size.height)];
[secondaryImage unlockFocus];
NSDictionary *prop = [NSDictionary dictionaryWithObject: [NSNumber numberWithFloat: 1.0] forKey: NSImageCompressionFactor];
NSData *outputData = [bitmapImageRep representationUsingType:NSJPEGFileType properties: prop];
[outputData writeToFile:@"/Users/TheUser/Desktop/4_halfsize.jpg" atomically:NO];
// release from memory
[image release];
[secondaryImage release];
[bitmapImageRep release];
[bip release];
}
I'm not sure why you are round-tripping to and from the screen. That could affect the result, and it's not needed.
You can accomplish all this using CGImage and CGBitmapContext, using the resultant image to draw to the screen if needed. I've used those APIs and had good results (but I do not know how they compare to your current approach).
Another note: Render at a higher quality for the intermediate, then resize and reduce to 8bpc for the version you write. This will not make a significant difference now, but it will (in most cases) once you introduce filtering.
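The CGBitmapContext route suggested above might be sketched like this (a hypothetical helper, with illustrative parameters; it draws straight into an offscreen bitmap with high-quality interpolation, so no screen round trip is involved):

```objc
CGImageRef CreateScaledImage(CGImageRef source, size_t width, size_t height) {
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    // Zero bytesPerRow lets CoreGraphics pick an optimal row stride.
    CGContextRef ctx = CGBitmapContextCreate(NULL, width, height,
                                             8, 0, colorSpace,
                                             kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);

    // High-quality interpolation for the downscale.
    CGContextSetInterpolationQuality(ctx, kCGInterpolationHigh);
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), source);

    CGImageRef scaled = CGBitmapContextCreateImage(ctx);
    CGContextRelease(ctx);
    return scaled; // caller is responsible for CGImageRelease
}
```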
Finally, one of those "Aha!" moments! I tried using the same code on a high-quality .tif file, and the resulting image was 8 times smaller (in dimensions) rather than the 50% I'd asked for. When I tried displaying it without any rescaling, it still wound up 4 times smaller than the original, when it should have displayed at the same height and width. It turned out the way I was taking the NSSize from the imported image was wrong. Previously, it read:
NSRect halfSizeRect = NSMakeRect(0, 0, image.size.width * 0.5, image.size.height * 0.5);
Where it should be:
NSBitmapImageRep *imageRep = [NSBitmapImageRep imageRepWithData: [image TIFFRepresentation]];
NSRect halfSizeRect = NSMakeRect(0, 0, [imageRep pixelsWide]/2, [imageRep pixelsHigh]/2);
Apparently it has something to do with DPI, so I needed to get the correct pixel size from the NSBitmapImageRep rather than from image.size. With this change, I was able to save at a quality nearly indistinguishable from Photoshop's.

Applying CIFilter destroys data from .BMP File

I seem to be tying myself up in knots trying to read into all of the different ways you can represent images in a Cocoa app for OSX.
My app reads in an image, applies CIFilters to it and then saves the output. Until this morning, this worked fine for all of the images that I've thrown at it. However, I've found some BMP files that produce empty, transparent images as soon as I try to apply any CIFilter to them.
One such image is one of the adverts from the Dragon Age 2 loader (I was just testing my app on random images this morning); http://www.johnwordsworth.com/wp-content/uploads/2011/08/hires_en.bmp
Specifically, my code does the following:
1. Load a CIImage using imageWithCGImage (the same problem occurs with initWithContentsOfURL).
2. Apply a number of CIFilters to the CIImage, all the while storing the current image in my AIImage container class.
3. Preview the image by adding an NSCIImageRep to an NSImage.
4. Save the image using NSBitmapImageRep / initWithCIImage and then representationUsingType.
This process works with 99% of the files I've thrown at it (all JPGs, PNGs, and TIFFs so far), just not with certain BMP files. If I skip step 2, the preview and saved image come out OK. However, if I turn step 2 on, the image produced is always blank and transparent.
The code is quite large, but here are what I believe to be the relevant snippets...
AIImage Loading Method
CGImageSourceRef imageSource = CGImageSourceCreateWithURL((CFURLRef)[NSURL fileURLWithPath:imagePath], nil);
CGImageRef imageRef = CGImageSourceCreateImageAtIndex(imageSource, 0, nil);
CFDictionaryRef dictionaryRef = CGImageSourceCopyPropertiesAtIndex(imageSource, 0, nil);
self.imageProperties = [NSMutableDictionary dictionaryWithDictionary:((NSDictionary *)dictionaryRef)];
self.imageData = [CIImage imageWithCGImage:imageRef];
AIImageResize Method
NSAffineTransform *transform = [NSAffineTransform transform];
[transform scaleXBy:(targetSize.width / sourceRect.size.width) yBy:(targetSize.height / sourceRect.size.height)];
CIFilter *transformFilter = [CIFilter filterWithName:@"CIAffineTransform"];
[transformFilter setValue:transform forKey:@"inputTransform"];
[transformFilter setValue:currentImage forKey:@"inputImage"];
currentImage = [transformFilter valueForKey:@"outputImage"];
aiImage.imageData = currentImage;
CIImagePreview Method
NSCIImageRep *imageRep = [[NSCIImageRep alloc] initWithCIImage:ciImage];
NSImage *nsImage = [[[NSImage alloc] initWithSize:ciImage.extent.size] autorelease];
[nsImage addRepresentation:imageRep];
Thanks for looking. Any advice would be greatly appreciated.