NSImage losing quality upon writeToFile - objective-c

Basically, I'm trying to create a program for batch image processing that will resize every image and add a border around the edge (the border will be made up of images as well). Although I have yet to get to that implementation, and it's beyond the scope of my question, I mention it because even if I get a great answer here I may still be taking the wrong approach, and any help in recognizing that would be greatly appreciated. Anyway, here's my question:
Question:
Can I take the existing code I have below and modify it to create higher-quality images saved to file than the code currently outputs? I literally spent 10+ hours trying to figure out what I was doing wrong; "secondaryImage" drew the high-quality resized image into the Custom View, but everything I tried to do to save the file resulted in an image that was substantially lower quality (not so much pixelated, just noticeably more blurry). Finally, I found some code in Apple's "Reducer" example (at the end of ImageReducer.m) that locks focus and gets an NSBitmapImageRep from the current view. This made a substantial increase in image quality; however, the output from Photoshop doing the same thing is a bit clearer. It looks like the image drawn to the view is of the same quality that's saved to file, so both fall short of Photoshop's quality for the same image resized to 50%, just as this one is. Is it even possible to get higher-quality resized images than this?
Aside from that, how can I modify the existing code to control the quality of the image saved to file? Can I change the compression and pixel density? I'd appreciate any help with either modifying my code or pointing me toward good examples or tutorials (preferably the latter). Thanks so much!
- (void)drawRect:(NSRect)rect {
// Getting source image
NSImage *image = [[NSImage alloc] initWithContentsOfFile: @"/Users/TheUser/Desktop/4.jpg"];
// Setting NSRect, which is how resizing is done in this example. Is there a better way?
NSRect halfSizeRect = NSMakeRect(0, 0, image.size.width * 0.5, image.size.height * 0.5);
// Sort of used as an offscreen image or palette to do drawing onto; in the future I will use it to group several images into one.
NSImage *secondaryImage = [[NSImage alloc] initWithSize: halfSizeRect.size];
[secondaryImage lockFocus];
[[NSGraphicsContext currentContext] setImageInterpolation: NSImageInterpolationHigh];
[image drawInRect: halfSizeRect fromRect: NSZeroRect operation: NSCompositeSourceOver fraction: 1.0];
[secondaryImage unlockFocus];
[secondaryImage drawInRect: halfSizeRect fromRect: NSZeroRect operation: NSCompositeSourceOver fraction: 1.0];
// Trying to add image quality options; does this usage even affect the final image?
NSBitmapImageRep *bip = nil;
bip = [[NSBitmapImageRep alloc] initWithBitmapDataPlanes:NULL pixelsWide: secondaryImage.size.width pixelsHigh: secondaryImage.size.height bitsPerSample:8 samplesPerPixel:4 hasAlpha:YES isPlanar:NO colorSpaceName:NSDeviceRGBColorSpace bytesPerRow:0 bitsPerPixel:0];
[secondaryImage addRepresentation: bip];
// Four lines below are from aforementioned "ImageReducer.m"
NSSize size = [secondaryImage size];
[secondaryImage lockFocus];
NSBitmapImageRep *bitmapImageRep = [[NSBitmapImageRep alloc] initWithFocusedViewRect:NSMakeRect(0, 0, size.width, size.height)];
[secondaryImage unlockFocus];
NSDictionary *prop = [NSDictionary dictionaryWithObject: [NSNumber numberWithFloat: 1.0] forKey: NSImageCompressionFactor];
NSData *outputData = [bitmapImageRep representationUsingType:NSJPEGFileType properties: prop];
[outputData writeToFile:@"/Users/TheUser/Desktop/4_halfsize.jpg" atomically:NO];
// release from memory
[image release];
[secondaryImage release];
[bitmapImageRep release];
[bip release];
}

I'm not sure why you are round tripping to and from the screen. That could affect the result, and it's not needed.
You can accomplish all this using CGImage and CGBitmapContext, using the resultant image to draw to the screen if needed. I've used those APIs and had good results (but I do not know how they compare to your current approach).
Another note: Render at a higher quality for the intermediate, then resize and reduce to 8bpc for the version you write. This will not make a significant difference now, but it will (in most cases) once you introduce filtering.
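As a rough sketch of that CGImage / CGBitmapContext route (reusing the file paths from your code; the sRGB color space and 0.9 JPEG quality are just example choices, not requirements):
// Frameworks: Cocoa plus ImageIO and CoreServices (for kUTTypeJPEG).
// Load the source and read its pixel dimensions (not its point size).
CGImageSourceRef src = CGImageSourceCreateWithURL((CFURLRef)[NSURL fileURLWithPath:@"/Users/TheUser/Desktop/4.jpg"], NULL);
CGImageRef fullImage = CGImageSourceCreateImageAtIndex(src, 0, NULL);
size_t newWidth = CGImageGetWidth(fullImage) / 2;
size_t newHeight = CGImageGetHeight(fullImage) / 2;
// Resize by drawing into an offscreen bitmap context with high-quality interpolation.
CGColorSpaceRef colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceSRGB);
CGContextRef ctx = CGBitmapContextCreate(NULL, newWidth, newHeight, 8, 0, colorSpace, kCGImageAlphaPremultipliedLast);
CGContextSetInterpolationQuality(ctx, kCGInterpolationHigh);
CGContextDrawImage(ctx, CGRectMake(0, 0, newWidth, newHeight), fullImage);
CGImageRef halfImage = CGBitmapContextCreateImage(ctx);
// Write a JPEG with an explicit compression quality.
NSDictionary *options = [NSDictionary dictionaryWithObject:[NSNumber numberWithFloat:0.9f] forKey:(id)kCGImageDestinationLossyCompressionQuality];
CGImageDestinationRef dest = CGImageDestinationCreateWithURL((CFURLRef)[NSURL fileURLWithPath:@"/Users/TheUser/Desktop/4_halfsize.jpg"], kUTTypeJPEG, 1, NULL);
CGImageDestinationAddImage(dest, halfImage, (CFDictionaryRef)options);
CGImageDestinationFinalize(dest);
// Clean up (pre-ARC style, matching your code).
CGImageRelease(halfImage);
CGImageRelease(fullImage);
CGContextRelease(ctx);
CGColorSpaceRelease(colorSpace);
CFRelease(dest);
CFRelease(src);
This keeps everything offscreen, so nothing depends on the screen's resolution or color profile.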

Finally, one of those "Aha!" moments! I tried using the same code on a high-quality .tif file, and the resultant image was 8 times smaller (in dimensions), rather than the 50% I'd told it to be. When I tried displaying it without any rescaling, it wound up still 4 times smaller than the original, when it should have displayed at the same height and width. I found out the way I was taking the NSSize from the imported image was wrong. Previously, it read:
NSRect halfSizeRect = NSMakeRect(0, 0, image.size.width * 0.5, image.size.height * 0.5);
Where it should be:
NSBitmapImageRep *imageRep = [NSBitmapImageRep imageRepWithData: [image TIFFRepresentation]];
NSRect halfSizeRect = NSMakeRect(0, 0, [imageRep pixelsWide]/2, [imageRep pixelsHigh]/2);
Apparently it has something to do with DPI and that jazz, so I needed to get the correct size from the BitmapImageRep rather than from image.size. With this change, I was able to save at a quality nearly indistinguishable from Photoshop.

Related

How to generate NSImage with specific resolution and specific size in mm

I am new to Objective-C and Cocoa programming. I am trying to generate an image which will be 128mm in height and 128mm in width, with a 300 DPI resolution.
NSImage *image = [[NSImage alloc] initWithSize:NSMakeSize(1512, 756)];
In the above line of code, 1512 and 756 are treated as points, so I am not able to convert it to what I need. It is creating an image of size (3024 * 1512).
Can you please suggest something...
Thanks in advance.
Here is the code which I tried
NSImage *image = [[NSImage alloc] initWithSize:NSMakeSize(2268, 2268)];
[image lockFocus];
// Draw Something
[image unlockFocus];
NSString* pathh = @"/Users/abcd/Desktop/Images/1234.bmp";
CGImageRef cgRef = [image CGImageForProposedRect:NULL
context:nil
hints:nil];
NSBitmapImageRep *newRep = [[NSBitmapImageRep alloc] initWithCGImage:cgRef];
NSSize copySize; // To change to 600 DPI
copySize.width = 600 * [newRep pixelsWide] / 72.0;
copySize.height = 600 * [newRep pixelsHigh] / 72.0;
[newRep setSize:copySize]; //This input is not working
NSData *pngData = [newRep representationUsingType:NSBMPFileType properties:nil];
[pngData writeToFile:pathh atomically:YES];
Your question is somewhat confusing as you state you want a square image (128 x 128mm) and then attempt to construct a rectangular one (1512 x 756pts).
Guessing: it seems you may need to understand the difference between an NSImage and an NSImageRep and how they interact. Read Apple's Cocoa Drawing Guide, in particular the Images section. You may find the subsection Image Size and Resolution helps set the picture (no pun intended ;-)).
Another area you can read up on is printing - this often requires the generation of 300dpi images.
HTH
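To make that concrete, here is a minimal sketch (my own, not from the documentation) of a 128mm x 128mm bitmap at 300 DPI: the pixel count is mm / 25.4 * DPI, and the rep's point size is mm / 25.4 * 72, which is what encodes the DPI when the file is written (how faithfully the DPI metadata survives depends on the output format).
// Sketch: 128 mm x 128 mm at 300 DPI (values taken from the question).
const CGFloat mm = 128.0;
const CGFloat dpi = 300.0;
NSInteger pixels = (NSInteger)(mm / 25.4 * dpi + 0.5); // ~1512 px
CGFloat points = mm / 25.4 * 72.0; // ~363 pt, so 1512 px / 363 pt = 300 DPI
NSBitmapImageRep *rep = [[NSBitmapImageRep alloc] initWithBitmapDataPlanes:NULL pixelsWide:pixels pixelsHigh:pixels bitsPerSample:8 samplesPerPixel:4 hasAlpha:YES isPlanar:NO colorSpaceName:NSCalibratedRGBColorSpace bytesPerRow:0 bitsPerPixel:0];
[rep setSize:NSMakeSize(points, points)]; // the point size is what carries the DPI
[NSGraphicsContext saveGraphicsState];
[NSGraphicsContext setCurrentContext:[NSGraphicsContext graphicsContextWithBitmapImageRep:rep]];
// ... draw something here ...
[NSGraphicsContext restoreGraphicsState];
NSData *data = [rep representationUsingType:NSPNGFileType properties:nil];
[data writeToFile:@"/Users/abcd/Desktop/Images/1234.png" atomically:YES]; // path adapted from the question
[rep release]; // matching the pre-ARC style of the question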

How to make a perfect crop (without changing the quality) in Objective-c/Cocoa (OSX)

Is there any way in Objective-c/cocoa (OSX) to crop an image without changing the quality of the image?
I am very near to a solution, but there are still some differences that I can detect in the color. I can notice it when zooming into the text. Here is the code I am currently using:
NSImage *target = [[[NSImage alloc]initWithSize:panelRect.size] autorelease];
target.backgroundColor = [NSColor greenColor];
//start drawing on target
[target lockFocus];
[NSGraphicsContext saveGraphicsState];
[[NSGraphicsContext currentContext] setImageInterpolation:NSImageInterpolationNone];
[[NSGraphicsContext currentContext] setShouldAntialias:NO];
//draw the portion of the source image on target image
[source drawInRect:NSMakeRect(0,0,panelRect.size.width,panelRect.size.height)
fromRect:NSMakeRect(panelRect.origin.x , source.size.height - panelRect.origin.y - panelRect.size.height, panelRect.size.width, panelRect.size.height)
operation:NSCompositeCopy
fraction:1.0];
[NSGraphicsContext restoreGraphicsState];
//end drawing
[target unlockFocus];
//create a NSBitmapImageRep
NSBitmapImageRep *bmpImageRep = [[[NSBitmapImageRep alloc]initWithData:[target TIFFRepresentation]] autorelease];
//add the NSBitmapImage to the representation list of the target
[target addRepresentation:bmpImageRep];
//get the data from the representation
NSData *data = [bmpImageRep representationUsingType: NSJPEGFileType
properties: imageProps];
NSString *filename = [NSString stringWithFormat:@"%@%@.jpg", panelImagePrefix, panelNumber];
NSLog(@"This is the filename: %@", filename);
//write the data to a file
[data writeToFile:filename atomically:NO];
Here is a zoomed-in comparison of the original and the cropped image:
[Original image]
[Cropped image]
The difference is hard to see, but if you flick between them, you can notice it. You can use a colour picker to notice the difference as well. For example, the darkest pixel on the bottom row of the image is a different shade.
I also have a solution that works exactly the way I want it in iOS. Here is the code:
-(void)testMethod:(int)page forRect:(CGRect)rect{
NSString *filePath = @"imageName";
NSData *data = [HeavyResourceManager dataForPath:filePath];//this just gets the image as NSData
UIImage *image = [UIImage imageWithData:data];
CGImageRef imageRef = CGImageCreateWithImageInRect([image CGImage], rect);//crop in the rect
UIImage *result = [UIImage imageWithCGImage:imageRef scale:0 orientation:image.imageOrientation];
CGImageRelease(imageRef);
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *documentsDirectoryPath = [paths objectAtIndex:0];
[UIImageJPEGRepresentation(result, 1.0) writeToFile:[documentsDirectoryPath stringByAppendingPathComponent:@"output.jpg"] atomically:YES];
}
So, is there a way to crop an image in OSX so that the cropped image does not change at all? Perhaps I have to look into a different library, but I would be surprised if I could not do this with Objective-C...
Note: this is a follow-up question to my previous question here.
Update: I have tried (as per the suggestion) rounding the CGRect values to whole numbers, but did not notice a difference. Here is the code I used, in case it helps:
[source drawInRect:NSMakeRect(0,0,(int)panelRect.size.width,(int)panelRect.size.height)
fromRect:NSMakeRect((int)panelRect.origin.x , (int)(source.size.height - panelRect.origin.y - panelRect.size.height), (int)panelRect.size.width, (int)panelRect.size.height)
operation:NSCompositeCopy
fraction:1.0];
Update: I have tried mazzaroth's code and it works if I save as a PNG, but if I try to save as a JPEG, the image loses quality. So close, but not close enough. Still hoping for a complete answer...
Use CGImageCreateWithImageInRect.
// this chunk of code loads a jpeg image into a cgimage
// creates a second crop of the original image with CGImageCreateWithImageInRect
// writes the new cropped image to the desktop
// ensure that the xy origin of the CGRectMake call is smaller than the width or height of the original image
NSURL *originalImage = [NSURL fileURLWithPath:[[NSBundle mainBundle] pathForResource:@"lockwood" ofType:@"jpg"]];
CGImageRef imageRef = NULL;
CGImageSourceRef loadRef = CGImageSourceCreateWithURL((CFURLRef)originalImage, NULL);
if (loadRef != NULL)
{
imageRef = CGImageSourceCreateImageAtIndex(loadRef, 0, NULL);
CFRelease(loadRef); // Release CGImageSource reference
}
CGImageRef croppedImage = CGImageCreateWithImageInRect(imageRef, CGRectMake(200., 200., 100., 100.));
CFURLRef saveUrl = (CFURLRef)[NSURL fileURLWithPath:[@"~/Desktop/lockwood-crop.jpg" stringByExpandingTildeInPath]];
CGImageDestinationRef destination = CGImageDestinationCreateWithURL(saveUrl, kUTTypeJPEG, 1, NULL);
CGImageDestinationAddImage(destination, croppedImage, nil);
if (!CGImageDestinationFinalize(destination)) {
NSLog(#"Failed to write image to %#", saveUrl);
}
CFRelease(destination);
CFRelease(imageRef);
CFRelease(croppedImage);
I also made a gist:
https://gist.github.com/4259594
Try changing the drawInRect origin to 0.5, 0.5. Otherwise Quartz will distribute each pixel's color across the 4 adjacent pixels.
Set the color space of the target image. You might have a different color space, causing it to look slightly different.
Try the various rendering intents and see which gets the best result, perceptual versus relative colorimetric etc. There are 4 options I think.
You mention that the colors get modified by the saving of JPEG versus PNG.
You can specify the compression level when saving to JPEG. Try something like 0.8 or 0.9. You can also save JPEG without compression at 1.0, but then PNG has a distinct advantage. You specify the compression level in the options dictionary for CGImageDestinationAddImage.
Finally, if nothing here helps, you should open a TSI with DTS; they can certainly provide the guidance you seek.
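For reference, a sketch of where that compression level goes, reusing the destination and croppedImage variables from the snippet above (0.9 is just an example value):
NSDictionary *jpegOptions = [NSDictionary dictionaryWithObject:[NSNumber numberWithFloat:0.9f] forKey:(id)kCGImageDestinationLossyCompressionQuality];
// Pass the options when adding the image, then finalize as before.
CGImageDestinationAddImage(destination, croppedImage, (CFDictionaryRef)jpegOptions);
CGImageDestinationFinalize(destination);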
The usual problem is that cropping sizes are floats, but image pixels are integers.
Cocoa interpolates automatically.
You need to floor, round, or ceil the sizes and coordinates to be sure they are integers.
This may help.
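A minimal sketch of that rounding, using the panelRect from the question:
// Snap the crop rectangle to whole pixels before drawing.
NSRect pixelRect = NSMakeRect(floor(panelRect.origin.x), floor(panelRect.origin.y), round(panelRect.size.width), round(panelRect.size.height));
// NSIntegralRect(panelRect) is a one-call alternative, though it rounds the size outward.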
I am stripping EXIF data from JPG files and I think I found the reason:
All of the loss and change comes from re-compressing the image when saving to file.
You may notice the change even if you just save the whole image again.
What I do is read the original JPG and re-compress it at a quality that produces an equivalent file size.

CIFilter gaussianBlur and boxBlur are shrinking the image - how to avoid the resizing?

I am taking a snapshot of the contents of an NSView, applying a CIFilter, and placing the result back into the view. If the CIFilter is a form of blur, such as CIBoxBlur or CIGaussianBlur, the filtered result is slightly smaller than the original. As I am doing this iteratively, the result becomes increasingly small, which I want to avoid.
The issue is alluded to here, albeit in a slightly different context (Quartz Composer). Apple's FunHouse demo app applies a Gaussian blur without the image shrinking, but I haven't yet worked out how this app does it (it seems to be using OpenGL, which I am not familiar with).
Here is the relevant part of the code (inside an NSView subclass)
NSImage* background = [[NSImage alloc] initWithData:[self dataWithPDFInsideRect:[self bounds]]];
CIContext* context = [[NSGraphicsContext currentContext] CIContext];
CIImage* ciImage = [background ciImage];
CIFilter *filter = [CIFilter filterWithName:@"CIGaussianBlur"
keysAndValues: kCIInputImageKey, ciImage,
@"inputRadius", [NSNumber numberWithFloat:10.0], nil];
CIImage *result = [filter valueForKey:kCIOutputImageKey];
CGImageRef cgImage = [context createCGImage:result
fromRect:[result extent]];
NSImage* newBackground = [[NSImage alloc] initWithCGImage:cgImage size:background.size];
If I try a color-changing filter such as CISepiaTone, which is not shifting pixels around, the shrinking does not occur.
I am wondering if there is a quick fix that doesn't involve diving into OpenGL?
They're actually not shrinking the image, they're expanding it (I think by 7 pixels around all edges), and the default UIView 'scale To View' makes it look like it's been shrunk.
Crop your CIImage with:
CIImage *cropped=[output imageByCroppingToRect:CGRectMake(0, 0, view.bounds.size.width*scale, view.bounds.size.height*scale)];
where view is the original bounds of your NSView that you drew into and 'scale' is your [[UIScreen mainScreen] scale].
You probably want to clamp your image before using the blur:
- (CIImage*)imageByClampingToExtent {
CIFilter *clamp = [CIFilter filterWithName:@"CIAffineClamp"];
[clamp setValue:[NSAffineTransform transform] forKey:@"inputTransform"];
[clamp setValue:self forKey:@"inputImage"];
return [clamp valueForKey:@"outputImage"];
}
Then blur, and then crop to the original extent. You'll get non-transparent edges this way.
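Putting it together, a sketch of the clamp, blur, and crop pipeline (inputImage and blurRadius stand in for whatever you already have):
CIImage *clamped = [inputImage imageByClampingToExtent]; // the category method above
CIFilter *blur = [CIFilter filterWithName:@"CIGaussianBlur"];
[blur setValue:clamped forKey:kCIInputImageKey];
[blur setValue:[NSNumber numberWithFloat:blurRadius] forKey:@"inputRadius"];
CIImage *blurred = [blur valueForKey:kCIOutputImageKey];
// Crop back to the original extent so the output stays the same size as the input.
CIImage *result = [blurred imageByCroppingToRect:[inputImage extent]];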
@BBC_Z's solution is correct.
Although I find it more elegant to crop not according to the view, but to the image.
And you can cut away the useless blurred edges:
// Crop transparent edges from blur
resultImage = [resultImage imageByCroppingToRect:(CGRect){
.origin.x = blurRadius,
.origin.y = blurRadius,
.size.width = originalCIImage.extent.size.width - blurRadius*2,
.size.height = originalCIImage.extent.size.height - blurRadius*2
}];

Applying CIFilter destroys data from .BMP File

I seem to be tying myself up in knots trying to read into all of the different ways you can represent images in a Cocoa app for OSX.
My app reads in an image, applies CIFilters to it and then saves the output. Until this morning, this worked fine for all of the images that I've thrown at it. However, I've found some BMP files that produce empty, transparent images as soon as I try to apply any CIFilter to them.
One such image is one of the adverts from the Dragon Age 2 loader (I was just testing my app on random images this morning); http://www.johnwordsworth.com/wp-content/uploads/2011/08/hires_en.bmp
Specifically, my code does the following:
1. Load a CIImage using imageWithCGImage (the same problem occurs with initWithContentsOfURL).
2. Apply a number of CIFilters to the CIImage, all the while storing the current image in my AIImage container class.
3. Preview the image by adding an NSCIImageRep to an NSImage.
4. Save the image using NSBitmapImageRep / initWithCIImage and then representationUsingType (a sketch of this step appears after the snippets below).
This process works with 99% of the files I've thrown at it (all JPGs, PNGs, TIFFs so far), just not with certain BMP files. If I skip step 2, the preview and saved image come out OK. However, if I turn step 2 on, the image produced is always blank and transparent.
The code is quite large, but here are what I believe to be the relevant snippets...
AIImage Loading Method
CGImageSourceRef imageSource = CGImageSourceCreateWithURL((CFURLRef)[NSURL fileURLWithPath:imagePath], nil);
CGImageRef imageRef = CGImageSourceCreateImageAtIndex(imageSource, 0, nil);
CFDictionaryRef dictionaryRef = CGImageSourceCopyPropertiesAtIndex(imageSource, 0, nil);
self.imageProperties = [NSMutableDictionary dictionaryWithDictionary:((NSDictionary *)dictionaryRef)];
self.imageData = [CIImage imageWithCGImage:imageRef];
AIImageResize Method
NSAffineTransform *transform = [NSAffineTransform transform];
[transform scaleXBy:(targetSize.width / sourceRect.size.width) yBy:(targetSize.height / sourceRect.size.height)];
CIFilter *transformFilter = [CIFilter filterWithName:@"CIAffineTransform"];
[transformFilter setValue:transform forKey:@"inputTransform"];
[transformFilter setValue:currentImage forKey:@"inputImage"];
currentImage = [transformFilter valueForKey:@"outputImage"];
aiImage.imageData = currentImage;
CIImagePreview Method
NSCIImageRep *imageRep = [[NSCIImageRep alloc] initWithCIImage:ciImage];
NSImage *nsImage = [[[NSImage alloc] initWithSize:ciImage.extent.size] autorelease];
[nsImage addRepresentation:imageRep];
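Step 4 (saving) roughly looks like this; it is a sketch of the shape rather than the exact code, with the output path and quality as placeholders:
NSBitmapImageRep *outRep = [[NSBitmapImageRep alloc] initWithCIImage:aiImage.imageData];
NSData *outData = [outRep representationUsingType:NSJPEGFileType properties:[NSDictionary dictionaryWithObject:[NSNumber numberWithFloat:0.9f] forKey:NSImageCompressionFactor]];
[outData writeToFile:@"/Users/you/Desktop/output.jpg" atomically:YES]; // placeholder path
[outRep release];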
Thanks for looking. Any advice would be greatly appreciated.

Get pixel colour from a Webcam

I am trying to get the pixel colour from an image displayed by the webcam. I want to see how the pixel colour is changing with time.
My current solution sucks a LOT of CPU. It works and gives me the correct answer, but I am not 100% sure if I am doing this correctly or whether I could cut some steps out.
- (IBAction)addFrame:(id)sender
{
// Get the most recent frame
// This must be done in a @synchronized block because the delegate method that sets the most recent frame is not called on the main thread
CVImageBufferRef imageBuffer;
@synchronized (self) {
imageBuffer = CVBufferRetain(mCurrentImageBuffer);
}
if (imageBuffer) {
// Create an NSImage and add it to the movie
// I think I can remove some steps here, but not sure where.
NSCIImageRep *imageRep = [NSCIImageRep imageRepWithCIImage:[CIImage imageWithCVImageBuffer:imageBuffer]];
NSSize n = {320,160 };
//NSImage *image = [[[NSImage alloc] initWithSize:[imageRep size]] autorelease];
NSImage *image = [[[NSImage alloc] initWithSize:n] autorelease];
[image addRepresentation:imageRep];
CVBufferRelease(imageBuffer);
NSBitmapImageRep* raw_img = [NSBitmapImageRep imageRepWithData:[image TIFFRepresentation]];
NSLog(#"image width is %f", [image size].width);
NSColor* color = [raw_img colorAtX:1279 y:120];
float colourValue = [color greenComponent]+ [color redComponent]+ [color blueComponent];
[graphView setXY:10 andY:200*colourValue/3];
NSLog(#"%0.3f", colourValue);
Any help is appreciated and I am happy to try other ideas.
Thanks guys.
There are a couple of ways that this could be made more efficient. Take a look at the imageFromSampleBuffer: method in this Tech Q&A, which presents a cleaner way of getting from a CVImageBufferRef to an image (the sample uses a UIImage, but it's practically identical for an NSImage).
You can also pull the pixel values straight out of the CVImageBufferRef without any conversion. Once you have the base address of the buffer, you can calculate the offset of any pixel and just read the values from there.
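A minimal sketch of that direct read, assuming the buffer arrives as 32BGRA (check CVPixelBufferGetPixelFormatType in real code, since the capture format may differ):
// Read one pixel straight out of the CVImageBufferRef without building an NSImage.
CVPixelBufferRef pixelBuffer = (CVPixelBufferRef)imageBuffer;
CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
uint8_t *base = (uint8_t *)CVPixelBufferGetBaseAddress(pixelBuffer);
size_t x = 160, y = 120; // the pixel you care about (example coordinates)
uint8_t *px = base + y * bytesPerRow + x * 4; // 4 bytes per pixel in BGRA
float colourValue = (px[0] + px[1] + px[2]) / 255.0f; // blue + green + red, same 0..3 range as before
CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
This skips the CIImage/NSImage/TIFF round trip entirely, which is where most of the CPU time is going.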