Converting NSImage to CIImage without degraded quality - Objective-C

I am trying to convert an NSImage to a CIImage. When I do this, there seems to be a huge loss in image quality. I think it is because of the TIFFRepresentation step. Does anyone have a better method? Thanks a lot.
NSImage *image = [[NSImage alloc] initWithData:[someSource dataRepresentation]];
NSData * tiffData = [image TIFFRepresentation];
CIImage *backgroundCIImage = [[CIImage alloc] initWithData:tiffData];
CIContext *ciContext = [[NSGraphicsContext currentContext] CIContext];
[ciContext drawImage:backgroundCIImage atPoint:CGPointZero fromRect:someRect];

Your problem is indeed converting to TIFF. PDF is a vector format, while TIFF is bitmap, so a TIFF will look blurry at larger sizes.
Your best bet is probably to get a CGImage from the NSImage and create the CIImage from that, as in the sketch below. Either that, or create the CIImage directly from the original data.
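A minimal sketch of that CGImage route (untested; someSource comes from the question's code, and CGImageForProposedRect:context:hints: is standard AppKit):
NSImage *image = [[NSImage alloc] initWithData:[someSource dataRepresentation]];
NSRect proposedRect = NSMakeRect(0, 0, image.size.width, image.size.height);
// Ask AppKit for a CGImage that best matches the target rect.
CGImageRef cgImage = [image CGImageForProposedRect:&proposedRect
                                           context:[NSGraphicsContext currentContext]
                                             hints:nil];
// Build the CIImage straight from the CGImage, skipping the lossy TIFF round trip.
CIImage *backgroundCIImage = [[CIImage alloc] initWithCGImage:cgImage];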

Try replacing the line
NSData * tiffData = [image TIFFRepresentation];
with
NSData *tiffData = [image TIFFRepresentationUsingCompression:NSTIFFCompressionNone factor:0.0f];
because the documentation states that TIFFRepresentation uses the TIFF compression option associated with each image representation, which might not be NSTIFFCompressionNone. Thus, you should be explicit about wanting the tiffData uncompressed.

I finally solved the problem. Basically, I render the PDF document offscreen at twice its normal resolution and then capture the image displayed by the view. For a more detailed image, just increase the scaling factor. Please see the code below for the proof of concept. I didn't show the CIImage step, but once you have the bitmap, just create the CIImage from it (see the one-line sketch after the code).
NSImage *pdfImage = [[NSImage alloc] initWithData:[[aPDFView activePage] dataRepresentation]];
NSSize size = [pdfImage size];
NSRect imageRect = NSMakeRect(0, 0, size.width, size.height);
imageRect.size.width *= 2; //Twice the scale factor
imageRect.size.height *= 2; //Twice the scale factor
PDFDocument *pageDocument = [[[PDFDocument alloc] init] autorelease];
[pageDocument insertPage:[aPDFView activePage] atIndex:0];
PDFView *pageView = [[[PDFView alloc] init] autorelease];
[pageView setDocument:pageDocument];
[pageView setAutoScales:YES];
NSWindow *offscreenWindow = [[NSWindow alloc] initWithContentRect:imageRect
                                                        styleMask:NSBorderlessWindowMask
                                                          backing:NSBackingStoreRetained
                                                            defer:NO];
[offscreenWindow setContentView:pageView];
[offscreenWindow display];
[[offscreenWindow contentView] display]; // Draw to the backing buffer
// Create the NSBitmapImageRep
[[offscreenWindow contentView] lockFocus];
NSBitmapImageRep* rep = [[NSBitmapImageRep alloc] initWithFocusedViewRect:imageRect];
// Clean up and delete the window, which is no longer needed.
[[offscreenWindow contentView] unlockFocus];
NSData *imageData = [rep representationUsingType:NSJPEGFileType properties:nil];
[imageData writeToFile:@"/Users/David/Desktop/out.jpg" atomically:YES];
[offscreenWindow release];
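The missing CIImage step is a one-liner; a sketch using the AppKit addition initWithBitmapImageRep: (assuming rep from the code above):
// Create the CIImage directly from the high-resolution bitmap rep.
CIImage *backgroundCIImage = [[CIImage alloc] initWithBitmapImageRep:rep];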

Related

NSImage drawAtPoint lost quality

I have a monochrome picture. When I draw this picture onto another picture, it becomes a gray picture.
Here is my code:
Save a monochrome picture:
NSImage *copyImage = [image copy]; // Copy from a monochrome picture
NSBitmapImageRep *copyrep = [[NSBitmapImageRep alloc] initWithData:[copyImage TIFFRepresentation]];
NSSize copySize;
copySize.width = 72.0 * [copyrep pixelsWide] / PRINT_DPI; //Set Printing DPI
copySize.height = 72.0 * [copyrep pixelsHigh] / PRINT_DPI; //Set Printing DPI
[copyrep setSize:copySize];
copyImage= [[NSImage alloc] initWithData:[copyrep TIFFRepresentation]];
NSData *imgData = [copyImage TIFFRepresentation];
[imgData writeToFile:@"/Users/bbmac/Desktop/1.png" atomically:NO]; // save for testing
But when I draw it onto a big picture, it becomes a gray picture:
// for printing
NSImage *printImage = [[NSImage alloc] initWithSize:imageSize];
[printImage lockFocus];
// Draw a monochrome picture
[image drawAtPoint:NSMakePoint(newX, newY) fromRect:NSZeroRect operation:NSCompositeCopy fraction:1.0f];
[printImage unlockFocus];
// Save for testing
[[printImage TIFFRepresentation] writeToFile:@"/Users/bbmac/Desktop/printImage.png" atomically:NO];

Converting CIImage Into NSImage

I'm playing with the Core Image framework. As I understand it, if I have an image (NSImage), it needs to be converted into a CIImage first. I can do that.
NSImage *img1 = [[NSImage alloc] initWithContentsOfFile:imagepath];
NSRect rect1 = NSMakeRect(0, 0, img1.size.width, img1.size.height);
CGImageRef imageRef1 = [img1 CGImageForProposedRect:&rect1 context:[NSGraphicsContext currentContext] hints:nil];
CIImage *ciimage = [CIImage imageWithCGImage:imageRef1];
I have a function that applies a Core Image filter to a CIImage, which I want to test. And I want to add the output image to a window as a subview, so I need an NSImage. How can I convert this CIImage back into an NSImage? If I ask Google, I don't get good results.
Thank you for your help.
I haven't tested it, but I think this should do it:
CIImage *ciImage = ...;
NSCIImageRep *rep = [NSCIImageRep imageRepWithCIImage:ciImage];
NSImage *nsImage = [[NSImage alloc] initWithSize:rep.size];
[nsImage addRepresentation:rep];
In Swift:
let ciImage = ...
let rep = NSCIImageRep(ciImage: ciImage)
let nsImage = NSImage(size: rep.size)
nsImage.addRepresentation(rep)
Or in Swift, starting from a filter's output:
let rep = NSCIImageRep(ciImage: gaussianBlurFilter.outputImage)
let nsImage = NSImage(size: rep.size)
nsImage.addRepresentation(rep)
There are filters that extend the size of the image quite a lot, like CIMotionBlur.
For an original image of size 5120x1440 I ended up with an image whose "extent" was (x, y, w, h) = (-126, -502, 5184, 2444). To convert that to an NSImage I use:
CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef cg_img = [context createCGImage:img fromRect:CGRectMake(0, 0, size.width, size.height)];
NSImage *ns_img = [[NSImage alloc] initWithCGImage:cg_img size:NSZeroSize];
CGImageRelease(cg_img); // Don't forget this, or the CGImage will leak
Where size is the original image's size. I don't see another direct path from CIImage to NSImage that allows you to specify the origin within the CIImage, while the CGImageRef conversion does.
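If you instead want to keep everything the filter produced, including the expanded borders, a small variation (an untested sketch reusing context and img from above) is to render the full extent:
// Render the filter's entire output, including pixels outside the original bounds.
CGImageRef cg_full = [context createCGImage:img fromRect:[img extent]];
NSImage *ns_full = [[NSImage alloc] initWithCGImage:cg_full size:NSZeroSize];
CGImageRelease(cg_full);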

Scale Up NSImage and Save

I would like to scale up an image that's 64px to make it 512px (even if it's blurry or pixelated).
I'm using this to get the image from my NSImageView and save it:
NSData *customimageData = [[customIcon image] TIFFRepresentation];
NSBitmapImageRep *customimageRep = [NSBitmapImageRep imageRepWithData:customimageData];
customimageData = [customimageRep representationUsingType:NSPNGFileType properties:nil];
NSString *customBundlePath = [[NSBundle mainBundle] pathForResource:@"customIcon" ofType:@"png"];
[customimageData writeToFile:customBundlePath atomically:YES];
I've tried setSize: but it still saves at 64px.
Thanks in advance!
You can't use NSImage's size property, as it bears only an indirect relationship to the pixel dimensions of an image representation. A good way to resize pixel dimensions is to use the drawInRect: method of NSImageRep:
- (BOOL)drawInRect:(NSRect)rect
Draws the entire image in the specified rectangle, scaling it as needed to fit.
Here is an image resize method (it creates a new NSImage at the pixel size you want):
- (NSImage *)resizeImage:(NSImage *)sourceImage size:(NSSize)size
{
    NSRect targetFrame = NSMakeRect(0, 0, size.width, size.height);
    NSImageRep *sourceImageRep = [sourceImage bestRepresentationForRect:targetFrame
                                                                context:nil
                                                                  hints:nil];
    NSImage *targetImage = [[NSImage alloc] initWithSize:size];
    [targetImage lockFocus];
    [sourceImageRep drawInRect:targetFrame];
    [targetImage unlockFocus];
    return targetImage;
}
It's from a more detailed answer I gave here: NSImage doesn't scale
Another resize method that works is the NSImage method drawInRect:fromRect:operation:fraction:respectFlipped:hints:
- (void)drawInRect:(NSRect)dstSpacePortionRect
          fromRect:(NSRect)srcSpacePortionRect
         operation:(NSCompositingOperation)op
          fraction:(CGFloat)requestedAlpha
    respectFlipped:(BOOL)respectContextIsFlipped
             hints:(NSDictionary *)hints
The main advantage of this method is the hints NSDictionary, in which you have some control over interpolation. This can yield widely differing results when enlarging an image. The value for the NSImageHintInterpolation key is an NSImageInterpolation enum that can take one of five values…
enum {
    NSImageInterpolationDefault = 0,
    NSImageInterpolationNone = 1,
    NSImageInterpolationLow = 2,
    NSImageInterpolationMedium = 4,
    NSImageInterpolationHigh = 3
};
typedef NSUInteger NSImageInterpolation;
Using this method there is no need for the intermediate step of extracting an image rep; NSImage will do the right thing...
- (NSImage *)resizeImage:(NSImage *)sourceImage size:(NSSize)size
{
    NSRect targetFrame = NSMakeRect(0, 0, size.width, size.height);
    NSImage *targetImage = [[NSImage alloc] initWithSize:size];
    [targetImage lockFocus];
    [sourceImage drawInRect:targetFrame
                   fromRect:NSZeroRect      // portion of source image to draw; NSZeroRect means the whole image
                  operation:NSCompositeCopy // compositing operation
                   fraction:1.0             // alpha (transparency) value
             respectFlipped:YES             // respect the context's flipped coordinate system
                      hints:@{NSImageHintInterpolation:
                                  [NSNumber numberWithInt:NSImageInterpolationLow]}];
    [targetImage unlockFocus];
    return targetImage;
}

UIImage from CIImage - Data length is zero?

I'm using an AVCaptureVideoDataOutput along with its delegate method to manipulate video frames. In the delegate method, I use the sampleBuffer to create a CIImage, and from there I crop the CIImage, convert it to a UIImage, and display it. Unfortunately, I need to determine the file size of this new UIImage, but it's returning 0. The code works, the image is cropped beautifully, everything. I just don't see why it has no data!
Why might this be? Relevant code follows:
// In delegate method, given sampleBuffer...
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CFDictionaryRef attachments = CMCopyDictionaryOfAttachments(kCFAllocatorDefault,
                                                            sampleBuffer, kCMAttachmentMode_ShouldPropagate);
CIImage *ciImage = [[CIImage alloc] initWithCVPixelBuffer:pixelBuffer
                                                  options:(NSDictionary *)attachments];
...
dispatch_async(dispatch_get_main_queue(), ^(void) {
    CGRect rect = [self drawFaceBoxesForFeatures:features forVideoBox:clap
                                     orientation:curDeviceOrientation];
    CIImage *cropped = [ciImage imageByCroppingToRect:rect];
    UIImage *image = [[UIImage alloc] initWithCIImage:cropped];
    NSData *data = UIImageJPEGRepresentation(image, 1);
    NSLog(@"Image size is %lu", (unsigned long)data.length); // returns 0???
    [imageView setImage:image];
    [image release];
});
I had the same problem, but with simple filtered images.
I stumbled upon the following, and it solved the issue. After this, I was able to save my image.
CGSize size = self.originalImage.size;
CGRect rect;
rect.origin = CGPointZero;
rect.size = size;
UIGraphicsBeginImageContext(size);
[[UIImage imageWithCIImage:self.filteredImage] drawInRect:rect];
UIImage * image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
NSData * jpegData = UIImageJPEGRepresentation(image, 1.0);
But I only needed the two lines inside the image context: drawing the CIImage-backed UIImage forces Core Image to render actual bitmap data, which is why UIImageJPEGRepresentation then stops returning zero-length data.

QRCode generate OFFLINE

Is there an Objective-C library which will allow me to generate QR codes offline?
Thanks
See: https://github.com/jverkoey/ObjQREncoder#readme
To use it:
#import <QREncoder/QREncoder.h>
UIImage *image = [QREncoder encode:@"http://www.google.com/"];
In OS X Mavericks and iOS 7, QR code generation is part of Core Image: you just use the CIQRCodeGenerator filter. On GitHub you can find a class which implements this for iOS; I've adapted that code to get the OS X-compatible code below:
NSString *website = @"http://stackoverflow.com/";
NSData *urlAsData = [website dataUsingEncoding:NSUTF8StringEncoding];
CIFilter *filter = [CIFilter filterWithName:@"CIQRCodeGenerator"];
[filter setDefaults];
[filter setValue:urlAsData forKey:@"inputMessage"];
[filter setValue:@"M" forKey:@"inputCorrectionLevel"];
CIImage *outputImage = [filter valueForKey:kCIOutputImageKey];
If you want to draw the CIImage there are several possibilities. You can create an NSImage like this:
CIContext *context = [[NSGraphicsContext currentContext] CIContext];
CGImageRef cgImage = [context createCGImage:outputImage
fromRect:[outputImage extent]];
NSImage *image = [[NSImage alloc] initWithCGImage:cgImage size:NSZeroSize];
But this image will typically be much smaller than you want. I believe each black dot in the QR code just becomes one pixel, which is not quite what you want. To scale up the image without making it blurry, do this:
NSSize largeSize = NSMakeSize(image.size.width * 10, image.size.height * 10);
[image setScalesWhenResized:YES];
NSImage *largeImage = [[NSImage alloc] initWithSize:largeSize];
[largeImage lockFocus];
[image setSize:largeSize];
[[NSGraphicsContext currentContext] setImageInterpolation:NSImageInterpolationNone];
[image drawAtPoint:NSZeroPoint fromRect:CGRectMake(0, 0, largeSize.width, largeSize.height) operation:NSCompositeCopy fraction:1.0];
[largeImage unlockFocus];
largeImage is then your result image which you can display.
If you want to decode a QR code, you use AVFoundation, as explained in this blog. Unfortunately this seems to only be supported on iOS 7 at the moment.
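A rough sketch of that AVFoundation route (untested here, and not the blog's exact code; it assumes an already-configured AVCaptureSession named session and a delegate adopting AVCaptureMetadataOutputObjectsDelegate):
// Setup, e.g. in viewDidLoad, after inputs are attached to the session:
AVCaptureMetadataOutput *output = [[AVCaptureMetadataOutput alloc] init];
[session addOutput:output]; // the output must be attached before QR types become available
[output setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];
output.metadataObjectTypes = @[AVMetadataObjectTypeQRCode];

// Delegate callback, fired whenever a QR code is recognized in the stream:
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputMetadataObjects:(NSArray *)metadataObjects
       fromConnection:(AVCaptureConnection *)connection {
    for (AVMetadataMachineReadableCodeObject *object in metadataObjects) {
        if ([object.type isEqualToString:AVMetadataObjectTypeQRCode]) {
            NSLog(@"Decoded QR code: %@", object.stringValue);
        }
    }
}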
Just a simple and native way of generating the QR code:
- (CIImage *)createQRForString:(NSString *)qrString {
    NSData *stringData = [qrString dataUsingEncoding:NSISOLatin1StringEncoding];
    CIFilter *qrFilter = [CIFilter filterWithName:@"CIQRCodeGenerator"];
    [qrFilter setValue:stringData forKey:@"inputMessage"];
    return qrFilter.outputImage;
}
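A hypothetical usage example: because CIQRCodeGenerator produces its output procedurally, applying a scale transform to the CIImage before it is rendered keeps the modules crisp (the 10x factor is arbitrary):
CIImage *qrImage = [self createQRForString:@"http://stackoverflow.com/"];
// Scale the tiny generator output up before it gets rendered to pixels.
CIImage *scaledImage = [qrImage imageByApplyingTransform:CGAffineTransformMakeScale(10.0, 10.0)];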