Scale Up NSImage and Save - objective-c

I would like to scale up an image that's 64px to make it 512px (even if it ends up blurry or pixelated).
I'm using this to get the image from my NSImageView and save it:
NSData *customimageData = [[customIcon image] TIFFRepresentation];
NSBitmapImageRep *customimageRep = [NSBitmapImageRep imageRepWithData:customimageData];
customimageData = [customimageRep representationUsingType:NSPNGFileType properties:nil];
NSString* customBundlePath = [[NSBundle mainBundle] pathForResource:@"customIcon" ofType:@"png"];
[customimageData writeToFile:customBundlePath atomically:YES];
I've tried setSize: but it still saves at 64px.
Thanks in advance!

You can't use NSImage's size property for this, as it bears only an indirect relationship to the pixel dimensions of an image representation. A good way to resize to specific pixel dimensions is to use NSImageRep's drawInRect: method:
- (BOOL)drawInRect:(NSRect)rect
Draws the entire image in the specified rectangle, scaling it as needed to fit.
Here is an image resize method (it creates a new NSImage at the pixel size you want):
- (NSImage*) resizeImage:(NSImage*)sourceImage size:(NSSize)size
{
    NSRect targetFrame = NSMakeRect(0, 0, size.width, size.height);
    NSImage* targetImage = nil;
    NSImageRep *sourceImageRep = [sourceImage bestRepresentationForRect:targetFrame
                                                                context:nil
                                                                  hints:nil];
    targetImage = [[NSImage alloc] initWithSize:size];
    [targetImage lockFocus];
    [sourceImageRep drawInRect:targetFrame];
    [targetImage unlockFocus];
    return targetImage;
}
It's from a more detailed answer I gave here: NSImage doesn't scale
Another resize method that works is the NSImage method drawInRect:fromRect:operation:fraction:respectFlipped:hints
- (void)drawInRect:(NSRect)dstSpacePortionRect
fromRect:(NSRect)srcSpacePortionRect
operation:(NSCompositingOperation)op
fraction:(CGFloat)requestedAlpha
respectFlipped:(BOOL)respectContextIsFlipped
hints:(NSDictionary *)hints
The main advantage of this method is the hints NSDictionary, in which you have some control over interpolation. This can yield widely differing results when enlarging an image. The NSImageHintInterpolation key takes an NSImageInterpolation value, an enum with five possible values…
enum {
    NSImageInterpolationDefault = 0,
    NSImageInterpolationNone = 1,
    NSImageInterpolationLow = 2,
    NSImageInterpolationMedium = 4,
    NSImageInterpolationHigh = 3
};
typedef NSUInteger NSImageInterpolation;
Using this method there is no need for the intermediate step of extracting an image rep; NSImage will do the right thing:
- (NSImage*) resizeImage:(NSImage*)sourceImage size:(NSSize)size
{
    NSRect targetFrame = NSMakeRect(0, 0, size.width, size.height);
    NSImage* targetImage = [[NSImage alloc] initWithSize:size];
    [targetImage lockFocus];
    [sourceImage drawInRect:targetFrame
                   fromRect:NSZeroRect       // portion of source image to draw
                  operation:NSCompositeCopy  // compositing operation
                   fraction:1.0              // alpha (transparency) value
             respectFlipped:YES              // coordinate system
                      hints:@{NSImageHintInterpolation:
                              [NSNumber numberWithInt:NSImageInterpolationLow]}];
    [targetImage unlockFocus];
    return targetImage;
}
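Applied to the original question, a minimal sketch (assuming the resizeImage:size: method above and the customIcon image view from the question; targetPath is a placeholder for wherever you want to write the file):

// Scale the 64px icon up to 512x512 (it will look pixelated, as expected)
NSImage *bigImage = [self resizeImage:[customIcon image] size:NSMakeSize(512, 512)];

// Convert to PNG via a bitmap rep, as in the question, and write it out
NSData *tiffData = [bigImage TIFFRepresentation];
NSBitmapImageRep *bitmapRep = [NSBitmapImageRep imageRepWithData:tiffData];
NSData *pngData = [bitmapRep representationUsingType:NSPNGFileType properties:nil];
[pngData writeToFile:targetPath atomically:YES];   // targetPath: illustrative destination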

Related

Saving an image in an NSView

I have a problem with rotating and saving a JPEG NSImage. I have an NSView which is flipped:
- (BOOL)isFlipped
{
    return YES;
}
Then I apply the NSImage rotation with the following function:
- (NSImage*)imageRotatedByDegrees:(CGFloat)degrees
{
    // calculate the bounds for the rotated image
    NSRect imageBounds = {NSZeroPoint, [image size]};
    NSBezierPath* boundsPath = [NSBezierPath bezierPathWithRect:imageBounds];
    NSAffineTransform* transform = [NSAffineTransform transform];
    [transform rotateByDegrees:degrees];
    [boundsPath transformUsingAffineTransform:transform];
    NSRect rotatedBounds = {NSZeroPoint, [boundsPath bounds].size};
    NSImage* rotatedImage = [[NSImage alloc] initWithSize:rotatedBounds.size];

    // center the image within the rotated bounds
    imageBounds.origin.x = NSMidX(rotatedBounds) - (NSWidth(imageBounds) / 2);
    imageBounds.origin.y = NSMidY(rotatedBounds) - (NSHeight(imageBounds) / 2);

    // set up the rotation transform
    transform = [NSAffineTransform transform];
    [transform translateXBy:+(NSWidth(rotatedBounds) / 2) yBy:+(NSHeight(rotatedBounds) / 2)];
    [transform rotateByDegrees:degrees];
    [transform translateXBy:-(NSWidth(rotatedBounds) / 2) yBy:-(NSHeight(rotatedBounds) / 2)];

    // draw the original image, rotated, into the new image
    [rotatedImage lockFocus];
    [transform set];
    [image drawInRect:imageBounds fromRect:NSZeroRect operation:NSCompositeCopy fraction:1.0];
    [rotatedImage unlockFocus];

    return rotatedImage;
}
The image is now successfully rotated. Later, I try to save the JPEG with the following code:
-(void)saveDocument:(id)sender
{
    NSData *imageData = [image TIFFRepresentation];
    NSBitmapImageRep *imageRep = [NSBitmapImageRep imageRepWithData:imageData];
    NSDictionary *imageProps = [NSDictionary dictionaryWithObject:[NSNumber numberWithFloat:1.0]
                                                           forKey:NSImageCompressionFactor];
    imageData = [imageRep representationUsingType:NSJPEGFileType properties:imageProps];
    [imageData writeToFile:[_imageURL path] atomically:YES];
}
The resulting JPEG file is incorrectly flipped... What am I doing wrong?
Thanks a lot for any ideas, Petr
-[NSImage lockFocusFlipped:] could help.
Finally, I found the correct solution.
- (BOOL)isFlipped
{
    return NO;
}
Then it is important to set up the NSImage like this (thanks to pointum!):
[_image lockFocusFlipped:YES];
From now on, when I save the image, it is correctly rotated and correctly oriented.
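One way to read the accepted fix, as a minimal sketch (assuming the lock being replaced is the one inside imageRotatedByDegrees: above; only the lock call changes):

// In imageRotatedByDegrees:, lock focus flipped instead of using plain lockFocus,
// so drawing happens in a flipped (top-left origin) coordinate system:
[rotatedImage lockFocusFlipped:YES];
// ... same transform setup and drawInRect:fromRect:operation:fraction: as before ...
[rotatedImage unlockFocus];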

UIImage from CIImage - Data length is zero?

I'm using an AVCaptureVideoDataOutput along with its delegate method to manipulate video frames. In the delegate method, I am using the sampleBuffer to create a CIImage, and from here I crop the CIImage, convert it to a UIImage and display it. Unfortunately, I need to determine the file-size of this new UIImage, but it's returning 0. The code works, the image is cropped beautifully, everything. I just don't see why it has no data!
Why might this be? Relevant code follows:
//In delegate method, given sampleBuffer...
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CFDictionaryRef attachments = CMCopyDictionaryOfAttachments(kCFAllocatorDefault,
                                                            sampleBuffer, kCMAttachmentMode_ShouldPropagate);
CIImage *ciImage = [[CIImage alloc] initWithCVPixelBuffer:pixelBuffer
                                                  options:(NSDictionary *)attachments];
...
dispatch_async(dispatch_get_main_queue(), ^(void) {
    CGRect rect = [self drawFaceBoxesForFeatures:features forVideoBox:clap
                                     orientation:curDeviceOrientation];
    CIImage *cropped = [ciImage imageByCroppingToRect:rect];
    UIImage *image = [[UIImage alloc] initWithCIImage:cropped];
    NSData *data = UIImageJPEGRepresentation(image, 1);
    NSLog(@"Image size is %lu", (unsigned long)data.length); // returns 0???
    [imageView setImage:image];
    [image release];
});
I had the same problem, but with simple filtered images.
I stumbled upon this and it solved the issue. After this, I was able to save my image.
CGSize size = self.originalImage.size;
CGRect rect;
rect.origin = CGPointZero;
rect.size = size;
UIGraphicsBeginImageContext(size);
[[UIImage imageWithCIImage:self.filteredImage] drawInRect:rect];
UIImage * image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
NSData * jpegData = UIImageJPEGRepresentation(image, 1.0);
But I only needed these two lines inside the image context: the drawInRect: call and the UIGraphicsGetImageFromCurrentImageContext() call.
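Applied to the code in the question, that boils down to something like this (a sketch; cropped, rect, and imageView are the names from the question):

// Render the CIImage-backed UIImage into a real bitmap context so JPEG encoding has pixel data
UIGraphicsBeginImageContext(rect.size);
[[UIImage imageWithCIImage:cropped] drawInRect:CGRectMake(0, 0, rect.size.width, rect.size.height)];
UIImage *bitmapImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

NSData *data = UIImageJPEGRepresentation(bitmapImage, 1.0);  // now non-zero
[imageView setImage:bitmapImage];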

Slicing NSImage horizontally in Cocoa

I need to slice an NSImage into two equal halves horizontally. Please help me out. Thanks in advance.
This is a trick I just added to my toolkit. I've added it as a category on NSImage. You pass in the source image and the rect from which to slice a new image. Here's the code:
+ (NSImage *) sliceImage:(NSImage *)image fromRect:(NSRect)srcRect {
    NSRect targetRect = NSMakeRect(0, 0, srcRect.size.width, srcRect.size.height);
    NSImage *result = [[NSImage alloc] initWithSize:targetRect.size];
    [result lockFocus];
    [image drawInRect:targetRect fromRect:srcRect operation:NSCompositeCopy fraction:1.0];
    [result unlockFocus];
    return [result autorelease];
}
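For the original question (two equal horizontal halves), a short usage sketch built on the category method above (sourceImage is whatever image you want to split; remember that NSImage coordinates have their origin at the bottom left):

NSSize size = [sourceImage size];
NSRect bottomHalf = NSMakeRect(0, 0, size.width, size.height / 2.0);
NSRect topHalf    = NSMakeRect(0, size.height / 2.0, size.width, size.height / 2.0);

NSImage *bottom = [NSImage sliceImage:sourceImage fromRect:bottomHalf];
NSImage *top    = [NSImage sliceImage:sourceImage fromRect:topHalf];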

Resize and Save NSImage?

I have an NSImageView for which I get an image from an NSOpenPanel. That works great.
Now, how can I take that NSImage, halve its size, and save it in the same format and in the same directory as the original?
If you can help at all with anything I'd appreciate it, thanks.
Check the ImageCrop sample project from Matt Gemmell:
http://mattgemmell.com/source/
It's a nice example of how to resize / crop images.
Finally, you can use something like this to save the result (quick-and-dirty sample):
// Write to TIF
[[resultImg TIFFRepresentation] writeToFile:@"/Users/Anne/Desktop/Result.tif" atomically:YES];
// Write to JPG
NSData *imageData = [resultImg TIFFRepresentation];
NSBitmapImageRep *imageRep = [NSBitmapImageRep imageRepWithData:imageData];
NSDictionary *imageProps = [NSDictionary dictionaryWithObject:[NSNumber numberWithFloat:0.9] forKey:NSImageCompressionFactor];
imageData = [imageRep representationUsingType:NSJPEGFileType properties:imageProps];
[imageData writeToFile:@"/Users/Anne/Desktop/Result.jpg" atomically:NO];
Since NSImage objects are immutable you will have to:
1. Create a Core Graphics context the size of the new image.
2. Draw the NSImage into the CGContext. It should automatically scale it for you.
3. Create an NSImage from that context.
4. Write out the new NSImage.
Don't forget to release any temporary objects you allocated.
There are definitely other options, but this is the first one that came to mind; a minimal sketch of these steps follows.
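A minimal sketch of those steps (not from the original answer; the method and variable names are just for illustration, and manual reference counting is assumed as elsewhere on this page):

- (NSImage *)imageByHalvingImage:(NSImage *)source   // illustrative name
{
    NSSize halfSize = NSMakeSize(source.size.width / 2.0, source.size.height / 2.0);

    // 1. Create a Core Graphics bitmap context at the new size
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(NULL, halfSize.width, halfSize.height,
                                                 8, 0, colorSpace,
                                                 (CGBitmapInfo)kCGImageAlphaPremultipliedLast);

    // 2. Draw the NSImage into the CGContext; drawInRect: scales it to fit
    NSGraphicsContext *nsContext = [NSGraphicsContext graphicsContextWithGraphicsPort:context flipped:NO];
    [NSGraphicsContext saveGraphicsState];
    [NSGraphicsContext setCurrentContext:nsContext];
    [source drawInRect:NSMakeRect(0, 0, halfSize.width, halfSize.height)
              fromRect:NSZeroRect
             operation:NSCompositeCopy
              fraction:1.0];
    [NSGraphicsContext restoreGraphicsState];

    // 3. Create an NSImage from the context
    CGImageRef cgImage = CGBitmapContextCreateImage(context);
    NSImage *result = [[NSImage alloc] initWithCGImage:cgImage size:halfSize];

    // 4. The caller can then write it out, e.g. with the TIFF/JPEG snippet above
    CGImageRelease(cgImage);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    return [result autorelease];
}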
+(NSImage*) resize:(NSImage*)aImage scale:(CGFloat)aScale
{
    NSImageView* kView = [[NSImageView alloc] initWithFrame:NSMakeRect(0, 0, aImage.size.width * aScale, aImage.size.height * aScale)];
    [kView setImageScaling:NSImageScaleProportionallyUpOrDown];
    [kView setImage:aImage];
    NSRect kRect = kView.frame;
    NSBitmapImageRep* kRep = [kView bitmapImageRepForCachingDisplayInRect:kRect];
    [kView cacheDisplayInRect:kRect toBitmapImageRep:kRep];
    NSData* kData = [kRep representationUsingType:NSJPEGFileType properties:nil];
    return [[NSImage alloc] initWithData:kData];
}
Here is a specific implementation:
-(NSImage*)resizeImage:(NSImage*)input by:(CGFloat)factor
{
    NSSize size = NSZeroSize;
    size.width = input.size.width * factor;
    size.height = input.size.height * factor;
    NSImage *ret = [[NSImage alloc] initWithSize:size];
    [ret lockFocus];
    NSAffineTransform *transform = [NSAffineTransform transform];
    [transform scaleBy:factor];
    [transform concat];
    [input drawAtPoint:NSZeroPoint fromRect:NSZeroRect operation:NSCompositeCopy fraction:1.0];
    [ret unlockFocus];
    return [ret autorelease];
}
Keep in mind that this is pixel based. With HiDPI the backing scale must be taken into account; it is simple to obtain (here as a method on the view):
-(CGFloat)pixelScaling
{
    NSRect pixelBounds = [self convertRectToBacking:self.bounds];
    return pixelBounds.size.width / self.bounds.size.width;
}
Apple has sample code for downscaling and saving images, found here:
http://developer.apple.com/library/mac/#samplecode/Reducer/Introduction/Intro.html
Here is some code that makes more extensive use of Core Graphics than the other answers. It's written according to hints in Mark Thalman's answer to this question.
This code downscales an NSImage based on a target image width. It's somewhat nasty, but still useful as an extra sample documenting how to draw an NSImage in a CGContext and how to write the contents of a CGBitmapContext and CGImage to a file.
You may want to add extra error checking; I didn't need it for my use case.
- (void)generateThumbnailForImage:(NSImage*)image atPath:(NSString*)newFilePath forWidth:(int)width
{
    CGSize size = CGSizeMake(width, image.size.height * (float)width / (float)image.size.width);
    CGColorSpaceRef rgbColorspace = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGImageAlphaPremultipliedLast;
    CGContextRef context = CGBitmapContextCreate(NULL, size.width, size.height, 8, size.width * 4, rgbColorspace, bitmapInfo);
    NSGraphicsContext *graphicsContext = [NSGraphicsContext graphicsContextWithGraphicsPort:context flipped:NO];
    [NSGraphicsContext setCurrentContext:graphicsContext];
    [image drawInRect:NSMakeRect(0, 0, size.width, size.height) fromRect:NSMakeRect(0, 0, image.size.width, image.size.height) operation:NSCompositeCopy fraction:1.0];

    CGImageRef outImage = CGBitmapContextCreateImage(context);
    CFURLRef outURL = (CFURLRef)[NSURL fileURLWithPath:newFilePath];
    CGImageDestinationRef outDestination = CGImageDestinationCreateWithURL(outURL, kUTTypeJPEG, 1, NULL);
    CGImageDestinationAddImage(outDestination, outImage, NULL);
    if (!CGImageDestinationFinalize(outDestination))
    {
        NSLog(@"Failed to write image to %@", newFilePath);
    }

    CFRelease(outDestination);
    CGImageRelease(outImage);
    CGContextRelease(context);
    CGColorSpaceRelease(rgbColorspace);
}
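A brief usage note (the path and width are illustrative; the CGImageDestination calls come from ImageIO, and kUTTypeJPEG is declared in the UTCoreTypes header that CoreServices provides on macOS):

#import <ImageIO/ImageIO.h>
#import <CoreServices/CoreServices.h>   // for kUTTypeJPEG

// somewhere after obtaining sourceImage:
[self generateThumbnailForImage:sourceImage
                         atPath:@"/tmp/thumbnail.jpg"   // illustrative output path
                       forWidth:256];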
To resize an image:
- (NSImage *)scaleImage:(NSImage *)anImage newSize:(NSSize)newSize
{
    NSImage *sourceImage = anImage;
    if (![sourceImage isValid]) {
        return nil;
    }
    // Nothing to do if the size already matches or the requested size is invalid
    if ((anImage.size.width == newSize.width && anImage.size.height == newSize.height) ||
        newSize.width <= 0 || newSize.height <= 0) {
        return anImage;
    }
    NSRect oldRect = NSMakeRect(0.0, 0.0, anImage.size.width, anImage.size.height);
    NSRect newRect = NSMakeRect(0, 0, newSize.width, newSize.height);
    NSImage *newImage = [[NSImage alloc] initWithSize:newSize];
    [newImage lockFocus];
    [sourceImage drawInRect:newRect fromRect:oldRect operation:NSCompositeCopy fraction:1.0];
    [newImage unlockFocus];
    return newImage;
}

Converting NSImage to CIImage without degraded quality

I am trying to convert an NSImage to a CIImage. When I do this, there seems to be a huge loss in image quality. I think it is because of the "TIFFRepresentation". Does anyone have a better method? Thanks a lot.
NSImage *image = [[NSImage alloc] initWithData:[someSource dataRepresentation]];
NSData * tiffData = [image TIFFRepresentation];
CIImage *backgroundCIImage = [[CIImage alloc] initWithData:tiffData];
CIContext *ciContext = [[NSGraphicsContext currentContext] CIContext];
[ciContext drawImage:backgroundCIImage atPoint:CGPointZero fromRect:someRect];
Your problem is indeed the conversion to TIFF. PDF is a vector format, while TIFF is a bitmap format, so a TIFF will look blurry at larger sizes.
Your best bet is probably to get a CGImage from the NSImage and create the CIImage from that. Either that or just create the CIImage from the original data.
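A minimal sketch of the CGImage route (not from the original answer; it reuses the image variable from the question):

// Render the NSImage to a CGImage, then wrap that in a CIImage
NSRect proposedRect = NSMakeRect(0, 0, image.size.width, image.size.height);
CGImageRef cgImage = [image CGImageForProposedRect:&proposedRect
                                           context:nil
                                             hints:nil];
CIImage *backgroundCIImage = [[CIImage alloc] initWithCGImage:cgImage];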
Try replacing the line
NSData * tiffData = [image TIFFRepresentation];
with
NSData * tiffData = [image TIFFRepresentationUsingCompression: NSTIFFCompressionNone factor: 0.0f];
because the documentation states that TIFFRepresentation uses the TIFF compression option associated with each image representation, which might not be NSTIFFCompressionNone. Thus, you should be explicit about wanting the tiffData uncompressed.
I finally solved the problem. Basically, I render the PDF document offscreen at twice its normal resolution and then capture the image displayed by the view. For a more detailed image, just increase the scaling factor. See the code below for the proof of concept. I didn't show the CIImage step, but once you have the bitmap, just use the appropriate CIImage method to create the CIImage from it.
NSImage *pdfImage = [[NSImage alloc] initWithData:[[aPDFView activePage] dataRepresentation]];
NSSize size = [pdfImage size];
NSRect imageRect = NSMakeRect(0, 0, size.width, size.height);
imageRect.size.width *= 2; //Twice the scale factor
imageRect.size.height *= 2; //Twice the scale factor
PDFDocument *pageDocument = [[[PDFDocument alloc] init] autorelease];
[pageDocument insertPage:[aPDFView activePage] atIndex:0];
PDFView *pageView = [[[PDFView alloc] init] autorelease];
[pageView setDocument:pageDocument];
[pageView setAutoScales:YES];
NSWindow *offscreenWindow = [[NSWindow alloc] initWithContentRect:imageRect
styleMask:NSBorderlessWindowMask
backing:NSBackingStoreRetained
defer:NO];
[offscreenWindow setContentView:pageView];
[offscreenWindow display];
[[offscreenWindow contentView] display]; // Draw to the backing buffer
// Create the NSBitmapImageRep
[[offscreenWindow contentView] lockFocus];
NSBitmapImageRep* rep = [[NSBitmapImageRep alloc] initWithFocusedViewRect:imageRect];
// Clean up and delete the window, which is no longer needed.
[[offscreenWindow contentView] unlockFocus];
NSData *imageData = [rep representationUsingType: NSJPEGFileType properties: nil];
[imageData writeToFile:@"/Users/David/Desktop/out.jpg" atomically: YES];
[offscreenWindow release];