NSImage initWithContentsOfFile returns nil - objective-c

Hello, I'm trying to load an image from a file into an NSImage, but aImage is always nil. What am I doing wrong here?
NSImage *aImage = [[NSImage alloc] initWithContentsOfFile:@"/Users/Thilina/Desktop/Other/20140818_163933_Fotor_Collage.jpg"];
[imgPic setImage:aImage];

Here is an example.
- (void)getMyImage {
    NSImage *img = [self getImage:filePath];
}

- (NSImage *)getImage:(NSString *)path {
    NSArray *imageReps = [NSBitmapImageRep imageRepsWithContentsOfFile:path];
    NSInteger width = 0;
    NSInteger height = 0;
    for (NSImageRep *imageRep in imageReps) {
        if ([imageRep pixelsWide] > width) width = [imageRep pixelsWide];
        if ([imageRep pixelsHigh] > height) height = [imageRep pixelsHigh];
    }
    NSImage *imageNSImage = [[NSImage alloc] initWithSize:NSMakeSize((CGFloat)width, (CGFloat)height)];
    [imageNSImage addRepresentations:imageReps];
    return imageNSImage;
}
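If initWithContentsOfFile: still returns nil with a proper @"..." string literal, it is worth confirming that the file actually exists at that path before suspecting NSImage itself. A minimal check, reusing the path from the question (note that a sandboxed app may not be allowed to read from ~/Desktop at all):

NSString *path = @"/Users/Thilina/Desktop/Other/20140818_163933_Fotor_Collage.jpg";
// If this logs NO, the problem is the path (or sandbox entitlements), not the image loading.
BOOL exists = [[NSFileManager defaultManager] fileExistsAtPath:path];
NSLog(@"file exists: %@", exists ? @"YES" : @"NO");
NSImage *aImage = [[NSImage alloc] initWithContentsOfFile:path];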

Related

In Cocoa: why does changing the image size also change the image resolution?

I am using this code to resize an image and then create a new file with the new dimensions. All is working fine, but it changes the image's dpi resolution and I don't want that. The initial image is 328 dpi and after the resizing it becomes 72 dpi. How do I keep the original dpi resolution?
Here is my code:
- (void)scaleIcons:(NSString *)outputPath :(NSURL *)nomeImmagine
{
    NSImage *image = [[NSImage alloc] initWithContentsOfFile:[nomeImmagine path]];
    if (!image)
        image = [[NSWorkspace sharedWorkspace] iconForFile:[nomeImmagine path]];
    NSSize outputSize = NSMakeSize(512.0f, 512.0f);
    NSImage *anImage = [self scaleImage:image toSize:outputSize];
    NSString *finalPath = [outputPath stringByAppendingString:@"/icon_512x512.png"];
    NSData *imageData = [anImage TIFFRepresentation];
    NSBitmapImageRep *rep = [NSBitmapImageRep imageRepWithData:imageData];
    NSData *dataToWrite = [rep representationUsingType:NSPNGFileType properties:nil];
    [dataToWrite writeToFile:finalPath atomically:NO];
}
- (NSImage *)scaleImage:(NSImage *)image toSize:(NSSize)targetSize
{
    NSImage *newImage = nil; // this declaration was missing in the original snippet
    if ([image isValid])
    {
        NSSize imageSize = [image size];
        float width = imageSize.width;
        float height = imageSize.height;
        float targetWidth = targetSize.width;
        float targetHeight = targetSize.height;
        float scaleFactor = 0.0;
        float scaledWidth = targetWidth;
        float scaledHeight = targetHeight;
        NSPoint thumbnailPoint = NSZeroPoint;
        if (!NSEqualSizes(imageSize, targetSize))
        {
            float widthFactor = targetWidth / width;
            float heightFactor = targetHeight / height;
            if (widthFactor < heightFactor)
            {
                scaleFactor = widthFactor;
            }
            else
            {
                scaleFactor = heightFactor;
            }
            scaledWidth = width * scaleFactor;
            scaledHeight = height * scaleFactor;
            if (widthFactor < heightFactor)
            {
                thumbnailPoint.y = (targetHeight - scaledHeight) * 0.5;
            }
            else if (widthFactor > heightFactor)
            {
                thumbnailPoint.x = (targetWidth - scaledWidth) * 0.5;
            }
            newImage = [[NSImage alloc] initWithSize:targetSize];
            [newImage lockFocus];
            NSRect thumbnailRect;
            thumbnailRect.origin = thumbnailPoint;
            thumbnailRect.size.width = scaledWidth;
            thumbnailRect.size.height = scaledHeight;
            [image drawInRect:thumbnailRect
                     fromRect:NSZeroRect
                    operation:NSCompositeSourceOver
                     fraction:1.0];
            [newImage unlockFocus];
        }
    }
    return newImage;
}
Any help will be very much appreciated! Thanks... Massy
Finally I've been able to change the resolution, using the code from the post pointed to by trojanfoe. I added this to my code before writing the file:
NSSize pointsSize = rep.size;
NSSize pixelSize = NSMakeSize(rep.pixelsWide, rep.pixelsHigh);
CGFloat currentDPI = ceilf((72.0f * pixelSize.width) / pointsSize.width);
NSLog(@"current DPI %f", currentDPI);
NSSize updatedPointsSize = pointsSize;
updatedPointsSize.width = ceilf((72.0f * pixelSize.width) / 328);
updatedPointsSize.height = ceilf((72.0f * pixelSize.height) / 328);
[rep setSize:updatedPointsSize];
The only problem now is that the final resolution is 321.894 and not 328... but it's already something!
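The remaining drift comes from the ceilf calls: they round the point size up to a whole point, and the DPI recomputed from that rounded size lands just below the 328 target. A sketch of the same step without the rounding, reusing rep from the snippet above (targetDPI is a name introduced here):

CGFloat targetDPI = 328.0f;
NSSize exactPointsSize;
// NSSize stores CGFloats, so a fractional point size is fine and keeps the DPI exact.
exactPointsSize.width = (72.0f * rep.pixelsWide) / targetDPI;
exactPointsSize.height = (72.0f * rep.pixelsHigh) / targetDPI;
[rep setSize:exactPointsSize];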

Rotate, Change Colors and Get RGB565 data from NSImage

I have found myself in a situation where I have several NSImage objects that I need to rotate by 90 degrees, change pixels of one colour to another colour, and then get the RGB565 data representation as an NSData object.
I found the vImageConvert_ARGB8888toRGB565 function in the Accelerate framework, so that should handle the RGB565 output.
There are a few UIImage rotation examples I have found here on Stack Overflow, but I'm having trouble converting them to NSImage, as it appears I have to use NSGraphicsContext rather than CGContextRef.
Ideally I would like these in an NSImage category so I can just call:
NSImage *rotated = [inputImage rotateByDegrees:90];
NSImage *colored = [rotated changeColorFrom:[NSColor redColor] toColor:[NSColor blackColor]];
NSData *rgb565 = [colored rgb565Data];
I just don't know where to start as image manipulation is new to me.
I appreciate any help I can get.
Edit (22/04/2013)
I have managed to piece this code together to generate the RGB565 data, but it comes out upside down and with some small artefacts. I assume the first is due to different coordinate systems being used, and the second is possibly due to me going from PNG to BMP. I will do some more testing using a BMP to start with, and also a non-transparent PNG.
- (NSData *)RGB565Data
{
    CGContextRef cgctx = CreateARGBBitmapContext(self.CGImage);
    if (cgctx == NULL)
        return nil;
    size_t w = CGImageGetWidth(self.CGImage);
    size_t h = CGImageGetHeight(self.CGImage);
    CGRect rect = {{0,0},{w,h}};
    CGContextDrawImage(cgctx, rect, self.CGImage);
    void *data = CGBitmapContextGetData(cgctx);
    CGContextRelease(cgctx);
    if (!data)
        return nil;
    vImage_Buffer src;
    src.data = data;
    src.width = w;
    src.height = h;
    src.rowBytes = (w * 4);
    void *destData = malloc((w * 2) * h);
    vImage_Buffer dst;
    dst.data = destData;
    dst.width = w;
    dst.height = h;
    dst.rowBytes = (w * 2);
    vImageConvert_ARGB8888toRGB565(&src, &dst, 0);
    size_t dataSize = 2 * w * h; // RGB565 = two 5-bit components and one 6-bit (16 bits/2 bytes)
    NSData *RGB565Data = [NSData dataWithBytes:dst.data length:dataSize];
    free(data); // the buffer malloc'd in CreateARGBBitmapContext is not freed by CGContextRelease
    free(destData);
    return RGB565Data;
}
- (CGImageRef)CGImage
{
    return [self CGImageForProposedRect:NULL context:[NSGraphicsContext currentContext] hints:nil];
}
CGContextRef CreateARGBBitmapContext(CGImageRef inImage)
{
    CGContextRef context = NULL;
    CGColorSpaceRef colorSpace;
    void *bitmapData;
    int bitmapByteCount;
    int bitmapBytesPerRow;
    size_t pixelsWide = CGImageGetWidth(inImage);
    size_t pixelsHigh = CGImageGetHeight(inImage);
    bitmapBytesPerRow = (int)(pixelsWide * 4);
    bitmapByteCount = (int)(bitmapBytesPerRow * pixelsHigh);
    colorSpace = CGColorSpaceCreateDeviceRGB();
    if (colorSpace == NULL)
        return NULL;
    bitmapData = malloc(bitmapByteCount);
    if (bitmapData == NULL)
    {
        CGColorSpaceRelease(colorSpace);
        return NULL;
    }
    context = CGBitmapContextCreate(bitmapData,
                                    pixelsWide,
                                    pixelsHigh,
                                    8,
                                    bitmapBytesPerRow,
                                    colorSpace,
                                    kCGImageAlphaPremultipliedFirst);
    if (context == NULL)
    {
        free(bitmapData);
        fprintf(stderr, "Context not created!");
    }
    CGColorSpaceRelease(colorSpace);
    return context;
}
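A likely cause of the upside-down output mentioned in the edit above is Core Graphics' bottom-left origin: row 0 of the bitmap buffer corresponds to the bottom of the image. One possible fix (a sketch, not from the original post) is to flip the context inside RGB565Data before the CGContextDrawImage call:

// Flip vertically so that row 0 of the buffer is the top of the image;
// without this, reading the buffer top-to-bottom yields an upside-down image.
CGContextTranslateCTM(cgctx, 0, (CGFloat)h);
CGContextScaleCTM(cgctx, 1.0, -1.0);
CGContextDrawImage(cgctx, rect, self.CGImage);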
For most of this, you'll want to use Core Image.
Rotation you can do with the CIAffineTransform filter. This takes an NSAffineTransform object. You may have already worked with that class before. (You could do the rotation with NSImage itself, but it's easier with Core Image and you'll probably need to use it for the next step anyway.)
I don't know what you mean by “change the colour of pixels that are one colour to another colour”; that could mean any of a lot of different things. Chances are, though, there's a filter for that.
I also don't know why you need 565 data specifically, but assuming you have a real need for that, you're correct that that function will be involved. Use CIContext's lowest-level rendering method to get 8-bit-per-component ARGB output, and then use that vImage function to convert it to 565 RGB.
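A minimal sketch of that rendering step, assuming an existing CIImage *ciImage and CIContext *context (the variable names and buffer handling are illustrative; render:toBitmap:rowBytes:bounds:format:colorSpace: and vImageConvert_ARGB8888toRGB565 are the actual APIs):

CGRect extent = [ciImage extent];
size_t width = (size_t)extent.size.width;
size_t height = (size_t)extent.size.height;
size_t rowBytes = width * 4;
void *argbData = malloc(rowBytes * height);
CGColorSpaceRef cs = CGColorSpaceCreateDeviceRGB();
// CIContext's low-level rendering method fills the buffer with 8-bit ARGB pixels.
[context render:ciImage
       toBitmap:argbData
       rowBytes:rowBytes
         bounds:extent
         format:kCIFormatARGB8
     colorSpace:cs];
CGColorSpaceRelease(cs);
// Wrap both buffers for Accelerate and convert ARGB8888 to RGB565.
vImage_Buffer src = { argbData, height, width, rowBytes }; // vImage_Buffer is {data, height, width, rowBytes}
void *rgb565 = malloc(width * 2 * height);
vImage_Buffer dst = { rgb565, height, width, width * 2 };
vImageConvert_ARGB8888toRGB565(&src, &dst, kvImageNoFlags);
free(argbData);
// rgb565 now holds the 16-bit pixel data; wrap it in NSData and free it when done.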
I have managed to get what I want by using NSBitmapImageRep (accessing it with a bit of a hack). If anyone knows a better way of doing this, please do share.
The - (NSBitmapImageRep *)bitmap method is my hack. The NSImage starts off having only an NSBitmapImageRep; however, after the rotation method an NSCIImageRep is added, which takes priority over the NSBitmapImageRep and breaks the colour code (as NSImage renders the NSCIImageRep, which doesn't get coloured).
BitmapImage.m (Subclass of NSImage)
CGContextRef CreateARGBBitmapContext(CGImageRef inImage)
{
    CGContextRef context = NULL;
    CGColorSpaceRef colorSpace;
    void *bitmapData;
    int bitmapByteCount;
    int bitmapBytesPerRow;
    size_t pixelsWide = CGImageGetWidth(inImage);
    size_t pixelsHigh = CGImageGetHeight(inImage);
    bitmapBytesPerRow = (int)(pixelsWide * 4);
    bitmapByteCount = (int)(bitmapBytesPerRow * pixelsHigh);
    colorSpace = CGColorSpaceCreateDeviceRGB();
    if (colorSpace == NULL)
        return NULL;
    bitmapData = malloc(bitmapByteCount);
    if (bitmapData == NULL)
    {
        CGColorSpaceRelease(colorSpace);
        return NULL;
    }
    context = CGBitmapContextCreate(bitmapData,
                                    pixelsWide,
                                    pixelsHigh,
                                    8,
                                    bitmapBytesPerRow,
                                    colorSpace,
                                    kCGImageAlphaPremultipliedFirst);
    if (context == NULL)
    {
        free(bitmapData);
        fprintf(stderr, "Context not created!");
    }
    CGColorSpaceRelease(colorSpace);
    return context;
}
- (NSData *)RGB565Data
{
    CGContextRef cgctx = CreateARGBBitmapContext(self.CGImage);
    if (cgctx == NULL)
        return nil;
    size_t w = CGImageGetWidth(self.CGImage);
    size_t h = CGImageGetHeight(self.CGImage);
    CGRect rect = {{0,0},{w,h}};
    CGContextDrawImage(cgctx, rect, self.CGImage);
    void *data = CGBitmapContextGetData(cgctx);
    CGContextRelease(cgctx);
    if (!data)
        return nil;
    vImage_Buffer src;
    src.data = data;
    src.width = w;
    src.height = h;
    src.rowBytes = (w * 4);
    void *destData = malloc((w * 2) * h);
    vImage_Buffer dst;
    dst.data = destData;
    dst.width = w;
    dst.height = h;
    dst.rowBytes = (w * 2);
    vImageConvert_ARGB8888toRGB565(&src, &dst, 0);
    size_t dataSize = 2 * w * h; // RGB565 = two 5-bit components and one 6-bit (16 bits/2 bytes)
    NSData *RGB565Data = [NSData dataWithBytes:dst.data length:dataSize];
    free(data); // the buffer malloc'd in CreateARGBBitmapContext is not freed by CGContextRelease
    free(destData);
    return RGB565Data;
}
- (NSBitmapImageRep *)bitmap
{
    NSBitmapImageRep *bitmap = nil;
    NSMutableArray *repsToRemove = [NSMutableArray array];
    // Iterate through the representations that back the NSImage
    for (NSImageRep *rep in self.representations)
    {
        // If the representation is a bitmap
        if ([rep isKindOfClass:[NSBitmapImageRep class]])
        {
            bitmap = [(NSBitmapImageRep *)rep retain];
            break;
        }
        else
        {
            [repsToRemove addObject:rep];
        }
    }
    // If no bitmap representation was found, we create one (this shouldn't occur)
    if (bitmap == nil)
    {
        bitmap = [[[NSBitmapImageRep alloc] initWithCGImage:self.CGImage] retain];
        [self addRepresentation:bitmap];
    }
    for (NSImageRep *rep2 in repsToRemove)
    {
        [self removeRepresentation:rep2];
    }
    return [bitmap autorelease];
}

- (NSColor *)colorAtX:(NSInteger)x y:(NSInteger)y
{
    return [self.bitmap colorAtX:x y:y];
}

- (void)setColor:(NSColor *)color atX:(NSInteger)x y:(NSInteger)y
{
    [self.bitmap setColor:color atX:x y:y];
}
NSImage+Extra.m (NSImage Category)
- (CGImageRef)CGImage
{
    return [self CGImageForProposedRect:NULL context:[NSGraphicsContext currentContext] hints:nil];
}
Usage
- (IBAction)load:(id)sender
{
    NSOpenPanel *openDlg = [NSOpenPanel openPanel];
    [openDlg setCanChooseFiles:YES];
    [openDlg setCanChooseDirectories:YES];
    if ([openDlg runModalForDirectory:nil file:nil] == NSOKButton)
    {
        NSArray *files = [openDlg filenames];
        for (int i = 0; i < [files count]; i++)
        {
            NSString *fileName = [files objectAtIndex:i];
            BitmapImage *image = [[BitmapImage alloc] initWithContentsOfFile:fileName];
            imageView.image = image;
        }
    }
}
- (IBAction)colorize:(id)sender
{
    float width = imageView.image.size.width;
    float height = imageView.image.size.height;
    BitmapImage *img = (BitmapImage *)imageView.image;
    NSColor *newColor = [img colorAtX:1 y:1];
    for (int x = 0; x < width; x++) // was <=, which read one pixel past the edge
    {
        for (int y = 0; y < height; y++)
        {
            // note: == compares object pointers, not colour values (addressed in the edits below)
            if ([img colorAtX:x y:y] == newColor)
            {
                [img setColor:[NSColor redColor] atX:x y:y];
            }
        }
    }
    [imageView setNeedsDisplay:YES];
}

- (IBAction)rotate:(id)sender
{
    BitmapImage *img = (BitmapImage *)imageView.image;
    BitmapImage *newImg = [img rotate90DegreesClockwise:NO];
    imageView.image = newImg;
}
Edit (24/04/2013)
I have changed the following code:
- (RGBColor)colorAtX:(NSInteger)x y:(NSInteger)y
{
    NSUInteger components[4];
    [self.bitmap getPixel:components atX:x y:y];
    //NSLog(@"R: %ld, G:%ld, B:%ld", components[0], components[1], components[2]);
    RGBColor color = {components[0], components[1], components[2]};
    return color;
}

- (BOOL)color:(RGBColor)a isEqualToColor:(RGBColor)b
{
    return ((a.red == b.red) && (a.green == b.green) && (a.blue == b.blue));
}

- (void)setColor:(RGBColor)color atX:(NSUInteger)x y:(NSUInteger)y
{
    NSUInteger components[4] = {(NSUInteger)color.red, (NSUInteger)color.green, (NSUInteger)color.blue, 255};
    //NSLog(@"R: %ld, G: %ld, B: %ld", components[0], components[1], components[2]);
    [self.bitmap setPixel:components atX:x y:y];
}
- (IBAction)colorize:(id)sender
{
    float width = imageView.image.size.width;
    float height = imageView.image.size.height;
    BitmapImage *img = (BitmapImage *)imageView.image;
    RGBColor oldColor = [img colorAtX:0 y:0];
    RGBColor newColor; // = {255, 0, 0};
    newColor.red = 255;
    newColor.green = 0;
    newColor.blue = 0;
    for (int x = 0; x < width; x++) // was <=, which read one pixel past the edge
    {
        for (int y = 0; y < height; y++)
        {
            if ([img color:[img colorAtX:x y:y] isEqualToColor:oldColor])
            {
                [img setColor:newColor atX:x y:y];
            }
        }
    }
    [imageView setNeedsDisplay:YES];
}
But now it changes the pixels to red the first time and then blue the second time the colorize method is called.
Edit 2 (24/04/2013)
The following code fixes it. It was because the rotation code was adding an alpha channel to the NSBitmapImageRep.
- (RGBColor)colorAtX:(NSInteger)x y:(NSInteger)y
{
    if (self.bitmap.hasAlpha)
    {
        NSUInteger components[4];
        [self.bitmap getPixel:components atX:x y:y];
        RGBColor color = {components[1], components[2], components[3]};
        return color;
    }
    else
    {
        NSUInteger components[3];
        [self.bitmap getPixel:components atX:x y:y];
        RGBColor color = {components[0], components[1], components[2]};
        return color;
    }
}

- (void)setColor:(RGBColor)color atX:(NSUInteger)x y:(NSUInteger)y
{
    if (self.bitmap.hasAlpha)
    {
        NSUInteger components[4] = {255, (NSUInteger)color.red, (NSUInteger)color.green, (NSUInteger)color.blue};
        [self.bitmap setPixel:components atX:x y:y];
    }
    else
    {
        NSUInteger components[3] = {color.red, color.green, color.blue};
        [self.bitmap setPixel:components atX:x y:y];
    }
}
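The hard-coded alpha-first component order above happens to match the reps produced by the rotation code, but in general the order depends on the rep's bitmapFormat. A more defensive check might look like this (a sketch, not from the original post):

// NSAlphaFirstBitmapFormat is set when the alpha component precedes the colour components.
BOOL alphaFirst = (self.bitmap.bitmapFormat & NSAlphaFirstBitmapFormat) != 0;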
Ok, I decided to spend the day researching Peter's suggestion of using Core Image.
I had done some research previously and decided it was too hard, but after an entire day of research I finally worked out what I needed to do, and amazingly it couldn't be easier.
Early on I had decided that the Apple ChromaKey Core Image example would be a great starting point, but the example code frightened me off due to the 3-dimensional colour cube. After watching the WWDC 2012 video on Core Image and finding some sample code on GitHub (https://github.com/vhbit/ColorCubeSample), I decided to jump in and just give it a go.
Here are the important parts of the working code. I haven't included the RGB565Data method as I haven't written it yet, but it should be easy using the method Peter suggested:
CIImage+Extras.h
- (NSImage*) NSImage;
- (CIImage*) imageRotated90DegreesClockwise:(BOOL)clockwise;
- (CIImage*) imageWithChromaColor:(NSColor*)chromaColor BackgroundColor:(NSColor*)backColor;
- (NSColor*) colorAtX:(NSUInteger)x y:(NSUInteger)y;
CIImage+Extras.m
- (NSImage *)NSImage
{
    CGContextRef cg = [[NSGraphicsContext currentContext] graphicsPort];
    CIContext *context = [CIContext contextWithCGContext:cg options:nil];
    CGImageRef cgImage = [context createCGImage:self fromRect:self.extent];
    NSImage *image = [[NSImage alloc] initWithCGImage:cgImage size:NSZeroSize];
    CGImageRelease(cgImage); // NSImage retains the CGImage; this was leaked in the original
    return [image autorelease];
}

- (CIImage *)imageRotated90DegreesClockwise:(BOOL)clockwise
{
    CIImage *im = self;
    CIFilter *f = [CIFilter filterWithName:@"CIAffineTransform"];
    NSAffineTransform *t = [NSAffineTransform transform];
    [t rotateByDegrees:clockwise ? -90 : 90];
    [f setValue:t forKey:@"inputTransform"];
    [f setValue:im forKey:@"inputImage"];
    im = [f valueForKey:@"outputImage"];
    CGRect extent = [im extent];
    f = [CIFilter filterWithName:@"CIAffineTransform"];
    t = [NSAffineTransform transform];
    [t translateXBy:-extent.origin.x
                yBy:-extent.origin.y];
    [f setValue:t forKey:@"inputTransform"];
    [f setValue:im forKey:@"inputImage"];
    im = [f valueForKey:@"outputImage"];
    return im;
}
- (CIImage *)imageWithChromaColor:(NSColor *)chromaColor BackgroundColor:(NSColor *)backColor
{
    CIImage *im = self;
    CIColor *backCIColor = [[CIColor alloc] initWithColor:backColor];
    CIImage *backImage = [CIImage imageWithColor:backCIColor];
    backImage = [backImage imageByCroppingToRect:self.extent];
    [backCIColor release];
    float chroma[3];
    chroma[0] = chromaColor.redComponent;
    chroma[1] = chromaColor.greenComponent;
    chroma[2] = chromaColor.blueComponent;
    // Allocate memory
    const unsigned int size = 64;
    const unsigned int cubeDataSize = size * size * size * sizeof(float) * 4;
    float *cubeData = (float *)malloc(cubeDataSize);
    float rgb[3];//, *c = cubeData;
    // Populate cube with a simple gradient going from 0 to 1
    size_t offset = 0;
    for (int z = 0; z < size; z++) {
        rgb[2] = ((double)z) / (size - 1); // Blue value
        for (int y = 0; y < size; y++) {
            rgb[1] = ((double)y) / (size - 1); // Green value
            for (int x = 0; x < size; x++) {
                rgb[0] = ((double)x) / (size - 1); // Red value
                float alpha = ((rgb[0] == chroma[0]) && (rgb[1] == chroma[1]) && (rgb[2] == chroma[2])) ? 0.0 : 1.0;
                cubeData[offset] = rgb[0] * alpha;
                cubeData[offset + 1] = rgb[1] * alpha;
                cubeData[offset + 2] = rgb[2] * alpha;
                cubeData[offset + 3] = alpha;
                offset += 4;
            }
        }
    }
    // Create memory with the cube data
    NSData *data = [NSData dataWithBytesNoCopy:cubeData
                                        length:cubeDataSize
                                  freeWhenDone:YES];
    CIFilter *colorCube = [CIFilter filterWithName:@"CIColorCube"];
    [colorCube setValue:[NSNumber numberWithInt:size] forKey:@"inputCubeDimension"];
    // Set data for cube
    [colorCube setValue:data forKey:@"inputCubeData"];
    [colorCube setValue:im forKey:@"inputImage"];
    im = [colorCube valueForKey:@"outputImage"];
    CIFilter *sourceOver = [CIFilter filterWithName:@"CISourceOverCompositing"];
    [sourceOver setValue:im forKey:@"inputImage"];
    [sourceOver setValue:backImage forKey:@"inputBackgroundImage"];
    im = [sourceOver valueForKey:@"outputImage"];
    return im;
}

- (NSColor *)colorAtX:(NSUInteger)x y:(NSUInteger)y
{
    NSBitmapImageRep *bitmap = [[NSBitmapImageRep alloc] initWithCIImage:self];
    NSColor *color = [bitmap colorAtX:x y:y];
    [bitmap release];
    return color;
}

How do I capture a zoomed UIImageView inside of a Scrollview to crop?

Problem:
Cropping with the image zoomed out is fine.
Cropping with the image zoomed in shows the image above where it should be.
The yOffset I have in there is because the crop square I want starts below where the scrollview does.
Code:
CGRect rect;
float yOffset = 84;
rect.origin.x = floorf([scrollView contentOffset].x * zoomScale);
rect.origin.y = floorf(([scrollView contentOffset].y + yOffset) * zoomScale);
rect.size.width = floorf([scrollView bounds].size.width * zoomScale);
rect.size.height = floorf(320 * zoomScale);
if (rect.size.width > 320) {
    rect.size.width = 320;
}
if (rect.size.height > 320) {
    rect.size.height = 320;
}
CGImageRef cr = CGImageCreateWithImageInRect([[imageView image] CGImage], rect);
UIImage *img = imageView.image; //[UIImage imageWithCGImage:cr];
UIGraphicsBeginImageContext(rect.size);
// translated rectangle for drawing sub image
CGRect drawRect = CGRectMake(-rect.origin.x, -rect.origin.y, 320.0f, 320.0f);
NSLog(@"drawRect: %@", NSStringFromCGRect(drawRect));
NSLog(@"rect: %@", NSStringFromCGRect(rect));
// draw image
[img drawInRect:drawRect];
// grab image
UIImage *cropped = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
CGImageRelease(cr);
[self.delegate imageCropper:self didFinishCroppingWithImage:cropped];
What am I doing that is causing the image to get the wrong height when zooming?
UIImage *imageFromView(UIImage *srcImage, CGRect *rect)
{
    CGImageRef cr = CGImageCreateWithImageInRect(srcImage.CGImage, *rect);
    UIImage *cropped = [UIImage imageWithCGImage:cr];
    CGImageRelease(cr);
    return cropped;
}

- (void)doneEditing
{
    // Calculate the required area from the scrollview
    CGRect visibleRect;
    float scale = 1.0f / scrollView.zoomScale;
    visibleRect.origin.x = scrollView.contentOffset.x * scale;
    visibleRect.origin.y = scrollView.contentOffset.y * scale;
    visibleRect.size.width = scrollView.bounds.size.width * scale;
    visibleRect.size.height = scrollView.bounds.size.height * scale;
    FinalOutputView *outputView = [[FinalOutputView alloc] initWithNibName:@"FinalOutputView" bundle:[NSBundle mainBundle]];
    outputView.image = imageFromView(imageView.image, &visibleRect);
    [self.navigationController pushViewController:outputView animated:YES];
    [outputView release];
}
(Screenshots in the original post: loading the original image, zooming the image, and finally capturing the image.)
If you want to take a screenshot of the whole scroll view content (after zooming), you can do this:
UIImage *image = nil;
UIGraphicsBeginImageContext(self.scrollView.contentSize);
{
    // save previous frames
    CGPoint savedContentOffset = self.scrollView.contentOffset;
    CGRect savedFrame = self.scrollView.frame;
    CGRect imgFrame = self.imageView.frame;
    // set the frames with current content size
    self.scrollView.contentOffset = CGPointZero;
    self.scrollView.frame = CGRectMake(0, 0, self.scrollView.contentSize.width, self.scrollView.contentSize.height);
    self.imageView.frame = self.scrollView.frame;
    // render image now :)
    [self.scrollView.layer renderInContext:UIGraphicsGetCurrentContext()];
    image = UIGraphicsGetImageFromCurrentImageContext();
    // now set the frames again with the old ones :)
    self.scrollView.contentOffset = savedContentOffset;
    self.scrollView.frame = savedFrame;
    self.imageView.frame = imgFrame;
    [self viewForZoomingInScrollView:self.scrollView];
}
UIGraphicsEndImageContext();
// get the documents path
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *documentsDirectory = [paths objectAtIndex:0];
// save file as savedImage.png
NSString *savedImagePath = [documentsDirectory stringByAppendingPathComponent:@"savedImage.png"];
// get the png data from image
NSData *imageData = UIImagePNGRepresentation(image);
// write it now
[imageData writeToFile:savedImagePath atomically:NO];

How to compress and scale down an NSImage?

This is my compression code:
NSBitmapImageRep *tmpRep = [[_image representations] objectAtIndex:0];
[tmpRep setPixelsWide:512];
[tmpRep setPixelsHigh:512];
[tmpRep setSize:NSMakeSize(SmallThumbnailWidth, SmallThumbnailHeight)];
NSDictionary *imageProps = [NSDictionary dictionaryWithObject:[NSNumber numberWithFloat:0.3] forKey:NSImageCompressionFactor];
NSData *outputImageData = [tmpRep representationUsingType:NSJPEGFileType properties:imageProps];
NSString *imageFilePath = [NSString stringWithFormat:@"%@/thumbnail.jpg", imagePath];
[outputImageData writeToFile:imageFilePath atomically:YES];
The original image size is 960*960. I want to compress the original image down to 512*512, but the output image's size is still 960*960 when I check it in Finder, even though the file size on disk is smaller than the original's. Could anyone tell me why? Thank you.
Try this one:
This will reduce the file size in KB:
- (NSImage *)imageCompressedByFactor:(float)factor {
    NSBitmapImageRep *imageRep = [[NSBitmapImageRep alloc] initWithData:[self TIFFRepresentation]];
    NSDictionary *options = [NSDictionary dictionaryWithObject:[NSNumber numberWithFloat:factor] forKey:NSImageCompressionFactor];
    NSData *compressedData = [imageRep representationUsingType:NSJPEGFileType properties:options];
    return [[NSImage alloc] initWithData:compressedData];
}
This will reduce the image size in pixels:
Copied from here
@implementation NSImage (ProportionalScaling)

- (NSImage *)imageByScalingProportionallyToSize:(NSSize)targetSize {
    NSImage *sourceImage = self;
    NSImage *newImage = nil;
    if ([sourceImage isValid]) {
        NSSize imageSize = [sourceImage size];
        float width = imageSize.width;
        float height = imageSize.height;
        float targetWidth = targetSize.width;
        float targetHeight = targetSize.height;
        float scaleFactor = 0.0;
        float scaledWidth = targetWidth;
        float scaledHeight = targetHeight;
        NSPoint thumbnailPoint = NSZeroPoint;
        if (NSEqualSizes(imageSize, targetSize) == NO)
        {
            float widthFactor = targetWidth / width;
            float heightFactor = targetHeight / height;
            if (widthFactor < heightFactor)
                scaleFactor = widthFactor;
            else
                scaleFactor = heightFactor;
            scaledWidth = width * scaleFactor;
            scaledHeight = height * scaleFactor;
            if (widthFactor < heightFactor)
                thumbnailPoint.y = (targetHeight - scaledHeight) * 0.5;
            else if (widthFactor > heightFactor)
                thumbnailPoint.x = (targetWidth - scaledWidth) * 0.5;
        }
        newImage = [[NSImage alloc] initWithSize:targetSize];
        [newImage lockFocus];
        NSRect thumbnailRect;
        thumbnailRect.origin = thumbnailPoint;
        thumbnailRect.size.width = scaledWidth;
        thumbnailRect.size.height = scaledHeight;
        [sourceImage drawInRect:thumbnailRect
                       fromRect:NSZeroRect
                      operation:NSCompositeSourceOver
                       fraction:1.0];
        [newImage unlockFocus];
    }
    return [newImage autorelease];
}

@end
Create a category method like so, in order to incrementally compress the image till you meet the desired file size:
- (NSImage *)compressUnderMegaBytes:(CGFloat)megabytes {
    CGFloat compressionRatio = 1.0;
    NSBitmapImageRep *imageRep = [[NSBitmapImageRep alloc] initWithData:[self TIFFRepresentation]];
    NSDictionary *options = [NSDictionary dictionaryWithObject:[NSNumber numberWithFloat:compressionRatio] forKey:NSImageCompressionFactor];
    NSData *compressedData = [imageRep representationUsingType:NSJPEGFileType properties:options];
    while ([compressedData length] > (megabytes * 1024 * 1024)) {
        @autoreleasepool {
            compressionRatio = compressionRatio * 0.9;
            options = [NSDictionary dictionaryWithObject:[NSNumber numberWithFloat:compressionRatio] forKey:NSImageCompressionFactor];
            compressedData = [imageRep representationUsingType:NSJPEGFileType properties:options];
            // Safety check: 0.4 is a reasonable compression factor; anything below becomes blurry
            if (compressionRatio <= 0.4) {
                break;
            }
        }
    }
    return [[NSImage alloc] initWithData:compressedData];
}
You can then use it like this:
NSImage *compressedImage = [myImage compressUnderMegaBytes: 0.5];
Swift version; level goes from 0.0 to 1.0:
func getImageQualityWithLevel(image: NSImage, level: CGFloat) -> NSImage {
    let _image = image
    let newRect: NSRect = NSMakeRect(0, 0, _image.size.width, _image.size.height)
    let imageSizeH: CGFloat = _image.size.height * level
    let imageSizeW: CGFloat = _image.size.width * level
    let newImage = NSImage(size: NSMakeSize(imageSizeW, imageSizeH))
    newImage.lockFocus()
    NSGraphicsContext.currentContext()?.imageInterpolation = NSImageInterpolation.Low
    _image.drawInRect(NSMakeRect(0, 0, imageSizeW, imageSizeH), fromRect: newRect, operation: NSCompositingOperation.CompositeSourceOver, fraction: 1)
    newImage.unlockFocus()
    return newImage
}

NSImage doesn't scale

I'm developing a quick app in which I have a method that should rescale an @2x image to a regular one. The problem is that it doesn't :(
Why?
- (BOOL)createNormalImage:(NSString *)inputRetinaImagePath {
    NSImage *inputRetinaImage = [[NSImage alloc] initWithContentsOfFile:inputRetinaImagePath];
    NSSize size = NSZeroSize;
    size.width = inputRetinaImage.size.width * 0.5;
    size.height = inputRetinaImage.size.height * 0.5;
    [inputRetinaImage setSize:size];
    NSLog(@"%f", inputRetinaImage.size.height);
    NSBitmapImageRep *imgRep = [[inputRetinaImage representations] objectAtIndex:0];
    NSData *data = [imgRep representationUsingType:NSPNGFileType properties:nil];
    NSString *outputFilePath = [[inputRetinaImagePath substringToIndex:inputRetinaImagePath.length - 7] stringByAppendingString:@".png"];
    NSLog(@"Normal version file path: %@", outputFilePath); // non-literal format strings are unsafe
    [data writeToFile:outputFilePath atomically:NO];
    return true;
}
You have to be very wary of the size attribute of an NSImage. It doesn't necessarily refer to the bitmapRepresentation's pixel dimensions; it could refer to the displayed size, for example. An NSImage may have a number of bitmapRepresentations for use at different output sizes.
Likewise, changing the size attribute of an NSImage does nothing to alter the bitmapRepresentations.
So what you need to do is work out the size you want your output image to be, and then draw a new image at that size using a bitmapRepresentation from the source NSImage.
Getting that size depends on how you have obtained your input image and what you know about it. For example, if you are confident that your input image has only one bitmapImageRep, you can use this type of thing (as a category on NSImage):
- (NSSize)pixelSize
{
    NSBitmapImageRep *bitmap = [[self representations] objectAtIndex:0];
    return NSMakeSize(bitmap.pixelsWide, bitmap.pixelsHigh);
}
Even if you have a number of bitmapImageReps, the first one should be the largest one, and if that is the size that your Retina image was created at, it should be the Retina size you are after.
When you have worked out your final size, you can make the image:
- (NSImage *)resizeImage:(NSImage *)sourceImage size:(NSSize)size
{
    NSRect targetFrame = NSMakeRect(0, 0, size.width, size.height);
    NSImage *targetImage = nil;
    NSImageRep *sourceImageRep = [sourceImage bestRepresentationForRect:targetFrame
                                                                context:nil
                                                                  hints:nil];
    targetImage = [[NSImage alloc] initWithSize:size];
    [targetImage lockFocus];
    [sourceImageRep drawInRect:targetFrame];
    [targetImage unlockFocus];
    return targetImage;
}
Update
Here is a more elaborate version of a pixel-size-getting category on NSImage. Let's assume nothing about the image: how many imageReps it has, or whether it has any bitmapImageReps. This will return the largest pixel dimensions it can find. If it can't find bitmapImageRep pixel dimensions, it will use whatever else it can get, which will most likely be bounding-box dimensions (used by EPS and PDFs).
NSImage+PixelSize.h
#import <Cocoa/Cocoa.h>
#import <QuartzCore/QuartzCore.h>

@interface NSImage (PixelSize)

- (NSInteger)pixelsWide;
- (NSInteger)pixelsHigh;
- (NSSize)pixelSize;

@end
NSImage+PixelSize.m
#import "NSImage+PixelSize.h"
#implementation NSImage (Extensions)
- (NSInteger) pixelsWide
{
/*
returns the pixel width of NSImage.
Selects the largest bitmapRep by preference
If there is no bitmapRep returns largest size reported by any imageRep.
*/
NSInteger result = 0;
NSInteger bitmapResult = 0;
for (NSImageRep* imageRep in [self representations]) {
if ([imageRep isKindOfClass:[NSBitmapImageRep class]]) {
if (imageRep.pixelsWide > bitmapResult)
bitmapResult = imageRep.pixelsWide;
} else {
if (imageRep.pixelsWide > result)
result = imageRep.pixelsWide;
}
}
if (bitmapResult) result = bitmapResult;
return result;
}
- (NSInteger) pixelsHigh
{
/*
returns the pixel height of NSImage.
Selects the largest bitmapRep by preference
If there is no bitmapRep returns largest size reported by any imageRep.
*/
NSInteger result = 0;
NSInteger bitmapResult = 0;
for (NSImageRep* imageRep in [self representations]) {
if ([imageRep isKindOfClass:[NSBitmapImageRep class]]) {
if (imageRep.pixelsHigh > bitmapResult)
bitmapResult = imageRep.pixelsHigh;
} else {
if (imageRep.pixelsHigh > result)
result = imageRep.pixelsHigh;
}
}
if (bitmapResult) result = bitmapResult;
return result;
}
- (NSSize) pixelSize
{
return NSMakeSize(self.pixelsWide,self.pixelsHigh);
}
#end
You would #import "NSImage+PixelSize.h" in your current file to make it accessible.
With this image category and the resize: method, you would modify your method thus:
//size.width = inputRetinaImage.size.width * 0.5;
//size.height = inputRetinaImage.size.height * 0.5;
size.width = inputRetinaImage.pixelsWide * 0.5;
size.height = inputRetinaImage.pixelsHigh * 0.5;

//[inputRetinaImage setSize:size];
NSImage *outputImage = [self resizeImage:inputRetinaImage size:size];

//NSBitmapImageRep *imgRep = [[inputRetinaImage representations] objectAtIndex:0];
NSBitmapImageRep *imgRep = [[outputImage representations] objectAtIndex:0];
That should fix things for you (proviso: I haven't tested it on your code)
I modified the script I use to downscale my images for you :)
- (BOOL)createNormalImage:(NSString *)inputRetinaImagePath {
    NSImage *inputRetinaImage = [[NSImage alloc] initWithContentsOfFile:inputRetinaImagePath];

    // determine new size
    NSBitmapImageRep *bitmapImageRep = [[inputRetinaImage representations] objectAtIndex:0];
    NSSize size = NSMakeSize(bitmapImageRep.pixelsWide * 0.5, bitmapImageRep.pixelsHigh * 0.5);
    NSLog(@"size = %@", NSStringFromSize(size));

    // get CGImageRef
    CGImageSourceRef source = CGImageSourceCreateWithData((__bridge CFDataRef)[inputRetinaImage TIFFRepresentation], NULL);
    CGImageRef oldImageRef = CGImageSourceCreateImageAtIndex(source, 0, NULL);
    CGImageAlphaInfo alphaInfo = CGImageGetAlphaInfo(oldImageRef);
    if (alphaInfo == kCGImageAlphaNone) alphaInfo = kCGImageAlphaNoneSkipLast;

    // Build a bitmap context
    CGContextRef bitmap = CGBitmapContextCreate(NULL, size.width, size.height, 8, 4 * size.width, CGImageGetColorSpace(oldImageRef), alphaInfo);

    // Draw into the context; this scales the image
    CGContextDrawImage(bitmap, CGRectMake(0, 0, size.width, size.height), oldImageRef);

    // Get an image from the context
    CGImageRef newImageRef = CGBitmapContextCreateImage(bitmap);

    // this does not work in my test.
    NSString *outputFilePath = [[inputRetinaImagePath substringToIndex:inputRetinaImagePath.length - 7] stringByAppendingString:@".png"];

    // but this does!
    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *docsDirectory = [paths objectAtIndex:0];
    NSString *newfileName = [docsDirectory stringByAppendingFormat:@"/%@", [outputFilePath lastPathComponent]];

    CFURLRef url = (__bridge CFURLRef)[NSURL fileURLWithPath:newfileName];
    CGImageDestinationRef destination = CGImageDestinationCreateWithURL(url, kUTTypePNG, 1, NULL);
    CGImageDestinationAddImage(destination, newImageRef, nil);
    if (!CGImageDestinationFinalize(destination)) {
        NSLog(@"Failed to write image to %@", newfileName);
    }
    CFRelease(destination);

    // release the CG/CF objects created above (these were leaked in the original)
    CGImageRelease(newImageRef);
    CGContextRelease(bitmap);
    CGImageRelease(oldImageRef);
    CFRelease(source);
    return true;
}
}