Thin white lines being added when cropping an image (Objective-C OSX)

I am cutting up a large image and saving it as many smaller images. I first implemented this on iOS, where it works fine, but when I try to port the code to OSX, a thin (1 pixel) white line appears along the top and right of the image. The line is neither pure white nor solid (see sample below).
Here is the iOS code to make one sub-image, which works like a champ:
-(void)testMethod:(int)page forRect:(CGRect)rect {
    NSString *filePath = @"imageName";
    NSData *data = [HeavyResourceManager dataForPath:filePath]; //this just gets the image as NSData
    UIImage *image = [UIImage imageWithData:data];
    CGImageRef imageRef = CGImageCreateWithImageInRect([image CGImage], rect); //crop to the rect
    UIImage *result = [UIImage imageWithCGImage:imageRef scale:0 orientation:image.imageOrientation];
    CGImageRelease(imageRef);
    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *documentsDirectoryPath = [paths objectAtIndex:0];
    [UIImageJPEGRepresentation(result, 1.0) writeToFile:[documentsDirectoryPath stringByAppendingPathComponent:@"output.jpg"] atomically:YES];
}
Here is the ported code in OSX that causes the white lines to be added:
//init the source image
NSImage *source = [[[NSImage alloc] initWithContentsOfFile:imagePath] autorelease];
//init the target image
NSImage *target = [[[NSImage alloc] initWithSize:panelRect.size] autorelease];
//start drawing
[target lockFocus];
[source drawInRect:NSMakeRect(0, 0, panelRect.size.width, panelRect.size.height)
          fromRect:NSMakeRect(panelRect.origin.x, source.size.height - panelRect.origin.y - panelRect.size.height, panelRect.size.width, panelRect.size.height)
         operation:NSCompositeCopy
          fraction:1.0];
[target unlockFocus];
//create an NSBitmapImageRep
NSBitmapImageRep *bmpImageRep = [[[NSBitmapImageRep alloc] initWithData:[target TIFFRepresentation]] autorelease];
//write to tiff
[[target TIFFRepresentation] writeToFile:@"outputImage.tiff" atomically:NO];
[target addRepresentation:bmpImageRep];
NSDictionary *imageProps = [NSDictionary dictionaryWithObject:[NSNumber numberWithFloat:1.0] forKey:NSImageCompressionFactor];
//get the data from the representation
NSData *data = [bmpImageRep representationUsingType:NSJPEGFileType properties:imageProps];
//write the data to jpg
[data writeToFile:@"outputImage.jpg" atomically:NO];
data = [bmpImageRep representationUsingType:NSPNGFileType properties:imageProps];
//write the data to png
[data writeToFile:@"outputImage.png" atomically:NO];
The above code saves the image in three different formats to check whether the problem was specific to one format's save process. It does not seem to be, because all three formats show the same problem.
Here is a blown-up (4x) version of the top-right corner of the images:
(OSX, note the white line top and left. It looks like a blur here, because the image is blown up)
(iOS, note there are no white lines)
If someone could tell me why this might be happening, I would be very happy. Perhaps it has something to do with a quality difference (the OSX version seems lower quality, though it is hard to notice)? Or perhaps there is a completely different way to do this?
For reference, here is the unscaled OSX image:
Update: Thanks to Daij-Djan, I was able to stop the drawInRect method from antialiasing:
//start drawing on target
[target lockFocus];
[NSGraphicsContext saveGraphicsState];
[[NSGraphicsContext currentContext] setImageInterpolation:NSImageInterpolationNone];
[[NSGraphicsContext currentContext] setShouldAntialias:NO];
//draw the portion of the source image on the target image
[source drawInRect:NSMakeRect(0, 0, panelRect.size.width, panelRect.size.height)
          fromRect:NSMakeRect(panelRect.origin.x, source.size.height - panelRect.origin.y - panelRect.size.height, panelRect.size.width, panelRect.size.height)
         operation:NSCompositeDestinationAtop
          fraction:1.0];
[NSGraphicsContext restoreGraphicsState];
//end drawing
[target unlockFocus];
Update: Changed the image interpolation to NSImageInterpolationNone, as this gives a truer representation. High interpolation makes minor adjustments, which is noticeable when zooming in on text. Removing interpolation stops pixels from jumping around, but there is still a small difference in color (164 vs. 155 for a grey). It would be great to be able to just cut up an image the way I can on iOS...
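One possible route to exactly that (a sketch I am adding, not from the original thread): CGImageCreateWithImageInRect also exists on OSX, so the crop can be done with Core Graphics directly, bypassing the NSImage drawing pipeline and its interpolation entirely. This assumes imagePath and panelRect from the code above, and note that CGImage coordinates put the origin at the top-left of the bitmap, so panelRect must be expressed in top-left coordinates:
#import <Cocoa/Cocoa.h>
#import <ApplicationServices/ApplicationServices.h>

NSURL *url = [NSURL fileURLWithPath:imagePath];
CGImageSourceRef src = CGImageSourceCreateWithURL((CFURLRef)url, NULL);
CGImageRef fullImage = CGImageSourceCreateImageAtIndex(src, 0, NULL);
//crop in pixel coordinates; no drawing, so no interpolation can occur
CGImageRef cropped = CGImageCreateWithImageInRect(fullImage, NSRectToCGRect(panelRect));
NSBitmapImageRep *rep = [[[NSBitmapImageRep alloc] initWithCGImage:cropped] autorelease];
NSData *pngData = [rep representationUsingType:NSPNGFileType properties:nil];
[pngData writeToFile:@"outputImage.png" atomically:NO];
CGImageRelease(cropped);
CGImageRelease(fullImage);
CFRelease(src);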

It looks like antialiasing... you have to round the float values you calculate when cutting/scaling the image.
Use roundf() on the float values.
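For example (a sketch, assuming panelRect is the crop rectangle computed in the question's code): snapping the source rectangle to whole pixels keeps drawInRect:fromRect: from interpolating across pixel boundaries.
//round the crop rectangle to whole pixels before drawing
NSRect fromRect = NSMakeRect(roundf(panelRect.origin.x),
                             roundf(source.size.height - panelRect.origin.y - panelRect.size.height),
                             roundf(panelRect.size.width),
                             roundf(panelRect.size.height));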

Related

Creating NSImage from CGImageRef causes image pixel size to double

capture is a CGImageRef returned from a call to CGWindowListCreateImage(). When I try to turn it into an NSImage directly via initWithCGImage:size:, it mysteriously doubles in size. If I instead manually create an NSBitmapImageRep from capture and then add it to an empty NSImage, everything works OK.
My hardware setup is a retina MBP + non-retina external display. The capture is taking place on the non-retina screen.
NSLog(#"capture image size: %d %d", CGImageGetWidth(capture), CGImageGetHeight(capture));
NSLog(#"logical image size: %f %f", viewRect.size.width, viewRect.size.height);
NSBitmapImageRep *debugRep;
NSImage *image;
//
// Create NSImage directly
image = [[NSImage alloc] initWithCGImage:capture size:NSSizeFromCGSize(viewRect.size)];
debugRep = [[image representations] objectAtIndex:0];
NSLog(#"pixel size, NSImage direct: %d %d", debugRep.pixelsWide, debugRep.pixelsHigh);
//
// Create representation manually
NSBitmapImageRep *imageRep = [[NSBitmapImageRep alloc] initWithCGImage:capture];
image = [[NSImage alloc] initWithSize:NSSizeFromCGSize(viewRect.size)];
[image addRepresentation:imageRep];
[imageRep release];
debugRep = [[image representations] objectAtIndex:0];
NSLog(#"pixel size, NSImage + manual representation: %d %d", debugRep.pixelsWide, debugRep.pixelsHigh);
Log output:
capture image size: 356 262
logical image size: 356.000000 262.000000
pixel size, NSImage direct: 712 524
pixel size, NSImage + manual representation: 356 262
Is this expected behaviour?
The documentation for initWithCGImage:size: states:
You should not assume anything about the image, other than that drawing it is equivalent to drawing the CGImage.
In the end I just continued working with NSBitmapImageRep instances directly.
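For reference, a minimal sketch of that workaround (the output path here is illustrative): save straight from the rep and never involve an NSImage, so no point/pixel translation takes place.
NSBitmapImageRep *rep = [[NSBitmapImageRep alloc] initWithCGImage:capture];
NSData *png = [rep representationUsingType:NSPNGFileType properties:nil];
[png writeToFile:@"/tmp/capture.png" atomically:YES]; //illustrative path
[rep release];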

NSImage lockFocus and NSString size on retina display

I'm facing a weird issue: I'm drawing inside an NSImage using the following pseudo-code:
NSString *text = @"Hello world!";
NSDictionary *dict = [[[NSDictionary alloc] initWithObjectsAndKeys:
    [NSColor colorWithCGColor:textColor], NSForegroundColorAttributeName,
    font, NSFontAttributeName, nil] autorelease];
NSMutableAttributedString *str = [[[NSMutableAttributedString alloc] initWithString:text attributes:dict] autorelease];
NSSize stringSize = [str size];
NSImage *image = [[[NSImage alloc] initWithSize:stringSize] autorelease];
[image lockFocus];
NSRect drawRect = NSMakeRect(0, 0, stringSize.width, stringSize.height);
[str drawInRect:drawRect];
[image unlockFocus];
Now the problem is that, with a dual-monitor configuration, if I keep my retina display open, the string is mangled (I get half of the string drawn), while simply closing my retina display and using only my cinema display draws the string correctly. It's as if the NSImage is getting the default context and some scaling factor from the retina display.
Do you have any hints? Thanks!
OK, I will keep this here for future reference, even though there's something about displaying NSImage that covers the same aspect.
No matter which display is your primary one, it seems that the NSGraphicsContext comes with an affine transformation that multiplies coordinates by 2 to address the retina resolution.
You just need to reset the affine transformation before drawing into the NSImage:
NSAffineTransform *trans = [[[NSAffineTransform alloc] init] autorelease];
[trans set]; //replace the context's current transform with the identity transform
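For context, a sketch of where the reset would sit in the question's drawing code (names are from the question above):
[image lockFocus];
NSAffineTransform *trans = [[[NSAffineTransform alloc] init] autorelease];
[trans set]; //undo the retina-scaled CTM before any drawing
[str drawInRect:drawRect];
[image unlockFocus];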

OS X - How to save NSImage or NSBitmapImageRep to PNG file without alpha channel?

I'm building an OS X app that needs to save an image to disk.
I'm currently using NSBitmapImageRep to represent the image in my code, and while saving the image to disk with the representationUsingType:properties: method, I want to set the hasAlpha channel for the image, but the properties dictionary does not seem to support this.
So I've tried to create a no-alpha bitmap representation, but according to many SO questions, the 3-channel/24-bit combination is not supported. What should I do then?
Big thanks!
First off, I would try just making sure you create your NSBitmapImageRep with
-initWithBitmapDataPlanes:... hasAlpha:NO ...
and write it out, and see if the result doesn't have alpha; one would kind of hope so.
If you're trying to write out an image that has alpha but don't want to write the alpha, just copy it into a non-alpha image first, and write that out.
NSURL *url = [NSURL fileURLWithPath:name];
NSImage *srcImage = [[NSImage alloc] initWithContentsOfURL:url];
NSLog(@"URL: %@", url);
CGImageSourceRef source = CGImageSourceCreateWithData((__bridge CFDataRef)[srcImage TIFFRepresentation], NULL);
CGImageRef imageRef = CGImageSourceCreateImageAtIndex(source, 0, NULL);
CGRect rect = CGRectMake(0.f, 0.f, CGImageGetWidth(imageRef), CGImageGetHeight(imageRef));
//redraw into a bitmap context that skips the alpha channel
CGContextRef bitmapContext = CGBitmapContextCreate(NULL,
                                                   rect.size.width,
                                                   rect.size.height,
                                                   CGImageGetBitsPerComponent(imageRef),
                                                   CGImageGetBytesPerRow(imageRef),
                                                   CGImageGetColorSpace(imageRef),
                                                   kCGImageAlphaNoneSkipLast | kCGBitmapByteOrder32Little);
CGContextDrawImage(bitmapContext, rect, imageRef);
CGImageRef decompressedImageRef = CGBitmapContextCreateImage(bitmapContext);
NSImage *finalImage = [[NSImage alloc] initWithCGImage:decompressedImageRef size:NSZeroSize];
NSData *imageData = [finalImage TIFFRepresentation];
NSBitmapImageRep *imageRep = [NSBitmapImageRep imageRepWithData:imageData];
NSDictionary *imageProps = [NSDictionary dictionaryWithObject:[NSNumber numberWithFloat:0.9] forKey:NSImageCompressionFactor];
imageData = [imageRep representationUsingType:NSPNGFileType properties:imageProps];
[imageData writeToFile:name atomically:NO];
CGImageRelease(decompressedImageRef);
CGImageRelease(imageRef);
CGContextRelease(bitmapContext);
CFRelease(source);
ref:
https://github.com/bpolat/Alpha-Channel-Remover

Retina display problems when working with images

I have run into problems when working on a retina display. The NSImage size is correct, but if I create an NSBitmapImageRep from it and write it to a file, I get an image whose size is twice that of the original. There is no such problem on a non-retina display.
1. I create an NSImage from a file (1920x1080)
2. I do some drawing on it
3. I create an NSBitmapImageRep from the image with the drawings
4. I write it to a file
5. I get an image with 3840x2160 dimensions
What could cause that?
NSImage *originalImage = [[NSImage alloc] initWithContentsOfURL:fileUrl];
NSImage *editedImage = [[NSImage alloc] initWithSize:originalImage.size];
[editedImage lockFocus];
//I draw NSBezierPaths here
[editedImage unlockFocus];
NSBitmapImageRep *savingRep = [NSBitmapImageRep imageRepWithData:[editedImage TIFFRepresentation]];
NSData *savingData = [savingRep representationUsingType:NSPNGFileType properties:nil];
[savingData writeToFile:desiredFileLocationAndName atomically:NO];
If I open an image and save it without editing, I get an image with the correct dimensions:
NSImage *imageFromFile = [[NSImage alloc] initWithContentsOfURL:fileURL];
NSBitmapImageRep *newRepresentation = [NSBitmapImageRep imageRepWithData:[imageFromFile TIFFRepresentation]];
NSData *savingData = [newRepresentation representationUsingType:NSPNGFileType properties:nil];
[savingData writeToFile:desiredFileLocationAndName atomically:NO];
A bitmap representation of the image is measured in pixels, so it is twice the size. The NSImage is giving you a size in points, and on retina devices a point measures 2 pixels. There is nothing wrong with what it's giving you.
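If you want the saved file to match the point size pixel-for-pixel regardless of the screen's backing scale, one approach (a sketch of my own, not the answerer's code; it reuses originalImage from the question) is to draw into an NSBitmapImageRep whose pixel dimensions you fix yourself, instead of using lockFocus:
NSBitmapImageRep *rep = [[NSBitmapImageRep alloc]
    initWithBitmapDataPlanes:NULL
                  pixelsWide:(NSInteger)originalImage.size.width  //1 pixel per point
                  pixelsHigh:(NSInteger)originalImage.size.height
               bitsPerSample:8
             samplesPerPixel:4
                    hasAlpha:YES
                    isPlanar:NO
              colorSpaceName:NSCalibratedRGBColorSpace
                 bytesPerRow:0
                bitsPerPixel:0];
[NSGraphicsContext saveGraphicsState];
[NSGraphicsContext setCurrentContext:[NSGraphicsContext graphicsContextWithBitmapImageRep:rep]];
[originalImage drawInRect:NSMakeRect(0, 0, originalImage.size.width, originalImage.size.height)
                 fromRect:NSZeroRect
                operation:NSCompositeCopy
                 fraction:1.0];
//draw the NSBezierPaths here, as in the question
[NSGraphicsContext restoreGraphicsState];
NSData *savingData = [rep representationUsingType:NSPNGFileType properties:nil];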

How to draw to a file in objective-c

How can I draw to an image in Objective-C? All I need to do is create an image with a size I set, draw a few antialiased lines, and save the image to a PNG file. I tried to find this in the Apple docs, but there are CGImage, NSImage, CIImage, and more. Which one is easiest for my goal? I only need to support the latest Mac OS X version, so new APIs are not a problem.
Probably the easiest way is to use an NSImage and draw directly into it after calling lockFocus.
Example:
NSSize imageSize = NSMakeSize(512, 512);
NSImage *image = [[[NSImage alloc] initWithSize:imageSize] autorelease];
[image lockFocus];
//draw a line:
[NSBezierPath strokeLineFromPoint:NSMakePoint(100, 100) toPoint:NSMakePoint(200, 200)];
//...
NSBitmapImageRep *imageRep = [[[NSBitmapImageRep alloc] initWithFocusedViewRect:NSMakeRect(0, 0, imageSize.width, imageSize.height)] autorelease];
NSData *pngData = [imageRep representationUsingType:NSPNGFileType properties:nil];
[image unlockFocus];
[pngData writeToFile:@"/path/to/your/file.png" atomically:YES];
Well, your question is actually two questions in one.
The first question is about how to draw an image. You should first read the docs about drawing images; Apple has a Cocoa Drawing Guide on this topic. Start from there.
Then you need to save the image to disk. Here is a nice piece of code from over here:
NSBitmapImageRep *bits = ...; //get a rep from your image, or grab from a view
NSData *data = [bits representationUsingType:NSPNGFileType properties:nil];
[data writeToFile:@"/path/to/wherever/test.png" atomically:NO];