I'm flipping between NSImage and NSBitmapImageRep (because only NSBitmapImageRep lets me find and replace colors per-pixel, but only NSImage can be handed to an NSImageView/NSImageCell's setImage:). I know how to convert an NSImage to an NSBitmapImageRep (using bitmap = [NSBitmapImageRep imageRepWithData:[image TIFFRepresentation]]), but I can't do the opposite because TIFFRepresentation is read-only. So, how do I convert an NSBitmapImageRep back into an NSImage?
With hope,
radzo73
All I had to do to get an NSImage back from my modified NSBitmapImageRep was a single initWithCGImage:size: call:
NSImage *image = [[NSImage alloc] initWithSize:someSize];
[...]
NSBitmapImageRep *rawImage = [NSBitmapImageRep imageRepWithData:[image TIFFRepresentation]];
[code that gets and sets pixels]
image = [[NSImage alloc] initWithCGImage:[rawImage CGImage] size:image.size]; // replace the original NSImage with one built from the modified rep
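Alternatively (a minimal sketch, not part of the original answer), the modified rep can be wrapped in a fresh NSImage with addRepresentation:, which skips the CGImage round trip:
// Wrap the modified bitmap rep directly in a new NSImage.
NSImage *newImage = [[NSImage alloc] initWithSize:rawImage.size];
[newImage addRepresentation:rawImage];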
I'm using the following code to flip (rotate by 180 degrees) an NSImage.
But the new image is twice the size (MBs, not dimensions) of the original when saved to disk. I want it to be approximately the same as the original. How can I accomplish this?
NSBitmapImageRep *imageRep = [NSBitmapImageRep imageRepWithData:[[_imageView image] TIFFRepresentation]];
NSImage *img = [[NSImage alloc] initWithSize:NSMakeSize(imageRep.pixelsWide, imageRep.pixelsHigh)];
[img lockFocus];
NSAffineTransform *rotator = [NSAffineTransform transform];
[rotator translateXBy:imageRep.pixelsWide yBy:imageRep.pixelsHigh];
[rotator scaleXBy:-1 yBy:-1];
[rotator concat];
[imageRep drawInRect:NSMakeRect(0, 0, imageRep.pixelsWide, imageRep.pixelsHigh)];
[img unlockFocus];
Code I'm using to save the image to disk:
[[NSFileManager defaultManager] createFileAtPath:path contents:[img TIFFRepresentation] attributes:nil];
Thanks in advance!
I still don't know the root cause of this, but one workaround is to save a JPEG representation instead of the TIFF. The method I wrote is as follows:
- (void)CompressAndSaveImg:(NSImage *)img ToDiskAt:(NSString *)path WithCompressionFactor:(float)value{
NSData *imgData = [img TIFFRepresentation];
NSBitmapImageRep *imgRep = [NSBitmapImageRep imageRepWithData:imgData];
NSNumber *compressionFactor = [NSNumber numberWithFloat:value];
NSDictionary *imageProps = [NSDictionary dictionaryWithObject:compressionFactor forKey:NSImageCompressionFactor];
imgData = [imgRep representationUsingType:NSJPEGFileType properties:imageProps];
[imgData writeToFile:path atomically:YES];
}
You can play around with the WithCompressionFactor:(float)value parameter, but 0.75 works fine for me.
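For example, a call site might look like this (the _imageView outlet is taken from the question; the output path is a placeholder):
// Hypothetical usage: compress the currently displayed image and write it out.
NSString *outputPath = [NSHomeDirectory() stringByAppendingPathComponent:@"flipped.jpg"];
[self CompressAndSaveImg:[_imageView image] ToDiskAt:outputPath WithCompressionFactor:0.75f];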
I'm building an OS X app that needs to save the file to disk.
I'm currently using NSBitmapImageRep to represent the image in my code, and while saving the image to disk with the representationUsingType:properties: method, I want to set the hasAlpha channel for the image, but the properties dictionary does not seem to support this.
So, I've tried to create a no-alpha bitmap representation, but according to many SO questions, the 3 channel/24 bits combination is not supported. Well, what should I do then?
Big thanks!
First off, I would try just making sure you create your NSBitmapImageRep with
-initWithBitmapDataPlanes:... hasAlpha:NO ...
And write it out and see if the result doesn't have alpha; one would hope so.
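For reference, a minimal sketch of that first suggestion (the dimensions, fill color, and output path are placeholders):
// Create a 3-sample, no-alpha bitmap rep; passing 0 for bytesPerRow and
// bitsPerPixel lets AppKit choose a packed layout.
NSBitmapImageRep *rep = [[NSBitmapImageRep alloc]
    initWithBitmapDataPlanes:NULL
                  pixelsWide:256
                  pixelsHigh:256
               bitsPerSample:8
             samplesPerPixel:3
                    hasAlpha:NO
                    isPlanar:NO
              colorSpaceName:NSCalibratedRGBColorSpace
                 bytesPerRow:0
                bitsPerPixel:0];
// Fill every pixel with opaque red so there is something to inspect.
for (NSInteger y = 0; y < rep.pixelsHigh; y++) {
    for (NSInteger x = 0; x < rep.pixelsWide; x++) {
        NSUInteger px[3] = {255, 0, 0};
        [rep setPixel:px atX:x y:y];
    }
}
NSData *pngData = [rep representationUsingType:NSPNGFileType properties:nil];
[pngData writeToFile:@"/tmp/no-alpha-test.png" atomically:YES];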
If you’re trying to write out an image that has alpha, but not write the alpha, just copy it into a non-alpha image first, and write that out.
NSURL *url = [NSURL fileURLWithPath:name];
NSImage *srcImage = [[NSImage alloc] initWithContentsOfURL:url];
NSLog(@"URL: %@", url);
// Build a CGImage from the source image's TIFF data.
CGImageSourceRef source = CGImageSourceCreateWithData((__bridge CFDataRef)[srcImage TIFFRepresentation], NULL);
CGImageRef imageRef = CGImageSourceCreateImageAtIndex(source, 0, NULL);
CGRect rect = CGRectMake(0.f, 0.f, CGImageGetWidth(imageRef), CGImageGetHeight(imageRef));
// Redraw into a bitmap context whose pixel format carries no alpha.
CGContextRef bitmapContext = CGBitmapContextCreate(NULL,
                                                   rect.size.width,
                                                   rect.size.height,
                                                   CGImageGetBitsPerComponent(imageRef),
                                                   CGImageGetBytesPerRow(imageRef),
                                                   CGImageGetColorSpace(imageRef),
                                                   kCGImageAlphaNoneSkipLast | kCGBitmapByteOrder32Little);
CGContextDrawImage(bitmapContext, rect, imageRef);
CGImageRef decompressedImageRef = CGBitmapContextCreateImage(bitmapContext);
// Wrap the alpha-free CGImage back up and write it out as a PNG.
NSImage *finalImage = [[NSImage alloc] initWithCGImage:decompressedImageRef size:NSZeroSize];
NSData *imageData = [finalImage TIFFRepresentation];
NSBitmapImageRep *imageRep = [NSBitmapImageRep imageRepWithData:imageData];
NSDictionary *imageProps = [NSDictionary dictionaryWithObject:[NSNumber numberWithFloat:0.9] forKey:NSImageCompressionFactor];
imageData = [imageRep representationUsingType:NSPNGFileType properties:imageProps];
[imageData writeToFile:name atomically:NO];
// Clean up the Core Graphics objects.
CGImageRelease(decompressedImageRef);
CGImageRelease(imageRef);
CGContextRelease(bitmapContext);
CFRelease(source);
ref:
https://github.com/bpolat/Alpha-Channel-Remover
I have run into a problem when working on a Retina display. The NSImage size is correct, but if I create an NSBitmapImageRep from it and write it to a file, I get an image whose dimensions are twice those of the original. There is no such problem on a non-Retina display.
I create an NSImage from a file (1920x1080)
I do some drawing on it
I create an NSBitmapImageRep from the image with the drawings
I write it to a file
I get an image with 3840x2160 dimensions
What could cause that?
NSImage *originalImage = [[NSImage alloc] initWithContentsOfURL:fileUrl];
NSImage *editedImage = [[NSImage alloc] initWithSize:originalImage.size];
[editedImage lockFocus];
//I draw here NSBezierPaths
[editedImage unlockFocus];
NSBitmapImageRep *savingRep = [NSBitmapImageRep imageRepWithData:[editedImage TIFFRepresentation]];
NSData *savingData = [savingRep representationUsingType:NSPNGFileType properties:nil];
[savingData writeToFile:desiredFileLocationAndName atomically:NO];
If I open an image and save it without editing, I get an image with the correct dimensions:
NSImage *imageFromFile = [[NSImage alloc] initWithContentsOfURL:fileURL];
NSBitmapImageRep *newRepresentation = [NSBitmapImageRep imageRepWithData:[imageFromFile TIFFRepresentation]];
NSData *savingData = [newRepresentation representationUsingType:NSPNGFileType properties:nil];
[savingData writeToFile:desiredFileLocationAndName atomically:NO];
A bitmap representation of the image is measured in pixels, so it is twice the size. The NSImage is giving you a size in points, and on Retina devices a point measures 2 pixels. There is nothing wrong with what it's giving you.
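If the goal is a file that is exactly 1920x1080 pixels regardless of the screen's backing scale, one option (a sketch reusing the question's variable names) is to skip lockFocus and draw into a bitmap rep created with explicit pixel dimensions:
// Create a rep that is exactly 1920x1080 pixels, independent of screen scale.
NSBitmapImageRep *rep = [[NSBitmapImageRep alloc]
    initWithBitmapDataPlanes:NULL
                  pixelsWide:1920
                  pixelsHigh:1080
               bitsPerSample:8
             samplesPerPixel:4
                    hasAlpha:YES
                    isPlanar:NO
              colorSpaceName:NSCalibratedRGBColorSpace
                 bytesPerRow:0
                bitsPerPixel:0];
NSGraphicsContext *ctx = [NSGraphicsContext graphicsContextWithBitmapImageRep:rep];
[NSGraphicsContext saveGraphicsState];
[NSGraphicsContext setCurrentContext:ctx];
// Draw the original image, then any overlays (NSBezierPaths, etc.).
[originalImage drawInRect:NSMakeRect(0, 0, 1920, 1080)
                 fromRect:NSZeroRect
                operation:NSCompositeSourceOver
                 fraction:1.0];
// ... NSBezierPath drawing goes here ...
[NSGraphicsContext restoreGraphicsState];
NSData *savingData = [rep representationUsingType:NSPNGFileType properties:nil];
[savingData writeToFile:desiredFileLocationAndName atomically:NO];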
I am cutting up a large image and saving it into many different images. I first implemented this in iOS and it is working fine, but when I try and port the code to OSX, a thin white line (1 pixel) appears on the top and right of the image. The line is not pure white, or solid (see sample below).
Here is the iOS code to make one sub-image, that works like a champ:
-(void)testMethod:(int)page forRect:(CGRect)rect{
NSString *filePath = @"imageName";
NSData *data = [HeavyResourceManager dataForPath:filePath];//this just gets the image as NSData
UIImage *image = [UIImage imageWithData:data];
CGImageRef imageRef = CGImageCreateWithImageInRect([image CGImage], rect);//crop in the rect
UIImage *result = [UIImage imageWithCGImage:imageRef scale:0 orientation:image.imageOrientation];
CGImageRelease(imageRef);
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *documentsDirectoryPath = [paths objectAtIndex:0];
[UIImageJPEGRepresentation(result, 1.0) writeToFile:[documentsDirectoryPath stringByAppendingPathComponent:@"output.jpg"] atomically:YES];
}
Here is the ported code in OSX that causes the white lines to be added:
NSImage *source = [[[NSImage alloc]initWithContentsOfFile:imagePath] autorelease];
//init the image
NSImage *target = [[[NSImage alloc]initWithSize:panelRect.size] autorelease];
//start drawing
[target lockFocus];
[source drawInRect:NSMakeRect(0,0,panelRect.size.width,panelRect.size.height)
fromRect:NSMakeRect(panelRect.origin.x , source.size.height - panelRect.origin.y - panelRect.size.height, panelRect.size.width, panelRect.size.height)
operation:NSCompositeCopy
fraction:1.0];
[target unlockFocus];
//create a NSBitmapImageRep
NSBitmapImageRep *bmpImageRep = [[[NSBitmapImageRep alloc]initWithData:[target TIFFRepresentation]] autorelease];
//write to tiff
[[target TIFFRepresentation] writeToFile:@"outputImage.tiff" atomically:NO];
[target addRepresentation:bmpImageRep];
NSDictionary *imageProps = [NSDictionary dictionaryWithObject:[NSNumber numberWithFloat:1.0] forKey:NSImageCompressionFactor];
//get the data from the representation
NSData *data = [bmpImageRep representationUsingType: NSJPEGFileType
properties: imageProps];
//write the data to a file
[data writeToFile:@"outputImage.jpg" atomically:NO];
data = [bmpImageRep representationUsingType: NSPNGFileType properties: imageProps];
//write the data to png
[data writeToFile:@"outputImage.png" atomically:NO];
The above code saves the image in three different formats to check whether the problem was in the save process of a specific format. It does not seem to be, because all the formats show the same problem.
Here is a blown up (4x) version of top right hand corner of the images:
(OSX, note the white line top and left. It looks like a blur here, because the image is blown up)
(iOS, note there are no white lines)
If someone could tell me why this might be happening, I would be very happy. Perhaps it has something to do with the quality difference (the OSX version seems lower quality - though you can't notice)? Perhaps there is a completely different way to do this?
For reference, here is the unscaled osx image:
Update: Thanks to Daij-Djan, I was able to stop the drawInRect method from antialiasing:
//start drawing on target
[target lockFocus];
[NSGraphicsContext saveGraphicsState];
[[NSGraphicsContext currentContext]
setImageInterpolation:NSImageInterpolationNone];
[[NSGraphicsContext currentContext] setShouldAntialias:NO];
//draw the portion of the source image on target image
[source drawInRect:NSMakeRect(0,0,panelRect.size.width,panelRect.size.height)
fromRect:NSMakeRect(panelRect.origin.x , source.size.height - panelRect.origin.y - panelRect.size.height, panelRect.size.width, panelRect.size.height)
operation:NSCompositeDestinationAtop
fraction:1.0];
[NSGraphicsContext restoreGraphicsState];
//end drawing
[target unlockFocus];
Update: Changed 'Interpolation' to NSImageInterpolationNone, as this gives a better representation. The high interpolation makes minor adjustments, which is noticeable when zooming in on text. Removing interpolation stops pixels from jumping around, but still, there is a little difference in the color (164 to 155 for a grey color). Would be great to be able to just cut up an image like I can in iOS...
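For what it's worth, the crop can also be done on OS X much like the iOS code, by slicing the CGImage directly instead of redrawing. A sketch, assuming panelRect is already in pixel coordinates with a top-left origin:
// Crop by slicing the CGImage, mirroring the iOS CGImageCreateWithImageInRect approach.
NSImage *source = [[[NSImage alloc] initWithContentsOfFile:imagePath] autorelease];
CGImageRef fullImage = [source CGImageForProposedRect:NULL context:nil hints:nil];
CGImageRef cropped = CGImageCreateWithImageInRect(fullImage, NSRectToCGRect(panelRect));
NSBitmapImageRep *rep = [[[NSBitmapImageRep alloc] initWithCGImage:cropped] autorelease];
NSDictionary *imageProps = [NSDictionary dictionaryWithObject:[NSNumber numberWithFloat:1.0] forKey:NSImageCompressionFactor];
[[rep representationUsingType:NSJPEGFileType properties:imageProps] writeToFile:@"outputImage.jpg" atomically:NO];
CGImageRelease(cropped);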
it looks like antialiasing... you gotta round the float values you calculate when cutting/scaling the image.
use roundf() on the float values
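For instance, rounding the source rectangle from the code above might look like this (a sketch using the question's variable names):
// Snap the source rect to whole pixels so drawInRect:fromRect: does not have
// to resample across pixel boundaries (which shows up as soft edges).
NSRect fromRect = NSMakeRect(roundf(panelRect.origin.x),
                             roundf(source.size.height - panelRect.origin.y - panelRect.size.height),
                             roundf(panelRect.size.width),
                             roundf(panelRect.size.height));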
How can I draw to an image in objective-c? all I need to do is create an image with a size I set, draw few AA lines and save the image to a png file. I tried to find it in apple docs but there are CGImage, NSImage, CIImage and more. which one is easiest for my goal? I only need to support the latest mac os x version so new things are not a problem.
Probably the easiest way is to use an NSImage and draw directly into it after calling lockFocus.
Example:
NSSize imageSize = NSMakeSize(512, 512);
NSImage *image = [[[NSImage alloc] initWithSize:imageSize] autorelease];
[image lockFocus];
//draw a line:
[NSBezierPath strokeLineFromPoint:NSMakePoint(100, 100) toPoint:NSMakePoint(200, 200)];
//...
NSBitmapImageRep *imageRep = [[[NSBitmapImageRep alloc] initWithFocusedViewRect:NSMakeRect(0, 0, imageSize.width, imageSize.height)] autorelease];
NSData *pngData = [imageRep representationUsingType:NSPNGFileType properties:nil];
[image unlockFocus];
[pngData writeToFile:@"/path/to/your/file.png" atomically:YES];
Well your question is actually two questions in one.
First question is about how to draw an image. You should first read the docs about drawing images. Apple has a Cocoa Drawing Guide about this topic. Start from there to draw images.
Then you need to save the image to disk. Here is a nice piece of code from over here:
NSBitmapImageRep *bits = ...; // get a rep from your image, or grab from a view
NSData *data;
data = [bits representationUsingType:NSPNGFileType properties:nil];
[data writeToFile:@"/path/to/wherever/test.png" atomically:NO];
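For completeness, one common way to get such a rep from an existing NSImage (assuming an image variable named image) is the same TIFF round trip used earlier in this thread:
// Obtain a bitmap rep from an NSImage via its TIFF data.
NSBitmapImageRep *bits = [NSBitmapImageRep imageRepWithData:[image TIFFRepresentation]];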