NSImage Image Size With Multiple Layers - objective-c

I have a Mac (not iOS) application that allows the user to select one or more images with NSOpenPanel. What I have trouble with is how to get the correct dimensions for multi-layered images. If an image contains one layer or is compressed, the following will get me the correct image dimensions from the file path.
NSImage *image0 = [[NSImage alloc] initWithContentsOfFile:path];
CGFloat w = image0.size.width;
CGFloat h = image0.size.height;
But if I select an image that has multiple layers, I get strange numbers. For example, I have a single-layer image whose dimensions are 1,440 x 900 px according to Fireworks. If I add a small layer containing a circle, save the image as PNG, and read it, I get 1,458 x 911 px. A couple of related topics suggest that I read the largest layer. Okay. So I've created a function as follows.
- (NSSize)getImageSize:(NSString *)filepath {
    NSArray *imageReps = [NSBitmapImageRep imageRepsWithContentsOfFile:filepath];
    NSInteger width = 0;
    NSInteger height = 0;
    for (NSImageRep *imageRep in imageReps) {
        if ([imageRep pixelsWide] > width) width = [imageRep pixelsWide];
        if ([imageRep pixelsHigh] > height) height = [imageRep pixelsHigh];
    }
    NSSize size = NSMakeSize((CGFloat)width, (CGFloat)height);
    return size;
}
Using the function above, I still get the wrong dimensions (1,458 x 911 px) instead of 1,440 x 900 px. Actually, I had the same problem when I was developing Mac applications with REAL Studio until a few years ago. So how can I get the correct dimensions when an image contains multiple layers?
Thank you for your advice.

Related

NSImage -initWithContentsOfFile returns an image of size zero (0)

To load icon images, I have the below code in one of the methods:
NSLog(#"icon path: %#", iconPath);
NSImage *iconImage = [[NSImage alloc] initWithContentsOfFile:iconPath];
return iconImage;
From the log output, it is clear that the image resources are being opened from the correct location. I don't see any errors. Yet the TIFF files that I open are shown to have an empty NSSize (width=0, height=0) in the debugger, and they are displayed on the screen as if I were pointing at some runaway memory segment.
Flags are mainly set to 0. The exceptions are colorMatchPreferred and multipleResolutionMatching set to 1.
Reps points to an array (NSArrayM *) containing two (2) bitmap representations (NSBitmapImageRep entries).
Please advise what I am doing wrong!
Thank you
// To get the image's original pixel size, use this code
// (note that imageRepsWithContentsOfFile: takes a file path string):
NSArray *imageReps = [NSBitmapImageRep imageRepsWithContentsOfFile:fileUrl];
NSInteger width = 0;
NSInteger height = 0;
for (NSImageRep *imageRep in imageReps)
{
    if ([imageRep pixelsWide] > width)
        width = [imageRep pixelsWide];
    if ([imageRep pixelsHigh] > height)
        height = [imageRep pixelsHigh];
}

vImage not putting channels correctly back together

I tried to extract all 3 channels from an image with vImageConvert_RGB888toPlanar8 and then put them back together with vImageConvert_Planar8toRGB888 but the image gets totally messed up. Why is that?
vImage_Buffer blueBuffer;
blueBuffer.data = (void*)blueImageData.bytes;
blueBuffer.width = size.width;
blueBuffer.height = size.height;
blueBuffer.rowBytes = [blueImageData length]/size.height;
vImage_Buffer rBuffer;
rBuffer.width = size.width;
rBuffer.height = size.height;
rBuffer.rowBytes = size.width;
void *rPixelBuffer = malloc(size.width * size.height);
if (rPixelBuffer == NULL)
{
    NSLog(@"No pixelbuffer");
}
rBuffer.data = rPixelBuffer;
vImage_Buffer gBuffer;
gBuffer.width = size.width;
gBuffer.height = size.height;
gBuffer.rowBytes = size.width;
void *gPixelBuffer = malloc(size.width * size.height);
if (gPixelBuffer == NULL)
{
    NSLog(@"No pixelbuffer");
}
gBuffer.data = gPixelBuffer;
vImage_Buffer bBuffer;
bBuffer.width = size.width;
bBuffer.height = size.height;
bBuffer.rowBytes = size.width;
void *bPixelBuffer = malloc(size.width * size.height);
if (bPixelBuffer == NULL)
{
    NSLog(@"No pixelbuffer");
}
bBuffer.data = bPixelBuffer;
vImageConvert_RGB888toPlanar8(&blueBuffer, &rBuffer, &gBuffer, &bBuffer, kvImageNoFlags);
size_t destinationImageBytesLength = size.width*size.height*3;
const void* destinationImageBytes = valloc(destinationImageBytesLength);
NSData* destinationImageData = [[NSData alloc] initWithBytes:destinationImageBytes length:destinationImageBytesLength];
vImage_Buffer destinationBuffer;
destinationBuffer.data = (void*)destinationImageData.bytes;
destinationBuffer.width = size.width;
destinationBuffer.height = size.height;
destinationBuffer.rowBytes = [destinationImageData length]/size.height;
vImage_Error result = vImageConvert_Planar8toRGB888(&rBuffer, &gBuffer, &bBuffer, &destinationBuffer, 0);
NSImage* image = nil;
if (result == kvImageNoError)
{
    //TODO: If you need color matching, use an appropriate colorspace here
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGDataProviderRef dataProvider = CGDataProviderCreateWithCFData((__bridge CFDataRef)(destinationImageData));
    CGImageRef finalImageRef = CGImageCreate(size.width, size.height, 8, 24, destinationBuffer.rowBytes, colorSpace, kCGBitmapByteOrder32Big|kCGImageAlphaNone, dataProvider, NULL, NO, kCGRenderingIntentDefault);
    CGColorSpaceRelease(colorSpace);
    CGDataProviderRelease(dataProvider);
    image = [[NSImage alloc] initWithCGImage:finalImageRef size:NSMakeSize(size.width, size.height)];
    CGImageRelease(finalImageRef);
}
free((void*)destinationImageBytes);
return image;
Working with vImage means working with pixels only. So you must never use the size of an image (or imageRep); use only pixelsWide and pixelsHigh. Replace all size.width values with pixelsWide and all size.height values with pixelsHigh. Apple has example code for vImage that uses size values; don't trust it, because not all of Apple's example code is correct.
The size of an image or imageRep determines how big the image should be drawn on the screen (or a printer). Size values have the dimension of a length; the units are metres, centimetres, inches, or (as in Cocoa) points of 1/72 inch. They are represented as float values.
pixelsWide and pixelsHigh have no dimension and no unit (they are simply counts) and are represented as integer values.
There may be more bugs in your code, but the first step should be to replace all size values.
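As a minimal sketch of that advice (the helper name MakePlanar8Buffer and the rep argument are mine, not from the question), a Planar8 vImage buffer can be sized from the rep's pixel counts rather than from -size:
#import <Cocoa/Cocoa.h>
#import <Accelerate/Accelerate.h>
// Sketch only: geometry comes from pixel counts, never from -size.
static vImage_Buffer MakePlanar8Buffer(NSBitmapImageRep *rep) {
    vImage_Buffer buffer;
    buffer.width    = (vImagePixelCount)rep.pixelsWide;  // pixels per row
    buffer.height   = (vImagePixelCount)rep.pixelsHigh;  // number of rows
    buffer.rowBytes = (size_t)rep.pixelsWide;            // 1 byte per pixel in a Planar8 buffer
    buffer.data     = malloc(buffer.rowBytes * buffer.height);
    return buffer;
}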
Strictly speaking, you want kCGBitmapByteOrderDefault instead of kCGBitmapByteOrder32Big. 32Big doesn't make much sense for a 24 bit pixel format.
This seems like a weak link:
destinationBuffer.rowBytes = [destinationImageData length]/size.height;
Check that it is the right number.
A picture of the output would help diagnose the problem. Also check the console to see if CG is giving you any spew. Did you try vImageCreateCGImageFromBuffer() with kvImagePrintDiagnosticsToConsole to see if it has anything to say?
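For what it's worth, here is a hedged sketch of that last suggestion (assuming OS X 10.9 or later; the wrapper name CreateImageFromRGB888Buffer is mine): wrap the interleaved destination buffer in a CGImage with vImageCreateCGImageFromBuffer() and pass kvImagePrintDiagnosticsToConsole so vImage prints any format complaints to the console.
#import <Accelerate/Accelerate.h>
// Sketch only: build a CGImage from an interleaved RGB888 vImage buffer with diagnostics enabled.
static CGImageRef CreateImageFromRGB888Buffer(const vImage_Buffer *buffer) {
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    vImage_CGImageFormat format = {
        .bitsPerComponent = 8,
        .bitsPerPixel     = 24,
        .colorSpace       = colorSpace,
        .bitmapInfo       = kCGBitmapByteOrderDefault | kCGImageAlphaNone,
    };
    vImage_Error error = kvImageNoError;
    CGImageRef image = vImageCreateCGImageFromBuffer(buffer, &format, NULL, NULL,
                                                     kvImagePrintDiagnosticsToConsole, &error);
    if (error != kvImageNoError) NSLog(@"vImageCreateCGImageFromBuffer failed: %ld", (long)error);
    CGColorSpaceRelease(colorSpace);
    return image; // caller releases with CGImageRelease
}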

How to programmatically determine native pixel resolution of Retina MacBook Pro screen on OS X?

Given a CGDirectDisplayID returned from
CGError error = CGGetActiveDisplayList(8, directDisplayIDs, &displayCount);
for the built-in screen on a Retina MacBook Pro, I would expect to fetch the native pixel dimensions using
size_t pixelWidth = CGDisplayPixelsWide(directDisplayID);
size_t pixelHeight = CGDisplayPixelsHigh(directDisplayID);
However, these calls only return the dimensions of the currently selected mode. If I change screen resolution I get back different values. I was looking to get back 2880 x 1800 on a 15" rMBP.
How do I fetch the native pixel dimensions of a Retina MacBook Pro screen?
I think the best approach is to enumerate all of the display modes (including the 1x modes) and find the biggest 1x mode's dimensions.
You would use CGDisplayCopyAllDisplayModes() and pass a dictionary with the key kCGDisplayShowDuplicateLowResolutionModes mapped to kCFBooleanTrue as the options to get all of the modes. You can test that CGDisplayModeGetPixelWidth() is equal to CGDisplayModeGetWidth() to determine which are 1x.
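A sketch of that idea follows (the helper name NativePixelSizeOfDisplay is mine; it returns NSZeroSize if no 1x mode is found):
#import <Cocoa/Cocoa.h>
// Sketch only: enumerate every mode, including duplicate low-resolution ones,
// and keep the largest mode whose pixel width equals its point width (a 1x mode).
static NSSize NativePixelSizeOfDisplay(CGDirectDisplayID displayID) {
    NSDictionary *options = @{ (__bridge NSString *)kCGDisplayShowDuplicateLowResolutionModes : @YES };
    CFArrayRef modes = CGDisplayCopyAllDisplayModes(displayID, (__bridge CFDictionaryRef)options);
    NSSize best = NSZeroSize;
    for (CFIndex i = 0; i < CFArrayGetCount(modes); i++) {
        CGDisplayModeRef mode = (CGDisplayModeRef)CFArrayGetValueAtIndex(modes, i);
        BOOL isOneToOne = CGDisplayModeGetPixelWidth(mode) == CGDisplayModeGetWidth(mode);
        if (isOneToOne && (CGFloat)CGDisplayModeGetPixelWidth(mode) > best.width) {
            best = NSMakeSize(CGDisplayModeGetPixelWidth(mode), CGDisplayModeGetPixelHeight(mode));
        }
    }
    if (modes) CFRelease(modes);
    return best;
}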
CGDisplayModeGetIOFlags can tell you some information about the display mode. The native resolutions have kDisplayModeNativeFlag set. The following will set ns to the native resolution of the current screen of the window win.
CGDirectDisplayID sid = ((NSNumber *)[win.screen.deviceDescription
    objectForKey:@"NSScreenNumber"]).unsignedIntegerValue;
CFArrayRef ms = CGDisplayCopyAllDisplayModes(sid, NULL);
CFIndex n = CFArrayGetCount(ms);
NSSize ns;
for (int i = 0; i < n; ++i) {
    CGDisplayModeRef m = (CGDisplayModeRef)CFArrayGetValueAtIndex(ms, i);
    if (CGDisplayModeGetIOFlags(m) & kDisplayModeNativeFlag) {
        ns.width = CGDisplayModeGetPixelWidth(m);
        ns.height = CGDisplayModeGetPixelHeight(m);
        break;
    }
}
CFRelease(ms);
I would go a different route. Instead of finding out the screen dimensions, I would fetch the model identifier of the machine the program is running on and then map that model to the dimensions of its screen. It may be tedious to maintain the mapping from model identifiers to screen sizes, but that's the only way I can think of. Hope this helps.
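Purely as an illustration of that idea (not an endorsement), the model identifier can be read with sysctlbyname("hw.model"); the mapping from identifier to panel dimensions would still have to be maintained by hand:
#import <Foundation/Foundation.h>
#include <sys/sysctl.h>
// Sketch only: returns a model identifier such as "MacBookPro11,3".
static NSString *MacModelIdentifier(void) {
    size_t length = 0;
    sysctlbyname("hw.model", NULL, &length, NULL, 0);   // ask for the required buffer size
    char *model = malloc(length);
    sysctlbyname("hw.model", model, &length, NULL, 0);  // fill the buffer
    NSString *identifier = [NSString stringWithUTF8String:model];
    free(model);
    return identifier;
}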
If using NSScreen is an option, you could do something like this in OSX 10.7:
NSRect framePixels = [screen convertRectToBacking:[screen frame]];
where framePixels.size is your display's pixel resolution and screen is a pointer to NSScreen. For example, this code would print the pixel resolution of all active displays to console:
for (NSScreen *screen in [NSScreen screens])
{
    NSRect framePixels = [screen convertRectToBacking:[screen frame]];
    NSLog(@"framePixels: (%f, %f)", framePixels.size.width, framePixels.size.height);
}
system_profiler SPDisplaysDataType | grep Resolution:
On a two display machine, I get this output:
Resolution: 2880 x 1800 Retina
Resolution: 2560 x 1440 (QHD/WQHD - Wide Quad High Definition)
I heard about this from a similar question: How to get the physical display resolution on MacOS?
Modified version of the Ken/jxy solution above; by default it uses the screen frame multiplied by backingScaleFactor if no native resolution is found:
static float getScreenScaleFactor() {
    NSRect screenFrame = [[NSScreen mainScreen] frame];
    // this may well be larger than the actual pixel dimensions of the screen, as the Mac
    // reports the backing scale factor of the render buffer, not the screen
    float bestWidth = screenFrame.size.width * [[NSScreen mainScreen] backingScaleFactor];
    // if a native resolution is found this way, it is more accurate than the above
    CFArrayRef myModes = CGDisplayCopyAllDisplayModes(CGMainDisplayID(), NULL);
    for (int i = 0; i < CFArrayGetCount(myModes); i++) {
        CGDisplayModeRef myMode = (CGDisplayModeRef)CFArrayGetValueAtIndex(myModes, i);
        if (CGDisplayModeGetIOFlags(myMode) & kDisplayModeNativeFlag) {
            bestWidth = CGDisplayModeGetPixelWidth(myMode);
            //printf("found native resolution: %i %i\n", (int)CGDisplayModeGetPixelWidth(myMode), (int)CGDisplayModeGetPixelHeight(myMode));
            break;
        }
    }
    CFRelease(myModes); // the copied mode list must be released by the caller
    return bestWidth / screenFrame.size.width;
}

Average Color of Mac Screen

I'm trying to find a way to calculate the average color of the screen using Objective-C.
So far I use this code to get a screen shot, which works great:
CGImageRef image1 = CGDisplayCreateImage(kCGDirectMainDisplay);
NSBitmapImageRep *bitmapRep = [[NSBitmapImageRep alloc] initWithCGImage:image1];
// Create an NSImage and add the bitmap rep to it...
NSImage *image = [[NSImage alloc] init];
[image addRepresentation:bitmapRep];
Now my problem is to calculate the average RGB color of this image.
I've found one solution, but the R, G, and B color components always came out the same (equal):
NSInteger i = 0;
NSInteger components[3] = {0,0,0};
unsigned char *data = [bitmapRep bitmapData];
NSInteger pixels = ([bitmapRep size].width *[bitmapRep size].height);
do {
    components[0] += *data++;
    components[1] += *data++;
    components[2] += *data++;
} while (++i < pixels);
int red = (CGFloat)components[0] / pixels;
int green = (CGFloat)components[1] / pixels;
int blue = (CGFloat)components[2] / pixels;
A short analysis of bitmapRep shows that each pixel has 32 bits (4 bytes), where the first byte is unused; it is a padding byte. In other words, the format is XRGB and X is not used. (There are no padding bytes at the end of a pixel row.)
Another remark: for counting the number of pixels you use the method -(NSSize)size.
You should never do this! size has nothing to do with pixels. It only says how big the image should be depicted (expressed in inches or cm or mm) on the screen or a printer. For counting (or otherwise using) the pixels you should use -(NSInteger)pixelsWide and -(NSInteger)pixelsHigh. The (incorrect) use of -size works if and only if the resolution of the imageRep is 72 dots per inch.
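Here is a hedged sketch of the loop with both corrections applied, assuming the XRGB layout described above and the bitmapRep variable from the question (real code should also check -bitsPerPixel and -bytesPerRow):
NSInteger totals[3] = {0, 0, 0};
unsigned char *data = [bitmapRep bitmapData];
NSInteger pixels = [bitmapRep pixelsWide] * [bitmapRep pixelsHigh];
for (NSInteger i = 0; i < pixels; i++) {
    data++;                // skip the unused padding byte (X)
    totals[0] += *data++;  // R
    totals[1] += *data++;  // G
    totals[2] += *data++;  // B
}
int red   = (int)(totals[0] / pixels);
int green = (int)(totals[1] / pixels);
int blue  = (int)(totals[2] / pixels);
NSLog(@"average RGB: %d %d %d", red, green, blue);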
Finally: there is a similar question at Average Color of Mac Screen
Your data is probably aligned as 4 bytes per pixel (and not 3 bytes, like you assume). That would (statistically) explain the near-equal values that you get.

NSImage acting weird

Why is this code setting artistImage to an image with 0 width and 0 height?
NSURL *artistImageURL = [NSURL URLWithString:@"http://userserve-ak.last.fm/serve/252/8581581.jpg"];
NSImage *artistImage = [[NSImage alloc] initWithContentsOfURL:artistImageURL];
As Ken wrote, the DPI is messed up in this image. If you want to force NSImage to set the real image size (ignoring the DPI), use the method described at http://borkware.com/quickies/one?topic=NSImage:
NSBitmapImageRep *rep = [[image representations] objectAtIndex: 0];
NSSize size = NSMakeSize([rep pixelsWide], [rep pixelsHigh]);
[image setSize: size];
NSImage does load this fine for me, but that particular image has corrupt metadata. Its resolution according to the exif data is 7.1999997999228071e-06 dpi.
NSImage respects the DPI info in the file, so if you try to draw the image at its natural size, you'll get something 2520000070 pixels across.
Last I checked, NSImage's -initWithContentsOfURL: only works with file URLs. You'll need to retrieve the URL first, and then use -initWithData:
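A minimal sketch of that suggestion, using the URL from the question (note that dataWithContentsOfURL: blocks the calling thread, so a real app should fetch asynchronously):
NSURL *artistImageURL = [NSURL URLWithString:@"http://userserve-ak.last.fm/serve/252/8581581.jpg"];
NSData *artistImageData = [NSData dataWithContentsOfURL:artistImageURL];  // synchronous fetch
NSImage *artistImage = artistImageData ? [[NSImage alloc] initWithData:artistImageData] : nil;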
It is more or less guaranteed that .representations contains NSImageRep* objects (though of course not always NSBitmapImageRep). To be on the safe side for future extensions, one can write something like the code below. It also takes multiple representations into account (as in some .icns and .tiff files).
@implementation NSImage (Extension)

- (void)makePixelSized {
    NSSize max = NSZeroSize;
    for (NSObject *o in self.representations) {
        if ([o isKindOfClass:NSImageRep.class]) {
            NSImageRep *r = (NSImageRep *)o;
            if (r.pixelsWide != NSImageRepMatchesDevice && r.pixelsHigh != NSImageRepMatchesDevice) {
                max.width = MAX(max.width, r.pixelsWide);
                max.height = MAX(max.height, r.pixelsHigh);
            }
        }
    }
    if (max.width > 0 && max.height > 0) {
        self.size = max;
    }
}

@end