How to get the pixel color at a location from a UIImage scaled within a UIImageView - objective-c

I'm currently using this technique to get the color of a pixel in a UIImage (on iOS):
- (UIColor*) getPixelColorAtLocation:(CGPoint)point {
    UIColor* color = nil;
    CGImageRef inImage = self.image.CGImage;
    // Create an off-screen bitmap context to draw the image into. Format ARGB is 4 bytes per pixel: Alpha, Red, Green, Blue
    CGContextRef cgctx = [self createARGBBitmapContextFromImage:inImage];
    if (cgctx == NULL) { return nil; /* error */ }
    size_t w = CGImageGetWidth(inImage);
    size_t h = CGImageGetHeight(inImage);
    CGRect rect = {{0,0},{w,h}};
    // Draw the image to the bitmap context. Once we draw, the memory
    // allocated for the context for rendering will then contain the
    // raw image data in the specified color space.
    CGContextDrawImage(cgctx, rect, inImage);
    // Now we can get a pointer to the image data associated with the bitmap
    // context.
    unsigned char* data = CGBitmapContextGetData (cgctx);
    if (data != NULL) {
        // offset locates the pixel in the data from x,y;
        // 4 for 4 bytes of data per pixel, w is the width of one row of data.
        int offset = 4*((w*round(point.y))+round(point.x));
        int alpha = data[offset];
        int red = data[offset+1];
        int green = data[offset+2];
        int blue = data[offset+3];
        NSLog(@"offset: %i colors: RGB A %i %i %i %i",offset,red,green,blue,alpha);
        color = [UIColor colorWithRed:(red/255.0f) green:(green/255.0f) blue:(blue/255.0f) alpha:(alpha/255.0f)];
    }
    // When finished, release the context
    CGContextRelease(cgctx);
    // Free the image data memory used by the context
    if (data) { free(data); }
    return color;
}
As illustrated here:
http://www.markj.net/iphone-uiimage-pixel-color/
it works quite well, but when working with images larger than the UIImageView it fails. I tried adding an image and changing the scaling mode to fit the view. How would I modify the code so that it can still sample the pixel color of a scaled image?

Try this for Swift 3:
func getPixelColor(image: UIImage, x: Int, y: Int, width: CGFloat) -> UIColor {
    // `width` is assumed to be the image's pixel width
    guard let cgImage = image.cgImage,
          let pixelData = cgImage.dataProvider?.data else { return .clear }
    let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
    let pixelInfo = ((Int(width) * y) + x) * 4
    let r = CGFloat(data[pixelInfo]) / 255.0
    let g = CGFloat(data[pixelInfo + 1]) / 255.0
    let b = CGFloat(data[pixelInfo + 2]) / 255.0
    let a = CGFloat(data[pixelInfo + 3]) / 255.0
    return UIColor(red: r, green: g, blue: b, alpha: a)
}

Here's a pointer:
0x3A28213A //sorry, I couldn't resist the joke
For real now: after going through the comments on the page at markj.net, a commenter named James suggested making the following changes:
size_t w = CGImageGetWidth(inImage); //Written by Mark
size_t h = CGImageGetHeight(inImage); //Written by Mark
float xscale = w / self.frame.size.width;
float yscale = h / self.frame.size.height;
point.x = point.x * xscale;
point.y = point.y * yscale;
(thanks to http://www.markj.net/iphone-uiimage-pixel-color/comment-page-1/#comment-2159)
This didn't actually work for me... Not that I did much testing, and I'm not the world's greatest programmer (yet)...
My solution was to scale the UIImageView so that each pixel of the image in it was the same size as a standard CGPoint on the screen, then sample the color as normal (using getPixelColorAtLocation:), and finally scale the image back to the size I wanted.
Hope this helps!
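If the image view uses aspect-fit scaling rather than scale-to-fill, the letterbox offset also has to be removed before dividing by the scale. A minimal sketch (the method name is mine; it assumes UIViewContentModeScaleAspectFit and that self is the UIImageView):
- (CGPoint)imagePointForViewPoint:(CGPoint)point {
    CGSize imgSize = self.image.size;
    CGSize viewSize = self.bounds.size;
    // Aspect fit uses the smaller of the two scale factors,
    // and the scaled image is centered in the view.
    CGFloat scale = MIN(viewSize.width / imgSize.width,
                        viewSize.height / imgSize.height);
    CGFloat xOffset = (viewSize.width - imgSize.width * scale) / 2.0f;
    CGFloat yOffset = (viewSize.height - imgSize.height * scale) / 2.0f;
    // Remove the letterbox offset, then undo the scaling.
    // (For Retina images, multiply the result by self.image.scale
    // to get actual pixel coordinates.)
    return CGPointMake((point.x - xOffset) / scale,
                       (point.y - yOffset) / scale);
}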

Use the UIImageView's layer:
- (UIColor*) getPixelColorAtLocation:(CGPoint)point {
    UIColor* color = nil;
    UIGraphicsBeginImageContext(self.frame.size);
    CGContextRef cgctx = UIGraphicsGetCurrentContext();
    if (cgctx == NULL) { UIGraphicsEndImageContext(); return nil; /* error */ }
    [self.layer renderInContext:cgctx];
    unsigned char* data = CGBitmapContextGetData (cgctx);
    /*
    ...
    */
    UIGraphicsEndImageContext();
    return color;
}

Related

vImage not putting channels correctly back together

I tried to extract all 3 channels from an image with vImageConvert_RGB888toPlanar8 and then put them back together with vImageConvert_Planar8toRGB888, but the image gets totally messed up. Why is that?
vImage_Buffer blueBuffer;
blueBuffer.data = (void*)blueImageData.bytes;
blueBuffer.width = size.width;
blueBuffer.height = size.height;
blueBuffer.rowBytes = [blueImageData length]/size.height;

vImage_Buffer rBuffer;
rBuffer.width = size.width;
rBuffer.height = size.height;
rBuffer.rowBytes = size.width;
void *rPixelBuffer = malloc(size.width * size.height);
if (rPixelBuffer == NULL) {
    NSLog(@"No pixelbuffer");
}
rBuffer.data = rPixelBuffer;

vImage_Buffer gBuffer;
gBuffer.width = size.width;
gBuffer.height = size.height;
gBuffer.rowBytes = size.width;
void *gPixelBuffer = malloc(size.width * size.height);
if (gPixelBuffer == NULL) {
    NSLog(@"No pixelbuffer");
}
gBuffer.data = gPixelBuffer;

vImage_Buffer bBuffer;
bBuffer.width = size.width;
bBuffer.height = size.height;
bBuffer.rowBytes = size.width;
void *bPixelBuffer = malloc(size.width * size.height);
if (bPixelBuffer == NULL) {
    NSLog(@"No pixelbuffer");
}
bBuffer.data = bPixelBuffer;

vImageConvert_RGB888toPlanar8(&blueBuffer, &rBuffer, &gBuffer, &bBuffer, kvImageNoFlags);

size_t destinationImageBytesLength = size.width*size.height*3;
const void* destinationImageBytes = valloc(destinationImageBytesLength);
NSData* destinationImageData = [[NSData alloc] initWithBytes:destinationImageBytes length:destinationImageBytesLength];
vImage_Buffer destinationBuffer;
destinationBuffer.data = (void*)destinationImageData.bytes;
destinationBuffer.width = size.width;
destinationBuffer.height = size.height;
destinationBuffer.rowBytes = [destinationImageData length]/size.height;
vImage_Error result = vImageConvert_Planar8toRGB888(&rBuffer, &gBuffer, &bBuffer, &destinationBuffer, 0);
NSImage* image = nil;
if (result == kvImageNoError) {
    //TODO: If you need color matching, use an appropriate colorspace here
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGDataProviderRef dataProvider = CGDataProviderCreateWithCFData((__bridge CFDataRef)(destinationImageData));
    CGImageRef finalImageRef = CGImageCreate(size.width, size.height, 8, 24, destinationBuffer.rowBytes, colorSpace, kCGBitmapByteOrder32Big|kCGImageAlphaNone, dataProvider, NULL, NO, kCGRenderingIntentDefault);
    CGColorSpaceRelease(colorSpace);
    CGDataProviderRelease(dataProvider);
    image = [[NSImage alloc] initWithCGImage:finalImageRef size:NSMakeSize(size.width, size.height)];
    CGImageRelease(finalImageRef);
}
free((void*)destinationImageBytes);
return image;
Working with vImage means working with pixels only. So you must never use the size of an image (or imageRep); use only pixelsWide and pixelsHigh. Replace all size.width values with pixelsWide and all size.height values with pixelsHigh. Apple has example code for vImage that uses size values. Don't believe it! Not all Apple example code is correct.
The size of an image or imageRep determines how big the image shall be drawn on the screen (or a printer). Size values have the dimension of a length; their units are meters, centimeters, inches or (as in Cocoa) 1/72 inch, aka points, and they are represented as float values.
pixelsWide and pixelsHigh have no dimension and no unit (they are simply numbers) and are represented as int values.
There may be more bugs in your code, but the first step should be to replace all of the size values.
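As an illustration, a small sketch (my own, not from the answer) of filling a vImage_Buffer from an NSBitmapImageRep using pixel dimensions only:
#import <Accelerate/Accelerate.h>
#import <Cocoa/Cocoa.h>

// Build a vImage_Buffer from an NSBitmapImageRep using
// pixelsWide/pixelsHigh (pixels), never size.width/size.height (points).
static vImage_Buffer bufferFromRep(NSBitmapImageRep *rep) {
    vImage_Buffer buf;
    buf.data     = rep.bitmapData;
    buf.width    = (vImagePixelCount)rep.pixelsWide;
    buf.height   = (vImagePixelCount)rep.pixelsHigh;
    buf.rowBytes = rep.bytesPerRow;   // may include row padding
    return buf;
}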
Strictly speaking, you want kCGBitmapByteOrderDefault instead of kCGBitmapByteOrder32Big. 32Big doesn't make much sense for a 24-bit pixel format.
This seems like a weak link:
destinationBuffer.rowBytes = [destinationImageData length]/size.height;
Check to see it is the right number.
A picture of the output would help diagnose the problem. Also check the console to see if CG is giving you any spew. Did you try vImageCreateCGImageFromBuffer() with kvImagePrintDiagnosticsToConsole to see if it has anything to say?
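For reference, a hedged sketch of what that diagnostic call could look like, assuming the 24-bit RGB destinationBuffer from the question:
vImage_CGImageFormat format = {
    .bitsPerComponent = 8,
    .bitsPerPixel = 24,
    .colorSpace = NULL,   // NULL lets vImage supply a default RGB colorspace
    .bitmapInfo = kCGBitmapByteOrderDefault | kCGImageAlphaNone
};
vImage_Error err = kvImageNoError;
CGImageRef cgImage = vImageCreateCGImageFromBuffer(&destinationBuffer, &format,
                                                   NULL, NULL,
                                                   kvImagePrintDiagnosticsToConsole,
                                                   &err);
// Any problem with the buffer/format combination is printed to the console.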

Detect the most black pixel in an image - objective-c iOS

I have an image! It's been so long since I've done pixel detection. I remember you have to convert the pixels to an array somehow, then find the width of the image to know when the pixels reach the end of a row and move to the next one, and ahh, lots of complex stuff haha! Anyway, I no longer have any clue how to do this, but I need to detect the left-most darkest pixel's x & y coordinates in my image named "image1"... Any good starting places?
Go to your bookstore, find a book called "iOS Developer's Cookbook" by Erica Sadun. Go to page 378-ish; there are methods for pixel detection there. You can look through the array of RGB values and run a for loop to find the pixel with the smallest sum of R, G, and B values (each 0-255); that will give you the pixel closest to black.
I can also post the code if needed. But the book is the best source as it gives methods and explanations.
These are mine, with some changes. The method name remains the same; all I changed was the image, which basically comes from an image picker.
-(UInt8 *) createBitmap{
    if (!self.imageCaptured) {
        NSLog(@"Error: There has not been an image captured.");
        return nil;
    }
    //create bitmap for the image
    UIImage *myImage = self.imageCaptured;//image name for test pic
    CGContextRef context = CreateARGBBitmapContext(myImage.size);
    if(context == NULL) return NULL;
    CGRect rect = CGRectMake(0.0f/*start width*/, 0.0f/*start*/, myImage.size.width /*width bound*/, myImage.size.height /*height bound*/); //original
    // CGRect rect = CGRectMake(myImage.size.width/2.0 - 25.0 /*start width*/, myImage.size.height/2.0 - 25.0 /*start*/, myImage.size.width/2.0 + 24.0 /*width bound*/, myImage.size.height/2.0 + 24.0 /*height bound*/); //test rectangle
    CGContextDrawImage(context, rect, myImage.CGImage);
    UInt8 *data = CGBitmapContextGetData(context);
    // The bitmap memory was malloc'd by CreateARGBBitmapContext, so the
    // pointer stays valid after the context is released (the caller owns it).
    CGContextRelease(context);
    return data;
}

CGContextRef CreateARGBBitmapContext (CGSize size){
    //Create new color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    if (colorSpace == NULL) {
        fprintf(stderr, "Error allocating color space\n");
        return NULL;
    }
    //Allocate memory for bitmap data
    void *bitmapData = malloc(size.width*size.height*4);
    if(bitmapData == NULL){
        fprintf(stderr, "Error: memory not allocated\n");
        CGColorSpaceRelease(colorSpace);
        return NULL;
    }
    //Build an 8-bit per channel context
    CGContextRef context = CGBitmapContextCreate(bitmapData, size.width, size.height, 8, size.width*4, colorSpace, kCGImageAlphaPremultipliedFirst);
    CGColorSpaceRelease(colorSpace);
    if (context == NULL) {
        fprintf(stderr, "Error: Context not created!");
        free(bitmapData);
        return NULL;
    }
    return context;
}

NSUInteger blueOffset(NSUInteger x, NSUInteger y, NSUInteger w){
    return y*w*4 + (x*4+3);
}

NSUInteger redOffset(NSUInteger x, NSUInteger y, NSUInteger w){
    return y*w*4 + (x*4+1);
}
The redOffset function at the bottom gets you the red value in the ARGB (alpha-red-green-blue) layout. To look at a different channel, change the constant added to x*4 in the offset function: 0 finds alpha, 1 finds red, 2 finds green, and 3 finds blue. This works because the functions just index into the array built by the methods above, and the addition accounts for the channel's position within each pixel. Essentially, read the three color channels (red, green, and blue) and sum them for each pixel; whichever pixel has the lowest combined red, green, and blue value is the most black.
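Putting that together, a minimal sketch (variable names are illustrative; w and h are assumed to hold the image's pixel dimensions) that finds the left-most darkest pixel:
// Scan the ARGB buffer returned by createBitmap for the pixel with the
// smallest R+G+B sum; ties keep the left-most column because x is the
// outer loop and only a strictly smaller sum replaces the best match.
UInt8 *data = [self createBitmap];
NSUInteger bestX = 0, bestY = 0, bestSum = NSUIntegerMax;
for (NSUInteger x = 0; x < w; x++) {
    for (NSUInteger y = 0; y < h; y++) {
        NSUInteger base = redOffset(x, y, w);
        NSUInteger sum = data[base] + data[base + 1] + data[base + 2]; // R+G+B
        if (sum < bestSum) { bestSum = sum; bestX = x; bestY = y; }
    }
}
// (bestX, bestY) now holds the left-most darkest pixel.
free(data);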

How can I get the underlying pixel data from a UIImage or CGImage?

I've tried numerous 'solutions' around the net; all of those I found have errors and thus don't work. I need to know the color of a pixel in a UIImage. How can I get this information?
Getting the raw data
From Apple's Technical Q&A QA1509: this will get the raw image data in its original format by getting it from the data provider.
CFDataRef CopyImagePixels(CGImageRef inImage)
{
    return CGDataProviderCopyData(CGImageGetDataProvider(inImage));
}
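A hedged usage sketch (uiimage is assumed); the function follows the Copy rule, so release the CFData when done:
CFDataRef pixelData = CopyImagePixels(uiimage.CGImage);
const UInt8 *bytes = CFDataGetBytePtr(pixelData);
size_t length = CFDataGetLength(pixelData);
// ... read the bytes; the layout depends on the image's own format ...
CFRelease(pixelData);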
Needed in a different format or color-space
If you want to get the data color-matched and in a specific format you can use something similar to the following code sample:
void ManipulateImagePixelData(CGImageRef inImage)
{
    // Create the bitmap context
    CGContextRef cgctx = CreateARGBBitmapContext(inImage);
    if (cgctx == NULL)
    {
        // error creating context
        return;
    }
    // Get image width, height. We'll use the entire image.
    size_t w = CGImageGetWidth(inImage);
    size_t h = CGImageGetHeight(inImage);
    CGRect rect = {{0,0},{w,h}};
    // Draw the image to the bitmap context. Once we draw, the memory
    // allocated for the context for rendering will then contain the
    // raw image data in the specified color space.
    CGContextDrawImage(cgctx, rect, inImage);
    // Now we can get a pointer to the image data associated with the bitmap
    // context.
    void *data = CGBitmapContextGetData (cgctx);
    if (data != NULL)
    {
        // **** You have a pointer to the image data ****
        // **** Do stuff with the data here ****
    }
    // When finished, release the context
    CGContextRelease(cgctx);
    // Free image data memory for the context
    if (data)
    {
        free(data);
    }
}
CGContextRef CreateARGBBitmapContext (CGImageRef inImage)
{
    CGContextRef context = NULL;
    CGColorSpaceRef colorSpace;
    void * bitmapData;
    int bitmapByteCount;
    int bitmapBytesPerRow;
    // Get image width, height. We'll use the entire image.
    size_t pixelsWide = CGImageGetWidth(inImage);
    size_t pixelsHigh = CGImageGetHeight(inImage);
    // Declare the number of bytes per row. Each pixel in the bitmap in this
    // example is represented by 4 bytes; 8 bits each of red, green, blue, and
    // alpha.
    bitmapBytesPerRow = (pixelsWide * 4);
    bitmapByteCount = (bitmapBytesPerRow * pixelsHigh);
    // Use the generic RGB color space.
    colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
    if (colorSpace == NULL)
    {
        fprintf(stderr, "Error allocating color space\n");
        return NULL;
    }
    // Allocate memory for image data. This is the destination in memory
    // where any drawing to the bitmap context will be rendered.
    bitmapData = malloc( bitmapByteCount );
    if (bitmapData == NULL)
    {
        fprintf (stderr, "Memory not allocated!");
        CGColorSpaceRelease( colorSpace );
        return NULL;
    }
    // Create the bitmap context. We want pre-multiplied ARGB, 8-bits
    // per component. Regardless of what the source image format is
    // (CMYK, Grayscale, and so on) it will be converted over to the format
    // specified here by CGBitmapContextCreate.
    context = CGBitmapContextCreate (bitmapData,
                                     pixelsWide,
                                     pixelsHigh,
                                     8,      // bits per component
                                     bitmapBytesPerRow,
                                     colorSpace,
                                     kCGImageAlphaPremultipliedFirst);
    if (context == NULL)
    {
        free (bitmapData);
        fprintf (stderr, "Context not created!");
    }
    // Make sure and release colorspace before returning
    CGColorSpaceRelease( colorSpace );
    return context;
}
Color of a particular pixel
Assuming RGB, once you have the data in a format you like, finding the color is a matter of moving through the array of data and getting the RGB value at a particular pixel location.
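For example, with the ARGB layout produced by the code above (4 bytes per pixel, row width w, no extra row padding), indexing the pixel at (x, y) looks like this:
size_t offset = 4 * (y * w + x);   // x, y are the pixel coordinates
unsigned char *p = (unsigned char *)data;
unsigned char alpha = p[offset];
unsigned char red   = p[offset + 1];
unsigned char green = p[offset + 2];
unsigned char blue  = p[offset + 3];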
If you're looking to get just a single pixel, or only a few, you can take a slightly different approach: create a 1x1 bitmap context and draw the image over it with an offset so you get just the pixel you want.
CGImageRef image = uiimage.CGImage;
NSUInteger width = CGImageGetWidth(image);
NSUInteger height = CGImageGetHeight(image);

// Setup 1x1 pixel context to draw into
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
unsigned char rawData[4];
int bytesPerPixel = 4;
int bytesPerRow = bytesPerPixel;
NSUInteger bitsPerComponent = 8;
CGContextRef context = CGBitmapContextCreate(rawData,
                                             1,
                                             1,
                                             bitsPerComponent,
                                             bytesPerRow,
                                             colorSpace,
                                             kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextSetBlendMode(context, kCGBlendModeCopy);

// Draw the image, shifted so the pixel at `offset` (the point you want)
// lands on the context's single pixel
CGContextDrawImage(context,
                   CGRectMake(-offset.x, offset.y-height, width, height),
                   image);

// Done
CGContextRelease(context);

// Get the pixel information
unsigned char red   = rawData[0];
unsigned char green = rawData[1];
unsigned char blue  = rawData[2];
unsigned char alpha = rawData[3];

Manipulating raw PNG data

I want to read a PNG file such that I can:
a) Access the raw bitmap data of the file, with no color space adjustment or alpha premultiply.
b) Based on that bitmap, display bit slices (any single bit of R, G, B, or A, across the whole image) in an image in the window. If I have the bitmap I can find the right bits, but what can I stuff them into to get them onscreen?
c) After some modification of the bitplanes, write a new PNG file, again with no adjustments.
This is only for certain specific images. The PNG is not expected to have any data other than simply RGBA-32.
From reading some similar questions here, I'm suspecting NSBitmapImageRep for the file read/write, and drawing in an NSView for the onscreen part. Does this sound right?
1.) You can use NSBitmapImageRep's -bitmapData to get the raw pixel data. Unfortunately, CG (NSBitmapImageRep's backend) does not support native unpremultiplication, so you will have to unpremultiply yourself. The colorspace used will be the same as the one present in the file. Here is how to unpremultiply the image data:
NSBitmapImageRep *imageRep = [NSBitmapImageRep imageRepWithData:data];
NSInteger width = [imageRep pixelsWide];
NSInteger height = [imageRep pixelsHigh];
unsigned char *bytes = [imageRep bitmapData];
// Walk the buffer 4 bytes (one RGBA/ARGB pixel) at a time
for (NSUInteger i = 0; i < width * height * 4; i += 4) {
    uint8_t a, r, g, b;
    if (imageRep.bitmapFormat & NSAlphaFirstBitmapFormat) {
        a = bytes[i];
        r = bytes[i+1];
        g = bytes[i+2];
        b = bytes[i+3];
    } else {
        r = bytes[i+0];
        g = bytes[i+1];
        b = bytes[i+2];
        a = bytes[i+3];
    }
    // unpremultiply alpha if there is any
    if (a > 0) {
        if (!(imageRep.bitmapFormat & NSAlphaNonpremultipliedBitmapFormat)) {
            float factor = 255.0f/a;
            b *= factor;
            g *= factor;
            r *= factor;
        }
    } else {
        b = 0;
        g = 0;
        r = 0;
    }
    // write back in ARGB order
    bytes[i]=a;
    bytes[i+1]=r;
    bytes[i+2]=g;
    bytes[i+3]=b;
}
2.) I couldn't think of a simple way to do this. You could write your own image-drawing method that loops through the raw image data and generates a new image based on the values. Refer above to see how to start doing it.
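As one possible starting point, a hedged sketch (my own helper, not from the answer) that pulls a single bit plane out of the ARGB buffer into a one-byte-per-pixel buffer you could then wrap in a grayscale image:
// bytes: ARGB data from above; out: width*height bytes, one per pixel.
// channel: 0 = A, 1 = R, 2 = G, 3 = B; bitIndex: 0 (LSB) ... 7 (MSB).
static void extractBitPlane(const uint8_t *bytes, uint8_t *out,
                            NSInteger width, NSInteger height,
                            int channel, int bitIndex) {
    for (NSInteger i = 0; i < width * height; i++) {
        uint8_t v = bytes[i * 4 + channel];
        out[i] = ((v >> bitIndex) & 1) ? 0xFF : 0x00; // white if bit is set
    }
}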
3.) Here is a method to get a CGImage from raw data (you can write the PNG to a file using native CG functions, or convert it to an NSBitmapImageRep if CG makes you uncomfortable):
static CGImageRef cgImageFrom(NSData *data, uint16_t width, uint16_t height) {
    CGDataProviderRef provider = CGDataProviderCreateWithCFData((CFDataRef)data);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGImageAlphaFirst;
    CGImageRef cgImage = CGImageCreate(width, height, 8, 32, 4 * width, colorSpace, bitmapInfo, provider, NULL, NO, kCGRenderingIntentDefault);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);
    return cgImage;
}
You can create the NSData object out of the raw data object with +dataWithBytes:length:
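A hedged usage sketch, assuming bytes points at width * height ARGB pixels:
NSData *raw = [NSData dataWithBytes:bytes length:width * height * 4];
CGImageRef cgImage = cgImageFrom(raw, width, height);
// ... draw it, or hand it to NSBitmapImageRep / CGImageDestination to write a PNG ...
CGImageRelease(cgImage);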
I haven't ever worked in this area, but you may be able to use Image IO for this.

How to get the on-screen location of an NSStatusItem

I have a question about NSStatusItem in Cocoa on Mac OS X. If you look at the Mac app called Snippets (see the movie at http://snippetsapp.com/), you will see that once you click your status bar icon, a perfectly aligned view / panel or maybe even window appears just below the icon.
My question is: how do you calculate the position at which to place your NSWindow, just like this app does?
I have tried the following:
Subclass NSMenu
Set the view property for the first item of the menu (worked, but not well enough)
Using addSubview on the NSStatusItem instead of an icon; this worked, but I could not get higher than 20px
Give the NSStatusItem a view, then get the frame of that view's window. This technically counts as UndocumentedGoodness, so don't be surprised if it breaks someday (e.g., if they start keeping the window offscreen instead).
I don't know what you mean by "could not get higher than 20px".
To do this without the hassle of a custom view, I tried the following (which works). In the method that is set as the action for the status item, i.e. the method that is called when the user clicks the status item, the frame of the status item can be retrieved with:
[[[NSApp currentEvent] window] frame]
Works a treat for me.
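Spelled out, a minimal sketch (the panel property and its sizing are illustrative):
- (void)statusItemClicked:(id)sender {
    // The event's window is the status item's own little window,
    // so its frame is the status item's rect in screen coordinates.
    NSRect statusRect = [[[NSApp currentEvent] window] frame];
    NSSize panelSize = self.panel.frame.size;
    // Center the panel horizontally under the item, hanging below the bar.
    NSPoint origin = NSMakePoint(NSMidX(statusRect) - panelSize.width / 2.0,
                                 NSMinY(statusRect) - panelSize.height);
    [self.panel setFrameOrigin:origin];
    [self.panel makeKeyAndOrderFront:nil];
}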
Given an NSMenuItem and an NSWindow, you can get the point that centers your window right below the menu item like this:
fileprivate var centerBelowMenuItem: CGPoint {
    guard let window = window, let barButton = statusItem.button else { return .zero }
    let rectInWindow = barButton.convert(barButton.bounds, to: nil)
    let screenRect = barButton.window?.convertToScreen(rectInWindow) ?? .zero
    // We now have the menu item rect on the screen.
    // Let's do some basic math to center our window to this point.
    let centerX = screenRect.origin.x - (window.frame.size.width - barButton.bounds.width) / 2
    return CGPoint(x: centerX, y: screenRect.origin.y)
}
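You can then place the window with window.setFrameTopLeftPoint(centerBelowMenuItem), since the returned point is the bottom of the status item and so marks where the window's top edge should sit.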
No need for undocumented APIs.
Maybe another solution, which works for me (Swift 4.1):
let yourStatusItem = NSStatusBar.system.statusItem(withLength: NSStatusItem.variableLength)
let frameOrigin = yourStatusItem.button?.window?.frame.origin
let yourPoint = CGPoint(x: (frameOrigin?.x)!, y: (frameOrigin?.y)! - 22)
yourWindow?.setFrameOrigin(yourPoint)
It seems that this app uses Matt's MAAttachedWindow. There's a sample application with the same layout & position.
NOTE: PLEASE DO NOT USE THIS, at least not for the purpose of locating an NSStatusItem.
Back when I posted this, this crazy image matching technique was the only way to solve this problem without undocumented API. Now, you should use Oskar's solution.
If you're willing to use image analysis to find the status item on a menu bar, here's a category for NSScreen which does exactly that.
It might seem crazy to do it this way, but it's fast, relatively small, and it's the only way of finding a status item without undocumented API.
If you pass in the current image for the status item, this method should find it.
@implementation NSScreen (LTStatusItemLocator)
// Find the location of IMG on the screen's status bar.
// If the image is not found, returns NSZeroPoint
- (NSPoint)originOfStatusItemWithImage:(NSImage *)IMG
{
CGColorSpaceRef csK = CGColorSpaceCreateDeviceGray();
NSPoint ret = NSZeroPoint;
CGDirectDisplayID screenID = 0;
CGImageRef displayImg = NULL;
CGImageRef compareImg = NULL;
CGRect screenRect = CGRectZero;
CGRect barRect = CGRectZero;
uint8_t *bm_bar = NULL;
uint8_t *bm_bar_ptr;
uint8_t *bm_compare = NULL;
uint8_t *bm_compare_ptr;
size_t bm_compare_w, bm_compare_h;
BOOL inverted = NO;
int numberOfScanLines = 0;
CGFloat *meanValues = NULL;
int presumptiveMatchIdx = -1;
CGFloat presumptiveMatchMeanVal = 999;
// If the computer is set to Dark Mode, set the "inverted" flag
NSDictionary *globalPrefs = [[NSUserDefaults standardUserDefaults] persistentDomainForName:NSGlobalDomain];
id style = globalPrefs[@"AppleInterfaceStyle"];
if ([style isKindOfClass:[NSString class]]) {
inverted = (NSOrderedSame == [style caseInsensitiveCompare:@"dark"]);
}
screenID = (CGDirectDisplayID)[self.deviceDescription[@"NSScreenNumber"] integerValue];
screenRect = CGDisplayBounds(screenID);
// Get the menubar rect
barRect = CGRectMake(0, 0, screenRect.size.width, 22);
displayImg = CGDisplayCreateImageForRect(screenID, barRect);
if (!displayImg) {
NSLog(#"Unable to create image from display");
CGColorSpaceRelease(csK);
return ret; // I would normally use goto(bail) here, but this is public code so let's not ruffle any feathers
}
size_t bar_w = CGImageGetWidth(displayImg);
size_t bar_h = CGImageGetHeight(displayImg);
// Determine scale factor based on the CGImageRef we got back from the display
CGFloat scaleFactor = (CGFloat)bar_h / (CGFloat)22;
// Greyscale bitmap for menu bar
bm_bar = malloc(1 * bar_w * bar_h);
{
CGContextRef bmCxt = NULL;
bmCxt = CGBitmapContextCreate(bm_bar, bar_w, bar_h, 8, 1 * bar_w, csK, kCGBitmapAlphaInfoMask&kCGImageAlphaNone);
// Draw the menu bar in grey
CGContextDrawImage(bmCxt, CGRectMake(0, 0, bar_w, bar_h), displayImg);
uint8_t minVal = 0xff;
uint8_t maxVal = 0x00;
// Walk a single scan line at the vertical middle of the bar
uint64_t running = 0;
for (int yi = bar_h / 2; yi == bar_h / 2; yi++)
{
bm_bar_ptr = bm_bar + (bar_w * yi);
for (int xi = 0; xi < bar_w; xi++)
{
uint8_t v = *bm_bar_ptr++;
if (v < minVal) minVal = v;
if (v > maxVal) maxVal = v;
running += v;
}
}
running /= bar_w;
uint8_t threshold = minVal + ((maxVal - minVal) / 2);
//threshold = running;
// Walk the bitmap
bm_bar_ptr = bm_bar;
for (int yi = 0; yi < bar_h; yi++)
{
for (int xi = 0; xi < bar_w; xi++)
{
// Threshold all the pixels. Values > 50% go white, values <= 50% go black
// (opposite if Dark Mode)
// Could unroll this loop as an optimization, but probably not worthwhile
*bm_bar_ptr = (*bm_bar_ptr > threshold) ? (inverted?0x00:0xff) : (inverted?0xff:0x00);
bm_bar_ptr++;
}
}
CGImageRelease(displayImg);
displayImg = CGBitmapContextCreateImage(bmCxt);
CGContextRelease(bmCxt);
}
{
CGContextRef bmCxt = NULL;
CGImageRef img_cg = NULL;
bm_compare_w = scaleFactor * IMG.size.width;
bm_compare_h = scaleFactor * 22;
// Create out comparison bitmap - the image that was passed in
bmCxt = CGBitmapContextCreate(NULL, bm_compare_w, bm_compare_h, 8, 1 * bm_compare_w, csK, kCGBitmapAlphaInfoMask&kCGImageAlphaNone);
CGContextSetBlendMode(bmCxt, kCGBlendModeNormal);
NSRect imgRect_og = NSMakeRect(0,0,IMG.size.width,IMG.size.height);
NSRect imgRect = imgRect_og;
img_cg = [IMG CGImageForProposedRect:&imgRect context:nil hints:nil];
CGContextClearRect(bmCxt, imgRect);
CGContextSetFillColorWithColor(bmCxt, [NSColor whiteColor].CGColor);
CGContextFillRect(bmCxt, CGRectMake(0,0,9999,9999));
CGContextScaleCTM(bmCxt, scaleFactor, scaleFactor);
CGContextTranslateCTM(bmCxt, 0, (22. - IMG.size.height) / 2.);
// Draw the image in grey
CGContextSetFillColorWithColor(bmCxt, [NSColor blackColor].CGColor);
CGContextDrawImage(bmCxt, imgRect, img_cg);
compareImg = CGBitmapContextCreateImage(bmCxt);
CGContextRelease(bmCxt);
}
{
// We start at the right of the menu bar, and scan left until we find a good match
int numberOfScanLines = barRect.size.width - IMG.size.width;
bm_compare = malloc(1 * bm_compare_w * bm_compare_h);
// We use the meanValues buffer to keep track of how well the image matched for each point in the scan
meanValues = calloc(sizeof(CGFloat), numberOfScanLines);
// Walk the menubar image from right to left, pixel by pixel
for (int scanx = 0; scanx < numberOfScanLines; scanx++)
{
// Optimization, if we recently found a really good match, bail on the loop and return it
if ((presumptiveMatchIdx >= 0) && (scanx > (presumptiveMatchIdx + 5))) {
break;
}
CGFloat xOffset = numberOfScanLines - scanx;
CGRect displayRect = CGRectMake(xOffset * scaleFactor, 0, IMG.size.width * scaleFactor, 22. * scaleFactor);
CGImageRef displayCrop = CGImageCreateWithImageInRect(displayImg, displayRect);
CGContextRef compareCxt = CGBitmapContextCreate(bm_compare, bm_compare_w, bm_compare_h, 8, 1 * bm_compare_w, csK, kCGBitmapAlphaInfoMask&kCGImageAlphaNone);
CGContextSetBlendMode(compareCxt, kCGBlendModeCopy);
// Draw the image from our menubar
CGContextDrawImage(compareCxt, CGRectMake(0,0,IMG.size.width * scaleFactor, 22. * scaleFactor), displayCrop);
// Blend mode difference is like an XOR
CGContextSetBlendMode(compareCxt, kCGBlendModeDifference);
// Draw the test image. Because of blend mode, if we end up with a black image we matched perfectly
CGContextDrawImage(compareCxt, CGRectMake(0,0,IMG.size.width * scaleFactor, 22. * scaleFactor), compareImg);
CGContextFlush(compareCxt);
// Walk through the result image, to determine overall blackness
bm_compare_ptr = bm_compare;
for (int i = 0; i < bm_compare_w * bm_compare_h; i++)
{
meanValues[scanx] += (CGFloat)(*bm_compare_ptr);
bm_compare_ptr++;
}
meanValues[scanx] /= (255. * (CGFloat)(bm_compare_w * bm_compare_h));
// If the image is very dark, it matched well. If the average pixel value is < 0.07, we consider this
// a presumptive match. Mark it as such, but continue looking to see if there's an even better match.
if (meanValues[scanx] < 0.07) {
if (meanValues[scanx] < presumptiveMatchMeanVal) {
presumptiveMatchMeanVal = meanValues[scanx];
presumptiveMatchIdx = scanx;
}
}
CGImageRelease(displayCrop);
CGContextRelease(compareCxt);
}
}
// After we're done scanning the whole menubar (or we bailed because we found a good match),
// return the origin point.
// If we didn't match well enough, return NSZeroPoint
if (presumptiveMatchIdx >= 0) {
ret = CGPointMake(CGRectGetMaxX(self.frame), CGRectGetMaxY(self.frame));
ret.x -= (IMG.size.width + presumptiveMatchIdx);
ret.y -= 22;
}
CGImageRelease(displayImg);
CGImageRelease(compareImg);
CGColorSpaceRelease(csK);
if (bm_bar) free(bm_bar);
if (bm_compare) free(bm_compare);
if (meanValues) free(meanValues);
return ret;
}
@end
From the Apple NSStatusItem Class Reference:
Setting a custom view overrides all the other appearance and behavior settings defined by NSStatusItem. The custom view is responsible for drawing itself and providing its own behaviors, such as processing mouse clicks and sending action messages.