Capture mouse cursor in screenshot - objective-c

I am developing a Mac desktop application in which I capture the screen using:
CGImageRef screenShot = CGWindowListCreateImage(CGRectInfinite, kCGWindowListOptionAll, kCGNullWindowID, kCGWindowImageDefault);
The problem is that I expect the screenshot to include the mouse cursor, but it doesn't.
Do I need to enable any settings for that?
I tried the following before calling this function:
CGDisplayShowCursor(kCGDirectMainDisplay);
CGAssociateMouseAndMouseCursorPosition(true);
but it didn't work.
When I checked with the following:
bool bCursor = CGCursorIsDrawnInFramebuffer(); /* This returns false */
bCursor = CGCursorIsVisible(); /* This returns true */
the values say the cursor is not drawn in the framebuffer, even though it is visible.
I suppose all I need to do is draw the cursor into the framebuffer myself; how do I do that?

It seems the framebuffer doesn't give me the mouse cursor, so I am drawing my own. Here is the code snippet; it might be helpful to you:
-(CGImageRef)appendMouseCursor:(CGImageRef)pSourceImage {
    // Get the current mouse location (global, bottom-left origin)
    NSPoint mouseLoc = [NSEvent mouseLocation];
    NSLog(@"Mouse location is x=%d,y=%d", (int)mouseLoc.x, (int)mouseLoc.y);
    // Get the cursor image (no copy needed; we only draw it)
    NSImage *overlay = [[NSCursor arrowCursor] image];
    NSLog(@"Mouse location is x=%d,y=%d cursor width = %d, cursor height = %d",
          (int)mouseLoc.x, (int)mouseLoc.y,
          (int)[overlay size].width, (int)[overlay size].height);
    int org_x = (int)mouseLoc.x;
    int org_y = (int)mouseLoc.y;
    int w = (int)[overlay size].width;
    int h = (int)[overlay size].height;
    size_t height = CGImageGetHeight(pSourceImage);
    size_t width = CGImageGetWidth(pSourceImage);
    size_t bytesPerRow = CGImageGetBytesPerRow(pSourceImage);
    // Create a bitmap context matching the source image; passing NULL lets
    // Core Graphics allocate (and manage) the backing store for us
    CGRect bgBoundingBox = CGRectMake(0, 0, width, height);
    CGContextRef context = CGBitmapContextCreate(NULL, width, height,
                                                 8, // 8 bits per component
                                                 bytesPerRow,
                                                 CGImageGetColorSpace(pSourceImage),
                                                 CGImageGetBitmapInfo(pSourceImage));
    // First draw the screenshot...
    CGContextDrawImage(context, bgBoundingBox, pSourceImage);
    // ...then the mouse cursor on top (for pixel-perfect placement you may
    // also want to offset by [[NSCursor arrowCursor] hotSpot])
    CGContextDrawImage(context, CGRectMake(org_x, org_y, w, h),
                       [overlay CGImageForProposedRect:NULL context:NULL hints:NULL]);
    // Both images have been drawn, so create an image from the context
    CGImageRef pFinalImage = CGBitmapContextCreateImage(context);
    CGContextRelease(context);
    return pFinalImage; /* to be released by the caller */
}
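For reference, calling it might look something like this (a minimal sketch; error handling and NSImage wrapping omitted):

CGImageRef screenShot = CGWindowListCreateImage(CGRectInfinite,
                                                kCGWindowListOptionAll,
                                                kCGNullWindowID,
                                                kCGWindowImageDefault);
CGImageRef withCursor = [self appendMouseCursor:screenShot];
// ... use withCursor, e.g. wrap it in an NSImage or write it to disk ...
CGImageRelease(screenShot);
CGImageRelease(withCursor); // appendMouseCursor: returns a +1 reference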

Related

Core Graphics pointillize effect on CGImage

So I have been writing a lot of image-processing code lately using only Core Graphics, and I have made quite a few working filters that manipulate colors, apply blends, blurs, and things like that. But I'm having trouble writing a filter that applies a pointillize effect to an image.
What I'm trying to do is get the color of a pixel and fill an ellipse with that color, looping through the image and doing this every few pixels. Here is the code:
EDIT: here is my new code. This time it's just drawing a few little circles at the bottom of the image. Am I doing it right, like you said?
-(UIImage*)applyFilterWithAmount:(double)amount {
    CGImageRef inImage = self.CGImage;
    CFDataRef m_dataRef = CGDataProviderCopyData(CGImageGetDataProvider(inImage));
    UInt8* m_pixelBuf = (UInt8*)CFDataGetBytePtr(m_dataRef);
    int length = CFDataGetLength(m_dataRef);
    CGContextRef ctx = CGBitmapContextCreate(m_pixelBuf,
                                             CGImageGetWidth(inImage),
                                             CGImageGetHeight(inImage),
                                             CGImageGetBitsPerComponent(inImage),
                                             CGImageGetBytesPerRow(inImage),
                                             CGImageGetColorSpace(inImage),
                                             CGImageGetBitmapInfo(inImage));
    int row = 0;
    int imageWidth = self.size.width;
    if ((row%imageWidth)==0) {
        row++;
    }
    int col = row%imageWidth;
    for (int i = 0; i<length; i+=4) {
        //filterPointillize(m_pixelBuf, i, context);
        int r = i;
        int g = i+1;
        int b = i+2;
        int red = m_pixelBuf[r];
        int green = m_pixelBuf[g];
        int blue = m_pixelBuf[b];
        CGContextSetRGBFillColor(ctx, red/255, green/255, blue/255, 1.0);
        CGContextFillEllipseInRect(ctx, CGRectMake(col, row, amount, amount));
    }
    CGImageRef imageRef = CGBitmapContextCreateImage(ctx);
    CGContextRelease(ctx);
    UIImage* finalImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CFRelease(m_dataRef);
    return finalImage;
}
One problem I see right off the bat is that you are using the raster cell number for both your X and Y origin. A raster in this configuration is just a single-dimension line; it is up to you to calculate the second dimension based on the raster image's width. That could explain why you got a line.
Another thing: it seems like you are reading every pixel of the image. Didn't you want to skip pixels the width of the ellipses you are trying to draw?
The next thing that looks suspicious: you should create the context you are drawing into before you start drawing. In addition, you should not be calling:
CGContextRef contextRef = UIGraphicsGetCurrentContext();
CGContextSaveGState(contextRef);
and
CGContextRestoreGState(contextRef);
inside the loop.
EDIT:
One further observation: the RGB values you read are 0-255, while the CGContextSetRGBFillColor function expects values between 0.0f and 1.0f. This would explain why you got white. So you need to divide by 255, and make it a floating-point division (plain integer division would truncate everything to 0 or 1):
CGContextSetRGBFillColor(contextRef, red / 255.0f, green / 255.0f, blue / 255.0f, 1.0f);
If you have any further questions, please don't hesitate to ask!
EDIT 2:
To calculate the row, first declare a row counter outside the loop; then, inside the loop, convert i to a pixel index (remember that i advances 4 bytes per pixel) and check it against the image width:
int row = 0; //declare before the loop
int imageWidth = self.size.width; //get the image width
int pixel = i / 4; //inside the loop: i counts bytes, 4 per pixel
if ((pixel % imageWidth) == 0) { //if the remainder is 0, we've wrapped
    //to a new row, so increment the row counter
    row++;
}
We can also use mod to calculate the current column:
int col = pixel % imageWidth; //the remainder of pixel / imageWidth is the column number
EDIT 3:
You have to put this inside the for loop:
int pixel = i / 4;
if ((pixel % imageWidth) == 0) {
    row++;
}
int col = pixel % imageWidth;
Also, one thing I forgot to mention before: the mod already gives you a 0-based column. The row counter as written, though, increments at pixel 0, so start row at -1 (or subtract 1 when you use it) if you need it 0-based as well.
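Putting all of this together, a hedged sketch of the corrected loop might look like the following (untested; it assumes a tightly packed buffer, i.e. CGImageGetBytesPerRow(inImage) == width * 4, and steps by amount in both directions so the ellipses tile instead of overlapping; inImage, m_pixelBuf, ctx, and amount are as in the question):

int imageWidth  = (int)CGImageGetWidth(inImage);
int imageHeight = (int)CGImageGetHeight(inImage);
int step = MAX((int)amount, 1); // sample every `amount` pixels
for (int y = 0; y < imageHeight; y += step) {
    for (int x = 0; x < imageWidth; x += step) {
        int i = (y * imageWidth + x) * 4; // byte offset of pixel (x, y)
        CGFloat red   = m_pixelBuf[i]     / 255.0f;
        CGFloat green = m_pixelBuf[i + 1] / 255.0f;
        CGFloat blue  = m_pixelBuf[i + 2] / 255.0f;
        CGContextSetRGBFillColor(ctx, red, green, blue, 1.0f);
        CGContextFillEllipseInRect(ctx, CGRectMake(x, y, amount, amount));
    }
}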

CTFrameGetVisibleStringRange() returns 0

I have this code that generates an array of view controllers while reading an NSAttributedString. After the first cycle, the function CTFrameGetVisibleStringRange() returns 0 even though there is more text to display.
- (void)buildFrames
{
    /* here we do some setup - define the x & y offsets and create an empty frames array */
    float frameXOffset = 20;
    float frameYOffset = 20;
    self.frames = [NSMutableArray array];
    // buildFrames continues by creating an inset rect for the view's bounds (offset slightly so we have a margin).
    CGRect textFrame = CGRectInset(self.bounds, frameXOffset, frameYOffset);
    // Create a framesetter with my attributed string
    CTFramesetterRef framesetter = CTFramesetterCreateWithAttributedString((CFAttributedStringRef)attributedString);
    // This section declares textPos, which will hold the current position in the text.
    // It also declares columnIndex, which counts how many columns have been created.
    int textPos = 0;
    int columnIndex = 0;
    while (textPos < [attributedString length]) {
        // The while loop runs until we've reached the end of the text. Inside the loop we create the column bounds: columnRect is a CGRect which, depending on columnIndex, holds the origin and size of the current column. Note that we are building columns continuously to the right (not across and then down).
        CGPoint colOffset = CGPointMake(frameXOffset, frameYOffset);
        CGRect columnRect = CGRectMake(0, 0, textFrame.size.width, textFrame.size.height);
        CGMutablePathRef path = CGPathCreateMutable();
        CGPathAddRect(path, NULL, columnRect);
        CTFrameRef frame = CTFramesetterCreateFrame(framesetter, CFRangeMake(textPos, 0), path, NULL);
        CFRange frameRange = CTFrameGetVisibleStringRange(frame);
        // MY CUSTOM UIVIEW
        LSCTView* content = [[[LSCTView alloc] initWithFrame: CGRectMake(0, 0, self.bounds.size.width, self.bounds.size.height)] autorelease];
        content.backgroundColor = [UIColor clearColor];
        content.frame = CGRectMake(colOffset.x, colOffset.y, columnRect.size.width, columnRect.size.height);
        /************* CREATE A NEW VIEW CONTROLLER WITH view=content *********************/
        textPos += frameRange.length;
        CFRelease(frame); // release the frame created for this column
        CFRelease(path);
        columnIndex++;
    }
    CFRelease(framesetter); // don't leak the framesetter
}
Did you change the alignment of attributedString? I had this same issue and found that it occurs in some cases when the text alignment is set to kCTJustifiedTextAlignment; it works fine with the other alignment types.
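Until the root cause is addressed, a defensive guard inside the loop (a sketch using the variable names from the question) at least keeps buildFrames from spinning forever when the framesetter lays out nothing:

CFRange frameRange = CTFrameGetVisibleStringRange(frame);
if (frameRange.length == 0) {
    // Layout produced no visible characters (seen with
    // kCTJustifiedTextAlignment); bail out instead of looping forever.
    CFRelease(frame);
    CFRelease(path);
    break;
}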

How to get pixel color at location from UIImage scaled within a UIImageView

I'm currently using this technique to get the color of a pixel in a UIImage (on iOS):
- (UIColor*) getPixelColorAtLocation:(CGPoint)point {
    UIColor* color = nil;
    CGImageRef inImage = self.image.CGImage;
    // Create an off-screen bitmap context to draw the image into. Format ARGB is 4 bytes for each pixel: Alpha, Red, Green, Blue
    CGContextRef cgctx = [self createARGBBitmapContextFromImage:inImage];
    if (cgctx == NULL) { return nil; /* error */ }
    size_t w = CGImageGetWidth(inImage);
    size_t h = CGImageGetHeight(inImage);
    CGRect rect = {{0,0},{w,h}};
    // Draw the image to the bitmap context. Once we draw, the memory
    // allocated for the context for rendering will then contain the
    // raw image data in the specified color space.
    CGContextDrawImage(cgctx, rect, inImage);
    // Now we can get a pointer to the image data associated with the bitmap
    // context.
    unsigned char* data = CGBitmapContextGetData(cgctx);
    if (data != NULL) {
        // offset locates the pixel in the data from x,y.
        // 4 for 4 bytes of data per pixel, w is the width of one row of data.
        int offset = 4*((w*round(point.y))+round(point.x));
        int alpha = data[offset];
        int red = data[offset+1];
        int green = data[offset+2];
        int blue = data[offset+3];
        NSLog(@"offset: %i colors: RGB A %i %i %i %i", offset, red, green, blue, alpha);
        color = [UIColor colorWithRed:(red/255.0f) green:(green/255.0f) blue:(blue/255.0f) alpha:(alpha/255.0f)];
    }
    // When finished, release the context
    CGContextRelease(cgctx);
    // Free image data memory for the context
    if (data) { free(data); }
    return color;
}
As illustrated here:
http://www.markj.net/iphone-uiimage-pixel-color/
it works quite well, but it fails when working with images larger than the UIImageView. I tried adding an image and changing the scaling mode to fit the view. How would I modify the code so that it can still sample the pixel color with a scaled image?
Try this for Swift 3:
func getPixelColor(image: UIImage, x: Int, y: Int, width: CGFloat) -> UIColor
{
    // Grab the raw pixel data from the backing CGImage
    guard let cgImage = image.cgImage,
          let pixelData = cgImage.dataProvider?.data else { return .clear }
    let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
    // 4 bytes per pixel (RGBA assumed)
    let pixelInfo: Int = ((Int(width) * y) + x) * 4
    let r = CGFloat(data[pixelInfo]) / 255.0
    let g = CGFloat(data[pixelInfo + 1]) / 255.0
    let b = CGFloat(data[pixelInfo + 2]) / 255.0
    let a = CGFloat(data[pixelInfo + 3]) / 255.0
    return UIColor(red: r, green: g, blue: b, alpha: a)
}
Here's a pointer:
0x3A28213A //sorry, I couldn't resist the joke
For real now: after going through the comments on the page at markj.net, a certain James suggested making the following changes:
size_t w = CGImageGetWidth(inImage); //Written by Mark
size_t h = CGImageGetHeight(inImage); //Written by Mark
float xscale = w / self.frame.size.width;
float yscale = h / self.frame.size.height;
point.x = point.x * xscale;
point.y = point.y * yscale;
(thanks to http://www.markj.net/iphone-uiimage-pixel-color/comment-page-1/#comment-2159)
This didn't actually work for me... Not that I did much testing, and I'm not the world's greatest programmer (yet)...
My solution was to scale the UIImageView so that each pixel of the image in it was the same size as a standard CGPoint on the screen; then I took my color like normal (using getPixelColorAtLocation:(CGPoint)point), and then I scaled the image back to the size I wanted.
Hope this helps!
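For completeness, here is a hedged sketch of that mapping done in code rather than by resizing, assuming the view's contentMode is UIViewContentModeScaleAspectFit (the helper name is hypothetical and untested):

// Hypothetical helper: converts a point in the image view's coordinate
// space to a pixel coordinate in its image, assuming aspect-fit scaling.
- (CGPoint)imagePointForViewPoint:(CGPoint)point inImageView:(UIImageView *)imageView {
    CGSize viewSize = imageView.bounds.size;
    CGSize imageSize = imageView.image.size;
    // Aspect fit uses the smaller scale factor and centers the image,
    // letterboxing the remainder.
    CGFloat scale = MIN(viewSize.width / imageSize.width,
                        viewSize.height / imageSize.height);
    CGFloat xOffset = (viewSize.width - imageSize.width * scale) / 2.0;
    CGFloat yOffset = (viewSize.height - imageSize.height * scale) / 2.0;
    return CGPointMake((point.x - xOffset) / scale,
                       (point.y - yOffset) / scale);
}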
Use the UIImageView Layer:
- (UIColor*) getPixelColorAtLocation:(CGPoint)point {
    UIColor* color = nil;
    UIGraphicsBeginImageContext(self.frame.size);
    CGContextRef cgctx = UIGraphicsGetCurrentContext();
    if (cgctx == NULL) { return nil; /* error */ }
    [self.layer renderInContext:cgctx];
    unsigned char* data = CGBitmapContextGetData(cgctx);
    /*
     ...
     */
    UIGraphicsEndImageContext();
    return color;
}

How can I get the underlying pixel data from a UIImage or CGImage?

I've tried numerous 'solutions' around the net; all of the ones I found have errors and thus don't work. I need to know the color of a pixel in a UIImage. How can I get this information?
Getting the raw data
From Apple's Technical Q&A QA1509: this will get the raw image data in its original format by fetching it from the image's data provider.
CFDataRef CopyImagePixels(CGImageRef inImage)
{
    return CGDataProviderCopyData(CGImageGetDataProvider(inImage));
}
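Usage follows the Core Foundation Copy rule, so the caller must release the returned data (a minimal sketch; myImage is assumed to be your UIImage):

CFDataRef pixelData = CopyImagePixels(myImage.CGImage);
const UInt8 *bytes = CFDataGetBytePtr(pixelData);
size_t length = CFDataGetLength(pixelData);
// ... read the bytes in the image's native pixel format ...
CFRelease(pixelData);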
Needed in a different format or color-space
If you want to get the data color-matched and in a specific format you can use something similar to the following code sample:
void ManipulateImagePixelData(CGImageRef inImage)
{
    // Create the bitmap context
    CGContextRef cgctx = CreateARGBBitmapContext(inImage);
    if (cgctx == NULL)
    {
        // error creating context
        return;
    }
    // Get image width, height. We'll use the entire image.
    size_t w = CGImageGetWidth(inImage);
    size_t h = CGImageGetHeight(inImage);
    CGRect rect = {{0,0},{w,h}};
    // Draw the image to the bitmap context. Once we draw, the memory
    // allocated for the context for rendering will then contain the
    // raw image data in the specified color space.
    CGContextDrawImage(cgctx, rect, inImage);
    // Now we can get a pointer to the image data associated with the bitmap
    // context.
    void *data = CGBitmapContextGetData(cgctx);
    if (data != NULL)
    {
        // **** You have a pointer to the image data ****
        // **** Do stuff with the data here ****
    }
    // When finished, release the context
    CGContextRelease(cgctx);
    // Free image data memory for the context
    if (data)
    {
        free(data);
    }
}
CGContextRef CreateARGBBitmapContext(CGImageRef inImage)
{
    CGContextRef context = NULL;
    CGColorSpaceRef colorSpace;
    void *bitmapData;
    int bitmapByteCount;
    int bitmapBytesPerRow;
    // Get image width, height. We'll use the entire image.
    size_t pixelsWide = CGImageGetWidth(inImage);
    size_t pixelsHigh = CGImageGetHeight(inImage);
    // Declare the number of bytes per row. Each pixel in the bitmap in this
    // example is represented by 4 bytes; 8 bits each of red, green, blue, and
    // alpha.
    bitmapBytesPerRow = (pixelsWide * 4);
    bitmapByteCount = (bitmapBytesPerRow * pixelsHigh);
    // Use the generic RGB color space.
    colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
    if (colorSpace == NULL)
    {
        fprintf(stderr, "Error allocating color space\n");
        return NULL;
    }
    // Allocate memory for image data. This is the destination in memory
    // where any drawing to the bitmap context will be rendered.
    bitmapData = malloc(bitmapByteCount);
    if (bitmapData == NULL)
    {
        fprintf(stderr, "Memory not allocated!");
        CGColorSpaceRelease(colorSpace);
        return NULL;
    }
    // Create the bitmap context. We want pre-multiplied ARGB, 8-bits
    // per component. Regardless of what the source image format is
    // (CMYK, Grayscale, and so on) it will be converted over to the format
    // specified here by CGBitmapContextCreate.
    context = CGBitmapContextCreate(bitmapData,
                                    pixelsWide,
                                    pixelsHigh,
                                    8, // bits per component
                                    bitmapBytesPerRow,
                                    colorSpace,
                                    kCGImageAlphaPremultipliedFirst);
    if (context == NULL)
    {
        free(bitmapData);
        fprintf(stderr, "Context not created!");
    }
    // Make sure and release colorspace before returning
    CGColorSpaceRelease(colorSpace);
    return context;
}
Color of a particular pixel
Assuming RGB, once you have the data in a format you like, finding the color is a matter of moving through the array of data and reading the component values at a particular pixel location.
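For example, with the 8-bit premultiplied ARGB context created above (alpha first, 4 bytes per pixel), the bytes for pixel (x, y) could be located like this (a sketch; x, y, cgctx, and data are assumed from the surrounding code):

size_t bytesPerRow = CGBitmapContextGetBytesPerRow(cgctx);
const UInt8 *p = (const UInt8 *)data + (y * bytesPerRow) + (x * 4);
UInt8 alpha = p[0]; // components are premultiplied by alpha
UInt8 red   = p[1];
UInt8 green = p[2];
UInt8 blue  = p[3];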
If you only need a single pixel or a few of them, a slightly different approach works: create a 1x1 bitmap context and draw the image into it with an offset so that only the pixel you want lands in the context.
CGImageRef image = uiimage.CGImage;
NSUInteger width = CGImageGetWidth(image);
NSUInteger height = CGImageGetHeight(image);
// Set up a 1x1 pixel context to draw into
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
unsigned char rawData[4];
int bytesPerPixel = 4;
int bytesPerRow = bytesPerPixel;
NSUInteger bitsPerComponent = 8;
CGContextRef context = CGBitmapContextCreate(rawData,
                                             1,
                                             1,
                                             bitsPerComponent,
                                             bytesPerRow,
                                             colorSpace,
                                             kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextSetBlendMode(context, kCGBlendModeCopy);
// Draw the image so that the pixel at `offset` (a CGPoint supplied by the
// caller) lands on the context's single pixel
CGContextDrawImage(context,
                   CGRectMake(-offset.x, offset.y-height, width, height),
                   image);
// Done
CGContextRelease(context);
// Get the pixel information
unsigned char red = rawData[0];
unsigned char green = rawData[1];
unsigned char blue = rawData[2];
unsigned char alpha = rawData[3];
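From there, turning the raw components into a UIColor is just a matter of normalizing each byte to 0.0-1.0 (ignoring alpha premultiplication for simplicity):

UIColor *color = [UIColor colorWithRed:red / 255.0f
                                 green:green / 255.0f
                                  blue:blue / 255.0f
                                 alpha:alpha / 255.0f];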

How to get the on-screen location of an NSStatusItem

I have a question about NSStatusItem for Cocoa on Mac OS X. If you look at the Mac app called Snippets (see the movie at http://snippetsapp.com/), you will see that once you click your status bar icon, a perfectly aligned view/panel (or maybe even a window) appears just below the icon.
My question is: how do I calculate the position at which to place my NSWindow, just like this app does?
I have tried the following:
Subclassing NSMenu
Setting a view property for the first item of the menu (worked, but not well enough)
Using addSubview on the NSStatusItem instead of an icon (worked, but I could not get higher than 20px)
Give the NSStatusItem a view, then get the frame of that view's window. This technically counts as UndocumentedGoodness, so don't be surprised if it breaks someday (e.g., if they start keeping the window offscreen instead).
I don't know what you mean by "could not get higher than 20px".
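A sketch of that approach (MyStatusView is a hypothetical NSView subclass; note that NSStatusItem's setView: was deprecated in later macOS releases):

NSStatusItem *item = [[NSStatusBar systemStatusBar] statusItemWithLength:NSSquareStatusItemLength];
MyStatusView *view = [[MyStatusView alloc] initWithFrame:NSMakeRect(0, 0, 24, 22)];
[item setView:view];
// The view is now hosted in the status item's own window,
// so the window's frame gives the item's on-screen location:
NSRect itemFrameOnScreen = [[view window] frame];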
To do this without the hassle of a custom view, I tried the following (which works). In the method that is set as the action for the status item, i.e. the method that is called when the user clicks the status item, the frame of the status item can be retrieved with:
[[[NSApp currentEvent] window] frame]
Works a treat for me
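In context, that looks something like this (myWindow is a hypothetical panel you want to show below the item; a sketch):

- (void)statusItemClicked:(id)sender
{
    // The click event arrives in the status item's own window,
    // so its frame tells us where the item sits on screen.
    NSRect statusRect = [[[NSApp currentEvent] window] frame];
    NSPoint origin = NSMakePoint(NSMidX(statusRect) - self.myWindow.frame.size.width / 2.0,
                                 NSMinY(statusRect) - self.myWindow.frame.size.height);
    [self.myWindow setFrameOrigin:origin];
    [self.myWindow makeKeyAndOrderFront:self];
}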
Given an NSStatusItem and an NSWindow, you can get the point that centers your window right below the status item like this:
fileprivate var centerBelowMenuItem: CGPoint {
    guard let window = window, let barButton = statusItem.button else { return .zero }
    let rectInWindow = barButton.convert(barButton.bounds, to: nil)
    let screenRect = barButton.window?.convertToScreen(rectInWindow) ?? .zero
    // We now have the status item rect on the screen.
    // Let's do some basic math to center our window on this point.
    let centerX = screenRect.origin.x - (window.frame.size.width - barButton.bounds.width) / 2
    return CGPoint(x: centerX, y: screenRect.origin.y)
}
No need for undocumented APIs.
Another solution that works for me (Swift 4.1):
let yourStatusItem = NSStatusBar.system.statusItem(withLength: NSStatusItem.variableLength)
let frameOrigin = yourStatusItem.button?.window?.frame.origin
let yourPoint = CGPoint(x: (frameOrigin?.x)!, y: (frameOrigin?.y)! - 22)
yourWindow?.setFrameOrigin(yourPoint)
It seems that this app uses Matt's MAAttachedWindow. There's a sample application with the same layout & position.
NOTE: PLEASE DO NOT USE THIS, at least not for the purpose of locating an NSStatusItem.
Back when I posted this, this crazy image matching technique was the only way to solve this problem without undocumented API. Now, you should use Oskar's solution.
If you're willing to use image analysis to find the status item on a menu bar, here's a category for NSScreen which does exactly that.
It might seem crazy to do it this way, but it's fast, relatively small, and it's the only way of finding a status item without undocumented API.
If you pass in the current image for the status item, this method should find it.
@implementation NSScreen (LTStatusItemLocator)

// Find the location of IMG on the screen's status bar.
// If the image is not found, returns NSZeroPoint
- (NSPoint)originOfStatusItemWithImage:(NSImage *)IMG
{
    CGColorSpaceRef csK = CGColorSpaceCreateDeviceGray();
    NSPoint ret = NSZeroPoint;
    CGDirectDisplayID screenID = 0;
    CGImageRef displayImg = NULL;
    CGImageRef compareImg = NULL;
    CGRect screenRect = CGRectZero;
    CGRect barRect = CGRectZero;
    uint8_t *bm_bar = NULL;
    uint8_t *bm_bar_ptr;
    uint8_t *bm_compare = NULL;
    uint8_t *bm_compare_ptr;
    size_t bm_compare_w, bm_compare_h;
    BOOL inverted = NO;
    CGFloat *meanValues = NULL;
    int presumptiveMatchIdx = -1;
    CGFloat presumptiveMatchMeanVal = 999;

    // If the computer is set to Dark Mode, set the "inverted" flag
    NSDictionary *globalPrefs = [[NSUserDefaults standardUserDefaults] persistentDomainForName:NSGlobalDomain];
    id style = globalPrefs[@"AppleInterfaceStyle"];
    if ([style isKindOfClass:[NSString class]]) {
        inverted = (NSOrderedSame == [style caseInsensitiveCompare:@"dark"]);
    }

    screenID = (CGDirectDisplayID)[self.deviceDescription[@"NSScreenNumber"] integerValue];
    screenRect = CGDisplayBounds(screenID);

    // Get the menubar rect (assumes the standard 22 pt menu bar height)
    barRect = CGRectMake(0, 0, screenRect.size.width, 22);
    displayImg = CGDisplayCreateImageForRect(screenID, barRect);
    if (!displayImg) {
        NSLog(@"Unable to create image from display");
        CGColorSpaceRelease(csK);
        return ret; // I would normally use goto(bail) here, but this is public code so let's not ruffle any feathers
    }

    size_t bar_w = CGImageGetWidth(displayImg);
    size_t bar_h = CGImageGetHeight(displayImg);

    // Determine the scale factor based on the CGImageRef we got back from the display
    CGFloat scaleFactor = (CGFloat)bar_h / (CGFloat)22;

    // Greyscale bitmap for the menu bar
    bm_bar = malloc(1 * bar_w * bar_h);
    {
        CGContextRef bmCxt = NULL;
        bmCxt = CGBitmapContextCreate(bm_bar, bar_w, bar_h, 8, 1 * bar_w, csK, kCGBitmapAlphaInfoMask & kCGImageAlphaNone);
        // Draw the menu bar in grey
        CGContextDrawImage(bmCxt, CGRectMake(0, 0, bar_w, bar_h), displayImg);

        uint8_t minVal = 0xff;
        uint8_t maxVal = 0x00;
        // Walk the middle scan line of the bitmap (this loop runs exactly once)
        uint64_t running = 0;
        for (int yi = bar_h / 2; yi == bar_h / 2; yi++)
        {
            bm_bar_ptr = bm_bar + (bar_w * yi);
            for (int xi = 0; xi < bar_w; xi++)
            {
                uint8_t v = *bm_bar_ptr++;
                if (v < minVal) minVal = v;
                if (v > maxVal) maxVal = v;
                running += v;
            }
        }
        running /= bar_w;
        uint8_t threshold = minVal + ((maxVal - minVal) / 2);
        //threshold = running;

        // Walk the bitmap
        bm_bar_ptr = bm_bar;
        for (int yi = 0; yi < bar_h; yi++)
        {
            for (int xi = 0; xi < bar_w; xi++)
            {
                // Threshold all the pixels. Values > 50% go white, values <= 50% go black
                // (opposite if Dark Mode)
                // Could unroll this loop as an optimization, but probably not worthwhile
                *bm_bar_ptr = (*bm_bar_ptr > threshold) ? (inverted ? 0x00 : 0xff) : (inverted ? 0xff : 0x00);
                bm_bar_ptr++;
            }
        }

        CGImageRelease(displayImg);
        displayImg = CGBitmapContextCreateImage(bmCxt);
        CGContextRelease(bmCxt);
    }

    {
        CGContextRef bmCxt = NULL;
        CGImageRef img_cg = NULL;

        bm_compare_w = scaleFactor * IMG.size.width;
        bm_compare_h = scaleFactor * 22;

        // Create our comparison bitmap - the image that was passed in
        bmCxt = CGBitmapContextCreate(NULL, bm_compare_w, bm_compare_h, 8, 1 * bm_compare_w, csK, kCGBitmapAlphaInfoMask & kCGImageAlphaNone);
        CGContextSetBlendMode(bmCxt, kCGBlendModeNormal);

        NSRect imgRect_og = NSMakeRect(0, 0, IMG.size.width, IMG.size.height);
        NSRect imgRect = imgRect_og;
        img_cg = [IMG CGImageForProposedRect:&imgRect context:nil hints:nil];

        CGContextClearRect(bmCxt, imgRect);
        CGContextSetFillColorWithColor(bmCxt, [NSColor whiteColor].CGColor);
        CGContextFillRect(bmCxt, CGRectMake(0, 0, 9999, 9999));
        CGContextScaleCTM(bmCxt, scaleFactor, scaleFactor);
        CGContextTranslateCTM(bmCxt, 0, (22. - IMG.size.height) / 2.);
        // Draw the image in grey
        CGContextSetFillColorWithColor(bmCxt, [NSColor blackColor].CGColor);
        CGContextDrawImage(bmCxt, imgRect, img_cg);

        compareImg = CGBitmapContextCreateImage(bmCxt);
        CGContextRelease(bmCxt);
    }

    {
        // We start at the right of the menu bar, and scan left until we find a good match
        int numberOfScanLines = barRect.size.width - IMG.size.width;

        bm_compare = malloc(1 * bm_compare_w * bm_compare_h);

        // We use the meanValues buffer to keep track of how well the image matched for each point in the scan
        meanValues = calloc(sizeof(CGFloat), numberOfScanLines);

        // Walk the menubar image from right to left, pixel by pixel
        for (int scanx = 0; scanx < numberOfScanLines; scanx++)
        {
            // Optimization: if we recently found a really good match, bail on the loop and return it
            if ((presumptiveMatchIdx >= 0) && (scanx > (presumptiveMatchIdx + 5))) {
                break;
            }

            CGFloat xOffset = numberOfScanLines - scanx;
            CGRect displayRect = CGRectMake(xOffset * scaleFactor, 0, IMG.size.width * scaleFactor, 22. * scaleFactor);
            CGImageRef displayCrop = CGImageCreateWithImageInRect(displayImg, displayRect);

            CGContextRef compareCxt = CGBitmapContextCreate(bm_compare, bm_compare_w, bm_compare_h, 8, 1 * bm_compare_w, csK, kCGBitmapAlphaInfoMask & kCGImageAlphaNone);
            CGContextSetBlendMode(compareCxt, kCGBlendModeCopy);
            // Draw the image from our menubar
            CGContextDrawImage(compareCxt, CGRectMake(0, 0, IMG.size.width * scaleFactor, 22. * scaleFactor), displayCrop);

            // Blend mode difference is like an XOR
            CGContextSetBlendMode(compareCxt, kCGBlendModeDifference);
            // Draw the test image. Because of the blend mode, if we end up with a black image we matched perfectly
            CGContextDrawImage(compareCxt, CGRectMake(0, 0, IMG.size.width * scaleFactor, 22. * scaleFactor), compareImg);
            CGContextFlush(compareCxt);

            // Walk through the result image to determine overall blackness
            bm_compare_ptr = bm_compare;
            for (int i = 0; i < bm_compare_w * bm_compare_h; i++)
            {
                meanValues[scanx] += (CGFloat)(*bm_compare_ptr);
                bm_compare_ptr++;
            }
            meanValues[scanx] /= (255. * (CGFloat)(bm_compare_w * bm_compare_h));

            // If the image is very dark, it matched well. If the average pixel value is < 0.07, we consider this
            // a presumptive match. Mark it as such, but continue looking to see if there's an even better match.
            if (meanValues[scanx] < 0.07) {
                if (meanValues[scanx] < presumptiveMatchMeanVal) {
                    presumptiveMatchMeanVal = meanValues[scanx];
                    presumptiveMatchIdx = scanx;
                }
            }

            CGImageRelease(displayCrop);
            CGContextRelease(compareCxt);
        }
    }

    // After we're done scanning the whole menubar (or we bailed because we found a good match),
    // return the origin point.
    // If we didn't match well enough, return NSZeroPoint
    if (presumptiveMatchIdx >= 0) {
        ret = CGPointMake(CGRectGetMaxX(self.frame), CGRectGetMaxY(self.frame));
        ret.x -= (IMG.size.width + presumptiveMatchIdx);
        ret.y -= 22;
    }

    CGImageRelease(displayImg);
    CGImageRelease(compareImg);
    CGColorSpaceRelease(csK);
    if (bm_bar) free(bm_bar);
    if (bm_compare) free(bm_compare);
    if (meanValues) free(meanValues);

    return ret;
}

@end
From the Apple NSStatusItem Class Reference:
Setting a custom view overrides all the other appearance and behavior settings defined by NSStatusItem. The custom view is responsible for drawing itself and providing its own behaviors, such as processing mouse clicks and sending action messages.