I want to crop the image and get the remaining portion of it, like the images below illustrate.
Here I want to make the selected portion of the image transparent and create a new image from it.
I have also tried getting all the pixels and setting the alpha of the selected portion to 0, but it didn't work.
Does anyone have any other solutions?
Here is the code I have used:
CGSize size = [UIImage imageNamed:fileName].size;
CGImageRef inImage = [UIImage imageNamed:fileName].CGImage;
CFDataRef ref = CGDataProviderCopyData(CGImageGetDataProvider(inImage));
UInt8 *buf = (UInt8 *) CFDataGetBytePtr(ref);
long length = CFDataGetLength(ref);
NSLog(@"length = %ld", length);
int row = 0,col = 0;
for(int i=0; i<length; i+=4)
{
int r = i;
int g = i+1;
int b = i+2;
int a = i+3;
col++;
if ((col % (int)size.width)==0 ) {
row++;
col=0;
}
int red = buf[r];
int green = buf[g];
int blue = buf[b];
int alpha = buf[a];
if (col > 25 && col < 75 && row > 25 && row < 75) {
alpha = 0;
}
buf[r] = SAFECOLOR(red);
buf[g] = SAFECOLOR(green);
buf[b] = SAFECOLOR(blue);
buf[a] = SAFECOLOR(alpha); // SAFECOLOR clamps a component to the 0...255 range
}
NSLog(#"CGImageGetAlphaInfo %d",CGImageGetAlphaInfo(inImage));
NSLog(#"CGImageGetColorSpace %#",CGImageGetColorSpace(inImage));
CGContextRef ctx = CGBitmapContextCreate(buf,
CGImageGetWidth(inImage),
CGImageGetHeight(inImage),
CGImageGetBitsPerComponent(inImage),
CGImageGetBytesPerRow(inImage),
kCGColorSpaceGenericRGB,
kCGImageAlphaPremultipliedLast);
CGImageRef img = CGBitmapContextCreateImage(ctx);
imgView.image = [UIImage imageWithCGImage:img];
CGImageRelease(img);
CGContextRelease(ctx);
CFRelease(ref);
Try painting a transparent rectangle with kCGBlendModeCopy (which replaces the destination pixels completely rather than blending with them):
UIImage* dst = [UIImage imageNamed:fileName];
UIGraphicsBeginImageContext(dst.size);
[dst drawInRect:CGRectMake(0,0,dst.size.width,dst.size.height)];
[[UIBezierPath bezierPathWithRect:CGRectMake(25,25,50,50)]
fillWithBlendMode:kCGBlendModeCopy alpha:0];
UIImage* result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
The problem with your solution is probably that, in a bitmap with premultiplied alpha, the R, G, B components are expected to be multiplied by the A component. That means that if you set A to zero, you must set R, G, B to zero too (any value multiplied by A = 0 is zero).
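For illustration, a minimal sketch of that fix applied to the pixel loop in the question (assuming the same RGBA layout and variable names):
// Clear the whole pixel, not just its alpha: with premultiplied alpha,
// R, G and B are stored already multiplied by A, so A = 0 implies R = G = B = 0.
if (col > 25 && col < 75 && row > 25 && row < 75) {
    buf[r] = 0;
    buf[g] = 0;
    buf[b] = 0;
    buf[a] = 0;
}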
Related
I'm trying to show a preview of an image in 1-bit monochrome, as in, not grayscale, but bitonal black and white. It's supposed to be an indication of how the image will look if it were faxed. Formats as low as 1-bit per pixel aren't available on OS X, only 8-bit grayscale. Is there any way to achieve this effect using Core Graphics or another framework (ideally with dithering)?
I know there's a filter called CIColorMonochrome but this only converts the image to grayscale.
Creating a 1-bit-deep NSBitmapImageRep (and likewise in the CG world) is, AFAIK, not supported, so we have to do it manually. It might be useful to use CIImage for this task; here I go the classical (you may call it old-fashioned) way. The code below shows how to do it. First a gray image is created from an NSImageRep, so we have a well-defined and simple format no matter how the source image is formatted (it could also be a PDF file). The resulting gray image is the source for the bitonal image. Here is the code for creating a gray image (without respecting the size / resolution of the source image, only the pixel counts!):
- (NSBitmapImageRep *) grayRepresentationOf:(NSImageRep *)aRep
{
NSBitmapImageRep *newRep =
[[NSBitmapImageRep alloc] initWithBitmapDataPlanes:NULL
pixelsWide:[aRep pixelsWide]
pixelsHigh:[aRep pixelsHigh]
bitsPerSample:8
samplesPerPixel:1
hasAlpha:NO //must be NO !
isPlanar:NO
colorSpaceName:NSCalibratedWhiteColorSpace
bytesPerRow:0
bitsPerPixel:0 ];
// this new imagerep has (as default) a resolution of 72 dpi
[NSGraphicsContext saveGraphicsState];
NSGraphicsContext *context = [NSGraphicsContext graphicsContextWithBitmapImageRep:newRep];
if( context==nil ){
NSLog( #"*** %s context is nil", __FUNCTION__ );
return nil;
}
[NSGraphicsContext setCurrentContext:context];
[aRep drawInRect:NSMakeRect( 0, 0, [newRep pixelsWide], [newRep pixelsHigh] )];
[NSGraphicsContext restoreGraphicsState];
return [newRep autorelease];
}
In the next method we create an NSBitmapImageRep (bits per pixel = 1, samples per pixel = 1) from a given NSImageRep (or one of its subclasses) and will use the method just given:
- (NSBitmapImageRep *) binaryRepresentationOf:(NSImageRep *)aRep
{
NSBitmapImageRep *grayRep = [self grayRepresentationOf:aRep];
if( grayRep==nil ) return nil;
NSInteger numberOfRows = [grayRep pixelsHigh];
NSInteger numberOfCols = [grayRep pixelsWide];
NSBitmapImageRep *newRep =
[[NSBitmapImageRep alloc] initWithBitmapDataPlanes:NULL
pixelsWide:numberOfCols
pixelsHigh:numberOfRows
bitsPerSample:1
samplesPerPixel:1
hasAlpha:NO
isPlanar:NO
colorSpaceName:NSCalibratedWhiteColorSpace
bitmapFormat:0
bytesPerRow:0
bitsPerPixel:0 ];
unsigned char *bitmapDataSource = [grayRep bitmapData];
unsigned char *bitmapDataDest = [newRep bitmapData];
// here is the place to use dithering or error diffusion (code below)
// iterate over all pixels
NSInteger grayBPR = [grayRep bytesPerRow];
NSInteger binBPR = [newRep bytesPerRow];
NSInteger pWide = [newRep pixelsWide];
for( NSInteger row=0; row<numberOfRows; row++ ){
unsigned char *rowDataSource = bitmapDataSource + row*grayBPR;
unsigned char *rowDataDest = bitmapDataDest + row*binBPR;
NSInteger destCol = 0;
unsigned char bw = 0;
for( NSInteger col = 0; col<pWide; ){
unsigned char gray = rowDataSource[col];
if( gray>127 ) {bw |= (1<<(7-col%8)); };
col++;
if( (col%8 == 0) || (col==pWide) ){
rowDataDest[destCol] = bw;
bw = 0;
destCol++;
}
}
}
// save as PNG for testing and return
[[newRep representationUsingType:NSPNGFileType properties:nil] writeToFile:@"/tmp/bin_1.png" atomically:YES];
return [newRep autorelease];
}
For error diffusion I used the following code, which changes the bitmap of the gray image directly. This is allowed because the gray image itself is no longer used afterwards.
// change bitmapDataSource : use Error-Diffusion
for( NSInteger row=0; row<numberOfRows-1; row++ ){
unsigned char *currentRowData = bitmapDataSource + row*grayBPR;
unsigned char *nextRowData = currentRowData + grayBPR;
for( NSInteger col = 1; col<numberOfCols-1; col++ ){ // stop one column early: col+1 is written below
NSInteger origValue = currentRowData[col];
NSInteger newValue = (origValue>127) ? 255 : 0;
NSInteger error = -(newValue - origValue);
currentRowData[col] = newValue;
currentRowData[col+1] = clamp(currentRowData[col+1] + (7*error/16));
nextRowData[col-1] = clamp( nextRowData[col-1] + (3*error/16) );
nextRowData[col] = clamp( nextRowData[col] + (5*error/16) );
nextRowData[col+1] = clamp( nextRowData[col+1] + (error/16) );
}
}
clamp is a macro defined before the method:
#define clamp(z) ( (z>255)?255 : ((z<0)?0:z) )
It keeps the unsigned char bytes within their valid range (0 <= z <= 255).
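For completeness, a hedged usage sketch (the converter object hosting the two methods above and the file paths are hypothetical):
NSImage *source = [[NSImage alloc] initWithContentsOfFile:@"/tmp/input.png"];
NSImageRep *rep = [[source representations] firstObject];
// Convert to a 1-bit bitonal rep using the methods above, then save as PNG.
NSBitmapImageRep *binRep = [converter binaryRepresentationOf:rep];
NSData *png = [binRep representationUsingType:NSPNGFileType properties:nil];
[png writeToFile:@"/tmp/bitonal.png" atomically:YES];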
So I have been writing a lot of image-processing code lately using only Core Graphics, and I have made quite a few working filters that manipulate colors, apply blends, blurs and things like that. But I'm having trouble writing a filter that applies a pointillize effect to an image like this:
What I'm trying to do is get the color of a pixel and fill an ellipse with that color, looping through the image and doing this every few pixels. Here is the code:
EDIT: here is my new code; this time it's just drawing a few little circles at the bottom of the image. Am I doing it right, like you said?
-(UIImage*)applyFilterWithAmount:(double)amount {
CGImageRef inImage = self.CGImage;
CFDataRef m_dataRef = CGDataProviderCopyData(CGImageGetDataProvider(inImage));
UInt8* m_pixelBuf = (UInt8*)CFDataGetBytePtr(m_dataRef);
int length = CFDataGetLength(m_dataRef);
CGContextRef ctx = CGBitmapContextCreate(m_pixelBuf,
CGImageGetWidth(inImage),
CGImageGetHeight(inImage),
CGImageGetBitsPerComponent(inImage),
CGImageGetBytesPerRow(inImage),
CGImageGetColorSpace(inImage),
CGImageGetBitmapInfo(inImage));
int row = 0;
int imageWidth = self.size.width;
if ((row%imageWidth)==0) {
row++;
}
int col = row%imageWidth;
for (int i = 0; i<length; i+=4) {
//filterPointillize(m_pixelBuf, i, context);
int r = i;
int g = i+1;
int b = i+2;
int red = m_pixelBuf[r];
int green = m_pixelBuf[g];
int blue = m_pixelBuf[b];
CGContextSetRGBFillColor(ctx, red/255.0f, green/255.0f, blue/255.0f, 1.0f);
CGContextFillEllipseInRect(ctx, CGRectMake(col, row, amount, amount));
}
CGImageRef imageRef = CGBitmapContextCreateImage(ctx);
CGContextRelease(ctx);
UIImage* finalImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
CFRelease(m_dataRef);
return finalImage;
}
One problem I see right off the bat is that you are using the raster cell number for both your X and Y origin. A raster in this configuration is just a single-dimension line; it is up to you to calculate the second dimension based on the raster image's width. That could explain why you got a line.
Another thing: it seems like you are reading every pixel of the image. Didn't you want to skip pixels that are the width of the ellipses you are trying to draw?
The next thing that looks suspicious: I think you should create the context you are drawing into before drawing. In addition, you should not be calling:
CGContextRef contextRef = UIGraphicsGetCurrentContext();
CGContextSaveGState(contextRef);
and
CGContextRestoreGState(contextRef);
inside the loop.
EDIT:
One further observation: your read RGB values are 0-255, and the CGContextSetRGBFillColor function expects values between 0.0f and 1.0f. This would explain why you got white. So you need to divide by 255, using a float literal so the division isn't integer division (red / 255 would truncate to 0 or 1):
CGContextSetRGBFillColor(contextRef, red / 255.0f, green / 255.0f, blue / 255.0f, 1.0f);
If you have any further questions, please don't hesitate to ask!
EDIT 2:
To calculate the row, first declare a row counter outside the loop:
int row = 0; //declare before the loop
int imageWidth = self.size.width; //get the image width
if (((i / 4) % imageWidth) == 0) { //i counts bytes and there are 4 bytes per pixel, so
                                   //divide by 4 first; if the remainder is 0
                                   //then we want to increment the row counter
row++;
}
We can also use mod to calculate the current column:
int col = (i / 4) % imageWidth; //divide the pixel index by the image width. the remainder is the col num
EDIT 3:
You have to put this inside the for loop:
if (((i / 4) % imageWidth) == 0) {
row++;
}
int col = (i / 4) % imageWidth;
Also, I forgot to mention before: to make the column and row 0-based (which is what you want), you will need to subtract 1 from the image width:
int imageWidth = self.size.width - 1;
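Putting the edits together, a minimal sketch of the corrected loop, assuming 4 bytes per pixel and no row padding (skipping ahead by the ellipse size, as suggested earlier, is left out for brevity):
int imageWidth = self.size.width - 1;
int row = 0;
for (int i = 0; i < length; i += 4) {
    int pixelIndex = i / 4; // i counts bytes; there are 4 bytes per pixel
    if ((pixelIndex % imageWidth) == 0) {
        row++;
    }
    int col = pixelIndex % imageWidth;
    int red = m_pixelBuf[i];
    int green = m_pixelBuf[i+1];
    int blue = m_pixelBuf[i+2];
    CGContextSetRGBFillColor(ctx, red/255.0f, green/255.0f, blue/255.0f, 1.0f);
    CGContextFillEllipseInRect(ctx, CGRectMake(col, row, amount, amount));
}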
I'm a little confused at the moment; first-time poster here on Stack Overflow. I'm brand new to Objective-C but have learned a lot from my coworkers. What I'm trying to do is traverse a bmContext vertically, shifting horizontally by 1 pixel after every vertical loop. Here's some code:
NSUInteger width = image.size.width;
NSUInteger height = image.size.height;
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = width * bytesPerPixel;
NSUInteger bytesPerColumn = height * bytesPerPixel;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef bmContext = CGBitmapContextCreate(NULL, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedFirst);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(bmContext, (CGRect){.origin.x = 0.0f, .origin.y = 0.0f, .size.width = width, .size.height = height}, image.CGImage);
UInt8* data = (UInt8*)CGBitmapContextGetData(bmContext);
const size_t bitmapByteCount = bytesPerRow * height;
struct Color {
UInt8 r;
UInt8 g;
UInt8 b;
};
for (size_t i = 0; i < bytesPerRow; i += 4) //shift 1 pixel
{
for (size_t j = 0; j < bitmapByteCount; j += bytesPerRow) //check every pixel in column
{
struct Color thisColor = {data[j + i + 1], data[j + i + 2], data[j + i + 3]};
}
}
In Java it looks something like this, but I have no interest in the Java version; it's just to emphasize my true question. I only care about the Objective-C code.
for (int x = 0; x < image.getWidth(); x++)
{
for (int y = 0; y < image.getHeight(); y++)
{
int rgb = image.getRGB(x, y);
//do something with pixel
}
}
Am I really shifting one unit horizontally and then checking all vertical pixels, then shifting again horizontally? I thought I was, but my results seem to be a little off. In Java and C# achieving this task was rather simple; if anyone knows a simpler way to do this in Objective-C, please let me know. Thanks in advance!
The way you are getting at the pixels seems to be off.
If I'm understanding correctly, you just want to iterate through every pixel in the image, column by column. Right?
This should work:
for (size_t i = 0; i < CGBitmapContextGetWidth(bmContext); i++)
{
for (size_t j = 0; j < CGBitmapContextGetHeight(bmContext); j++)
{
size_t offset = 4 * (j * CGBitmapContextGetWidth(bmContext) + i); // 4 bytes per pixel, ARGB
struct Color thisColor = {data[offset + 1], data[offset + 2], data[offset + 3]};
}
}
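One caveat to add (an assumption about the context, not part of the original answer): bitmap contexts may pad their rows, so it is safer to compute the byte offset from the context's actual bytes-per-row instead of assuming width * 4:
// Rows can be padded, so use the context's real stride instead of width * 4.
size_t bytesPerRow = CGBitmapContextGetBytesPerRow(bmContext);
size_t offset = j * bytesPerRow + i * 4; // ARGB: data[offset] = A, +1 = R, +2 = G, +3 = B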
I'm currently using this technique to get the color of a pixel in a UIImage (on iOS).
- (UIColor*) getPixelColorAtLocation:(CGPoint)point {
UIColor* color = nil;
CGImageRef inImage = self.image.CGImage;
// Create off screen bitmap context to draw the image into. Format ARGB is 4 bytes for each pixel: Alpha, Red, Green, Blue
CGContextRef cgctx = [self createARGBBitmapContextFromImage:inImage];
if (cgctx == NULL) { return nil; /* error */ }
size_t w = CGImageGetWidth(inImage);
size_t h = CGImageGetHeight(inImage);
CGRect rect = {{0,0},{w,h}};
// Draw the image to the bitmap context. Once we draw, the memory
// allocated for the context for rendering will then contain the
// raw image data in the specified color space.
CGContextDrawImage(cgctx, rect, inImage);
// Now we can get a pointer to the image data associated with the bitmap
// context.
unsigned char* data = CGBitmapContextGetData (cgctx);
if (data != NULL) {
//offset locates the pixel in the data from x,y.
//4 for 4 bytes of data per pixel, w is width of one row of data.
int offset = 4*((w*round(point.y))+round(point.x));
int alpha = data[offset];
int red = data[offset+1];
int green = data[offset+2];
int blue = data[offset+3];
NSLog(#"offset: %i colors: RGB A %i %i %i %i",offset,red,green,blue,alpha);
color = [UIColor colorWithRed:(red/255.0f) green:(green/255.0f) blue:(blue/255.0f) alpha:(alpha/255.0f)];
}
// When finished, release the context
CGContextRelease(cgctx);
// Free image data memory for the context
if (data) { free(data); }
return color;
}
As illustrated here:
http://www.markj.net/iphone-uiimage-pixel-color/
it works quite well, but when working with images larger than the UIImageView it fails. I tried adding an image and changing the scaling mode to fit the view. How would I modify the code so that it could still sample the pixel color with a scaled image?
Try this for Swift 3:
func getPixelColor(image: UIImage, x: Int, y: Int, width: CGFloat) -> UIColor
{
    // Swift 3 renamed the old C-style Core Graphics calls to properties.
    guard let cgImage = image.cgImage,
          let pixelData = cgImage.dataProvider?.data else { return .clear }
    let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
    let pixelInfo: Int = ((Int(width) * y) + x) * 4
    let r = CGFloat(data[pixelInfo]) / 255.0
    let g = CGFloat(data[pixelInfo+1]) / 255.0
    let b = CGFloat(data[pixelInfo+2]) / 255.0
    let a = CGFloat(data[pixelInfo+3]) / 255.0
    return UIColor(red: r, green: g, blue: b, alpha: a)
}
Here's a pointer:
0x3A28213A //sorry, I couldn't resist the joke
For real now: after going through the comments on the page at markj.net, a certain James suggested making the following changes:
size_t w = CGImageGetWidth(inImage); //Written by Mark
size_t h = CGImageGetHeight(inImage); //Written by Mark
float xscale = w / self.frame.size.width;
float yscale = h / self.frame.size.height;
point.x = point.x * xscale;
point.y = point.y * yscale;
(thanks to http://www.markj.net/iphone-uiimage-pixel-color/comment-page-1/#comment-2159)
This didn't actually work for me... Not that I did much testing, and I'm not the world's greatest programmer (yet)...
My solution was to scale the UIImageView in such a way that each pixel of the image in it was the same size as a standard CGPoint on the screen; then I took my color as normal (using getPixelColorAtLocation:(CGPoint)point) and scaled the image back to the size I wanted.
Hope this helps!
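If the view uses aspect-fit scaling, here is a hedged sketch of the point conversion (assuming self is the view hosting getPixelColorAtLocation: and point is in view coordinates; the letterboxing offsets are what the simple xscale/yscale version above misses):
// Map a view point to image pixel coordinates under UIViewContentModeScaleAspectFit.
CGSize imgSize = self.image.size;
CGSize viewSize = self.bounds.size;
CGFloat scale = MIN(viewSize.width / imgSize.width, viewSize.height / imgSize.height);
CGSize fitted = CGSizeMake(imgSize.width * scale, imgSize.height * scale);
CGPoint offset = CGPointMake((viewSize.width - fitted.width) / 2.0,
                             (viewSize.height - fitted.height) / 2.0);
CGPoint imagePoint = CGPointMake((point.x - offset.x) / scale,
                                 (point.y - offset.y) / scale);
UIColor *color = [self getPixelColorAtLocation:imagePoint];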
Use the UIImageView Layer:
- (UIColor*) getPixelColorAtLocation:(CGPoint)point {
UIColor* color = nil;
UIGraphicsBeginImageContext(self.frame.size);
CGContextRef cgctx = UIGraphicsGetCurrentContext();
if (cgctx == NULL) { return nil; /* error */ }
[self.layer renderInContext:cgctx];
unsigned char* data = CGBitmapContextGetData (cgctx);
/*
...
*/
UIGraphicsEndImageContext();
return color;
}
I have a question about NSStatusItem for Cocoa on Mac OS X. If you look at the Mac app called Snippets (see the movie at http://snippetsapp.com/), you will see that once you click your status-bar icon, a perfectly aligned view / panel (or maybe even window) appears just below the icon.
My question is: how do you calculate the position at which to place your NSWindow just below the icon, the way this app does?
I have tried the following:
Subclassing NSMenu
Setting a view property for the first item of the menu (worked, but not enough)
Using addSubview on the NSStatusItem instead of an icon; this worked, but I could not get higher than 20px
Give the NSStatusItem a view, then get the frame of that view's window. This technically counts as UndocumentedGoodness, so don't be surprised if it breaks someday (e.g., if they start keeping the window offscreen instead).
I don't know what you mean by “could not get higher than 20px”.
To do this without the hassle of a custom view, I tried the following (which works). In the method that is set as the action for the status item, i.e. the method that is called when the user clicks the status item, the frame of the status item can be retrieved with:
[[[NSApp currentEvent] window] frame]
Works a treat for me
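For concreteness, a minimal sketch of such an action method (the panel property and the centering math are hypothetical, not from the answer above):
- (void)statusItemClicked:(id)sender
{
    // The click event comes from the status item's own window,
    // so its frame tells us where the icon sits on screen.
    NSRect itemFrame = [[[NSApp currentEvent] window] frame];
    NSPoint origin = NSMakePoint(NSMidX(itemFrame) - self.panel.frame.size.width / 2.0,
                                 NSMinY(itemFrame) - self.panel.frame.size.height);
    [self.panel setFrameOrigin:origin]; // self.panel: the window to show (hypothetical)
    [self.panel makeKeyAndOrderFront:nil];
}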
Given an NSStatusItem and an NSWindow, you can get the point that centers your window right below the status item like this:
fileprivate var centerBelowMenuItem: CGPoint {
guard let window = window, let barButton = statusItem.button else { return .zero }
let rectInWindow = barButton.convert(barButton.bounds, to: nil)
let screenRect = barButton.window?.convertToScreen(rectInWindow) ?? .zero
// We now have the menu item rect on the screen.
// Let's do some basic math to center our window to this point.
let centerX = screenRect.origin.x-(window.frame.size.width-barButton.bounds.width)/2
return CGPoint(x: centerX, y: screenRect.origin.y)
}
No need for undocumented API's.
Maybe another solution, which works for me (Swift 4.1):
let yourStatusItem = NSStatusBar.system.statusItem(withLength: NSStatusItem.variableLength)
let frameOrigin = yourStatusItem.button?.window?.frame.origin
let yourPoint = CGPoint(x: (frameOrigin?.x)!, y: (frameOrigin?.y)! - 22)
yourWindow?.setFrameOrigin(yourPoint)
It seems that this app uses Matt Gemmell's MAAttachedWindow. There's a sample application with the same layout and position.
NOTE: PLEASE DO NOT USE THIS, at least not for the purpose of locating an NSStatusItem.
Back when I posted this, this crazy image matching technique was the only way to solve this problem without undocumented API. Now, you should use Oskar's solution.
If you're willing to use image analysis to find the status item on a menu bar, here's a category for NSScreen which does exactly that.
It might seem crazy to do it this way, but it's fast, relatively small, and it's the only way of finding a status item without undocumented API.
If you pass in the current image for the status item, this method should find it.
@implementation NSScreen (LTStatusItemLocator)
// Find the location of IMG on the screen's status bar.
// If the image is not found, returns NSZeroPoint
- (NSPoint)originOfStatusItemWithImage:(NSImage *)IMG
{
CGColorSpaceRef csK = CGColorSpaceCreateDeviceGray();
NSPoint ret = NSZeroPoint;
CGDirectDisplayID screenID = 0;
CGImageRef displayImg = NULL;
CGImageRef compareImg = NULL;
CGRect screenRect = CGRectZero;
CGRect barRect = CGRectZero;
uint8_t *bm_bar = NULL;
uint8_t *bm_bar_ptr;
uint8_t *bm_compare = NULL;
uint8_t *bm_compare_ptr;
size_t bm_compare_w, bm_compare_h;
BOOL inverted = NO;
int numberOfScanLines = 0;
CGFloat *meanValues = NULL;
int presumptiveMatchIdx = -1;
CGFloat presumptiveMatchMeanVal = 999;
// If the computer is set to Dark Mode, set the "inverted" flag
NSDictionary *globalPrefs = [[NSUserDefaults standardUserDefaults] persistentDomainForName:NSGlobalDomain];
id style = globalPrefs[@"AppleInterfaceStyle"];
if ([style isKindOfClass:[NSString class]]) {
inverted = (NSOrderedSame == [style caseInsensitiveCompare:@"dark"]);
}
screenID = (CGDirectDisplayID)[self.deviceDescription[@"NSScreenNumber"] integerValue];
screenRect = CGDisplayBounds(screenID);
// Get the menubar rect
barRect = CGRectMake(0, 0, screenRect.size.width, 22);
displayImg = CGDisplayCreateImageForRect(screenID, barRect);
if (!displayImg) {
NSLog(#"Unable to create image from display");
CGColorSpaceRelease(csK);
return ret; // I would normally use goto(bail) here, but this is public code so let's not ruffle any feathers
}
size_t bar_w = CGImageGetWidth(displayImg);
size_t bar_h = CGImageGetHeight(displayImg);
// Determine scale factor based on the CGImageRef we got back from the display
CGFloat scaleFactor = (CGFloat)bar_h / (CGFloat)22;
// Greyscale bitmap for menu bar
bm_bar = malloc(1 * bar_w * bar_h);
{
CGContextRef bmCxt = NULL;
bmCxt = CGBitmapContextCreate(bm_bar, bar_w, bar_h, 8, 1 * bar_w, csK, kCGBitmapAlphaInfoMask&kCGImageAlphaNone);
// Draw the menu bar in grey
CGContextDrawImage(bmCxt, CGRectMake(0, 0, bar_w, bar_h), displayImg);
uint8_t minVal = 0xff;
uint8_t maxVal = 0x00;
// Walk the middle scanline of the bitmap to find its min/max gray values
uint64_t running = 0;
for (int yi = bar_h / 2; yi == bar_h / 2; yi++) // deliberately a single iteration: the middle row only
{
bm_bar_ptr = bm_bar + (bar_w * yi);
for (int xi = 0; xi < bar_w; xi++)
{
uint8_t v = *bm_bar_ptr++;
if (v < minVal) minVal = v;
if (v > maxVal) maxVal = v;
running += v;
}
}
running /= bar_w;
uint8_t threshold = minVal + ((maxVal - minVal) / 2);
//threshold = running;
// Walk the bitmap
bm_bar_ptr = bm_bar;
for (int yi = 0; yi < bar_h; yi++)
{
for (int xi = 0; xi < bar_w; xi++)
{
// Threshold all the pixels. Values > 50% go white, values <= 50% go black
// (opposite if Dark Mode)
// Could unroll this loop as an optimization, but probably not worthwhile
*bm_bar_ptr = (*bm_bar_ptr > threshold) ? (inverted?0x00:0xff) : (inverted?0xff:0x00);
bm_bar_ptr++;
}
}
CGImageRelease(displayImg);
displayImg = CGBitmapContextCreateImage(bmCxt);
CGContextRelease(bmCxt);
}
{
CGContextRef bmCxt = NULL;
CGImageRef img_cg = NULL;
bm_compare_w = scaleFactor * IMG.size.width;
bm_compare_h = scaleFactor * 22;
// Create out comparison bitmap - the image that was passed in
bmCxt = CGBitmapContextCreate(NULL, bm_compare_w, bm_compare_h, 8, 1 * bm_compare_w, csK, kCGBitmapAlphaInfoMask&kCGImageAlphaNone);
CGContextSetBlendMode(bmCxt, kCGBlendModeNormal);
NSRect imgRect_og = NSMakeRect(0,0,IMG.size.width,IMG.size.height);
NSRect imgRect = imgRect_og;
img_cg = [IMG CGImageForProposedRect:&imgRect context:nil hints:nil];
CGContextClearRect(bmCxt, imgRect);
CGContextSetFillColorWithColor(bmCxt, [NSColor whiteColor].CGColor);
CGContextFillRect(bmCxt, CGRectMake(0,0,9999,9999));
CGContextScaleCTM(bmCxt, scaleFactor, scaleFactor);
CGContextTranslateCTM(bmCxt, 0, (22. - IMG.size.height) / 2.);
// Draw the image in grey
CGContextSetFillColorWithColor(bmCxt, [NSColor blackColor].CGColor);
CGContextDrawImage(bmCxt, imgRect, img_cg);
compareImg = CGBitmapContextCreateImage(bmCxt);
CGContextRelease(bmCxt);
}
{
// We start at the right of the menu bar, and scan left until we find a good match
numberOfScanLines = barRect.size.width - IMG.size.width; // assign the variable declared earlier instead of shadowing it
bm_compare = malloc(1 * bm_compare_w * bm_compare_h);
// We use the meanValues buffer to keep track of how well the image matched for each point in the scan
meanValues = calloc(sizeof(CGFloat), numberOfScanLines);
// Walk the menubar image from right to left, pixel by pixel
for (int scanx = 0; scanx < numberOfScanLines; scanx++)
{
// Optimization, if we recently found a really good match, bail on the loop and return it
if ((presumptiveMatchIdx >= 0) && (scanx > (presumptiveMatchIdx + 5))) {
break;
}
CGFloat xOffset = numberOfScanLines - scanx;
CGRect displayRect = CGRectMake(xOffset * scaleFactor, 0, IMG.size.width * scaleFactor, 22. * scaleFactor);
CGImageRef displayCrop = CGImageCreateWithImageInRect(displayImg, displayRect);
CGContextRef compareCxt = CGBitmapContextCreate(bm_compare, bm_compare_w, bm_compare_h, 8, 1 * bm_compare_w, csK, kCGBitmapAlphaInfoMask&kCGImageAlphaNone);
CGContextSetBlendMode(compareCxt, kCGBlendModeCopy);
// Draw the image from our menubar
CGContextDrawImage(compareCxt, CGRectMake(0,0,IMG.size.width * scaleFactor, 22. * scaleFactor), displayCrop);
// Blend mode difference is like an XOR
CGContextSetBlendMode(compareCxt, kCGBlendModeDifference);
// Draw the test image. Because of blend mode, if we end up with a black image we matched perfectly
CGContextDrawImage(compareCxt, CGRectMake(0,0,IMG.size.width * scaleFactor, 22. * scaleFactor), compareImg);
CGContextFlush(compareCxt);
// Walk through the result image, to determine overall blackness
bm_compare_ptr = bm_compare;
for (int i = 0; i < bm_compare_w * bm_compare_h; i++)
{
meanValues[scanx] += (CGFloat)(*bm_compare_ptr);
bm_compare_ptr++;
}
meanValues[scanx] /= (255. * (CGFloat)(bm_compare_w * bm_compare_h));
// If the image is very dark, it matched well. If the average pixel value is < 0.07, we consider this
// a presumptive match. Mark it as such, but continue looking to see if there's an even better match.
if (meanValues[scanx] < 0.07) {
if (meanValues[scanx] < presumptiveMatchMeanVal) {
presumptiveMatchMeanVal = meanValues[scanx];
presumptiveMatchIdx = scanx;
}
}
CGImageRelease(displayCrop);
CGContextRelease(compareCxt);
}
}
// After we're done scanning the whole menubar (or we bailed because we found a good match),
// return the origin point.
// If we didn't match well enough, return NSZeroPoint
if (presumptiveMatchIdx >= 0) {
ret = CGPointMake(CGRectGetMaxX(self.frame), CGRectGetMaxY(self.frame));
ret.x -= (IMG.size.width + presumptiveMatchIdx);
ret.y -= 22;
}
CGImageRelease(displayImg);
CGImageRelease(compareImg);
CGColorSpaceRelease(csK);
if (bm_bar) free(bm_bar);
if (bm_compare) free(bm_compare);
if (meanValues) free(meanValues);
return ret;
}
@end
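A hedged usage sketch (the image name and window are hypothetical; pass whatever image your status item currently displays):
// Locate our status item on the main screen's menu bar.
NSImage *statusItemImage = [NSImage imageNamed:@"StatusIcon"]; // assumption: the item's current image
NSPoint origin = [[NSScreen mainScreen] originOfStatusItemWithImage:statusItemImage];
if (!NSEqualPoints(origin, NSZeroPoint)) {
    [myWindow setFrameTopLeftPoint:origin]; // myWindow: the window to attach (hypothetical)
}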
From the Apple NSStatusItem Class Reference:
Setting a custom view overrides all the other appearance and behavior settings defined by NSStatusItem. The custom view is responsible for drawing itself and providing its own behaviors, such as processing mouse clicks and sending action messages.
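For context, a minimal sketch of the custom-view approach the quote describes (MyStatusView is a hypothetical NSView subclass; on modern macOS the view property is deprecated in favor of the status item's button):
NSStatusItem *item = [[NSStatusBar systemStatusBar] statusItemWithLength:NSVariableStatusItemLength];
MyStatusView *view = [[MyStatusView alloc] initWithFrame:NSMakeRect(0, 0, 24, 22)]; // hypothetical view
item.view = view; // the view now draws itself and must handle clicks and send actions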