Average Color of Mac Screen - objective-c

I'm trying to find a way to calculate the average color of the screen using Objective-C.
So far I use this code to get a screenshot, which works great:
CGImageRef image1 = CGDisplayCreateImage(kCGDirectMainDisplay);
NSBitmapImageRep *bitmapRep = [[NSBitmapImageRep alloc] initWithCGImage:image1];
// Create an NSImage and add the bitmap rep to it...
NSImage *image = [[NSImage alloc] init];
[image addRepresentation:bitmapRep];
Now my problem is calculating the average RGB color of this image.
I've found one solution, but the R, G, and B color components always come out the same (equal):
NSInteger i = 0;
NSInteger components[3] = {0,0,0};
unsigned char *data = [bitmapRep bitmapData];
NSInteger pixels = ([bitmapRep size].width * [bitmapRep size].height);
do {
    components[0] += *data++;
    components[1] += *data++;
    components[2] += *data++;
} while (++i < pixels);
int red = (CGFloat)components[0] / pixels;
int green = (CGFloat)components[1] / pixels;
int blue = (CGFloat)components[2] / pixels;

A short analysis of bitmapRep shows that each pixel has 32 bits (4 bytes), where the first byte is unused padding; in other words, the format is XRGB and X is not used. (There are no padding bytes at the end of a pixel row.)
Another remark: you use the method -(NSSize)size to count the number of pixels.
You should never do this! size has nothing to do with pixels. It only says how big the image should be rendered (expressed in inches, cm or mm) on the screen or the printer. To count (or otherwise work with) pixels you should use -(NSInteger)pixelsWide and -(NSInteger)pixelsHigh. The (incorrect) use of -size only happens to work when the resolution of the imageRep is exactly 72 dots per inch.
Finally: there is a similar question at Average Color of Mac Screen

Your data is probably aligned as 4 bytes per pixel (and not 3 bytes, like you assume). That would (statistically) explain the near-equal values that you get.
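Putting both remarks together, a minimal sketch of the corrected averaging loop might look like the following (untested; it assumes the XRGB, 4-bytes-per-pixel layout described above and steps through rows via bytesPerRow in case of row padding):
unsigned char *data = [bitmapRep bitmapData];
NSInteger width = [bitmapRep pixelsWide];
NSInteger height = [bitmapRep pixelsHigh];
NSInteger rowBytes = [bitmapRep bytesPerRow];
unsigned long long sums[3] = {0, 0, 0};
for (NSInteger y = 0; y < height; y++) {
    unsigned char *p = data + y * rowBytes;
    for (NSInteger x = 0; x < width; x++) {
        // assumed layout XRGB: skip the padding byte, then read R, G, B
        sums[0] += p[1];
        sums[1] += p[2];
        sums[2] += p[3];
        p += 4;
    }
}
NSInteger pixels = width * height;
CGFloat red   = (CGFloat)sums[0] / pixels;
CGFloat green = (CGFloat)sums[1] / pixels;
CGFloat blue  = (CGFloat)sums[2] / pixels;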

Related

Extracting RGB data from bitmapData (NSBitmapImageRep) Cocoa

I am working on an application which needs to compare two images to see how different they are, and it does this repeatedly for different images. The way I currently do this is by having both images as NSBitmapImageRep, using the colorAtX:y: method to get an NSColor object, and then examining its RGB components. But this approach is extremely slow.
Researching around the internet, I found posts saying that a better way would be to get the raw bitmap data with the bitmapData method, which returns an unsigned char pointer. Unfortunately I don't know how to progress further from here, and none of the posts I've found show how to actually get the RGB components for each pixel from this bitmapData. So currently I have:
NSBitmapImageRep* imageRep = [self screenShot]; //Takes a screenshot of the content displayed in the nswindow
unsigned char *data = [imageRep bitmapData]; //Get the bitmap data
//What do I do here in order to get the RGB components?
Thanks
The pointer you get back from -bitmapData points to the raw pixel data. You have to query the image rep to see what format it's in. The -bitmapFormat method will tell you whether the data is alpha-first or alpha-last (ARGB or RGBA) and whether the samples are ints or floats; you also need to check how many samples per pixel there are, and so on. Here are the docs. If you have more specific questions about the data format, post those and we can try to help answer them.
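For reference, a quick sketch of those format queries (these are all real NSBitmapImageRep methods; the comments just note what you would typically see for a screenshot rep):
NSBitmapFormat format = [imageRep bitmapFormat];        // alpha-first? floating-point samples?
NSInteger samplesPerPixel = [imageRep samplesPerPixel]; // e.g. 3 (RGB) or 4 (RGBA)
NSInteger bitsPerPixel = [imageRep bitsPerPixel];       // e.g. 32
NSInteger bytesPerRow = [imageRep bytesPerRow];         // may include padding at the end of each row
BOOL isPlanar = [imageRep isPlanar];                    // NO means the channels are interleaved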
Usually the data will be non-planar, which means it's just interleaved RGBA (or ARGB) data. You can loop over it like this (assuming 8 bits per channel and 4 channels of data):
int width = [imageRep pixelsWide];
int height = [imageRep pixelsHigh];
int rowBytes = [imageRep bytesPerRow];
unsigned char *pixels = [imageRep bitmapData];
int row, col;
for (row = 0; row < height; row++)
{
    // bytesPerRow may include padding, so always index rows through rowBytes
    unsigned char *rowStart = pixels + (row * rowBytes);
    unsigned char *nextChannel = rowStart;
    for (col = 0; col < width; col++)
    {
        unsigned char red, green, blue, alpha;
        red = *nextChannel;
        nextChannel++;
        green = *nextChannel;
        nextChannel++;
        blue = *nextChannel;
        nextChannel++;
        alpha = *nextChannel;
        nextChannel++;
        // ...do something with red, green, blue, alpha...
    }
}

vImage not putting channels correctly back together

I tried to extract all 3 channels from an image with vImageConvert_RGB888toPlanar8 and then put them back together with vImageConvert_Planar8toRGB888 but the image gets totally messed up. Why is that?
vImage_Buffer blueBuffer;
blueBuffer.data = (void*)blueImageData.bytes;
blueBuffer.width = size.width;
blueBuffer.height = size.height;
blueBuffer.rowBytes = [blueImageData length]/size.height;
vImage_Buffer rBuffer;
rBuffer.width = size.width;
rBuffer.height = size.height;
rBuffer.rowBytes = size.width;
void *rPixelBuffer = malloc(size.width * size.height);
if(rPixelBuffer == NULL)
{
    NSLog(@"No pixelbuffer");
}
rBuffer.data = rPixelBuffer;
vImage_Buffer gBuffer;
gBuffer.width = size.width;
gBuffer.height = size.height;
gBuffer.rowBytes = size.width;
void *gPixelBuffer = malloc(size.width * size.height);
if(gPixelBuffer == NULL)
{
    NSLog(@"No pixelbuffer");
}
gBuffer.data = gPixelBuffer;
vImage_Buffer bBuffer;
bBuffer.width = size.width;
bBuffer.height = size.height;
bBuffer.rowBytes = size.width;
void *bPixelBuffer = malloc(size.width * size.height);
if(bPixelBuffer == NULL)
{
    NSLog(@"No pixelbuffer");
}
bBuffer.data = bPixelBuffer;
vImageConvert_RGB888toPlanar8(&blueBuffer, &rBuffer, &gBuffer, &bBuffer, kvImageNoFlags);
size_t destinationImageBytesLength = size.width*size.height*3;
const void* destinationImageBytes = valloc(destinationImageBytesLength);
NSData* destinationImageData = [[NSData alloc] initWithBytes:destinationImageBytes length:destinationImageBytesLength];
vImage_Buffer destinationBuffer;
destinationBuffer.data = (void*)destinationImageData.bytes;
destinationBuffer.width = size.width;
destinationBuffer.height = size.height;
destinationBuffer.rowBytes = [destinationImageData length]/size.height;
vImage_Error result = vImageConvert_Planar8toRGB888(&rBuffer, &gBuffer, &bBuffer, &destinationBuffer, 0);
NSImage* image = nil;
if(result == kvImageNoError)
{
    //TODO: If you need color matching, use an appropriate colorspace here
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGDataProviderRef dataProvider = CGDataProviderCreateWithCFData((__bridge CFDataRef)(destinationImageData));
    CGImageRef finalImageRef = CGImageCreate(size.width, size.height, 8, 24, destinationBuffer.rowBytes, colorSpace, kCGBitmapByteOrder32Big|kCGImageAlphaNone, dataProvider, NULL, NO, kCGRenderingIntentDefault);
    CGColorSpaceRelease(colorSpace);
    CGDataProviderRelease(dataProvider);
    image = [[NSImage alloc] initWithCGImage:finalImageRef size:NSMakeSize(size.width, size.height)];
    CGImageRelease(finalImageRef);
}
free((void*)destinationImageBytes);
return image;
Working with vImage means working with pixels only, so you must never use the size of an image (or imageRep); use only pixelsWide and pixelsHigh. Replace every size.width with pixelsWide and every size.height with pixelsHigh. Apple has example code for vImage that uses size values; don't trust it, because not all of Apple's example code is correct.
The size of an image or imageRep determines how big the image shall be drawn on the screen (or a printer). Size values have the dimension of a length, with units such as meters, cm, inches or (as in Cocoa) 1/72 inch, i.e. points, and they are represented as float values.
pixelsWide and pixelsHigh have no dimension and no unit (they are simply counts) and are represented as integer values.
There may be more bugs in your code, but the first step should be to replace all size values (see the sketch below).
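For illustration, a minimal sketch of what that replacement might look like for one of the planar buffers, assuming the pixel counts come from the NSBitmapImageRep the RGB data was extracted from (the imageRep variable here is hypothetical, not part of the question's code):
NSInteger pixelsWide = [imageRep pixelsWide];   // pixel count, not the point size
NSInteger pixelsHigh = [imageRep pixelsHigh];
vImage_Buffer rBuffer;
rBuffer.width = pixelsWide;
rBuffer.height = pixelsHigh;
rBuffer.rowBytes = pixelsWide;                  // 1 byte per pixel in a Planar8 buffer
rBuffer.data = malloc(pixelsWide * pixelsHigh);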
Strictly speaking, you want kCGBitmapByteOrderDefault instead of kCGBitmapByteOrder32Big. 32Big doesn't make much sense for a 24 bit pixel format.
This seems like a weak link:
destinationBuffer.rowBytes = [destinationImageData length]/size.height;
Check to see it is the right number.
A picture of the output would help diagnose the problem. Also check the console to see if CG is giving you any spew. Did you try vImageCreateCGImageFromBuffer() with kvImagePrintDiagnosticsToConsole to see if it has anything to say?
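If it helps, here is a rough sketch of that diagnostic call (untested, OS X 10.9+; it reuses destinationBuffer from the question and assumes the same 24-bit RGB layout):
vImage_CGImageFormat cgFormat = {
    .bitsPerComponent = 8,
    .bitsPerPixel = 24,
    .colorSpace = NULL,                       // NULL lets vImage pick a default RGB colorspace
    .bitmapInfo = kCGBitmapByteOrderDefault | kCGImageAlphaNone,
    .version = 0,
    .decode = NULL,
    .renderingIntent = kCGRenderingIntentDefault
};
vImage_Error cgError = kvImageNoError;
CGImageRef diagnosticImage = vImageCreateCGImageFromBuffer(&destinationBuffer,
                                                           &cgFormat,
                                                           NULL,      // no cleanup callback
                                                           NULL,      // no user data
                                                           kvImagePrintDiagnosticsToConsole,
                                                           &cgError);
// cgError and the console output should point at any format mismatch.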

NSImage Image Size With Multiple Layers

I have a Mac (not iOS) application that allows the user to select one or more images with NSOpenPanel. What I have trouble with is how to get correct dimensions for multi-layered images. If an image contains a single layer or is flattened, the following gets me the correct image dimensions from the file path.
NSImage *image0 = [[NSImage alloc] initWithContentsOfFile:path];
CGFloat w = image0.size.width;
CGFloat h = image0.size.height;
But if I select an image that has multiple layers, I get strange numbers. For example, I have a single-layer image whose dimensions are 1,440 x 900 px according to Fireworks. If I add a small layer containing a circle, save the image as PNG and read it back, I get 1,458 x 911 px. According to this topic and this topic, I should read the largest layer. Okay. So I've created a function as follows.
- (CGSize)getimageSize:(NSString *)filepath {
    NSArray *imageReps = [NSBitmapImageRep imageRepsWithContentsOfFile:filepath];
    NSInteger width = 0;
    NSInteger height = 0;
    for (NSImageRep *imageRep in imageReps) {
        if ([imageRep pixelsWide] > width) width = [imageRep pixelsWide];
        if ([imageRep pixelsHigh] > height) height = [imageRep pixelsHigh];
    }
    NSSize size = CGSizeMake((CGFloat)width, (CGFloat)height);
    return size;
}
Using the function above, I still get the wrong dimensions (1,458 x 911 px) instead of 1,440 x 900 px. Actually, I had the same problem when I was developing Mac applications with REAL Stupid until a few years ago. So how can I get the correct dimensions when an image contains multiple layers?
Thank you for your advice.

Core Graphics pointillize effect on CGImage

So I have been writing a lot of image processing code lately using only Core Graphics, and I have made quite a few working filters that manipulate colors, apply blends, blurs and things like that. But I'm having trouble writing a filter that applies a pointillize effect to an image like this:
What I'm trying to do is get the color of a pixel and fill an ellipse with that color, looping through the image and doing this every few pixels. Here is the code:
EDIT: here is my new code. This time it's just drawing a few little circles at the bottom of the image; am I doing it right like you said?
-(UIImage*)applyFilterWithAmount:(double)amount {
    CGImageRef inImage = self.CGImage;
    CFDataRef m_dataRef = CGDataProviderCopyData(CGImageGetDataProvider(inImage));
    UInt8* m_pixelBuf = (UInt8*)CFDataGetBytePtr(m_dataRef);
    int length = CFDataGetLength(m_dataRef);
    CGContextRef ctx = CGBitmapContextCreate(m_pixelBuf,
                                             CGImageGetWidth(inImage),
                                             CGImageGetHeight(inImage),
                                             CGImageGetBitsPerComponent(inImage),
                                             CGImageGetBytesPerRow(inImage),
                                             CGImageGetColorSpace(inImage),
                                             CGImageGetBitmapInfo(inImage));
    int row = 0;
    int imageWidth = self.size.width;
    if ((row%imageWidth)==0) {
        row++;
    }
    int col = row%imageWidth;
    for (int i = 0; i<length; i+=4) {
        //filterPointillize(m_pixelBuf, i, context);
        int r = i;
        int g = i+1;
        int b = i+2;
        int red = m_pixelBuf[r];
        int green = m_pixelBuf[g];
        int blue = m_pixelBuf[b];
        CGContextSetRGBFillColor(ctx, red/255, green/255, blue/255, 1.0);
        CGContextFillEllipseInRect(ctx, CGRectMake(col, row, amount, amount));
    }
    CGImageRef imageRef = CGBitmapContextCreateImage(ctx);
    CGContextRelease(ctx);
    UIImage* finalImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CFRelease(m_dataRef);
    return finalImage;
}
One problem I see right off the bat is that you are using the raster cell number for both your X and Y origin. A raster in this configuration is just a single one-dimensional line; it is up to you to calculate the second dimension from the raster image's width. That could explain why you got a line.
Another thing: it seems like you are reading every pixel of the image. Didn't you want to skip ahead by the width of the ellipses you are trying to draw?
The next thing that looks suspicious: I think you should create the context you are drawing into before you start drawing. In addition, you should not be calling:
CGContextRef contextRef = UIGraphicsGetCurrentContext();
CGContextSaveGState(contextRef);
and
CGContextRestoreGState(contextRef);
inside the loop.
EDIT:
One further observation: the RGB values you read are 0-255, but CGContextSetRGBFillColor expects values between 0.0f and 1.0f. This would explain why you got white. So you need to divide by 255.0 (use a floating-point divisor, otherwise integer division truncates everything to 0 or 1):
CGContextSetRGBFillColor(contextRef, red / 255.0, green / 255.0, blue / 255.0, 1.0);
If you have any further questions, please don't hesitate to ask!
EDIT 2:
To calculate the row, first declare a row counter outside the loop:
int row = 0; //declare before the loop
int imageWidth = self.size.width; //get the image width
if ((i % imageWidth) == 0) { //if the cell number divides evenly by the image width,
    //then we want to increment the row counter
    row++;
}
We can also use mod to calculate the current column:
int col = i % imageWidth; //divide i by the image width. the remainder is the col num
EDIT 3:
You have to put this inside the for loop:
if ((i % imageWidth) == 0) {
    row++;
}
int col = i % imageWidth;
Also, I forgot to mention before, to make the column and row 0 based (which is what you want) you will need to subtract 1 from the image size:
int imageWidth = self.size.width - 1;
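For what it's worth, here is one way the row/column bookkeeping could be written inside the question's loop (a hedged sketch, untested; it assumes the 4-bytes-per-pixel buffer from the question and that CGImageGetBytesPerRow(inImage) == width * 4, i.e. no row padding):
int width = (int)CGImageGetWidth(inImage);   // pixel width of the source image
for (int i = 0; i < length; i += 4) {
    int pixelIndex = i / 4;                  // which pixel this 4-byte group belongs to
    int row = pixelIndex / width;            // 0-based row
    int col = pixelIndex % width;            // 0-based column
    // ...read m_pixelBuf[i], m_pixelBuf[i+1], m_pixelBuf[i+2] and draw at (col, row)...
}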

iOS OpenGL using parameters for glTexImage2D to make a UIImage?

I am working through some existing code for a project I am assigned to.
I have a successful call to glTexImage2D like this:
glTexImage2D(GL_TEXTURE_2D, 0, texture->format, texture->widthTexture, texture->heightTexture, 0, texture->format, texture->type, texture->data);
I would like to create an image (preferably a CGImage or UIImage) using the variables passed to glTexImage2D, but I don't know if it's possible.
I need to create many sequential images (many of them per second) from an OpenGL view and save them for later use.
Should I be able to create a CGImage or UIImage using the variables I pass to glTexImage2D?
If I should be able to, how should I do it?
If not, why can't I, and what do you suggest for my task of saving/capturing the contents of my OpenGL view many times per second?
edit: I have already successfully captured images using some techniques provided by Apple with glReadPixels, etc. I want something faster so I can get more images per second.
edit: after reviewing and adding the code from Thomson, here is the resulting image:
The image very slightly resembles what it should look like, except duplicated roughly 5 times horizontally and with some random black space underneath.
note: the video (each frame) data is coming over an ad-hoc network connection to the iPhone. I believe the camera is capturing each frame in the YCbCr color space.
edit: further reviewing Thomson's code
I have copied your new code into my project and got a different image as a result:
width: 320
height: 240
I am not sure how to find the number of bytes in texture->data; it is a void pointer.
edit: format and type
texture.type = GL_UNSIGNED_SHORT_5_6_5
texture.format = GL_RGB
Hey binnyb, here's the solution to creating a UIImage using the data stored in texture->data. v01d is certainly right that you're not going to get the UIImage as it appears in your GL framebuffer, but it'll get you an image from the data before it has passed through the framebuffer.
Turns out your texture data is in 16 bit format, 5 bits for red, 6 bits for green, and 5 bits for blue. I've added code for converting the 16 bit RGB values into 32 bit RGBA values before creating a UIImage. I'm looking forward to hearing how this turns out.
float width = 512;
float height = 512;
int channels = 4;
// create a buffer for our image after converting it from 565 rgb to 8888rgba
u_int8_t* rawData = (u_int8_t*)malloc(width*height*channels);
// treat the void* texture data as raw bytes so it can be indexed
u_int8_t* src = (u_int8_t*)texture->data;
// unpack the 5,6,5 pixel data into 32 bit RGBA
for (int i=0; i<width*height; ++i)
{
    // append two adjacent bytes in texture->data into a 16 bit int
    u_int16_t pixel16 = (src[i*2] << 8) + src[i*2+1];
    // mask and shift each channel into a single 8 bit unsigned value, then normalize
    // from the 5/6 bit max to the 8 bit integer max. Alpha set to 0.
    rawData[channels*i]   = ((pixel16 & 63488) >> 11) / 31.0 * 255;
    rawData[channels*i+1] = ((pixel16 & 2016) >> 5)   / 63.0 * 255;
    rawData[channels*i+2] = (pixel16 & 31)            / 31.0 * 255;
    rawData[channels*i+3] = 0;
}
// same as before
int bitsPerComponent = 8;
int bitsPerPixel = channels*bitsPerComponent;
int bytesPerRow = channels*width;
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL,
                                                          rawData,
                                                          channels*width*height,
                                                          NULL);
// note: CGDataProviderCreateWithData does not copy rawData, so it must stay alive
// as long as the provider/image uses it (or be freed via the provider's release
// callback); do not free it here.
CGImageRef imageRef = CGImageCreate(width,
                                    height,
                                    bitsPerComponent,
                                    bitsPerPixel,
                                    bytesPerRow,
                                    colorSpaceRef,
                                    bitmapInfo,
                                    provider, NULL, NO, renderingIntent);
UIImage *newImage = [UIImage imageWithCGImage:imageRef];
The code for creating a new image comes from Creating UIImage from raw RGBA data thanks to Rohit. I've tested this with our original 320x240 image dimension, having converted a 24 bit RGB image into 5,6,5 format and then up to 32 bit. I haven't tested it on a 512x512 image but I don't expect any problems.
You could make an image from the data you are sending to GL, but I doubt that's really what you want to achieve.
My guess is you want the output of the framebuffer. To do that you need glReadPixels(). Bear in mind that for a large buffer (say 1024x768) it takes a long time to read the pixels back from GL; you won't get more than about one frame per second.
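For reference, a minimal sketch of that readback (untested; the width/height values are assumptions for an iPhone-sized framebuffer, and the GL context must be current):
GLint fbWidth = 320, fbHeight = 480;                  // assumed framebuffer size
GLubyte *pixels = (GLubyte *)malloc(fbWidth * fbHeight * 4);
glPixelStorei(GL_PACK_ALIGNMENT, 4);                  // RGBA rows are already 4-byte aligned
glReadPixels(0, 0, fbWidth, fbHeight, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
// 'pixels' now holds the framebuffer contents, bottom row first; it can be
// wrapped in a CGImage the same way rawData is in the answer above.
free(pixels);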
You should be able to use the UIImage initializer imageWithData for this. All you need is to ensure that the data in texture->data is in a structured format that is recognizable to the UIImage constructor.
NSData* imageData = [NSData dataWithBytes:texture->data length:(3*texture->widthTexture*texture->heightTexture)];
UIImage* theImage = [UIImage imageWithData:imageData];
The types that imageWithData: supports are not well documented, but you can create NSData from .png, .jpg, .gif, and I presume .ppm files without any difficulty. If texture->data is in one of those binary formats I suspect you can get this running with a little experimentation.