Detect most black pixel on an image (Objective-C, iOS)

I have an image! It's been so long since I've done pixel detection. I remember you have to get the pixels into an array somehow, then use the width of the image to know when you reach the end of a row and move on to the next one... lots of complex stuff! Anyway, I no longer have any idea how to do this, but I need to detect the x and y coordinates of the left-most darkest pixel of my image named "image1". Any good starting places?

Go to your bookstore and find a book called "iOS Developer's Cookbook" by Erica Sadun. Around page 378 there are methods for pixel detection. You can walk that array of RGB values with a for loop and find the pixel that has the smallest sum of R, G, and B values (each channel runs 0-255, so the sum runs 0-765); that pixel is the one closest to black.
I can also post the code if needed. But the book is the best source as it gives methods and explanations.
These are mine with some changes. The method names remain the same; all I changed was the image, which comes from an image picker.
-(UInt8 *) createBitmap {
    if (!self.imageCaptured) {
        NSLog(@"Error: There has not been an image captured.");
        return nil;
    }
    // Create a bitmap for the image
    UIImage *myImage = self.imageCaptured; // image name for test pic
    CGContextRef context = CreateARGBBitmapContext(myImage.size);
    if (context == NULL) return NULL;
    CGRect rect = CGRectMake(0.0f /*start width*/, 0.0f /*start height*/, myImage.size.width /*width bound*/, myImage.size.height /*height bound*/); // original
    // CGRect rect = CGRectMake(myImage.size.width/2.0 - 25.0, myImage.size.height/2.0 - 25.0, myImage.size.width/2.0 + 24.0, myImage.size.height/2.0 + 24.0); // test rectangle
    CGContextDrawImage(context, rect, myImage.CGImage);
    UInt8 *data = CGBitmapContextGetData(context);
    CGContextRelease(context); // the malloc'd buffer survives the release; the caller must free() it
    return data;
}
CGContextRef CreateARGBBitmapContext (CGSize size) {
    // Create new color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    if (colorSpace == NULL) {
        fprintf(stderr, "Error allocating color space\n");
        return NULL;
    }
    // Allocate memory for bitmap data
    void *bitmapData = malloc(size.width * size.height * 4);
    if (bitmapData == NULL) {
        fprintf(stderr, "Error: memory not allocated\n");
        CGColorSpaceRelease(colorSpace);
        return NULL;
    }
    // Build an 8-bit-per-channel ARGB context
    CGContextRef context = CGBitmapContextCreate(bitmapData, size.width, size.height, 8, size.width * 4, colorSpace, kCGImageAlphaPremultipliedFirst);
    CGColorSpaceRelease(colorSpace);
    if (context == NULL) {
        fprintf(stderr, "Error: Context not created!");
        free(bitmapData);
        return NULL;
    }
    return context;
}

NSUInteger blueOffset(NSUInteger x, NSUInteger y, NSUInteger w) {
    return y*w*4 + (x*4 + 3);
}

NSUInteger redOffset(NSUInteger x, NSUInteger y, NSUInteger w) {
    return y*w*4 + (x*4 + 1);
}
The method on the bottom, redOffset, will get you the Red value in the ARGB (Alpha-Red-Green-Blue) layout. To look at a different channel, change the constant added to x*4: 0 for alpha, 1 for red (as here), 2 for green, and 3 for blue. This works because the functions just index into the array built by the methods above, and that constant selects the byte within the 4-byte pixel. Essentially, read the three color channels (red, green, and blue) and sum them for each pixel; whichever pixel has the lowest combined value of red, green, and blue is the most black.
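As a hedged sketch of that loop (my own, not from the book), assuming the createBitmap method above and that the captured image's point size matches its pixel size:

UInt8 *data = [self createBitmap];
NSUInteger w = (NSUInteger)self.imageCaptured.size.width;
NSUInteger h = (NSUInteger)self.imageCaptured.size.height;
NSUInteger bestX = 0, bestY = 0, bestSum = NSUIntegerMax;
for (NSUInteger y = 0; y < h; y++) {
    for (NSUInteger x = 0; x < w; x++) {
        NSUInteger base = (y * w + x) * 4; // bytes: A, R, G, B
        NSUInteger sum = data[base + 1] + data[base + 2] + data[base + 3];
        // Strict < keeps the first occurrence found scanning row by row;
        // the extra test prefers the smaller x on later ties.
        if (sum < bestSum || (sum == bestSum && x < bestX)) {
            bestSum = sum; bestX = x; bestY = y;
        }
    }
}
NSLog(@"Darkest pixel at (%lu, %lu)", (unsigned long)bestX, (unsigned long)bestY);
free(data); // createBitmap's buffer was malloc'd and must be freed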

Related

PDF generated from PostScript vs. Quartz 2D

I'm working on a program which outputs needlework patterns as PDF files using Quartz 2D and Objective-C. I've used another program, coded in Python, that outputs PostScript files, which are converted to PDF when I open them in Preview. Since the second app is open source, I've been able to check that the settings I use to lay out my PDF are the same, specifically the size of the squares and the gap sizes between them.
In the image below, the output of the other program is on the left, while mine is on the right, and both are at actual size. The problem I'm having is that at actual size, the gap lines in my output are intermittent, while in the other, all the gaps can be seen. I'm wondering if anyone knows about a rendering difference with PostScript files which allows for this. I can zoom in on my output and the gaps show up, but I don't understand why there would be this difference.
The squares are set to be 8 pixels wide and tall, with a 1-pixel gap between them in both applications and 2-pixel-wide gaps every 10 squares, and mine is set not to use antialiasing. With my output, I've tried drawing directly to a CGPDFContext and drawing to a CGLayerRef then drawing the layer to the PDF context, but I get the same result. I'm using integer values for positioning the layout and I'm pretty sure I've avoided trying to place squares at fractional pixel positions.
I have also tried drawing the output to a CGBitmapContext and then drawing the resulting bitmap to the PDF context, but zooming in on that gives terrible artifacts since it is then a raster being magnified.
A last difference I've noted is that the file size of the PostScript-generated PDF is much smaller than the one I make. I'm thinking that might have to do with the paths I draw, since the documentation says drawing to a PDF context records the drawing as a series of PDF drawing commands written to a file, which I would imagine takes up quite a bit of space compared to just displaying an image.
I have included my code to generate my PDFs below in case it would be helpful, but I'm really just wondering whether there is a rendering difference between PostScript and Quartz that could explain these differences, and whether there is a way to make my output match up.
(The uploader says I need at least 10 reputation to post images, but I have this link: http://i.stack.imgur.com/nr588.jpg. Again, the PostScript output is on the left and my Quartz output is on the right; in my output, the gridlines are intermittent.)
-(void)makePDF:(NSImage *)image withPixelArray:(unsigned char *)rawData {
    NSString *currentUserHomeDirectory = NSHomeDirectory();
    currentUserHomeDirectory = [currentUserHomeDirectory stringByAppendingString:@"/Desktop/"];
    currentUserHomeDirectory = [currentUserHomeDirectory stringByAppendingString:[image name]];
    currentUserHomeDirectory = [currentUserHomeDirectory stringByAppendingPathExtension:@"pdf"];
    CGContextRef pdfContext;
    CFStringRef path;
    CFURLRef url;
    int width = 792;
    int height = 612;
    CFMutableDictionaryRef myDictionary = NULL;
    CFMutableDictionaryRef pageDictionary = NULL;
    const char *filename = [currentUserHomeDirectory UTF8String];
    path = CFStringCreateWithCString(NULL, filename, kCFStringEncodingUTF8);
    url = CFURLCreateWithFileSystemPath(NULL, path, kCFURLPOSIXPathStyle, 0);
    CFRelease(path);
    myDictionary = CFDictionaryCreateMutable(NULL, 0,
                                             &kCFTypeDictionaryKeyCallBacks,
                                             &kCFTypeDictionaryValueCallBacks);
    CGRect pageRect = CGRectMake(0, 0, width, height);
    pdfContext = CGPDFContextCreateWithURL(url, &pageRect, myDictionary);
    const CGFloat whitePoint[3] = {0.95047, 1.0, 1.08883};
    const CGFloat blackPoint[3] = {0, 0, 0};
    const CGFloat gammavalues[3] = {2.2, 2.2, 2.2};
    const CGFloat matrix[9] = {0.4124564, 0.3575761, 0.1804375, 0.2126729, 0.7151522, 0.072175, 0.0193339, 0.119192, 0.9503041};
    // Pass the arrays themselves; &whitePoint[3] etc. would point one past the end
    CGColorSpaceRef myColorSpace = CGColorSpaceCreateCalibratedRGB(whitePoint, blackPoint, gammavalues, matrix);
    CGContextSetFillColorSpace(pdfContext, myColorSpace);
    int annotationNumber = 0;
    int match = 0;
    CFRelease(myDictionary);
    CFRelease(url);
    pageDictionary = CFDictionaryCreateMutable(NULL, 0,
                                               &kCFTypeDictionaryKeyCallBacks,
                                               &kCFTypeDictionaryValueCallBacks);
    CFDataRef boxData = CFDataCreate(NULL, (const UInt8 *)&pageRect, sizeof(CGRect));
    CFDictionarySetValue(pageDictionary, kCGPDFContextMediaBox, boxData);
    int m = 0;
    int sidestep = 0;
    int downstep = 0;
    int maxc = 0;
    int maxr = 0;
    int columnsPerPage = 70;
    int rowsPerPage = 60;
    int symbolSize = 8;
    int gapSize = 1;
    CGContextSetShouldAntialias(pdfContext, NO);
    int pages = ceil([image size].width/columnsPerPage) * ceil([image size].height/rowsPerPage);
    for (int g = 0; g < pages; g++) {
        int offsetX = 32;
        int offsetY = 32;
        if (sidestep == ceil([image size].width/columnsPerPage) - 1) {
            maxc = [image size].width - sidestep*columnsPerPage;
        } else {
            maxc = columnsPerPage;
        }
        if (downstep == ceil([image size].height/rowsPerPage) - 1) {
            maxr = [image size].height - downstep*rowsPerPage;
        } else {
            maxr = rowsPerPage;
        }
        CGPDFContextBeginPage(pdfContext, pageDictionary);
        CGContextTranslateCTM(pdfContext, 0.0, 612);
        CGContextScaleCTM(pdfContext, 1.0, -1.0);
        CGContextSetShouldAntialias(pdfContext, NO);
        int r = 0;
        while (r < maxr) {
            int c = 0;
            while (c < maxc) {
                m = sidestep*columnsPerPage + c + downstep*[image size].width*rowsPerPage + r*[image size].width;
                // Reset offsetX
                if (c == 0) {
                    offsetX = 32;
                }
                // Increase offset for gridlines
                if (c == 0 && r % 10 == 0 && r != 0) {
                    offsetY += 2;
                }
                if (c % 10 == 0 && c != 0) {
                    offsetX += 2;
                }
                // DRAW SQUARES
                CGContextSetRGBFillColor(pdfContext, (double)rawData[m*4]/255., (double)rawData[m*4+1]/255., (double)rawData[m*4+2]/255., 1);
                CGContextFillRect(pdfContext, CGRectMake(c*(symbolSize+gapSize)+offsetX, r*(symbolSize+gapSize)+offsetY, symbolSize, symbolSize));
                if ([usedColorsPaths count] != 0) {
                    for (int z = 0; z < [usedColorsPaths count]; z++) {
                        if ([[[usedColorsPaths allKeys] objectAtIndex:z] isEqualToString:[NSString stringWithFormat:@"%i,%i,%i", rawData[m*4], rawData[m*4+1], rawData[m*4+2]]]) {
                            match = 1;
                            if (rawData[m*4+3] == 0) {
                                CGContextDrawLayerAtPoint(pdfContext, CGPointMake(c*(symbolSize+1)+offsetX-2, r*(symbolSize+1)+offsetY-2), [Anotations colorAnnotations:pdfContext :[[usedColorsPaths objectForKey:[NSString stringWithFormat:@"%i,%i,%i", rawData[m*4], rawData[m*4+1], rawData[m*4+2]]] intValue] :symbolSize+4 :0]);
                            } else {
                                CGContextDrawLayerAtPoint(pdfContext, CGPointMake(c*(symbolSize+1)+offsetX, r*(symbolSize+1)+offsetY), [Anotations colorAnnotations:pdfContext :[[usedColorsPaths objectForKey:[NSString stringWithFormat:@"%i,%i,%i", rawData[m*4], rawData[m*4+1], rawData[m*4+2]]] intValue] :symbolSize :0]);
                            }
                            break;
                        }
                    }
                    if (match == 0) {
                        if (rawData[m*4+3] == 0) {
                            [usedColorsPaths setObject:[NSNumber numberWithInt:455] forKey:[NSString stringWithFormat:@"%i,%i,%i", rawData[m*4], rawData[m*4+1], rawData[m*4+2]]];
                            CGContextDrawLayerAtPoint(pdfContext, CGPointMake(c*(symbolSize+1)+offsetX-2, r*(symbolSize+1)+offsetY-2), [Anotations colorAnnotations:pdfContext :[[usedColorsPaths objectForKey:[NSString stringWithFormat:@"%i,%i,%i", rawData[m*4], rawData[m*4+1], rawData[m*4+2]]] intValue] :symbolSize+4 :0]);
                        } else {
                            [usedColorsPaths setObject:[NSNumber numberWithInt:annotationNumber] forKey:[NSString stringWithFormat:@"%i,%i,%i", rawData[m*4], rawData[m*4+1], rawData[m*4+2]]];
                            CGContextDrawLayerAtPoint(pdfContext, CGPointMake(c*(symbolSize+1)+offsetX, r*(symbolSize+1)+offsetY), [Anotations colorAnnotations:pdfContext :[[usedColorsPaths objectForKey:[NSString stringWithFormat:@"%i,%i,%i", rawData[m*4], rawData[m*4+1], rawData[m*4+2]]] intValue] :symbolSize :0]);
                        }
                        annotationNumber++;
                        if (annotationNumber == 9) {
                            annotationNumber = 0;
                        }
                    }
                    match = 0;
                }
                if ([usedColorsPaths count] == 0) {
                    if (rawData[m*4+3] == 0) {
                        [usedColorsPaths setObject:[NSNumber numberWithInt:455] forKey:[NSString stringWithFormat:@"%i,%i,%i", rawData[m*4], rawData[m*4+1], rawData[m*4+2]]];
                        CGContextDrawLayerAtPoint(pdfContext, CGPointMake(c*(symbolSize+1)+offsetX-2, r*(symbolSize+1)+offsetY-2), [Anotations colorAnnotations:pdfContext :[[usedColorsPaths objectForKey:[NSString stringWithFormat:@"%i,%i,%i", rawData[m*4], rawData[m*4+1], rawData[m*4+2]]] intValue] :symbolSize+4 :0]);
                    } else {
                        [usedColorsPaths setObject:[NSNumber numberWithInt:annotationNumber] forKey:[NSString stringWithFormat:@"%i,%i,%i", rawData[m*4], rawData[m*4+1], rawData[m*4+2]]];
                        CGContextDrawLayerAtPoint(pdfContext, CGPointMake(c*(symbolSize+1)+offsetX, r*(symbolSize+1)+offsetY), [Anotations colorAnnotations:pdfContext :[[usedColorsPaths objectForKey:[NSString stringWithFormat:@"%i,%i,%i", rawData[m*4], rawData[m*4+1], rawData[m*4+2]]] intValue] :symbolSize :0]);
                    }
                    annotationNumber++;
                }
                c++;
            }
            r++;
        }
        sidestep++;
        if (sidestep == ceil([image size].width/columnsPerPage)) {
            sidestep = 0;
            downstep += 1;
        }
        CGContextSaveGState(pdfContext);
        CGPDFContextEndPage(pdfContext);
    }
    CGContextRelease(pdfContext);
    CFRelease(pageDictionary);
    CFRelease(boxData);
}
I would need to see both PDF files to be able to make any judgement about why the sizes are different, or why the line spacing is intermittent, but here are a few points:
The most likely reason why the files are different sizes is that one PDF file has the content stream (the drawing operations you refer to) compressed, while the other does not.
In general emitting a sequence of drawing operations is more compact than including a bitmap image, unless the image resolution is very low. An RGB image takes 3 bytes for every image sample. If you think about a 1 inch square image at 300 dpi that's 300x300x3 bytes, or 270,000 bytes. If the image was all one colour (see example below) I could describe it in PDF drawing operations in 22 bytes.
You can't specify the size of the squares, or any of the other graphic features, in pixels. PDF is a vector-based, scalable format, not a bitmap format. I don't work on a Mac so I can't comment on your sample code, but I suspect you are confusing the media width and height (specified in points) with pixels; these are not the same. The width and height describe a media size; there are no pixels involved until the PDF file is rendered to a bitmap, at which time the resolution of the device determines how many pixels are in each point.
Let's consider a PDF one inch square; that would have a width of 72 and a height of 72. I'll fill that rectangle with pure red; the PDF operations for that would be:
1 0 0 rg
0 0 72 72 re
f
So that sets the non-stroking colour to RGB (1, 0, 0), then, starting at 0, 0 (bottom left) and extending 72 points wide and 72 points high (one inch in each direction), constructs a rectangle and fills it with that colour.
If I view that on screen here on my PC, that one-inch square is rendered as 96 by 96 pixels. If I view it on an iPad with a Retina display, the square is rendered at 264 by 264 pixels. Finally, if I print it to my laser printer, the square is rendered at 600 by 600 pixels. The PDF content hasn't changed, but the number of pixels certainly has. A square is too simple of course; I could have used a circle instead, and obviously the higher-resolution devices would draw smoother curves. If I had used an image instead, the smoothness of the curve would be 'baked in': the device rendering the PDF can't alter it when the resolution changes; all it can do is discard image samples to render down, or interpolate new ones to render up. That looks jagged when you scale down and fuzzy when you scale up. The vector representation remains smooth, limited only by the current resolution.
The point of PDF is that it isn't limited to one resolution; it can print to all of them, and the output should be the same (as far as possible) on each device.
Now, I suspect the problem is that you are "using integer values for positioning the layout"; you can't do that and get correct (i.e. expected) results. You should be using real numbers for the layout, which will give you finer control over position. Remember you are not addressing individual pixels, you are positioning graphics in a coordinate system; resolution only comes into play when rendering (i.e. viewing or printing) the PDF file. Put aside concerns over pixels and just focus on the positioning.
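For instance, a minimal sketch of that advice (variable names are illustrative, not taken from the code above): compute each square's origin in CGFloat point coordinates so the 1-point gaps are never rounded away.

static void FillSquareAt(CGContextRef pdfContext, int column, int row,
                         CGFloat offsetX, CGFloat offsetY) {
    CGFloat symbolSize = 8.0; // square size in points
    CGFloat gapSize = 1.0;    // gap in points
    CGFloat x = offsetX + column * (symbolSize + gapSize);
    CGFloat y = offsetY + row * (symbolSize + gapSize);
    CGContextFillRect(pdfContext, CGRectMake(x, y, symbolSize, symbolSize));
}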
So, after messing with it for another day, I think I found that my problem was turning off antialiasing. I thought I wanted sharper drawing, but since PDF contains vector graphics, the antialiasing is fine, and zooming in on the graphics keeps them sharp.
The first thing I did was in Preview: I went to Preview > Preferences > PDF and selected "Define 100% scale as: 1 point equals 1 screen pixel". Doing this with my antialiasing turned off displayed my image as I wanted, but zooming in, it seemed like Preview had difficulty deciding when to draw the 1-pixel gaps.
I then deleted the calls that turn off antialiasing, and my output renders perfectly, so enabling antialiasing fixed my problem. Pretty embarrassing that this took me three days to figure out, but I'm glad for the simple fix.

vImage not putting channels correctly back together

I tried to extract all 3 channels from an image with vImageConvert_RGB888toPlanar8 and then put them back together with vImageConvert_Planar8toRGB888, but the image gets totally messed up. Why is that?
vImage_Buffer blueBuffer;
blueBuffer.data = (void *)blueImageData.bytes;
blueBuffer.width = size.width;
blueBuffer.height = size.height;
blueBuffer.rowBytes = [blueImageData length]/size.height;

vImage_Buffer rBuffer;
rBuffer.width = size.width;
rBuffer.height = size.height;
rBuffer.rowBytes = size.width;
void *rPixelBuffer = malloc(size.width * size.height);
if (rPixelBuffer == NULL) {
    NSLog(@"No pixelbuffer");
}
rBuffer.data = rPixelBuffer;

vImage_Buffer gBuffer;
gBuffer.width = size.width;
gBuffer.height = size.height;
gBuffer.rowBytes = size.width;
void *gPixelBuffer = malloc(size.width * size.height);
if (gPixelBuffer == NULL) {
    NSLog(@"No pixelbuffer");
}
gBuffer.data = gPixelBuffer;

vImage_Buffer bBuffer;
bBuffer.width = size.width;
bBuffer.height = size.height;
bBuffer.rowBytes = size.width;
void *bPixelBuffer = malloc(size.width * size.height);
if (bPixelBuffer == NULL) {
    NSLog(@"No pixelbuffer");
}
bBuffer.data = bPixelBuffer;

vImageConvert_RGB888toPlanar8(&blueBuffer, &rBuffer, &gBuffer, &bBuffer, kvImageNoFlags);

size_t destinationImageBytesLength = size.width*size.height*3;
const void *destinationImageBytes = valloc(destinationImageBytesLength);
NSData *destinationImageData = [[NSData alloc] initWithBytes:destinationImageBytes length:destinationImageBytesLength];
vImage_Buffer destinationBuffer;
destinationBuffer.data = (void *)destinationImageData.bytes;
destinationBuffer.width = size.width;
destinationBuffer.height = size.height;
destinationBuffer.rowBytes = [destinationImageData length]/size.height;
vImage_Error result = vImageConvert_Planar8toRGB888(&rBuffer, &gBuffer, &bBuffer, &destinationBuffer, 0);
NSImage *image = nil;
if (result == kvImageNoError) {
    //TODO: If you need color matching, use an appropriate colorspace here
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGDataProviderRef dataProvider = CGDataProviderCreateWithCFData((__bridge CFDataRef)(destinationImageData));
    CGImageRef finalImageRef = CGImageCreate(size.width, size.height, 8, 24, destinationBuffer.rowBytes, colorSpace, kCGBitmapByteOrder32Big|kCGImageAlphaNone, dataProvider, NULL, NO, kCGRenderingIntentDefault);
    CGColorSpaceRelease(colorSpace);
    CGDataProviderRelease(dataProvider);
    image = [[NSImage alloc] initWithCGImage:finalImageRef size:NSMakeSize(size.width, size.height)];
    CGImageRelease(finalImageRef);
}
free((void *)destinationImageBytes);
return image;
Working with vImage means working with pixels only. So you must never use the size of an image (or imageRep); use only pixelsWide and pixelsHigh. Replace all size.width with pixelsWide and all size.height with pixelsHigh. Apple has example code for vImage that uses size values. Don't believe it! Not all of Apple's example code is correct.
The size of an image or imageRep determines how big the image shall be drawn on the screen (or a printer). Size values have the dimension of a length, with units of meters, cm, inches or (as in Cocoa) 1/72 inch, aka points. They are represented as float values.
pixelsWide and pixelsHigh have no dimension and no unit (they are simply numbers) and are represented as int values.
There may be more bugs in your code, but the first step should be to replace all size values.
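A minimal sketch of that first step (assuming an NSBitmapImageRep named rep backing the source image; the helper name is mine):

// Size a planar buffer from the rep's pixel dimensions, never from -size.
static vImage_Buffer MakePlanar8Buffer(NSBitmapImageRep *rep) {
    vImage_Buffer buffer;
    buffer.width = (vImagePixelCount)rep.pixelsWide;
    buffer.height = (vImagePixelCount)rep.pixelsHigh;
    buffer.rowBytes = (size_t)rep.pixelsWide; // one byte per pixel for Planar8
    buffer.data = malloc(buffer.rowBytes * rep.pixelsHigh);
    return buffer;
}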
Strictly speaking, you want kCGBitmapByteOrderDefault instead of kCGBitmapByteOrder32Big. 32Big doesn't make much sense for a 24 bit pixel format.
This seems like a weak link:
destinationBuffer.rowBytes = [destinationImageData length]/size.height;
Check to see it is the right number.
A picture of the output would help diagnose the problem. Also check the console to see if CG is giving you any spew. Did you try vImageCreateCGImageFromBuffer() with kvImagePrintDiagnosticsToConsole to see if it has anything to say?
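For instance, a hedged sketch of that diagnostic call, reusing destinationBuffer from the question (the format fields are my assumptions for 24-bit RGB):

vImage_CGImageFormat format = {
    .bitsPerComponent = 8,
    .bitsPerPixel = 24,
    .colorSpace = NULL, // NULL lets vImage pick a default RGB space (my assumption)
    .bitmapInfo = kCGBitmapByteOrderDefault | kCGImageAlphaNone
};
vImage_Error err = kvImageNoError;
CGImageRef check = vImageCreateCGImageFromBuffer(&destinationBuffer, &format,
                                                 NULL, NULL,
                                                 kvImagePrintDiagnosticsToConsole,
                                                 &err);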

How to get pixel color at location from a UIImage scaled within a UIImageView

I'm currently using this technique to get the color of a pixel in a UIImage (on iOS).
- (UIColor *)getPixelColorAtLocation:(CGPoint)point {
    UIColor *color = nil;
    CGImageRef inImage = self.image.CGImage;
    // Create an off-screen bitmap context to draw the image into. Format ARGB is 4 bytes per pixel: Alpha, Red, Green, Blue
    CGContextRef cgctx = [self createARGBBitmapContextFromImage:inImage];
    if (cgctx == NULL) { return nil; /* error */ }
    size_t w = CGImageGetWidth(inImage);
    size_t h = CGImageGetHeight(inImage);
    CGRect rect = {{0,0},{w,h}};
    // Draw the image to the bitmap context. Once we draw, the memory
    // allocated for the context for rendering will then contain the
    // raw image data in the specified color space.
    CGContextDrawImage(cgctx, rect, inImage);
    // Now we can get a pointer to the image data associated with the bitmap
    // context.
    unsigned char *data = CGBitmapContextGetData(cgctx);
    if (data != NULL) {
        // offset locates the pixel in the data from x,y:
        // 4 for 4 bytes of data per pixel, w is the width of one row of data.
        int offset = 4*((w*round(point.y))+round(point.x));
        int alpha = data[offset];
        int red = data[offset+1];
        int green = data[offset+2];
        int blue = data[offset+3];
        NSLog(@"offset: %i colors: RGB A %i %i %i %i", offset, red, green, blue, alpha);
        color = [UIColor colorWithRed:(red/255.0f) green:(green/255.0f) blue:(blue/255.0f) alpha:(alpha/255.0f)];
    }
    // When finished, release the context
    CGContextRelease(cgctx);
    // Free image data memory for the context
    if (data) { free(data); }
    return color;
}
As illustrated here:
http://www.markj.net/iphone-uiimage-pixel-color/
it works quite well, but it fails when working with images larger than the UIImageView. I tried adding an image and changing the scaling mode to fit the view. How would I modify the code so that it can still sample the pixel color from a scaled image?
Try this for Swift 3:
func getPixelColor(image: UIImage, x: Int, y: Int, width: CGFloat) -> UIColor {
    let pixelData = image.cgImage!.dataProvider!.data
    let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
    let pixelInfo: Int = ((Int(width) * y) + x) * 4
    let r = CGFloat(data[pixelInfo])     / 255.0
    let g = CGFloat(data[pixelInfo + 1]) / 255.0
    let b = CGFloat(data[pixelInfo + 2]) / 255.0
    let a = CGFloat(data[pixelInfo + 3]) / 255.0
    return UIColor(red: r, green: g, blue: b, alpha: a)
}
Here's a pointer:
0x3A28213A //sorry, I couldn't resist the joke
For real now: after going through the comments on the page at markj.net, someone named James suggested making the following changes:
size_t w = CGImageGetWidth(inImage); //Written by Mark
size_t h = CGImageGetHeight(inImage); //Written by Mark
float xscale = w / self.frame.size.width;
float yscale = h / self.frame.size.height;
point.x = point.x * xscale;
point.y = point.y * yscale;
(thanks to http://www.markj.net/iphone-uiimage-pixel-color/comment-page-1/#comment-2159)
This didn't actually work for me... not that I did much testing, and I'm not the world's greatest programmer (yet)...
My solution was to scale the UIImageView so that each pixel of the image in it was the same size as a standard CGPoint on the screen, then sample my color as normal (using getPixelColorAtLocation:(CGPoint)point), and then scale the image back to the size I wanted.
Hope this helps!
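If your view uses aspect-fit scaling instead, a mapping along these lines (my own sketch, not from the linked page) converts a view point back to image coordinates before sampling:

// Map a point in the view's coordinates to image pixel coordinates under
// aspect-fit scaling; assumes the image is centered in the view.
static CGPoint ImagePointForViewPoint(CGPoint viewPoint, CGSize viewSize,
                                      CGSize imageSize) {
    CGFloat scale = MIN(viewSize.width / imageSize.width,
                        viewSize.height / imageSize.height);
    CGFloat offsetX = (viewSize.width - imageSize.width * scale) / 2.0;
    CGFloat offsetY = (viewSize.height - imageSize.height * scale) / 2.0;
    return CGPointMake((viewPoint.x - offsetX) / scale,
                       (viewPoint.y - offsetY) / scale);
}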
Use the UIImageView's layer:
- (UIColor *)getPixelColorAtLocation:(CGPoint)point {
    UIColor *color = nil;
    UIGraphicsBeginImageContext(self.frame.size);
    CGContextRef cgctx = UIGraphicsGetCurrentContext();
    if (cgctx == NULL) { return nil; /* error */ }
    [self.layer renderInContext:cgctx];
    unsigned char *data = CGBitmapContextGetData(cgctx);
    /*
    ...
    */
    UIGraphicsEndImageContext();
    return color;
}

How can I get the underlying pixel data from a UIImage or CGImage?

I've tried numerous 'solutions' around the net; all of those I found have errors and thus don't work. I need to know the color of a pixel in a UIImage. How can I get this information?
Getting the raw data
From Apple's Technical Q&A QA1509, this will get the raw image data in its original format by getting it from the data provider.
CFDataRef CopyImagePixels(CGImageRef inImage)
{
    return CGDataProviderCopyData(CGImageGetDataProvider(inImage));
}
Needed in a different format or color-space
If you want the data color-matched and in a specific format, you can use something similar to the following code sample:
void ManipulateImagePixelData(CGImageRef inImage)
{
    // Create the bitmap context
    CGContextRef cgctx = CreateARGBBitmapContext(inImage);
    if (cgctx == NULL)
    {
        // error creating context
        return;
    }
    // Get image width, height. We'll use the entire image.
    size_t w = CGImageGetWidth(inImage);
    size_t h = CGImageGetHeight(inImage);
    CGRect rect = {{0,0},{w,h}};
    // Draw the image to the bitmap context. Once we draw, the memory
    // allocated for the context for rendering will then contain the
    // raw image data in the specified color space.
    CGContextDrawImage(cgctx, rect, inImage);
    // Now we can get a pointer to the image data associated with the bitmap
    // context.
    void *data = CGBitmapContextGetData(cgctx);
    if (data != NULL)
    {
        // **** You have a pointer to the image data ****
        // **** Do stuff with the data here ****
    }
    // When finished, release the context
    CGContextRelease(cgctx);
    // Free image data memory for the context
    if (data)
    {
        free(data);
    }
}
CGContextRef CreateARGBBitmapContext(CGImageRef inImage)
{
    CGContextRef context = NULL;
    CGColorSpaceRef colorSpace;
    void *bitmapData;
    int bitmapByteCount;
    int bitmapBytesPerRow;
    // Get image width, height. We'll use the entire image.
    size_t pixelsWide = CGImageGetWidth(inImage);
    size_t pixelsHigh = CGImageGetHeight(inImage);
    // Declare the number of bytes per row. Each pixel in the bitmap in this
    // example is represented by 4 bytes; 8 bits each of red, green, blue, and
    // alpha.
    bitmapBytesPerRow = (pixelsWide * 4);
    bitmapByteCount = (bitmapBytesPerRow * pixelsHigh);
    // Use the generic RGB color space.
    colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
    if (colorSpace == NULL)
    {
        fprintf(stderr, "Error allocating color space\n");
        return NULL;
    }
    // Allocate memory for image data. This is the destination in memory
    // where any drawing to the bitmap context will be rendered.
    bitmapData = malloc(bitmapByteCount);
    if (bitmapData == NULL)
    {
        fprintf(stderr, "Memory not allocated!");
        CGColorSpaceRelease(colorSpace);
        return NULL;
    }
    // Create the bitmap context. We want pre-multiplied ARGB, 8-bits
    // per component. Regardless of what the source image format is
    // (CMYK, Grayscale, and so on) it will be converted over to the format
    // specified here by CGBitmapContextCreate.
    context = CGBitmapContextCreate(bitmapData,
                                    pixelsWide,
                                    pixelsHigh,
                                    8, // bits per component
                                    bitmapBytesPerRow,
                                    colorSpace,
                                    kCGImageAlphaPremultipliedFirst);
    if (context == NULL)
    {
        free(bitmapData);
        fprintf(stderr, "Context not created!");
    }
    // Make sure to release the colorspace before returning
    CGColorSpaceRelease(colorSpace);
    return context;
}
Color of a particular pixel
Assuming RGB, once you have the data in a format you like, finding the color is a matter of moving through the array of data and reading the channel values at a particular pixel location.
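As a small sketch of that indexing (assuming the ARGB layout and the width from the code above; the helper name is mine):

static void PrintPixelARGB(const unsigned char *data, size_t width,
                           size_t x, size_t y)
{
    size_t offset = 4 * (y * width + x); // 4 bytes per pixel, rows are width*4 bytes
    printf("A=%d R=%d G=%d B=%d\n",
           data[offset], data[offset + 1], data[offset + 2], data[offset + 3]);
}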
If you're only looking for a single pixel or a few, you can take a slightly different approach: create a 1x1 bitmap context and draw the image into it with an offset so you get just the pixel you want.
CGImageRef image = uiimage.CGImage;
NSUInteger width = CGImageGetWidth(image);
NSUInteger height = CGImageGetHeight(image);

// Set up a 1x1 pixel context to draw into
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
unsigned char rawData[4];
int bytesPerPixel = 4;
int bytesPerRow = bytesPerPixel;
NSUInteger bitsPerComponent = 8;
CGContextRef context = CGBitmapContextCreate(rawData,
                                             1,
                                             1,
                                             bitsPerComponent,
                                             bytesPerRow,
                                             colorSpace,
                                             kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextSetBlendMode(context, kCGBlendModeCopy);

// Draw the image so the desired pixel ("offset" is its CGPoint) lands on the 1x1 context
CGContextDrawImage(context,
                   CGRectMake(-offset.x, offset.y - height, width, height),
                   image);

// Done
CGContextRelease(context);

// Get the pixel information
unsigned char red = rawData[0];
unsigned char green = rawData[1];
unsigned char blue = rawData[2];
unsigned char alpha = rawData[3];

Manipulating raw PNG data

I want to read a PNG file such that I can:
a) Access the raw bitmap data of the file, with no color space adjustment or alpha premultiply.
b) Based on that bitmap, display bit slices (any single bit of R, G, B, or A, across the whole image) in an image in the window. If I have the bitmap I can find the right bits, but what can I stuff them into to get them onscreen?
c) After some modification of the bitplanes, write a new PNG file, again with no adjustments.
This is only for certain specific images. The PNG is not expected to have any data other than simply RGBA-32.
From reading some similar questions here, I'm suspecting NSBitmapImageRep for the file read/write, and drawing in an NSView for the onscreen part. Does this sound right?
1.) You can use NSBitmapImageRep's -bitmapData to get the raw pixel data. Unfortunately, CG (NSBitmapImageRep's backend) does not support native unpremultiplication, so you would have to unpremultiply yourself. The colorspace used will be the same as the one in the file. Here is how to unpremultiply the image data:
NSBitmapImageRep *imageRep = [NSBitmapImageRep imageRepWithData:data];
NSInteger width = [imageRep pixelsWide];
NSInteger height = [imageRep pixelsHigh];
unsigned char *bytes = [imageRep bitmapData];
// i is a byte index; each pixel is 4 bytes
for (NSUInteger i = 0; i < width * height * 4; i += 4) {
    uint8_t a, r, g, b;
    if (imageRep.bitmapFormat & NSAlphaFirstBitmapFormat) {
        a = bytes[i];
        r = bytes[i+1];
        g = bytes[i+2];
        b = bytes[i+3];
    } else {
        r = bytes[i+0];
        g = bytes[i+1];
        b = bytes[i+2];
        a = bytes[i+3];
    }
    // Unpremultiply alpha if there is any
    if (a > 0) {
        if (!(imageRep.bitmapFormat & NSAlphaNonpremultipliedBitmapFormat)) {
            float factor = 255.0f/a;
            b *= factor;
            g *= factor;
            r *= factor;
        }
    } else {
        b = 0;
        g = 0;
        r = 0;
    }
    // Write back in ARGB order
    bytes[i]   = a;
    bytes[i+1] = r;
    bytes[i+2] = g;
    bytes[i+3] = b;
}
2.) I couldn't think of a simple way to do this. You could make your own image drawing method that loops through the raw image data and generates a new image based on the values. Refer above to see how to start doing it.
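As a hedged sketch of that idea (my own helper, not from the answer): spread one bit of one channel across a new ARGB buffer, which the cgImageFrom method below can turn into a displayable image. channel is 0-3 for A/R/G/B and bitIndex is 0 (LSB) through 7 (MSB).

static void ExtractBitPlane(const uint8_t *src, uint8_t *dst,
                            int width, int height,
                            int channel, int bitIndex)
{
    for (int i = 0; i < width * height; i++) {
        uint8_t bit = (src[i * 4 + channel] >> bitIndex) & 1;
        uint8_t value = bit ? 255 : 0; // stretch the bit to full intensity
        dst[i * 4 + 0] = 255;          // opaque alpha, ARGB layout
        dst[i * 4 + 1] = value;
        dst[i * 4 + 2] = value;
        dst[i * 4 + 3] = value;
    }
}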
3.) Here is a method to get a CGImage from raw data (you can write the PNG to a file using native CG functions, or convert it to an NSBitmapImageRep if CG makes you uncomfortable):
static CGImageRef cgImageFrom(NSData *data, uint16_t width, uint16_t height) {
    CGDataProviderRef provider = CGDataProviderCreateWithCFData((CFDataRef)data);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGImageAlphaFirst;
    CGImageRef cgImage = CGImageCreate(width, height, 8, 32, 4 * width, colorSpace, bitmapInfo, provider, NULL, NO, kCGRenderingIntentDefault);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);
    return cgImage;
}
You can create the NSData object out of the raw data object with +dataWithBytes:length:
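For example, a hedged usage sketch (the wrapper name is invented here) that wraps raw bytes and feeds them to cgImageFrom above:

static CGImageRef ImageFromRawBytes(const void *rawBytes,
                                    uint16_t width, uint16_t height)
{
    // rawBytes is assumed to hold width * height 4-byte ARGB pixels
    NSData *data = [NSData dataWithBytes:rawBytes
                                  length:(NSUInteger)width * height * 4];
    return cgImageFrom(data, width, height);
}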
I haven't ever worked in this area, but you may be able to use Image IO for this.
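If you go that route, a minimal Image I/O sketch for step (3) might look like this (my assumption of the API flow, not from the answer above):

#import <ImageIO/ImageIO.h>
#import <CoreServices/CoreServices.h> // for kUTTypePNG

// Write a CGImage out as a PNG file; the URL is whatever destination you choose.
static BOOL WritePNGImage(CGImageRef image, NSURL *url)
{
    CGImageDestinationRef dest =
        CGImageDestinationCreateWithURL((__bridge CFURLRef)url,
                                        kUTTypePNG, 1, NULL);
    if (dest == NULL) return NO;
    CGImageDestinationAddImage(dest, image, NULL);
    BOOL ok = CGImageDestinationFinalize(dest);
    CFRelease(dest);
    return ok;
}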