How can I get the underlying pixel data from a UIImage or CGImage? - objective-c

I've tried numerous 'solutions' around the net; all of those I found have errors and thus don't work. I need to know the color of a pixel in a UIImage. How can I get this information?

Getting the raw data
Apple's Technical Q&A QA1509 says this will get the raw image data in its original format by fetching it from the image's data provider:
CFDataRef CopyImagePixels(CGImageRef inImage)
{
    return CGDataProviderCopyData(CGImageGetDataProvider(inImage));
}
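For reference, a minimal usage sketch (uiimage stands in for your UIImage instance; the byte layout depends on the image's own format, so query the image for its bytes-per-row and bits-per-pixel before interpreting anything):
CFDataRef pixelData = CopyImagePixels(uiimage.CGImage);
const UInt8 *bytes = CFDataGetBytePtr(pixelData);
size_t length = CFDataGetLength(pixelData);
size_t bytesPerRow = CGImageGetBytesPerRow(uiimage.CGImage);
// ... interpret `bytes` according to the image's own pixel format ...
CFRelease(pixelData);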
Needed in a different format or color-space
If you want the data color-matched and in a specific format, you can use something similar to the following code sample:
void ManipulateImagePixelData(CGImageRef inImage)
{
    // Create the bitmap context
    CGContextRef cgctx = CreateARGBBitmapContext(inImage);
    if (cgctx == NULL)
    {
        // error creating context
        return;
    }

    // Get image width, height. We'll use the entire image.
    size_t w = CGImageGetWidth(inImage);
    size_t h = CGImageGetHeight(inImage);
    CGRect rect = {{0,0},{w,h}};

    // Draw the image to the bitmap context. Once we draw, the memory
    // allocated for the context for rendering will then contain the
    // raw image data in the specified color space.
    CGContextDrawImage(cgctx, rect, inImage);

    // Now we can get a pointer to the image data associated with the bitmap
    // context.
    void *data = CGBitmapContextGetData(cgctx);
    if (data != NULL)
    {
        // **** You have a pointer to the image data ****
        // **** Do stuff with the data here ****
    }

    // When finished, release the context
    CGContextRelease(cgctx);

    // Free image data memory for the context
    if (data)
    {
        free(data);
    }
}
CGContextRef CreateARGBBitmapContext(CGImageRef inImage)
{
    CGContextRef context = NULL;
    CGColorSpaceRef colorSpace;
    void *bitmapData;
    int bitmapByteCount;
    int bitmapBytesPerRow;

    // Get image width, height. We'll use the entire image.
    size_t pixelsWide = CGImageGetWidth(inImage);
    size_t pixelsHigh = CGImageGetHeight(inImage);

    // Declare the number of bytes per row. Each pixel in the bitmap in this
    // example is represented by 4 bytes; 8 bits each of red, green, blue, and
    // alpha.
    bitmapBytesPerRow = (pixelsWide * 4);
    bitmapByteCount = (bitmapBytesPerRow * pixelsHigh);

    // Use the generic RGB color space.
    colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
    if (colorSpace == NULL)
    {
        fprintf(stderr, "Error allocating color space\n");
        return NULL;
    }

    // Allocate memory for image data. This is the destination in memory
    // where any drawing to the bitmap context will be rendered.
    bitmapData = malloc(bitmapByteCount);
    if (bitmapData == NULL)
    {
        fprintf(stderr, "Memory not allocated!");
        CGColorSpaceRelease(colorSpace);
        return NULL;
    }

    // Create the bitmap context. We want pre-multiplied ARGB, 8-bits
    // per component. Regardless of what the source image format is
    // (CMYK, Grayscale, and so on) it will be converted over to the format
    // specified here by CGBitmapContextCreate.
    context = CGBitmapContextCreate(bitmapData,
                                    pixelsWide,
                                    pixelsHigh,
                                    8, // bits per component
                                    bitmapBytesPerRow,
                                    colorSpace,
                                    kCGImageAlphaPremultipliedFirst);
    if (context == NULL)
    {
        free(bitmapData);
        fprintf(stderr, "Context not created!");
    }

    // Make sure and release colorspace before returning
    CGColorSpaceRelease(colorSpace);
    return context;
}
Color of a particular pixel
Assuming RGB, once you have the data in a format you like, finding the color is a matter of indexing into the array of data and reading the component values at a particular pixel location.
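For example, with the ARGB context created above (4 bytes per pixel, 8 bits per component, alpha first), a sketch of reading the pixel at integer coordinates (x, y) looks like this; note the components are premultiplied by alpha, so divide each by (alpha / 255.0) if you need the un-premultiplied color:
size_t bytesPerRow = CGBitmapContextGetBytesPerRow(cgctx);
unsigned char *p = (unsigned char *)data + (y * bytesPerRow) + (x * 4);
unsigned char alpha = p[0];
unsigned char red   = p[1];
unsigned char green = p[2];
unsigned char blue  = p[3];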

If you only need a single pixel (or a few), you can take a slightly different approach: create a 1x1 bitmap context and draw the image into it with an offset so that you get just the pixel you want.
CGImageRef image = uiimage.CGImage;
NSUInteger width = CGImageGetWidth(image);
NSUInteger height = CGImageGetHeight(image);

// Setup 1x1 pixel context to draw into
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
unsigned char rawData[4];
int bytesPerPixel = 4;
int bytesPerRow = bytesPerPixel;
NSUInteger bitsPerComponent = 8;
CGContextRef context = CGBitmapContextCreate(rawData,
                                             1,
                                             1,
                                             bitsPerComponent,
                                             bytesPerRow,
                                             colorSpace,
                                             kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextSetBlendMode(context, kCGBlendModeCopy);

// Draw the image; `offset` is the CGPoint of the pixel you want to sample.
CGContextDrawImage(context,
                   CGRectMake(-offset.x, offset.y - height, width, height),
                   image);

// Done
CGContextRelease(context);

// Get the pixel information
unsigned char red   = rawData[0];
unsigned char green = rawData[1];
unsigned char blue  = rawData[2];
unsigned char alpha = rawData[3];
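To wrap the result up as a UIColor, a sketch like the following works (with kCGImageAlphaPremultipliedLast the components are premultiplied by alpha, which this ignores for brevity):
UIColor *color = [UIColor colorWithRed:red   / 255.0f
                                 green:green / 255.0f
                                  blue:blue  / 255.0f
                                 alpha:alpha / 255.0f];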

Related

Image Quality getting affected on scaling the image using vImageScale_ARGB8888 - Cocoa Objective C

I am capturing my system's screen with AVCaptureSession and then creating a video file out of the captured image buffers. It works fine.
Now I want to scale the image buffers by maintaining the aspect ratio for the video file's dimension. I have used the following code to scale the images.
- (void)captureOutput:(AVCaptureOutput *)output didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    if (pixelBuffer == NULL) { return; }
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    size_t finalWidth = 1080;
    size_t finalHeight = 720;
    size_t sourceWidth = CVPixelBufferGetWidth(imageBuffer);
    size_t sourceHeight = CVPixelBufferGetHeight(imageBuffer);
    CGRect aspectRect = AVMakeRectWithAspectRatioInsideRect(CGSizeMake(sourceWidth, sourceHeight), CGRectMake(0, 0, finalWidth, finalHeight));
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t startY = aspectRect.origin.y;
    size_t yOffSet = (finalWidth * startY * 4);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
    void *destData = malloc(finalHeight * finalWidth * 4);
    vImage_Buffer srcBuffer = { (void *)baseAddress, sourceHeight, sourceWidth, bytesPerRow };
    vImage_Buffer destBuffer = { (void *)destData + yOffSet, aspectRect.size.height, aspectRect.size.width, aspectRect.size.width * 4 };
    vImage_Error err = vImageScale_ARGB8888(&srcBuffer, &destBuffer, NULL, 0);
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    OSType pixelFormat = CVPixelBufferGetPixelFormatType(imageBuffer);
    CVImageBufferRef pixelBuffer1 = NULL;
    CVReturn result = CVPixelBufferCreateWithBytes(NULL, finalWidth, finalHeight, pixelFormat, destData, finalWidth * 4, NULL, NULL, NULL, &pixelBuffer1);
}
I am able to scale the image with the above code, but the final image seems blurry compared to resizing the image with the Preview application. Because of this, the video is not clear.
This works fine if I change the output pixel format to RGB with below code.
output.videoSettings = [NSDictionary dictionaryWithObjectsAndKeys: [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA], (id)kCVPixelBufferPixelFormatTypeKey, nil];
But I want the image buffers in YUV format (which is the default format for AVCaptureVideoDataOutput) since this will reduce the size of the buffer when transferring it over network.
(Screenshots omitted: the image after scaling, and the image resized with the Preview application.)
I have tried using vImageScale_CbCr8 instead of vImageScale_ARGB8888 but the resulting image didn't contain correct RGB values.
I have also noticed there is a function to convert the image format: vImageConvert_422YpCbYpCr8ToARGB8888(const vImage_Buffer *src, const vImage_Buffer *dest, const vImage_YpCbCrToARGB *info, const uint8_t permuteMap[4], const uint8_t alpha, vImage_Flags flags);
But I don't know what the values for vImage_YpCbCrToARGB and permuteMap should be, as I don't know anything about image processing.
Expected Solution:
How to convert YUV pixel buffers to RGB buffers and back to YUV (or) How to scale YUV pixel buffers without affecting the RGB values.
After a lot of searching and going through different questions related to image rendering, I found the below code to convert the pixel format of the image buffers. Thanks to the answer in this link.
CVPixelBufferRef imageBuffer;
CVPixelBufferCreate(kCFAllocatorDefault, sourceWidth, sourceHeight, kCVPixelFormatType_32ARGB, 0, &imageBuffer);
VTPixelTransferSessionTransferImage(pixelTransferSession, pixelBuffer, imageBuffer);
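The snippet above assumes a VTPixelTransferSession already exists. Below is a fuller sketch of the VideoToolbox route; the helper name ConvertToARGB is hypothetical, and error handling is elided:
#import <VideoToolbox/VideoToolbox.h>

static VTPixelTransferSessionRef pixelTransferSession = NULL;

CVPixelBufferRef ConvertToARGB(CVPixelBufferRef sourceBuffer)
{
    if (pixelTransferSession == NULL) {
        // Create the session once and reuse it for every frame.
        VTPixelTransferSessionCreate(kCFAllocatorDefault, &pixelTransferSession);
    }
    CVPixelBufferRef converted = NULL;
    CVPixelBufferCreate(kCFAllocatorDefault,
                        CVPixelBufferGetWidth(sourceBuffer),
                        CVPixelBufferGetHeight(sourceBuffer),
                        kCVPixelFormatType_32ARGB, NULL, &converted);
    // Performs the pixel-format conversion (e.g. YUV -> ARGB) for us.
    VTPixelTransferSessionTransferImage(pixelTransferSession, sourceBuffer, converted);
    return converted; // caller releases with CVPixelBufferRelease
}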

Creating and displaying a UIImage from raw BGRA data

I'm collecting image data from the camera using this code:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    // Called when a frame arrives
    // Should be in BGRA format
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    unsigned char *raw = (unsigned char *)CVPixelBufferGetBaseAddress(imageBuffer);

    // Copy memory into allocated buffer
    unsigned char *buffer = malloc(sizeof(unsigned char) * bytesPerRow * height);
    memcpy(buffer, raw, bytesPerRow * height);
    [self processVideoData:buffer width:width height:height bytesPerRow:bytesPerRow];
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
}
The processVideoData: method looks like this:
- (void)processVideoData:(unsigned char *)data width:(size_t)width height:(size_t)height bytesPerRow:(size_t)bytesPerRow
{
    dispatch_sync(dispatch_get_main_queue(), ^{
        CGDataProviderRef dataProvider = CGDataProviderCreateWithData(NULL, data, bytesPerRow * height, NULL);
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGImageRef image = CGImageCreate(width, height, 8, 32, bytesPerRow, colorSpace, kCGBitmapByteOrderDefault | kCGImageAlphaLast, dataProvider, NULL, NO, kCGRenderingIntentDefault);

        // Set layer contents???
        UIImage *objcImage = [UIImage imageWithCGImage:image];
        self.imageView.image = objcImage;

        free(data);
        CGDataProviderRelease(dataProvider);
        CGColorSpaceRelease(colorSpace);
        CGImageRelease(image);
    });
}
No complaints, no leaks, but nothing shows up in the image view; it just stays blank (yes, I have checked the outlet connection). Previously I had the bitmapInfo set to just kCGBitmapByteOrderDefault, which caused a crash when setting the image property of the image view; however, the image view went dark just before the crash, which was promising.
I surmised that the crash was due to the image being in BGRA, not BGR, so I set the bitmapInfo to kCGBitmapByteOrderDefault | kCGImageAlphaLast. That solved the crash, but there is no image.
I realise the image will look weird, as the CGImageRef expects an RGB image and I'm passing it BGR, but that should only result in a weird-looking image due to channel swapping. I have also logged the data I'm getting and it seems to be in order, something like b:65 g:51 r:42 a:255, and the alpha channel is always 255 as expected.
I'm sorry if it's obvious but I can't work out what is going wrong.
You can use this flag combination to achieve BGRA format:
kCGBitmapByteOrder32Little | kCGImageAlphaSkipFirst
You should prefer this solution; it is more performant than the OpenCV conversion below.
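Applied to the asker's code, a sketch of the CGImageCreate call with the suggested flags would look like this (variable names as in the question):
CGImageRef image = CGImageCreate(width, height, 8, 32, bytesPerRow, colorSpace,
                                 kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipFirst,
                                 dataProvider, NULL, NO, kCGRenderingIntentDefault);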
Here is a more general way to map sourcePixelFormat to bitmapInfo:
sourcePixelFormat = CVPixelBufferGetPixelFormatType(pixelBuffer);
bitmapInfo = @{
    @(kCVPixelFormatType_32ARGB) : @(kCGBitmapByteOrder32Big | kCGImageAlphaNoneSkipFirst),
    @(kCVPixelFormatType_32BGRA) : @(kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipFirst),
}[@(sourcePixelFormat)].unsignedIntegerValue;
It turns out the data was just in the wrong format and I wasn't feeding it into the CGImageCreate function correctly.
The data comes out in BGRA format so I fed this data into an IplImage structure (I'm using OpenCV v 2.4.9) like so:
// Pack IplImage with data
IplImage *img = cvCreateImage(cvSize((int)width, (int)height), 8, 4);
img->imageData = (char *)data;
I then converted it to RGB like so:
IplImage *converted = cvCreateImage(cvSize((int)width, (int)height), 8, 3);
cvCvtColor(img, converted, CV_BGRA2RGB);
I then fed the data from the converted IplImage into a CGImageCreate function and it works nicely.
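For completeness, a sketch of that final step, assuming the converted IplImage holds tightly packed 24-bit RGB (check converted->widthStep for row padding before relying on this, and keep the IplImage alive as long as the provider, since no copy is made here):
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, converted->imageData,
                                                          width * height * 3, NULL);
CGImageRef image = CGImageCreate(width, height, 8, 24, width * 3, colorSpace,
                                 kCGBitmapByteOrderDefault | kCGImageAlphaNone,
                                 provider, NULL, NO, kCGRenderingIntentDefault);
self.imageView.image = [UIImage imageWithCGImage:image];
CGImageRelease(image);
CGDataProviderRelease(provider);
CGColorSpaceRelease(colorSpace);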

I need help optimizing BGR888 blitting to NSView

This is the best I've come up with for blitting a 24-bit BGR image out to an NSView.
I did trim a significant amount of CPU time by ensuring that the NSWindow host also had the same colorSpace.
I think there are 4 or 5 pixel copies going on here:
in the vImage conversion (required)
calling CGDataProviderCreateWithData
calling CGImageCreate
creating the NSBitmapImageRep bitmap
in the final blit with drawInRect (required)
Anyone want to chime in on improving it?
Any help would be much appreciated.
{
    // one-time setup code
    CGColorSpaceRef useColorSpace = nil;
    int w = 1920;
    int h = 1080;
    [theWindow setColorSpace:[NSColorSpace genericRGBColorSpace]];
    // setup vImage buffers (not listed here)
    // srcBuffer is my 24-bit BGR image (malloc-ed to be w*h*3)
    // dstBuffer is for the resulting 32-bit RGBA image (malloc-ed to be w*h*4)
    ...

    // this is called @ 30-60fps
    if (!useColorSpace)
        useColorSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
    vImage_Error err = vImageConvert_BGR888toRGBA8888(srcBuffer, NULL, 0xff, dstBuffer, NO, 0);
    CGDataProviderRef newProvider = CGDataProviderCreateWithData(NULL, dstBuffer->data, w*h*4, myReleaseProvider);
    CGImageRef myImageRGBA = CGImageCreate(w, h, 8, 32, w*4, useColorSpace, kCGBitmapByteOrderDefault | kCGImageAlphaLast, newProvider, NULL, false, kCGRenderingIntentDefault);
    CGDataProviderRelease(newProvider);
    // store myImageRGBA in an array of frames (using NSObject wrappers) for later access (setNeedsDisplay:)
    ...
}
- (void)drawRect:(NSRect)dirtyRect
{
    // this is called @ 30-60fps
    CGImageRef storedImage = ...; // retrieve from array
    NSBitmapImageRep *repImg = [[NSBitmapImageRep alloc] initWithCGImage:storedImage];
    CGRect myFrame = CGRectMake(0, 0, CGImageGetWidth(storedImage), CGImageGetHeight(storedImage));
    [repImg drawInRect:myFrame fromRect:myFrame operation:NSCompositeCopy fraction:1.0 respectFlipped:TRUE hints:nil];
    // free image from array (not listed here)
}
// this is called when the CGDataProvider is ready to release its data
void myReleaseProvider(void *info, const void *data, size_t size)
{
    if (data) {
        free((void *)data);
        data = nil;
    }
}
Use CGColorSpaceCreateDeviceRGB instead of genericRGB to avoid a colorspace conversion inside CG. Use kCGImageAlphaNoneSkipLast instead of kCGImageAlphaLast since we know the alpha is opaque, allowing a copy instead of a blend.
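A sketch of the one-time setup with those two changes applied (variable names as in the question):
useColorSpace = CGColorSpaceCreateDeviceRGB();
CGImageRef myImageRGBA = CGImageCreate(w, h, 8, 32, w * 4, useColorSpace,
                                       kCGBitmapByteOrderDefault | kCGImageAlphaNoneSkipLast,
                                       newProvider, NULL, false, kCGRenderingIntentDefault);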
After you make those changes, it would be useful to run an Instruments time profile on it to show where the time is going.

How to get pixel color at location from UIImage scaled within a UIImageView

I'm currently using this technique to get the color of a pixel in a UIImage (on iOS):
- (UIColor *)getPixelColorAtLocation:(CGPoint)point {
    UIColor *color = nil;
    CGImageRef inImage = self.image.CGImage;
    // Create off screen bitmap context to draw the image into. Format ARGB is 4 bytes for each pixel: Alpha, Red, Green, Blue
    CGContextRef cgctx = [self createARGBBitmapContextFromImage:inImage];
    if (cgctx == NULL) { return nil; /* error */ }

    size_t w = CGImageGetWidth(inImage);
    size_t h = CGImageGetHeight(inImage);
    CGRect rect = {{0,0},{w,h}};

    // Draw the image to the bitmap context. Once we draw, the memory
    // allocated for the context for rendering will then contain the
    // raw image data in the specified color space.
    CGContextDrawImage(cgctx, rect, inImage);

    // Now we can get a pointer to the image data associated with the bitmap
    // context.
    unsigned char *data = CGBitmapContextGetData(cgctx);
    if (data != NULL) {
        // offset locates the pixel in the data from x,y.
        // 4 for 4 bytes of data per pixel, w is width of one row of data.
        int offset = 4 * ((w * round(point.y)) + round(point.x));
        int alpha = data[offset];
        int red = data[offset+1];
        int green = data[offset+2];
        int blue = data[offset+3];
        NSLog(@"offset: %i colors: RGB A %i %i %i %i", offset, red, green, blue, alpha);
        color = [UIColor colorWithRed:(red/255.0f) green:(green/255.0f) blue:(blue/255.0f) alpha:(alpha/255.0f)];
    }

    // When finished, release the context
    CGContextRelease(cgctx);
    // Free image data memory for the context
    if (data) { free(data); }

    return color;
}
As illustrated here:
http://www.markj.net/iphone-uiimage-pixel-color/
It works quite well, but when working with images larger than the UIImageView it fails. I tried adding an image and changing the scaling mode to fit the view. How would I modify the code so that it would still be able to sample the pixel color from a scaled image?
Try this for Swift 3:
func getPixelColor(image: UIImage, x: Int, y: Int, width: CGFloat) -> UIColor
{
    guard let cgImage = image.cgImage,
          let pixelData = cgImage.dataProvider?.data else { return .clear }
    let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
    let pixelInfo: Int = ((Int(width) * y) + x) * 4
    let r = CGFloat(data[pixelInfo]) / 255.0
    let g = CGFloat(data[pixelInfo + 1]) / 255.0
    let b = CGFloat(data[pixelInfo + 2]) / 255.0
    let a = CGFloat(data[pixelInfo + 3]) / 255.0
    return UIColor(red: r, green: g, blue: b, alpha: a)
}
Here's a pointer:
0x3A28213A //sorry, I couldn't resist the joke
For real now: after going through the comments on the page at markj.net, a certain James suggested making the following changes:
size_t w = CGImageGetWidth(inImage); //Written by Mark
size_t h = CGImageGetHeight(inImage); //Written by Mark
float xscale = w / self.frame.size.width;
float yscale = h / self.frame.size.height;
point.x = point.x * xscale;
point.y = point.y * yscale;
(thanks to http://www.markj.net/iphone-uiimage-pixel-color/comment-page-1/#comment-2159)
This didn't actually work for me... Not that I did much testing, and I'm not the world's greatest programmer (yet)...
My solution was to scale the UIImageView so that each pixel of the image in it was the same size as a standard point on the screen; then I sampled the color as normal (using getPixelColorAtLocation:), and afterwards scaled the image back to the size I wanted.
Hope this helps!
Use the UIImageView Layer:
- (UIColor *)getPixelColorAtLocation:(CGPoint)point {
    UIColor *color = nil;
    UIGraphicsBeginImageContext(self.frame.size);
    CGContextRef cgctx = UIGraphicsGetCurrentContext();
    if (cgctx == NULL) { return nil; /* error */ }
    [self.layer renderInContext:cgctx];
    unsigned char *data = CGBitmapContextGetData(cgctx);
    /*
     ...
     */
    UIGraphicsEndImageContext();
    return color;
}

Detect most black pixel on an image - objective-c iOS

I have an image! It's been so long since I've done pixel detection, I remember you have to convert the pixels to an array somehow and then find the width of the image to find out when the pixels reach the end of a row and go to the next one and ahh, lots of complex stuff haha! Anyways I now have no clue how to do this anymore but I need to detect the left-most darkest pixel's x&y coordinates of my image named "image1"... Any good starting places?
Go to your bookstore and find a book called "iOS Developer's Cookbook" by Erica Sadun. Go to page 378-ish; there are methods for pixel detection there. You can look in this array of RGB values and run a for loop to find the pixel that has the smallest sum of R, G, and B values (each is 0-255); that will give you the pixel closest to black.
I can also post the code if needed. But the book is the best source as it gives methods and explanations.
These are mine with some changes. The method name remains the same; all I changed was the image, which basically comes from an image picker.
- (UInt8 *)createBitmap {
    if (!self.imageCaptured) {
        NSLog(@"Error: There has not been an image captured.");
        return nil;
    }
    // create bitmap for the image
    UIImage *myImage = self.imageCaptured; // image name for test pic
    CGContextRef context = CreateARGBBitmapContext(myImage.size);
    if (context == NULL) return NULL;
    CGRect rect = CGRectMake(0.0f /*start width*/, 0.0f /*start*/, myImage.size.width /*width bound*/, myImage.size.height /*height bound*/); // original
    // CGRect rect = CGRectMake(myImage.size.width/2.0 - 25.0 /*start width*/, myImage.size.height/2.0 - 25.0 /*start*/, myImage.size.width/2.0 + 24.0 /*width bound*/, myImage.size.height/2.0 + 24.0 /*height bound*/); // test rectangle
    CGContextDrawImage(context, rect, myImage.CGImage);
    UInt8 *data = CGBitmapContextGetData(context);
    CGContextRelease(context);
    return data;
}
CGContextRef CreateARGBBitmapContext(CGSize size) {
    // Create new color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    if (colorSpace == NULL) {
        fprintf(stderr, "Error allocating color space\n");
        return NULL;
    }
    // Allocate memory for bitmap data
    void *bitmapData = malloc(size.width * size.height * 4);
    if (bitmapData == NULL) {
        fprintf(stderr, "Error: memory not allocated\n");
        CGColorSpaceRelease(colorSpace);
        return NULL;
    }
    // Build an 8-bit per channel context
    CGContextRef context = CGBitmapContextCreate(bitmapData, size.width, size.height, 8, size.width * 4, colorSpace, kCGImageAlphaPremultipliedFirst);
    CGColorSpaceRelease(colorSpace);
    if (context == NULL) {
        fprintf(stderr, "Error: Context not created!");
        free(bitmapData);
        return NULL;
    }
    return context;
}
NSUInteger blueOffset(NSUInteger x, NSUInteger y, NSUInteger w){
return y*w*4 + (x*4+3);
}
NSUInteger redOffset(NSUInteger x, NSUInteger y, NSUInteger w){
return y*w*4 + (x*4+1);
}
The redOffset function at the bottom will get you the Red value in the ARGB (Alpha-Red-Green-Blue) layout. To change which channel you are looking at, change the constant added to x*4: 0 finds alpha, 1 red, 2 green, and 3 blue. This works because the functions just index into the array built by the methods above, and the constant added to x*4 selects the component within the pixel. Essentially, sum the red, green, and blue values for each pixel; whichever pixel has the lowest combined value is the most black, as sketched below.
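Putting it together, a sketch that scans the bitmap from createBitmap for the darkest pixel; here w and h are assumed to be the image's pixel dimensions (the same values used to create the bitmap context):
UInt8 *data = [self createBitmap];
NSUInteger darkestX = 0, darkestY = 0;
NSUInteger darkestSum = NSUIntegerMax;
for (NSUInteger y = 0; y < h; y++) {
    for (NSUInteger x = 0; x < w; x++) {
        NSUInteger base = y * w * 4 + x * 4;
        // Sum R + G + B; the smallest sum is the pixel closest to black.
        NSUInteger sum = data[base + 1] + data[base + 2] + data[base + 3];
        if (sum < darkestSum) {
            darkestSum = sum;
            darkestX = x;
            darkestY = y;
        }
    }
}
free(data); // the bitmap buffer was malloc-ed in CreateARGBBitmapContext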