Manipulating raw PNG data - Objective-C

I want to read a PNG file such that I can:
a) Access the raw bitmap data of the file, with no color space adjustment or alpha premultiply.
b) Based on that bitmap, display bit slices (any single bit of R, G, B, or A, across the whole image) in an image in the window. If I have the bitmap I can find the right bits, but what can I stuff them into to get them onscreen?
c) After some modification of the bitplanes, write a new PNG file, again with no adjustments.
This is only for certain specific images. The PNG is not expected to have any data other than simply RGBA-32.
From reading some similar questions here, I'm suspecting NSBitmapImageRep for the file read/write, and drawing in an NSView for the onscreen part. Does this sound right?

1.) You can use NSBitmapImageRep's -bitmapData to get the raw pixel data. Unfortunately, Core Graphics (NSBitmapImageRep's backend) has no native support for unpremultiplication, so you have to unpremultiply the data yourself. The color space will be the same as the one present in the file. Here is how to unpremultiply the image data:
NSBitmapImageRep *imageRep = [NSBitmapImageRep imageRepWithData:data];
NSInteger width = [imageRep pixelsWide];
NSInteger height = [imageRep pixelsHigh];
unsigned char *bytes = [imageRep bitmapData];
for (NSUInteger i = 0; i < width * height * 4; i += 4) { // 4 bytes per pixel
    uint8_t a, r, g, b;
    BOOL alphaFirst = (imageRep.bitmapFormat & NSAlphaFirstBitmapFormat) != 0;
    if (alphaFirst) {
        a = bytes[i];
        r = bytes[i+1];
        g = bytes[i+2];
        b = bytes[i+3];
    } else {
        r = bytes[i];
        g = bytes[i+1];
        b = bytes[i+2];
        a = bytes[i+3];
    }
    // unpremultiply only if the data is premultiplied and there is any alpha
    if (a > 0) {
        if (!(imageRep.bitmapFormat & NSAlphaNonpremultipliedBitmapFormat)) {
            float factor = 255.0f / a;
            r *= factor;
            g *= factor;
            b *= factor;
        }
    } else {
        r = g = b = 0;
    }
    // write the components back in the same order they were read
    if (alphaFirst) {
        bytes[i] = a; bytes[i+1] = r; bytes[i+2] = g; bytes[i+3] = b;
    } else {
        bytes[i] = r; bytes[i+1] = g; bytes[i+2] = b; bytes[i+3] = a;
    }
}
2.) I couldn't think of a simple way to do this. You could make your own image drawing method that loops through the raw image data and generates a new image based on the values. Refer above to see how to start doing it.
3.) Here is a method to get a CGImage from raw data (you can write the PNG to a file using native CG functions, or convert it to an NSBitmapImageRep if CG makes you uncomfortable):
static CGImageRef cgImageFrom(NSData *data, uint16_t width, uint16_t height) {
    CGDataProviderRef provider = CGDataProviderCreateWithCFData((CFDataRef)data);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGImageAlphaFirst; // non-premultiplied, alpha first
    CGImageRef cgImage = CGImageCreate(width, height, 8, 32, 4 * width,
                                       colorSpace, bitmapInfo, provider,
                                       NULL, NO, kCGRenderingIntentDefault);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);
    return cgImage;
}
You can create the NSData object out of the raw data object with +dataWithBytes:length:

I haven't ever worked in this area, but you may be able to use Image IO for this.

Related

Image quality affected when scaling the image using vImageScale_ARGB8888 - Cocoa Objective-C

I am capturing my system's screen with AVCaptureSession and then create a video file out of the image buffers captured. It works fine.
Now I want to scale the image buffers by maintaining the aspect ratio for the video file's dimension. I have used the following code to scale the images.
- (void)captureOutput:(AVCaptureOutput *)output didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    if (pixelBuffer == NULL) { return; }
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    size_t finalWidth = 1080;
    size_t finalHeight = 720;
    size_t sourceWidth = CVPixelBufferGetWidth(imageBuffer);
    size_t sourceHeight = CVPixelBufferGetHeight(imageBuffer);
    CGRect aspectRect = AVMakeRectWithAspectRatioInsideRect(CGSizeMake(sourceWidth, sourceHeight), CGRectMake(0, 0, finalWidth, finalHeight));
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t startY = aspectRect.origin.y;
    size_t yOffSet = (finalWidth * startY * 4);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
    void *destData = malloc(finalHeight * finalWidth * 4);
    vImage_Buffer srcBuffer = { (void *)baseAddress, sourceHeight, sourceWidth, bytesPerRow };
    vImage_Buffer destBuffer = { (void *)destData + yOffSet, aspectRect.size.height, aspectRect.size.width, aspectRect.size.width * 4 };
    vImage_Error err = vImageScale_ARGB8888(&srcBuffer, &destBuffer, NULL, 0);
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    OSType pixelFormat = CVPixelBufferGetPixelFormatType(imageBuffer);
    CVImageBufferRef pixelBuffer1 = NULL;
    CVReturn result = CVPixelBufferCreateWithBytes(NULL, finalWidth, finalHeight, pixelFormat, destData, finalWidth * 4, NULL, NULL, NULL, &pixelBuffer1);
}
I am able to scale the image with the above code, but the final image seems blurry compared to resizing the image with the Preview application. Because of this, the video is not clear.
This works fine if I change the output pixel format to RGB with below code.
output.videoSettings = [NSDictionary dictionaryWithObjectsAndKeys: [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA], (id)kCVPixelBufferPixelFormatTypeKey, nil];
But I want the image buffers in YUV format (which is the default format for AVCaptureVideoDataOutput) since this will reduce the size of the buffer when transferring it over network.
Image after scaling:
Image resized with Preview application:
I have tried using vImageScale_CbCr8 instead of vImageScale_ARGB8888 but the resulting image didn't contain correct RGB values.
I have also noticed there is function to convert image format: vImageConvert_422YpCbYpCr8ToARGB8888(const vImage_Buffer *src, const vImage_Buffer *dest, const vImage_YpCbCrToARGB *info, const uint8_t permuteMap[4], const uint8_t alpha, vImage_Flags flags);
But I don't know what should be the values for vImage_YpCbCrToARGB and permuteMap as I don't know anything about image processing.
Expected Solution:
How to convert YUV pixel buffers to RGB buffers and back to YUV (or) How to scale YUV pixel buffers without affecting the RGB values.
After a lot of searching and going through different questions related to image rendering, I found the code below to convert the pixel format of the image buffers. Thanks to the answer in this link.
CVPixelBufferRef imageBuffer;
CVPixelBufferCreate(kCFAllocatorDefault, sourceWidth, sourceHeight, kCVPixelFormatType_32ARGB, 0, &imageBuffer);
VTPixelTransferSessionTransferImage(pixelTransferSession, pixelBuffer, imageBuffer);

vImage not putting channels correctly back together

I tried to extract all 3 channels from an image with vImageConvert_RGB888toPlanar8 and then put them back together with vImageConvert_Planar8toRGB888 but the image gets totally messed up. Why is that?
vImage_Buffer blueBuffer;
blueBuffer.data = (void*)blueImageData.bytes;
blueBuffer.width = size.width;
blueBuffer.height = size.height;
blueBuffer.rowBytes = [blueImageData length]/size.height;
vImage_Buffer rBuffer;
rBuffer.width = size.width;
rBuffer.height = size.height;
rBuffer.rowBytes = size.width;
void *rPixelBuffer = malloc(size.width * size.height);
if (rPixelBuffer == NULL) {
    NSLog(@"No pixelbuffer");
}
rBuffer.data = rPixelBuffer;
vImage_Buffer gBuffer;
gBuffer.width = size.width;
gBuffer.height = size.height;
gBuffer.rowBytes = size.width;
void *gPixelBuffer = malloc(size.width * size.height);
if (gPixelBuffer == NULL) {
    NSLog(@"No pixelbuffer");
}
gBuffer.data = gPixelBuffer;
vImage_Buffer bBuffer;
bBuffer.width = size.width;
bBuffer.height = size.height;
bBuffer.rowBytes = size.width;
void *bPixelBuffer = malloc(size.width * size.height);
if (bPixelBuffer == NULL) {
    NSLog(@"No pixelbuffer");
}
bBuffer.data = bPixelBuffer;
vImageConvert_RGB888toPlanar8(&blueBuffer, &rBuffer, &gBuffer, &bBuffer, kvImageNoFlags);
size_t destinationImageBytesLength = size.width*size.height*3;
const void* destinationImageBytes = valloc(destinationImageBytesLength);
NSData* destinationImageData = [[NSData alloc] initWithBytes:destinationImageBytes length:destinationImageBytesLength];
vImage_Buffer destinationBuffer;
destinationBuffer.data = (void*)destinationImageData.bytes;
destinationBuffer.width = size.width;
destinationBuffer.height = size.height;
destinationBuffer.rowBytes = [destinationImageData length]/size.height;
vImage_Error result = vImageConvert_Planar8toRGB888(&rBuffer, &gBuffer, &bBuffer, &destinationBuffer, 0);
NSImage* image = nil;
if (result == kvImageNoError) {
    //TODO: If you need color matching, use an appropriate colorspace here
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGDataProviderRef dataProvider = CGDataProviderCreateWithCFData((__bridge CFDataRef)(destinationImageData));
    CGImageRef finalImageRef = CGImageCreate(size.width, size.height, 8, 24, destinationBuffer.rowBytes, colorSpace, kCGBitmapByteOrder32Big|kCGImageAlphaNone, dataProvider, NULL, NO, kCGRenderingIntentDefault);
    CGColorSpaceRelease(colorSpace);
    CGDataProviderRelease(dataProvider);
    image = [[NSImage alloc] initWithCGImage:finalImageRef size:NSMakeSize(size.width, size.height)];
    CGImageRelease(finalImageRef);
}
free((void*)destinationImageBytes);
return image;
Working with vImage means working with pixels only. So you must never use the size of an image (or imageRep); you only use pixelsWide and pixelsHigh. Replace all size.width with pixelsWide and all size.height with pixelsHigh. Apple has example code for vImage that uses size values. Don't believe it! Not all Apple example code is correct.
The size of an image or imageRep determines how big the image shall be drawn on the screen (or a printer). Size values have the dimension of a length; their units are meters, cm, inches or (as in Cocoa) 1/72 inch, aka points. They are represented as float values.
pixelsWide and pixelsHigh have no dimension and no unit (they are simply numbers) and are represented as integer values.
There may be more bugs in your code, but the first step should be to replace all size values.
Strictly speaking, you want kCGBitmapByteOrderDefault instead of kCGBitmapByteOrder32Big. 32Big doesn't make much sense for a 24-bit pixel format.
This seems like a weak link:
destinationBuffer.rowBytes = [destinationImageData length]/size.height;
Check to see it is the right number.
A picture of the output would help diagnose the problem. Also check the console to see if CG is giving you any spew. Did you try vImageCreateCGImageFromBuffer() with kvImagePrintDiagnosticsToConsole to see if it has anything to say?

Core Graphics pointillize effect on CGImage

So I have been writing a lot of image processing code lately using only Core Graphics, and I have made quite a few working filters that manipulate the colors and apply blends, blurs, and things like that. But I'm having trouble writing a filter to apply a pointillize effect to an image like this:
What I'm trying to do is get the color of a pixel and fill an ellipse with that color, looping through the image and doing this every few pixels. Here is the code:
EDIT: here is my new code. This time it's just drawing a few little circles at the bottom of the image. Am I doing it right, like you said?
-(UIImage*)applyFilterWithAmount:(double)amount {
    CGImageRef inImage = self.CGImage;
    CFDataRef m_dataRef = CGDataProviderCopyData(CGImageGetDataProvider(inImage));
    UInt8* m_pixelBuf = (UInt8*)CFDataGetBytePtr(m_dataRef);
    int length = CFDataGetLength(m_dataRef);
    CGContextRef ctx = CGBitmapContextCreate(m_pixelBuf,
                                             CGImageGetWidth(inImage),
                                             CGImageGetHeight(inImage),
                                             CGImageGetBitsPerComponent(inImage),
                                             CGImageGetBytesPerRow(inImage),
                                             CGImageGetColorSpace(inImage),
                                             CGImageGetBitmapInfo(inImage));
    int row = 0;
    int imageWidth = self.size.width;
    if ((row%imageWidth)==0) {
        row++;
    }
    int col = row%imageWidth;
    for (int i = 0; i<length; i+=4) {
        //filterPointillize(m_pixelBuf, i, context);
        int r = i;
        int g = i+1;
        int b = i+2;
        int red = m_pixelBuf[r];
        int green = m_pixelBuf[g];
        int blue = m_pixelBuf[b];
        CGContextSetRGBFillColor(ctx, red/255, green/255, blue/255, 1.0);
        CGContextFillEllipseInRect(ctx, CGRectMake(col, row, amount, amount));
    }
    CGImageRef imageRef = CGBitmapContextCreateImage(ctx);
    CGContextRelease(ctx);
    UIImage* finalImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CFRelease(m_dataRef);
    return finalImage;
}
One problem I see right off the bat is that you are using the raster cell number for both your X and Y origin. A raster in this configuration is just a single-dimension line; it is up to you to calculate the second dimension based on the raster image's width. That could explain why you got a line.
Another thing: it seems like you are reading every pixel of the image. Didn't you want to skip pixels the width of the ellipses you are trying to draw?
The next thing that looks suspicious: I think you should create the context you are drawing into before drawing. In addition, you should not be calling:
CGContextRef contextRef = UIGraphicsGetCurrentContext();
CGContextSaveGState(contextRef);
and
CGContextRestoreGState(contextRef);
inside the loop.
EDIT:
One further observation: your read RGB values are 0-255, and the CGContextSetRGBFillColor function expects values between 0.0f and 1.0f. This would explain why you got white. So you need to divide by 255, using floating-point division (plain red / 255 is integer division and yields only 0 or 1):
CGContextSetRGBFillColor(contextRef, red / 255.0, green / 255.0, blue / 255.0, 1.0);
If you have any further questions, please don't hesitate to ask!
EDIT 2:
To calculate the row, first declare a row counter outside the loop:
int row = 0; //declare before the loop
int imageWidth = self.size.width; //get the image width
if ((i % imageWidth) == 0) { //we divide the cell number and if the remainder is 0
    //then we want to increment the row counter
    row++;
}
We can also use mod to calculate the current column:
int col = i % imageWidth; //divide i by the image width. the remainder is the col num
EDIT 3:
You have to put this inside the for loop:
if ((row % imageWidth) == 0) {
    row++;
}
int col = row % imageWidth;
Also, I forgot to mention before, to make the column and row 0 based (which is what you want) you will need to subtract 1 from the image size:
int imageWidth = self.size.width - 1;

How to get pixel color at a location from a UIImage scaled within a UIImageView

I'm currently using this technique to get the color of a pixel in a UIImage (on iOS).
- (UIColor*) getPixelColorAtLocation:(CGPoint)point {
    UIColor* color = nil;
    CGImageRef inImage = self.image.CGImage;
    // Create off screen bitmap context to draw the image into. Format ARGB is 4 bytes for each pixel: Alpha, Red, Green, Blue
    CGContextRef cgctx = [self createARGBBitmapContextFromImage:inImage];
    if (cgctx == NULL) { return nil; /* error */ }
    size_t w = CGImageGetWidth(inImage);
    size_t h = CGImageGetHeight(inImage);
    CGRect rect = {{0,0},{w,h}};
    // Draw the image to the bitmap context. Once we draw, the memory
    // allocated for the context for rendering will then contain the
    // raw image data in the specified color space.
    CGContextDrawImage(cgctx, rect, inImage);
    // Now we can get a pointer to the image data associated with the bitmap
    // context.
    unsigned char* data = CGBitmapContextGetData(cgctx);
    if (data != NULL) {
        // offset locates the pixel in the data from x,y.
        // 4 for 4 bytes of data per pixel, w is width of one row of data.
        int offset = 4*((w*round(point.y))+round(point.x));
        int alpha = data[offset];
        int red = data[offset+1];
        int green = data[offset+2];
        int blue = data[offset+3];
        NSLog(@"offset: %i colors: RGB A %i %i %i %i",offset,red,green,blue,alpha);
        color = [UIColor colorWithRed:(red/255.0f) green:(green/255.0f) blue:(blue/255.0f) alpha:(alpha/255.0f)];
    }
    // When finished, release the context
    CGContextRelease(cgctx);
    // Free image data memory for the context
    if (data) { free(data); }
    return color;
}
As illustrated here;
http://www.markj.net/iphone-uiimage-pixel-color/
it works quite well, but when working with images larger than the UIImageView it fails. I tried adding an image and changing the scaling mode to fit the view. How would I modify the code so that it can still sample the pixel color from a scaled image?
Try this for Swift 3:
func getPixelColor(image: UIImage, x: Int, y: Int, width: CGFloat) -> UIColor {
    let pixelData = image.cgImage!.dataProvider!.data
    let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
    let pixelInfo: Int = ((Int(width) * y) + x) * 4
    let r = CGFloat(data[pixelInfo]) / 255.0
    let g = CGFloat(data[pixelInfo + 1]) / 255.0
    let b = CGFloat(data[pixelInfo + 2]) / 255.0
    let a = CGFloat(data[pixelInfo + 3]) / 255.0
    return UIColor(red: r, green: g, blue: b, alpha: a)
}
Here's a pointer:
0x3A28213A //sorry, I couldn't resist the joke
For real now: after going through the comments on the page at markj.net, a certain James suggested making the following changes:
size_t w = CGImageGetWidth(inImage); //Written by Mark
size_t h = CGImageGetHeight(inImage); //Written by Mark
float xscale = w / self.frame.size.width;
float yscale = h / self.frame.size.height;
point.x = point.x * xscale;
point.y = point.y * yscale;
(thanks to http://www.markj.net/iphone-uiimage-pixel-color/comment-page-1/#comment-2159)
This didn't actually work for me... not that I did much testing, and I'm not the world's greatest programmer (yet)...
My solution was to scale the UIImageView so that each pixel of the image in it was the same size as a standard CGPoint on the screen; then I took my color as normal (using getPixelColorAtLocation:), and then scaled the image back to the size I wanted.
Hope this helps!
Use the UIImageView Layer:
- (UIColor*) getPixelColorAtLocation:(CGPoint)point {
    UIColor* color = nil;
    UIGraphicsBeginImageContext(self.frame.size);
    CGContextRef cgctx = UIGraphicsGetCurrentContext();
    if (cgctx == NULL) { return nil; /* error */ }
    [self.layer renderInContext:cgctx];
    unsigned char* data = CGBitmapContextGetData(cgctx);
    /*
    ...
    */
    UIGraphicsEndImageContext();
    return color;
}

Using the contents of an array to set individual pixels in a Quartz bitmap context

I have an array that contains the RGB colour values for each pixel in a 320 x 180 display. I would like to be able to set individual pixel values in a bitmap context of the same size offscreen, then display the bitmap context in a view.
It appears that I have to create 1x1 rects and either put a stroke on them or draw a line of length 1 at the point in question. Is that correct? I'm looking for a very efficient way of getting the array data onto the graphics context; as you can imagine, this is an image buffer that cycles at 25 frames per second, and drawing this way seems inefficient.
I guess the other question is: should I use OpenGL ES instead?
Thoughts/best practice would be much appreciated.
Regards
Dave
OK, I have come a short way, but I can't clear the final hurdle, and I am not sure why this isn't working:
- (void) displayContentsOfArray1UsingBitmap: (CGContextRef)context
{
    long bitmapData[WIDTH * HEIGHT];
    // Build bitmap
    int i, j, h;
    for (i = 0; i < WIDTH; i++)
    {
        for (j = 0; j < HEIGHT; j++)
        {
            h = frameBuffer01[i][j];
            bitmapData[i * j] = h;
        }
    }
    // Blit the bitmap to the context
    CGDataProviderRef providerRef = CGDataProviderCreateWithData(NULL, bitmapData, 4 * WIDTH * HEIGHT, NULL);
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGImageRef imageRef = CGImageCreate(WIDTH, HEIGHT, 8, 32, WIDTH * 4, colorSpaceRef, kCGImageAlphaFirst, providerRef, NULL, YES, kCGRenderingIntentDefault);
    CGContextDrawImage(context, CGRectMake(0.0, HEIGHT, WIDTH, HEIGHT), imageRef);
    CGImageRelease(imageRef);
    CGColorSpaceRelease(colorSpaceRef);
    CGDataProviderRelease(providerRef);
}
Read the documentation for CGImageCreate(). Basically, you have to create a CGDataProvider from your pixel array (using CGDataProviderCreateDirect()), then create a CGImage with this data provider as a source. You can then draw the image into any context. It's a bit tedious to get this right because these functions expect a lot of arguments, but the documentation is quite good.
Dave,
The blitting code works fine, but your code to copy from the frame buffer is incorrect.
// Build bitmap
int i, j, h;
for (i = 0; i < WIDTH; i++)
{
    for (j = 0; j < HEIGHT; j++)
    {
        h = frameBuffer01[i][j];
        bitmapData[/*step across a line*/i + /*step down a line*/j*WIDTH] = h;
    }
}
Note my changes to the assignment to elements of bitmapData.
Not knowing the layout of the frame buffer, this may still be incorrect, but from your code this looks closer to the intent.