Alpha Mask Extraction process - Objective-C implementation

Do you have an Objective-C implementation equivalent to ImageMagick's command:
convert -alpha Extract -type optimize -strip -quality 60 +dither Source.png Alpha.jpg
I have not been able to find a solution so far.
I'm looking for an alpha-extractor snippet that extracts the alpha channel from a PNG and saves it as a grayscale JPEG.
The mask is created with this code snippet:
CGImageRef createMaskWithImage(CGImageRef image)
{
    int maskWidth = CGImageGetWidth(image);
    int maskHeight = CGImageGetHeight(image);
    // round bytesPerRow to the nearest 16 bytes, for performance's sake
    int bytesPerRow = (maskWidth + 15) & 0xfffffff0;
    int bufferSize = bytesPerRow * maskHeight;

    // we use CFData instead of malloc(), because the memory has to stick around
    // for the lifetime of the mask. if we used malloc(), we'd have to
    // tell the CGDataProvider how to dispose of the memory when done. using
    // CFData is just easier and cleaner.
    CFMutableDataRef dataBuffer = CFDataCreateMutable(kCFAllocatorDefault, 0);
    CFDataSetLength(dataBuffer, bufferSize);

    // the data will be 8 bits per pixel, no alpha
    CGColorSpaceRef colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericGray); //CGColorSpaceCreateDeviceGray();
    CGContextRef ctx = CGBitmapContextCreate(CFDataGetMutableBytePtr(dataBuffer),
                                             maskWidth, maskHeight,
                                             8, bytesPerRow, colorSpace, kCGImageAlphaNone);

    // drawing into this context will draw into the dataBuffer.
    CGContextDrawImage(ctx, CGRectMake(0, 0, maskWidth, maskHeight), image);
    CGContextRelease(ctx);

    // now make a mask from the data.
    CGDataProviderRef dataProvider = CGDataProviderCreateWithCFData(dataBuffer);
    CGImageRef mask = CGImageMaskCreate(maskWidth, maskHeight, 8, 8, bytesPerRow,
                                        dataProvider, NULL, FALSE);

    CGDataProviderRelease(dataProvider);
    CGColorSpaceRelease(colorSpace);
    CFRelease(dataBuffer);
    return mask;
}
and saved with:
- (void)_saveJPEGImage:(CGImageRef)imageRef path:(NSString *)path {
    NSURL *fileURL = [NSURL fileURLWithPath:path];
    CFURLRef fileUrlRef = (CFURLRef)fileURL;

    CFMutableDictionaryRef mSaveMetaAndOpts = CFDictionaryCreateMutable(nil, 0,
                                                                        &kCFTypeDictionaryKeyCallBacks,
                                                                        &kCFTypeDictionaryValueCallBacks);
    // set the compression quality here
    CFDictionarySetValue(mSaveMetaAndOpts, kCGImageDestinationLossyCompressionQuality, [NSNumber numberWithFloat:0.7]);
    CFDictionarySetValue(mSaveMetaAndOpts, kCGImageDestinationBackgroundColor, kCGColorClear);

    CGImageDestinationRef dr = CGImageDestinationCreateWithURL(fileUrlRef, kUTTypeJPEG, 1, NULL);
    CGImageDestinationAddImage(dr, imageRef, mSaveMetaAndOpts);
    CGImageDestinationFinalize(dr);
    CFRelease(dr);
    CFRelease(mSaveMetaAndOpts); // release the options dictionary created above
}

A really quick and dirty working solution:
Assuming we have 32-bit RGBA raw data (if not, the code needs to be adapted):
1- We iterate over the bytes in steps of 4 and alter the r, g, b components.
CGImageRef ref = CGImageCreateCopy([_imageView image]);
NSData *data = (NSData *)CGDataProviderCopyData(CGImageGetDataProvider(ref));
char *bytes = (char *)[data bytes];
int i;
for (i = 0; i < [data length]; i += 4)
{
    int r = i;
    int g = i + 1;
    int b = i + 2;
    int a = i + 3;
    bytes[r] = 0;
    bytes[g] = 0;
    bytes[b] = 0;
    bytes[a] = bytes[a]; // the alpha byte is left untouched
}
2- We create a new RGBA (32-bit) image reference from the modified data:
size_t width = CGImageGetWidth(ref);
size_t height = CGImageGetHeight(ref);
size_t bitsPerComponent = CGImageGetBitsPerComponent(ref);
size_t bitsPerPixel = CGImageGetBitsPerPixel(ref);
size_t bytesPerRow = CGImageGetBytesPerRow(ref);
CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(ref);
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, bytes, [data length], NULL);
CGImageRef newImageRef = CGImageCreate(width,
                                       height,
                                       bitsPerComponent,
                                       bitsPerPixel,
                                       bytesPerRow,
                                       colorspace,
                                       bitmapInfo,
                                       provider,
                                       NULL,
                                       false,
                                       kCGRenderingIntentDefault);
3- We save this new 32-bit image reference to a JPEG file.
The generated JPEG will be usable as a mask.
We could do this in a cleaner way by creating an 8-bit context and writing only the alpha component, as sketched below.
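A minimal sketch of that cleaner approach (not part of the original answer), assuming the source pixels are tightly packed RGBA with the alpha in the fourth byte of each pixel; the helper name extractAlphaImage is made up for illustration:
// Sketch only: copy every 4th byte (the alpha) into an 8-bit grayscale buffer,
// then build a grayscale CGImage from it.
static CGImageRef extractAlphaImage(CGImageRef source)
{
    size_t width = CGImageGetWidth(source);
    size_t height = CGImageGetHeight(source);
    size_t srcBytesPerRow = CGImageGetBytesPerRow(source);
    CFDataRef rgbaData = CGDataProviderCopyData(CGImageGetDataProvider(source));
    const UInt8 *rgba = CFDataGetBytePtr(rgbaData);

    CFMutableDataRef grayData = CFDataCreateMutable(kCFAllocatorDefault, 0);
    CFDataSetLength(grayData, width * height);
    UInt8 *gray = CFDataGetMutableBytePtr(grayData);

    for (size_t y = 0; y < height; y++) {
        for (size_t x = 0; x < width; x++) {
            gray[y * width + x] = rgba[y * srcBytesPerRow + 4 * x + 3]; // alpha byte
        }
    }

    CGColorSpaceRef graySpace = CGColorSpaceCreateDeviceGray();
    CGDataProviderRef grayProvider = CGDataProviderCreateWithCFData(grayData);
    CGImageRef alphaImage = CGImageCreate(width, height, 8, 8, width, graySpace,
                                          kCGImageAlphaNone, grayProvider,
                                          NULL, false, kCGRenderingIntentDefault);
    CGDataProviderRelease(grayProvider);
    CGColorSpaceRelease(graySpace);
    CFRelease(grayData);
    CFRelease(rgbaData);
    return alphaImage; // caller releases; can be handed to _saveJPEGImage:path:
}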

I see two problems:
CGContextDrawImage(ctx, CGRectMake(0, 0, maskWidth, maskHeight), image);
doesn't extract the alpha; it just alpha-composites the image onto a black background.
If the image is black with transparency, then an all-black image would be the expected output.
and:
CGImageRef mask = CGImageMaskCreate(maskWidth, maskHeight, 8, 8, bytesPerRow,
                                    dataProvider, NULL, FALSE);
You're treating this mask you create like a real image. If you replace this line with
CGImageRef mask = CGImageCreate(maskWidth, maskHeight, 8, 8, bytesPerRow, colorSpace, 0,
                                dataProvider, NULL, FALSE, kCGRenderingIntentDefault);
Then you will get a greyscale version of your image (see problem 1).
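One way to address problem 1 (a sketch, not from the original answer): Quartz supports an alpha-only bitmap context, so drawing the source image into one captures just the alpha channel, which can then back the grayscale mask data.
// Sketch, assuming an 8-bit alpha-only buffer is acceptable as the mask data.
size_t width  = CGImageGetWidth(image);
size_t height = CGImageGetHeight(image);
CFMutableDataRef alphaData = CFDataCreateMutable(kCFAllocatorDefault, 0);
CFDataSetLength(alphaData, width * height);

// Alpha-only context: 8 bits per pixel, no colorspace.
CGContextRef alphaCtx = CGBitmapContextCreate(CFDataGetMutableBytePtr(alphaData),
                                              width, height, 8, width,
                                              NULL, kCGImageAlphaOnly);
CGContextDrawImage(alphaCtx, CGRectMake(0, 0, width, height), image);
CGContextRelease(alphaCtx);
// alphaData now holds one alpha byte per pixel and can back a grayscale CGImage.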

Related

How to convert from CGImageRef to GraphicsMagick Blob type?

I have a fairly standard RGBA image as a CGImageRef.
I'm looking to convert this into a GraphicsMagick Blob (http://www.graphicsmagick.org/Magick++/Image.html#blobs)
What's the best way to go about this conversion?
I have the following, but it either produces a plain black image (if I specify PNG8 in the pathString) or it crashes:
- (void)saveImage:(CGImageRef)image path:(NSString *)pathString
{
    CGDataProviderRef dataProvider = CGImageGetDataProvider(image);
    NSData *data = CFBridgingRelease(CGDataProviderCopyData(dataProvider));
    const void *bytes = [data bytes];

    size_t width = CGImageGetWidth(image);
    size_t height = CGImageGetHeight(image);
    size_t length = CGImageGetBytesPerRow(image) * height;
    NSString *sizeString = [NSString stringWithFormat:@"%ldx%ld", width, height];

    Image pngImage;
    Blob blob(bytes, length);
    pngImage.read(blob);
    pngImage.size([sizeString UTF8String]);
    pngImage.magick("RGBA");
    pngImage.write([pathString UTF8String]);
}
I needed to get the image into the right RGBA format first. The original CGImageRef had a huge number of bytes per row; drawing it into a context with exactly 4 bytes per pixel did the trick.
// Calculate the image width, height and bytes per row
size_t width = CGImageGetWidth(image);
size_t height = CGImageGetHeight(image);
size_t bytesPerRow = 4 * width;
size_t length = bytesPerRow * height;

// Set the frame
CGRect frame = CGRectMake(0, 0, width, height);

// Create context
CGContextRef context = CGBitmapContextCreate(NULL,
                                             width,
                                             height,
                                             CGImageGetBitsPerComponent(image),
                                             bytesPerRow,
                                             CGImageGetColorSpace(image),
                                             kCGImageAlphaPremultipliedLast);
if (!context) {
    return;
}

// Draw the image inside the context
CGContextSetBlendMode(context, kCGBlendModeCopy);
CGContextDrawImage(context, frame, image);

// Get the bitmap data from the context
void *bytes = CGBitmapContextGetData(context);
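The snippet stops at fetching the bytes; a possible completion of the flow (a sketch reusing the Magick++ calls from the question, and assuming pathString is still the method's path parameter) could look like this:
// Sketch: hand the tightly packed RGBA bytes to GraphicsMagick.
// Setting size and magick before read() tells Magick++ how to interpret the raw blob.
Blob blob(bytes, length);
Image gmImage;
gmImage.size([[NSString stringWithFormat:@"%zux%zu", width, height] UTF8String]);
gmImage.magick("RGBA");
gmImage.depth(8);
gmImage.read(blob);
gmImage.write([pathString UTF8String]);
CGContextRelease(context); // the context owns the pixel buffer; release it once the blob is built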

DSP on a UIImage [duplicate]

Possible Duplicate:
How to get pixel data from a UIImage (Cocoa Touch) or CGImage (Core Graphics)?
Let's say I have a UIImage from which I would like to get the RGB matrix in order to do some processing on it (not to change it), just to get the UIImage data so I can run my C algorithms on it.
As you probably know, all the math is done on the image's RGB matrices.
The basic procedure is to create a bitmap context with CGBitmapContextCreate, then draw your image into that context and get the internal data with CGBitmapContextGetData. Here's an example:
UIImage *image = [UIImage imageNamed:@"MyImage.png"];

// Create the bitmap context:
CGImageRef cgImage = [image CGImage];
size_t width = CGImageGetWidth(cgImage);
size_t height = CGImageGetHeight(cgImage);
size_t bitsPerComponent = 8;
size_t bytesPerRow = width * 4;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(NULL, width, height, bitsPerComponent,
                                             bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast);

// Draw your image into the context:
CGContextDrawImage(context, CGRectMake(0, 0, width, height), cgImage);

// Get the raw image data:
unsigned char *data = CGBitmapContextGetData(context);

// Example of how to access pixel values:
size_t x = 0;
size_t y = 0;
size_t i = y * bytesPerRow + x * 4;
unsigned char redValue = data[i];
unsigned char greenValue = data[i + 1];
unsigned char blueValue = data[i + 2];
unsigned char alphaValue = data[i + 3];
NSLog(@"RGBA at (%zu, %zu): %u, %u, %u, %u", x, y, redValue, greenValue, blueValue, alphaValue);

// Clean up:
CGColorSpaceRelease(colorSpace);
CGContextRelease(context);
// At this point, your data pointer becomes invalid; you would have to allocate
// your own buffer instead of passing NULL to avoid this.
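A brief sketch of that last point (an assumption about how you would structure it, not part of the original answer): pass your own buffer to CGBitmapContextCreate so the pixel data outlives the context.
// Sketch: the caller owns `pixels`, so it stays valid after CGContextRelease().
unsigned char *pixels = calloc(height, bytesPerRow);
CGColorSpaceRef rgb = CGColorSpaceCreateDeviceRGB();
CGContextRef ownedContext = CGBitmapContextCreate(pixels, width, height, 8, bytesPerRow,
                                                  rgb, kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(rgb);
CGContextDrawImage(ownedContext, CGRectMake(0, 0, width, height), cgImage);
CGContextRelease(ownedContext);
// `pixels` is still valid here; run the C algorithms on it, then free it.
free(pixels);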

<Error>: CGImageCreate: invalid image bits/pixel: 8

I'm using OpenCV with Xcode, and I use this method to convert from IplImage to UIImage:
- (UIImage *)UIImageFromIplImage:(IplImage *)image {
    NSLog(@"IplImage (%d, %d) %d bits by %d channels, %d bytes/row %s",
          image->width, image->height, image->depth, image->nChannels, image->widthStep, image->channelSeq);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    NSData *data = [NSData dataWithBytes:image->imageData length:image->imageSize];
    CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
    CGImageRef imageRef = CGImageCreate(image->width, image->height,
                                        image->depth, image->depth * image->nChannels, image->widthStep,
                                        colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrderDefault,
                                        provider, NULL, false, kCGRenderingIntentDefault);
    UIImage *ret = [UIImage imageWithCGImage:imageRef scale:1.0 orientation:UIImageOrientationUp];
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);
    return ret;
}
The problem is that when I pass any image (PNG, JPG, TIFF) to this method, this error appears:
<Error>: CGImageCreate: invalid image bits/pixel: 8
Please help me resolve this error, thanks.
If your image is grayscale (not RGBA), use this:
colorSpace = CGColorSpaceCreateDeviceGray();
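A fuller sketch under that assumption (not from the original answer): for a single-channel 8-bit IplImage, the bits per pixel and the bitmapInfo have to match the gray colorspace as well.
// Sketch for an 8-bit, 1-channel IplImage: 8 bits/pixel, no alpha, gray colorspace.
CGColorSpaceRef graySpace = CGColorSpaceCreateDeviceGray();
CGImageRef grayRef = CGImageCreate(image->width, image->height,
                                   8,                 // bits per component
                                   8,                 // bits per pixel (one channel)
                                   image->widthStep,  // bytes per row
                                   graySpace,
                                   kCGImageAlphaNone | kCGBitmapByteOrderDefault,
                                   provider, NULL, false, kCGRenderingIntentDefault);
CGColorSpaceRelease(graySpace);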
From what I've done in my apps, you actually need to provide an alpha value in your image data. What I do is take the data out of the OpenCV struct, add an alpha value, and create the CGImage. Here is the code I use with the C++ API (if you stick to C, just replace the call to aMat.ptr<>(y) with a pointer to the first pixel of the y-th row):
// Colorspace
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

unsigned char *data = new unsigned char[4 * aMat.cols * aMat.rows];
for (int y = 0; y < aMat.rows; ++y)
{
    cv::Vec3b *ptr = aMat.ptr<cv::Vec3b>(y);
    unsigned char *pdata = data + 4 * y * aMat.cols;
    for (int x = 0; x < aMat.cols; ++x, ++ptr)
    {
        *pdata++ = (*ptr)[2];
        *pdata++ = (*ptr)[1];
        *pdata++ = (*ptr)[0];
        *pdata++ = 0;
    }
}

// Bitmap context
CGContextRef context = CGBitmapContextCreate(data, aMat.cols, aMat.rows, 8, 4 * aMat.cols,
                                             colorSpace, kCGImageAlphaNoneSkipLast);
CGImageRef cgimage = CGBitmapContextCreateImage(context);

CGColorSpaceRelease(colorSpace);
CGContextRelease(context);
delete[] data;
The shuffling part is necessary because OpenCV handles BGR images, while Quartz expects RGB.
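As a usage note (a minimal sketch, not from the original answer), the resulting CGImageRef can then be wrapped in a UIImage:
// Sketch: wrap the image for UIKit, then release the Core Graphics object.
UIImage *converted = [UIImage imageWithCGImage:cgimage];
CGImageRelease(cgimage);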

Creating an NSImage from bitmap data

OK, it appears that I'm creating a PDFDocument where pixelWidth is incorrect in the images I created. So the question becomes: how do I get the correct resolution into the image?
I start with bitmap data from a scanner. I'm doing this:
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, (UInt8 *)data, bytesPerRow * length, NULL);
CGImageRef cgImg = CGImageCreate(width,
                                 length,
                                 bitsPerComponent,
                                 bitsPerPixel,
                                 bytesPerRow,
                                 colorspace,
                                 bitmapinfo,               // CGBitmapInfo bitmapInfo
                                 provider,                 // CGDataProviderRef provider
                                 NULL,                     // const CGFloat decode[]
                                 true,                     // bool shouldInterpolate
                                 kCGRenderingIntentDefault // CGColorRenderingIntent intent
                                 );
/* CGColorSpaceRelease(colorspace); */
NSMutableData *imgData = [NSMutableData data];
CGImageDestinationRef dest = CGImageDestinationCreateWithData((CFMutableDataRef)imgData, kUTTypeTIFF, 1, NULL);
CGImageDestinationAddImage(dest, cgImg, NULL);
CGImageDestinationFinalize(dest);
NSImage *img = [[NSImage alloc] initWithData:imgData];
There doesn't appear to be anywhere in there to include the actual width/height in inches or points, nor the actual resolution, which I DO know at this point... How am I supposed to do this?
If you've got a chunk of data, the easiest way to turn it into an NSImage is to use NSBitmapImageRep. Specifically something like:
NSData *byteData = [NSData dataWithBytes:data length:length];
NSBitmapImageRep *imageRep = [NSBitmapImageRep imageRepWithData:byteData];
NSSize imageSize = NSMakeSize(CGImageGetWidth([imageRep CGImage]), CGImageGetHeight([imageRep CGImage]));
NSImage *image = [[NSImage alloc] initWithSize:imageSize];
[image addRepresentation:imageRep];
...use image
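To address the resolution part of the question (a sketch based on how NSBitmapImageRep encodes DPI, not part of this answer): the rep's size is expressed in points while pixelsWide/pixelsHigh are pixels, so setting the point size to pixels * 72 / dpi bakes the resolution into the image when it is written out.
// Sketch: tell the rep it covers fewer points than pixels, e.g. for a 300 dpi scan.
CGFloat dpi = 300.0; // assumed scanner resolution
NSSize pointSize = NSMakeSize([imageRep pixelsWide] * 72.0 / dpi,
                              [imageRep pixelsHigh] * 72.0 / dpi);
[imageRep setSize:pointSize];
[image setSize:pointSize]; // keep the NSImage's point size in sync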
Hope this will be helpful
size_t bufferLength = width * height * 4;
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, data, bufferLength, NULL);
size_t bitsPerComponent = 8;
size_t bitsPerPixel = 32;
size_t bytesPerRow = 4 * width;
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedLast;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;

CGImageRef iref = CGImageCreate(width,
                                height,
                                bitsPerComponent,
                                bitsPerPixel,
                                bytesPerRow,
                                colorSpaceRef,
                                bitmapInfo,
                                provider, // data provider
                                NULL,     // decode
                                YES,      // should interpolate
                                renderingIntent);

_image = [[NSImage alloc] initWithCGImage:iref size:NSMakeSize(width, height)];
A version where the channel count is not 4 (note that for a single-channel image you would also need a gray colorspace rather than DeviceRGB):
size_t bufferLength = width * height * channel;
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, data, bufferLength, NULL);
size_t bitsPerComponent = 8;
size_t bitsPerPixel = bitsPerComponent * channel;
size_t bytesPerRow = channel * width;
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedLast;
if (channel < 4) {
    bitmapInfo = kCGBitmapByteOrderDefault | kCGImageAlphaNone;
}
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;

CGImageRef iref = CGImageCreate(width,
                                height,
                                bitsPerComponent,
                                bitsPerPixel,
                                bytesPerRow,
                                colorSpaceRef,
                                bitmapInfo,
                                provider, // data provider
                                NULL,     // decode
                                YES,      // should interpolate
                                renderingIntent);

_image = [[NSImage alloc] initWithCGImage:iref size:NSMakeSize(width, height)];

Converting RGB data into a bitmap in Objective-C++ Cocoa

I have a buffer of RGB unsigned char values that I would like to convert into a bitmap file; does anyone know how?
My RGB data is laid out in the following format:
R[(0,0)], G[(0,0)], B[(0,0)], R[(0,1)], G[(0,1)], B[(0,1)], R[(0,2)], G[(0,2)], B[(0,2)] .....
The value of each data unit ranges from 0 to 255. Does anyone have any ideas about how I can go about making this conversion?
You can use CGBitmapContextCreate to make a bitmap context from your raw data, then create a CGImageRef from the bitmap context and save it. Unfortunately, CGBitmapContextCreate is a little picky about the format of the data: it does not support 24-bit RGB data. The loop at the beginning swizzles the RGB data to RGBA with an alpha value of zero at the end of each pixel. You have to include and link against the ApplicationServices framework.
char *rgba = (char *)malloc(width * height * 4);
for (int i = 0; i < width * height; ++i) {
    rgba[4 * i]     = myBuffer[3 * i];
    rgba[4 * i + 1] = myBuffer[3 * i + 1];
    rgba[4 * i + 2] = myBuffer[3 * i + 2];
    rgba[4 * i + 3] = 0;
}

CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef bitmapContext = CGBitmapContextCreate(rgba,
                                                   width,
                                                   height,
                                                   8,         // bitsPerComponent
                                                   4 * width, // bytesPerRow
                                                   colorSpace,
                                                   kCGImageAlphaNoneSkipLast);
CFRelease(colorSpace);

CGImageRef cgImage = CGBitmapContextCreateImage(bitmapContext);

CFURLRef url = CFURLCreateWithFileSystemPath(kCFAllocatorDefault, CFSTR("image.png"), kCFURLPOSIXPathStyle, false);
CFStringRef type = kUTTypePNG; // or kUTTypeBMP if you like
CGImageDestinationRef dest = CGImageDestinationCreateWithURL(url, type, 1, 0);
CGImageDestinationAddImage(dest, cgImage, 0);
CFRelease(cgImage);
CFRelease(bitmapContext);
CGImageDestinationFinalize(dest);
CFRelease(dest); // release the destination and the URL as well
CFRelease(url);
free(rgba);
Borrowing from nschmidt's code to produce a familiar, if somewhat red-eyed, image:
int width = 11;
int height = 8;
Byte r[8][11]={
{000,000,255,000,000,000,000,000,255,000,000},
{000,000,000,255,000,000,000,255,000,000,000},
{000,000,255,255,255,255,255,255,255,000,000},
{000,255,255,255,255,255,255,255,255,255,000},
{255,255,255,255,255,255,255,255,255,255,255},
{255,000,255,255,255,255,255,255,255,000,255},
{255,000,255,000,000,000,000,000,255,000,255},
{000,000,000,255,255,000,255,255,000,000,000}};
Byte g[8][11]={
{000,000,255,000,000,000,000,000,255,000,000},
{000,000,000,255,000,000,000,255,000,000,000},
{000,000,255,255,255,255,255,255,255,000,000},
{000,255,255,000,255,255,255,000,255,255,000},
{255,255,255,255,255,255,255,255,255,255,255},
{255,000,255,255,255,255,255,255,255,000,255},
{255,000,255,000,000,000,000,000,255,000,255},
{000,000,000,255,255,000,255,255,000,000,000}};
Byte b[8][11]={
{000,000,255,000,000,000,000,000,255,000,000},
{000,000,000,255,000,000,000,255,000,000,000},
{000,000,255,255,255,255,255,255,255,000,000},
{000,255,255,000,255,255,255,000,255,255,000},
{255,255,255,255,255,255,255,255,255,255,255},
{255,000,255,255,255,255,255,255,255,000,255},
{255,000,255,000,000,000,000,000,255,000,255},
{000,000,000,255,255,000,255,255,000,000,000}};
char *rgba = (char *)malloc(width * height * 4);
int offset = 0;
for (int i = 0; i < height; ++i)
{
    for (int j = 0; j < width; j++)
    {
        rgba[4 * offset]     = r[i][j];
        rgba[4 * offset + 1] = g[i][j];
        rgba[4 * offset + 2] = b[i][j];
        rgba[4 * offset + 3] = 0;
        offset++;
    }
}
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef bitmapContext = CGBitmapContextCreate(rgba,
                                                   width,
                                                   height,
                                                   8,         // bitsPerComponent
                                                   4 * width, // bytesPerRow
                                                   colorSpace,
                                                   kCGImageAlphaNoneSkipLast);
CFRelease(colorSpace);

CGImageRef cgImage = CGBitmapContextCreateImage(bitmapContext);
free(rgba);

UIImage *newUIImage = [UIImage imageWithCGImage:cgImage];
UIImageView *iv = [[UIImageView alloc] initWithFrame:CGRectMake(10, 10, 11, 8)];
[iv setImage:newUIImage];
Then, addSubview:iv to get the image into your view and, of course, do the obligatory [releases] to keep a clean house.
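A minimal sketch of that last step (assuming this code runs inside a view controller, under manual reference counting):
[self.view addSubview:iv];      // put the image view on screen
[iv release];                   // the view hierarchy retains it now
CGImageRelease(cgImage);        // release the Core Graphics objects created above
CGContextRelease(bitmapContext);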