Issue with image data processing on iOS 11 - Objective-C

I really need your help today! I'm debugging an old Objective-C app created by another dev, and there's a new bug that only appears on iOS 11.
The bug comes from an image processing function used when creating a "Scratch View", similar to this one -> https://github.com/joehour/ScratchCard
But since iOS 11 the function doesn't work anymore. In the code below I get the error [Unknown process name] CGImageMaskCreate: invalid image provider: NULL <-- the variable CGDataProviderRef dataProvider is not created (null)
// Method to change the view which will be scratched
- (void)setHideView:(UIView *)hideView
{
    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceGray();
    UIGraphicsBeginImageContextWithOptions(hideView.bounds.size, NO, 0);
    [hideView.layer renderInContext:UIGraphicsGetCurrentContext()];
    hideView.layer.contentsScale = scale;
    _hideImage = UIGraphicsGetImageFromCurrentImageContext().CGImage;
    UIGraphicsEndImageContext();
    size_t imageWidth = CGImageGetWidth(_hideImage);
    size_t imageHeight = CGImageGetHeight(_hideImage);
    CFMutableDataRef pixels = CFDataCreateMutable(NULL, imageWidth * imageHeight);
    _contextMask = CGBitmapContextCreate(CFDataGetMutableBytePtr(pixels), imageWidth, imageHeight, 8, imageWidth, colorspace, kCGImageAlphaNone);
    CGDataProviderRef dataProvider = CGDataProviderCreateWithCFData(pixels);
    CFRelease(pixels);
    CGContextSetFillColorWithColor(_contextMask, [UIColor blackColor].CGColor);
    CGContextFillRect(_contextMask, self.frame);
    CGContextSetStrokeColorWithColor(_contextMask, [UIColor whiteColor].CGColor);
    CGContextSetLineWidth(_contextMask, _sizeBrush);
    CGContextSetLineCap(_contextMask, kCGLineCapRound);
    CGImageRef mask = CGImageMaskCreate(imageWidth, imageHeight, 8, 8, imageWidth, dataProvider, NULL, NO);
    _scratchImage = CGImageCreateWithMask(_hideImage, mask);
    CGDataProviderRelease(dataProvider);
    CGImageRelease(mask);
    CGColorSpaceRelease(colorspace);
}
I'm not an expert in this kind of image processing, and I'm really lost debugging this part...
Does anyone know why this function no longer works on iOS 11?
Thanks for your help !

iOS 11 stopped accepting CGDataProviderCreateWithCFData() on empty (zero-length) data, so you will need to set the length of pixels explicitly.
Something like:
...
CFMutableDataRef pixels = CFDataCreateMutable(NULL, imageWidth * imageHeight);
CFDataSetLength(pixels, imageWidth * imageHeight); // this is the line you're missing for iOS 11+
_contextMask = ...
...
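For context: CFDataCreateMutable's second argument is only a capacity hint, so the data starts out with length 0, and the provider ends up wrapping an empty buffer. A minimal sketch to verify this, using the names from the question:
CFMutableDataRef pixels = CFDataCreateMutable(NULL, imageWidth * imageHeight);
NSLog(@"length before: %ld", CFDataGetLength(pixels)); // 0 -- capacity is only a hint
CFDataSetLength(pixels, imageWidth * imageHeight);     // grows and zero-fills the buffer
NSLog(@"length after: %ld", CFDataGetLength(pixels));  // imageWidth * imageHeight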


`-drawInRect` seems to be different in 10.11? What could have changed?

Edit: I got a downvote, and I just wanted to make clear that I did ask on the dev forums first (https://forums.developer.apple.com/thread/11593), but I haven't gotten any response. This is a pretty big issue for us, so I figured it'd be best to cast a wide net; maybe somebody can help.
We have an app that works in 10.10, but has this issue in 10.11.
We call -drawInRect: on an NSImage, inside an NSGraphicsContext; the NSImage is properly set in both OS versions.
But in 10.11 this NSImage doesn't get drawn.
I'm pretty novice, but I have been debugging for quite a while to get to where I'm at now, and I'm just plain stuck. Was seeing if anybody ran into this before, or has any idea why this could be.
Here is the pertinent code:
(the layer object is a CGLayerRef that is passed into this method, from the -drawRect method)
Here is how the layer is instantiated:
NSRect scaledRect = [Helpers scaleRect:rect byScale:[self backingScaleFactorRelativeToZoom]];
CGSize size = NSSizeToCGSize(rect.size);
size_t width = size.width;
size_t height = size.height;
size_t bitsPerComponent = 8;
size_t bytesPerRow = (width * 4 + 0x0000000F) & ~0x0000000F; // 16 byte aligned is good
size_t dataSize = bytesPerRow * height;
void* data = calloc(1, dataSize);
CGColorSpaceRef colorspace = [[[_imageController document] captureColorSpace] CGColorSpace];
CGContextRef bitmapContext = CGBitmapContextCreate(data, width, height, bitsPerComponent, bytesPerRow, colorspace, kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Host);
CGLayerRef canvasLayer = CGLayerCreateWithContext(bitmapContext, scaledRect.size, NULL);
Here is the method that draws the image:
CGContextRef mainLayerContext = CGLayerGetContext(layer);
NSRect scaledBounds = [contextInfo[kContextInfoBounds] rectValue];
if( !_flatCache )
{
    _flatCache = CGLayerCreateWithContext(mainLayerContext, scaledBounds.size, NULL);
    CGContextRef flatCacheCtx = CGLayerGetContext(_flatCache);
    CGLayerRef tempLayer = CGLayerCreateWithContext(flatCacheCtx, scaledBounds.size, NULL);
    CIImage *tempImage = [CIImage imageWithCGLayer:tempLayer];
    NSLog(@"%@", tempImage);
    CGContextRef tempLayerCtx = CGLayerGetContext(tempLayer);
    CGContextTranslateCTM(tempLayerCtx, -scaledBounds.origin.x, -scaledBounds.origin.y);
    [NSGraphicsContext saveGraphicsState];
    NSGraphicsContext* newContext = [NSGraphicsContext graphicsContextWithGraphicsPort:tempLayerCtx flipped:NO];
    [NSGraphicsContext setCurrentContext:newContext];
    if ( [_imageController background] )
    {
        NSRect bgRect = { [_imageController backgroundPosition], [_imageController backgroundSize] };
        bgRect = [Helpers scaleRect:bgRect byScale:[self backingScaleFactorRelativeToZoom]];
        [[_imageController background] drawInRect:bgRect fromRect:NSZeroRect operation:NSCompositeSourceOver fraction:1.0];
    }
    CIImage *tempImage2 = [CIImage imageWithCGLayer:tempLayer];
    NSLog(@"%@", tempImage2);
In 10.10 and 10.11, tempImage is an empty image with the correct size.
In 10.10, tempImage2 now has [_imageController background] properly drawn.
In 10.11, tempImage2 is the same as tempImage: a blank image with the correct size.
Unfortunately the person who originally wrote this code is gone now, and I'm too novice to go dig any lower w/o finding a book and reading it.
bgRect is not the issue; I already tried modifying that. I have also messed with the -translate arguments, but still couldn't learn anything.
Does anybody know how else I could debug this to find the issue? Or better yet, has anybody seen this issue and know what my problem is?
Just needed to add this line:
//`bounds` calculated from the size of the image
//mainLayerContext and flatCache defined above in question ^^
CGContextDrawLayerInRect(mainLayerContext, bounds, _flatCache);
at the very end
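For clarity, a hedged sketch of where that call lands relative to the code from the question (this assumes the matching restore for the saveGraphicsState call in the excerpt above):
// ...everything from the question: build _flatCache, bridge an NSGraphicsContext
// over tempLayerCtx, draw the background into tempLayer, etc. ...
[NSGraphicsContext restoreGraphicsState];
// The cache was being populated but never composited back into the main layer,
// which is why nothing ever appeared:
CGContextDrawLayerInRect(mainLayerContext, bounds, _flatCache);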

CGLayerRef turns out empty only in OS X 10.11 (El Capitan)

I have an image editing application that has been working through 10.10, but a bug came up in 10.11.
When I view a CIImage created with -imageWithCGLayer, it shows as an empty image (of the correct size), but only on 10.11.
CGSize size = NSSizeToCGSize(rect.size);
size_t width = size.width;
size_t height = size.height;
size_t bitsPerComponent = 8;
size_t bytesPerRow = (width * 4 + 0x0000000F) & ~0x0000000F; // 16 byte aligned is good
size_t dataSize = bytesPerRow * height;
void* data = calloc(1, dataSize);
CGColorSpaceRef colorspace = [[[_imageController document] captureColorSpace] CGColorSpace];
CGContextRef bitmapContext = CGBitmapContextCreate(data, width, height, bitsPerComponent, bytesPerRow, colorspace, kCGImageAlphaNone | kCGBitmapByteOrder32Host);
CGLayerRef canvasLayer = CGLayerCreateWithContext(bitmapContext, scaledRect.size, NULL);
[self drawCanvasInLayer:canvasLayer inRect:scaledRect];
CIImage *test = [CIImage imageWithCGLayer:canvasLayer];
NSLog(@"%@", test);
So when I view CIImage *test on 10.10, it looks precisely as I want it. On 10.11 it is a blank image of the same size.
I tried looking at the API diffs for CGLayer & CIImage, but the documentation is too dense for me. Has anybody else stumbled across this issue? I imagine it must be something with the initialization of the CGContextRef, because everything else in the code is size-related.
That particular API was deprecated some time ago and completely removed in macOS 10.11, so your results are expected.
Since you already have a bitmapContext, modify your -drawCanvasInLayer: method to draw directly into the bitmap, and then create the image from the bitmap context like this:
CGImageRef tmpCGImage = CGBitmapContextCreateImage(bitmapContext);
CIImage *myCIImage = [[CIImage alloc] initWithCGImage:tmpCGImage];
Remember to do CGImageRelease( tmpCGImage ) after you are done with your CIImage.
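Putting the answer together, a minimal sketch of the replacement flow (drawCanvasInContext:inRect: is a hypothetical refactor of the original drawCanvasInLayer:inRect: that draws into a plain CGContextRef):
// Draw directly into the bitmap context instead of a CGLayer (hypothetical refactor)
[self drawCanvasInContext:bitmapContext inRect:scaledRect];
// Then lift the pixels out of the bitmap context
CGImageRef tmpCGImage = CGBitmapContextCreateImage(bitmapContext);
CIImage *myCIImage = [[CIImage alloc] initWithCGImage:tmpCGImage];
CGImageRelease(tmpCGImage); // per the note above, release once you're done with it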
I recently solved this very problem and posted a sample Objective-C project to work around the loss of this API.
See http://www.blinddogsoftware.com/goodies/#DontSpillTheBits
Also, don't forget to read the header file where that API is declared. There is very often extremely useful information there (in Xcode, Command+click on the specific API).

Xcode Screenshot EAGLContext [duplicate]

This question already has answers here:
Possible Duplicate: How to get UIImage from EAGLView?
(Closed 10 years ago.)
So I was just wondering if anybody knows any way to save what is stored in an EAGLContext as a UIImage.
I am currently using:
UIGraphicsBeginImageContext(CGSizeMake(768, 1024));
[self.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
in other apps of mine, and this works fine, but obviously EAGLContext doesn't have a .layer property. I've tried casting to UIView, but that - unsurprisingly - doesn't work:
UIView *newView = [[UIView alloc] init];
newView = (UIView *)context;
I am drawing to an EAGLContext property on a UIView (technically an EAGLContext on a UIView on another UIView on a view controller, but I figure that shouldn't make any difference) using OpenGL ES 1.
If anybody knows anything about this, even if its just that I'm completely barking up an impossible tree, please let me know!
Matt
After a few days I finally got a working solution to this. There is code provided by Apple that produces a UIImage from an EAGLView. You then simply need to flip the image vertically, since the UIKit coordinate system is upside down relative to OpenGL's. The link to the documentation where I found this method no longer exists.
Method to capture EAGLView:
-(UIImage *)drawableToCGImage
{
    GLint backingWidth2, backingHeight2;
    // Bind the color renderbuffer used to render the OpenGL ES view.
    // If your application only creates a single color renderbuffer which is already bound at this point,
    // this call is redundant, but it is needed if you're dealing with multiple renderbuffers.
    // Note: replace "viewRenderbuffer" with the actual name of the renderbuffer object defined in your class.
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);

    // Get the size of the backing CAEAGLLayer
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth2);
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight2);

    NSInteger x = 0, y = 0, width2 = backingWidth2, height2 = backingHeight2;
    NSInteger dataLength = width2 * height2 * 4;
    GLubyte *data = (GLubyte *)malloc(dataLength * sizeof(GLubyte));

    // Read pixel data from the framebuffer
    glPixelStorei(GL_PACK_ALIGNMENT, 4);
    glReadPixels(x, y, width2, height2, GL_RGBA, GL_UNSIGNED_BYTE, data);

    // Create a CGImage with the pixel data.
    // If your OpenGL ES content is opaque, use kCGImageAlphaNoneSkipLast to ignore the alpha channel;
    // otherwise, use kCGImageAlphaPremultipliedLast.
    CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
    CGImageRef iref = CGImageCreate(width2, height2, 8, 32, width2 * 4, colorspace,
                                    kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
                                    ref, NULL, true, kCGRenderingIntentDefault);

    // OpenGL ES measures data in PIXELS; create a graphics context with the target size measured in POINTS.
    NSInteger widthInPoints, heightInPoints;
    if (NULL != UIGraphicsBeginImageContextWithOptions) {
        // On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration.
        // Set the scale parameter to your OpenGL ES view's contentScaleFactor
        // so that you get a high-resolution snapshot when its value is greater than 1.0.
        CGFloat scale = self.contentScaleFactor;
        widthInPoints = width2 / scale;
        heightInPoints = height2 / scale;
        UIGraphicsBeginImageContextWithOptions(CGSizeMake(widthInPoints, heightInPoints), NO, scale);
    }
    else {
        // On iOS prior to 4, fall back to UIGraphicsBeginImageContext
        widthInPoints = width2;
        heightInPoints = height2;
        UIGraphicsBeginImageContext(CGSizeMake(widthInPoints, heightInPoints));
    }

    CGContextRef cgcontext = UIGraphicsGetCurrentContext();

    // The UIKit coordinate system is upside down relative to the GL/Quartz coordinate system.
    // Flip the CGImage by rendering it to the flipped bitmap context.
    // The size of the destination area is measured in POINTS.
    CGContextSetBlendMode(cgcontext, kCGBlendModeCopy);
    CGContextDrawImage(cgcontext, CGRectMake(0.0, 0.0, widthInPoints, heightInPoints), iref);

    // Retrieve the UIImage from the current context
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // Clean up
    free(data);
    CFRelease(ref);
    CFRelease(colorspace);
    CGImageRelease(iref);

    return image;
}
Method to flip the image vertically:
- (UIImage *)flipImageVertically:(UIImage *)originalImage
{
    UIImageView *tempImageView = [[UIImageView alloc] initWithImage:originalImage];
    UIGraphicsBeginImageContext(tempImageView.frame.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGAffineTransform flipVertical = CGAffineTransformMake(1, 0, 0, -1, 0, tempImageView.frame.size.height);
    CGContextConcatCTM(context, flipVertical);
    [tempImageView.layer renderInContext:context];
    UIImage *flippedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    //[tempImageView release];
    return flippedImage;
}
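Putting the two methods together, a hedged usage sketch (saving to the photo album is just one example of what you might do with the result):
UIImage *raw = [self drawableToCGImage];            // snapshot of the GL renderbuffer
UIImage *snapshot = [self flipImageVertically:raw]; // re-orient for UIKit
UIImageWriteToSavedPhotosAlbum(snapshot, nil, NULL, NULL);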

Core Graphics effect: works in the simulator but not on the device

I have done a simple but effective emboss effect with Core Graphics.
It works great! But only in the simulator...
Here is the result: [screenshot omitted]
What I do is the following:
- From a picked image, I take the alpha out (if it has one) and fill it with white.
- I transform this RGB image to grayscale.
- I invert the colors of this image.
I then call a custom method to create the effect with these parameters:
- canvasImg: a semi-transparent image to mask on
- maskImg: the image I just created, grayscale and inverted
- opacity: the opacity of the resulting image
The method then makes a simple mask, applies shadows and opacity, and returns a brand new UIImage.
I can't understand why it works in the simulator but not on the device.
While running on the device, I do get a non-null UIImage, though...
Please help!
Here is the code:
- (UIImage *)stampImage:(UIImage *)canvasImg withMask:(UIImage *)maskImg withOpacity:(CGFloat)opacity
{
    // Creating the mask image
    CGContextRef mainViewContentContext;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    mainViewContentContext = CGBitmapContextCreate(NULL, maskImg.size.width, maskImg.size.height, 8, 0, colorSpace, kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);
    if (mainViewContentContext == NULL) return nil;
    CGContextClipToMask(mainViewContentContext, CGRectMake(0, 0, maskImg.size.width, maskImg.size.height), maskImg.CGImage);
    CGContextDrawImage(mainViewContentContext, CGRectMake(0, 0, maskImg.size.width, maskImg.size.height), canvasImg.CGImage);
    CGContextSetAllowsAntialiasing(mainViewContentContext, true);
    CGContextSetShouldAntialias(mainViewContentContext, true);
    CGImageRef mainViewContentBitmapContext = CGBitmapContextCreateImage(mainViewContentContext);
    CGContextRelease(mainViewContentContext);
    UIImage *maskedImage = [UIImage imageWithCGImage:mainViewContentBitmapContext];
    CGImageRelease(mainViewContentBitmapContext);

    // Giving some drop shadows
    CGColorSpaceRef colourSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef shadowContext = CGBitmapContextCreate(NULL, maskedImage.size.width + 10, maskedImage.size.height + 10,
                                                       CGImageGetBitsPerComponent(maskedImage.CGImage), 0,
                                                       colourSpace, kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colourSpace);
    CGContextSetShadowWithColor(shadowContext, CGSizeMake(0, -1), 1, [UIColor colorWithWhite:1.0 alpha:0.3].CGColor);
    CGContextSetAllowsAntialiasing(shadowContext, true);
    CGContextSetShouldAntialias(shadowContext, true);
    CGContextDrawImage(shadowContext, CGRectMake(0, 10, maskedImage.size.width, maskedImage.size.height), maskedImage.CGImage);
    CGImageRef shadowedCGImage = CGBitmapContextCreateImage(shadowContext);
    CGContextRelease(shadowContext);
    UIImage *stampImg = [UIImage imageWithCGImage:shadowedCGImage];
    CGImageRelease(shadowedCGImage);
    return stampImg;
}
Also be aware of the memory limitations of the device vs the simulator. I've had CG logic that would build and run fine on the simulator; the same logic will build and run emitting no errors on the device, but the visual result is not the desired one. I'd suggest trying your logic on a considerably smaller image to verify that it works on the device. I had to abandon some very cool image masking stuff that I'd come up with because the device didn't have the horsepower to pull it off for larger images.
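One way to test the memory theory from this answer is to shrink both inputs before calling the method. A hedged test harness follows; the 0.25 scale is an arbitrary choice, not part of the original code:
// Render both inputs at quarter size, then run the same effect. If the
// effect appears on the device at this size, memory pressure is the culprit.
CGFloat testScale = 0.25;
CGSize smallSize = CGSizeMake(canvasImg.size.width * testScale, canvasImg.size.height * testScale);
UIGraphicsBeginImageContextWithOptions(smallSize, NO, 1.0);
[canvasImg drawInRect:CGRectMake(0, 0, smallSize.width, smallSize.height)];
UIImage *smallCanvas = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIGraphicsBeginImageContextWithOptions(smallSize, NO, 1.0);
[maskImg drawInRect:CGRectMake(0, 0, smallSize.width, smallSize.height)];
UIImage *smallMask = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImage *testResult = [self stampImage:smallCanvas withMask:smallMask withOpacity:1.0];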

In Quartz 2D, Is it possible to mask an image by removing everything but the color channel you want?

So I tried to use the Quartz CGImageCreateWithMaskingColors function, but the only problem is that it masks out the color range you are selecting.
I want to mask everything but the color range I am selecting. For instance, I want to show all red colors in a picture but remove the other channels (green and blue).
I am doing this in Objective-C and I am a noob so please give me examples :)
Any help is greatly appreciated.
Use these methods. I found them in one of the SO posts.
-(void)changeColor
{
    UIImage *temp23 = Image; // Pass your image here
    CGImageRef ref1 = [self createMask:temp23];
    const float colorMasking[6] = {1.0, 3.0, 1.0, 2.0, 2.0, 3.0};
    CGImageRef New = CGImageCreateWithMaskingColors(ref1, colorMasking);
    UIImage *resultedimage = [UIImage imageWithCGImage:New];
    EditImageView.image = resultedimage;
    [EditImageView setNeedsDisplay];
}

-(CGImageRef)createMask:(UIImage*)temp
{
    CGImageRef ref = temp.CGImage;
    int mWidth = CGImageGetWidth(ref);
    int mHeight = CGImageGetHeight(ref);
    int count = mWidth * mHeight * 4;
    void *bufferdata = malloc(count);
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
    CGContextRef cgctx = CGBitmapContextCreate(bufferdata, mWidth, mHeight, 8, mWidth * 4, colorSpaceRef, kCGImageAlphaPremultipliedFirst);
    CGRect rect = {0, 0, mWidth, mHeight};
    CGContextDrawImage(cgctx, rect, ref);
    bufferdata = CGBitmapContextGetData(cgctx);
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, bufferdata, mWidth * mHeight * 4, NULL);
    CGImageRef savedimageref = CGImageCreate(mWidth, mHeight, 8, 32, mWidth * 4, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
    CFRelease(colorSpaceRef);
    return savedimageref;
}
Then call the changeColor method on a button's click event and see the result.
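For instance, a minimal hookup sketch (the button name is illustrative and assumed to be created elsewhere):
[maskButton addTarget:self
               action:@selector(changeColor)
     forControlEvents:UIControlEventTouchUpInside];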
I found the answer to my problem above. Follow Rahul's code above, with some changes to set your own color:
-(void)changeColor
{
    UIImage *temp23 = Image; // Pass your image here
    CGImageRef ref1 = [self createMask:temp23];
    const float colorMasking[6] = {1.0, 3.0, 1.0, 2.0, 2.0, 3.0};
    CGImageRef New = CGImageCreateWithMaskingColors(ref1, colorMasking);
    UIImage *resultedimage = [UIImage imageWithCGImage:New];
    EditImageView.image = resultedimage;
    [EditImageView setNeedsDisplay];
}

-(CGImageRef)createMask:(UIImage*)temp
{
    CGImageRef ref = temp.CGImage;
    int mWidth = CGImageGetWidth(ref);
    int mHeight = CGImageGetHeight(ref);
    int count = mWidth * mHeight * 4;
    void *bufferdata = malloc(count);
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
    CGContextRef cgctx = CGBitmapContextCreate(bufferdata, mWidth, mHeight, 8, mWidth * 4, colorSpaceRef, kCGImageAlphaPremultipliedLast);
    CGRect rect = {0, 0, mWidth, mHeight};
    CGContextDrawImage(cgctx, rect, ref);
    CGContextSaveGState(cgctx);
    CGContextSetBlendMode(cgctx, kCGBlendModeColor);
    CGContextSetRGBFillColor(cgctx, 1.0, 0.0, 0.0, 1.0);
    CGContextFillRect(cgctx, rect);
    bufferdata = CGBitmapContextGetData(cgctx);
    const float colorMasking[6] = {1.0, 3.0, 1.0, 2.0, 2.0, 3.0};
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, bufferdata, mWidth * mHeight * 4, NULL);
    CGImageRef savedimageref = CGImageCreate(mWidth, mHeight, 8, 32, mWidth * 4, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
    CFRelease(colorSpaceRef);
    return savedimageref;
}
Hmmm...
I may be missing something, but I don't believe that the provided answers apply to the question. The second response gets closer to the mark, but it contains spurious code that has little to do with the solution.
The createMask method makes a copy of the original image, assuming alpha is in the LSB position. The changeColor method performs a masking call that isn't going to do much to an RGB image: basically only black is going to be masked (i.e., RGB triplets in the range/combinations of (1,1,2) to (3,2,1)).
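To spell that out, here is how CGImageCreateWithMaskingColors interprets the array from the answers above (this annotation follows the documented {min, max} pairing per component):
const float colorMasking[6] = {1.0, 3.0, 1.0, 2.0, 2.0, 3.0};
// Interpreted as {min_R, max_R, min_G, max_G, min_B, max_B}: this masks only
// pixels where 1 <= R <= 3, 1 <= G <= 2, and 2 <= B <= 3 (on a 0-255 scale),
// i.e., essentially just black.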
I am guessing that the red shift being observed is caused by the parameter
kCGImageAlphaPremultipliedFirst
in the line
CGContextRef cgctx = CGBitmapContextCreate(bufferdata, mWidth, mHeight, 8, mWidth * 4, colorSpaceRef, kCGImageAlphaPremultipliedFirst);
due to improper treatment of the alpha channel. If, in the changeColor method, you modify the block
CGImageRef ref1 = [self createMask:temp23];
const float colorMasking[6] = {1.0, 3.0, 1.0, 2.0, 2.0, 3.0};
CGImageRef New = CGImageCreateWithMaskingColors(ref1, colorMasking);
UIImage *resultedimage = [UIImage imageWithCGImage:New];
EditImageView.image = resultedimage;
to be
CGImageRef ref1 = [self createMask:temp23];
UIImage *resultedimage = [UIImage imageWithCGImage:ref1];
EditImageView.image = resultedimage;
you'll see no difference in the display. Changing the CGBitmapInfo constant to kCGImageAlphaPremultipliedLast should display the image correctly using either of the above code blocks.
The next response gets a bit closer to what the OP asks for, but it's in terms of visual effect, not actual data. Here the pertinent code in createMask is
CGContextRef cgctx = CGBitmapContextCreate(bufferdata, mWidth, mHeight, 8, mWidth * 4, colorSpaceRef, kCGImageAlphaPremultipliedLast);
which displays the image correctly, followed by
CGContextSetBlendMode(cgctx, kCGBlendModeColor);
CGContextSetRGBFillColor(cgctx, 1.0, 0.0, 0.0, 1.0);
CGContextFillRect(cgctx, rect);
and then the logic for constructing the image. The blend logic overlays a red tint on the original image, achieving a similar effect as the misplaced alpha channel in the original response. This still really is not what the OP asks for, which is to mask one or more channels, not blend colors.
This really amounts to zeroing out the channel values of the unwanted colors. Here's an example of returning just the red channel, as the OP requests; the assumption is that the pixel format is ABGR:
- (CGImageRef)redChannel:(CGImageRef)image
{
    CGDataProviderRef provider = CGImageGetDataProvider(image);
    NSMutableData *data = (id)CGDataProviderCopyData(provider);
    int width = CGImageGetWidth(image);
    int height = CGImageGetHeight(image);
    [data autorelease];
    // get a mutable reference to the image data
    uint32_t *dwords = [data mutableBytes];
    for (size_t idx = 0; idx < width * height; idx++) {
        uint32_t *pixel = &dwords[idx];
        // perform a logical AND of the pixel with a mask that zeroes out the green and blue channels
        *pixel &= 0x000000ff;
    }
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
    // now create a new image using the masked original data
    CGDataProviderRef iprovider = CGDataProviderCreateWithData(NULL, dwords, width * height * 4, NULL);
    CGImageRef savedimageref = CGImageCreate(width, height, 8, 32, width * 4, colorSpaceRef, bitmapInfo, iprovider, NULL, NO, renderingIntent);
    CGColorSpaceRelease(colorSpaceRef);
    CGDataProviderRelease(iprovider);
    return savedimageref;
}
A good summary of bitwise operations can be found here.
As is pointed out here, you may need to change the structure of the mask depending on the LSB/MSB ordering of the bits in the pixel. This example assumes 32-bit pixels from a standard iPhone PNG.
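For example, a hypothetical variant of the loop above (this byte order is an assumption, not something from the original post): if the 32-bit pixel value is laid out R-G-B-A from most to least significant byte, keeping red and alpha would be:
for (size_t idx = 0; idx < width * height; idx++) {
    uint32_t *pixel = &dwords[idx];
    // Hypothetical layout: R in the most significant byte, A in the least.
    // Keep red and alpha, zero out green and blue.
    *pixel &= 0xFF0000FF;
}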