Using bitmapData to modify an image - objective-c

So I've got a method that successfully gets the color of a pixel.
//arrColT is an NSMutableArray<NSColor *>
NSInteger width = [bitIn pixelsWide];
NSInteger height = [bitIn pixelsHigh];
NSInteger rowBytes = [bitIn bytesPerRow];
unsigned char* pixels = [bitIn bitmapData];
int row, col;
//For every row,
for (row = 0; row < height; row++)
{
unsigned char* rowStart = (unsigned char*)(pixels + (row * rowBytes));
unsigned char* nextChannel = rowStart;
//For every pixel in a row,
for (col = 0; col < width; col++)
{
//Get its color.
unsigned char red, green, blue, alpha;
red = *nextChannel;
int intRed = (int)red;
nextChannel++;
green = *nextChannel;
int intGreen = (int)green;
nextChannel++;
blue = *nextChannel;
int intBlue = (int)blue;
nextChannel++;
alpha = *nextChannel;
int intAlpha = (int)alpha;
nextChannel++;
NSColor *colHas = [NSColor colorWithCalibratedRed:(float)intRed/255 green: (float)intGreen/255 blue: (float)intBlue/255 alpha:(float)intAlpha/255];
for (int i = 0; i<[arrColT count]; i++) {
//If the target color is equal to the current color, replace it with the parallel replace color...somehow.
if([colHas isEqualTo:arrColT[i]]){
}
}
}
}
The question is, how do I get color data back into the bitmapData?
With hope,
radzo73

See Technical Q&A QA1509, which basically advises:
1.) create a pixel buffer (by creating a CGContextRef of a predefined format and drawing your image into it);
2.) manipulate the bytes within that pixel buffer as you want; and
3.) create the resulting image with CGBitmapContextCreateImage.
E.g.
- (UIImage *)convertImage:(UIImage *)image
{
// get image
CGImageRef imageRef = image.CGImage;
// prepare context
CGBitmapInfo bitmapInfo = kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big; // bytes in RGBA order
NSInteger width = CGImageGetWidth(imageRef);
NSInteger height = CGImageGetHeight(imageRef);
CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(NULL, width, height, 8, 4 * width, colorspace, bitmapInfo);
// draw image to that context
CGRect rect = CGRectMake(0, 0, width, height);
CGContextDrawImage(context, rect, imageRef);
uint8_t *buffer = CGBitmapContextGetData(context);
// ... manipulate pixel buffer bytes here ...
// get image from buffer
CGImageRef outputImage = CGBitmapContextCreateImage(context);
UIImage *result = [UIImage imageWithCGImage:outputImage scale:image.scale orientation:image.imageOrientation];
// clean up
CGImageRelease(outputImage);
CGContextRelease(context);
CGColorSpaceRelease(colorspace);
return result;
}
For example, here is a B&W conversion routine:
/** Convert the image to B&W as a single (non-parallel) task.
 *
 * This assumes the pixel buffer is in RGBA format, 8 bits per component.
 *
 * @param buffer The pixel buffer.
 * @param width The image width in pixels.
 * @param height The image height in pixels.
 */
- (void)convertToBWSimpleInBuffer:(uint8_t *)buffer width:(NSInteger)width height:(NSInteger)height
{
for (NSInteger row = 0; row < height; row++) {
for (NSInteger col = 0; col < width; col++) {
NSUInteger offset = (col + row * width) * 4;
uint8_t *pixel = buffer + offset;
// get the channels
uint8_t red = pixel[0];
uint8_t green = pixel[1];
uint8_t blue = pixel[2];
uint8_t alpha = pixel[3];
// update the channels with result
uint8_t gray = 0.2126 * red + 0.7152 * green + 0.0722 * blue;
pixel[0] = gray;
pixel[1] = gray;
pixel[2] = gray;
pixel[3] = alpha;
}
}
}
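To wire this into the convertImage: method above, you would call it at the "// ... manipulate pixel buffer bytes here ..." point, something like this (a minimal sketch using the two methods defined in this answer):
// inside -convertImage:, right after CGBitmapContextGetData(context)
[self convertToBWSimpleInBuffer:buffer width:width height:height];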
If you want to improve performance, you can use dispatch_apply and stride through your image buffer in parallel:
/** Convert the image to B&W, using GCD to split the conversion into several concurrent GCD tasks.
 *
 * This assumes the pixel buffer is in RGBA format, 8 bits per component.
 *
 * @param buffer The pixel buffer.
 * @param width The image width in pixels.
 * @param height The image height in pixels.
 * @param count How many GCD tasks the conversion should be split into.
 */
- (void)convertToBWConcurrentInBuffer:(uint8_t *)buffer width:(NSInteger)width height:(NSInteger)height count:(NSInteger)count
{
dispatch_queue_t queue = dispatch_queue_create("com.domain.app", DISPATCH_QUEUE_CONCURRENT);
NSInteger stride = height / count;
// round the iteration count up so trailing rows are not skipped when height is not a multiple of stride
dispatch_apply((height + stride - 1) / stride, queue, ^(size_t idx) {
size_t j = idx * stride;
size_t j_stop = MIN(j + stride, height);
for (NSInteger row = j; row < j_stop; row++) {
for (NSInteger col = 0; col < width; col++) {
NSUInteger offset = (col + row * width) * 4;
uint8_t *pixel = buffer + offset;
// get the channels
uint8_t red = pixel[0];
uint8_t green = pixel[1];
uint8_t blue = pixel[2];
uint8_t alpha = pixel[3];
// update the channels with result
uint8_t gray = 0.2126 * red + 0.7152 * green + 0.0722 * blue;
pixel[0] = gray;
pixel[1] = gray;
pixel[2] = gray;
pixel[3] = alpha;
}
}
});
}

Well it depends on what bitIn is, but something like this should do the trick.
First, create a context. You can do this once and keep the context if you are going to make a lot of changes to the data and need a lot of pictures.
self.ctx = CGBitmapContextCreate (
[bitIn bitmapData],
[bitIn pixelsWide],
[bitIn pixelsHigh],
[bitIn bitsPerPixel],
[bitIn bytesPerRow],
[bitIn colorSpace],
[bitIn bitmapInfo] );
Here I freely used bitIn in the hope that it provides all the missing pieces. Then you can assemble an image at any point, even after further changes to the data, using something like
CGImageRef cgImg = CGBitmapContextCreateImage ( self.ctx );
UIImage * img = [UIImage imageWithCGImage:cgImg];
CGImageRelease ( cgImg );
Now you can make more changes to the data and do the same to get a new image.
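For example, a colour-replacement pass over the same bytes might look like the sketch below. It assumes bitIn is an NSBitmapImageRep with 4 bytes per pixel in RGBA order and a non-planar layout; check samplesPerPixel and bitmapFormat before relying on that, and honour bytesPerRow for row padding as done here:
unsigned char *pixels = [bitIn bitmapData];
NSInteger rowBytes = [bitIn bytesPerRow];
for (NSInteger row = 0; row < [bitIn pixelsHigh]; row++) {
    unsigned char *p = pixels + row * rowBytes;
    for (NSInteger col = 0; col < [bitIn pixelsWide]; col++, p += 4) {
        // target colour: pure red; replacement colour: pure blue (both hypothetical)
        if (p[0] == 255 && p[1] == 0 && p[2] == 0) {
            p[0] = 0; p[1] = 0; p[2] = 255;
        }
    }
}
Because self.ctx wraps those same bytes, the next CGBitmapContextCreateImage(self.ctx) call picks up the edits.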

Related

How to convert from CGImageRef to GraphicsMagick Blob type?

I have a fairly standard RGBA image as a CGImageRef.
I'm looking to convert this into a GraphicsMagick Blob (http://www.graphicsmagick.org/Magick++/Image.html#blobs)
What's the best way to go about converting it?
I have this, but it produces only a plain black image if I specify PNG8 in the pathString, or else it crashes:
- (void)saveImage:(CGImageRef)image path:(NSString *)pathString
{
CGDataProviderRef dataProvider = CGImageGetDataProvider(image);
NSData *data = CFBridgingRelease(CGDataProviderCopyData(dataProvider));
const void *bytes = [data bytes];
size_t width = CGImageGetWidth(image);
size_t height = CGImageGetHeight(image);
size_t length = CGImageGetBytesPerRow(image) * height;
NSString *sizeString = [NSString stringWithFormat:@"%ldx%ld", width, height];
Image pngImage;
Blob blob(bytes, length);
pngImage.read(blob);
pngImage.size([sizeString UTF8String]);
pngImage.magick("RGBA");
pngImage.write([pathString UTF8String]);
}
I needed to get the image into the right RGBA format first. The original CGImageRef had a huge number of bytes per row; drawing it into a context with exactly 4 bytes per pixel (4 * width bytes per row) did the trick.
// Calculate the image width, height and bytes per row
size_t width = CGImageGetWidth(image);
size_t height = CGImageGetHeight(image);
size_t bytesPerRow = 4 * width;
size_t length = bytesPerRow * height;
// Set the frame
CGRect frame = CGRectMake(0, 0, width, height);
// Create context
CGContextRef context = CGBitmapContextCreate(NULL,
width,
height,
CGImageGetBitsPerComponent(image),
bytesPerRow,
CGImageGetColorSpace(image),
kCGImageAlphaPremultipliedLast);
if (!context) {
return;
}
// Draw the image inside the context
CGContextSetBlendMode(context, kCGBlendModeCopy);
CGContextDrawImage(context, frame, image);
// Get the bitmap data from the context
void *bytes = CGBitmapContextGetData(context);
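From there, the question's Blob code can be reused with the repacked bytes. A sketch, assuming this lives in the same -saveImage:path: method as the question's code (so pathString is available) and following the usual Magick++ pattern of setting size and magick before reading raw data; the Blob constructor copies the bytes, so the context can be released afterwards:
NSString *sizeString = [NSString stringWithFormat:@"%zux%zu", width, height];
Blob blob(bytes, length);
Image pngImage;
pngImage.size([sizeString UTF8String]);   // raw data has no header, so supply the dimensions
pngImage.magick("RGBA");                  // and the pixel layout
pngImage.read(blob);
pngImage.write([pathString UTF8String]);
CGContextRelease(context);                // the buffer belongs to the context; release it last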

Objective-c - Getting least used and most used color in a image

I'm trying to get the least used and most used color from an MP3 file's album artwork for a music-playing application. I need the colors to do an effect like the new iTunes 11, where the background color of the menu is the most used color, and the least used color is used for the song labels and artist name.
I am using
- (UIColor*) getPixelColorAtLocation:(CGPoint)point {
UIColor* color = nil;
CGImageRef inImage = self.image.CGImage;
// Create off screen bitmap context to draw the image into. Format ARGB is 4 bytes for each pixel: Alpha, Red, Green, Blue
CGContextRef cgctx = [self createARGBBitmapContextFromImage:inImage];
if (cgctx == NULL) { return nil; /* error */ }
size_t w = CGImageGetWidth(inImage);
size_t h = CGImageGetHeight(inImage);
CGRect rect = {{0,0},{w,h}};
// Draw the image to the bitmap context. Once we draw, the memory
// allocated for the context for rendering will then contain the
// raw image data in the specified color space.
CGContextDrawImage(cgctx, rect, inImage);
// Now we can get a pointer to the image data associated with the bitmap
// context.
unsigned char* data = CGBitmapContextGetData (cgctx);
if (data != NULL) {
//offset locates the pixel in the data from x,y.
//4 for 4 bytes of data per pixel, w is width of one row of data.
int offset = 4*((w*round(point.y))+round(point.x));
int alpha = data[offset];
int red = data[offset+1];
int green = data[offset+2];
int blue = data[offset+3];
NSLog(@"offset: %i colors: RGB A %i %i %i %i",offset,red,green,blue,alpha);
color = [UIColor colorWithRed:(red/255.0f) green:(green/255.0f) blue:(blue/255.0f) alpha:(alpha/255.0f)];
}
// When finished, release the context
CGContextRelease(cgctx);
// Free image data memory for the context
if (data) { free(data); }
return color;
}
- (CGContextRef) createARGBBitmapContextFromImage:(CGImageRef) inImage {
CGContextRef context = NULL;
CGColorSpaceRef colorSpace;
void * bitmapData;
int bitmapByteCount;
int bitmapBytesPerRow;
// Get image width, height. We'll use the entire image.
size_t pixelsWide = CGImageGetWidth(inImage);
size_t pixelsHigh = CGImageGetHeight(inImage);
// Declare the number of bytes per row. Each pixel in the bitmap in this
// example is represented by 4 bytes; 8 bits each of red, green, blue, and
// alpha.
bitmapBytesPerRow = (pixelsWide * 4);
bitmapByteCount = (bitmapBytesPerRow * pixelsHigh);
// Use the generic RGB color space.
colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
if (colorSpace == NULL)
{
fprintf(stderr, "Error allocating color space\n");
return NULL;
}
// Allocate memory for image data. This is the destination in memory
// where any drawing to the bitmap context will be rendered.
bitmapData = malloc( bitmapByteCount );
if (bitmapData == NULL)
{
fprintf (stderr, "Memory not allocated!");
CGColorSpaceRelease( colorSpace );
return NULL;
}
// Create the bitmap context. We want pre-multiplied ARGB, 8-bits
// per component. Regardless of what the source image format is
// (CMYK, Grayscale, and so on) it will be converted over to the format
// specified here by CGBitmapContextCreate.
context = CGBitmapContextCreate (bitmapData,
pixelsWide,
pixelsHigh,
8, // bits per component
bitmapBytesPerRow,
colorSpace,
kCGImageAlphaPremultipliedFirst);
if (context == NULL)
{
free (bitmapData);
fprintf (stderr, "Context not created!");
}
// Make sure and release colorspace before returning
CGColorSpaceRelease( colorSpace );
return context;
}
to get the color at the bottom of the image, so it blends into my view controller, which uses that color for its background and has a shadow to blend the two.
Question: So, as it says: How do I get the least and most used color from an image?
The method below takes an image and analyses it for its main colours, in the following steps:
1.) scale down the image and determine the main pixel colours.
2.) add some colour flexibility to allow for the loss during scaling
3.) distinguish colours, removing similar ones
4.) return the colours as an ordered array or with their percentages
You could adapt it to return a specific number of colours (e.g. the top 10 colours in the image) if you need a guaranteed number of colours returned, or just use the "detail" variable if you don't.
Larger images will take a long time to analyse at high detail.
No doubt the method could be cleaned up a bit but could be a good starting point.
Use like this:
NSDictionary * mainColours = [s mainColoursInImage:image detail:1];
-(NSDictionary*)mainColoursInImage:(UIImage *)image detail:(int)detail {
//1. determine detail vars (0==low,1==default,2==high)
//default detail
float dimension = 10;
float flexibility = 2;
float range = 60;
//low detail
if (detail==0){
dimension = 4;
flexibility = 1;
range = 100;
//high detail (patience!)
} else if (detail==2){
dimension = 100;
flexibility = 10;
range = 20;
}
//2. determine the colours in the image
NSMutableArray * colours = [NSMutableArray new];
CGImageRef imageRef = [image CGImage];
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
unsigned char *rawData = (unsigned char*) calloc(dimension * dimension * 4, sizeof(unsigned char));
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * dimension;
NSUInteger bitsPerComponent = 8;
CGContextRef context = CGBitmapContextCreate(rawData, dimension, dimension, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(context, CGRectMake(0, 0, dimension, dimension), imageRef);
CGContextRelease(context);
float x = 0;
float y = 0;
for (int n = 0; n<(dimension*dimension); n++){
int index = (bytesPerRow * y) + x * bytesPerPixel;
int red = rawData[index];
int green = rawData[index + 1];
int blue = rawData[index + 2];
int alpha = rawData[index + 3];
NSArray * a = [NSArray arrayWithObjects:[NSString stringWithFormat:@"%i",red],[NSString stringWithFormat:@"%i",green],[NSString stringWithFormat:@"%i",blue],[NSString stringWithFormat:@"%i",alpha], nil];
[colours addObject:a];
y++;
if (y==dimension){
y=0;
x++;
}
}
free(rawData);
//3. add some colour flexibility (adds more colours either side of the colours in the image)
NSArray * copyColours = [NSArray arrayWithArray:colours];
NSMutableArray * flexibleColours = [NSMutableArray new];
float flexFactor = flexibility * 2 + 1;
float factor = flexFactor * flexFactor * 3; //(r,g,b) == *3
for (int n = 0; n<(dimension * dimension); n++){
NSArray * pixelColours = copyColours[n];
NSMutableArray * reds = [NSMutableArray new];
NSMutableArray * greens = [NSMutableArray new];
NSMutableArray * blues = [NSMutableArray new];
for (int p = 0; p<3; p++){
NSString * rgbStr = pixelColours[p];
int rgb = [rgbStr intValue];
for (int f = -flexibility; f<flexibility+1; f++){
int newRGB = rgb+f;
if (newRGB<0){
newRGB = 0;
}
if (p==0){
[reds addObject:[NSString stringWithFormat:@"%i",newRGB]];
} else if (p==1){
[greens addObject:[NSString stringWithFormat:@"%i",newRGB]];
} else if (p==2){
[blues addObject:[NSString stringWithFormat:@"%i",newRGB]];
}
}
}
int r = 0;
int g = 0;
int b = 0;
for (int k = 0; k<factor; k++){
int red = [reds[r] intValue];
int green = [greens[g] intValue];
int blue = [blues[b] intValue];
NSString * rgbString = [NSString stringWithFormat:@"%i,%i,%i",red,green,blue];
[flexibleColours addObject:rgbString];
b++;
if (b==flexFactor){ b=0; g++; }
if (g==flexFactor){ g=0; r++; }
}
}
//4. distinguish the colours
//orders the flexible colours by their occurrence
//then keeps them if they are sufficiently dissimilar
NSMutableDictionary * colourCounter = [NSMutableDictionary new];
//count the occurences in the array
NSCountedSet *countedSet = [[NSCountedSet alloc] initWithArray:flexibleColours];
for (NSString *item in countedSet) {
NSUInteger count = [countedSet countForObject:item];
[colourCounter setValue:[NSNumber numberWithInteger:count] forKey:item];
}
//sort keys highest occurrence to lowest
NSArray *orderedKeys = [colourCounter keysSortedByValueUsingComparator:^NSComparisonResult(id obj1, id obj2){
return [obj2 compare:obj1];
}];
//checks if the colour is similar to another one already included
NSMutableArray * ranges = [NSMutableArray new];
for (NSString * key in orderedKeys){
NSArray * rgb = [key componentsSeparatedByString:@","];
int r = [rgb[0] intValue];
int g = [rgb[1] intValue];
int b = [rgb[2] intValue];
bool exclude = false;
for (NSString * ranged_key in ranges){
NSArray * ranged_rgb = [ranged_key componentsSeparatedByString:@","];
int ranged_r = [ranged_rgb[0] intValue];
int ranged_g = [ranged_rgb[1] intValue];
int ranged_b = [ranged_rgb[2] intValue];
if (r>= ranged_r-range && r<= ranged_r+range){
if (g>= ranged_g-range && g<= ranged_g+range){
if (b>= ranged_b-range && b<= ranged_b+range){
exclude = true;
}
}
}
}
if (!exclude){ [ranges addObject:key]; }
}
//return ranges array here if you just want the ordered colours high to low
NSMutableArray * colourArray = [NSMutableArray new];
for (NSString * key in ranges){
NSArray * rgb = [key componentsSeparatedByString:@","];
float r = [rgb[0] floatValue];
float g = [rgb[1] floatValue];
float b = [rgb[2] floatValue];
UIColor * colour = [UIColor colorWithRed:(r/255.0f) green:(g/255.0f) blue:(b/255.0f) alpha:1.0f];
[colourArray addObject:colour];
}
//if you just want an array of images of most common to least, return here
//return [NSDictionary dictionaryWithObject:colourArray forKey:@"colours"];
//if you want percentages to colours continue below
NSMutableDictionary * temp = [NSMutableDictionary new];
float totalCount = 0.0f;
for (NSString * rangeKey in ranges){
NSNumber * count = colourCounter[rangeKey];
totalCount += [count intValue];
temp[rangeKey]=count;
}
//set percentages
NSMutableDictionary * colourDictionary = [NSMutableDictionary new];
for (NSString * key in temp){
float count = [temp[key] floatValue];
float percentage = count/totalCount;
NSArray * rgb = [key componentsSeparatedByString:@","];
float r = [rgb[0] floatValue];
float g = [rgb[1] floatValue];
float b = [rgb[2] floatValue];
UIColor * colour = [UIColor colorWithRed:(r/255.0f) green:(g/255.0f) blue:(b/255.0f) alpha:1.0f];
colourDictionary[colour]=[NSNumber numberWithFloat:percentage];
}
return colourDictionary;
}
Not sure about finding the most or least used color, but here is a method to find the average color.
- (UIColor *)averageColor {
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
unsigned char rgba[4];
CGContextRef context = CGBitmapContextCreate(rgba, 1, 1, 8, 4, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGContextDrawImage(context, CGRectMake(0, 0, 1, 1), self.CGImage);
CGColorSpaceRelease(colorSpace);
CGContextRelease(context);
if(rgba[3] > 0) {
CGFloat alpha = ((CGFloat)rgba[3])/255.0;
CGFloat multiplier = alpha/255.0;
return [UIColor colorWithRed:((CGFloat)rgba[0])*multiplier
green:((CGFloat)rgba[1])*multiplier
blue:((CGFloat)rgba[2])*multiplier
alpha:alpha];
}
else {
return [UIColor colorWithRed:((CGFloat)rgba[0])/255.0
green:((CGFloat)rgba[1])/255.0
blue:((CGFloat)rgba[2])/255.0
alpha:((CGFloat)rgba[3])/255.0];
}
}
You can probably follow a similar approach to find out the most used color.
Also check this answer about counting red color pixels in an image.
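As a rough sketch of the counting step itself (building on the pixel-access code above, and assuming a 4-byte-per-pixel buffer named data of w by h pixels, as produced by that context): tally each pixel in an NSCountedSet, then sort by count.
NSCountedSet *counts = [NSCountedSet set];
for (size_t y = 0; y < h; y++) {
    for (size_t x = 0; x < w; x++) {
        size_t offset = 4 * (y * w + x);
        // pack the four channels into one number so identical pixels compare equal
        uint32_t packed = ((uint32_t)data[offset] << 24) | ((uint32_t)data[offset + 1] << 16) |
                          ((uint32_t)data[offset + 2] << 8) | (uint32_t)data[offset + 3];
        [counts addObject:@(packed)];
    }
}
NSArray *ordered = [[counts allObjects] sortedArrayUsingComparator:^NSComparisonResult(id a, id b) {
    return [@([counts countForObject:b]) compare:@([counts countForObject:a])];
}];
NSNumber *mostUsed = ordered.firstObject;   // highest pixel count
NSNumber *leastUsed = ordered.lastObject;   // lowest pixel count
In practice you would quantise the channels (e.g. drop the low bits) before packing, otherwise compression noise makes nearly every pixel unique.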
Thanks a lot for your code, @JohnnyRockex. It was really helpful in getting me started towards my goal (finding accent colors depending on the most predominant color in an image).
After going through it, I found the code could be simplified and made easier to read, so I'd like to give back to the community my own version; the -colors selector is in a UIImage extension.
- (NSArray *)colors {
// Original code by Johnny Rockex http://stackoverflow.com/a/29266983/825644
// Higher the dimension, the more pixels are checked against.
const float pixelDimension = 10;
// Higher the range, more similar colors are removed.
const float filterRange = 60;
unsigned char *rawData = (unsigned char*) calloc(pixelDimension * pixelDimension * kBytesPerPixel, sizeof(unsigned char));
NSUInteger bytesPerRow = kBytesPerPixel * pixelDimension;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(rawData, pixelDimension, pixelDimension, kBitsInAByte, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(context, CGRectMake(0, 0, pixelDimension, pixelDimension), [self CGImage]);
CGContextRelease(context);
NSMutableArray * colors = [[NSMutableArray alloc] init];
float x = 0;
float y = 0;
const int pixelMatrixSize = pixelDimension * pixelDimension;
for (int i = 0; i < pixelMatrixSize; i++){
int index = (bytesPerRow * y) + x * kBytesPerPixel;
int red = rawData[index];
int green = rawData[index + 1];
int blue = rawData[index + 2];
int alpha = rawData[index + 3];
UIColor * color = [UIColor colorWithRed:(red / 255.0f) green:(green / 255.0f) blue:(blue / 255.0f) alpha:(alpha / 255.0f)];
[colors addObject:color];
y++;
if (y == pixelDimension){
y = 0;
x++;
}
}
free(rawData);
NSMutableDictionary * colorCounter = [[NSMutableDictionary alloc] init];
NSCountedSet *countedSet = [[NSCountedSet alloc] initWithArray:colors];
for (NSString *item in countedSet) {
NSUInteger count = [countedSet countForObject:item];
[colorCounter setValue:[NSNumber numberWithInteger:count] forKey:item];
}
NSArray *orderedColors = [colorCounter keysSortedByValueUsingComparator:^NSComparisonResult(id obj1, id obj2){
return [obj2 compare:obj1];
}];
NSMutableArray *filteredColors = [NSMutableArray new];
for (UIColor *color in orderedColors){
bool filtered = false;
for (UIColor *rangedColor in filteredColors){
if (abs(color.redRGBComponent - rangedColor.redRGBComponent) <= filterRange &&
abs(color.greenRGBComponent - rangedColor.greenRGBComponent) <= filterRange &&
abs(color.blueRGBComponent - rangedColor.blueRGBComponent) <= filterRange) {
filtered = true;
break;
}
}
if (!filtered) {
[filteredColors addObject:color];
}
}
return [filteredColors copy];
}
The code for the UIColor extension adding the RGB component accessors is below. I wrote it in Swift (I'm trying to write all new classes in Swift, though that wasn't the case for the -colors selector):
extension UIColor {
open func redRGBComponent() -> UInt8 {
let colorComponents = cgColor.components!
return UInt8(colorComponents[0] * 255)
}
open func greenRGBComponent() -> UInt8 {
let colorComponents = cgColor.components!
return UInt8(colorComponents[1] * 255)
}
open func blueRGBComponent() -> UInt8 {
let colorComponents = cgColor.components!
return UInt8(colorComponents[2] * 255)
}
}
Enjoy!
I wrote this tool to do that.
https://github.com/623637646/UIImageColorRatio
// replace the UIImage() to yourself's UIImage.
let theMostUsedColor = UIImage().calculateColorRatio(deviation: 0)?.colorRatioArray.first?.color
let theLeastUsedColor = UIImage().calculateColorRatio(deviation: 0)?.colorRatioArray.last?.color

dsp on an UIImage [duplicate]

Possible Duplicate:
How to get pixel data from a UIImage (Cocoa Touch) or CGImage (Core Graphics)?
Let's say I have a UIImage that I would like to get the RGB matrix of, in order to do some processing on it. I don't want to change it, just get the UIImage data so I can use my C algorithms on it.
As you probably know, all the math is done on the image's RGB matrices.
The basic procedure is to create a bitmap context with CGBitmapContextCreate, then draw your image into that context and get the internal data with CGBitmapContextGetData. Here's an example:
UIImage *image = [UIImage imageNamed:@"MyImage.png"];
//Create the bitmap context:
CGImageRef cgImage = [image CGImage];
size_t width = CGImageGetWidth(cgImage);
size_t height = CGImageGetHeight(cgImage);
size_t bitsPerComponent = 8;
size_t bytesPerRow = width * 4;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(NULL, width, height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast);
//Draw your image into the context:
CGContextDrawImage(context, CGRectMake(0, 0, width, height), cgImage);
//Get the raw image data:
unsigned char *data = CGBitmapContextGetData(context);
//Example how to access pixel values:
size_t x = 0;
size_t y = 0;
size_t i = y * bytesPerRow + x * 4;
unsigned char redValue = data[i];
unsigned char greenValue = data[i + 1];
unsigned char blueValue = data[i + 2];
unsigned char alphaValue = data[i + 3];
NSLog(@"RGBA at (%zu, %zu): %i, %i, %i, %i", x, y, redValue, greenValue, blueValue, alphaValue);
//Clean up:
CGColorSpaceRelease(colorSpace);
CGContextRelease(context);
//At this point, your data pointer becomes invalid, you would have to allocate
//your own buffer instead of passing NULL to avoid this.
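A minimal sketch of that variant: allocate the buffer yourself, hand it to CGBitmapContextCreate, and free it when you are finished, so the bytes stay valid after the context is released.
CGColorSpaceRef cs = CGColorSpaceCreateDeviceRGB();
unsigned char *ownBuffer = malloc(height * bytesPerRow);
CGContextRef ctx = CGBitmapContextCreate(ownBuffer, width, height, bitsPerComponent,
                                         bytesPerRow, cs, kCGImageAlphaPremultipliedLast);
CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), cgImage);
CGContextRelease(ctx);           // ownBuffer is still valid after this
CGColorSpaceRelease(cs);
// ... run your C algorithms on ownBuffer ...
free(ownBuffer);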

Is it possible to make a bit map image as array?

I would like to make a bitmap image from the following array.
I think I will need to use Quartz...
How can I make a bitmap image?
int bit_map[10][10] = {{0,0,1,0,0,0,0,1,0,0},
{0,1,1,1,1,0,1,1,1,0},
{1,1,1,1,1,1,1,1,1,1},
{1,1,1,1,1,1,1,1,1,1},
{0,1,1,1,1,1,1,1,1,0},
{0,1,1,1,1,1,1,1,1,0},
{0,1,1,1,1,1,1,1,1,0},
{0,0,1,1,1,1,1,1,0,0},
{0,0,0,1,1,1,1,0,0,0},
{0,0,0,0,1,1,0,0,0,0}};
Here is code to create a CGImageRef (cir) from the C array. Note that things can be simpler if the C array is one-dimensional with values of 0 and 255; a sketch of that variant follows the code below.
size_t width = 10;
size_t height = 10;
int bit_map[10][10] = {
{0,0,1,0,0,0,0,1,0,0},
{0,1,1,1,1,0,1,1,1,0},
{1,1,1,1,1,1,1,1,1,1},
{1,1,1,1,1,1,1,1,1,1},
{0,1,1,1,1,1,1,1,1,0},
{0,1,1,1,1,1,1,1,1,0},
{0,1,1,1,1,1,1,1,1,0},
{0,0,1,1,1,1,1,1,0,0},
{0,0,0,1,1,1,1,0,0,0},
{0,0,0,0,1,1,0,0,0,0}
};
UIImage *barCodeImage = nil;
size_t bitsPerComponent = 8;
size_t bitsPerPixel = 8;
size_t bytesPerRow = (width * bitsPerPixel + 7) / 8;
void *imageBytes;
size_t imageBytesSize = height * bytesPerRow;
imageBytes = calloc(1, imageBytesSize);
for (int i=0; i<10; i++) {
for (int j=0; j<10; j++) {
int pixel = bit_map[j][i];
if (pixel == 1)
pixel = 255;
((unsigned char*)imageBytes)[j * bytesPerRow + i] = pixel;  // row j, column i
}
}
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, imageBytes, imageBytesSize, releasePixels);
CGColorSpaceRef colorSpaceGrey = CGColorSpaceCreateDeviceGray();
CGImageRef cir = CGImageCreate(width,
                               height,
                               bitsPerComponent,
                               bitsPerPixel,
                               bytesPerRow,   // not imageBytesSize: this parameter is bytes per row
                               colorSpaceGrey,
                               kCGImageAlphaNone,
                               provider,
                               NULL,
                               NO,
                               kCGRenderingIntentDefault);
CGDataProviderRelease(provider);
CGColorSpaceRelease(colorSpaceGrey);
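And the simpler variant mentioned above, as a sketch: if the source is already a flat array of 0/255 bytes, the copy loop disappears and the array can feed the data provider directly (with a NULL release callback the data is not copied, so the array must outlive the image):
unsigned char gray_map[100] = { 0 };   // 10x10, one byte per pixel, values 0 or 255
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, gray_map, sizeof(gray_map), NULL);
CGColorSpaceRef graySpace = CGColorSpaceCreateDeviceGray();
CGImageRef cir = CGImageCreate(10, 10, 8, 8, 10, graySpace, kCGImageAlphaNone,
                               provider, NULL, NO, kCGRenderingIntentDefault);
CGDataProviderRelease(provider);
CGColorSpaceRelease(graySpace);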

Converting RGB data into a bitmap in Objective-C++ Cocoa

I have a buffer of RGB unsigned char values that I would like to convert into a bitmap file; does anyone know how?
My RGB data is in the following format:
R[(0,0)], G[(0,0)], B[(0,0)], R[(0,1)], G[(0,1)], B[(0,1)], R[(0,2)], G[(0,2)], B[(0,2)] .....
The value for each data unit ranges from 0 to 255. Does anyone have ideas on how I can go about making this conversion?
You can use CGBitmapContextCreate to make a bitmap context from your raw data, then create a CGImageRef from the bitmap context and save it. Unfortunately, CGBitmapContextCreate is a little picky about the format of the data: it does not support 24-bit RGB. The loop at the beginning swizzles the RGB data to RGBA, with an alpha value of zero at the end of each pixel. You have to include and link with the ApplicationServices framework.
char* rgba = (char*)malloc(width*height*4);
for(int i=0; i < width*height; ++i) {
rgba[4*i] = myBuffer[3*i];
rgba[4*i+1] = myBuffer[3*i+1];
rgba[4*i+2] = myBuffer[3*i+2];
rgba[4*i+3] = 0;
}
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef bitmapContext = CGBitmapContextCreate(
rgba,
width,
height,
8, // bitsPerComponent
4*width, // bytesPerRow
colorSpace,
kCGImageAlphaNoneSkipLast);
CFRelease(colorSpace);
CGImageRef cgImage = CGBitmapContextCreateImage(bitmapContext);
CFURLRef url = CFURLCreateWithFileSystemPath(kCFAllocatorDefault, CFSTR("image.png"), kCFURLPOSIXPathStyle, false);
CFStringRef type = kUTTypePNG; // or kUTTypeBMP if you like
CGImageDestinationRef dest = CGImageDestinationCreateWithURL(url, type, 1, 0);
CGImageDestinationAddImage(dest, cgImage, 0);
CFRelease(cgImage);
CFRelease(bitmapContext);
CGImageDestinationFinalize(dest);
free(rgba);
Borrowing from nschmidt's code to produce a familiar, if somewhat red-eyed, image:
int width = 11;
int height = 8;
Byte r[8][11]={
{000,000,255,000,000,000,000,000,255,000,000},
{000,000,000,255,000,000,000,255,000,000,000},
{000,000,255,255,255,255,255,255,255,000,000},
{000,255,255,255,255,255,255,255,255,255,000},
{255,255,255,255,255,255,255,255,255,255,255},
{255,000,255,255,255,255,255,255,255,000,255},
{255,000,255,000,000,000,000,000,255,000,255},
{000,000,000,255,255,000,255,255,000,000,000}};
Byte g[8][11]={
{000,000,255,000,000,000,000,000,255,000,000},
{000,000,000,255,000,000,000,255,000,000,000},
{000,000,255,255,255,255,255,255,255,000,000},
{000,255,255,000,255,255,255,000,255,255,000},
{255,255,255,255,255,255,255,255,255,255,255},
{255,000,255,255,255,255,255,255,255,000,255},
{255,000,255,000,000,000,000,000,255,000,255},
{000,000,000,255,255,000,255,255,000,000,000}};
Byte b[8][11]={
{000,000,255,000,000,000,000,000,255,000,000},
{000,000,000,255,000,000,000,255,000,000,000},
{000,000,255,255,255,255,255,255,255,000,000},
{000,255,255,000,255,255,255,000,255,255,000},
{255,255,255,255,255,255,255,255,255,255,255},
{255,000,255,255,255,255,255,255,255,000,255},
{255,000,255,000,000,000,000,000,255,000,255},
{000,000,000,255,255,000,255,255,000,000,000}};
char* rgba = (char*)malloc(width*height*4);
int offset=0;
for(int i=0; i < height; ++i)
{
for (int j=0; j < width; j++)
{
rgba[4*offset] = r[i][j];
rgba[4*offset+1] = g[i][j];
rgba[4*offset+2] = b[i][j];
rgba[4*offset+3] = 0;
offset ++;
}
}
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef bitmapContext = CGBitmapContextCreate(
rgba,
width,
height,
8, // bitsPerComponent
4*width, // bytesPerRow
colorSpace,
kCGImageAlphaNoneSkipLast);
CFRelease(colorSpace);
CGImageRef cgImage = CGBitmapContextCreateImage(bitmapContext);
free(rgba);
UIImage *newUIImage = [UIImage imageWithCGImage:cgImage];
UIImageView *iv = [[UIImageView alloc] initWithFrame:CGRectMake(10, 10, 11,8)];
[iv setImage:newUIImage];
Then addSubview:iv to get the image into your view and, of course, do the obligatory releases to keep a clean house.
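In code that might look roughly like this (manual reference counting, matching the answer's style; the CGImageRef and bitmap context were never released above, so they are cleaned up here too):
[self.view addSubview:iv];
[iv release];                        // the superview retains the image view
CGImageRelease(cgImage);             // newUIImage keeps its own reference
CGContextRelease(bitmapContext);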