Change color of UIImage by pixel array - objective-c

I want to change the color of my image without changing its alpha.
I am using the following code, which shifts the colors toward blue.
But what I really want is to set every pixel of the image to one particular RGB value.
For example, I need to apply the RGB value R = 116, G = 170, B = 243.
CGImageRef sourceImage = ImageView_Test.image.CGImage;
CFDataRef theData = CGDataProviderCopyData(CGImageGetDataProvider(sourceImage));
UInt8 *pixelData = (UInt8 *)CFDataGetBytePtr(theData);

int red = 0;
int green = 1;
int blue = 2;
int dataLength = CFDataGetLength(theData);

for (int index = 0; index < dataLength; index += 4)
{
    if (pixelData[index + blue] - 80 > 0)
    {
        pixelData[index + red] = pixelData[index + blue] - 139;
        pixelData[index + green] = pixelData[index + blue] - 85;
    }
    else
    {
        pixelData[index + green] = 0;
        pixelData[index + red] = 0;
    }
}

CGContextRef context = CGBitmapContextCreate(pixelData,
                                             CGImageGetWidth(sourceImage),
                                             CGImageGetHeight(sourceImage),
                                             8,
                                             CGImageGetBytesPerRow(sourceImage),
                                             CGImageGetColorSpace(sourceImage),
                                             kCGImageAlphaPremultipliedLast);

CGImageRef newCGImage = CGBitmapContextCreateImage(context);
UIImage *newImage = [UIImage imageWithCGImage:newCGImage];
ImageView_Test.image = newImage;

CGContextRelease(context);
CFRelease(theData);
CGImageRelease(newCGImage);

I am using the following method to change the color of a UIImage without affecting its alpha:
- (UIImage *)didImageColorchanged:(NSString *)name withColor:(UIColor *)color
{
    UIImage *img = [UIImage imageNamed:name];

    UIGraphicsBeginImageContext(img.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    [color setFill];

    CGContextTranslateCTM(context, 0, img.size.height);
    CGContextScaleCTM(context, 1.0, -1.0);
    CGContextSetBlendMode(context, kCGBlendModeColorBurn);

    CGRect rect = CGRectMake(0, 0, img.size.width, img.size.height);
    CGContextDrawImage(context, rect, img.CGImage);
    CGContextClipToMask(context, rect, img.CGImage);
    CGContextAddRect(context, rect);
    CGContextDrawPath(context, kCGPathFill);

    UIImage *coloredImg = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return coloredImg;
}
Ex (the method takes the image name, not a UIImage):
resultView.image = [self didImageColorchanged:@"xyz.png" withColor:[UIColor redColor]];
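For the specific tint mentioned in the question (R = 116, G = 170, B = 243), the call might look like this. The colour value comes from the question; the rest follows the usage example above:

    // Build the requested tint colour and apply it with the helper above.
    UIColor *tint = [UIColor colorWithRed:116.0 / 255.0
                                    green:170.0 / 255.0
                                     blue:243.0 / 255.0
                                    alpha:1.0];
    resultView.image = [self didImageColorchanged:@"xyz.png" withColor:tint];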

You can just use:
// load image
UIImage *image = [UIImage imageNamed:@"test.png"];
CGImageRef imageRef = image.CGImage;
NSData *data = (NSData *)CGDataProviderCopyData(CGImageGetDataProvider(imageRef));
// note: strictly speaking the bytes of an immutable NSData should not be
// modified in place; a mutable copy would be safer
char *pixels = (char *)[data bytes];

// this is where you manipulate the individual pixels
// assumes a 4 byte pixel consisting of rgb and alpha
// for PNGs without transparency use i += 3 and remove int a
for (int i = 0; i < [data length]; i += 4)
{
    int r = i;
    int g = i + 1;
    int b = i + 2;
    int a = i + 3;

    pixels[r] = 0;          // eg. remove red
    pixels[g] = pixels[g];
    pixels[b] = pixels[b];
    pixels[a] = pixels[a];
}

// create a new image from the modified pixel data
size_t width = CGImageGetWidth(imageRef);
size_t height = CGImageGetHeight(imageRef);
size_t bitsPerComponent = CGImageGetBitsPerComponent(imageRef);
size_t bitsPerPixel = CGImageGetBitsPerPixel(imageRef);
size_t bytesPerRow = CGImageGetBytesPerRow(imageRef);
CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(imageRef);
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, pixels, [data length], NULL);

CGImageRef newImageRef = CGImageCreate(width,
                                       height,
                                       bitsPerComponent,
                                       bitsPerPixel,
                                       bytesPerRow,
                                       colorspace,
                                       bitmapInfo,
                                       provider,
                                       NULL,
                                       false,
                                       kCGRenderingIntentDefault);

// the modified image
UIImage *newImage = [UIImage imageWithCGImage:newImageRef];

// cleanup: the pixel buffer is owned by `data`, so release that instead of
// calling free() on it; imageRef comes from image.CGImage and is not owned here
CGColorSpaceRelease(colorspace);
CGDataProviderRelease(provider);
CGImageRelease(newImageRef);
[data release];
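To get the effect the question asks for (every visible pixel forced to R = 116, G = 170, B = 243 while the alpha byte is untouched), the manipulation loop above could be replaced with something like this sketch. It assumes the same 4-byte RGBA layout; with premultiplied alpha the stored RGB bytes would strictly also need to be scaled by alpha:

    UInt8 *p = (UInt8 *)pixels;              // treat the buffer as unsigned bytes
    for (NSUInteger i = 0; i + 3 < [data length]; i += 4)
    {
        if (p[i + 3] != 0)                   // leave fully transparent pixels alone
        {
            p[i]     = 116;                  // red
            p[i + 1] = 170;                  // green
            p[i + 2] = 243;                  // blue
            // p[i + 3] (alpha) is not touched
        }
    }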

Related

CGBitmapContextCreateImage or CGContextDrawImage doesn't work

I can't solve this problem. I load an image and convert it to a CGImageRef, then try to get a bitmap context and render it on the screen.
NSURL *imageFileURL = [NSURL fileURLWithPath:stringIMG];
CGImageSourceRef imageSource = CGImageSourceCreateWithURL((CFURLRef)imageFileURL, NULL);
CGImageRef imageRef = CGImageSourceCreateImageAtIndex(imageSource, 0, NULL);

NSInteger bitsPerComponent = CGImageGetBitsPerComponent(imageRef);
NSInteger bitsPerPixel = CGImageGetBitsPerPixel(imageRef);
CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(imageRef);
NSInteger bytesPerRow = CGImageGetBytesPerRow(imageRef);
NSInteger width = CGImageGetWidth(imageRef);
NSInteger height = CGImageGetHeight(imageRef);
CGColorSpaceRef colorspace = CGImageGetColorSpace(imageRef);

size_t rawData = bytesPerRow * height;
unsigned char *data = malloc(rawData);
memset(data, 0, rawData);

CGContextRef context = CGBitmapContextCreate(data, width, height, bitsPerComponent, bytesPerRow, colorspace, bitmapInfo);
CGImageRef imageRef2 = CGBitmapContextCreateImage(context);
// CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef2);
UIImage *result = [UIImage imageWithCGImage:imageRef2]; // if I do it like this I get a white empty screen
UIImageView *image = [[UIImageView alloc] initWithImage:result];
[self.view addSubview:image]; // if I do it like this I get a black rectangle on the white screen
I have no idea what is wrong. I checked with a breakpoint that the context is not null. What should I do? Can anyone help me?
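For what it's worth, the code above never draws the source image into the bitmap context, so the snapshot taken from it stays empty; the commented-out CGContextDrawImage line also passes imageRef2 (the empty snapshot) rather than the source imageRef. A minimal sketch of the missing step, under that reading of the code:

    // Fill the context with the *source* image first, then snapshot it.
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGImageRef imageRef2 = CGBitmapContextCreateImage(context);
    UIImage *result = [UIImage imageWithCGImage:imageRef2];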
I added a category to UIImage for resizing using Core Graphics.
UIImage+Resizing.h
#import <UIKit/UIKit.h>

@interface UIImage (Resizing)
- (UIImage *)resizedImageWithSize:(CGSize)size;
@end
UIImage+Resizing.m
#import "UIImage+Resizing.h"

@implementation UIImage (Resizing)
- (UIImage *)resizedImageWithSize:(CGSize)size {
    CGImageRef cgImage = [self CGImage];

    size_t bitsPerComponent = CGImageGetBitsPerComponent(cgImage);
    CGColorSpaceRef colorSpace = CGImageGetColorSpace(cgImage);
    CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(cgImage);

    // Pass 0 for bytesPerRow so Core Graphics computes it for the *new* width;
    // reusing the source image's bytesPerRow can fail when the sizes differ.
    CGContextRef context = CGBitmapContextCreate(nil, size.width, size.height, bitsPerComponent, 0, colorSpace, bitmapInfo);
    CGContextSetInterpolationQuality(context, kCGInterpolationHigh);
    CGContextDrawImage(context, CGRectMake(0, 0, size.width, size.height), cgImage);

    CGImageRef resizedImageRef = CGBitmapContextCreateImage(context);
    UIImage *resizedImage = [UIImage imageWithCGImage:resizedImageRef];
    CGImageRelease(resizedImageRef);
    CGContextRelease(context);
    return resizedImage;
}
@end
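A usage example for the category above might look like this (the file name, image view and target size are just placeholders):

    #import "UIImage+Resizing.h"

    UIImage *original = [UIImage imageNamed:@"photo.png"];
    UIImage *thumbnail = [original resizedImageWithSize:CGSizeMake(100.0, 100.0)];
    imageView.image = thumbnail;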

Rotate, Change Colors and Get RGB565 data from NSImage

I have found myself in a situation where I have several NSImage objects that I need to rotate by 90 degrees, change the colour of pixels that are one colour to another colour and then get the RGB565 data representation for it as an NSData object.
I found the vImageConvert_ARGB8888toRGB565 function in the Accelerate framework so this should be able to do the RGB565 output.
There are a few UIImage rotation examples I have found here on Stack Overflow, but I'm having trouble converting them to NSImage, as it appears I have to use NSGraphicsContext rather than CGContextRef?
Ideally I would like these in an NSImage category so I can just call:
NSImage *rotated = [inputImage rotateByDegrees:90];
NSImage *colored = [rotated changeColorFrom:[NSColor redColor] toColor:[NSColor blackColor]];
NSData *rgb565 = [colored rgb565Data];
I just don't know where to start as image manipulation is new to me.
I appreciate any help I can get.
Edit (22/04/2013)
I have managed to piece this code together to generate the RGB565 data. It comes out upside down and with some small artefacts; I assume the first is due to different coordinate systems being used, and the second is possibly due to going from PNG to BMP. I will do some more testing using a BMP to start with, and also a non-transparent PNG.
- (NSData *)RGB565Data
{
    CGContextRef cgctx = CreateARGBBitmapContext(self.CGImage);
    if (cgctx == NULL)
        return nil;

    size_t w = CGImageGetWidth(self.CGImage);
    size_t h = CGImageGetHeight(self.CGImage);
    CGRect rect = {{0, 0}, {w, h}};
    CGContextDrawImage(cgctx, rect, self.CGImage);

    void *data = CGBitmapContextGetData(cgctx);
    CGContextRelease(cgctx);
    if (!data)
        return nil;

    vImage_Buffer src;
    src.data = data;
    src.width = w;
    src.height = h;
    src.rowBytes = (w * 4);

    void *destData = malloc((w * 2) * h);
    vImage_Buffer dst;
    dst.data = destData;
    dst.width = w;
    dst.height = h;
    dst.rowBytes = (w * 2);

    vImageConvert_ARGB8888toRGB565(&src, &dst, 0);

    size_t dataSize = 2 * w * h; // RGB565 = two 5-bit components and one 6-bit component (16 bits / 2 bytes)
    NSData *RGB565Data = [NSData dataWithBytes:dst.data length:dataSize];
    free(destData);
    return RGB565Data;
}

- (CGImageRef)CGImage
{
    return [self CGImageForProposedRect:NULL context:[NSGraphicsContext currentContext] hints:nil];
}

CGContextRef CreateARGBBitmapContext (CGImageRef inImage)
{
    CGContextRef context = NULL;
    CGColorSpaceRef colorSpace;
    void *bitmapData;
    int bitmapByteCount;
    int bitmapBytesPerRow;

    size_t pixelsWide = CGImageGetWidth(inImage);
    size_t pixelsHigh = CGImageGetHeight(inImage);
    bitmapBytesPerRow = (int)(pixelsWide * 4);
    bitmapByteCount = (int)(bitmapBytesPerRow * pixelsHigh);

    colorSpace = CGColorSpaceCreateDeviceRGB();
    if (colorSpace == NULL)
        return nil;

    bitmapData = malloc(bitmapByteCount);
    if (bitmapData == NULL)
    {
        CGColorSpaceRelease(colorSpace);
        return nil;
    }

    context = CGBitmapContextCreate(bitmapData,
                                    pixelsWide,
                                    pixelsHigh,
                                    8,
                                    bitmapBytesPerRow,
                                    colorSpace,
                                    kCGImageAlphaPremultipliedFirst);
    if (context == NULL)
    {
        free(bitmapData);
        fprintf(stderr, "Context not created!");
    }

    CGColorSpaceRelease(colorSpace);
    return context;
}
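As for the upside-down output mentioned in the edit: if the consumer of the RGB565 buffer expects the rows in the opposite order, one option (a sketch, not part of the original code) is to flip the context's CTM before the draw in -RGB565Data, which reverses the vertical order of the rendered rows:

    // Flip vertically before drawing so the rendered rows land in the
    // opposite top-to-bottom order in the bitmap buffer.
    CGContextTranslateCTM(cgctx, 0, h);
    CGContextScaleCTM(cgctx, 1.0, -1.0);
    CGContextDrawImage(cgctx, rect, self.CGImage);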
For most of this, you'll want to use Core Image.
Rotation you can do with the CIAffineTransform filter. This takes an NSAffineTransform object. You may have already worked with that class before. (You could do the rotation with NSImage itself, but it's easier with Core Image and you'll probably need to use it for the next step anyway.)
I don't know what you mean by “change the colour of pixels that are one colour to another colour”; that could mean any of a lot of different things. Chances are, though, there's a filter for that.
I also don't know why you need 565 data specifically, but assuming you have a real need for that, you're correct that that function will be involved. Use CIContext's lowest-level rendering method to get 8-bit-per-component ARGB output, and then use that vImage function to convert it to 565 RGB.
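A rough sketch of that last step, assuming a CIImage named ciImage and a CIContext named ciContext already exist; the buffer sizes are illustrative and ignore any rowBytes padding:

    #import <QuartzCore/QuartzCore.h>   // CIContext, CIImage
    #import <Accelerate/Accelerate.h>   // vImageConvert_ARGB8888toRGB565

    // Render the CIImage into an 8-bit ARGB buffer, then convert it with vImage.
    CGRect extent = [ciImage extent];
    size_t w = (size_t)extent.size.width;
    size_t h = (size_t)extent.size.height;

    void *argbData = malloc(w * h * 4);
    CGColorSpaceRef cs = CGColorSpaceCreateDeviceRGB();
    [ciContext render:ciImage
             toBitmap:argbData
             rowBytes:w * 4
               bounds:extent
               format:kCIFormatARGB8
           colorSpace:cs];
    CGColorSpaceRelease(cs);

    vImage_Buffer src = { argbData, h, w, w * 4 };    // data, height, width, rowBytes
    void *rgb565Data = malloc(w * h * 2);
    vImage_Buffer dst = { rgb565Data, h, w, w * 2 };
    vImageConvert_ARGB8888toRGB565(&src, &dst, kvImageNoFlags);

    NSData *result = [NSData dataWithBytesNoCopy:rgb565Data length:w * h * 2 freeWhenDone:YES];
    free(argbData);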
I have managed to get what I want by using NSBitmapImageRep (accessing it with a bit of a hack). If anyone knows a better way of doing this, please do share.
The - (NSBitmapImageRep *)bitmap method is my hack. The NSImage starts off having only an NSBitmapImageRep; however, after the rotation method a CIImageRep is added, which takes priority over the NSBitmapImageRep and breaks the colour code (NSImage renders the CIImageRep, which doesn't get coloured).
BitmapImage.m (Subclass of NSImage)
// CreateARGBBitmapContext() and -RGB565Data are identical to the versions listed above, so they are not repeated here.
- (NSBitmapImageRep *)bitmap
{
    NSBitmapImageRep *bitmap = nil;
    NSMutableArray *repsToRemove = [NSMutableArray array];

    // Iterate through the representations that back the NSImage
    for (NSImageRep *rep in self.representations)
    {
        // If the representation is a bitmap
        if ([rep isKindOfClass:[NSBitmapImageRep class]])
        {
            bitmap = [(NSBitmapImageRep *)rep retain];
            break;
        }
        else
        {
            [repsToRemove addObject:rep];
        }
    }

    // If no bitmap representation was found, we create one (this shouldn't occur)
    if (bitmap == nil)
    {
        bitmap = [[[NSBitmapImageRep alloc] initWithCGImage:self.CGImage] retain];
        [self addRepresentation:bitmap];
    }

    for (NSImageRep *rep2 in repsToRemove)
    {
        [self removeRepresentation:rep2];
    }

    return [bitmap autorelease];
}

- (NSColor *)colorAtX:(NSInteger)x y:(NSInteger)y
{
    return [self.bitmap colorAtX:x y:y];
}

- (void)setColor:(NSColor *)color atX:(NSInteger)x y:(NSInteger)y
{
    [self.bitmap setColor:color atX:x y:y];
}
NSImage+Extra.m (NSImage Category)
- (CGImageRef)CGImage
{
return [self CGImageForProposedRect:NULL context:[NSGraphicsContext currentContext] hints:nil];
}
Usage
- (IBAction)load:(id)sender
{
    NSOpenPanel *openDlg = [NSOpenPanel openPanel];
    [openDlg setCanChooseFiles:YES];
    [openDlg setCanChooseDirectories:YES];

    if ([openDlg runModalForDirectory:nil file:nil] == NSOKButton)
    {
        NSArray *files = [openDlg filenames];
        for (int i = 0; i < [files count]; i++)
        {
            NSString *fileName = [files objectAtIndex:i];
            BitmapImage *image = [[BitmapImage alloc] initWithContentsOfFile:fileName];
            imageView.image = image;
        }
    }
}

- (IBAction)colorize:(id)sender
{
    float width = imageView.image.size.width;
    float height = imageView.image.size.height;
    BitmapImage *img = (BitmapImage *)imageView.image;
    NSColor *newColor = [img colorAtX:1 y:1];

    for (int x = 0; x <= width; x++)
    {
        for (int y = 0; y <= height; y++)
        {
            if ([img colorAtX:x y:y] == newColor)
            {
                [img setColor:[NSColor redColor] atX:x y:y];
            }
        }
    }
    [imageView setNeedsDisplay:YES];
}

- (IBAction)rotate:(id)sender
{
    BitmapImage *img = (BitmapImage *)imageView.image;
    BitmapImage *newImg = [img rotate90DegreesClockwise:NO];
    imageView.image = newImg;
}
Edit (24/04/2013)
I have changed the following code:
- (RGBColor)colorAtX:(NSInteger)x y:(NSInteger)y
{
    NSUInteger components[4];
    [self.bitmap getPixel:components atX:x y:y];
    //NSLog(@"R: %ld, G: %ld, B: %ld", components[0], components[1], components[2]);
    RGBColor color = {components[0], components[1], components[2]};
    return color;
}

- (BOOL)color:(RGBColor)a isEqualToColor:(RGBColor)b
{
    return ((a.red == b.red) && (a.green == b.green) && (a.blue == b.blue));
}

- (void)setColor:(RGBColor)color atX:(NSUInteger)x y:(NSUInteger)y
{
    NSUInteger components[4] = {(NSUInteger)color.red, (NSUInteger)color.green, (NSUInteger)color.blue, 255};
    //NSLog(@"R: %ld, G: %ld, B: %ld", components[0], components[1], components[2]);
    [self.bitmap setPixel:components atX:x y:y];
}

- (IBAction)colorize:(id)sender
{
    float width = imageView.image.size.width;
    float height = imageView.image.size.height;
    BitmapImage *img = (BitmapImage *)imageView.image;
    RGBColor oldColor = [img colorAtX:0 y:0];
    RGBColor newColor; // = {255, 0, 0};
    newColor.red = 255;
    newColor.green = 0;
    newColor.blue = 0;

    for (int x = 0; x <= width; x++)
    {
        for (int y = 0; y <= height; y++)
        {
            if ([img color:[img colorAtX:x y:y] isEqualToColor:oldColor])
            {
                [img setColor:newColor atX:x y:y];
            }
        }
    }
    [imageView setNeedsDisplay:YES];
}
But now it changes the pixels to red the first time and then blue the second time the colorize method is called.
Edit 2 (24/04/2013)
The following code fixes it. It was because the rotation code was adding an alpha channel to the NSBitmapImageRep.
- (RGBColor)colorAtX:(NSInteger)x y:(NSInteger)y
{
    if (self.bitmap.hasAlpha)
    {
        NSUInteger components[4];
        [self.bitmap getPixel:components atX:x y:y];
        RGBColor color = {components[1], components[2], components[3]};
        return color;
    }
    else
    {
        NSUInteger components[3];
        [self.bitmap getPixel:components atX:x y:y];
        RGBColor color = {components[0], components[1], components[2]};
        return color;
    }
}

- (void)setColor:(RGBColor)color atX:(NSUInteger)x y:(NSUInteger)y
{
    if (self.bitmap.hasAlpha)
    {
        NSUInteger components[4] = {255, (NSUInteger)color.red, (NSUInteger)color.green, (NSUInteger)color.blue};
        [self.bitmap setPixel:components atX:x y:y];
    }
    else
    {
        NSUInteger components[3] = {color.red, color.green, color.blue};
        [self.bitmap setPixel:components atX:x y:y];
    }
}
OK, I decided to spend the day researching Peter's suggestion of using Core Image.
I had done some research previously and decided it was too hard, but after an entire day of research I finally worked out what I needed to do, and amazingly it couldn't be easier.
Early on I had decided that the Apple ChromaKey Core Image example would be a great starting point, but the example code frightened me off due to the 3-dimensional colour cube. After watching the WWDC 2012 video on Core Image and finding some sample code on GitHub (https://github.com/vhbit/ColorCubeSample) I decided to jump in and just give it a go.
Here are the important parts of the working code. I haven't included the RGB565Data method as I haven't written it yet, but it should be easy using the method Peter suggested:
CIImage+Extras.h
- (NSImage*) NSImage;
- (CIImage*) imageRotated90DegreesClockwise:(BOOL)clockwise;
- (CIImage*) imageWithChromaColor:(NSColor*)chromaColor BackgroundColor:(NSColor*)backColor;
- (NSColor*) colorAtX:(NSUInteger)x y:(NSUInteger)y;
CIImage+Extras.m
- (NSImage *)NSImage
{
    CGContextRef cg = [[NSGraphicsContext currentContext] graphicsPort];
    CIContext *context = [CIContext contextWithCGContext:cg options:nil];
    CGImageRef cgImage = [context createCGImage:self fromRect:self.extent];
    NSImage *image = [[NSImage alloc] initWithCGImage:cgImage size:NSZeroSize];
    return [image autorelease];
}

- (CIImage *)imageRotated90DegreesClockwise:(BOOL)clockwise
{
    CIImage *im = self;
    CIFilter *f = [CIFilter filterWithName:@"CIAffineTransform"];
    NSAffineTransform *t = [NSAffineTransform transform];
    [t rotateByDegrees:clockwise ? -90 : 90];
    [f setValue:t forKey:@"inputTransform"];
    [f setValue:im forKey:@"inputImage"];
    im = [f valueForKey:@"outputImage"];

    CGRect extent = [im extent];
    f = [CIFilter filterWithName:@"CIAffineTransform"];
    t = [NSAffineTransform transform];
    [t translateXBy:-extent.origin.x
                yBy:-extent.origin.y];
    [f setValue:t forKey:@"inputTransform"];
    [f setValue:im forKey:@"inputImage"];
    im = [f valueForKey:@"outputImage"];

    return im;
}

- (CIImage *)imageWithChromaColor:(NSColor *)chromaColor BackgroundColor:(NSColor *)backColor
{
    CIImage *im = self;

    CIColor *backCIColor = [[CIColor alloc] initWithColor:backColor];
    CIImage *backImage = [CIImage imageWithColor:backCIColor];
    backImage = [backImage imageByCroppingToRect:self.extent];
    [backCIColor release];

    float chroma[3];
    chroma[0] = chromaColor.redComponent;
    chroma[1] = chromaColor.greenComponent;
    chroma[2] = chromaColor.blueComponent;

    // Allocate memory
    const unsigned int size = 64;
    const unsigned int cubeDataSize = size * size * size * sizeof(float) * 4;
    float *cubeData = (float *)malloc(cubeDataSize);
    float rgb[3]; //, *c = cubeData;

    // Populate cube with a simple gradient going from 0 to 1
    size_t offset = 0;
    for (int z = 0; z < size; z++) {
        rgb[2] = ((double)z) / (size - 1); // Blue value
        for (int y = 0; y < size; y++) {
            rgb[1] = ((double)y) / (size - 1); // Green value
            for (int x = 0; x < size; x++) {
                rgb[0] = ((double)x) / (size - 1); // Red value
                float alpha = ((rgb[0] == chroma[0]) && (rgb[1] == chroma[1]) && (rgb[2] == chroma[2])) ? 0.0 : 1.0;
                cubeData[offset]     = rgb[0] * alpha;
                cubeData[offset + 1] = rgb[1] * alpha;
                cubeData[offset + 2] = rgb[2] * alpha;
                cubeData[offset + 3] = alpha;
                offset += 4;
            }
        }
    }

    // Create memory with the cube data
    NSData *data = [NSData dataWithBytesNoCopy:cubeData
                                        length:cubeDataSize
                                  freeWhenDone:YES];

    CIFilter *colorCube = [CIFilter filterWithName:@"CIColorCube"];
    [colorCube setValue:[NSNumber numberWithInt:size] forKey:@"inputCubeDimension"];
    // Set data for cube
    [colorCube setValue:data forKey:@"inputCubeData"];
    [colorCube setValue:im forKey:@"inputImage"];
    im = [colorCube valueForKey:@"outputImage"];

    CIFilter *sourceOver = [CIFilter filterWithName:@"CISourceOverCompositing"];
    [sourceOver setValue:im forKey:@"inputImage"];
    [sourceOver setValue:backImage forKey:@"inputBackgroundImage"];
    im = [sourceOver valueForKey:@"outputImage"];

    return im;
}

- (NSColor *)colorAtX:(NSUInteger)x y:(NSUInteger)y
{
    NSBitmapImageRep *bitmap = [[NSBitmapImageRep alloc] initWithCIImage:self];
    NSColor *color = [bitmap colorAtX:x y:y];
    [bitmap release];
    return color;
}
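With the category above in place, the pipeline the question originally asked for might look roughly like this (a sketch; inputImage is assumed to be an NSImage, and the colours are just examples):

    CIImage *input = [CIImage imageWithData:[inputImage TIFFRepresentation]];

    CIImage *rotated = [input imageRotated90DegreesClockwise:YES];
    CIImage *colored = [rotated imageWithChromaColor:[NSColor redColor]
                                     BackgroundColor:[NSColor blackColor]];

    NSImage *result = [colored NSImage];
    // The RGB565 data can then be produced from `colored` by rendering it to an
    // ARGB8888 buffer and converting with vImage, as sketched earlier.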

UIImage from RGBA having Alpha becoming white

I am making a military strategy game. When an army in my game takes over a territory, I want to be able to change the color of the territory on the map so that it shows the new controller of that territory. Here is the code I have that changes a territory's image:
I would write this line for example:
UIImage *newImg = [self imageOfTerritoryWithNewArmy:@"japan" AndOldTerritoryImage:[UIImage imageNamed:@"ireland.gif"]];
Here is the rest of the code:
- (UIImage *)createImageWithRGB:(NSArray *)colorData width:(NSInteger)width height:(NSInteger)height {
    unsigned char *rawData = malloc(width * height * 4);
    for (int i = 0; i < width * height; ++i)
    {
        CGFloat red;
        CGFloat green;
        CGFloat blue;
        CGFloat alpha;
        UIColor *color = colorData[i];
        if ([color respondsToSelector:@selector(getRed:green:blue:alpha:)]) {
            [color getRed:&red green:&green blue:&blue alpha:&alpha];
        } else {
            const CGFloat *components = CGColorGetComponents(color.CGColor);
            red = components[0];
            green = components[1];
            blue = components[2];
            alpha = components[3];
        }
        if (alpha > 0) {
            rawData[4*i] = red * 255;
            rawData[4*i+1] = green * 255;
            rawData[4*i+2] = blue * 255;
            rawData[4*i+3] = alpha * 255;
        }
        else {
            rawData[4*i] = 255;
            rawData[4*i+1] = 255;
            rawData[4*i+2] = 255;
            rawData[4*i+3] = 0;
        }
    }

    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL,
                                                              rawData,
                                                              width * height * 4,
                                                              NULL);
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
    CGImageRef imageRef = CGImageCreate(width,
                                        height,
                                        8,
                                        32,
                                        4 * width,
                                        colorSpaceRef,
                                        bitmapInfo,
                                        provider, NULL, NO, renderingIntent);
    UIImage *newImage = [UIImage imageWithCGImage:imageRef];
    return newImage;
}
- (NSArray *)getRGBAsFromImage:(UIImage *)image atX:(int)xx andY:(int)yy count:(int)count {
    NSMutableArray *result = [NSMutableArray arrayWithCapacity:count];

    // First get the image into your data buffer
    CGImageRef imageRef = [image CGImage];
    NSUInteger width = CGImageGetWidth(imageRef);
    NSUInteger height = CGImageGetHeight(imageRef);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    unsigned char *rawData = (unsigned char *)calloc(height * width * 4, sizeof(unsigned char));
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;
    CGContextRef context = CGBitmapContextCreate(rawData, width, height,
                                                 bitsPerComponent, bytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGContextRelease(context);

    // Now your rawData contains the image data in the RGBA8888 pixel format.
    int byteIndex = (bytesPerRow * yy) + xx * bytesPerPixel;
    for (int ii = 0; ii < count; ++ii)
    {
        CGFloat red   = (rawData[byteIndex] * 1.0) / 255.0;
        CGFloat green = (rawData[byteIndex + 1] * 1.0) / 255.0;
        CGFloat blue  = (rawData[byteIndex + 2] * 1.0) / 255.0;
        CGFloat alpha = (rawData[byteIndex + 3] * 1.0) / 255.0;
        byteIndex += 4;

        UIColor *acolor = [UIColor colorWithRed:red green:green blue:blue alpha:alpha];
        [result addObject:acolor];
    }

    free(rawData);
    return result;
}
- (UIColor *)colorOfArmy:(NSString *)army {
    UIColor *color;
    army = [army stringByReplacingOccurrencesOfString:@"\a" withString:@""];
    if ([army isEqual:@"france"]) {
        color = [[UIColor alloc] initWithRed:0.3137254 green:0.3686274 blue:0.9058823 alpha:1];
    }
    if ([army isEqual:@"germany"]) {
        color = [[UIColor alloc] initWithRed:0.6352941 green:0.4313725 blue:0.3372549 alpha:1];
    }
    if ([army isEqual:@"uk"]) {
        color = [[UIColor alloc] initWithRed:0.8941176 green:0.4235294 blue:0.4941176 alpha:1];
    }
    if ([army isEqual:@"italy"]) {
        color = [[UIColor alloc] initWithRed:0.5137254 green:0.1215686 blue:0.4745098 alpha:1];
    }
    if ([army isEqual:@"ussr"]) {
        color = [[UIColor alloc] initWithRed:0.3607843 green:0.0823529 blue:0.1215686 alpha:1];
    }
    if ([army isEqual:@"japan"]) {
        color = [[UIColor alloc] initWithRed:0.9215686 green:0.6156862 blue:0.3137254 alpha:1];
    }
    if ([army isEqual:@""]) {
        color = [[UIColor alloc] initWithRed:0.1215686 green:0.2823529 blue:0.1607843 alpha:1];
    }
    if (color == nil) {
        NSLog(@"the problem was %@", army);
    }
    return color;
}
- (UIImage *)imageOfTerritoryWithNewArmy:(NSString *)army AndOldTerritoryImage:(UIImage *)oldImg {
    CGImageRef image = oldImg.CGImage;
    NSInteger width = CGImageGetWidth(image);
    NSInteger height = CGImageGetHeight(image);
    NSArray *rgba = [self getRGBAsFromImage:oldImg atX:0 andY:0 count:width * height];
    NSMutableArray *fixedRGBA = [[NSMutableArray alloc] init];
    UIColor *armyColor = [self colorOfArmy:army];

    for (UIColor *pixel in rgba) {
        CGFloat red;
        CGFloat green;
        CGFloat blue;
        CGFloat alpha;
        if ([pixel respondsToSelector:@selector(getRed:green:blue:alpha:)]) {
            [pixel getRed:&red green:&green blue:&blue alpha:&alpha];
        } else {
            const CGFloat *components = CGColorGetComponents(pixel.CGColor);
            red = components[0];
            green = components[1];
            blue = components[2];
            alpha = components[3];
        }
        red = red * 255;
        green = green * 255;
        blue = blue * 255;
        if (alpha > 0) {
            if (red < 50 && green < 50 && blue < 50) {
                [fixedRGBA addObject:pixel];
            }
            else {
                [fixedRGBA addObject:armyColor];
            }
        }
        else {
            [fixedRGBA addObject:pixel];
        }
    }
    return [self createImageWithRGB:fixedRGBA width:width height:height];
}
The problem that I am having is that when the image is drawn again, all of the pixels that used to be blank because they have an alpha value of 0 are displayed as white. How can I get these pixels to still be displayed as clear pixels?
When creating the image, you should specify that it contains an alpha channel. That is, instead of:
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
use:
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault | kCGImageAlphaLast;
See the CGImage interface specification for more info.
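Applied to the createImageWithRGB: method from the question, that would be (only the bitmapInfo changes; everything else stays as it was):

    // Tell CGImageCreate that the fourth byte of each pixel is alpha.
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault | kCGImageAlphaLast;
    CGImageRef imageRef = CGImageCreate(width,
                                        height,
                                        8,             // bits per component
                                        32,            // bits per pixel
                                        4 * width,     // bytes per row
                                        colorSpaceRef,
                                        bitmapInfo,
                                        provider, NULL, NO, renderingIntent);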
I believe your else clause here is where you want to set the color to clear
if (alpha > 0) {
    rawData[4*i] = red * 255;
    rawData[4*i+1] = green * 255;
    rawData[4*i+2] = blue * 255;
    rawData[4*i+3] = alpha * 255;
}
else {
    rawData[4*i] = 255;
    rawData[4*i+1] = 255;
    rawData[4*i+2] = 255;
    rawData[4*i+3] = 255;
}
You have to change the last assignment to 0
if (alpha > 0) {
    rawData[4*i] = red * 255;
    rawData[4*i+1] = green * 255;
    rawData[4*i+2] = blue * 255;
    rawData[4*i+3] = alpha * 255;
}
else {
    rawData[4*i] = 255;
    rawData[4*i+1] = 255;
    rawData[4*i+2] = 255;
    rawData[4*i+3] = 0;
}

Image is rotated after process. How can i do this correctly?

My problem is: UIImage is rotated after processing.
I use a helper class for the image processing called ProcessHelper. This class has two methods:
+ (unsigned char *) convertUIImageToBitmapRGBA8:(UIImage *) image;
+ (UIImage *) convertBitmapRGBA8ToUIImage:(unsigned char *)rawData
withWidth:(int) width
withHeight:(int) height;
implementation
+ (unsigned char *)convertUIImageToBitmapRGBA8:(UIImage *)image {
    NSLog(@"Convert image [%d x %d] to RGBA8 char data", (int)image.size.width,
          (int)image.size.height);
    CGImageRef imageRef = [image CGImage];
    NSUInteger width = CGImageGetWidth(imageRef);
    NSUInteger height = CGImageGetHeight(imageRef);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    unsigned char *rawData = malloc(height * width * 4);
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;
    CGContextRef context = CGBitmapContextCreate(rawData,
                                                 width,
                                                 height,
                                                 bitsPerComponent,
                                                 bytesPerRow,
                                                 colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGContextRelease(context);
    return rawData;
}
+ (UIImage *)convertBitmapRGBA8ToUIImage:(unsigned char *)rawData
                               withWidth:(int)width
                              withHeight:(int)height {
    CGContextRef ctx = CGBitmapContextCreate(rawData,
                                             width,
                                             height,
                                             8,
                                             width * 4,
                                             CGColorSpaceCreateDeviceRGB(),
                                             kCGImageAlphaPremultipliedLast);
    CGImageRef imageRef = CGBitmapContextCreateImage(ctx);
    UIImage *rawImage = [UIImage imageWithCGImage:imageRef];
    CGContextRelease(ctx);
    free(rawData);
    return rawImage;
}
On start I get pixel data:
rawData = [ProcessHelper convertUIImageToBitmapRGBA8:image];
Next I do some processing:
- (void)process_grayscale {
    int byteIndex = 0;
    for (int i = 0; i < workingImage.size.width * workingImage.size.height; ++i)
    {
        int outputColor = (rawData[byteIndex] + rawData[byteIndex+1] + rawData[byteIndex+2]) / 3;
        rawData[byteIndex] = rawData[byteIndex + 1] = rawData[byteIndex + 2] = (char)(outputColor);
        byteIndex += 4;
    }
    workingImage = [ProcessHelper convertBitmapRGBA8ToUIImage:rawData
                                                    withWidth:CGImageGetWidth(workingImage.CGImage)
                                                   withHeight:CGImageGetHeight(workingImage.CGImage)];
}
After this I return workingImage to the parent class and a UIImageView shows it, but rotated: the image is W x H before, and afterwards it is still W x H but its content is rotated (after a real rotation it should be H x W). I would like the image not to be rotated at all.
This happens when I edit photos taken on the iPad. Screenshots are OK, and images from the internet, such as backgrounds, are OK.
How can I do this correctly?
Use UIGraphicsPushContext(ctx); [image drawInRect:CGRectMake(0, 0, width, height)]; UIGraphicsPopContext(); instead of CGContextDrawImage. CGContextDrawImage will flip the image vertically.
Or scale and translate the context before calling CGContextDrawImage.
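Applied to convertUIImageToBitmapRGBA8: above, the first suggestion looks roughly like this (a sketch; only the drawing step changes). Drawing through UIKit honours the UIImage's imageOrientation, which is what gets lost for camera photos:

    // Replace the CGContextDrawImage call with UIKit drawing so the image's
    // orientation metadata is applied.
    UIGraphicsPushContext(context);
    [image drawInRect:CGRectMake(0, 0, width, height)];
    UIGraphicsPopContext();
    // If the result then comes out vertically flipped, additionally flip the CTM
    // first (the second suggestion): CGContextTranslateCTM(context, 0, height);
    // CGContextScaleCTM(context, 1.0, -1.0);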

I rotate image upside down but how to flip it horizontally

I load an image from the camera roll, but this image is upside down, so I wrote a method to rotate it.
CGImageRef imageRef = [image CGImage];
float width = CGImageGetWidth(imageRef);
float height = CGImageGetHeight(imageRef);

CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
Byte *rawData = malloc(height * width * 4);
Byte bytesPerPixel = 4;
int bytesPerRow = bytesPerPixel * width;
Byte bitsPerComponent = 8;

CGContextRef context = CGBitmapContextCreate(rawData, width, height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);

int byteIndex = 0;
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);

Byte *rawData2 = malloc(height * width * 4);
for (int i = 0; i < width * height; i++) {
    int index = (width * height) * 4;
    rawData2[byteIndex + 0] = rawData[index - byteIndex + 0];
    rawData2[byteIndex + 1] = rawData[index - byteIndex + 1];
    rawData2[byteIndex + 2] = rawData[index - byteIndex + 2];
    rawData2[byteIndex + 3] = rawData[index - byteIndex + 3];
    byteIndex += 4;
}

CGContextRef ctx = CGBitmapContextCreate(rawData2, CGImageGetWidth(imageRef), CGImageGetHeight(imageRef), 8, CGImageGetBytesPerRow(imageRef), CGImageGetColorSpace(imageRef), kCGImageAlphaPremultipliedLast);
imageRef = CGBitmapContextCreateImage(ctx);
image = [UIImage imageWithCGImage:imageRef];
CGContextRelease(context);
return image;
It's OK, but now I must flip it horizontally, and I don't know how to do this. I have been trying for two days.
Thank you for your help.
Have you tried this:
imageView.transform = CGAffineTransformMakeScale(-1, 1);
?
You can also do the rotation by using a transformation:
imageView.transform = CGAffineTransformMakeRotation(M_PI);
You can concatenate the two transformations into one like this:
imageView.transform = CGAffineTransformRotate(CGAffineTransformMakeScale(-1, 1), M_PI);
If you want to make your own UIImage object, rather than manipulating views and transformations, I would still suggest you use the approach described above to make the view draw the image as you like, then convert your UIView content into a UIImage object:
UIGraphicsBeginImageContext(rect.size);
[imageView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage* viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
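If you would rather end up with a new, horizontally mirrored UIImage directly, without going through a view, a sketch along these lines should also work (FlippedImage is a hypothetical helper, not part of the answers above):

    // Return a horizontally mirrored copy of `image`.
    UIImage *FlippedImage(UIImage *image)
    {
        UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
        CGContextRef ctx = UIGraphicsGetCurrentContext();

        // Mirror around the vertical axis: move the origin to the right edge,
        // then flip the x axis before drawing.
        CGContextTranslateCTM(ctx, image.size.width, 0);
        CGContextScaleCTM(ctx, -1.0, 1.0);

        [image drawInRect:CGRectMake(0, 0, image.size.width, image.size.height)];

        UIImage *flipped = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        return flipped;
    }

Alternatively, [UIImage imageWithCGImage:image.CGImage scale:image.scale orientation:UIImageOrientationUpMirrored] gives a mirrored image without redrawing, although the underlying pixel data stays unchanged.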