I'm quite new to working with UIImages on the byte level, but I was hoping that someone could point me to some guides on this matter?
I am ultimately looking to edit the RGBA values of the bytes, based on certain parameters (position, color, etc.) and I know I've come across samples/tutorials for this before, but I just can't seem to find anything now.
Basically, I'm hoping to be able to break a UIImage down to its bytes and iterate over them and edit the bytes' RGBA values individually. Maybe some sample code here would be a big help as well.
I've already been working in the different image contexts and editing the images with the CG power tools, but I would like to be able to work at the byte level.
EDIT:
Sorry, but I do understand that you cannot edit the bytes in a UIImage directly. I should have asked my question more clearly. I meant to ask how can I get the bytes of a UIImage, edit those bytes and then create a new UIImage from those bytes.
As pointed out by @BradLarson, OpenGL is a better option for this, and there is a great library, created by @BradLarson, here. Thanks @CSmith for pointing it out!
@MartinR has the right answer; here is some code to get you started:
UIImage *image = ...; // your image
CGImageRef imageRef = image.CGImage;
NSUInteger nWidth = CGImageGetWidth(imageRef);
NSUInteger nHeight = CGImageGetHeight(imageRef);
NSUInteger nBitsPerComponent = 8;                   // the context below is 8 bits per component
NSUInteger nBytesPerPixel = 4;                      // and 4 bytes per pixel
NSUInteger nBytesPerRow = nWidth * nBytesPerPixel;  // row stride of *our* buffer, not the source image's
unsigned char *rawInput = malloc(nHeight * nBytesPerRow);
CGColorSpaceRef colorSpaceRGB = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(rawInput, nWidth, nHeight, nBitsPerComponent, nBytesPerRow, colorSpaceRGB, kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpaceRGB);
CGContextDrawImage(context, CGRectMake(0, 0, nWidth, nHeight), imageRef);

// modify the pixels stored in the array of 4-byte pixels at rawInput
// ...

CGImageRef imageRefNew = CGBitmapContextCreateImage(context);
UIImage *imageNew = [[UIImage alloc] initWithCGImage:imageRefNew];
CGImageRelease(imageRefNew);
CGContextRelease(context);
free(rawInput);
You have no direct access to the bytes in a UIImage, and you cannot change them directly.
You have to draw the image into a CGBitmapContext, modify the pixels in the bitmap, and then create a new image from the bitmap context.
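The "modify the pixels" step itself is plain pointer arithmetic over the bitmap buffer. A minimal sketch in plain C, assuming an 8-bit-per-component, 4-byte-per-pixel buffer (the actual byte order depends on the bitmap info you created the context with):

```c
#include <stddef.h>
#include <stdint.h>

// Set the red channel of the pixel at (x, y) in an 8-bit, 4-byte-per-pixel
// buffer whose rows are bytesPerRow apart.
static void setRed(uint8_t *buf, size_t bytesPerRow, size_t x, size_t y, uint8_t red)
{
    uint8_t *pixel = buf + y * bytesPerRow + x * 4;
    pixel[0] = red; // assumes R is the first byte; check your CGBitmapInfo
}
```

The same `y * bytesPerRow + x * bytesPerPixel` formula is what you iterate with when you walk the whole image.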
Related
How can I create a CGImageRef from a NSBitmapImageRep?
Or how can I define a complete new CGImageRef in the same way as the NSBitmapImageRep? The definition of a NSBitmapImageRep works fine. But I need an image as CGImageRef.
unsigned char *plane = (unsigned char *)[data bytes]; // data = 3 bytes for each RGB pixel
NSBitmapImageRep* imageRep = [[NSBitmapImageRep alloc]
initWithBitmapDataPlanes: &plane
pixelsWide: width
pixelsHigh: height
bitsPerSample: depth
samplesPerPixel: channel
hasAlpha: NO
isPlanar: NO
colorSpaceName: NSCalibratedRGBColorSpace
//bitmapFormat: NSAlphaFirstBitmapFormat
bytesPerRow: channel * width
bitsPerPixel: channel * depth
];
I have no idea how to create the CGImageRef from the NSBitmapImageRep or how to define a new CGImageRef:
CGImageRef imageRef = CGImageCreate(width, height, depth, channel*depth, channel*width, CGColorSpaceCreateDeviceRGB(), ... );
Please, can somebody give me a hint?
The easy way is by using the CGImage property (introduced in 10.5):
CGImageRef image = imageRep.CGImage;
Documentation:
https://developer.apple.com/library/mac/documentation/Cocoa/Reference/ApplicationKit/Classes/NSBitmapImageRep_Class/index.html#//apple_ref/occ/instm/NSBitmapImageRep/CGImage
Return Value
Returns an autoreleased CGImageRef opaque type based on the receiver's current bitmap data.
Discussion
The returned CGImageRef has pixel dimensions that are identical to the receiver's. This method might return a preexisting CGImageRef opaque type or create a new one. If the receiver is later modified, subsequent invocations of this method might return different CGImageRef opaque types.
From your code snippet, it seems you're starting with an NSData object. So, your question seems to be how to create a CGImage from a data object. In that case, there's no reason to go through NSBitmapImageRep.
You were almost there with the call to CGImageCreate(). You just needed to figure out how to supply a CGDataProvider to it. You can create a CGDataProvider from an NSData pretty directly, once you realize that NSData is toll-free bridged with CFData. So:
CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
CGColorSpaceRef colorspace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
CGImageRef image = CGImageCreate(width, height, depth / 3, depth, channel*width, colorspace, kCGImageAlphaNone, provider, NULL, TRUE, kCGRenderingIntentDefault);
CGDataProviderRelease(provider);
CGColorSpaceRelease(colorspace);
I have a JPG file. I need to convert it to pixel data and then change the color of some pixels. I do it like this:
NSString *string = [[NSBundle mainBundle] pathForResource:@"pic" ofType:@"jpg"];
NSData *data = [NSData dataWithContentsOfFile:string];
const unsigned char *bytesArray = data.bytes;
NSUInteger bytesLength = data.length;
//-------- bytes to array
NSMutableArray *array = [[NSMutableArray alloc] initWithCapacity:bytesLength];
for (int i = 0; i < bytesLength; i++) {
    [array addObject:[NSNumber numberWithUnsignedChar:bytesArray[i]]];
}
Here I try to change the color of pixels 95 through 154:
NSNumber *number = [NSNumber numberWithInt:200];
for (int i = 95; i < 155; i++) {
    [array replaceObjectAtIndex:i withObject:number];
}
But when I convert the array back to an image, I get a blurred picture. I don't understand why I don't influence only those pixels, and why the picture is affected as a whole.
The process of accessing pixel-level data is a little more complicated than your question might suggest, because, as Martin pointed out, JPEG can be a compressed image format. Apple discusses the approved technique for getting pixel data in Technical Q&A QA1509.
Bottom line, to get the uncompressed pixel data for a UIImage, you would:
Get the CGImage for the UIImage.
Get the data provider for that CGImageRef via CGImageGetDataProvider.
Get the binary data associated with that data provider via CGDataProviderCopyData.
Extract some of the information about the image, so you know how to interpret that buffer.
Thus:
UIImage *image = ...
CGImageRef imageRef = image.CGImage;                                // get the CGImageRef
NSAssert(imageRef, @"Unable to get CGImageRef");
CGDataProviderRef provider = CGImageGetDataProvider(imageRef);      // get the data provider
NSAssert(provider, @"Unable to get provider");
NSData *data = CFBridgingRelease(CGDataProviderCopyData(provider)); // get copy of the data
NSAssert(data, @"Unable to copy image data");
NSInteger bitsPerComponent = CGImageGetBitsPerComponent(imageRef);  // some other interesting details about image
NSInteger bitsPerPixel = CGImageGetBitsPerPixel(imageRef);
CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(imageRef);
NSInteger bytesPerRow = CGImageGetBytesPerRow(imageRef);
NSInteger width = CGImageGetWidth(imageRef);
NSInteger height = CGImageGetHeight(imageRef);
CGColorSpaceRef colorspace = CGImageGetColorSpace(imageRef);
Given that you want to manipulate this, you presumably want some mutable pixel buffer. The easiest approach would be to make a mutableCopy of that NSData object and manipulate it there, but in these cases, I tend to fall back to C, creating a void *outputBuffer, into which I copy the original pixels and manipulate using traditional C array techniques.
To create the buffer:
void *outputBuffer = malloc(width * height * bitsPerPixel / 8);
NSAssert(outputBuffer, @"Unable to allocate buffer");
For the precise details on how to manipulate it, you have to look at bitmapInfo (which will tell you whether it's RGBA or ARGB; whether it's floating point or integer) and bitsPerComponent (which will tell you whether it's 8 or 16 bits per component, etc.). For example, a very common JPEG format is 8 bits per component, four components, RGBA (i.e. red, green, blue, and alpha, in that order). But you really need to check those various properties we extracted from the CGImageRef to make sure. See the discussion in the Quartz 2D Programming Guide - Bitmap Images and Image Masks for more information. I personally find "Figure 11-2" to be especially illuminating.
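As a plain-C illustration of why the component order matters (hypothetical 8-bit-per-component layouts; the real order must come from the `bitmapInfo` you extracted):

```c
#include <stdint.h>

// For an 8-bit-per-component pixel at p, the red byte sits at a different
// offset depending on the component order the bitmap info reports.
static uint8_t redRGBA(const uint8_t *p) { return p[0]; } // R,G,B,A layout
static uint8_t redARGB(const uint8_t *p) { return p[1]; } // A,R,G,B layout
```

Reading the wrong offset doesn't crash; it silently gives you a different channel, which is why checking `bitmapInfo` first matters.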
The next logical question is when you're done manipulating the pixel data, how to create a UIImage for that. In short, you'd reverse the above process, e.g. create a data provider, create a CGImageRef, and then create a UIImage:
CGDataProviderRef outputProvider = CGDataProviderCreateWithData(NULL, outputBuffer, width * height * bitsPerPixel / 8, releaseData);
CGImageRef outputImageRef = CGImageCreate(width,
height,
bitsPerComponent,
bitsPerPixel,
bytesPerRow,
colorspace,
bitmapInfo,
outputProvider,
NULL,
NO,
kCGRenderingIntentDefault);
UIImage *outputImage = [UIImage imageWithCGImage:outputImageRef];
CGImageRelease(outputImageRef);
CGDataProviderRelease(outputProvider);
Where releaseData is a C function that simply frees the pixel buffer associated with the data provider:
void releaseData(void *info, const void *data, size_t size)
{
free((void *)data);
}
I'm confused about one strange thing. I have an unsigned char array that I allocate with calloc and fill with some byte data. When I free it and allocate it again, I see that it gets the same address in memory as the previous time. I understand why. But I cannot understand why the data I try to write there the second time doesn't seem to be written; the buffer still holds the data written the first time. Can anybody explain this?
unsigned char *rawData = (unsigned char*) calloc(height * width * 4, sizeof(unsigned char));
This is how I allocate it....
Actually my problem is that this allocation happens once every 2 seconds, so I have a memory leak. But when I try to free the allocated memory, the thing described above happens. :(
Please if anybody can help me....I would be so glad...
Here is the code...
- (unsigned char *)createBitmapContext:(UIImage *)anImage {
    CGImageRef imageRef = [anImage CGImage];
    NSUInteger width = CGImageGetWidth(imageRef);
    NSUInteger height = CGImageGetHeight(imageRef);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    unsigned char *rawData = (unsigned char *)calloc(height * width * 4, sizeof(unsigned char));
    bytesPerPixel = 4;
    bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;
    CGContextRef context = CGBitmapContextCreate(rawData, width, height,
                                                 bitsPerComponent, bytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGContextRelease(context);
    imageRef = nil;
    return rawData;
}
In this code there is no part where I free(rawData); because I cannot free it inside this method, I tried to define rawData globally and free it after calling this method, but nothing interesting happened.
Ok, so this method is rendering a UIImage into a freshly allocated byte buffer and returning the buffer to the caller. Since you're allocating it with calloc, it will be initialised to 0, then overwritten with the image contents.
when I free this unsigned char and allocate it again, I see that it reserves the same address in memory which was allocated previous time
Yes, there are no guarantees about the location of the buffer in memory. Assuming you call free() on the returned memory, requesting the exact same size is quite likely to give you the same buffer back. But - how are you verifying the contents are not written over a second time? What is in the buffer?
my problem is that because of this allocation , which happens once every 2 secs I have memory leak...But when I try to free the allocated memory sector happens thing described above....:(
If there is a leak, it is likely in the code that calls this method, since there is no obvious leakage here. The semantics are obviously such that the caller is responsible for freeing the buffer. So how is that done?
Also, are you verifying that the CGBitmapContext is being correctly created? It is possible that some creation flags or parameters may result in an error. So add a check for context being valid (at least not nil). That could explain why the content is not being overwritten.
One easy way to ensure your memory is being freshly updated is to write your own data to it. You could fill the buffer with a counter, and verify this outside the method. For example, just before you return rawData:
static unsigned updateCounter = 0;
memset(rawData, updateCounter++ & 0xff, width * height * 4);
This will cycle through writing 0-255 into the buffer, which you can easily verify.
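A self-contained sketch of that verification idea, in plain C and outside of any CG code (the helper name is made up for illustration):

```c
#include <stdlib.h>
#include <string.h>

// Fill a freshly allocated buffer with a value derived from a call counter,
// so each allocation round is distinguishable even if malloc hands back the
// same address as last time.
static unsigned char *makeMarkedBuffer(size_t size)
{
    static unsigned updateCounter = 0;
    unsigned char *buf = malloc(size);
    if (buf)
        memset(buf, updateCounter++ & 0xff, size);
    return buf;
}
```

If the second allocation's contents still read as the first round's marker, you know the write really didn't happen; if they read as the new marker, the buffer was overwritten and the "stale data" was observed elsewhere.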
Another thing - what are you trying to achieve with this code? There might be an easier way to achieve what you're trying to achieve. Returning bare buffers devoid of metadata is not necessarily the best way to manage your images.
So guys, I solved this issue. First, I changed the createBitmapContext method to this:
- (void)createBitmapContext:(UIImage *)anImage andRawData:(unsigned char *)theRawData {
    CGImageRef imageRef = [anImage CGImage];
    NSUInteger width = CGImageGetWidth(imageRef);
    NSUInteger height = CGImageGetHeight(imageRef);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    // unsigned char *rawData = (unsigned char *)calloc(height * width * 4, sizeof(unsigned char));
    bytesPerPixel = 4;
    bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;
    CGContextRef context = CGBitmapContextCreate(theRawData, width, height,
                                                 bitsPerComponent, bytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGContextRelease(context);
    imageRef = nil;
    // return theRawData;
}
Then, besides this, I had missed the part where I assigned newRawData to oldRawData; that left me with two pointers to the same memory address, and that is where the issue came from. I changed that assignment to memcpy(rawDataForOldImage, rawDataForNewImage, newCapturedImage.size.width * newCapturedImage.size.height * 4); and the problem is solved. Thanks to all.
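The underlying pitfall can be shown in miniature in plain C (the names are made up for illustration): assigning one pointer to another does not copy any pixels, it just creates a second name for the same buffer, whereas memcpy takes an independent snapshot.

```c
#include <stdlib.h>
#include <string.h>

// Contrast pointer assignment (two names for one buffer) with memcpy
// (an independent snapshot). Returns 1 if both behaviors are observed.
static int demoAliasVsCopy(void)
{
    unsigned char *newBuf = malloc(4);
    memset(newBuf, 1, 4);

    unsigned char *oldAlias = newBuf;   // aliasing: same memory, two names
    memset(newBuf, 2, 4);
    int aliasSeesWrite = (oldAlias[0] == 2);

    unsigned char *oldCopy = malloc(4);
    memcpy(oldCopy, newBuf, 4);         // snapshot of the current contents
    memset(newBuf, 3, 4);
    int copyUnaffected = (oldCopy[0] == 2);

    free(newBuf);
    free(oldCopy);
    return aliasSeesWrite && copyUnaffected;
}
```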
I am working through some existing code for a project I am assigned to.
I have a successful call to glTexImage2D like this:
glTexImage2D(GL_TEXTURE_2D, 0, texture->format, texture->widthTexture, texture->heightTexture, 0, texture->format, texture->type, texture->data);
I would like to create an image (preferably a CGImage or UIImage) using the variables passed to glTexImage2D, but don't know if it's possible.
I need to create many sequential images (many per second) from an OpenGL view and save them for later use.
Should I be able to create a CGImage or UIImage using the variables I use in glTexImage2D?
If I should be able to, how should I do it?
If not, why can't I, and what do you suggest for my task of saving/capturing the contents of my OpenGL view many times per second?
edit: I have already successfully captured images using some techniques provided by Apple with glReadPixels, etc. I want something faster so I can get more images per second.
edit: after reviewing and adding the code from Thomson, here is the resulting image:
the image very slightly resembles what it should look like, except duplicated ~5 times horizontally and with some random black space underneath.
note: the video data (each frame) is coming over an ad-hoc network connection to the iPhone. I believe the camera is shooting each frame with the YCbCr color space.
edit: further reviewing Thomson's code
I have copied your new code into my project and got a different image as a result:
width: 320
height: 240
I am not sure how to find the number of bytes in texture->data; it is a void pointer.
edit: format and type
texture.type = GL_UNSIGNED_SHORT_5_6_5
texture.format = GL_RGB
Hey binnyb, here's the solution to creating a UIImage using the data stored in texture->data. v01d is certainly right that you're not going to get the UIImage as it appears in your GL framebuffer, but it'll get you an image from the data before it has passed through the framebuffer.
Turns out your texture data is in 16 bit format, 5 bits for red, 6 bits for green, and 5 bits for blue. I've added code for converting the 16 bit RGB values into 32 bit RGBA values before creating a UIImage. I'm looking forward to hearing how this turns out.
int width = 512;
int height = 512;
int channels = 4;
// create a buffer for our image after converting it from 565 RGB to 8888 RGBA
u_int8_t *rawData = (u_int8_t *)malloc(width * height * channels);
u_int8_t *src = (u_int8_t *)texture->data; // 2 bytes per pixel, 5-6-5
// unpack the 5,6,5 pixel data into 32-bit RGBA
for (int i = 0; i < width * height; ++i)
{
    // append two adjacent bytes in texture->data into a 16-bit int
    u_int16_t pixel16 = (src[i * 2] << 8) + src[i * 2 + 1];
    // mask and shift each component into a single 8-bit unsigned value,
    // then normalize from the 5/6-bit max to the 8-bit integer max
    rawData[channels * i]     = ((pixel16 & 0xF800) >> 11) / 31.0 * 255;
    rawData[channels * i + 1] = ((pixel16 & 0x07E0) >> 5)  / 63.0 * 255;
    rawData[channels * i + 2] =  (pixel16 & 0x001F)        / 31.0 * 255;
    rawData[channels * i + 3] = 255; // the alpha byte is ignored (see bitmapInfo below)
}
// same as before
int bitsPerComponent = 8;
int bitsPerPixel = channels * bitsPerComponent;
int bytesPerRow = channels * width;
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGImageAlphaNoneSkipLast; // RGBX: 32 bpp, alpha byte skipped
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL,
                                                          rawData,
                                                          channels * width * height,
                                                          releaseRawData);
CGImageRef imageRef = CGImageCreate(width,
                                    height,
                                    bitsPerComponent,
                                    bitsPerPixel,
                                    bytesPerRow,
                                    colorSpaceRef,
                                    bitmapInfo,
                                    provider, NULL, NO, renderingIntent);
UIImage *newImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
CGDataProviderRelease(provider);
CGColorSpaceRelease(colorSpaceRef);
Note that rawData must not be freed while the image may still reference it; releaseRawData is a callback in the same shape as releaseData above, which frees the buffer when CG is done with it:
void releaseRawData(void *info, const void *data, size_t size)
{
    free((void *)data);
}
The code for creating a new image comes from Creating UIImage from raw RGBA data thanks to Rohit. I've tested this with our original 320x240 image dimension, having converted a 24 bit RGB image into 5,6,5 format and then up to 32 bit. I haven't tested it on a 512x512 image but I don't expect any problems.
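The 5-6-5 unpacking itself is pure bit twiddling and can be sanity-checked entirely outside of Core Graphics. A minimal sketch, assuming the two bytes of each pixel have already been combined into a uint16_t in the correct endianness:

```c
#include <stdint.h>

// Expand one RGB565 pixel into 8-bit R, G, B components, scaling each
// component from its 5- or 6-bit range up to 0..255.
static void rgb565ToRgb888(uint16_t pixel, uint8_t *r, uint8_t *g, uint8_t *b)
{
    *r = (uint8_t)(((pixel >> 11) & 0x1F) * 255 / 31);
    *g = (uint8_t)(((pixel >> 5)  & 0x3F) * 255 / 63);
    *b = (uint8_t)(( pixel        & 0x1F) * 255 / 31);
}
```

Getting garbage out of a conversion like this usually means the two source bytes were combined in the wrong order, which is worth checking given that GL_UNSIGNED_SHORT_5_6_5 stores native-endian shorts.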
You could make an image from the data you are sending to GL, but I doubt that's really what you want to achieve.
My guess is you want the output of the frame buffer. To do that you need glReadPixels(). Bear in mind that for a large buffer (say 1024x768) it will take seconds to read the pixels back from GL; you won't get more than 1 per second.
You should be able to use the UIImage initializer imageWithData for this. All you need is to ensure that the data in texture->data is in a structured format that is recognizable to the UIImage constructor.
NSData* imageData = [NSData dataWithBytes:texture->data length:(3*texture->widthTexture*texture->heightTexture)];
UIImage* theImage = [UIImage imageWithData:imageData];
The types that imageWithData: supports are not well documented, but you can create NSData from .png, .jpg, .gif, and I presume .ppm files without any difficulty. If texture->data is in one of those binary formats I suspect you can get this running with a little experimentation.
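One quick way to tell whether texture->data holds an encoded image (which imageWithData: needs) or raw pixels (which it cannot parse) is to look at the first few bytes: encoded formats begin with fixed signatures, while raw pixel dumps have no such marker. A sketch in plain C, checking only the PNG and JPEG signatures:

```c
#include <stddef.h>
#include <stdint.h>

// Return 1 if the buffer starts with a PNG or JPEG signature, else 0.
static int looksLikeEncodedImage(const uint8_t *bytes, size_t len)
{
    static const uint8_t png[8] = {0x89, 'P', 'N', 'G', 0x0D, 0x0A, 0x1A, 0x0A};
    if (len >= 8) {
        int isPng = 1;
        for (int i = 0; i < 8; i++)
            if (bytes[i] != png[i]) { isPng = 0; break; }
        if (isPng) return 1;
    }
    if (len >= 2 && bytes[0] == 0xFF && bytes[1] == 0xD8) // JPEG SOI marker
        return 1;
    return 0;
}
```

Texture data freshly uploaded via glTexImage2D is almost certainly raw pixels, so a check like this failing would confirm that imageWithData: is not the right tool here.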
I save an NSBitmapImageRep to a BMP file (Snow Leopard). It looks fine when I open it on macOS, but it produces an error on my multimedia device (which can display any BMP file from the internet). I cannot figure out what is wrong, but when I look inside the file (with the cool Hex Fiend app on macOS), two things are wrong:
the header has a wrong value for the biHeight parameter: 4294966216 (hex C8 FB FF FF)
the header has a correct biWidth parameter: 1920
the first pixel of the bitmap content (after the 54-byte header of the BMP format) corresponds to the upper-left corner of the original image. In the original BMP file, and as the BMP format specifies, the bottom-left corner pixel should come first.
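For what it's worth, that "wrong" biHeight is not random garbage: read as a signed little-endian 32-bit integer, 0xFFFFFBC8 is -1080, and a negative biHeight is exactly how BMP marks a top-down bitmap, which would also explain why the first pixel is the upper-left corner. A quick decoding check in plain C:

```c
#include <stdint.h>

// Decode a 4-byte little-endian BMP header field as a signed int32.
static int32_t bmpInt32(const uint8_t *p)
{
    uint32_t u = (uint32_t)p[0] | ((uint32_t)p[1] << 8)
               | ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
    return (int32_t)u;
}
```

So the file written by NSBitmapImageRep is internally consistent (top-down rows with a negative height); the multimedia device simply appears not to support the top-down variant.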
To explain the full workflow in my app, I have an NSImageView where I can drag a BMP image. This view is bound to an NSImage.
After a drag & drop, I have an action to save this image (with some text drawn over it) to a BMP file.
Here's the code for saving the new BMP file :
CGColorSpaceRef colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
CGContextRef context = CGBitmapContextCreate(NULL, 1920, 1080, 8, 4 * 1920, colorSpace, kCGImageAlphaNoneSkipLast);
[duneScreenView drawBackgroundWithDuneFolder:self inContext:context inRect:NSMakeRect(0, 0, 1920, 1080) needScale:NO];
if (folderType == DXFolderTypeMovie) {
    [duneScreenView drawSynopsisContentWithDuneFolder:self inContext:context inRect:NSMakeRect(0, 0, 1920, 1080) withScale:1.0];
}
CGImageRef backgroundImageRef = CGBitmapContextCreateImage(context);
NSBitmapImageRep *bitmapBackgroundImageRef = [[NSBitmapImageRep alloc] initWithCGImage:backgroundImageRef];
NSData *data = [bitmapBackgroundImageRef representationUsingType:NSBMPFileType properties:nil];
[data writeToFile:[NSString stringWithFormat:@"%@/%@", folderPath, backgroundPath] atomically:YES];
The duneScreenView drawSynopsisContentWithDuneFolder method uses CGContextDrawImage to draw the image.
The duneScreenView drawSynopsis method uses Core Text to draw some text in the same context.
Do you know what's wrong?
I just registered an OpenID account, so I cannot edit my own question. I found a way to manage this problem and wanted to post my solution.
There were two problems:
wrong biHeight parameter in the BMP header
vertically flipped data in the bitmap content (it starts with the upper-left corner instead of the bottom-left corner)
For the biHeight parameter, I replaced the biHeight bytes with the correct value (1080 for my image).
For the flipped problem, I just flip all the rows in the bitmap content bytes.
Maybe this is not the most elegant solution, but it works fine. Just let me know if you have other solutions.
Here's the code :
int width = 1920;
int height = 1080;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(NULL, width, height, 8, 4 * width, colorSpace, kCGImageAlphaNoneSkipLast);
[duneScreenView drawBackgroundWithDuneFolder:self inContext:context inRect:NSMakeRect(0, 0, width, height) needScale:NO];
if (folderType == DXFolderTypeMovie) {
    [duneScreenView drawSynopsisContentWithDuneFolder:self inContext:context inRect:NSMakeRect(0, 0, width, height) withScale:1.0];
}
CGImageRef backgroundImageRef = CGBitmapContextCreateImage(context);
NSBitmapImageRep *bitmapBackgroundImageRef = [[NSBitmapImageRep alloc] initWithCGImage:backgroundImageRef];
NSData *data = [bitmapBackgroundImageRef representationUsingType:NSBMPFileType properties:nil];
NSMutableData *mutableData = [[NSMutableData alloc] init];
int bitmapBytesOffset = 54;
// headers
[mutableData appendData:[data subdataWithRange:NSMakeRange(0, bitmapBytesOffset)]];
// bitmap data, rows appended in reverse order (bottom-up, as the device expects)
int lineIndex = height - 1;
while (lineIndex >= 0) {
    [mutableData appendData:[data subdataWithRange:NSMakeRange(bitmapBytesOffset + lineIndex * width * 3, width * 3)]];
    lineIndex--;
}
// force the biHeight header parameter to 1080 (little-endian int32 at offset 22)
int32_t biHeight = 1080;
[mutableData replaceBytesInRange:NSMakeRange(22, 4) withBytes:&biHeight];
[mutableData writeToFile:[NSString stringWithFormat:@"%@/%@", folderPath, backgroundPath] atomically:YES];
[mutableData release];
CGImageRelease(backgroundImageRef);
[bitmapBackgroundImageRef release];
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
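One caveat about the row loop above: BMP pads each row to a multiple of 4 bytes, so using width * 3 as the row size only works here because 1920 * 3 happens to be divisible by 4. A general-purpose sketch of the stride computation and the in-place flip in plain C (hypothetical helpers, 24-bit pixels assumed):

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

// BMP rows of 24-bit pixels are padded so each row occupies a multiple
// of 4 bytes in the file.
static size_t bmpRowStride(int width)
{
    return ((size_t)width * 3 + 3) & ~(size_t)3;
}

// Reverse the order of `height` rows of `stride` bytes, in place.
static void flipRows(uint8_t *pixels, int height, size_t stride)
{
    uint8_t *tmp = malloc(stride);
    for (int top = 0, bottom = height - 1; top < bottom; top++, bottom--) {
        memcpy(tmp, pixels + (size_t)top * stride, stride);
        memcpy(pixels + (size_t)top * stride, pixels + (size_t)bottom * stride, stride);
        memcpy(pixels + (size_t)bottom * stride, tmp, stride);
    }
    free(tmp);
}
```

For widths that are not multiples of 4 pixels, flipping with width * 3 instead of the padded stride would shear every row, so computing the stride first is the safer habit.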