iOS OpenGL using parameters for glTexImage2D to make a UIImage?

I am working through some existing code for a project I am assigned to.
I have a successful call to glTexImage2D like this:
glTexImage2D(GL_TEXTURE_2D, 0, texture->format, texture->widthTexture, texture->heightTexture, 0, texture->format, texture->type, texture->data);
I would like to create an image (preferably a CGImage or UIImage) using the variables passed to glTexImage2D, but I don't know if it's possible.
I need to create many sequential images (several per second) from an OpenGL view and save them for later use.
Should I be able to create a CGImage or UIImage using the variables I pass to glTexImage2D?
If so, how should I do it?
If not, why can't I, and what do you suggest for my task of saving/capturing the contents of my OpenGL view many times per second?
edit: I have already successfully captured images using some techniques provided by Apple with glReadPixels, etc. I want something faster so I can get more images per second.
edit: after reviewing and adding the code from Thomson, here is the resulting image:
The image very slightly resembles what it should look like, except duplicated ~5 times horizontally and with some random black space underneath.
note: the video data (each frame) is coming over an ad-hoc network connection to the iPhone. I believe the camera is capturing each frame in the YCbCr color space.
edit: further reviewing Thomson's code
I have copied your new code into my project and got a different image as a result:
width: 320
height: 240
I am not sure how to find the number of bytes in texture->data; it is a void pointer.
edit: format and type
texture.type = GL_UNSIGNED_SHORT_5_6_5
texture.format = GL_RGB

Hey binnyb, here's a solution for creating a UIImage using the data stored in texture->data. v01d is certainly right that you're not going to get the UIImage exactly as it appears in your GL framebuffer, but it'll get you an image from the data before it has passed through the framebuffer.
It turns out your texture data is in 16-bit format: 5 bits for red, 6 bits for green, and 5 bits for blue. I've added code for converting the 16-bit RGB values into 32-bit RGBA values before creating the UIImage. I'm looking forward to hearing how this turns out.
float width = 512;
float height = 512;
int channels = 4;
// create a buffer for our image after converting it from 565 rgb to 8888rgba
u_int8_t* rawData = (u_int8_t*)malloc(width*height*channels);
// unpack the 5,6,5 pixel data into 32 bit RGBA
// (texture->data is a void*, so view it as bytes first)
u_int8_t *src = (u_int8_t *)texture->data;
for (int i=0; i<width*height; ++i)
{
    // append two adjacent bytes in texture->data into a 16 bit int
    u_int16_t pixel16 = (src[i*2] << 8) + src[i*2+1];
    // mask and shift each component down to a single 8 bit unsigned value,
    // then normalize from the 5/6 bit max to the 8 bit integer max
    rawData[channels*i]   = ((pixel16 & 0xF800) >> 11) / 31.0 * 255;  // red
    rawData[channels*i+1] = ((pixel16 & 0x07E0) >> 5)  / 63.0 * 255;  // green
    rawData[channels*i+2] =  (pixel16 & 0x001F)        / 31.0 * 255;  // blue
    rawData[channels*i+3] = 0;  // padding byte (skipped by the bitmap info below)
}
// same as before
int bitsPerComponent = 8;
int bitsPerPixel = channels*bitsPerComponent;
int bytesPerRow = channels*width;
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGImageAlphaNoneSkipLast; // RGBX: tells CG to ignore the 4th byte
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL,
rawData,
channels*width*height,
NULL);
// NB: don't free(rawData) here. With a NULL release callback the provider reads
// straight out of this buffer, so it must stay alive for as long as the image
// does; free it once you're finished with newImage, or pass a release callback
// that does the free.
CGImageRef imageRef = CGImageCreate(width,
height,
bitsPerComponent,
bitsPerPixel,
bytesPerRow,
colorSpaceRef,
bitmapInfo,
provider,NULL,NO,renderingIntent);
UIImage *newImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);           // newImage retains it
CGColorSpaceRelease(colorSpaceRef); // balance the creates above
CGDataProviderRelease(provider);    // the image keeps its own reference
The code for creating a new image comes from Creating UIImage from raw RGBA data, thanks to Rohit. I've tested this with our original 320x240 image dimensions, having converted a 24-bit RGB image into 5,6,5 format and then up to 32 bit. I haven't tested it on a 512x512 image, but I don't expect any problems.
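One caveat of my own (an assumption, not something from the thread): the two-byte packing above reads each pixel big-endian. If texture->data holds native little-endian GL_UNSIGNED_SHORT_5_6_5 values, as iPhone-generated data typically would, reading the buffer directly as 16-bit values may be what you want instead:
// alternative packing, assuming native (little-endian) 16 bit pixels
u_int16_t *pixels16 = (u_int16_t *)texture->data;
u_int16_t pixel16 = pixels16[i]; // replaces the byte-by-byte packing in the loop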

You could make an image from the data you are sending to GL, but I doubt that's really what you want to achieve.
My guess is you want the output of the framebuffer. To do that you need glReadPixels(). Bear in mind that for a large buffer (say 1024x768) it will take seconds to read the pixels back from GL; you won't get more than one per second.
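For reference, a minimal sketch of that call (using the 320x240 dimensions from the question; any bound framebuffer works the same way):
GLubyte *pixels = (GLubyte *)malloc(320 * 240 * 4);
// read the currently bound framebuffer back into CPU memory (slow!)
glReadPixels(0, 0, 320, 240, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
// ...wrap `pixels` with CGDataProviderCreateWithData/CGImageCreate as above...
free(pixels);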

You should be able to use the UIImage initializer imageWithData: for this. All you need to do is ensure that the data in texture->data is in a structured format that is recognizable to the UIImage constructor.
NSData* imageData = [NSData dataWithBytes:texture->data length:(3*texture->widthTexture*texture->heightTexture)];
UIImage* theImage = [UIImage imageWithData:imageData];
The types that imageWithData: supports are not well documented, but you can create NSData from .png, .jpg, .gif, and I presume .ppm files without any difficulty. If texture->data is in one of those binary formats, I suspect you can get this running with a little experimentation.

Related

Is there a way to preserve the RGB values of a pixel with an Alpha different from 255 in a PNG?

I'm currently working on a project that involves working with PNGs that have custom RGBA values. This is a preview of the code I use to create PNG NSData that contains my custom RGBA values (all of this is working as it should):
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef gtx = CGBitmapContextCreate(&pixelData, width, height, bitsPerComponent, BytesPerRow, colorSpace, (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
CGImageRef toCGImage = CGBitmapContextCreateImage(gtx);
NSBitmapImageRep *newRep = [[NSBitmapImageRep alloc] initWithCGImage:toCGImage];
NSData *pngData = [newRep representationUsingType:NSPNGFileType properties:nil];
When I create the pixelData (which is a uint8_t array that contains all the RGBA values) and set the alpha index of each pixel to 255, the int values of each R/G/B of each pixel are the same as when I created them.
Here's an RGBA example (Alpha = 255) -> (72, 101, 114, 255)
Now, if I set the alpha of all pixels to, let's say, 100 instead of 255, the above RGBA example will look like: (184, 255, 255, 100).
As you can see, the RGB values are totally different from what I created initially, and I really need to preserve the original values (through a custom property when creating the NSData, or something like that) or a way to calculate them back, no matter what the alpha value is. Is there any way of doing this?
Thanks!
Your data has un-premultiplied (by alpha) color components. Core Graphics is expecting premultiplied alpha.
Depending on how you build your pixelData array, you can either pre-multiply it in your own code as you go (e.g. premultipliedRed = unpremultipliedRed * alpha / 255.0) or you can use the Accelerate framework to convert it after the fact. You would use the function vImagePremultiplyData_RGBA8888() to do that:
vImage_Buffer buffer = { pixelData, height, width, BytesPerRow }; // note: vImage_Buffer's fields are { data, height, width, rowBytes }, in that order
vImagePremultiplyData_RGBA8888(&buffer, &buffer, 0);
If, later, you get back premultiplied data and you need to convert back to un-premultiplied, you can do the reverse using vImageUnpremultiplyData_RGBA8888(). There may be some loss of precision in the round trip, so you're not absolutely guaranteed to get back the original source data bit-for-bit. It's an inevitable consequence of converting between unpremultiplied and premultiplied. If that's a problem, you need to keep the original source data or not use Core Graphics.
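For the trip back, a minimal sketch (reusing the same hypothetical pixelData/width/height/BytesPerRow as above):
#import <Accelerate/Accelerate.h>
// undo the premultiplication in place; some precision loss is possible, as noted
vImage_Buffer buf = { pixelData, height, width, BytesPerRow };
vImageUnpremultiplyData_RGBA8888(&buf, &buf, kvImageNoFlags);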

Edit Color Bytes in UIImage

I'm quite new to working with UIImages at the byte level, but I was hoping that someone could point me to some guides on this matter?
I am ultimately looking to edit the RGBA values of the bytes, based on certain parameters (position, color, etc.), and I know I've come across samples/tutorials for this before, but I just can't seem to find anything now.
Basically, I'm hoping to be able to break a UIImage down into its bytes, iterate over them, and edit the bytes' RGBA values individually. Some sample code here would be a big help as well.
I've already been working with the different image contexts and editing images with the CG power tools, but I would like to be able to work at the byte level.
EDIT:
Sorry, but I do understand that you cannot edit the bytes of a UIImage directly. I should have asked my question more clearly. I meant to ask how I can get the bytes of a UIImage, edit those bytes, and then create a new UIImage from them.
As pointed out by @BradLarson, OpenGL is a better option for this, and there is a great library, created by @BradLarson, here. Thanks @CSmith for pointing it out!
@MartinR has the right answer; here is some code to get you started:
UIImage *image = your image;
CGImageRef imageRef = image.CGImage;
NSUInteger nWidth = CGImageGetWidth(imageRef);
NSUInteger nHeight = CGImageGetHeight(imageRef);
NSUInteger nBytesPerRow = CGImageGetBytesPerRow(imageRef);
NSUInteger nBitsPerPixel = CGImageGetBitsPerPixel(imageRef);
NSUInteger nBitsPerComponent = CGImageGetBitsPerComponent(imageRef);
// NB: the bitmap context below always uses 4 bytes per pixel (XRGB, 8 bits per
// component), so size the buffer and row stride for that layout rather than
// reusing the source image's bytes-per-row, which may differ
NSUInteger nBytesPerPixel = 4;
nBytesPerRow = nWidth * nBytesPerPixel;
unsigned char *rawInput = malloc (nHeight * nBytesPerRow);
CGColorSpaceRef colorSpaceRGB = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(rawInput, nWidth, nHeight, 8, nBytesPerRow, colorSpaceRGB, kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Big);
CGContextDrawImage (context, CGRectMake(0, 0, nWidth, nHeight), imageRef);
// modify the pixels stored in the array of 4-byte pixels at rawInput
.
.
.
CGImageRef imageRefNew = CGBitmapContextCreateImage(context);
UIImage *imageNew = [[UIImage alloc] initWithCGImage:imageRefNew];
CGImageRelease(imageRefNew); // imageNew retains it
CGContextRelease (context);
free (rawInput);
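To make the elided "modify the pixels" step concrete, here is a hedged example (assuming the XRGB layout produced by kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Big above) that zeroes out the red channel:
for (NSUInteger y = 0; y < nHeight; y++)
{
    unsigned char *pixel = rawInput + y * nBytesPerRow;
    for (NSUInteger x = 0; x < nWidth; x++, pixel += 4)
    {
        // pixel[0] is the unused (skip) byte, then red, green, blue
        pixel[1] = 0;
    }
}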
You have no direct access to the bytes of a UIImage, and you cannot change them directly.
You have to draw the image into a CGBitmapContext, modify the pixels in the bitmap, and then create a new image from the bitmap context.

32 bits big endian floating point data to CGImage

I am trying to write an application which reads FITS images. FITS stands for Flexible Image Transport System; it is a format primarily used to store scientific data related to astrophysics, and secondarily it is used by many amateur astronomers who take pictures of the sky with CCD cameras. So FITS files contain images, but they may also contain tables and other kinds of data. As I am new to Objective-C and Cocoa programming (I started this project one year ago, but since I am busy, I have barely touched it for a year!), I started by trying to create a library which allows me to convert the image content of a file to an NSImageRep. FITS image binary data may be 8 bit/pix, 16 bit/pix, or 32 bit/pix unsigned integer, or 32 bit/pix or 64 bit/pix floating point, all in big endian.
I managed to get image representations for grey scale FITS images in 16 bit/pix and 32 bit/pix unsigned integer, but I get very weird behaviour with 32 bit/pix floating point data (and the problem is worse for RGB 32 bit/pix floating point). So far, I haven't tested 8 bit/pix integer data or RGB images based on 16 bit/pix and 32 bit/pix integer data, because I haven't yet found example files on the web.
Here is my code to create a grey scale image from a FITS file:
-(void) ConstructImgGreyScale
{
CGBitmapInfo bitmapInfo;
int bytesPerRow;
switch ([self BITPIX]) // BITPIX : Number bits/pixel. Information extracted from the FITS header
{
case 8:
bytesPerRow=sizeof(int8_t);
bitmapInfo = kCGImageAlphaNone ;
break;
case 16:
bytesPerRow=sizeof(int16_t);
bitmapInfo = kCGImageAlphaNone | kCGBitmapByteOrder16Big;
break;
case 32:
bytesPerRow=sizeof(int32_t);
bitmapInfo = kCGImageAlphaNone | kCGBitmapByteOrder32Big;
break;
case 64:
bytesPerRow=sizeof(int64_t);
bitmapInfo = kCGImageAlphaNone;
break;
case -32:
bytesPerRow=sizeof(Float32);
bitmapInfo = kCGImageAlphaNone | kCGBitmapByteOrder32Big | kCGBitmapFloatComponents;
case -64:
bytesPerRow=sizeof(Float64);
bitmapInfo = kCGImageAlphaNone | kCGBitmapFloatComponents;
break;
default:
NSLog(@"Unknown pixel bit size");
return;
}
[self setBitsPerSample:abs([self BITPIX])];
[self setColorSpaceName:NSCalibratedWhiteColorSpace];
[self setPixelsWide:[self NAXESofAxis:0]]; // <- Size of the X axis. Extracted from FITS header
[self setPixelsHigh:[self NAXESofAxis:1]]; // <- Size of the Y axis. Extracted from FITS header
[self setSize: NSMakeSize( 2*[self pixelsWide], 2*[self pixelsHigh])];
[self setAlpha: NO];
[self setOpaque:NO];
CGDataProviderRef provider=CGDataProviderCreateWithCFData ((CFDataRef) Img);
CGFloat Scale[2]={0,28};
image = CGImageCreate ([self pixelsWide],
[self pixelsHigh],
[self bitsPerSample],
[self bitsPerSample],
[self pixelsWide]*bytesPerRow,
[[NSColorSpace deviceGrayColorSpace] CGColorSpace],
bitmapInfo,
provider,
NULL,
NO,
kCGRenderingIntentDefault
);
CGDataProviderRelease(provider);
return;
}
and here is a snapshot of the result for 32 bit/pix floating point data: [NASA HST picture]
The image seems to be shifted to the left, but what is more annoying is that I get two representations of the same image (in the upper and lower parts of the frame) within the same frame.
And for some other files, the behaviour is even stranger:
Star Field 1 (for the other links, see the comments; as a new user, I cannot include more than two links in this text, nor embed the images directly).
All three star field images are representations of the same FITS file content. I obtain a correct representation of the image in the bottom part of the frame (the stars are oversaturated, but I haven't yet played with the encoding). But in the upper part, each time I open the same file I get a different representation of the image. It looks like each time I open this file, it does not take the same sequence of bytes to produce the image representation (at least for the upper part).
Also, I do not know whether the image which is duplicated at the bottom contains half of the data and the upper one the other half, or whether it is simply a copy of the data.
When I convert the content of my data into a primitive format (human-readable numbers), the numbers match what should be in the pixels, at the correct positions. This leads me to think the problem is not coming from the data but from the way CGImage interprets it, i.e. I am wrong somewhere in the arguments I pass to the CGImageCreate function.
In the case of RGB FITS image data, I end up with 18 images in my frame: 6 copies each of the R, G, and B images, all in grey scale. Note that in the case of RGB images, my code is different.
What am I doing wrong?
OK, I finally found the solution to one of my problems, concerning the duplication of the image. It was a very stupid mistake, and I am not proud of not having found it earlier.
In the code, I forgot the break in case -32, so execution fell through into case -64. The question remains about the shift of the picture: I do not see the shift when I open 32 bit integer images, but it appears on the 32 bit floating point data.
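For clarity, here is the corrected case from the switch above:
case -32:
    bytesPerRow=sizeof(Float32);
    bitmapInfo = kCGImageAlphaNone | kCGBitmapByteOrder32Big | kCGBitmapFloatComponents;
    break; // <- this was missing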
Does anyone have an idea where this shift could come from in my code? Is it due to the way I construct the image, or could it be due to the way I draw it?
Below is the piece of code I use to draw the image. Since the image initially came out upside down, I made a little change of coordinates.
- (bool)draw {
CGContextRef context = [[NSGraphicsContext currentContext] graphicsPort];
if (!context || !image) {
return NO;
}
NSSize size = [self size];
CGContextTranslateCTM(context, 0, size.height);
CGContextScaleCTM(context, 1, -1);
CGContextDrawImage(context, CGRectMake(0, 0, size.width, size.height), image);
return YES;
}

Average Color of Mac Screen

I'm trying to find a way to calculate the average color of the screen using Objective-C.
So far I use this code to get a screenshot, which works great:
CGImageRef image1 = CGDisplayCreateImage(kCGDirectMainDisplay);
NSBitmapImageRep *bitmapRep = [[NSBitmapImageRep alloc] initWithCGImage:image1];
// Create an NSImage and add the bitmap rep to it...
NSImage *image = [[NSImage alloc] init];
[image addRepresentation:bitmapRep];
Now my problem is to calculate the average RGB color of this image.
I've found one solution, but the R, G, and B color components always came out the same (equal):
NSInteger i = 0;
NSInteger components[3] = {0,0,0};
unsigned char *data = [bitmapRep bitmapData];
NSInteger pixels = ([bitmapRep size].width *[bitmapRep size].height);
do {
components[0] += *data++;
components[1] += *data++;
components[2] += *data++;
} while (++i < pixels);
int red = (CGFloat)components[0] / pixels;
int green = (CGFloat)components[1] / pixels;
int blue = (CGFloat)components[2] / pixels;
A short analysis of bitmapRep shows that each pixel has 32 bits (4 bytes), where the first byte is unused; it is a padding byte. In other words, the format is XRGB, and X is not used. (There are no padding bytes at the end of a pixel row.)
Another remark: for counting the number of pixels, you use the method -(NSSize)size.
You should never do this! size has nothing to do with pixels. It only says how big the image should be depicted (expressed in inches or cm or mm) on the screen or the printer. For counting (or otherwise working with) the pixels, you should use -(NSInteger)pixelsWide and -(NSInteger)pixelsHigh. The (wrong) use of -size works if and only if the resolution of the imageRep is 72 dots per inch.
Finally: there is a similar question at Average Color of Mac Screen
Your data is probably aligned as 4 bytes per pixel (not 3 bytes, as you assume). That would (statistically) explain the near-equal values that you get.
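Putting both answers together, here is a hedged sketch of a corrected loop (assuming the 4-bytes-per-pixel XRGB layout and tightly packed rows described above):
unsigned char *data = [bitmapRep bitmapData];
NSInteger pixels = [bitmapRep pixelsWide] * [bitmapRep pixelsHigh];
unsigned long long sums[3] = {0, 0, 0};
for (NSInteger i = 0; i < pixels; i++)
{
    unsigned char *p = data + i * 4; // p[0] is the unused padding byte
    sums[0] += p[1]; // red
    sums[1] += p[2]; // green
    sums[2] += p[3]; // blue
}
int red   = (int)(sums[0] / pixels);
int green = (int)(sums[1] / pixels);
int blue  = (int)(sums[2] / pixels);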

How to load PNG with alpha with Cocoa?

I'm developing an iPhone OpenGL application, and I need to use some textures with transparency. I have saved the images as PNGs. I already have all the code to load PNGs as OpenGL textures and render them. This is working fine for all images that don't have transparency (all alpha values are 1.0). However, now that I'm trying to load and use some PNGs that have transparency (varying alpha values), my texture is messed up, like it loaded the data incorrectly or something.
I'm pretty sure this is due to my loading code which uses some of the Cocoa APIs. I will post the relevant code here though.
What is the best way to load PNGs, or any image format which supports transparency, on OS X/iPhone? This method feels roundabout: rendering to a CGContext and getting the data seems weird.
* LOADING *
CGImageRef CGImageRef_load(const char *filename) {
NSString *path = [NSString stringWithFormat:@"%@/%s",
[[NSBundle mainBundle] resourcePath],
filename];
UIImage *img = [UIImage imageWithContentsOfFile:path];
if(img) return [img CGImage];
return NULL;
}
unsigned char* CGImageRef_data(CGImageRef image) {
NSInteger width = CGImageGetWidth(image);
NSInteger height = CGImageGetHeight(image);
unsigned char *data = (unsigned char*)malloc(width*height*4);
CGContextRef context = CGBitmapContextCreate(data,
width, height,
8, width * 4,
CGImageGetColorSpace(image),
kCGImageAlphaPremultipliedLast);
CGContextDrawImage(context,
CGRectMake(0.0, 0.0, (float)width, (float)height),
image);
CGContextRelease(context);
return data;
}
* UPLOADING *
(define (image-opengl-upload data width height)
(let ((tex (alloc-opengl-image)))
(glBindTexture GL_TEXTURE_2D tex)
(glTexEnvi GL_TEXTURE_ENV GL_TEXTURE_ENV_MODE GL_DECAL)
(glTexImage2D GL_TEXTURE_2D
0
GL_RGBA
width
height
0
GL_RGBA
GL_UNSIGNED_BYTE
(->void-array data))
(glTexParameteri GL_TEXTURE_2D
GL_TEXTURE_MIN_FILTER
GL_LINEAR)
(glTexParameteri GL_TEXTURE_2D
GL_TEXTURE_MAG_FILTER
GL_LINEAR)
(glTexParameteri GL_TEXTURE_2D
GL_TEXTURE_WRAP_S
GL_CLAMP_TO_EDGE)
(glTexParameteri GL_TEXTURE_2D
GL_TEXTURE_WRAP_T
GL_CLAMP_TO_EDGE)
(glBindTexture GL_TEXTURE_2D 0)
tex))
To be explicit…
The most common issue with loading textures using Core Graphics is that it insists on converting data to premultiplied alpha format. In the case of PNGs included in the bundle, this is actually done in a preprocessing step in the build process. Failing to take this into account results in dark banding around blended objects.
The way to take it into account is to use glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA) instead of glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA). If you want to use alpha channels for something other than regular blending, your only option is to switch to a different format and loader (as prideout suggested, but for different reasons).
ETA: the premultiplication issue also exists under Mac OS X, but the preprocessing is iPhone-specific.
Your Core Graphics surface should be cleared to all zeroes before you render to it, so I recommend using calloc instead of malloc, or adding a memset after the malloc.
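For instance, a sketch of that suggestion applied to the CGImageRef_data function above:
// zero-initialized, so pixels the image doesn't cover stay transparent black
unsigned char *data = (unsigned char*)calloc(width * height * 4, 1);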
Also, I'm not sure you want your TexEnv set to GL_DECAL. You might want to leave it set to its default (GL_MODULATE).
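In the uploading code that would mean dropping the glTexEnvi call, or setting it back explicitly (shown here in C rather than the question's Scheme):
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE); // the GL default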
If you'd like to avoid Core Graphics for decoding PNG images, I recommend loading in a PVR file instead. PVR is an exceedingly simple file format. An app called PVRTexTool is included with the Imagination SDK which makes it easy to convert from PNG to PVR. The SDK also includes some sample code that shows how to parse their file format.
I don't know anything about OpenGL, but Cocoa abstracts this functionality with NSImage/UIImage.
You can use PVRs, but there will be some compression artifacts, so I would only recommend them for 3D object textures, or for textures that do not require a level of detail that PVR cannot offer, especially with gradients.