32-bit big-endian floating point data to CGImage - Objective-C

I am trying to write an application that reads FITS images. FITS stands for Flexible Image Transport System; it is a format primarily used to store scientific data related to astrophysics, and secondarily it is used by most amateur astronomers who take pictures of the sky with CCD cameras. FITS files contain images, but they may also contain tables and other kinds of data. As I am new to Objective-C and Cocoa programming (I started this project a year ago but, being busy, I have barely touched it since), I began by trying to create a library that converts the image content of a file to an NSImageRep. FITS image binary data may be 8 bit/pix, 16 bit/pix, or 32 bit/pix unsigned integer, or 32 bit/pix or 64 bit/pix floating point, all in big-endian byte order.
I manage to get an image representation for greyscale FITS images with 16 bit/pix and 32 bit/pix unsigned integer data, but I get very strange behaviour with 32 bit/pix floating point data (and the problem is worse for RGB 32 bit/pix floating point). So far I haven't tested 8 bit/pix integer data, or RGB images based on 16 bit/pix and 32 bit/pix integer data, because I haven't yet found example files on the web.
Here is my code to create a greyscale image from a FITS file:
-(void) ConstructImgGreyScale
{
CGBitmapInfo bitmapInfo;
int bytesPerRow;
switch ([self BITPIX]) // BITPIX : Number bits/pixel. Information extracted from the FITS header
{
case 8:
bytesPerRow=sizeof(int8_t);
bitmapInfo = kCGImageAlphaNone ;
break;
case 16:
bytesPerRow=sizeof(int16_t);
bitmapInfo = kCGImageAlphaNone | kCGBitmapByteOrder16Big;
break;
case 32:
bytesPerRow=sizeof(int32_t);
bitmapInfo = kCGImageAlphaNone | kCGBitmapByteOrder32Big;
break;
case 64:
bytesPerRow=sizeof(int64_t);
bitmapInfo = kCGImageAlphaNone;
break;
case -32:
bytesPerRow=sizeof(Float32);
bitmapInfo = kCGImageAlphaNone | kCGBitmapByteOrder32Big | kCGBitmapFloatComponents;
case -64:
bytesPerRow=sizeof(Float64);
bitmapInfo = kCGImageAlphaNone | kCGBitmapFloatComponents;
break;
default:
NSLog(@"Unknown pixel bit size");
return;
}
[self setBitsPerSample:abs([self BITPIX])];
[self setColorSpaceName:NSCalibratedWhiteColorSpace];
[self setPixelsWide:[self NAXESofAxis:0]]; // <- Size of the X axis. Extracted from FITS header
[self setPixelsHigh:[self NAXESofAxis:1]]; // <- Size of the Y axis. Extracted from FITS header
[self setSize: NSMakeSize( 2*[self pixelsWide], 2*[self pixelsHigh])];
[self setAlpha: NO];
[self setOpaque:NO];
CGDataProviderRef provider=CGDataProviderCreateWithCFData ((CFDataRef) Img);
CGFloat Scale[2]={0,28};
image = CGImageCreate ([self pixelsWide],
[self pixelsHigh],
[self bitsPerSample],
[self bitsPerSample],
[self pixelsWide]*bytesPerRow,
[[NSColorSpace deviceGrayColorSpace] CGColorSpace],
bitmapInfo,
provider,
NULL,
NO,
kCGRenderingIntentDefault
);
CGDataProviderRelease(provider);
return;
}
Here is a snapshot of the result for 32 bit/pix floating point data: NASA HST picture.
The image seems to be shifted to the left, but what is more annoying is that I get two representations of the same image (upper and lower parts of the frame) in the same frame.
For some other files, the behaviour is even stranger:
Star Field 1 (for the other links see the comments; as a new user I cannot post more than two links in this text, nor embed the images directly).
All three star field images are representations of the same FITS file content. I get a correct representation of the image in the bottom part of the frame (the stars are too saturated, but I haven't yet played with the encoding). In the upper part, however, I get a different representation of the image each time I open the same file. It looks as though, each time I open this file, it does not take the same sequence of bytes to produce the image representation (at least for the upper part).
Also, I do not know whether the duplicated image at the bottom contains half of the data and the upper one the other half, or whether it is simply a copy of the data.
When I convert the content of my data to primitive values (human-readable numbers), the numbers are consistent with what should be in each pixel, at the right position. This leads me to think that the problem is not coming from the data but from the way the CGImage interprets it, i.e. I am wrong somewhere in the arguments I pass to the CGImageCreate function.
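To illustrate, a check along these lines (a sketch rather than my exact code, assuming Img is the NSData object holding the raw big-endian image payload) gives sensible values:
const uint32_t *raw = (const uint32_t *)[Img bytes];
for (NSUInteger i = 0; i < 10; i++) {
uint32_t swapped = CFSwapInt32BigToHost(raw[i]); // FITS stores the data big endian
Float32 value;
memcpy(&value, &swapped, sizeof(value));
NSLog(@"pixel %lu = %f", (unsigned long)i, value);
}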
In the case of RGB FITS image data, I end up with 18 images in my frame: 6 copies of each of the R, G and B planes, all in greyscale. Note that for RGB images my code is different.
What am I doing wrong?

OK, I finally found the solution to one of my problems, the duplication of the image. It was a very stupid mistake, and I am not proud of not having found it earlier.
In the code, I forgot the break in the case -32. The question about the shift of the picture still remains: I do not see the shift when opening 32 bit integer images, but it appears with 32 bit floating point data.
Does anyone have an idea where this shift could come from in my code? Is it due to the way I construct the image, or could it be due to the way I draw it?
Below is the piece of code I use to draw the image. Since the image was initially upside down, I made a small change of coordinates.
- (bool)draw {
CGContextRef context = [[NSGraphicsContext currentContext] graphicsPort];
if (!context || !image) {
return NO;
}
NSSize size = [self size];
CGContextTranslateCTM(context, 0, size.height);
CGContextScaleCTM(context, 1, -1);
CGContextDrawImage(context, CGRectMake(0, 0, size.width, size.height), image);
return YES;
}

CGContextShowGlyphsAtPoint DEPRECATED

After spending quite a bit of time trying to display "Thai Phonetic YK" fonts in an iPhone app, I finally got things sorted out and working.
Though it is functioning, the compiler still issues a complaint (warning) about one line of code in the (void)drawRect: method of the class performing the display.
CGContextShowGlyphsAtPoint(context, 20, 50, textToPrint, textLength);
The compiler tells me that this code is DEPRECATED. My question is “How am I supposed to change it?”.
Even though I searched the net for an answer, I didn't find anything clear.
The documentation says something like “Use Core Text instead” which is far too vague to be considered as an answer.
Core Graphics:
void CGContextShowGlyphsAtPoint (
CGContextRef context,
CGFloat x,
CGFloat y,
const CGGlyph glyphs[],
size_t count
);
Core Text:
void CTFontDrawGlyphs (
CTFontRef font,
const CGGlyph glyphs[],
const CGPoint positions[],
size_t count,
CGContextRef context
);
The Core Text version requires a CTFontRef (in the Core Graphics version, the font is expected to be set in the context).
You can obtain a CTFontRef from a UIFont:
CTFontRef ctFont = CTFontCreateWithName( (__bridge CFStringRef)uiFont.fontName, uiFont.pointSize, NULL);
The CT version also requires an array of points, one for each glyph. Assuming you were drawing a single glyph with the CG code, you could create the array like this:
CGPoint point = CGPointMake(x, y);
const CGPoint* positions = &point;
This change does mean you will need a point position for each glyph. In my case the extra work was minimal: I was advancing the typesetter one character at a time (for curved text) so I had to do this calculation anyway.
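Putting the pieces together, a minimal sketch of the replacement (untested, reusing the context, textToPrint and textLength from your original call, and assuming uiFont is the UIFont you already draw with) might look like this:
CTFontRef ctFont = CTFontCreateWithName((__bridge CFStringRef)uiFont.fontName, uiFont.pointSize, NULL);
// one position per glyph, laid out left to right using the font's own advances
CGSize advances[textLength];
CTFontGetAdvancesForGlyphs(ctFont, kCTFontHorizontalOrientation, textToPrint, advances, textLength);
CGPoint positions[textLength];
CGFloat x = 20, y = 50;
for (size_t i = 0; i < textLength; i++) {
positions[i] = CGPointMake(x, y);
x += advances[i].width;
}
CTFontDrawGlyphs(ctFont, textToPrint, positions, textLength, context);
CFRelease(ctFont);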
You may be able to typeset one text run at a time with CTRun:
void CTRunDraw (
CTRunRef run,
CGContextRef context,
CFRange range );
That could save you the trouble of iterating over each glyph. You could use it something like this...
CGContextSetTextMatrix(context, CGAffineTransformIdentity);
CTLineRef line = CTLineCreateWithAttributedString(
(__bridge CFAttributedStringRef)self.attributedString);
CFArrayRef runs = CTLineGetGlyphRuns(line);
CFIndex runCount = CFArrayGetCount(runs);
for (CFIndex runIndex = 0; runIndex < runCount; ++runIndex) {
CTRunRef run = (CTRunRef)CFArrayGetValueAtIndex(runs, runIndex);
[self adjustContextForRun:run];
CTRunDraw(run, context, CFRangeMake(0, 0)); // a range of (0, 0) draws the entire run
}
CFRelease(line);
(That's just a sketch; the implementation will depend on your needs, and I haven't tested the code.)
adjustContextForRun would be responsible for setting things like font and initial draw position for the run.
Each CTRun represents a subrange of an attributed string where all of the attributes are the same. If you don't vary attributes over a line, you can abstract this further:
CTLineDraw(line, context);
I don't know if that level of abstraction would work in your case (I am only used to working with Latin fonts), but it's worth knowing it's there; it saves a lot of lower-level trouble.
You can set the initial drawing position for the text with this:
void CGContextSetTextPosition (
CGContextRef c,
CGFloat x,
CGFloat y );
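For the simple case that might look something like this (a sketch, assuming self.attributedString already carries the font you want):
CGContextSetTextMatrix(context, CGAffineTransformIdentity);
CGContextSetTextPosition(context, 20, 50);
CTLineRef line = CTLineCreateWithAttributedString((__bridge CFAttributedStringRef)self.attributedString);
CTLineDraw(line, context);
CFRelease(line);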

PDF generated from postscript vs. Quartz 2D

I'm working on a program which outputs needlework patterns as PDF files using Quartz 2D and Objective C. I've used another program which was coded in Python that outputs postscript files, which are converted to PDF when I open them in Preview. Since the second app is open source, I've been able to check that the settings I use to layout my PDF are the same, specifically the size of the squares and gap sizes between them.
In the image below, the output of the other program is on the left, while mine is on the right and both are at actual size. The problem I'm having is that at actual size, the gap lines in my output are intermittent, while in the other, all the gaps can be seen. I'm wondering if anyone knows about a rendering difference with postscript files which allows for this. I can zoom in on my output and the gaps show up, but I don't understand why there would be this difference.
In both applications the squares are set to be 8 pixels wide and tall, with a 1 pixel gap between them and a 2 pixel wide gap every 10 squares, and mine is set not to use antialiasing. For my output I've tried drawing directly to a CGPDFContext, and drawing to a CGLayerRef and then drawing the layer to the PDF context, but I get the same result. I'm using integer values for positioning the layout, and I'm pretty sure I've avoided trying to place squares at fractional pixel positions.
I have also tried drawing the output to a CGBitmapContext and then drawing the resulting bitmap to the PDF context, but zooming in on that gives terrible artifacts since it is then a raster being magnified.
A last difference I've noted is that the file size of the postscript generated PDF is much smaller than the one I make, and I'm thinking that might have to do with the paths I draw since it says drawing to a PDF context records the drawing as a series of PDF drawing commands written to a file, which I would imagine takes up quite a bit of space compared to just displaying an image.
I have included my code to generate my PDFs below in case it would be helpful, but I'm really just wondering whether there is a rendering difference between postscript and Quartz that could explain these differences, and whether there is a way to make my output match up.
(The uploader says I need at least 10 reputation to post images, but I have this link: http://i.stack.imgur.com/nr588.jpg. Again, the postscript output is on the left, my Quartz output is on the right, and in my output the gridlines are intermittent.)
-(void)makePDF:(NSImage*)image withPixelArray:(unsigned char *)rawData{
NSString *currentUserHomeDirectory = NSHomeDirectory();
currentUserHomeDirectory = [currentUserHomeDirectory stringByAppendingString:@"/Desktop/"];
currentUserHomeDirectory = [currentUserHomeDirectory stringByAppendingString:[image name]];
currentUserHomeDirectory = [currentUserHomeDirectory stringByAppendingPathExtension:@"pdf"];
CGContextRef pdfContext;
CFStringRef path;
CFURLRef url;
int width = 792;
int height = 612;
CFMutableDictionaryRef myDictionary = NULL;
CFMutableDictionaryRef pageDictionary = NULL;
const char *filename = [currentUserHomeDirectory UTF8String];
path = CFStringCreateWithCString (NULL, filename,
kCFStringEncodingUTF8);
url = CFURLCreateWithFileSystemPath (NULL, path,
kCFURLPOSIXPathStyle, 0);
CFRelease (path);
myDictionary = CFDictionaryCreateMutable(NULL, 0,
&kCFTypeDictionaryKeyCallBacks,
&kCFTypeDictionaryValueCallBacks);
CGRect pageRect = CGRectMake(0, 0, width, height);
pdfContext = CGPDFContextCreateWithURL (url, &pageRect, myDictionary);
const CGFloat whitePoint[3]= {0.95047, 1.0, 1.08883};
const CGFloat blackPoint[3]={0,0,0};
const CGFloat gammavalues[3] = {2.2,2.2,2.2};
const CGFloat matrix[9] = {0.4124564, 0.3575761, 0.1804375, 0.2126729, 0.7151522, 0.072175, 0.0193339, 0.119192, 0.9503041};
CGColorSpaceRef myColorSpace = CGColorSpaceCreateCalibratedRGB(whitePoint, blackPoint, gammavalues, matrix);
CGContextSetFillColorSpace (
pdfContext,
myColorSpace
);
int annotationNumber =0;
int match=0;
CFRelease(myDictionary);
CFRelease(url);
pageDictionary = CFDictionaryCreateMutable(NULL, 0,
&kCFTypeDictionaryKeyCallBacks,
&kCFTypeDictionaryValueCallBacks);
CFDataRef boxData = CFDataCreate(NULL,(const UInt8 *)&pageRect, sizeof (CGRect));
CFDictionarySetValue(pageDictionary, kCGPDFContextMediaBox, boxData);
int m = 0;
int sidestep = 0;
int downstep = 0;
int maxc = 0;
int maxr = 0;
int columnsPerPage = 70;
int rowsPerPage = 60;
int symbolSize = 8;
int gapSize=1;
CGContextSetShouldAntialias(pdfContext, NO);
int pages = ceil([image size].width/columnsPerPage) * ceil([image size].height/rowsPerPage);
for (int g=0; g<pages; g++) {
int offsetX = 32;
int offsetY = 32;
if (sidestep == ceil([image size].width/columnsPerPage)-1) {
maxc=[image size].width-sidestep*columnsPerPage;
}else {
maxc=columnsPerPage;
}
if (downstep == ceil([image size].height/rowsPerPage)-1) {
maxr=[image size].height-downstep*rowsPerPage;
}else {
maxr=rowsPerPage;
}
CGPDFContextBeginPage (pdfContext, pageDictionary);
CGContextTranslateCTM(pdfContext, 0.0, 612);
CGContextScaleCTM(pdfContext, 1.0, -1.0);
CGContextSetShouldAntialias(pdfContext, NO);
int r=0;
while (r<maxr){
int c=0;
while (c<maxc){
m = sidestep*columnsPerPage+c+downstep*[image size].width*rowsPerPage+r*[image size].width;
//Reset offsetX
if (c==0) {
offsetX=32;
}
//Increase offset for gridlines
if (c==0 && r%10==0&&r!=0) {
offsetY+=2;
}
if (c%10==0&&c!=0) {
offsetX+=2;
}
//DRAW SQUARES
CGContextSetRGBFillColor (pdfContext, (double)rawData[m*4]/255.,(double) rawData[m*4+1]/255., (double)rawData[m*4+2]/255., 1);
CGContextFillRect (pdfContext, CGRectMake (c*(symbolSize+gapSize)+offsetX, r*(symbolSize+gapSize)+offsetY, symbolSize, symbolSize ));
if ([usedColorsPaths count]!=0) {
for (int z=0; z<[usedColorsPaths count]; z++) {
if ([[[usedColorsPaths allKeys] objectAtIndex:z] isEqualToString:[NSString stringWithFormat:@"%i,%i,%i",rawData[m*4], rawData[m*4+1], rawData[m*4+2]]]) {
match=1;
if (rawData[m*4+3] == 0) {
CGContextDrawLayerAtPoint (pdfContext, CGPointMake(c*(symbolSize+1)+offsetX-2, r*(symbolSize+1)+offsetY-2), [Anotations colorAnnotations:pdfContext :[[usedColorsPaths objectForKey:[NSString stringWithFormat:@"%i,%i,%i",rawData[m*4],rawData[m*4+1],rawData[m*4+2]]]intValue] :symbolSize+4 :0]);
}
else{
CGContextDrawLayerAtPoint (pdfContext, CGPointMake(c*(symbolSize+1)+offsetX, r*(symbolSize+1)+offsetY),[Anotations colorAnnotations:pdfContext :[[usedColorsPaths objectForKey:[NSString stringWithFormat:@"%i,%i,%i",rawData[m*4], rawData[m*4+1], rawData[m*4+2]]] intValue] :symbolSize :0]);
}
break;
}
}
if (match==0) {
if (rawData[m*4+3] == 0) {
[usedColorsPaths setObject:[NSNumber numberWithInt:455] forKey:[NSString stringWithFormat:@"%i,%i,%i",rawData[m*4], rawData[m*4+1], rawData[m*4+2]]];
CGContextDrawLayerAtPoint (pdfContext, CGPointMake(c*(symbolSize+1)+offsetX-2, r*(symbolSize+1)+offsetY-2), [Anotations colorAnnotations:pdfContext :[[usedColorsPaths objectForKey:[NSString stringWithFormat:@"%i,%i,%i",rawData[m*4],rawData[m*4+1],rawData[m*4+2]]]intValue] :symbolSize+4 :0]);
}
else{
[usedColorsPaths setObject:[NSNumber numberWithInt:annotationNumber] forKey:[NSString stringWithFormat:@"%i,%i,%i",rawData[m*4], rawData[m*4+1], rawData[m*4+2]]];
CGContextDrawLayerAtPoint (pdfContext, CGPointMake(c*(symbolSize+1)+offsetX, r*(symbolSize+1)+offsetY), [Anotations colorAnnotations:pdfContext :[[usedColorsPaths objectForKey:[NSString stringWithFormat:@"%i,%i,%i",rawData[m*4],rawData[m*4+1],rawData[m*4+2]]]intValue] :symbolSize :0]);
}
annotationNumber++;
if (annotationNumber==9) {
annotationNumber=0;
}
}
match=0;
}
if ([usedColorsPaths count]==0) {
if (rawData[m*4+3] == 0) {
[usedColorsPaths setObject:[NSNumber numberWithInt:455] forKey:[NSString stringWithFormat:@"%i,%i,%i",rawData[m*4], rawData[m*4+1], rawData[m*4+2]]];
CGContextDrawLayerAtPoint (pdfContext, CGPointMake(c*(symbolSize+1)+offsetX-2, r*(symbolSize+1)+offsetY-2), [Anotations colorAnnotations:pdfContext :[[usedColorsPaths objectForKey:[NSString stringWithFormat:@"%i,%i,%i",rawData[m*4],rawData[m*4+1],rawData[m*4+2]]]intValue] :symbolSize+4 :0]);
}
else{
[usedColorsPaths setObject:[NSNumber numberWithInt:annotationNumber] forKey:[NSString stringWithFormat:@"%i,%i,%i",rawData[m*4], rawData[m*4+1], rawData[m*4+2]]];
CGContextDrawLayerAtPoint (pdfContext, CGPointMake(c*(symbolSize+1)+offsetX, r*(symbolSize+1)+offsetY), [Anotations colorAnnotations:pdfContext :[[usedColorsPaths objectForKey:[NSString stringWithFormat:@"%i,%i,%i",rawData[m*4],rawData[m*4+1],rawData[m*4+2]]]intValue] :symbolSize :0]);
}
annotationNumber++;
}
c++;
}
r++;
}
sidestep++;
if (sidestep == ceil([image size].width/columnsPerPage)) {
sidestep=0;
downstep+=1;
}
CGContextSaveGState(pdfContext);
CGPDFContextEndPage (pdfContext);
}
CGContextRelease(pdfContext);
CFRelease(pageDictionary);
CFRelease(boxData);
}
I would need to see both PDF files to be able to make any judgement about why the sizes are different, or why the line spacing is intermittent, but here's a few points:
The most likely reason why the files are different sizes is that one PDF file has the content stream (the drawing operations you refer to) compressed, while the other does not.
In general emitting a sequence of drawing operations is more compact than including a bitmap image, unless the image resolution is very low. An RGB image takes 3 bytes for every image sample. If you think about a 1 inch square image at 300 dpi that's 300x300x3 bytes, or 270,000 bytes. If the image was all one colour (see example below) I could describe it in PDF drawing operations in 22 bytes.
You can't specify the size of the squares, or any of the other graphic features, in pixels. PDF is a vector-based, scalable format, not a bitmap format. I don't work on a Mac so I can't comment on your sample code, but I suspect you are confusing the media width and height (specified in points) with pixels; these are not the same. The width and height describe a media size; there are no pixels involved until the PDF file is rendered to a bitmap, and at that point the resolution of the device determines how many pixels are in each point.
Let's consider a PDF one inch square; that would have a width of 72 and a height of 72. I'll fill that rectangle with pure red; the PDF operations for that would be:
1 0 0 rg
0 0 72 72 re
So that sets the non-stroking colour to RGB (1, 0, 0), then, starting at 0, 0 (bottom left) and extending 72 points wide and 72 points high (one inch in each direction), fills a rectangle with that colour.
If I view that on screen on my PC, that one inch square is rendered as 96 pixels by 96 pixels. If I view it on an iPad with a retina display, the square is rendered at 264 pixels by 264. Finally, if I print it to my laser printer, the square is rendered at 600 pixels by 600. The PDF content hasn't changed, but the number of pixels certainly has. A square is too simple of course; I could have used a circle instead, and obviously the high resolution display would have smoother curves. If I had used an image, the smoothness of the curve would be 'baked in': the device rendering the PDF can't alter it if the resolution changes; all it can do is discard image samples to render down, or interpolate new ones to render up. That looks jagged when you scale down, and fuzzy when you scale up. The vector representation remains smooth, limited only by the current resolution.
The point of PDF is that the PDF isn't limited to one resolution, it can print to all of them and the output should be the same (as far as possible) on each device.
Now, I suspect the problem is that you are "using integer values for positioning the layout"; you can't do that and get correct (i.e. expected) results. You should be using real numbers for the layout, which will allow you finer control over positioning. Remember you are not addressing individual pixels; you are positioning graphics in a co-ordinate system, and the resolution only comes into play when rendering (i.e. viewing or printing) the PDF file. You need to put aside concerns over pixels and just focus on the positioning.
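In Quartz terms that simply means building your rectangles from CGFloat values rather than ints, roughly like this (a sketch only, since as I said I don't work with your toolkit; rows and cols stand in for your own loop bounds):
CGFloat symbolSize = 8.0;
CGFloat gapSize = 1.0;
CGFloat offsetX = 32.0, offsetY = 32.0;
for (int r = 0; r < rows; r++) {
for (int c = 0; c < cols; c++) {
CGFloat x = offsetX + c * (symbolSize + gapSize);
CGFloat y = offsetY + r * (symbolSize + gapSize);
// x, y and symbolSize are in points (a resolution-independent unit), not pixels
CGContextFillRect(pdfContext, CGRectMake(x, y, symbolSize, symbolSize));
}
}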
So, after messing with it for another day, I think I found out that my problem was turning off antialiasing. I thought I wanted sharper drawing, but since PDF contains vector graphics, the antialiasing is fine and zooming in on the graphics keeps them sharp.
The first thing I did was, in Preview, go to Preview > Preferences > PDF and select "Define 100% scale as: 1 point equals 1 screen pixel". Doing this with my antialiasing turned off resulted in my image being displayed as I wanted, but when zooming in on it, it seems Preview had difficulty deciding when to draw the 1 pixel gaps.
I then altered my code by deleting the calls that turn off antialiasing, and my output renders perfectly, so enabling antialiasing fixed my problem. Pretty embarrassing that this took me three days to figure out, but I'm glad for this simple fix.

Edit Color Bytes in UIImage

I'm quite new to working with UIImages on the byte level, but I was hoping that someone could point me to some guides on this matter?
I am ultimately looking to edit the RGBA values of the bytes, based on certain parameters (position, color, etc.) and I know I've come across samples/tutorials for this before, but I just can't seem to find anything now.
Basically, I'm hoping to be able to break a UIImage down to its bytes and iterate over them and edit the bytes' RGBA values individually. Maybe some sample code here would be a big help as well.
I've already been working in the different image contexts and editing the images with the CG power tools, but I would like to be able to work at the byte level.
EDIT:
Sorry, but I do understand that you cannot edit the bytes in a UIImage directly. I should have asked my question more clearly. I meant to ask how I can get the bytes of a UIImage, edit those bytes, and then create a new UIImage from them.
As pointed out by @BradLarson, OpenGL is a better option for this, and there is a great library, created by @BradLarson, here. Thanks @CSmith for pointing it out!
@MartinR has the right answer; here is some code to get you started:
UIImage *image = ...;   // your image
CGImageRef imageRef = image.CGImage;
NSUInteger nWidth = CGImageGetWidth(imageRef);
NSUInteger nHeight = CGImageGetHeight(imageRef);
NSUInteger nBytesPerRow = CGImageGetBytesPerRow(imageRef);
NSUInteger nBitsPerPixel = CGImageGetBitsPerPixel(imageRef);
NSUInteger nBitsPerComponent = CGImageGetBitsPerComponent(imageRef);
NSUInteger nBytesPerPixel = nBitsPerPixel == 24 ? 3 : 4;
unsigned char *rawInput = malloc (nWidth * nHeight * nBytesPerPixel);
CGColorSpaceRef colorSpaceRGB = CGColorSpaceCreateDeviceRGB();
// note: this assumes the source image is already 32 bits per pixel, so that nBytesPerRow
// and nBitsPerComponent are consistent with the 32-bit context created here
CGContextRef context = CGBitmapContextCreate(rawInput, nWidth, nHeight, nBitsPerComponent, nBytesPerRow, colorSpaceRGB, kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Big);
CGContextDrawImage (context, CGRectMake(0, 0, nWidth, nHeight), imageRef);
// modify the pixels stored in the array of 4-byte pixels at rawInput
.
.
.
CGImageRef newImageRef = CGBitmapContextCreateImage(context);
UIImage *imageNew = [[UIImage alloc] initWithCGImage:newImageRef];
CGImageRelease(newImageRef);   // the UIImage keeps its own reference, so release ours to avoid a leak
CGContextRelease (context);
free (rawInput);
You have no direct access to the bytes in a UIImage, and you cannot change them directly.
You have to draw the image into a CGBitmapContext, modify the pixels in the bitmap, and then create a new image from the bitmap context.
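For example, using the buffer from the snippet above, the modification step could be as simple as this (a sketch that just zeroes the red channel of every pixel):
// with kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Big each pixel is stored as X,R,G,B
for (NSUInteger i = 0; i < nWidth * nHeight; i++) {
unsigned char *pixel = rawInput + i * 4;
pixel[1] = 0; // red; pixel[2] is green, pixel[3] is blue, pixel[0] is the unused byte
}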

iOS OpenGL using parameters for glTexImage2D to make a UIImage?

I am working through some existing code for a project I am assigned to.
I have a successful call to glTexImage2D like this:
glTexImage2D(GL_TEXTURE_2D, 0, texture->format, texture->widthTexture, texture->heightTexture, 0, texture->format, texture->type, texture->data);
I would like to create an image (preferably a CGImage or UIImage) using the variables passed to glTexImage2D, but I don't know if it's possible.
I need to create many sequential images (many per second) from an OpenGL view and save them for later use.
Should I be able to create a CGImage or UIImage using the variables I use in glTexImage2D?
If so, how should I do it?
If not, why not, and what do you suggest for my task of saving/capturing the contents of my OpenGL view many times per second?
Edit: I have already successfully captured images using some techniques provided by Apple with glReadPixels, etc. I want something faster so I can get more images per second.
Edit: after reviewing and adding the code from Thomson, here is the resulting image:
The image very slightly resembles what it should look like, except duplicated ~5 times horizontally and with some random black space underneath.
Note: the video data (each frame) is coming over an ad-hoc network connection to the iPhone. I believe the camera is shooting each frame in the YCbCr color space.
Edit: further reviewing Thomson's code
I have copied your new code into my project and got a different image as a result:
width: 320
height: 240
I am not sure how to find the number of bytes in texture->data; it is a void pointer.
Edit: format and type
texture.type = GL_UNSIGNED_SHORT_5_6_5
texture.format = GL_RGB
Hey binnyb, here's the solution to creating a UIImage using the data stored in texture->data. v01d is certainly right that you're not going to get the UIImage as it appears in your GL framebuffer, but it'll get you an image from the data before it has passed through the framebuffer.
Turns out your texture data is in 16 bit format, 5 bits for red, 6 bits for green, and 5 bits for blue. I've added code for converting the 16 bit RGB values into 32 bit RGBA values before creating a UIImage. I'm looking forward to hearing how this turns out.
float width = 512;
float height = 512;
int channels = 4;
// create a buffer for our image after converting it from 565 rgb to 8888rgba
u_int8_t* rawData = (u_int8_t*)malloc(width*height*channels);
// unpack the 5,6,5 pixel data into 32 bit RGBA
u_int8_t *src = (u_int8_t *)texture->data;   // texture->data is a void pointer, so cast it before indexing
for (int i=0; i<width*height; ++i)
{
// append two adjacent bytes in texture->data into a 16 bit int
u_int16_t pixel16 = (src[i*2] << 8) + src[i*2+1];
// mask and shift each pixel into a single 8 bit unsigned, then normalize by 5/6 bit
// max to 8 bit integer max. Alpha set to 0.
rawData[channels*i] = ((pixel16 & 63488) >> 11) / 31.0 * 255;
rawData[channels*i+1] = ((pixel16 & 2016) << 5 >> 10) / 63.0 * 255;
rawData[channels*i+2] = ((pixel16 & 31) << 11 >> 11) / 31.0 * 255;
rawData[channels*i+3] = 0;
}
// same as before
int bitsPerComponent = 8;
int bitsPerPixel = channels*bitsPerComponent;
int bytesPerRow = channels*width;
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL,
rawData,
channels*width*height,
NULL);
// note: do not free rawData here; the data provider references the buffer without
// copying it, so it must stay valid for the lifetime of the image created below
CGImageRef imageRef = CGImageCreate(width,
height,
bitsPerComponent,
bitsPerPixel,
bytesPerRow,
colorSpaceRef,
bitmapInfo,
provider,NULL,NO,renderingIntent);
UIImage *newImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);            // newImage retains its own reference
CGDataProviderRelease(provider);
The code for creating a new image comes from Creating UIImage from raw RGBA data thanks to Rohit. I've tested this with our original 320x240 image dimension, having converted a 24 bit RGB image into 5,6,5 format and then up to 32 bit. I haven't tested it on a 512x512 image but I don't expect any problems.
You could make an image from the data you are sending to GL, but I doubt that's really what you want to achieve.
My guess is you want the output of the framebuffer. To do that you need glReadPixels(). Bear in mind that for a large buffer (say 1024x768) it will take seconds to read the pixels back from GL, so you won't get more than one frame per second.
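The read-back itself is only a couple of calls (a sketch, assuming an RGBA framebuffer of known width and height is currently bound):
GLubyte *pixels = (GLubyte *)malloc(width * height * 4);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
// pixels now holds the framebuffer contents, starting from the bottom row
free(pixels);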
You should be able to use the UIImage initializer imageWithData for this. All you need is to ensure that the data in texture->data is in a structured format that is recognizable to the UIImage constructor.
NSData* imageData = [NSData dataWithBytes:texture->data length:(3*texture->widthTexture*texture->heightTexture)];
UIImage* theImage = [UIImage imageWithData:imageData];
The types that imageWithData: supports are not well documented, but you can create NSData from .png, .jpg, .gif, and I presume .ppm files without any difficulty. If texture->data is in one of those binary formats I suspect you can get this running with a little experimentation.

How to load PNG with alpha with Cocoa?

I'm developing an iPhone OpenGL application, and I need to use some textures with transparency. I have saved the images as PNGs. I already have all the code to load PNGs as OpenGL textures and render them. This is working fine for all images that don't have transparency (all alpha values are 1.0). However, now that I'm trying to load and use some PNGs that have transparency (varying alpha values), my texture is messed up, like it loaded the data incorrectly or something.
I'm pretty sure this is due to my loading code which uses some of the Cocoa APIs. I will post the relevant code here though.
What is the best way to load PNGs, or any image format which supports transparency, on OSX/iPhone? This method feels roundabout. Rendering it to a CGContext and getting the data seems weird.
* LOADING *
CGImageRef CGImageRef_load(const char *filename) {
NSString *path = [NSString stringWithFormat:@"%@/%s",
[[NSBundle mainBundle] resourcePath],
filename];
UIImage *img = [UIImage imageWithContentsOfFile:path];
if(img) return [img CGImage];
return NULL;
}
unsigned char* CGImageRef_data(CGImageRef image) {
NSInteger width = CGImageGetWidth(image);
NSInteger height = CGImageGetHeight(image);
unsigned char *data = (unsigned char*)malloc(width*height*4);
CGContextRef context = CGBitmapContextCreate(data,
width, height,
8, width * 4,
CGImageGetColorSpace(image),
kCGImageAlphaPremultipliedLast);
CGContextDrawImage(context,
CGRectMake(0.0, 0.0, (float)width, (float)height),
image);
CGContextRelease(context);
return data;
}
* UPLOADING *
(define (image-opengl-upload data width height)
(let ((tex (alloc-opengl-image)))
(glBindTexture GL_TEXTURE_2D tex)
(glTexEnvi GL_TEXTURE_ENV GL_TEXTURE_ENV_MODE GL_DECAL)
(glTexImage2D GL_TEXTURE_2D
0
GL_RGBA
width
height
0
GL_RGBA
GL_UNSIGNED_BYTE
(->void-array data))
(glTexParameteri GL_TEXTURE_2D
GL_TEXTURE_MIN_FILTER
GL_LINEAR)
(glTexParameteri GL_TEXTURE_2D
GL_TEXTURE_MAG_FILTER
GL_LINEAR)
(glTexParameteri GL_TEXTURE_2D
GL_TEXTURE_WRAP_S
GL_CLAMP_TO_EDGE)
(glTexParameteri GL_TEXTURE_2D
GL_TEXTURE_WRAP_T
GL_CLAMP_TO_EDGE)
(glBindTexture GL_TEXTURE_2D 0)
tex))
To be explicit…
The most common issue with loading textures using Core Graphics is that it insists on converting the data to premultiplied alpha format. In the case of PNGs included in the bundle, this is actually done in a preprocessing step in the build process. Failing to take this into account results in dark banding around blended objects.
The way to take it into account is to use glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA) instead of glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA). If you want to use alpha channels for something other than regular blending, your only option is to switch to a different format and loader (as prideout suggested, but for different reasons).
ETA: the premultiplication issue also exists under Mac OS X, but the preprocessing is iPhone-specific.
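In code that blending setup is just:
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA); // the source colours are already multiplied by their alpha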
Your Core Graphics surface should be cleared to all zeroes before you render to it, so I recommend using calloc instead of malloc, or adding a memset after the malloc.
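For example, the allocation in CGImageRef_data could become:
unsigned char *data = (unsigned char*)calloc(width * height, 4); // zero-filled, unlike malloc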
Also, I'm not sure you want your TexEnv set to GL_DECAL. You might want to leave it set to its default (GL_MODULATE).
If you'd like to avoid Core Graphics for decoding PNG images, I recommend loading in a PVR file instead. PVR is an exceedingly simple file format. An app called PVRTexTool is included with the Imagination SDK which makes it easy to convert from PNG to PVR. The SDK also includes some sample code that shows how to parse their file format.
I don't know anything about OpenGL, but Cocoa abstracts this functionality with NSImage/UIImage.
You can use PVRs, but there will be some compression artifacts, so I would only recommend them for 3D object textures, or for textures that do not require a level of detail that PVR cannot offer, especially with gradients.