NSBitmapImageRep frame duration for animated GIF - objective-c

I have two animated gifs:
http://d-32.com/uploads/gif1.gif and http://d-32.com/uploads/gif2.gif
When using NSBitmapImageRep I get a frame duration of 0.15s for the first gif and 0.1s for the second.
But 0.1s is way too slow for the second gif.
When using imagemagick I also get 0.15s for the first, but 0.03s for the second gif, which is correct.
Am I doing something wrong, or is NSBitmapImageRep (i.e. using NSImageView to display GIFs) useless here, so that I have to fall back to a web view (which displays both gifs correctly)?

So thanks to Heinrich I now know that the gif has two delay values, the real (unclamped) one and the clamped one. Sadly NSBitmapImageRep only exposes the clamped value, but via the CGImageSource properties I can read both. So I iterate through each frame, update the duration with the correct value and in the end create a CAAnimation like in: https://github.com/orta/GIFs/blob/master/objc/GIFs/ORImageView.m
So this is the code for updating the duration:
NSDictionary *properties = (__bridge_transfer NSDictionary *)CGImageSourceCopyPropertiesAtIndex(imageSource, i, NULL);
double duration = [[[properties objectForKey:(__bridge NSString *)kCGImagePropertyGIFDictionary]
    objectForKey:(__bridge NSString *)kCGImagePropertyGIFUnclampedDelayTime] doubleValue];
[bitmapRepresentation setProperty:NSImageCurrentFrame withValue:[NSNumber numberWithInt:i]];
[bitmapRepresentation setProperty:NSImageCurrentFrameDuration withValue:[NSNumber numberWithDouble:duration]];
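For context, here is a minimal sketch of the surrounding loop, assuming imageSource came from CGImageSourceCreateWithData and bitmapRepresentation is the GIF's NSBitmapImageRep (both names are carried over from the snippet above):
// Sketch: walk every frame, read the unclamped delay, and write it back
// into the NSBitmapImageRep so NSImageView animates at the right speed.
size_t frameCount = CGImageSourceGetCount(imageSource);
for (size_t i = 0; i < frameCount; i++) {
    NSDictionary *properties = (__bridge_transfer NSDictionary *)
        CGImageSourceCopyPropertiesAtIndex(imageSource, i, NULL);
    NSDictionary *gifProps = [properties objectForKey:(__bridge NSString *)kCGImagePropertyGIFDictionary];
    double duration = [[gifProps objectForKey:(__bridge NSString *)kCGImagePropertyGIFUnclampedDelayTime] doubleValue];
    if (duration <= 0.0) {
        // fall back to the clamped value if the unclamped one is missing
        duration = [[gifProps objectForKey:(__bridge NSString *)kCGImagePropertyGIFDelayTime] doubleValue];
    }
    [bitmapRepresentation setProperty:NSImageCurrentFrame withValue:[NSNumber numberWithUnsignedLong:i]];
    [bitmapRepresentation setProperty:NSImageCurrentFrameDuration withValue:[NSNumber numberWithDouble:duration]];
}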

Related

Why ever use `keyTimes`, `timingFunctions` or `timingFunction` on CAKeyframeAnimation?

Interpolating the values with a custom function is very easy. But is it bad practice? Should I instead (or in addition) use keyTimes, timingFunctions or timingFunction to explain the animation-curve to the framework? When working with custom animation curves I really don't see why I should use those properties. I want to do this right.
This works just fine. As expected, it animates the view's position with a custom cubic-ease-out animation curve:
CAKeyframeAnimation *anim = [CAKeyframeAnimation animationWithKeyPath:@"position.x"];
anim.duration = 5;
NSUInteger numberOfFrames = anim.duration * 60;
NSMutableArray *values = [NSMutableArray new];
for (int i = 0; i < numberOfFrames; i++)
{
    CGFloat linearProgress = (double) i / (double) numberOfFrames;
    CGPoint position = view.layer.position;
    position.x = 10 + (300 * CubicEaseOut(linearProgress));
    [values addObject:[NSValue valueWithCGPoint:position]];
}
anim.values = values;
[view.layer addAnimation:anim forKey:@"position"];
Your method adds 300 keyframes to the animation for Core Animation to interpolate linearly. There are two reasons this might be worse than using fewer keyframes with non-linear interpolation to get the same result: (1) more data to send to CA, i.e. more data to store and read every animation frame; (2) if you ever slowed the animation down so that more than 300 frames are rendered from it, the linear interpolation artifacts may become visible.
If you just have one 3s animation, it's likely that neither of those reasons matters, but if you had, say, 100 ten-second animations all running at once, you might see worse performance than with fewer keyframes.
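For comparison, here is a rough sketch of the lighter-weight alternative: let Core Animation interpolate between just two values with a timing function (the control points below only approximate a cubic ease-out and are illustrative, not exact):
CABasicAnimation *anim = [CABasicAnimation animationWithKeyPath:@"position.x"];
anim.duration = 5;
anim.fromValue = [NSNumber numberWithDouble:10];
anim.toValue = [NSNumber numberWithDouble:310];   // 10 + 300 * CubicEaseOut(1.0)
// Bezier approximation of a cubic ease-out; tweak the control points to taste.
anim.timingFunction = [CAMediaTimingFunction functionWithControlPoints:0.215f :0.61f :0.355f :1.0f];
[view.layer addAnimation:anim forKey:@"position"];
On a CAKeyframeAnimation the same idea applies per segment, via keyTimes and timingFunctions.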

Objective-C: Display an Array of Floats as an Image

In a Cocoa app I would like to display a 2D array of floats in an NSImageView. To keep the code as simple as possible, I start by converting the float data to NSData:
// dataArray: an Nx by Ny array of floats
NSMutableData *nsdata = [NSMutableData dataWithCapacity:0];
long numPixels = Nx*Ny;
for (int i = 0; i < numPixels; i++) {
[nsdata appendBytes:&dataArray[i] length:sizeof(float)];
}
and now try to display the data (the display is left blank):
[theNSImageView setImage:[[NSImage alloc] initWithData:nsdata]];
Is this the correct approach? Is a CGContext needed first? I was hoping to accomplish this with NSData.
I have noted the earlier Stack posts: 32 bit data, close but in reverse, almost worked but no NSData, color image data here, but not much luck getting variations on these working. Thanks for any suggestions.
You can use an NSBitmapImageRep to build up an NSImage float-by-float.
Interestingly, one of its initialisers has the longest method name in all of Cocoa:
- (id)initWithBitmapDataPlanes:(unsigned char **)planes
                    pixelsWide:(NSInteger)width
                    pixelsHigh:(NSInteger)height
                 bitsPerSample:(NSInteger)bps
               samplesPerPixel:(NSInteger)spp
                      hasAlpha:(BOOL)alpha
                      isPlanar:(BOOL)isPlanar
                colorSpaceName:(NSString *)colorSpaceName
                  bitmapFormat:(NSBitmapFormat)bitmapFormat
                   bytesPerRow:(NSInteger)rowBytes
                  bitsPerPixel:(NSInteger)pixelBits;
It's well documented at least. Once you've built it up by supplying float arrays in planes you can then get the NSImage to put in your view:
NSImage *image = [[NSImage alloc] initWithCGImage:[bitmapImageRep CGImage] size:NSMakeSize(width,height)];
Or, slightly cleaner:
NSImage *image = [[[NSImage alloc] init] autorelease];
[image addRepresentation:bitmapImageRep];
There is an initialiser which just uses an NSData container:
+ (id)imageRepWithData:(NSData *)bitmapData
although that depends on your bitmapData containing one of the correct bitmap formats.
Ok, got it to work. I had tried the NSBitmapImageRep before (thanks Tim), but the part I was missing was properly converting my floating point data to a byte array. Simply wrapping the raw floats in NSData doesn't do that, so initWithData: returns nil. So the solution was not so much in needing to build up an NSImage float-by-float. In fact, one can similarly build up a bitmap context (using CGBitmapContextCreate, mentioned by HotLicks above) and that works too, once the floating point data has been represented properly.
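For reference, a minimal sketch of that conversion step, assuming the floats are already normalized to 0..1, that a greyscale image is acceptable, and that dataArray is laid out row by row (Nx, Ny and theNSImageView are the names from the question):
// Sketch: convert normalized floats (0..1) to 8-bit grey and wrap them in a rep.
NSBitmapImageRep *rep = [[NSBitmapImageRep alloc]
    initWithBitmapDataPlanes:NULL   // let the rep allocate its own buffer
                  pixelsWide:Nx
                  pixelsHigh:Ny
               bitsPerSample:8
             samplesPerPixel:1
                    hasAlpha:NO
                    isPlanar:NO
              colorSpaceName:NSCalibratedWhiteColorSpace
                bitmapFormat:0
                 bytesPerRow:0       // 0 = let AppKit compute it
                bitsPerPixel:0];
unsigned char *pixels = [rep bitmapData];
NSInteger bytesPerRow = [rep bytesPerRow];
for (long y = 0; y < Ny; y++) {
    for (long x = 0; x < Nx; x++) {
        float v = dataArray[y * Nx + x];              // assumed row-major layout
        pixels[y * bytesPerRow + x] = (unsigned char)(v * 255.0f);
    }
}
NSImage *image = [[NSImage alloc] initWithSize:NSMakeSize(Nx, Ny)];
[image addRepresentation:rep];
[theNSImageView setImage:image];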

QTMovie at 29.97 with QTMakeTime

I'm trying to use QTKit to convert a list of images to a quicktime movie. I've figured out how to do everything except get the frame rate to 29.97. Through other forums and resources, the trick seems to be using something like this:
QTTime frameDuration = QTMakeTime(1001, 30000)
However, all my attempts using this method, or even (1000, 29970), still produce a movie at 30 fps. This is the fps that QuickTime Player reports when the movie is played.
Any ideas? Is there some other way to set the frame rate for the entire movie once it's created?
Here's some sample code:
NSDictionary *outputMovieAttribs = [NSDictionary dictionaryWithObjectsAndKeys:@"jpeg", QTAddImageCodecType, [NSNumber numberWithLong:codecHighQuality], QTAddImageCodecQuality, nil];
QTTime frameDuration = QTMakeTime(1001, 30000);
QTMovie *outputMovie = [[QTMovie alloc] initToWritableFile:@"/tmp/testing.mov" error:nil];
[outputMovie setAttribute:[NSNumber numberWithBool:YES] forKey:QTMovieEditableAttribute];
[outputMovie setAttribute:[NSNumber numberWithLong:30000] forKey:QTMovieTimeScaleAttribute];
if (!outputMovie) {
    printf("ERROR: Chunk: Could not create movie object:\n");
} else {
    int frameID = 0;
    while (frameID < [framePaths count]) {
        NSAutoreleasePool *readPool = [[NSAutoreleasePool alloc] init];
        NSData *currFrameData = [NSData dataWithContentsOfFile:[framePaths objectAtIndex:frameID]];
        NSImage *currFrame = [[NSImage alloc] initWithData:currFrameData];
        if (currFrame) {
            [outputMovie addImage:currFrame forDuration:frameDuration withAttributes:outputMovieAttribs];
            [outputMovie updateMovieFile];
            NSString *newDuration = QTStringFromTime([outputMovie duration]);
            printf("new Duration: %s\n", [newDuration UTF8String]);
            currFrame = nil;
        } else {
            printf("ERROR: Could not add image to movie");
        }
        frameID++;
        [readPool drain];
    }
}
NSString *outputDuration = QTStringFromTime([outputMovie duration]);
printf("output Duration: %s\n", [outputDuration UTF8String]);
Ok, thanks to your code, I could solve the issue. I used the developer tool called Atom Inspector and saw that the data structure looked totally different from the movies I am currently working with. As I said, I have never created a movie from images as you do, but it seems that this is not the way to go if you want to have a proper movie afterwards. QuickTime recognizes the clip as "Photo-JPEG", so not a normal movie file. The reason for this seems to be that the added pictures are NOT added to a movie track but just somewhere in the movie. This can also be seen with Atom Inspector.
With the QTMovieTimeScaleAttribute you set a timeScale that is then never used!
To solve the issue I changed the code just a tiny bit.
NSDictionary *outputMovieAttribs = [NSDictionary dictionaryWithObjectsAndKeys:@"jpeg",
    QTAddImageCodecType, [NSNumber numberWithLong:codecHighQuality],
    QTAddImageCodecQuality, [NSNumber numberWithLong:2997], QTTrackTimeScaleAttribute, nil];
QTTime frameDuration = QTMakeTime(100, 2997);
QTMovie *outputMovie = [[QTMovie alloc] initToWritableFile:@"/Users/flo/Desktop/testing.mov" error:nil];
[outputMovie setAttribute:[NSNumber numberWithBool:YES] forKey:QTMovieEditableAttribute];
[outputMovie setAttribute:[NSNumber numberWithLong:2997] forKey:QTMovieTimeScaleAttribute];
Everything else is unaltered.
Oh, by the way: to print the timeValue and timeScale, you could also do:
NSLog(@"new Duration timeScale : %ld timeValue : %lld \n",
    [outputMovie duration].timeScale, [outputMovie duration].timeValue);
This way you can see better if your code does as desired.
Hope that helps!
Best regards
I have never done what you're trying to do, but I can tell you how to get the desired framerate I guess.
If you "ask" a movie for its current timing information, you always get a QTTime structure, which contains the timeScale and the timeValue.
For a 29.97 fps video, you would get a timeScale of 2997 (for example; see below).
This is the amount of "units" per second.
So, if the playback position of the movie is currently at exactly 2 seconds, you would get a timeValue of 5994.
The frameDuration is therefore 100, because 2997 / 100 = 29.97 fps.
QuickTime cannot handle float values, so you have to convert all the values to a long value by multiplication.
By the way, you don't have to use 100, you could also use 1000 and a timeScale of 29970, or 200 as frame duration and 5994 timeScale. That's all I can tell you from what you get if you read timing information from already existing clips.
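As a quick sanity check, that arithmetic in code (a trivial sketch; nothing here is specific to writing the movie):
QTTime perFrame = QTMakeTime(100, 2997);   // one frame = 100 units on a 2997-units-per-second scale
double fps = (double)perFrame.timeScale / (double)perFrame.timeValue;
NSLog(@"%.2f fps", fps);                   // 29.97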
You wrote that this didn't work out for you, but this is how QuickTime works internally.
You should look into it again!
Best regards

Analyzing bitmaps produced by NSAffineTransform and CILineOverlay filters

I am trying to manipulate an image using a chain of CIFilters, and then examine each byte of the resulting image (bitmap). Long term, I do not need to display the resulting image (bitmap) -- I just need to "analyze" it in memory. But near-term I am displaying it on screen, to help with debugging.
I have some "bitmap examination" code that works as expected when examining the NSImage (bitmap representation) I use as my input (loaded from a JPG file into an NSImage). And it SOMETIMES works as expected when I use it on the outputBitmap produced by the code below. More specifically, when I use an NSAffineTransform filter to create outputBitmap, then outputBitmap contains the data I would expect. But if I use a CILineOverlay filter to create the outputBitmap, none of the bytes in the bitmap have any data in them. I believe both of these filters are working as expected, because when I display their results on screen (via outputImageView), they look "correct." Yet when I examine the outputBitmaps, the one created from the CILineOverlay filter is "empty" while the one created from NSAffineTransform contains data. Furthermore, if I chain the two filters together, the final resulting bitmap only seems to contain data if I run the AffineTransform last. That seems very strange to me.
My understanding (from reading the CI Programming Guide) is that the CIImage should be considered an "image recipe" rather than an actual image, because the image isn't actually created until it is "drawn." Given that, it would make sense that the CIImage bitmap doesn't have data -- but I don't understand why it has data after I run the NSAffineTransform but not after running the CILineOverlay filter. Basically, I am trying to determine whether creating the NSCIImageRep (ir in the code below) from the CIImage (myResult) is equivalent to "drawing" the CIImage -- in other words, whether that should force the bitmap to be populated. If someone knows the answer to this please let me know -- it will save me a few hours of trial-and-error experimenting!
Finally, if the answer is "you must draw to a graphics context" ... then I have another question: would I need to do something along the lines of what is described in the Quartz 2D Programming Guide: Graphics Contexts, listing 2-7 and 2-8: drawing to a bitmap graphics context? That is the path down which I am about to head ... but it seems like a lot of code just to force the bitmap data to be dumped into an array where I can get at it. So if there is an easier or better way please let me know. I just want to take the data (that should be) in myResult and put it into a bitmap array where I can access it at the byte level. And since I already have code that works with an NSBitmapImageRep, unless doing it that way is a bad idea for some reason that is not readily apparent to me, then I would prefer to "convert" myResult into an NSBitmapImageRep.
CIImage *myResult = [transform valueForKey:@"outputImage"];
NSImage *outputImage;
NSCIImageRep *ir = [NSCIImageRep imageRepWithCIImage:myResult];
outputImage = [[[NSImage alloc] initWithSize:
    NSMakeSize(inputImage.size.width, inputImage.size.height)]
    autorelease];
[outputImage addRepresentation:ir];
[outputImageView setImage: outputImage];
NSBitmapImageRep *outputBitmap = [[NSBitmapImageRep alloc] initWithCIImage: myResult];
Thanks,
Adam
Edit #1 -- for Peter H. comment:
Sample code accessing bitmap data...
for (row = 0; row < heightInPixels; row++)
    for (column = 0; column < widthInPixels; column++) {
        if (row == 1340) { //just check this one row, that I know what to expect
            NSLog(@"Row 1340 column %d pixel redByte of pixel is %d", column, thisPixel->redByte);
        }
    }
Results from above (all columns contain the same zero/null value, which is what I called "empty")...
2010-06-13 10:39:07.765 ImageTransform[5582:a0f] Row 1340 column 1664 pixel redByte of pixel is 0
2010-06-13 10:39:07.765 ImageTransform[5582:a0f] Row 1340 column 1665 pixel redByte of pixel is 0
2010-06-13 10:39:07.766 ImageTransform[5582:a0f] Row 1340 column 1666 pixel redByte of pixel is 0
If I change the %d to %h nothing prints at all (blank rather than "0"). If I change it to %@ I get "(null)" on every line, instead of the "0" shown above. On the other hand ... when I run just the NSAffineTransform filter and then execute this code, the bytes printed contain the data I would expect (regardless of how I format the NSLog output, something prints).
Adding more code on 6/14 ...
// prior code retrieves JPG image from disk and loads into NSImage
CIImage *inputCIimage = [[CIImage alloc] initWithBitmapImageRep:inputBitmap];
if (inputCIimage == nil) {
    NSLog(@"Bailing out. Could not create CI Image");
    return;
}
NSLog(@"CI Image created. working on transforms...");
Filter that rotates the image... this was previously in a method, but I have since moved it inline while trying to figure out what is wrong:
// rotate imageIn by degreesToRotate, using an AffineTransform
CIFilter *transform = [CIFilter filterWithName:@"CIAffineTransform"];
[transform setDefaults];
[transform setValue:inputCIimage forKey:@"inputImage"];
NSAffineTransform *affineTransform = [NSAffineTransform transform];
[affineTransform transformPoint: NSMakePoint(inputImage.size.width/2, inputImage.size.height / 2)];
//inputImage.size.width /2.0,inputImage.size.height /2.0)];
[affineTransform rotateByDegrees:3.75];
[transform setValue:affineTransform forKey:@"inputTransform"];
CIImage *myResult2 = [transform valueForKey:@"outputImage"];
Filter to apply CILineOverlay filter... (was also previously in a method)
CIFilter *lineOverlay = [CIFilter filterWithName:@"CILineOverlay"];
[lineOverlay setDefaults];
[lineOverlay setValue: inputCIimage forKey:@"inputImage"];
// start off with default values, then tweak the ones needed to achieve desired results
[lineOverlay setValue: [NSNumber numberWithFloat: .07] forKey:@"inputNRNoiseLevel"]; //.07 (0-1)
[lineOverlay setValue: [NSNumber numberWithFloat: .71] forKey:@"inputNRSharpness"]; //.71 (0-2)
[lineOverlay setValue: [NSNumber numberWithFloat: 1] forKey:@"inputEdgeIntensity"]; //1 (0-200)
[lineOverlay setValue: [NSNumber numberWithFloat: .1] forKey:@"inputThreshold"]; //.1 (0-1)
[lineOverlay setValue: [NSNumber numberWithFloat: 50] forKey:@"inputContrast"]; //50 (.25-200)
CIImage *myResult2 = [lineOverlay valueForKey:@"outputImage"]; //apply the filter to the CIImage object and return it
Finally ... the code that uses the results...
if (myResult2 == nil)
    NSLog(@"Transformations failed");
else {
    NSLog(@"Finished transformations successfully ... now render final image");
    // make an NSImage from the CIImage (to display it, during initial development)
    NSImage *outputImage;
    // show the transformed output on screen...
    NSCIImageRep *ir = [NSCIImageRep imageRepWithCIImage:myResult2];
    outputImage = [[[NSImage alloc] initWithSize:
        NSMakeSize(inputImage.size.width, inputImage.size.height)]
        autorelease];
    [outputImage addRepresentation:ir];
    [outputImageView setImage: outputImage]; //rotatedImage
At this point the transformed image displays on screen just fine, regardless of which transform I apply and which one I leave commented out. It even works just fine if I "chain" the transforms together so that the output of #1 goes into #2. So, to me, this seems to indicate that the filters are working.
However ... the code that I really need to use is the "bitmap analysis" code that examines the bitmap that is in (or "should be" in) myResult2. And that code works only on the bitmap resulting from the CIAffineTransform filter. When I use it to examine the bitmap resulting from the CILineOverlay, the entire bitmap seems to contain only zeroes.
So here is the code used for that analysis...
// this is the next line after the [outputImageView ...] shown above
[self findLeftEdge :myResult2];
And then this is the code from the findLeftEdge method...
- (void) findLeftEdge :(CIImage*)imageInCI {
// find the left edge of the input image, assuming it will be the first non-white pixel
// because we have already applied the Threshold filter
NSBitmapImageRep *outputBitmap = [[NSBitmapImageRep alloc] initWithCIImage: imageInCI];
if (outputBitmap == nil)
    NSLog(@"unable to create outputBitmap");
else
    NSLog(@"outputBitmap image rep created -- samples per pixel = %d", [outputBitmap samplesPerPixel]);
RGBAPixel
    *thisPixel,
    *bitmapPixels = (RGBAPixel *)[outputBitmap bitmapData];
int
    row,
    column,
    widthInPixels = [outputBitmap pixelsWide],
    heightInPixels = [outputBitmap pixelsHigh];
//RGBAPixel *leftEdge [heightInPixels];
struct {
    int pixelNumber;
    unsigned char pixelValue;
} leftEdge[heightInPixels];
// Is this necessary, or does Objective-C always initialize it to zero for me?
for (row = 0; row < heightInPixels; row++) {
    leftEdge[row].pixelNumber = 0;
    leftEdge[row].pixelValue = 0;
}
for (row = 0; row < heightInPixels; row++)
    for (column = 0; column < widthInPixels; column++) {
        thisPixel = (&bitmapPixels[((widthInPixels * row) + column)]);
        //red is as good as any channel, for this test (assume threshold filter already applied)
        //this should "save" the column number of the first non-white pixel encountered
        if (leftEdge[row].pixelValue < thisPixel->redByte) {
            leftEdge[row].pixelValue = thisPixel->redByte;
            leftEdge[row].pixelNumber = column;
        }
        // For debugging, display contents of each pixel
        //NSLog(@"Row %d column %d pixel redByte of pixel is %@",row,column,thisPixel->redByte);
        // For debugging, display contents of each pixel on one row
        //if (row == 1340) {
        //    NSLog(@"Row 1340 column %d pixel redByte of pixel is %@",column,thisPixel->redByte);
        //}
    }
// For debugging, display the left edge that we discovered
for (row = 0; row < heightInPixels; row++) {
    NSLog(@"Left edge on row %d was at pixel #%d", row, leftEdge[row].pixelNumber);
}
[outputBitmap release];
}
Here is another filter. When I use it I do get data in the "output bitmap" (just like the rotation filter). So it is just the CILineOverlay filter that does not yield up its data for me in the resulting bitmap ...
- (CIImage*) applyCropToCI:(CIImage*) imageIn
                rectToCrop:(CIVector*) rectToCrop {
    // crop the rectangle specified from the input image
    CIFilter *crop = [CIFilter filterWithName:@"CICrop"];
    [crop setDefaults];
    [crop setValue:imageIn forKey:@"inputImage"];
    // [crop setValue:rectToCrop forKey:@"inputRectangle"]; //vector defaults to 0,0,300,300
    //CIImage * myResult = [transform valueForKey:@"outputImage"]; //this is the way it was "in-line", before putting this code into a method
    return [crop valueForKey:@"outputImage"]; //does this need to be retained?
}
You claim that the bitmap data contains “all zeroes”, but you're only looking at one byte per pixel. You're assuming that the first component is the red component, and you're assuming that the data is one byte per component; if the data is alpha-first or floating-point, one or both of these assumptions will be wrong.
Create a bitmap context in whatever format you want using a buffer you allocate, and render the image into that context. Your buffer will then contain the image in the format you expect.
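For what it's worth, a minimal sketch of that approach, assuming an 8-bit premultiplied RGBA layout is acceptable (myResult2 is the CIImage from the question; drawImage:inRect:fromRect: is the CIContext drawing call from that era):
// Sketch: render the CIImage into a buffer we allocate ourselves,
// so the layout (8-bit RGBA, premultiplied, known bytesPerRow) is under our control.
CGRect extent = [myResult2 extent];
size_t width = CGRectGetWidth(extent);
size_t height = CGRectGetHeight(extent);
size_t bytesPerRow = width * 4;
unsigned char *buffer = calloc(height, bytesPerRow);   // zero-filled
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef cgContext = CGBitmapContextCreate(buffer, width, height, 8, bytesPerRow,
                                               colorSpace, kCGImageAlphaPremultipliedLast);
CIContext *ciContext = [CIContext contextWithCGContext:cgContext options:nil];
[ciContext drawImage:myResult2 inRect:extent fromRect:extent];
// buffer[row * bytesPerRow + col * 4 + 0] is now the red byte of that pixel, and so on.
CGContextRelease(cgContext);
CGColorSpaceRelease(colorSpace);
// ... analyze buffer, then free(buffer);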
You might also want to switch from structure-based access to byte-based access—i.e., pixels[(row*bytesPerRow)+col], incrementing col by the number of components per pixel. Endianness can easily become a headache when you use structures to access the components.
for (row = 0; row < heightInPixels; row++)
    for (column = 0; column < widthInPixels; column++) {
        if (row == 1340) { //just check this one row, that I know what to expect
            NSLog(@"Row 1340 column %d pixel redByte of pixel is %d", column, thisPixel->redByte);
        }
    }
Aside from the syntax error, this code doesn't work because you never assigned to thisPixel. You are looping through indexes for nothing, since you never actually look up a pixel value at those indexes and assign it to thisPixel in order to inspect it.
Add such an assignment before the NSLog statement.
Furthermore, if the only row you care about is 1340, there's no need to loop through rows. Check using an if statement whether 1340 is less than the height, and if it is, then do only the columns loop. (Also, don't embed magic number literals like this in your code. Give that constant a name that explains the significance of the number 1340—i.e., why it's the only row you care about.)
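Putting both points together, the debugging loop would look something like this (a sketch that keeps the question's RGBAPixel struct, even though byte-based access is the more robust option):
// Sketch: assign thisPixel from the indexes before logging, and skip the
// outer loop since only one row is of interest.
int interestingRow = 1340; // the row whose contents are known in advance
if (interestingRow < heightInPixels) {
    for (column = 0; column < widthInPixels; column++) {
        thisPixel = &bitmapPixels[(widthInPixels * interestingRow) + column];
        NSLog(@"Row %d column %d redByte is %d", interestingRow, column, thisPixel->redByte);
    }
}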

How to load PNG with alpha with Cocoa?

I'm developing an iPhone OpenGL application, and I need to use some textures with transparency. I have saved the images as PNGs. I already have all the code to load PNGs as OpenGL textures and render them. This is working fine for all images that don't have transparency (all alpha values are 1.0). However, now that I'm trying to load and use some PNGs that have transparency (varying alpha values), my texture is messed up, like it loaded the data incorrectly or something.
I'm pretty sure this is due to my loading code which uses some of the Cocoa APIs. I will post the relevant code here though.
What is the best way to load PNGs, or any image format which supports transparency, on OSX/iPhone? This method feels roundabout. Rendering it to a CGContext and getting the data seems weird.
* LOADING *
CGImageRef CGImageRef_load(const char *filename) {
    NSString *path = [NSString stringWithFormat:@"%@/%s",
        [[NSBundle mainBundle] resourcePath],
        filename];
    UIImage *img = [UIImage imageWithContentsOfFile:path];
    if (img) return [img CGImage];
    return NULL;
}
unsigned char* CGImageRef_data(CGImageRef image) {
    NSInteger width = CGImageGetWidth(image);
    NSInteger height = CGImageGetHeight(image);
    unsigned char *data = (unsigned char*)malloc(width*height*4);
    CGContextRef context = CGBitmapContextCreate(data,
                                                 width, height,
                                                 8, width * 4,
                                                 CGImageGetColorSpace(image),
                                                 kCGImageAlphaPremultipliedLast);
    CGContextDrawImage(context,
                       CGRectMake(0.0, 0.0, (float)width, (float)height),
                       image);
    CGContextRelease(context);
    return data;
}
* UPLOADING *
(define (image-opengl-upload data width height)
  (let ((tex (alloc-opengl-image)))
    (glBindTexture GL_TEXTURE_2D tex)
    (glTexEnvi GL_TEXTURE_ENV GL_TEXTURE_ENV_MODE GL_DECAL)
    (glTexImage2D GL_TEXTURE_2D
                  0
                  GL_RGBA
                  width
                  height
                  0
                  GL_RGBA
                  GL_UNSIGNED_BYTE
                  (->void-array data))
    (glTexParameteri GL_TEXTURE_2D
                     GL_TEXTURE_MIN_FILTER
                     GL_LINEAR)
    (glTexParameteri GL_TEXTURE_2D
                     GL_TEXTURE_MAG_FILTER
                     GL_LINEAR)
    (glTexParameteri GL_TEXTURE_2D
                     GL_TEXTURE_WRAP_S
                     GL_CLAMP_TO_EDGE)
    (glTexParameteri GL_TEXTURE_2D
                     GL_TEXTURE_WRAP_T
                     GL_CLAMP_TO_EDGE)
    (glBindTexture GL_TEXTURE_2D 0)
    tex))
To be explicit…
The most common issue with loading textures using Core Graphics is that it insists on converting the data to premultiplied alpha format. In the case of PNGs included in the bundle, this is actually done in a preprocessing step in the build process. Failing to take this into account results in dark banding around blended objects.
The way to take it into account is to use glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA) instead of glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA). If you want to use alpha channels for something other than regular blending, your only option is to switch to a different format and loader (as prideout suggested, but for different reasons).
ETA: the premultiplication issue also exists under Mac OS X, but the preprocessing is iPhone-specific.
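Concretely, the blend setup looks like this at the GL level (a small sketch for the fixed-function pipeline used in the question):
glEnable(GL_BLEND);
// Premultiplied-alpha source: the color channels were already multiplied by
// alpha when Core Graphics drew the PNG, so the source factor is GL_ONE.
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
// For straight (non-premultiplied) alpha you would instead use:
// glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);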
Your Core Graphics surface should be cleared to all zeroes before you render to it, so I recommend using calloc instead of malloc, or adding a memset after the malloc.
Also, I'm not sure you want your TexEnv set to GL_DECAL. You might want to leave it set to its default (GL_MODULATE).
If you'd like to avoid Core Graphics for decoding PNG images, I recommend loading in a PVR file instead. PVR is an exceedingly simple file format. An app called PVRTexTool is included with the Imagination SDK which makes it easy to convert from PNG to PVR. The SDK also includes some sample code that shows how to parse their file format.
I don't know anything about OpenGL, but Cocoa abstracts this functionality with NSImage/UIImage.
You can use PVRs, but they introduce compression artifacts, so I would only recommend them for 3D object textures, or for textures that don't need more detail than PVR can deliver -- gradients in particular suffer.