CGContextShowGlyphsAtPoint DEPRECATED - ios7

After spending quite a bit of time trying to display "Thai Phonetic YK" fonts in an iPhone app, I finally got things sorted out and working.
Though it is functioning, the compiler still complains (with a warning) about one line of code in the (void)drawRect: method of the class that performs the display:
CGContextShowGlyphsAtPoint(context, 20, 50, textToPrint, textLength);
The compiler tells me that this call is DEPRECATED. My question is: how am I supposed to change it?
Even though I searched the net for an answer, I didn't find anything clear.
The documentation says something like "Use Core Text instead", which is far too vague to be considered an answer.

Core Graphics:
void CGContextShowGlyphsAtPoint (
CGContextRef context,
CGFloat x,
CGFloat y,
const CGGlyph glyphs[],
size_t count
);
Core Text:
void CTFontDrawGlyphs (
CTFontRef font,
const CGGlyph glyphs[],
const CGPoint positions[],
size_t count,
CGContextRef context
);
The Core Text version requires a CTFontRef (in the Core Graphics version, the font is expected to be set in the context).
You can obtain a CTFontRef from a UIFont:
CTFontRef ctFont = CTFontCreateWithName( (__bridge CFStringRef)uiFont.fontName, uiFont.pointSize, NULL);
The CT version also requires an array of points, one for each glyph. Assuming you were drawing a single glyph with the CG code, you could create the array like this:
CGPoint point = CGPointMake(x, y);
const CGPoint* positions = &point;
This change does mean you will need a point position for each glyph. In my case the extra work was minimal: I was advancing the typesetter one character at a time (for curved text) so I had to do this calculation anyway.
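Putting those pieces together, a drop-in replacement for the deprecated call might look like the sketch below. It assumes uiFont is the UIFont whose glyphs you generated, and that textToPrint/textLength are the same glyph buffer and count from the original call; the per-glyph positions are built by accumulating the font's advances with CTFontGetAdvancesForGlyphs (untested, so treat it as a starting point):
CTFontRef ctFont = CTFontCreateWithName((__bridge CFStringRef)uiFont.fontName,
                                        uiFont.pointSize, NULL);

// The old call advanced each glyph automatically; CTFontDrawGlyphs wants an
// explicit position per glyph, so accumulate the advances ourselves.
CGSize advances[textLength];
CGPoint positions[textLength];
CTFontGetAdvancesForGlyphs(ctFont, kCTFontOrientationDefault,
                           textToPrint, advances, textLength);
CGPoint pen = CGPointMake(20, 50); // same start point as the deprecated call
for (size_t i = 0; i < textLength; i++) {
    positions[i] = pen;
    pen.x += advances[i].width;
    pen.y += advances[i].height;
}

CTFontDrawGlyphs(ctFont, textToPrint, positions, textLength, context);
CFRelease(ctFont);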
You may be able to typeset one text run at a time with CTRun:
void CTRunDraw (
CTRunRef run,
CGContextRef context,
CFRange range );
That could save you the trouble of iterating over each glyph. You could use it something like this...
CGContextSetTextMatrix(context, CGAffineTransformIdentity);
CTLineRef line = CTLineCreateWithAttributedString(
    (__bridge CFAttributedStringRef)self.attributedString);
CFArrayRef runs = CTLineGetGlyphRuns(line);
CFIndex runCount = CFArrayGetCount(runs);
for (CFIndex runIndex = 0; runIndex < runCount; ++runIndex) {
    CTRunRef run = (CTRunRef)CFArrayGetValueAtIndex(runs, runIndex);
    [self adjustContextForRun:run];
    CTRunDraw(run, context, CFRangeMake(0, 0)); // (0, 0) means "draw the entire run"
}
CFRelease(line);
(That's just a sketch; the implementation will depend on your needs, and I haven't tested the code.)
adjustContextForRun would be responsible for setting things like font and initial draw position for the run.
Each CTRun represents a subrange of an attributed string where all of the attributes are the same. If you don't vary attributes over a line, you can abstract this further:
CTLineDraw(line, context);
I don't know if that level of abstraction would work in your case (I am only used to working with Latin fonts), but it's worth knowing it's there; it saves a lot of lower-level trouble.
You can set the initial drawing position for the text with this:
void CGContextSetTextPosition (
CGContextRef c,
CGFloat x,
CGFloat y );
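For the simple case, the whole thing collapses to a few lines (again just a sketch, assuming the font is attached to self.attributedString via kCTFontAttributeName rather than set on the context):
CGContextSetTextMatrix(context, CGAffineTransformIdentity);
// In a UIKit drawRect: you may also need to flip the CTM so the text
// is not drawn upside down.
CGContextSetTextPosition(context, 20, 50);

CTLineRef line = CTLineCreateWithAttributedString(
    (__bridge CFAttributedStringRef)self.attributedString);
CTLineDraw(line, context);
CFRelease(line);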

Related

How to get pixel coordinates when CTRunDelegate callbacks are called

I have dynamic text drawn into a custom UIImageView. Text can contain combinations of characters like :-) or ;-), which I'd like to replace with PNG images.
I apologize for the amount of code below.
Code that creates CTRunDelegate follows:
CTRunDelegateCallbacks callbacks;
callbacks.version = kCTRunDelegateVersion1;
callbacks.dealloc = emoticonDeallocationCallback;
callbacks.getAscent = emoticonGetAscentCallback;
callbacks.getDescent = emoticonGetDescentCallback;
callbacks.getWidth = emoticonGetWidthCallback;
// Functions: emoticonDeallocationCallback, emoticonGetAscentCallback, emoticonGetDescentCallback, emoticonGetWidthCallback are properly defined callback functions
CTRunDelegateRef ctrun_delegate = CTRunDelegateCreate(&callbacks, self);
// self is what delegate will be using as void*refCon parameter
Code for creating attributed string is:
NSMutableAttributedString* attString = [[NSMutableAttributedString alloc] initWithString:self.data attributes:attrs];
// self.data is string containing text
// attrs is just setting for font type and color
I've then added CTRunDelegate to this string:
CFAttributedStringSetAttribute((CFMutableAttributedStringRef)attString, range, kCTRunDelegateAttributeName, ctrun_delegate);
// where range is for one single emoticon location in text (eg. location=5, length = 2)
// ctrun_delegate is previously created delegate for certain type of emoticon
Callback functions are defined like:
void emoticonDeallocationCallback(void*refCon)
{
// dealloc code goes here
}
CGFloat emoticonGetAscentCallback(void * refCon)
{
return 10.0;
}
CGFloat emoticonGetDescentCallback(void * refCon)
{
return 4.0;
}
CGFloat emoticonGetWidthCallback(void * refCon)
{
return 30.0;
}
Now all this works fine - I get the callback functions called, and I can see that the width, ascent and descent affect how the text before and after the detected "emoticon char combo" is drawn.
Now I'd like to draw an image at the spot where this "hole" is made, but I can't find any documentation explaining how to get the pixel (or other) coordinates in each callback.
Can anyone guide me on how to get these?
Thanks in advance!
P.S.
As far as I've seen, the callbacks are called when CTFramesetterCreateWithAttributedString is called, so basically no drawing is going on yet. I couldn't find any example showing how to match an emoticon location to a place in the drawn text. Can it be done?
I've found a solution!
To recap: the issue is to draw text using Core Text into a UIImageView, and this text, aside from the obvious font type and color formatting, needs to have parts of the text replaced with small images, inserted where the replaced sub-text was (e.g. :-) becomes a smiley face).
Here's how:
1) Search the provided string for all supported emoticons (e.g. search for the :-) substring):
NSRange found = [self.rawtext rangeOfString:emoticonString options:NSCaseInsensitiveSearch range:searchRange];
If an occurrence is found, store it in a CFRange:
CFRange cf_found = CFRangeMake(found.location, found.length);
If you're searching for multiple different emoticons (e.g. :) :-) ;-) ;) etc.), sort all found occurrences in ascending order of their locations.
2) Replace each emoticon substring (e.g. :-)) that you want to render as an image with a single space. After this, you must also update the found locations to match these new spaces. It's not as complicated as it sounds.
3) Use CTRunDelegateCreate for each emoticon to add a callback to the newly created string (the one that no longer contains :-) but [SPACE] instead).
4) The callback functions should obviously return the correct emoticon width, based on the image size you will use.
5) As soon as you execute CTFramesetterCreateWithAttributedString, these callbacks are executed as well, giving the framesetter the data it will later use when creating glyphs for drawing in the given frame path.
6) Now comes the part I missed: once you create the frame for the framesetter using CTFramesetterCreateFrame, cycle through all found emoticons and do the following:
Get the number of lines from the frame and the origins of the lines:
CFArrayRef lines = CTFrameGetLines(frame);
int linenum = CFArrayGetCount(lines);
CGPoint origins[linenum];
CTFrameGetLineOrigins(frame, CFRangeMake(0, linenum), origins);
Cycle through all lines and, for each emoticon, look for the glyph run that contains it (based on range.location for each emoticon and the number of glyphs in each run):
(Inspiration came from here: CTRunGetImageBounds returning inaccurate results)
int eloc = emoticon.range.location; // emoticon's location in text
for( int i = 0; i < linenum; i++ )
{
    CTLineRef line = (CTLineRef)CFArrayGetValueAtIndex(lines, i);
    CFArrayRef gruns = CTLineGetGlyphRuns(line);
    int grunnum = CFArrayGetCount(gruns);
    for( int j = 0; j < grunnum; j++ )
    {
        CTRunRef grun = (CTRunRef)CFArrayGetValueAtIndex(gruns, j);
        int glyphnum = CTRunGetGlyphCount(grun);
        if( eloc > glyphnum )
        {
            eloc -= glyphnum;
        }
        else
        {
            CFRange runRange = CTRunGetStringRange(grun);
            CGRect runBounds;
            CGFloat ascent, descent;
            runBounds.size.width = CTRunGetTypographicBounds(grun, CFRangeMake(0, 0), &ascent, &descent, NULL);
            runBounds.size.height = ascent + descent;
            CGFloat xOffset = CTLineGetOffsetForStringIndex(line, runRange.location, NULL);
            runBounds.origin.x = origins[i].x + xOffset;
            runBounds.origin.y = origins[i].y;
            runBounds.origin.y -= descent;
            emoticon.location = CGPointMake(runBounds.origin.x + runBounds.size.width, runBounds.origin.y);
            emoticon.size = CGPointMake([emoticon EmoticonWidth], runBounds.size.height);
            break;
        }
    }
}
Please do not take this code as copy-paste-and-it-will-work, as I had to strip out lots of other stuff - it is just to explain what I did, not for you to use as is.
7) Finally I can create the context and draw both the text and the emoticons in the correct places:
if( currentContext )
{
    CGContextSaveGState(currentContext);
    {
        CGContextSetTextMatrix(currentContext, CGAffineTransformIdentity);
        CTFrameDraw(frame, currentContext);
    }
    CGContextRestoreGState(currentContext);
    if( foundEmoticons != nil )
    {
        for( FoundEmoticon *emoticon in foundEmoticons )
        {
            [emoticon DrawInContext:currentContext];
        }
    }
}
And the function that draws an emoticon (I just made it draw its border and pivot point):
-(void) DrawInContext:(CGContextRef)currentContext
{
    CGFloat R = round(10.0 * [self randomFloat]) * 0.1;
    CGFloat G = round(10.0 * [self randomFloat]) * 0.1;
    CGFloat B = round(10.0 * [self randomFloat]) * 0.1;
    CGContextSetRGBStrokeColor(currentContext, R, G, B, 1.0);
    CGFloat pivotSize = 8.0;
    // Cross marking the pivot point
    CGContextBeginPath(currentContext);
    CGContextMoveToPoint(currentContext, self.location.x, self.location.y - pivotSize);
    CGContextAddLineToPoint(currentContext, self.location.x, self.location.y + pivotSize);
    CGContextMoveToPoint(currentContext, self.location.x - pivotSize, self.location.y);
    CGContextAddLineToPoint(currentContext, self.location.x + pivotSize, self.location.y);
    CGContextDrawPath(currentContext, kCGPathStroke);
    // Rectangle marking the emoticon's bounds
    CGContextBeginPath(currentContext);
    CGContextMoveToPoint(currentContext, self.location.x, self.location.y);
    CGContextAddLineToPoint(currentContext, self.location.x + self.size.x, self.location.y);
    CGContextAddLineToPoint(currentContext, self.location.x + self.size.x, self.location.y + self.size.y);
    CGContextAddLineToPoint(currentContext, self.location.x, self.location.y + self.size.y);
    CGContextAddLineToPoint(currentContext, self.location.x, self.location.y);
    CGContextDrawPath(currentContext, kCGPathStroke);
}
Resulting image: http://i57.tinypic.com/rigis5.png
:-)))
P.S.
Here is the result image with multiple lines: http://i61.tinypic.com/2pyce83.png
P.P.S.
Here is the result image with multiple lines and with a PNG image for the emoticon:
http://i61.tinypic.com/23ixr1y.png
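For completeness, swapping the stroked rectangle in DrawInContext: for the actual PNG boils down to a single CGContextDrawImage call. This is only a sketch; the image property holding the emoticon's UIImage is hypothetical and not part of the code above:
-(void) DrawImageInContext:(CGContextRef)currentContext
{
    // self.location and self.size come from step 6; self.image is an assumed
    // UIImage property holding the emoticon PNG.
    CGRect imageRect = CGRectMake(self.location.x, self.location.y,
                                  self.size.x, self.size.y);
    CGContextDrawImage(currentContext, imageRect, self.image.CGImage);
}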
Are you drawing the text in a UITextView object? If so, then you can ask its layout manager where the emoticon is drawn, specifically with the -[NSLayoutManager boundingRectForGlyphRange:inTextContainer:] method (also grab the text container of the text view).
Note that it expects a glyph range, not a character range. Multiple characters can make up a single glyph, so you will need to convert between them. Again, NSLayoutManager has methods to convert between character ranges and glyph ranges.
Alternatively, if you're not drawing inside a text view, you can create your own layout manager and text container and do the same.
A text container describes a region on the screen where text will be drawn; typically it's a rectangle, but it can be any shape. A layout manager figures out how to fit the text within whatever shape the text container describes.
Which brings me to the other approach you could take. You can modify the text container object, adding a blank space where no text can be rendered, and put a UIImageView inside that blank space. Use the layout manager to figure out where the blank spaces should be.
Under iOS 7 and later, you can do this by adding "exclusion paths" to the text container, which is just an array of paths (rectangles probably) where each image is. For earlier versions of iOS you need to subclass NSTextContainer and override lineFragmentRectForProposedRect:atIndex:writingDirection:remainingRect:.
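A rough sketch of the text-view route (iOS 7 TextKit; textView, emoticonCharRange, and emoticonImageView are placeholders for your own objects):
NSLayoutManager *layoutManager = textView.layoutManager;
NSTextContainer *container = textView.textContainer;

// The layout manager works in glyph indexes, so convert the character range first.
NSRange glyphRange = [layoutManager glyphRangeForCharacterRange:emoticonCharRange
                                           actualCharacterRange:NULL];
CGRect rect = [layoutManager boundingRectForGlyphRange:glyphRange
                                       inTextContainer:container];

// Either place an image view over that rect (offset by the text view's inset)...
emoticonImageView.frame = CGRectOffset(rect,
                                       textView.textContainerInset.left,
                                       textView.textContainerInset.top);

// ...or carve the space out of the layout entirely with an exclusion path:
container.exclusionPaths = @[ [UIBezierPath bezierPathWithRect:rect] ];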

NSPoint and IMKCandidate Window Placement

I am using the InputMethodKit and trying to place a window underneath some text.
_currentClient is an IMKTextInput instance
candidates is an IMKCandidates instance
// Get the current location to place the window
NSRect tempRect = NSMakeRect(0, 0, 0, 0);
NSDictionary* clientData = [_currentClient attributesForCharacterIndex:0 lineHeightRectangle:&tempRect];
NSPoint* windowInsertionPoint = (NSPoint*)[clientData objectForKey:@"IMKBaseline"];
...
[candidates setCandidateFrameTopLeft:*windowInsertionPoint];
[candidates showCandidates];
Now, I know that the windowInsertionPoint variable is fine; when I debug I can see its value, e.g. NSPoint: {647, 365}.
However when I use it, the candidate window just shows in the bottom left corner of the screen. I haven't worked with screen placement before, so help is appreciated.
If I pass arbitrary static values to setCandidateFrameTopLeft, the window is placed on the screen. The following works:
[candidates setCandidateFrameTopLeft:NSMakePoint(401, 354)];
Is it a pointer problem?
OK, the solution to this is that I am an idiot. Here is the code you need:
NSRect tempRect;
NSDictionary* clientData = [_currentClient attributesForCharacterIndex:0 lineHeightRectangle:&tempRect];
NSPoint windowInsertionPoint = NSMakePoint(NSMinX(tempRect), NSMinY(tempRect));
The documentation for IMKTextInput's attributesForCharacterIndex:lineHeightRectangle: says:
lineRect: On return, a rectangle that frames a one-pixel wide rectangle with the height of the line. This rectangle is oriented the same way the line is oriented.
This means it returns an NSRect in the variable you passed for the lineHeightRectangle value. The important point is that the location of that NSRect is the location of the character you are searching for. So you just need to make a point from that rectangle and use NSMinY for the Y value. The rectangle is only a single pixel wide, so Min/Max for X are basically the same.
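Putting it together, the placement then looks like this (a short sketch of the corrected sequence):
NSRect tempRect = NSZeroRect;
// The returned attributes dictionary is not needed; we only want tempRect filled in.
[_currentClient attributesForCharacterIndex:0 lineHeightRectangle:&tempRect];
NSPoint windowInsertionPoint = NSMakePoint(NSMinX(tempRect), NSMinY(tempRect));
[candidates setCandidateFrameTopLeft:windowInsertionPoint];
[candidates showCandidates];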
You probably don't have this issue anymore, but this works too, for future reference:
[candidates show:kIMKLocateCandidatesBelowHint];

32 bits big endian floating point data to CGImage

I am trying to write an application which reads FITS images. FITS stands for Flexible Image Transport System; it is a format primarily used to store scientific data related to astrophysics, and secondarily it is used by most amateur astronomers who take pictures of the sky with CCD cameras. So FITS files contain images, but they may also contain tables and other kinds of data. As I am new to Objective-C and Cocoa programming (I started this project one year ago, but since I am busy, I have hardly touched it for a year!), I started by trying to create a library which allows me to convert the image content of the file to an NSImageRep. FITS image binary data may be 8 bit/pix, 16 bit/pix, or 32 bit/pix unsigned integer, or 32 bit/pix or 64 bit/pix floating point, all in big endian.
I managed to get an image representation for grey scale FITS images in 16 bit/pix and 32 bit/pix unsigned integer, but I get very weird behaviour for 32 bit/pix floating point (and the problem is worse for RGB 32 bit/pix floating point). So far, I haven't tested 8 bit/pix integer data or RGB images based on 16 bit/pix and 32 bit/pix integer data, because I haven't yet found example files on the web.
Below is my code to create a grey scale image from a FITS file:
-(void) ConstructImgGreyScale
{
    CGBitmapInfo bitmapInfo;
    int bytesPerRow;
    switch ([self BITPIX]) // BITPIX : Number bits/pixel. Information extracted from the FITS header
    {
        case 8:
            bytesPerRow = sizeof(int8_t);
            bitmapInfo = kCGImageAlphaNone;
            break;
        case 16:
            bytesPerRow = sizeof(int16_t);
            bitmapInfo = kCGImageAlphaNone | kCGBitmapByteOrder16Big;
            break;
        case 32:
            bytesPerRow = sizeof(int32_t);
            bitmapInfo = kCGImageAlphaNone | kCGBitmapByteOrder32Big;
            break;
        case 64:
            bytesPerRow = sizeof(int64_t);
            bitmapInfo = kCGImageAlphaNone;
            break;
        case -32:
            bytesPerRow = sizeof(Float32);
            bitmapInfo = kCGImageAlphaNone | kCGBitmapByteOrder32Big | kCGBitmapFloatComponents;
        case -64:
            bytesPerRow = sizeof(Float64);
            bitmapInfo = kCGImageAlphaNone | kCGBitmapFloatComponents;
            break;
        default:
            NSLog(@"Unknown pixel bit size");
            return;
    }
    [self setBitsPerSample:abs([self BITPIX])];
    [self setColorSpaceName:NSCalibratedWhiteColorSpace];
    [self setPixelsWide:[self NAXESofAxis:0]]; // <- Size of the X axis. Extracted from FITS header
    [self setPixelsHigh:[self NAXESofAxis:1]]; // <- Size of the Y axis. Extracted from FITS header
    [self setSize: NSMakeSize( 2*[self pixelsWide], 2*[self pixelsHigh])];
    [self setAlpha: NO];
    [self setOpaque:NO];
    CGDataProviderRef provider = CGDataProviderCreateWithCFData((CFDataRef) Img);
    CGFloat Scale[2]={0,28};
    image = CGImageCreate([self pixelsWide],
                          [self pixelsHigh],
                          [self bitsPerSample],
                          [self bitsPerSample],
                          [self pixelsWide]*bytesPerRow,
                          [[NSColorSpace deviceGrayColorSpace] CGColorSpace],
                          bitmapInfo,
                          provider,
                          NULL,
                          NO,
                          kCGRenderingIntentDefault);
    CGDataProviderRelease(provider);
    return;
}
And here is a snapshot of the result for 32 bit/pix floating point data: NASA HST picture!
The image seems to be shifted to the left, but what is more annoying is that I get two representations of the same image (in the upper and lower parts of the frame) within the same frame.
And for some other files the behaviour is even stranger:
Star Field 1 (for the other links, see the comments; as a new user, I cannot include more than two links in this text, nor can I embed the images directly).
All three star field images are representations of the same FITS file content. I obtain a correct representation of the image in the bottom part of the frame (the stars are too saturated, but I haven't played with the encoding yet). But in the upper part, each time I open the same file I get a different representation of the image. It looks like each time I open this file, it does not take the same sequence of bytes to produce the image representation (at least for the upper part).
Also, I do not know whether the image which is duplicated at the bottom contains half of the data and the upper one the other half, or whether it is simply a copy of the data.
When I convert the content of my data to a primitive format (human readable numbers), the numbers are consistent with what should be in the pixels, at the right positions. This leads me to think the problem is not coming from the data but from the way CGImage interprets the data, i.e. I am wrong somewhere in the arguments I pass to the CGImageCreate function.
In the case of RGB FITS image data, I end up with 18 images in my frame: 6 copies of each of the R, G and B images, all in grey scale. Note that in the RGB case my code is different.
What am I doing wrong?
OK, I finally found the solution to one of my problems, concerning the duplication of the image. It was a very stupid mistake and I am not proud of myself for not having found it earlier: in the code, I forgot the break in the case -32.
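For completeness, the corrected case reads:
case -32:
    bytesPerRow = sizeof(Float32);
    bitmapInfo = kCGImageAlphaNone | kCGBitmapByteOrder32Big | kCGBitmapFloatComponents;
    break; // <- this missing break caused the duplicated image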
The question remains about the shift of the picture. I do not see the shift when I open 32 bit integer images, but it appears with the 32 bit floating point data. Does anyone have an idea where this shift could come from in my code? Is it due to the way I construct the image, or could it be due to the way I draw it?
Below is the piece of code I use to draw the image. Since the image was initially upside down, I made a little change of coordinates.
- (bool)draw {
    CGContextRef context = [[NSGraphicsContext currentContext] graphicsPort];
    if (!context || !image) {
        return NO;
    }
    NSSize size = [self size];
    // Flip the coordinate system so the image is not drawn upside down.
    CGContextTranslateCTM(context, 0, size.height);
    CGContextScaleCTM(context, 1, -1);
    CGContextDrawImage(context, CGRectMake(0, 0, size.width, size.height), image);
    return YES;
}

Dynamically allocating length to an objective C static array

Hi, I am relatively new to programming on iOS and using Objective-C. Recently I have come across an issue I cannot seem to solve. I am writing an OBJ model loader to use within my iOS programming. For this I use two arrays, as below:
static CGFloat modelVertices[360*9]={};
static CGFloat modelColours[360*12]={};
As can be seen, the length is currently allocated with a hard-coded value of 360 (the number of faces in a particular model). Is there no way this can be allocated dynamically, from a value calculated after reading the OBJ file, as is done below?
int numOfVertices = //whatever this is read from file;
static CGFloat modelColours[numOfVertices*12]={};
I have tried using NSMutableArray, but found it difficult to use because, when it comes to actually drawing the mesh, I need to use this code:
-(void)render
{
// load arrays into the engine
glVertexPointer(vertexStride, GL_FLOAT, 0, vertexes);
glEnableClientState(GL_VERTEX_ARRAY);
glColorPointer(colorStride, GL_FLOAT, 0, colors);
glEnableClientState(GL_COLOR_ARRAY);
//render
glDrawArrays(renderStyle, 0, vertexCount);
}
As you can see the command glVertexPointer requires the values as a CGFloat array:
glVertexPointer (GLint size, GLenum type, GLsizei stride, const GLvoid *pointer);
You could use a C-style malloc to dynamically allocate space for the array.
int numOfVertices = //whatever this is read from file;
CGFloat *modelColours = (CGFloat *) malloc(sizeof(CGFloat) * numOfVertices * 12);
When you declare a static variable, its size and initial value must be known at compile time. What you can do is declare the variable as a pointer instead of an array, then use malloc or calloc to allocate space for the array and store the result in your variable.
static CGFloat *modelColours = NULL;
int numOfVertices = //whatever this is read from file;
if(modelColours == NULL) {
modelColours = (CGFloat *)calloc(sizeof(CGFloat),numOfVertices*12);
}
I used calloc instead of malloc here because a static array would be filled with 0s by default, and this would ensure that the code was consistent.
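A slightly fuller sketch of that approach, with both arrays and the matching free (the hard-coded 360 stands in for whatever count you parse from the OBJ file):
static CGFloat *modelVertices = NULL;
static CGFloat *modelColours  = NULL;

int numOfVertices = 360; // replace with the count read from the OBJ file
if (modelVertices == NULL) {
    modelVertices = (CGFloat *)calloc(numOfVertices * 9,  sizeof(CGFloat));
    modelColours  = (CGFloat *)calloc(numOfVertices * 12, sizeof(CGFloat));
}

// ... fill the arrays while parsing, then hand them to the GL calls as before:
// glVertexPointer(vertexStride, GL_FLOAT, 0, modelVertices);
// glColorPointer(colorStride, GL_FLOAT, 0, modelColours);

// When the model is no longer needed:
free(modelVertices);
free(modelColours);
modelVertices = NULL;
modelColours  = NULL;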

How to load PNG with alpha with Cocoa?

I'm developing an iPhone OpenGL application, and I need to use some textures with transparency. I have saved the images as PNGs. I already have all the code to load PNGs as OpenGL textures and render them. This is working fine for all images that don't have transparency (all alpha values are 1.0). However, now that I'm trying to load and use some PNGs that have transparency (varying alpha values), my texture is messed up, like it loaded the data incorrectly or something.
I'm pretty sure this is due to my loading code which uses some of the Cocoa APIs. I will post the relevant code here though.
What is the best way to load PNGs, or any image format which supports transparency, on OSX/iPhone? This method feels roundabout. Rendering it to a CGContext and getting the data seems weird.
* LOADING *
CGImageRef CGImageRef_load(const char *filename) {
    NSString *path = [NSString stringWithFormat:@"%@/%s",
                      [[NSBundle mainBundle] resourcePath],
                      filename];
    UIImage *img = [UIImage imageWithContentsOfFile:path];
    if (img) return [img CGImage];
    return NULL;
}
unsigned char* CGImageRef_data(CGImageRef image) {
    NSInteger width = CGImageGetWidth(image);
    NSInteger height = CGImageGetHeight(image);
    unsigned char *data = (unsigned char*)malloc(width*height*4);
    CGContextRef context = CGBitmapContextCreate(data,
                                                 width, height,
                                                 8, width * 4,
                                                 CGImageGetColorSpace(image),
                                                 kCGImageAlphaPremultipliedLast);
    CGContextDrawImage(context,
                       CGRectMake(0.0, 0.0, (float)width, (float)height),
                       image);
    CGContextRelease(context);
    return data;
}
* UPLOADING *
(define (image-opengl-upload data width height)
(let ((tex (alloc-opengl-image)))
(glBindTexture GL_TEXTURE_2D tex)
(glTexEnvi GL_TEXTURE_ENV GL_TEXTURE_ENV_MODE GL_DECAL)
(glTexImage2D GL_TEXTURE_2D
0
GL_RGBA
width
height
0
GL_RGBA
GL_UNSIGNED_BYTE
(->void-array data))
(glTexParameteri GL_TEXTURE_2D
GL_TEXTURE_MIN_FILTER
GL_LINEAR)
(glTexParameteri GL_TEXTURE_2D
GL_TEXTURE_MAG_FILTER
GL_LINEAR)
(glTexParameteri GL_TEXTURE_2D
GL_TEXTURE_WRAP_S
GL_CLAMP_TO_EDGE)
(glTexParameteri GL_TEXTURE_2D
GL_TEXTURE_WRAP_T
GL_CLAMP_TO_EDGE)
(glBindTexture GL_TEXTURE_2D 0)
tex))
To be explicit…
The most common issue with loading textures using Core Graphics is that it insists on converting the data to premultiplied alpha format. In the case of PNGs included in the bundle, this is actually done in a preprocessing step in the build process. Failing to take this into account results in dark banding around blended objects.
The way to take it into account is to use glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA) instead of glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA). If you want to use alpha channels for something other than regular blending, your only option is to switch to a different format and loader (as prideout suggested, but for different reasons).
ETA: the premultiplication issue also exists under Mac OS X, but the preprocessing is iPhone-specific.
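In code, that blend setup is just:
glEnable(GL_BLEND);
// Premultiplied alpha: the color components were already multiplied by alpha,
// so the source factor must be GL_ONE rather than GL_SRC_ALPHA.
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);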
Your Core Graphics surface should be cleared to all zeroes before you render to it, so I recommend using calloc instead of malloc, or adding a memset after the malloc.
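Applied to the CGImageRef_data function above, that change is a one-liner (calloc zero-fills the buffer, so any area the image does not cover stays fully transparent):
unsigned char *data = (unsigned char *)calloc(width * height * 4, 1);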
Also, I'm not sure you want your TexEnv set to GL_DECAL. You might want to leave it set to its default (GL_MODULATE).
If you'd like to avoid Core Graphics for decoding PNG images, I recommend loading in a PVR file instead. PVR is an exceedingly simple file format. An app called PVRTexTool is included with the Imagination SDK which makes it easy to convert from PNG to PVR. The SDK also includes some sample code that shows how to parse their file format.
I don't know anything about OpenGL, but Cocoa abstracts this functionality with NSImage/UIImage.
You can use PVRs, but there will be some compression artifacts, so I would only recommend them for 3D object textures, or for textures that do not require a level of detail that PVR cannot offer, especially with gradients.