NSImage vs. CIImage vs. CGImage?

When should I use each?

NSImage is an abstract data type that can represent many different types of images, as well as multiple representations of an image. It is often useful when the actual type of image is not important for what you're trying to do. It is also the only image class that AppKit will accept in its APIs (NSImageView and so forth).
CGImage can only represent bitmaps. If you need to get down and dirty with the actual bitmap data, CGImage is an appropriate type to use. The operations in Core Graphics, such as blend modes and masking, require CGImageRefs. CGImageRefs can be used to create NSBitmapImageReps, which can then be added to an NSImage.
I think the documentation describes a CIImage best:
Although a CIImage object has image data associated with it, it is not an image. You can think of a CIImage object as an image “recipe.” A CIImage object has all the information necessary to produce an image, but Core Image doesn’t actually render an image until it is told to do so. This “lazy evaluation” method allows Core Image to operate as efficiently as possible.
CIImages are the type required to use the various GPU-optimized Core Image filters that come with Mac OS X, but, like CGImageRefs, they can also be converted to NSBitmapImageReps.
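For concreteness, here is a minimal sketch of moving between the three types on Mac OS X (the file path is a placeholder, and CGImageForProposedRect:context:hints: requires 10.6+):

NSImage *nsImage = [[NSImage alloc] initWithContentsOfFile:@"/path/to/example.png"]; // placeholder path

// NSImage -> CGImage
CGImageRef cgImage = [nsImage CGImageForProposedRect:NULL context:nil hints:nil];

// CGImage -> CIImage (nothing is rendered yet; the CIImage is just a recipe)
CIImage *ciImage = [CIImage imageWithCGImage:cgImage];

// CIImage -> NSBitmapImageRep -> NSImage (rendering happens here)
NSBitmapImageRep *rep = [[NSBitmapImageRep alloc] initWithCIImage:ciImage];
NSImage *roundTripped = [[NSImage alloc] initWithSize:rep.size];
[roundTripped addRepresentation:rep];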

Related

Convert NSImage to binary image (1-bit)

How can I 'binarize' an NSImage (or an NSBitmapImageRep if that's a better approach)?
I understand the concept of thresholding an image to turn it into a 1-bit image. What I don't know is whether there are already methods/functions to do this to an NSImage, or whether I must manually iterate over the pixels and deal with each one.
There are Core Image filters designed for this; check out the answer to this question.
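For example, here is a minimal sketch using the built-in CIColorThreshold filter (an assumption: it is only available on macOS 10.15+/iOS 13+; on older systems you'd need a custom kernel or a different filter chain). Note that the result is still an 8-bit black-and-white bitmap, not a true 1-bit image:

NSImage *nsImage = [[NSImage alloc] initWithContentsOfFile:@"/path/to/input.png"]; // placeholder path
CIImage *input = [CIImage imageWithCGImage:[nsImage CGImageForProposedRect:NULL context:nil hints:nil]];

CIFilter *threshold = [CIFilter filterWithName:@"CIColorThreshold"];
[threshold setValue:input forKey:kCIInputImageKey];
[threshold setValue:@0.5 forKey:@"inputThreshold"]; // luminance cut-off, 0.0-1.0

NSBitmapImageRep *binarized = [[NSBitmapImageRep alloc] initWithCIImage:threshold.outputImage];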

Recommended approach to load proprietary binary image file into NSImage?

I have a bunch of image files in a proprietary binary format that I want to load into NSImages. The format is not a simple bitmap, but rather a kind of an RLE representation mixed with transparency and miscellaneous additional information.
In order to display one of these images in a Cocoa app, I need a way to parse the image file byte by byte and "calculate" a bitmap from it which I will then put into an NSImage.
What is a good approach for doing this in Objective-C/Cocoa?
The tasks of interpreting image data are handled by the image's representation object(s). To use a proprietary format, you have a few options: (a) create a custom representation class, (b) use NSCustomImageRep with a custom delegate, or (c) use a custom object to translate your image to a supported format, such as a raw bitmap.
If you choose to create a custom representation class, you will create a subclass of NSImageRep as described in Creating New Image Representation Classes. This basically requires that your class register itself and be able to draw the image data. In addition to this, you can override methods to return information about the image, and you will be able to instantiate your images using the normal NSImage methods. This method requires the most work.
Using NSCustomImageRep requires less work than creating a custom implementation. Your delegate object only needs to be able to draw the image at a fixed location. However, you cannot return other information about the image, and you will need to create the NSCustomImageRep object manually before creating the NSImage.
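A minimal sketch of the NSCustomImageRep route (MyRLEDecoder and its drawing method are hypothetical names for your own format parser):

@interface MyRLEDecoder : NSObject
- (void)drawImageRep:(NSCustomImageRep *)rep;
@end

@implementation MyRLEDecoder
- (void)drawImageRep:(NSCustomImageRep *)rep {
    // Parse the proprietary bytes and issue Quartz/AppKit drawing calls
    // into the currently focused graphics context here.
}
@end

// Wiring it up:
MyRLEDecoder *decoder = [[MyRLEDecoder alloc] init];
NSCustomImageRep *rep = [[NSCustomImageRep alloc] initWithDrawSelector:@selector(drawImageRep:) delegate:decoder];
[rep setSize:NSMakeSize(256, 256)]; // the decoded image's dimensions
NSImage *image = [[NSImage alloc] initWithSize:rep.size];
[image addRepresentation:rep];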
Translating the image into a different format is also simpler than creating a custom representation. It could be as simple as creating a blank NSImage of the proper size and drawing into it. Creating the image is still more complicated, since you need to call your translation method; changing formats will also affect efficiency (both future drawing time and memory usage), for better or worse. You will also lose any association between the image object and its source.
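A minimal sketch of the translation route, decoding straight into the buffer of a fresh NSBitmapImageRep (the 256x256 size and RGBA layout are assumptions; adapt them to your format):

NSBitmapImageRep *rep = [[NSBitmapImageRep alloc]
    initWithBitmapDataPlanes:NULL pixelsWide:256 pixelsHigh:256
    bitsPerSample:8 samplesPerPixel:4 hasAlpha:YES isPlanar:NO
    colorSpaceName:NSCalibratedRGBColorSpace bytesPerRow:0 bitsPerPixel:0];

unsigned char *pixels = [rep bitmapData];
// Walk your RLE data here and write decoded RGBA bytes into `pixels`,
// using [rep bytesPerRow] to find the start of each row.

NSImage *image = [[NSImage alloc] initWithSize:NSMakeSize(256, 256)];
[image addRepresentation:rep];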

iPhone Objective-C image manipulation

I am looking for a way to, in Objective-C, create a PNG from several smaller PNGs based on how the user sets things up. Is this possible using existing Apple classes, or do I need to use a 3rd party library? If 3rd party code is needed, can anyone recommend a good library? The simpler the better - simple filters (such as darkening/lightening the image) would be nice but not required.
Here is some pseudo-code, to give you a better idea of what I am looking for:
image = [myImageLibrary imageWithHeight:1024 width:768];
[image addImage:@"background.png" atX:0 andY:0 withRotation:0];
[image addImage:@"image2.png" atX:100 andY:200 withRotation:90];
[image saveAtLocation:@"output.png"];
At output.png we would see image2.png placed on top of background.png and rotated 90 degrees.
P.S. - I am sorry if this seems to be a duplicate of another question, I just have not found an answer that works for what I am trying to do.
Have you read the "Creating and Drawing Images" section of the Drawing and Printing Guide for iOS and the UIImage Class Reference docs?
What you're after is perfectly possible - with a well built class you could pretty much use that pseudo code as-is.
As a starter for ten, you could:
Create your own graphics context via UIGraphicsBeginImageContext.
Draw into that via the drawAtPoint: method of the UIImage class.
Save the resultant image data out via UIGraphicsGetImageFromCurrentImageContext.
In terms of steps 1 and 3, see the UIKit Function Reference for more info. Additionally, the imageWithCGImage:scale:orientation: method of the UIImage class may prove useful for performing transformations, etc. as a part of step 2.
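Putting those three steps together, a minimal sketch (the file names are the placeholders from the pseudo-code, and UIImagePNGRepresentation does the PNG encoding):

// Step 1: create an image context matching the desired output size
UIGraphicsBeginImageContext(CGSizeMake(768, 1024));

// Step 2: draw each layer in order
[[UIImage imageNamed:@"background.png"] drawAtPoint:CGPointMake(0, 0)];
[[UIImage imageNamed:@"image2.png"] drawAtPoint:CGPointMake(100, 200)];

// Step 3: capture the composite and save it as a PNG
UIImage *composite = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

NSData *pngData = UIImagePNGRepresentation(composite);
[pngData writeToFile:@"/path/in/Documents/output.png" atomically:YES]; // placeholder path

The rotation step is sketched under the Core Graphics answer below.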
You'll want to look at CGContextDrawImage to draw your images, using a custom bitmap context, and then save it out using UIGraphicsGetImageFromCurrentImageContext(). The rotation can be done by applying CGAffineTransforms to your CGContext.
More information on Core Graphics here:
http://developer.apple.com/library/mac/#documentation/GraphicsImaging/Conceptual/drawingwithquartz2d/Introduction/Introduction.html
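To illustrate the transform part, a short sketch; this must run while an image context is current, e.g. between the UIGraphicsBeginImageContext/UIGraphicsEndImageContext calls in the sketch above:

CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextSaveGState(ctx);
CGContextTranslateCTM(ctx, 100, 200);      // move the origin to the draw position
CGContextRotateCTM(ctx, 90 * M_PI / 180);  // rotate subsequent drawing by 90 degrees
[[UIImage imageNamed:@"image2.png"] drawAtPoint:CGPointZero];
CGContextRestoreGState(ctx);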

Applying transformations to NSBitmapImageRep

So ... I have an image loaded into an NSBitmapImageRep object, so I am able to examine the contents of specific pixels via a two dimensional array. Now I want to apply a couple of "transformations" to the image, in preparation for some additional processing. If I was manipulating the image manually, in Photoshop, I would:
Rotate the image
Crop a portion of it and discard the rest
Apply a "threshold" transformation (which essentially converts the image to black and white, based on the threshold value I provide)
Resample the image to shrink it down a bit (which, although losing some image quality, will speed up the subsequent processing)
(not necessarily in that order)
Are there objective C methods available to facilitate these specific image manipulations, with the data in the NSBitmapImageRep object? If so, can someone point me to some good examples?
Create a CIImage for the CGImage of the bitmap image rep. Then you can:
Rotate it
Crop it
Apply a threshold filter
Scale it
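A minimal sketch of that pipeline (the crop rectangle, scale factor, and threshold value are arbitrary, and CIColorThreshold assumes macOS 10.15+; on older systems substitute a custom kernel):

CIImage *image = [CIImage imageWithCGImage:rep.CGImage]; // rep is your NSBitmapImageRep

// Rotate (90 degrees here); the extent moves, so re-origin it afterwards
image = [image imageByApplyingTransform:CGAffineTransformMakeRotation(M_PI_2)];
image = [image imageByApplyingTransform:CGAffineTransformMakeTranslation(-image.extent.origin.x, -image.extent.origin.y)];

// Crop to the region of interest
image = [image imageByCroppingToRect:CGRectMake(0, 0, 200, 200)];

// Threshold
CIFilter *threshold = [CIFilter filterWithName:@"CIColorThreshold"];
[threshold setValue:image forKey:kCIInputImageKey];
[threshold setValue:@0.5 forKey:@"inputThreshold"];
image = threshold.outputImage;

// Scale down with Lanczos resampling
CIFilter *scale = [CIFilter filterWithName:@"CILanczosScaleTransform"];
[scale setValue:image forKey:kCIInputImageKey];
[scale setValue:@0.5 forKey:kCIInputScaleKey];
[scale setValue:@1.0 forKey:kCIInputAspectRatioKey];
image = scale.outputImage;

// Render back to a bitmap rep when you need pixel access again
NSBitmapImageRep *result = [[NSBitmapImageRep alloc] initWithCIImage:image];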

UIImage change raw pixels from white to clear?

I've tried some code from each of these questions:
How to make one color transparent on a UIImage?
How to mask a UIImage so that white becomes transparent on iphone?
but have come up unsuccessful, unfortunately working with Core Graphics and images is not my strong suit.
How would I go about accessing a UIImage's raw data and changing the white pixels to clear?
How would I go about accessing a UIImage's raw data …?
Look at the documentation.
You'll find that there is no way to get the raw data behind a UIImage. The closest you can get is a CGImage. That will let you get its data provider, which you can ask for a copy of the raw data.
The problem with that solution is that you need to handle every possible configuration (RGBA, ARGB, RGB_, _RGB, RGB, 8-bpc, 16-bpc, etc.) that CGImage supports. That's a lot of work. If you don't do it, then someday, you'll get surprised by an image that somehow doesn't work with your code, or by an OS upgrade changing how the CGImage gets created.
The CGImageCreateWithMaskingColors function, suggested on one of the other questions you linked to, is the correct solution.
One thing that's tripping you up is that the values shown in the accepted answer on that question are generally bogus: they're out of range. The Quartz 2D Programming Guide has more details in at least two places.
I also argue against including that answer's createMask: method, since it doesn't do what it says it does and is barely useful at all (it's only worth having if the source image may be CMYK, but how likely is that in an iPhone app?). Skip it and create the mask image from the UIImage's CGImage directly.
That answer will probably work just fine once you fix those two problems.
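For reference, a minimal sketch with in-range 8-bit component values; one caveat is that CGImageCreateWithMaskingColors requires a source image without an alpha channel, so a typical PNG-backed UIImage may need to be redrawn opaquely first:

UIImage *source = [UIImage imageNamed:@"picture.png"]; // placeholder asset

// Min/max pairs per component (R, G, B): pixels whose components all fall
// in these ranges -- i.e., near-white -- become transparent.
const CGFloat whiteRange[6] = { 230.0, 255.0, 230.0, 255.0, 230.0, 255.0 };
CGImageRef masked = CGImageCreateWithMaskingColors(source.CGImage, whiteRange);

UIImage *result = [UIImage imageWithCGImage:masked];
CGImageRelease(masked);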