To retrieve pixel values from a CGImage I use CGContextDrawImage (as described here:
How to get pixel data from a UIImage (Cocoa Touch) or CGImage (Core Graphics)?). The only difference is that I create a 128 bpp float-components context, not the usual 32 bpp context. The source CGImage is obtained from a CGImageSource created with the option kCGImageSourceShouldAllowFloat. That way I hoped to get access to float pixel values, color-matched to my bitmap context's color space, and use them in further image processing. The problem is that the resulting image data seems to lose dynamic range. This is visible in shadows and plain blue-sky areas, which become contoured and lack detail. Some investigation showed that the problem occurs in CGContextDrawImage (the source CGImage contains the full dynamic range; saving it through a CGImageDestination proves it), and after CGContextDrawImage the context contents are posterized.
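For reference, a minimal sketch of the setup described above (releases and error handling are trimmed; imageURL is a placeholder for the source file's URL):

    #import <ImageIO/ImageIO.h>
    #import <CoreGraphics/CoreGraphics.h>

    NSDictionary *options = @{ (id)kCGImageSourceShouldAllowFloat : @YES };
    CGImageSourceRef source = CGImageSourceCreateWithURL((__bridge CFURLRef)imageURL,
                                                         (__bridge CFDictionaryRef)options);
    CGImageRef image = CGImageSourceCreateImageAtIndex(source, 0,
                                                       (__bridge CFDictionaryRef)options);

    size_t width  = CGImageGetWidth(image);
    size_t height = CGImageGetHeight(image);

    // 128 bpp float context: 4 components x 32-bit float, linear color space.
    CGColorSpaceRef linearRGB = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGBLinear);
    CGContextRef linearCtx = CGBitmapContextCreate(NULL, width, height,
        32,                          // bits per component
        width * 4 * sizeof(float),   // bytes per row
        linearRGB,
        kCGBitmapFloatComponents | kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Host);

    // This is the draw whose result comes out posterized for 8-bit sources.
    CGContextDrawImage(linearCtx, CGRectMake(0, 0, width, height), image);
    float *pixels = CGBitmapContextGetData(linearCtx);  // RGBA, 4 floats per pixel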
After some more investigation I found this:
http://lists.apple.com/archives/quartz-dev/2007/mar/msg00026.html
That led me to the conclusion that the problem is not in my code but in Core Graphics, or that this is intended behaviour.
My question is: what is the correct way to obtain floating-point data from an image using Core Graphics?
After some more investigation, the problem has been narrowed down to the following: posterization occurs when an 8-bit image is drawn into a 128 bpp float context created with a linear color space (kCGColorSpaceGenericRGBLinear). If I draw the same image into a context created with kCGColorSpaceGenericRGB, then retrieve a CGImage from that context and draw that second image into the linear-color-space context, everything is fine.
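In code, the two-pass workaround looks roughly like this (continuing from the sketch above, so image, width, height and linearCtx are the same variables; using a float intermediate context is an assumption here, an 8-bit one may work just as well):

    // Pass 1: draw the 8-bit source into a non-linear (kCGColorSpaceGenericRGB) context.
    CGColorSpaceRef genericRGB = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
    CGContextRef intermediateCtx = CGBitmapContextCreate(NULL, width, height,
        32, width * 4 * sizeof(float), genericRGB,
        kCGBitmapFloatComponents | kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Host);
    CGContextDrawImage(intermediateCtx, CGRectMake(0, 0, width, height), image);

    // Pass 2: pull the intermediate result out as a CGImage and draw that into
    // the linear-color-space float context; this draw is not posterized.
    CGImageRef intermediate = CGBitmapContextCreateImage(intermediateCtx);
    CGContextDrawImage(linearCtx, CGRectMake(0, 0, width, height), intermediate);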
Another solution (workaround?) is to use Core Image: create a CIImage from the source CGImage and draw it with a CIContext created from the corresponding kCGColorSpaceGenericRGBLinear CGContext. But that option exists only on OS X (not on iOS).
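That route is short; a sketch, again reusing image, width, height and linearCtx from the first snippet:

    // CIContext renders the image into the linear float context, color matching on the way.
    CIImage *ciImage = [CIImage imageWithCGImage:image];
    CIContext *ciContext = [CIContext contextWithCGContext:linearCtx options:nil];
    [ciContext drawImage:ciImage
                  inRect:CGRectMake(0, 0, width, height)
                fromRect:[ciImage extent]];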
Related
I have a situation where I'm trying to draw an image into a display's CGContext retrieved using CGDisplayGetDrawingContext. Despite having the image at the correct high resolution, using CGContextDrawImage to draw the image onto the context results in a pixelated image. I've also tried scaling down the image in a bitmap context (using CGBitmapContextCreate) and then drawing that one onto the display's context, but that also results in a pixelated image (I thought it might retain the DPI; it was a long shot). Any idea how to fix this?
I am using CIDetector to find faces in a picture.
The coordinates of the faces it returns are absolute coordinates in the image file (the image dimensions are much larger than the screen size, obviously).
I tried to use convertRect:toView:, but the image itself is not a UIView, so the call doesn't work; I also have a few views embedded inside each other, with the image finally shown in the innermost one.
I want to convert the bounds of the detected faces in the image to the exact location of the faces as they are shown on the screen in the embedded image view.
How can this be accomplished?
Thanks!
The image shown on the phone is scaled to fit the screen with aspect fit.
The coordinates from CIDetector (Core Image) are flipped relative to UIKit coordinates. There are a bunch of tutorials out there on iOS face detection, but most of them are either incomplete or mess up the coordinates. Here's one that is correct: http://nacho4d-nacho4d.blogspot.com/2012/03/coreimage-and-uikit-coordinates.html
One thing to note: the tutorial uses a small image, so the resulting coordinates do not have to be scaled to the on-screen (UIImageView) representation of the image. Assuming you use a photo taken with the iPad camera, you will have to scale the coordinates by the amount the source image is scaled (unless you reduce its size before running the face detection routine, which may not be a bad idea). You may also need to rotate the image to the correct orientation.
There is a routine in one of the answers here for rotating/scaling: UIImagePickerController camera preview is portrait in landscape app
And this answer has a good routine for finding the scale of an image when presented by a UIImageView using 'aspect fit': How to get the size of a scaled UIImage in UIImageView?
You will need to use the scale in order to map the CIDetector coordinates from the full size image to the scaled down image shown in a UIImageView.
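Putting the flip and the aspect-fit scaling together, the mapping looks roughly like this (a hedged sketch: image is the UIImage handed to the detector, imageView uses UIViewContentModeScaleAspectFit, features comes from featuresInImage:, and EXIF orientation handling is left out):

    #import <UIKit/UIKit.h>
    #import <CoreImage/CoreImage.h>

    CGSize imageSize = image.size;
    CGSize viewSize  = imageView.bounds.size;

    // Aspect-fit scale factor plus the letterbox offsets.
    CGFloat scale   = MIN(viewSize.width / imageSize.width,
                          viewSize.height / imageSize.height);
    CGFloat offsetX = (viewSize.width  - imageSize.width  * scale) / 2.0;
    CGFloat offsetY = (viewSize.height - imageSize.height * scale) / 2.0;

    for (CIFaceFeature *face in features) {
        CGRect bounds = face.bounds;

        // Core Image's origin is bottom-left, UIKit's is top-left: flip Y in image space.
        bounds.origin.y = imageSize.height - bounds.origin.y - bounds.size.height;

        // Then scale and offset into the UIImageView's coordinate space.
        CGRect viewRect = CGRectMake(bounds.origin.x * scale + offsetX,
                                     bounds.origin.y * scale + offsetY,
                                     bounds.size.width  * scale,
                                     bounds.size.height * scale);
        // viewRect is where the face appears on screen.
    }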
How would I draw a UIImage in Core Graphics with dimensions 16x16 filled with random pixels at random coordinates and random grayscale color? This seems slightly impossible to do at the moment...
EDIT: Perhaps I should start with a diagonal line texture? My problem is filling in each pixel one by one. Doesn't seem doable in Core Graphics.
Create a buffer as many bytes long as you want pixels (so, in this case, 16 * 16).
Fill this buffer by reading from /dev/random.
Pass this buffer to the CGImageCreate function using kCGImageAlphaNone.
Once you have created a CGImage, it is trivial to create a UIImage from it. Depending on your requirements, you can actually create up to eight “random” UIImages from the same CGImage by specifying different orientation values.
ETA: You might also try creating a two-byte-per-pixel buffer and image. Then, by using each of the endianness flags, you can create two “random” CGImages from the same buffer, for a total of 16 “random” UIImages. However, I don't know whether two-byte-per-pixel no-alpha grayscale is supported on any version of iOS; the Quartz 2D Programming Guide lists only Mac OS X version numbers.
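A minimal sketch of the one-byte-per-pixel version (using arc4random_buf to fill the buffer rather than reading /dev/random by hand; either source of random bytes will do):

    #import <UIKit/UIKit.h>

    const size_t side = 16;
    NSMutableData *pixels = [NSMutableData dataWithLength:side * side];
    arc4random_buf(pixels.mutableBytes, pixels.length);   // one random gray value per pixel

    CGDataProviderRef provider =
        CGDataProviderCreateWithCFData((__bridge CFDataRef)pixels);
    CGColorSpaceRef gray = CGColorSpaceCreateDeviceGray();
    CGImageRef cgImage = CGImageCreate(side, side,
                                       8,        // bits per component
                                       8,        // bits per pixel
                                       side,     // bytes per row
                                       gray,
                                       (CGBitmapInfo)kCGImageAlphaNone,
                                       provider, NULL, false,
                                       kCGRenderingIntentDefault);
    UIImage *randomImage = [UIImage imageWithCGImage:cgImage];

    CGImageRelease(cgImage);
    CGColorSpaceRelease(gray);
    CGDataProviderRelease(provider);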
So ... I have an image loaded into an NSBitmapImageRep object, so I am able to examine the contents of specific pixels via a two-dimensional array. Now I want to apply a couple of "transformations" to the image, in preparation for some additional processing. If I were manipulating the image manually, in Photoshop, I would:
Rotate the image
Crop a portion of it and discard the rest
Apply a "threshold" transformation (which essentially converts the image to black and white, based on the threshold value I provide)
Resample the image to shrink it down a bit (which, although losing some image quality, will speed up the subsequent processing)
(not necessarily in that order)
Are there Objective-C methods available to facilitate these specific image manipulations on the data in the NSBitmapImageRep object? If so, can someone point me to some good examples?
Create a CIImage for the CGImage of the bitmap image rep. Then you can:
Rotate it
Crop it
Apply a threshold filter
Scale it
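A hedged sketch of that pipeline, assuming rep is the NSBitmapImageRep; the angle, crop rectangle and scale are placeholders, and the built-in CIColorThreshold filter only exists on newer systems (macOS 10.15+), so on older systems a custom kernel or CIColorCube would have to stand in for the threshold step:

    #import <AppKit/AppKit.h>
    #import <QuartzCore/QuartzCore.h>

    CIImage *image = [CIImage imageWithCGImage:[rep CGImage]];

    // Rotate 90 degrees, then move the rotated extent back to the origin.
    image = [image imageByApplyingTransform:CGAffineTransformMakeRotation(M_PI_2)];
    image = [image imageByApplyingTransform:
                CGAffineTransformMakeTranslation(-image.extent.origin.x,
                                                 -image.extent.origin.y)];

    // Crop to the region of interest.
    image = [image imageByCroppingToRect:CGRectMake(0, 0, 400, 300)];

    // Threshold to black and white.
    CIFilter *threshold = [CIFilter filterWithName:@"CIColorThreshold"];
    [threshold setValue:image forKey:kCIInputImageKey];
    [threshold setValue:@0.5 forKey:@"inputThreshold"];
    image = threshold.outputImage;

    // Scale down to half size.
    CIFilter *scale = [CIFilter filterWithName:@"CILanczosScaleTransform"];
    [scale setValue:image forKey:kCIInputImageKey];
    [scale setValue:@0.5 forKey:@"inputScale"];
    [scale setValue:@1.0 forKey:@"inputAspectRatio"];
    image = scale.outputImage;

    // Render back to pixels when you need them again.
    CIContext *context = [CIContext contextWithOptions:nil];
    CGImageRef result = [context createCGImage:image fromRect:image.extent];
    NSBitmapImageRep *resultRep = [[NSBitmapImageRep alloc] initWithCGImage:result];
    CGImageRelease(result);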
I have a UIImage with a white background. I would like to replace the white background/pixels with alpha-transparent pixels. I've looked at other questions on StackOverflow, along with the Quartz documentation, but have yet to find a coherent "start-to-end" answer for this problem. How is this done?
CGImageCreateWithMaskingColors
A UIImage wraps a CGImage. Take the CGImage, run it through CGImageCreateWithMaskingColors, then either create a new UIImage from the result or assign the result back to the UIImage.
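A hedged sketch of that route, assuming sourceImage is the UIImage with the white background. Note that CGImageCreateWithMaskingColors requires a source image without an alpha channel, and the masking ranges are per-channel min/max values in the image's 0-255 component range; 240 is an arbitrary cutoff for "near white":

    #import <UIKit/UIKit.h>

    CGImageRef cgImage = sourceImage.CGImage;

    // { min, max } for R, G and B: pixels with all three channels >= 240 are masked out.
    const CGFloat maskingColors[6] = { 240, 255, 240, 255, 240, 255 };
    CGImageRef masked = CGImageCreateWithMaskingColors(cgImage, maskingColors);

    UIImage *result = [UIImage imageWithCGImage:masked
                                          scale:sourceImage.scale
                                    orientation:sourceImage.imageOrientation];
    CGImageRelease(masked);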
The first step is to define some sort of "distance" function that determines how far a pixel is from being white, along with a distance threshold below which a pixel is considered white. Then you iterate over the pixels of the image, making any pixel that counts as white (according to your distance and threshold) transparent. The main trick, though, is making this efficient: touching pixels through functions will be very slow. Your best bet is to touch the pixels directly, by gaining access to the memory buffer in which they reside and stepping through it.
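A rough sketch of that approach, under a couple of assumptions: the pixels are drawn into an 8-bit premultiplied RGBA context, and the "distance" test is simply "all three channels at or above a cutoff" (240 here) rather than a true Euclidean distance from white:

    #import <UIKit/UIKit.h>

    CGImageRef cgImage = sourceImage.CGImage;
    size_t width  = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);
    size_t bytesPerRow = width * 4;

    uint8_t *buffer = calloc(height, bytesPerRow);
    CGColorSpaceRef rgb = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(buffer, width, height, 8, bytesPerRow,
        rgb, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), cgImage);

    const uint8_t cutoff = 240;
    for (size_t i = 0; i < width * height; i++) {
        uint8_t *p = buffer + i * 4;                 // p[0..3] = R, G, B, A
        if (p[0] >= cutoff && p[1] >= cutoff && p[2] >= cutoff) {
            p[0] = p[1] = p[2] = p[3] = 0;           // fully transparent (premultiplied)
        }
    }

    CGImageRef maskedImage = CGBitmapContextCreateImage(ctx);
    UIImage *result = [UIImage imageWithCGImage:maskedImage];

    CGImageRelease(maskedImage);
    CGContextRelease(ctx);
    CGColorSpaceRelease(rgb);
    free(buffer);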