Retrieving last non-transparent pixel position of a transparent UIImage - objective-c

How can I retrieve the top-left/top-right/bottom-left/bottom-right non-transparent pixel of a transparent image?

You can extract the pixel data of the image and compare alpha values, either by iterating through the returned array or, more efficiently, by changing the linked algorithm to compare the pixels in place, which avoids building the NSArray of colors altogether.
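Here is a minimal sketch of the in-place variant, assuming an RGBA8 premultiplied bitmap; it scans for the top-left-most non-transparent pixel, and the other corners just change the scan order:

    #import <UIKit/UIKit.h>

    // Returns the first non-transparent pixel in reading order (top-left first),
    // or (-1, -1) if the image is fully transparent.
    static CGPoint firstOpaquePixel(UIImage *image) {
        CGImageRef cgImage = image.CGImage;
        size_t width  = CGImageGetWidth(cgImage);
        size_t height = CGImageGetHeight(cgImage);

        // Redraw into a buffer we control, so the byte layout is predictable.
        uint8_t *pixels = calloc(width * height * 4, 1);
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef context = CGBitmapContextCreate(pixels, width, height, 8,
                                                     width * 4, colorSpace,
                                                     kCGImageAlphaPremultipliedLast);
        CGContextDrawImage(context, CGRectMake(0, 0, width, height), cgImage);

        CGPoint result = CGPointMake(-1, -1);
        for (size_t y = 0; y < height && result.x < 0; y++) {  // row 0 = top row
            for (size_t x = 0; x < width; x++) {
                if (pixels[(y * width + x) * 4 + 3] > 0) {     // byte 3 = alpha
                    result = CGPointMake(x, y);
                    break;
                }
            }
        }
        CGContextRelease(context);
        CGColorSpaceRelease(colorSpace);
        free(pixels);
        return result;
    }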

Related

UIImageView half moon slice

I'm trying to create an app with groups you can switch between. My idea was to pick the first 3 photos of the members in the group and lay the images over each other. Stacking three images is not really difficult; the difficult part for me is to make the other two images show up like a "half moon" beneath the other images. See the attached image for an example.
It isn't really a half moon. It's more like a crescent moon or lunate shape.
The principle is not a difficult one. Proceed as follows:
Start with an image, roughly a square.
Make an image context the same size as the image.
Fill a circle the size of the image, offset about a third of its width to the left.
Fill another circle the size of the image, offset about two thirds of its width to the left, using the Clear blend mode.
Extract the resulting image from the image context.
You now have the desired lunate shape.
Now use that lunate shape as a mask or clipping area for the original image.
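A compact sketch of that recipe in UIKit, assuming a square source image; drawing the photo with the Source In blend mode at the end is one way (an assumption, not the only option) of using the shape as a mask:

    #import <UIKit/UIKit.h>

    static UIImage *lunateImage(UIImage *image) {
        CGSize size = image.size;
        CGRect circle = CGRectMake(0, 0, size.width, size.height);

        // Steps 2-5: build the lunate shape by punching one circle out of another.
        UIGraphicsBeginImageContextWithOptions(size, NO, image.scale);
        [[UIColor blackColor] setFill];
        [[UIBezierPath bezierPathWithOvalInRect:
            CGRectOffset(circle, -size.width / 3.0, 0)] fill];
        [[UIBezierPath bezierPathWithOvalInRect:
            CGRectOffset(circle, -2.0 * size.width / 3.0, 0)]
            fillWithBlendMode:kCGBlendModeClear alpha:1.0];
        UIImage *lunate = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();

        // Mask: draw the shape, then draw the photo "source in", which keeps
        // the photo only where the shape is non-transparent.
        UIGraphicsBeginImageContextWithOptions(size, NO, image.scale);
        [lunate drawAtPoint:CGPointZero];
        [image drawInRect:circle blendMode:kCGBlendModeSourceIn alpha:1.0];
        UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        return result;
    }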

NSImage from two NSImages

I have a rectangular NSImage A and I want to scale it to embed into a square transparent image B, keeping A's aspect ratio. So, in the end, I'll get a square image with the rectangle in it.
How can I compose that image? I mean, how can I draw an NSImage over another NSImage and save the resulting image?
I've been reading about clipping an NSImage inside a Bézier path, but I need to keep the ratio instead of filling the Bézier square.
I hope you understand what I want.
Thanks.
The 'Cocoa Drawing Guide' has a section called 'Drawing to an Image'. From that documentation:
It is possible to create images programmatically by locking focus on an NSImage object and drawing other images or paths into the image context. This technique is most useful for creating images that you intend to render to the screen, although you can also save the resulting image data to a file.
There is example code there.
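For illustration, a sketch of that lock-focus technique applied to this question; taking A's longer side as the square's side is an assumption, not part of the documentation:

    #import <Cocoa/Cocoa.h>

    static NSImage *squareImageEmbedding(NSImage *a) {
        CGFloat side = MAX(a.size.width, a.size.height);
        NSImage *b = [[NSImage alloc] initWithSize:NSMakeSize(side, side)];

        // Scale A to fit inside the square while keeping its aspect ratio,
        // then center it.
        CGFloat scale = MIN(side / a.size.width, side / a.size.height);
        NSRect target = NSMakeRect((side - a.size.width * scale) / 2.0,
                                   (side - a.size.height * scale) / 2.0,
                                   a.size.width * scale,
                                   a.size.height * scale);

        [b lockFocus];                         // draw into B's image context
        [a drawInRect:target
             fromRect:NSZeroRect               // NSZeroRect = the whole source
            operation:NSCompositingOperationSourceOver
             fraction:1.0];
        [b unlockFocus];
        return b;                              // B starts out transparent
    }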

Change a color in a UIImage

I would like to know how I can change just one color in an image.
Like saying: if the color of this pixel is "red", change it to "blue".
The technical approach is straightforward:
Get all pixel values (explained here)
Look for the pixel values you don't like and change them
Draw the image using the changed pixel values (explained here)
Keep in mind that if you merge the three steps into one method, changing the raw pixel values in place rather than creating UIColor objects, and drawing the changed image immediately afterwards, you'll get much better performance.
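A sketch of all three steps fused into a single pass over a raw RGBA buffer; matching and writing exact (255, 0, 0) and (0, 0, 255) component values is a simplification, since real images usually need a tolerance:

    #import <UIKit/UIKit.h>

    static UIImage *imageReplacingRedWithBlue(UIImage *image) {
        CGImageRef cg = image.CGImage;
        size_t w = CGImageGetWidth(cg), h = CGImageGetHeight(cg);
        uint8_t *px = calloc(w * h * 4, 1);

        CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
        CGContextRef ctx = CGBitmapContextCreate(px, w, h, 8, w * 4, space,
                                                 kCGImageAlphaPremultipliedLast);
        CGContextDrawImage(ctx, CGRectMake(0, 0, w, h), cg); // step 1: get pixels

        for (size_t i = 0; i < w * h * 4; i += 4) {
            // step 2: change pure red (255, 0, 0) pixels to blue
            if (px[i] == 255 && px[i + 1] == 0 && px[i + 2] == 0) {
                px[i] = 0;        // R
                px[i + 2] = 255;  // B
            }
        }

        // step 3: draw a new image from the changed pixel values
        CGImageRef outCG = CGBitmapContextCreateImage(ctx);
        UIImage *out = [UIImage imageWithCGImage:outCG
                                           scale:image.scale
                                     orientation:image.imageOrientation];
        CGImageRelease(outCG);
        CGContextRelease(ctx);
        CGColorSpaceRelease(space);
        free(px);
        return out;
    }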

Applying transformations to NSBitmapImageRep

So ... I have an image loaded into an NSBitmapImageRep object, which lets me examine the contents of specific pixels via a two-dimensional array. Now I want to apply a couple of "transformations" to the image, in preparation for some additional processing. If I were manipulating the image manually, in Photoshop, I would:
Rotate the image
Crop a portion of it and discard the rest
Apply a "threshold" transformation (which essentially converts the image to black and white, based on the threshold value I provide)
Resample the image to shrink it down a bit (which, although losing some image quality, will speed up the subsequent processing)
(not necessarily in that order)
Are there Objective-C methods available to facilitate these specific image manipulations, with the data in the NSBitmapImageRep object? If so, can someone point me to some good examples?
Create a CIImage for the CGImage of the bitmap image rep. Then you can:
Rotate it
Crop it
Apply a threshold filter
Scale it
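A sketch of that pipeline; the rotation angle, crop rectangle, threshold, and scale factor below are placeholders, and the built-in CIColorThreshold filter only exists on macOS 10.15/iOS 13 and later (older systems need a custom kernel):

    #import <Cocoa/Cocoa.h>
    #import <CoreImage/CoreImage.h>

    static CIImage *processedImage(NSBitmapImageRep *rep) {
        CIImage *image = [CIImage imageWithCGImage:rep.CGImage];

        // Rotate (90 degrees counterclockwise here).
        image = [image imageByApplyingTransform:
                    CGAffineTransformMakeRotation(M_PI_2)];

        // Crop to the region of interest.
        image = [image imageByCroppingToRect:CGRectMake(0, 0, 200, 200)];

        // Threshold: everything above 0.5 becomes white, the rest black.
        CIFilter *threshold = [CIFilter filterWithName:@"CIColorThreshold"];
        [threshold setValue:image forKey:kCIInputImageKey];
        [threshold setValue:@0.5 forKey:@"inputThreshold"];
        image = threshold.outputImage;

        // Scale down to half size to speed up later processing.
        CIFilter *scale = [CIFilter filterWithName:@"CILanczosScaleTransform"];
        [scale setValue:image forKey:kCIInputImageKey];
        [scale setValue:@0.5 forKey:@"inputScale"];
        [scale setValue:@1.0 forKey:@"inputAspectRatio"];
        return scale.outputImage;
    }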

Replace all white/nearly white pixels in a UIImage with alpha using CGImage?

I have a UIImage with a white background. I would like to replace the white background/pixels with alpha-transparent pixels. I've looked at other questions on Stack Overflow, along with the Quartz documentation, but have yet to find a coherent start-to-end solution for this problem. How is this done?
CGImageCreateWithMaskingColors
A UIImage wraps a CGImage. Take the CGImage, run it through CGImageCreateWithMaskingColors, then either create a new UIImage from the result or assign the result back to the UIImage.
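A sketch of that approach; the 230-255 ranges below are an arbitrary definition of "nearly white", and note that CGImageCreateWithMaskingColors returns NULL when the source image already has an alpha channel, in which case you would first redraw it into an alpha-free bitmap context:

    #import <UIKit/UIKit.h>

    static UIImage *imageMaskingWhite(UIImage *image) {
        // One {min, max} range per color channel: treat any pixel whose
        // R, G and B all fall in 230...255 as "white" and mask it out.
        const CGFloat masking[6] = {230, 255, 230, 255, 230, 255};
        CGImageRef maskedRef = CGImageCreateWithMaskingColors(image.CGImage,
                                                              masking);
        if (!maskedRef) return nil; // source image probably has alpha
        UIImage *result = [UIImage imageWithCGImage:maskedRef
                                              scale:image.scale
                                        orientation:image.imageOrientation];
        CGImageRelease(maskedRef);
        return result;
    }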
The first step is to define some sort of "distance" function that determines how far a pixel is from being white. Then you need to define a distance threshold below which a pixel is considered white. Then you iterate over the pixels of the image, making any pixel that counts as white, according to your distance function and threshold, transparent. The main trick, though, is making this efficient: touching pixels through functions will be very slow; your best bet is to touch the pixels directly, by gaining access to the memory buffer in which the pixels reside and stepping through them.
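For illustration, here is one possible distance function and threshold applied over a raw RGBA8 buffer (obtained as in the earlier pixel examples); the metric used, the largest per-channel gap below 255, is an arbitrary choice:

    #import <Foundation/Foundation.h>

    // Clears every pixel whose color is within `threshold` of pure white.
    static void clearNearWhitePixels(uint8_t *pixels, size_t pixelCount,
                                     uint8_t threshold) {
        for (size_t i = 0; i < pixelCount * 4; i += 4) {
            uint8_t lowest = pixels[i];
            if (pixels[i + 1] < lowest) lowest = pixels[i + 1];
            if (pixels[i + 2] < lowest) lowest = pixels[i + 2];
            if ((uint8_t)(255 - lowest) <= threshold) {
                // Near-white: zero all four bytes, because with premultiplied
                // alpha the color channels must be cleared along with alpha.
                pixels[i] = pixels[i + 1] = pixels[i + 2] = pixels[i + 3] = 0;
            }
        }
    }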