Using an NSImage as a "green screen" - Objective-C

Is it possible for me to use an image as a green-screen-type thing, like in Photo Booth where it takes the background out?

Yes, although exactly what you mean by “green screen” will affect the specific answer. Chroma-keying will require a custom CIFilter, as I don't believe Core Image comes with such a filter and I know NSImage by itself doesn't support it.
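For illustration, here's a rough sketch of what such a custom filter could look like using a Core Image color kernel (CIColorKernel requires OS X 10.11 / iOS 8 or later; the kernel name, the "greenness" measure, and the 0.3/0.5 thresholds are all illustrative assumptions, not tuned values):

// Kernel source: fade out pixels that are much greener than they are
// red or blue. unpremultiply/premultiply keep the alpha math correct.
NSString *src =
    @"kernel vec4 chromaKey(__sample s) {"
    @"    vec4 p = unpremultiply(s);"
    @"    float greenness = p.g - max(p.r, p.b);"
    @"    p.a *= 1.0 - smoothstep(0.3, 0.5, greenness);"
    @"    return premultiply(p);"
    @"}";
CIColorKernel *kernel = [CIColorKernel kernelWithString:src];

// Bridge from NSImage to CIImage via TIFF data, then apply the kernel.
CIImage *input = [[CIImage alloc] initWithData:[sourceImage TIFFRepresentation]];
CIImage *keyed = [kernel applyWithExtent:input.extent arguments:@[input]];

You'd then wrap the resulting CIImage back into an NSImage (e.g. via NSCIImageRep) for display.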

Related

How to achieve this image distortion effect in iOS?

I'm building an iPhone photo app. I don't know how to achieve this kind of effect on an image:
When the user drags a finger across the arrow, I want the image to be distorted accordingly. How can I achieve this? Is there any framework that makes this process simple?
Thank you.
No clue about iOS, but the look reminds me of a thin-plate spline warp. It's quite easy to implement in OpenGL; a quick Google search for example code returns plenty of hits.
From the sound of it, I think you're looking for the Core Image framework. Look into the various distortion effects, in particular CIBumpDistortionLinear.
I don't have any code, so check out this tutorial and read up on Core Image.
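In case it helps, here's a minimal sketch of applying CIBumpDistortionLinear (available on iOS 6 and later); the center/radius/angle/scale values are illustrative and would come from your drag gesture:

CIImage *input = [CIImage imageWithCGImage:sourceImage.CGImage];
CIFilter *bump = [CIFilter filterWithName:@"CIBumpDistortionLinear"];
[bump setValue:input forKey:kCIInputImageKey];
[bump setValue:[CIVector vectorWithX:150 Y:200] forKey:kCIInputCenterKey];
[bump setValue:@300.0 forKey:kCIInputRadiusKey];
[bump setValue:@(M_PI_2) forKey:kCIInputAngleKey];
[bump setValue:@0.5 forKey:kCIInputScaleKey];

// Render the filtered result back into a UIImage.
CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef cg = [context createCGImage:bump.outputImage fromRect:input.extent];
UIImage *result = [UIImage imageWithCGImage:cg];
CGImageRelease(cg);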

iOS: compare a slice of an image to library of options

I'm basically trying to work out how to take a slice of an image, say a screenshot of an iPhone home screen, slice out the first icon, and compare it to a fixed set of images in a library. Any help on where to start?
I'm no iPhone programmer, but I might be able to suggest a few things:
The SURF feature detection implemented in OpenCV should help you with this
There is a nice article on using OpenCV in Objective-C code.
A quick & dirty way might be to use the difference blend mode which should return the difference between the 1st image(top) and the 2nd image(bottom). If there is no difference the result will be completely black. So, the more black pixels in the difference result, potentially, the more similarities between the compared images.
I'm not an iOS developer, so I don't know if there is an image library that ships with the SDK or a free/open-source library for basic image processing. Still, this should be trivial to implement, e.g.:
- (int)differenceBetweenTopPixel:(int)topPixel bottomPixel:(int)bottomPixel
{
    return abs(topPixel - bottomPixel);
}
Note: Syntax might not be correct :)
HTH
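To make that concrete, here's a rough sketch of the idea in Core Graphics (assumptions: both images are the same size, and we compare raw RGBA bytes directly rather than using an actual blend mode):

static NSUInteger pixelDifference(CGImageRef a, CGImageRef b)
{
    size_t width = CGImageGetWidth(a);
    size_t height = CGImageGetHeight(a);
    size_t bytesPerRow = width * 4;
    CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();

    // Draw both images into 8-bit RGBA buffers so their bytes line up.
    uint8_t *bufA = calloc(height, bytesPerRow);
    uint8_t *bufB = calloc(height, bytesPerRow);
    CGContextRef ctxA = CGBitmapContextCreate(bufA, width, height, 8, bytesPerRow,
                                              space, kCGImageAlphaPremultipliedLast);
    CGContextRef ctxB = CGBitmapContextCreate(bufB, width, height, 8, bytesPerRow,
                                              space, kCGImageAlphaPremultipliedLast);
    CGContextDrawImage(ctxA, CGRectMake(0, 0, width, height), a);
    CGContextDrawImage(ctxB, CGRectMake(0, 0, width, height), b);

    // Sum absolute per-byte differences; 0 means the images are identical.
    NSUInteger total = 0;
    for (size_t i = 0; i < height * bytesPerRow; i++)
        total += (NSUInteger)abs((int)bufA[i] - (int)bufB[i]);

    CGContextRelease(ctxA);
    CGContextRelease(ctxB);
    CGColorSpaceRelease(space);
    free(bufA);
    free(bufB);
    return total;
}

The lower the total, the more alike the two slices are; you'd pick the library image with the smallest score.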
This may not help you with taking a screenshot of the iOS home screen, but these articles show how to take snapshots from within a UIKit application:
https://developer.apple.com/library/prerelease/ios/#qa/qa1703/_index.html
https://developer.apple.com/library/prerelease/ios/#qa/qa1714/_index.html
Perhaps you could instruct the user to press Home+Power to take a screenshot (which is saved to the camera roll), then load that screenshot into your app for processing.
Hope this helps!
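For reference, the core of the technique in those Q&As boils down to rendering a view's layer into an image context; a minimal sketch (view stands for whatever view you want to capture, and renderInContext: requires QuartzCore):

UIGraphicsBeginImageContextWithOptions(view.bounds.size, NO, 0.0);
[view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();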

iPhone Objective-C image manipulation

I am looking for a way to, in Objective-C, create a PNG from several smaller PNGs based on how the user sets things up. Is this possible using existing Apple classes, or do I need to use a 3rd party library? If 3rd party code is needed, can anyone recommend a good library? The simpler the better - simple filters (such as darkening/lightening the image) would be nice but not required.
Here is some pseudo-code, to give you a better idea of what I am looking for:
image = [myImageLibrary imageWithHeight:1024 width:768];
[image addImage:@"background.png" atX:0 andY:0 withRotation:0];
[image addImage:@"image2.png" atX:100 andY:200 withRotation:90];
[image saveAtLocation:@"output.png"];
In output.png we would see image2.png drawn on top of background.png, rotated 90 degrees.
P.S. - I am sorry if this seems to be a duplicate of another question, I just have not found an answer that works for what I am trying to do.
Have you read the "Creating and Drawing Images" section of the Drawing and Printing Guide for iOS and the UIImage Class Reference docs?
What you're after is perfectly possible - with a well built class you could pretty much use that pseudo code as-is.
As a starter for ten, you could:
Create your own graphics context via UIGraphicsBeginImageContext.
Draw into that via the drawAtPoint: method of the UIImage class.
Retrieve the composited image via UIGraphicsGetImageFromCurrentImageContext, then end the context with UIGraphicsEndImageContext and save the data out (e.g. via UIImagePNGRepresentation).
In terms of steps 1 and 3, see the UIKit Function Reference for more info. Additionally, the imageWithCGImage:scale:orientation: method of the UIImage class may prove useful for performing transformations, etc. as a part of step 2.
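Putting those steps together, a rough sketch (composite is a hypothetical helper, not an existing API; rotating around the overlay's top-left corner is an arbitrary choice):

UIImage *composite(UIImage *background, UIImage *overlay,
                   CGPoint position, CGFloat degrees)
{
    UIGraphicsBeginImageContextWithOptions(background.size, NO, 0.0);
    [background drawAtPoint:CGPointZero];

    // Translate to the overlay's position, rotate, then draw it.
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextTranslateCTM(ctx, position.x, position.y);
    CGContextRotateCTM(ctx, degrees * M_PI / 180.0);
    [overlay drawAtPoint:CGPointZero];

    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}

Used roughly like the question's pseudo-code:

UIImage *out = composite([UIImage imageNamed:@"background.png"],
                         [UIImage imageNamed:@"image2.png"],
                         CGPointMake(100, 200), 90);
[UIImagePNGRepresentation(out) writeToFile:path atomically:YES];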
You'll want to look at CGContextDrawImage to draw your images into a custom bitmap context, then produce the result with CGBitmapContextCreateImage (or, if you use UIGraphicsBeginImageContext, with UIGraphicsGetImageFromCurrentImageContext()). The rotation can be done by applying a CGAffineTransform to your CGContext.
More information on Core Graphics here:
http://developer.apple.com/library/mac/#documentation/GraphicsImaging/Conceptual/drawingwithquartz2d/Introduction/Introduction.html

UIImage change raw pixels from white to clear?

I've tried some code from each of these questions:
How to make one color transparent on a UIImage?
How to mask a UIImage so that white becomes transparent on iphone?
but have been unsuccessful; unfortunately, working with Core Graphics and images is not my strong suit.
How would I go about accessing a UIImage's raw data and changing the white pixels to clear?
How would I go about accessing a UIImage's raw data …?
Look at the documentation.
You'll find that there is no way to get the raw data behind a UIImage. The closest you can get is a CGImage. That will let you get its data provider, which you can ask for a copy of the raw data.
The problem with that solution is that you need to handle every possible configuration (RGBA, ARGB, RGB_, _RGB, RGB, 8-bpc, 16-bpc, etc.) that CGImage supports. That's a lot of work. If you don't do it, then someday you'll be surprised by an image that somehow doesn't work with your code, or by an OS upgrade changing how the CGImage gets created.
The CGImageCreateWithMaskingColors function, suggested on one of the other questions you linked to, is the correct solution.
One thing that's tripping you up is that the values shown in the accepted answer on that question are generally bogus: they're out of range. The Quartz 2D Programming Guide has more details in at least two places.
I also argue against including that answer's createMask: method, since it doesn't do what it says it does and is barely useful at all (it's only worth having if the source image may be CMYK, but how likely is that in an iPhone app?). Skip it and create the mask image from the UIImage's CGImage directly.
That answer will probably work just fine once you fix those two problems.
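For completeness, here's a minimal sketch of that approach (the 240-255 near-white ranges are illustrative; note that CGImageCreateWithMaskingColors returns NULL if the source image has an alpha channel, so you may need to redraw it into an opaque context first):

UIImage *whiteToClear(UIImage *image)
{
    // Masking ranges, unscaled for 8-bit components:
    // {minR, maxR, minG, maxG, minB, maxB}.
    const CGFloat components[6] = {240, 255, 240, 255, 240, 255};

    CGImageRef masked = CGImageCreateWithMaskingColors(image.CGImage, components);
    if (!masked) return nil;  // e.g. the source still had an alpha channel

    UIImage *result = [UIImage imageWithCGImage:masked
                                          scale:image.scale
                                    orientation:image.imageOrientation];
    CGImageRelease(masked);
    return result;
}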

Cocoa API Image Manipulation

Is there any way to do simple image manipulation like adjusting brightness, contrast, exposure, etc. using Cocoa? Something like NSImage?
You want Core Image, I think.
If you want to present UI to allow the user to make these kinds of modifications, look at ImageKit.
Have you had a look at Core Image?
I'd check out the Core Image Fun House example; it pretty much shows you how to use most of what Core Image can do.
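As a starting point, here's a minimal sketch of adjusting brightness, contrast, and exposure on an NSImage with Core Image (the parameter values are illustrative):

NSImage *adjusted(NSImage *source)
{
    // Bridge NSImage -> CIImage via TIFF data.
    CIImage *input = [CIImage imageWithData:[source TIFFRepresentation]];

    CIFilter *color = [CIFilter filterWithName:@"CIColorControls"];
    [color setValue:input forKey:kCIInputImageKey];
    [color setValue:@0.1 forKey:kCIInputBrightnessKey];
    [color setValue:@1.2 forKey:kCIInputContrastKey];

    CIFilter *exposure = [CIFilter filterWithName:@"CIExposureAdjust"];
    [exposure setValue:[color valueForKey:kCIOutputImageKey] forKey:kCIInputImageKey];
    [exposure setValue:@0.5 forKey:kCIInputEVKey];

    // Wrap the result back up in an NSImage for display.
    NSCIImageRep *rep = [NSCIImageRep imageRepWithCIImage:
                            [exposure valueForKey:kCIOutputImageKey]];
    NSImage *result = [[NSImage alloc] initWithSize:rep.size];
    [result addRepresentation:rep];
    return result;
}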