Live Face Recognition iOS adding 3D object - objective-c

I need to create an iOS 5 application that will run on an iPad 2 (I can use private APIs because the app will not be released in the App Store). It should show the live stream from the front camera, recognize the eyes, and render a pair of glasses (I have the 3D model) that follows the face movements.
What is the best approach, and which technology (e.g. OpenGL ES) should I use?

Just use the frameworks included with Xcode. I have a sample here; it's got everything you need.
It uses the AVFoundation, CoreImage, CoreMedia, and CoreVideo frameworks.
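To give a flavour of the detection side, here is a minimal sketch of running CoreImage face detection on each frame delivered by AVFoundation. The detector, its options, and the delegate method are real API; the `faceDetector` property and the way you wire up the capture session are illustrative assumptions.

```objc
#import <AVFoundation/AVFoundation.h>
#import <CoreImage/CoreImage.h>

// Sketch: called by AVCaptureVideoDataOutput for every frame from the front camera.
// self.faceDetector is an assumed (retained) property on this class.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CIImage *frame = [CIImage imageWithCVPixelBuffer:pixelBuffer];

    // Create the detector once and reuse it; low accuracy is faster for live video.
    if (self.faceDetector == nil) {
        NSDictionary *options = [NSDictionary dictionaryWithObject:CIDetectorAccuracyLow
                                                             forKey:CIDetectorAccuracy];
        self.faceDetector = [CIDetector detectorOfType:CIDetectorTypeFace
                                               context:nil
                                               options:options];
    }

    NSArray *features = [self.faceDetector featuresInImage:frame];
    for (CIFaceFeature *face in features) {
        if (face.hasLeftEyePosition && face.hasRightEyePosition) {
            // face.bounds, face.leftEyePosition and face.rightEyePosition give you the
            // anchor points (CoreImage coordinates, origin bottom-left) for positioning
            // and scaling the 3D glasses model on top of the video preview.
        }
    }
}
```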

Related

Detect Winks in front facing camera using CIFaceFeature

I have an app which uses AVFoundation and tracks the face, eye, and mouth positions. I use CIFaceFeature to detect these and mark them on the screen.
Is there a simple way to detect a wink using the framework?
For iOS 7: yes, you can now do it with CoreImage.
Here is the API diff in iOS 7 Beta 2:
CoreImage
CIDetector.h
Added CIDetectorEyeBlink
Added CIDetectorSmile
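With those options, the detection itself is straightforward. A minimal sketch (the `image` variable is assumed to be a CIImage of the current frame):

```objc
#import <CoreImage/CoreImage.h>

// Sketch (iOS 7+): ask the face detector to also evaluate blinks and smiles.
CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                          context:nil
                                          options:@{CIDetectorAccuracy : CIDetectorAccuracyHigh}];

NSArray *features = [detector featuresInImage:image   // `image` is an assumed CIImage frame
                                      options:@{CIDetectorEyeBlink : @YES,
                                                CIDetectorSmile    : @YES}];

for (CIFaceFeature *face in features) {
    // A wink is one eye reported closed while the other stays open.
    BOOL wink = (face.leftEyeClosed && !face.rightEyeClosed) ||
                (face.rightEyeClosed && !face.leftEyeClosed);
    if (wink) {
        NSLog(@"Wink detected");
    }
    if (face.hasSmile) {
        NSLog(@"Smile detected");
    }
}
```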
Before iOS 7:
No, there is no way with iOS frameworks (AVFoundation or CoreImage) for now.
You can look into OpenCV... but it's more of a research topic, with no guarantee that it will work well in different situations:
First, you need to build an eye open/closed classifier. As far as I know, there is no built-in eye-wink classifier in OpenCV, so you need to collect enough "closed" and "open" samples and train a binary classifier. (I would suggest Principal Component Analysis + a Support Vector Machine; both are available in OpenCV.)
Then, in iOS, use CoreImage to detect the locations of both eyes and cut a square patch around each eye center. The patch size should be normalized against the detected face bounds rectangle.
Next, convert the UIImage/CIImage to OpenCV's IplImage or cv::Mat format and feed it into your classifier to determine whether the eyes are open or closed (a sketch of these two steps follows below).
Finally, determine whether there is a wink based on the sequence of open and closed states.
(You also need to check whether the processing frame rate is high enough to pick up a wink: if the wink happens within 0.5 frames... you'll never detect it.)
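For the patch-extraction and conversion steps above, here is a rough Objective-C++ sketch, assuming OpenCV 2.x is linked into the project. The 0.3 patch-size factor and the function name are arbitrary assumptions, and the trained classifier itself is left out.

```objc
#import <CoreImage/CoreImage.h>
#import <opencv2/opencv.hpp>

// Sketch: crop a square patch around the detected left eye and rasterize it
// into a cv::Mat so it can be fed to a trained open/closed classifier.
static cv::Mat EyePatchMat(CIImage *frame, CIFaceFeature *face, CIContext *ciContext)
{
    // Patch side length normalized against the face bounds (0.3 is an assumed fraction).
    CGFloat side = face.bounds.size.width * 0.3;
    CGRect patchRect = CGRectMake(face.leftEyePosition.x - side / 2.0,
                                  face.leftEyePosition.y - side / 2.0,
                                  side, side);

    // Render the cropped CIImage into a CGImage we can rasterize.
    CIImage *patch = [frame imageByCroppingToRect:patchRect];
    CGImageRef cgPatch = [ciContext createCGImage:patch fromRect:patch.extent];

    // Draw the CGImage into a cv::Mat-backed bitmap context (the usual OpenCV-on-iOS recipe).
    size_t cols = CGImageGetWidth(cgPatch);
    size_t rows = CGImageGetHeight(cgPatch);
    cv::Mat mat((int)rows, (int)cols, CV_8UC4);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(mat.data, cols, rows, 8, mat.step[0],
                                                 colorSpace,
                                                 kCGImageAlphaNoneSkipLast | kCGBitmapByteOrderDefault);
    CGContextDrawImage(context, CGRectMake(0, 0, cols, rows), cgPatch);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    CGImageRelease(cgPatch);

    // Classifiers of this kind typically work on grayscale patches.
    cv::Mat gray;
    cv::cvtColor(mat, gray, CV_RGBA2GRAY);
    return gray;
}
```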
It's a hard problem... otherwise Apple would have already included them in the framework.

Thumbnail MKMapView without Google Logo

I am in the process of developing a thumbnail MKMapView to show a single point on the map. However, as the thumbnail is only 70x61px, the Google logo takes up a large proportion of the map.
Can you tell me a way of using MKMapView so that the Google logo is less visible or can't be seen (without risking App Store rejection), or suggest any alternatives to MKMapView?
Thanks in advance.
How it looks at the moment:
Have you looked into the Google Static Maps API? It returns regular JPEG maps rather than interactive ones. You might be able to craft a URL that gets you a small enough image for your thumbnail. I don't know whether that would be OK according to their license or not.
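For illustration, a request like the following would fetch a 70x61 image. The coordinates, zoom level, and the image-view outlet are placeholder assumptions, and loading synchronously is only for brevity; check the Static Maps documentation and terms before shipping.

```objc
#import <UIKit/UIKit.h>

// Sketch: fetch a static map thumbnail instead of embedding an MKMapView.
NSString *urlString = [NSString stringWithFormat:
    @"http://maps.googleapis.com/maps/api/staticmap?"
    @"center=%f,%f&zoom=15&size=70x61&markers=%f,%f&sensor=false",
    51.5074, -0.1278, 51.5074, -0.1278];

// Do this off the main thread in a real app.
NSData *imageData = [NSData dataWithContentsOfURL:[NSURL URLWithString:urlString]];
UIImage *thumbnail = [UIImage imageWithData:imageData];
self.thumbnailImageView.image = thumbnail;   // assumed UIImageView outlet
```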
Start developing with the iOS 6 beta. There are significant changes to MapKit that remove Google as the data provider (and thus their logo). The final version of iOS 6 and its SDK will be released in the next couple of weeks, so you will be good to go submitting an iOS 6 app soon.

Objective-C, Methods for animating gui

I've created many types of interfaces using the Cocoa API — some of them using documented basic animation techniques and others simply by experimenting (such as placing an animated .gif inside an NSImage class) — which had somewhat catastrophic consequences. The question I have is what is the correct or the most effective way to create an animated and dynamic GUI so that it runs optimally and properly?
The closest example I can think of that would use a similar type of animation would be something one might see done in flash on any number of interactive websites or interfaces. I'm sure flash can be used in a Cocoa app, although if there is a way to achieve a similar result without re-inventing the wheel, or having to use 3rd party SDKs, I would love to get some input. Keep in mind I'm not just thinking of animation for games, iOS, etc. — I'm most interested in an animated GUI for Mac OS X, and making it 'flow' as one might interact in it.
If you wish to add a lot of animated graphics, go for an OpenGL ES based Xcode project for iOS. That helps you avoid performance problems. You can render each frame of the GIF as a 2D texture.
I would recommend that you take a look at Core Animation. It is Apple's framework for hardware-accelerated animations on both OS X and iOS, and it's built for making animated GUIs.
You can animate property changes for things like position, opacity, color, and transforms, and you can also animate gradients with CAGradientLayer, animate non-rectangular shapes with CAShapeLayer, and a lot of other things.
A good resource to get you started is the Core Animation Programming Guide.
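To give a flavour of what that looks like on OS X, here is a minimal sketch using the AppKit animator proxy for an implicit animation and CABasicAnimation for an explicit one. The view, durations, and target values are placeholder assumptions.

```objc
#import <Cocoa/Cocoa.h>
#import <QuartzCore/QuartzCore.h>

// Sketch: a method on some controller class; `panel` is any NSView you want to animate.
- (void)animatePanel:(NSView *)panel
{
    panel.wantsLayer = YES;   // back the view with a CALayer so it can be composited/animated

    // Implicit animation: AppKit animates the frame change for us via the animator proxy.
    [NSAnimationContext beginGrouping];
    [[NSAnimationContext currentContext] setDuration:0.4];
    [[panel animator] setFrameOrigin:NSMakePoint(200.0, 120.0)];
    [NSAnimationContext endGrouping];

    // Explicit animation: fade the layer's opacity with CABasicAnimation.
    CABasicAnimation *fade = [CABasicAnimation animationWithKeyPath:@"opacity"];
    fade.fromValue = [NSNumber numberWithDouble:0.0];
    fade.toValue   = [NSNumber numberWithDouble:1.0];
    fade.duration  = 0.4;
    [panel.layer addAnimation:fade forKey:@"fadeIn"];
    panel.layer.opacity = 1.0;   // set the model value so the result sticks after the animation
}
```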

Augmented Reality Marker Reader iPhone

I want to use 'marker reader' technology in my iPhone application (you have a piece of paper, you point the iPhone camera at it, and information is drawn up from it). What API kits can I use / are available?
The closest SDK I can think of is ARToolKit, which should help as a framework solution for finding markers and deriving information. There are a couple of other frameworks that could be used, listed below:
Layar
ARToolKit
Popcode
SGAREnvironment

360 degree video in MPMoviePlayerController

I am trying to develop an iPhone application which needs to show a 360-degree video like the one and rotate the video as per the phone movement. How can I do this? Is it possible with a normal MPMoviePlayerController?
I don't think you can do this with a normal MPMoviePlayerController, but there are several libraries out there to achieve this. Have a look here:
PanoramaGL
Panorama 360
They work with OpenGL and you can embed them in your Objective-C code.
EDIT:
As @Mangesh Vyas kindly pointed out, those are intended for use with fixed images only. However, they might be a suitable starting point for embedding video as well, if you modify the code accordingly. They already handle direction, accelerometer data, etc., so you don't have to implement all that yourself.
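For reference, the direction handling those libraries do for you essentially boils down to reading the device attitude with Core Motion and steering the viewing direction from it. A minimal sketch, where the motion-manager property, the update rate, and the helper method are all assumptions:

```objc
#import <CoreMotion/CoreMotion.h>

// Sketch: read the device attitude and use it to steer the panorama/video view.
// self.motionManager is an assumed, retained CMMotionManager property.
self.motionManager = [[CMMotionManager alloc] init];
self.motionManager.deviceMotionUpdateInterval = 1.0 / 30.0;   // 30 Hz, arbitrary choice

[self.motionManager startDeviceMotionUpdatesToQueue:[NSOperationQueue mainQueue]
                                        withHandler:^(CMDeviceMotion *motion, NSError *error) {
    if (error != nil) {
        return;
    }
    // Yaw and pitch (in radians) describe how the phone is oriented; map them
    // to the viewing direction inside the 360-degree content.
    double yaw   = motion.attitude.yaw;
    double pitch = motion.attitude.pitch;
    [self updateViewingDirectionWithYaw:yaw pitch:pitch];   // hypothetical helper
}];
```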