I have an app which uses AVFoundation and tracks the face, eyes, and mouth position. I use the CIFaceFeature to detect these and mark them on the screen.
Is there a simple way to detect a wink using the framework?
For iOS 7: yes, you can now do it with Core Image.
Here is the API diff in iOS 7 Beta 2:
CoreImage
CIDetector.h
Added CIDetectorEyeBlink
Added CIDetectorSmile
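As a rough sketch of how this can be used (the option keys come from the diff above; ciImage stands in for a CIImage made from your video frame or photo):

// Build a face detector (standard CIDetector usage).
CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                          context:nil
                                          options:@{CIDetectorAccuracy : CIDetectorAccuracyHigh}];

// Ask for per-face eye-blink (and smile) classification.
NSArray *features = [detector featuresInImage:ciImage
                                      options:@{CIDetectorEyeBlink : @YES,
                                                CIDetectorSmile : @YES}];

for (CIFaceFeature *face in features) {
    // A wink is one eye reported closed while the other stays open.
    if (face.leftEyeClosed != face.rightEyeClosed) {
        NSLog(@"Wink detected");
    }
}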
Before iOS 7:
No, there is no way with iOS frameworks (AVFoundation or CoreImage) for now.
You can look into OpenCV... but it's more of a research topic, with no guarantee of working well in different situations:
First, you need to build an eye open/closed classifier. As far as I know, there is no built-in eye-wink classifier in OpenCV, so you need to collect enough "closed" and "open" samples and train a binary classifier. (I would suggest Principal Component Analysis + a Support Vector Machine; both are available in OpenCV.)
Then, in iOS, use Core Image to detect the locations of both eyes and cut a square patch around each eye center. The size of the patch should be normalized relative to the detected face bounds rectangle.
Next, convert the UIImage/CIImage patches to OpenCV's IplImage or cv::Mat format and feed them into your classifier to determine whether each eye is open or closed (see the sketch after these steps).
Finally, decide whether a wink happened based on the sequence of open/closed states over time.
(You also need to check that your processing frame rate is high enough to catch a wink: if the whole wink happens between two processed frames, you'll never detect it.)
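For the conversion and classification steps, here is a rough Objective-C++ sketch; it is just one possible shape, and it assumes you have already trained and saved an OpenCV SVM, that eyePatch is the UIImage you cut around an eye, and it leaves out the PCA projection for brevity:

// EyeClassifier.mm (Objective-C++, so the OpenCV C++ API can be used)
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/ml/ml.hpp>
#import <UIKit/UIKit.h>

// Convert a UIImage to a single-channel grayscale cv::Mat.
static cv::Mat cvMatFromUIImage(UIImage *image)
{
    CGImageRef cgImage = image.CGImage;
    size_t width = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);

    cv::Mat gray((int)height, (int)width, CV_8UC1);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
    CGContextRef context = CGBitmapContextCreate(gray.data, width, height, 8,
                                                 gray.step[0], colorSpace,
                                                 kCGImageAlphaNone);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), cgImage);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    return gray;
}

// Returns YES if the trained SVM labels the patch as "closed".
// Load the model once elsewhere: CvSVM svm; svm.load("eye_svm.xml");
static BOOL isEyeClosed(UIImage *eyePatch, CvSVM &svm)
{
    // Normalize the patch to the size the classifier was trained on.
    cv::Mat resized;
    cv::resize(cvMatFromUIImage(eyePatch), resized, cv::Size(24, 24));

    // Flatten to a single row of floats, as CvSVM::predict expects.
    cv::Mat sample;
    resized.reshape(1, 1).convertTo(sample, CV_32F, 1.0 / 255.0);

    return svm.predict(sample) > 0.5f;  // assumes training labels: 1 = closed, 0 = open
}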
It's a hard problem... otherwise Apple would have already included it in the framework.
Related
Why should you ever want something like this?
I want to track a single user that is mounted above the ground in a horizontal position. The user is facing downwards to allow free movement of legs and arms. Think of swimming for example.
I mounted the Kinect at the ceiling facing downwards so I have a free view of all extremities.
The sensor is rotated 90° about the z-axis to get the maximum resolution (you're usually taller than you are wide).
Therefore the user is seen from the back, rotated by 90°. It is impossible to get a proper skeleton from OpenNI 1.5: my tests showed that OpenNI expects the user to face the camera with the head up along the y-axis (see my other answer). Microsoft's SDK behaves the same way, but I excluded it here because it doesn't let you change the source code and therefore cannot be adapted. OpenNI 2.0 does not work with the current SensorKinect driver for interfacing the Kinect on Linux. So:
Which class is generating the skeleton in OpenNI 1.5.x?
My best guess would be to rotate the prototype skeleton by 180° around y and 90° around z, if I knew where to find it in the code.
EDIT: As I just learned, there is no open-source software that generates a skeleton from depth images, so I fall back to the question in the title:
How can I get a user skeleton from a rotated back view?
I've created many types of interfaces using the Cocoa API — some of them using documented basic animation techniques and others simply by experimenting (such as placing an animated .gif inside an NSImage class) — which had somewhat catastrophic consequences. The question I have is what is the correct or the most effective way to create an animated and dynamic GUI so that it runs optimally and properly?
The closest example I can think of that would use a similar type of animation would be something one might see done in Flash on any number of interactive websites or interfaces. I'm sure Flash can be used in a Cocoa app, although if there is a way to achieve a similar result without re-inventing the wheel or having to use 3rd-party SDKs, I would love to get some input. Keep in mind I'm not just thinking of animation for games or iOS; I'm most interested in an animated GUI for Mac OS X, and making it 'flow' as one interacts with it.
If you wish to add a lot of graphics animation, consider an OpenGL ES based Xcode project for iOS. That helps you avoid performance problems. You can render each frame of the GIF as a 2D texture.
I would recommend that you take a look at Core Animation. It is Apple's framework for hardware-accelerated animations on both OS X and iOS, and it's built for making animated GUIs.
You can animate property changes for things like position, opacity, color, and transforms, and also animate gradients with CAGradientLayer, animate non-rectangular shapes using CAShapeLayer, and a lot of other things.
A good resource to get you started is the Core Animation Programming Guide.
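To give a feel for what that looks like in code, here is a minimal sketch that fades a layer in (it assumes someView is layer-backed, e.g. wantsLayer = YES on OS X, or any UIView on iOS):

#import <QuartzCore/QuartzCore.h>

CALayer *layer = someView.layer;

// Animate the layer's opacity from 0 to 1 over 0.4 seconds.
CABasicAnimation *fade = [CABasicAnimation animationWithKeyPath:@"opacity"];
fade.fromValue = @0.0;
fade.toValue = @1.0;
fade.duration = 0.4;
[layer addAnimation:fade forKey:@"fadeIn"];

// Also set the model value so the layer keeps the final state after the animation ends.
layer.opacity = 1.0;

The same pattern works for position, transform, colors, a CAShapeLayer's path, and so on; Core Animation runs the animation on the render server, which is why it stays smooth even when your app's main thread is busy.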
I'm basically trying to work out how to take a slice of an image, say a screenshot of an iPhone home screen, slice out the first icon and compare it to a set array of images in a library. Any help on where to start?
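For the slicing part, one place to start is CGImageCreateWithImageInRect; a minimal sketch follows (the rect is something you would compute from the home-screen layout, and it is in the CGImage's pixel coordinates, so remember the 2x scale on Retina screenshots):

// Crop one icon out of a full screenshot.
UIImage *CropIcon(UIImage *screenshot, CGRect iconRectInPixels)
{
    CGImageRef croppedRef = CGImageCreateWithImageInRect(screenshot.CGImage, iconRectInPixels);
    UIImage *icon = [UIImage imageWithCGImage:croppedRef];
    CGImageRelease(croppedRef);
    return icon;
}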
I'm no iPhone programmer, but I might be able to suggest a few things:
The SURF feature detection implemented in OpenCV should help you with this.
There is a nice article on using OpenCV in Objective-C code.
A quick & dirty way might be to use the difference blend mode, which should return the difference between the first image (top) and the second image (bottom). If there is no difference, the result is completely black; so the more black pixels there are in the difference result, the more similar the two compared images potentially are.
I'm not an iOS developer, so I don't know if there is an image library that ships with the SDK or a free/open-source library for basic image processing. Still, this should be trivial to implement:
e.g.
// Per-channel difference of two pixel values (0-255); 0 means identical.
- (int)differenceBetweenPixel:(int)topPixel andPixel:(int)bottomPixel
{
    return abs(topPixel - bottomPixel);
}
Note: Syntax might not be correct :)
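Expanding on that idea, here is a rough, untested sketch of a whole-image comparison in that spirit; it assumes both UIImages have the same pixel size and simply redraws them into RGBA buffers before summing the per-channel differences:

// Returns a similarity score in [0, 1]; 1.0 means the images are identical.
- (double)similarityBetweenImage:(UIImage *)a andImage:(UIImage *)b
{
    size_t width = CGImageGetWidth(a.CGImage);
    size_t height = CGImageGetHeight(a.CGImage);
    size_t bytesPerRow = width * 4;

    // Draw both images into bitmap buffers with a known RGBA layout.
    uint8_t *bufA = calloc(height * bytesPerRow, 1);
    uint8_t *bufB = calloc(height * bytesPerRow, 1);
    CGColorSpaceRef cs = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctxA = CGBitmapContextCreate(bufA, width, height, 8, bytesPerRow, cs, kCGImageAlphaPremultipliedLast);
    CGContextRef ctxB = CGBitmapContextCreate(bufB, width, height, 8, bytesPerRow, cs, kCGImageAlphaPremultipliedLast);
    CGContextDrawImage(ctxA, CGRectMake(0, 0, width, height), a.CGImage);
    CGContextDrawImage(ctxB, CGRectMake(0, 0, width, height), b.CGImage);

    // Sum the absolute per-channel differences; smaller means more similar.
    long long totalDiff = 0;
    for (size_t i = 0; i < height * bytesPerRow; i++) {
        totalDiff += abs((int)bufA[i] - (int)bufB[i]);
    }

    CGContextRelease(ctxA);
    CGContextRelease(ctxB);
    CGColorSpaceRelease(cs);
    free(bufA);
    free(bufB);

    return 1.0 - (double)totalDiff / (double)(height * bytesPerRow * 255);
}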
HTH
This may not help you with taking a screenshot of the iOS home screen... But these articles show how to take snapshots from within a UIKit application:
https://developer.apple.com/library/prerelease/ios/#qa/qa1703/_index.html
https://developer.apple.com/library/prerelease/ios/#qa/qa1714/_index.html
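From memory, the technique in those documents boils down to rendering the view's layer into an image context (double-check the details against the links; view here is whatever UIView you want to capture):

// Requires QuartzCore. Renders a view hierarchy into a UIImage.
UIGraphicsBeginImageContextWithOptions(view.bounds.size, NO, 0.0);
[view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();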
Perhaps you could instruct the user to press the Home and Power buttons together to take a screenshot and save it to the camera roll, then load that screenshot into your app to process it.
Hope this helps!
I'm working on a simple iOS game that is always drawing 5 to 10 layers of 32-bit PNG images, which requires enough memory to crash on the iPod touch 4G when Retina is enabled. On other devices it works just fine; I'm not even getting memory warnings. So I tried lower-quality formats like RGB5_A1, but they look really bad because I need alpha transparency and lots of gradients.
Since all the images are exported from Illustrator, I was thinking that maybe I could just export a vector image and draw it on iOS. From what I've researched, hardly anyone has tried this, and the only option I've come across is to implement an SVG parser for Quartz.
Did I miss anything?
Also I'm worried about performance, but I couldn't find any benchmarks.
Without knowing specifics of your game, I'm going to make a few assumptions based on normal use...
You are not going to want to use straight vector graphics for this. Stick with your raster graphics.
If you are talking about 32 bits per pixel for your PNG images, that is 8 bits each for red, green, blue, and alpha. If you don't actually need the per-pixel alpha channel, you can scale back to 24-bit RGB and save an extra byte for every pixel shown.
If you are using Adobe products, import the Illustrator file into Photoshop and use the "Save for Web..." option. Choose PNG-24 and you'll be all set.
I am adding an image-processing feature to my iPhone application. It should do brightness, contrast, sharpen, exposure...
But I am not able to find any articles or tutorials on the Internet. Will you please help me find a tutorial, or tell me how I can implement this in an iPhone view-based application?
I have found one link, http://www.iphonedevsdk.com/forum/iphone-sdk-development/10094-adjust-image-brightness-contrast-fly.html, which worked for brightness, but it's not working on iPad.
So please suggest something I can start with for my image-processing logic.
Thanks
Rick Jackson
I personally like the approach in the GLImageProcessing project from Apple's sample code. Check it out.
There are a few libraries that support image processing in Quartz. There are even a few categories on UIImage to do some basic stuff.
The following are a few examples:
https://github.com/esilverberg/ios-image-filters
https://github.com/cmkilger/CKImageAdditions
http://code.google.com/p/simple-iphone-image-processing/
But as @Felz said before, those libraries are slow because they are built on Quartz, which isn't that fast (for example, changing the saturation of a 1024x1024 image might take 4 to 8 seconds, depending on which device you're using).
If your project targets iOS 5 or higher, you should definitely consider using Core Image.
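For example, brightness, contrast, and saturation are all covered by the single built-in CIColorControls filter; a minimal sketch:

// Adjust brightness/contrast/saturation with Core Image (iOS 5+).
CIImage *input = [CIImage imageWithCGImage:sourceImage.CGImage];

CIFilter *filter = [CIFilter filterWithName:@"CIColorControls"];
[filter setValue:input forKey:kCIInputImageKey];
[filter setValue:@(0.1) forKey:@"inputBrightness"];   // roughly -1 to 1, 0 = unchanged
[filter setValue:@(1.2) forKey:@"inputContrast"];     // 1 = unchanged
[filter setValue:@(1.0) forKey:@"inputSaturation"];   // 1 = unchanged

CIContext *context = [CIContext contextWithOptions:nil];
CIImage *output = filter.outputImage;
CGImageRef cgResult = [context createCGImage:output fromRect:output.extent];
UIImage *result = [UIImage imageWithCGImage:cgResult];
CGImageRelease(cgResult);

There are also dedicated filters such as CIExposureAdjust and CISharpenLuminance for the exposure and sharpen parts of the question.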
You can try the GPUImage framework created by Brad Larson. It includes awesome image filters and is also easy to use.
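A minimal sketch of the GPUImage route (from memory of its API, so verify the names against the project's README):

#import "GPUImage.h"

// Apply a brightness adjustment on the GPU.
GPUImageBrightnessFilter *brightnessFilter = [[GPUImageBrightnessFilter alloc] init];
brightnessFilter.brightness = 0.1;   // -1.0 to 1.0, 0.0 = unchanged
UIImage *adjusted = [brightnessFilter imageByFilteringImage:sourceImage];

Because the filtering runs on the GPU, this is typically far faster than the Quartz-based approaches above, and it works on live camera input as well.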