Recognizing facial expressions using the Kinect SDK

I am trying to do some work using Kinect and the Kinect SDK.
I was wondering whether it is possible to detect facial expressions (e.g. wink, smile, etc.) using the Kinect SDK, or to get raw data that can help in recognizing these.
Can anyone kindly suggest any links for this? Thanks.

I am also working on this, and I considered 2 options:
Face.com API:
There is a C# client library, and there are a lot of examples in their documentation.
EmguCV:
This guy talks about basic face detection using EmguCV and the Kinect SDK, and you can build on it to recognize faces.
I have stopped developing this for now, but if you complete it, please post a link to your code.
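To give an idea of what the EmguCV route looks like, here is a minimal sketch that runs a Haar cascade face detector over a Kinect color frame. The names are from the Emgu CV 2.x API as I remember it, so verify them against your version; the cascade file is the stock OpenCV frontal-face classifier:

    using System.Drawing;
    using Emgu.CV;
    using Emgu.CV.Structure;
    using Microsoft.Kinect;

    static Rectangle[] DetectFaces(ColorImageFrame frame, CascadeClassifier cascade)
    {
        // Copy the raw BGRA pixels out of the Kinect color frame.
        byte[] pixels = new byte[frame.PixelDataLength];
        frame.CopyPixelDataTo(pixels);

        // Wrap them in an Emgu image and convert to grayscale for the detector.
        var color = new Image<Bgra, byte>(frame.Width, frame.Height) { Bytes = pixels };
        Image<Gray, byte> gray = color.Convert<Gray, byte>();

        // Standard Viola-Jones detection; tune scale factor and min neighbors as needed.
        return cascade.DetectMultiScale(gray, 1.1, 4, new Size(20, 20), Size.Empty);
    }

    // Usage: var cascade = new CascadeClassifier("haarcascade_frontalface_default.xml");

This only finds face rectangles; recognizing expressions on top of that is the part you would still have to build (or hand off to something like the Face.com API).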

This is currently not featured within the Kinect for Windows SDK, due to the limitations of the Kinect in producing high-resolution images. That said, libraries such as OpenCV and AForge.NET have been successfully used for finger and face detection, both on the raw images returned by the Kinect and on RGB video streams from webcams. I would use these computer vision libraries as a starting point.

Just a note: MS is releasing the "Kinect for PC" along with a new SDK version in February. This has a new "Near Mode", which will offer better resolution for close-up images. Face and finger recognition might be possible with it. You can read an MS press release here, for example:
T3.com

The new Kinect SDK 1.5 has been released and includes face detection and tracking.
You can download the latest SDK here,
and check this website for more details about Kinect face tracking.
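As a taste of what the face tracking API exposes, here is a minimal sketch that reads the Animation Unit coefficients, which are the low-level expression signals (lip stretch, brow raise, etc.) you would build smile or wink detection on. The buffers are assumed to come from the usual frame-ready plumbing, and the 0.4 threshold is an arbitrary guess on my part, not an SDK constant:

    using Microsoft.Kinect;
    using Microsoft.Kinect.Toolkit.FaceTracking;

    // sensor, colorPixels (byte[]), depthPixels (short[]) and skeleton are
    // assumed to be filled in by your AllFramesReady handler.
    FaceTracker tracker = new FaceTracker(sensor);
    FaceTrackFrame face = tracker.Track(
        sensor.ColorStream.Format, colorPixels,
        sensor.DepthStream.Format, depthPixels,
        skeleton);

    if (face.TrackSuccessful)
    {
        // Animation Units are normalized coefficients describing the expression.
        var au = face.GetAnimationUnitCoefficients();
        float lipStretch = au[AnimationUnit.LipStretcher]; // grows as the mouth widens
        float browRaise  = au[AnimationUnit.BrowRaiser];
        bool probablySmiling = lipStretch > 0.4f;          // threshold chosen arbitrarily
    }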

Related

Finger Position Detection using Kinect

Are there any open libraries or open-source code available for finger position detection using the Kinect?
I have tried searching OpenNI and other Kinect libraries, but couldn't find one.
I was looking into this a few years ago; you can check out this post from then, which includes a few options.
The links may be a bit outdated; for example, Apple bought PrimeSense (the company behind OpenNI), so the FORTH ICS project can now be found here.
You didn't mention which version of the Kinect, so I'll assume it's the original Kinect for Xbox 360.
If you're not constrained to using the Kinect only, you might actually want to try the Intel RealSense SDK, as it already includes hand tracking (PDF tutorial link), and the C++ SDK has wrappers for C#/Java and makes the data available through WebSockets.

Opening Kinect datasets and/or SDK Samples

I am very new to Kinect programming and have been tasked with understanding several methods for 3D point cloud stitching using the Kinect and OpenCV. While waiting for the Kinect sensor to be shipped over, I am trying to run the SDK samples on some datasets.
I am really clueless as to where to start, so I downloaded some datasets here, but I do not understand how I am supposed to view/parse them. I tried running the Kinect SDK samples (DepthBasics-D2D) in Visual Studio, but the only thing that appears is a white screen with a screenshot button.
There seems to be very little documentation on how all these things work, so I would appreciate it if anyone could point me to the right resources on how to obtain and parse depth maps, or how to get the SDK samples to work.
The Point Cloud Library (PCL) is a good starting point for handling point cloud data obtained using the Kinect and the OpenNI driver.
OpenNI is, among other things, open-source software that provides an API to communicate with vision and audio sensor devices (such as the Kinect). Using OpenNI you can access the raw data acquired with your Kinect and use it as input for your PCL software to process. In other words, OpenNI is an alternative to the official Kinect SDK, compatible with many more devices, and with great support and tutorials!
There are plenty of tutorials out there, like this, this and these.
Also, this question is highly related.
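If you end up working with the official Kinect SDK instead, here is a minimal sketch (Kinect SDK 1.x, C#) of obtaining and parsing a raw depth map; each 16-bit value packs the depth in millimeters together with a player index in the low bits:

    using Microsoft.Kinect;

    KinectSensor sensor = KinectSensor.KinectSensors[0];
    sensor.DepthStream.Enable(DepthImageFormat.Resolution640x480Fps30);

    sensor.DepthFrameReady += (s, e) =>
    {
        using (DepthImageFrame frame = e.OpenDepthImageFrame())
        {
            if (frame == null) return;

            short[] raw = new short[frame.PixelDataLength];
            frame.CopyPixelDataTo(raw);

            // Shift away the player-index bits to get depth in millimeters.
            int center = (frame.Height / 2) * frame.Width + frame.Width / 2;
            int depthMm = raw[center] >> DepthImageFrame.PlayerIndexBitmaskWidth;
        }
    };
    sensor.Start();

Note that this reads from a live sensor; the downloaded datasets use their own file formats, which you will have to parse according to each dataset's documentation.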

Kinect - 2 features simultaneously

I am trying to extract 2 features from the Kinect:
Captured video - I followed this guide:
http://social.msdn.microsoft.com/Forums/en-US/kinectsdk/thread/4ee6e7ca-123d-4838-82b6-e5816bf6529c
and succeeded in using the Kinect as a webcam, then used DirectShow to capture the video. Works just fine.
Skeleton - I use the Kinect SDK 1.7, and the skeleton feature works sweet!
The problem: those 2 features don't work simultaneously.
Each one of them works great by itself, but they just don't work together.
I have also tried checking the captured video in Skype's video settings section while running the Skeleton Basics sample from the Kinect for Windows Developer Toolkit 1.7.
Do you know why that happens, and how I can fix the problem and enjoy the 2 features simultaneously?
Thanks a lot,
Guy.
This shouldn't happen. I'm also working on a virtual dressing room concept, and I can access the Kinect joints and the video stream at the same time. I'm using XNA, and it works fine for me to get the video buffer after enabling both streams:

    kinectSensor.ColorStream.Enable(ColorImageFormat.RgbResolution640x480Fps30);
    kinectSensor.SkeletonStream.Enable(new TransformSmoothParameters());

I don't know what your approach is.
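For completeness, here is a minimal sketch (Kinect SDK 1.x API) of consuming the color and skeleton streams together through the SDK's single AllFramesReady event, which is the usual way to get both features at once:

    using Microsoft.Kinect;

    KinectSensor kinectSensor = KinectSensor.KinectSensors[0];
    kinectSensor.ColorStream.Enable(ColorImageFormat.RgbResolution640x480Fps30);
    kinectSensor.SkeletonStream.Enable();

    kinectSensor.AllFramesReady += (s, e) =>
    {
        using (ColorImageFrame color = e.OpenColorImageFrame())
        using (SkeletonFrame skeletonFrame = e.OpenSkeletonFrame())
        {
            if (color == null || skeletonFrame == null) return;

            byte[] pixels = new byte[color.PixelDataLength];
            color.CopyPixelDataTo(pixels);                    // the video frame

            Skeleton[] skeletons = new Skeleton[skeletonFrame.SkeletonArrayLength];
            skeletonFrame.CopySkeletonDataTo(skeletons);      // the tracked joints
        }
    };
    kinectSensor.Start();

Both frames arrive in the same callback, so there is no contention between the two features; problems typically arise when a separate, non-SDK driver (such as a DirectShow webcam filter) holds the device at the same time, which may be what is happening here.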

Is there a way to detect facial features on a video?

I'm trying to find out if there are any libraries or frameworks that will help with detecting facial features (i.e. the eyes) while recording video.
I tried using the Face.com API and the CIDetector on iOS, but they only work on images, not video.
P.S. I'm developing for the iPhone!
Why not simply extract frames from the video as it is playing and run those through a CIDetector? This site has some good info on how to get frames from video files on iOS:
http://www.7twenty7.com/blog/2010/11/video-processing-with-av-foundation
I've never used it on iOS/Mac OS X, but you should check out the OpenCV library.
Check this question for iOS support: iPhone and OpenCV
The library has built-in functions to detect faces, but I don't know if they are available in the iOS port.
You're looking for object detection, and I would recommend OpenCV.
If you want an out-of-the-box example, just check out this link :) There is fully functional sample code attached to the tutorial. You can use OpenCV for a lot more than just face tracking; just dig into the documentation and some tutorials.
You can find several cascade classifiers here for partial face detection.

Can the Kinect SDK be run with saved Depth/RGB videos, instead of a live Kinect?

This question relates to the Kaggle/CHALEARN Gesture Recognition challenge.
You are given a large training set of matching RGB and depth videos that were recorded from a Kinect. I would like to use the Kinect SDK's skeletal tracking on these videos, but after a lot of searching, I haven't found a conclusive answer as to whether or not this can be done.
Is it possible to use the Kinect SDK with previously recorded Kinect video, and if so, how? Thanks for the help.
It is not a feature within the SDK itself; however, you can use something like the Kinect Toolbox OSS project (http://kinecttoolbox.codeplex.com/), which provides skeleton record and replay functionality (so you don't need to stand in front of your Kinect each time). You do, however, still need a Kinect plugged into your machine to use the runtime.
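For a rough idea of the workflow, record/replay with the toolbox looks something like the sketch below. The class and member names are from my memory of the Kinect Toolbox API, so treat them as assumptions and check the project's samples:

    using System.IO;
    using Kinect.Toolbox.Record;   // namespace assumed; verify in the project
    using Microsoft.Kinect;

    // Recording: persist each skeleton frame to a file as it arrives.
    var recorder = new KinectRecorder(KinectRecordOptions.Skeletons,
                                      File.Create("session.replay"));
    sensor.SkeletonFrameReady += (s, e) =>
    {
        using (SkeletonFrame frame = e.OpenSkeletonFrame())
        {
            if (frame != null) recorder.Record(frame);
        }
    };

    // Replay: later, play the file back without anyone in front of the sensor.
    var replay = new KinectReplay(File.OpenRead("session.replay"));
    replay.SkeletonFrameReady += (s, e) => { /* process the replayed frame */ };
    replay.Start();

Note that this replays data the toolbox itself recorded; it won't ingest the CHALEARN RGB/depth video files directly, so you would still need to convert or re-capture that data.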