Calculate user parameters using Microsoft Kinect

I want to get the following measurements of a user captured with a Microsoft Kinect in a WPF application:
Shoulder width
Height
Waist width
Hip width
Arm length
Bust size
I couldn't find any standard way of doing this except calculating the x, y coordinates of the user. Is there an efficient and accurate way of doing this?

You can follow this article: http://www.codeproject.com/Articles/380152/Kinect-for-Windows-Find-user-height-accurately

The easiest way to accomplish this task is to use the Pythagorean theorem to compute the distance between two skeleton joints.
To get the shoulder width, you would use the joints JointType.ShoulderLeft and JointType.ShoulderRight. To get the length of the left arm, you would add the distance between JointType.ShoulderLeft and JointType.ElbowLeft to the distance between JointType.ElbowLeft and JointType.WristLeft.
Please note that the joint names above are from the Kinect for Windows SDK. On its own, OpenKinect does not provide a method for skeleton tracking, since it specializes in accessing the device only. A popular alternative to the Kinect for Windows SDK is OpenNI.
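For illustration, here is a minimal sketch of that computation using the Kinect for Windows SDK 1.x skeleton types; the BodyMeasurements class and its method names are mine, not part of the SDK:

    // Sketch: body measurements from skeleton joints (Kinect for Windows SDK 1.x).
    // Joint positions are in meters, in the sensor's camera space.
    using System;
    using Microsoft.Kinect;

    static class BodyMeasurements
    {
        // 3D Euclidean (Pythagorean) distance between two joints, in meters.
        public static double Distance(Joint a, Joint b)
        {
            double dx = a.Position.X - b.Position.X;
            double dy = a.Position.Y - b.Position.Y;
            double dz = a.Position.Z - b.Position.Z;
            return Math.Sqrt(dx * dx + dy * dy + dz * dz);
        }

        public static double ShoulderWidth(Skeleton s)
        {
            return Distance(s.Joints[JointType.ShoulderLeft],
                            s.Joints[JointType.ShoulderRight]);
        }

        // Arm length = upper arm + forearm, summed along the joint chain.
        public static double LeftArmLength(Skeleton s)
        {
            return Distance(s.Joints[JointType.ShoulderLeft], s.Joints[JointType.ElbowLeft])
                 + Distance(s.Joints[JointType.ElbowLeft], s.Joints[JointType.WristLeft]);
        }
    }

For stable readings, only use joints whose TrackingState is Tracked; inferred joints can make the measurements jump considerably.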

Related

Dual Kinect calibration using powerful IR LED illuminator

I am using multiple Kinects within the scene, so I need to calibrate them and find the extrinsic parameters (translation and rotation) relative to a world coordinate system. Once I have that information, I can reconstruct the scene at the highest level of accuracy. The important point is: I want to get sub-millimeter accuracy, and it might be nice if I could use a powerful IR projector in my system. But I do not have any background in IR sensors or calibration methods, so I need to know about two subjects: 1) Is it possible to add an IR LED illuminator to the Kinect and manage it? 2) If I can add one, how do I calibrate my new system?
Calibration (determining relative transforms (rotation, scale, position)) is only part of the problem. You also need to consider whether each Kinect can handle the interference of the other Kinect's projected IR reference patterns.
"Shake n Sense" (by Microsoft Research) is a novel approach that you may be able to use that has been demonstrated to work.
https://www.youtube.com/watch?v=CSBDY0RuhS4

Is it possible to recognize all objects in a room with Microsoft Kinect?

I have a project where I have to recognize an entire room so I can calculate the distances between objects (big ones, e.g. bed, table, etc.) and a person in that room. Is something like that possible using a Microsoft Kinect?
Thank you!
Kinect provides you with the following:
Depth Stream
Color Stream
Skeleton information
It's up to you how you use this data.
To answer your question: the official Microsoft Kinect SDK doesn't provide shape detection out of the box. But it does provide skeleton data and face tracking, with which you can detect the distance of a user from the Kinect.
Also, by mapping the color stream to the depth stream, you can detect how far a particular pixel is from the Kinect. In your implementation, if the different objects have unique characteristics such as color, shape, and size, you can probably detect them and also measure their distance.
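As a rough sketch of that mapping with the SDK 1.6+ CoordinateMapper (the sensor field, event wiring, and frame formats here are assumptions):

    // Sketch: per-pixel distance from the depth stream, plus depth-to-color
    // mapping (Kinect for Windows SDK 1.6+).
    using Microsoft.Kinect;

    KinectSensor sensor; // assumed initialized, with depth and color streams enabled

    void DepthFrameReady(object sender, DepthImageFrameReadyEventArgs e)
    {
        using (DepthImageFrame frame = e.OpenDepthImageFrame())
        {
            if (frame == null) return;

            DepthImagePixel[] depthPixels = new DepthImagePixel[frame.PixelDataLength];
            frame.CopyDepthImagePixelDataTo(depthPixels);

            // Distance of the center pixel from the sensor, in millimeters.
            int center = (frame.Height / 2) * frame.Width + frame.Width / 2;
            int distanceMm = depthPixels[center].Depth;

            // Map every depth pixel to its corresponding color-frame coordinate.
            ColorImagePoint[] colorPoints = new ColorImagePoint[frame.PixelDataLength];
            sensor.CoordinateMapper.MapDepthFrameToColorFrame(
                frame.Format, depthPixels,
                ColorImageFormat.RgbResolution640x480Fps30, colorPoints);
        }
    }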
OpenCV is one of the libraries I use for computer vision, etc.
Again, it's up to you how you use this data.
The Kinect camera provides depth and consequently 3D information (a point cloud) about matte objects in the range 0.5-10 meters. With this information it is possible to segment out the floor of the room (by fitting a plane), and possibly the walls and the ceiling. This step is important, since these surfaces often connect separate objects, making them one big object.
The remaining parts of the point cloud can be segmented by depth if they don't physically touch each other. Using color, one can separate the objects even further. Note that we implicitly define an object as a 3D-dense and color-consistent entity, while other definitions are also possible.
As soon as you have your objects segmented, you can measure the distances between your segments, analyse their shape, recognize artifacts or humans, etc. To the best of my knowledge, however, the skeleton library can only recognize humans after they have moved for a few seconds. (The answer originally illustrated this with a simple depth map broken into a few segments using depth but not color information.)
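The plane fit itself can be as simple as RANSAC over the point cloud. Here is a hedged sketch (my own, not from the answer); the P3 point type, iteration count, and 2 cm inlier tolerance are assumptions to tune:

    // Minimal RANSAC floor-plane fit over a point cloud, as described above.
    using System;

    struct P3 { public double X, Y, Z; }

    static class FloorFit
    {
        // Returns plane coefficients (a, b, c, d) with ax + by + cz + d = 0,
        // or null if no valid plane was found.
        public static double[] FitPlaneRansac(P3[] pts, int iters = 500, double tol = 0.02)
        {
            var rng = new Random();
            double[] best = null;
            int bestInliers = -1;

            for (int i = 0; i < iters; i++)
            {
                // Sample three random points and build the plane through them.
                P3 p = pts[rng.Next(pts.Length)];
                P3 q = pts[rng.Next(pts.Length)];
                P3 r = pts[rng.Next(pts.Length)];
                double ux = q.X - p.X, uy = q.Y - p.Y, uz = q.Z - p.Z;
                double vx = r.X - p.X, vy = r.Y - p.Y, vz = r.Z - p.Z;

                // Plane normal = cross product of the two edge vectors.
                double a = uy * vz - uz * vy;
                double b = uz * vx - ux * vz;
                double c = ux * vy - uy * vx;
                double norm = Math.Sqrt(a * a + b * b + c * c);
                if (norm < 1e-9) continue; // degenerate (collinear) sample
                a /= norm; b /= norm; c /= norm;
                double d = -(a * p.X + b * p.Y + c * p.Z);

                // Count points within the tolerance of the candidate plane.
                int inliers = 0;
                foreach (P3 s in pts)
                    if (Math.Abs(a * s.X + b * s.Y + c * s.Z + d) < tol) inliers++;

                if (inliers > bestInliers)
                {
                    bestInliers = inliers;
                    best = new[] { a, b, c, d };
                }
            }
            return best;
        }
    }

The largest inlier set in an indoor scene is usually the floor; removing those points is what disconnects the objects standing on it.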

How can I detect floor movements such as push-ups and sit-ups with Kinect?

I have tried to implement this using the skeleton tracking provided by the Kinect, but it doesn't work when I am lying down on the floor.
According to Blitz Games CTO Andrew Oliver, there are specific ways to implement this with the depth stream, or by tracking the silhouette of a user, instead of using skeleton frames from the Kinect API. You may find a good example in the video game Your Shape Fitness. Here is a link showing floor movements such as push-ups!
Do you have any idea how to implement this, or how to detect movements and compare them with other movements using the depth stream?
What if a 360-degree sensor were developed, one that recognizes movement not only directly in front of it, to its left, or to its right, but also above and below it? The image I have in mind is the spherical, 360-degree motion sensors often used in secure buildings and such.
Without another sensor, I think you'll need to track the depth data yourself. Here's a paper with some details about how MS implements skeletal tracking in the Kinect SDK that might get you started. They implement object matching while parsing the depth data to capture joints in the body; you may need to implement some object templates and algorithms to do the matching yourself, unless you can reuse some of the skeletal tracking libraries to parse objects out of the depth data for you.

Minimum working distance to track a moving object with Kinect

I'm wondering what the minimum working distance for the Kinect is.
I'd like to track a moving object (10cm x 10cm) from a distance of 1m. The area that the object will be moving in is 120cm x 60cm.
Given Kinect's specs, will it be possible to track the object across the entire area?
Wikipedia says:
The sensor has an angular field of view of 57° horizontally and 43° vertically.
So the answer to your question is no: 120 cm would be too wide, since the maximum horizontal viewing field at 1 m is tan(57°/2) × 2 × 1 m ≈ 1.09 m.
At a distance of about 1.2 m, though, it should work: tan(57°/2) × 2 × 1.2 m ≈ 1.30 m.
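The same arithmetic as a small helper, in case you want to check other distances (the class and method names are mine):

    // Horizontal coverage of the 57° field of view at a given distance.
    // Plain trigonometry from the numbers quoted above.
    using System;

    static class KinectFov
    {
        public static double CoverageMeters(double distanceMeters, double fovDegrees = 57.0)
        {
            double halfAngleRad = (fovDegrees / 2.0) * Math.PI / 180.0;
            return 2.0 * distanceMeters * Math.Tan(halfAngleRad);
        }
    }
    // KinectFov.CoverageMeters(1.0) ≈ 1.09 m; KinectFov.CoverageMeters(1.2) ≈ 1.30 m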
The distance range handled by the sensor is 850 mm minimum and 4000 mm maximum, so tracking at 1 m should be possible.
I strongly recommend watching the Kinect SDK Quickstarts videos, as they cover all the basics to get you started. In fact, the "Working with depth data" video probably contains exactly the kind of info you're looking for.

Robot camera + motion detection

I have a project in which we (my student and I) will develop a system for a robot.
The robot has a camera that captures video.
My question is how to detect motions and movements.
Is there a solution? Which techniques and tools should we use?
Which language should we use (is Java possible, for example)?
Thanks in advance.
Best regards,
Ali
Consider using OpenCV:
http://opencv.org
It has a lot of useful vision algorithms built in, and supports C, C++, and Python, as well as GPU functionality.
I suggest Microsoft Visual Studio, which is an integrated development environment, with the C# programming language and the Emgu CV library, which is a cross-platform .NET wrapper for the OpenCV image processing library. A simple method, from a static camera position, is this:
Convert a single reference frame to grayscale.
Convert each new frame from the live feed to grayscale.
Take the absolute difference between the reference frame and each new frame.
The result of this is a third, new frame comprised of the differences between the first two. Use thresholding and erosion on it to get a frame with white representing the moving regions and black representing the rest of the scene.
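A hedged sketch of those steps with Emgu CV 3.x; the camera index 0 and the threshold value 25 are assumptions to tune for your setup:

    // Sketch: static-camera motion detection by frame differencing (Emgu CV 3.x).
    using System.Drawing;
    using Emgu.CV;
    using Emgu.CV.CvEnum;
    using Emgu.CV.Structure;

    static void DetectMotion()
    {
        using (var capture = new VideoCapture(0))
        {
            // Step 1: grayscale reference frame from the static camera.
            Mat reference = new Mat(), gray = new Mat(), diff = new Mat();
            CvInvoke.CvtColor(capture.QueryFrame(), reference, ColorConversion.Bgr2Gray);

            while (true)
            {
                Mat frame = capture.QueryFrame();
                if (frame == null) break;

                // Step 2: grayscale the new frame.
                CvInvoke.CvtColor(frame, gray, ColorConversion.Bgr2Gray);

                // Step 3: absolute difference against the reference frame.
                CvInvoke.AbsDiff(reference, gray, diff);

                // Threshold + erode: white = motion, black = static scene.
                CvInvoke.Threshold(diff, diff, 25, 255, ThresholdType.Binary);
                CvInvoke.Erode(diff, diff, null, new Point(-1, -1), 1,
                               BorderType.Constant, new MCvScalar(0));

                CvInvoke.Imshow("motion mask", diff);
                if (CvInvoke.WaitKey(30) >= 0) break; // any key exits
            }
        }
    }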
If the objects you are attempting to track have a distinct color, you should be able to target them adequately.
One way to accomplish this is to choose an appropriate color space, such as RGB, and threshold on it. Keep in mind this may be too sensitive, even to small illumination variance. (It really depends on the objects you want to track and the tracking scenario.)
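As a sketch, that color targeting can be done with Emgu CV's InRange; the BGR bounds below are placeholder values for a mostly-red object, not tuned numbers:

    // Sketch: isolate a distinctly colored object by thresholding in BGR space.
    // HSV is often less illumination-sensitive, per the caveat above.
    using Emgu.CV;
    using Emgu.CV.Structure;

    static Mat ColorMask(Mat frameBgr)
    {
        Mat mask = new Mat();
        using (var lower = new ScalarArray(new MCvScalar(0, 0, 120)))   // B, G, R
        using (var upper = new ScalarArray(new MCvScalar(80, 80, 255)))
            CvInvoke.InRange(frameBgr, lower, upper, mask);
        return mask; // white where the pixel falls inside the color bounds
    }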
You can use OpenCV.
Here you can find a C++ tutorial: http://blog.cedric.ws/opencv-simple-motion-detection
Using two cameras instead of one could be useful in your project for detecting depth in the image and the real distance of a motion (stereo vision). See:
Stereoscopic depth rendition
Real-time detection of independent motion using stereo