Does the Microsoft Kinect SDK provide any API where I can input a depth image and get back a skeleton? - kinect

I need help getting skeleton data from my modified depth image using the Kinect SDK.
I have two Kinects, and I can get the depth image from both of them. I transform the two depth images into 3D coordinates and combine them by using OpenGL. Finally, I reproject the 3D scene and get a new depth image. (The resolution is also 640*480, and the frame rate of my new depth image is about 20 FPS.)
Now I want to get the skeleton from this new depth image by using the Kinect SDK.
Can anyone help me with this?
Thanks
This picture is my flow path:
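The back-project / re-project round trip the question describes can be sketched in a few lines, assuming an idealized pinhole camera model. This is only an illustration: the intrinsics used here (fx, fy, cx, cy) are placeholder values, not a real Kinect calibration.

```python
import numpy as np

def depth_to_points(depth, fx=571.0, fy=571.0, cx=320.0, cy=240.0):
    """Back-project a 640x480 depth image (millimeters) into 3D camera coordinates."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.astype(np.float64)
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)  # shape (h, w, 3)

def points_to_depth(points, fx=571.0, fy=571.0, cx=320.0, cy=240.0, shape=(480, 640)):
    """Re-project 3D points into a depth image, keeping the nearest point per pixel."""
    depth = np.full(shape, np.inf)
    pts = points.reshape(-1, 3)
    pts = pts[pts[:, 2] > 0]                       # drop invalid (non-positive) depth
    u = np.round(pts[:, 0] * fx / pts[:, 2] + cx).astype(int)
    v = np.round(pts[:, 1] * fy / pts[:, 2] + cy).astype(int)
    ok = (u >= 0) & (u < shape[1]) & (v >= 0) & (v < shape[0])
    np.minimum.at(depth, (v[ok], u[ok]), pts[ok, 2])  # z-buffer: keep closest surface
    depth[np.isinf(depth)] = 0                     # unobserved pixels become holes
    return depth
```

Merging a second Kinect's cloud would just mean transforming its points into this camera's frame and stacking them before calling `points_to_depth`.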

Does the Microsoft Kinect SDK provide any API where I can input a depth image and get back a skeleton?
No.
I tried to find some libraries (not made by Microsoft) to accomplish this task, but it is not simple at all.
Try following this discussion on ResearchGate; there are some useful links (such as databases or early-stage projects) from which you can start if you want to develop your own library and share it with us.

I was hoping to do something similar, feeding a post-processed depth image back to the Kinect for skeleton segmentation, but unfortunately there doesn't seem to be a way of doing this with the existing API.

Why would you reconstruct a skeleton based upon your 3D depth data?
The Kinect SDK can record a skeleton directly, without such reconstruction.
And one camera is enough to record a skeleton.
If you use the Kinect v2, then off the top of my head it can track 3 or 4 skeletons or so.
The Kinect basically provides 3 streams: RGB video, depth, and skeleton.

Related

How can I input a depth image to make the Kinect output a skeleton?

My research is about making the skeleton more stable. For now, I want to try improving the depth image and then getting a skeleton based on the processed image, but I don't know how to make that happen, since the Kinect only provides APIs to get the output data.
Now you are trying to do the exact opposite of what the Kinect is capable of. You can't feed in any pre-processed depth image, but you can apply some image processing to the depth image stream it retrieves.
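To illustrate the kind of processing that is possible on the retrieved depth stream, here is a minimal NumPy sketch that fills invalid zero pixels ("no reading") from their valid neighbors. The 8-neighbor averaging rule is just one arbitrary choice of filter, not anything the Kinect SDK provides.

```python
import numpy as np

def fill_depth_holes(depth):
    """Replace zero ("no reading") pixels with the mean of their valid 8-neighbors."""
    padded = np.pad(depth.astype(np.float64), 1, mode="edge")
    # Build the 8 shifted neighbor views of every pixel (skip the center).
    shifts = [padded[r:r + depth.shape[0], c:c + depth.shape[1]]
              for r in range(3) for c in range(3) if (r, c) != (1, 1)]
    stack = np.stack(shifts)
    valid = stack > 0                              # neighbors with a real reading
    counts = valid.sum(axis=0)
    sums = np.where(valid, stack, 0.0).sum(axis=0)
    filled = depth.astype(np.float64).copy()
    holes = (depth == 0) & (counts > 0)            # only fill holes with valid neighbors
    filled[holes] = sums[holes] / counts[holes]
    return filled
```

You would run a filter like this on each frame from the depth stream before your own downstream processing; the SDK's skeleton tracker itself never sees the result.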

Object detection using Kinect V2

I know that object detection is not possible using the Kinect v1 alone. We need to use 3rd-party libraries like OpenCV or Point Cloud Library (PCL).
But I was just curious to know: can it be achieved using the Kinect v2? Has anyone done any work on it?
Take a look at this project and search Google for the keyword "blob detection".
The short answer is that object detection using the Kinect v2 is possible in two ways, but there isn't much by way of complete solutions out there right now (Nov. 2014) because of how new it is and because it hasn't been hacked yet. Currently, I am trying to implement PCL on Windows 8 with Visual Studio 2012, which are the bare minimum requirements for the Kinect v2, and I will keep you posted so that you know how it goes:
http://cs.unc.edu/~kwaegel/pcl/pcl_build_notes.html
Realistically, the fastest approach would likely be using the v2 SDK. In a brief search I found that this guy acquired the colored point clouds from the Kinect v2 and was able to output them:
http://laht.info/kinect-v2-colored-point-clouds/
After that, you should be able to segment the point clouds with existing open-source software by simply importing your newly acquired point cloud!
http://pointclouds.org/documentation/tutorials/random_sample_consensus.php
Again, I haven't gotten all of these moving parts working together in one environment yet, but it is definitely possible.
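The segmentation step the PCL tutorial above covers can also be sketched independently of PCL. This is a minimal NumPy version of plane fitting by random sample consensus; the iteration count and inlier threshold are arbitrary defaults, not values from the tutorial.

```python
import numpy as np

def ransac_plane(points, n_iters=200, threshold=0.01, rng=None):
    """Fit a plane n . p + d = 0 to an (N, 3) point cloud with RANSAC.
    Returns (normal, d, inlier_mask) for the model with the most inliers."""
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-12:                  # degenerate (collinear) sample, retry
            continue
        normal /= norm
        d = -normal.dot(sample[0])
        dist = np.abs(points @ normal + d)  # point-to-plane distances
        inliers = dist < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (normal, d)
    return best_model[0], best_model[1], best_inliers
```

Removing the dominant plane (floor or table) this way is a common first step before clustering the remaining points into object candidates.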

Is it possible to track an arbitrary skeleton model with the Kinect?

I understand that the Kinect uses some predefined skeleton model to return the skeleton based on the depth data. That's nice, but it will only let you get a skeleton for people. Is it possible to define a custom skeleton model? For example, maybe you want to track your dog while he's doing something. So, is there a way to define a model with four legs, a tail, and a head, and to track this?
Short answer: no. Using the Microsoft Kinect for Windows SDK's skeleton tracker, you are stuck with the one they give you. There is no way to inject a new set of logic or rules.
Long answer: sure. You are not able to use the pre-built skeleton tracker, but you can write your own. The skeleton tracker uses the depth data to determine where a person's joints are. You could take that same data and process it for a different skeleton structure.
Microsoft does not provide access to all the internal functions that process and output the human skeleton, so we would be unable to use it as any type of reference for how the skeleton is built.
In order to track anything but a human skeleton you'd have to rebuild it all from the ground up. It would be a significant amount of work, but it is doable... just not easily.
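As a toy illustration of what "processing the same data for a different skeleton structure" would start from, here is a sketch that segments a foreground silhouette from a depth frame and picks one crude joint candidate. Both the background threshold and the "topmost foreground pixel is the head" heuristic are assumptions for illustration only; a real tracker would fit a full articulated model.

```python
import numpy as np

def head_candidate(depth, background_mm=3000):
    """Segment the foreground silhouette (valid pixels closer than an assumed
    background distance) and return the topmost foreground pixel (row, col)
    as a crude head-joint candidate, or None if nothing is in the foreground."""
    mask = (depth > 0) & (depth < background_mm)
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)      # row-major, so index 0 is the topmost pixel
    top = rows.argmin()
    return int(rows[top]), int(cols[top])
```

For a quadruped you would replace this single heuristic with a set of them (extremity detection for four legs, a tail, and a head) and enforce your custom model's joint-length constraints between them.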
There is a way to learn a bit about this subject by looking at the DLL example:
Face Tracking
from the SDK examples:
http://www.microsoft.com/en-us/kinectforwindows/develop/

How can I detect floor movements such as push-ups and sit-ups with Kinect?

I have tried to implement this using the skeleton tracking provided by the Kinect, but it doesn't work when I am lying down on the floor.
According to Blitz Games CTO Andrew Oliver, there are specific ways to implement this with the depth stream, or by tracking the silhouette of a user, instead of using skeleton frames from the Kinect API. You may find a good example in the video game Your Shape Fitness. Here is a link showing floor movements such as push-ups!
Do you guys have any idea how to implement this or detect movements and compare them with other movements using depth stream?
What if a 360-degree sensor were developed, one that recognizes movements not only directly in front of it, or to its left or right, but also above or below it? The image that I just imagined was the spherical, 360-degree motion sensors often used in secure buildings and such.
Without another sensor, I think you'll need to track the depth data yourself. Here's a paper with some details about how Microsoft implements skeletal tracking in the Kinect SDK that might get you started. They implement object matching while parsing the depth data to capture joints in the body, so you may need to implement some object templates and algorithms to do the matching yourself, unless you can reuse some of the skeletal tracking libraries to parse objects out of the depth data for you.
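A silhouette-based repetition counter along the lines suggested above can be reduced to thresholding a per-frame statistic of the depth stream (for example, the mean depth or bounding-box height of the tracked silhouette). This sketch counts repetitions with hysteresis; the signal and both thresholds are hypothetical values for illustration.

```python
def count_reps(signal, low, high):
    """Count repetitions in a per-frame 1D signal using hysteresis:
    a rep is completed each time the signal drops below `low` after
    having risen above `high` (e.g. chest height during push-ups).
    Hysteresis keeps noise between the two thresholds from double-counting."""
    reps = 0
    up = False
    for x in signal:
        if x > high:
            up = True                  # reached the top of the movement
        elif x < low and up:
            reps += 1                  # completed a full up-then-down cycle
            up = False
    return reps
```

For example, `count_reps(heights, low=2, high=8)` on a silhouette-height series that oscillates between 0 and 10 counts one rep per full oscillation.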

Kinect: How to get skeleton data from depth data (obtained from the Kinect, but with some places modified)

I can get the depth frame from my Kinect and then modify the data in the frame.
Now I want to use the modified depth frame to get the skeleton data.
How can I do it?
Well, I found there's no way to do this with the Microsoft Kinect SDK. However, I found it's possible to use OpenNI, an open-source API by PrimeSense.