Kinect: How to get skeleton data from depth data (obtained from the Kinect, but modified in places) - kinect

I can get the depth frame from my Kinect and then modify the data in the frame.
Now I want to use the modified depth frame to get the skeleton data.
How can I do that?

Well, I found there is no way to do this with the Microsoft Kinect SDK. However, it can be done with OpenNI, an open-source API by PrimeSense.

Related

How can I input a depth image to make the Kinect output a skeleton?

My research goal is to make the skeleton more stable. For now I want to try this by improving the depth image and then obtaining a skeleton based on the processed image, but I don't know how to make that happen, since the Kinect only provides APIs to get the output data.
You are trying to do the exact opposite of what the Kinect is capable of. You cannot feed it a pre-processed depth image, but you can apply image processing to the depth image stream it delivers, for example as sketched below.
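For illustration, with the Kinect for Windows SDK v1 (managed API) you can subscribe to the depth stream and filter each frame after it arrives; this is only a minimal sketch of that kind of post-processing, and the 4000 mm cut-off is an arbitrary example value:

    using Microsoft.Kinect;

    class DepthPostProcessor
    {
        private readonly short[] rawDepth = new short[640 * 480];

        public void Start()
        {
            KinectSensor sensor = KinectSensor.KinectSensors[0];
            sensor.DepthStream.Enable(DepthImageFormat.Resolution640x480Fps30);
            sensor.DepthFrameReady += OnDepthFrameReady;
            sensor.Start();
        }

        private void OnDepthFrameReady(object sender, DepthImageFrameReadyEventArgs e)
        {
            using (DepthImageFrame frame = e.OpenDepthImageFrame())
            {
                if (frame == null) return;
                frame.CopyPixelDataTo(rawDepth);

                // Post-process your own copy of the frame, e.g. discard
                // far-away samples. This cannot be fed back into the SDK's
                // skeleton tracker; it only affects your own pipeline.
                for (int i = 0; i < rawDepth.Length; i++)
                {
                    int depthMm = rawDepth[i] >> DepthImageFrame.PlayerIndexBitmaskWidth;
                    if (depthMm > 4000)
                        rawDepth[i] = 0;
                }
            }
        }
    }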

Does the Microsoft Kinect SDK provide any API where I can input a depth image and get back a skeleton?

I need help getting skeleton data from my modified depth image using the Kinect SDK.
I have two Kinects, and I can get the depth image from both of them. I transform the two depth images into 3D coordinates and combine them using OpenGL. Finally, I reproject the 3D scene and get a new depth image. (The resolution is also 640*480, and my new depth image runs at about 20 FPS.)
Now I want to get the skeleton from this new depth image using the Kinect SDK.
Can anyone help me with this?
Thanks
This picture shows my flow path:
Does the Microsoft Kinect SDK provide any API where I can input a depth image and get back a skeleton?
No.
I tried to find some libraries (not made by Microsoft) to accomplish this task, but it is not simple at all.
Try following this discussion on ResearchGate; there are some useful links (such as databases and early-stage projects) from which you can start if you want to develop your own library and share it with us.
I was hoping to do something similar: feed a post-processed depth image back to the Kinect for skeleton segmentation, but unfortunately there doesn't seem to be a way of doing this with the existing API.
Why would you reconstruct a skeleton from your 3D depth data?
The Kinect SDK can record a skeleton directly, without any such reconstruction.
And one camera is enough to record a skeleton.
If you use the Kinect v2, it can track several skeletons at once (up to six bodies).
The Kinect basically provides three streams: RGB video, depth, and skeleton, for example:
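With the Kinect for Windows SDK v1, for instance, the three streams are enabled independently; a minimal sketch, assuming one connected sensor:

    using Microsoft.Kinect;

    class Streams
    {
        static void Main()
        {
            KinectSensor sensor = KinectSensor.KinectSensors[0];

            // The three basic streams: RGB video, depth, and skeleton.
            sensor.ColorStream.Enable(ColorImageFormat.RgbResolution640x480Fps30);
            sensor.DepthStream.Enable(DepthImageFormat.Resolution640x480Fps30);
            sensor.SkeletonStream.Enable();

            sensor.Start();
        }
    }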

Kinect Fusion - data format - object segmentation

I have recently started working with Kinect Fusion for my 3D reconstruction project. I have two questions in this area:
What is inside an .STL file? Is it the vertices of the different objects in the scene?
How can I segment a specific object (e.g. my hand) in the reconstructed file? Is there a way to do so using Kinect Fusion?
Thank you in advance!
Yes, there is a Wikipedia article on the STL format:
http://en.wikipedia.org/wiki/STL_format
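In short, an STL file is nothing but a flat list of triangular facets, each with a normal and three vertices; it has no notion of separate objects. The ASCII variant looks like this:

    solid example
      facet normal 0.0 0.0 1.0
        outer loop
          vertex 0.0 0.0 0.0
          vertex 1.0 0.0 0.0
          vertex 0.0 1.0 0.0
        endloop
      endfacet
    endsolid example

So the file itself does not distinguish your hand from the rest of the scene; any segmentation has to happen before meshing, or on the vertices afterwards.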
Firstly, you would want your hand to be held still, because Kinect Fusion only reconstructs static scenes. Secondly, you could use the minimum and maximum depth threshold values to filter out the rest of the scene. Or, if that does not work well, you could reconstruct the whole scene, get the mesh, and then filter out the vertices beyond a certain depth, for example as in the sketch below.
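As a rough illustration of those two options, you can either zero out depth samples outside a band around your hand before reconstruction, or filter the exported mesh vertices afterwards. The sketch below uses made-up thresholds and hypothetical helper names (FilterDepth, FilterVertices); it is not Kinect Fusion API:

    using System.Collections.Generic;
    using System.Numerics;

    static class HandSegmentation
    {
        // Keep only depth samples between minMm and maxMm; everything else
        // becomes 0 and is treated as "no data". Thresholds are examples.
        public static void FilterDepth(short[] depthMillimetres, int minMm, int maxMm)
        {
            for (int i = 0; i < depthMillimetres.Length; i++)
            {
                if (depthMillimetres[i] < minMm || depthMillimetres[i] > maxMm)
                    depthMillimetres[i] = 0;
            }
        }

        // Alternative: reconstruct the whole scene, export the mesh, then keep
        // only the vertices that fall inside the same depth band (z in metres).
        public static List<Vector3> FilterVertices(IEnumerable<Vector3> vertices,
                                                   float minZ, float maxZ)
        {
            var kept = new List<Vector3>();
            foreach (var v in vertices)
                if (v.Z >= minZ && v.Z <= maxZ)
                    kept.Add(v);
            return kept;
        }
    }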

How can I detect floor movements such as push-ups and sit-ups with Kinect?

I have tried to implement this using the skeleton tracking provided by the Kinect, but it doesn't work when I am lying down on the floor.
According to Blitz Games CTO Andrew Oliver, there are specific ways to implement this with the depth stream, or by tracking the silhouette of a user, instead of using the skeleton frames from the Kinect API. You may find a good example in the video game Your Shape Fitness. Here is a link showing floor movements such as push-ups!
Do you have any idea how to implement this, or how to detect movements and compare them with other movements using the depth stream?
What if a 360-degree sensor were developed, one that recognizes movement not only directly in front of it or to its left and right, but also above and below it? The image I have in mind is the spherical, 360-degree motion sensors often used in secure buildings and the like.
Without another sensor, I think you'll need to track the depth data yourself. Here's a paper with some details about how Microsoft implements skeletal tracking in the Kinect SDK that might get you started. They perform object matching while parsing the depth data to capture the joints of the body, so you may need to implement some object templates and matching algorithms yourself, unless you can reuse one of the skeletal tracking libraries to parse objects out of the depth data for you.
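One cheap starting point, short of re-implementing skeletal tracking, is the player segmentation the SDK already writes into the depth stream: assuming it still labels a user who is lying down, you can extract the silhouette per frame and watch how its topmost row (or centroid) oscillates during a push-up. A hedged sketch with the Kinect SDK v1 managed API:

    using System;
    using Microsoft.Kinect;

    class SilhouetteTracker
    {
        private DepthImagePixel[] pixels;

        public void Start()
        {
            KinectSensor sensor = KinectSensor.KinectSensors[0];
            sensor.DepthStream.Enable(DepthImageFormat.Resolution640x480Fps30);
            sensor.SkeletonStream.Enable(); // required for player segmentation
            sensor.DepthFrameReady += OnDepthFrameReady;
            sensor.Start();
        }

        private void OnDepthFrameReady(object sender, DepthImageFrameReadyEventArgs e)
        {
            using (DepthImageFrame frame = e.OpenDepthImageFrame())
            {
                if (frame == null) return;
                if (pixels == null) pixels = new DepthImagePixel[frame.PixelDataLength];
                frame.CopyDepthImagePixelDataTo(pixels);

                // Topmost image row containing a player pixel. With the sensor
                // viewing the user side-on, this row moves down and up once per
                // push-up, so counting its oscillations gives a crude rep counter.
                int topRow = int.MaxValue;
                for (int i = 0; i < pixels.Length; i++)
                {
                    if (pixels[i].PlayerIndex > 0)
                    {
                        int row = i / frame.Width;
                        if (row < topRow) topRow = row;
                    }
                }
                if (topRow != int.MaxValue)
                    Console.WriteLine("Silhouette top row: {0}", topRow);
            }
        }
    }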

Knowing nothing about Kinect/.NET, how hard would it be to get body position data?

A friend asked me to help him on a project, and I am trying to get a sense of how easy it will be to get body position data from the Kinect using the .NET platform. For example, does the Kinect just give a raw data stream of movements within the range of the sensors, or does it have the option of a "smart" data stream that identifies people, parts of a person, and position changes of those people/parts?
Every skeleton that is tracked has its own unique ID. From the skeleton you can get the position (x, y, z) of every tracked joint.
It's not a simple data stream but a complex set of data, for example:
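With the Kinect for Windows SDK v1, each frame of the skeleton stream delivers up to six Skeleton objects, each carrying a TrackingId and joint positions in metres; a minimal sketch:

    using System;
    using Microsoft.Kinect;

    class SkeletonReader
    {
        private readonly Skeleton[] skeletons = new Skeleton[6];

        public void Start()
        {
            KinectSensor sensor = KinectSensor.KinectSensors[0];
            sensor.SkeletonStream.Enable();
            sensor.SkeletonFrameReady += OnSkeletonFrameReady;
            sensor.Start();
        }

        private void OnSkeletonFrameReady(object sender, SkeletonFrameReadyEventArgs e)
        {
            using (SkeletonFrame frame = e.OpenSkeletonFrame())
            {
                if (frame == null) return;
                frame.CopySkeletonDataTo(skeletons);

                foreach (Skeleton s in skeletons)
                {
                    if (s.TrackingState != SkeletonTrackingState.Tracked) continue;

                    // Each tracked person keeps the same TrackingId across frames.
                    SkeletonPoint head = s.Joints[JointType.Head].Position;
                    Console.WriteLine("Skeleton {0}: head at ({1:F2}, {2:F2}, {3:F2}) m",
                                      s.TrackingId, head.X, head.Y, head.Z);
                }
            }
        }
    }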
I know this has already been answered, but I just wanted to point out that, as in my answer on Kinect user detection, you can also use the PlayerIndex, which JuergeonD also explains in Kinect SDK player detection. Hope this helps!