How can I detect floor movements such as push-ups and sit-ups with Kinect?

I have tried to implement this using the skeleton tracking provided by Kinect, but it doesn't work when I am lying down on the floor.
According to Blitz Games CTO Andrew Oliver, there are ways to implement this with the depth stream or by tracking the user's silhouette instead of using skeleton frames from the Kinect API. You may find a good example in the video game Your Shape Fitness. Here is a link showing floor movements such as push-ups!
Do you guys have any idea how to implement this, or how to detect movements and compare them with other movements using the depth stream?

What if a 360-degree sensor were developed, one that recognizes movement not only directly in front of it, or to the left and right, but also above and below it? The image I have in mind is the spherical, 360-degree motion sensors often used in secure buildings and the like.

Without another sensor, I think you'll need to track the depth data yourself. Here's a paper with some details about how MS implements skeletal tracking in the Kinect SDK; that might get you started. They implement object matching while parsing the depth data to capture joints in the body, so you may need to implement some object templates and matching algorithms yourself, unless you can reuse some of the skeletal tracking libraries to parse objects out of the depth data for you.
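For illustration, here is a minimal Python sketch of that depth-stream idea, assuming depth frames arrive as 2D NumPy arrays in millimeters (the get_depth_frame stub is hypothetical; libfreenect's sync_get_depth is one real way to obtain such frames). It tracks the silhouette's clearance above a known floor depth and counts push-up repetitions with simple hysteresis:

```python
import numpy as np

def get_depth_frame():
    """Hypothetical stub: return one depth frame as a 2D array of
    millimeters (e.g. via libfreenect's sync_get_depth())."""
    raise NotImplementedError

def body_clearance(depth, floor_depth_mm, person_mask):
    """Mean clearance of the body silhouette above the floor, in mm.
    person_mask is a boolean array marking the silhouette pixels."""
    return float(np.mean(floor_depth_mm - depth[person_mask]))

def count_pushups(clearances, low_mm=80, high_mm=200):
    """Count down-up cycles with hysteresis on the clearance signal."""
    reps, down = 0, False
    for c in clearances:
        if c < low_mm:
            down = True            # chest near the floor
        elif down and c > high_mm:
            reps += 1              # back at the top of the push-up
            down = False
    return reps
```

The thresholds are guesses; in practice you would estimate the floor depth per pixel (e.g. by plane fitting) and smooth the clearance signal before counting.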

Related

Dual Kinect calibration using a powerful IR LED illuminator

I am using multiple Kinects within the scene, so I need to calibrate them and find the extrinsic parameters (translation and rotation) relative to a world coordinate system. Once I have that information, I can reconstruct the scene with the highest level of accuracy. The important point is: I want to get sub-millimeter accuracy, and it might be nice if I could use a powerful IR projector in my system. But I do not have any background in IR sensors or calibration methods, so I need to know about two subjects: 1. Is it possible to add an IR LED illuminator to the Kinect and manage it? 2. If I can add one, how do I calibrate my new system?
Calibration (determining the relative transforms: rotation, scale, position) is only part of the problem. You also need to consider whether each Kinect can handle the interference from the other Kinect's projected IR reference pattern.
"Shake n Sense" (by Microsoft Research) is a novel approach that you may be able to use that has been demonstrated to work.
https://www.youtube.com/watch?v=CSBDY0RuhS4
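For the extrinsic part, a common approach is to observe the same calibration target from both Kinects (e.g. checkerboard corners back-projected to 3D through each depth camera) and recover the rigid transform between the devices with the Kabsch/SVD method. A minimal NumPy sketch, assuming you already have corresponding 3D points from each device:

```python
import numpy as np

def rigid_transform(src, dst):
    """Kabsch: find rotation R and translation t with dst[i] ~ R @ src[i] + t,
    given (N, 3) arrays of corresponding 3D points from the two Kinects."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection sneaking into the least-squares solution.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```

Sub-millimeter accuracy is likely beyond the Kinect's per-pixel depth noise, so averaging many corresponding points across many frames will matter at least as much as the illuminator.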

Kinect SDK skeletonization method

I was wondering if there's a way to modify the depth map prior to sending it to the skeletonization algorithm used by the Kinect, for example if we want to run the skeletonization on the output of a segmented depth image. So far I have reviewed the methods in the SDK, but I haven't been able to find a skeletonization method exposed. It's as if you can either turn the skeleton on or off, but you have no control over its inputs.
If anyone has any idea regarding this topic, I will be much obliged.
Shamita: skeletonization means tracking the joints of the user in real time. (I am editing because I can't comment; not enough reputation.)
All the joints give a depth coordinate, and I don't think you can mess with the Kinect hardware input stream. But you can categorize the joints into depth segments. For example, with the live stream you assign each joint to the corresponding category: if its depth is below ten and above five, it is in category A. This can be done on the live stream itself because it is just a simple calculation.
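For illustration, a minimal Python sketch of that bucketing idea; the band edges mirror the "above five and below ten" example and are otherwise arbitrary placeholders you would tune to your depth units:

```python
# Placeholder depth bands mirroring the "above five and below ten is
# category A" example; the units and edges are arbitrary.
BANDS = (("A", 5.0, 10.0), ("B", 10.0, 15.0))

def depth_category(z, bands=BANDS):
    """Assign a joint's depth value to a named band, or None if outside all."""
    for name, lo, hi in bands:
        if lo <= z < hi:
            return name
    return None
```

You would run this per joint on every frame as the skeleton data arrives.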

Is it possible to recognize all objects in a room with Microsoft Kinect?

I have a project where I have to recognize an entire room so I can calculate the distances between objects (big ones, e.g. a bed, a table, etc.) and a person in that room. Is something like that possible using Microsoft Kinect?
Thank you!
Kinect provides you with the following:
Depth stream
Color stream
Skeleton information
It's up to you how you use this data.
To answer your question: the official Microsoft Kinect SDK doesn't provide shape detection out of the box. But it does provide skeleton data and face tracking, with which you can detect the distance of a user from the Kinect.
Also, by mapping the color stream to the depth stream you can detect how far a particular pixel is from the Kinect. In your implementation, if different objects have unique characteristics such as color, shape, and size, you can probably detect them and measure their distances as well.
OpenCV is one of the libraries I use for computer vision.
Again, it's up to you how you use this data.
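To make the "how far is a pixel" part concrete: once a color pixel is mapped into the depth frame, its depth value plus the depth camera intrinsics give a full 3D point, and object-to-object distances follow directly. A small sketch using assumed, uncalibrated placeholder intrinsics for the Kinect v1 depth camera:

```python
import numpy as np

# Placeholder Kinect v1 depth intrinsics; calibrate your own device
# for real measurements.
FX, FY, CX, CY = 594.2, 591.0, 320.0, 240.0

def pixel_to_3d(u, v, depth_mm):
    """Back-project depth pixel (u, v) into camera-space meters
    with the pinhole model."""
    z = depth_mm / 1000.0
    return np.array([(u - CX) * z / FX, (v - CY) * z / FY, z])

def distance_between(p1, p2):
    """Euclidean distance in meters between two back-projected points."""
    return float(np.linalg.norm(p1 - p2))
```

The distance between a person and, say, the bed then reduces to back-projecting one pixel from each segmented object and taking the norm.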
The Kinect camera provides depth and consequently 3D information (a point cloud) about matte objects in the range 0.5-10 meters. With this information it is possible to segment out the floor of the room (by fitting a plane), and possibly the walls and ceiling too. This step is important since these surfaces often connect separate objects into one big object.
The remaining parts of the point cloud can be segmented by depth if they don't physically touch each other. Using color, one can separate the objects even further. Note that we implicitly define an object as a 3D-dense and color-consistent entity; other definitions are also possible.
As soon as you have your objects segmented, you can measure the distances between your segments, analyse their shapes, recognize artifacts or humans, etc. To the best of my knowledge, however, the skeleton library can recognize humans only after they have moved for a few seconds. Below is a simple depth map that was broken into a few segments using depth but not color information.
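The floor-fitting step described above is commonly done with RANSAC. A minimal NumPy sketch over an (N, 3) point cloud, returning the plane parameters and the floor's inlier mask:

```python
import numpy as np

def fit_floor_ransac(points, iters=200, tol=0.02):
    """RANSAC plane fit: returns ((normal, d), inlier_mask) with
    normal . p + d = 0; tol is the inlier distance in meters."""
    rng = np.random.default_rng(0)
    best_inliers, best_plane = None, None
    for _ in range(iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue  # degenerate (collinear) sample
        n /= norm
        d = -n @ sample[0]
        inliers = np.abs(points @ n + d) < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (n, d)
    return best_plane, best_inliers
```

The non-floor points (points[~inlier_mask]) can then be clustered by depth or connectivity, as the answer suggests.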

Shape (preferably human) recognition API for use with standard webcam

I am interested in getting into user interaction/shape detection with a simple USB webcam. I can use multiple webcams, but I don't want to be restricted to something like the Kinect sensor. My detection cameras need to be set up on either side of a helmet (or, if a single one, on top). I have found some libraries, but they don't really have the functionality I need, and most are geared towards facial recognition. I need to be able to detect a basic human skeletal structure and determine if something is obstructing it. I would really rather do it without using any sort of marker system on the target person, and I would like it to be able to target multiple structures. Obviously I am willing to do tweaking if necessary, but I want to see how close I can get to what I need before I reinvent the wheel. I am trying to design an AI system that can determine how many people are in an area and where they are.
I doubt there will be anything like this, since Microsoft spent a ton of money on the R&D for Kinect and it's probably all locked behind an NDA. I'm also guessing there's a lot of hardware within the Kinect that is not available in a standard webcam.
The closest thing that I could find to what you're looking for is the OpenKinect project; it might be a good place to start your research.
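For the "how many people and where" part, OpenCV itself ships a webcam-only baseline that may be enough to start with, even though it yields bounding boxes rather than a skeleton: the built-in HOG + linear-SVM pedestrian detector. A minimal sketch:

```python
import cv2

# OpenCV's stock HOG people detector: a webcam-only starting point,
# far weaker than Kinect skeletal tracking.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    rects, weights = hog.detectMultiScale(frame, winStride=(8, 8))
    for (x, y, w, h) in rects:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("people", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```

It will not give you joints or tell you what is obstructing a body, but it does count and localize whole-body silhouettes from ordinary webcams.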

Kinect - Tracking people in a crowd - Sports Motion Tracking

I'm interested in programming the Kinect to track people over a largish area.
In particular, I'm looking to track players on a small sports field using gestures to record events in a sports game.
So far I have not found any examples of this being done before, other than Processing examples of tracking players on recorded video.
Could anybody please provide any examples of Microsoft's Kinect technology being applied to sport?
This is not what the Kinect is designed to do, and is not something that it will do for you.
The Kinect is capable of tracking no more than 6 people at a time, and only 2 of them actively. It works best for people 6-8 feet away and will not track anyone much further than that.
For what you are proposing, the Kinect would not benefit you; it would probably hinder you, since it is designed for 1-2 people at a limited distance. You would be better off with a higher-quality camera.
I have never used or tried the Kinect (but I really want to get one!), so I don't know if it suits what you need. But there is an amazing tutorial from Amnon Owed on Kinect and Processing. The example is with one person, but it might be of interest to you. The video is very cool.
It is possible, but you will need to depart from the skeleton tracking available in the Kinect SDK and process the depth data yourself. You'll need multiple Kinects to handle a larger area, and mounting them top-down from the ceiling will let you more easily distinguish individuals that a side view would group together into one multi-person blob.
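A sketch of that top-down idea, assuming a ceiling-mounted sensor at a known height and depth frames as NumPy arrays in millimeters; the height band and blob-area thresholds are illustrative guesses:

```python
import cv2
import numpy as np

def count_players(depth_mm, ceiling_mm=3000, head_band=(1200, 2200)):
    """Return centroids of person-sized blobs seen from above.
    Players stand out as regions whose height above the floor falls
    in a plausible 'standing person' band."""
    height = ceiling_mm - depth_mm.astype(np.int32)
    mask = ((height > head_band[0]) & (height < head_band[1])).astype(np.uint8)
    # Remove speckle noise before labeling blobs.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    # Skip label 0 (background) and discard tiny blobs.
    return [c for c, s in zip(centroids[1:], stats[1:])
            if s[cv2.CC_STAT_AREA] > 500]
```

Each returned centroid is one player candidate in image coordinates; fusing candidates across the multiple overlapping Kinects is then a separate registration problem (see the calibration question above).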