How to access and use skeleton joint coordinates using OpenNI, ASUS Xtion, and ROS?

To develop a person-follower robot, I am using an ASUS Xtion and OpenNI. To obtain both the RGB image and the skeleton joints, I am using a skeleton tracker script (https://github.com/Chaos84/skeleton_tracker). The tracker publishes the joints to /tf.
The problem is that I cannot use those joint coordinates in my script; I don't know how to access them. How can I access and use them in my script to make the robot move according to those coordinates?
Thanks.

To get joint coordinates and angles from the /tf topic, you need to write a tf listener, which is explained in this link.
You can also look at one of my ROS packages, where I wrote a tf listener using OpenNI and an ASUS Xtion. Here is the link.
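For example, a minimal Python listener could look like the sketch below. The frame names ('openni_depth_frame', 'torso_1') and the gains are assumptions, not something the tracker guarantees; check which frames it actually publishes with "rosrun tf tf_monitor" before using them.

    #!/usr/bin/env python
    # Sketch of a tf listener that reads a skeleton joint and steers
    # the robot toward it. Frame names and gains are assumptions.
    import rospy
    import tf
    from geometry_msgs.msg import Twist

    rospy.init_node('person_follower')
    listener = tf.TransformListener()
    cmd_pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)

    rate = rospy.Rate(10.0)
    while not rospy.is_shutdown():
        try:
            # Position of the user's torso in the camera frame
            trans, rot = listener.lookupTransform('openni_depth_frame',
                                                  'torso_1', rospy.Time(0))
        except (tf.LookupException, tf.ConnectivityException,
                tf.ExtrapolationException):
            rate.sleep()
            continue

        cmd = Twist()
        # Assuming the ROS convention of x forward / y left: drive to
        # keep about 1 m of distance and turn toward the person.
        cmd.linear.x = 0.5 * (trans[0] - 1.0)
        cmd.angular.z = 1.0 * trans[1]
        cmd_pub.publish(cmd)
        rate.sleep()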

You can use another skeleton detector/tracker, BodySkeletonTracker:
https://github.com/derzu/BodySkeletonTracker
See how it works:
You can get the joint points through an object of the SkeletonPoints class.

Related

Increase resolution of Kinect RGB camera on ROS

I am doing a project using deep learning, and for it I need to take pictures from the Kinect and evaluate them. My problem is that the resolution of the pictures is 640x480. Because of this, I wanted to know whether ROS freenect or some other library can increase the resolution, given a YAML file or something like that? Thank you, guys, and sorry for my English.
I'm currently working on a project with a Kinect One sensor, and its camera resolution is 1920x1080.
If I am not wrong, you are currently using the old Xbox 360 Kinect, from what I see here.
I have not heard of libraries that can truly increase the resolution yet (which does not mean they don't exist). The closest you can get in software is plain interpolation, which adds pixels but no new detail; see the sketch below.
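If interpolation is enough for your pipeline, a short sketch with cv_bridge and OpenCV could look like this. The topic name '/camera/rgb/image_color' is an assumption taken from the common openni launch files; adjust it to your driver.

    #!/usr/bin/env python
    # Sketch: republish the Kinect RGB stream upscaled 2x with bicubic
    # interpolation. The image gets bigger, but no real detail is added.
    import rospy
    import cv2
    from cv_bridge import CvBridge
    from sensor_msgs.msg import Image

    bridge = CvBridge()
    pub = rospy.Publisher('/camera/rgb/image_upscaled', Image, queue_size=1)

    def callback(msg):
        img = bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
        up = cv2.resize(img, None, fx=2.0, fy=2.0,
                        interpolation=cv2.INTER_CUBIC)
        pub.publish(bridge.cv2_to_imgmsg(up, encoding='bgr8'))

    rospy.init_node('rgb_upscaler')
    rospy.Subscriber('/camera/rgb/image_color', Image, callback)
    rospy.spin()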
But my suggestion is to use the latest hardware, found in the Microsoft Store here. It costs about $150.
Cheers!

Kinect IR Emitter Continuous or Pulsed?

I'm a student intern with a work project using Kinect cameras. Unfortunately, I can't go into project details due to confidentiality, but I need to know whether the IR dot array emitted from the IR blasters within the Kinect is a continuous stream or pulsed; I mean just the emitted IR light, not the reception by the IR camera. It would be shining on some other IR light sensors in the environment that detect when something passes through their IR field of view, but I have been told it would not interfere as long as the stream is continuous.
I would appreciate any help or information you could give.
The Kinect 360 camera has a static pattern of points that are unevenly distributed. The pattern is continuous, not pulsed, as far as I know.

Is it possible to detect heat signatures using a Kinect?

Hi. Since the Kinect has an infrared camera, which theoretically enables thermal imaging: is it possible to detect specific heat signatures in a human body? Are there any API docs that could be useful to make this work?
No. Thermal imaging requires a camera sensitive to long-wave infrared (roughly 8-14 µm), while the Kinect's IR camera works in the near-infrared (around 830 nm), far outside that range. So no, no heat signatures with the Kinect.
More info here:
Can I figure out skin tone or body temperature using kinect?
https://physics.stackexchange.com/questions/6869/what-is-the-difference-between-thermal-and-infrared-imaging
http://answers.ros.org/question/61076/kinect-thermal-imaging/

Can the Kinect SDK be run with saved Depth/RGB videos, instead of a live Kinect?

This question relates to the Kaggle/CHALEARN Gesture Recognition challenge.
You are given a large training set of matching RGB and depth videos that were recorded from a Kinect. I would like to run the Kinect SDK's skeletal tracking on these videos, but after a lot of searching, I haven't found a conclusive answer as to whether this can be done.
Is it possible to use the Kinect SDK with previously recorded Kinect video, and if so, how? Thanks for the help.
It is not a feature of the SDK itself; however, you can use something like the Kinect Toolbox OSS project (http://kinecttoolbox.codeplex.com/), which provides skeleton record and replay functionality (so you don't need to stand in front of your Kinect each time). You do, however, still need a Kinect plugged into your machine to use the runtime.

Can I use a Kinect to identify objects?

Pylons define a path. I want the Kinect to detect the pylons so that I can make my robot stay within the path. Is the Kinect capable of object detection, and is there a tutorial on this?
The Kinect itself is just a device that gives you image data and depth values (via OpenKinect or the upcoming SDK), so you're looking for a library (or a combination of several) that can do the detection based on that data. I'm not sure there is something that provides a direct solution, but a combination with OpenCV seems to have been successful: http://www.youtube.com/watch?v=cRBozGoa69s. A rough sketch of such a color-based approach is below.
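As a sketch of that color-based idea (the HSV bounds for orange pylons are assumptions you will need to tune for your pylons and lighting):

    # Sketch: find orange pylons in an RGB frame by HSV thresholding.
    import cv2
    import numpy as np

    def find_pylons(bgr, min_area=500):
        hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
        # Hue/saturation/value bounds are assumptions; tune them.
        mask = cv2.inRange(hsv, np.array([5, 120, 120]),
                           np.array([20, 255, 255]))
        # [-2] works across OpenCV 3.x and 4.x return signatures
        contours = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                    cv2.CHAIN_APPROX_SIMPLE)[-2]
        return [cv2.boundingRect(c) for c in contours
                if cv2.contourArea(c) > min_area]

Reading the Kinect depth value at the center of each returned box then gives you each pylon's distance from the robot.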