Pylons mark out a path. I want the Kinect to detect the pylons so that I can make my robot stay within the path. Is the Kinect capable of object detection, and is there any tutorial on this?
The Kinect itself is just a device that gives you image data and depth values (via OpenKinect or the upcoming SDK). So you're looking for a library (or a combination of several) that can do the detection based on that data. I'm not sure there is something that provides a direct solution, but a combination with OpenCV seems to have been successful: http://www.youtube.com/watch?v=cRBozGoa69s
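To make that concrete, here is a minimal sketch of the kind of color-threshold detection you could do on a Kinect RGB frame. It assumes the pylons are bright orange and uses a crude per-channel threshold on a NumPy array; real code would more likely work in HSV with OpenCV contours, and all thresholds and names here are hypothetical:

```python
import numpy as np

def detect_pylon_column(rgb, r_min=180, g_max=120, b_max=120, min_pixels=50):
    """Return the horizontal center (column index) of a candidate pylon.

    A pixel counts as 'pylon-colored' when it is strongly red/orange.
    Returns None when too few pixels match.
    """
    mask = (rgb[:, :, 0] >= r_min) & (rgb[:, :, 1] <= g_max) & (rgb[:, :, 2] <= b_max)
    cols = np.flatnonzero(mask.sum(axis=0))  # columns containing pylon pixels
    if mask.sum() < min_pixels or cols.size == 0:
        return None
    return int(cols.mean())                  # center column of the blob

# Synthetic 100x200 frame with an "orange pylon" band at columns 140-149.
img = np.zeros((100, 200, 3), dtype=np.uint8)
img[20:90, 140:150] = (255, 100, 30)
print(detect_pylon_column(img))  # -> 144
```

Once you have the pylon's column in the image (and its depth from the Kinect's depth map), steering the robot to keep it on one side is a separate control problem.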
Related
The short question: I am wondering if the Kinect SDK / NiTE can be exploited to get depth-image-in, skeleton-out software.
The long question: I am trying to dump depth, RGB, and skeleton data streams captured from a Kinect v2 into rosbags. However, to the best of my knowledge, capturing the skeleton stream on Linux with ROS and a Kinect v2 isn't possible yet. Therefore, I was wondering if I could dump rosbags containing the RGB and depth streams, and then post-process these to get the skeleton stream.
I can capture all three streams on Windows using the Microsoft Kinect v2 SDK, but dumping them to rosbags with all the metadata (camera_info, sync info, etc.) would be painful (correct me if I am wrong).
It's been quite some time since I worked with NiTE (and I only used the Kinect v1), so maybe someone else can give a more up-to-date answer, but from what I remember this should easily be possible.
As long as all relevant data is published via ROS topics, it is quite easy to record them with rosbag and play them back afterwards. Every node that can handle live data from the sensor will also be able to do the same work on recorded data coming from a bag file.
One issue you may encounter is that bag files containing Kinect data quickly become very large (several gigabytes). This can be problematic if you want to edit the file afterwards on a machine with little RAM. If you only want to play the file back, or if you have enough RAM, this should not really be a problem, though.
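A quick back-of-the-envelope calculation shows why the bags grow so fast. Assuming uncompressed frames at the Kinect v2's native resolutions (512x424 16-bit depth, 1920x1080 color) and 30 fps:

```python
def stream_rate_mb_per_s(width, height, bytes_per_pixel, fps):
    """Uncompressed data rate of one image stream, in MB/s."""
    return width * height * bytes_per_pixel * fps / 1e6

depth = stream_rate_mb_per_s(512, 424, 2, 30)    # 16-bit depth stream
color = stream_rate_mb_per_s(1920, 1080, 3, 30)  # 8-bit RGB stream
print(round(depth, 1), round(color, 1))  # -> 13.0 186.6
```

That is roughly 200 MB/s combined, i.e. on the order of ten gigabytes per minute of recording, before any compression rosbag may apply.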
Indeed, it is possible to perform NiTE2 skeleton tracking on any depth-image stream.
Refer to:
https://github.com/VIML/VirtualDeviceForOpenNI2/wiki/How-to-use
and
https://github.com/VIML/VirtualDeviceForOpenNI2/wiki/About-PrimeSense-NiTE
With this extension you can add a virtual device that lets you manipulate each pixel of the depth stream. This device can then be used to create a userTracker object. As long as the right device name is provided, skeleton tracking can be done:
\OpenNI2\VirtualDevice\Kinect
But consider the usage limits: NiTE is only licensed for use with "Authorized Hardware".
I am very new to Kinect programming and am tasked with understanding several methods for 3D point cloud stitching using the Kinect and OpenCV. While waiting for the Kinect sensor to be shipped over, I am trying to run the SDK samples on some data sets.
I am really clueless as to where to start now, so I downloaded some datasets here, and do not understand how I am supposed to view/parse these datasets. I tried running the Kinect SDK Samples (DepthBasic-D2D) in Visual Studio but the only thing that appears is a white screen with a screenshot button.
There seems to be very little documentation with regards to how all these things work, so I would appreciate if anyone can point me to the right resources on how to obtain and parse depth maps, or how to get the SDK Samples work.
The Point Cloud Library (PCL) is a good starting point for handling point cloud data obtained using the Kinect and the OpenNI driver.
OpenNI is, among other things, open-source software that provides an API to communicate with vision and audio sensor devices (such as the Kinect). Using OpenNI you can access the raw data acquired with your Kinect and use it as input for your PCL software to process. In other words, OpenNI is an alternative to the official Kinect SDK, compatible with many more devices, and with great support and tutorials!
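As for parsing a depth map into a point cloud yourself: each depth pixel is back-projected through the standard pinhole camera model. A minimal sketch, where the intrinsics (fx, fy, cx, cy) are hypothetical placeholder values (take the real ones from your camera_info topic or calibration):

```python
import numpy as np

def depth_to_points(depth_mm, fx, fy, cx, cy):
    """Back-project a depth image (millimetres) to an Nx3 point cloud in metres.

    Pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy, Z = depth.
    Pixels with depth 0 (no reading on the Kinect) are skipped.
    """
    v, u = np.indices(depth_mm.shape)          # row (v) and column (u) grids
    z = depth_mm.astype(np.float64) / 1000.0   # millimetres -> metres
    valid = z > 0
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.column_stack((x[valid], y[valid], z[valid]))

# Tiny synthetic depth map: one valid pixel at 1 m, the rest invalid.
d = np.zeros((4, 4), dtype=np.uint16)
d[2, 3] = 1000
pts = depth_to_points(d, fx=525.0, fy=525.0, cx=2.0, cy=2.0)
print(pts)  # one point: x = (3-2)*1.0/525, y = 0.0, z = 1.0
```

PCL's own conversion from OpenNI depth frames does essentially this for you, so in practice you rarely need to write it by hand.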
There are plenty of tutorials out there like this, this and these.
Also, this question is highly related.
I am trying to do some work using Kinect and the Kinect SDK.
I was wondering whether it is possible to detect facial expressions (e.g. wink, smile etc) using the Kinect SDK Or, getting raw data that can help in recognizing these.
Can anyone kindly suggest any links for this ? Thanks.
I am also working on this, and I considered two options:
Face.com API:
There is a C# client library, and there are a lot of examples in their documentation.
EmguCV
This guy talks about basic face detection using EmguCV and the Kinect SDK, and you can use this to recognize faces.
I have stopped developing this for now, but if you complete it, please post a link to your code.
This is currently not featured in the Kinect for Windows SDK, due to the limitations of the Kinect in producing high-resolution images. That being said, libraries such as OpenCV and AForge.NET have been successfully used for finger and face detection on both the raw images returned from the Kinect and RGB video streams from webcams. I would use these computer vision libraries as a starting point.
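One small practical note when feeding Kinect frames into those libraries: the SDK delivers 32-bit BGRA color frames, while most face detectors (e.g. OpenCV's Haar cascades) expect 8-bit grayscale. A minimal sketch of that conversion, using the standard Rec.601 luma weights on a NumPy array:

```python
import numpy as np

def bgra_to_gray(frame):
    """Convert a raw 32-bit BGRA color frame to 8-bit grayscale.

    Uses the Rec.601 luma weights (0.299 R + 0.587 G + 0.114 B),
    which is what most detectors expect as input.
    """
    b = frame[:, :, 0].astype(np.float64)
    g = frame[:, :, 1].astype(np.float64)
    r = frame[:, :, 2].astype(np.float64)
    gray = 0.299 * r + 0.587 * g + 0.114 * b
    return np.clip(gray, 0, 255).astype(np.uint8)

frame = np.zeros((2, 2, 4), dtype=np.uint8)
frame[0, 0] = (0, 0, 255, 255)    # one pure-red pixel (B, G, R, A order)
print(bgra_to_gray(frame)[0, 0])  # -> 76 (0.299 * 255, truncated)
```

In a real pipeline you would hand the resulting grayscale image to the detector of your chosen library.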
Just a note: MS is releasing "Kinect for PC" along with a new SDK version in February. This has a new "Near Mode" which will offer better resolution for close-up images. Face and finger recognition might be possible with this. You can read an MS press release here, for example:
T3.com
The new Kinect SDK 1.5 has been released and contains face detection and recognition.
You can download the latest SDK here,
and check this website for more details about Kinect face tracking.
This question relates to the Kaggle/CHALEARN Gesture Recognition challenge.
You are given a large training set of matching RGB and Depth videos that were recorded from a Kinect. I would like to use the Kinect SDK's skeletal tracking on these videos, but after a bunch of searching, I haven't found a conclusive answer to whether or not this can be done.
Is it possible to use the Kinect SDK with previously recorded Kinect video, and if so, how? Thanks for the help.
It is not a feature of the SDK itself; however, you can use something like the Kinect Toolbox OSS project (http://kinecttoolbox.codeplex.com/), which provides skeleton record and replay functionality (so you don't need to stand in front of your Kinect each time). You do, however, still need a Kinect plugged into your machine to use the runtime.
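The record/replay idea itself is simple and language-agnostic: store each skeleton frame with its arrival timestamp, then replay frames with the original inter-frame delays. A minimal sketch (the class and method names here are hypothetical, not the Toolbox's actual C# API):

```python
class SkeletonRecorder:
    """Toy record/replay of skeleton frames, keyed by capture timestamp."""

    def __init__(self):
        self.frames = []  # list of (timestamp_seconds, joints_dict)

    def record(self, joints, t):
        self.frames.append((t, joints))

    def replay(self):
        """Yield (delay_since_previous_frame, joints) pairs in order."""
        prev = None
        for t, joints in self.frames:
            yield (0.0 if prev is None else t - prev), joints
            prev = t

rec = SkeletonRecorder()
rec.record({"head": (0.0, 0.5, 2.0)}, t=0.000)
rec.record({"head": (0.0, 0.6, 2.0)}, t=0.033)
for delay, joints in rec.replay():
    print(delay, joints["head"])
```

A consumer would sleep for `delay` seconds before handing each frame to the same code path that normally receives live SDK events.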
I was looking over the documentation trying to find anything that will allow me to access the accelerometer on the Kinect/device.
I'm trying to get accelerometer data, but I'm not sure how. So far there are two things I've spotted in the guide and docs: XnModuleDeviceInterface/xn::ModuleDevice and XnModuleLockAwareInterface/xn::ModuleLockAwareInterface.

I'm wondering if I can use the ModuleDevice Get/Set methods to talk to the device and ask for accelerometer data. If so, how can I get started?
Also, I was thinking: would it be possible to 'lock' OpenNI functionality temporarily while I try to get accelerometer data via freenect or something similar, then 'unlock' it after the reading is done? Has anyone tried this before? Any tips?
I'm currently using the SimpleOpenNI wrapper and Processing, but have used OpenFrameworks and the C++ library, so the language wouldn't be very important.
The standard OpenNI Kinect drivers don't expose or allow access to any accelerometer, motor, or LED controls. All of these controls are done through the "NUI Motor" USB device (protocol reference), which the SensorKinect Kinect driver doesn't communicate with.
One way around this is to use a modified OpenNI SensorKinect driver, i.e., this one which does connect to the NUI Motor device, and exposes basic accelerometer and motor control via a "CameraAngleVertical" integer property. It appears that you should be able to read/write an arbitrary integer property using SimpleOpenNI and Processing.
If you're willing to use a non-OpenNI-based solution, you can use Daniel Shiffman's Kinect Processing library, which is based on libfreenect. You'll get good accelerometer, motor, etc. support, but you will lose access to the OpenNI skeleton/gesture support. A similar library for OpenFrameworks is ofxKinect.
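Once you have the raw accelerometer counts (from libfreenect or a modified driver), converting them to something useful is straightforward. libfreenect's convention is 819 counts per 1 g; a sketch of turning a raw reading into a tilt angle (the axis assignment here is an assumption, check your driver's convention):

```python
import math

COUNTS_PER_G = 819.0  # raw accelerometer counts per 1 g (libfreenect convention)

def tilt_degrees(ax_counts, ay_counts, az_counts):
    """Estimate pitch from raw accelerometer counts.

    With the device stationary, the measured vector is gravity; the pitch
    is the angle gravity makes in the (hypothetical) y/z plane.
    """
    ay = ay_counts / COUNTS_PER_G
    az = az_counts / COUNTS_PER_G
    return math.degrees(math.atan2(az, ay))

# Device level: gravity entirely on the y axis -> 0 degrees of pitch.
print(round(tilt_degrees(0, 819, 0), 1))    # -> 0.0
# Tilted 45 degrees: gravity split equally between the y and z axes.
print(round(tilt_degrees(0, 579, 579), 1))  # -> 45.0
```

This only holds while the Kinect is not accelerating; any motion of the robot adds to the gravity vector.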
Regarding locking of OpenNI nodes, my understanding is that this just prevents properties from updating and does nothing at the USB driver level. Switching between the drivers (PrimeSense-based SensorKinect and libusb-based libfreenect) at runtime is not possible. It may be possible (I haven't tried it) to configure OpenNI for the camera device and use freenect to communicate with the NUI Motor device. No locking/synchronization between these devices should be required.