I just want to know whether the identification of objects such as humans, and of body gestures, is done by the Kinect or by the Xbox 360.
It's done on the Xbox.
The Kinect only sends a depth map, RGB video, and audio, which are processed by Microsoft's algorithms on the Xbox.
If you plan to program for the Kinect yourself, OpenNI comes with some tools (e.g. NITE) that do skeleton tracking and some gesture recognition for you.
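In case it helps, here is a minimal sketch of that split, assuming the OpenNI 1.x C++ wrapper with the NITE middleware installed (my assumption; error handling and the user/calibration callbacks a real application needs are omitted). The depth generator gives you the raw depth map from the device, while the skeleton comes from the middleware:

```cpp
#include <XnCppWrapper.h>
#include <cstdio>

int main()
{
    xn::Context context;
    context.Init();

    // The raw depth map is what the device itself provides.
    xn::DepthGenerator depth;
    depth.Create(context);

    // Skeleton tracking comes from the middleware (NITE), not from the camera.
    xn::UserGenerator user;
    user.Create(context);
    user.GetSkeletonCap().SetSkeletonProfile(XN_SKEL_PROFILE_ALL);

    context.StartGeneratingAll();

    for (int frame = 0; frame < 300; ++frame)
    {
        context.WaitAndUpdateAll();

        xn::DepthMetaData depthMD;
        depth.GetMetaData(depthMD);
        // depthMD.Data() is the raw depth map (XRes x YRes, millimetres).

        XnUserID ids[4];
        XnUInt16 count = 4;
        user.GetUsers(ids, count);
        for (XnUInt16 i = 0; i < count; ++i)
        {
            if (!user.GetSkeletonCap().IsTracking(ids[i]))
                continue;   // needs the calibration step omitted above
            XnSkeletonJointPosition head;
            user.GetSkeletonCap().GetSkeletonJointPosition(ids[i], XN_SKEL_HEAD, head);
            std::printf("user %u head at (%.0f, %.0f, %.0f) mm, confidence %.2f\n",
                        ids[i], head.position.X, head.position.Y, head.position.Z,
                        head.fConfidence);
        }
    }
    return 0;
}
```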
I am not sure if this has been tried before, but I am trying to use the Kinect to detect gestures made by the Nao robot.
I have made a Kinect application, a gesture-based picture viewer, and it detects humans fine (obviously it does!). What I wanted to try (lazy as I am) was to see if I could use some command (say, a voice command) to tell the Nao to do a swipe-right gesture and have my application identify that gesture. The Nao can easily recognize my command and perform the gesture. The problem, however, is that when I put the Nao in front of the Kinect sensor, the Kinect does not track it.
What I want to know is: are there some basics behind the Kinect's human body motion tracking that essentially fail when a robot is placed in front of it instead of a human?
PS: I have kept the Nao at the right distance from the sensor, and I have also checked that the entire robot is within the sensor's field of view.
The NAO robot doesn't have the same proportions as a human, and moreover it isn't the size of a human being (it's too short). For those reasons, classic skeleton detection doesn't recognize the NAO as a human.
To get it working you would have to take an existing skeleton-detection algorithm and change its thresholds and constants. Sadly, I haven't heard of that kind of algorithm being open source...
Just let me know...
I am using the Microsoft Kinect SDK and I would like to know whether it is possible to get the depth frame, the color frame, and the skeleton data for every frame at 30 fps. Using Kinect Explorer I can see that the color and depth frames run at nearly 30 fps, but as soon as I choose to view the skeleton, the rate drops to around 15-20 fps.
Yes, it is possible to capture color and depth at 30 fps while also tracking the skeleton.
See the image below, just in case you think I'm being dodgy. :) This is an unmodified Kinect Explorer running straight from Visual Studio 2010. My development machine is an i5 Dell laptop.
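For reference, a minimal native sketch of capturing all three streams together, assuming the Kinect for Windows SDK v1.x C++ API (the stream resolutions, frame count, and event loop below are illustrative choices, not taken from Kinect Explorer). The drop to 15-20 fps usually comes from heavy work done while handling the skeleton frames, not from the SDK itself:

```cpp
#include <windows.h>
#include <NuiApi.h>

int main()
{
    INuiSensor* sensor = nullptr;
    if (FAILED(NuiCreateSensorByIndex(0, &sensor))) return 1;

    // One initialisation call asks for colour, depth and skeleton together.
    if (FAILED(sensor->NuiInitialize(NUI_INITIALIZE_FLAG_USES_COLOR |
                                     NUI_INITIALIZE_FLAG_USES_DEPTH_AND_PLAYER_INDEX |
                                     NUI_INITIALIZE_FLAG_USES_SKELETON))) return 1;

    HANDLE colorEvent = CreateEvent(nullptr, TRUE, FALSE, nullptr);
    HANDLE depthEvent = CreateEvent(nullptr, TRUE, FALSE, nullptr);
    HANDLE skelEvent  = CreateEvent(nullptr, TRUE, FALSE, nullptr);
    HANDLE colorStream = nullptr, depthStream = nullptr;

    sensor->NuiImageStreamOpen(NUI_IMAGE_TYPE_COLOR, NUI_IMAGE_RESOLUTION_640x480,
                               0, 2, colorEvent, &colorStream);
    sensor->NuiImageStreamOpen(NUI_IMAGE_TYPE_DEPTH_AND_PLAYER_INDEX, NUI_IMAGE_RESOLUTION_320x240,
                               0, 2, depthEvent, &depthStream);
    sensor->NuiSkeletonTrackingEnable(skelEvent, 0);

    for (int frame = 0; frame < 300; ++frame)   // roughly ten seconds at 30 fps
    {
        HANDLE events[3] = { colorEvent, depthEvent, skelEvent };
        WaitForMultipleObjects(3, events, FALSE, 100);

        NUI_IMAGE_FRAME imageFrame;
        if (SUCCEEDED(sensor->NuiImageStreamGetNextFrame(colorStream, 0, &imageFrame)))
        {
            // imageFrame.pFrameTexture->LockRect(...) would give the colour pixels.
            sensor->NuiImageStreamReleaseFrame(colorStream, &imageFrame);
        }
        if (SUCCEEDED(sensor->NuiImageStreamGetNextFrame(depthStream, 0, &imageFrame)))
        {
            // Same pattern for the depth pixels.
            sensor->NuiImageStreamReleaseFrame(depthStream, &imageFrame);
        }
        NUI_SKELETON_FRAME skeletonFrame = {0};
        if (SUCCEEDED(sensor->NuiSkeletonGetNextFrame(0, &skeletonFrame)))
        {
            // Copy skeletonFrame.SkeletonData out and return quickly; doing
            // heavy drawing here is what tends to drag the rate down.
        }
    }

    sensor->NuiShutdown();
    sensor->Release();
    return 0;
}
```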
I'm looking for an iPhone-based project, preferably iOS 5 with ARC, that uses the iPhone 4's gyro to look around in a spherical coordinate system. The phone is at the center of a sphere, and by looking at the sensor output, it can work out where the camera is pointing in spherical coordinates.
I'm not sure whether what I have in mind can be accomplished with iOS 5's CMAttitude, which fuses the iPhone 4's sensors. Can it?
I intend to use the project to control a robotic turret and make it be able to "look" at a particular point within a spherical coordinate system.
What comes to mind is that a 360° panorama or TourWrist-like app would be a good starting point for such a project. Is there something similar that is open source and uses the native iOS Core Motion framework?
Thank you!
If you would like to license the TourWrist technology, please let me know. For example, we license the TourWrist capture and viewer APIs/SDKs.
Dan Smigrod
via: support#TourWrist.com
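Regarding the original question about getting spherical coordinates out of CMAttitude, here is a small sketch of just the math in plain C++ (the Core Motion calls themselves are omitted). Two assumptions of mine, not the poster's: the 3x3 matrix is the one you would read from CMAttitude's rotationMatrix in the X-true-north / Z-vertical reference frame, and that matrix maps device coordinates into the reference frame, so the back camera's look direction is the negated third column. Verify both conventions against the CMAttitude documentation before relying on them.

```cpp
#include <cmath>
#include <cstdio>

const double kRadToDeg = 180.0 / 3.14159265358979323846;

struct SphericalDirection { double azimuthDeg; double elevationDeg; };

// R: rotation matrix from CMAttitude (reference frame: X = true north,
// Z = vertical/up, Y completing the right-handed set, i.e. west).
// Assumption: R maps device coordinates into the reference frame, so the back
// camera's look direction (-Z in device coordinates) is R's negated third column.
SphericalDirection cameraDirection(const double R[3][3])
{
    double north = -R[0][2];
    double west  = -R[1][2];
    double up    = -R[2][2];
    double east  = -west;

    SphericalDirection d;
    d.azimuthDeg   = std::atan2(east, north) * kRadToDeg;             // 0 = north, +90 = east
    d.elevationDeg = std::atan2(up, std::hypot(north, east)) * kRadToDeg;
    return d;
}

int main()
{
    // Identity attitude: device lying flat with the screen facing up,
    // so the back camera points straight down.
    double R[3][3] = { {1, 0, 0}, {0, 1, 0}, {0, 0, 1} };
    SphericalDirection d = cameraDirection(R);
    std::printf("azimuth %.1f deg, elevation %.1f deg\n", d.azimuthDeg, d.elevationDeg);
    return 0;
}
```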
I'm building an augmented reality game on iOS 5 for devices that have a gyroscope.
I want to use CMAttitudeReferenceFrameXTrueNorthZVertical to map the device orientation and find out which CLLocation the device is looking toward. This is a new reference frame available in iOS 5, based on sensor-fusion algorithms, and it is supposed to be much smoother than the accelerometer-based code.
I see a lot of pre-iOS 5 examples that use the accelerometer, and older AR implementations built on accelerometer code. To rewrite such code, I need to understand how to turn the new CMAttitude and the current location into a vector from the current location to some other CLLocation, i.e. the vector that goes from the center of the screen out the back of the iPhone toward that reference point.
Thank you for any hints!
Look at the Apple pARK sample. It does a perspective transform that covers the screen, then projects the 3D coordinate from your location to the other geo location. https://developer.apple.com/library/ios/#samplecode/pARk/Introduction/Intro.html#//apple_ref/doc/uid/DTS40011083
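For what it's worth, here is a rough, self-contained sketch in plain C++ of those two steps (the Core Motion and Core Location calls are omitted). Everything in it, including the coordinates, screen size, field of view, and the assumption that the rotation matrix maps world coordinates into the device frame, is illustrative rather than taken from the pARK sample:

```cpp
#include <cmath>
#include <cstdio>

const double kDegToRad = 3.14159265358979323846 / 180.0;

struct Vec3 { double x, y, z; };

// Approximate east/north/up offset (metres) from the user to the target.
// Good enough for nearby points of interest; pARK itself goes through ECEF.
Vec3 enuOffset(double userLatDeg, double userLonDeg,
               double targetLatDeg, double targetLonDeg)
{
    const double earthRadius = 6371000.0;   // metres
    double dLat = (targetLatDeg - userLatDeg) * kDegToRad;
    double dLon = (targetLonDeg - userLonDeg) * kDegToRad;
    Vec3 v;
    v.x = dLon * std::cos(userLatDeg * kDegToRad) * earthRadius;  // east
    v.y = dLat * earthRadius;                                     // north
    v.z = 0.0;                                                    // altitude ignored
    return v;
}

// Rotate a world-frame vector into the device/camera frame. The matrix is
// assumed to map world coordinates into device coordinates; with Core Motion
// you may need the transpose of attitude.rotationMatrix, which is part of
// what pARK's GLKMatrix4 setup takes care of.
Vec3 worldToDevice(const double m[3][3], const Vec3& v)
{
    Vec3 r;
    r.x = m[0][0]*v.x + m[0][1]*v.y + m[0][2]*v.z;
    r.y = m[1][0]*v.x + m[1][1]*v.y + m[1][2]*v.z;
    r.z = m[2][0]*v.x + m[2][1]*v.y + m[2][2]*v.z;
    return r;
}

int main()
{
    // Example attitude: device held upright in portrait with the back camera
    // facing true north. In the real app this comes from CMAttitude's rotation
    // matrix in the X-true-north / Z-vertical reference frame.
    double attitude[3][3] = { {1, 0, 0}, {0, 0, 1}, {0, -1, 0} };

    // Hypothetical user position and point of interest.
    Vec3 world = enuOffset(37.3317, -122.0307, 37.3350, -122.0300);
    Vec3 cam = worldToDevice(attitude, world);

    // Simple pinhole projection; the camera looks along -Z in its own frame.
    const double width = 640.0, height = 960.0;          // illustrative screen size
    const double fovY  = 60.0 * kDegToRad;               // illustrative field of view
    const double f     = (height / 2.0) / std::tan(fovY / 2.0);
    if (cam.z < 0.0) {
        double sx = width  / 2.0 + f * (cam.x / -cam.z);
        double sy = height / 2.0 - f * (cam.y / -cam.z);
        std::printf("draw the label at (%.1f, %.1f)\n", sx, sy);
    } else {
        std::printf("the point of interest is behind the camera\n");
    }
    return 0;
}
```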
The Kinect camera has a very low-resolution RGB image. I want to use the point cloud from the Kinect's depth sensor, but texture-map it with an image taken from another camera.
Could anyone please guide me on how to do that?
See the Kinect Calibration Toolbox, v2.0 http://sourceforge.net/projects/kinectcalib/files/v2.0/
2012-02-09 - v2.0 - Major update. Added new disparity distortion model and simultaneous calibration of external RGB camera.
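Once the toolbox has given you the external camera's intrinsics and the rigid transform from the depth camera to that camera, the texture-mapping step itself is just a projection. Here is a sketch of that step in plain C++; the calibration numbers, names, and image size below are placeholders, not output from the toolbox:

```cpp
#include <cmath>
#include <cstdio>

struct Point3 { double x, y, z; };   // metres, in the depth camera's frame
struct Pixel  { double u, v; bool valid; };

struct Extrinsics { double R[3][3]; double t[3]; };     // depth camera -> external RGB camera
struct Intrinsics { double fx, fy, cx, cy; };           // external RGB camera

Pixel projectToExternal(const Point3& p, const Extrinsics& e, const Intrinsics& k,
                        int imageWidth, int imageHeight)
{
    // Rigid transform into the external camera's frame.
    double xc = e.R[0][0]*p.x + e.R[0][1]*p.y + e.R[0][2]*p.z + e.t[0];
    double yc = e.R[1][0]*p.x + e.R[1][1]*p.y + e.R[1][2]*p.z + e.t[1];
    double zc = e.R[2][0]*p.x + e.R[2][1]*p.y + e.R[2][2]*p.z + e.t[2];
    if (zc <= 0.0) { Pixel bad = {0, 0, false}; return bad; }   // behind the camera

    // Pinhole projection (lens distortion ignored; the toolbox also gives
    // distortion coefficients you would apply here).
    Pixel px;
    px.u = k.fx * xc / zc + k.cx;
    px.v = k.fy * yc / zc + k.cy;
    px.valid = px.u >= 0 && px.u < imageWidth && px.v >= 0 && px.v < imageHeight;
    return px;
}

int main()
{
    // Placeholder calibration values purely to make the sketch run.
    Extrinsics e = { {{1,0,0},{0,1,0},{0,0,1}}, {0.05, 0.0, 0.0} };  // 5 cm baseline
    Intrinsics k = { 1050.0, 1050.0, 960.0, 540.0 };                 // e.g. a 1080p camera

    Point3 p = { 0.1, -0.05, 1.2 };                     // one Kinect point-cloud point
    Pixel px = projectToExternal(p, e, k, 1920, 1080);
    if (px.valid)
        std::printf("sample the external image at (%.1f, %.1f)\n", px.u, px.v);
    else
        std::printf("point not visible in the external camera\n");
    return 0;
}
```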