All frames from Kinect at 30FPS

I am using the Microsoft Kinect SDK and I would like to know whether it is possible to get the depth frame, the color frame, and the skeleton data for every frame at 30fps. Using Kinect Explorer I can see that the color and depth frames run at nearly 30fps, but as soon as I choose to view the skeleton, it drops to around 15-20fps.

Yes, it is possible to capture color and depth at 30fps while also tracking the skeleton.
See the image below, just in case you think me dodgy. :) This is the stock Kinect Explorer sample running straight from Visual Studio 2010; my development machine is an i5 Dell laptop.
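For reference, here is a minimal, untested sketch of opening all three streams together with the SDK's native C++ API (the flag and resolution choices are assumptions; the managed Kinect Explorer sample does the equivalent in C#):

    // Sketch only: open color, depth and skeleton streams together using the
    // Kinect for Windows SDK 1.x native API (NuiApi.h).
    #include <Windows.h>
    #include <NuiApi.h>

    int main()
    {
        // Ask the runtime for everything in one go.
        HRESULT hr = NuiInitialize(NUI_INITIALIZE_FLAG_USES_COLOR |
                                   NUI_INITIALIZE_FLAG_USES_DEPTH |
                                   NUI_INITIALIZE_FLAG_USES_SKELETON);
        if (FAILED(hr)) return -1;

        HANDLE colorStream = NULL;
        HANDLE depthStream = NULL;
        NuiImageStreamOpen(NUI_IMAGE_TYPE_COLOR, NUI_IMAGE_RESOLUTION_640x480,
                           0, 2, NULL, &colorStream);
        NuiImageStreamOpen(NUI_IMAGE_TYPE_DEPTH, NUI_IMAGE_RESOLUTION_640x480,
                           0, 2, NULL, &depthStream);
        NuiSkeletonTrackingEnable(NULL, 0);

        for (;;)   // poll; break out on your own exit condition
        {
            const NUI_IMAGE_FRAME *colorFrame = NULL;
            const NUI_IMAGE_FRAME *depthFrame = NULL;
            NUI_SKELETON_FRAME skeletonFrame = {0};

            if (SUCCEEDED(NuiImageStreamGetNextFrame(colorStream, 0, &colorFrame)))
                NuiImageStreamReleaseFrame(colorStream, colorFrame);
            if (SUCCEEDED(NuiImageStreamGetNextFrame(depthStream, 0, &depthFrame)))
                NuiImageStreamReleaseFrame(depthStream, depthFrame);
            if (SUCCEEDED(NuiSkeletonGetNextFrame(0, &skeletonFrame)))
            {
                // skeletonFrame.SkeletonData holds up to NUI_SKELETON_COUNT skeletons.
            }
        }

        NuiShutdown();
        return 0;
    }

The streams themselves deliver 30fps; what usually drags the apparent rate down is the per-frame work (rendering, copying) done in the handlers, not the sensor.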

Related

Yellowstone tablet camera sensor

At the company I work for we're developing a 3D reconstruction app that can be downloaded to common Android devices. For the reconstructions to work best, we need to input the device's camera sensor size (specifically the CCD width). Through the app's Play Store administration page we've seen several users with a device called "Google ATAP Project Tango Tablet (Yellowstone)". Can anyone tell us what sensor size that tablet has? It would improve the reconstructions made on that device.
Thanks in advance!
As far as I understand, the Tango Yellowstone tablet uses a combined RGB and IR camera from OmniVision, the OV4682 RGB-IR. You can find more specifications here:
http://www.ovt.com/products/sensor.php?id=145

Kinect (XBoxOne) SDK 2.0 - Depth Sensor Blind Spot (Sensor Misaligned?)

I have a black region on the right-hand side of the depth frame.
When I point the Kinect at a white wall, I get the following (see attached picture from Kinect Studio): the black region never goes away.
(Screenshots attached: depth frame and IR frame, captured with Kinect Studio on SDK 2.0.)
It doesn't seem to matter where I point the Kinect or what the lighting situation is; the result is always the same.
I suspect the depth sensor/IR emitter are somehow misaligned.
Does anybody know if I can align or calibrate the Sensors somehow? Or is it a hardware issue?
I am using a Kinect for Xbox One with the Kinect for Windows USB adapter.

Kinect: How to obtain a skeleton from back view?

Why would you ever want something like this?
I want to track a single user who is suspended above the ground in a horizontal position. The user is facing downwards to allow free movement of the legs and arms. Think of swimming, for example.
I mounted the Kinect on the ceiling facing downwards so that I have a clear view of all extremities.
The sensor is rotated 90° around the z-axis to get the maximum resolution (a person is usually taller than they are wide).
Therefore the user is seen from the back, rotated by 90°. It is impossible to get a proper skeleton from OpenNI 1.5. My tests showed that OpenNI expects the user to face the camera with the head up along the y-axis (see my other answer). Microsoft's SDK behaves the same way, but I excluded it here because it won't allow you to change the source code and therefore cannot be adapted. OpenNI 2.0 does not work with the current SensorKinect driver for interfacing the Kinect on Linux. So:
Which class is generating the skeleton in OpenNI 1.5.x?
My best guess would be to rotate the prototype skeleton by 180° around y and 90° around z, if you know where I could find it.
EDIT: As I just learned, there is no open-source software that generates a skeleton from depth images, so I fall back to the question in the title:
How can I get a user skeleton from a rotated back view?
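Not an answer to which class owns the prototype skeleton, but for illustration, the rotation guessed at above (180° around y, then 90° around z) applied to a joint position would look like this; the Vec3 type is a stand-in, and the angles and their order would need checking against the tracker's actual conventions:

    // Sketch only: compose the two rotations mentioned above and apply them
    // to a 3D joint position. Axis conventions are assumptions.
    struct Vec3 { float x, y, z; };   // stand-in for a tracker joint position

    // 180 deg around the y-axis: (x, y, z) -> (-x, y, -z)
    Vec3 rotY180(Vec3 p) { return { -p.x, p.y, -p.z }; }

    // 90 deg around the z-axis: (x, y, z) -> (-y, x, z)
    Vec3 rotZ90(Vec3 p) { return { -p.y, p.x, p.z }; }

    // Map between the "facing the camera, head up" frame the tracker expects
    // and the ceiling-mounted, back-view, 90-deg-rotated setup.
    Vec3 transformJoint(Vec3 p) { return rotZ90(rotY180(p)); }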

Green screen kinect

The Kinect green screen sample available in SDK 1.5 does not show the head fully; the hair comes out with a blur effect.
I'm unable to see the hair portion in the sample. Is there an improved version of the Kinect green screen sample available?
That is not a problem with the sample. See this: http://www.youtube.com/watch?v=6BaWwx5x7nM
The Kinect projects an IR grid to detect depth, but some "rays" pass through the gaps in the hair, so it cannot detect that part.
It is possible to "fill" that missing part, but you have to implement it yourself.
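As a rough illustration of one way such a fill could be done (my own assumption, not something the SDK or the sample provides): grow the player mask by one pixel per pass so that thin gaps in the hair get covered.

    // Sketch only: one dilation pass over the green-screen player mask.
    // mask is width*height, non-zero where the pixel belongs to the player.
    #include <vector>

    void dilatePlayerMask(std::vector<unsigned char>& mask, int width, int height)
    {
        std::vector<unsigned char> src = mask;   // read from a copy, write into mask
        for (int y = 1; y < height - 1; ++y)
        {
            for (int x = 1; x < width - 1; ++x)
            {
                if (src[y * width + x]) continue;              // already player
                if (src[y * width + (x - 1)] || src[y * width + (x + 1)] ||
                    src[(y - 1) * width + x] || src[(y + 1) * width + x])
                {
                    mask[y * width + x] = 1;                   // fill the gap
                }
            }
        }
    }

Two or three passes close the small holes the IR pattern leaves in hair, at the cost of a slightly larger silhouette.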

iPhone iOS open source gyroscope based spherical coordinate system viewer, like 360 panorama or tour wrist

I'm looking for an iPhone-based project, preferably iOS 5 with ARC, that uses the iPhone 4's gyro to look around in a spherical coordinate system. The phone is at the center of a sphere, and by looking at the sensor output it can work out where the camera is pointing in spherical coordinates.
I'm not sure whether what I'm thinking of can be accomplished with iOS 5's CMAttitude, which fuses the iPhone 4's sensors. Can it?
I intend to use the project to control a robotic turret and make it able to "look" at a particular point within a spherical coordinate system.
What comes to mind is that a 360 Panorama or TourWrist-like app would be a good starting point for such a project. Is there something similar that is open source and uses the native iOS Core Motion framework?
Thank you!
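Not an answer to the open-source question, but as a sketch of the math involved: given an attitude quaternion such as the one CMAttitude reports, the camera's pointing direction can be converted to spherical angles roughly like this (the choice of forward axis and the reference frame are assumptions that must be checked against Core Motion):

    // Sketch only: attitude quaternion -> spherical pointing angles.
    #include <cmath>

    struct Quat { double w, x, y, z; };

    // Rotate the assumed camera-forward axis (0, 0, -1) by the quaternion,
    // then convert the result to azimuth and elevation in radians.
    void attitudeToSpherical(const Quat& q, double& azimuth, double& elevation)
    {
        // v' = q * (0,0,-1) * q^-1, written out for this specific vector.
        double vx = -2.0 * (q.x * q.z + q.w * q.y);
        double vy = -2.0 * (q.y * q.z - q.w * q.x);
        double vz = -(1.0 - 2.0 * (q.x * q.x + q.y * q.y));

        azimuth   = std::atan2(vy, vx);
        elevation = std::atan2(vz, std::sqrt(vx * vx + vy * vy));
    }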
If you would like to license the TourWrist technology, please let me know. For example, we license the TourWrist capture and viewer APIs/SDKs.
Dan Smigrod
via: support#TourWrist.com