Kinect (Xbox One) SDK 2.0 - Depth Sensor Blind Spot (Sensor Misaligned?)

I have a black region on the right-hand side of the depth frame.
When I point the Kinect at a white wall, I get the following (see attached picture from Kinect Studio): the black region never goes away.
Depth frame: (image attached)
IR frame: (image attached)
It doesn't seem to matter where I point the Kinect or what the lighting conditions are; the result is always the same.
I suspect the depth sensor/IR emitter is somehow misaligned.
Does anybody know if I can align or calibrate the Sensors somehow? Or is it a hardware issue?
Using a Kinect for Xbox One with the Kinect for Windows USB adapter.

Related

How to transfer depth pixel to camera space using kinect sdk v2

I'm using the Kinect v2 and Kinect SDK v2, and I have a couple of questions about coordinate mapping:

1. How do I transfer a camera-space point (a point in the 3D coordinate system) to depth space together with its depth value? The current MapCameraPointToDepthSpace method only returns the depth-space coordinate, and without the depth value that result is of limited use. Does anyone know how to get the depth value?
2. How do I get the color camera intrinsics? There is only a GetDepthCameraIntrinsics method, which returns the depth camera intrinsics. What about the color camera?
3. How do I use the depth camera intrinsics? The Kinect v2 appears to model radial distortion, but how do I use these intrinsics to transform between a depth pixel and a 3D point? Is there any example code that does this?
Regarding 1: The depth value of your remapped world coordinate is the same as the original world coordinate's Z value. As the documentation for the depth buffer and the camera (world) coordinate space describes, both express the distance from the point to the Kinect's sensor plane (camera space in meters, the raw depth buffer in millimeters). If instead you want the depth value of the object seen in the depth frame directly behind your remapped coordinate, you have to read the depth image buffer at that position.
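A minimal sketch of both readings, assuming the usual 512x424 depth frame and that depthData has been filled via DepthFrame.CopyFrameDataToArray (the variable names are illustrative):

```csharp
using System;
using Microsoft.Kinect;

static class CameraToDepthExample
{
    // Map a camera-space point into the depth image and read both notions
    // of "depth": the point's own Z, and the sensor reading at that pixel.
    public static void Example(CoordinateMapper mapper, ushort[] depthData)
    {
        const int depthWidth = 512, depthHeight = 424; // Kinect v2 depth size

        var worldPoint = new CameraSpacePoint { X = 0.1f, Y = 0.2f, Z = 1.5f };
        DepthSpacePoint depthPixel = mapper.MapCameraPointToDepthSpace(worldPoint);

        // (1) The depth value of the remapped point itself is just its Z, in meters.
        float depthOfPointMeters = worldPoint.Z;

        // (2) The depth of whatever the sensor actually sees at that pixel
        // comes from the depth buffer, in millimeters. Unmappable points come
        // back as -infinity, which the bounds check below also filters out.
        int px = (int)(depthPixel.X + 0.5f);
        int py = (int)(depthPixel.Y + 0.5f);
        if (px >= 0 && px < depthWidth && py >= 0 && py < depthHeight)
        {
            ushort depthAtPixelMm = depthData[py * depthWidth + px];
            Console.WriteLine($"point: {depthOfPointMeters} m, buffer: {depthAtPixelMm} mm");
        }
    }
}
```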
Regarding 2: There is no way to get the color camera intrinsics from the SDK; you have to estimate them yourself via camera calibration.
Regarding 3: You use the camera intrinsics when you have to construct the mapping manually (i.e. when you don't have a Kinect available). When you get the CoordinateMapper associated with a Kinect (via the KinectSensor's CoordinateMapper property), it already contains that Kinect's intrinsics; that's why there is a GetDepthCameraIntrinsics method returning that specific Kinect's intrinsics (they vary from device to device).
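For illustration, here is how those intrinsics plug into the standard pinhole model. This is a sketch, not SDK code: the sign and axis conventions are assumptions, and in practice CoordinateMapper.MapDepthPointToCameraSpace already performs this mapping (distortion included) using per-device tables.

```csharp
using Microsoft.Kinect;

static class IntrinsicsSketch
{
    // Back-project a depth pixel (u, v) with its depth in millimeters to a
    // 3D camera-space point (plain pinhole model, distortion ignored here).
    public static CameraSpacePoint DepthPixelToCamera(
        CameraIntrinsics k, float u, float v, ushort depthMm)
    {
        float z = depthMm / 1000f; // camera space is measured in meters
        return new CameraSpacePoint
        {
            X = (u - k.PrincipalPointX) / k.FocalLengthX * z,
            Y = (v - k.PrincipalPointY) / k.FocalLengthY * z,
            Z = z
        };
    }

    // Forward-project a camera-space point, applying the SDK's three radial
    // distortion coefficients to the normalized image coordinates.
    public static DepthSpacePoint CameraToDepthPixel(
        CameraIntrinsics k, CameraSpacePoint p)
    {
        float xn = p.X / p.Z, yn = p.Y / p.Z;
        float r2 = xn * xn + yn * yn;
        float radial = 1f
            + k.RadialDistortionSecondOrder * r2
            + k.RadialDistortionFourthOrder * r2 * r2
            + k.RadialDistortionSixthOrder * r2 * r2 * r2;
        return new DepthSpacePoint
        {
            X = k.FocalLengthX * xn * radial + k.PrincipalPointX,
            Y = k.FocalLengthY * yn * radial + k.PrincipalPointY
        };
    }
}
```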

Key Difference between Active IR image and depth image in Kinect V2

I'm a little confused about the difference between the active IR image and the depth image of the Kinect v2. Can anyone tell me what special features the active IR image has compared to the depth image?
In the depth image, the value of each pixel relates to the distance from the camera as measured by time of flight. In the active infrared image, the value of each pixel is determined by the amount of infrared light reflected back to the camera.
Sidenote:
I think there is only one sensor that does both. The Kinect uses the reflected IR to compute time of flight, but it also exposes the measured intensity as an IR image.
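Since one sensor produces both streams, matched depth and IR frames can be pulled from a single MultiSourceFrameReader and compared pixel by pixel. A minimal sketch (buffer size and names assumed):

```csharp
using Microsoft.Kinect;

class DepthVsIrExample
{
    static void Main()
    {
        KinectSensor sensor = KinectSensor.GetDefault();
        MultiSourceFrameReader reader = sensor.OpenMultiSourceFrameReader(
            FrameSourceTypes.Depth | FrameSourceTypes.Infrared);

        ushort[] depthData = new ushort[512 * 424]; // distance in mm
        ushort[] irData = new ushort[512 * 424];    // reflected IR intensity

        reader.MultiSourceFrameArrived += (s, e) =>
        {
            MultiSourceFrame frame = e.FrameReference.AcquireFrame();
            if (frame == null) return;

            using (DepthFrame d = frame.DepthFrameReference.AcquireFrame())
            using (InfraredFrame ir = frame.InfraredFrameReference.AcquireFrame())
            {
                if (d == null || ir == null) return;
                d.CopyFrameDataToArray(depthData); // same 512x424 pixel grid,
                ir.CopyFrameDataToArray(irData);   // two different meanings
            }
        };

        sensor.Open();
        System.Console.ReadKey(); // keep the process alive
        sensor.Close();
    }
}
```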

Characteristics of Kinect V2 infrared images

I want to use the Kinect v2 to record depth, IR, and RGB images. Regarding the characteristics of the depth image, we all know that it encodes the distance of a person from the Kinect sensor, and the depth values change as the person moves closer or farther. However, I want to know about the characteristics of the IR image. If a person is standing in front of the sensor and after some time moves forward, does the IR image show any change?
The IR image is simply the intensity of reflected IR radiation as emitted by the Kinect. The IR image will not show any change in distance, other than the illumination tapering off if you move too far away and the sensor saturating if you are too close or have a reflective surface in front of the Kinect.

All frames from Kinect at 30FPS

I am using the Microsoft Kinect SDK and would like to know whether it is possible to get the depth frame, the color frame, and the skeleton data at 30 fps. Using Kinect Explorer I can see that the color and depth frames run at nearly 30 fps, but as soon as I choose to view the skeleton, the rate drops to around 15-20 fps.
Yes, it is possible to capture color/depth at 30fps while capturing the skeleton.
See the image below, just in case you think me dodgy. :) This is the stock Kinect Explorer sample running straight from Visual Studio 2010; my development platform is an i5 Dell laptop.
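For reference, a minimal sketch of enabling all three streams together, assuming the original Kinect SDK v1.x that the Kinect Explorer / Visual Studio 2010 setup suggests (formats and names are illustrative):

```csharp
using Microsoft.Kinect;

class AllStreams
{
    static void Main()
    {
        KinectSensor sensor = KinectSensor.KinectSensors[0]; // first attached sensor
        sensor.ColorStream.Enable(ColorImageFormat.RgbResolution640x480Fps30);
        sensor.DepthStream.Enable(DepthImageFormat.Resolution640x480Fps30);
        sensor.SkeletonStream.Enable();

        // AllFramesReady fires once per matched frame set, so the three
        // streams stay synchronized instead of racing separate handlers.
        sensor.AllFramesReady += (s, e) =>
        {
            using (var color = e.OpenColorImageFrame())
            using (var depth = e.OpenDepthImageFrame())
            using (var skeleton = e.OpenSkeletonFrame())
            {
                if (color == null || depth == null || skeleton == null)
                    return; // frames can be dropped under load
                // ... copy pixel and skeleton data out here ...
            }
        };

        sensor.Start();
        System.Console.ReadKey(); // keep the process alive
        sensor.Stop();
    }
}
```

Handling everything in one AllFramesReady callback keeps the streams in step; note that if the handler itself is slow (e.g. heavy drawing), the observed frame rate will drop regardless of what the sensor delivers.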

How to texture map point cloud from Kinect with image taken by another camera?

The Kinect camera has a very low-resolution RGB image. I want to use the point cloud from the Kinect's depth sensor but texture-map it with an image taken by another camera.
Could anyone please guide me on how to do that?
See the Kinect Calibration Toolbox, v2.0 http://sourceforge.net/projects/kinectcalib/files/v2.0/
2012-02-09 - v2.0 - Major update. Added new disparity distortion model and simultaneous calibration of external RGB camera.
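Once the toolbox has estimated the external camera's intrinsics and its pose relative to the depth camera, texture mapping reduces to projecting each 3D point into the external image and sampling the color there. A sketch of that projection step under a plain pinhole model (all names here are illustrative, not toolbox API; R and t map depth-camera coordinates into the external camera's frame):

```csharp
struct Intrinsics { public double Fx, Fy, Cx, Cy; }

static class TextureMapping
{
    // p is a 3D point in depth-camera coordinates (meters); R is a 3x3
    // row-major rotation and t a translation taking it into the external
    // camera's frame. Returns true if the point lands inside the image.
    public static bool ProjectToExternal(
        double[] p, double[,] R, double[] t, Intrinsics K,
        int width, int height, out double u, out double v)
    {
        // Rigid transform into the external camera's frame.
        double x = R[0,0]*p[0] + R[0,1]*p[1] + R[0,2]*p[2] + t[0];
        double y = R[1,0]*p[0] + R[1,1]*p[1] + R[1,2]*p[2] + t[1];
        double z = R[2,0]*p[0] + R[2,1]*p[1] + R[2,2]*p[2] + t[2];

        u = v = 0;
        if (z <= 0) return false; // point is behind the external camera

        // Pinhole projection (lens distortion omitted for brevity).
        u = K.Fx * (x / z) + K.Cx;
        v = K.Fy * (y / z) + K.Cy;
        return u >= 0 && u < width && v >= 0 && v < height;
    }
}
```

Each point that projects inside the image takes the color sampled at (u, v); a z-buffer in the external view is the usual refinement to avoid texturing points that are occluded from that camera.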