I want to use the Kinect v2 to record depth, IR, and RGB images. Regarding the characteristics of the depth image, we all know that it encodes the distance of a person from the Kinect sensor, and that the depth values change as we move closer to or farther from the sensor. However, I want to know about the characteristics of the IR image: if a person is standing in front of the sensor and then moves forward after some time, does the IR image show any change?
The IR image is simply the intensity of the reflected IR radiation emitted by the Kinect. The IR image will not show any change with distance, other than the illumination tapering off if you move too far away, and the sensor saturating if you are too close or have a reflective surface in front of the Kinect.
I am working on human pose estimation.
I am able to generate the 2D coordinates of a person's joints in an image.
But I need 3D coordinates for my project.
Is there any library or code available to generate the 3D coordinates of the joints?
Please help.
For 3D coordinates in pose estimation there is a limitation: you cannot get a 3D pose from only one camera (monocular). You have two ways to estimate it:
use an RGB-D (red, green, blue and depth) camera like the Kinect,
or use stereo vision with at least two cameras.
For RGB-D, opencv_contrib has a module for that.
If you want to use stereo vision, the steps are:
1. Get the camera calibration parameters; for calibration you can follow this.
2. Undistort your 2D points using the calibration parameters.
3. Compute the projection matrices of both cameras.
4. Finally, use OpenCV's triangulation to get the 3D coordinates (see the sketch below).
For more info on each step, search for stereo vision, camera calibration, triangulation, etc.
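A minimal sketch of steps 2–4 in Python with OpenCV, assuming you already have the calibration results for both cameras (camera matrices K1/K2, distortion coefficients d1/d2, and the rotation R and translation t of camera 2 relative to camera 1, e.g. from cv2.stereoCalibrate); the function and variable names are illustrative:

```python
import numpy as np
import cv2

def triangulate_joint(pt1, pt2, K1, d1, K2, d2, R, t):
    """pt1/pt2: the same joint's (x, y) pixel coordinates in camera 1 and 2."""
    # Step 2: undistort and normalize the pixel coordinates
    p1 = cv2.undistortPoints(np.array([[pt1]], dtype=np.float64), K1, d1)
    p2 = cv2.undistortPoints(np.array([[pt2]], dtype=np.float64), K2, d2)

    # Step 3: projection matrices in normalized camera coordinates
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])   # camera 1 at the origin
    P2 = np.hstack([R, t.reshape(3, 1)])            # camera 2 relative to camera 1

    # Step 4: triangulate and convert from homogeneous coordinates
    X_h = cv2.triangulatePoints(P1, P2, p1.reshape(2, 1), p2.reshape(2, 1))
    return (X_h[:3] / X_h[3]).ravel()               # 3D point in camera-1 coordinates
```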
I have a black region on the right-hand side of the depth frame.
When I point the Kinect at a white wall, I get the following (see attached picture from Kinect Studio): the black region never goes away.
Depth Frame:
IR Frame:
SDK: 2.0
It doesn't seem to matter where I point the Kinect or what the lighting is like; the result is always the same.
I suspect the depth sensor / IR emitter is somehow misaligned.
Does anybody know if I can align or calibrate the Sensors somehow? Or is it a hardware issue?
Using Kinect for XBox One with the Kinect For Windows USB Adapter.
I'm using Kinect v2 and Kinect SDK v2.
I have a couple of questions about coordinate mapping:
1. How do I transform a camera space point (a point in the 3D coordinate system) to depth space together with its depth value? The current MapCameraPointToDepthSpace method only returns the depth space coordinate, and without the depth value that coordinate is of limited use. Does anyone know how to get the depth value?
2. How do I get the color camera intrinsics? There is only a GetDepthCameraIntrinsics method for the depth camera intrinsics. What about the color camera?
3. How do I use the depth camera intrinsics? The Kinect v2 seems to account for radial distortion, but how do I use these intrinsics to transform between a depth pixel and a 3D point? Is there any example code that does this?
Regarding 1: The depth value of your remapped coordinate is the same as the original camera space point's Z value. Read the descriptions of the depth buffer and of camera (world) space: in both cases the value is simply the distance from the point to the Kinect's camera plane (camera space Z is expressed in meters, while the depth frame stores millimeters). If instead you want the depth of the object seen in the depth frame directly behind your remapped coordinate, you have to read the depth image buffer at that position.
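For example, a minimal sketch in Python (the SDK itself is C#/C++, but the indexing is the same), assuming the depth frame has already been copied into a flat array of 16-bit values, e.g. with DepthFrame.CopyFrameDataToArray, and that (px, py) is the DepthSpacePoint returned by MapCameraPointToDepthSpace:

```python
DEPTH_WIDTH, DEPTH_HEIGHT = 512, 424            # Kinect v2 depth resolution

def depth_behind_point(depth_data, px, py):
    # depth_data: flat array of uint16 depth values in millimeters
    x, y = int(round(px)), int(round(py))
    if not (0 <= x < DEPTH_WIDTH and 0 <= y < DEPTH_HEIGHT):
        return None                             # the point maps outside the depth frame
    return int(depth_data[y * DEPTH_WIDTH + x])  # depth in millimeters
```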
Regarding 3: You use the camera intrinsics when you have to construct the mapping manually (i.e. when you don't have a Kinect available). When you get the CoordinateMapper associated with a Kinect (via the KinectSensor's CoordinateMapper property), it already contains that Kinect's intrinsics; that is why there is a GetDepthCameraIntrinsics method that returns that specific Kinect's intrinsics (they can vary from device to device).
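As for the math behind question 3: ignoring distortion, going from a depth pixel to a camera-space point is just pinhole back-projection. A rough sketch using the fields exposed by CameraIntrinsics (FocalLengthX/Y, PrincipalPointX/Y); note that the sign conventions of Kinect camera space may differ from this, and the radial distortion terms (RadialDistortionSecondOrder, etc.) are ignored here:

```python
def depth_pixel_to_camera_space(u, v, depth_mm, fx, fy, cx, cy):
    # u, v: depth pixel coordinates; depth_mm: value read from the depth frame.
    # fx, fy = FocalLengthX/Y; cx, cy = PrincipalPointX/Y (all in pixels).
    z = depth_mm / 1000.0          # camera space is in meters
    x = (u - cx) / fx * z
    y = (v - cy) / fy * z
    return (x, y, z)
```

Going the other way (3D point to depth pixel) is the forward projection: u = fx * x / z + cx, v = fy * y / z + cy.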
Regarding 2: There is no way to get the color camera intrinsics from the SDK. You have to estimate them yourself via camera calibration, e.g. with a standard checkerboard procedure (sketched below).
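If you go that route, a checkerboard calibration of the color stream with OpenCV is the usual approach; a sketch, where the board size and image file names are placeholders:

```python
import glob
import numpy as np
import cv2

BOARD = (9, 6)                                   # inner corners per row / column
objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2)

obj_points, img_points, image_size = [], [], None
for path in glob.glob("color_*.png"):            # checkerboard shots from the color stream
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    image_size = gray.shape[::-1]                # (width, height)
    found, corners = cv2.findChessboardCorners(gray, BOARD)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Returns the color camera matrix (intrinsics) and distortion coefficients
ret, K, dist, _, _ = cv2.calibrateCamera(obj_points, img_points, image_size, None, None)
print("color camera matrix:\n", K)
print("distortion coefficients:", dist.ravel())
```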
I'm confused about the difference between the active IR image and the depth image of the Kinect v2. Can anyone tell me what special features the active IR image has compared to the depth image?
In the depth image, the value of each pixel relates to its distance from the camera, as measured by time-of-flight. In the active infrared image, the value of a pixel is determined by the amount of infrared light reflected back to the camera.
Sidenote:
I believe a single sensor produces both: the Kinect uses the reflected IR to calculate time of flight, and it also exposes that reflected IR directly as an IR image.
The Kinect camera has a very low-resolution RGB image. I want to use the point cloud from the Kinect's depth camera, but texture-map it with an image taken from another camera.
Could anyone please guide me how to do that?
See the Kinect Calibration Toolbox, v2.0 http://sourceforge.net/projects/kinectcalib/files/v2.0/
2012-02-09 - v2.0 - Major update. Added new disparity distortion model and simultaneous calibration of external RGB camera.
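Once you have the external camera's calibration from the toolbox (its intrinsic matrix K, distortion coefficients dist, and its pose R, t relative to the depth camera), texture mapping is just projecting each point of the cloud into that camera and sampling the color there. A rough sketch in Python with OpenCV (the variable names are placeholders):

```python
import numpy as np
import cv2

def texture_point_cloud(points, image, K, dist, R, t):
    # points: (N, 3) array of 3D points in the depth camera's coordinate frame
    # image:  the external camera's color image
    rvec, _ = cv2.Rodrigues(R)                           # rotation matrix -> vector
    pixels, _ = cv2.projectPoints(points.astype(np.float64), rvec,
                                  np.asarray(t, dtype=np.float64), K, dist)
    pixels = pixels.reshape(-1, 2).round().astype(int)

    h, w = image.shape[:2]
    colors = np.zeros((len(points), 3), np.uint8)
    inside = (pixels[:, 0] >= 0) & (pixels[:, 0] < w) & \
             (pixels[:, 1] >= 0) & (pixels[:, 1] < h)
    colors[inside] = image[pixels[inside, 1], pixels[inside, 0]]
    return colors                                        # per-point color (BGR)
```

This ignores occlusion (points hidden from the external camera still get a color); handling that properly needs a visibility test such as a z-buffer.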