I'm a bit confused about the difference between the active IR image and the depth image of the Kinect v2. Can anyone tell me what special features the active IR image has compared to the depth image?
In the depth image, the value of a pixel relates to the distance from the camera as measured by time of flight. In the active infrared image, the value of a pixel is determined by the amount of infrared light reflected back to the camera.
Sidenote:
I think there is only one sensor that does both of these. The Kinect uses the reflected IR to calculate time of flight, but then also makes it available as an IR image.
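A rough sketch of what the two kinds of pixels mean in practice, assuming you have already copied the raw 16-bit depth and infrared buffers out of the SDK's frame readers (the buffer names and the helper function are just placeholders):

```cpp
// Minimal sketch: interpreting depth vs. active IR pixels from Kinect v2 buffers.
// Assumes depthBuffer and irBuffer were copied from the current IDepthFrame /
// IInfraredFrame (both 512x424, 16 bits per pixel).
#include <cstdint>
#include <cstdio>

void inspectPixel(const uint16_t* depthBuffer, const uint16_t* irBuffer,
                  int x, int y, int width = 512)
{
    const int idx = y * width + x;

    // Depth pixel: distance from the camera plane in millimeters (0 = invalid).
    uint16_t depthMm = depthBuffer[idx];

    // Active IR pixel: intensity of the reflected IR illumination; it carries no
    // distance information by itself.
    uint16_t irIntensity = irBuffer[idx];

    std::printf("pixel (%d,%d): depth = %u mm, IR intensity = %u\n",
                x, y, depthMm, irIntensity);
}
```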
I'm trying to implement a depth camera on Unreal Engine 5.1's default camera using a post-process volume (not a cinematic camera). The official tutorial describes a focal region, shown in black in the image below. It says it is possible to increase the size of this black focal region, but from what I can see of the public variables, that is only available for mobile depth of field or cinematic cameras. Does anybody know if it's possible?
https://docs.unrealengine.com/4.27/Images/RenderingAndGraphics/PostProcessEffects/DepthOfField/DOF_LayerImplementation1.webp
These are the only controls that I can see are available for the default camera: https://docs.unrealengine.com/5.0/Images/designing-visuals-rendering-and-graphics/post-process-effects/depth-of-field/DoFProperties.webp
I have a black region on the right-hand side of the depth frame.
When I point the Kinect at a white wall, I get the following (see attached picture from Kinect Studio): the black region never goes away.
Depth Frame:
IR Frame:
SDK2.
It doesn't seem to matter where I point the Kinect or how the lighting situation is; the result is always the same.
I suspect the depth sensor/IR emitter is somehow misaligned.
Does anybody know if I can align or calibrate the sensors somehow? Or is it a hardware issue?
Using Kinect for XBox One with the Kinect For Windows USB Adapter.
I'm using Kinect v2 and Kinect SDK v2.
I have a couple of questions about coordinate mapping:
1. How do I transfer a camera space point (a point in the 3D coordinate system) to depth space together with its depth value?
The current MapCameraPointToDepthSpace method only returns the depth space coordinate, but without the depth value that coordinate is of limited use.
Does anyone know how to get the depth value?
2. How do I get the color camera intrinsics?
There is only a GetDepthCameraIntrinsics method, which returns the depth camera intrinsics.
But what about the color camera?
3. How do I use the depth camera intrinsics?
It seems that the Kinect v2 accounts for radial distortion.
But how do I use these intrinsics to transform between a depth pixel and a 3D point?
Is there any example code that does this?
Regarding 1: The depth value of your remapped camera space point is simply the original point's Z value. Read the description of the depth buffer and of the camera (world) coordinate space: in both, this value is just the distance from the point to the Kinect's plane (camera space uses meters, the depth buffer stores millimeters). If instead you want the depth value of the object actually seen in the depth frame at that position, you have to read the depth image buffer at that position.
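A rough C++ sketch of both readings, assuming an ICoordinateMapper obtained from the sensor and a copy of the current 512x424 depth buffer (the helper function name is made up for illustration):

```cpp
// Sketch: map a camera space point to a depth pixel and recover two depth values:
// the point's own depth (its Z) and the depth actually measured at that pixel.
#include <Windows.h>
#include <Kinect.h>
#include <cmath>

bool cameraPointToDepthPixel(ICoordinateMapper* mapper,
                             const UINT16* depthBuffer,            // 512x424 frame copy
                             const CameraSpacePoint& worldPoint,   // meters
                             DepthSpacePoint& depthPixel,
                             float& pointDepthMm,
                             UINT16& depthAtPixelMm)
{
    // 1) Project the camera space point into depth image coordinates.
    if (FAILED(mapper->MapCameraPointToDepthSpace(worldPoint, &depthPixel)))
        return false;

    // 2) The depth of the point itself is just its Z value (meters -> millimeters).
    pointDepthMm = worldPoint.Z * 1000.0f;

    // 3) The depth of whatever the sensor actually sees at that pixel comes from the buffer.
    int x = static_cast<int>(std::floor(depthPixel.X + 0.5f));
    int y = static_cast<int>(std::floor(depthPixel.Y + 0.5f));
    if (x < 0 || x >= 512 || y < 0 || y >= 424)
        return false;
    depthAtPixelMm = depthBuffer[y * 512 + x];
    return true;
}
```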
Regarding 3: You use the camera intrinsics when you have to manually construct a CoordinateMapper object (i.e. when you don't have a Kinect available). When you get the CoordinateMapper associated with a Kinect (via the Kinect object's CoordinateMapper property), it already contains that Kinect's intrinsics; that's why there is a GetDepthCameraIntrinsics method which returns that specific Kinect's intrinsics (they can vary from device to device).
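For question 3, a minimal sketch of what back-projecting a depth pixel with those intrinsics looks like, assuming a plain pinhole model and ignoring the radial distortion terms for brevity. The sign conventions below are an assumption; verify them against what MapDepthPointToCameraSpace returns on your device:

```cpp
// Sketch: depth pixel (u, v) with measured depth -> 3D point, using the
// CameraIntrinsics returned by ICoordinateMapper::GetDepthCameraIntrinsics.
#include <Windows.h>
#include <Kinect.h>

CameraSpacePoint depthPixelToCameraSpace(const CameraIntrinsics& in,
                                         float u, float v, UINT16 depthMm)
{
    float z = depthMm / 1000.0f;                             // depth in meters
    CameraSpacePoint p;
    p.X =  (u - in.PrincipalPointX) / in.FocalLengthX * z;   // assumed axis orientation
    p.Y = -(v - in.PrincipalPointY) / in.FocalLengthY * z;   // assumed flip: image v grows downward
    p.Z =  z;
    return p;
}
```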
Regarding 2: There is no way to get the color camera intrinsics from the SDK. You have to estimate them yourself by camera calibration.
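If you go the calibration route, a sketch using OpenCV's chessboard calibration could look like the following; the board size, square size, and corner sets are placeholders you would collect from several color frames (e.g. with cv::findChessboardCorners):

```cpp
// Sketch: estimating the color camera intrinsics with OpenCV, since the SDK
// does not expose them.
#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>
#include <vector>

cv::Mat calibrateColorCamera(const std::vector<std::vector<cv::Point2f>>& imageCorners,
                             cv::Size boardSize, float squareSizeMeters,
                             cv::Size imageSize, cv::Mat& distCoeffs)
{
    // Known 3D chessboard corner positions (identical for every view).
    std::vector<cv::Point3f> board;
    for (int r = 0; r < boardSize.height; ++r)
        for (int c = 0; c < boardSize.width; ++c)
            board.emplace_back(c * squareSizeMeters, r * squareSizeMeters, 0.0f);
    std::vector<std::vector<cv::Point3f>> objectPoints(imageCorners.size(), board);

    cv::Mat cameraMatrix;                 // 3x3 intrinsics (fx, fy, cx, cy)
    std::vector<cv::Mat> rvecs, tvecs;    // per-view extrinsics (unused here)
    cv::calibrateCamera(objectPoints, imageCorners, imageSize,
                        cameraMatrix, distCoeffs, rvecs, tvecs);
    return cameraMatrix;
}
```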
I want to use the Kinect v2 and record depth, IR, and RGB images. Regarding the characteristics of the depth image, we know that it shows the distance of a person from the Kinect sensor, and that the depth values change as the person moves closer to or farther from the sensor. However, I want to know about the characteristics of the IR image. If a person is standing in front of the sensor and then moves forward after some time, does the IR image show any change?
The IR image is simply the intensity of reflected IR radiation as emitted by the Kinect. The IR image will not show any change with distance, other than the illumination tapering off if you move too far away and the sensor saturating if you are too close or have a reflective surface in front of the Kinect.
The Kinect camera has a very low-resolution RGB image. I want to use the point cloud from the Kinect depth sensor, but texture-map it with an image taken from another camera.
Could anyone please guide me on how to do that?
See the Kinect Calibration Toolbox, v2.0 http://sourceforge.net/projects/kinectcalib/files/v2.0/
2012-02-09 - v2.0 - Major update. Added new disparity distortion model and simultaneous calibration of external RGB camera.
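Once that toolbox has given you the external RGB camera's intrinsics and its pose relative to the depth camera, texturing reduces to projecting each 3D point into the RGB image and sampling a color there. A rough OpenCV sketch, where all names are placeholders for the calibration output:

```cpp
// Sketch: project the Kinect point cloud into the external RGB camera so each
// 3D point can be assigned a color from the high-resolution image.
#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>
#include <vector>

std::vector<cv::Point2f> projectCloudToRgb(
    const std::vector<cv::Point3f>& cloud, // points in the depth camera frame, meters
    const cv::Mat& R,                      // 3x3 rotation, depth camera -> RGB camera
    const cv::Mat& t,                      // 3x1 translation, depth camera -> RGB camera
    const cv::Mat& K,                      // 3x3 RGB camera intrinsics
    const cv::Mat& distCoeffs)             // RGB camera distortion coefficients
{
    cv::Mat rvec;
    cv::Rodrigues(R, rvec);                // projectPoints expects an axis-angle rotation
    std::vector<cv::Point2f> pixels;
    cv::projectPoints(cloud, rvec, t, K, distCoeffs, pixels);
    return pixels;                         // sample the RGB image at these pixel positions
}
```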