Depth Calibration Target - Kinect

I am trying to gather as many resources as possible regarding depth calibration of the Kinect camera.
One question that is not trivial for me is the following:
What kind of calibration target should be used with the Microsoft Kinect camera?
I'm thinking of using some kind of checkerboard-based pattern, but I am not sure whether the depth sensor would be able to detect it. Am I right?
Maybe high-precision 3D objects would be more beneficial for calibrating the depth camera?
Thank you very much!
P.S. Please consider the fact that I am going to use only the depth sensor on the Kinect and not the RGB sensor, as I mentioned.

Related

Can I use sensor fusion for multiple GPS receivers to improve my position estimation?

I am wondering if it makes sense to fuse multiple GPS signals to improve my estimated result. This works fine, for example, for acceleration sensors, but those sensors have white Gaussian noise.
GPS sensors mounted on the same board probably suffer from the same errors, such as drift or multipath effects, which cannot be corrected by only fusing the readings of these sensors. I imagine it as a constant offset in the same direction, which won't be corrected and just stays nearly the same.
Furthermore, I have different sensors which I can mount on my drone, even an RTK sensor. In my opinion, it makes no sense to fuse a DGPS with readings from an RTK GPS.
Please correct me if I am wrong.
Thank you in advance and I hope this forum is the right spot to ask that question.
Yes, you can. Use an EKF-based approach with onboard multi-GPS and multi-IMU.
DJI is doing it, but it can only guard against the failure of a single sensor, not the systematic drift pattern. To avoid that, you need additional sources, such as visual odometry or lidar odometry, to fuse into the EKF. The GPS satellite count is a good measure of how bad the position is. It ranges from 0 to 15. So when every receiver reports 15, trust GPS more by using a lower variance; when every receiver is below 6, add a very high variance to the GPS source.
Yes, RTK might be better when you have a direct line of sight, but once out of sight, the other GPS might be better. So it totally depends on your use case.
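To make the satellite-count weighting above concrete, here is a minimal Python sketch of an EKF-style position update in which each GPS receiver's measurement variance is derived from its reported satellite count. The mapping in `sat_count_to_variance`, the thresholds, and the numbers are illustrative assumptions, not any vendor's actual tuning.

```python
import numpy as np

def sat_count_to_variance(sat_count, base_var=4.0):
    """Map a GPS satellite count (0-15) to a measurement variance.

    Rule of thumb from the answer above: high count -> low variance,
    count below 6 -> very high variance (assumed numbers).
    """
    if sat_count < 6:
        return 1e6  # effectively ignore this receiver
    return base_var * (15.0 / sat_count) ** 2

def gps_update(x, P, z, sat_count):
    """One Kalman measurement update for a 2D position state using one GPS fix."""
    R = np.eye(2) * sat_count_to_variance(sat_count)  # measurement noise
    H = np.eye(2)                                     # GPS measures position directly
    y = z - H @ x                                     # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                    # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(2) - K @ H) @ P
    return x_new, P_new

# Fuse two receivers sequentially: the one with more satellites pulls the estimate harder.
x, P = np.zeros(2), np.eye(2) * 100.0
x, P = gps_update(x, P, np.array([10.2, 5.1]), sat_count=15)
x, P = gps_update(x, P, np.array([10.9, 4.3]), sat_count=5)
print(x, np.diag(P))
```

Note that this only reweights receivers against each other; as the answer says, it does nothing about a shared systematic drift, which is why an independent source such as visual or lidar odometry is needed in the filter.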

Kinect 360 point cloud resolution increase - is it possible with more infrared projectors in the room?

If I scatter more infrared points around the room from separate infrared laser speckle projectors, and in doing so increase the density of points spread over objects in the room, will this result in a higher-resolution 3D scan captured by the infrared camera on the Kinect 360? Or are there limitations on how much point cloud data the IR camera and/or Kinect software/hardware can process?
Found this paper which answered my question:
http://www.cs.cornell.edu/~hema/rgbd-workshop-2014/papers/Alhwarin.pdf

Kinect IR Emitter Continuous or Pulsed?

I'm a student intern and have a work project using Kinect cameras. Unfortunately, I can't go into project details due to confidentiality, but I need to know whether the IR dot array emitted from the IR blasters within the Kinect is a continuous stream or pulsed. I'm asking only about the emitted IR light, not the reception by the IR camera. It would be shining on some other IR light sensors in the environment that detect when something passes through their IR field of view, but I have been told that it would not interfere as long as the stream is continuous.
I would appreciate any help/ information you guys could give.
The Kinect 360 camera has a static pattern of points that are unevenly distributed. The pattern is continuous and not pulsed, as far as I know.

Why does Kinect 2 Fusion produce worse results than Kinect 1?

At my university we have several Kinect 1's and Kinect 2's. I am testing the quality of the Kinect Fusion results on both devices, and unexpectedly the Kinect 2 produces worse results.
My testing environment:
Static camera scanning a static scene.
In this case, if I check both results from Kinect 1 and 2, it looks like Kinect 2 has a much smoother and nicer resulting point cloud, but if I check the scans from a different angle, you can see that the Kinect 2 result is much worse even though the point cloud is smoother. As you can see in the pictures, if I check the resulting point cloud from the same viewpoint as the camera, it looks nice, but as soon as I check it from a different angle, the Kinect 2 result is horrible; you can't even tell that there is a mug in the red circle.
Moving camera scanning a static scene
In this case, Kinect 2 produces even worse results compared to Kinect 1 than in the case above. Actually, I can't reconstruct with Kinect 2 at all if I am moving it. On the other hand, Kinect 1 does a pretty good job with a moving camera.
Does anybody have any idea why the Kinect 2 is failing these tests against the Kinect 1? As I mentioned above, we have several Kinect cameras at my university and I tested more than one of each, so this should not be a hardware problem.
I've experienced similar results when I was using the Kinect for 3D reconstruction: Kinect 2 produced worse results compared to Kinect 1. In fact, I tried the InfiniTAM framework for 3D reconstruction, and it too yielded similar results. What was different in my case compared to yours was that I was moving the camera around, and the camera tracking was awful.
When I asked the authors of InfiniTAM about this, they provided the following likely explanation:
... the Kinect v2 has a time of flight camera rather than a structured light sensor. Due to imperfections in the modulation of the active illumination, it is known that most of these time of flight sensors tend to have biased depth values, e.g. at a distance of 2m, everything is about 5cm closer than measured, at a distance of 3m everything is about 5cm further away than measured...
Apparently, this is not an issue with structured light cameras (Kinect v1 and the like). You can follow the original discussion here.
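As an illustration of what compensating such a distance-dependent bias could look like, here is a rough Python sketch that subtracts an interpolated per-distance offset from a depth map. The calibration table below just mirrors the quoted 2 m / 3 m example; real values would have to be measured per device (for instance against a flat wall at known distances), and the sign convention is an assumption.

```python
import numpy as np

# Illustrative bias table: (measured - true) depth in metres at a few calibration
# distances, loosely following the quoted behaviour (~5 cm over-read near 2 m,
# ~5 cm under-read near 3 m). These numbers are assumptions, not device data.
CAL_DISTANCES_M = np.array([1.0, 2.0, 3.0, 4.0])
CAL_BIAS_M      = np.array([0.00, +0.05, -0.05, 0.00])

def correct_depth(depth_map_m):
    """Subtract an interpolated, distance-dependent bias from a ToF depth map."""
    bias = np.interp(depth_map_m.ravel(), CAL_DISTANCES_M, CAL_BIAS_M)
    corrected = depth_map_m - bias.reshape(depth_map_m.shape)
    corrected[depth_map_m <= 0] = 0  # leave invalid (zero) pixels untouched
    return corrected

# Example: a small fake depth map in metres.
depth = np.array([[2.0, 2.5],
                  [3.0, 0.0]])
print(correct_depth(depth))
```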

How to detect heart pulse rate without using any instrument in the iOS SDK?

I am currently working on an application where I need to find the user's heart rate. I found plenty of applications that do this, but I was not able to find a single private or public API supporting it.
Is there any framework available that could help with this? I was also wondering whether the UIAccelerometer class could be helpful, and what level of accuracy it could provide.
How can this feature be implemented: by putting a finger on the iPhone camera, by putting the microphone on the jaw or wrist, or in some other way?
Is there any way to detect blood circulation changes and find the heartbeat using the camera or UIAccelerometer? Any API or some code? Thank you.
There is no API for detecting heart rate; these apps do so in a variety of ways.
Some use the accelerometer to measure when the device shakes with each pulse. Others use the camera lens, with the flash on, and detect blood moving through the finger by measuring the changes in light level.
Various DSP (digital signal processing) techniques can be used to discern very low-level periodic signals from a long enough set of samples taken at an appropriate sample rate (accelerometer readings or reflected light color).
Some of the advanced math functions in the Accelerate framework API can be used as building blocks for these various DSP techniques. An explanation would require several chapters of a Digital Signal Processing textbook, so that might be a good place to start.
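As a language-agnostic sketch of the DSP idea (Python here rather than the Accelerate framework), the following estimates a pulse rate from a series of per-frame brightness samples by picking the dominant frequency in a plausible heart-rate band. The frame rate, band limits, and synthetic test signal are all assumptions for illustration.

```python
import numpy as np

def estimate_bpm(brightness, sample_rate_hz):
    """Estimate heart rate (beats per minute) from a 1-D brightness signal.

    brightness: average red-channel (or overall) brightness per camera frame.
    sample_rate_hz: camera frame rate, e.g. 30.0.
    """
    x = np.asarray(brightness, dtype=float)
    x = x - x.mean()                                  # remove the DC component
    spectrum = np.abs(np.fft.rfft(x))                 # magnitude spectrum
    freqs = np.fft.rfftfreq(len(x), d=1.0 / sample_rate_hz)

    # Only consider plausible pulse frequencies (~40-200 BPM).
    band = (freqs >= 40 / 60.0) & (freqs <= 200 / 60.0)
    if not band.any():
        return None
    peak_freq = freqs[band][np.argmax(spectrum[band])]
    return peak_freq * 60.0                           # Hz -> beats per minute

# Synthetic test: a 72 BPM pulse sampled at 30 fps for 10 seconds, plus noise.
fs, bpm = 30.0, 72.0
t = np.arange(0, 10, 1 / fs)
signal = 0.02 * np.sin(2 * np.pi * (bpm / 60.0) * t) + 0.005 * np.random.randn(t.size)
print(estimate_bpm(signal, fs))   # should print a value close to 72
```

On a device you would typically band-pass filter and window the signal first; this sketch only shows the core idea of pulling a weak periodic component out of a long enough sample run.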