Why does Kinect 2 Fusion produce worse results than Kinect 1? - kinect

At my university we have several Kinect 1s and Kinect 2s. I am testing the quality of the Kinect Fusion results on both devices, and unexpectedly Kinect 2 produces worse results.
My testing environment:
Static camera scanning a static scene.
In this case, if I compare the results from Kinect 1 and 2, it looks like Kinect 2 produces a much smoother and nicer point cloud, but if I check the scans from a different angle, you can see that the Kinect 2 result is far worse even though the point cloud is smoother. As you can see in the pictures, if I view the resulting point cloud from the same viewpoint as the camera, it looks nice, but as soon as I check it from a different angle, the Kinect 2 result is horrible; you can't even tell that there is a mug in the red circle.
Moving camera scanning a static scene
In this case Kinect 2 produces even worse results, compared to Kinect 1, than in the case above. Actually, I can't reconstruct at all with Kinect 2 if I am moving it. On the other hand, Kinect 1 does a pretty good job with a moving camera.
Does anybody have any idea why Kinect 2 fails these tests against Kinect 1? As I mentioned above, we have several Kinect cameras at my university and I tested more than one of each, so this should not be a hardware problem.

I've experienced similar results when I was using the Kinect for 3D reconstruction. Kinect 2 produced worse results compared to Kinect 1. In fact, I tried the InfiniTAM framework for doing 3D reconstruction, and it too yielded similar results. What was different in my case compared to yours was that I was moving the camera around, and the camera tracking was awful.
When I asked the authors of InfiniTAM about this, they provided the following likely explanation:
... the Kinect v2 has a time of flight camera rather than a structured
light sensor. Due to imperfections in the modulation of the active
illumination, it is known that most of these time of flight sensors
tend to have biased depth values, e.g. at a distance of 2m, everything
is about 5cm closer than measured, at a distance of 3m everything is
about 5cm further away than measured...
Apparently, this is not an issue with structured light cameras (Kinect v1 and the like). You can follow the original discussion here.
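If that bias is the culprit, one mitigation is to calibrate a correction curve for your own sensor (e.g. against a flat wall at known distances) and compensate each depth pixel before the frame reaches Kinect Fusion. A minimal sketch; the linear curve below only mirrors the example numbers from the quote and is not a real calibration:

using System;

static class DepthBiasCorrection
{
    // Sketch only: compensate a hypothetical distance-dependent bias in a Kinect v2
    // depth frame (values in millimetres) before passing it to Kinect Fusion.
    // The linear curve mirrors the example in the quote (-5 cm at 2 m, +5 cm at 3 m)
    // and must be replaced by your own calibration data.
    public static void Apply(ushort[] depthMillimetres)
    {
        for (int i = 0; i < depthMillimetres.Length; i++)
        {
            double z = depthMillimetres[i] / 1000.0;   // metres
            if (z <= 0.0) continue;                    // skip invalid pixels

            double bias = -0.05 + 0.10 * (z - 2.0);    // placeholder bias model
            double corrected = Math.Max(0.0, z - bias);

            depthMillimetres[i] = (ushort)Math.Round(corrected * 1000.0);
        }
    }
}

A real correction would come from your own measurements, and would likely need a per-distance lookup table rather than a single straight line.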

Related

How to get almost one meter accuracy between mobile devices

I am in search of a method to get one-meter accuracy between mobile devices, with no luck so far. I was looking at GPS and location-based geofencing as an option, but I don't think I am going to get one-meter accuracy that way. The other option is Bluetooth, but I am not sure about that either.
Can I build a React Native app that can measure the distance between two mobile devices with the app installed, with an accuracy as fine as one meter, using GPS or any other device-specific sensors?
I am not sure about your requirement, but to answer your question: yes, you should be able to get a one-meter displacement measurement between two geo-locations.
Maybe you can check how to achieve 1-meter accuracy in Android
There is another way, where you don't need two devices but can find the distance to any object. That would require a phone with two cameras. If that fits your requirement, you can explore that area as well.
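For the displacement calculation itself, the usual approach is the haversine great-circle distance between two fixes. A sketch (note that this gives sub-metre resolution, but the overall accuracy is still limited by the quality of the two GPS fixes you feed it):

using System;

static class GeoDistance
{
    // Haversine great-circle distance between two GPS fixes, in metres.
    // This gives sub-metre *resolution*; the *accuracy* is still bounded by
    // how good the two fixes are.
    public static double Metres(double lat1, double lon1, double lat2, double lon2)
    {
        const double R = 6371000.0;                    // mean Earth radius, metres
        double dLat = ToRad(lat2 - lat1);
        double dLon = ToRad(lon2 - lon1);

        double a = Math.Sin(dLat / 2) * Math.Sin(dLat / 2)
                 + Math.Cos(ToRad(lat1)) * Math.Cos(ToRad(lat2))
                 * Math.Sin(dLon / 2) * Math.Sin(dLon / 2);

        return 2 * R * Math.Atan2(Math.Sqrt(a), Math.Sqrt(1 - a));
    }

    static double ToRad(double deg) => deg * Math.PI / 180.0;
}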

Can I use sensor-fusion for multiple GPS receivers and better my position estimation?

I am wondering if it makes sense to fuse multiple GPS signals to improve my estimated result. This works fine, for example, for acceleration sensors, but those sensors have white Gaussian noise.
GPS sensors mounted on the same board probably suffer from the same errors, such as drift or multi-path effects, which cannot be corrected by only fusing the readings of those sensors. I imagine it as a constant offset in the same direction, which won't be corrected and just stays nearly the same.
Furthermore, I have different sensors which I can mount on my drone, even an RTK sensor. In my opinion, it makes no sense to fuse a D-GPS with readings from an RTK GPS.
Please correct me if I am wrong.
Thank you in advance and I hope this forum is the right spot to ask that question.
Yes, you can. Use an EKF-based approach with onboard multi-GPS and multi-IMU.
DJI is doing it, but it can only guard against the failure of one sensor, not the systematic drift pattern. To avoid that, you need additional sources, such as visual odometry or lidar odometry, to fuse in the EKF. The GPS satellite count is a good measure of how bad the position is. It ranges from 0 to 15. So when every receiver reports 15, trust GPS more (lower variance); when every receiver is below 6, assign a very high variance to the GPS source.
Yes, RTK might be better when you have a direct line of sight. But once out of sight, the other GPS might be better. So it totally depends on your use case.
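To make the "trust GPS more / less" idea concrete, here is a sketch that maps a receiver's satellite count to the measurement variance used in the EKF update step. The sigma values are illustrative assumptions, not taken from DJI or any particular flight stack; only the 6/15 thresholds come from the heuristic above:

using System;

static class GpsVarianceModel
{
    // Sketch: map a receiver's satellite count to the position-measurement variance
    // used in the EKF update. Thresholds (6, 15) follow the heuristic above; the
    // sigma values (5 m down to 0.5 m, 50 m when starved) are illustrative assumptions.
    public static double PositionVariance(int satelliteCount)
    {
        if (satelliteCount < 6)
            return 50.0 * 50.0;        // effectively ignore this receiver

        double t = Math.Min(1.0, (satelliteCount - 6) / 9.0);   // 6 sats -> 0, 15 sats -> 1
        double sigma = 5.0 - 4.5 * t;                           // 5 m down to 0.5 m
        return sigma * sigma;
    }
}

The returned value would sit on the diagonal of the measurement-noise matrix R for that receiver's position update.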

Defining a tracking area for the Kinect

Is it possible to specify a (rectangular) area for skeleton tracking with the Kinect (using any of the available SDKs)? I want to make sure that only users inside that designated area are tracked and that the sensor is not distracted by people outside it. Think of a game zone, in which a player interacts with the Kinect and where bystanders outside of the zone should be ignored lest they confuse the sensor.
The reason I want this is that many times the Kinect "locks" onto someone, or even something, whether it should or not, and then it's difficult for the sensor to track other individuals who come into tracking range. I want to avoid that by defining this zone.
It's not possible to specify a target area for the skeleton tracking with Microsoft's official SDKs, but there are some potential workarounds.
(Note that I'm not familiar with other SDKs for the Kinect, and note that I'm not sure if you are using the Kinect v1 or v2.)
If you are using the Kinect v1, note that it can track 6 players simultaneously (with a skeleton body position), but it can only provide full-body skeletal tracking for 2 players at a time. It's possible to specify which 2 players you want full skeletal tracking for in the official SDK, and you can do this based on which skeletons are in your target game zone.
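As a sketch of that selection with the official Kinect v1 SDK (SkeletonStream.AppChoosesSkeletons / ChooseSkeletons); the rectangular zone bounds are placeholders you would tune for your setup:

using System.Linq;
using Microsoft.Kinect;

static class GameZoneSelector
{
    // Sketch for the Kinect v1 SDK: let the application decide which (at most two)
    // skeletons receive full tracking, preferring those inside a rectangular game zone.
    // Zone bounds are placeholders, in camera-space metres.
    const float ZoneMinX = -1.0f, ZoneMaxX = 1.0f;
    const float ZoneMinZ = 1.5f, ZoneMaxZ = 3.0f;

    public static void Enable(KinectSensor sensor)
    {
        sensor.SkeletonStream.AppChoosesSkeletons = true;   // turn off automatic selection
    }

    // Call this from your SkeletonFrameReady handler with the copied skeleton array.
    public static void OnSkeletonFrame(KinectSensor sensor, Skeleton[] skeletons)
    {
        var inZone = skeletons
            .Where(s => s.TrackingState != SkeletonTrackingState.NotTracked)
            .Where(s => s.Position.X >= ZoneMinX && s.Position.X <= ZoneMaxX
                     && s.Position.Z >= ZoneMinZ && s.Position.Z <= ZoneMaxZ)
            .Take(2)
            .ToArray();

        if (inZone.Length == 2)
            sensor.SkeletonStream.ChooseSkeletons(inZone[0].TrackingId, inZone[1].TrackingId);
        else if (inZone.Length == 1)
            sensor.SkeletonStream.ChooseSkeletons(inZone[0].TrackingId);
        else
            sensor.SkeletonStream.ChooseSkeletons();        // nobody in the zone: track no one
    }
}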
If this isn't the problem, and the problem is that the Kinect (v1 or v2) has already detected 6 players and it can't detect a 7th individual that's in your game zone, then that is a more difficult problem. With the official SDK, you have no control over which 6 players are selected to be tracked. The sensor will lock onto the first 6 players it finds, so if a 7th player walks in, there is no simple way to lock onto that player.
However, there are some possible workarounds that involve resetting the sensor to clear all skeletons and force it to re-select the 6 tracked skeletons (see the thread Skeleton tracking in crowds - Kinect v2):
Kinect body tracking is always scanning and finding candidate bodies
to track. The body tracking only locks on when it detects head and
shoulders of the person facing the camera. You could do something like
look for stable blob points in the target area and if there isn't a
tracked body, reset the Kinect Monitor service.
The SDK is resilient to this type of failure of the runtime, but it is
a hard approach. Additionally, you could employ a way to cover the
depth camera (your hand) to reset the tracking since this will make
all depth/ir invalid and will need to rebuild.
-- Carmine Sirignano - MSFT
In the same thread, RobAcheson points out that restarting the sensor is another workaround:
I've been using the by-hand method successfully for a while and that
definitely works - when I'm in the crowd :)
I have started calling KinectSensor.Close() and KinectSensor.Open()
when there are >6 skeletons if none are in the target area. That seems
to be working well too. Now I just need a crowd to test with.
-- RobAcheson
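A sketch of that reset workaround against the Kinect v2 SDK, again with placeholder zone bounds; the trigger condition (six tracked bodies, none inside the zone) is just one way to phrase it:

using System.Linq;
using Microsoft.Kinect;

static class CrowdReset
{
    // Sketch for the Kinect v2 SDK, following the quoted workaround: if bodies are
    // being tracked but none of them is inside the game zone, close and reopen the
    // sensor so it can re-acquire. Zone bounds are placeholders, in camera-space metres.
    const float ZoneMinX = -1.0f, ZoneMaxX = 1.0f;
    const float ZoneMinZ = 1.5f, ZoneMaxZ = 3.0f;

    public static void MaybeReset(KinectSensor sensor, Body[] bodies)
    {
        var tracked = bodies.Where(b => b != null && b.IsTracked).ToArray();

        bool anyInZone = tracked.Any(b =>
        {
            var p = b.Joints[JointType.SpineBase].Position;
            return p.X >= ZoneMinX && p.X <= ZoneMaxX && p.Z >= ZoneMinZ && p.Z <= ZoneMaxZ;
        });

        if (tracked.Length >= 6 && !anyInZone)
        {
            sensor.Close();   // drops all tracked skeletons
            sensor.Open();    // re-acquires, hopefully locking onto the players in the zone
        }
    }
}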

Calculating orientation from 6-axis IMU without Magnetometer

Is it possible to perform quaternion/Euler angle calculations from only accelerometer and gyroscope readings?
I'd like to be able to detect orientation for a small PCB that I designed and built with the InvenSense ICM-20689 (the SPI version of the popular MPU-6050/6000) but without a magnetometer. I can incorporate a magnetometer into the next revision, but I'd prefer not to if I can get away without it, as it costs valuable PCB real estate on a wearable device which I'm trying to make very small. I've seen complementary filters used to give 2 of the 3 Euler angles without a magnetometer, so I'd like to understand what the trade-offs of not using a magnetometer are.
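To illustrate the trade-off: with only an accelerometer and gyroscope, roll and pitch are observable (gravity gives an absolute reference), while yaw has no reference and will drift. A minimal complementary-filter sketch; the blend factor and axis conventions are assumptions:

using System;

class ComplementaryFilter
{
    // Minimal sketch: roll and pitch from gyro + accelerometer only.
    // Without a magnetometer yaw has no absolute reference and will drift,
    // so it is not estimated here. Blend factor and axis conventions are assumptions.
    double roll, pitch;            // radians
    const double Alpha = 0.98;     // weight given to the integrated gyro

    public void Update(double gx, double gy, double gz,   // gyro rates, rad/s
                       double ax, double ay, double az,   // accelerometer, any consistent unit
                       double dt)                         // seconds since the last sample
    {
        // Integrate gyro rates (small-angle approximation, ignores axis coupling).
        double rollGyro = roll + gx * dt;
        double pitchGyro = pitch + gy * dt;

        // Gravity direction from the accelerometer: absolute but noisy, and wrong
        // whenever the device is accelerating.
        double rollAcc = Math.Atan2(ay, az);
        double pitchAcc = Math.Atan2(-ax, Math.Sqrt(ay * ay + az * az));

        // Blend: gyro for short-term dynamics, accelerometer to cancel long-term drift.
        roll = Alpha * rollGyro + (1 - Alpha) * rollAcc;
        pitch = Alpha * pitchGyro + (1 - Alpha) * pitchAcc;
    }

    public double Roll => roll;    // radians
    public double Pitch => pitch;  // radians
}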

Will Kinect v2 support multiple sensors?

Working with multiple Kinect v1 sensors is very difficult because of the IR interference between the sensors.
Based on what I read in this Gamasutra article, Microsoft got rid of the interference problem with the time-of-flight mechanism that the Kinect v2 sensor uses to gauge depth.
Does that mean I could use multiple Kinect v2 sensors at the same time, or did I misunderstand the article?
Thanks for the help!
I asked this question, in person, of the dev team at the meetup in San Francisco in April. The answer I got was:
"This feature is 3+ months away. We want to prioritize single-Kinect features before working on multiple Kinects."
I'm a researcher, and my goal is to have a bunch of odd setups, so this is a frustrating answer, but I understand that they need to prioritize usage that will be immediately useful to a larger market.
Could you connect them to multiple computers and stream data back and forth?
As #escapecharacter mentioned, support for multiple Kinect v2 sensors is not likely in the very near future.
I can also confirm, one of the Kinect V2 SDK samples has this comment:
// for Alpha, one sensor is supported
this.kinectSensor = KinectSensor.Default;
I think the hardware itself is capable of avoiding the interference problem. Hopefully the slightly larger amount of data (higher-res RGB stream) won't be a problem with multiple sensors (and the available USB bandwidth), and it would be a matter of enabling the SDK to safely handle multiple sensor instances in the future.
I wouldn't expect a quick update to the SDK to enable this, though, so in the meantime, although not ideal, you could try either:
Using multiple V2 sensors on multiple machines communicating over a
local network, passing only processed/minimal data (to keep the delay
as small as possible)
Using multiple V1 sensors using Shake'n'Sense (pdf link to paper) to reduce interference
At least you could, to a certain extent, make some progress testing some of your project's assumptions with multiple sensors, and update the project when the updated SDK is out.
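For the first of those options (multiple machines on a local network passing only minimal processed data), a sketch of the sending side; the host, port, and CSV payload format are arbitrary choices for illustration:

using System.Net.Sockets;
using System.Text;

static class SkeletonSender
{
    // Sketch for streaming minimal, already-processed data from one Kinect machine
    // to another over the local network. Host, port, and the CSV payload are arbitrary.
    static readonly UdpClient Client = new UdpClient();

    public static void SendJoint(string host, int port, float x, float y, float z)
    {
        // Keep the payload tiny (one CSV line per frame) to keep the delay small.
        byte[] payload = Encoding.ASCII.GetBytes(
            string.Format("{0:F3},{1:F3},{2:F3}", x, y, z));
        Client.Send(payload, payload.Length, host, port);
    }
}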
I realize I misread your question and interpreted it as "how can I connect two Kinect 2's to a computer" when you were actually asking about how to avoid interference, and Kinect 2 was your hoped-for solution.
You can hack around Kinect 1 interference by lightly shaking one of the sensors independently of the other. See here:
http://channel9.msdn.com/coding4fun/kinect/Shaking-some-sense-into-using-multiple-Kinects-with-Shake-n-Sense
One of the craziest things I've ever seen that actually worked. I was at Microsoft Research when they figured this out, and it works quite well.
You can have a Kinect v1 viewing the same scene as a Kinect v2 without interference. I know this isn't exactly what you're looking for, but it could be useful.
Two years later, and this still cannot be done.
See:
https://social.msdn.microsoft.com/Forums/en-US/8e2233b6-3c4f-485b-a683-6bacd6a74d53/how-to-prevent-interference-between-multiple-kinect-v2-sensors?forum=kinectv2sdk
https://github.com/OpenKinect/libfreenect2/issues/424
As stated in the second link,
What happens is this: Each Kinect v2 continuously switches between different modulation frequencies. When two Kinects switch to the same frequency range, the interference occurs. They typically gradually drift into the same range and after a while drift out of that range again. So, theoretically, you just have to wait a bit until the interference is gone. The only way I found to stop the interference immediately was to disconnect (and reconnect) the concerned Kinect from its power supply
...
Quite unfortunate that these modulation frequencies aren't controllable at this time. Let's hope MS surprises us with that custom firmware
IIRC, I came across a group at MIT that got custom firmware from MS which solved the problem, but I can't seem to find the reference. Unfortunately, it is not available to the public.
I think we can't use multiple Kinect v2 sensors in the same environment because they will interfere a lot compared to Kinect v1. Since Kinect v2 depth sensing is based on the time-of-flight principle, multiple Kinect v2 sensors will interfere heavily. For Kinect v1 the interference is not that severe.