Will Kinect v2 support multiple sensors? - kinect

Working with multiple Kinect v1 sensors is very difficult because of the IR interference between the sensors.
Based on what I read in this Gamasutra article, Microsoft got rid of the interference problem with the time-of-flight mechanism that the Kinect v2 sensor uses to gauge depth.
Does that mean I could use multiple Kinect v2 sensors at the same time, or did I misunderstand the article?
Thanks for the help!

I asked this question, in person, of the dev team at the meetup in San Francisco in April. The answer I got was:
"This feature is 3+ months away. We want to prioritize single-Kinect features before working on multiple Kinects."
I'm a researcher, and my goal is to have a bunch of odd setups, so this is a frustrating answer, but I understand that they need to prioritize usage that will be immediately useful to a larger market.
Could you connect them to multiple computers and stream data back and forth?

As @escapecharacter mentioned, support for multiple Kinect v2 sensors is unlikely in the very near future.
I can also confirm, one of the Kinect V2 SDK samples has this comment:
// for Alpha, one sensor is supported
this.kinectSensor = KinectSensor.Default;
I think the hardware itself is capable of avoiding the interference problem. Hopefully the slightly larger amount of data (the higher-resolution RGB stream) won't be a problem with multiple sensors (given the available USB bandwidth), and it would then be a matter of enabling the SDK to safely handle multiple sensor instances in the future.
I wouldn't expect a quick SDK update to enable this though, so in the meantime, although not ideal, you could try either:
Using multiple V2 sensors on multiple machines communicating over a local network, passing only processed/minimal data (to keep the delay as small as possible)
Using multiple V1 sensors with Shake'n'Sense (PDF link to paper) to reduce interference
That way you could at least make some progress testing your project's assumptions with multiple sensors, and update the project once the new SDK is out.
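The first workaround (one machine per sensor, streaming minimal processed data over the LAN) might be sketched like this. Everything here is hypothetical — the wire format, the function names, and the joint data, which in practice would come from each machine's local Kinect SDK:

```python
import json
import socket
import struct

# Hypothetical minimal payload: send only joint name -> (x, y, z) per
# skeleton, never raw depth frames, to keep network latency low.
def pack_skeleton(sensor_id, joints):
    """Serialize a skeleton to a compact length-prefixed JSON datagram."""
    body = json.dumps({"sensor": sensor_id, "joints": joints}).encode("utf-8")
    return struct.pack("!I", len(body)) + body

def unpack_skeleton(datagram):
    """Inverse of pack_skeleton; honors the length prefix."""
    (length,) = struct.unpack("!I", datagram[:4])
    return json.loads(datagram[4:4 + length].decode("utf-8"))

if __name__ == "__main__":
    # Loopback demo: each sensor machine would send datagrams like this
    # to one central aggregator over the local network.
    recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    recv.bind(("127.0.0.1", 0))                  # OS-assigned port
    send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    joints = {"head": [0.1, 0.5, 2.0], "hand_right": [0.3, 0.2, 1.9]}
    send.sendto(pack_skeleton("kinect-a", joints), recv.getsockname())
    msg = unpack_skeleton(recv.recv(4096))
    print(msg["sensor"], msg["joints"]["head"])
```

UDP keeps per-datagram latency low; if you need ordering or reliability you'd trade that off for TCP.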

I realize I misread your question, and interpreted it as "how can I connect two Kinect 2s to a computer" when you were actually asking about how to avoid interference, and Kinect 2 was your hoped-for solution.
You can hack around Kinect 1 interference by lightly shaking one of them independently of the other. See here:
http://channel9.msdn.com/coding4fun/kinect/Shaking-some-sense-into-using-multiple-Kinects-with-Shake-n-Sense
One of the craziest things I've ever seen that actually worked. I was at Microsoft Research when they figured this out, and it works quite well.

You can have a Kinect v1 viewing the same scene as a Kinect v2 without interference. I know this isn't exactly what you're looking for, but it could be useful.

2 Years later, and this still cannot be done.
See:
https://social.msdn.microsoft.com/Forums/en-US/8e2233b6-3c4f-485b-a683-6bacd6a74d53/how-to-prevent-interference-between-multiple-kinect-v2-sensors?forum=kinectv2sdk
https://github.com/OpenKinect/libfreenect2/issues/424
As stated in the second link,
What happens is this: Each Kinect v2 continuously switches between different modulation frequencies. When two Kinects switch to the same frequency range, the interference occurs. They typically gradually drift into the same range and after a while drift out of that range again. So, theoretically, you just have to wait a bit until the interference is gone. The only way I found to stop the interference immediately was to disconnect (and reconnect) the concerned Kinect from its power supply
...
Quite unfortunate that these modulation frequencies aren't controllable at this time. Let's hope MS surprises us with that custom firmware
IIRC, I came across a group at MIT that got custom firmware from MS which solved the problem, but I can't seem to find the reference. Unfortunately, it is not available to the public.
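The drift behavior described in that quote can be illustrated with a toy simulation. The numbers below are purely illustrative (not the Kinect's actual modulation frequencies or switching scheme): two frequencies random-walk, and "interference" is flagged whenever they wander into the same band:

```python
import random

# Toy model, NOT real Kinect v2 firmware behavior: each sensor's modulation
# frequency does a small random walk, and we flag "interference" whenever
# the two sensors sit within some band of each other. Over time they drift
# into the band and back out again, matching the behavior quoted above.
def simulate_drift(steps=2000, band=0.5, seed=42):
    rng = random.Random(seed)
    f1, f2 = 80.0, 81.0          # arbitrary starting frequencies (MHz)
    interfering = []
    for _ in range(steps):
        f1 += rng.uniform(-0.2, 0.2)
        f2 += rng.uniform(-0.2, 0.2)
        interfering.append(abs(f1 - f2) < band)
    return interfering

if __name__ == "__main__":
    flags = simulate_drift()
    print(f"interfering {sum(flags) / len(flags):.0%} of the time")
```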

I don't think we can use multiple Kinect v2 sensors in the same environment, because they will interfere with each other much more than Kinect v1 sensors do. Since Kinect v2 depth sensing is based on the time-of-flight principle, multiple v2 sensors interfere badly; for Kinect v1 the interference is not as severe.

Related

Defining a tracking area for the Kinect

Is it possible to specify a (rectangular) area for skeleton tracking with the Kinect (using any of the available SDKs)? I want to make sure that only users inside that designated area are tracked and that the sensor is not distracted by people outside it. Think of a game zone, in which a player interacts with the Kinect and where bystanders outside of the zone should be ignored lest they confuse the sensor.
The reason I want this is that many times the Kinect "locks" onto someone or even something, whether it should or not, and then it's difficult for the sensor to track other individuals who come into tracking range. I want to avoid that by defining this zone.
It's not possible to specify a target area for the skeleton tracking with Microsoft's official SDKs, but there are some potential workarounds.
(Note that I'm not familiar with other SDKs for the Kinect, and note that I'm not sure if you are using the Kinect v1 or v2.)
If you are using the Kinect v1, note that it can track 6 players simultaneously (with a skeleton body position), but it can only provide full-body skeletal tracking for 2 players at a time. It's possible to specify which 2 players you want full skeletal tracking for in the official SDK, and you can do this based on which skeletons are in your target game zone.
If this isn't the problem, and the problem is that the Kinect (v1 or v2) has already detected 6 players and it can't detect a 7th individual that's in your game zone, then that is a more difficult problem. With the official SDK, you have no control over which 6 players are selected to be tracked. The sensor will lock onto the first 6 players it finds, so if a 7th player walks in, there is no simple way to lock onto that player.
However, there are some possible workarounds that involve resetting the sensor to clear all skeletons to re-select the 6 tracked skeletons (see the thread Skeleton tracking in crowds - Kinect v2):
Kinect body tracking is always scanning and finding candidate bodies
to track. The body tracking only locks on when it detects head and
shoulders of the person facing the camera. You could do something like
look for stable blob points in the target area and if there isn't a
tracked body, reset the Kinect Monitor service.
The SDK is resilient to this type of failure of the runtime, but it is
a hard approach. Additionally, you could employ a way to cover the
depth camera (your hand) to reset the tracking since this will make
all depth/ir invalid and will need to rebuild.
-- Carmine Sirignano - MSFT
In the same thread, RobAcheson points out that restarting the sensor is another workaround:
I've been using the by-hand method successfully for a while and that
definitely works - when I'm in the crowd :)
I have started calling KinectSensor.Close() and KinectSensor.Open()
when there are >6 skeletons if none are in the target area. That seems
to be working well too. Now I just need a crowd to test with.
-- RobAcheson
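The zone filtering plus reset logic described in those quotes could be sketched as host-side code like this. It is a hypothetical sketch: the zone bounds, body representation, and function names are made up, and `should_reset` would gate the actual KinectSensor.Close()/Open() calls:

```python
# Hypothetical host-side filter: the SDK won't restrict tracking to an
# area, but you can filter the tracked bodies yourself and trigger a
# sensor reset when all tracking slots are taken by out-of-zone people.
def in_zone(body, x_min=-1.0, x_max=1.0, z_min=1.0, z_max=3.0):
    """Rectangular game zone in camera space (meters): x across, z depth."""
    x, _, z = body["position"]
    return x_min <= x <= x_max and z_min <= z <= z_max

def should_reset(tracked_bodies, max_tracked=6):
    """Reset (Close()/Open()) only when every slot is held by a bystander."""
    slots_full = len(tracked_bodies) >= max_tracked
    none_in_zone = not any(in_zone(b) for b in tracked_bodies)
    return slots_full and none_in_zone

if __name__ == "__main__":
    bystanders = [{"position": (2.5, 0.0, 2.0)} for _ in range(6)]
    print(should_reset(bystanders))   # all six slots held out-of-zone -> True
```

Gating the reset on "slots full AND nobody in the zone" avoids kicking out a player who is already being tracked correctly.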

Getting a pointcloud using multiple Kinects

I am working on a project where we are going to use multiple Kinects and merge the pointclouds. I would like to know how to use two Kinects at the same time. Are there any specific drivers or embedded applications?
I used the Microsoft SDK, but it only supports a single Kinect at a time, and for our project we cannot use multiple PCs. Now I have to find a way to overcome this problem. If anyone has experience accessing multiple Kinect drivers, please share your views.
I assume you are talking about Kinect v2?
Check out libfreenect2. It's an open source driver for Kinect v2, and it supports multiple Kinects on the same computer. It doesn't provide any of the "advanced" features of the Microsoft SDK like skeleton tracking, but getting the pointclouds is no problem.
You also need to make sure your hardware supports multiple Kinects. You'll need (most likely) a separate USB3.0 controller for each Kinect. Of course, those controllers need to be Kinect v2 compatible, meaning they need to be Intel or NEC/Renesas chips. That can easily be achieved by using PCIe USB3.0 expansion cards. But those can't be plugged into PCIe x1 slots.
A single lane doesn't have enough bandwidth. x8 or x16 slots usually work.
See Requirements for multiple Kinects#libfreenect2.
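A rough back-of-envelope shows why bandwidth gets tight. The formats below are assumptions (YUY2 color at 2 bytes/pixel, 16-bit depth and IR), not official figures, but they give the right order of magnitude:

```python
# Back-of-envelope bandwidth estimate per Kinect v2, assuming uncompressed
# streams: 16-bit 512x424 depth and IR, 2-byte/pixel 1920x1080 color, 30 fps.
FPS = 30
depth = 512 * 424 * 2 * FPS
ir    = 512 * 424 * 2 * FPS
color = 1920 * 1080 * 2 * FPS

per_sensor_mb = (depth + ir + color) / 1e6
print(f"~{per_sensor_mb:.0f} MB/s per sensor, ~{2 * per_sensor_mb:.0f} MB/s for two")
# A PCIe 2.0 x1 lane tops out around 500 MB/s in theory, and real-world
# USB3/PCIe protocol overhead cuts that substantially, so two sensors on
# one lane leave little or no headroom.
```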
And you also need a strong enough CPU and GPU. Depth processing in libfreenect2 is done on the GPU using OpenGL or OpenCL (CPU is possible as well, but very slow). RGB processing is done on the CPU. It needs quite a bit of processing power to give you the raw data.
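For the point-cloud side itself, the math is just the pinhole camera model. A sketch (the intrinsics and the extrinsic transform below are made-up placeholders — libfreenect2 reports the real per-device intrinsics via `Freenect2Device::getIrCameraParams`, and the sensor-to-sensor transform would come from your own calibration):

```python
import numpy as np

# Back-project a depth frame to 3D with the pinhole model, then merge
# clouds from two sensors given a rotation + translation between them.
def depth_to_points(depth_m, fx, fy, cx, cy):
    """depth_m: (H, W) depths in meters; returns (N, 3) camera-space points."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]          # drop invalid (zero-depth) pixels

def merge_clouds(cloud_a, cloud_b, r_ab, t_ab):
    """Bring cloud_b into cloud_a's frame and concatenate."""
    return np.vstack([cloud_a, cloud_b @ r_ab.T + t_ab])

if __name__ == "__main__":
    depth = np.full((424, 512), 2.0)   # fake flat wall 2 m away
    cloud = depth_to_points(depth, fx=365.0, fy=365.0, cx=256.0, cy=212.0)
    merged = merge_clouds(cloud, cloud, np.eye(3), np.array([0.1, 0.0, 0.0]))
    print(cloud.shape, merged.shape)
```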

Postprocess Depth Image to get Skeleton using the Kinect sdk / other tools?

The short question: I am wondering if the Kinect SDK / NiTE can be exploited to get depth-image-in, skeleton-out software.
The long question: I am trying to dump depth, RGB, and skeleton data streams captured from a v2 Kinect into rosbags. However, to the best of my knowledge, capturing the skeleton stream on Linux with ROS and a Kinect v2 isn't possible yet. Therefore, I was wondering if I could dump rosbags containing the RGB and depth streams, and then post-process these to get the skeleton stream.
I can capture all three streams on windows using the Microsoft kinect v2 SDK, but then dumping them to rosbags, with all the metadata (camera_info, sync info etc) would be painful (correct me if I am wrong).
It's quite some time ago that I worked with NITE (and I only used Kinect v1) so maybe someone else can give a more up-to-date answer, but from what I remember, this should easily be possible.
As long as all relevant data is published via ROS topics, it is quite easy to record them with rosbag and play them back afterwards. Every node that can handle live data from the sensor will also be able to do the same work on recorded data coming from a bag file.
One issue you may encounter is that if you record kinect-data, the bag files are quickly becoming very large (several gigabytes). This can be problematic if you want to edit the file afterwards on a machine with very little RAM. If you only want to play the file or if you have enough RAM, this should not really be a problem, though.
Indeed it is possible to perform NiTE2 skeleton tracking on any depth image stream.
Refer to:
https://github.com/VIML/VirtualDeviceForOpenNI2/wiki/How-to-use
and
https://github.com/VIML/VirtualDeviceForOpenNI2/wiki/About-PrimeSense-NiTE
With this extension one can add a virtual device which allows you to manipulate each pixel of the depth stream. This device can then be used to create a userTracker object, as long as the right device name is provided:
\OpenNI2\VirtualDevice\Kinect
But consider the usage limits:
NiTE is only licensed for use with "Authorized Hardware".

How do you hack/decompile Camera firmware? (w/ decompiling tangent)

I wanted to know what steps one would need to take to "hack" a camera's firmware to add/change features, specifically cameras of Canon or Olympus make.
I can understand this is an involved topic, but a general outline of the steps and what issues I should keep an eye out for would be appreciated.
I presume the first step is to take the firmware, load it into a decompiler (any recommendations?) and examine the contents. I admit I've never decompiled code before, so this will be a good challenge to get me started. Any advice? Books? Tutorials? What should I expect?
Thanks stack as always!
Note: I know about Magic Lantern and CHDK; I want technical advice on how they were started and came to be.
http://magiclantern.wikia.com/wiki/Decompiling
http://magiclantern.wikia.com/wiki/Struct_Guessing
http://magiclantern.wikia.com/wiki/Firmware_file
http://magiclantern.wikia.com/wiki/GUI_Events/550D
http://magiclantern.wikia.com/wiki/Register_Map/Brute_Force
I wanted to know what steps one would need to take to "hack" a
camera's firmware to add/change features, specifically cameras of
Canon or Olympus make.
General steps for this hacking/reverse engineering:
Gathering information about the camera system (main CPU, image coprocessor, RAM/Flash chips, ...). Challenges: camera makers tend to hide such sensitive information, and datasheets/documentation for proprietary chips are not released to the public at all.
Getting the firmware: either by dumping the Flash memory inside the camera or by extracting the firmware from the update packages used for camera firmware updates. Challenges: accessing the readout circuitry for the flash is not a trivial job, especially given that camera systems have some of the most densely populated PCBs. Also, proprietary firmware is often protected with sophisticated encryption when embedded into update packages.
Disassembly: getting "a bit" more readable instructions out of the firmware opcodes. Challenges: although disassemblers are widely available, they will give you the "operational" equivalent assembly code with no guarantee of it being human-readable or meaningful.
Customization: once you understand most of the code's functionality, you can make modifications, taking care not to harm the normal operation of the camera system. Challenges: not an easy task.
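As a concrete first move for the disassembly stage, a strings-style scan of the blob is a common way to orient yourself before reaching for a real disassembler: version banners and menu text point at the code regions worth examining. This is a generic sketch, not tied to any particular camera:

```python
import re

# Minimal clone of the Unix `strings` tool: pull out runs of printable
# ASCII from a raw firmware blob. Version banners, menu labels, and format
# strings found this way help map out the firmware before disassembly.
def extract_strings(blob: bytes, min_len: int = 4):
    """Return printable-ASCII runs at least min_len bytes long."""
    pattern = rb"[\x20-\x7e]{%d,}" % min_len
    return [m.group().decode("ascii") for m in re.finditer(pattern, blob)]

if __name__ == "__main__":
    fake_fw = b"\x00\x01FIRM v1.2.3\xff\xfeCanon EOS\x00\x90\x90ok"
    print(extract_strings(fake_fw))   # ['FIRM v1.2.3', 'Canon EOS']
```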
Alternatively, I highly recommend looking at an already open source camera project (software and hardware). You can learn a lot about camera systems from those.
Such projects are: Elphel and AXIOM

How to detect heart pulse rate without using any instrument in iOS sdk?

I am currently working on an application where I need to find the user's heart rate. I found plenty of applications doing this, but I'm not able to find a single private or public API that supports it.
Is there any framework available that could help? I was also wondering whether the UIAccelerometer class could be useful here, and what level of accuracy I could expect.
How would you implement this feature: by putting a finger on the iPhone camera, by putting the microphone on the jaw or wrist, or some other way?
Is there any way to detect blood circulation changes and derive the heart rate from that, or from UIAccelerometer? Any API or sample code? Thank you.
There is no API for detecting heart rates; these apps do so in a variety of ways.
Some use the accelerometer to measure when the device shakes with each pulse. Others use the camera lens, with the flash on, and detect blood moving through the finger by measuring the reflected light levels.
Various DSP signal processing techniques can be used to possibly discern very low level periodic signals out of a long enough set of samples taken at an appropriate sample rate (accelerometer or reflected light color).
Some of the advanced math functions in the Accelerate framework API can be used as building blocks for these various DSP techniques. An explanation would require several chapters of a Digital Signal Processing textbook, so that might be a good place to start.
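The FFT-based version of that idea can be sketched in a few lines. This is a language-agnostic illustration in Python with a synthetic signal — on iOS the equivalent FFT lives in Accelerate's vDSP, and a real app would feed in camera brightness or accelerometer samples:

```python
import numpy as np

# Estimate heart rate from a 1-D sample stream (e.g. mean frame brightness
# with a finger over the flash): find the dominant frequency within the
# plausible heart-rate band and convert it to beats per minute.
def estimate_bpm(samples, sample_rate, lo_hz=0.7, hi_hz=3.5):
    x = np.asarray(samples, dtype=float)
    x = x - x.mean()                              # remove DC offset
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / sample_rate)
    band = (freqs >= lo_hz) & (freqs <= hi_hz)    # 42-210 bpm
    peak = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * peak

if __name__ == "__main__":
    fs = 30.0                                     # e.g. a 30 fps camera
    t = np.arange(0, 20, 1 / fs)                  # 20 s of samples
    pulse = np.sin(2 * np.pi * 1.2 * t)           # 1.2 Hz ~ 72 bpm "pulse"
    noisy = pulse + 0.3 * np.random.default_rng(0).normal(size=t.size)
    print(round(estimate_bpm(noisy, fs)))         # prints 72
```

Restricting the peak search to a physiological band is what lets a weak periodic signal survive the noise, which is the point made above about needing a long enough window at an appropriate sample rate.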