Getting a pointcloud using multiple Kinects

I am working on a project where we are going to use multiple Kinects and merge the pointclouds. I would like to know how to use two Kinects at the same time. Are there any specific drivers or embedded applications?
I used the Microsoft SDK, but it only supports a single Kinect at a time, and for our project we cannot use multiple PCs. Now I have to find a way around the problem. If anyone has experience with accessing multiple Kinects through drivers, please share your views.

I assume you are talking about Kinect v2?
Check out libfreenect2. It's an open source driver for Kinect v2, and it supports multiple Kinects on the same computer. It doesn't provide any of the "advanced" features of the Microsoft SDK, like skeleton tracking, but getting the pointclouds is no problem.
You also need to make sure your hardware supports multiple Kinects. You'll most likely need a separate USB 3.0 controller for each Kinect. Those controllers need to be Kinect v2 compatible, meaning they need Intel or NEC/Renesas chips. That can easily be achieved with PCIe USB 3.0 expansion cards, but those can't be plugged into PCIe x1 slots: a single lane doesn't have enough bandwidth. x8 or x16 slots usually work.
See Requirements for multiple Kinects#libfreenect2.
And you also need a strong enough CPU and GPU. Depth processing in libfreenect2 is done on the GPU using OpenGL or OpenCL (CPU is possible as well, but very slow). RGB processing is done on the CPU. It needs quite a bit of processing power to give you the raw data.
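For reference, here is a minimal sketch of opening every detected Kinect v2 with libfreenect2 and pulling one registered frame pair from each. The calls follow libfreenect2's public API, but error handling, pipeline selection, and the actual cloud assembly are omitted:

    #include <libfreenect2/libfreenect2.hpp>
    #include <libfreenect2/frame_listener_impl.h>
    #include <libfreenect2/registration.h>
    #include <vector>

    int main() {
        libfreenect2::Freenect2 freenect2;
        int num = freenect2.enumerateDevices();  // how many Kinects are attached
        std::vector<libfreenect2::Freenect2Device*> devices;
        std::vector<libfreenect2::SyncMultiFrameListener*> listeners;

        // Open every device by serial number and start streaming.
        for (int i = 0; i < num; ++i) {
            libfreenect2::Freenect2Device *dev =
                freenect2.openDevice(freenect2.getDeviceSerialNumber(i));
            auto *listener = new libfreenect2::SyncMultiFrameListener(
                libfreenect2::Frame::Color | libfreenect2::Frame::Depth);
            dev->setColorFrameListener(listener);
            dev->setIrAndDepthFrameListener(listener);
            dev->start();
            devices.push_back(dev);
            listeners.push_back(listener);
        }

        // Grab one frame set per device; getPointXYZ() turns a depth pixel
        // into a 3D point in that device's camera frame.
        for (size_t i = 0; i < devices.size(); ++i) {
            libfreenect2::FrameMap frames;
            listeners[i]->waitForNewFrame(frames);

            libfreenect2::Registration reg(devices[i]->getIrCameraParams(),
                                           devices[i]->getColorCameraParams());
            libfreenect2::Frame undistorted(512, 424, 4), registered(512, 424, 4);
            reg.apply(frames[libfreenect2::Frame::Color],
                      frames[libfreenect2::Frame::Depth], &undistorted, &registered);

            float x, y, z;
            reg.getPointXYZ(&undistorted, 212, 256, x, y, z);  // e.g. the center pixel

            listeners[i]->release(frames);
        }

        for (auto *dev : devices) { dev->stop(); dev->close(); }
    }

Note that merging the clouds afterwards still requires calibrating the extrinsics between the sensors yourself; libfreenect2 only gives you per-device points.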

Related

How would multi-GPU programming work with Vulkan?

Would using multiple GPUs in Vulkan be something like making many command queues and then dividing command buffers between them?
There are two problems:
In OpenGL, we use GLEW to get function pointers, and with more than one GPU, each GPU has its own driver. How would we handle this in Vulkan?
Would part of the frame be generated on one GPU and the rest on other GPUs, like using the Intel GPU to render the UI and an AMD or NVIDIA GPU to render the game screen on laptops, for example? Or would one frame be generated on one GPU and the next frame on another?
Updated with more recent information, now that Vulkan exists.
There are two kinds of multi-GPU setups: where multiple GPUs are part of some SLI-style setup, and the kind where they are not. Vulkan supports both, and supports them both in the same computer. That is, you can have two NVIDIA GPUs that are SLI-ed together, and the Intel embedded GPU, and Vulkan can interact with them all.
Non-SLI setups
In Vulkan, there is something called the Vulkan instance. This represents the base Vulkan system itself; individual devices register themselves to the instance. The Vulkan instance system is, essentially, implemented by the Vulkan SDK.
Physical devices represent a specific piece of hardware that implements the interface to a GPU. Each piece of hardware that exposes a Vulkan implementation does so by registering its physical device with the instance system. You can query which physical devices are available, as well as some basic properties about them (their names, how much memory they offer, etc).
You then create logical devices for the physical devices you use. Logical devices are how you actually do stuff in Vulkan. They have queues, command buffers, etc. And each logical device is separate... mostly.
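A short sketch of that enumeration step, using only core Vulkan 1.0 calls (instance creation omitted):

    #include <vulkan/vulkan.h>
    #include <cstdio>
    #include <vector>

    void listGpus(VkInstance instance) {
        uint32_t count = 0;
        vkEnumeratePhysicalDevices(instance, &count, nullptr);   // query the count first
        std::vector<VkPhysicalDevice> gpus(count);
        vkEnumeratePhysicalDevices(instance, &count, gpus.data());

        for (VkPhysicalDevice gpu : gpus) {
            VkPhysicalDeviceProperties props;
            vkGetPhysicalDeviceProperties(gpu, &props);
            // Both a discrete NVIDIA/AMD card and the Intel embedded GPU
            // show up here; a VkDevice created from one handle talks only
            // to that GPU.
            std::printf("%s\n", props.deviceName);
        }
    }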
Now, you can bypass the whole "instance" thing and load devices manually. But you really shouldn't. At least, not unless you're at the end of development. Vulkan layers are far too critical for day-to-day debugging to just opt out of that.
There are mechanisms, core in Vulkan 1.1, that allow individual devices to be able to communicate some information to other devices. In 1.1, only certain kinds of information can be shared across physical devices (namely, fences and semaphores, and even then, only on Linux through sync files). While these APIs could provide a mechanism for sharing data between two physical devices, at present, the restriction on most forms of data sharing is that both physical devices must have matching UUIDs (and therefore are the same physical device).
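If you want to check that restriction yourself, the UUIDs are exposed through VkPhysicalDeviceIDProperties (core in 1.1); a sketch:

    #include <vulkan/vulkan.h>
    #include <cstdio>

    void printDeviceUuid(VkPhysicalDevice gpu) {
        VkPhysicalDeviceIDProperties idProps{};
        idProps.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_ID_PROPERTIES;

        VkPhysicalDeviceProperties2 props2{};
        props2.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_PROPERTIES_2;
        props2.pNext = &idProps;  // chain the ID query into the main query

        vkGetPhysicalDeviceProperties2(gpu, &props2);

        // Two physical devices must report the same deviceUUID for most
        // forms of cross-device resource sharing to be legal.
        for (uint8_t byte : idProps.deviceUUID) std::printf("%02x", byte);
        std::printf("\n");
    }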
SLI setups
Dealing with SLI is covered by two Vulkan 1.0 extensions: KHR_device_group and KHR_device_group_creation. The former is for dealing with "device groups" in Vulkan, while the latter is an instance extension for creating device-grouped devices. Both of these are core in Vulkan 1.1.
The idea with this is that the SLI aggregation is exposed as a single VkDevice, which is created from a number of VkPhysicalDevices. Each internal physical device is a "sub-device". You can query sub-devices and some properties about them. Memory allocations are specific to a particular sub-device. Resource objects (buffers and images) are not specific to a sub-device, but they can be associated with different memory allocations on the different sub-devices.
Command buffers and queues are not specific to sub-devices; when you execute a CB on a queue, the driver figures out which sub-device(s) it will run on, and fills in the descriptors that use the images/buffers with the proper GPU pointers for the memory that those images/buffers have been bound to on those particular sub-devices.
Alternate-frame rendering is simply presenting images generated from one sub-device on one frame, then presenting images from a different sub-device on another frame. Split-frame rendering is handled by a more complex mechanism, where you define the memory for the destination image of a rendering command to be split among devices. You can even do this with presentable images.
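A sketch of the creation side, using the core 1.1 entry points (queue family 0 is assumed to support graphics here, which real code must query):

    #include <vulkan/vulkan.h>
    #include <vector>

    VkDevice createGroupedDevice(VkInstance instance) {
        // Ask the loader which SLI-style device groups exist.
        uint32_t groupCount = 0;
        vkEnumeratePhysicalDeviceGroups(instance, &groupCount, nullptr);
        if (groupCount == 0) return VK_NULL_HANDLE;
        std::vector<VkPhysicalDeviceGroupProperties> groups(groupCount);
        for (auto &g : groups) {
            g.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_GROUP_PROPERTIES;
            g.pNext = nullptr;
        }
        vkEnumeratePhysicalDeviceGroups(instance, &groupCount, groups.data());

        // Chain the whole first group into device creation; the resulting
        // VkDevice spans every sub-device in that group.
        VkDeviceGroupDeviceCreateInfo groupInfo{};
        groupInfo.sType = VK_STRUCTURE_TYPE_DEVICE_GROUP_DEVICE_CREATE_INFO;
        groupInfo.physicalDeviceCount = groups[0].physicalDeviceCount;
        groupInfo.pPhysicalDevices = groups[0].physicalDevices;

        float priority = 1.0f;
        VkDeviceQueueCreateInfo queueInfo{};
        queueInfo.sType = VK_STRUCTURE_TYPE_DEVICE_QUEUE_CREATE_INFO;
        queueInfo.queueFamilyIndex = 0;  // assumed graphics-capable; query in real code
        queueInfo.queueCount = 1;
        queueInfo.pQueuePriorities = &priority;

        VkDeviceCreateInfo devInfo{};
        devInfo.sType = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO;
        devInfo.pNext = &groupInfo;
        devInfo.queueCreateInfoCount = 1;
        devInfo.pQueueCreateInfos = &queueInfo;

        VkDevice device = VK_NULL_HANDLE;
        vkCreateDevice(groups[0].physicalDevices[0], &devInfo, nullptr, &device);
        return device;
    }

Inside command buffers, vkCmdSetDeviceMask() then selects which sub-devices execute the subsequent commands, which is how alternate-frame rendering is steered.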
In Vulkan you need to enumerate the devices and select the one you want to work with. Nothing stops you from working with two different ones separately. Each Vulkan call needs at least one parameter as context; the loader layer will then forward the call to the correct driver. Or you can load the functions for each device separately to avoid the loader's trampoline.
A generated frame will need to be forwarded to the card that is connected to the screen for display. So it's more likely that one GPU is responsible for graphics and the others are used for physics.
Only a single device can be connected to a given surface at a time, so that device needs to receive the rendered frame and copy it into the presentable image that gets pushed to the screen.
Device groups are the way to go. Look at the Vulkan specification for documentation. Vulkan handles all the dispatch to the other GPUs (when they are connected by SLI/CrossFire). All you need to do is tell Vulkan how the dispatch is done (for example, dispatch one frame on one GPU and the next on another). If you need to do compute work, you will need to address each GPU individually. Here is a link for reference: https://www.ea.com/seed/news/khronos-munich-2018-halcyon-vulkan

Kinect depth data ONLY

Is there a way in Linux (Raspbian) to capture only the depth data stream from a Kinect? I'm trying to reduce the amount of processing needed to capture Kinect data, so I want to ship the stream to another computer to assemble it there.
Note:
I have freenect installed, but anything that requires OpenGL will not run on Raspbian.
I have installed this example, which captures the data stream with a black-and-white visual depth display.
librekinect is a Linux kernel module that lets you use the depth image like a standard webcam. It's known to work with the Raspberry Pi.
But if you want to use libfreenect for full video/depth/motor support, you'll need a more powerful board like the ODROID XU-3 Lite. By the way, libfreenect only requires OpenGL for some of its examples; the rest of the project compiles and runs fine without it.
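If you go the librekinect route, here is a hypothetical sketch of shipping the depth stream to another machine. It assumes the module exposes the depth image as /dev/video0, that OpenCV is available for capture, and that the host/port are placeholders you'd replace:

    #include <opencv2/opencv.hpp>
    #include <arpa/inet.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <cstdio>

    int main() {
        cv::VideoCapture cap(0);  // assumes librekinect exposes depth as /dev/video0
        if (!cap.isOpened()) { std::fprintf(stderr, "no capture device\n"); return 1; }

        int sock = socket(AF_INET, SOCK_STREAM, 0);
        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(9000);                          // placeholder port
        inet_pton(AF_INET, "192.168.1.10", &addr.sin_addr);   // placeholder receiver
        if (connect(sock, (sockaddr*)&addr, sizeof addr) != 0) return 1;

        cv::Mat frame;
        while (cap.read(frame)) {
            // Ship the raw pixel buffer; the receiver must know rows/cols/type.
            size_t bytes = frame.total() * frame.elemSize();
            if (send(sock, frame.data, bytes, 0) < 0) break;
        }
        close(sock);
    }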

Will Kinect v2 support multiple sensors?

Working with multiple Kinect v1 sensors is very difficult because of the IR interference between the sensors.
Based on what I read in this Gamasutra article, Microsoft got rid of the interference problem with the time-of-flight mechanism that the Kinect v2 sensor uses to gauge depth.
Does that mean I could use multiple Kinect v2 sensors at the same time, or did I misunderstand the article?
Thanks for the help!
I asked this question, in person, of the dev team at the meetup in San Francisco in April. The answer I got was:
"This feature is 3+ months away. We want to prioritize single-Kinect features before working on multiple Kinects."
I'm a researcher, and my goal is to have a bunch of odd setups, so this is a frustrating answer, but I understand that they need to prioritize usage that will be immediately useful to a larger market.
Could you connect them to multiple computers and stream data back and forth?
As @escapecharacter mentioned, support for multiple Kinect v2 sensors is not likely in the very near future.
I can also confirm it: one of the Kinect v2 SDK samples has this comment:
// for Alpha, one sensor is supported
this.kinectSensor = KinectSensor.Default;
I think the hardware itself is capable of avoiding the interference problem. Hopefully the slightly larger amount of data (higher-res RGB stream) won't be a problem with multiple sensors (and the available USB bandwidth), and it will just be a matter of the SDK safely handling multiple sensor instances in the future.
I wouldn't expect a quick update to the SDK to enable that, though, so in the meantime, although not ideal, you could try either:
Using multiple V2 sensors on multiple machines communicating over a local network, passing only processed/minimal data to keep the delay as small as possible (a minimal sketch of this follows below)
Using multiple V1 sensors with Shake'n'Sense (pdf link to paper) to reduce interference
At least you could, to a certain extent, make some progress testing some of your assumptions for the project with multiple sensors, and update the project when the new SDK is out.
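For the first option, the point is to send a few bytes per frame instead of whole frames. Here is a hypothetical sketch; the packet layout and addresses are made up for illustration, and the joint data would come from whatever processing you run next to the sensor:

    #include <arpa/inet.h>
    #include <sys/socket.h>
    #include <unistd.h>

    // Hypothetical payload: one tracked joint position per frame rather than
    // a full depth image -- 12 bytes instead of hundreds of kilobytes.
    struct JointPacket { float x, y, z; };

    int main() {
        int sock = socket(AF_INET, SOCK_DGRAM, 0);
        sockaddr_in dest{};
        dest.sin_family = AF_INET;
        dest.sin_port = htons(9000);                          // placeholder port
        inet_pton(AF_INET, "192.168.1.20", &dest.sin_addr);   // placeholder aggregator

        JointPacket p{0.1f, 0.5f, 2.3f};  // would be filled in from the SDK each frame
        sendto(sock, &p, sizeof p, 0, (sockaddr*)&dest, sizeof dest);
        close(sock);
    }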
I realize I misread your question and interpreted it as "how can I connect two Kinect 2s to a computer", when you were actually asking how to avoid interference, with Kinect 2 as your hoped-for solution.
You can hack around Kinect 1 interference by lightly shaking one of the sensors independently of the other. See here:
http://channel9.msdn.com/coding4fun/kinect/Shaking-some-sense-into-using-multiple-Kinects-with-Shake-n-Sense
One of the craziest things I've ever seen that actually worked. I was at Microsoft Research when they figured this out, and it works quite well.
You can have a Kinect v1 viewing the same scene as a Kinect v2 without interference. I know this isn't exactly what you're looking for, but it could be useful.
Two years later, and this still cannot be done.
See:
https://social.msdn.microsoft.com/Forums/en-US/8e2233b6-3c4f-485b-a683-6bacd6a74d53/how-to-prevent-interference-between-multiple-kinect-v2-sensors?forum=kinectv2sdk
https://github.com/OpenKinect/libfreenect2/issues/424
As stated in the second link,
What happens is this: Each Kinect v2 continuously switches between different modulation frequencies. When two Kinects switch to the same frequency range, the interference occurs. They typically gradually drift into the same range and after a while drift out of that range again. So, theoretically, you just have to wait a bit until the interference is gone. The only way I found to stop the interference immediately was to disconnect (and reconnect) the concerned Kinect from its power supply
...
Quite unfortunate that these modulation frequencies aren't controllable at this time. Let's hope MS surprises us with that custom firmware
IIRC, I came across a group at MIT that got custom firmware from MS which solved the problem, but I can't seem to find the reference. Unfortunately, it is not available to the public.
I don't think we can use multiple Kinect v2 sensors in the same environment, because they will interfere with each other far more than Kinect v1 does. Since Kinect v2 depth sensing is based on the time-of-flight principle, multiple Kinect v2 sensors interfere heavily; with Kinect v1 the interference is not as severe.

Kinect hangs up suddenly after working well for a few seconds. How can I fix it?

I tried using "Kinect for Windows" on my Mac. The environment set-up seems to have gone well, but something seems to be wrong. When I start samples such as
OpenNI-Bin-Dev-MacOSX-v1.5.4.0/Samples/Bin/x64-Release/Sample-NiSimpleViewer
or others, the sample application starts and seems to work quite well at first, but after a few seconds (10 to 20), the motion shown on screen halts and never resumes. It seems the application becomes unable to fetch data from the Kinect after a certain point.
I don't know whether the libraries, their dependencies, or the Kinect hardware itself is at fault (the hardware could be invisibly broken, for instance), and I really want to know how to determine which it is.
Could anybody tell me how I can fix this issue?
My environment is shown below:
Mac OS X v10.7.4 (MacBook Air, Core i5 1.6 GHz, 4 GB of memory)
Xcode 4.4.1
Kinect for Windows
OpenNI-Bin-Dev-MacOSX-v1.5.4.0
Sensor-Bin-MacOSX-v5.1.2.1
I followed the instructions here about libusb: http://openkinect.org/wiki/Getting_Started#Homebrew
and when I try using libfreenect (I know it's separate from OpenNI+SensorKinect), its sample applications say "Number of devices found: 0", which makes no sense to me since I definitely connected my Kinect to the MBA...
Unless you're booting into Windows, forget about Kinect for Windows.
Regarding libfreenect and OpenNI in most cases you'll use one or the other, so think of what functionalities you need.
If it's basic RGB+Depth image (and possibly motor and accelerometer) access, libfreenect is your choice.
If you need RGB+Depth images plus skeleton tracking and (hand) gestures (but no motor/accelerometer access), use OpenNI. Note that if you use the unstable (dev) versions, you should use Avin's SensorKinect driver.
The easiest thing to do is a nice clean install of OpenNI.
Also, if it helps, you can use a creative coding framework like Processing or OpenFrameworks.
For Processing I recommend SimpleOpenNI
For OpenFrameworks you can use ofxKinect, which ties into libfreenect, or ofxOpenNI. Download the OpenFrameworks package on the FutureTheatre Kinect Workshop wiki, as it includes both addons and some really nice examples.
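If libfreenect is the route you take, a bare-bones sanity check that it can see and stream from the sensor looks roughly like this (C API called from C++; the header path and depth mode may vary by install). A device count of 0 here, as in the question, usually points at USB or power problems rather than your code:

    #include <libfreenect/libfreenect.h>  // header path varies by install
    #include <cstdio>

    void on_depth(freenect_device *dev, void *depth, uint32_t timestamp) {
        std::printf("depth frame at %u\n", timestamp);  // raw 11-bit depth in `depth`
    }

    int main() {
        freenect_context *ctx;
        if (freenect_init(&ctx, nullptr) < 0) return 1;

        int n = freenect_num_devices(ctx);  // the "Number of devices found" check
        std::printf("devices: %d\n", n);
        if (n < 1) return 1;                // 0 here: suspect USB/power, not code

        freenect_device *dev;
        if (freenect_open_device(ctx, &dev, 0) < 0) return 1;

        freenect_set_depth_mode(dev, freenect_find_depth_mode(
            FREENECT_RESOLUTION_MEDIUM, FREENECT_DEPTH_11BIT));
        freenect_set_depth_callback(dev, on_depth);
        freenect_start_depth(dev);

        while (freenect_process_events(ctx) >= 0) { /* runs until error/unplug */ }

        freenect_stop_depth(dev);
        freenect_close_device(dev);
        freenect_shutdown(ctx);
    }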
When you connect the Kinect to the machine, have you provided external power to it? The device will appear connected to a computer on USB power alone, but it will not be able to transfer data, as it needs the external power supply.
Also, which Kinect sensor are you using? If it is a new Kinect device (designed for Windows), it may have a different device signature, which may cause the OpenNI drivers to play up. I'm not 100% sure on this one, but I've only ever tried OpenNI with an Xbox 360 sensor.

suitable Embedded system to be used for image processing and gps/gsm

I am working on a project where I would like to install an embedded system in a certain location. The system is provided with a camera and has to perform image processing functions on the images obtained from that camera.
The system must also be fitted with GPS and GSM modules.
I am in the process of choosing the hardware. I am thinking of using a BeagleBoard or an FPGA; which one is more suitable for my application? Do you recommend other boards? Do you know any GSM or GPS modules that can be interfaced with these boards?
Thank you
If your image processing algorithms are very CPU intensive, I'd suggest you consider FPGAs. Otherwise, the BeagleBoard is fine.
What is the interface to your camera? USB / FireWire / I2C / other? If the BeagleBoard supports what you need and can handle the processing, that's probably the easiest way to go. FireWire and USB interfaces are not exactly trivial to do on an FPGA, unless you can get a board and a matching Linux distro for it where everything is configured and working out of the box (and that's probably going to be expensive then...).
GPS modules typically connect over a simple serial connection, so that shouldn't be an issue for either solution.
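To illustrate how simple the GPS side is, here is a rough sketch of reading raw NMEA sentences from a serial GPS module on Linux; the device path and baud rate are assumptions that depend on your particular module and wiring:

    #include <fcntl.h>
    #include <termios.h>
    #include <unistd.h>
    #include <cstdio>

    int main() {
        // Typical modules stream NMEA text at 9600 baud over a UART.
        int fd = open("/dev/ttyUSB0", O_RDONLY | O_NOCTTY);  // path depends on wiring
        if (fd < 0) return 1;

        termios tio{};
        tcgetattr(fd, &tio);
        cfsetispeed(&tio, B9600);
        tio.c_cflag |= CS8 | CLOCAL | CREAD;  // 8 data bits, ignore modem lines
        tio.c_lflag = 0;                      // raw input, no echo
        tio.c_cc[VMIN] = 1;                   // block until at least one byte arrives
        tcsetattr(fd, TCSANOW, &tio);

        char buf[256];
        ssize_t n;
        while ((n = read(fd, buf, sizeof buf - 1)) > 0) {
            buf[n] = '\0';
            std::printf("%s", buf);  // lines like $GPGGA,<time>,<lat>,N,<lon>,E,...
        }
        close(fd);
    }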