Can I use Microsoft Kinect SDK v2.0 to get Skeletal Tracking information from TWO Kinects connected to the same PC?

How can I connect and use multiple Kinect sensors (v2.0) with MS Kinect SDK on the same PC?

You can't. The official Microsoft SDK only supports one Kinect on the same PC.
The open source driver (libfreenect2) supports multiple Kinects on the same PC, but doesn't have skeletal tracking.
But you can run each Kinect on its own PC and stream the data to one central processing PC. There are multiple projects going in that direction:
KV2Streamer allows you to stream all Kinect data (including skeletal tracking) from one PC to another.
LiveScan3D builds point clouds out of the data of multiple Kinects connected over LAN. They don't include the skeletal tracking data yet, but they have said they are working on it. They also take care of the calibration for you, so all Kinects work in the same coordinate system.
There is also Microsoft's RoomAliveToolkit, which builds an augmented reality using multiple Kinects and multiple projectors.

Related

Are there any specific sensor nodes in the ns-3 simulator? What types of sensors does ns-3 have?

I want to do a simulation of IoT sensors in ns-3. Does ns-3 have any specific sensors, such as a temperature sensor, or does it just provide generic sensor nodes?
There are no IoT-type sensor classes in ns-3 at the moment, but it appears there was desire for such functionality in 2013.
As an aside, ns-3 is a network simulator. It's not meant to simulate specific devices – rather, it simulates the network traffic of devices. If you can determine the traffic pattern of a device, then you can build an Application that generates this type of traffic, and install that Application on a Node. You may find inspiration for how to write your own Application by looking at the existing Applications. You might also be able to mimic the traffic pattern of IoT devices using an existing Application.
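For example, here is a rough, untested sketch of that last idea, using the stock OnOffApplication to mimic a low-rate sensor that reports in short bursts. The data rates, packet size and on/off times below are arbitrary illustration values, not anything prescribed by ns-3:

    // Sketch: mimic a low-rate IoT "sensor" with ns-3's OnOffApplication.
    // Node 0 plays the sensor, node 1 the sink; values are placeholders.
    #include "ns3/core-module.h"
    #include "ns3/network-module.h"
    #include "ns3/internet-module.h"
    #include "ns3/point-to-point-module.h"
    #include "ns3/applications-module.h"

    using namespace ns3;

    int main (int argc, char *argv[])
    {
      CommandLine cmd;
      cmd.Parse (argc, argv);

      NodeContainer nodes;
      nodes.Create (2);

      // A simple point-to-point link between sensor and sink.
      PointToPointHelper p2p;
      p2p.SetDeviceAttribute ("DataRate", StringValue ("250kbps"));
      p2p.SetChannelAttribute ("Delay", StringValue ("10ms"));
      NetDeviceContainer devices = p2p.Install (nodes);

      InternetStackHelper stack;
      stack.Install (nodes);

      Ipv4AddressHelper address;
      address.SetBase ("10.1.1.0", "255.255.255.0");
      Ipv4InterfaceContainer ifaces = address.Assign (devices);

      // "Sensor" traffic: short bursts of small UDP packets, mostly idle.
      uint16_t port = 9;
      OnOffHelper sensor ("ns3::UdpSocketFactory",
                          InetSocketAddress (ifaces.GetAddress (1), port));
      sensor.SetAttribute ("PacketSize", UintegerValue (64));
      sensor.SetAttribute ("DataRate", StringValue ("8kbps"));
      sensor.SetAttribute ("OnTime", StringValue ("ns3::ConstantRandomVariable[Constant=0.1]"));
      sensor.SetAttribute ("OffTime", StringValue ("ns3::ConstantRandomVariable[Constant=9.9]"));
      ApplicationContainer app = sensor.Install (nodes.Get (0));
      app.Start (Seconds (1.0));
      app.Stop (Seconds (60.0));

      // Sink that receives the sensor reports.
      PacketSinkHelper sink ("ns3::UdpSocketFactory",
                             InetSocketAddress (Ipv4Address::GetAny (), port));
      ApplicationContainer sinkApp = sink.Install (nodes.Get (1));
      sinkApp.Start (Seconds (0.0));
      sinkApp.Stop (Seconds (60.0));

      Simulator::Run ();
      Simulator::Destroy ();
      return 0;
    }

If the built-in traffic models don't match your device, subclassing Application (as in the tutorial's custom application examples) gives you full control over when and what packets are sent.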

LabVIEW 2018 USB Webcam Image Grab

I'm looking to capture an image from my USB webcam in LabVIEW 2018. I've looked at older posts (the one from Lava working with the 'free' portions of the V&M Toolkit, another webcam test that hangs my computer when trying to run, and a few others). What is the best way to do this in the newer LabVIEW versions? All the examples I've seen (none of which run correctly or well) are from the 2011-ish timeframe.
It depends on the task (that is, what you are going to use the camera for), but you could use NI Vision Acquisition Software, which provides a set of functions to access the camera, acquire images and video, and process them (basically, the IMAQ drivers are what you need). Or, if you are going to use your camera for some kind of test application (vision inspection), then you'd better check out Vision Builder for Automated Inspection.
Those are the easiest (but not the cheapest) ways to acquire images from the various cameras using LabVIEW.
UPDATE:
The license scheme for the software can be found here - Licensing National Instruments Vision Software. A description of each software component is also here - Does My Camera Use NI-IMAQ, NI-IMAQdx or NI-IMAQ I/O?. So in order to use a 3rd-party USB camera, one needs to have NI-IMAQdx, which requires a license.

Getting a pointcloud using multiple Kinects

I am working on a project where we are going to use multiple Kinects and merge the pointclouds. I would like to know how to use two Kinects at the same time. Are there any specific drivers or embedded applications?
I used the Microsoft SDK, but it only supports a single Kinect at a time, and for our project we cannot use multiple PCs. Now I have to find a way to overcome the problem. If someone has experience with accessing multiple Kinects through the drivers, please share your views.
I assume you are talking about Kinect v2?
Check out libfreenect2. It's an open source driver for Kinect v2 and it supports multiple Kinects on the same computer. It doesn't provide any of the "advanced" features of the Microsoft SDK like skeleton tracking, but getting the point clouds is no problem.
You also need to make sure your hardware supports multiple Kinects. You'll need (most likely) a separate USB3.0 controller for each Kinect. Of course, those controllers need to be Kinect v2 compatible, meaning they need to be Intel or NEC/Renesas chips. That can easily be achieved by using PCIe USB3.0 expansion cards. But those can't be plugged into PCIe x1 slots.
A single lane doesn't have enough bandwidth. x8 or x16 slots usually work.
See Requirements for multiple Kinects#libfreenect2.
And you also need a strong enough CPU and GPU. Depth processing in libfreenect2 is done on the GPU using OpenGL or OpenCL (CPU is possible as well, but very slow). RGB processing is done on the CPU. It needs quite a bit of processing power to give you the raw data.
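To illustrate, here is a rough, untested sketch along the lines of libfreenect2's Protonect example that opens every enumerated Kinect v2 by serial number and grabs one depth/color frame set from each (error handling kept minimal):

    // Sketch: open all connected Kinect v2 devices with libfreenect2
    // and grab one frame set from each. Based on the Protonect example.
    #include <iostream>
    #include <string>
    #include <vector>

    #include <libfreenect2/libfreenect2.hpp>
    #include <libfreenect2/frame_listener_impl.h>

    int main ()
    {
      libfreenect2::Freenect2 freenect2;

      int num = freenect2.enumerateDevices ();
      if (num == 0)
      {
        std::cerr << "no Kinect v2 connected" << std::endl;
        return 1;
      }

      std::vector<libfreenect2::Freenect2Device*> devices;
      std::vector<libfreenect2::SyncMultiFrameListener*> listeners;

      for (int i = 0; i < num; ++i)
      {
        std::string serial = freenect2.getDeviceSerialNumber (i);
        libfreenect2::Freenect2Device *dev = freenect2.openDevice (serial);

        // One listener per device, for color + depth frames.
        libfreenect2::SyncMultiFrameListener *listener =
          new libfreenect2::SyncMultiFrameListener (libfreenect2::Frame::Color |
                                                    libfreenect2::Frame::Depth);
        dev->setColorFrameListener (listener);
        dev->setIrAndDepthFrameListener (listener);
        dev->start ();

        devices.push_back (dev);
        listeners.push_back (listener);
      }

      // Grab one frame set per device.
      for (size_t i = 0; i < devices.size (); ++i)
      {
        libfreenect2::FrameMap frames;
        listeners[i]->waitForNewFrame (frames);
        libfreenect2::Frame *depth = frames[libfreenect2::Frame::Depth];
        std::cout << "device " << i << ": depth frame "
                  << depth->width << "x" << depth->height << std::endl;
        // depth->data holds float depth values in millimeters; feed it
        // into your point cloud pipeline here.
        listeners[i]->release (frames);
      }

      for (size_t i = 0; i < devices.size (); ++i)
      {
        devices[i]->stop ();
        devices[i]->close ();
      }
      return 0;
    }

libfreenect2 also ships a Registration class (registration.h) that undistorts the depth image and maps depth pixels to 3D points, which is the piece you'd use to turn the raw frames into point clouds before merging them.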

Kinect depth data ONLY

Is there a way in linux (raspbian) to capture only the depth data stream from a kinect? I'm trying to reduce the amount of processing needed to capture Kinect information so I want to ship the data stream to another computer to assemble the data.
Note:
I have freenect installed, but anything that requires OpenGL will not run on Raspbian.
I have installed this example, which captures the data stream with a black-and-white visual depth display.
librekinect is a Linux kernel module that lets you use the depth image like a standard webcam. It's known to work with the Raspberry Pi.
But if you want to use libfreenect for full video/depth/motor support, you'll need a more powerful board like the ODROID XU-3 Lite. By the way, libfreenect only requires OpenGL for some examples; the rest of the project compiles and runs fine without it.
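If you do end up using libfreenect directly, capturing only the depth stream is just a matter of registering a depth callback and never starting the video stream. A rough, untested sketch using the plain C API (the frame count and the network comment are placeholders for your own shipping logic):

    // Sketch: depth-only capture with the libfreenect C API (Kinect v1).
    // No RGB stream and no OpenGL; the callback just counts frames.
    #include <stdio.h>
    #include <stdint.h>
    #include <libfreenect/libfreenect.h>

    static int frames = 0;

    // Called by libfreenect whenever a new depth frame arrives.
    // depth points to 640x480 packed 11-bit depth values.
    static void depth_cb (freenect_device *dev, void *depth, uint32_t timestamp)
    {
      frames++;
      // Here you would copy/compress the buffer and send it over the
      // network to the machine doing the heavy processing.
    }

    int main ()
    {
      freenect_context *ctx;
      freenect_device *dev;

      if (freenect_init (&ctx, NULL) < 0)
        return 1;
      if (freenect_open_device (ctx, &dev, 0) < 0)
        return 1;

      freenect_set_depth_mode (dev,
          freenect_find_depth_mode (FREENECT_RESOLUTION_MEDIUM,
                                    FREENECT_DEPTH_11BIT));
      freenect_set_depth_callback (dev, depth_cb);
      freenect_start_depth (dev);   // the video stream is never started

      // Pump USB events until 100 depth frames have arrived.
      while (frames < 100 && freenect_process_events (ctx) >= 0)
        ;

      freenect_stop_depth (dev);
      freenect_close_device (dev);
      freenect_shutdown (ctx);
      printf ("captured %d depth frames\n", frames);
      return 0;
    }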

Opening Kinect datasets and/or SDK Samples

I am very new to Kinect programming and am tasked to understand several methods for 3D point cloud stitching using Kinect and OpenCV. While waiting for the Kinect sensor to be shipped over, I am trying to run the SDK samples on some data sets.
I am really clueless as to where to start now, so I downloaded some datasets here, and do not understand how I am supposed to view/parse these datasets. I tried running the Kinect SDK Samples (DepthBasic-D2D) in Visual Studio but the only thing that appears is a white screen with a screenshot button.
There seems to be very little documentation with regard to how all these things work, so I would appreciate it if anyone could point me to the right resources on how to obtain and parse depth maps, or how to get the SDK samples to work.
The Point Cloud Library (PCL) is a good starting point for handling point cloud data obtained using a Kinect and the OpenNI driver.
OpenNI is, among other things, open-source software that provides an API to communicate with vision and audio sensor devices (such as the Kinect). Using OpenNI you can access the raw data acquired with your Kinect and use it as input for your PCL software, which can then process the data. In other words, OpenNI is an alternative to the official Kinect SDK, compatible with many more devices, and with great support and tutorials!
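As a starting point, here is a rough sketch along the lines of PCL's OpenNI Grabber tutorial. It assumes PCL was built with OpenNI support and that an OpenNI-compatible Kinect driver is installed; the 10-second capture window is just an illustration:

    // Sketch: grab point clouds from a Kinect through PCL's OpenNI grabber.
    // Each captured frame arrives as a ready-made XYZRGBA cloud.
    #include <iostream>
    #include <boost/function.hpp>
    #include <boost/thread/thread.hpp>
    #include <pcl/io/openni_grabber.h>
    #include <pcl/point_cloud.h>
    #include <pcl/point_types.h>

    // Called once per captured frame.
    void cloud_cb (const pcl::PointCloud<pcl::PointXYZRGBA>::ConstPtr &cloud)
    {
      std::cout << "got cloud with " << cloud->points.size () << " points"
                << std::endl;
      // Feed the cloud into your stitching/registration pipeline here.
    }

    int main ()
    {
      pcl::Grabber *grabber = new pcl::OpenNIGrabber ();

      // Register the callback for XYZRGBA clouds.
      boost::function<void (const pcl::PointCloud<pcl::PointXYZRGBA>::ConstPtr&)> f = &cloud_cb;
      grabber->registerCallback (f);

      grabber->start ();
      // Let the grabber run for a while; a real program would loop or
      // wait on a condition instead of sleeping.
      boost::this_thread::sleep (boost::posix_time::seconds (10));
      grabber->stop ();

      delete grabber;
      return 0;
    }

Note that this needs a live sensor; for the pre-recorded datasets you downloaded, you'll have to check each dataset's own documentation for its file format, since they are not all readable by the SDK samples.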
There are plenty of tutorials out there like this, this and these.
Also, this question is highly related.