Acquire raw data using OpenNI - Kinect

I'm using the OpenNI library to work with Kinect devices, and my question is the following:
Is there any way to acquire the 11-bit raw depth values (0-2047) instead of the real measured distances in mm?
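For reference, with the OpenNI 2 API the depth stream's pixel format can be requested as one of the raw "shift" formats instead of millimetres before the stream is started; whether the Kinect driver actually honours this depends on the driver, so the snippet below is only a sketch (check every return status in real code):

    #include <OpenNI.h>

    // Sketch: ask OpenNI 2 for raw shift values instead of depth in millimetres.
    // PIXEL_FORMAT_SHIFT_9_2 support depends on the device driver; check all
    // return codes in real code.
    int main()
    {
        openni::OpenNI::initialize();

        openni::Device device;
        device.open(openni::ANY_DEVICE);

        openni::VideoStream depth;
        depth.create(device, openni::SENSOR_DEPTH);

        openni::VideoMode mode = depth.getVideoMode();
        mode.setPixelFormat(openni::PIXEL_FORMAT_SHIFT_9_2);  // raw shift values
        depth.setVideoMode(mode);
        depth.start();

        openni::VideoFrameRef frame;
        depth.readFrame(&frame);
        const openni::DepthPixel* raw =
            static_cast<const openni::DepthPixel*>(frame.getData());
        // raw[i] now holds the sensor's shift values rather than millimetre depths.

        depth.stop();
        depth.destroy();
        device.close();
        openni::OpenNI::shutdown();
        return 0;
    }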

Related

Obtaining a fast ADC sample rate in embedded Linux with an external ADC

I've been given the task of getting ADC samples onto an embedded Linux computer at the highest rate I can (up to about 300 kSPS). I am playing with several different platforms (ODROID, Edison), but early on I realized the limitations of using the built-in ADCs from within Linux with respect to timing (I am relatively new to this).
Right now I am reliably getting 150 kSPS using a Teensy 3.2 with a very basic swapping buffer, a PDB, and the USB connection. USB writes take 2.5 µs no matter my buffer size, so any faster and the ADC read interrupt collides with the USB and I get nothing.
My question is: would using an external ADC chip enable faster speeds? I see chips on DigiKey and Mouser advertising 600 kSPS and higher with SPI and even parallel outputs... but I feel like the bottleneck is the Teensy with its USB writes. Even if it could (and I am sure it could) read values 600k times a second, how do you get them onto the computer without falling behind?
Also, this is for long-term collection, so I can't just store everything and write it out once the collection is over. The Edison has a built-in microcontroller, but no SPI implemented yet.
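For context, the "swapping buffer" mentioned above is a standard ping-pong scheme: the ADC interrupt fills one buffer while the main loop ships the other over USB. A rough Teensy-style sketch of that idea (readAdcSample() is a stand-in for the real ADC result-register read, not an actual register access):

    #include <Arduino.h>

    // Ping-pong (swapping) buffer sketch: the ISR fills one buffer while the
    // main loop sends the other over USB in one large write.
    const size_t BUF_LEN = 256;
    volatile uint16_t bufA[BUF_LEN], bufB[BUF_LEN];
    volatile uint16_t* fillBuf = bufA;   // buffer the ISR is currently filling
    volatile uint16_t* sendBuf = bufB;   // buffer the main loop may send
    volatile size_t fillIdx = 0;
    volatile bool bufReady = false;

    // Stand-in for the real ADC read; an actual ISR would read the result register.
    uint16_t readAdcSample() { return analogRead(A0); }

    // Called from the ADC/PDB interrupt at the sample rate.
    void adcIsr()
    {
        fillBuf[fillIdx++] = readAdcSample();
        if (fillIdx >= BUF_LEN) {        // buffer full: swap and flag for sending
            volatile uint16_t* tmp = fillBuf;
            fillBuf = sendBuf;
            sendBuf = tmp;
            fillIdx = 0;
            bufReady = true;             // if this was still set, the USB side fell behind
        }
    }

    void setup() { Serial.begin(115200); }  // USB serial; baud is ignored on Teensy

    void loop()
    {
        if (bufReady) {
            bufReady = false;
            // One USB write per buffer amortises the ~2.5 us per-call overhead.
            Serial.write(reinterpret_cast<const uint8_t*>(const_cast<uint16_t*>(sendBuf)),
                         BUF_LEN * sizeof(uint16_t));
        }
    }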
Edit:
To clarify, my question is whether there is any way to get large amounts of data very fast into my embedded Linux device programmatically, or whether there is some layer between a fast SPI device and the computer that I don't know about. So far my mentors have suggested I 1) learn to write a device driver for the SPI device or 2) recompile an image with RT_PREEMPT.

Laptop requirements with Kinect Xbox1

I am using the Kinect Xbox1 for Windows camera to compute skeleton data and RGB data. I am retrieving 30 frames per second, calculating the joint positions of the human body, and then calculating the angles between the joints. I want my laptop/system to compute the joint and angle values faster and store them in a directory. But the laptop I am currently using computes the joint values and angles very slowly.
The specifications of my laptop are:
500 GB hard drive
600 GB RAM
1.7 GHz processor
Kindly tell me which system I should use to do these calculations faster. I really want a fast system/laptop for these computations. If anyone has an idea, please tell me, and also give the complete specifications of such a system. I want to use the latest, fastest technology or any machine that resolves my issue.
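As an aside, the per-frame angle computation described in the question is just vector arithmetic on three joint positions and is unlikely to be the bottleneck by itself; a minimal sketch (joint positions are assumed to already be available as 3-D points from the skeleton stream):

    #include <cmath>

    // Sketch: angle (in degrees) at joint B formed by joints A-B-C, given their
    // 3-D positions, e.g. taken from the Kinect skeleton stream.
    struct Vec3 { float x, y, z; };

    float jointAngleDegrees(const Vec3& a, const Vec3& b, const Vec3& c)
    {
        const Vec3 u{a.x - b.x, a.y - b.y, a.z - b.z};   // bone B -> A
        const Vec3 v{c.x - b.x, c.y - b.y, c.z - b.z};   // bone B -> C
        const float dot  = u.x * v.x + u.y * v.y + u.z * v.z;
        const float lenU = std::sqrt(u.x * u.x + u.y * u.y + u.z * u.z);
        const float lenV = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
        float cosA = dot / (lenU * lenV);
        cosA = std::fmax(-1.0f, std::fmin(1.0f, cosA));  // clamp before acos
        return std::acos(cosA) * 180.0f / 3.14159265f;
    }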
Your computer must have the following minimum capabilities:
32-bit (x86) or 64-bit (x64) processors
Dual-core, 2.66-GHz or faster processor
USB 2.0 bus dedicated to the Kinect
2 GB of RAM
Graphics card that supports DirectX 9.0c
Source: MSDN
Anyway, I suggest:
A desktop PC
with a 3 GHz (more is usually better) multi-core processor
with a GPU compatible with DirectX 11 and C++ AMP

Postprocess Depth Image to get Skeleton using the Kinect SDK / other tools?

The short question: I am wondering whether the Kinect SDK / NiTE can be exploited as a depth-image-in, skeleton-out piece of software.
The long question: I am trying to dump the depth, RGB, and skeleton data streams captured from a v2 Kinect into rosbags. However, to the best of my knowledge, capturing the skeleton stream on Linux with ROS and a Kinect v2 isn't possible yet. Therefore, I was wondering if I could dump rosbags containing the RGB and depth streams, and then post-process these to get the skeleton stream.
I can capture all three streams on Windows using the Microsoft Kinect v2 SDK, but then dumping them to rosbags, with all the metadata (camera_info, sync info, etc.), would be painful (correct me if I am wrong).
It's been quite some time since I worked with NiTE (and I only used the Kinect v1), so maybe someone else can give a more up-to-date answer, but from what I remember, this should easily be possible.
As long as all relevant data is published via ROS topics, it is quite easy to record them with rosbag and play them back afterwards. Every node that can handle live data from the sensor will also be able to do the same work on recorded data coming from a bag file.
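For instance, recording can be done on the command line with rosbag record, or programmatically through the C++ rosbag API; the topic name below is only an example of what a Kinect driver might publish:

    #include <rosbag/bag.h>
    #include <sensor_msgs/Image.h>

    // Sketch: write received depth image messages into a bag file for later playback.
    // "/camera/depth/image_raw" is just an example topic name.
    void writeDepthFrame(rosbag::Bag& bag, const sensor_msgs::Image::ConstPtr& depth_msg)
    {
        bag.write("/camera/depth/image_raw", depth_msg->header.stamp, depth_msg);
    }

    int main()
    {
        rosbag::Bag bag;
        bag.open("kinect_session.bag", rosbag::bagmode::Write);
        // ... subscribe to the depth/RGB topics and call writeDepthFrame() per message ...
        bag.close();
        return 0;
    }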
One issue you may encounter is that if you record Kinect data, the bag files quickly become very large (several gigabytes). This can be problematic if you want to edit the file afterwards on a machine with very little RAM. If you only want to play the file back, or if you have enough RAM, this should not really be a problem, though.
Indeed, it is possible to perform NiTE2 skeleton tracking on any depth image stream.
Refer to:
https://github.com/VIML/VirtualDeviceForOpenNI2/wiki/How-to-use
and
https://github.com/VIML/VirtualDeviceForOpenNI2/wiki/About-PrimeSense-NiTE
With this extension, one can add a virtual device that allows you to manipulate each pixel of the depth stream. This device can then be used to create a UserTracker object. Skeleton tracking works as long as the right device name is provided:
\OpenNI2\VirtualDevice\Kinect
but consider the usage limits:
NiTE is only allowed to be used with "Authorized Hardware".
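Putting the pieces together, the flow looks roughly like the sketch below (error handling omitted; the device URI is the one from the wiki pages above, and how the recorded depth frames are pushed into the virtual device is left out):

    #include <OpenNI.h>
    #include <NiTE.h>

    // Sketch: run NiTE2 user/skeleton tracking on the OpenNI2 virtual device.
    int main()
    {
        openni::OpenNI::initialize();
        nite::NiTE::initialize();

        openni::Device device;
        device.open("\\OpenNI2\\VirtualDevice\\Kinect");   // virtual device URI

        // ... feed your own depth frames into the virtual device's depth stream here ...

        nite::UserTracker tracker;
        tracker.create(&device);                           // track users on this device

        nite::UserTrackerFrameRef frame;
        while (tracker.readFrame(&frame) == nite::STATUS_OK) {
            const nite::Array<nite::UserData>& users = frame.getUsers();
            for (int i = 0; i < users.getSize(); ++i) {
                if (users[i].isNew())
                    tracker.startSkeletonTracking(users[i].getId());
                // users[i].getSkeleton() exposes the joints once tracking has started
            }
        }

        nite::NiTE::shutdown();
        openni::OpenNI::shutdown();
        return 0;
    }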

Record raw data in LabVIEW

I have a VI in LabVIEW that streams video from a webcam (Logitech C300) and processes the colored layers of each image as arrays. I am trying to get raw Bayer data from the webcam using Logitech's program (http://web.archive.org/web/20100830135714/http://www.quickcamteam.net/documentation/how-to/how-to-enable-raw-streaming-on-logitech-webcams) and the Vision Acquisition tool, but I only get as much data as with regular capture, instead of four times more.
Basically, I get 1280x1024 24-bit pixels where I want 1280x1024 32-bit or 2560x2048 8-bit pixels.
Has anyone had any experience with this and knows a way for LabVIEW to process the camera's raw output, or how to actually record a raw file from the camera?
Thank you!
The driver flag you've enabled simply packs the raw pixel value (8/10 bpp) into the least significant bits of the 24-bit values. Assuming that the 8 bpp mode is used, the raw values can be extracted from the blue color plane, as in the following example. The result can then be debayered to obtain RGB values, for example.
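To illustrate the idea in code (the packed B,G,R byte order is an assumption; in LabVIEW the same thing is done by extracting the blue colour plane from the image):

    #include <cstdint>
    #include <vector>

    // Sketch: pull the raw 8-bit sensor values out of a 24-bit image in which the
    // driver has packed them into the least significant byte of each pixel.
    // Assumes packed B,G,R byte order; adjust the offset if the layout differs.
    std::vector<uint8_t> extractRawPlane(const uint8_t* bgr, int width, int height)
    {
        std::vector<uint8_t> raw(static_cast<size_t>(width) * height);
        for (size_t i = 0; i < raw.size(); ++i)
            raw[i] = bgr[i * 3];   // blue byte = least significant 8 bits = raw value
        return raw;
    }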
Unless you can improve on the debayering algorithms in the firmware, or have very specific needs, this is not very useful. Normally, one could at least reduce the amount of data transferred by enabling raw mode - which is not the case here.
The above assumes that the raw video mode isn't being overridden by the LabVIEW IMAQdx driver. If that is the case, you might be able to enable raw mode from LabVIEW through property nodes. This requires manually configuring the acquisition, as the configurability of Express VIs is limited. Use the EnumStrings property to get all possible attributes, and then see if there is something like the one specified outside of the diagram disable structure (this is from a different camera).

Recording compressed Kinect data

I'm working with a new Kinect v2 sensor and using Kinect Studio to record the Kinect stream data during some experiments. The problem is that our experiments are expected to last ~10 minutes, which, including the uncompressed video, would be equivalent to ~80 GB. In addition, the buffer fills up quite fast at around 2 minutes in, and the remainder of the data ends up stuttering at around 2 fps instead of a smooth 25 fps.
Is there any way I can record all the data I need in compressed form? Would it be easy to create an app similar to Kinect Studio that just writes out a video file and an .xef file containing all the other sensor data?
Kinect Studio does have APIs that can be used to programmatically record particular data streams into an XEF file. Additionally, it's possible to have multiple applications using the sensor simultaneously, so in theory you should be able to have three applications collecting data from the sensor (you could combine these into one application as well):
Your application;
An application using the Kinect Studio APIs, or Kinect Studio itself, to record the non-RGB streams;
Another application that collects the RGB data stream, performs compression, and then saves the data (sketched below).
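For the third application, a rough sketch of grabbing colour frames with the native Kinect v2 C++ API is shown below; the compressAndStore call is a placeholder for whatever encoder you plug in, and all error handling and cleanup is omitted:

    #include <Kinect.h>   // Kinect for Windows SDK 2.0; link against Kinect20.lib
    #include <vector>

    // Placeholder: plug in your own encoder / file writer here.
    void compressAndStore(const std::vector<BYTE>&, int, int) {}

    int main()
    {
        IKinectSensor* sensor = nullptr;
        GetDefaultKinectSensor(&sensor);
        sensor->Open();

        IColorFrameSource* source = nullptr;
        sensor->get_ColorFrameSource(&source);

        IColorFrameReader* reader = nullptr;
        source->OpenReader(&reader);

        const int width = 1920, height = 1080;          // Kinect v2 colour resolution
        std::vector<BYTE> bgra(width * height * 4);

        while (true) {
            IColorFrame* frame = nullptr;
            if (SUCCEEDED(reader->AcquireLatestFrame(&frame)) && frame) {
                frame->CopyConvertedFrameDataToArray(
                    static_cast<UINT>(bgra.size()), bgra.data(), ColorImageFormat_Bgra);
                compressAndStore(bgra, width, height);  // encode + write to disk
                frame->Release();
            }
        }
    }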
However, the latency and buffer issue is likely to be a problem here. Kinect Studio data collection is extremely resource-intensive, and it may not be possible to do real-time video compression while maintaining 25 fps. Depending on the network infrastructure available, you might be able to offload the RGB data to another machine to compress and store, but this would need to be well tested. This is likely to be a lot of work.
I'd suggest that you first see whether switching to another high-spec machine, with a fast SSD drive and good CPU and GPU, makes the buffering issue go away. If this is the case you could then record using Kinect Studio and then post-process the XEF files after the sessions to compress the videos (using the Kinect Studio APIs to open the XEF files).