Questions about obstacle avoidance with the D435 - camera

Required Info
Camera Model: D435
Firmware Version: 05.12.13.50
Operating System & Version: Linux (Ubuntu 18.04.5)
Kernel Version (Linux Only): 4.9.201
Platform: NVIDIA Jetson Nano B01
SDK Version: 2.41.0
Language: ROS packages
Segment: Robot
Hello, I need to implement obstacle avoidance while using the D435. So far I have looked into two approaches:
1. Use depthimage_to_laserscan to convert the depth information into a laser-scan signal. The problem is that my robot also has a lidar, and both sources currently publish on the scan topic, so they conflict. I don't know how to resolve this (see the sketch after this list).
2. I would like to know whether the two laser-scan signals can be fused, and what configuration is needed to fuse them. Is there any relevant documentation or code?
3. Use PointCloud2 point cloud data. I don't understand how to do this at present: although the point cloud can now be seen on the map, it has no obstacle-avoidance effect. Does this point cloud data need to be passed to AMCL? If so, how should it be passed? I hope someone can help me.
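For reference, the topic clash in point 1 is normally solved by remapping, so the lidar and the camera-derived scan publish under different names. Below is a minimal, hedged C++ relay sketch that republishes the camera scan on a separate topic; the topic names /camera/scan and /camera_scan are assumptions for illustration, not anything fixed by depthimage_to_laserscan.

// Sketch: republish the depth-derived scan under a different name so it does
// not collide with the lidar's /scan topic. Topic names are assumptions.
#include <ros/ros.h>
#include <sensor_msgs/LaserScan.h>

ros::Publisher pub;

void scanCallback(const sensor_msgs::LaserScan::ConstPtr& msg)
{
  // Forward the message unchanged on the relay topic.
  pub.publish(*msg);
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "camera_scan_relay");
  ros::NodeHandle nh;

  pub = nh.advertise<sensor_msgs::LaserScan>("/camera_scan", 10);
  ros::Subscriber sub = nh.subscribe("/camera/scan", 10, scanCallback);

  ros::spin();
  return 0;
}

In practice a remap in the launch file achieves the same effect without an extra node, and the navigation costmap can then take both scan topics as observation sources. For point 3, a point cloud used for obstacle avoidance is normally consumed by the costmap's obstacle/voxel layer rather than by AMCL, which only handles localization.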

Related

Getting a pointcloud using multiple Kinects

I am working on a project where we are going to use multiple Kinects and merge the pointclouds. I would like to know how to use two Kinects at the same time. Are there any specific drivers or embedded applications?
I used the Microsoft SDK, but it only supports a single Kinect at a time, and for our project we cannot use multiple PCs, so I have to find a way around this. If anyone has experience with drivers that can access multiple Kinects, please share your views.
I assume you are talking about Kinect v2?
Check out libfreenect2. It's an open source driver for Kinect v2 and it supports multiple Kinects on the same computer. It doesn't provide any of the "advanced" features of the Microsoft SDK, such as skeleton tracking, but getting the pointclouds is no problem.
You also need to make sure your hardware supports multiple Kinects. You'll need (most likely) a separate USB 3.0 controller for each Kinect. Of course, those controllers need to be Kinect v2 compatible, meaning they need to be Intel or NEC/Renesas chips. That can easily be achieved by using PCIe USB 3.0 expansion cards, but those can't be plugged into PCIe x1 slots: a single lane doesn't have enough bandwidth, so x8 or x16 slots usually work.
See Requirements for multiple Kinects#libfreenect2.
And you also need a strong enough CPU and GPU. Depth processing in libfreenect2 is done on the GPU using OpenGL or OpenCL (CPU is possible as well, but very slow). RGB processing is done on the CPU. It needs quite a bit of processing power to give you the raw data.
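To make the multi-device part concrete, here is a hedged C++ sketch of device enumeration with libfreenect2, loosely modelled on its Protonect example; listener setup and the frame loop are omitted, and error handling is minimal.

// Sketch: enumerate and open every connected Kinect v2 with libfreenect2.
// Assumes the library is installed and each device sits on a suitable
// USB 3.0 controller, as discussed above.
#include <iostream>
#include <string>
#include <vector>
#include <libfreenect2/libfreenect2.hpp>

int main()
{
  libfreenect2::Freenect2 freenect2;

  int num = freenect2.enumerateDevices();
  std::cout << "Found " << num << " device(s)" << std::endl;

  std::vector<libfreenect2::Freenect2Device*> devices;
  for (int i = 0; i < num; ++i)
  {
    std::string serial = freenect2.getDeviceSerialNumber(i);
    libfreenect2::Freenect2Device* dev = freenect2.openDevice(serial);
    if (dev)
      devices.push_back(dev);
  }

  // Each device would get its own frame listener; the color + depth frames
  // can then be registered and merged into point clouds.

  for (auto* dev : devices)
    dev->close();
  return 0;
}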

Postprocess Depth Image to get Skeleton using the Kinect sdk / other tools?

The short question: I am wondering whether the Kinect SDK / NiTE can be exploited to build a piece of software that takes a depth image IN and produces a skeleton OUT.
The long question: I am trying to dump depth, RGB and skeleton data streams captured from a v2 Kinect into rosbags. However, to the best of my knowledge, capturing the skeleton stream of a Kinect v2 on Linux with ROS isn't possible yet. Therefore, I was wondering if I could dump rosbags containing the RGB and depth streams, and then post-process these to get the skeleton stream.
I can capture all three streams on Windows using the Microsoft Kinect v2 SDK, but then dumping them to rosbags, with all the metadata (camera_info, sync info, etc.), would be painful (correct me if I am wrong).
It's been quite some time since I worked with NiTE (and I only used the Kinect v1), so maybe someone else can give a more up-to-date answer, but from what I remember this should easily be possible.
As long as all relevant data is published via ROS topics, it is quite easy to record them with rosbag and play them back afterwards. Every node that can handle live data from the sensor will also be able to do the same work on recorded data coming from a bag file.
One issue you may encounter is that if you record Kinect data, the bag files quickly become very large (several gigabytes). This can be problematic if you want to edit the file afterwards on a machine with very little RAM. If you only want to play the file, or if you have enough RAM, this should not really be a problem, though.
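As a small, hedged illustration of working with recorded data, the C++ sketch below iterates over depth images stored in a bag using the rosbag API; the file name and topic are placeholders.

// Sketch: read recorded depth images back out of a bag file.
// "kinect.bag" and "/camera/depth/image_raw" are assumed names.
#include <iostream>
#include <rosbag/bag.h>
#include <rosbag/view.h>
#include <sensor_msgs/Image.h>

int main()
{
  rosbag::Bag bag;
  bag.open("kinect.bag", rosbag::bagmode::Read);

  rosbag::View view(bag, rosbag::TopicQuery("/camera/depth/image_raw"));
  for (const rosbag::MessageInstance& m : view)
  {
    sensor_msgs::Image::ConstPtr img = m.instantiate<sensor_msgs::Image>();
    if (img)
      std::cout << img->width << "x" << img->height << " depth frame" << std::endl;
  }

  bag.close();
  return 0;
}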
Indeed, it is possible to perform NiTE2 skeleton tracking on any depth image stream.
Refer to:
https://github.com/VIML/VirtualDeviceForOpenNI2/wiki/How-to-use
and
https://github.com/VIML/VirtualDeviceForOpenNI2/wiki/About-PrimeSense-NiTE
With this extension you can add a virtual device that allows you to manipulate each pixel of the depth stream. This device can then be used to create a UserTracker object. As long as the right device name is provided, skeleton tracking can be done:
\OpenNI2\VirtualDevice\Kinect
But consider the usage limits:
NiTE is only allowed to be used with "Authorized Hardware".
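A hedged C++ sketch of that approach, assuming OpenNI2 and NiTE2 are installed and the virtual-device module is registered; only the device URI comes from the links above, and feeding your own depth pixels into the virtual device is not shown.

// Sketch: create a NiTE2 UserTracker on the (virtual) OpenNI2 device named above.
#include <cstdio>
#include <OpenNI.h>
#include <NiTE.h>

int main()
{
  openni::OpenNI::initialize();
  nite::NiTE::initialize();

  openni::Device device;
  if (device.open("\\OpenNI2\\VirtualDevice\\Kinect") != openni::STATUS_OK)
  {
    std::printf("Could not open the virtual device\n");
    return 1;
  }

  nite::UserTracker userTracker;
  if (userTracker.create(&device) != nite::STATUS_OK)
  {
    std::printf("Could not create the UserTracker\n");
    return 1;
  }

  // Read one tracker frame and report how many users were detected.
  nite::UserTrackerFrameRef frame;
  if (userTracker.readFrame(&frame) == nite::STATUS_OK)
    std::printf("Users in frame: %d\n", frame.getUsers().getSize());

  nite::NiTE::shutdown();
  openni::OpenNI::shutdown();
  return 0;
}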

Details on USB - no luck so far

I've been looking for a detailed description of how the USB protocol and cabling work for a long time, with no luck. I am looking for a detailed yet not overcomplicated explanation of how things work on the software and hardware sides of USB. Links and explanations would be appreciated. I've really run out of ideas, so it would be great if you could help me out.
This is what I do know:
USB hardware carries 4 lines: 5 V power, ground, and 2 data lines (a differential pair).
When connecting, the device can ask for a specified amount of current.
The transfer speeds for USB are quite fast compared to traditional serial connections.
When connecting, a device will output descriptors to the host describing itself. These descriptors will also be used for data.
What I don't know:
How does a program in C/C++ write directly to a USB port? Does it write to an address in the port?
How do some devices describe themselves as HID?
How do drivers work?
Everything else...
Thank you!
Identification
Every device has a (unique) Vendor and Product ID. These are provided (sold) by usb.org to identify a device. You can use a library like libusbx to enumerate all connected devices and select the one with the Vendor and Product ID you are looking for.
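For example, here is a hedged C++ sketch using the libusb-1.0/libusbx API (the two share the same interface); the 0x1234/0x5678 IDs are placeholders for your device's actual Vendor and Product ID.

// Sketch: open a device by its Vendor/Product ID with libusb-1.0.
// 0x1234 / 0x5678 are placeholder IDs -- substitute your device's values.
#include <cstdio>
#include <libusb-1.0/libusb.h>

int main()
{
  if (libusb_init(nullptr) != 0)
  {
    std::fprintf(stderr, "libusb_init failed\n");
    return 1;
  }

  // Convenience call that scans the bus and opens the first match.
  libusb_device_handle* handle =
      libusb_open_device_with_vid_pid(nullptr, 0x1234, 0x5678);
  if (!handle)
  {
    std::fprintf(stderr, "Device not found (or insufficient permissions)\n");
    libusb_exit(nullptr);
    return 1;
  }

  std::printf("Device opened\n");
  libusb_close(handle);
  libusb_exit(nullptr);
  return 0;
}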
HID Descriptors
The point of HID descriptors is actually to do away with drivers. HID descriptors are a universal way of describing your device so you don't need to waste time on a driver for every system/architecture/etc. (Same concept as the JVM.)
Reports
You will use input, output, or feature reports to read from or write to your device. You send data to your device with an output or feature report, and read from it with an input report. A report is typically 8 bytes, I believe, only one of which is the single character you wish to write. The HID descriptor contains all the information you need to put together a report, although I'm struggling to find a link that clarifies this.
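To make the report mechanics concrete, here is a hedged C++ fragment that sends an output report as a HID SET_REPORT control transfer with the same libusb API; the 8-byte report size, report ID 0 and interface number 0 are assumptions that depend on your device's HID descriptor.

// Sketch: send a HID output report via a SET_REPORT control transfer.
// Assumes an already-opened libusb_device_handle whose interface 0 has been
// claimed (libusb_claim_interface, possibly after detaching the kernel driver),
// and an 8-byte report with report ID 0.
#include <cstdint>
#include <libusb-1.0/libusb.h>

int send_output_report(libusb_device_handle* handle, uint8_t ch)
{
  uint8_t report[8] = {0};
  report[0] = ch;  // single-character payload, rest padded with zeros

  const uint8_t  bmRequestType = 0x21;                // host-to-device | class | interface
  const uint8_t  bRequest      = 0x09;                // HID SET_REPORT
  const uint16_t wValue        = (0x02 << 8) | 0x00;  // report type Output (2), report ID 0
  const uint16_t wIndex        = 0;                   // interface number (assumed 0)

  return libusb_control_transfer(handle, bmRequestType, bRequest,
                                 wValue, wIndex, report, sizeof(report),
                                 1000 /* ms timeout */);
}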
Potential Libraries
In an effort to be open-minded here are all the libraries I am familiar with and some info about them.
libusb-0.1
First off is libusb-0.1. This used to be the go-to library and was built into many Linux kernels and Windows, I believe. It is very easy to use and there is a lot of documentation. However, the maintainer stopped updating it and it went untouched for many years. It supports only synchronous transfers. (If an error occurs, the program can block indefinitely while it waits for a transfer.)
libusbx
Next is libusbx. This is what most people would suggest today, and I agree. It was published by those frustrated with the maintainer of libusb-0.1. The code is much more lightweight and up to date, and importantly it does not require root privileges like libusb-0.1 and libusb-1.0 (discussed in a moment). It supports synchronous or asynchronous transfers.
libusb-1.0
Then there is libusb-1.0. This was the first update to libusb-0.1 in many years. It is not compatible with libusb-0.1. It was published the same day as libusbx, as retaliation (I assume) and an attempt to rectify the lack of updates and retain a user base. It supports synchronous or asynchronous transfers.
hid.h
Finally, there is the hid library. This was built on top of libusb as another layer of abstraction. But honestly, I find it really confusing, and it adds more overhead than necessary.
Some Good Resources
Understanding HID Descriptors
Control Message Transfer Documentation (Very Good Link IMO)
Rolling Your Own HID Descriptor
Good Visual of HID Reports for Transfers
Great List of bmRequestType constants (You will need this or similar)
A simple terminal app for speaking with DigiSpark using libusbx and libusb-0.1
I know this isn't exactly what you are looking for, but maybe it will get you started!
This website has a general overview of how USB devices work:
https://www.beyondlogic.org/usbnutshell/usb1.shtml
Particular sections answer the items on your list of things you don't know yet about USB.
E.g. to find out how USB devices identify themselves, read about USB descriptors:
https://www.beyondlogic.org/usbnutshell/usb5.shtml#DeviceDescriptors
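As a quick hedged taste of those descriptors from user space (again with libusb-1.0), this C++ snippet lists the Vendor/Product ID of every attached device:

// Sketch: print the vendor:product ID of every USB device on the system.
#include <cstdio>
#include <libusb-1.0/libusb.h>

int main()
{
  libusb_init(nullptr);

  libusb_device** list = nullptr;
  ssize_t count = libusb_get_device_list(nullptr, &list);

  for (ssize_t i = 0; i < count; ++i)
  {
    libusb_device_descriptor desc;
    if (libusb_get_device_descriptor(list[i], &desc) == 0)
      std::printf("%04x:%04x\n", desc.idVendor, desc.idProduct);
  }

  libusb_free_device_list(list, 1);  // unref the devices we listed
  libusb_exit(nullptr);
  return 0;
}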
To learn how a C/C++ program can talk to a USB device, see examples on using the libusb library:
https://github.com/libusb/libusb/tree/master/examples
To learn how USB drivers work, see a tutorial from Bootlin:
https://bootlin.com/blog/usb-slides/

Kinect hangs up suddenly after working pretty well for a few seconds. How can I fix it?

I tried using "Kinect for Windows" on my Mac. The environment set-up seems to have gone well, but something seems to be wrong. When I start some samples such as
OpenNI-Bin-Dev-MacOSX-v1.5.4.0/Samples/Bin/x64-Release/Sample-NiSimpleViewer
or others, the sample application starts and seems to work quite well at the beginning, but after a few seconds (10 to 20 seconds), the motion shown on the application's screen halts and never resumes. It seems that the application becomes unable to fetch data from the Kinect after a certain number of seconds have passed.
I don't know whether it is the libraries, their dependencies, or the Kinect hardware itself that is going wrong (as for the hardware, invisibly broken or something), and I really want to know how to work out which it is.
Could anybody tell me how I can fix this issue, please?
My environment is shown below:
Mac OS X v10.7.4 (MacBook Air, core i5 1.6Ghz, 4GB of memory)
Xcode 4.4.1
Kinect for Windows
OpenNI-Bin-Dev-MacOSX-v1.5.4.0
Sensor-Bin-MacOSX-v5.1.2.1
I followed the instructions here about libusb: http://openkinect.org/wiki/Getting_Started#Homebrew
and when I try using libfreenect (I know it's separate from OpenNI + SensorKinect), its sample applications say "Number of devices found: 0", which makes no sense to me since I have certainly connected my Kinect to the MacBook Air.
Unless you're booting into Windows, forget about Kinect for Windows.
Regarding libfreenect and OpenNI: in most cases you'll use one or the other, so think about which functionality you need.
If it's basic RGB + depth image (and possibly motor and accelerometer) access, libfreenect is your choice.
If you need RGB + depth images plus skeleton tracking and (hand) gestures (but no motor or accelerometer access), use OpenNI. Note that if you use the unstable (dev) versions, you should use Avin's SensorKinect driver.
The easiest thing to do is a nice clean install of OpenNI.
Also, if it helps, you can use a creative coding framework like Processing or OpenFrameworks.
For Processing I recommend SimpleOpenNI.
For OpenFrameworks you can use ofxKinect, which ties into libfreenect, or ofxOpenNI. Download the OpenFrameworks package on the FutureTheatre Kinect Workshop wiki, as it includes both addons and some really nice examples.
When you connect the Kinect device to the machine, have you provided external power to it? The device will appear connected to the computer on USB power alone, but it will not be able to transfer data, as it needs the external power supply.
Also, which Kinect sensor are you using? If it is a newer Kinect device (designed for Windows), it may have a different device signature, which may cause the OpenNI drivers to play up. I'm not 100% sure on this one, but I've only ever tried OpenNI with an Xbox 360 sensor.

How do you hack/decompile Camera firmware? (w/ decompiling tangent)

I wanted to know what steps one would need to take to "hack" a camera's firmware to add/change features, specifically cameras of Canon or Olympus make.
I can understand this is an involved topic, but a general outline of the steps and what issues I should keep an eye out for would be appreciated.
I presume the first step is to take the firmware, load it into a decompiler (any recommendations?) and examine the contents. I admit I've never decompiled code before, so this will be a good challenge to get me started. Any advice? Books? Tutorials? What should I expect?
Thanks stack as always!
Note: I know about Magic Lantern and CHDK; I want to get technical advice on how they were started and came to be.
http://magiclantern.wikia.com/wiki/Decompiling
http://magiclantern.wikia.com/wiki/Struct_Guessing
http://magiclantern.wikia.com/wiki/Firmware_file
http://magiclantern.wikia.com/wiki/GUI_Events/550D
http://magiclantern.wikia.com/wiki/Register_Map/Brute_Force
I wanted to know what steps one would need to take to "hack" a camera's firmware to add/change features, specifically cameras of Canon or Olympus make.
General steps for this hacking/reverse engineering:
Gathering information about the camera system (main CPU, image coprocessor, RAM/flash chips, ...). Challenges: camera system makers tend to hide such sensitive information, and datasheets/documentation for proprietary chips are not released to the public at all.
Getting the firmware: either by dumping the flash memory inside the camera or by extracting the firmware from the update packages used for camera firmware updates. Challenges: accessing the readout circuitry of the flash is not a trivial job, especially given that camera systems have some of the most densely populated PCBs. Also, proprietary firmware is highly protected with sophisticated encryption algorithms when embedded into update packages.
Disassembly: getting somewhat more readable instructions out of the firmware's opcodes. Challenges: although disassemblers are widely available, they will only give you the "operational" equivalent assembly code of the opcodes, with no guarantee that it is human-readable or meaningful.
Customization: once you understand most of the code's functionality, you can make modifications, which must not harm the normal operation of the camera system. Challenges: not an easy task.
Alternatively, I highly recommend looking at an existing open-source camera project (software and also hardware). You can learn a lot about camera systems from one.
Examples of such projects are Elphel and AXIOM.