How to display RGB and depth images from Kinect on Odroid-XU4?

I installed Lubuntu 14.04 on my Odroid-XU4, then installed ROS Indigo along with openni_camera and openni_launch. They seemed to work properly, because I was able to subscribe to some of the published topics. Now I want to display the RGB and depth images from the Kinect using ROS packages. How should I proceed?

Type: "roslaunch openni_launch openni.launch" from command line. You can use Rviz to visualize RGB and Depth images. Open rviz from command line, typing: "rviz" then click the add button and go to the tab "By Topic". You should find the topic in which the images are published.

Related

Export Kinect Data RGB

I'm using a Kinect v1, Ubuntu 14.04 and ROS Indigo. I would like to grab Kinect data once per second and export it to a YAML file, because I will send it automatically to another PC for some processing.
The YAML file will help me plot the pictures.
Thanks!
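One possible sketch, not a tested solution: subscribe to the RGB topic, keep the latest frame, and dump it to a YAML file once per second with PyYAML. Dumping raw pixel lists is slow but simple, and the topic name /camera/rgb/image_color is the openni_launch default, so it may differ on your setup.
import rospy
import yaml
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

bridge = CvBridge()
latest = {"img": None}

def cb(msg):
    # keep only the most recent frame
    latest["img"] = bridge.imgmsg_to_cv2(msg, "bgr8")

rospy.init_node("kinect_yaml_export")
rospy.Subscriber("/camera/rgb/image_color", Image, cb)
rate = rospy.Rate(1)  # one export per second
count = 0
while not rospy.is_shutdown():
    if latest["img"] is not None:
        with open("frame_%04d.yaml" % count, "w") as f:
            yaml.safe_dump({"image": latest["img"].tolist()}, f)
        count += 1
    rate.sleep()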

Background Removal from Images

Are there any open source APIs out there that can remove a background from an image automatically as soon as a photo is taken?
To remove the background from an image you need an image processing or computer vision library.
OpenCV offers a lot of functionality for manipulating images and provides libraries for both iOS and Android.
Here is an example of how a background subtraction can be achieved.
There are also other possibilities to achieve this.
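For illustration, here is a minimal sketch of OpenCV's background subtraction (the OpenCV 3+ API, assuming a camera at device index 0). Note that this builds a background model over a video stream rather than cutting out a single photo:
import cv2

cap = cv2.VideoCapture(0)
subtractor = cv2.createBackgroundSubtractorMOG2()

while True:
    ret, frame = cap.read()
    if not ret:
        break
    mask = subtractor.apply(frame)                 # foreground mask, 0 = background
    fg = cv2.bitwise_and(frame, frame, mask=mask)  # keep only foreground pixels
    cv2.imshow("foreground", fg)
    if cv2.waitKey(30) & 0xFF == 27:               # press Esc to quit
        break

cap.release()
cv2.destroyAllWindows()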
For iOS I found an example using Quartz2D (Masking an Image with Color):
https://developer.apple.com/library/ios/documentation/GraphicsImaging/Conceptual/drawingwithquartz2d/dq_images/dq_images.html
and for Android:
https://xjaphx.wordpress.com/2011/09/28/image-processing-pixel-color-replacement/
Remove.bg seems to work really well. The free tier gives you 50 calls per month; you have to pay if you want more.
There is this program, https://github.com/nadermx/backgroundremover, which is free and open source, and you can install it via pip:
pip install --upgrade pip
pip install backgroundremover
Then just run it with:
backgroundremover -i "/path/to/image.jpeg" -o "output.png"
source: author of the project

coastlines don't display using cartopy

I am new to cartopy and tried the examples in the documentation (e.g. http://scitools.org.uk/cartopy/docs/latest/examples/waves.py). The colour plot displays fine; however, the coastlines are missing when I run the example.
I'm using anaconda on windows and tried installing cartopy via Christoph Gohlke's binaries as well as Rich Signell's conda package on binstar (both of which seem to be the same, resulting in version '0.11.x'). My matplotlib version is '1.3.1'.
How can I get the coastlines to display? Is there anything missing in my installation?
Thanks!
This has been fixed and the updated package uploaded to the SciTools binstar channel. The issue was caused by a change in the URL of the files on the Natural Earth website (i.e. they changed under our feet).
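Once the fixed package is installed, a quick way to verify that coastlines draw is a minimal plot like the following (the coastline data is fetched from Natural Earth on first use):
import matplotlib.pyplot as plt
import cartopy.crs as ccrs

ax = plt.axes(projection=ccrs.PlateCarree())
ax.coastlines()  # downloads and draws the Natural Earth coastline data
plt.show()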

How to start beaglebone black without OS?

I got a BeagleBone Black and downloaded StarterWare for the example code. I can build the .bin file, but I have no idea how to make it run on the board. I put the MLO on an SD card, renamed 'gpioLedBlink.bin' to 'app', and put that on the card too. On power-up, the BeagleBone doesn't boot into the original Linux; it only turns on USER LEDs 0, 1 and 2, and no LED blinks. I think the program didn't really run.
How should I solve this problem? And how can I use gdb to debug the program?
Another question: there is no ttyUSB* device when I plug in the USB cable. How can I get the Linux console output when the BeagleBone Black boots into the original Linux? Thanks. =)
Use U-Boot and loady to load your binary over serial (the default load address is 0x80200000). Once it has loaded, run go 0x80200000.
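Roughly, the U-Boot console session looks like this (the prompt may differ on your build; send the binary over Y-modem from your terminal program, e.g. minicom or TeraTerm, when loady is waiting):
U-Boot# loady 0x80200000
(transfer app over Y-modem from the terminal emulator)
U-Boot# go 0x80200000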
GL.

Raspberry Pi with Kinect

Could anyone get the camera data from the Kinect using a Raspberry Pi ?
We would like to make a wireless Kinect, connecting to it over Ethernet or WiFi. Otherwise, let me know if you have a working alternative.
To answer your question: yes, it is possible to get image and depth data on the Raspberry Pi!
Here is how.
If you want to use just video (color, not depth) there is already a driver in the kernel! You can load it like this:
modprobe videodev
modprobe gspca_main
modprobe gspca_kinect
You get a new /dev/videoX and can use it like any other webcam!
If you need depth (which is why you want a kinect), but have a kernel older than 3.17, you need another driver that can be found here: https://github.com/xxorde/librekinect. If you have 3.17 or newer, then librekinect functionality is enabled by toggling the gspca_kinect module's command-line depth_mode flag:
modprobe gspca_kinect depth_mode=1
Both work well on the current Raspbian.
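Since the driver exposes the Kinect as a standard V4L2 webcam, you can grab frames with any webcam tool; for example, a quick OpenCV sketch (assuming it registered as /dev/video0, i.e. device index 0):
import cv2

cap = cv2.VideoCapture(0)  # the Kinect shows up as a normal V4L2 webcam
ret, frame = cap.read()
if ret:
    cv2.imwrite("kinect_frame.png", frame)
cap.release()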
If you can manage to plug your Kinect camera into the Raspberry Pi, install guvcview first to see if it works.
sudo apt-get install guvcview
Then type guvcview in the terminal and it should open an options panel and the camera control view. If all of that works and you want to get the raw data to do some image processing, you will need to compile OpenCV (it takes about 4 hours to compile), and after that you just need to program whatever you want. To compile it, just search on Google; there are lots of tutorials.
Well, as far as I know there are no success stories about getting images from the Kinect on the Raspberry Pi.
On GitHub there is an issue in the libfreenect repository about this problem. In this comment, user zarvox says that the RPi doesn't have enough power to handle the data from the Kinect.
Personally, I tried to connect the Kinect to the RPi using OpenNI2 and Sensor, but had no success. And that was not a clever decision, because it's supposedly impossible to work with the Microsoft Kinect on Linux using OpenNI2 due to licensing restrictions. (Well, actually it is not quite impossible: you can use OpenNI2-FreenectDriver + OpenNI2 on Linux to hook up the Kinect. But this workaround is not suitable for the Raspberry Pi, because OpenNI2-FreenectDriver uses libfreenect.)
But there are some good tutorials about how to connect an ASUS Xtion Live Pro to the Raspberry Pi: one, two. And how to connect a Kinect to the more powerful ARM-based CubieBoard2: three.
If you intend to do robotics, the simplest thing is to use the Kinect library in ROS, here.
Otherwise you can try OpenKinect. They provide the libfreenect library, which gives you access to the accelerometers, the image, and much more.
OpenKinect on GitHub: here.
OpenKinect wiki: here.
Here is a good example with code and all the details you need to connect to the Kinect and operate the motors using libfreenect.
You will need a powered USB hub to power the Kinect, and you will need to install libusb. A short sketch of the library's Python wrapper follows below.
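To give a flavour of libfreenect, its Python wrapper exposes a simple synchronous API; a minimal sketch (assuming the wrapper is installed and the Kinect is on a powered hub):
import freenect

rgb, _ = freenect.sync_get_video()    # one 640x480 RGB frame as a numpy array
depth, _ = freenect.sync_get_depth()  # one 640x480 11-bit depth map
print(rgb.shape, depth.shape)
freenect.sync_stop()                  # shut the streams down cleanly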
A second possibility is to use the OpenNI library, which provides an SDK for developing middleware libraries that interface with your application; there is even an OpenNI library for Processing, here.
Yes, you can use the Kinect with a Raspberry Pi in a small robotics project.
I have done this with the OpenKinect library.
From my experience, you should keep an eye on your Raspberry Pi and monitor its voltage, so that it does not run into low-voltage problems.
You should also write your code carefully so it uses less processing and runs faster,
because if your code has a problem, your image processing will respond more slowly to objects.
https://github.com/OpenKinect/libfreenect
https://github.com/OpenKinect/libfreenect/blob/master/wrappers/python/demo_cv2_threshold.py