I'm using a Kinect v1, Ubuntu 14.04 and ROS Indigo. I would like to grab the Kinect data every second and export it to a YAML file, because I will automatically send it to another PC for some processing.
The YAML file will help me plot the pictures.
Thanks!
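A minimal sketch of one way to do this, assuming the standard openni_launch depth topic (/camera/depth/image_raw is an assumption; adjust to your setup). A rospy.Timer fires once per second and dumps the latest depth message to a YAML file:

# Hypothetical sketch: subscribe to the Kinect depth topic and dump the
# latest frame to YAML once per second (topic name is an assumption).
import rospy
import yaml
from sensor_msgs.msg import Image

latest = {'depth': None}

def depth_cb(msg):
    latest['depth'] = msg  # keep only the most recent frame

def dump_cb(event):
    msg = latest['depth']
    if msg is None:
        return
    with open('kinect_depth.yaml', 'w') as f:
        # raw bytes as ints; fine for a sketch, slow for full frames
        yaml.dump({'stamp': msg.header.stamp.to_sec(),
                   'width': msg.width,
                   'height': msg.height,
                   'encoding': msg.encoding,
                   'data': [ord(b) for b in msg.data]}, f)

rospy.init_node('kinect_yaml_export')
rospy.Subscriber('/camera/depth/image_raw', Image, depth_cb)
rospy.Timer(rospy.Duration(1.0), dump_cb)  # fire once per second
rospy.spin()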
I am attempting to plot fields from a GRIB2 file of GFS model data (example file: https://nomads.ncep.noaa.gov/pub/data/nccf/com/gfs/prod/gfs.20220202/12/atmos/gfs.t12z.pgrb2.0p25.f006). Normally I would just use PyGRIB and I'd have this problem solved yesterday, but I am on Windows (it's what my employer uses, so I'm stuck with it and have to make this work in a Windows environment), and Windows and PyGRIB don't play nice. I am able to open the GRIB2 file and even plot variables over the entire domain using GDAL. The only problem is that I need a way to get an array of the latitude and longitude values at each grid point (similar to calling .latlons() on a GRIB message in PyGRIB) so I can plot a subset of the domain.
Basically, I'm trying to replicate what is being done in this video. I already have the data (I got it using dataset.GetRasterBand(269).ReadAsArray()); now I need the lat/lon information.
I also tried using xarray, but Windows doesn't play nice with xarray either.
Given your comfort with PyGRIB, I'd say the solution is to use Conda and install it on Windows. You can use conda-forge's Miniforge to get Conda. Then, however you obtained Conda, install pygrib with:
conda install -c conda-forge pygrib
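Once pygrib is installed, getting the data and the lat/lon grids is just a couple of calls. A sketch, with one caveat: pygrib's 1-based message numbering may not line up exactly with GDAL's band numbering, so treat 269 as an assumption:

import pygrib

grbs = pygrib.open('gfs.t12z.pgrb2.0p25.f006')
grb = grbs.message(269)     # message number assumed to match the GDAL band
data = grb.values           # 2-D numpy array of the field
lats, lons = grb.latlons()  # 2-D arrays of latitude/longitude per grid point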
I want to use my Kinect v2 as a webcam for tools that require a webcam as an input device, for example Cheese.
I am able to start Protonect from libfreenect2 as described at the bottom of this page: https://github.com/OpenKinect/libfreenect2/blob/master/README.md#linux
Protonect displays 4 streams, which is fine, but Cheese and other tools tell me there is no device found. The same is true for the other tools I tested, like VLC media player.
I have read various guides, but most of what I found seems to apply only to the Kinect v1.
According to the OpenKinect FAQ, the Kinect should be usable as a webcam with a kernel above 3.0. Mine is 4.10, my distribution is Ubuntu 17.04, and I use an AMD GPU with the newest Pro driver.
Thank you in advance for your help.
I installed Lubuntu 14.04 on my ODROID-XU4 and installed ROS Indigo along with openni_camera and openni_launch. They seem to work properly, because I was able to subscribe to some of their published topics. Now I want to display the RGB and depth images from the Kinect using ROS packages. How should I proceed?
Type "roslaunch openni_launch openni.launch" on the command line. You can then use RViz to visualize the RGB and depth images: open it by typing "rviz", click the Add button, and go to the "By topic" tab. There you should find the topics on which the images are published.
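If you would rather display them in code, here is a minimal rospy sketch using cv_bridge (the topic names below are the usual openni_launch defaults; check yours with "rostopic list"):

# Minimal viewer sketch using cv_bridge; topic names are assumptions.
import rospy
import cv2
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

bridge = CvBridge()

def rgb_cb(msg):
    cv2.imshow('RGB', bridge.imgmsg_to_cv2(msg, 'bgr8'))
    cv2.waitKey(1)

def depth_cb(msg):
    # keep the native encoding; depth is not 8-bit
    cv2.imshow('Depth', bridge.imgmsg_to_cv2(msg, desired_encoding='passthrough'))
    cv2.waitKey(1)

rospy.init_node('kinect_viewer')
rospy.Subscriber('/camera/rgb/image_raw', Image, rgb_cb)
rospy.Subscriber('/camera/depth/image_raw', Image, depth_cb)
rospy.spin()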
I am really new to the whole Kinect world.
I have the Xbox One Kinect with the Windows adapter, have successfully installed the latest version of the Kinect SDK, and have recorded some videos with Kinect Studio v2.0, which gave me some .xef files. I cannot use those, since I need .oni files for a certain program.
So I tried to record with OpenNI 2's NiViewer program, but it does not recognize the Kinect. I tested NiViewer with the ASUS Xtion Pro and it did work. I even reinstalled OpenNI 2.2, NiTE 2 and the Kinect SDK, but it still does not work.
Am I doing something wrong?
The Kinect v2 (the Kinect for Xbox One) doesn't work with OpenNI directly. You have a couple of options:
Work with Kinect v1
Work with Kinect SDK only
Work with experimental drivers such as this one. Note that it only works on Windows.
You may try to save the RGB and depth images and create the .oni file by other means (I think this option is not that easy)
Hope this helps you
Could anyone get the camera data from the Kinect using a Raspberry Pi?
We would like to make a wireless Kinect, connecting it over Ethernet or WiFi. Otherwise, let me know if you have a working alternative.
To answer your question: yes, it is possible to get image and depth data on the Raspberry Pi!
Here is how.
If you want to use just video (color, not depth), there is already a driver in the kernel! You can load it like this:
modprobe videodev
modprobe gspca_main
modprobe gspca_kinect
You get a new /dev/videoX and can use it like any other webcam!
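For instance, OpenCV can read it like any other V4L2 webcam. A sketch; adjust the device index to whatever /dev/videoX you got:

import cv2

cap = cv2.VideoCapture(0)  # index matching your new /dev/videoX
while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow('Kinect RGB', frame)
    if cv2.waitKey(1) == 27:  # Esc quits
        break
cap.release()
cv2.destroyAllWindows()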
If you need depth (which is why you want a Kinect) but have a kernel older than 3.17, you need another driver, which can be found here: https://github.com/xxorde/librekinect. If you have 3.17 or newer, the librekinect functionality is enabled by toggling the gspca_kinect module's depth_mode flag:
modprobe gspca_kinect depth_mode=1
Both work well on the current Raspbian.
If you can manage to plug your Kinect camera into the Raspberry Pi, install guvcview first to see if it works.
sudo apt-get install guvcview
Then type guvcview in the terminal and it should open an options panel and the camera control view. If all of that works and you want to get the raw data to do some image processing, you will need to compile OpenCV (it takes about 4 hours to compile); after that, you just need to program whatever you want. To compile it, search online; there are lots of tutorials.
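As a taste of the kind of image processing you can do once OpenCV is built, here is a sketch that grabs one frame from the camera guvcview just verified and runs Canny edge detection on it (the device index and thresholds are assumptions):

import cv2

cap = cv2.VideoCapture(0)  # the same device guvcview displayed
ok, frame = cap.read()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)  # arbitrary starting thresholds
    cv2.imwrite('edges.png', edges)
cap.release()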
Well, as far as I know there are no success stories about getting images from the Kinect on a Raspberry Pi.
On GitHub there is an issue in the libfreenect repository about this problem. In this comment, user zarvox says that the RPi doesn't have enough power to handle the data from the Kinect.
Personally, I tried to connect the Kinect to the RPi using OpenNI2 and Sensor, but without success. And that was not a clever decision, because it's practically impossible to work with the Microsoft Kinect on Linux using OpenNI2 due to licensing restrictions. (Well, actually it is not quite impossible: you can use OpenNI2-FreenectDriver + OpenNI2 on Linux to hook up the Kinect. But this workaround is not suitable for the Raspberry Pi, because OpenNI2-FreenectDriver uses libfreenect.)
But anyway, there are some good tutorials on how to connect the ASUS Xtion Live Pro to the Raspberry Pi: one, two. And on how to connect the Kinect to the more powerful ARM-based CubieBoard2: three.
If you intend to do robotics, the simplest thing is to use the Kinect library on ROS: here.
Otherwise you can try OpenKinect; they provide the libfreenect library, which gives you access to the accelerometers, the image and much more.
OpenKinect on GitHub here
OpenKinect Wiki here
Here is a good example with code and all the details you need to connect to the Kinect and operate the motors using libfreenect.
You will need a powered USB hub to power the Kinect, and you will need to install libusb.
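As an illustration, driving the tilt motor from Python looks roughly like this with the libfreenect wrapper (a sketch modeled on the wrapper's demo_tilt.py; the tilt range is about +/-30 degrees):

import freenect

ctx = freenect.init()               # needs the powered USB hub mentioned above
dev = freenect.open_device(ctx, 0)  # first Kinect on the bus
freenect.set_tilt_degs(dev, 20)     # tilt the head up 20 degrees
freenect.set_led(dev, freenect.LED_GREEN)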
A second possibility is to use the OpenNI library, which provides an SDK to develop middleware libraries that interface with your application; there is even an OpenNI lib for Processing here.
Yes, you can use the Kinect with a Raspberry Pi in a small robotics project.
I have done this with the OpenKinect library.
In my experience, you should keep an eye on your Raspberry Pi and monitor its voltage, since low voltage can cause downtime.
You should also tune your code to use less processing power and run faster,
because if your code has a problem, your image processing will respond more slowly to the objects.
https://github.com/OpenKinect/libfreenect
https://github.com/OpenKinect/libfreenect/blob/master/wrappers/python/demo_cv2_threshold.py
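The linked threshold demo boils down to something like this (a sketch assuming the libfreenect Python wrapper, NumPy and OpenCV are installed):

import freenect
import cv2
import numpy as np

THRESHOLD = 100      # half-width of the depth band to keep (raw units)
CURRENT_DEPTH = 800  # centre of the band

while True:
    depth, _ = freenect.sync_get_depth()  # 11-bit depth as a numpy array
    band = np.logical_and(depth >= CURRENT_DEPTH - THRESHOLD,
                          depth <= CURRENT_DEPTH + THRESHOLD)
    cv2.imshow('Depth threshold', (255 * band).astype(np.uint8))
    if cv2.waitKey(10) == 27:  # Esc quits
        break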