I bought a new Raspberry Pi 2 to take advantage of the faster speed. A program using evdev that I ran on my old Raspberry Pi also runs on the new one. But the steering wheel I access through evdev is recognized on my old Raspberry Pi and not on my new one.
Both Raspberry Pis run Raspbian. Is there anything I need to install on my new Raspberry Pi to get it to recognize the steering wheel?
I can't remember installing anything like that on my old Raspberry Pi, but the steering wheel works fine with it.
UPDATE: I tried installing the newest version of Raspbian on my old Raspberry Pi, and the steering wheel didn't work with that either. So the problem isn't the new Raspberry Pi; it's the new version of Raspbian. It must be missing a driver or something, but I don't know how to fix it.
UPDATE2: I found that in the new version of Raspbian the steering wheel is recognized during booting as a USB device, but no /dev/input/event* or /dev/input/js* device is created for it.
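For reference, this is the quick check I run with python-evdev to see which input devices exist (a minimal sketch; the wheel's name will vary):

import evdev

# List every input device evdev can see; on the new Raspbian
# the steering wheel never shows up in this list.
for path in evdev.list_devices():
    device = evdev.InputDevice(path)
    print(path, device.name)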
We are trying to get the Intel Realsense D435i to work on our Raspberry Pi with the Raspbian OS and ROS Melodic.
After we configured our Raspberry Pi with Raspbian and installed ROS Melodic on it, we installed the realsense-ros package. When we connect our Realsense camera to the Raspberry Pi and run the following command:
$ roslaunch realsense2_camera rs_camera.launch
We get the following error:
error: Time is out of Dual 32-bit range
We get some ROS topics from the camera, but they don't publish any data, and not all of the topics appear. When we plug the camera into a PC, it works fine.
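For reference, we check whether a topic is actually publishing with a minimal subscriber like the one below (the topic name /camera/color/image_raw is just an example; substitute one from rostopic list):

# Minimal check: subscribe to one camera topic and log
# whether any message ever arrives.
import rospy
from sensor_msgs.msg import Image

def on_image(msg):
    rospy.loginfo("got a frame: %dx%d", msg.width, msg.height)

rospy.init_node("realsense_check")
rospy.Subscriber("/camera/color/image_raw", Image, on_image)
rospy.spin()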
We already googled it and tried the following:
full clean and rebuild of catkin workspace
update and upgrade of all ros packages
delete ros_comm package
But none of the above worked for us.
Is there anybody who has any ideas?
Thanks in advance!
Does the Google Assistant SDK also work on a "Pi 2 Model B" (ARM Cortex-A7, a 32-bit processor), or is a "Pi 3 Model B" (ARM Cortex-A53, a 64-bit processor) essential to get the SDK up and running?
It's working properly on a Raspberry Pi 2. I'm using it with my Logitech c920's mic.
I am running the Voice Kit that was in The MagPi magazine on a Raspberry Pi Zero W, so it should run fine on almost all of the Pis: https://aiyprojects.withgoogle.com/voice/
I created the SD card following this guide: Archlinux Raspberry-pi-2 installation.
Previously everything worked fine. On 2016-01-11 I updated the Archlinux OS with the command "pacman -Syu", and it started to fail and throw errors.
I think it is because of the SD card model, and that the card is corrupted (see the note below).
So I tried to reinstall Archlinux and prepare a clean SD card again.
I downloaded the latest Archlinux and installed it using the same guide.
But on boot it still throws the same error.
Then I tried installing Archlinux on another newly bought SD card, but I get the same error :(.
I also tried installing Raspbian on the same card, and it worked perfectly.
It looks to me like the latest Archlinux release is faulty. What are your suggestions for fixing this? Or, if the latest release really is faulty, does anyone know when a fixed version will be released?
See the attached images for the errors.
Note:
The locking mechanism of the SD card slot on my Pi is broken, so I use some tape to hold the card in place.
I am using a Transcend microSDHC 32 GB class 10 (TS32GUSDU1) SD card with adapter.
I tried a different 8 GB card and it worked. So the latest Archlinux release has a driver problem with that Transcend microSDHC 32 GB card (I tried two new cards from the same vendor and Archlinux failed to boot on both). Previously (installed and updated a month earlier) the OS worked fine with that card.
I have a BeagleBone Black development board. When I initially bought it, I set it up on my Mac and was able to SSH into it without any problem. Then I followed a tutorial for sharing my Mac's internet connection with the BeagleBone over USB, and since then I have been unable to SSH into the BeagleBone from my Mac. I tried updating the HoRNDIS driver and it didn't solve anything.
What happens is that my Mac (Mavericks) detects the BeagleBone drive, but the board doesn't show up as a network interface. Hence, I can't SSH into the BeagleBone at all. I tried installing both the FTDI and HoRNDIS drivers over and over, and it didn't solve the problem.
I really need this to work on my Mac, and I'm kind of lost at this point. Any help would be really appreciated. I can't reinstall the OS on the BeagleBone because I have some very important project work installed and working on it, and I don't want to reinstall all those packages again.
Thanks.
I solved this problem by resetting the SMC and the PRAM. Here are the details if someone needs them:
Reset the SMC and PRAM
SMC Reset:
Shut down the MacBook Pro.
Plug in the MagSafe power adapter to a power source, connecting it to the Mac if it's not already connected.
On the built-in keyboard, press the (left side) Shift-Control-Option keys and the power button at the same time.
Release all the keys and the power button at the same time.
Press the power button to turn on the computer.
PRAM Reset:
Shut down the MacBook Pro.
Locate the following keys on the keyboard: Command, Option, P, and R.
Turn on the computer.
Press and hold the Command-Option-P-R keys. You must press this key combination before the gray screen appears.
Hold the keys down until the computer restarts and you hear the startup sound for the second time.
Release the keys.
After following the above two steps, I plugged in the BeagleBone and it showed up as a network interface. I was then able to successfully SSH into it.
Has anyone managed to get the camera data from the Kinect using a Raspberry Pi?
We would like to make a wireless Kinect by connecting it over Ethernet or WiFi. Otherwise, let me know if you have a working alternative.
To answer your question: yes, it is possible to get image and depth data on the Raspberry Pi! Here is how.
If you want to use just video (color, not depth) there is already a driver in the kernel! You can load it like this:
modprobe videodev
modprobe gspca_main
modprobe gspca_kinect
You get a new /dev/videoX and can use it like any other webcam!
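For example, you can grab a frame from the new device with OpenCV (a minimal sketch, assuming the Python bindings are installed and the Kinect shows up as device 0):

import cv2

# Open the Kinect's /dev/video0 like any other webcam
# and save a single color frame.
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    cv2.imwrite("kinect_frame.png", frame)
cap.release()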
If you need depth (which is why you want a Kinect) but have a kernel older than 3.17, you need another driver, which can be found here: https://github.com/xxorde/librekinect. If you have 3.17 or newer, librekinect functionality is enabled by setting the gspca_kinect module's depth_mode parameter:
modprobe gspca_kinect depth_mode=1
Both work well on the current Raspbian.
If you can manage to plug your Kinect camera into the Raspberry Pi, install guvcview first to see whether it works.
sudo apt-get install guvcview
Then type guvcview in the terminal; it should open an options panel and the camera control view. If all of that works and you want to get the raw data to do some image processing, you will need to compile OpenCV (it takes about 4 hours to compile); after that, you just need to program whatever you want. To compile it, search on Google; there are lots of tutorials.
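Once OpenCV is built, a first image treatment can be as small as this (a sketch, again assuming the Kinect shows up as device 0):

import cv2

# Grab one frame and apply a simple treatment:
# grayscale conversion followed by edge detection.
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    cv2.imwrite("edges.png", edges)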
Well, as far as I know there are no success stories about getting images from the Kinect on the RaspberryPi.
On GitHub there is an issue in the libfreenect repository about this problem. In this comment, user zarvox says that the RPi doesn't have enough power to handle data from the Kinect.
Personally, I tried to connect the Kinect to the RPi using OpenNI2 and Sensor, but had no success. And that was not a clever decision, because it's impossible to work with the Microsoft Kinect on Linux using OpenNI2 due to licensing restrictions (well, actually it is not quite impossible: you can use OpenNI2-FreenectDriver + OpenNI2 on Linux to hook up the Kinect, but that workaround is not suitable for the RaspberryPi, because OpenNI2-FreenectDriver uses libfreenect).
But anyway, there are some good tutorials on how to connect the ASUS Xtion Live Pro to the RaspberryPi: one, two. And on how to connect the Kinect to the more powerful ARM-based CubieBoard2: three.
If you intend to do robotics, the simplest thing is to use the Kinect library for ROS: here.
Otherwise you can try OpenKinect. They provide the libfreenect library, which gives you access to the accelerometer, the image, and much more:
OpenKinect on GitHub: here
OpenKinect wiki: here
Here is a good example with code and all the details you need to connect to the Kinect and operate the motors using libfreenect.
You will need a powered USB hub to power the Kinect, and you will need to install libusb.
A second possibility is to use the OpenNI library, which provides an SDK to develop middleware libraries that interface with your application; there is even an OpenNI lib for Processing: here.
Yes, you can use the Kinect with a Raspberry Pi in a small robotics project.
I have done this with the OpenKinect library.
In my experience, you should monitor your Raspberry Pi's voltage, since failures are often due to low voltage.
You should also write your code carefully so it needs less processing and runs faster, because if your code has a problem, your image processing will respond more slowly to objects.
https://github.com/OpenKinect/libfreenect
https://github.com/OpenKinect/libfreenect/blob/master/wrappers/python/demo_cv2_threshold.py
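The linked demo boils down to something like the following (a trimmed sketch, assuming the libfreenect Python wrapper, numpy, and OpenCV are installed and the Kinect is plugged in):

import freenect
import numpy as np
import cv2

# Grab one 11-bit depth frame as a numpy array...
depth, _ = freenect.sync_get_depth()
# ...scale it down to 8 bits and threshold it into a binary mask.
depth8 = (depth >> 3).astype(np.uint8)
_, mask = cv2.threshold(depth8, 100, 255, cv2.THRESH_BINARY)
cv2.imwrite("depth_mask.png", mask)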