Is it possible to generate CAN messages in one device and dump CAN messages in another device using SocketCAN? - embedded

I am using SocketCAN and the CANtact toolkit to send and receive CAN messages. I am using two CANtact toolkits, one to send CAN messages and the other to receive them; the two CANtact toolkits are connected to each other by a DB9-female-to-DB9-female cable, and their other ends are connected to USB ports on the laptop.
I first configured them with the following SocketCAN commands:
sudo modprobe can                                        # core CAN protocol support
sudo modprobe can_raw                                    # raw CAN sockets
sudo modprobe slcan                                      # serial-line CAN (slcan) driver
sudo slcand -o -s6 -t hw -S 3000000 /dev/ttyACM0 slcan0  # bridge /dev/ttyACM0 to slcan0 at 500 kbit/s (-s6)
sudo ip link set slcan0 up
The above commands are for the first CANtact toolkit. Then I connected the second CANtact toolkit and configured it with the following commands:
sudo modprobe can
sudo modprobe can_raw
sudo modprobe slcan
sudo slcand -o -s6 -t hw -S 3000000 /dev/ttyACM1 slcan1
sudo ip link set slcan1 up
I performed the following steps in two different terminals.
In the first terminal, I ran:
cangen -v slcan0
In the second terminal, I ran:
candump slcan1
I don't receive any CAN messages in terminal 2, but if I run
cangen -v slcan0
in the first terminal and
candump slcan0
in the second terminal, I am able to view the CAN messages being sent.
This means the CAN messages are not being communicated between the two CANtact toolkits. How can this be resolved? Or am I making a mistake somewhere?
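One quick check, as a minimal diagnostic sketch: watch the interface counters on both sides while cangen is running. If slcan0's TX counter increases but slcan1's RX counter stays at zero, the frames are leaving the first interface but never arriving at the second, which points at the physical link (cable wiring, a bitrate mismatch, or missing bus termination) rather than at the SocketCAN setup.
ip -statistics link show slcan0
ip -statistics link show slcan1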

Related

Artemis: can't create broker: function not implemented

I used to create brokers in Artemis on Windows, on Linux, and in WSL, and there was never a problem, except on one of my machines running Windows with WSL2.
I did everything the same when installing Artemis:
sudo groupadd artemis
sudo useradd -s /bin/false -g artemis -d /opt/artemis artemis
cd /opt
sudo wget https://archive.apache.org/dist/activemq/activemq-artemis/2.12.0/apache-artemis-2.12.0-bin.tar.gz
sudo tar -xvzf apache-artemis-2.12.0-bin.tar.gz
sudo mv apache-artemis-2.12.0 artemis
sudo chown -R artemis: artemis
sudo chmod o+x /opt/artemis/bin/
sudo rm apache-artemis-2.12.0-bin.tar.gz
It installs, but when I try to create my own broker instance:
/opt/artemis/bin/artemis create --user app --password pwd --allow-anonymous test
I've got the following error message:
Cannot initialize queue:Function not implemented
I've tried it several times, even uninstalled Artemis, removed the user and group, and started the whole process again, but the result was always the same.
I can't figure out what the difference would be or how to fix the problem. Any help would be highly appreciated!
UPDATE 1:
There is not much in the log, but turning on verbose mode gives the following lines:
Executing org.apache.activemq.artemis.cli.commands.Create create --verbose --user app --password pwd --allow-anonymous test
Home::/opt/artemis, Instance::null
Cannot initialize queue:Function not implemented
As far as I can tell, the message "Cannot initialize queue:Function not implemented" comes from the AIO integration layer. I recommend trying to create the instance with --nio to force the broker to use the Java-based NIO storage interface.
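For example, re-running the create command from the question with only that flag added:
/opt/artemis/bin/artemis create --user app --password pwd --allow-anonymous --nio test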

Systems programming qemu: unknown keycodes `(unnamed)'

I am trying to run QEMU with code that my teacher provided so that we are able to work on our assignment.
This is being run on Ubuntu 18.04:
LIBPATH=/usr/lib/gcc/arm-none-eabi/6.3.1/
arm-none-eabi-as -mcpu=arm926ej-s -g ts.s -o ts.o
arm-none-eabi-gcc -c -mcpu=arm926ej-s -g t.c -o t.o
arm-none-eabi-ld -T t.ld ts.o t.o -o t.elf
arm-none-eabi-ld -T t.ld -L $LIBPATH ts.o t.o -o t.elf -lgcc #-lstr
arm-none-eabi-objcopy -O binary t.elf t.bin
rm *.o *.elf
echo ready to go?
read dummy
qemu-system-arm -M realview-pbx-a9 -m 128M -kernel t.bin \
-serial mon:stdio -serial /dev/pts/2 -serial /dev/pts/2 -serial /dev/pts/2
The numbers in the last line `-serial /dev/pts/#' are from running ps in the terminal and grabbing the number. All of this is in an executable file, and when I run the file the QEMU screen does display, but when I press enter again I receive this error message:
unknown keycodes `(unnamed)', please report to qemu-devel@nongnu.org
I cannot seem to find any clear answer on how to solve this problem. I have tried uninstalling and reinstalling QEMU a couple of times.
QEMU's "unknown keycodes" message is about key handling in its graphics window, and means that the host keyboard mapping you're using has some odd setup that it doesn't entirely understand. Usually this means that a few keys won't work right in the graphics window, and you can ignore it unless you're actually having a problem with them. The whole keycode system was completely rewritten in a newer version of QEMU, and this message doesn't even exist any more.
If your test program isn't expecting to use the graphical screen, then you can definitely ignore the message (indeed you could turn off the graphics screen entirely with -display none).
The command line options to QEMU you're using for the serial port look really odd -- you seem to be trying to connect multiple serial ports to the same host tty, which I'm pretty sure won't work right. Unless you're actually using serial ports 1 through 3, just drop those and use the serial port 0 that is set up with "-serial mon:stdio".
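As a sketch of what that simplified invocation might look like (assuming the test program only needs serial port 0 and no graphical output):
qemu-system-arm -M realview-pbx-a9 -m 128M -kernel t.bin \
    -serial mon:stdio -display none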

Installing video4linux Beaglebone

I'm working with a BeagleBone Black on an image-processing project. For this purpose, I need the video4linux application.
However, I'm not able to share my internet connection with the BeagleBone, so "sudo apt-get install v4l-utils" isn't working. Whenever I change the internet-sharing settings, I'm unable to SSH into the BeagleBone's local IP.
Hence, I want a method to install video4linux without internet connectivity.
Installing a package without internet access is complicated, because you need to install all of its dependencies before the actual package; each package has dependencies on other packages.
Still, you can download the package from: v4l-utils
and copy it to the BeagleBone using a tool such as WinSCP.
The other dependent packages for Debian can be found at: Debian Packages
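Once the .deb files are on the board, they can be installed with dpkg. A rough sketch (the file names below are placeholders for whatever versions you actually download; dpkg will report any dependencies that are still missing):
cd /path/to/copied/debs
sudo dpkg -i v4l-utils_*.deb     # install the downloaded package
sudo dpkg -i *.deb               # or install every .deb you copied over in one go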
You can also share the internet connection from your host to the BeagleBone via USB.
The following shell commands set this up:
On the BeagleBone:
ifconfig usb0 192.168.7.2          # address of the BeagleBone's USB network interface
route add default gw 192.168.7.1   # send all traffic via the host side of the USB link
On the Linux computer:
sudo su
#eth0 is my internet facing interface, eth3 is the BeagleBone USB connection
ifconfig eth3 192.168.7.1
iptables --table nat --append POSTROUTING --out-interface eth0 -j MASQUERADE   # NAT the BeagleBone's traffic out through eth0
iptables --append FORWARD --in-interface eth3 -j ACCEPT                        # allow forwarding from the BeagleBone link
echo 1 > /proc/sys/net/ipv4/ip_forward                                         # enable IPv4 forwarding
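After that, a quick way to confirm the BeagleBone can reach the internet and then install the package (a minimal check; you may also need to point /etc/resolv.conf at a reachable DNS server before apt-get can resolve hostnames):
# on the BeagleBone
ping -c 3 8.8.8.8
sudo apt-get update
sudo apt-get install v4l-utils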

GPS Daemon (gpsd) On RaspberryPi Claims NO FIX

I've connected an Adafruit Ultimate GPS v3 to a Raspberry Pi using a USB adaptor. The GPS unit seems to have a fix because the LED (on the GPS unit) blinks at a slow rate (maybe every 10 s). If I do sudo cat /dev/ttyUSB0 I get NMEA data with the location.
But when I install the gpsd, meaning:
sudo apt-get install gpsd gpsd-clients python-gps
sudo gpsd /dev/ttyUSB0 -F /var/run/gpsd.sock
and run the client (cgps -s), it says no fix is found and the GPS times out. I tried to kill gpsd and run it again:
sudo killall gpsd
sudo gpsd /dev/ttyUSB0 -F /var/run/gpsd.sock
but that didn't help. Do you have any idea why that is?
Edit the gpsd defaults file:
sudo nano /etc/default/gpsd
and change it to look like this:
START_DAEMON="true"
GPSD_OPTIONS="/dev/ttyUSB0"
DEVICES=""
USBAUTO="true"
GPSD_SOCKET="/var/run/gpsd.sock"
Then reboot; cgps should work after that.
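If you would rather avoid a reboot, restarting the gpsd units should pick up the new defaults as well (a sketch that assumes a systemd-based Raspbian image):
sudo systemctl restart gpsd.socket gpsd
cgps -s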

ssh client (dropbear on a router) does no output when put in background

I'm trying to automate some things on remote Linux machines with bash scripting from a Linux machine, and I have a working command (the parentheses are a relic from earlier command concatenations):
(ssh -i /path/to/private_key user@remoteHost 'sh -c "echo 1; echo 2; echo 3; uname -a"')
But if an ampersand is appended to execute it in the background, it seems to execute, but no output is printed, neither on stdout nor on stderr, and even a redirection to a file (inside the parentheses) does not work:
(ssh -i /path/to/private_key user@remoteHost 'sh -c "echo 1; echo 2; echo 3; uname -a"') &
By the way, I'm running the ssh client dropbear v0.52 in BusyBox v1.17.4 on Linux 2.4.37.10 (TomatoUSB build on a WRT54G).
Is there a way to get the output anyway? What's the reason for this behaviour?
EDIT:
For convenience, here's the plain ssh help output (on my TomatoUSB):
Dropbear client v0.52
Usage: ssh [options] [user@]host[/port][,[user@]host/port],...] [command]
Options are:
-p <remoteport>
-l <username>
-t Allocate a pty
-T Don't allocate a pty
-N Don't run a remote command
-f Run in background after auth
-y Always accept remote host key if unknown
-s Request a subsystem (use for sftp)
-i <identityfile> (multiple allowed)
-L <listenport:remotehost:remoteport> Local port forwarding
-g Allow remote hosts to connect to forwarded ports
-R <listenport:remotehost:remoteport> Remote port forwarding
-W <receive_window_buffer> (default 12288, larger may be faster, max 1MB)
-K <keepalive> (0 is never, default 0)
-I <idle_timeout> (0 is never, default 0)
-B <endhost:endport> Netcat-alike forwarding
-J <proxy_program> Use program pipe rather than TCP connection
Amendment after 1 day:
The parentheses do no harm; with and without them the result is the same. I wanted to put the ssh authentication in the background, so the -f option is not a solution. Interesting side note: if an unexpected option is specified (like -v), the error message WARNING: Ignoring unknown argument '-v' is displayed, even when put in the background, so getting output from background processes generally works in my environment.
I tried the regular ssh client on x86 Ubuntu: it works. I also tried dbclient on x86 Ubuntu: it works, too. So this problem seems to be specific to the TomatoUSB build, or there was some fix inside "dropbear v0.52" between the build in TomatoUSB and the one Ubuntu provides (the only difference in the help output is the doubled default receive window buffer on Ubuntu). How can a process know whether it was put in the background? Is there a solution to the problem?
I had a similar problem on my OpenWrt router. The Dropbear SSH client does not write anything to its output if there is no stdin, e.g. when run by cron. I presume that & has the same effect on the process's stdin (no input).
I found a workaround on the author's bug tracker: try redirecting input from /dev/zero, like this:
ssh -i yourkey user@remotehost "echo 123" </dev/zero &
It worked for me, as I tried to describe on my blog page.
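Applied to the command from the question, the same workaround would look roughly like this (the only change is the stdin redirection from /dev/zero):
(ssh -i /path/to/private_key user@remoteHost 'sh -c "echo 1; echo 2; echo 3; uname -a"' </dev/zero) &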