When running a Mininet topology, we can use either ovs or ovsk for the --switch argument of Mininet's mn command, for instance:
mn --custom topo.py --topo topo --mac --switch ovs --controller remote
mn --custom topo.py --topo topo --mac --switch ovsk --controller remote
So I wonder: is there any difference between these two commands?
Since Open vSwitch can run in user space or in kernel space, I thought it might be related to that, i.e. that ovsk means OVS in kernel space. However, I couldn't find any information about this in the documentation or on Google.
Can anyone help with this?
From the mn code in the Mininet Git repository:
ovsk and ovs point to the same class, OVSSwitch; "ovsk" still exists for compatibility reasons, but they are actually the same.
SWITCHDEF = 'default'
SWITCHES = { 'user': UserSwitch,
             'ovs': OVSSwitch,
             'ovsbr': OVSBridge,
             # Keep ovsk for compatibility with 2.0
             'ovsk': OVSSwitch,
             'ivs': IVSSwitch,
             'lxbr': LinuxBridge,
             'default': OVSSwitch }
You can also verify Giuseppe's answer from Mininet's Python code: inside node.py, on line 1253, OVSKernelSwitch = OVSSwitch is written.
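A quick way to check this on your own installation (a sketch; it just imports Mininet's node module and compares the two names):
python -c "from mininet.node import OVSSwitch, OVSKernelSwitch; print(OVSKernelSwitch is OVSSwitch)"
If the alias holds, this prints True.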
Related
I am trying to connect my custom topology to the ODL controller using the command:
sudo mn --topo torus,3,3 --controller=remote,ip=$OPENDAYLIGHTIP,port=6653 --switch ovsk,protocols=OpenFlow13
However, the ping test shows 100% packet loss.
I have installed these features:
feature:install odl-openflowplugin-flow-services-rest odl-openflowplugin-app-table-miss-enforcer
Could you please help me figure out which features I need to install? I am able to run these steps with ODL 0.5.3, but I need to update my SDN controller.
As far as I know, the L2Switch feature, which was responsible for L2 switching, is no longer supported after the Fluorine release. There is no module for ARP handling / L2 switching anymore, so you may have to write your own code to do L2 forwarding.
P.S. Regarding your topology:
"This topology has LOOPS and WILL NOT WORK with the default controller or any Ethernet bridge without STP turned on! It can be used with STP, e.g.:"
I am using the ubiquityrobotics Raspberry Pi image for the RPi 3B+, which is Ubuntu Xenial and ROS Kinetic. My base computer is running Ubuntu 18.04 and has ROS Melodic installed.
I created a subo_base workspace on the base PC and a subo_rpi workspace on the RPi (accessing the RPi via SSH).
Then I created a package in both the base PC and RPi and added the Publisher and Subscriber (http://wiki.ros.org/ROS/Tutorials/WritingPublisherSubscriber%28python%29) files in each of the packages.
When I run the publisher from the RPi, the base PC is able to subscribe but when I publish from the base PC, the RPi does not show any output and remains stuck (even though the Topic is visible on RPi using rostopic list).
Base PC is able to subscribe to RPi
RPi unable to subscribe to topic from PC
Some of the code used on the base PC:
aakash@aakash:~$ mkdir -p ~/subo_base/src
aakash@aakash:~$ cd ~/subo_base/
aakash@aakash:~/subo_base$ catkin_make
aakash@aakash:~/subo_base$ source devel/setup.bash
aakash@aakash:~/subo_base$ echo $ROS_PACKAGE_PATH
aakash@aakash:~/subo_base$ cd ~/subo_base/src/
aakash@aakash:~/subo_base/src$ catkin_create_pkg motion_plan std_msgs rospy roscpp
To connect to the RPi:
aakash@aakash:~/subo_base/src/motion_plan/scripts$ export ROS_MASTER_URI=http://ubiquityrobot.local:11311
aakash@aakash:~/subo_base/src/motion_plan/scripts$ export ROS_IP='hostname -I'
Further, I am able to transfer files to and from the base PC via ssh/scp, so I guess the network might not be the issue?
The issue is most likely the hostname resolution and/or ROS network variable configuration.
I dislike using the hostname in the variables, so I will give the examples using just IPs.
Also, 'hostname -I' is definitely not suitable for setting your ROS_IP variable in all cases, so that might also be one source of your problem.
From the hostname man page:
-I, --all-ip-addresses
    Display all network addresses of the host. This option enumerates all configured addresses on all network interfaces. The loopback interface and IPv6 link-local addresses are omitted. Contrary to option -i, this option does not depend on name resolution. Do not make any assumptions about the order of the output.
You will want to use whatever specific IP address you need, so just use that, or find a better way to determine which IP to set. echo $ROS_IP or printenv | grep ROS will tell you what your variables are currently set to, so you can verify they are set correctly.
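For example (a sketch; the interface name and address here are placeholders, adjust them to your network):
ip -4 addr show eth0          # find the IPv4 address of the interface facing the other machine
export ROS_IP=192.168.0.3     # set ROS_IP to that specific address
printenv | grep ROS           # verify what the ROS variables are set to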
For a minimal proof that things are working, you could try the following.
Let's say your RPi IP is 192.168.0.2 and your PC IP is 192.168.0.3.
You will need to decide which machine will be the master; for this example I will assume the PC is the master.
In a terminal on the PC run the following commands:
roscore
In a different terminal run (this is used instead of the subscriber/publisher nodes to test whether things work):
rostopic pub /test/topic std_msgs/String 'Hello World from PC' -r 1
Now, in the SSH terminal on the RPi, run:
export ROS_MASTER_URI=http://192.168.0.3:11311 && export ROS_IP=192.168.0.2
Now you should be able to echo the topic published on the PC from the SSH window:
rostopic echo /test/topic
Ctrl+C out of the echo and you can try publishing a message on the RPi, like:
rostopic pub /test2/topic std_msgs/String 'Hello World from RPi' -r 1
Now open a new terminal on the PC and try to echo the topic from the RPi; any terminal sourced with the ROS install space (usually source /opt/ros/kinetic/setup.bash) should work:
rostopic echo /test2/topic
ROS wiki page on running ROS on multiple machines
ROS answer regarding setting up multiple machines
ROS1
Machine1 [MASTER]:
Will run roscore, but don't run it yet until the configuration is done.
Has an IP of 192.168.1.10.
1- Run the following in the terminal:
1.1- export ROS_MASTER_URI=http://192.168.1.10:11311.
1.2- export ROS_IP=192.168.1.10.
2- Now, run roscore.
Machine2 [SLAVE]:
Will NOT run roscore.
Has an IP of 192.168.1.15.
1- Run the following in the terminal:
1.1- export ROS_MASTER_URI=http://192.168.1.10:11311.
1.2- export ROS_IP=192.168.1.15.
2- Now, you are connected to the Master.
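As a quick check that the slave really reaches the master (a sketch), list the topics the master knows about from Machine2:
rostopic list
If the variables are wrong, this command will fail to contact the master at http://192.168.1.10:11311.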
ROS2
ROS2 handles discovery on the LAN out of the box, without any configuration.
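For example (a sketch, assuming the demo packages that ship with ROS2 are installed on both machines), on one machine run:
ros2 run demo_nodes_cpp talker
and on the other, with no further setup:
ros2 run demo_nodes_py listener
Both machines only need to be on the same LAN and use the same ROS_DOMAIN_ID (default 0).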
I made a topology in MiniEdit and saved it as topo2.py. When I ran it in Mininet, the topology didn't appear the way I made it.
I have tried the solutions from "Can't see custom topology on DLUX" but still can't see it:
sudo mn --custom ~/mininet/custom/topo2.py --controller=remote,ip=192.168.56.103
I faced the same problem a month ago; I then got my Mininet topology to show up in OpenDaylight DLUX the following way:
sudo mn --custom testbed.py --topo testbed --controller=remote --switch ovsk,protocols=OpenFlow13
You have to specify the protocols and the topology name according to your setup.
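For --topo testbed to work, the custom file passed to --custom has to expose the topology under that name in a topos dictionary. A minimal sketch (the class name and the two-host layout are just placeholders for illustration):
from mininet.topo import Topo

class TestbedTopo(Topo):
    # Example: two hosts behind one switch.
    def build(self):
        s1 = self.addSwitch('s1')
        h1 = self.addHost('h1')
        h2 = self.addHost('h2')
        self.addLink(h1, s1)
        self.addLink(h2, s1)

# mn looks up the value given to --topo in this dictionary
topos = {'testbed': (lambda: TestbedTopo())}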
I am using salt-stack to manage my production machine.
The minions run Raspbian, and I have configured a systemd service. The service's unit file is located at /lib/systemd/system/my_service.service.
When I run the following command:
sudo salt my_minion service.stop my_service
The following error is returned:
ERROR: Unable to run command ['/etc/init.d/my_service', 'stop'] with the context {'with_communicate': True, 'shell': False, 'env': {'LANG': 'en_GB.UTF-8', 'PATH': '/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin', 'LC_ALL': 'C'}, 'stdout': -1, 'close_fds': True, 'stdin': None, 'stderr': -2, 'cwd': '/root'}, reason: [Errno 2] No such file or directory
I understand that salt tries to use sysvinit instead of systemd.
Is there any way to tell salt to use systemd?
EDIT:
Tried adding
providers:
  service: systemd
to /etc/salt/minion as suggested by Eric. Still getting the same error.
EDIT 2
The issue was fixed by using Eric's suggestion plus upgrading salt-minion from 2015.8.3 to 2015.8.8.
This is almost certainly because newer Raspbian is based off of Debian 8, and Salt's systemd execution module does not properly detect newer Raspbian as needing systemd. Can the OP please reply to this message with the output from sudo salt my_minion grains.items? Please redact any grains which you feel have personally-identifiable information, I'm mainly interested in the grains that deal with OS name and version.
EDIT: One more thing. Please confirm that /run/systemd/system exists on the Raspbian box. What I think is happening here is that two modules are both claiming to provide the service module.
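A quick way to confirm this on the minion itself (a sketch):
test -d /run/systemd/system && echo "systemd is managing this system"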
https://github.com/saltstack/salt/pull/32421 should fix this, but you can work around this immediately (without waiting for a new Salt release) by adding the following to /etc/salt/minion on your Raspbian minions:
providers:
  service: systemd
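After changing the minion config, the minion has to be restarted before the provider override takes effect; a sketch of the retest, using the names from the question:
sudo systemctl restart salt-minion            # on the Raspbian minion
sudo salt my_minion service.stop my_service   # from the master; should now go through systemd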
I've used Buildroot to compile a firmware image targeting the LPC EA3250 board, and I'm trying to get it to run under QEMU so that I can test changes to the firmware on my machine. I've tried commands such as:
qemu-system-arm -M virt -kernel uImage -hda rootfs.ext2 -boot c -m 128M -append "root=/dev/sda rw console=ttyS0,38400n8"
But I keep getting similar errors no matter which -M option I use. It seems that I somehow need a new machine option to pass to QEMU that corresponds to my board. I've found this config file, which seems to be the configuration needed for the board I'm looking at.
What I would like to know is how to insert this config into QEMU. Do I have to place this config somewhere and then recompile everything? If I do, where do I need to put it?
On further investigation, it seems that the config file I found is for something else entirely. The LPC EA3250 is not supported by QEMU, and adding support for additional machines is an extensive task.
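To see which boards a particular QEMU build does support, you can list its machine models (a quick check, not specific to this board):
qemu-system-arm -M help               # list all supported ARM machine models
qemu-system-arm -M help | grep -i lpc # look for any LPC-based board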