I made a topology in MiniEdit and saved it as topo2.py. When I load it in Mininet, the topology does not appear as I built it.
I have tried the solutions from "Can't see custom topology on DLUX", but I still can't see it:
sudo mn --custom ~/mininet/custom/topo2.py --controller=remote,ip=192.168.56.103
I faced the same problem a month ago; I was then able to see my Mininet topology in OpenDaylight DLUX by starting it the following way:
sudo mn --custom testbed.py --topo testbed --controller=remote --switch ovsk,protocols=OpenFlow13
You have to specify the OpenFlow protocol version and the topology name according to your setup.
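For the --topo name to work, the file passed to --custom has to expose a topos dictionary whose key is the name you pass on the command line. Below is a minimal sketch of such a file; the class name and the "mytopo" key are placeholders, and note that a MiniEdit-exported file may be a standalone script rather than a --custom module, so check what your topo2.py actually contains.

#!/usr/bin/env python
"""Minimal custom Mininet topology (sketch)."""
from mininet.topo import Topo

class MyTopo(Topo):
    "Two hosts connected through one switch."
    def build(self):
        h1 = self.addHost('h1')
        h2 = self.addHost('h2')
        s1 = self.addSwitch('s1')
        self.addLink(h1, s1)
        self.addLink(h2, s1)

# mn looks up this dict when you pass --topo mytopo
topos = {'mytopo': (lambda: MyTopo())}

With a file like this, the command would look like:

sudo mn --custom ~/mininet/custom/topo2.py --topo mytopo --controller=remote,ip=192.168.56.103 --switch ovsk,protocols=OpenFlow13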
When running a Mininet topology, we can use either ovs or ovsk as the --switch argument of Mininet's mn command, for instance:
mn --custom topo.py --topo topo --mac --switch ovs --controller remote
mn --custom topo.py --topo topo --mac --switch ovsk --controller remote
So I wonder: is there any difference between these two commands?
Since Open vSwitch can run in user space or in kernel space, I thought it might be related to that, with ovsk meaning OVS in kernel space. However, I couldn't find any information about this in the documentation or on Google.
Can anyone help with this?
From the mn code in the Mininet git repository:
ovsk and ovs point to the same class, OVSSwitch; "ovsk" still exists for compatibility reasons, but they are actually the same.
SWITCHDEF = 'default'
SWITCHES = { 'user': UserSwitch,
             'ovs': OVSSwitch,
             'ovsbr': OVSBridge,
             # Keep ovsk for compatibility with 2.0
             'ovsk': OVSSwitch,
             'ivs': IVSSwitch,
             'lxbr': LinuxBridge,
             'default': OVSSwitch }
You can verify Giuseppe's answer from Mininet's Python code as well: inside node.py, on line 1253, OVSKernelSwitch = OVSSwitch is written.
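If you want to confirm this on your own installation, a quick check from a Python shell (just a sketch; it assumes Mininet is importable on the machine) is:

# Run inside a Python shell where Mininet is installed
from mininet.node import OVSSwitch, OVSKernelSwitch
print(OVSKernelSwitch is OVSSwitch)  # True: two names for the same class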
I am trying to connect my custom topology to the ODL controller using the command:
sudo mn --topo torus,3,3 --controller=remote,ip=$OPENDAYLIGHTIP,port=6653 --switch ovsk,protocols=OpenFlow13
However, the ping test reports 100% packet loss.
I have installed these features:
feature:install odl-openflowplugin-flow-services-rest odl-openflowplugin-app-table-miss-enforcer
Could you please tell me which features I need to install? I'm able to run these steps with ODL 0.5.3, but I need to update my SDN controller.
As far as I know, the L2Switch feature, which was responsible for L2 switching, is no longer supported after the Fluorine release. There is no module for ARP handling and L2 switching anymore, so you may have to write your own code to do L2 forwarding.
P.S. Regarding your topo:
"This topology has LOOPS and WILL NOT WORK with the default controller or any Ethernet bridge without STP turned on! It can be used with STP, e.g"
I am using the ubiquityrobotics Raspberry Pi image on an RPi 3B+, which runs Ubuntu Xenial and ROS Kinetic. My base computer is running Ubuntu 18.04 and has ROS Melodic installed.
I created a subo_base workspace on the base PC and a subo_rpi workspace on the RPi (accessing the RPi via SSH).
Then I created a package on both the base PC and the RPi and added the publisher and subscriber files (http://wiki.ros.org/ROS/Tutorials/WritingPublisherSubscriber%28python%29) to each package.
When I run the publisher on the RPi, the base PC is able to subscribe, but when I publish from the base PC, the RPi does not show any output and remains stuck (even though the topic is visible on the RPi using rostopic list).
Base PC is able to subscribe to RPi
RPi unable to subscribe to topic from PC
Some of the commands used on the base PC:
aakash@aakash:~$ mkdir -p ~/subo_base/src
aakash@aakash:~$ cd ~/subo_base/
aakash@aakash:~/subo_base$ catkin_make
aakash@aakash:~/subo_base$ source devel/setup.bash
aakash@aakash:~/subo_base$ echo $ROS_PACKAGE_PATH
aakash@aakash:~/subo_base$ cd ~/subo_base/src/
aakash@aakash:~/subo_base/src$ catkin_create_pkg motion_plan std_msgs rospy roscpp
To connect to the RPi:
aakash@aakash:~/subo_base/src/motion_plan/scripts$ export ROS_MASTER_URI=http://ubiquityrobot.local:11311
aakash@aakash:~/subo_base/src/motion_plan/scripts$ export ROS_IP='hostname -I'
Further, I am able to transfer files to and from the base PC via scp over SSH, so I guess the network might not be the issue?
The issue is most likely hostname resolution and/or the ROS network variable configuration.
I dislike using hostnames in these variables, so I will give the examples using just IPs.
Also, 'hostname -I' is definitely not suitable for setting your ROS_IP variable in all cases, so that might be another source of your problem.
From the hostname man page:
-I, --all-ip-addresses
    Display all network addresses of the host. This option enumerates all configured addresses on all network interfaces. The loopback interface and IPv6 link-local addresses are omitted. Contrary to option -i, this option does not depend on name resolution. Do not make any assumptions about the order of the output.
You will want to use the specific IP address you need, so either hardcode it or find a better way to determine which IP to set. echo $ROS_IP or printenv | grep ROS will tell you what your variables are currently set to, so you can verify they are correct.
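If you prefer to determine the address programmatically rather than hardcoding it, one common trick (just a sketch, not part of the original setup; the file name get_ip.py and the probe address 192.168.0.1 are placeholders) is to open a UDP socket toward a routable address and read the local address the OS picks:

#!/usr/bin/env python
# get_ip.py (hypothetical): print the IP this machine would use to reach
# the given address. A UDP connect() sends no packets; it only asks the
# OS which local address it would choose.
import socket

def outbound_ip(probe_addr='192.168.0.1', probe_port=80):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect((probe_addr, probe_port))
        return s.getsockname()[0]
    finally:
        s.close()

if __name__ == '__main__':
    print(outbound_ip())

You could then set the variable with something like export ROS_IP=$(python get_ip.py).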
For minimal proof that things are working you could try the following:
Let's say your RPi IP is 192.168.0.2 and your PC IP is 192.168.0.3.
You will need to decide which machine will be the master; for this example I will assume the PC is the master.
In a terminal on the PC run the following commands:
roscore
In a different terminal run the following (it is used instead of the subscriber/publisher nodes to test whether things work):
rostopic pub /test/topic std_msgs/String 'Hello World from PC' -r 1
Now, in the SSH terminal on the RPi, run:
export ROS_MASTER_URI=http://192.168.0.3:11311 && export ROS_IP=192.168.0.2
Now you should be able to echo the topic published on the PC from the SSH window:
rostopic echo /test/topic
Ctrl+C out of the echo and try publishing a message on the RPi, for example:
rostopic pub /test2/topic std_msgs/String 'Hello World from RPi' -r 1
Now open a new terminal on the PC and try to echo the topic from the RPi; any terminal sourced with the ROS install space (usually source /opt/ros/kinetic/setup.bash) should work:
rostopic echo /test2/topic
ROS wiki page on running ROS on multiple machines
ROS answer regarding setting up multiple machines
ROS1
Machine1 [MASTER]:
Will run roscore, but don't run it yet until the configuration is done.
Has an IP of 192.168.1.10.
1- Run the following in the terminal:
1.1- export ROS_MASTER_URI=http://192.168.1.10:11311.
1.2- export ROS_IP=192.168.1.10.
2- Now, run roscore.
Machine2 [SLAVE]:
Will NOT run roscore.
Has an IP of 192.168.1.15.
1- Run the following in the terminal:
1.1- export ROS_MASTER_URI=http://192.168.1.10:11311.
1.2- export ROS_IP=192.168.1.15.
2- Now, you are connected to the Master.
ROS2
ROS2 discovers other nodes on the LAN out of the box, without any of this configuration.
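To illustrate the difference, here is a minimal ROS2 publisher sketch (rclpy); no ROS_MASTER_URI or ROS_IP is needed, since nodes discover each other over DDS as long as they share the same ROS_DOMAIN_ID. The node and topic names are placeholders.

#!/usr/bin/env python3
# Sketch: minimal ROS2 publisher; run it on one machine and
# `ros2 topic echo /chatter` on the other, with no master configuration.
import rclpy
from rclpy.node import Node
from std_msgs.msg import String

class Talker(Node):
    def __init__(self):
        super().__init__('talker')
        self.pub = self.create_publisher(String, 'chatter', 10)
        self.timer = self.create_timer(1.0, self.tick)

    def tick(self):
        msg = String()
        msg.data = 'Hello from ROS2'
        self.pub.publish(msg)

def main():
    rclpy.init()
    rclpy.spin(Talker())
    rclpy.shutdown()

if __name__ == '__main__':
    main()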
I'm "extremely" new to Kubernetes, and I wanted to try it out on my local machine, which is running Windows 10 along with HyperV. I saw that minikube is used for local development, and I was able to find in on Chocolatey, so I installed it using that:
choco install minikube -y
(I think this also installs kubectl)
The problem I have is that I'm not able to start it; I'm running the following command:
minikube start --vm-driver=hyperv
I have an external switch configured in Hyper-V (I found it as a suggestion somewhere), but when I run the command, it gets stuck at Creating VM ...
I thought it might give me a clue if I looked at the VM created in Hyper-V, and when I open it, I see the following:
So it seems that it's waiting for input, and that's why it's stuck! I tried searching for the problem, but to no avail.
I would appreciate any help.
PS: It seems to me that if I wait long enough, the following message appears on the console:
Temporary Error: provisioning: error getting ssh client: Error dialing
tcp via ssh client: ssh: handshake failed: ssh: unable to authenticate,
attempted methods [none publickey], no supported methods remain
So, somehow by chance, I think I found how to resolve the issue.
First of all: the fact that the VM displays that prompt (minikube login) seems to be normal, and it does NOT prevent minikube start from succeeding.
To resolve the issue, this is what I did:
Delete ~/.kube directory
Delete ~/.minikube directory (in case it exists)
The MOST IMPORTANT step: stop/start the Hyper-V Virtual Machine Management Windows service
These steps seem to have solved the issue for me.
PS: I used this command to start minikube and enable verbose logging:
minikube start --vm-driver hyperv -v 7 --alsologtostderr
Farzad, what resources did you use to set up minikube? Can you please clarify what you mean by "unable to start"? Are the regular kubectl commands working?
For example kubectl get nodes? That is, of course, only if the steps below don't help you.
The screenshot you shared shows a running VM:
Minikube runs a single-node Kubernetes cluster inside a VM on your
laptop for users looking to try out Kubernetes or develop with it
day-to-day.
You mentioned that you've created the vSwitch; you should be using a flag that points minikube to that external vSwitch:
minikube start --vm-driver hyperv --hyperv-virtual-switch "vSwitch name"
You also mentioned choco; did you install kubernetes-cli (you did not mention it in the question)? It might be the reason why your commands do not work (it seems the newer version installs kubectl along with choco install minikube):
kubectl is a command line interface for running commands against
Kubernetes clusters
At this moment I recommend stopping the minikube VM:
minikube stop
Delete the cluster
minikube delete
Sometimes the regular minikube stop and minikube delete do not work, so you might have to manually turn off the minikube VM in Hyper-V; then I recommend going to c:\users\%username%\ and deleting .kube and .minikube.
Use cuninst minikube
Restart and install again as specified in the minikube documentation:
choco install minikube
choco install kubernetes-cli
As for the error you mentioned, let's try to run the cluster properly, and if this persists, we will take care of it.
Try this:
kubectl config use-context minikube
I encountered the same issue. The reason was that I chose the wrong disk file to start my VM after creating it in VirtualBox.
This solved my issue.
minikube delete
minikube start --vm-driver hyperv -v 7 --alsologtostderr
I am new to POX and I don't know how to run its components. Currently I'm stuck with host_tracker.py, taken from https://github.com/CPqD/RouteFlow/blob/master/pox/pox/host_tracker/host_tracker.py
I've tried something like this:
./debug-pox.py host_tracker
And got this output:
POX 0.3.0 (dart) / Copyright 2011-2014 James McCauley, et al.
DEBUG:core:POX 0.3.0 (dart) going up...
DEBUG:core:Running on CPython (2.7.6/Mar 22 2014 22:59:56)
DEBUG:core:Platform is Linux-3.13.0-53-generic-x86_64-with-Ubuntu-14.04-trusty
DEBUG:core:host_tracker still waiting for: openflow
WARNING:core:Still waiting on 1 component(s)
INFO:core:POX 0.3.0 (dart) is up.
Not sure what it means :( Kindly tell me how to run components in POX.
Thanks :)
Assuming that you have Mininet up and running, you should use host_tracker along with the openflow.discovery module. In addition, you should load an example controller (stock component) included in your POX version.
First, start a sample Mininet network:
sudo mn --controller remote
Then run POX like this:
python pox.py forwarding.l2_pairs host_tracker openflow.discovery
When everything is up and running, issue the following in the terminal where you launched Mininet:
pingall
and monitor the terminal in which you ran POX to observe the host_tracker info.
forwarding.l2_pairs is a sample controller (stock component) that handles the network and the flow modifications, host_tracker is the host-tracking module, and openflow.discovery is POX's discovery module.
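If you want to do more than watch the log, a small custom component can subscribe to host_tracker's events. The sketch below is only illustrative: the file name host_logger.py is a placeholder, and it assumes the HostEvent/MacEntry interface of the stock host_tracker module.

# ext/host_logger.py (hypothetical): log host_tracker events.
# Run with: python pox.py forwarding.l2_pairs host_tracker openflow.discovery host_logger
from pox.core import core

log = core.getLogger()

def _handle_HostEvent(event):
    # host_tracker raises HostEvent when a host joins, moves, or leaves
    log.info("Host %s seen on switch %s port %s",
             event.entry.macaddr, event.entry.dpid, event.entry.port)

def launch():
    def start():
        core.host_tracker.addListenerByName("HostEvent", _handle_HostEvent)
    core.call_when_ready(start, "host_tracker")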
To find more stock components, go to https://openflow.stanford.edu/display/ONL/POX+Wiki#POXWiki-StockComponents
To read more about host_tracker, see https://openflow.stanford.edu/display/ONL/POX+Wiki#POXWiki-host_tracker