I'm new to Mininet, and I want to see the network topology using the OpenDaylight (Carbon) controller. I have tried this command:
sudo mn --topo linear,3 --mac \
--controller=remote,ip=10.109.253.152,port=6633 \
--switch ovs,protocols=OpenFlow13,stp=1
OpenDaylight successfully shows the whole topology. Then I wanted to get the same result using Python code alone. However, it doesn't work.
#!/usr/bin/python
from mininet.net import Mininet
from mininet.node import RemoteController, OVSSwitch
from mininet.log import info, setLogLevel
from mininet.cli import CLI

def RemoteCon():
    net = Mininet(controller=RemoteController, switch=OVSSwitch)
    c1 = net.addController('c1', ip='10.109.253.152', port=6633)
    h1 = net.addHost('h1')
    h2 = net.addHost('h2')
    s1 = net.addSwitch('s1')
    net.addLink(s1, h1)
    net.addLink(s1, h2)
    net.build()
    net.start()
    CLI(net)
    net.stop()

if __name__ == '__main__':
    setLogLevel('info')
    RemoteCon()
Oh, by the way, do the switches have default forwarding functionality? Sometimes, when I have hosts and a switch connected to each other, the hosts can ping each other, but when running the above code, h1 cannot ping h2 and vice versa.
Thanks in advance.
I'm assuming you are using the l2switch feature in OpenDaylight. If you search this forum, you'll find others complaining of inconsistent connectivity when using l2switch. You are probably hitting bugs, but after a restart of OpenDaylight it might be OK. By default, l2switch should learn the links of the topology and create the flows that allow every host to ping every other host.
As for your Python script to run Mininet, I don't see anything obviously wrong. Can you look in the OpenDaylight karaf.log for any clues? Or check the OVS logs? If you are simply not seeing anything in the topology viewer, then my guess is that OVS is not connecting to OpenDaylight at all.
One thing to double-check: I don't know how the Python script decides which OpenFlow version to use, but maybe it's using 1.0, and that's the big difference from your command line, which sets it to 1.3.
It looks like you missed starting your switch so that it can communicate with the controller. Try
s1.start([c1])
This defines which controller the switch is connected to. Hope this helps.
You should also pass the protocols parameter to addSwitch, just as you do on the command line:
s1 = net.addSwitch('s1', cls=OVSSwitch, protocols='OpenFlow13')
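Putting both answers together, a minimal sketch of the corrected script might look like this (assuming the same controller IP and OpenFlow 1.3 as in your original mn command):

#!/usr/bin/python
from mininet.net import Mininet
from mininet.node import RemoteController, OVSSwitch
from mininet.log import setLogLevel
from mininet.cli import CLI

def RemoteCon():
    net = Mininet(controller=RemoteController, switch=OVSSwitch)
    c1 = net.addController('c1', ip='10.109.253.152', port=6633)
    s1 = net.addSwitch('s1', protocols='OpenFlow13')  # same OpenFlow version as the mn command
    h1 = net.addHost('h1')
    h2 = net.addHost('h2')
    net.addLink(s1, h1)
    net.addLink(s1, h2)
    net.build()
    c1.start()
    s1.start([c1])  # tell the switch which controller to talk to
    CLI(net)
    net.stop()

if __name__ == '__main__':
    setLogLevel('info')
    RemoteCon()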
Related
I am trying to run a series of commands to configure a VLAN on a Dell EMC OS10 switch using Paramiko. However, I am running into a rather frustrating problem.
I want to run the following
# configure terminal
(config)# interface vlan 3
(conf-if-vl-3)# description VLAN-TEST
(conf-if-vl-3)# end
However, I can't seem to figure out how to achieve this with paramiko.SSHClient().
When I try sshclient.exec_command("show vlan"), it works great: it runs the command and exits. However, I don't know how to run more than one command with a single exec_command.
If I run sshclient.exec_command("configure") to access the configuration shell, the command completes and I believe the channel is closed, because my next command, sshclient.exec_command("interface vlan ..."), fails: the switch is no longer in configure mode.
If there is a way to establish a persistent channel with exec_command that would be ideal.
Instead, I have resorted to a function along these lines:
chan = sshClient.invoke_shell()
chan.send("configure\n")
chan.send("interface vlan 3\n")
chan.send("description VLAN_TEST\n")
chan.send("end\n")
Oddly, this works when I run it from a Python terminal one command at a time.
However, when I call this function from my Python main, it fails. Perhaps the channel is closed too soon when it goes out of scope after the function returns?
Please advise if there is a more reasonable way to do this.
Regarding sending commands to the configure mode started with SSHClient.exec_command, see:
Execute (sub)commands in secondary shell/command on SSH server in Python Paramiko
Though it's quite common that "devices" do not support the "exec" channel at all:
Executing command using Paramiko exec_command on device is not working
Regarding your problem with invoke_shell, it's quite possible that the server needs some time to get ready for the next command.
A quick-and-dirty solution is to sleep briefly between the individual send calls.
A better solution is to wait for the command prompt before sending the next command.
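A rough sketch of that approach (assuming the OS10 prompt ends with "#"; host and credentials are placeholders):

import time
import paramiko

def send_and_wait(chan, command, prompt="#", timeout=10):
    # Send one command, then read output until the prompt comes back or we time out.
    chan.send(command + "\n")
    output = ""
    deadline = time.time() + timeout
    while not output.rstrip().endswith(prompt) and time.time() < deadline:
        if chan.recv_ready():
            output += chan.recv(4096).decode("utf-8", errors="replace")
        else:
            time.sleep(0.1)
    return output

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("192.168.1.10", username="admin", password="secret")  # placeholders
chan = client.invoke_shell()
for cmd in ["configure terminal", "interface vlan 3", "description VLAN-TEST", "end"]:
    send_and_wait(chan, cmd)
client.close()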
I set up ScyllaDB on my Debian 9.6 machine. When I run cqlsh, I can connect to it and create tables, run queries, etc.
Now I have tried to write a simple program in C++ using the DataStax driver, and it can't connect. It always blocks when it tries to connect.
The scylla package I installed is:
scylla | 3.0.11-0.20191126.3c91bad0d-1~stretch
cpp_driver is the current master from github: https://github.com/datastax/cpp-driver
Now I tried to run the examples/simple project that is included in the driver, so I assume it should work, but it shows the same problem. I don't get any errors; it just blocks:
#include <cassandra.h>

CassCluster* cluster = cass_cluster_new();
CassSession* session = cass_session_new();
const char* hosts = "127.0.0.1";
cass_cluster_set_contact_points(cluster, hosts);
cass_cluster_set_protocol_version(cluster, CASS_PROTOCOL_VERSION_V4);
CassFuture* connect_future = cass_session_connect(session, cluster);
// here it blocks now forever...
CassError er = cass_future_error_code(connect_future);
I also tried to run it on Ubuntu 16.04, but it shows the same problem. Since connecting works using cqlsh, I don't think it's a configuration problem, but rather something with the cpp_driver.
I also traced the TCP connection, and I can see that the cpp_driver talks to the server; the conversation looks similar to the one cqlsh has.
I finally found the solution to this issue. We were using cpp_driver 2.15.1, which apparently got some change in the event handling according to its release notes. When I downgraded to 2.15.0, the problem was gone and the connection could be established successfully.
I have set up Spark SQL on JupyterHub using the Apache Toree SQL kernel. I wrote a Python function to update Spark configuration options in the kernel.json file so my team can change the configuration based on their queries and cluster setup. But I have to shut down the running notebook and re-open it, or restart the kernel, after running the Python function; that is how I force the Toree kernel to re-read the JSON file and pick up the new configuration.
I thought of implementing this shutdown and restart of the kernel programmatically. I found the JupyterHub REST API documentation and am able to implement it by invoking the related APIs. The problem is that the single-user server API port is set randomly by the JupyterHub Spawner object, and it changes every time I spin up a cluster. I want this port to be fixed before launching the JupyterHub service.
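For context, the REST calls I am making look roughly like this (a sketch only; the hub URL, username, and API token are placeholders for my setup):

import requests

HUB_API = "http://127.0.0.1:8000/hub/api"  # hub's public URL; may differ per deployment
TOKEN = "<api-token>"                      # token generated for an admin user
USER = "someuser"
headers = {"Authorization": "token %s" % TOKEN}

# Stop the user's single-user server ...
requests.delete("%s/users/%s/server" % (HUB_API, USER), headers=headers)
# ... and start it again so the Toree kernel re-reads kernel.json on launch.
requests.post("%s/users/%s/server" % (HUB_API, USER), headers=headers)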
Here is a solution I tried, based on the JupyterHub docs:
sudo echo "c.Spawner.port = 35289
c.Spawner.ip = '127.0.0.1'" >> /etc/jupyterhub/jupyterhub_config.py
But this did not work: the port was again set randomly by the Spawner. I think there is a way to fix this. Any help would be greatly appreciated. Thanks.
I'm trying to check the disk status of a client Ubuntu 16.04 instance from an Icinga2 master server. I tried to use the NRPE plugin to check the disk status, but I ran into trouble when defining the service in the service.conf file. Can someone please tell me which files should be changed when using NRPE? I'm new to Icinga and NRPE.
I was able to find the solution to my problem. I'm putting it here because it may help someone else.
I'll use the check_load example to explain.
1. First of all, you need to create a .conf file (name: 192.168.30.40-host.conf) for the client server that you are going to monitor using Icinga2. It should be placed in the /etc/icinga2/conf.d/ folder.
/etc/icinga2/conf.d/192.168.30.40-host.conf
object Host "host1" {
import "generic-host"
display_name = "host1"
address = "192.168.30.40"
}
2. Next, you should create a service file for your client.
/etc/icinga2/conf.d/192.168.30.40-service.conf
object Service "LOAD AVERAGE" {
import "generic-service"
host_name = "host1"
check_command = "nrpe"
vars.nrpe_command = "check_load"
}
3. This is an important part of the solution. You should add this line to the nrpe.cfg file on the host running the NRPE daemon:
/etc/nagios/nrpe.cfg file
command[check_load]=/usr/lib64/nagios/plugins/check_load -w 15,10,5 -c 20,15,10
4. Make sure to restart the Icinga2 and NRPE/Nagios services after making any change.
You could also use an Icinga2 agent instead of NRPE. The agent can receive its configuration from a master or satellite and perform local checks on the host.
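For example, with the agent approach a disk check bound to the client endpoint might look roughly like this (a sketch only; it assumes the Endpoint and Zone for "host1" are already set up and the standard ITL check commands are available):

object Service "DISK" {
  import "generic-service"
  host_name = "host1"
  check_command = "disk"        // ITL check_disk plugin, executed locally on the agent
  command_endpoint = "host1"    // assumes an Endpoint named "host1" exists
}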
Preface
I'm writing a web server that gives users access to a program written in C (I'm using a Python wrapper over this C program; it is PyCLIPS). To serve a lot of users, the web server has to start a lot of copies of this C program, because one copy can serve only a few users at the same time, about 1-3. In addition, each user should work only with his own copy, so there have to be many copies of the C program.
This C program is a CLIPS engine, if that helps to understand the setup.
So, to solve this design problem, I want to write a Twisted TCP server that acts like a pool of long-running processes. Each long-running process is a small Twisted TCP server that gives access to one copy of the C program.
For example, a user asks the pool server to reserve a long-running process for him; the pool server then creates and runs a long-running process that starts listening on some port, and returns the host and port of this long-running process to the user. Now the user can communicate with this long-running process directly.
Questions
How do I start these long-running processes from the pool server? The pool server and each long-running process should be separate Twisted servers.
Is Twisted a good choice for this?
Or is there another way to solve this design problem?
Thanks a lot.
Using Twisted for this sounds like it makes sense. The low-level API that supports running processes in Twisted is reactor.spawnProcess. Some examples of its usage are given in the process howto document.
This API is well suited for dealing with many long-running processes, and, as with all I/O APIs in Twisted, it works well when combined with another event source, such as a web server that users can use to request that new processes be started up.
reactor.spawnProcess is more like the standard library subprocess module than like the multiprocessing package. It gives you a way to launch a child process running a particular executable, with particular arguments, etc. It does not provide a high-level API for running a particular Python function in another process. However, it's not too hard to build such a thing (at least for a particular case). Consider this approach:
from sys import executable
from os import environ
from twisted.internet import reactor
implementation = """\
from yourapp import somefunction
somefunction()
"""
reactor.spawnProcess(executable, [executable, "-c", implementation], env=environ)
reactor.run()
This just launches a new Python interpreter (whichever one you happen to be running) and uses the -c option to specify a program on the command line for it to run.
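If you also want to notice when each child produces output or exits, you can pass a ProcessProtocol to spawnProcess. Here is a rough sketch; yourapp.run_worker is a hypothetical entry point for one of your long-running processes:

from sys import executable
from os import environ
from twisted.internet import reactor, protocol

class WorkerProtocol(protocol.ProcessProtocol):
    # Observe one child process: log its output and notice when it exits.
    def outReceived(self, data):
        print("worker stdout: %r" % (data,))

    def processEnded(self, reason):
        print("worker exited: %s" % (reason.value,))

def start_worker(port):
    # Launch one long-running process that serves one copy of the C program on the given port.
    code = "from yourapp import run_worker; run_worker(%d)" % port  # hypothetical entry point
    return reactor.spawnProcess(WorkerProtocol(), executable,
                                [executable, "-c", code], env=environ)

start_worker(8001)
reactor.run()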