Connect 2 (or more) Open vSwitch bridges on separate servers - OpenFlow

I have several servers, each with an OVS bridge. Each server has several VMs inside, all connected to the OVS bridge.
All bridges connect to the FloodLight OpenFlow controller, and VMs inside one host can reach each other. Here is an example with two hosts, A and B. The VMs inside HOST A may be on the same subnet as the VMs inside HOST B or on a different one:
++++++++++++++++++++++++++++++
| +------+  +------+ |-----+
| | VM-1 |  | VM-2 | | brA |
| +------+  +------+ |-----+------>--------+
+++++++++++ HOST A +++++++++++            \|/
                                   +---------------+
                                   | OF controller |
                                   +---------------+
++++++++++++++++++++++++++++++            /|\
| +------+  +------+ |-----+------>--------+
| | VM-3 |  | VM-4 | | brB |
| +------+  +------+ |-----+
+++++++++++ HOST B +++++++++++
Success: VM-1 can reach VM-2.
How can VM-1 reach VM-3?
Update: before connecting to the OF controller, the VMs could reach the internet via a POSTROUTING masquerade rule, but this stopped working after they were connected to the OF controller.
Thank you for your reply.

Open vSwitch 2.5 and above cannot work with the latest FloodLight controller. Version 2.4.1 should work OK.
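Separately, and not taken from the question or the answer above: one common way to let VM-1 reach VM-3 is to link brA and brB with an overlay tunnel, so there is a data path between the two hosts for the controller to use. A minimal sketch with ovs-vsctl follows; the host IPs 192.0.2.1/192.0.2.2 and the controller address tcp:192.0.2.10:6653 are placeholders.

# On HOST A: add a VXLAN port towards HOST B and point the bridge at FloodLight
ovs-vsctl add-port brA vxlan0 -- set interface vxlan0 type=vxlan options:remote_ip=192.0.2.2
ovs-vsctl set-controller brA tcp:192.0.2.10:6653

# On HOST B: the mirror-image configuration
ovs-vsctl add-port brB vxlan0 -- set interface vxlan0 type=vxlan options:remote_ip=192.0.2.1
ovs-vsctl set-controller brB tcp:192.0.2.10:6653

With the tunnel in place, each bridge has a port towards the other host, so the controller can install flows that carry traffic between VM-1 and VM-3; a GRE port (type=gre) works the same way.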


remote ssh command issue

Team,
I am facing some difficulties running commands on a remote machine. I am unable to understand why ssh seems to treat the command I pass as a hostname.
ssh -tt -i /root/.ssh/teamuser.pem teamuser@myserver 'cd ~/bin && ./ssh-out.sh'
|-----------------------------------------------------------------|
| This system is for the use of authorized users only. |
| Individuals using this computer system without authority, or in |
| excess of their authority, are subject to having all of their |
| activities on this system monitored and recorded by system |
| personnel. |
| |
| In the course of monitoring individuals improperly using this |
| system, or in the course of system maintenance, the activities |
| of authorized users may also be monitored. |
| |
| Anyone using this system expressly consents to such monitoring |
| and is advised that if such monitoring reveals possible |
| evidence of criminal activity, system personnel may provide the |
| evidence of such monitoring to law enforcement officials. |
|-----------------------------------------------------------------|
ssh: Could not resolve hostname cd: No address associated with hostname
Connection to myserver closed.
It works absolutely fine if I don't pass a command. It simply logs me in. Any ideas?
Man ssh says:
If command is specified, it is executed on the remote host instead of
a login shell.
The thing is that cd is a bash built-in (run "type cd" in your terminal). So ssh tries to run cd as an external command, but cannot find it in PATH.
You should invoke ssh something like this:
ssh user@host -t 'bash -l -c "cd ~/bin && ./ssh-out.sh"'
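For reference, a minimal sketch that keeps the original flags from the question and only adds the quoting fix (key path, user, and host are the ones from the question):

# Pass the whole remote command as a single quoted argument and let a login
# shell run it on the remote side.
ssh -tt -i /root/.ssh/teamuser.pem teamuser@myserver 'bash -l -c "cd ~/bin && ./ssh-out.sh"'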

rabbitmqadmin list vhosts show messages but there are no queues

rabbitmqadmin list vhosts shows messages, but there are no queues. Why is this possible?
When I run Celery it still somehow receives messages. How can I see the name of the queue where the messages are stored? What am I missing?
dmugtasimov@dmugtasimov-ThinkPad-Edge-E440 ~ $ rabbitmqadmin -u guest -p guest list vhosts
+---------+----------+----------------+-------------------------+----------+----------+---------+
| name    | messages | messages_ready | messages_unacknowledged | recv_oct | send_oct | tracing |
+---------+----------+----------------+-------------------------+----------+----------+---------+
| myvhost | 1        | 1              | 0                       | 231903   | 229228   | False   |
+---------+----------+----------------+-------------------------+----------+----------+---------+
dmugtasimov@dmugtasimov-ThinkPad-Edge-E440 ~ $ rabbitmqadmin -u guest -p guest list queues
No items
dmugtasimov@dmugtasimov-ThinkPad-Edge-E440 ~ $ sudo rabbitmqctl list_queues
Listing queues ...
...done.
dmugtasimov@dmugtasimov-ThinkPad-Edge-E440 ~ $ rabbitmqadmin -u guest -p guest -V myvhost get queue=celery requeue=true count=10
*** Access refused: /api/queues/myvhost/celery/get
Please suggest what extra information is required to answer the question.
I met a similar problem; the difference is that my message was "unacknowledged".
For example, I found my queue had a message:
$ rabbitmqadmin list queues name node messages
+----------------------------+----------------+----------+
| name                       | node           | messages |
+----------------------------+----------------+----------+
| my_queue_name              | rabbit@xx-2    | 1        |
+----------------------------+----------------+----------+
But when I run the "get" command to show its content, rabbitmq tells me "there's no item".
So I query it with this command:
$ rabbitmqadmin list queues name node messages messages_ready messages_unacknowledged
+----------------------------+----------------+----------+----------------+-------------------------+
| name                       | node           | messages | messages_ready | messages_unacknowledged |
+----------------------------+----------------+----------+----------------+-------------------------+
| my_queue_name              | rabbit@xxxxx-2 | 1        | 0              | 1                       |
+----------------------------+----------------+----------+----------------+-------------------------+
I don't know why; I just restarted the RabbitMQ server and everything seems to work fine.
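For what it's worth, a rough sketch of the checks described in this thread (the vhost, user, and column names come from the question and answer above; the restart command assumes a systemd-based install):

# Scope rabbitmqadmin to the vhost that actually holds the queues.
rabbitmqadmin -u guest -p guest -V myvhost list queues name messages messages_ready messages_unacknowledged

# If messages are stuck as unacknowledged, restarting the broker (as done in the
# answer above) closes the stale consumer connections and requeues them.
sudo systemctl restart rabbitmq-server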

The Difference between ovs-vsctl and ovs-dpctl

If I am setting up a switch device to be controlled via OpenFlow, what are the conditions for using ovs-dpctl versus ovs-vsctl? The man page for ovs-dpctl says to use ovs-vsctl if ovs-vswitchd is used.
So under what circumstances would you use ovs-dpctl? What does it do that you can't do otherwise?
One follow-up question is where the OF "datapath" value comes from. This would be the 64-bit number in the OF spec that the OF controller uses to identify OF switches. Is this value automatically computed or do you have to enter it?
Thanks for any help with this.
ovs-dpctl:
A tool to create, modify, and delete Open vSwitch datapaths.
Here are some examples (commands are random):
– ovs-dpctl add-dp dp1
– ovs-dpctl add-if dp1 eth0
– ovs-dpctl show
– ovs-dpctl dump-flows
ovs-vsctl:
A utility for querying and updating the configuration of ovs-vswitchd (with the help of ovsdb-server). Port configuration, bridge additions/deletions, bonding, and VLAN tagging are just some of the options that are available with this command.
Here are some examples (commands are random):
– ovs-vsctl -V : Prints the current version of openvswitch.
– ovs-vsctl show : Prints a brief overview of the switch database configuration.
– ovs-vsctl list-br : Prints a list of configured bridges
– ovs-vsctl list-ports <bridge> : Prints a list of ports on a specific bridge.
– ovs-vsctl list interface : Prints a list of interfaces.
– ovs-vsctl add-br <bridge> : Creates a bridge in the switch database.
ovs-ofctl:
I think it is worth mentioning this tool too. It is a command-line tool for monitoring and administering OpenFlow switches, and it can be used to list the flows installed in the OVS kernel module (a concrete example follows the templates below).
- ovs-ofctl add-flow <bridge> <flow>
- ovs-ofctl add-flow <bridge> <match-field> actions=all
- ovs-ofctl del-flows <bridge> <flow>
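As a concrete illustration of those templates (the bridge name br0 and the port numbers are invented for the example):

# Forward traffic arriving on port 1 of br0 out of port 2, then inspect the flow table.
ovs-ofctl add-flow br0 "priority=100,in_port=1,actions=output:2"
ovs-ofctl dump-flows br0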
To me it seems that ovs-vsctl is used to configure the Open vSwitch itself (ports, bridges, and so on), while ovs-dpctl is used to work with datapaths and their interfaces.
Sources:
openvswitch and ovsdb
OpenVSwitch slides
openvswitch cheat sheet
Your second question -> OF datapath: to me, a datapath in the context of OpenFlow is an object denoting the connection between controller and switch. I believe the OF controller figures that out, but it depends on the OF controller.
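As a side note on the datapath ID itself (not part of the answer above): Open vSwitch computes it automatically, by default deriving it from the bridge's MAC address, and reports it to the controller; you can read it or pin it to a fixed value with ovs-vsctl. The bridge name br0 and the ID below are placeholders.

# Show the 64-bit datapath ID the bridge currently reports to the controller.
ovs-vsctl get bridge br0 datapath_id

# Optionally pin it to a fixed 16-hex-digit value for a stable switch identity.
ovs-vsctl set bridge br0 other_config:datapath-id=0000000000000123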
ovs-vsctl is used to manage the openvswitch and ovs-dpctl can be used to manage datapaths within an openvswitch.
A relevant comment explaining datapaths can be found in dpif-provider.h:
A datapath is a collection of physical or virtual ports that are
exposed over OpenFlow as a single switch. Datapaths and the
collections of ports that they contain may be fixed or dynamic.
Open vSwitch provides the capability for different datapath implementations. The following diagram from the OVS porting guide shows how the different datapaths fit into the OVS architecture.
           _
          |   +-------------------+
          |   |    ovs-vswitchd   |<-->ovsdb-server
          |   +-------------------+
          |   |      ofproto      |<-->OpenFlow controllers
          |   +--------+-+--------+  _
          |   | netdev | |ofproto-| |
userspace |   +--------+ |  dpif  | |
          |   | netdev | +--------+ |
          |   |provider| |  dpif  | |
          |   +---||---+ +--------+ |
          |       ||     |  dpif  | | implementation of
          |       ||     |provider| |  ofproto provider
          |_      ||     +---||---+ |
                  ||         ||     |
           _  +---||-----+---||---+ |
          |   |          |datapath| |
   kernel |   |          +--------+_|
          |   |                    |
          |_  +--------||---------+
                       ||
                    physical
                       NIC

How does one dynamically set HAProxy IP configs?

I have deployed a Redis cluster using Kubernetes. I am now attempting to use HAProxy to load balance. HAProxy is great for load balancing a Redis cluster IF you have static IPs, but we don't have those when using Kubernetes. While testing failover, Redis and Kubernetes handle election of a new master and deployment of a new pod, respectively. However, Kubernetes assigns a new IP to the new pod. How can we inject this new IP into the HAProxy health checks and remove the old master IP?
I have the following setup.
+----+  +----+  +----+  +----+
| W1 |  | W2 |  | W3 |  | W4 |   Web application servers
+----+  +----+  +----+  +----+
     \      |      |      /
      \     |      |     /
       \    |      |    /
          +---------+
          | HAProxy |
          +---------+
           /    \    \
       +----+  +----+  +----+
       | P1 |  | P2 |  | P3 |   K8S pods = Redis + Sentinel
       +----+  +----+  +----+
This is very similar to the setup described on the HAProxy blog.
According to https://github.com/GoogleCloudPlatform/kubernetes/tree/master/examples/redis it uses Sentinel to manage the failover, which reduces the problem to the "normal" Sentinel-based solution.
In this case I would recommend running HAProxy in the same container as the Sentinels and using a simple Sentinel script to update the HAProxy config and issue a reload. With an HAProxy config that only talks to the master, this can easily be a simple search, replace, and reload script (a rough sketch follows at the end of this answer).
Oh, and don't use the HAProxy check from that blog post: it doesn't account for or detect split-brain conditions. You could either go with a simple port check for availability, or write a custom check that queries each of the Sentinels and only talks to the node that at least two Sentinels report as the master.
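A rough sketch of the search/replace/reload approach described above, wired up as a Sentinel client-reconfig-script (the config path, the backend server name redis-master, and the pid file location are assumptions, not taken from the question):

#!/usr/bin/env bash
# Sentinel calls a client-reconfig-script with:
#   <master-name> <role> <state> <from-ip> <from-port> <to-ip> <to-port>
TO_IP=$6
TO_PORT=$7
CFG=/etc/haproxy/haproxy.cfg

# Point the single "server redis-master ..." line at the newly promoted master.
sed -i "s/^\( *server redis-master \)[^ ]*/\1${TO_IP}:${TO_PORT}/" "$CFG"

# Graceful reload: start a new HAProxy process and let it take over from the old one.
haproxy -f "$CFG" -p /run/haproxy.pid -sf "$(cat /run/haproxy.pid)"

The script would then be referenced from sentinel.conf with "sentinel client-reconfig-script <master-name> /path/to/script.sh".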

2nd server in RabbitMQ cluster not participating, shows no uptime

I have a two-server RabbitMQ cluster behind a load balancer, but right now only the first node seems to be fielding traffic.
When I do:
> rabbitmqadmin list nodes name type running uptime
+----------------------+------+---------+------------+
| name                 | type | running | uptime     |
+----------------------+------+---------+------------+
| rabbit@n2-rabbitmq-1 | disc | True    | 3899164848 |
| rabbit@n2-rabbitmq-2 | disc | True    |            |
+----------------------+------+---------+------------+
The second node shows no uptime. A cluster_status shows:
> sudo rabbitmqctl cluster_status
Cluster status of node 'rabbit@n2-rabbitmq-1' ...
[{nodes,[{disc,['rabbit@n2-rabbitmq-1','rabbit@n2-rabbitmq-2']}]},
{running_nodes,['rabbit@n2-rabbitmq-2','rabbit@n2-rabbitmq-1']},
{cluster_name,<<"rabbit@n2-rabbitmq-1">>},
{partitions,[]}]
...done.
What am I doing wrong or what should I look for?
Maybe for some reason one of your nodes went down, and when it came back up it did not sync with the 'master' node (the one that kept running). To handle that, set the following in the configuration file /etc/rabbitmq/rabbitmq.config:
{cluster_partition_handling, autoheal}
I also recommend using the web management plugin for better visibility:
$ rabbitmq-plugins enable rabbitmq_management
From the main (overview) page you can see the status of your nodes in the cluster (connected, partitioned, ...).
In any case, you had better show your full procedure for configuring the cluster (every step) with more information. If my guess above is wrong, please share what you see in the web interface.
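For reference, a rough sketch of the steps suggested above (it assumes shell access on the node that shows no uptime; the config path comes from the answer, and the plugin/ctl commands are standard RabbitMQ tooling):

# 1. Partition handling in /etc/rabbitmq/rabbitmq.config (classic Erlang-term format):
#      [{rabbit, [{cluster_partition_handling, autoheal}]}].

# 2. Enable the management UI so the node states are visible on the Overview page.
sudo rabbitmq-plugins enable rabbitmq_management

# 3. Restart the application on the stale node and re-check the cluster.
sudo rabbitmqctl stop_app
sudo rabbitmqctl start_app
sudo rabbitmqctl cluster_status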