Issues in configuring OpenVSwitch on Ubuntu 16.04 - ssh

I'm using OpenStack to help me virtualize my infrastructure.
You can see what my topology looks like here: My Topology in OpenStack
I'm having trouble configuring the two switches.
Here is what I have done (I'm running everything as root):
1) Install the openvswitch packages:
apt-get install openvswitch-switch
2) Creating a bridge named br0 :
ovs-vsctl add-br br0
3) Bring up the bridge interface:
ifconfig br0 up
4) Add the physical interface ens4 to the bridge (I'm connected to the switch over SSH through the interface ens3):
ovs-vsctl add-port br0 ens4
5) Remove ens4's IP addressing:
ifconfig ens4 0
6) Give br0 the IP addressing that ens4 used to have (taking Switch_1 as the example):
ifconfig br0 192.168.1.18
7) Add a default gateway to the routing table:
route add default gw 192.168.1.1 br0
Unfortunately, after all those steps, I'm still unable to ping from Host_1 (whose IP address is 192.168.1.12) to Switch_1 (whose IP address is 192.168.1.18; the address 192.168.0.30 is used for configuring the switch over SSH), and vice versa.
Any ideas?
Thank you in advance.
P.S.: If the image is not readable, please tell me and I'll make a new one.

I'm assuming those switches represent VMs, basically because in OpenStack you can't create switches.
That being said, because ARP resolves an IP address to a MAC address, you also have to move the MAC address along with the IP: give the bridge the same MAC address as ens4 and change the MAC address of ens4 to something else. The script should look like this:
NIC="ens4"
# Grab the current MAC address of the physical interface (the ifconfig "HWaddr" field)
MAC=$(ifconfig $NIC | grep "HWaddr\b" | awk '{print $5}')
# Create the bridge and force it to use the physical interface's MAC address
ovs-vsctl add-br br0 -- set bridge br0 other-config:hwaddr=$MAC
ovs-vsctl add-port br0 $NIC > /dev/null 2>&1
# Drop the IP configuration from the physical interface
ifconfig $NIC 0.0.0.0
# Build a different MAC for the physical interface by changing its last character:
# if the last character is a digit, replace it with "a", otherwise with "1"
LAST_MAC_CHAR=${MAC:(-1)}
AUX="${MAC:0:${#MAC}-1}"
if [ "$LAST_MAC_CHAR" -eq "$LAST_MAC_CHAR" ] 2>/dev/null; then
    NL="a"
else
    NL="1"
fi
NEW_MAC="$AUX$NL"
ifconfig $NIC hw ether $NEW_MAC
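Afterwards you can check that br0 really took over the old MAC and that ens4 got the new one:
ifconfig br0 | grep HWaddr
ifconfig ens4 | grep HWaddr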
Also, check that ICMP traffic is allowed in the security groups of the VMs.
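If the security group is the problem, the rule can be added from the command line; a minimal sketch, assuming the instances use the default security group and the openstack client is installed:
openstack security group rule create --protocol icmp --ingress default
After that, try the ping from Host_1 again.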

Related

Can't access OpenStack instance from other devices

I have done a DevStack installation of OpenStack on a server.
I have added ICMP and SSH rules to the security group, and have created instances on it.
I can ssh and ping these instances from the host machine.
Now the problem is that I'm unable to ssh or even ping my instances from other machines on this network. And the fun part is that these instances can ssh/ping other machines, and can even ping my other server and ssh into the VMs on that server.
I hope I made sense, but if you have more to ask, please let me know. Here is my local.conf:
ADMIN_PASSWORD=openstack
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
HOST_IP=192.168.4.72
enable_service s-proxy s-object s-container s-account
SWIFT_REPLICAS=1
SWIFT_HASH=66a3d6b56c1f479c8b4e70ab5c2000f5
enable_service h-eng h-api h-api-cfn h-api-cw
enable_plugin heat git://git.openstack.org/openstack/heat
FLOATING_RANGE=192.168.4.240/29
FLAT_INTERFACE=eno1
Doing this worked out fine for me:
echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -t nat -A POSTROUTING -o eno1 -j MASQUERADE
echo 1 > /proc/sys/net/ipv4/conf/eno1/proxy_arp
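If you want those settings to survive a reboot, the equivalent sysctl entries look roughly like this (a sketch; the file name is made up, and eno1 is the interface from the local.conf above, so adjust it to your host). The MASQUERADE rule also has to be restored at boot, e.g. with iptables-persistent:
# /etc/sysctl.d/99-forwarding.conf (hypothetical file name)
net.ipv4.ip_forward = 1
net.ipv4.conf.eno1.proxy_arp = 1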

Unable to correctly set up a firewall in Mininet with SDN and OpenFlow OVS (UDP version)

I'm experimenting with Mininet on Ubuntu 14 in order to create a basic firewall that blocks UDP packets from one host (h1 = 10.0.0.1) to another (h4 = 10.0.0.4).
The hosts are in the same VLAN but attached to different switches (if that is of any help). I would also like to block only the UDP packets whose destination port is 5001.
To do so, I have launched an xterm on h1 (in Mininet) to check that the ping works and to send the packets to h4. xterm h1: "iperf -u -c 10.0.0.4 -p 5001 -i 5 -b 200K -t 360".
In Mininet I have also opened an xterm on h4 to set it up as a server listening on port 5001. xterm h4: "iperf -s -u -p 5001 -i 5".
I guess the rule I have to add is this one: "sh ovs-ofctl add-flow s1 udp_dst=5001,nw_proto=17,actions=drop"
But it doesn't work: the packets are still arriving. The ping works fine, but (and here comes the main problem) the packets still reach the server, and they shouldn't.
Any help please?
Thank you very much
Here are the screenshots of the network topology and of what appears in the xterm windows.
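For reference, ovs-ofctl generally wants the IP and UDP protocol fields (or the udp shorthand, which sets them for you) before it will accept a UDP port match, so a rule along these lines is probably what is intended; this is only a sketch and has not been verified against this exact topology:
sh ovs-ofctl add-flow s1 "priority=100,udp,nw_src=10.0.0.1,nw_dst=10.0.0.4,tp_dst=5001,actions=drop"
The priority must be higher than that of the default forwarding flow, and since h1 and h4 hang off different switches, the same flow may also need to be installed on the switch that h4 is attached to.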

Not able to create ports in OVS

I have an Ubuntu host with two VMs and I am trying to create a bridge between them. I have a bridge, say br0, and I am trying to create a port, say tap0 and tap1, for each of the two VMs. So far I was able to create the bridge, but when I try to create a port, I get the error below.
root@dpdk:~# ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
root@dpdk:~# ovs-vsctl add-port br0 tap1
ovs-vsctl: Error detected while setting up 'tap1'. See ovs-vswitchd log for details.
root@dpdk:~# sudo ovs-vsctl show
4c3a769e-f900-4c8d-81a7-ba685d4e364a
    Bridge "br0"
        Port "tap1"
            Interface "tap1"
                error: "could not open network device tap1 (No such device)"
        Port "br0"
            Interface "br0"
                type: internal
    ovs_version: "2.5.2"
I am doing this to run a DPDK pktgen application.
You need to create a tap device first.
You can either create it yourself:
$ tunctl -t tap0
$ ip link set tap0 up
$ ovs-vsctl add-port br0 tap0
or let QEMU/KVM create it for you:
$ cat << 'EOF' > /etc/ovs-ifup
#!/bin/sh
switch='br0'
ip link set $1 up
ovs-vsctl add-port ${switch} $1
EOF
$ cat << 'EOF' > /etc/ovs-ifdown
#!/bin/sh
switch='br0'
ip addr flush dev $1
ip link set $1 down
ovs-vsctl del-port ${switch} $1
EOF
$ kvm -m 512 -net nic,macaddr=00:11:22:EE:EE:EE -net \
tap,script=/etc/ovs-ifup,downscript=/etc/ovs-ifdown -drive \
file=/path/to/disk-image,boot=on
(The two cat commands create utility scripts that kvm invokes to bring the tap interface up and tear it down; remember to make them executable. See Open vSwitch's documentation.)
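As a side note, tunctl comes from the uml-utilities package; if it is not installed, the same tap device can be created with iproute2 instead (assuming the same br0 bridge as above):
$ ip tuntap add dev tap0 mode tap
$ ip link set tap0 up
$ ovs-vsctl add-port br0 tap0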

Setting controller IP in Ryu for physical switch

I am new to Ryu and trying to set it up with a physical switch connected to a VM on my computer. The switch's controller is set to 10.0.1.8 and I am trying to set the same address on the Ryu controller. I used the following commands:
sudo ovs-vsctl add-br br0
sudo ovs-vsctl add-port br0 eth2
sudo ovs-vsctl set bridge br0 10.0.1.8 protocols=OpenFlow13
Doing a netstat shows that the Ryu controller is still listening on 0.0.0.0, as per the output below. Can someone please assist me here?
Proto Recv-Q Send-Q Local Address      Foreign Address    State    PID/Program name
tcp        0      0 0.0.0.0:6633       0.0.0.0:*
It seems I had to include the --ofp-listen-host parameter and specify the controller IP there, as follows:
PYTHONPATH=. ./bin/ryu-manager --verbose --ofp-listen-host 10.0.1.8 ryu/app/simple_switch.py
The commands I was using earlier apply only to a Mininet topology.
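On the switch side, the bridge is normally pointed at the controller with set-controller rather than a plain set bridge command; a sketch, assuming Ryu is listening on the default OpenFlow port 6633:
sudo ovs-vsctl set-controller br0 tcp:10.0.1.8:6633
sudo ovs-vsctl set bridge br0 protocols=OpenFlow13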

iptables/ebtables/bridge-utils: PREROUTING/FORWARD to another server via single NIC

We have a number of iptables rules for forwarding connections, which are solid and work well.
For example, port 80 forwards to port 8080 on the same machine (the webserver). When a given webserver is restarting, we forward requests to another IP on port 8080 which displays a Maintenance Page. In most cases, this other IP is on a separate server.
This all worked perfectly until we installed bridge-utils and changed to using a bridge br0 instead of eth0 as the interface.
The reason we have converted to using a bridge interface is to gain access to the MAC SNAT/DNAT capabilities of ebtables. We have no other reason to add a bridge interface on the servers, as they don't actually bridge connections over multiple interfaces.
I know this is a strange reason to add a bridge on the servers, but we are using the MAC SNAT/DNAT capabilities in a new project and ebtables seemed to be the safest, fastest and easiest way to go since we are already so familiar with iptables.
The problem is, since converting to a br0 interface, iptables PREROUTING forwarding to external servers is no longer working.
Internal PREROUTING forwarding works fine (eg: request comes in on port 80, it forwards to port 8080).
The OUTPUT chain also works (eg: we can connect outwards from the box via a local destination IP:8080, and the OUTPUT chain maps it to the Maintenance Server IP on a different server, port 8080, and returns a webpage).
However, any traffic coming into the box seems to die after the PREROUTING rule if the destination IP is external.
Here is an example of our setup:
Old Setup:
iptables -t nat -A PREROUTING -p tcp --dport 9080 -j DNAT --to-destination $MAINTIP:8080
iptables -A FORWARD --in-interface eth0 -j ACCEPT
iptables -t nat -A POSTROUTING --out-interface eth0 -j MASQUERADE
echo 1 > /proc/sys/net/ipv4/ip_forward
New Setup: (old setup in various formats tried as well..., trying to log eth0 and br0 packets)
iptables -t nat -A PREROUTING -p tcp --dport 9080 -j DNAT --to-destination $MAINTIP:8080
iptables -A FORWARD --in-interface br0 -j ACCEPT
iptables -t nat -A POSTROUTING --out-interface br0 -j MASQUERADE
echo 1 > /proc/sys/net/ipv4/ip_forward
Before changing to br0, the client request would go to server A at port 9080, and then be MASQUERADED off to a different server $MAINTIP.
As explained above, this works fine if $MAINTIP is on the same machine, but if it's on another server, the packet is never sent to $MAINTIP under the new br0 setup.
We want the packets to go out the same interface they came in on, MASQUERADED, as they did before we switched to using a single-NIC bridge (br0/bridge-utils).
I've tried adding logging at all stages in iptables. For some reason the iptables TRACE target doesn't work on this setup, so I can't get a TRACE log, but the packet shows up in the PREROUTING chain and then seems to be silently dropped.
I've gone through this excellent document and have a better understanding of the flow between iptables and ebtables:
http://ebtables.sourceforge.net/br_fw_ia/br_fw_ia.html
From my understanding, it seems that the bridge is not forwarding the packets out the same interface they came in, and is dropping them. If we had a second interface added, I imagine it would be forwarding them out on that interface (the other side of the bridge) - which is the way bridges are meant to work ;-)
Is it possible to make this work the way we want it to, and PREROUTE/FORWARD those packets out over the same interface they came in on like we used to?
I'm hoping there are some ebtables rules which can work in conjunction with the iptables PREROUTING/FORWARD/POSTROUTING rules to make iptables forwarding work the way it usually does, and to send the routed packets out br0 (eth0) instead of dropping them.
Comments, flames, any and all advice welcome!
Best Regards,
Neale
I guess you did, but just to be sure, did you add eth0 to the bridge?
Although I am not sure what the problem is, I will give some debugging tips which might assist you or others when debugging bridge/ebtables/iptables issues:
Make sure that "/proc/sys/net/bridge/bridge-nf-call-iptables" is enabled (1).
This causes bridged traffic to go through the netfilter iptables code.
Note that this could affect performance.
Check for packet interception by the ebtables/iptables rules.
Use the commands:
iptables -t nat -L -n -v
ebtables -t nat -L --Lc
This might help you to understand if traffic is matched and intercepted or not.
Check that IP NAT traffic appears in the conntrack table:
conntrack -L (if installed)
Or
cat /proc/net/nf_conntrack
Check MAC learning of the bridge
brctl showmacs br0
Use tcpdump on eth0 and on br0 to check whether the packets are seen on both as expected.
Use the -e option to see MAC addresses as well.
For debugging, try putting the bridge interface in promiscuous mode; maybe the bridge is receiving packets with a different MAC address (in promiscuous mode it will accept those as well).
Maybe set bridge forward delay to 0
brctl setfd br0 0
And disable stp (spanning tree protocol)
brctl stp br0 off
What does your routing table look like?
Try adding a specific or default route rule with "dev br0".
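For reference, the bridge-nf and promiscuous-mode suggestions above boil down to something like:
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
ip link set br0 promisc on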
I hope it helps a bit…
Good luck
Well, this is only 1.5 years old, but it could be useful for later lookups. Looking at your link just now, it specifically says there that brouting will ignore the packet if the destination MAC is on the same side of the bridge and not on another port or the bridge itself (see fig. 2.b in your link).
Adding to that, I simply quote from this link: http://wiki.squid-cache.org/ConfigExamples/Intercept/LinuxBridge
"... ebtables DROP vs iptables DROP
In iptables which in most cases is being used to filter network traffic the DROP target means "packet disappear".
In ebtables a "-j redirect --redirect-target DROP" means "packet be gone from the bridge into the upper layers of the kernel such as routing\forwarding" (<-- relevant bit!)
Since the ebtables works in the link layer of the connection in order to intercept the connection we must "redirect" the traffic to the level which iptables will be able to intercept\tproxy.
And therein is your answer (but added for future visitors of course, unless you are still at it ;p).
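For reference, the kind of rule that wiki page is describing looks roughly like this (a sketch based on the squid-cache example, with the port changed to the 9080 listener from the question; not tested against your setup):
ebtables -t broute -A BROUTING -p IPv4 --ip-protocol tcp --ip-destination-port 9080 -j redirect --redirect-target DROP
This pushes bridged traffic for that port up into the IP layer, where the iptables PREROUTING DNAT and the normal routing/FORWARD path can take over.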