NetworkManager: an issue adding a tap interface to a bridge

I currently have this:
br0: connected to bridge-br0
"br0"
bridge, 02:81:A4:CF:CE:9F, sw, mtu 1500
ip4 default
inet4 192.168.1.10/24
route4 192.168.1.0/24
route4 0.0.0.0/0
inet6 fe80::b274:abc7:1ff9:ecc5/64
route6 fe80::/64
route6 ff00::/8
eth0: connected to bridge-slave-eth0
"eth0"
ethernet (dwmac-sun8i), 02:81:A4:CF:CE:9F, hw, mtu 1500
master br0
route6 ff00::/8
lo: unmanaged
"lo"
loopback (unknown), 00:00:00:00:00:00, sw, mtu 65536
I'm trying to create a tap interface and add it to br0:
$ sudo -sE
# nmcli con add type tun ifname tap0 mode tap ip4 10.10.10.10/24
Connection 'tun-tap0' (02a77364-3dc1-4bed-8b46-e8be91fcbb93) successfully added.
# nmcli con add type bridge-slave ifname tap0 master br0
Connection 'bridge-slave-tap0' (831d750b-3d86-40dd-8ab5-da4bdccc5f9e) successfully added.
# nmcli c
NAME UUID TYPE DEVICE
bridge-br0 4dfb2c88-b861-49c4-aee0-9449a9b33b9d bridge br0
tun-tap0 02a77364-3dc1-4bed-8b46-e8be91fcbb93 tun tap0
bridge-slave-eth0 e0a65d4f-d062-4959-8343-1aa11a72dff8 ethernet eth0
bridge-slave-tap0 831d750b-3d86-40dd-8ab5-da4bdccc5f9e ethernet --
wire a4c2b672-2617-3f07-bc25-53b3e415dba5 ethernet --
I'm getting an error when trying to bring it up:
# nmcli con up bridge-slave-tap0
Error: Connection activation failed: No suitable device found for this connection (device eth0 not available because profile is not compatible with device (mismatching interface name)).
Am I doing something wrong?

OK, I spent about two days on this and was about to give up on NetworkManager.
The following command is redundant: nmcli con add type tun ifname tap0 mode tap ip4 10.10.10.10/24
Then, instead of nmcli con add type bridge-slave ifname tap0 master br0, it should be nmcli con add type tun mode tap ifname tap0 master br0, since bridge-slave is an alias for an ethernet profile.
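Putting it together, the whole thing collapses into one profile instead of two (a sketch of what worked for me; nmcli auto-names the new connection tun-tap0, and the two stale profiles from above should be deleted first):
# nmcli con delete tun-tap0 bridge-slave-tap0
# nmcli con add type tun mode tap ifname tap0 master br0
# nmcli con up tun-tap0
Because of master br0, NetworkManager enslaves tap0 to the bridge directly, so no separate bridge-slave profile (and no IP configuration on the tap) is needed.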

Related

Unable to capture traffic greater than MTU 1500 in ovs tunnel

Created a bridge
ovs-vsctl add-br br0
Added a port of type vxlan in bridge br0
ovs-vsctl add-port br0 tun1 \
-- set Interface tun1 type=vxlan \
options:remote_ip=10.2.3.204 options:key=10 options:df_default=False
Added an internal port in bridge br0
ovs-vsctl add-port br0 iface1 \
-- set Interface iface1 type=internal options:df_default=False
Set the interfaces up
ip link set vxlan_sys_4789 up
ip link set iface1 up
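The tunnel settings can be double-checked as follows (the options column should show the remote_ip, key and df_default values set above):
ovs-vsctl list Interface tun1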
I am receiving traffic on interface iface1 and I expect the same traffic to be encapsulated into the given tunnel.
When I send packets with a frame size of 1472 bytes, I receive them at the remote host (10.2.3.204) with the encapsulation done. But when the frame size exceeds 1472 bytes, the packets get fragmented at interface iface1 and all of the fragments pass through the flow; yet at the remote host (10.2.3.204) I receive only the last fragment, the one where the more-fragments bit is not set.
On further debugging, I found that on the tunnel interface, vxlan_sys_4789, only the last fragment is received while the others are dropped.
Is there any explicit condition in OVS that drops these packets?
Even though fragmentation is allowed (df_default=False), why are the fragments not passing through the tunnel?
By default, Open vSwitch overrides the MTU of internal interfaces (e.g. br0). If you have just an internal interface (e.g. br0) and a physical interface (e.g. eth0), then every MTU change on eth0 will be reflected on br0. Any manual MTU configuration using ip on internal interfaces will be overridden by Open vSwitch to match the current bridge minimum.
Sometimes this behavior is not desirable, for example with tunnels. The MTU of an internal interface can be explicitly set using the following command:
$ ovs-vsctl set int br0 mtu_request=1450
After this, Open vSwitch will configure the br0 MTU to 1450. Since this setting is in the database, it will be persistent (unlike changes made with ip).
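You can verify the result from both sides (a quick sanity check; mtu is a read-only status column in the Interface table, so both commands should now report 1450):
$ ovs-vsctl get Interface br0 mtu
$ ip link show br0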
The MTU configuration can be removed to restore the default behavior with:
$ ovs-vsctl set int br0 mtu_request=[]
The mtu_request column can be used to configure MTU even for physical interfaces (e.g. eth0).
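For example, to request a jumbo MTU on a physical port (a sketch; 9000 assumes the NIC actually supports jumbo frames):
$ ovs-vsctl set Interface eth0 mtu_request=9000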

Problem with testpmd on DPDK and OVS in Ubuntu 18.04

I have an X520-SR2 10G network card. I want to use it to create two virtual interfaces with Open vSwitch compiled with DPDK (installed from the Ubuntu 18.04 repository) and test these virtual interfaces with testpmd. I did the following:
Create a bridge
$ ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
Bind the DPDK ports
$ ovs-vsctl add-port br0 dpdk0 -- set Interface dpdk0 type=dpdk options:dpdk-devargs=0000:01:00.0 ofport_request=1
$ ovs-vsctl add-port br0 dpdk1 -- set Interface dpdk1 type=dpdk options:dpdk-devargs=0000:01:00.1 ofport_request=2
Create dpdkvhostuser ports
$ ovs-vsctl add-port br0 dpdkvhostuser0 -- set Interface dpdkvhostuser0 type=dpdkvhostuser ofport_request=3
$ ovs-vsctl add-port br0 dpdkvhostuser1 -- set Interface dpdkvhostuser1 type=dpdkvhostuser ofport_request=4
Define the flow rules
# clear all existing flows
$ ovs-ofctl del-flows br0
Add the new flow rules
$ ovs-ofctl add-flow br0 in_port=3,dl_type=0x800,idle_timeout=0,action=output:4
$ ovs-ofctl add-flow br0 in_port=4,dl_type=0x800,idle_timeout=0,action=output:3
Dump the flow rules
$ ovs-ofctl dump-flows br0
cookie=0x0, duration=851.504s, table=0, n_packets=0, n_bytes=0, ip,in_port=dpdkvhostuser0 actions=output:dpdkvhostuser1
cookie=0x0, duration=851.500s, table=0, n_packets=0, n_bytes=0, ip,in_port=dpdkvhostuser1 actions=output:dpdkvhostuser0
Now I run testpmd:
$ testpmd -c 0x3 -n 4 --socket-mem 512,512 --proc-type auto --file-prefix testpmd --no-pci --vdev=virtio_user0,path=/var/run/openvswitch/dpdkvhostuser0 --vdev=virtio_user1,path=/var/run/openvswitch/dpdkvhostuser1 -- --burst=64 -i --txqflags=0xf00 --disable-hw-vlan
EAL: Detected 32 lcore(s)
EAL: Auto-detected process type: PRIMARY
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: VFIO support initialized
Interactive-mode selected
Warning: NUMA should be configured manually by using --port-numa-config and --ring-numa-config parameters along with --numa.
USER1: create a new mbuf pool <mbuf_pool_socket_0>: n=155456, size=2176, socket=0
USER1: create a new mbuf pool <mbuf_pool_socket_1>: n=155456, size=2176, socket=1
Configuring Port 0 (socket 0)
Port 0: DA:17:DC:5E:B0:6F
Configuring Port 1 (socket 0)
Port 1: 3A:74:CF:43:1C:85
Checking link statuses...
Done
testpmd> start tx_first
io packet forwarding - ports=2 - cores=1 - streams=2 - NUMA support enabled, MP over anonymous pages disabled
Logical Core 1 (socket 0) forwards packets on 2 streams:
RX P=0/Q=0 (socket 0) -> TX P=1/Q=0 (socket 0) peer=02:00:00:00:00:01
RX P=1/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
io packet forwarding packets/burst=64
nb forwarding cores=1 - nb forwarding ports=2
port 0:
CRC stripping enabled
RX queues=1 - RX desc=128 - RX free threshold=0
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
TX queues=1 - TX desc=512 - TX free threshold=0
TX threshold registers: pthresh=0 hthresh=0 wthresh=0
TX RS bit threshold=0 - TXQ flags=0xf00
port 1:
CRC stripping enabled
RX queues=1 - RX desc=128 - RX free threshold=0
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
TX queues=1 - TX desc=512 - TX free threshold=0
TX threshold registers: pthresh=0 hthresh=0 wthresh=0
TX RS bit threshold=0 - TXQ flags=0xf00
testpmd> stop
Telling cores to stop...
Waiting for lcores to finish...
---------------------- Forward statistics for port 0 ----------------------
RX-packets: 0 RX-dropped: 0 RX-total: 0
TX-packets: 64 TX-dropped: 0 TX-total: 64
----------------------------------------------------------------------------
---------------------- Forward statistics for port 1 ----------------------
RX-packets: 0 RX-dropped: 0 RX-total: 0
TX-packets: 64 TX-dropped: 0 TX-total: 64
----------------------------------------------------------------------------
+++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
RX-packets: 0 RX-dropped: 0 RX-total: 0
TX-packets: 128 TX-dropped: 0 TX-total: 128
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Done.
testpmd>
Software versions:
OS: Ubuntu 18.04
Linux Kernel: 4.15
OVS: 2.9
DPDK: 17.11.3
What should I do now? Where is the problem?
I finally caught the problem. It was the size of the socket memory allocation: I changed the --socket-mem value to 1024,1024 (1024 MB for each NUMA node) and generated the packets with pktgen (also run with --socket-mem 1024,1024). Everything works fine now.
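For reference, the working invocation differs from the one above only in the memory argument (a sketch of the final command line):
$ testpmd -c 0x3 -n 4 --socket-mem 1024,1024 --proc-type auto --file-prefix testpmd --no-pci --vdev=virtio_user0,path=/var/run/openvswitch/dpdkvhostuser0 --vdev=virtio_user1,path=/var/run/openvswitch/dpdkvhostuser1 -- --burst=64 -i --txqflags=0xf00 --disable-hw-vlan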

Ping between two VMs in KVM

I have configured a network with one host (my computer) and two virtual machines. I don't want to use libvirt to connect the VMs to the host right now, so I manually created a bridge and two tap interfaces, roughly as sketched below.
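The taps were created along these lines (a sketch, assuming iproute2; they are enslaved to the bridge via bridge_ports in the host config further down):
ip tuntap add dev tap0 mode tap
ip tuntap add dev tap1 mode tap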
Here is the configuration:
vm1 /etc/network/interfaces:
auto lo
iface lo inet loopback
auto enp0s2
iface enp0s2 inet static
address 192.168.50.3
netmask 255.255.255.0
dns-nameservers 8.8.8.8
up ip route add default via 192.168.50.1 dev enp0s2
and the same for the other one, vm2:
auto lo
iface lo inet loopback
auto enp0s2
iface enp0s2 inet static
address 192.168.50.2
netmask 255.255.255.0
dns-nameservers 8.8.8.8
up ip route add default via 192.168.50.1 dev enp0s2
and this is the host:
auto enp4s0
iface enp4s0 inet manual

auto br0
iface br0 inet static
address 192.168.50.1
netmask 255.255.255.0
network 192.168.50.0
broadcast 192.168.50.255
# gateway 192.168.50.1
bridge_ports enp4s0 tap0 tap1
bridge_stp off
bridge_fd 0
bridge_maxwait 0
dns-nameservers 8.8.8.8
The host can ping the VMs and the VMs can ping the host now. But from the 192.168.50.3 VM, 192.168.50.2 is unreachable. What is the problem here? I have read in "Mastering KVM Virtualization" that this should be enough to get connectivity (IP forwarding is enabled, but as I understand it that does not matter for a bridge).
Can you try assigning the same VLAN to both VMs in their XML (config) files?
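If the VMs were managed by libvirt, that would look something like this in each domain's XML (a sketch; VLAN tagging in the <interface> element requires an Open vSwitch bridge, and the tag id 10 is arbitrary):
<interface type='bridge'>
  <source bridge='br0'/>
  <virtualport type='openvswitch'/>
  <vlan>
    <tag id='10'/>
  </vlan>
</interface>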

Issues in configuring OpenVSwitch on Ubuntu 16.04

I'm using OpenStack to help me virtualize my infrastructure.
You can see what my topology looks like --> My Topology in Openstack
I face issues in configuring the 2 switches.
Here is what I have done (I'm in sudo mode):
1) Install the openvswitch packages:
apt-get install openvswitch-switch
2) Create a bridge named br0:
ovs-vsctl add-br br0
3) Bring the bridge interface up:
ifconfig br0 up
4) Add the physical interface ens4 to the bridge (I'm connected to the switch via SSH on interface ens3):
ovs-vsctl add-port br0 ens4
5) Remove ens4's IP addressing:
ifconfig ens4 0
6) Give br0 the IP addressing formerly on ens4 (taking switch 1 as the example):
ifconfig br0 192.168.1.18
7) Add a default gateway to the routing table:
route add default gw 192.168.1.1 br0
Unfortunately, after all those steps, I'm still unable to ping from Host_1 (whose IP address is 192.168.1.12) to my Switch_1 (whose IP address is 192.168.1.18; the address 192.168.0.30 is used for configuring the switch via SSH), and vice versa.
Any ideas?
Thank you in advance.
P.S.: If the image is not readable, please tell me and I'll make a new one.
I'm assuming those switches represent VMs, basically because in OpenStack you can't create switches.
That being said, for ARP reasons you have to change the MAC addresses. Try giving the bridge the same MAC address as ens4 and changing the MAC address of ens4. The script should look like this:
NIC="ens4"
MAC=$(ifconfig $NIC | grep "HWaddr\b" | awk '{print $5}')
ovs-vsctl add-br br0 -- set bridge br0 other-config:hwaddr=$MAC
ovs-vsctl add-port br0 $NIC > /dev/null 2>&1
ifconfig $NIC 0.0.0.0
LAST_MAC_CHAR=${MAC:(-1)}
AUX="${MAC:0:${#MAC}-1}"
if [ "$LAST_MAC_CHAR" -eq "$LAST_MAC_CHAR" ] 2>/dev/null; then
NL="a"
else
NL="1"
fi
NEW_MAC="$AUX$NL"
ifconfig $NIC hw ether $NEW_MAC
Also, check that you allow ICMP traffic in the security groups of the VMs.
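After the MAC changes, re-apply the addressing from steps 6 and 7 to the bridge (the same commands as in the question):
ifconfig br0 192.168.1.18
route add default gw 192.168.1.1 br0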

Setting controller IP in Ryu for physical switch

I am new to Ryu and am trying to set it up with a physical switch connected to a VM on my computer. The switch's controller is set to 10.0.1.8 and I am trying to set the same on the Ryu controller. I used the following commands:
sudo ovs-vsctl add-br br0
sudo ovs-vsctl add-port br0 eth2
sudo ovs-vsctl set bridge br0 10.0.1.8 protocols=OpenFlow13
Doing a netstat shows that the Ryu controller is still listening on 0.0.0.0, as per the output below. Can someone please assist me here?
State PID/Program name
tcp 0 0 0.0.0.0:6633 0.0.0.0:*
It seems I had to include the --ofp-listen-host parameter and specify the controller IP there, as follows:
PYTHONPATH=. ./bin/ryu-manager --verbose --ofp-listen-host 10.0.1.8 ryu/app/simple_switch.py
The commands I was using earlier apply only to a Mininet topology.
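As a side note, on the switch itself the usual way to point a bridge at a controller is set-controller rather than set bridge (a sketch; 6633 is the default OpenFlow port that Ryu listens on):
sudo ovs-vsctl set-controller br0 tcp:10.0.1.8:6633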