OVS L3 routing with Mininet - OpenFlow

I am trying to build an L3 Mininet topology with OVS switches and OpenFlow 1.3, like this:
sudo mn --controller=remote,ip=127.0.0.1 --topo linear,2 --switch ovsk,protocols=OpenFlow13
H1: IP 10.0.0.1/24
H2: IP 10.0.1.1/24
Add route:
h1 route add default gw 10.0.0.254
h2 route add default gw 10.0.1.254
I add the following flows:
sh ovs-ofctl add-flow -O OpenFlow13 s1 priority=500,dl_type=0x800,nw_src=10.0.0.0/24,nw_dst=10.0.1.0/24,actions=normal
sh ovs-ofctl add-flow -O OpenFlow13 s2 priority=500,dl_type=0x800,nw_src=10.0.1.0/24,nw_dst=10.0.0.0/24,actions=normal
sh ovs-ofctl add-flow -O OpenFlow13 s1 arp,nw_dst=10.0.0.1,actions=output:1
sh ovs-ofctl add-flow -O OpenFlow13 s2 arp,nw_dst=10.0.1.1,actions=output:1
Interface s1-eth1 has IP 10.0.0.254 and interface s2-eth1 has IP 10.0.1.254. When I ping to test the connection, I always get Destination Host Unreachable.
Can anyone help? Thanks.

It is not recommended to configure IP addresses on the switch data ports. The default gateway addresses should ideally be handled via OpenFlow, that is, we should add flows in such a way that the controller responds to the ARP requests for the default gateway IP address. Please refer to https://github.com/mininet/openflow-tutorial/wiki/Router-Exercise.
If you configure IP addresses on the data ports of the switch, you will have to set up complete routing in Linux, that is, you will need to enable IP forwarding on the switches and possibly configure IP addresses on the interfaces connecting the two switches.
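For the OpenFlow approach, a first step is to punt ARP requests for the gateway addresses to the controller, which then has to build the ARP replies itself (the reply and MAC-rewriting logic lives in the controller, as in the Router Exercise above). A minimal sketch, using the gateway IPs from the question:

ovs-ofctl add-flow -O OpenFlow13 s1 "priority=1000,arp,arp_tpa=10.0.0.254,actions=CONTROLLER"
ovs-ofctl add-flow -O OpenFlow13 s2 "priority=1000,arp,arp_tpa=10.0.1.254,actions=CONTROLLER"

Note that actions=normal alone cannot route between the two subnets: the controller (or additional flows) must also rewrite the source and destination MAC addresses and decrement the TTL when forwarding h1's packets into 10.0.1.0/24.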

Related

How to block all outgoing network traffic, including all broadcast messages (DHCP) with ufw/iptables?

I'm trying to block all outgoing network traffic on Ubuntu 20.04, including any broadcast messages from my network interface. My goal is to block all outgoing traffic from my host while the network interface is up. But all the suggested rules for blocking outgoing traffic do not block broadcast messages such as DHCP, ARP, IGMPv2, and mDNS.
How to reproduce this behavior:
Host1 - host with ufw, where I'm trying to block all traffic
Host2 - host with Wireshark, which monitors traffic from Host1 using a filter on Host1's MAC address
Host1 and Host2 are in the same LAN;
Host1: no rules in ufw/iptables, network interface eth0 is turned on, default network configuration for eth0 is configured for DHCP (static);
Host1: disable network interface eth0 manually or with the command:
sudo ip link set eth0 down
Host1: Add rules with ufw:
sudo ufw default deny outgoing
sudo ufw deny out to any
Or iptables rules
sudo iptables -P OUTPUT DROP
Host1: Enable ufw:
sudo ufw enable
Host2: start Wireshark and set the filter:
eth.addr == <Host1 MAC ADDRESS>
Host1: enable interface eth0 manually or with the command:
sudo ip link set eth0 up
Host2: broadcast packets from Host1 will appear in Wireshark.
Is it possible to block all packets and broadcasting packets too with ufw/iptables?
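One detail worth noting when testing this: iptables only hooks IP traffic, so ARP frames never traverse the OUTPUT chain, and DHCP clients typically transmit through raw packet sockets, which bypass netfilter entirely. ARP can be dropped at its own layer with arptables; a minimal sketch (assuming the arptables package is installed):

sudo arptables -P OUTPUT DROP    # drop all ARP frames leaving this host

Packet-socket traffic such as a DHCP client's discover messages generally has to be stopped at the source (for example by disabling the DHCP client on eth0) rather than with ufw/iptables.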

GitLab CI runner with SSH ProxyJump

I have the following settings in my /etc/ssh/ssh_config file:
Host serverA
User idA
Host serverB
User idB
ProxyJump serverA
I’ve also copied the public keys, so if I locally run ssh serverB I’m correctly connected to serverB as idB through serverA.
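For reference, the equivalent one-off command, which exercises the same jump without the config file, is:
ssh -J serverA serverB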
Now, here’s my runner configuration in /etc/gitlab-runner/config.toml:
[[runners]]
name = "ssh-runner-1"
url = "http://my-cicd-server"
token = "xxxxxxxxxxxxxxxx"
executor = "ssh"
[runners.custom_build_dir]
[runners.cache]
[runners.cache.s3]
[runners.cache.gcs]
[runners.cache.azure]
[runners.ssh]
user = "idB"
host = "serverB"
identity_file = "/home/gitlab-runner/.ssh/id_ed25519"
When I run a CI/CD job on this runner I get a « connection refused » error:
ERROR: Preparation failed: ssh command Connect() error: ssh Dial() error: dial tcp xxx.xxx.xxx.xxx:22: connect: connection refused
I conclude that the ProxyJump configuration is not applied, and since the machine with the runner can’t directly connect to serverB, I get denied access.
How can I configure the runner to apply the proxy jump configuration?
The GitLab runner uses a Go-based SSH client. It does not respect your system SSH configuration and does not have the same configurability as the standard ssh client (usually OpenSSH) you typically find installed in operating system distributions.
The Go client does not support the ProxyJump configuration.
Your best bet would probably be to configure a tunneled connection where your entrypoint does not require SSH configuration options that are not supported.
Local port forwarding
One way might be to open a local port-forwarding tunnel, then in your GitLab configuration, specify the host as localhost and port as the forwarded port.
For example:
Open the tunnel -- local port 2222 forwards to port 22 on ServerB via an SSH connection through ServerA:
ssh -L 2222:ServerB:22 -N ServerA
Configure runner to use the tunnel:
...
[runners.ssh]
host = "localhost"
port = 2222
...
With this approach, you may have to write some automation on your server to restore the tunnel connection in the event it is broken. How you might do this depends on your operating system and preferred service manager, or you can use a tool like autossh.
This is basically how the ProxyJump configuration works under the hood.
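For example, a minimal autossh invocation for this tunnel (the -M 0 form disables autossh's monitor port in favor of SSH keepalives, per the autossh man page):

autossh -M 0 -f -N -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" -L 2222:ServerB:22 ServerA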
IP/Port forwarding system
A similar approach would be to have your jump system automatically forward connections to the desired destination. This might be something like a software firewall rule (e.g. using iptables routing rules). That way the forwarding occurs transparently: simply tell the runner to target ServerA and the traffic will be moved to ServerB.
This approach is more reliable, since you won't have to do anything to keep the tunnel alive if it ever drops. However, the configuration is much more complex and requires a static IP for ServerB.
For example, on ServerA, assuming the IP of ServerB is 10.10.10.10 the following iptables configuration could be used:
iptables -t nat -A PREROUTING -p tcp --dport 2222 -j DNAT --to-destination 10.10.10.10:22
iptables -t nat -A POSTROUTING -j MASQUERADE
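Note that the DNAT rule only has an effect if ServerA is permitted to forward packets at all; a minimal prerequisite:

sysctl -w net.ipv4.ip_forward=1    # enable IP forwarding on ServerA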
Then the GitLab runner configuration:
...
[runners.ssh]
host = "ServerA"
port = 2222
...
Lastly, it may also be useful to know that disable_strict_host_key_checking is an undocumented configuration option for the runner as well, in the event you need this.
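For example (hedged, since the option is undocumented and may change between runner versions):

...
[runners.ssh]
disable_strict_host_key_checking = true
...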

Unable to correctly set up a firewall in Mininet with SDN and OpenFlow OVS (UDP version)

I'm experimenting with Mininet on Ubuntu 14.04 in order to create a basic firewall which blocks UDP packets from one host (h1 = 10.0.0.1) to another (h4 = 10.0.0.4).
The hosts are in the same VLAN and on different switches (if that is of any help). I would also like to block UDP packets whose destination port is 5001.
To do so, I have launched an xterm on h1 (in Mininet) to check that the ping works correctly and also to send the packets to h4. xterm h1: "iperf -u -c 10.0.0.4 -p 5001 -i 5 -b 200K -t 360".
In Mininet I have also opened an xterm on h4 to set it up as a server listening on port 5001. xterm h4: "iperf -s -u -p 5001 -i 5".
I guess the rule I have to introduce is this one: "sh ovs-ofctl add-flow s1 udp_dst=5001,nw_proto=17,actions=drop"
But it doesn't work: the packets are still arriving. The ping works fine, but (and here comes the main problem) the packets arrive at the server and they shouldn't.
Any help please?
Thank you very much
Here are screenshots of the network topology and of what appears in the xterm windows.
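(For what it's worth, L4 port matches in ovs-ofctl require their protocol prerequisites, and a drop rule must outrank the forwarding flows; the usual shape of such a rule is a sketch like:

sh ovs-ofctl add-flow s1 "priority=65535,udp,tp_dst=5001,actions=drop"

where "udp" expands to dl_type=0x0800,nw_proto=17.)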

docker docker0 and container broadcast addresses not set

I'm "dockerizing" an app which does UDP broadcast heartbeating on a known port. This is with docker-engine-1.7.0 on a variety of hosts (Fedora, Centos7, SLES 12).
I notice that the 'docker0' bridge on the docker host and 'eth0' inside the container each have a broadcast address of 0.0.0.0.
Assuming admin privilege on the host I can manually set the broadcast address on docker0. Likewise in the container (if the container is running privileged or with NET_ADMIN, NET_BROADCAST), but I'm curious why the broadcast address isn't set by default. Is there a configuration option I'm missing for Docker to do this automatically?
Host:
# ifconfig docker0 broadcast 172.17.255.255 up
# tcpdump -i docker0 port 5000
Container:
# ifconfig eth0 broadcast 172.17.255.255 up
# echo "Hello world" | socat - UDP-DATAGRAM:172.17.255.255:5000,broadcast
Broadcast from the host to the container also works once the broadcast addresses are set.
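A listener on the receiving side can confirm delivery; a minimal sketch using socat (any UDP listener would do):

# print any datagrams arriving on UDP port 5000
socat -u UDP-RECV:5000 STDOUT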
If you are passing NET_ADMIN to the Docker container, I would not use the docker0 network at all for your application.
If I understood correctly what you are trying to do, the UDP broadcast heartbeating on a known port is used by Docker containers that belong to different hosts to find each other, and not by different docker containers in the same host.
I would then recommend to use --net=host:
docker run --net=host --cap-add NET_ADMIN ....
This way, if you get a shell into the Docker container, you will see that the network environment is exactly the same as that of the host running the containers. If your application was previously running on that server using UDP broadcast, it will work exactly the same way in the Docker container.
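For example (the alpine image here is just an illustration):

# run a shell in the host's network namespace, with NET_ADMIN available
docker run --rm -it --net=host --cap-add NET_ADMIN alpine sh
# inside the container, the host's interfaces and broadcast addresses are visible:
ip addr show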

VMs are not pinging when starting the SDN OpenDaylight controller

I am new to SDN/OpenFlow and am trying to set up a lab environment on my PC using VMs. I have successfully installed two VMs and they can ping each other.
One VM has ODL, OVS, and Mininet, and the other has only OVS. (The VM network settings are NAT plus a Host-Only Adapter for both.)
I want to create a topology with Mininet, attach Mininet to the OVS in one VM, connect that OVS to the other VM's OVS, and ping between h1 and h2 (created by Mininet).
In VM1, br0 has IP 192.168.56.103, and in VM2, br0 has IP 192.168.56.102 (which is actually the eth1 IP that I mapped to br0); the controller IP is 127.0.0.1.
I set the controller on both OVS switches (127.0.0.1 for VM1 and 192.168.56.103 for VM2).
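That is, something like the following on each VM (port 6633, the common OpenFlow listen port for ODL at the time, is an assumption here):

ovs-vsctl set-controller br0 tcp:127.0.0.1:6633        # on VM1
ovs-vsctl set-controller br0 tcp:192.168.56.103:6633   # on VM2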
Up to this point everything is OK, but when I start the controller with ./run.sh, VM2 stops pinging, though I can see VM2's OVS in the ODL GUI with a host (192.168.56.XXX).
Please help me with that.
Regards,
Rupak