I am new to Ryu and am trying to set it up with a physical switch connected to a VM on my computer. The switch's controller address is set to 10.0.1.8 and I am trying to make the Ryu controller listen on the same address. I used the following commands:
sudo ovs-vsctl add-br br0
sudo ovs-vsctl add-port br0 eth2
sudo ovs-vsctl set bridge br0 10.0.1.8 protocols=OpenFlow13
Running netstat shows that the Ryu controller is still listening on 0.0.0.0, as shown in the output below. Can someone please assist me here?
Proto Recv-Q Send-Q Local Address    Foreign Address    State    PID/Program name
tcp        0      0 0.0.0.0:6633     0.0.0.0:*          LISTEN
It seems I had to include the --ofp-listen-host parameter and specify the controller IP there, as follows:
PYTHONPATH=. ./bin/ryu-manager --verbose --ofp-listen-host 10.0.1.8 ryu/app/simple_switch.py
The commands I was using earlier apply only to a Mininet topology.
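(For completeness: if you are using a software OVS bridge rather than a physical switch, the bridge has to be pointed at the controller with set-controller rather than set bridge. A minimal sketch, assuming the bridge br0 and the default OpenFlow port 6633:)
sudo ovs-vsctl set bridge br0 protocols=OpenFlow13
sudo ovs-vsctl set-controller br0 tcp:10.0.1.8:6633
sudo ovs-vsctl get-controller br0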
Related
I need to set up a VPN secured via SSH forwarding. How can I do this?
It should work like a SOCKS proxy, but I was not able to route through the gateway via SOCKS.
I tested this in bash and it works well.
After step 4 of the old answer, it works with:
route add -net [server] netmask 255.255.255.255 gw [client's real gateway]
Then:
ssh -NTCf -w 0:0 [server]
ip link set tun0 up
ip addr add 192.168.123.2/32 peer 192.168.123.1 dev tun0
route add -net 192.168.123.0 netmask 255.255.255.0 gw 192.168.123.2
route add default gw 192.168.123.1
Run on the server:
ip link set tun0 up
ip addr add 192.168.123.1/32 peer 192.168.123.2 dev tun0
arp -sD 192.168.123.2 eth0 pub
If the server is to act as the gateway, you need the NAT command:
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
I found this approach with SSH VPN:
This link: https://help.ubuntu.com/community/SSH_VPN
https://superuser.com/questions/202310/ssh-vpn-default-gateway-help
1. Enable IP forwarding on the system:
echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
or
edit /etc/sysctl.conf and
uncomment "net.ipv4.ip_forward=1"
2. Create an SSH key and copy it to the server:
ssh-keygen
ssh-copy-id root@[server IP]
3. Edit /etc/ssh/sshd_config on the server:
add "PermitTunnel yes"
and
change "PermitRootLogin yes"
4. Restart the ssh service (e.g. service ssh restart).
5. Open the tunnel:
ssh -NTCf -w 0:0 [server IP]
6. On the host, bring tun0 up and set its IP:
ip link set tun0 up
ip addr add 10.0.0.100/32 peer 10.0.0.200 dev tun0
7. On the server, bring tun0 up and set its IP:
ip link set tun0 up
ip addr add 10.0.0.200/32 peer 10.0.0.100 dev tun0
8. On the host, add a route to the server via the host's real gateway:
ip route add [server IP]/32 via [host's real gateway]
9. Set the default route on the host:
route add default gw 10.0.0.100
10. On the server, set up iptables:
iptables -P FORWARD ACCEPT
iptables --table nat -A POSTROUTING -o eth0 -j MASQUERADE
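As a quick sanity check (just a sketch, using the 10.0.0.100/10.0.0.200 addresses from above), you can confirm the tunnel works from the host:
ip addr show tun0
ping -c 3 10.0.0.200
ip route get 8.8.8.8
The last command should show the route going out via tun0 once the default route is in place (8.8.8.8 is only an example external address).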
I have an Ubuntu host with two VMs and I am trying to create a bridge between the two VMs. I have a bridge, say br0, and I am trying to create a port, say tap0 and tap1, for each VM. So far I was able to create the bridge, but when I try to create the port, I get the error below.
root@dpdk:~# ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
root@dpdk:~# ovs-vsctl add-port br0 tap1
ovs-vsctl: Error detected while setting up 'tap1'. See ovs-vswitchd log for details.
root@dpdk:~# sudo ovs-vsctl show
4c3a769e-f900-4c8d-81a7-ba685d4e364a
Bridge "br0"
Port "tap1"
Interface "tap1"
error: "could not open network device tap1 (No such device)"
Port "br0"
Interface "br0"
type: internal
ovs_version: "2.5.2"
I am doing this to run a DPDK pktgen application.
You need to create a tap device first.
You can either create it yourself:
$ tunctl -t tap0
$ ip link set tap0 up
$ ovs-vsctl add-port br0 tap0
or let QEMU/KVM create it for you:
$ cat << 'EOF' > /etc/ovs-ifup
#!/bin/sh
switch='br0'
ip link set $1 up
ovs-vsctl add-port ${switch} $1
EOF
$ cat << 'EOF' > /etc/ovs-ifdown
#!/bin/sh
switch='br0'
ip addr flush dev $1
ip link set $1 down
ovs-vsctl del-port ${switch} $1
EOF
$ kvm -m 512 -net nic,macaddr=00:11:22:EE:EE:EE -net \
tap,script=/etc/ovs-ifup,downscript=/etc/ovs-ifdown -drive \
file=/path/to/disk-image,boot=on
(The first two commands create two utility scripts that QEMU/KVM runs to attach and detach the tap interface; remember to make them executable with chmod +x. See Open vSwitch's documentation for details.)
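If tunctl is not available (it ships with the uml-utilities package), the iproute2 equivalent should work as well; a minimal sketch:
$ ip tuntap add dev tap0 mode tap
$ ip link set tap0 up
$ ovs-vsctl add-port br0 tap0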
I'm using OpenStack to help me virtualize my infrastructure.
You can see what my topology looks like here --> My Topology in OpenStack
I am facing issues configuring the two switches.
Here is what I have done (I'm in sudo mode):
1) Install the openvswitch packages:
apt-get install openvswitch-switch
2) Create a bridge named br0:
ovs-vsctl add-br br0
3) Bring the bridge interface up:
ifconfig br0 up
4) Add the physical interface ens4 to the bridge (I'm connected to the switch via SSH through the interface ens3):
ovs-vsctl add-port br0 ens4
5) Remove ens4's IP addressing:
ifconfig ens4 0
6) Give br0 the IP addressing that ens4 had (I take Switch_1 for instance):
ifconfig br0 192.168.1.18
7) Add a default gateway in the routing table:
route add default gw 192.168.1.1 br0
Unfortunately, after all those steps, I'm still unable to ping from Host_1 (whose IP address is 192.168.1.12) to my Switch_1 (whose IP address is 192.168.1.18; the address 192.168.0.30 is used for configuring the switch over an SSH connection) and vice versa.
Any ideas?
Thank you in advance.
P.S.: If the image is not readable, please tell me and I'll make a new one.
I'm assuming those switches represent VMs, basically because in OpenStack you can't create switches.
That being said, due to ARP reasons, you have to change the MAC addresses. Try giving the bridge the same MAC address as ens4 and change the MAC address of ens4. The script should look like this:
NIC="ens4"
MAC=$(ifconfig $NIC | grep "HWaddr\b" | awk '{print $5}')
ovs-vsctl add-br br0 -- set bridge br0 other-config:hwaddr=$MAC
ovs-vsctl add-port br0 $NIC > /dev/null 2>&1
ifconfig $NIC 0.0.0.0
LAST_MAC_CHAR=${MAC:(-1)}
AUX="${MAC:0:${#MAC}-1}"
if [ "$LAST_MAC_CHAR" -eq "$LAST_MAC_CHAR" ] 2>/dev/null; then
NL="a"
else
NL="1"
fi
NEW_MAC="$AUX$NL"
ifconfig $NIC hw ether $NEW_MAC
Also, check you allow ICMP traffic in the security groups of the VMs.
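For that last point, a sketch of allowing ICMP with the openstack CLI (assuming the VMs are in the default security group; adjust the group name to yours):
openstack security group rule create --protocol icmp --ingress default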
I want to run iptables on a Mininet OVS switch.
I do this:
'xterm s1'
'iptables -A INPUT (or FORWARD or OUTPUT) -i s1-eth1 -j DROP' in the s1 terminal.
But it does not work.
When I use iptables on a Mininet host, it works.
How can I run iptables or a different packet filter on the switch?
Please advise.
You cannot use iptables to filter packets here, because Open vSwitch hooks the network device.
All of the packets go through the Open vSwitch datapath (the Open vSwitch kernel module), so they never reach the iptables hooks.
If you want to filter packets in Open vSwitch, set flow entries to control your flow tables.
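For example, a rough sketch of dropping everything that arrives on port 1 of s1 (run from the Mininet host's root shell; the switch name s1 matches the question):
ovs-ofctl add-flow s1 "priority=100,in_port=1,actions=drop"
ovs-ofctl dump-flows s1
The second command just lists the installed flows so you can confirm the entry (and its packet counters) is there.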
We have a number of iptables rules for forwarding connections, which are solid and work well.
For example, port 80 forwards to port 8080 on the same machine (the webserver). When a given webserver is restarting, we forward requests to another IP on port 8080 which displays a Maintenance Page. In most cases, this other IP is on a separate server.
This all worked perfectly until we installed bridge-utils and changed to using a bridge br0 instead of eth0 as the interface.
The reason we have converted to using a bridge interface is to gain access to the MAC SNAT/DNAT capabilities of ebtables. We have no other reason to add a bridge interface on the servers, as they don't actually bridge connections over multiple interfaces.
I know this is a strange reason to add a bridge on the servers, but we are using the MAC SNAT/DNAT capabilities in a new project and ebtables seemed to be the safest, fastest and easiest way to go since we are already so familiar with iptables.
The problem is, since converting to a br0 interface, iptables PREROUTING forwarding to external servers is no longer working.
Internal PREROUTING forwarding works fine (eg: request comes in on port 80, it forwards to port 8080).
The OUTPUT chain also works (eg: we can connect outwards from the box via a local destination IP:8080, and the OUTPUT chain maps it to the Maintenance Server IP on a different server, port 8080, and returns a webpage).
However, any traffic coming into the box seems to die after the PREROUTING rule if the destination IP is external.
Here is an example of our setup:
Old Setup:
iptables -t nat -A PREROUTING -p tcp --dport 9080 -j DNAT --to-destination $MAINTIP:8080
iptables -A FORWARD --in-interface eth0 -j ACCEPT
iptables -t nat -A POSTROUTING --out-interface eth0 -j MASQUERADE
echo 1 > /proc/sys/net/ipv4/ip_forward
New Setup: (old setup in various formats tried as well..., trying to log eth0 and br0 packets)
iptables -t nat -A PREROUTING -p tcp --dport 9080 -j DNAT --to-destination $MAINTIP:8080
iptables -A FORWARD --in-interface br0 -j ACCEPT
iptables -t nat -A POSTROUTING --out-interface br0 -j MASQUERADE
echo 1 > /proc/sys/net/ipv4/ip_forward
Before changing to br0, the client request would go to server A at port 9080, and then be MASQUERADED off to a different server $MAINTIP.
As explained above, this works fine if $MAINTIP is on the same machine, but if it's on another server, the packet is never sent to $MAINTIP under the new br0 setup.
We want the packets to go out the same interface they came in on, MASQUERADED, as they did before we switched to using a single-NIC bridge (br0/bridge-utils).
I've tried adding logging at all stages in iptables. For some reason the iptables TRACE target doesn't work on this setup, so I can't get a TRACE log, but the packet shows up in the PREROUTING table and then seems to be silently dropped after that.
I've gone through this excellent document and have a better understanding of the flow between iptables and ebtables:
http://ebtables.sourceforge.net/br_fw_ia/br_fw_ia.html
From my understanding, it seems that the bridge is not forwarding the packets out the same interface they came in, and is dropping them. If we had a second interface added, I imagine it would be forwarding them out on that interface (the other side of the bridge) - which is the way bridges are meant to work ;-)
Is it possible to make this work the way we want it to, and PREROUTE/FORWARD those packets out over the same interface they came in on like we used to?
I'm hoping there are some ebtables rules which can work in conjunction with the iptables PREROUTING/FORWARD/POSTROUTING rules to make iptables forwarding work the way it usually does, and to send the routed packets out br0 (eth0) instead of dropping them.
Comments, flames, any and all advice welcome!
Best Regards,
Neale
I guess you did, but just to be sure, did you add eth0 to the bridge?
Although I am not sure what the problem is, I will give some debugging tips which might assist you or others when debugging bridge/ebtables/iptables issues:
Make sure that "/proc/sys/net/bridge/bridge-nf-call-iptables" is enabled (1); see the sketch at the end of this answer.
This causes bridge traffic to go through the netfilter iptables code.
Note that this could affect performance.
Check for packet interception by the ebtables/iptables rules.
Use the commands:
iptables -t nat -L -n -v
ebtables -t nat -L --Lc
This might help you to understand if traffic is matched and intercepted or not.
Check that IP NAT traffic appears in the conntrack table:
conntrack -L (if installed)
Or
cat /proc/net/nf_conntrack
Check MAC learning of the bridge
brctl showmacs br0
Use tcpdump on eth0 and on br0 to check whether packets are seen at both as expected.
Use the -e option to see MAC addresses as well.
For debugging, try putting the bridge interface in promiscuous mode; maybe the bridge is receiving packets with a different MAC address (in promiscuous mode it will accept those as well).
Maybe set bridge forward delay to 0
brctl setfd br0 0
And disable stp (spanning tree protocol)
brctl stp br0 off
What does your routing table look like?
Try adding a specific or default route rule with "dev br0".
I hope it helps a bit…
Good luck
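For the first tip above (bridge-nf-call-iptables), a quick sketch of enabling it now and keeping it across reboots:
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.conf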
Well, this is only 1.5 years old but it could be useful for later lookups. Looking at your link just now, it says specifically that brouting will ignore the packet if the MAC is on the same side of the bridge, and not on another port or the bridge itself (see fig 2.b in your link).
Adding to that, I simply quote from this link: http://wiki.squid-cache.org/ConfigExamples/Intercept/LinuxBridge
"... ebtables DROP vs iptables DROP
In iptables which in most cases is being used to filter network traffic the DROP target means "packet disappear".
In ebtables a "-j redirect --redirect-target DROP" means "packet be gone from the bridge into the upper layers of the kernel such as routing\forwarding" (<-- relevant bit!)
Since the ebtables works in the link layer of the connection in order to intercept the connection we must "redirect" the traffic to the level which iptables will be able to intercept\tproxy.
And therein is your answer (but added for future visitors of course, unless you are still at it :p).
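To make the quoted bit concrete, a sketch of the kind of brouting rule that link uses to pull bridged traffic up into the routing/iptables layer (shown here for TCP port 9080, the port from the question; treat it as an illustration, not a tested rule for your setup):
ebtables -t broute -A BROUTING -p IPv4 --ip-protocol tcp --ip-destination-port 9080 -j redirect --redirect-target DROP
Once a packet has been "dropped" out of the bridge path this way, your existing iptables PREROUTING/FORWARD/POSTROUTING rules get a chance to DNAT and masquerade it as they did before the bridge was added.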