How to make Mininet with a RYU SDN controller work properly with iperf traffic? - sdn

When I generate iperf UDP traffic on a linear topology with 6 switches in Mininet connected to a RYU controller, I get a lot of packet-in messages: the same switch sends packet-in messages for the same traffic more than once. Why is this happening, and how can I solve it?
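For reference, the setup described here can be reproduced roughly like this with the Mininet Python API; the controller address, the RYU application, and the iperf parameters are assumptions, not details from the question:

# Minimal sketch: linear topology with 6 switches, remote RYU controller, UDP iperf.
# Assumes a RYU app (e.g. ryu-manager ryu.app.simple_switch_13) is listening on 127.0.0.1:6633.
from mininet.net import Mininet
from mininet.node import RemoteController
from mininet.topo import LinearTopo

net = Mininet(topo=LinearTopo(k=6), controller=None)
net.addController('c0', controller=RemoteController, ip='127.0.0.1', port=6633)
net.start()
h1, h6 = net.get('h1', 'h6')                              # LinearTopo's default host names
h6.cmd('iperf -u -s &')                                   # UDP server on the last host
print(h1.cmd('iperf -u -c %s -b 10M -t 10' % h6.IP()))    # UDP client, 10 Mbit/s for 10 s
net.stop()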

Related

Delay of incoming network packets on Linux - How to analyse?

The problem is: Sometimes tcpdump sees that the reception of a UDP packet is held back until the next incoming UDP packet, although the network tap device shows it going through the cable without delay.
Scenario: My Profinet stack on Linux (located in user space) has a cyclic connection over which it receives and sends Profinet protocol packets every 4 ms (via raw sockets). About every 30 ms it also receives UDP packets in another thread on a UDP socket and replies to them immediately, according to that protocol. CPU load is around 10%. Sometimes such received UDP packets seem to be stuck in the network driver. After 2 seconds the next UDP packet comes in, and both the missed UDP packet and the next one are received. There are no dropped packets.
My measurement setup:
I use tcpdump -i eth0 --time-stamp-precision=nano --time-stamp-type=adapter_unsynced -w /tmp/tcpdump.pcap to record the UDP traffic to a RAM disk file.
At the same time I use a network tap device to record the traffic.
Question:
1. How to find out where the delay comes from (or is it a known effect)?
2. What does the timestamp that tcpdump sets on each packet tell me? I mean, which OSI layer does it refer to; in other words, when is it taken?
Topology: "embedded device with Linux and eth0" <---> tap device <---> PLC. The program tcpdump is running on the embedded device. The tap device is listening on the cable. The actual Profinet connection is between the PLC and the embedded device. A PC is connected to the tap device to record what it is listening to.
Wireshark (via tap and tcpdump): see here (packet no. 3189 in tcpdump.pcap)
It was a bug in the Freescale Fast Ethernet driver (fec_main.c), which NXP's awesome support has now fixed.
The actual answer to the question "How to find out where the delay comes from?" is: build a Linux kernel with tracing enabled, patch the driver code with tracepoints, and then analyse the trace with the Linux developer tool trace-cmd. It's a very complicated thing, but I'm very happy it is fixed now:
trace-cmd record -o /tmp/trace.dat -p function -l fec_enet_interrupt -l fec_enet_rx_napi -e 'fec:fec_rx_tp' tcpdump -i eth0 --time-stamp-precision=nano --time-stamp-type=adapter_unsynced -w /tmp/tcpdump.pcap

Fragmented UDP packet loss?

We have an application doing UDP broadcast.
The packets are mostly larger than the MTU, so they get fragmented.
tcpdump says the packets are all being received but the application doesn't get them all.
The whole thing doesn't happen at all if the MTU is set large enough that there is no fragmentation. (This is our workaround right now - but Germans don't like workarounds.)
So it looks like fragmentation is the problem.
But I am not able to understand why and where the packets get lost.
The app developers say they can see the packet loss right at the socket where they pick the packets up, so their application isn't the one losing them.
My questions are:
Where in the Linux network stack does tcpdump monitor the device?
Are the packets already reassembled at that point, or is reassembly done later?
How can I debug this issue further?
tcpdump uses libpcap which gets copies of packets very early in the Linux network stack. IP fragment reassembly in the Linux network stack would happen after libpcap (and therefore after tcpdump). Save the pcap and view with Wireshark; it will have better analysis features and will help you find any missing IP fragments (if there are any).
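One way to follow up on this (my own suggestion, not part of the answer above): since libpcap sees the individual fragments before reassembly, the kernel's IP reassembly counters in /proc/net/snmp show whether the reassembly step itself is failing; a minimal sketch to read them:

# Read the kernel's IP counters; rising ReasmFails/ReasmTimeout while reproducing the
# problem points at fragment reassembly as the place where the datagrams are lost.
def ip_counters(path='/proc/net/snmp'):
    with open(path) as f:
        lines = [line.split() for line in f if line.startswith('Ip:')]
    names, values = lines[0][1:], lines[1][1:]    # first Ip: line = field names, second = values
    return dict(zip(names, map(int, values)))

counters = ip_counters()
for key in ('ReasmReqds', 'ReasmOKs', 'ReasmFails', 'ReasmTimeout'):
    print(key, counters.get(key))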

Inaccurate packet counter in Open vSwitch

I attempted to send a file from host A to host B and measure the packet loss using Open vSwitch. I connected hosts A and B to two separate Open vSwitch VMs and connected the two Open vSwitch VMs to each other. The topology looks like this:
A -- OVS_A -- OVS_B -- B
On each Open vSwitch VM, I added two very simple flows using the commands below:
ovs-ofctl add-flow br0 in_port=1,actions=output:2
ovs-ofctl add-flow br0 in_port=2,actions=output:1
Then I sent a 10 GB file from A to B and compared the packet count of the egress flow on the sending switch with that of the ingress flow on the receiving switch. I found that the packet count on the receiving switch is much larger than the count on the sending switch, indicating that more packets were received than sent!
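For completeness, this is roughly how such a comparison can be scripted; the bridge name br0 comes from the commands above, while the parsing of the ovs-ofctl dump-flows output is my own assumption about its format:

# Collect the n_packets counter of each flow on a bridge; run this on both switches
# and compare the counts for the matching in_port.
import re, subprocess

def flow_packet_counts(bridge='br0'):
    out = subprocess.check_output(['ovs-ofctl', 'dump-flows', bridge]).decode()
    counts = {}
    for line in out.splitlines():
        match = re.search(r'n_packets=(\d+).*(in_port=\d+)', line)
        if match:
            counts[match.group(2)] = int(match.group(1))
    return counts

print(flow_packet_counts('br0'))    # e.g. {'in_port=1': 123456, 'in_port=2': 120001}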
I tried matching more specific flows, e.g. a TCP flow from IP A.A.A.A to B.B.B.B on port C, and got the same result. Is there anything wrong with my settings, or is this a known bug in Open vSwitch? Any ideas?
By the way, is there any other way to passively measure the packet loss rate? Meaning measuring the loss rate without introducing any intrusive test flows, simply using the statistics available on the sending/receiving ends or the switches.
Thanks in advance!
I just realized that it was not Open vSwitch's fault. I tested with a UDP stream and the packet count was correct. I also used tcpdump to capture inbound TCP packets on the switches, and the switch at the receiving end saw more packets than the one at the sending end. This result is consistent with what Open vSwitch's flow counters reported. I guess I must have missed something important about TCP.

About ActiveMQ networks of brokers: what's the difference between multicast discovery and a fixed list of URIs?

http://activemq.apache.org/networks-of-brokers.html
I'm trying out an ActiveMQ network of brokers, following the article above.
It all works fine with a fixed list of URIs.
But I have a problem with multicast discovery: the network bridge between two ActiveMQ brokers on the same machine can be started, but the bridge cannot be established between different machines (I tried telnet, and that works).
I don't know which part went wrong. So I want to ask: do these two kinds of network differ only in configuration?
Telnet proves that unicast networking is working; multicast may require additional configuration in your network.
Are those machines in the same subnet?
Is there a router or Layer 3 switch between them? (If yes, it would then need to be configured.)
You could use iperf to test multicast connectivity; the "Generating multicast traffic" article explains how to do that.
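If iperf isn't available on those machines, a plain-socket test gives the same answer. This is my own sketch, not from the original answer; the group address, port, and TTL are arbitrary choices. Run it with the argument recv on one machine and with no argument on the other:

# Tiny multicast connectivity check: one side joins the group and waits for a
# datagram, the other side sends one to the same group/port.
import socket, struct, sys

GROUP, PORT = '239.255.0.1', 5000

if sys.argv[1:] == ['recv']:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(('', PORT))
    mreq = struct.pack('4sl', socket.inet_aton(GROUP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    print(sock.recvfrom(1024))                   # blocks until a multicast datagram arrives
else:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 2)   # allow one hop over a router
    sock.sendto(b'hello', (GROUP, PORT))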

ping command with UDP client-server

I am confused about the usage of the ping command in Mininet. When I implement a UDP server and client and run them in Mininet, do I have to use the ping command to measure packet loss, delay, etc.? Or is the ping command not used for measuring the statistics of a UDP server/client?
Are you asking how to implement your own ping?
Ping is simply a tool that uses ICMP datagrams to measure point-to-point network latencies and other such things.
https://www.ietf.org/rfc/rfc792.txt
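Since ping measures ICMP round trips rather than what a UDP application experiences, one option is to build the loss/delay measurement into the UDP client and server themselves. A minimal sketch of that idea (my own addition; the address, port, packet count, and timeout are assumptions):

# UDP echo server plus a client that numbers its datagrams, times each round trip,
# and counts replies that never arrive within the timeout.
import socket, sys, time

HOST, PORT, COUNT, TIMEOUT = '10.0.0.2', 9999, 100, 1.0   # assumed server address and parameters

if sys.argv[1:] == ['server']:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(('', PORT))
    while True:
        data, addr = sock.recvfrom(1024)
        sock.sendto(data, addr)                  # echo every datagram back to its sender
else:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(TIMEOUT)
    lost, rtts = 0, []
    for i in range(COUNT):
        start = time.time()
        sock.sendto(str(i).encode(), (HOST, PORT))
        try:
            sock.recvfrom(1024)
            rtts.append((time.time() - start) * 1000.0)
        except socket.timeout:
            lost += 1
    print('lost %d/%d, avg RTT %.2f ms' % (lost, COUNT, sum(rtts) / max(len(rtts), 1)))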