Get packet loss from OpenFlow switch - openflow

I am using the Ryu controller (3.22) to monitor switches (Open vSwitch 2.0.2, supporting OpenFlow 1.3) that are part of a virtual network created using Mininet (2.1.0). It is a tree topology with depth = 2 and fanout = 5. I am using switch_monitor.py.
With the help of the controller, I can get port statistics using the EventOFPPortStatsReply decorator. I can get values of rx_packets, rx_bytes, rx_errors, tx_packets, tx_bytes, tx_errors, rx_dropped, tx_dropped, etc.
But the values of rx_dropped and tx_dropped always come out to be zero, even when the switches are actually dropping packets, as reported by qdisc (Linux command).
How do I get packet loss statistics from an OpenFlow switch?
a. How do I get a non-zero value?
b. Is there any alternative way?

qdisc reports what the kernel is dropping, not what the network is dropping. You're getting zeros because the switch isn't dropping frames.
(I don't know if your virtual network system supports simulating frame drops.)

I believe the dropped counters only count packets that are dropped due to actual drop rules or buffer overflows.
Another way to calculate packet loss is to compare the packet counts for the two switches on the edge of a link. Suppose you have A <--> B and want to calculate the packet loss rate from A to B. Then you take:
plr(A,B) = (tx_packets(A) - rx_packets(B)) / tx_packets(A)
Beware though that sometimes the counters are reset, leading to rx_packets being higher than tx_packets. I see this behavior in my SDN software and tend to invalidate the results if the combinations look strange.
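For illustration, here is a minimal sketch of that counter comparison as a Ryu app. It assumes port-stats requests are already being sent periodically, as switch_monitor.py does; the LINKS table mapping a transmitting (dpid, port) to the receiving (dpid, port) on the far end is hypothetical and would normally come from topology discovery:

from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

# hypothetical link map: (tx-side dpid, port) -> (rx-side dpid, port)
LINKS = {(1, 2): (2, 1)}

class LossMonitor(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    def __init__(self, *args, **kwargs):
        super(LossMonitor, self).__init__(*args, **kwargs)
        self.counters = {}  # (dpid, port_no) -> {'tx': ..., 'rx': ...}

    @set_ev_cls(ofp_event.EventOFPPortStatsReply, MAIN_DISPATCHER)
    def _port_stats_reply_handler(self, ev):
        dpid = ev.msg.datapath.id
        for stat in ev.msg.body:
            self.counters[(dpid, stat.port_no)] = {'tx': stat.tx_packets,
                                                   'rx': stat.rx_packets}
        self._report_loss()

    def _report_loss(self):
        for a, b in LINKS.items():
            if a in self.counters and b in self.counters:
                tx = self.counters[a]['tx']
                rx = self.counters[b]['rx']
                if tx > 0 and tx >= rx:  # skip samples where a counter was reset
                    plr = float(tx - rx) / tx
                    self.logger.info('link %s -> %s loss ratio %.4f', a, b, plr)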

Related

Validating ADC data through USB

In our application we are sending ADC data (240 bytes) to a host computer over USB at full speed and using a serial application (teraterm/minicom/Docklet) to validate the data, but we are facing data loss.
We cannot tell where the issue is: is the serial application unable to handle the incoming data, or are there any limitations on the controller side operating at USB full speed?
Microcontroller - NRF52840
USB class - CDC ACM
Best regards
Sagar
Suggest you entirely disable (temporarily) the ADC function and only have the microcontroller send a counting sequence to verify the known pattern is transferred without loss to the PC side. If the pattern is detected without loss, then re-enable the ADC function, but still only send the counting sequence and test again. If data is missing, then the problem is most likely the ADC function is causing a timing condition (such as blocking the CPU too long).
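As an illustration of that PC-side check, here is a minimal sketch assuming pyserial and an 8-bit incrementing counting sequence; the device node /dev/ttyACM0 and the byte budget are assumptions, not values from the question:

import serial  # pyserial

PORT = "/dev/ttyACM0"  # hypothetical device node for the nRF52840 CDC ACM port
BAUD = 115200          # ignored by CDC ACM, but required by pyserial

def check_counting_sequence(num_bytes=1000000):
    """Read bytes that should increment modulo 256 and count sequence breaks."""
    expected = None
    errors = 0
    received = 0
    with serial.Serial(PORT, BAUD, timeout=5) as ser:
        while received < num_bytes:
            chunk = ser.read(4096)
            if not chunk:
                break  # timeout: the device stopped sending
            for b in chunk:
                if expected is not None and b != expected:
                    errors += 1
                expected = (b + 1) % 256
                received += 1
    print("received %d bytes, %d sequence breaks" % (received, errors))

if __name__ == "__main__":
    check_counting_sequence()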

Constant carrier digital transmission in GNURadio with USRP

I'm trying to implement the UPLINK of a ground station controlling a small satellite. The idea is that the link should always stay active between transmitted telecommands. For this, I need to insert some DUMMY or IDLE sequence bytes such as 0xAA or similar.
I have found that some people already faced a similar issue and posted their questions here:
https://www.ruby-forum.com/t/constant-carrier-digital-transmission/163379
https://lists.gnu.org/archive/html/discuss-gnuradio/2016-08/msg00148.html
So far, the best I could achieve was to modify the EventStream Source block from https://github.com/osh/gr-eventstream in order to preload the vectors with my dummy sequence (i.e. 0xAA) instead of preloading them with zeroes. This is a general overview of the GNURadio graph I'm using:
[GNU Radio flowgraph picture]
This solution, however, introduces a huge latency: the sent message does not appear at the output until a long time has passed (on the order of several seconds).
Is there a way of programming the USRP using GNU Radio so that it constantly sends a fixed sequence which is only interrupted when an incoming message is passed? I assume the USRP has the ability to read tagged streams in order to schedule transmissions; however, I'm not sure how to fit this into my specific application.
Thanks beforehand!
Joa
I believe this could be done using a TCP or UDP source block.
Your control information could be sent to the socket using TCP/UDP. GNU Radio would then collect and transmit the packets. Your master control program would then have to handle the IDLE stuffing, but solving the problem outside GNU Radio is easier.
Your master control program would basically do the following:
1. transmit control data as needed
2. if no control data is ready before the next packet must be sent, send an IDLE packet
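A minimal sketch of such a master control program, assuming GNU Radio listens on a UDP Source block; the address, port, packet size, IDLE byte and timing below are assumptions, not values from the question:

import queue
import socket
import time

GNURADIO_ADDR = ("127.0.0.1", 52001)  # hypothetical UDP Source endpoint
PACKET_LEN = 64        # fixed payload size expected downstream
IDLE_BYTE = 0xAA       # filler pattern that keeps the carrier busy
PACKET_PERIOD = 0.01   # seconds between packets on the uplink

def run(control_queue):
    """Every PACKET_PERIOD, send control data if available, otherwise an IDLE packet."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    idle_packet = bytes([IDLE_BYTE]) * PACKET_LEN
    while True:
        try:
            payload = control_queue.get_nowait()
            payload = payload.ljust(PACKET_LEN, bytes([IDLE_BYTE]))[:PACKET_LEN]
        except queue.Empty:
            payload = idle_packet  # no telecommand pending: stuff with IDLE bytes
        sock.sendto(payload, GNURADIO_ADDR)
        time.sleep(PACKET_PERIOD)

A separate thread of the control program would put each outgoing telecommand into control_queue as soon as it is ready.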

CAN bus arbitration backoff time

I am aware of the way the CAN bus does its arbitration. In a nutshell, the node whose identifier has a dominant bit ('0') at the first position where the contending identifiers differ (i.e. the numerically lowest identifier) wins the right to transmit on the bus, and the rest of the contending nodes back off.
But I can't find any details of how long the backed-off node waits before retrying to win the bus. I consulted a few sources but still can't find the answer. Is there any experimental evidence for this?
Bosch CAN
Introduction to the Controller Area Network
It is free to try again after the winning frame has been transmitted and no dominant bit has been found in the "intermission field" at the end of the CAN frame. You'll probably find a formal definition of this if you search the spec for "intermission field", see for example 3.1.5 of the old (obsolete) Bosch spec you linked.
The important part here is to realize that every CAN controller listens to every single frame, even if it isn't interested in it. This is how you achieve collision avoidance, rather than collision detection.
As mentioned in the Bosch CAN specification, all CAN nodes can start to send pending frames when the bus idle condition occurs (no dominant bit found on the bus). During the intermission period of the interframe spacing no node can transmit (overload frames can be transmitted, but not data or remote frames). CAN nodes must wait for 3 recessive bits during this period. All nodes can start transmitting right after this intermission period.
If multiple nodes start at once after the intermission period, the frame with the lowest identifier wins arbitration. If a remote frame and a data frame with the same identifier are started by different nodes, the data frame wins arbitration.
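To make the arbitration rule concrete, here is a small illustrative sketch (plain Python, not real CAN hardware behaviour) of the wired-AND bus during the identifier field: a dominant bit (0) overrides a recessive bit (1), and a node that sends a recessive bit but reads back a dominant one drops out until the next intermission:

def arbitrate(identifiers):
    """Return the identifier that wins arbitration (the numerically lowest one)."""
    contenders = list(identifiers)
    for bit in range(10, -1, -1):  # 11-bit standard identifier, MSB first
        bus_level = min((i >> bit) & 1 for i in contenders)  # wired-AND: 0 is dominant
        # nodes whose transmitted bit differs from the bus level lose and back off
        contenders = [i for i in contenders if (i >> bit) & 1 == bus_level]
        if len(contenders) == 1:
            break
    return contenders[0]

print(hex(arbitrate([0x123, 0x124, 0x0F0])))  # prints 0xf0: the lowest identifier wins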
I agree with the answers above, but I was looking for a more mathematical analysis of CAN bus timings. I found these excellent lecture notes: Time analysis of CAN messages, Chapter 3.

Losing data with UDP over WiFi when multicasting

I'm currently working on a network protocol which includes a client-to-client system with auto-discovery of clients on the current local network.
Right now, I'm periodically broadcasting over 255.255.255.255, and if a client doesn't emit anything for 30 seconds I consider it dead (and therefore offline). The goal is to keep an up-to-date list of running clients. It works well using UDP, but UDP does not ensure that the packets have been successfully delivered. So when it comes to the WiFi parts of the network, I sometimes get "false positives" of dead clients. Currently I've reduced the time between two broadcasts to mitigate the issue (it still doesn't work well), but I don't find this clean.
Is there anything I can do to keep a list of "online" clients without this risk of "false positives" ?
To minimize false positives due to dropped packets, you should slightly alter the logic of your heartbeat protocol.
Rather than relying on a single packet broadcast every N seconds, you can send a burst of 3 or more packets immediately one after the other every N seconds. This is the approach that ping and traceroute follow. With this method you significantly decrease the probability of losing an announcement from a peer.
Furthermore, you can specify a certain number of lost announcements that your application can tolerate. Also, to minimize the chance of packet loss on the wireless network, keep the broadcast UDP packet as small as possible.
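For illustration, a minimal sketch of such a burst sender, assuming plain UDP broadcast; the port, burst size, period and payload are assumptions:

import socket
import time

BROADCAST_ADDR = ("255.255.255.255", 50000)  # hypothetical discovery port
BURST = 3       # packets per announcement, like ping/traceroute probes
PERIOD = 10.0   # seconds between announcement bursts
PAYLOAD = b"HB:client-42"  # keep the payload small to reduce loss on WiFi

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)

while True:
    for _ in range(BURST):
        sock.sendto(PAYLOAD, BROADCAST_ADDR)  # losing one or two of the burst is harmless
    time.sleep(PERIOD)

On the receiving side, only mark a peer dead after several consecutive periods with no packet from it at all.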
You can turn this around: the server broadcasts a "ServerIsUp" message, and every client can then register with the server. When a client is going offline it unregisters; otherwise you can consider it alive.

multicast packet loss - running two instances of the same application

On Red Hat Linux, I have a multicast listener listening to a very busy multicast data source. It runs perfectly by itself, with no packet loss. However, once I start a second instance of the same application with exactly the same settings (same src/dst IP address, socket buffer size, user buffer size, etc.), I start to see very frequent packet losses from both instances, and they lose exactly the same packets. If I stop one of the instances, the remaining one returns to normal without any packet loss.
Initially, I thought it was a CPU/kernel load issue: maybe it could not get the packets out of the buffer quickly enough. So I did another test: I kept one instance of the application running, but started a totally different multicast listener on the same computer using a second NIC, listening to a different but even busier multicast source. Both applications ran fine without any packet loss.
So it looks like one NIC is not powerful enough to support two multicast applications, even though they listen to exactly the same thing. A possible cause of the packet loss might be that, in this scenario, the NIC driver needs to copy the incoming data into two socket buffers, and this extra copy is too much to handle, so packets are dropped. Any deeper analysis of this issue, and any possible solutions?
Thanks
You are basically finding out that the kernel is inefficient at fan-out of multicast packets. In the worst case, for every incoming packet the code allocates two new buffers (the SKB object and the packet payload) and copies the NIC buffer twice.
In the best case, for every incoming packet a new SKB is allocated but the packet payload is shared between the two sockets with reference counting. Now imagine two applications, each on its own core with its own socket. Every reference to the packet payload causes the memory bus to stall while both cores' caches flush and reload, and on top of that each application has to context switch into and out of the kernel to receive the socket payload. The result is terrible performance.
You aren't the first to encounter such a problem and many vendors have created solutions to it. The basic design is to limit the incoming data to one thread on one core on one socket, then have that thread distribute the data to all other interested threads, preferably using user space code building upon shared memory and lockless data structures.
Examples are TIBCO's Rendezvous and 29 West's Ultra Messaging showing a 660ns IPC bus:
http://www.globenewswire.com/newsroom/news.html?d=194703
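For illustration, here is a minimal sketch of the single-receiver fan-out pattern described above, in Python for clarity (a real low-latency system would use shared memory and lock-free structures); the multicast group, port and consumer count are assumptions:

import queue
import socket
import struct
import threading

GROUP, PORT = "239.1.1.1", 5007  # hypothetical multicast group and port
NUM_CONSUMERS = 2

def consumer(idx, q):
    while True:
        data = q.get()
        # ... application-specific processing of one datagram ...

def receiver(out_queues):
    """One socket joins the group once; every datagram is handed to all consumers."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    while True:
        data, _ = sock.recvfrom(65535)
        for q in out_queues:  # fan out in user space rather than in the kernel
            q.put(data)

if __name__ == "__main__":
    queues = [queue.Queue() for _ in range(NUM_CONSUMERS)]
    for i, q in enumerate(queues):
        threading.Thread(target=consumer, args=(i, q), daemon=True).start()
    receiver(queues)  # blocks; a single receive path feeds every consumer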