Receive Drops and Receive Overrun Errors counters on an OpenFlow port

In the OpenFlow 1.3.3 spec there are Receive Dropped and Receive Overrun Errors counters for a port. Under what conditions do these two counters increment?
Thanks

Receive Overrun Errors: this counter is related to load on the COM port and the CPU's ability to service the COM port interrupts (ie. CPU load and interrupt priorities).
Receive Dropped: I think this increments when a switch receives a packet but then drops it. In other words, it receives a packet but does not forward it out of any of its ports.
Edit:
Here is what I think:
Note that there are two Received Packets counters.
Received Packets per flow entry: incremented when a packet is received and matched to that flow.
Received Packets per port: this one doesn't care which flow the packet will match. If a packet is received on a port, the counter is incremented.
Receive Dropped is per port, so Receive Dropped should always be less than or equal to Received Packets.


What happens to client message if Server does not exist in UDP Socket programming?

I ran client.java, but when I filled in the form and pressed the Send button, it jammed and I could not do anything.
Is there any explanation for this?
TL;DR: the User Datagram Protocol (UDP) is "fire-and-forget".
Unreliable – when a UDP message is sent, there is no way to know whether it will reach its destination; it could get lost along the way. There is no concept of acknowledgment, retransmission, or timeout.
So if a UDP message is sent and nobody listens then the packet is just dropped. (UDP packets can also be silently dropped due to other network issues/congestion.)
While there could be a prior error, such as resolving the IP for the server (eg. an invalid hostname) or attempting to use an invalid IP, once the UDP packet is out the door, it's out the door and is considered "successfully sent".
Now, if a program is waiting on a response that never comes (ie. the server is down or packet was "otherwise lost") then that could be .. problematic.
That is, this code which requires a UDP response message to continue would "hang":
sendUDPToServerThatNeverResponds();
// There is no guarantee the server will get the UDP message,
// much less that it will send a reply or the reply will get back
// to the client..
waitForUDPReplyFromServerThatWillNeverCome();
Since UDP has no reliability guarantee or retry mechanism, this must be handled in code. For example, the code might wait one second for a reply, retry the send, and after five attempts with no response report an error to the user.
i = 0;
reply = null;
while (i++ < 5) {
    sendUDPToServerThatMayOrMayNotRespond();
    reply = waitForUDPReplyForOneSecond();
    if (reply)
        break;
}
if (reply)
    doSomethingAwesome();
else
    showErrorToUser();
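The retry idea above can be sketched as a runnable example. This is Python rather than Java for brevity, and `request_with_retry` is a hypothetical helper name; the point is the timeout-plus-resend loop, not any particular API.

```python
import socket

def request_with_retry(server_addr, payload, attempts=5, timeout=1.0):
    """Send a UDP request and wait for a reply, retrying on timeout.

    server_addr is a (host, port) tuple. UDP gives no delivery
    guarantee, so we resend a few times and give up with None.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)          # each recvfrom() waits at most this long
    try:
        for _ in range(attempts):
            sock.sendto(payload, server_addr)
            try:
                reply, _ = sock.recvfrom(4096)
                return reply          # got an answer
            except socket.timeout:
                continue              # no reply in time; resend and try again
        return None                   # server never answered
    finally:
        sock.close()
```

The caller then checks for `None` and shows the error to the user, instead of hanging forever on a reply that may never come.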
Of course, "just using TCP" can often make these sorts of tasks simpler due to the stream and reliability characteristics that the Transmission Control Protocol (TCP) provides. For example, the pseudo-code above is not very robust, as the client must also be prepared to handle latent/slow UDP packet arrival from previous requests.
(Also, given the current "screenshot", the code might be as flawed as while(true) {} - make sure to provide an SSCCE and relevant code with questions.)

What is the correct method to receive UDP data from several clients synchronously?

I have 1 server and several (maybe up to 20) clients. All clients are sending UDP datagram at random time. Each datagram is quite short (about 10B), but I must make sure all the data from each client is received correctly.
If I let all clients send datagrams to the same port, and client B sends its datagram at the exact time when the server is receiving data from client A, it seems the server will miss the data from client A.
So what's the correct method to do this job? Do I need to create a listener for each of the 20 clients?
When you bind a UDP socket to a port, the networking stack will allocate a buffer for a finite number of incoming UDP packets for you, so that (assuming you call recv() in a relatively timely manner), no incoming packets should get lost.
If you want to see your default buffer sizes in a terminal, you can take a look at:
/proc/sys/net/core/rmem_default for receive
and
/proc/sys/net/core/wmem_default for send
I think the default buffer size on Linux is 131071 bytes.
On Linux, you can change the UDP buffer size (e.g. to 26214400) by (as root):
sysctl -w net.core.rmem_max=26214400
You can also make it permanent by adding this line to /etc/sysctl.conf:
net.core.rmem_max=26214400
Since each packet is only about 10 bytes, buffer space shouldn't be a problem.
If you are still worried about packet loss, you could implement a protocol where your client waits for an ACK from the server and resends otherwise. Many protocols use such a feature, but this is only possible if timing allows it. For example, with streaming data it is not useful because there is no time to resend.
Or consider using TCP (if it is an option).
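A minimal sketch (in Python, with a hypothetical helper name) of the single-socket approach: one bound UDP socket serves all 20 clients, the OS queues datagrams that arrive while the server is busy, and `recvfrom()` reports which client each one came from, so no per-client listener is needed.

```python
import socket

def collect_datagrams(sock, expected):
    """Receive `expected` datagrams from one socket, grouped by sender.

    recvfrom() returns the sender's address along with the data, so a
    single socket can tell the clients apart (and route ACKs back).
    """
    by_client = {}
    for _ in range(expected):
        data, client_addr = sock.recvfrom(1024)   # blocks until a packet is queued
        by_client.setdefault(client_addr, []).append(data)
    return by_client
```

Note that in a test where several clients send before the server ever calls `recvfrom()`, all the datagrams still arrive: they sit in the socket's receive buffer until read, which is exactly why simultaneous sends are not normally lost.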

Using sendto() and recvfrom() to deliver datagram

If the sender uses sendto() a few times, and the receiver uses recvfrom() in a while loop, will the 2nd datagram get lost if it arrives before the receiver finishes processing the 1st one?
It depends on your architecture and driver.
Packets that arrive at the receiver are normally placed in the driver's receive queue and processed accordingly, e.g. passed on to the IP layer's queue (depending on the architecture). If there is an overflow there, the packet is dropped at the driver or IP layer itself. If the packet makes it past those layers but the receive socket buffer (the space for UDP data queued on the UDP socket) is full, it will be dropped; otherwise it will not.
netstat is a handy tool in such scenarios. netstat -s lists statistics per protocol; on some platforms the -p option can restrict the output to a specific protocol.
If you want proper synchronization between sender and receiver, it is feasible only if the receiver informs the sender by some means (some kind of flow-control mechanism). The UDP protocol does not support this. If you are looking for such synchronization, you need to select the TCP protocol.
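The queueing behavior can be demonstrated directly. This sketch (Python, loopback, OS-assigned port) sends two datagrams back-to-back and only reads them after a simulated processing delay; both survive, because they sit in the socket's receive buffer until read. (This only holds while the buffer has room; loopback also makes loss far less likely than a real network would.)

```python
import socket, time

recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))          # OS picks a free port
recv_sock.settimeout(2)                   # fail fast instead of hanging
dest = recv_sock.getsockname()

send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.sendto(b"first", dest)
send_sock.sendto(b"second", dest)         # arrives before "first" is read

time.sleep(0.2)                           # simulate slow processing of packet 1
a, _ = recv_sock.recvfrom(1024)           # both datagrams are still queued
b, _ = recv_sock.recvfrom(1024)
```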

Writing a c++ Client-Server program using winsock2

I am having some trouble with a UDP-based connection.
In my program, I restrict the time allocated for data transfer between the transmitter and receiver (both of them are sending/receiving in a loop).
When the time passes, I send a message from the transmitter; if the receiver receives and reads it, the receiver knows not to wait for any more packets, so the program continues.
The problem I thought of was that because the connection is UDP, the message might not get to the receiver, and then the client will keep on waiting for messages, but no one is sending.
So what is the correct way to finish such a connection?
Thanks!
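One common way to handle a lost end-of-transfer message is sketched below, in Python rather than winsock2 for brevity, with a hypothetical `FINISH` marker: the transmitter resends the finish marker a few times, and the receiver treats a receive timeout as end-of-stream, so it can never wait forever even if every copy of the marker is lost.

```python
import socket

FINISH = b"__FIN__"   # hypothetical end-of-transfer marker

def receive_until_finished(sock, idle_timeout=2.0):
    """Collect datagrams until a FINISH marker arrives or the line goes idle.

    If the FINISH datagram is lost, the idle timeout still ends the
    loop, so the receiver does not block indefinitely.
    """
    sock.settimeout(idle_timeout)
    packets = []
    while True:
        try:
            data, _ = sock.recvfrom(4096)
        except socket.timeout:
            break                      # no traffic for a while: assume done
        if data == FINISH:
            break                      # explicit end marker received
        packets.append(data)
    return packets
```

The same two ideas (a resent end marker plus a receive timeout as a fallback) translate directly to winsock2 via `setsockopt(SO_RCVTIMEO)` or `select()`.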

how to receive UDP packets continuously

I want to receive UDP packets continuously from a port. I'm using recvfrom() to receive packets. How do I add a delay after receiving a single packet? I want to receive more than 50 packets. I need help with this. Thanks in advance.
The network stack captures all packets addressed to the socket address on the wire for you and puts them in a queue. Just use recvfrom() to read the queued data. If you want your application to wait for the next receive event, you can use the select() function.
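The recvfrom()-plus-select() loop can be sketched like this (Python for brevity; `recv_packets` and its parameters are illustrative names). No artificial delay is needed between packets: select() blocks until the socket is readable or the wait expires.

```python
import select, socket

def recv_packets(sock, count, per_wait=1.0):
    """Receive up to `count` datagrams, using select() to wait for each.

    select() blocks until the socket has queued data (or per_wait
    seconds elapse), so the loop consumes packets as fast as they
    arrive and stops cleanly when the traffic goes quiet.
    """
    packets = []
    while len(packets) < count:
        readable, _, _ = select.select([sock], [], [], per_wait)
        if not readable:
            break                     # nothing arrived within per_wait
        data, _ = sock.recvfrom(2048)
        packets.append(data)
    return packets
```

For the question's 50+ packets, the same loop works with `count=50`; any packets that arrive while the loop is busy simply wait in the socket's receive buffer.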