Under what circumstances does OpenFlow forward packets to the controller for a decision?

I just finished reading sections 1-6.2 of the OpenFlow specification here.
Section 6.1.2 says:
Packet-in events can be configured to buffer packets. For packet-in generated by an output action in
a flow entries or group bucket, it can be specified individually in the output action itself (see 7.2.6.1),
for other packet-in it can be configured in the switch configuration (see 7.3.2). If the packet-in event is
configured to buffer packets and the switch has sufficient memory to buffer them, the packet-in event
contains only some fraction of the packet header and a buffer ID to be used by a controller when it
is ready for the switch to forward the packet. Switches that do not support internal buffering, are
configured to not buffer packets for the packet-in event, or have run out of internal buffering, must
send the full packet to controllers as part of the event. Buffered packets will usually be processed via a
Packet-out or Flow-mod message from a controller, or automatically expired after some time.
This makes it sound like an asynchronous message must be sent to the controller to make a forwarding decision for every packet that hits the OpenFlow switch. However, Chapter 5 makes it sound like the switch works through a set of OpenFlow flow tables, building up an action set that determines what should be done with the packet, and that the packet is only forwarded to the controller when there is a flow table miss.
Under what conditions is a packet sent to the controller for a decision? Is it always? Or is it only circumstantial?

Packets are sent to the OpenFlow controller any time the output port is set to the controller.
PACKET_IN events occur when a packet does not match any flow on the switch; the packet is then sent to the controller. Otherwise no event is created: the switch simply forwards the packet according to its flow rules and the controller is none the wiser.
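To make the table-miss case concrete, here is a minimal Ryu sketch, assuming OpenFlow 1.3 and a switch whose table-miss entry outputs to the controller; the flood rule is purely illustrative. It handles the packet-in, installs a flow so that later packets are forwarded by the switch alone, and reuses the switch's buffer ID when one was supplied, as described in section 6.1.2:

from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

class TableMissExample(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def packet_in_handler(self, ev):
        msg = ev.msg
        datapath = msg.datapath
        ofproto = datapath.ofproto
        parser = datapath.ofproto_parser
        in_port = msg.match['in_port']

        # Illustrative rule: flood anything arriving on this port from now on.
        match = parser.OFPMatch(in_port=in_port)
        actions = [parser.OFPActionOutput(ofproto.OFPP_FLOOD)]
        inst = [parser.OFPInstructionActions(ofproto.OFPIT_APPLY_ACTIONS, actions)]

        if msg.buffer_id != ofproto.OFP_NO_BUFFER:
            # The switch buffered the packet: the flow-mod both installs the
            # rule and releases the buffered packet.
            mod = parser.OFPFlowMod(datapath=datapath, priority=1, match=match,
                                    instructions=inst, buffer_id=msg.buffer_id)
        else:
            # Unbuffered case: the full packet arrived in the event itself.
            # (A real app would also send a packet-out for this first packet.)
            mod = parser.OFPFlowMod(datapath=datapath, priority=1, match=match,
                                    instructions=inst)
        datapath.send_msg(mod)

Once the flow-mod is accepted, packets matching the new entry are forwarded by the switch without generating further packet-in events; only table-miss packets reach the controller.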

Related

What happens to client message if Server does not exist in UDP Socket programming?

I ran client.java; when I filled in the form and pressed the send button, it jammed and I could not do anything.
Is there any explanation for this?
TL;DR: the User Datagram Protocol (UDP) is "fire-and-forget".
Unreliable – When a UDP message is sent, it cannot be known if it will reach its destination; it could get lost along the way. There is no concept of acknowledgment, retransmission, or timeout.
So if a UDP message is sent and nobody listens then the packet is just dropped. (UDP packets can also be silently dropped due to other network issues/congestion.)
While there could be a prior error, such as resolving the IP for the server (e.g. an invalid hostname) or attempting to use an invalid IP, once the UDP packet is out the door, it's out the door and is considered "successfully sent".
Now, if a program is waiting on a response that never comes (i.e. the server is down or the packet was "otherwise lost"), then that could be problematic.
That is, this code which requires a UDP response message to continue would "hang":
sendUDPToServerThatNeverResponds();
// There is no guarantee the server will get the UDP message,
// much less that it will send a reply or the reply will get back
// to the client..
waitForUDPReplyFromServerThatWillNeverCome();
Since UDP has no reliability guarantee or retry mechanism, this must be handled in code. For example, in the above maybe the code would wait for 1 second and retry sending a packet, and after 5 seconds of no responses it would report an error to the client.
sendUDPToServerThatMayOrMayNotRespond();
i = 0;
reply = null;
while (i++ < 5) {
    reply = waitForUDPReplyForOneSecond();
    if (reply)
        break;
}
if (reply)
    doSomethingAwesome();
else
    showErrorToUser();
Of course, "just using TCP" can often make these sorts of tasks simpler due to the stream and reliability characteristics that the Transmission Control Protocol (TCP) provides. For example, the pseudo-code above is not very robust, as the client must also be prepared to handle latent/slow UDP packet arrival from previous requests.
(Also, given the current "screenshot", the code might be as flawed as while(true) {} - make sure to provide an SSCCE and relevant code with questions.)
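For reference, here is a minimal, runnable sketch of the same retry pattern in Python rather than Java; the server address and payload are placeholders. It waits one second per attempt and gives up after five tries:

import socket

SERVER = ("192.0.2.10", 9999)   # placeholder address and port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(1.0)            # wait at most one second for a reply

reply = None
for attempt in range(5):        # give up after five tries
    sock.sendto(b"hello", SERVER)
    try:
        reply, addr = sock.recvfrom(4096)
        break                   # got an answer, stop retrying
    except socket.timeout:
        continue                # no reply within one second, resend

if reply is not None:
    print("server said:", reply)
else:
    print("no response after 5 attempts")

A real client would also check that the reply actually belongs to the current request, for the reasons noted above.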

USB interrupt endpoint is unidirectional?

While reading about the USB protocol at
http://www.beyondlogic.org/usbnutshell/usb4.shtml
it is said that the interrupt endpoint is unidirectional and periodic.
Yet I see in the description of the IN interrupt endpoint that the host initiates the IN token and then a data packet is sent from the device to the host.
"If an interrupt has been queued by the device, the function will send
a data packet containing data relevant to the interrupt when it
receives the IN Token."
So, if the data packet is sent on this IN endpoint from device to host, doesn't that mean the same endpoint is used to both transmit and receive?
I believe the terminology "unidirectional" applies only to data, not to token and handshake packets. So an "IN" endpoint is for reading data and an "OUT" endpoint is for writing data; that's why it's called unidirectional.
A control endpoint, on the other hand, is bidirectional because you can both read and write data through it. Check the standard USB commands like "Get Descriptor" and "Set Descriptor".
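As a rough illustration of that distinction, here is a sketch using PyUSB; the vendor/product IDs, endpoint addresses, and payload are made-up placeholders for a hypothetical device:

import usb.core

# Placeholder vendor/product IDs for a hypothetical device.
dev = usb.core.find(idVendor=0x1234, idProduct=0x5678)
if dev is None:
    raise ValueError("device not found")
dev.set_configuration()

# 0x81 = endpoint 1 with bit 7 set, i.e. an IN endpoint. The host still
# initiates the transfer (it issues the IN token), but application data
# only ever flows device -> host on this endpoint.
data = dev.read(0x81, 64, timeout=1000)

# 0x01 = endpoint 1 without bit 7, i.e. an OUT endpoint: application
# data only flows host -> device here.
dev.write(0x01, b"\x00\x01", timeout=1000)

In both cases the host schedules the transfer; "unidirectional" only describes which way the data payload travels on that endpoint.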

OpenFlow - Sending Port statistics as an Action

OpenFlow allows a controller to request port statistics from a switch using a message, and the controller in return receives a reply with the statistics.
For example, in Ryu we can use ryu.ofproto.ofproto_v1_3_parser.OFPPortStatsRequest for this purpose.
Is there a way to get the port statistics from a switch without issuing a request message from the controller, but possibly as an action by the switch on receipt of a particular type of packet?
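For context, this is roughly what the controller-initiated request/reply exchange looks like in Ryu; a minimal sketch in which the class name and logging are illustrative only:

from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

class PortStatsExample(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    def request_port_stats(self, datapath):
        # Controller-initiated request covering every port on the switch;
        # a real app would call this from a timer or a datapath event.
        ofproto = datapath.ofproto
        parser = datapath.ofproto_parser
        req = parser.OFPPortStatsRequest(datapath, 0, ofproto.OFPP_ANY)
        datapath.send_msg(req)

    @set_ev_cls(ofp_event.EventOFPPortStatsReply, MAIN_DISPATCHER)
    def port_stats_reply_handler(self, ev):
        # The switch answers with one entry per port.
        for stat in ev.msg.body:
            self.logger.info('port %s: rx_packets=%d tx_packets=%d',
                             stat.port_no, stat.rx_packets, stat.tx_packets)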

Using sendto() and recvfrom() to deliver datagram

If the sender calls sendto() a few times and the receiver calls recvfrom() in a while loop, and the 2nd datagram arrives before the receiver has finished processing the 1st one, will it get lost?
It depends on your architecture and driver.
Packets that arrive at the receiver are normally placed in the driver's receive queue and then passed up, for example to the IP layer's queue (depending on the architecture). If there is any overflow there, the packet is dropped by the driver itself or at the IP layer. If the packet makes it past those layers but the receive socket buffer (the amount of space for UDP data queued on the socket) is full, the datagram is dropped; otherwise it is kept.
netstat is a handy tool in such scenarios: netstat -s lists statistics per protocol, and the -p option can be used to display a specific protocol.
Proper synchronization between sender and receiver is feasible only if the receiver informs the sender by some means (some kind of flow control mechanism). UDP does not support this; if you need such synchronization, use TCP.
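As an illustration of where the socket buffer fits in, this Python receiver asks the OS for a larger SO_RCVBUF so that datagrams arriving while one is still being processed are queued rather than dropped; the port number is a placeholder:

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 9999))    # placeholder port

# Ask the OS for a larger receive buffer so datagrams that arrive while
# the previous one is still being processed are queued, not dropped.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 1 << 20)
print("effective SO_RCVBUF:", sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))

while True:
    data, addr = sock.recvfrom(65535)
    # Any datagrams that arrive during this processing sit in the socket
    # buffer; they are lost only if that buffer overflows.
    print("got", len(data), "bytes from", addr)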

understanding the concept of running a program in interrupt handler

Early Cisco routers running the IOS operating system enhanced their packet processing speed by doing packet switching within the interrupt handler instead of in a "regular" operating system process. Doing packet processing in the interrupt handler ensured that context switching within the operating system did not affect packet processing. As I understand it, an interrupt handler is a piece of software in the operating system meant for handling interrupts. How should one understand the concept of packet switching done within the interrupt handler?
The use of interrupts is preferred when an event requires immediate attention by the operating system, or by a program which installed an interrupt service routine. This is as opposed to polling, where software periodically checks whether a condition exists which indicates that the event has occurred.
Interrupt service routines aren't commonly meant to do a lot of work themselves. They are written to reach their end as quickly as possible, so that normal execution can resume; "normal execution" meaning the location and state at which the previous processing was interrupted. The reason is that it must be avoided that the same interrupt occurs again while its handler is still executing, or it may be ignored, lead to incorrect results, or, even worse, to software failure (crashes). So what an interrupt service routine usually does is read any data associated with the event and store it in a queue, signal that the queue has changed, set things up so that another interrupt may occur, and then resume by restoring the pre-interrupt context. The queued data associated with that interrupt can then be processed asynchronously, without risking that interrupts pile up.
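The "queue it, signal, and process later" pattern can be sketched in user-space Python. This is only an analogy for the deferral idea, not how a kernel interrupt handler is actually written, and handle_packet stands in for the real, possibly slow processing:

import queue
import threading
import time

rx_queue = queue.Queue()

def handle_packet(packet):
    # Stand-in for the real, potentially slow packet processing.
    time.sleep(0.1)
    print("processed", packet)

def pseudo_interrupt_handler(packet):
    # Mimic an ISR: do the minimum (queue the data) and return at once,
    # so further "interrupts" are never blocked by slow processing.
    rx_queue.put(packet)

def worker():
    # Deferred, asynchronous processing of whatever the handler queued.
    while True:
        handle_packet(rx_queue.get())

threading.Thread(target=worker, daemon=True).start()
for n in range(3):
    pseudo_interrupt_handler(n)   # pretend three interrupts fired
time.sleep(1)                     # give the worker time to drain the queue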
The following is the procedure for executing interrupt-level switching:
1. Look up the memory structure to determine the next-hop address and outgoing interface.
2. Do an Open Systems Interconnection (OSI) Layer 2 rewrite, also called MAC rewrite, which means changing the encapsulation of the packet to comply with the outgoing interface.
3. Put the packet into the tx ring or output queue of the outgoing interface.
4. Update the appropriate memory structures (reset timers in caches, update counters, and so forth).
The interrupt which is raised when a packet is received from the network interface is called the "RX interrupt". This interrupt is dismissed only when all the above steps are executed. If any of the first three steps above cannot be performed, the packet is sent to the next switching layer. If the next switching layer is process switching, the packet is put into the input queue of the incoming interface for process switching and the interrupt is dismissed. Since interrupts cannot be interrupted by interrupts of the same level and all interfaces raise interrupts of the same level, no other packet can be handled until the current RX interrupt is dismissed.
Different interrupt switching paths can be organized in a hierarchy, from the one providing the fastest lookup to the one providing the slowest lookup. The last resort used for handling packets is always process switching. Not all interfaces and packet types are supported in every interrupt switching path. Generally, only those that require examination and changes limited to the packet header can be interrupt-switched. If the packet payload needs to be examined before forwarding, interrupt switching is not possible. More specific constraints may exist for some interrupt switching paths. Also, if the Layer 2 connection over the outgoing interface must be reliable (that is, it includes support for retransmission), the packet cannot be handled at interrupt level.
The following are examples of packets that cannot be interrupt-switched:
Traffic directed to the router (routing protocol traffic, Simple Network Management Protocol (SNMP), Telnet, Trivial File Transfer Protocol (TFTP), ping, and so on). Such management traffic can be sourced by or directed to the router and is handled by specific task-related processes.
OSI Layer 2 connection-oriented encapsulations (for example, X.25). Some tasks are too complex to be coded in the interrupt-switching path because there are too many instructions to run, or timers and windows are required. Some examples are features such as encryption, Local Area Transport (LAT) translation, and Data-Link Switching Plus (DLSW+).
More here: http://www.cisco.com/c/en/us/support/docs/ios-nx-os-software/ios-software-releases-121-mainline/12809-tuning.html