I am confused by the following terms: OpenFlow, Open vSwitch, and Mininet. I want to understand the relationships between them. Can someone explain when and how to use each of them?
Thank you.
Let me explain OpenFlow first. In traditional legacy network devices (switches, routers, etc.), the control unit and the forwarding unit are tightly coupled: both the control decisions (say, optimal route calculation) and the forwarding happen in the same device. The hardware of these devices is made specifically for a particular task. It is not flexible enough to let researchers test new algorithms they might come up with to solve networking issues (say they have a better congestion-control algorithm for TCP!). This forces researchers to build their own custom hardware and a whole new setup for each experiment.
It would be a lot better if commercial switch vendors allowed more flexibility, so that researchers could test their new ideas on the same network without new hardware.
As per the white paper on OpenFlow, an OpenFlow switch provides this flexibility, and OpenFlow is the protocol used to manage the switch (i.e., add/remove/modify flow entries, capture flow statistics, etc.). The user program that uses OpenFlow to communicate with the OpenFlow switch is called the controller. There are various frameworks available for writing controller applications; examples are Ryu and OpenDaylight.
SDN is based on this idea of decoupling the control unit (the control plane) from the forwarding unit (the data plane). Not only is this useful for researchers, but also for data centers, as it reduces the cost of changing hardware each time a change is required.
Open vSwitch - the 'v' stands for virtual. It is a "virtual" OpenFlow switch. Apart from OpenFlow, it also supports other switch-management protocols.
Many people ask whether an OpenFlow switch operates at layer 2 or layer 3. Note that there is no such concept here. In an OpenFlow switch, forwarding decisions can be taken based on MAC addresses, IP addresses, the ingress port, the VLAN ID, etc. So please don't try to fit this into the OSI model.
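To make "flow entries" concrete, here is a minimal sketch in C of what a simplified flow-table entry might hold (purely illustrative; a real OpenFlow entry has many more match fields, wildcards, priorities, counters, and a list of actions):

```c
#include <stdint.h>

/* Illustrative only: a simplified flow-table entry, not the actual
 * OpenFlow wire format. */
struct flow_entry {
    uint16_t in_port;    /* match: ingress port */
    uint8_t  dl_src[6];  /* match: source MAC (layer 2) */
    uint8_t  dl_dst[6];  /* match: destination MAC (layer 2) */
    uint16_t vlan_id;    /* match: VLAN ID */
    uint32_t nw_src;     /* match: source IPv4 address (layer 3) */
    uint32_t nw_dst;     /* match: destination IPv4 address (layer 3) */
    uint16_t out_port;   /* action: forward matching packets here */
};
```

Note how layer-2 fields (MAC, VLAN), layer-3 fields (IP) and the ingress port sit side by side in a single match, which is exactly why the layer-2-or-layer-3 question has no clean answer.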
Mininet is a network emulator. The user can create any kind of topology with multiple hosts and switches, and the type of switch can also be chosen.
Open vSwitch is a software OpenFlow switch that can be controlled by a controller.
OpenFlow is the protocol through which your controller communicates with its OpenFlow switch.
Mininet is an emulator that emulates a network using multiple instances of software switches.
Is it possible to create one file, say for example uart.c, to be generic, so that I can call the uart functions for different microcontrollers, say for example AVR and ARM? Or is it a must that for every microcontroller I will have to create the uart functions from scratch?
You can create a Hardware Abstraction Layer (HAL) that acts as a common interface for all hardware of the same kind. A correctly designed HAL allows portable application-layer code, which is the sole purpose of having one.
The HAL should be in the form of an API library that then acts as a header file template for how the drivers should be designed, the simplest form of "polymorphism". The application programmer calls the HAL, and the MCU-specific functions in the driver will then get called.
In the case of a UART you might have an init function taking baud rate, stop bits, parity, handshaking, etc. as parameters, and then a read function and a write function, with some error handling. Overrun and framing errors are universal, for example. It is then up to the specific driver for MCU "x" to implement itself according to your specified HAL.
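As a rough sketch, such a HAL header might look like the following C interface (all names and types here are mine, not from any particular vendor library):

```c
/* uart_hal.h - a minimal UART HAL sketch; names and types are
 * illustrative, not taken from any particular vendor library. */
#ifndef UART_HAL_H
#define UART_HAL_H

#include <stdint.h>
#include <stddef.h>

typedef enum { UART_PARITY_NONE, UART_PARITY_EVEN, UART_PARITY_ODD } uart_parity_t;

typedef enum { UART_OK, UART_ERR_OVERRUN, UART_ERR_FRAMING } uart_status_t;

/* Each MCU-specific driver (e.g. a hypothetical uart_avr.c or
 * uart_stm32.c) implements these functions; the application only
 * ever includes this header. */
uart_status_t uart_init(uint32_t baudrate, uint8_t stop_bits, uart_parity_t parity);
uart_status_t uart_write(const uint8_t *buf, size_t len);
uart_status_t uart_read(uint8_t *buf, size_t len, size_t *received);

#endif /* UART_HAL_H */
```

The application code calls only uart_init/uart_write/uart_read; which driver gets linked in decides which hardware is actually driven.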
Generally, one should not create abstraction layers needlessly though. It is quite skilled work and easy to get wrong. If you don't need portability or code reuse between projects, there's no obvious need for a HAL and you could just as well call the driver directly from the application code.
The hardware implementations and register interfaces across different vendors certainly differ. ARM does not make MCUs; it licenses the core. The peripherals are not defined by the core, so even among ARM devices the peripheral implementations differ between vendors.
What you can do is define a common device-layer interface and implement that interface for each device family you need, then you can reuse the application layer code across architectures.
The alternative is to stick with a common family. AVR for example covers a wide range of devices and the peripherals generally are common across the range. Similarly STM32 (ARM Cortex-M) devices share common peripherals across the range.
So the answer is no, but you can deal with that by abstracting the hardware (or the vendor supplied abstraction or device layer).
For a UART you might use stdio as your abstraction layer and access the device via fprintf, fputc, fread, fwrite, etc. Though typically you'd build that on a lower-level layer too.
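For example, with a newlib-based GCC toolchain (a common bare-metal ARM setup), one way to put stdio on top of the driver is to retarget the _write() syscall stub; uart_write() here is the hypothetical HAL function sketched earlier:

```c
#include <stddef.h>
#include <stdint.h>
#include "uart_hal.h"   /* the hypothetical HAL header sketched above */

/* Retarget newlib's low-level output so printf()/fprintf(stdout, ...)
 * end up in the UART driver. Signature follows newlib's syscall stub. */
int _write(int file, char *ptr, int len)
{
    (void)file;  /* route stdout and stderr the same way */
    uart_write((const uint8_t *)ptr, (size_t)len);
    return len;
}
```

After that, printf() and fprintf(stdout, ...) go out of the serial port without the application knowing anything about the UART registers.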
I know that a Programmable Data Plane gives us ability to customize & modify hardware for new protocols and policies.
For example, we can use P4 to implement a device which acts like a hub in its data path.
But when we are able to apply our logic using P4 (or similar) in the data plane, why do we need a controller and a control plane anymore?
I mean, we use the controller to change switch behavior into layer-2, layer-3, firewall, etc.
And now, using a programmable data plane, we are able to do all of that using languages like P4.
Aren't they in conflict with each other?
SDN provides a centralized control plane that enables service providers to program and control the network data plane, with OpenFlow as the defined communication protocol between the controller and the switches. SDN thus decoupled the data plane and the control plane, and it evolved based on the assumption that the behavior of the network data path is fixed.
P4 brings in more flexibility: the behavior of the data path can be expressed in software that tells the switch how it should process packets. P4 helps specify the data-forwarding behavior, populate the tables, and auto-generate the APIs needed to populate those tables, which is where P4 goes beyond OpenFlow. P4 makes SDN protocol-independent and able to match programmatically on arbitrary packet fields, both of which were limitations of OpenFlow.
There are many existing switches that follow the SDN architecture of separate control path and data path and that use OpenFlow. Hence, OpenFlow may not become obsolete immediately due to P4, as there are numerous fixed-function switch ASICs already deployed that communicate using OpenFlow. OpenFlow is still applicable for networks that are a mixture of fixed-function switches and programmable switches; in such networks, OpenFlow and P4 will function together.
How exactly is the isolation between the data and control planes in SDN designed, e.g., when we assume that the SDN controller runs on a server? And what about isolation between data and control ports in SDN switches?
SDN is a networking paradigm, a networking style. Before understanding SDN, we have to understand two things:
1. Data plane: the data path which actually forwards the packet or data from the input port to the output port.
2. Control plane: this holds the logic of how to move the packet from the input port to the output port. The control plane directs the data plane.
Pre-SDN, both planes, the data plane and the control plane, resided in network devices like routers, switches, firewalls, etc.
Now, with SDN, the control plane and data plane have been separated, i.e., the control plane is moved out of the network devices and placed on a central server. One SDN controller can control many network elements. The granularity of the separation is left to the implementation.
To clarify SDN, first imagine how traditional forwarding works:
PC1 --- Router1 --- Router2 --- Router3 --- PC2
For PC1 to reach PC2, traffic has to traverse Router1, Router2 and Router3. Router1 has information about its neighbor, i.e., Router2, and the same follows for the other routers until the packet reaches PC2. Notice that the decision of where to forward the packets is taken by the routers at each step. The "brain" is in the router, which also acts as the device carrying the packets. This is analogous to our legs having their own brain to walk to a certain place.
In the case of SDN, the brain is pulled out of each router and put in one place: the controller, or control plane. Now the routers are just data/packet forwarding devices and hence are called switches in SDN. This is similar to how our body actually works: the brain decides how our legs and hands should move for us to reach a destination.
In SDN, the switch talks to the control plane over TCP, typically on port 6633 or 6653 (the OpenFlow ports).
Hope this clarifies things.
Software-Defined Networking is a concept. When you ask "when we assume that SDN is on a server?", I will say it's not a thing that you can just put somewhere; it's a concept. This concept is based on some explicit points. A good implementation of SDN will ensure the points below.
1. Control plane - data plane separation
2. Centralized point of control
3. Miscellaneous: network programmability, scalability, etc.
When you try to look for isolation in the SDN concept, what you are basically looking for is point 1. Let me explain it a bit if that helps.
The major reason we started SDN was to bring more robust and better programmability into the network. And one thing that was stopping us from doing so was the distributed nature of network deployments. Making any change in network properties or behaviour required explicitly going in and running configurations on many routers/switches in a deployment. That introduces all sorts of complexity and error-proneness.
For example, suppose we want packets from a particular IP or IP mask to go through a special service function (e.g., a firewall) which will decide the fate of the packet. We will have to put this configuration into all the border routers, or into a group of devices that might receive the packet. In a big deployment, that can be a huge number of devices on which we have to put this configuration to reliably ensure the enforcement of the policy.
So the idea of decoupling the network control from the data-forwarding devices came along. This effectively means we have a dumb data-forwarding service that can be controlled by a network controller or programmer. The data plane, which provides this packet-forwarding service, remains at the router and switch level, whereas the control plane can and should be a separate physical entity. Conceptually, they are separated, but they are not isolated.
One core benefit of SDN is that it enables the network control logic to be designed and operated on a global network view, as though it were a centralized application. Network control plane is thought to be a logically centralized application where we make the changes which will enforce our intended behaviour in the network.
In summary, what we basically do in SDN is take all that control-level business logic away from the forwarding devices (meaning physical and virtual switches and routers), put it together in an application we call the controller, and then provide a way for the controller application to communicate with and program changes in the data-plane forwarding devices. A good study to completely understand this architecture is: Understanding OpenFlow.
Long story short, isolation is a slightly wrong word to put between the data plane and the control plane. It's more that they are separated but dependent on each other. Without the control plane, forwarding devices are dumb; without the data plane, the control plane has nothing to control!
Hope this helped!
Conceptually, it's not the implementation location that defines the separation (isolation); rather, it's the standard interface between the control plane and the data plane (OpenFlow, for instance).
You can have both the data and control planes on a single server, and they are separated as long as they talk through the standard interface.
The opposite case is also true: you can have the control and data planes physically separated, but if they do not communicate through a standard interface, that's not SDN per se.
Just go through the ONF explanations:
https://www.opennetworking.org/software-defined-standards/overview/
I'm trying to understand how the two relate to each other. As far as I know, they can both be part of the HAL. In the case of communication between an application and a graphics card: can an API get the job done on its own, or do we have to rely on both? Can an API directly communicate with the hardware, or do we always need a driver in between, which translates the commands of the API?
TL;DR
Think of an API as a specification that describes what to do, while a driver is an implementation that describes how to do it.
Details
As a contrived example, imagine we have three different audio cards that we want to play nicely with multiple operating systems. We can define an API for the card manufacturers that says, "Each card must support four methods: mute(), playsound(sound), volumeup() and volumedown()". By defining the API, we get a common interface that allows the operating system designers to support the audio devices without worrying about the hardware details. They know that if they want to mute the sound card, they can call mute(), or if they want to turn the volume up, they call volumeup().
It is then up to the device manufacturers to implement a driver that actually performs those actions. The driver will vary between the three different audio cards because they are different at the hardware level, but the API is consistent so the next higher abstraction level (the OS) doesn't need to know how to deal with the hardware.
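A sketch of that idea in C (every name here is hypothetical): the API is a struct of function pointers, and each card's driver fills it in with its own implementation:

```c
#include <stdio.h>

/* The "API": a common interface every audio driver must provide.
 * All names are hypothetical, for illustration only. */
struct audio_api {
    void (*mute)(void);
    void (*playsound)(const char *sound);
    void (*volumeup)(void);
    void (*volumedown)(void);
};

/* One "driver": card A's implementation of the API. A second card
 * would supply its own functions behind the same struct. */
static void card_a_mute(void)          { /* poke card A's mute register */ }
static void card_a_play(const char *s) { printf("card A playing %s\n", s); }
static void card_a_volup(void)         { /* write card A's volume register */ }
static void card_a_voldown(void)       { /* ... */ }

static const struct audio_api card_a_driver = {
    .mute = card_a_mute, .playsound = card_a_play,
    .volumeup = card_a_volup, .volumedown = card_a_voldown,
};

/* The OS only sees the API, never the hardware details. */
int main(void)
{
    const struct audio_api *snd = &card_a_driver;
    snd->playsound("beep.wav");
    snd->volumeup();
    snd->mute();
    return 0;
}
```

A second or third card would supply its own struct behind the very same four function pointers, and main() (standing in for the OS here) would not change at all.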
For a more concrete example, consider the Advanced Configuration and Power Interface (ACPI) specification. It defines a common interface for operating systems to manage the power consumption and thermal characteristics of hardware devices. There are methods that a device driver or firmware must implement in order to be "ACPI compliant". This allows Windows operating systems and Linux variants to perform the same actions on hardware devices without needing to implement their own drivers for the hardware.
Note: Windows performs ACPI actions through acpi.sys, which they call an "ACPI Driver". Don't let the terminology confuse you; even though they call it a driver, it is really a window into the ACPI interface. Linux uses the acpi kernel module to do the same thing, and Linux doesn't call it a driver. Perhaps ACPI wasn't the best example, but I don't have anything better at the moment.
I'm trying to design an efficient communication protocol between a micro-controller on one side and an ARM processor on a multi-core TI chip on the other side through SPI.
The requirements for the needed protocol:
1 - Multi-session with queuing support, as I have multiple sending/receiving threads, so more than one application will use this communication protocol, and I need the protocol to handle queuing these requests (I will keep holding the buffer while the transmission is queued; I just need the protocol to manage scheduling the queues).
2 - Works over SPI as an underlying protocol.
3 - Simple error checking.
In this thread: "Simple serial point-to-point communication protocol", PPP was a recommended option; however, I see PPP does only part of the job.
I also found the lightweight IP (LwIP) project featuring PPP over serial (which I assume I can use over SPI), so I thought about the possibility of utilizing any of its upper-layer protocols like TCP/UDP to do the rest of the required jobs. Fortunately, I found that TI includes LwIP as part of their Ethernet software in the StarterWare package, which I assume eases porting, at least on the TI chip side.
So, my questions are:
1 - Is it valid to use LwIP for this communication scheme? Won't this introduce a lot of overhead due to IP headers, which are not necessary for point-to-point (chip-level) communication, and kill the throughput?
2 - Will TCP, or any similar protocol residing in LwIP, handle the queuing of transmission requests? For example, if I request a transmission through a socket while the communication channel is busy transmitting/receiving a request for another socket (session) of another thread, will this be managed by the protocol stack? If so, which protocol layer manages it?
3 - Is there a more efficient protocol stack than LwIP that meets the above requirements?
Update 1: More points to consider
1 - SPI is the only available option; I use it with the available GPIOs so the slave can indicate to the master when it has data to send.
2 - The current (non-standard) protocol implementation uses DMA with SPI and a message format of STX_MsgID_length_payload_ETX, with a fixed message-fragment length. The main drawback of the current scheme is that the master waits for a response to the message (not the fragment) before sending another one, which kills the throughput and does not utilise the full-duplex nature of SPI.
3 - An improvement on this point was to use a kind of mailbox for receiving fragments, so a long message can be interrupted by a higher-priority one and the fragments of a single message can arrive non-sequentially. The problem is that this design led to complications, especially since I don't have many resources available for the buffers the mailbox approach needs on the controller (master) side. So I felt like I was re-inventing the wheel by designing a protocol stack for a simple point-to-point link, which may not be efficient.
4 - What kinds of higher-level protocols are normally used above SPI to establish multiple sessions and solve the queuing/scheduling of messages?
Update 2: Another useful thread "A good serial communications protocol/stack for embedded devices?"
Update 3: I had a look at the Modbus protocol. It seems to specify the application layer and then directly the data-link layer for serial-line communication, which sounds like it skips the unnecessary overhead of network-oriented protocol layers.
Do you think this will be a better option than LwIP for the intended purpose? Also, is there a widely used open source implementation like LwIP but for Modbus?
I think that perhaps you are expecting too much of the humble SPI.
An SPI link is little more than a pair of shift registers, one in each node. The master selects a single node to connect to its SPI shift register. As the master shifts its data in, the slave simultaneously shifts data out. Data is not exchanged unless the master explicitly clocks it. Efficient protocols over SPI involve the slave having something useful to output while the master inputs. This may be difficult to arrange, so you usually need a means of indicating null data.
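For example, to read from a slave the master has to keep clocking dummy bytes out just to give the slave a chance to shift its data back. A sketch, assuming a hypothetical blocking spi_transfer() primitive (stubbed here):

```c
#include <stdint.h>
#include <stddef.h>

#define SPI_NULL 0xFF   /* agreed-upon "nothing to say" filler byte */

/* Hypothetical blocking primitive: shifts one byte out while the slave
 * simultaneously shifts one byte in. Stubbed here; a real version would
 * write the data register and wait for the transfer-complete flag. */
static uint8_t spi_transfer(uint8_t out)
{
    (void)out;
    return 0x00;   /* stand-in for whatever the slave shifted back */
}

/* To read n bytes from the slave, the master must keep clocking, so it
 * sends filler bytes while collecting the slave's output. */
static void spi_read(uint8_t *buf, size_t n)
{
    for (size_t i = 0; i < n; i++)
        buf[i] = spi_transfer(SPI_NULL);
}
```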
PPP is useful when establishing a connection between two arbitrary endpoints; when the endpoints are fixed and known a priori, PPP would serve no purpose other than to complicate things unnecessarily.
SPI is not a very sophisticated nor flexible interface and probably unsuited to heavyweight general purpose protocols such as TCP/IP. Since "addressing" on SPI is performed by physical chip-select, the addressing inherent in such protocols is meaningless.
Flow control is also a problem with SPI. The master has no way of determining whether the slave has copied the data out of the SPI shift register before pushing more data. If your slave's SPI supports DMA, you would be wise to use it.
Either way, I suggest that you develop something specific to your purpose. Since SPI is not a network as such, you only need a means to address threads on the selected node. This could be as simple as STX<thread ID><length><payload>ETX.
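A minimal sketch of building such a frame in C, with a one-byte XOR checksum added to cover the question's "simple error checking" requirement (all names are illustrative; a CRC would be more robust):

```c
#include <stdint.h>
#include <stddef.h>

#define STX 0x02
#define ETX 0x03

/* Build STX <thread id> <length> <payload...> <checksum> ETX into out.
 * Returns the frame length. The XOR checksum is an illustrative choice
 * for "simple error checking". Assumes payload length <= 255 and that
 * out has room for len + 5 bytes. */
size_t frame_build(uint8_t thread_id, const uint8_t *payload, uint8_t len,
                   uint8_t *out)
{
    size_t n = 0;
    uint8_t sum = thread_id ^ len;

    out[n++] = STX;
    out[n++] = thread_id;
    out[n++] = len;
    for (uint8_t i = 0; i < len; i++) {
        out[n++] = payload[i];
        sum ^= payload[i];
    }
    out[n++] = sum;
    out[n++] = ETX;
    return n;
}
```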
Added 27 September 2013 in response to comments
Generally, SPI, as its name suggests, is used to connect to peripheral devices, and in that context the protocol is defined by the peripheral. EEPROMs, for example, typically use a common or at least compatible command interface across vendors, and the SD/MMC card SPI interface uses a standardised command set and protocol.
Between two microcontrollers, I would imagine that most implementations are proprietary and application-specific. Open protocols are designed for generic interoperability, and achieving that can impose significant unnecessary overhead on a closed system, unless perhaps the nodes are running a system that already has a network stack built in.
I would suggest that if you do want to use a generic network stack, you should abstract the SPI with device drivers at each end that give the SPI a standard I/O stream interface (open(), close(), read(), write(), etc.); then you can use the higher-level PPP and TCP/IP protocols (although PPP can probably be avoided since the connection is permanent). However, that would only be attractive if both nodes already supported these protocols (running Linux, for example); otherwise it will be significant effort and code for little benefit, and would certainly not be "efficient".
I assume you don't really want, or have room for, a full IP (LwIP) stack on the microcontroller? That just sounds like overkill. Why not roll your own simple packet structure to move the data items you need to move? Depending on how SPI is supported on both sides, you may or may not be able to use it to define the frame for your data; if not, a simple start pattern, length, and trailing checksum (and maybe a tail pattern) would suffice for finding packet boundaries in the stream (no different from a serial/UART solution). You can even borrow the PPP solution for that: a start pattern and an end pattern, with the payload using a two-byte escape sequence whenever the start pattern happens to show up in the data. I don't remember all the details now.
Whatever your frame is, add a packet type and your handshakes; if the data is only going microcontroller-to-ARM, you may not even need to do that.
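If STX or ETX can occur inside the payload, PPP-style byte stuffing handles it: reserve an escape byte and expand every reserved byte into a two-byte sequence. A sketch (the ESC value and XOR constant here are arbitrary choices, in the style of PPP's 0x7D/0x20 stuffing):

```c
#include <stdint.h>
#include <stddef.h>

#define STX 0x02
#define ETX 0x03
#define ESC 0x10   /* escape byte; value chosen arbitrarily for this sketch */

/* Escape the payload so STX/ETX/ESC never appear literally inside a
 * frame: each reserved byte becomes ESC followed by (byte ^ 0x20).
 * Returns the stuffed length; out must hold up to 2*len bytes. */
size_t stuff_bytes(const uint8_t *in, size_t len, uint8_t *out)
{
    size_t n = 0;
    for (size_t i = 0; i < len; i++) {
        if (in[i] == STX || in[i] == ETX || in[i] == ESC) {
            out[n++] = ESC;
            out[n++] = in[i] ^ 0x20;
        } else {
            out[n++] = in[i];
        }
    }
    return n;
}
```

The receiver reverses this: on seeing ESC, it drops it and XORs the next byte with 0x20.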
To get back to your direct question: yes, I think that an IP stack (LwIP or other) will introduce a lot of overhead, both in bandwidth and, more importantly, in the amount of code needed to support the stack, which will chew up ROM/RAM on both sides. If you ultimately need to present this data in an IP fashion (a website hosted by the embedded system), then somewhere in the path you need an IP stack, etc.
I can't imagine that LwIP manages your queues for you; I assume you would need to do that yourself. The various queues might want to talk to a single driver that deals with the single SPI bus (assuming there is one SPI bus with multiple chip selects). It also depends on how you are using the SPI interface. Are you allowing the ARM to talk to multiple microcontrollers, with the packets of data broken up into a little from this controller and a little from that controller, so that nobody has to wait too long before getting a few more bytes of data? Or will a complete frame have to move from one microcontroller before moving on to the next GPIO interrupt to pull that one's data? The long and short of it is that I assume you have to manage the shared resource just as you would in any other situation with multiple users of a shared resource (an RTOS, a full-blown operating system, etc.). I don't remember LwIP that well, but with a full-blown Berkeley sockets application interface, the user could write separate applications where each application only cared about one TCP or UDP port, and the libraries and drivers managed separating those packets out to each application, as well as all the rules of the IP stack.
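As a sketch of what managing that shared resource yourself might look like (all names hypothetical): give each session its own FIFO of pending frames and let one round-robin scheduler feed the single SPI driver:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_SESSIONS 4
#define QUEUE_DEPTH  8

struct frame { const uint8_t *data; size_t len; };

/* One FIFO of pending frames per session/thread. */
struct session_queue {
    struct frame slots[QUEUE_DEPTH];
    size_t head, tail, count;
};

static struct session_queue sessions[NUM_SESSIONS];

/* Stub for the single driver that owns the SPI bus; a real one would
 * start a DMA transfer and block or call back on completion. */
static void spi_send_frame(const uint8_t *data, size_t len)
{
    (void)data;
    printf("sending %zu-byte frame\n", len);
}

static bool queue_push(struct session_queue *q, struct frame f)
{
    if (q->count == QUEUE_DEPTH) return false;  /* caller keeps holding buffer */
    q->slots[q->tail] = f;
    q->tail = (q->tail + 1) % QUEUE_DEPTH;
    q->count++;
    return true;
}

static bool queue_pop(struct session_queue *q, struct frame *f)
{
    if (q->count == 0) return false;
    *f = q->slots[q->head];
    q->head = (q->head + 1) % QUEUE_DEPTH;
    q->count--;
    return true;
}

/* Round-robin scheduler: at most one frame per session per pass, so a
 * burst on one session cannot starve the others. */
static void scheduler_run_once(void)
{
    for (int s = 0; s < NUM_SESSIONS; s++) {
        struct frame f;
        if (queue_pop(&sessions[s], &f))
            spi_send_frame(f.data, f.len);
    }
}

int main(void)
{
    static const uint8_t msg[] = { 0x01, 0x02, 0x03 };
    queue_push(&sessions[0], (struct frame){ msg, sizeof msg });
    queue_push(&sessions[2], (struct frame){ msg, sizeof msg });
    scheduler_run_once();
    return 0;
}
```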
If you are not already doing experiments with moving data over the SPI interface(s), I would start with simple experiments first, just to get a feel for how well it is or isn't going to work, the sizes of transfers you can do reliably per SPI transaction, etc. Your solution may naturally fall out of those experiments.