I'm having a hard time categorizing the USB protocol within the layers of the OSI model.
I guess there are 7 layers to begin with. This is the information I believe corresponds to each layer:
7. Application (Software)
- Application specific
- Additional Drivers / Protocols
6. Presentation (Software)
- Application specific
- OS
5. Session (Software)
- Power mode regulation
- Configuration
4. Transport (Hardware)
- Split data into Frames
3. Network (Hardware)
- Client address 1 - 127
- Endpoints
2. Link (Hardware)
- CRC 5 Checksum for tokens
- CRC 16 Checksum for data packets
1. Physical (Hardware)
- Differential voltages (D-, D+)
- NRZI
- USB Plug
Is this correct so far?
How do hubs work? I believe they can "select" between clients just like an Ethernet switch. Doesn't that mean the master has to send two addresses in every packet: one for the next immediate communication partner (like a MAC address) and one for the destination (like an IP address)?
Maybe there are USB masters among us who can send OUT packets to this post to help me out ;) I would be very happy to send an ACK response :)
Haha, okay, enough puns.
When I taught these subjects, my students agreed with me that as they learned, and used, more precise words, it was easier for them to find answers to their questions. I could be wrong, but I think viewing USB through the OSI 7-layer model will be a little easier if you change "USB protocol" to the more precise "USB specification".
The USB specifications include multiple protocols spread across multiple layers. The Physical layer includes specifications for things such as connectors, cables, power, and shielding.
The logical functions of USB protocols do not map perfectly onto the OSI model. Some protocols span two or three layers of the OSI model. But, it's possible to see which parts of which protocols fit in which layers. If the protocol is only concerned about signals between two nodes that are physically connected directly to each other, then it is the Data Link layer.
The Network layer is only about managing the bus when there are at least three nodes on the bus, such as identifying the nodes (addressing) and deciding where to send a packet (routing).
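To make that split concrete: a USB token packet carries a 7-bit device address and a 4-bit endpoint number (the addressing part) protected by the CRC-5 from your list (the link-layer part). Here is a rough C sketch of that CRC; it reflects my reading of the USB 2.0 spec (polynomial x^5 + x^2 + 1, register seeded with ones, fields fed LSB first, result complemented), so check it against the spec's sample calculations before relying on it.

```c
#include <stdint.h>

/* CRC-5 over the 11 bits of a token packet (7-bit address + 4-bit endpoint).
 * Polynomial x^5 + x^2 + 1, shift register seeded with all ones,
 * data fed LSB first, result complemented before transmission. */
static uint8_t usb_crc5(uint16_t data, int nbits)
{
    uint8_t crc = 0x1F;                        /* seed: all ones */
    for (int i = 0; i < nbits; i++) {
        uint8_t bit = (data >> i) & 1;         /* LSB first, as USB sends fields */
        uint8_t msb = (crc >> 4) & 1;
        crc = (uint8_t)((crc << 1) & 0x1F);
        if (bit ^ msb)
            crc ^= 0x05;                       /* taps for x^2 + 1 */
    }
    return (uint8_t)(~crc) & 0x1F;             /* the complement is what goes on the wire */
}

/* Example: CRC for a token addressed to device 5, endpoint 1. */
uint8_t token_crc(void)
{
    uint16_t addr = 5, endp = 1;
    return usb_crc5((uint16_t)(addr | (endp << 7)), 11);
}
```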
The Transport layer usually asks and answers the question, "Can you hear me now?" Or, you can think of it as analogous to using a ton of tracking numbers on the DHL website to track an order that has tons of packages. It is important between two nodes that are not directly physically connected to each other. The Data Link layer asks similar questions, but those questions are typically focused on individual signals (i.e., packets). The Transport layer does more sophisticated things such as putting packets in a specific order, dividing and combining packets, and tracking which packets in a set of packets were sent or received.
In USB, determining which node can use which part of the bus at what time is very important. Those protocols (mostly?) correspond to the Session layer.
I don't think any aspect of any USB specification corresponds with the Presentation or Application layers.
The USB-IF specification for USB4 includes their conceptual model for the USB functional stack. See section 2.2.1.
Good luck!
Hubs work on the First Layer. They just connect all the ports' pins together.
Related
I'm trying to interface a board-level USB camera with an STM32-family microcontroller and send the image file to a central computer using CAN bus. I just want to know if this is possible / has been done before, and how involved a task it would be.
I worked at a company where we sent live (low-resolution infra-red) video streams over CAN, but towards the end of my time there they shifted towards ethernet.
So it is possible, but it is certainly not what CAN is best suited for. The main advantages of CAN are that it is a multi-point, multi-master bus with built-in arbitration. It is meant for short packets, typically 8 bytes (CAN FD allows you to increase that).
If your camera is USB, why not just get a USB repeater cable or USB-over-ethernet gateway?
If there is already a CAN network in place that you are piggy-backing onto then you need to consider what impact you will have on the existing traffic.
If you are starting from scratch then of course CAN will work but it would be an odd choice.
Depending on whether it's CAN or CAN FD (which affects the maximum packet size), you have higher-level protocol options to packetise your images and send them over the CAN bus like any other block of data.
For just regular CAN, you're after the part of the standard called J1939.21 Data Link Layer. There are public versions of this floating around online; however, due to the agreement when purchasing the standard, I am not able to share the specifics from what I have.
It's on pages 27-28 of the 2001 revision.
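Just to illustrate the general idea (this is not the J1939.21 transport protocol; the frame layout, IDs and the can_send() call below are invented for the sketch), fragmenting an image into classic 8-byte CAN frames can be as simple as a sequence byte plus up to 7 payload bytes per frame:

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical CAN frame and driver call -- substitute your CAN stack's types. */
struct can_frame {
    uint32_t id;
    uint8_t  dlc;
    uint8_t  data[8];
};
extern int can_send(const struct can_frame *frame);   /* assumed to exist */

/* Split a buffer into frames of 1 sequence byte + up to 7 payload bytes.
 * A real transport protocol (J1939 TP, ISO-TP) also announces the total
 * length, paces the transfer and handles retries -- none of that is here. */
int send_image(uint32_t can_id, const uint8_t *buf, uint32_t len)
{
    struct can_frame f = { .id = can_id };
    uint8_t seq = 0;

    for (uint32_t off = 0; off < len; off += 7, seq++) {
        uint32_t chunk = (len - off > 7) ? 7 : (len - off);
        f.data[0] = seq;                       /* lets the receiver reassemble in order */
        memcpy(&f.data[1], &buf[off], chunk);
        f.dlc = (uint8_t)(1 + chunk);
        if (can_send(&f) != 0)
            return -1;
    }
    return 0;
}
```

Even a 100 kB image becomes roughly 14,000 frames this way, which is worth keeping in mind if there is existing traffic on the bus.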
I am very new to SDN and want just a basic understanding of it so that I can explain to someone, simply, what it is. From what I know, the architecture is broken into three layers. The infrastructure layer is just the switches, routers, and other devices that make up a network. The controller layer maps how the devices are connected and how packets should be forwarded from one device to another. For the controller layer to actually do the work of mapping and knowing how to forward the packets, the application layer provides the logic to do so, and this is the layer where you create your network application in a programming language like Python. Did I get a basic understanding of how the SDN layers work?
We need to understand a few terms before going into SDN.
Control Plane: The plane that determines where to send traffic
Data Plane: The plane that executes these decisions and forwards traffic
Management Plane: The element of a system that provides management, monitoring, and configuration services to all layers of the network stack and other parts of the system
Traditional Network Devices:
The Control Plane, Management Plane and Data Plane all reside on the device itself.
Each device has its own brain, disconnected from the others, and uses all sorts of protocols to stay connected.
Yet such protocols are complex, and so are the resiliency mechanisms around their control actions.
Think of a traditional network as one that is more prone to failure because multiple disconnected brains are not working together with one another.
SDN: (Decoupling hardware from Software)
SDN architecture is divided into 3 layers
Application Layer
Control Layer
Infrastructure Layer
Infrastructure Layer:
Consists of the network devices, which contain the Data Plane and communicate using the OpenFlow protocol (or, you could say, the OpenFlow API).
Control Layer:
Consists of the Control Plane and the Management Plane.
Application Layer:
In this layer the user gets an overview of the devices and can see the topology.
The link between the Application Layer and the Control Layer is generally called the Northbound Interface.
The link between the Control Layer and the Infrastructure Layer is called the Southbound Interface.
Benefit of SDN:
Decoupling Control and Data planes involves leaving the data plane with network hardware and moving the control plane into a software layer.
By abstracting the network from the hardware, policies no longer have to be executed on the hardware itself.
Instead, the use of a centralized software application functioning as the control plane makes network virtualization possible.
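If it helps to see the split in code, here is a toy sketch (structures and names invented for illustration; real OpenFlow matches on many header fields) of a "dumb" data plane that only performs match-action lookups, plus the hook a centralized controller would use to install entries:

```c
#include <stdint.h>

/* Toy flow table: the data plane only matches and forwards; it holds no policy of its own. */
struct flow_entry {
    uint32_t match_dst_ip;   /* exact-match destination IP, for simplicity */
    uint16_t out_port;       /* action: forward out of this port */
    int      in_use;
};

#define TABLE_SIZE 64
static struct flow_entry flow_table[TABLE_SIZE];

/* Called on behalf of the centralized controller over the southbound
 * interface (e.g. OpenFlow) to program behaviour into the switch. */
int install_flow(uint32_t dst_ip, uint16_t out_port)
{
    for (int i = 0; i < TABLE_SIZE; i++) {
        if (!flow_table[i].in_use) {
            flow_table[i] = (struct flow_entry){ dst_ip, out_port, 1 };
            return 0;
        }
    }
    return -1;                               /* table full */
}

/* The per-packet fast path: no routing protocol, no policy, just a lookup. */
int lookup_out_port(uint32_t dst_ip)
{
    for (int i = 0; i < TABLE_SIZE; i++)
        if (flow_table[i].in_use && flow_table[i].match_dst_ip == dst_ip)
            return flow_table[i].out_port;
    return -1;                               /* miss: a real switch would ask the controller */
}
```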
You may find this useful: SDN Layer Architecture
I have been playing around with OpenThread for about a month, and I have set up two TI CC2538s in an OpenThread network. Currently, I can send pings between them and modify the network parameters using the CLI, but they aren't capable of much else.
I would like to develop an application for them that is capable of transmitting some form of data using the OpenThread stack, maybe something simple at first like transmitting a block of text. However, I am not really sure where to start with this; are there any example applications that I could use as a starting point?
For application layers that sit directly on top of OpenThread, Nordic has released some examples in their nRF5 SDK for Thread.
Also note that Thread (and OpenThread) implement an IPv6 link capable of transporting vanilla IPv6 datagrams. As a result, you could run other transport protocols like TCP. However, UDP is often recommended due to relatively high loss rates and latency variance that is common to low-power, wireless mesh networks.
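As a very rough sketch of what "send a block of text over UDP" looks like against the OpenThread C API: the calls below (otUdpOpen, otUdpNewMessage, otMessageAppend, otUdpSend) are from memory and their exact signatures have changed between OpenThread releases, so treat this as a starting point and check openthread/udp.h in your source tree.

```c
#include <string.h>
#include <openthread/instance.h>
#include <openthread/ip6.h>
#include <openthread/message.h>
#include <openthread/udp.h>

/* Receive handler: a real application would parse the payload here. */
static void handle_udp(void *aContext, otMessage *aMessage, const otMessageInfo *aMessageInfo)
{
    (void)aContext; (void)aMessage; (void)aMessageInfo;
}

/* Send a short text payload to a peer on UDP port 1234.
 * aInstance is the otInstance your platform code already created.
 * Signatures here follow a recent OpenThread release and may differ in yours. */
otError send_text(otInstance *aInstance, otUdpSocket *aSocket, const char *peerAddr)
{
    const char   *text = "hello from the CC2538";
    otMessageInfo info;
    otError       err;

    err = otUdpOpen(aInstance, aSocket, handle_udp, NULL);
    if (err != OT_ERROR_NONE)
        return err;

    memset(&info, 0, sizeof(info));
    otIp6AddressFromString(peerAddr, &info.mPeerAddr);
    info.mPeerPort = 1234;

    otMessage *msg = otUdpNewMessage(aInstance, NULL);
    if (msg == NULL)
        return OT_ERROR_NO_BUFS;

    err = otMessageAppend(msg, text, (uint16_t)strlen(text));
    if (err == OT_ERROR_NONE)
        err = otUdpSend(aInstance, aSocket, msg, &info);
    if (err != OT_ERROR_NONE)
        otMessageFree(msg);   /* the stack takes ownership only on successful send */
    return err;
}
```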
How exactly is the isolation between the data and control planes in SDN designed, e.g. when we assume that SDN runs on a server? And what about isolation between the data and control ports in SDN switches?
SDN is a networking paradigm. A networking style. Before understanding SDN, we have to understand 2 things.
1. Data Plane: The data path which actually forwards the packet or data from the input port to the output port.
2. Control Plane: This has the logic of how to move the packet from the input port to the output port. The Control Plane directs the Data Plane.
Pre-SDN, both planes, the data plane and the control plane, resided in the network devices: routers, switches, firewalls, etc.
Now, with SDN, the control plane and data plane have been separated: the control plane is moved out of the network devices and placed onto a central server. One SDN controller can control many network elements. The granularity of the separation is left to the implementation.
To clarify SDN, first imagine how traditional forwarding works:
PC1 --- Router1 ----Router2-----Router3----PC2
For PC1 to reach PC2, traffic has to traverse Router1, Router2 and Router3. Router1 has information about its neighbour, i.e. Router2. The same follows for the other routers until the traffic reaches PC2. If we observe, the decision about where to forward the packets is taken by the routers at each step. The "brain" is in the router, which also acts as the device carrying the packets. This is analogous to our legs having their own brain to walk to a certain place.
In the case of SDN, the brain from each router is pulled up and put in one place. That is the controller, or the control plane. Now the routers are just data/packet forwarding devices and hence are called switches in SDN. This is similar to how our body works: the brain decides how our legs and hands should move for us to reach a destination.
In SDN, the switch talks to the control plane over the southbound protocol (typically OpenFlow) on TCP port 6633 or 6653.
Hope this clarifies things.
Software Defined Networking is a concept. When you say "when we assume that SDN is on a server", I will say it's not a thing that you can just put somewhere; it's a concept. This concept is based on some explicit points. A good implementation of SDN will ensure the points below.
Control plane - data plane separation
Centralized point of control
Miscellaneous: network programmability, scalability etc.
When you try to look for isolation in the SDN concept, what you are basically looking for is point 1. Let me explain it a bit if that helps.
The major reason we started SDN was to bring more robust and better programmability into the network. And one thing that was stopping us from doing so is the distributed nature of network deployments. Bringing any change in network properties or behaviour required explicitly going in and running some configuration on many routers/switches in a deployment. That introduces all sorts of complexity and error-proneness.
For example, suppose we want packets from a particular IP or IP mask to go through a special service function (e.g. a firewall) which will decide the fate of the packet. Now we will have to put this configuration into all the border routers, or into whatever group of devices might receive the packet. In a big deployment, that can be a countless number of devices on which we have to put this configuration to reliably ensure the enforcement of the policy.
So, the idea of decoupling the network control from the data forwarding devices came along. This effectively means that we have a dumb data forwarding service which can be controlled by a network controller or programmer. The data plane, which provides this packet forwarding service, remains at the router and switch level, whereas the control plane might be, and should be, a separate physical entity. Conceptually, they are separated, but they are not isolated.
One core benefit of SDN is that it enables the network control logic to be designed and operated on a global network view, as though it were a centralized application. The network control plane is treated as a logically centralized application where we make the changes that enforce our intended behaviour in the network.
In summary, what we basically do in SDN is take all that control-level business logic away from the forwarding devices (meaning physical and virtual switches and routers), put it together in an application which we call the controller, and then provide a way for the controller application to communicate with, and program changes into, the data plane forwarding devices. A good study to completely understand this architecture is: Understanding OpenFlow.
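If it helps, here is what "the controller expresses the policy once and programs every device" might look like in a toy sketch; the structures and the southbound_install() call are invented stand-ins for whatever southbound protocol (e.g. OpenFlow) is actually used:

```c
#include <stdint.h>

/* A single policy: traffic from this source prefix is redirected to the firewall port. */
struct policy_rule {
    uint32_t src_prefix;     /* e.g. 10.1.0.0      */
    uint32_t src_mask;       /* e.g. 255.255.0.0   */
    uint16_t redirect_port;  /* the port the firewall hangs off */
};

/* Hypothetical southbound call: push one rule to one forwarding device. */
extern int southbound_install(int switch_id, const struct policy_rule *rule);

/* The controller enforces the policy network-wide from one place,
 * instead of an operator configuring every border device by hand. */
int enforce_everywhere(const int *switch_ids, int n_switches,
                       const struct policy_rule *rule)
{
    for (int i = 0; i < n_switches; i++)
        if (southbound_install(switch_ids[i], rule) != 0)
            return -1;       /* a real controller would retry and report per device */
    return 0;
}
```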
Long story short, isolation is a slightly wrong word to put between the data plane and the control plane. It's more like they are separated but dependent on each other. Without a control plane, the forwarding devices are dumb; without the data plane, the control plane has nothing to control!
Hope this helped!
Conceptually, it's not the implementation location that defines the separation (isolation); rather, it's the standard interface between the control and data planes (OpenFlow, for instance).
You can have both the data and control planes on a single server, and they are separated as long as they talk through the standard interface.
The opposite case is also true: you can have the control and data planes physically separated, but if they don't talk through a standard interface, that's not SDN per se.
Just go through the ONF explanations:
https://www.opennetworking.org/software-defined-standards/overview/
I'm trying to design an efficient communication protocol between a micro-controller on one side and an ARM processor on a multi-core TI chip on the other side through SPI.
The requirements for the needed protocol:
1 - Multi-session with queuing support, as I have multiple sending/receiving threads, so more than one application will use this communication protocol and I need the protocol to handle queuing these requests (I will keep holding the buffer if the transmission is queued; I just need the protocol to manage scheduling the queues).
2 - Works over SPI as an underlying protocol.
3 - Simple error checking.
In this thread: "Simple serial point-to-point communication protocol", PPP was a recommended option, however I see PPP does only part of the job.
I also found the lightweight IP (LwIP) project featuring PPP over serial (which I assume I can use over SPI), so I thought about the possibility of utilizing one of the upper-layer protocols like TCP/UDP to do the rest of the required jobs. Fortunately, I found that TI includes LwIP as part of their Ethernet software in the StarterWare package, which I assume will ease porting, at least on the TI chip side.
So, my questions are:
1 - Is it valid to use LwIP for this communication scheme? Won't this introduce much overhead due to IP headers, which are not necessary for point-to-point (chip-level) communication, and kill the throughput?
2 - Will TCP, or any similar protocol residing in LwIP, handle the queuing of transmission requests? For example, if I request transmission through a socket while the communication channel is busy transmitting/receiving a request for another socket (session) of another thread, will this be managed by the protocol stack? If so, which protocol layer manages it?
3 - Is there a more efficient protocol stack than LwIP that meets the above requirements?
Update 1: More points to consider
1 - SPI is the only available option, I use it with available GPIOs to indicate to the master when the slave has data to send.
2 - The currently implemented (non-standard) protocol uses DMA with SPI, and a message format of 《STX_MsgID_length_payload_ETX》 with a fixed message fragment length; however, the main drawback of the current scheme is that the master waits for a response to the message (not the fragment) before sending another one, which kills the throughput and does not utilise the full-duplex nature of SPI.
3 - An improvement on this point was to use a kind of mailbox for receiving fragments, so that a long message can be interrupted by a higher-priority one and fragments of a single message can arrive non-sequentially. The problem is that this design leads to complicating things, especially since I don't have many resources available for the buffers the mailbox approach needs on the controller (master) side. So I feel like I'm re-inventing the wheel by designing a protocol stack for a simple point-to-point link, which may not be efficient.
4 - What kind of higher-level protocols can normally be used above SPI to establish multiple sessions and solve the queuing/scheduling of messages?
Update 2: Another useful thread "A good serial communications protocol/stack for embedded devices?"
Update 3: I had a look at the Modbus protocol; it seems to specify the application layer and then directly the data link layer for serial-line communication, which sounds like it skips the unnecessary overhead of the network-oriented protocol layers.
Do you think this would be a better option than LwIP for the intended purpose? Also, is there a widely used open-source implementation, like LwIP but for Modbus?
I think that perhaps you are expecting too much of the humble SPI.
An SPI link is little more than a pair of shift registers, one in each node. The master selects a single node to connect to its SPI shift register. As the master shifts data towards the slave, the slave simultaneously shifts data back to the master. Data is not exchanged unless the master explicitly clocks it. Efficient protocols on SPI involve the slave having something useful to output while the master inputs. This may be difficult to arrange, so you usually need a means of indicating null data.
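If it helps, a bit-banged master transfer makes the "pair of shift registers" picture explicit; the spi_sck/spi_mosi/spi_miso helpers below are placeholders for whatever pin access your hardware provides (this sketch assumes SPI mode 0, MSB first):

```c
#include <stdint.h>

/* Placeholder GPIO helpers -- map these onto your real pin-access functions. */
extern void spi_sck(int level);
extern void spi_mosi(int level);
extern int  spi_miso(void);

/* For every bit the master shifts out, it simultaneously shifts one bit in:
 * there is no separate "receive" operation on SPI. */
uint8_t spi_transfer(uint8_t out)
{
    uint8_t in = 0;
    for (int i = 7; i >= 0; i--) {
        spi_mosi((out >> i) & 1);   /* present the next outgoing bit */
        spi_sck(1);                 /* rising edge: slave samples MOSI and drives MISO */
        in = (uint8_t)((in << 1) | (spi_miso() & 1));
        spi_sck(0);
    }
    return in;                      /* whatever the slave happened to shift out */
}
```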
PPP is useful when establishing a connection between two arbitrary endpoints; when the endpoints are fixed and known a priori, PPP would serve no purpose other than to complicate things unnecessarily.
SPI is not a very sophisticated nor flexible interface and probably unsuited to heavyweight general purpose protocols such as TCP/IP. Since "addressing" on SPI is performed by physical chip-select, the addressing inherent in such protocols is meaningless.
Flow control is also a problem with SPI. The master has no way of determining whether the slave has copied the data from the SPI shift register before pushing more data. If your slave's SPI supports DMA, you would be wise to use it.
Either way, I suggest that you develop something specific to your purpose. Since SPI is not a network as such, you only need a means to address threads on the selected node. This could be as simple as STX<thread ID><length><payload>ETX.
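A minimal sketch of building that kind of frame (the constants and the simple additive checksum are arbitrary choices for illustration; substitute a CRC if you need stronger error checking):

```c
#include <stdint.h>
#include <string.h>

#define FRAME_STX 0x02
#define FRAME_ETX 0x03

/* Build: STX | thread ID | length | payload | checksum | ETX.
 * Returns the number of bytes written to 'out', or -1 if it does not fit. */
int frame_build(uint8_t thread_id, const uint8_t *payload, uint8_t len,
                uint8_t *out, int out_size)
{
    if (out_size < len + 5)
        return -1;

    uint8_t sum = (uint8_t)(thread_id + len);
    for (int i = 0; i < len; i++)
        sum = (uint8_t)(sum + payload[i]);

    out[0] = FRAME_STX;
    out[1] = thread_id;
    out[2] = len;
    memcpy(&out[3], payload, len);
    out[3 + len] = sum;
    out[4 + len] = FRAME_ETX;
    return len + 5;
}
```

The thread ID is what gives you the "multi-session" part: each end demultiplexes received frames to the right queue by that byte.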
Added 27 September 2013 in response to comments
Generally, SPI, as its name suggests, is used to connect to peripheral devices, and in that context the protocol is defined by the peripheral. EEPROMs, for example, typically use a common, or at least compatible, command interface across vendors, and the SD/MMC card SPI interface uses a standardised command set and protocol.
Between two microcontrollers, I would imagine that most implementations are proprietary and application-specific. Open protocols are designed for generic interoperability, and to achieve that they might impose significant unnecessary overhead on a closed system, unless perhaps the nodes were running a system that already had a network stack built in.
I would suggest that if you do want to use a generic network stack, you should abstract the SPI with device drivers at each end that give it a standard I/O stream interface (open(), close(), read(), write(), etc.); then you can use the higher-level PPP and TCP/IP protocols (although PPP can probably be avoided since the connection is permanent). However, that would only be attractive if both nodes already supported these protocols (running Linux, for example); otherwise it will be significant effort and code for little benefit, and would certainly not be "efficient".
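For completeness, the kind of abstraction I mean is just a small table of stream operations that each side implements on top of its SPI driver (names invented here); anything that expects a character-stream device can then sit on top without knowing it is SPI underneath:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical stream interface wrapped around the SPI driver on each side. */
struct stream_ops {
    int (*open)(void *hw);
    int (*read)(void *hw, uint8_t *buf, size_t len);       /* returns bytes read, or <0 on error */
    int (*write)(void *hw, const uint8_t *buf, size_t len); /* returns bytes written, or <0 on error */
    int (*close)(void *hw);
};

/* Each node provides its own implementation, e.g.: */
extern const struct stream_ops spi_stream_ops;   /* assumed to be defined elsewhere */
```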
I assume you don't really want, or have room for, a full IP (lwIP) stack on the microcontroller? That just sounds like a lot of overkill. Why not just roll your own simple packet structure to move the data items you need to move? Depending on how SPI is supported on both sides, you may or may not be able to use it to define the frame for your data; if not, a simple start pattern, length and trailing checksum (and maybe a tail pattern) would suffice for finding packet boundaries in the stream (no different from a serial/UART solution). You can even borrow the PPP solution for that, with a start pattern and, I think, an end pattern, where the payload uses a two-byte escape sequence whenever the start pattern happens to show up in the data. I don't remember all the details now.
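For the escaping part, the usual trick looks like this (the flag/escape values are the ones PPP-style framing happens to use; any two reserved bytes work):

```c
#include <stdint.h>

#define FRAME_FLAG 0x7E   /* frame delimiter */
#define FRAME_ESC  0x7D   /* escape byte */
#define FRAME_XOR  0x20   /* escaped bytes are XORed with this */

/* Escape any delimiter/escape bytes occurring in the payload so the receiver
 * can always treat a bare FRAME_FLAG as a packet boundary.
 * Returns bytes written to 'out'; size 'out' for the worst case of 2 * len. */
int stuff_bytes(const uint8_t *in, int len, uint8_t *out)
{
    int n = 0;
    for (int i = 0; i < len; i++) {
        if (in[i] == FRAME_FLAG || in[i] == FRAME_ESC) {
            out[n++] = FRAME_ESC;
            out[n++] = (uint8_t)(in[i] ^ FRAME_XOR);
        } else {
            out[n++] = in[i];
        }
    }
    return n;
}
```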
Whatever your frame is, then add a packet type and your handshakes; or, if the data is only ever going microcontroller to ARM, you don't even need to do that.
To get back to your direct question: yes, I think that an IP stack (lwIP or other) will introduce a lot of overhead, both in bandwidth and, more importantly, in the amount of code needed to support the stack, which will chew up ROM/RAM on both sides. If you ultimately need to present this data in an IP fashion (e.g. a website hosted by the embedded system), then somewhere in the path you need an IP stack, etc.
I can't imagine that lwIP manages your queues for you; I assume you would need to do that yourself. The various queues might want to talk to a single driver that deals with the single SPI bus (assuming there is a single SPI bus with multiple chip selects). It also depends on how you are using the SPI interface: are you allowing the ARM to talk to multiple microcontrollers with the data broken up, a little from this controller and a little from that controller, so that nobody has to wait too long before getting a few more bytes of data? Or will a complete frame have to move from one microcontroller before moving on to the next GPIO interrupt to pull that one's data? The long and the short of it is that I assume you have to manage the shared resource just as you would in any other situation with multiple users of a shared resource (an RTOS, a full-blown operating system, etc.). I don't remember lwIP that well at all, but with a full-blown Berkeley sockets application interface the user could write separate applications where each application only cared about one TCP or UDP port, and the libraries and drivers managed separating those packets out to each application, as well as all of the rules of the IP stack.
If you are not already doing experiments with moving data over the SPI interface(s), I would start with simple experiments first, just to get a feel for how well it is or isn't going to work, the sizes of transfers you can do reliably per SPI transaction, etc. Your solution may naturally fall out of those experiments.