Can someone explain to me what a CAN bus protocol stack is? Is it the CAN bus plus higher layers, like CANopen with seven layers, or something else? Can someone also explain how I can use a CAN stack, how I connect it to the CAN bus, and why I need it?
Yes, it is CAN hardware plus higher-layer protocols, such as CANopen, J1939 or DeviceNet.
In terms of the "OSI model", it only really makes sense to speak of layers 1-3 and 7, where CAN provides layers 1 and 2 and a protocol like CANopen roughly provides layers 3 and 7. Roughly, since CANopen also comes with hardware specifications such as baud rate, sync point and stub length recommendations.
What's known as a "protocol stack" is really just a library with a platform-independent API, usually delivered with hardware-specific drivers. If the vendor claims that they support a particular MCU, then it usually means that you get the drivers from the vendor.
So basically you buy this pre-made library, integrate your program with it, and get standardized protocol behavior on the CAN bus, which is necessary to communicate with other nodes implementing the same protocol. Writing such a library yourself is no small task, particularly not for CANopen, which is a big standard of which you will probably use only some 10% of the available functionality.
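As a rough illustration of what that integration tends to look like, here is a minimal sketch against a hypothetical stack API; the co_* names are invented for this example, but commercial stacks follow the same initialise-then-poll pattern:

    #include <stdint.h>

    /* Hypothetical stack API: the co_* names are invented for this
     * sketch, but commercial stacks follow the same pattern. */
    extern int  co_init(uint8_t node_id, uint32_t bitrate_bps);
    extern void co_process(void);   /* run the protocol state machines */

    int main(void)
    {
        co_init(0x0A, 250000u);     /* node 10 on a 250 kbit/s bus */
        for (;;) {
            co_process();           /* call regularly, e.g. every 1 ms */
            /* ...application code here... */
        }
    }

The vendor's hardware-specific driver sits below calls like these and talks to the actual CAN controller.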
Is it possible to create one file, say for example uart.c, that is generic, so that I can call the UART functions for different microcontrollers, say for example AVR and ARM? Or is it a must that I create the UART functions from scratch for every microcontroller?
You can create a Hardware Abstraction Layer (HAL) that acts as a common interface for all hardware of the same kind. A correctly designed HAL allows portable application-layer code, which is the sole purpose of having one.
The HAL should be in the form of an API library whose header file then acts as a template for how the drivers should be designed, the simplest form of "polymorphism". The application programmer calls the HAL, and the MCU-specific functions in the driver then get called.
In the case of a UART you might have an init function taking baudrate, stop bits, parity, handshaking etc. as parameters, and then a read function and a write function, with some error handling. Overrun and framing errors are universal, for example. It is then up to the specific driver for MCU "x" to implement itself according to your specified HAL, as in the sketch below.
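A minimal sketch of such a HAL header; the names and types (uart_init and friends) are invented for illustration:

    /* hal_uart.h -- the names and types here are invented for this
     * sketch; each MCU gets its own hal_uart.c implementing exactly
     * these prototypes. */
    #include <stdint.h>
    #include <stddef.h>

    typedef enum {
        UART_PARITY_NONE,
        UART_PARITY_EVEN,
        UART_PARITY_ODD
    } uart_parity_t;

    typedef enum {
        UART_OK = 0,
        UART_ERR_OVERRUN,   /* universal errors, whatever the MCU */
        UART_ERR_FRAMING
    } uart_status_t;

    uart_status_t uart_init(uint32_t baudrate, uint8_t stop_bits,
                            uart_parity_t parity);
    uart_status_t uart_write(const uint8_t *data, size_t len);
    uart_status_t uart_read(uint8_t *data, size_t len, size_t *received);

The application only ever includes this header; which hal_uart.c gets compiled in is decided by the build for the target MCU.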
Generally though, one should not create abstraction layers needlessly. It is quite skilled work and easy to get wrong. If you don't need portability or code re-use between projects, there's no obvious need for a HAL and you might as well call the driver directly from the application code.
The hardware implementations and register interfaces certainly differ across vendors. ARM does not make MCUs; it licenses the core. The peripherals are not defined by the core, so even among ARM devices the peripheral implementations differ between vendors.
What you can do is define a common device-layer interface and implement that interface for each device family you need; then you can reuse the application-layer code across architectures.
The alternative is to stick with a common family. AVR, for example, covers a wide range of devices whose peripherals are generally common across the range. Similarly, STM32 (ARM Cortex-M) devices share common peripherals across the range.
So the answer is no, but you can deal with that by abstracting the hardware (or the vendor-supplied abstraction or device layer).
For a UART you might use stdio as your abstraction layer and access the device via fprintf, fputc, fread, fwrite etc., though typically you'd build that on a lower-level layer too.
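For instance, on a newlib-based toolchain (an assumption; other C libraries have equivalent hooks) you can point stdio at the UART by providing _write(), here calling a hypothetical uart_putc() driver function:

    /* newlib routes printf()/fwrite() through _write(), so providing
     * this one symbol points all of stdio at the UART.  uart_putc() is
     * a hypothetical low-level driver call. */
    extern int uart_putc(char c);

    int _write(int fd, const char *buf, int len)
    {
        (void)fd;               /* send stdout and stderr to the UART */
        for (int i = 0; i < len; i++)
            uart_putc(buf[i]);
        return len;             /* report everything as written */
    }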
I am currently researching SDN in mobile networks (LTE), and for that I am looking for a simulator and need advice.
I have gone through NS-3 and it appears to support OpenFlow switches, but I couldn't find documentation for that purpose. I need help simulating an SDN network in LTE; any suggestion/advice would be highly appreciated.
NS-3 has a module called ofsoftswitch13 which might be quite helpful in your case. It implements a controller entity as well as the OpenFlow 1.3 protocol.
I did an undergraduate thesis on this topic and used Mininet and NS-3 together using this project. We primarily did validity testing on this platform to determine its accuracy and limitations (especially at scale). The wireless model is very good until a very clear performance degradation occurs when CPU usage reaches (100/n)%, where n is the number of available cores on the machine (for a single-threaded implementation).
As FireWire cameras become obsolete because of their bandwidth limitations, camera manufacturers seem to be switching to USB 3.0 or Gigabit Ethernet interfaces. Both have standards, USB3 Vision and GigE Vision, which many manufacturers are adhering to.
However, it seems that each manufacturer - Basler, Point Grey, Ximea, and others - has its own SDK for interfacing with its cameras. When developing an application, developers need either to learn and interface with each API, which is a pain, or to stick to one manufacturer. I may have misunderstood but, in that case, what is the point of an industry standard if developers need to use manufacturer-dependent APIs?
For FireWire cameras, developers have access to libdc1394, a cross-platform, high-level API. They do not need to worry about who manufactures the camera and do not have to write separate drivers. Is something like this even possible for USB3 Vision and GigE Vision? If so, who would develop it?
At least for GigE Vision, let me mention that the Aravis project is available for Linux. It is meant to be a GenTL/GenICam library, but it only supports GigE right now due to the driver-constraint problems outlined below.
First of all, I agree with Martin's point that creating a general SDK is not in the interest of the camera manufacturers themselves, for competitive and support reasons. The manufacturers develop proprietary USB drivers (for USB3 Vision) and NIC filter drivers (optional for GigE, but highly recommended) in conjunction with their SDKs. This incentivizes them to lock users into their ecosystem and to separate themselves from the competition.
This is the reason I disagree with AdamF: I do not think that GenTL is widely supported by camera manufacturers, particularly for GigE or USB3 Vision cameras. Supporting GenTL would effectively allow users to use any general-purpose SDK while still leveraging the manufacturer's proprietary drivers.
I think it would be easier for OpenCV to support GenTL than GigE/U3V at this point, because of the giant hurdle of developing GigE/U3V drivers across the available hardware platforms. GenTL support would at least be only a software-based interface at this point.
I'm not very familiar with libdc1394, but I know at least a little about most of the other interfaces.
USB3 Vision, GigE Vision and all the other standards can be used through one common interface: GenICam:
The goal of GenICam™ is to provide a generic programming interface for all kinds of cameras and devices. No matter what interface technology (GigE Vision, USB3 Vision, CoaXPress, Camera Link HS, Camera Link, 1394 DCAM, etc.) they are using or what features they are implementing, the application programming interface (API) should always be the same.

The GenICam™ standard consists of multiple modules according to the main tasks to be solved:

GenApi: configuring the camera.
Standard Feature Naming Convention (SFNC): standardized names and types for common device features. Includes the Pixel Format Naming Convention (PFNC).
GenTL: transport layer interface, grabbing images.
CLProtocol: GenICam for Camera Link.
GenCP: generic control protocol.
GenTL SFNC: recommended names and types for the transport layer interface.
Most of the biggest camera producers supply GenTL providers to work with their cameras (see the loading sketch below).
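For context, a GenTL provider is a shared library with the .cti extension that exports the C entry points defined by the GenTL standard (GCInitLib(), TLOpen(), and so on). A minimal Linux-only sketch of loading one; the producer path is an assumption, and the full handle chain and error handling are omitted:

    #include <stdio.h>
    #include <stdint.h>
    #include <dlfcn.h>

    /* GC_ERROR is int32_t in the GenTL headers; only one entry point
     * is resolved here to keep the sketch short. */
    typedef int32_t (*GCInitLib_t)(void);

    int main(void)
    {
        /* the producer path is an assumption; vendors install their
         * .cti files in different places */
        void *cti = dlopen("/opt/vendor/lib/producer.cti", RTLD_NOW);
        if (!cti) {
            fprintf(stderr, "dlopen: %s\n", dlerror());
            return 1;
        }

        GCInitLib_t GCInitLib = (GCInitLib_t)dlsym(cti, "GCInitLib");
        if (GCInitLib && GCInitLib() == 0)
            printf("GenTL producer initialised\n");

        dlclose(cti);   /* a real client would go on to TLOpen() etc. */
        return 0;
    }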
Unfortunately, I don't know of any open-source high-level API for GenICam. I know two image-processing libraries with GenICam support, Adaptive Vision Library and Halcon, but they are not free.
Another common image-grabbing interface, though less popular in industry, is DirectShow.
DirectShow is supported, for example, by Ximea, Net-Gmbh, Basler and almost all web cameras.
So in my opinion, if you want to use one common interface for all cameras, you should consider using the GenICam interface.
Check out https://github.com/ni/usb3vision
It implements the core USB3 Vision specification as a kernel driver. To control a camera, you would still need to wrap some user-mode logic around it that connects it up to GenApi (the reference implementation of GenICam) and handles the buffers queued to and de-queued from the driver.
Also, regarding your question about whether it is possible to implement a vendor-independent driver: of course it is. That is indeed the point of the standards. Most camera vendors provide their own proprietary SDK with their cameras for various reasons, but there are independent SDKs that will work with any standards-compliant GigE Vision or USB3 Vision camera. Whether any of these are open source is a good question, and I am not aware of any that are. The above-mentioned USB3 Vision driver is used by National Instruments' IMAQdx driver, which is commercial and closed source.
An old thread, but in case someone else comes looking...
+1 for Aravis for open source on Linux. At the time of writing this response, the project now supports USB3 Vision cameras as well, although some work better than others. There is a lot of activity on the repo at GitHub at present.
On the paid side of things (in Windows at least) there are APIs called ActiveUSB (for USB3 cameras) and ActiveGigE, by A&B Software. I have no experience with the GigE software, but I have used the USB3 Vision library they provide and it is quite good across different cameras, as long as they adhere to the GenICam standard. It also offers a trial period, allowing you to decide whether it is right for you. It is usable from Python, C, C# and VB. If you are developing a commercial product or solution, it is worth taking a look at. On the other hand, if you don't want to or cannot spend any money, then Aravis is the way to go.
It's also worth noting that some manufacturers are starting to provide demos written in Python that can be used to create your own API. As already mentioned, these are limited to the manufacturer's own cameras and are not easily interchangeable unless you have good code-writing skills.
I would like to have an Arduino operating in a CAN network. Does software that provides the OSI-model network layer exist for Arduino? I would imagine detecting the HI/LOW levels with GPIO/ADC and sending the signal to the network with a DAC. It would be nice to have that without any extra hardware attached. I don't mind the terminating resistor required by the CAN network, though.
By Arduino I mean any of them; my intention is to keep the development environment.
If such software does not exist, is there a technical obstacle, such as limited flash size (again, I don't mean a particular board with a certain ATmega chip)?
You can write a bit-banging CAN driver, but it has many limitations.
First there is the timing: it's hard to achieve the bit timing and also the arbitration.
You will be able to get 10 kbit/s, or perhaps even 50 kbit/s, but that consumes a huge amount of your CPU time.
And the code itself is a pain.
You have to calculate the CRC on the fly (easy; see the sketch below), but implementing the collision detection and all the timing parameters is not.
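As a rough illustration of the "easy" part, here is a minimal sketch of the CAN CRC-15 (polynomial 0x4599, per the ISO 11898-1 algorithm) computed bit by bit, the way a bit-banged driver would have to do it on the fly; the one-bit-per-byte input representation is an assumption made for clarity:

    #include <stdint.h>
    #include <stddef.h>

    /* CAN CRC-15, polynomial x^15+x^14+x^10+x^8+x^7+x^4+x^3+1 (0x4599),
     * computed one bit at a time exactly as a bit-banged driver would
     * have to do on the fly.  'bits' holds one frame bit per byte
     * (0 or 1), before bit stuffing. */
    static uint16_t can_crc15(const uint8_t *bits, size_t nbits)
    {
        uint16_t crc = 0;
        for (size_t i = 0; i < nbits; i++) {
            uint16_t crcnxt = ((crc >> 14) & 1u) ^ (bits[i] & 1u);
            crc = (uint16_t)((crc << 1) & 0x7FFFu);
            if (crcnxt)
                crc ^= 0x4599u;
        }
        return crc;     /* 15-bit result, transmitted MSB first */
    }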
I once did this for a company, and it was a really bad idea.
Better to buy a chip for 1 euro and be happy.
There are several CAN bus shield boards available (e.g. this and this), and that would be a far better solution. It is not just a matter of the controller chip; the bus interface, line drivers, and power all need to be considered. If you have the resources and skills you can of course create your own board or breadboard for less.
Even if you bit-bang it via GPIO, you would, I believe, need some hardware mods to handle bus-contention detection, and it would be very slow and might not interoperate well with "real" CAN controllers on the bus.
If your aim is to communicate between devices of your own design rather than off-the-shelf CAN devices, then you don't need CAN for that; something proprietary will suffice, and a UART will perform faster than a bit-banged CAN implementation.
I don't think that such software exists. The CAN bus is more complex than, for example, I2C. Basically you would have to implement the functionality of both a CAN controller and a CAN transceiver. See this thread for more details (in German).
Alternatively you could use one of the CAN shields. Another option would be to use a BeagleBone with a suitable CAN cape.
Also take a look at AVR-CAN.
I'm trying to design an efficient communication protocol between a microcontroller on one side and an ARM processor on a multi-core TI chip on the other, over SPI.
The requirements for the needed protocol:
1 - Multi-session with queuing support: I have multiple sending/receiving threads, so more than one application will use this communication protocol, and I need the protocol to handle queuing of these requests (I will keep holding the buffer while the transmission is queued; I just need the protocol to manage scheduling of the queues).
2 - Works over SPI as an underlying protocol.
3 - Simple error checking.
In the thread "Simple serial point-to-point communication protocol", PPP was a recommended option; however, I see PPP doing only part of the job.
I also found the lightweight IP (lwIP) project, which features PPP over serial (and which I assume I can use over SPI), so I thought about the possibility of utilizing one of the upper-layer protocols like TCP or UDP to do the rest of the required jobs. Fortunately, I found that TI includes lwIP as part of their Ethernet software in the StarterWare package, which I assume eases porting, at least on the TI chip side.
So, my questions are:
1 - Is it valid to use lwIP for this communication scheme? Won't it introduce much overhead, due to IP headers that are unnecessary for point-to-point (chip-level) communication, and kill the throughput?
2 - Will TCP, or any similar protocol residing in lwIP, handle the queuing of transmission requests? For example, if I request a transmission through a socket while the communication channel is busy with a transmission for another socket (session) of another thread, will this be managed by the protocol stack? If so, which protocol layer manages it?
3 - Is there a more efficient protocol stack than lwIP that meets the above requirements?
Update 1: More points to consider
1 - SPI is the only available option; I use it together with available GPIOs to indicate to the master when the slave has data to send.
2 - The currently implemented (non-standard) protocol uses DMA with SPI and a message format of STX_MsgID_length_payload_ETX with a fixed fragment length; however, the main drawback of the current scheme is that the master waits for a response to the message (not the fragment) before sending another one, which kills the throughput and does not utilise the full-duplex nature of SPI.
3 - An improvement to this was to use a kind of mailbox for receiving fragments, so that a long message can be interrupted by a higher-priority one and the fragments of a single message can arrive non-sequentially. The problem is that this design led to complicating things, especially since I don't have the resources for the many buffers the mailbox approach needs on the controller (master) side. So I felt I was re-inventing the wheel by designing a protocol stack for a simple point-to-point link, and perhaps not an efficient one.
4 - What kind of higher-level protocols can normally be used above SPI to establish multiple sessions and solve the queuing/scheduling of messages?
Update 2: Another useful thread "A good serial communications protocol/stack for embedded devices?"
Update 3: I had a look at the Modbus protocol. It seems to specify the application layer and then directly the data-link layer for serial-line communication, which sounds like it skips the unnecessary overhead of network-oriented protocol layers.
Do you think this would be a better option than lwIP for the intended purpose? Also, is there a widely used open-source implementation, like lwIP but for Modbus?
I think that perhaps you are expecting too much of the humble SPI.
An SPI link is little more than a pair of shift registers, one in each node. The master selects a single node and connects it to its SPI shift register. As the master shifts data out to the slave, the slave simultaneously shifts data back to the master. Data is not exchanged unless the master explicitly clocks it. Efficient protocols on SPI involve the slave having something useful to output while the master inputs. This may be difficult to arrange, so you usually need a means of indicating null data (see the sketch below).
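To illustrate, here is a minimal sketch of that full-duplex exchange; spi_transfer() stands in for the real driver primitive (here a loopback stub so the sketch is self-contained), and the filler value is an assumption:

    #include <stdint.h>

    #define SPI_FILLER 0x00u    /* "null data" clocked out when the
                                 * master only wants to read */

    /* Loopback stub standing in for the real driver primitive: clocks
     * one byte out and returns the byte shifted in during the same
     * eight clocks. */
    static uint8_t spi_transfer(uint8_t out) { return out; }

    /* Full-duplex exchange: every byte sent clocks one byte back in,
     * so reading n bytes still means clocking out n (filler) bytes. */
    static void spi_exchange(const uint8_t *tx, uint8_t *rx, uint32_t len)
    {
        for (uint32_t i = 0; i < len; i++) {
            uint8_t in = spi_transfer(tx ? tx[i] : SPI_FILLER);
            if (rx)
                rx[i] = in;
        }
    }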
PPP is useful when establishing a connection between two arbitrary endpoints; when the endpoints are fixed and known a priori, PPP serves no purpose other than to complicate things unnecessarily.
SPI is not a very sophisticated or flexible interface, and it is probably unsuited to heavyweight general-purpose protocols such as TCP/IP. Since "addressing" on SPI is performed by physical chip select, the addressing inherent in such protocols is meaningless.
Flow control is also a problem with SPI. The master has no way of determining that the slave has copied the data from the SPI shift register before pushing more data. If your slave's SPI supports DMA, you would be wise to use it.
Either way, I suggest that you develop something specific to your purpose. Since SPI is not a network as such, you only need a means of addressing threads on the selected node. This could be as simple as STX<thread ID><length><payload>ETX, as in the sketch below.
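A minimal sketch of that suggested frame format; the constants and the helper name frame_build are illustrative assumptions:

    #include <stdint.h>
    #include <stddef.h>

    #define FRAME_STX 0x02u
    #define FRAME_ETX 0x03u

    /* Builds STX <thread ID> <length> <payload> ETX in 'out', which
     * must hold len + 4 bytes; returns the frame length. */
    static size_t frame_build(uint8_t *out, uint8_t thread_id,
                              const uint8_t *payload, uint8_t len)
    {
        size_t n = 0;
        out[n++] = FRAME_STX;
        out[n++] = thread_id;       /* addresses a thread, not a node */
        out[n++] = len;
        for (uint8_t i = 0; i < len; i++)
            out[n++] = payload[i];
        out[n++] = FRAME_ETX;
        return n;
    }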
Added 27 September 2013 in response to comments
Generally, SPI, as its name suggests, is used to connect to peripheral devices, and in that context the protocol is defined by the peripheral. EEPROMs, for example, typically use a common, or at least compatible, command interface across vendors, and the SD/MMC card SPI interface uses a standardised command set and protocol.
Between two microcontrollers, I would imagine that most implementations are proprietary and application-specific. Open protocols are designed for generic interoperability, and achieving that might impose significant unnecessary overhead on a closed system, unless perhaps the nodes were running a system that already had a network stack built in.
I would suggest that, if you do want to use a generic network stack, you abstract the SPI with device drivers at each end that give it a standard I/O stream interface (open(), close(), read(), write(), etc.); then you can use the higher-level PPP and TCP/IP protocols (although PPP can probably be avoided, since the connection is permanent). However, that is only attractive if both nodes already support these protocols (if they are running Linux, for example); otherwise it will be significant effort and code for little benefit, and it would certainly not be "efficient".
I assume you don't really want, or have room for, a full IP (lwIP) stack on the microcontroller? That just sounds like a lot of overkill. Why not roll your own simple packet structure to move the data items you need to move? Depending on how SPI is supported on both sides, you may or may not be able to use it to define the frame for your data; if not, a simple start pattern, a length and a trailing checksum (and maybe a tail pattern) would suffice for finding packet boundaries in the stream (no different from a serial/UART solution). You can even borrow the PPP trick of a start pattern and an end pattern, with the payload using a two-byte escape sequence whenever the start pattern happens to show up in the data; I don't remember all the details now, but the escaping sketch below shows the idea.
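A minimal sketch of that PPP/HDLC-style byte stuffing (0x7E flag, 0x7D escape, escaped bytes XORed with 0x20), intended as an illustration of the escaping idea rather than a full PPP implementation:

    #include <stdint.h>
    #include <stddef.h>

    #define FLAG 0x7Eu  /* frame boundary, PPP/HDLC style */
    #define ESC  0x7Du  /* escape byte */

    /* Encodes 'in' into 'out' so that FLAG never appears inside the
     * payload; 'out' must hold up to 2*len + 2 bytes.  Returns the
     * number of bytes written. */
    static size_t stuff_frame(uint8_t *out, const uint8_t *in, size_t len)
    {
        size_t n = 0;
        out[n++] = FLAG;                    /* open the frame */
        for (size_t i = 0; i < len; i++) {
            if (in[i] == FLAG || in[i] == ESC) {
                out[n++] = ESC;             /* escape, then flip bit 5 */
                out[n++] = in[i] ^ 0x20u;
            } else {
                out[n++] = in[i];
            }
        }
        out[n++] = FLAG;                    /* close the frame */
        return n;
    }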
Whatever your frame is, then add a packet type and your handshakes; or, if the data only ever goes from the microcontroller to the ARM, you don't even need to do that.
To get back to your direct question: yes, I think that an IP stack (lwIP or other) will introduce a lot of overhead, both in bandwidth and, more importantly, in the amount of code needed to support the stack, which will chew up ROM/RAM on both sides. If you ultimately need to present this data in an IP fashion (a website hosted by the embedded system), then somewhere in the path you need an IP stack, etc.
I can't imagine that lwIP manages your queues for you; I assume you would need to do that yourself. The various queues might want to talk to a single driver that deals with the single SPI bus (assuming there is a single SPI bus with multiple chip selects). It also depends on how you are using the SPI interface. If you are allowing the ARM to talk to multiple microcontrollers, are the packets of data broken up into a little from this controller and a little from that controller, so that nobody has to wait too long for a few more bytes of data? Or does a complete frame have to move from one microcontroller before moving on to the next GPIO interrupt to pull the next one's data? The long and short of it is that I assume you have to manage the shared resource just as you would in any other situation with multiple users of a shared resource (an RTOS, a full-blown operating system, etc.). I don't remember lwIP that well at all, but with a full-blown Berkeley sockets application interface the user could write separate applications, each caring only about one TCP or UDP port, with the libraries and drivers separating those packets out to each application and handling all the rules of the IP stack.
If you are not already experimenting with moving data over the SPI interface(s), I would start with simple experiments just to get a feel for how well it will or won't work, the sizes of transfers you can do reliably per SPI transaction, etc. Your solution may naturally fall out of those experiments.