What's the difference between OSI and OPC? - automation

OSI: Open Systems Interconnection Reference Model
OPC: OLE (Object Linking and Embedding) for Process Control
I can't figure out the difference, since both of these refer to communication between machines.
I'd be glad for any answer. Thanks!

The Open Systems Interconnection (OSI) Model is a logical layout to define how systems should communicate with each other, using various protocols available at each of seven layers. It is a universal language to guide vendors and IT professionals to develop products and solutions or troubleshoot problems in networks. So, it is more like a conceptual tool than a specific protocol specification or a single technology.
On the other hand, OPC (Open Platform Communications, formerly OLE for Process Control) can be described as an industrial interoperability standard for machine-to-machine (M2M) communication. The OPC standard is a series of specifications with a long history: the platform-dependent older specifications are called OPC Classic, and the next generation of OPC technology is known as OPC UA. In other words, it defines a specific communication solution that addresses a particular gap in industry.
Consequently, since the OSI model is a reference model and a generic tool, we can use it to develop a better understanding of how OPC deals with the problems we have in industrial networks today.

Related

CAN BUS protocol stack

Can someone explain to me what a CAN bus protocol stack is? Is it the CAN bus plus higher layers, like CANopen with 7 layers, or something else? Can someone also explain how I can use a CAN stack, how I connect it to the CAN bus, and why I need it?
Thank you
Yes, it is CAN hardware with higher-layer protocols such as CANopen, J1939 or DeviceNet.
In terms of the "OSI model", it only really makes sense to speak of layers 1-3 and 7, where CAN provides layers 1 and 2 and a protocol like CANopen roughly provides layers 3 and 7. Roughly, since CANopen also comes with hardware specifications such as baud rate, sync point and stub length recommendations.
What's known as a "protocol stack" is really just a library with a platform-independent API, usually delivered with hardware-specific drivers. If the vendor claims that they support a particular MCU, then it usually means that you get the drivers from the vendor.
So basically you buy this pre-made library and integrate your program with it, and then you get standardized protocol behavior on the CAN bus, necessary to communicate with other nodes implementing the same protocol. Writing such a library yourself is no small task, particularly not for CANopen, which is a big standard of which you will probably use only some 10% of the available functionality.
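To make the "buy a stack and integrate it" idea concrete, here is a rough sketch of what application code against such a library tends to look like. The header and all the co_* function names below are hypothetical placeholders, not any particular vendor's API; a real CANopen stack will have its own naming, initialization sequence and object-dictionary configuration.

```c
/* Hypothetical integration with a CANopen protocol stack library.
 * All names below (canopen_stack.h, co_*) are illustrative placeholders,
 * not a real vendor API. */
#include <stdint.h>
#include "canopen_stack.h"          /* placeholder: header shipped with the stack */

extern uint16_t read_sensor(void);  /* placeholder: your application code */

int main(void)
{
    /* The vendor-supplied driver binds the stack to the actual CAN
     * controller; typically you just pass the bit rate and node ID. */
    co_init(/* bitrate */ 250000, /* node_id */ 0x10);

    /* Expose an application variable through the object dictionary so
     * other nodes can read it via SDO or receive it via PDO. */
    uint16_t sensor_value = 0;
    co_od_register(0x2000, 0x00, &sensor_value, sizeof(sensor_value));

    for (;;) {
        sensor_value = read_sensor();

        /* Give the stack CPU time: reception, heartbeat and PDO
         * transmission are handled inside this call. */
        co_process();
    }
}
```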

Will there ever be a libdc1394-like API for USB3 Vision and/or GigE Vision cameras?

As FireWire cameras are becoming obsolete because of their bandwidth limitations, it seems as though camera manufacturers are switching to USB 3.0 or Gigabit Ethernet interfaces. Both have standards, USB3 Vision and GigE Vision respectively, which many manufacturers are adhering to.
However, it seems as though each manufacturer - Basler, Point Grey, Ximea, and others - has its own SDK for interfacing with its cameras. When developing an application, developers either need to learn and interface with each API, which is a pain, or stick to one manufacturer. I may be misunderstanding, but in that case, what is the point of an industry standard if developers need to use manufacturer-dependent APIs?
For FireWire cameras, developers have access to libdc1394, a cross-platform, high-level API. They do not need to worry about who manufactures the camera and do not have to write separate drivers. Is something like this even possible for USB3 Vision and GigE Vision? If so, who would develop it?
At least for GigE Vision, let me mention that the Aravis project is available for Linux. It is meant to be a GenTL/GenICam library, but it only supports GigE right now due to the driver-constraint problems outlined below.
First of all, I agree with Martin's point that creating a general SDK is not in the interest of the camera manufacturers themselves, for competitive and support reasons. The manufacturers develop proprietary USB drivers (for USB3 Vision) and NIC filter drivers (optional for GigE but highly recommended) in conjunction with their SDKs. This incentivizes them to lock users into their ecosystem and to set themselves apart from the competition.
This is the reason why I disagree with AdamF: I do not think that GenTL is widely supported by camera manufacturers, particularly for GigE or USB3 Vision cameras. Supporting GenTL would effectively allow users to use any general-purpose SDK while still leveraging the manufacturer's proprietary drivers.
I think it would be easier for OpenCV to support GenTL instead of GigE/U3V at this point, because of the giant hurdle of developing GigE/U3V drivers across the available hardware platforms. GenTL support would at least be a purely software-based interface at this point.
I'm not very familiar with libdc1394, but I know a bit about most of the other interfaces.
USB3 Vision, GigE Vision and all the other standards can be accessed through one common interface: GenICam:
The goal of GenICam™ is to provide a generic programming interface for all kinds of cameras and devices. No matter what interface technology (GigE Vision, USB3 Vision, CoaXPress, Camera Link HS, Camera Link, 1394 DCAM, etc.) they are using or what features they are implementing, the application programming interface (API) should be always the same.
The GenICam™ standard consists of multiple modules according to the main tasks to be solved:
GenApi: configuring the camera.
Standard Feature Naming Convention (SFNC): standardized names and types for common device features. Includes the Pixel Format Naming Convention (PFNC).
GenTL: transport layer interface, grabbing images.
CLProtocol: GenICam for Camera Link.
GenCP: generic control protocol.
GenTL SFNC: recommended names and types for the transport layer interface.
Most of the biggest camera producers supply GenTL providers to work with their cameras.
Unfortunately, I don't know of any open-source high-level API for GenICam. I know of two image processing libraries with GenICam support, Adaptive Vision Library and Halcon, but they are not free.
Another common image-grabbing interface, though less popular in industry, is DirectShow.
DirectShow is supported, for example, by Ximea, NET GmbH, Basler and almost all web cameras.
So, in my opinion, if you want to use one common interface for all cameras, you should consider using the GenICam interface.
Check out https://github.com/ni/usb3vision
It implements the core USB3 Vision specification as a kernel driver. To control a camera, you would still need to wrap some usermode logic around it that connects it up to GenApi (the reference implementation of GenICam) as well as handles buffers queued/de-queued to the driver.
Also, regarding your question about whether it is possible to implement a vendor-independent driver: of course it is. That is indeed the point of the standards. Most camera vendors provide their own proprietary SDK with their cameras for various reasons, but there are independent SDKs that will work with any standards-compliant GigE Vision and USB3 Vision camera. Whether any of these are open source is a good question, and I am not aware of any that are. The above-mentioned USB3 Vision driver is used by National Instruments' IMAQdx driver, which is commercial and closed source.
An old thread, but in case someone else comes looking...
Plus 1 for Aravis for open source on Linux. At the time of writing this response, the project now supports USB3 Vision cameras as well, although support for some cameras is better than for others. There is a lot of activity on the repo on GitHub at present.
On the paid side of things (on Windows at least) there are APIs called ActiveUSB (for USB3 cameras) and ActiveGigE, by A&B Software. I have no experience with the GigE software, but I have used the USB3 Vision library that they provide, and it is quite good across different cameras as long as they adhere to the GenICam standard. It also offers a trial period, allowing you to decide whether it's right for you. It is usable from Python, C, C# and VB. If you are developing a commercial product or solution, then it's worth taking a look at. On the other hand, if you don't want to or can't spend any money, then Aravis is the way to go.
It's also worth noting that some manufacturers are starting to provide demos written in Python that can be used to create your own API. As already mentioned, these are limited to use with the manufacturer's own cameras and are not easily interchangeable unless you have good code-writing skills.
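As a rough illustration of the vendor-neutral route that Aravis (mentioned above) provides, here is a minimal single-frame grab in C. This is a sketch assuming the Aravis 0.8-era API (arv_camera_new taking a GError, arv_camera_acquisition, arv_buffer_get_data); signatures have changed between releases, so check the current Aravis documentation before relying on it.

```c
/* Minimal single-frame grab through Aravis, a vendor-neutral
 * GenICam/GigE Vision/USB3 Vision library. Sketch assuming the
 * 0.8-era API; verify against the current documentation.
 * Build (roughly): gcc grab.c $(pkg-config --cflags --libs aravis-0.8) */
#include <arv.h>
#include <stdio.h>

int main(void)
{
    GError *error = NULL;

    /* NULL means "first camera found", regardless of vendor. */
    ArvCamera *camera = arv_camera_new(NULL, &error);
    if (camera == NULL) {
        fprintf(stderr, "No camera found: %s\n", error ? error->message : "unknown");
        return 1;
    }

    /* Convenience call: start acquisition, grab one buffer, stop.
     * The timeout is in microseconds. */
    ArvBuffer *buffer = arv_camera_acquisition(camera, 2000000, &error);
    if (buffer != NULL) {
        size_t size;
        arv_buffer_get_data(buffer, &size);
        printf("Got a frame of %zu bytes\n", size);
        g_object_unref(buffer);
    }

    g_clear_object(&camera);
    return 0;
}
```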

Any wireless chipsets with open specifications?

I'm wondering if there are any "mainstream" wireless chipsets/adapters for PCs that have open specifications, to a level that would permit one to implement a custom driver (i.e. specifications of registers, mode of operation, etc.)? It's OK if the chipset requires the upload of binary blobs (for which the source isn't available) to the chip/card itself, as long as the host <-> adapter interface is public. I'm looking for this mainly out of interest, to see what the interface looks like, but I might also be interested in doing some coding myself. Thanks!
You have OpenWrt, which is a fully capable open-source router operating system; many TP-Link products can run OpenWrt.
You may also be interested in https://www.zigbee.org/, which is more oriented towards the Internet of Things and M2M wireless sensor networks.
You probably want to check the Atheros WiFi chipsets and their open-source drivers, for example ath5k and ath9k. These drivers are included in the mainline Linux kernel. The chipsets are widely used in academia, at least, and adopted by many off-the-shelf NICs.

What is an embedded system? Can Mobile be considered as an embedded product?

What is meant by an embedded system?
If a system/machine or product which we are making serves multiple purposes, can we consider it an embedded system? Or is it only a system dedicated to a particular task that is considered an embedded system? Can a PC/mobile/laptop be considered an embedded system or not?
Generally an embedded system is one placed into operation for a specific, narrow purpose, and lacking the kind of general purpose user interfaces you would find on an ordinary desktop/laptop.
That is not to say though that an embedded system cannot have these - I've seen test equipment such as network analyzers running desktop operating systems, with mouse/keyboard ports. One could probably hack one of those to use it for general purpose computing, but it would not be cost effective.
Going the other way, you can take a general purpose computer and shove it into an embedded application. However, systems optimized for embedded use may be more robust, support better real-world I/O (often retaining legacy ports), and use parts expected to be available over longer lifetimes than used in commodity PCs (if one fails, you want to be able to replace it with the exact same thing).
Often embedded systems are smaller - 8-bit processors (even 4-bit or serial-core historically) with limited memory; though 32-bit cores such as the ARM family are now inexpensive and commonplace. Nor are tens to hundreds of megabytes of memory unknown.
Older cellphones would have a lot in common with embedded systems, but rather obviously contemporary smartphones are catching up in power and versatility, though still often constrained by user interface. Software-wise, some "think small" habits endure - for example, Android's compact Bionic C library and toolbox shell have similar design goals to embedded C libraries and BusyBox. In other ways, though, expansive resource-gobbling user experiences are now the norm on phones. Toss tablets based on the same processors and accessorized with keyboards into the mix, run a kernel designed originally for desktop computers on them, and the real difference is between UI software stacks designed to run segregated "apps" on a touch interface, versus one designed to run more traditional programs.
This is a question that even embedded systems experts often ask and discuss. There is as with many things a spectrum, and simple definitions are difficult.
My preferred definition is: a system containing one or more computing or processing element that is not a general purpose computer.
Some systems are inarguably embedded within that definition, and include such things as washing machine controllers, telephone switches, satellite navigation equipment, marine chart-plotters, automotive ECUs, laser printers etc.
Some are less easily categorised. A first-generation digital mobile phone is almost certainly an embedded system, while more modern feature phones and smartphones are somehow different. They can run apps chosen and installed by end users, allowing them to perform tasks not determined by the manufacturer. With their increasing capabilities they are essentially hand-held computers, and the range of apps is sufficient to be able to regard them as "general purpose".
With these more ambiguous systems, it is useful to ask perhaps not what is an embedded system, but rather what is embedded systems development? For example, the manufacturer of your smart-phone deployed on it an operating system, the signal processing and communications stack required for it to operate as a telephone, all the device drivers and stacks for WiFi, USB, data storage etc., and this is certainly embedded systems development. However the guys writing apps for PlayStore or AppStore etc. are writing to a defined common platform abstracted by all that embedded code - that is not embedded systems development by any definition that I would accept, unless perhaps the application were for some bespoke vertical market application - like the delivery signature apps UPS drivers have on PDAs for example - in that environment the "general-purpose" device has been re-purposed as a "special-purpose" device.
With respect to a PC; a PC can be the embedded computing element in a system that is not a general purpose computer. Industrial PCs are commonly found embedded in manufacturing and packaging machinery, CNC machine tools, medical equipment etc. Although they share hardware architecture with desktop PCs they do not necessarily look like desktop PCs and come in many different form factors of both boards, and enclosures. Even within a desktop PC however, there are many examples of embedded computing elements, and embedded software such as the BIOS responsible for bootstrapping the system, the keyboard controller and disc drive controllers for example.
An embedded system is any electronic system that uses a CPU chip, but that is not a general-purpose workstation, desktop or laptop computer.
An embedded system is a special-purpose computer system designed to perform a dedicated function. Unlike a general-purpose computer, such as a personal computer, an embedded system performs one or a few pre-defined tasks, usually with very specific requirements, and often includes task-specific hardware and mechanical parts not usually found in a general-purpose computer.
Read more: http://romux-loc.com/tutorials/embedded-system
Embedded systems are devices that do some specific job, unlike our laptops, which can play music, take pictures and format documents. They are devices like water filters, washing machines, vending machines, etc.
They are programmed for some specific work, and they do that work in a super loop depending on the user input (see the sketch below). For example, the vending machine always performs the same task when you opt for coffee using the button provided on it.
So in that sense a mobile phone is not an embedded system, because it has no super loop and it can do various general-purpose things just like a computer.
An embedded system has memory constraints and timing constraints, and it does its work in a limited space.
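For what it's worth, here is roughly what such a "super loop" looks like in C for the vending-machine example mentioned above. The hardware-facing names (hardware_init, read_buttons, dispense_coffee, ...) are made-up placeholders that would come from the board support code in a real project.

```c
/* Typical bare-metal "super loop": initialize once, then repeat the same
 * narrow job forever. All hardware-facing functions below are illustrative
 * placeholders, not a real board support API. */
#include <stdint.h>

#define BUTTON_COFFEE  0x01u
#define BUTTON_TEA     0x02u

void    hardware_init(void);     /* clocks, GPIO, timers ... (placeholder) */
uint8_t read_buttons(void);      /* placeholder */
void    dispense_coffee(void);   /* placeholder */
void    dispense_tea(void);      /* placeholder */
void    watchdog_kick(void);     /* placeholder */

int main(void)
{
    hardware_init();

    for (;;) {                   /* the super loop: never exits */
        uint8_t buttons = read_buttons();

        if (buttons & BUTTON_COFFEE)
            dispense_coffee();
        else if (buttons & BUTTON_TEA)
            dispense_tea();

        watchdog_kick();         /* tell the watchdog we are still alive */
    }
}
```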
An embedded system is any device that includes a programmable computer but is not itself a general-purpose computer. So the mobile phone is not an embedded system, because it has no super loop and it can do various general-purpose things just like a computer; an embedded system has memory and timing constraints and does its work in a limited space.
An embedded system is a microprocessor-based system in which a computer is built into the device it controls. An embedded system can perform a set of tasks that have been predefined. Because the system is used for specific tasks, it is possible through engineering to optimize a given product and reduce its size, its computational resources and its final cost.
Embedded systems are all around us, and for that reason we are often not aware of their computational capacity, since we are so accustomed to such mechanisms. Embedded systems operate in machines that can work for several years without stopping and which, in some cases, even have the ability to self-correct.
An excellent example of items that use embedded systems is the famous smartphone, which performs specific functions and has more limited mechanisms than a computer.
Below is a list of some examples where embedded systems are applied:
Electronic ballot box
Video games
Calculators
Printers
Hospital equipment
Vehicles
Some home appliances
Cell phones
Routers
A definition that may help to grasp the difference:
An embedded system can be considered a system on which another embedded system cannot be developed. So at present, one cannot develop an 'embedded system' using a mobile phone. If that is possible on the mobile device, then it should be considered a general-purpose system.
An example of an "embedded system" is a chip that is inserted underneath a dog's skin for identification purposes. Words like "embedded system" have specific meanings that only specialists understand. Such ambiguities make understanding technical language difficult for ordinary people.
embedded (ɪmˈbɛdɪd)
adj
fixed firmly and deeply in a surrounding solid mass
constituting a permanent and noticeable feature of something
(Journalism & Publishing) journalism assigned to accompany an active military unit
(Grammar) grammar inserted into a sentence
(Computer Science) computing (of a piece of software) made an integral part of other software

Where does VISA go on the OSI stack?

I am looking at putting together a communications protocol for an embedded application, but I don't know much about high-level communications such as TCP/IP. I'm more used to dealing with bits and bytes on I²C, SPI, etc.
Someone has suggested that I use a VISA (Virtual Instrument Software Architecture) I/O API with SCPI (Standard Commands for Programmable Instruments) command syntax. What layer would these sit at in the OSI model? I'm thinking VISA would be the application layer and SCPI the presentation layer?
Someone else has suggested using SSH; again, as I'm not sure what layer VISA/SCPI sits at, I don't know how SSH would affect the design.
Since you're basically just using the network to pass data between a hardware API and an application, you're at layer 7 (application) of the OSI stack.
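To make that concrete: the VISA C API hides whichever transport you pick (TCP/IP, USB, GPIB, serial), and SCPI is just the text you write and read over it. Here is a minimal sketch using the standard VISA calls (viOpenDefaultRM, viOpen, viPrintf, viScanf); it assumes a VISA implementation such as NI-VISA is installed, and the TCPIP resource string is only an example address that you would replace with your instrument's.

```c
/* Minimal VISA + SCPI example: open an instrument and ask who it is.
 * Assumes a VISA implementation (NI-VISA, Keysight IO Libraries, ...) is
 * installed; the resource string below is an example and must be replaced. */
#include <stdio.h>
#include <visa.h>

int main(void)
{
    ViSession rm, instr;
    char idn[256];

    if (viOpenDefaultRM(&rm) < VI_SUCCESS)
        return 1;

    /* VISA hides the transport: this could just as well be a GPIB::,
     * USB:: or ASRL (serial) resource string. */
    if (viOpen(rm, "TCPIP0::192.168.0.20::INSTR", VI_NULL, VI_NULL, &instr) < VI_SUCCESS) {
        viClose(rm);
        return 1;
    }

    /* SCPI is plain text: *IDN? asks the instrument to identify itself. */
    viPrintf(instr, "*IDN?\n");
    viScanf(instr, "%t", idn);          /* read until the termination character */
    printf("Instrument: %s", idn);

    viClose(instr);
    viClose(rm);
    return 0;
}
```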