The relationship between OPC and DCOM

I am trying to grasp the link between OPC and DCOM. I have watched all four of the tutorials here, and I think I have a good feel for what OPC is, but in one of the tutorials (the third one, 35 seconds in) the narrator states that OPC is based on DCOM, and I do not understand how the two are really linked. My confusion comes from a question my professor posed: "How and where would you deploy OPC instead of DCOM, and vice versa?" His question makes it seem like the two are not as tightly linked as my research suggests. I'm not looking for anyone to answer my professor's question; I just want to know the relation between OPC and DCOM, and then I can figure out the rest. Specifically, I would like to know:
1) whether one is always based on the other, and
2) whether one can always be deployed without the other.

The way I read this is that DCOM/COM/OLE was the substrate upon which the OPC standards (which define intercommunicating objects in terms of interfaces, i.e. groups of methods) were built on Windows. In other words, OPC, which consists of objects, interfaces and methods, can be and has been built out of DCOM, the Windows technology that allows the creation of such objects.
Hence OPC could be built on other substrates, but I'm not familiar enough with other platforms to know whether it has been. I'd suspect yes for non-Windows systems.
From here:
OPC is open connectivity in industrial automation and the enterprise systems that support industry. Interoperability is assured through the creation and maintenance of open standards specifications. There are currently seven standards specifications completed or in development.
Specifically:
OPC is a series of standards specifications. The first standard (originally called simply the OPC Specification and now called the Data Access Specification) resulted from the collaboration of a number of leading worldwide automation suppliers working in cooperation with Microsoft. Originally based on Microsoft's OLE COM (component object model) and DCOM (distributed component object model) technologies, the specification defined a standard set of objects, interfaces and methods for use in process control and manufacturing automation applications to facilitate interoperability. The COM/DCOM technologies provided the framework for software products to be developed. There are now hundreds of OPC Data Access servers and clients.

The original OPC specifications were based on COM, not DCOM. This means a server could be implemented as an in-process COM server, which would not require the use of DCOM. In practice, almost all classic OPC products require DCOM.
In 2003 the OPC Foundation released XML-DA, which provides the same features as OPC DA but uses XML Web Services instead of DCOM.
The next-generation technology, OPC Unified Architecture (UA), was released in 2009 and is independent of the transport technology. Implementations currently support communication via XML Web Services and UA TCP (a binary protocol defined by the OPC Foundation).
More information can be found here: opcfoundation.org/ua
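To make the transport independence concrete, here is a rough sketch of a client read over the UA TCP binary transport, using the open-source Eclipse Milo SDK (an assumed dependency; the endpoint URL is a placeholder, and a standard server node is read):

import org.eclipse.milo.opcua.sdk.client.OpcUaClient;
import org.eclipse.milo.opcua.stack.core.Identifiers;
import org.eclipse.milo.opcua.stack.core.types.builtin.DataValue;
import org.eclipse.milo.opcua.stack.core.types.enumerated.TimestampsToReturn;

public class UaReadSketch {
    public static void main(String[] args) throws Exception {
        // "opc.tcp" selects the UA TCP binary transport; the same client code
        // could target an XML Web Services endpoint instead. URL is a placeholder.
        OpcUaClient client = OpcUaClient.create("opc.tcp://localhost:4840");
        client.connect().get();

        // Read a standard server node: the server's current time.
        DataValue value = client.readValue(
                0.0, TimestampsToReturn.Both,
                Identifiers.Server_ServerStatus_CurrentTime).get();
        System.out.println("Server time: " + value.getValue());

        client.disconnect().get();
    }
}

Note that no COM or DCOM is involved anywhere in that stack, which is the whole point of UA.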

Related

CAN-bus bootloader standards

I'm developing an open-source OTA update system for a few MCUs in a certain project. I wonder if there is some "standard" protocol for CAN-bus-based bootloaders. Everything I have seen online and in application notes from the chip manufacturers seems to use its own brand of communication, and thus its own specialized upload software too (mainly for demonstration in the ANs).
My question is: am I missing something? Is there some standard way of doing this I'd rather adhere to, or should I just roll my own like they do and call it a day?
Features I'm interested in for the protocol side besides the obvious ones: checksumming, digital signatures, authenticated encryption.
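For illustration, the host-side checks I have in mind would look roughly like this; a plain-JDK sketch only, with placeholder file names, using CRC32 for integrity and a detached RSA signature for authenticity:

import java.nio.file.Files;
import java.nio.file.Paths;
import java.security.KeyFactory;
import java.security.PublicKey;
import java.security.Signature;
import java.security.spec.X509EncodedKeySpec;
import java.util.zip.CRC32;

public class FirmwareCheck {
    public static void main(String[] args) throws Exception {
        byte[] image = Files.readAllBytes(Paths.get("firmware.bin"));   // placeholder path
        byte[] sig   = Files.readAllBytes(Paths.get("firmware.sig"));   // detached signature
        byte[] key   = Files.readAllBytes(Paths.get("vendor_pub.der")); // X.509-encoded RSA key

        // Integrity: CRC32 over the whole image, which the bootloader
        // could recompute on-target after flashing.
        CRC32 crc = new CRC32();
        crc.update(image);
        System.out.printf("CRC32 = %08x%n", crc.getValue());

        // Authenticity: verify a detached RSA signature over the image.
        PublicKey pub = KeyFactory.getInstance("RSA")
                .generatePublic(new X509EncodedKeySpec(key));
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(pub);
        verifier.update(image);
        System.out.println("signature valid: " + verifier.verify(sig));
    }
}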
Based on your tag, although I do not see this stated in your question, I assume for now that you want to develop a bootloader for automotive ECUs that have a CAN connection.
The relevant protocols, which provide the services, are ISO 14229-3 and SAE J1939/73; in my experience the first one is much more common.
For development purposes, ASAM MCD-1 XCP also has support for this.
However, these are just the communication services and do not include the usual usage patterns, which differ a lot across the OEMs.
For security, the German OEMs put together a document called "HIS Security Module Specification", which I unfortunately can no longer find on the web.
They also have a blueprint for the design of a boot-loader.
However, this is somewhat outdated anyway, as bootloaders today are often at least partially based on AUTOSAR, like the applications.
Finally, from them you could also get a document partially specifying how the services above are used for flashing an ECU.
If you need further input, feel free to ask.
However, you will need to get access to the non-free industry standards and recommendations yourself.
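To give a feel for what the ISO 14229 (UDS) services look like on the wire, here is a sketch of the raw request payloads of a typical download sequence (the addresses, sizes and sub-functions shown are made-up example values; real sequences also involve SecurityAccess and OEM-specific steps):

public class UdsFlashSketch {
    public static void main(String[] args) {
        // 0x10: DiagnosticSessionControl -> programming session (sub-function 0x02)
        byte[] enterProgramming = {0x10, 0x02};

        // 0x34: RequestDownload. dataFormatIdentifier = 0x00 (no compression/encryption),
        // addressAndLengthFormatIdentifier = 0x44 (4-byte address, 4-byte size).
        // Address 0x08004000 and size 0x00010000 are made-up example values.
        byte[] requestDownload = {
            0x34, 0x00, 0x44,
            0x08, 0x00, 0x40, 0x00,   // memoryAddress
            0x00, 0x01, 0x00, 0x00    // memorySize
        };

        // 0x36: TransferData, block sequence counter 0x01 followed by payload bytes.
        byte[] firstBlock = {0x36, 0x01 /*, ...payload... */};

        // 0x37: RequestTransferExit, then 0x11/0x01: ECUReset (hard reset).
        byte[] transferExit = {0x37};
        byte[] ecuReset = {0x11, 0x01};
    }
}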

How do SAP and Navision interact with third-party applications?

I am developing a business application and, given that many companies look for integration, I would like to make it "compatible" with business systems like SAP or Navision. What mechanisms do these systems use for importing/exporting/syncing data with third-party applications?
There are software tools known as EAI (Enterprise Application Integration) whose purpose is to act as middleware enabling the integration of applications across a company.
Apache Camel is an example of such a framework, but there are many of them. You can find a comparison list here: http://en.wikipedia.org/wiki/Comparison_of_business_integration_software
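To give a feel for what such middleware looks like in code, here is a minimal Apache Camel route (assuming camel-core 2.x on the classpath; the directory names are placeholders) that moves files exported by one application into the import folder of another:

import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

public class IntegrationRoute {
    public static void main(String[] args) throws Exception {
        CamelContext context = new DefaultCamelContext();
        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                // Poll the export directory of one application and copy each
                // file to the import directory of another (placeholder paths).
                from("file:/data/erp-export?noop=true")
                    .log("transferring ${file:name}")
                    .to("file:/data/crm-import");
            }
        });
        context.start();
        Thread.sleep(10_000);  // let the route run briefly in this sketch
        context.stop();
    }
}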
As the user nmiranda pointed out, in the case of SAP, the framework used for data interchange is SAP PI (SAP NetWeaver Process Integration).
I think your question was actually aimed at finding this "starting point", wasn't it? I faced the same question some years ago, and I also wondered whether there was any "standard" interface for integrating applications. If that is the case, I hope this helps.
There are multiple ways to integrate with ERP data sources. You can do batch integration, where you set up a query that pulls data from the source ERPs on a scheduled basis. ETL tools like Informatica and Talend shine on this front.
If you want online data integration, where your business application sees live data, then you need to look at data virtualization solutions like Denodo, VirtDB or Composite.
Prices, feature sets, performance and flexibility differ greatly. One distinguishing factor in my practice is security. Solutions tend to extract data into the file system, which becomes a problem when sensitive data is extracted. In real projects, implementors usually start with a long process of replicating the source system's security objects in the target application.

Understanding BizTalk Development

Coming from a .NET developer's perspective, I've recently been introduced to BizTalk. I was expecting something like a series of service references, auto-mapping classes and workflows. I really wasn't expecting heavy XSD use, and I was surprised by the orchestration maps.
I just don't understand why it isn't more like a bunch of enterprise features built on a foundation of WCF.
Can anyone help me understand the idea behind how BizTalk was designed?
BizTalk can work with WCF services, but doesn't need to for some simple scenarios. It can also work in scenarios where custom non-WCF adapters are needed; it includes many useful ones out of the box, such as FTP, SFTP, file system access, POP3, SharePoint, Azure Service Bus, MSMQ, and MQSeries. Custom adapters can be written for legacy systems and services that don't expose WCF endpoints. There are many WCF adapters for cases where WCF is useful, and these adapters can be used and configured a bit more easily than drawing up a WCF service from scratch. BizTalk can also expose its services as WCF endpoints.
The real power of BizTalk is in its server architecture, which allows for high availability, durable messaging, suspending and resuming messages, advanced debugging options, and rapid development of artifacts (like maps and orchestrations). It also provides for some powerful out of the box support for EDI, HL7, and WCF LoB integration work.
XML is at the heart and soul of the BizTalk messaging engine. This is good because XML is standardized and powerful; it's bad because XML is unwieldy at times, especially when dealing with larger messages and BLOBs.
Receive ports get data into BizTalk's messaging engine (using adapters and receive locations); send ports send the XML (or other) data out using the adapters mentioned above.
Maps use XSLT behind the scenes to transform the XML messages; it's possible to direct a map to use custom XSLT, or to use C#, VB, or JScript as well. However, for most trivial mapping tasks, the visual mapping interface allows for rapid development and testing of mappings between different message types. They can be called from receive ports, send ports, or orchestrations.
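Outside of BizTalk, that transform step boils down to ordinary XSLT. In plain Java (used here purely to show the mechanism) it would look roughly like this, where the stylesheet and file names are placeholders standing in for a compiled map and its input/output messages:

import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class MapTransform {
    public static void main(String[] args) throws Exception {
        // A BizTalk map compiles down to an XSLT stylesheet; applying it
        // is a standard JAXP transform. All file names are placeholders.
        Transformer t = TransformerFactory.newInstance()
                .newTransformer(new StreamSource("order-to-invoice.xslt"));
        t.transform(new StreamSource("order.xml"),
                    new StreamResult("invoice.xml"));
    }
}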
Orchestrations are more or less services that use the XLANGs language. When designed properly, they can provide very powerful processing of business logic and application handling, all with the above mentioned architectural features that BizTalk provides (durable messaging, high availability).
I look at it from a different perspective. BizTalk is more in line with Web/SOAP and cross-platform standards, XML and now JSON, than WCF is. BizTalk also supports many more protocols than WCF. BizTalk supports WCF, not the other way around.
The fact that the WCF stack can build contracts on and serialize/deserialize .NET classes is the custom approach. Keep in mind, WCF is just hiding all the XML/XSD from you; it's still there, and it is the same as what BizTalk uses.
BizTalk was designed and shipped before WCF as a reliable, cross-platform, multi-protocol integration engine. In terms of capabilities, the BizTalk stack as a whole is several orders of magnitude beyond WCF. In practice, we spend a lot of time in a BizTalk app working around the limitations of WCF.*
*For clarity, I'm referring to the OOB binding elements mostly and their application to actual implementations. WCF as a framework is perfectly serviceable.
My research indicates that BizTalk has remained largely unchanged since 2004, and thus has not experienced the kind of technological convergence seen in other areas of the Microsoft stack. The reason appears to be a painful migration from BizTalk 2002 to 2004 that no one wants to repeat. Reminiscent, to me, of the many versions of the Entity Framework.
In 2010-2011, there was a "BizTalk is dead" movement, with promises that a combination of WCF, Workflow Foundation, and AppFabric on Azure would be the replacement. There has been little talk of it since 2012; it looks like the two technology sets each had their unique pros and cons, but the two never really competed.
BizTalk has the strength of out-of-the-box throttling, disk persistence, and an assortment of adapters that aren't standardized elsewhere (enterprise-iness). It's as if its stance is to tame an unwieldy beast. It still appears to lag in taking advantage of the scalability options that have emerged in the last 10 years. The other stack is more along the lines of what I initially expected, but lacking in enterprise-iness.
I don't quite have my head wrapped around BizTalk being described as a publish/subscribe model versus... some other model. Need to look more into that.
In conclusion, I don't like either technology set, and I think they're both in need of work.
Thanks to all who read this question and those who answered it. I know subjective answers aren't a big thing on Stack Overflow.

Tibco EMS vs. MSMQ vs. MQ [closed]

I could not find an answer to this question, so I would like to raise it:
Tibco EMS vs. MSMQ vs. MQ.
How do these three technologies compare?
Which one is better, and in which kinds of scenarios?
Specifically, I am thinking of using one of these in an SOA environment (.NET + WCF), where the scenario will mature over time.
I have one additional specific interest worth mentioning: performance. If given a choice, performance is a critical priority.
I would appreciate a comparison table for a clear picture.
Thanks!
EDIT:
I am focused on two parameters: performance and scalability.
Scalability: how do these technologies compare in terms of the number of concurrent users supported? Which can support more users? The scenario does not matter; let's choose one that all of them support, e.g. simple queues.
Performance: in exactly the same scenario, which performs faster?
If you want to use WCF, then none of them really matters. You will get the most out of each of them only when you use its direct API.
MSMQ - a Microsoft technology installed with every Windows installation. It is a transport technology supporting only queues.
Tibco EMS - a Tibco technology supporting both queues and topics (publish/subscribe). It is expensive and more suitable for enterprise scenarios. You will most probably need other Tibco tools and technologies as well to implement a full SOA solution (the Tibco ActiveMatrix product suite). .NET and WCF will only be apps connected to this infrastructure, which is designed more for the Java world. It runs on non-Windows platforms as well, and together with Tibco BusinessWorks it offers connectors (adapters) to many LOB applications. I like the APIs for Tibco products, but I really don't like the UIs of their tools.
IBM MQ - an IBM technology supporting queues that also somehow emulates topics (publish/subscribe). Again, it is an expensive commercial solution more suitable for enterprise scenarios where mainframes are involved; that is MQ's biggest advantage: it runs "everywhere". But that is where the advantages end. The APIs for both Java and .NET are terrible. The .NET API is full of bugs and doesn't work as expected. IBM doesn't understand .NET library versioning, which leads to terrible problems when moving your client application to machines with different MQ clients installed, etc.
Edit:
There were several questions/comments about what problems MQ has. For a few examples, you can check my MQ questions. Not every question is actually an issue, but you will find a few of them pointing directly to bugs. Those issues may already be fixed in newer MQ client versions, but that doesn't mean there are no others. Generally, I found the MQ .NET API the most frustrating library I have ever used; it even beat the hated SharePoint.
On the other hand, if you just need to send and receive messages and don't plan to do anything special or use low-level features, you should be OK. In the end, the API has been in use for a while and the common use cases should work, if you are not unlucky enough to hit regression bugs.
For a simple integration scenario, i.e. two applications interacting in a point-to-point manner, there will be no real difference. You should rather check how well each technology is supported by your applications. And in that type of scenario, you shouldn't worry about performance, since messaging time shouldn't be the main issue. On the other hand, the real selection should be based on the target model for integrating your whole enterprise. For example:
- Are you doing any mediation functions, e.g. data transformation, protocol mapping, etc.?
- Will you integrate systems in a point-to-point manner, or might you consider having a hub/ESB?
- Will you cover security aspects in your integration scenario (authorization, authentication, auditing, encryption, certificate exchange, etc.)?
Finally, having such a vision will give you a better understanding of the real constraints on your design. Personally, I would go for WCF only if I were not expecting complex integration scenarios and not willing to spend money on the solution. I would go for IBM if I were building a foundation for SOA. And I would go for Tibco if I were planning a Java-based integration with a defined scope.
Again, it is an expensive commercial solution more suitable for enterprise scenarios where mainframes are involved
Not sure why you mentioned mainframes. Many MQ enterprise customers don't have them.
IBM MQ - an IBM technology supporting queues that also somehow emulates topics (publish/subscribe)
MQ v7.0.0 (released 2008) and onwards supports pub/sub topics as a native feature; there is no emulation involved.
The APIs for both Java and .NET are terrible.
The MQ Classes for Java and JMS have evolved over 10+ years and are used heavily by thousands of enterprises.
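For instance, a basic put with the MQ classes for Java looks like this (a sketch only; the queue manager and queue names are placeholders, and a local bindings-mode connection is assumed):

import com.ibm.mq.MQMessage;
import com.ibm.mq.MQPutMessageOptions;
import com.ibm.mq.MQQueue;
import com.ibm.mq.MQQueueManager;
import com.ibm.mq.constants.CMQC;

public class MqPutSketch {
    public static void main(String[] args) throws Exception {
        // Queue manager and queue names are placeholders; connecting
        // this way assumes a local queue manager in bindings mode.
        MQQueueManager qmgr = new MQQueueManager("QM1");
        MQQueue queue = qmgr.accessQueue("DEMO.QUEUE", CMQC.MQOO_OUTPUT);

        MQMessage message = new MQMessage();
        message.writeString("hello from MQ classes for Java");
        queue.put(message, new MQPutMessageOptions());

        queue.close();
        qmgr.disconnect();
    }
}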
The .NET API is full of bugs and doesn't work as expected.
The .NET API has been around for 7+ years, across a few major releases of MQ. I would imagine that the obvious bugs would have been shaken out by now.
I am focused on two parameters: performance and scalability.
MQ has unlimited scalability. Performance is very good even with no tuning.
MQ is best only if you need to integrate with lots of mainframes. Pub/sub is implemented poorly, and the many APIs are "strange to use".
If all your applications are Windows, MSMQ might be a good choice, but it will be difficult to bridge into Unix or Java worlds.
The whole Java community standardized on JMS so TIBCO EMS is a good choice if you ever want to connect non-Windows applications.
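Because EMS implements standard JMS, the client code is plain javax.jms; only the connection factory class is Tibco-specific (shown here as an assumption, with a placeholder server URL and credentials):

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Session;

public class EmsSendSketch {
    public static void main(String[] args) throws Exception {
        // Tibco-specific factory class (assumption); everything below is pure JMS,
        // so swapping brokers means swapping only this line.
        ConnectionFactory factory =
                new com.tibco.tibjms.TibjmsConnectionFactory("tcp://emshost:7222");
        Connection connection = factory.createConnection("user", "password");
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer =
                session.createProducer(session.createQueue("demo.queue"));
        producer.send(session.createTextMessage("hello from JMS"));
        connection.close();
    }
}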

Creating a Data Access Layer when using Web Client Software Factory 2010

I am exploring WCSF and wondering how the data access layer is created. Some of the articles I have found are two years old and talk about using the Web Service Factory. I am using VS 2010 and .NET 4.0. I am looking for samples and tutorials with real-world examples.
The Web Client Software Factory doesn't provide automated guidance for creating the data access layer. Its focus is primarily on providing guidance to facilitate composite web application development (i.e., web applications composed of individual modules, often developed by different development teams).
There are several approaches to data access, but a few resources you might want to check out are the ASP.NET MVC Nerd Dinner tutorial, the S#arp Architecture project, the Code Camp Server source, and the Microsoft Patterns & Practices Data Access Guidance. All of these use variations of the Repository pattern, which is the predominant approach among teams following Domain-Driven Design.
There is a good reference implementation hidden in the WCSF2010 Source file, and a few other examples. On http://webclientguidance.codeplex.com, click Web Client Software Factory 2010 Source and then download WCSF2010Source.zip. Inside you'll find Trunk\Source\GlobalBankRI\GlobalBank.Commercial.EBanking (VSTS Tests).sln, which is a pretty good example of many aspects of WCSF, including data access through a WCF service. There are some other simpler examples in the Trunk\Source folder.
Only the ETF module is fully built out. Each view presenter uses an ETFController to manage data common to all presenters. The ETFController uses an instance of IAccountServiceAgent, realized by AccountServiceAgent (for non-unit testing), which is registered as a module. AccountServiceAgent uses a class that acts as a proxy for the WCF reference. The proxy instance to use, AccountServiceProxy, is hardcoded.
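The layering reads roughly like this; a language-neutral sketch (written in Java purely for illustration) of the indirection described above, with names mirroring the ones in the sample:

interface IAccountServiceAgent {
    String getAccountSummary(String accountId);
}

class AccountServiceProxy {
    // Stands in for the hardcoded WCF service proxy in the sample.
    String fetchSummary(String accountId) {
        return "summary for " + accountId;
    }
}

class AccountServiceAgent implements IAccountServiceAgent {
    private final AccountServiceProxy proxy = new AccountServiceProxy();
    public String getAccountSummary(String accountId) {
        return proxy.fetchSummary(accountId);
    }
}

class ETFController {
    // Presenters share this controller; tests can pass in a fake agent.
    private final IAccountServiceAgent agent;
    ETFController(IAccountServiceAgent agent) { this.agent = agent; }
    String summaryFor(String accountId) { return agent.getAccountSummary(accountId); }
}

public class WcsfLayeringSketch {
    public static void main(String[] args) {
        ETFController controller = new ETFController(new AccountServiceAgent());
        System.out.println(controller.summaryFor("ACCT-1"));
    }
}

The point of the extra interface is testability: unit tests substitute a fake IAccountServiceAgent so presenters never touch the real service.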
The actual source code for WCSF is in BlocksTrunk\Source.
Yeah, not at all easy to find. I don't remember what made me download this and look inside for such examples. Certainly not anything I read on the website.
I've used this example to build a web app that accesses SQL data and scrapes a website, if you'd like to take a look. It's still under development, but the data access bits are pretty firm: http://lcbodrinkfinder.codeplex.com/