COleDispatchDriver with a specific exe as the COM server

I have inherited an application that consists of a bunch of exe files that communicate using COM and COleDispatchDriver. There is one main "client" exe, and several "server" executables that provide services.
At the moment, the client process starts the servers using COleDispatchDriver::CreateDispatch(), passing a ProgID that gets resolved to a class ID (CLSID). The problem with this is that it relies on the COM server being registered (a potential point of failure). It can also be problematic if there are several different versions of the COM server exes on the machine.
I'd like to make this less fragile without having to completely rearchitect the application at this point. Is there any way to keep the same mechanism for communication, but explicitly start a specific server application? The client knows where the server apps are and what they are called (they are alongside the client in the same directory).

It's a bit trickier, but you can manually marshal the interfaces you need yourself. Have the client start the server process directly; the server then creates the object and marshals an interface to it back to the client using CoMarshalInterface().
Once the client has unmarshalled that interface, it can get hold of an IDispatch pointer with a simple call to QueryInterface and attach it to the COleDispatchDriver with AttachDispatch().


Need shared property across instances of my COM server

I have a VB.NET COM class with a Shared property (call it ABC). The problem is that the component is used by several C++ COM EXEs, so it's my understanding that each will get its own assembly load, and the Shared property will be unique to each EXE. Is there a way to get a cross-EXE shared property for this assembly?
Thanks.
Create a Windows Service application and either register your shared singleton object in the running object table (ROT) directly, or simply use RegisterActiveObject/RevokeActiveObject to register it under a unique GUID.
Accordingly, use the ROT or GetActiveObject to obtain a COM proxy to this object from any other place. You'd need to manually start the Windows service if the object has not been registered.
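For illustration, here's a minimal C# sketch of the RegisterActiveObject/GetActiveObject approach via P/Invoke. The class name and GUID are placeholders, and a real service would also need the usual COM-visibility plumbing:

    using System;
    using System.Runtime.InteropServices;

    // Hypothetical COM-visible singleton; the GUID is a placeholder.
    [ComVisible(true)]
    [Guid("11111111-2222-3333-4444-555555555555")]
    public class SharedState
    {
        public string ABC { get; set; }
    }

    static class ActiveObjectRegistration
    {
        [DllImport("oleaut32.dll")]
        static extern int RegisterActiveObject(
            [MarshalAs(UnmanagedType.IUnknown)] object punk,
            ref Guid rclsid, uint dwFlags, out uint pdwRegister);

        [DllImport("oleaut32.dll")]
        static extern int RevokeActiveObject(uint dwRegister, IntPtr pvReserved);

        [DllImport("oleaut32.dll")]
        static extern int GetActiveObject(
            ref Guid rclsid, IntPtr pvReserved,
            [MarshalAs(UnmanagedType.IUnknown)] out object ppunk);

        const uint ACTIVEOBJECT_STRONG = 0;

        // Called once from the Windows service to publish the singleton.
        public static uint Register(object singleton, Guid clsid)
        {
            uint cookie;
            Marshal.ThrowExceptionForHR(
                RegisterActiveObject(singleton, ref clsid, ACTIVEOBJECT_STRONG, out cookie));
            return cookie; // hand this back to RevokeActiveObject on shutdown
        }

        // Called from any client EXE to obtain a proxy to the one shared instance.
        public static object Get(Guid clsid)
        {
            object punk;
            Marshal.ThrowExceptionForHR(GetActiveObject(ref clsid, IntPtr.Zero, out punk));
            return punk;
        }
    }

Every client EXE then sees the same instance (and the same ABC value) through the proxy. If the object also has a registered ProgID, clients could use Marshal.GetActiveObject instead of the P/Invoke declaration.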
Update: it's also possible to implement IClassFactory on the singleton object (which would return itself). The service would register the singleton via CoRegisterClassObject, resembling the out-of-proc server behavior. The initial service activation would still be required.
Finally, perhaps the simplest solution is to register the assembly as an out-of-proc DLL surrogate. I haven't tried that, but it ought to be easy with [ComRegisterFunction] / [ComUnregisterFunction] custom interop registration.
Update: here is an example of using a surrogate process.
What you are describing would be "easily" accomplished in native COM by creating an out-of-process COM server (also commonly referred to as an ActiveX EXE). As the name implies, an out-of-process COM server runs in its own process and serves its methods via a COM interface. If multiple clients use the COM server simultaneously, they all share the same server process, so any global data within that process is shared between all of the clients.
Unfortunately, .NET does not provide any mechanism for creating an out-of-process COM server. All COM-visible .NET assemblies act as in-process COM libraries, so each client using one has its own set of global data within its own process.
The only alternative is to create a standard in-process COM-visible library, but have it just be a pass-through wrapper that calls out to some other process. Inter-process communication in .NET is typically handled with WCF, so the usual solution would be to have a WCF service running in the back end with which the COM-visible library communicates. If you don't want to use WCF, you could also look at .NET Remoting or raw TCP/IP sockets.
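As a rough sketch of that pass-through idea, assuming a named-pipe WCF service in the back-end process (the contract, endpoint address, and GUID below are all invented for illustration):

    using System;
    using System.Runtime.InteropServices;
    using System.ServiceModel;

    // Hypothetical contract implemented by the single back-end service process.
    [ServiceContract]
    public interface ISharedStateService
    {
        [OperationContract] string GetABC();
        [OperationContract] void SetABC(string value);
    }

    // In-process COM-visible wrapper: every client EXE loads its own copy,
    // but all copies talk to the one back-end service, so state is shared.
    [ComVisible(true)]
    [Guid("66666666-7777-8888-9999-aaaaaaaaaaaa")]
    public class SharedStateComWrapper
    {
        static readonly ChannelFactory<ISharedStateService> Factory =
            new ChannelFactory<ISharedStateService>(
                new NetNamedPipeBinding(),
                new EndpointAddress("net.pipe://localhost/SharedState")); // placeholder

        public string ABC
        {
            get
            {
                ISharedStateService channel = Factory.CreateChannel();
                try { return channel.GetABC(); }
                finally { ((IClientChannel)channel).Close(); }
            }
            set
            {
                ISharedStateService channel = Factory.CreateChannel();
                try { channel.SetABC(value); }
                finally { ((IClientChannel)channel).Close(); }
            }
        }
    }

Only the back-end process holds the shared state; the COM-visible wrapper stays stateless, so it doesn't matter how many EXEs load it.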

Running multiple instances of the same XPC service (NSXPCConnection)

Is it possible to run multiple instances of the same XPC service using the XPC APIs found in Foundation.framework (NSXPCConnection, etc.)? The docs don't provide much insight on this matter.
EDIT: Did a quick test, and it seems like only one instance of the service is running even though I created two XPC connections. Is there any way to have it run another instance?
A bit late, but the definitive answer to this question is provided in the xpcservice.plist manpage:
ServiceType (default: Application)
The type of the XPC Service specifies how the service is instantiated.
The values are:
• Application: Each application will have a unique instance of this service.
• User: There is one instance of the service process created for each user.
• System: There is one instance of the service process for the whole system. System XPC Services are restricted to reside in system frameworks and must be owned by root.
Bottom line: in most cases there is a single instance of an XPC service, and only where different applications can connect to the same service (which is not even possible when the service is bundled with an app) will there be multiple instances, one per application.
I believe XPC services are designed for one instance serving multiple connections. It is probably more convenient to manage named pipes with one running executable. So, most likely, it is impossible to create multiple instances simultaneously.
Since XPC services should have no state, it should not matter whether one or more instances are running:
XPC services are managed by launchd, which launches them on demand, restarts them if they crash, and terminates them (by sending SIGKILL) when they are idle. This is transparent to the application using the service, except for the case of a service that crashes while processing a message that requires a response. In that case, the application can see that its XPC connection has become invalid until the service is restarted by launchd. Because an XPC service can be terminated suddenly at any time, it must be designed to hold on to minimal state—ideally, your service should be completely stateless, although this is not always possible.
–– Creating XPC Services
Put all necessary state information into the XPC call and deliver it back to the client if it has to persist.
https://developer.apple.com/library/mac/documentation/Darwin/Reference/ManPages/man5/xpcservice.plist.5.html
The ServiceType key in the XPCService dictionary can be Application, User, or System.
But this ServiceType is irrelevant if the service is embedded in an application bundle: in that case it will only be visible to the containing application and will be, by definition, an Application-type service. A subsequent connection request from the application to the service will result in a new connection to the existing service instance.
I know I'm late to the party, but while you can't do this with plain XPC, there's a library (a component of OpenEmu) that should be able to do what you're asking: OpenEmuXPCCommunicator.

Detect outdated WCF Service Reference Proxy

Consider the following scenario during development:
• We change WCF service contracts very frequently.
• There is a web application consuming these services.
• We update the service reference frequently in the web application.
• But at times when we forget to do this, we have to debug the whole web application, only to finally find out that the service contract has changed.
Can we detect an outdated proxy at runtime, before invoking the service?
The best practice is to version your service to allow the client to connect with the interface it's familiar with. Usually you keep the previous one or two versions online and add any breaking changes as an up-rev to the service (e.g. /myservice/2012/01, then /myservice/2012/06). Then as new versions are created you can deprecate the older ones.
A second practice would be to implement a GetVersion() (or similar) method you can call for testing purposes. Make an initial call to the service to see what version it's running, then test against a locally stored version number to see if a conflict exists.
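For example (the contract and version string here are made up; your naming scheme would differ):

    using System;
    using System.ServiceModel;

    [ServiceContract]
    public interface IMyService
    {
        // Version handshake method suggested above.
        [OperationContract]
        string GetVersion();

        // ... the real operations ...
    }

    static class ProxyGuard
    {
        // The contract version this client's proxy was generated against.
        const string ExpectedVersion = "2012/06";

        public static void EnsureCompatible(IMyService proxy)
        {
            string actual = proxy.GetVersion();
            if (actual != ExpectedVersion)
                throw new InvalidOperationException(string.Format(
                    "Service is running contract version {0}, but this proxy " +
                    "was generated for {1}. Update the service reference.",
                    actual, ExpectedVersion));
        }
    }

Calling EnsureCompatible once at application start-up turns a long debugging session into an immediate, explicit failure.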
For more detail on this, there's a good article by Yoav Helfman that goes over handling version changes and updates.
I have posted about this kind of thing before.
Essentially one way to manage this situation is to require your service consumers to declare what version of the service interface they are expecting with each request.
Then expose a fault contract on your service of a type that will allow you to identify that a service version mismatch has occurred. This means consumers can catch and then handle this specific problem accordingly.
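A minimal sketch of that arrangement (all type names are illustrative):

    using System.Runtime.Serialization;
    using System.ServiceModel;

    // Typed fault detail returned when the caller's declared version doesn't match.
    [DataContract]
    public class VersionMismatchFault
    {
        [DataMember] public string ExpectedVersion { get; set; }
        [DataMember] public string ActualVersion { get; set; }
    }

    [ServiceContract]
    public interface IOrderService
    {
        // Consumers declare the contract version they were built against.
        [OperationContract]
        [FaultContract(typeof(VersionMismatchFault))]
        void PlaceOrder(string clientContractVersion, string orderXml);
    }

    public class OrderService : IOrderService
    {
        const string ContractVersion = "2012/06";

        public void PlaceOrder(string clientContractVersion, string orderXml)
        {
            if (clientContractVersion != ContractVersion)
                throw new FaultException<VersionMismatchFault>(
                    new VersionMismatchFault
                    {
                        ExpectedVersion = ContractVersion,
                        ActualVersion = clientContractVersion
                    },
                    "Service contract version mismatch.");
            // ... normal processing ...
        }
    }

On the consumer side, a catch block for FaultException&lt;VersionMismatchFault&gt; can then prompt for a proxy regeneration instead of failing obscurely.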

Event Dispatcher for WCF callbacks

I have a server that needs to keep a small number of clients in sync. Whenever there is a change of state at the server, all the connected clients must be informed.
I am planning to use a "callback contract". I can get hold of the callback reference for each client on the server by using GetCallbackChannel(). I then need to manage all these client channel references and call all of them when needed.
So far so good. However:
• I don't wish to block the server, so calls to the clients must be non-blocking.
• Errors calling a client must be logged and handled.
Is there a standard WCF component to do this?
No, there is not a standard WCF component for this, at least through .NET 3.5. I can't speak to what may be available in .NET 4.0.
That said, there is a pretty straightforward way to do this. Juval Lowy, author of Programming WCF Services, describes how to do this using his WCF-based Publish-Subscribe Framework.
Basically, the idea is to create a separate WCF event service that resides in the same hosting application as your server (e.g., Windows service, IIS). When the state of your server changes, you publish the state change to the event service. The clients that need to be kept in sync subscribe to this same event via the event service. In effect, the event service becomes a broker for your server to notify clients of whatever events your server publishes.
The article I listed above has a code download, but you can also get the Publish-Subscribe Framework and a working example for free from his website, IDesign.net. Here is the link to the download. You may need to scroll your browser up just a little bit to see it as I believe their internal hyperlink is wrong.
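A stripped-down sketch of that broker pattern using a WCF callback contract (the names are illustrative, and Lowy's framework adds queued delivery, filtering, and more robust error handling):

    using System;
    using System.Collections.Generic;
    using System.ServiceModel;

    // Callback contract implemented by each client.
    public interface IStateChangeCallback
    {
        [OperationContract(IsOneWay = true)]
        void OnStateChanged(string newState);
    }

    [ServiceContract(CallbackContract = typeof(IStateChangeCallback))]
    public interface IEventService
    {
        [OperationContract] void Subscribe();
        [OperationContract(IsOneWay = true)] void Publish(string newState);
    }

    [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single)]
    public class EventService : IEventService
    {
        readonly List<IStateChangeCallback> subscribers = new List<IStateChangeCallback>();
        readonly object gate = new object();

        public void Subscribe()
        {
            var cb = OperationContext.Current.GetCallbackChannel<IStateChangeCallback>();
            lock (gate) subscribers.Add(cb);
        }

        // The server publishes state changes here; one-way callbacks keep the
        // publish loop from blocking on slow clients.
        public void Publish(string newState)
        {
            List<IStateChangeCallback> snapshot;
            lock (gate) snapshot = new List<IStateChangeCallback>(subscribers);
            foreach (var cb in snapshot)
            {
                try { cb.OnStateChanged(newState); }
                catch (CommunicationException ex)
                {
                    // Log and drop dead subscribers rather than failing the publish.
                    Console.Error.WriteLine("Callback failed: " + ex.Message);
                    lock (gate) subscribers.Remove(cb);
                }
            }
        }
    }

The try/catch around each callback covers the logging requirement, and the one-way operations keep the server from blocking on any single client (at the cost of never seeing return values or exceptions from the callback itself).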

WebSphere Application Server EJB Optimization

We are working on developing a Java EE based application. Our application is Java 1.5 compatible and will be deployed to WAS ND 6.1.0.21 with the EJB 3.0 and Web Services feature packs. The configuration is currently one cell with two clusters. Each cluster will have two nodes.
Our application, or rather our system, comes in two or three parts.
Part 1: An EAR deployed to one cluster that contains third-party vendor code combined with customization code. Their code is EJB 2.0 compliant and has a lot of remote home interfaces.
Part 2: An EAR deployed to the same cluster as the first. This EAR contains EJB 3 beans that make calls into the EJB 2 beans supplied by the vendor and the custom code. These EJB 3 beans are used by the JSF UI also packaged with the EAR, and some of them are also exposed as web services (JAX-WS 2.0 with SOAP 1.2 compliance) for other clients.
Part 3: There may be other services that do not depend on our vendor/custom code app. These services will be EJB 3.0 beans and web services deployed to the other cluster.
Per a recommendation from some IBM staff on site here, communication between nodes in a cluster can be EJB RMI. But if we are going across clusters and/or other cells, then the communication should be web services.
That said, some of us are wondering about performance and about optimizing communication for speed in the applications that will use our web services and EJBs. Right now most EJBs are exposed as remote (and our vendor set theirs up that way, rather than also exposing local home interfaces). We are wondering if WAS does any optimizations between apps in the same node/cluster node space. If two apps are installed in the same area and they call each other via a remote home interface, is WAS smart enough to make it a local home interface call?
Are there other optimization techniques? Should we consider them, or should we not? What are the costs and benefits? Here is the question from one of our team members, as sent in their email:
The question is: supposing we develop our EJBs as remote EJBs, where our UI controller code is talking to our EXT Java services via EJB 3... what are our options for performance optimization when both the EJB server and client are running in the same container?
As one point of reference, Google has given me some oooooold WebSphere performance tuning documentation from 2000 that explains a tuning configuration you can set to enable Call By Reference for EJB communication when client and bean are in the same application server JVM. It states the following:
Because EJBs are inherently location independent, they use a remote programming model. Method parameters and return values are serialized over RMI-IIOP and returned by value. This is the intrinsic RMI "Call By Value" model.
WebSphere provides the "No Local Copies" performance optimization for running EJBs and clients (typically servlets) in the same application server JVM. The "No Local Copies" option uses "Call By Reference" and does not create local proxies for called objects when both the client and the remote object are in the same process. Depending on your workload, this can result in a significant overhead savings.
Configure "No Local Copies" by adding the following two command line parameters to the application server JVM:
* -Djavax.rmi.CORBA.UtilClass=com.ibm.CORBA.iiop.Util
* -Dcom.ibm.CORBA.iiop.noLocalCopies=true
CAUTION: The "No Local Copies" configuration option improves performance by changing "Call By Value" to "Call By Reference" for clients and EJBs in the same JVM. One side effect of this is that Java object derived (non-primitive) method parameters can actually be changed by the called enterprise bean.
We will also be using Process Server 6.2 and WESB 6.2 in the future. Any ideas or recommendations?
Thanks
The only automatic optimization that can really be done for remote EJBs is if they are colocated (accessed from within the same JVM). In that case, the ORB will short-circuit some of the work that would otherwise be required if the request needed to go across the wire. There will still be some necessary ORB overhead including object serialization (unless you turn on noLocalCopies, with all the caveats it brings).
Alternatively, if you know that the UI controller is colocated, your method calls do not rely on parameter or return value copying, and your interface does not rely on the exception differences between local and remote views, then you could create and expose a local subinterface that will be much faster than remote access through the ORB.