There seems to be a lot of enmity against DCOM, and I'm curious to understand why. For a company still writing to the Win32 SDK using C++, is there any real reason not to use DCOM in current or future development? Is some future version of Windows not going to support it? Is it too fragile and fails to work often? Is it too complicated to implement compared to other technologies? What's the deal?
The security model, especially when the computers are not in the same domain (or aren't in a domain at all).
Automation interfaces modeled for Visual Basic (the original, not .NET) are obsolete and not pretty to use from other languages.
If you only want to develop in C++ and deploy on a controlled network, it may still be a good choice.
I dislike COM/DCOM because "Catastrophic failure" is the most unhelpful error message in the history of error messages.
Well, DCOM is a distributed version of COM, and COM is very complex by itself; it's very easy to do something wrong unintentionally (see this recent question and the answer to it for examples). With DCOM you just have even more ways to hurt yourself.
Other than that, it works, and is, for example, a good way to host in-proc COM components in a separate process.
If you're trying to build a client-server application and want the communication to go across network boundaries (for example, the Internet), then DCOM can be problematic due to firewalls.
I worked on a very successful server application that was distributed using DCOM; we let the system handle most of the complexity by creating COM+ Server Applications and exporting Application Proxies. In this case it worked very well as long as all of our versions were in sync.
I implemented a large system using DCOM in the late '90s. Although it worked pretty well, there were a couple of issues. For starters, it uses unpredictable port numbers for communication, and it does not scale well. You are much better off using WCF than DCOM.
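To illustrate the contrast: with WCF you can self-host on one well-known port instead of relying on DCOM's dynamically assigned RPC ports, which makes the firewall rule a single, predictable entry. A minimal sketch (the contract, service name and port number are invented for the example):

```csharp
using System;
using System.ServiceModel;

// Hypothetical contract, just to illustrate a fixed net.tcp endpoint.
[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    string Ping(string text);
}

public class OrderService : IOrderService
{
    public string Ping(string text) { return "pong: " + text; }
}

class Program
{
    static void Main()
    {
        // One well-known port (9000 here) instead of dynamic RPC ports.
        using (var host = new ServiceHost(typeof(OrderService),
                   new Uri("net.tcp://localhost:9000/orders")))
        {
            host.AddServiceEndpoint(typeof(IOrderService), new NetTcpBinding(), "");
            host.Open();
            Console.WriteLine("Listening on net.tcp://localhost:9000/orders");
            Console.ReadLine();
        }
    }
}
```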
I think momentum has shifted to SOAP and other web service technologies because:
it is easier to deploy systems in the presence of firewalls
there is no vendor lock-in
I've never used DCOM myself, so I can't really comment on its general quality or fitness.
All,
I'm attempting to estimate the effort to port an app developed on Windows (.NET) to Linux (Mono). I came across the MoMA tool, which attempts to look through my .exe and find potential areas of incompatibility. Most of my issues appear to be centered around getting and setting network settings, getting network info, etc. (ManagementBaseObject.get_Item and set_Item, etc.).
In almost all of the cases, the Mono functionality is listed as "ToDo". For estimation purposes, is it safe to assume most/all of these have some kind of workaround? I would imagine this type of basic networking support must be included in the latest version of Mono. Or should I assume none of this is currently available and I would be stuck waiting for it to be implemented (or be forced to implement it myself)?
Thanks,
Dan
First, see the Mono Compatible Networking/Socket Library. Also, take a look at Cross-Platform Network Applications with Mono. You can start with the C# Network Library.
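For what it's worth, much of the read-only network information is reachable without WMI through System.Net.NetworkInformation, which Mono does implement. A minimal sketch (this only covers reading info; actually changing network settings generally needs platform-specific code):

```csharp
using System;
using System.Net.NetworkInformation;

class NetworkInfoDemo
{
    static void Main()
    {
        // Enumerate adapters and their addresses without touching WMI,
        // which is the part MoMA flags as "ToDo" on Mono.
        foreach (NetworkInterface nic in NetworkInterface.GetAllNetworkInterfaces())
        {
            Console.WriteLine("{0} ({1})", nic.Name, nic.OperationalStatus);
            IPInterfaceProperties props = nic.GetIPProperties();
            foreach (UnicastIPAddressInformation addr in props.UnicastAddresses)
            {
                Console.WriteLine("  {0}", addr.Address);
            }
        }
    }
}
```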
I have a VB6/MySQL client-server desktop application which is distributed as a setup file.
It uses a DLL for all logical operations as well as database operations. The EXE and the DLL are installed on the server as well as on the client machines. When I say server, I only mean that the database resides on that machine; there is no other difference in the EXE or DLLs.
Because all the database operations are done in the DLL, performance suffers when connecting from a client machine. It is not possible now to move all the logic into the database.
Is it possible to keep the DLL on the server machine only and have the client machines use that same DLL, so that the database connection is always made from the server itself?
Is converting the DLL to a Windows service a possible solution for this?
How can I convert it to a service?
And finally, if it is possible to make the DLL act as a service, what would the connection issues be?
You appear to be trying to rediscover n-tier application development.
The usual way this would be done using VB6 within a LAN would be to create an ActiveX EXE instead of a DLL so you can use DCOM. However, DCOM isn't something you'd want to expose over the Internet.
For such cases it is more typical to use a commonly-open-port protocol such as HTTP or HTTPS. Almost everyone has firewall settings permitting outbound HTTP and HTTPS connections and most of the major Web servers undergo regular hardening to make them safer to expose to the Internet.
The classic way to do this with VB6 was to use IIS to host the Remote Data Service, which uses a form of Web Service "under the covers" where your program doesn't deal with the gory details. However this is a deprecated approach, and today configuring IIS and the RDS components can be a chore since they are locked down hard by default.
This leaves you with such things as the deprecated SOAP Toolkit or 3rd party tools such as those in the PocketSOAP suite... or you can roll your own.
Doing this from scratch can be a bit of work but is more flexible, allowing REST instead of SOAP which can have advantages in itself. You could use whatever Web server you choose that can work with VB6 (via CGI, etc.).
The hardest approach to justify might seem the simplest on the surface: create your own protocol over TCP and write a Windows Service. This can be the most flexible of all but it can be more work than other options and you are on your own as far as making it and keeping it secure. You'll probably also face firewall issues depending on where your clients are and what the local firewall policies are there.
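To make that last option concrete, here is a rough sketch (in C#/.NET rather than VB6, purely to show the shape) of the kind of listener such a custom-protocol Windows service would start from its OnStart. The port and the line-based framing are arbitrary choices for illustration; a real service would also need authentication, encryption and error handling:

```csharp
using System;
using System.IO;
using System.Net;
using System.Net.Sockets;

// Minimal sketch of the "own protocol over TCP" idea.
class TinyTcpServer
{
    static void Main()
    {
        var listener = new TcpListener(IPAddress.Any, 9400);
        listener.Start();
        while (true)
        {
            using (TcpClient client = listener.AcceptTcpClient())
            using (var reader = new StreamReader(client.GetStream()))
            using (var writer = new StreamWriter(client.GetStream()) { AutoFlush = true })
            {
                string request = reader.ReadLine();   // one request per line
                writer.WriteLine("OK: " + request);   // trivial "protocol" reply
            }
        }
    }
}
```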
When we could rely on DCOM the issues were relatively small aside from security configuration headaches. With the Internet in the picture it is an entirely different story.
This really isn't something you undertake casually. Even assuming that your database is safe when exposed to the Internet is naive and should be rethought.
I am developing a Remote Software Provisioning system that should be able to handle all deployment, installation, un-installation and upgrades of software components. The software can be in any language (Java, .NET, C/C++, etc.) and the targets can be PCs, embedded systems and smartphones.
I have found Apache ACE to be a good candidate for developing this system.
I want to know if there is any advantage/necessity of using OSGi at target side as Apache ACE can do software provisioning to non-OSGi targets as well.
Having a modular framework like OSGi at the client side is a huge advantage when doing remote management, because it gives you much insight into what's happening inside - installed bundles, dependencies, states of the bundles, available services etc. This helps a lot when you have to solve a problem remotely. Another advantage is that OSGi basically forces programmers to develop proper modular and dynamic systems, which makes (remote) updating much easier.
So, if you have to decide now what language and framework to use for the client side, I strongly recommend OSGi for the embedded and mobile clients. For the PCs (I guess you mean desktop PCs?) this is probably not the best choice - it depends a lot on what you want to achieve there. If you want to install MS Office remotely, OSGi won't get you far ;)
However, if you already have existing programs at the client side and are deciding whether to convert them to OSGi, I would recommend investing some time first to see whether they can be converted easily. Some software packages could give you a lot of trouble converting to OSGi, not because OSGi is complex, but because the program itself is not modular and makes a lot of assumptions about the static nature of the environment (e.g. nothing ever disappears, parts of the system never get updated, etc.). The irony in the matter is that these are exactly the programs which will give you the most trouble later anyway, no matter which remote provisioning system you choose.
If you have OSGi at some of the targets, be sure to use a remote provisioning system which gives you access to the full OSGi functionality, and not only the most basic install and update functions. I haven't yet used Apache ACE, but I have experience with another provisioning system - mPower Remote Manager. Here are some snapshots from the documentation which can give you a feeling for what is possible with OSGi as a base - you can draw your own conclusions as to whether it will be useful in your case or not.
I've given some examples in the other question you asked:
What are the non-osgi targets with which Apache ACE can work
You can write your own management agent that talks to the ACE server and installs artifacts. There actually are a couple of places where you could hook in your own code and protocol. Is there a concrete language/environment you're thinking of using, or are you just exploring the possibilities right now?
Well, the advantages of OSGi haven't changed, so for that I can refer you to the standard page.
To be a bit more constructive, I'll read the question as 'Should I bother converting my application to OSGi, as it is not necessary for ACE?'
I think that depends on what 'kind' of updating mechanism you're after. If you have a monolithic application (at least from the provisioning perspective) which you deploy and update only as a whole (like an iOS app), then there isn't much to gain for provisioning purposes by using OSGi.
For the rest I can tell you the same as I tell anybody else: converting an application to OSGi isn't hard, but modularizing code can be a nightmare - and it's something you'll need to face at some point, OSGi or not. If your code is already modularized, using OSGi should be a piece of cake.
I need a way to exchange data between a process and a Windows service.
The process (Windows Form Application, Console Application, in the future also a Web Solution) needs to instruct and interact with the windows service.
I want to know which way is the best to accomplish this.
I'll write the solution in C#; the .NET Framework version does not matter.
In the past I've used Remoting (Activator), WCF interfaces with contracts, inter-process communication (IPC) and a named pipe implementation. What is your experience? Are there other ways?
I would choose WCF. It is the most modern and probably the best-supported approach at the moment; it has "replaced" the older technologies in most scenarios. A nice feature of WCF is that if you need to move your service to another protocol, you can do that simply in configuration.
If you expect that the Windows service will always run on the same machine as the other application, you can use WCF with netNamedPipeBinding. If you decide to move your service to another machine, you will have to change the configuration (probably to netTcpBinding), because named pipes in WCF are limited to same-machine IPC.
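A minimal sketch of that setup (the contract name and pipe address are invented for the example; in practice the endpoint would normally live in app.config so the binding can be swapped to netTcpBinding without recompiling):

```csharp
using System;
using System.ServiceModel;

[ServiceContract]
public interface IControlService
{
    [OperationContract]
    string Execute(string command);
}

public class ControlService : IControlService
{
    public string Execute(string command) { return "handled: " + command; }
}

class ServiceSide
{
    // In a real application this would be hosted inside the Windows service's OnStart.
    public static ServiceHost Start()
    {
        var host = new ServiceHost(typeof(ControlService),
            new Uri("net.pipe://localhost/MyService"));
        host.AddServiceEndpoint(typeof(IControlService),
            new NetNamedPipeBinding(), "control");
        host.Open();
        return host;
    }
}

class ClientSide
{
    public static void Call()
    {
        // The client (WinForms app, console app, etc.) talks to the service over the pipe.
        var factory = new ChannelFactory<IControlService>(
            new NetNamedPipeBinding(),
            new EndpointAddress("net.pipe://localhost/MyService/control"));
        IControlService proxy = factory.CreateChannel();
        Console.WriteLine(proxy.Execute("status"));
        factory.Close();
    }
}
```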
My previous experience has always been with an IpcChannel, mainly because that's what the only code I've worked on that does any form of inter-process communication happened to use. It's never caused me any problems, and the code is working away quite merrily as I type.
The only real answer to this question is, whichever you're most comfortable with.
Is Mono appropriate for developing server applications, or only desktop applications? I'd like to develop server applications in C# for Linux. I want to write a First Person Shooter (FPS) game in C#/XNA, and I have a Linux dedicated server. But this question applies generally to all types of server applications...
Mono handles ASP.NET (including ASP.NET MVC) quite well. Most other server implementations work very well, as well. It does depend, slightly, on what exactly you are trying to serve, and how you are going to use it.
Mono also supports WCF directly in the core, which allows most non-web service applications to be written very effectively.
Edit:
Given your edit, and your desire to handle the server side of a multi-player FPS game, Mono should work fine. You will likely want to avoid using the high-level interfaces like WCF and ASP.NET and go straight to the System.Net namespace (it depends a bit on how many players you'll be synchronizing, but if the number is large, you'll want speed over ease here). Mono supports this quite well.
That being said, Mono's support of the System.Net namespace is very good, and quite mature, so you should have no problems using it for the server side of a multiplayer FPS game.
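As a rough illustration of the "straight to System.Net" route, here is a minimal UDP receive loop of the kind a game server would build on. The port, text payload and echo reply are placeholders; a real FPS server would define its own packet format, tick loop and client tracking:

```csharp
using System;
using System.Net;
using System.Net.Sockets;
using System.Text;

class UdpGameServerSketch
{
    static void Main()
    {
        // UDP is the usual choice for fast-paced games: no connection setup,
        // and stale packets can simply be dropped instead of retransmitted.
        using (var server = new UdpClient(9500))
        {
            var remote = new IPEndPoint(IPAddress.Any, 0);
            while (true)
            {
                byte[] data = server.Receive(ref remote);   // blocking receive
                string message = Encoding.UTF8.GetString(data);
                Console.WriteLine("{0}: {1}", remote, message);

                // Echo an acknowledgement back to the sender; a real server
                // would instead update game state and broadcast snapshots.
                byte[] reply = Encoding.UTF8.GetBytes("ack");
                server.Send(reply, reply.Length, remote);
            }
        }
    }
}
```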
I don't see why not. I believe FogBugz uses Mono to deploy to Apache servers.
Here is a conversation about running the FogBugz application on Mono as an example of a server app running on it.
It looks like your needs cover a broad range of different applications.
I think the overall answer would be yes, Mono is appropriate for developing server applications.
As others have pointed out, Mono has ASP.NET support as well as WCF built-in.
You also have the ability to work directly down at the socket level if you need to squeeze every last bit of performance out of your server application (although you'll have to figure out how to persist state if the need is there).
I'd definitely be interested in seeing the performance difference of something like that between the two platforms (I wouldn't expect much of a difference... it's possible that Mono might even get slightly better performance because of the rest of the *NIX stack).