How to call a VB6 DLL from another machine (DLL as a service)

I have a VB6/MySQL client-server desktop application which is distributed as a setup file.
It uses a DLL for all logical operations as well as database operations. The EXE and the DLL are installed on the server as well as on the client machines. When I say server, I only mean that the database resides on that machine; there is no other difference in the EXE or DLLs.
As all the database operations are done in the DLL, performance suffers when connecting from a client machine. It is not feasible now to move all the logic into the database.
Is it possible to store the DLL on the server machine only and have the client machines use that same DLL, so that the database connection is always made from the server itself?
Is converting the DLL to a Windows service a possible solution for this?
How can I convert it to a service?
And finally, if it is possible to make the DLL act as a service, what connection issues would there be?

You appear to be trying to rediscover n-tier application development.
The usual way to do this with VB6 within a LAN is to create an ActiveX EXE instead of a DLL so that you can use DCOM. However, DCOM isn't something you'd want to expose over the Internet.
For such cases it is more typical to use a commonly-open-port protocol such as HTTP or HTTPS. Almost everyone has firewall settings permitting outbound HTTP and HTTPS connections and most of the major Web servers undergo regular hardening to make them safer to expose to the Internet.
The classic way to do this with VB6 was to use IIS to host the Remote Data Service, which uses a form of Web Service "under the covers" where your program doesn't deal with the gory details. However this is a deprecated approach, and today configuring IIS and the RDS components can be a chore since they are locked down hard by default.
This leaves you with such things as the deprecated SOAP Toolkit or 3rd party tools such as those in the PocketSOAP suite... or you can roll your own.
Doing this from scratch can be a bit of work but is more flexible, allowing REST instead of SOAP which can have advantages in itself. You could use whatever Web server you choose that can work with VB6 (via CGI, etc.).
The hardest approach to justify might seem the simplest on the surface: create your own protocol over TCP and write a Windows Service. This can be the most flexible of all but it can be more work than other options and you are on your own as far as making it and keeping it secure. You'll probably also face firewall issues depending on where your clients are and what the local firewall policies are there.
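To make that last option concrete, here is a rough sketch of such a service. It is shown in C#/.NET rather than VB6 (VB6 has no native support for writing NT services; it typically needed helpers such as NTSVC.ocx), and the class name, the port, and the trivial line-based protocol are all illustrative assumptions, not a recommendation:

    using System;
    using System.IO;
    using System.Net;
    using System.Net.Sockets;
    using System.ServiceProcess;
    using System.Threading;

    // Minimal sketch of a Windows Service speaking a custom protocol over TCP.
    // The service name, port 9000, and the echo "protocol" are placeholders.
    public class ProtocolService : ServiceBase
    {
        private TcpListener listener;
        private Thread worker;
        private volatile bool running;

        public ProtocolService()
        {
            ServiceName = "ProtocolService";
        }

        protected override void OnStart(string[] args)
        {
            running = true;
            listener = new TcpListener(IPAddress.Any, 9000);
            listener.Start();
            worker = new Thread(Serve) { IsBackground = true };
            worker.Start();
        }

        private void Serve()
        {
            while (running)
            {
                try
                {
                    using (TcpClient client = listener.AcceptTcpClient())
                    using (StreamReader reader = new StreamReader(client.GetStream()))
                    using (StreamWriter writer = new StreamWriter(client.GetStream()))
                    {
                        // Your protocol goes here; this just echoes one request line.
                        string request = reader.ReadLine();
                        writer.WriteLine("OK: " + request);
                        writer.Flush();
                    }
                }
                catch (SocketException)
                {
                    // listener.Stop() in OnStop unblocks AcceptTcpClient this way.
                }
            }
        }

        protected override void OnStop()
        {
            running = false;
            listener.Stop();
        }

        public static void Main()
        {
            ServiceBase.Run(new ProtocolService());
        }
    }

Everything that DCOM or a web server would otherwise give you (authentication, encryption, framing, concurrency) becomes your problem in this design, which is exactly the trade-off described above.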
When we could rely on DCOM the issues were relatively small aside from security configuration headaches. With the Internet in the picture it is an entirely different story.
This really isn't something you undertake casually. And assuming that your database is safe when exposed to the Internet is naive; that assumption should be rethought.

Related

Agent based applications using WCF

I'm about to decide on technology choices for an agent-based application used in the transportation systems domain.
Basically, there will be a central system hosting the backend, and multiple agents located across town (installed on desktops) that communicate with devices/kiosks, collecting data and then transmitting it back to the central server. The central server could also be hosted in the cloud.
The following are important:
securing the data and communications between the device and the agent, and between the agent and the central server
agents should be easily installable with little or no configuration
near 100% uptime and availability
Does WCF fit the bill here?
If so, what binding types should I go for? netTcpBinding, or wsHttpBinding with SSL/HTTPS?
WCF is definitely a fitting choice for this kind of scenario. For your bindings, the actual question is what technology you are going to use. If you want the agents to run in a non-.NET environment like Java, then you should choose wsHttpBinding. This binding communicates through SOAP and is very interoperable.
If you choose to use .NET agents, you might as well use netTcpBinding, because both ends use the same WCF framework. It also supports binary encoding. If you really need help making the choice, take a look at the MSDN documentation.
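As a rough illustration of how the two bindings can coexist on one service, here is a self-hosting sketch; the IAgentService contract and the addresses are made up for the example, and the HTTPS endpoint additionally needs an SSL certificate bound to the port:

    using System;
    using System.ServiceModel;

    // Illustrative contract; the name and operation are assumptions.
    [ServiceContract]
    public interface IAgentService
    {
        [OperationContract]
        void Report(string deviceId, byte[] payload);
    }

    public class AgentService : IAgentService
    {
        public void Report(string deviceId, byte[] payload) { /* persist... */ }
    }

    class Host
    {
        static void Main()
        {
            var host = new ServiceHost(typeof(AgentService));

            // .NET-only agents: binary encoding over TCP, transport security.
            host.AddServiceEndpoint(typeof(IAgentService),
                new NetTcpBinding(SecurityMode.Transport),
                "net.tcp://localhost:8081/agents");

            // Interoperable (e.g. Java) agents: SOAP over HTTPS.
            host.AddServiceEndpoint(typeof(IAgentService),
                new WSHttpBinding(SecurityMode.Transport),
                "https://localhost:8443/agents");

            host.Open();
            Console.WriteLine("Service running; press Enter to stop.");
            Console.ReadLine();
            host.Close();
        }
    }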
For your agents you could use a simple console application that runs in the background as a Windows service. WiX can help you with that (installing an application as a Windows service), but that's all I know about it. WiX can also handle basic installation and configure everything for you, but it has a high learning curve, so you may need to invest time in it.

Why not connect directly to SQL servers from the client? Why do we need application servers in the client-server model?

Many applications use the following model:
Browsers or other clients interact with application servers.
Application servers (web servers or RPC servers) interact with data store servers (SQL servers or non-SQL storage).
Internet applications need application servers because, for performance, the data servers must be kept to simple features. But I can't see why application servers are needed on an intranet.
For example, can we develop an Adobe AIR application that connects directly to a PostgreSQL server? I imagine we could deploy a central PostgreSQL server which has many stored procedures and strict permissions, and let the Adobe AIR application fetch (and modify) data only by invoking the stored procedures.
Why don't most applications choose this simpler solution?
In general, there is no reason why you couldn't get an independent application to talk to a PostgreSQL server directly. Some applications do this and it works fine.
I'm not familiar enough with Adobe AIR to say whether it's possible in this context. In principle, if you can get a PostgreSQL driver, or if you can write your own using TCP sockets (the PostgreSQL network protocol is documented in detail in the official documentation), you could certainly connect directly.
This being said, having a form of application server between the end-client and the database server isn't purely for performance.
Web-based development allows the SQL queries to be controlled by the server. Instead of exposing complete SQL access, you expose only the features that the client can use. If you need to tweak the queries later (bug, change of data structure, ...), you can do this centrally on your application server, without needing to deploy a new version of the client to each user.
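As a tiny sketch of that idea (C# with the Npgsql driver; the connection string, table, and method are hypothetical), the client calls a feature and never sends SQL:

    using System;
    using Npgsql;

    // The application server exposes a feature, not SQL. The query can be
    // tweaked here centrally without redeploying any client.
    public class OrderService
    {
        private const string ConnString =
            "Host=dbserver;Database=app;Username=appsvc;Password=secret";

        public decimal GetOrderTotal(int orderId)
        {
            using (var conn = new NpgsqlConnection(ConnString))
            using (var cmd = new NpgsqlCommand(
                "SELECT COALESCE(SUM(price * quantity), 0) " +
                "FROM order_lines WHERE order_id = @id", conn))
            {
                cmd.Parameters.AddWithValue("id", orderId);
                conn.Open();
                return Convert.ToDecimal(cmd.ExecuteScalar());
            }
        }
    }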
Of course, you can achieve some of this abstraction by using server-side programming directly, but that isn't suitable for all applications. It may depend on what other features your application needs, for example whether it needs to use a library written in another language. You can use some procedural language bindings, but they are not always suitable: PL/Python is an "untrusted" language (which may cause security problems) and PL/Java needs an external add-on, for example.
In addition, not all applications are ultimately reserved for intranet usage nowadays. It often makes sense not to restrict yourself to intranet usage when you start designing an application.
I initially started with a direct access design and quickly found it useful to move to an application server where I talked to the DB via web services. Reasons included:
Handling DB restarts, local connection loss, client IP address changes, etc. is much easier when you're talking to the DB over a stateless protocol like HTTP. This is more of an issue for remote workers.
Transactions are clearly demarcated and isolated in server-side transactional methods (I used EJB3 and container managed transactions)
It's much easier to add new clients like a phone app as they can share more of the code and business logic. Stored procedures in the database are very useful, but can be limited and occasionally frustrating.
Some tools/languages don't have built-in tools for talking to PostgreSQL directly, but can easily talk to a RESTful web service with XML or JSON request/response format.
DB admin is easier if you're dealing only with a single application server connection pool
The main downside is of course the extra layer means extra work and extra maintenance.
You can, but...
Browser languages/libraries tend to have poor database support
What happens when someone wants to use this application remotely?
If you're not talking about browser-based applications, then that is exactly what many do. There are plenty of traditional installed client applications talking to a backend database either directly or via a wrapper (ODBC/JDBC).

Start with remoting or with WCF

I'm just starting with distributed application development. I need to create (all by myself) an enterprise application for document management. The application will run on an intranet (within the firewall; no Internet access is required now, but it probably will be later).
The application needs to manage images that will be stored in MySQL Server (as BLOBs); those images will then be retrieved by the app, and eventually one or more of them will be converted to PDF.
Performance is the most important non-functional requirement.
I have a couple of doubts.
What do you suggest I use: .NET Remoting, or WCF over TCP/IP? (I think the second one is best, because the moment I need to expose the business logic over the Internet, I only need to change the protocol.)
Where do you suggest performing the transformation of the images to PDF files? I'm using iText. (I have thought of keeping the business logic within IIS, exposed via WCF, and making that business logic responsible for getting the images and transforming them to PDF, because IIS and the MySQL Server are on the same physical machine.) I ask about where to do the transformation because the app must be accessible from multiple devices; for mobile devices, for example, the PDF may not be necessary.
Thank you very much in advance.
1. WCF; only consider Remoting if WCF presents some issue, such as performance, in your use case. You have many more scaling and customisation options available under WCF.
2. It depends. If sending the images over the network presents an issue, then it may have to be done locally. However, as in (1), your existing suggestion seems fine (see the sketch below).
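For point (2), the server-side conversion itself is small. Here is a minimal sketch with iTextSharp (the .NET port of iText); treat the helper name and the one-image-per-page layout as assumptions:

    using System.Collections.Generic;
    using System.IO;
    using iTextSharp.text;
    using iTextSharp.text.pdf;

    public static class PdfConverter
    {
        // Turns raw image bytes (e.g. BLOBs read from MySQL) into a single
        // PDF, one image per page. A sketch, not production code.
        public static byte[] ImagesToPdf(IEnumerable<byte[]> imageBlobs)
        {
            using (var output = new MemoryStream())
            {
                var document = new Document();
                PdfWriter.GetInstance(document, output);
                document.Open();
                foreach (byte[] blob in imageBlobs)
                {
                    Image image = Image.GetInstance(blob);
                    image.ScaleToFit(document.PageSize.Width,
                                     document.PageSize.Height);
                    document.Add(image);
                    document.NewPage();
                }
                document.Close();
                return output.ToArray();
            }
        }
    }

Running this next to the service (rather than on the client) also means mobile clients that don't need the PDF never pay for the conversion.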
See .Net Remoting vs. WCF for a similar question.
Definitely Remoting, if this is an option.
Transformation: on the same box as the service; since the service is going to funnel the images anyway, this is the best place. I would not put it on the DB server, to better distribute the load and to separate non-DB load from DB-specific load.
In addition, look into .NET 4.0 RIA Services. They give you the best combination of .NET Remoting and WCF.

Application Architecture using WCF and System.AddIn

A little background -- we're designing an application that uses a client/server architecture consisting of:
A server which loads server-side modules, potentially developed by other teams.
A client which loads corresponding client-side modules (also potentially developed by those other teams; each client module corresponds with a server module).
The client side communicates with the server side for general coordination, as well as for module-specific tasks. (At this point, I think that means the client talks to the server, and client modules talk to server modules.)
Environment is .NET 3.5, and client side is WPF.
The deployment scenario introduces the potential to upgrade the server, any server-side module, the client, and any client-side module independently. However, being able to "work" using mismatched versions is required. I'm therefore concerned about versioning issues.
My thinking so far:
A Windows Service for the server.
Using System.AddIn for the server to load and communicate with the server modules will give us the greatest flexibility in terms of version compatibility between the server and the server modules.
The server and each server module vend WCF services for communication with the client side; communication between the server and a server module, or between two server modules, uses the AddIn contracts. (One advantage of this is that a module can expose a different interface within the server and outside it.)
Similarly, the client uses System.AddIn to find, load, and communicate with the client modules (see the sketch after this list).
Communication between the client and the client modules is via the AddIn interface; communication from the client and from client modules to the server side is via WCF.
For maximum resilience, each module will run in a separate app-domain.
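To make the System.AddIn part concrete, here is a hedged sketch of the discovery/activation pattern referred to above; ModuleHostView and the pipeline path are hypothetical, and a real pipeline also needs the contract and adapter assemblies laid out in the standard folder structure:

    using System;
    using System.AddIn.Hosting;

    // Sketch of finding and activating modules, each in its own AppDomain.
    // ModuleHostView and the pipeline root are hypothetical.
    class ModuleLoader
    {
        static void Main()
        {
            string pipelineRoot = @"C:\App\Pipeline";

            // Rebuild the pipeline cache, then find add-ins matching our view.
            AddInStore.Update(pipelineRoot);
            foreach (AddInToken token in
                     AddInStore.FindAddIns(typeof(ModuleHostView), pipelineRoot))
            {
                // This overload creates a new AppDomain per add-in.
                ModuleHostView module =
                    token.Activate<ModuleHostView>(AddInSecurityLevel.FullTrust);
                module.Start();
                Console.WriteLine("Loaded module: " + token.Name);
            }
        }
    }

    // Host-side view; the actual contract lives in a separate contract assembly.
    public abstract class ModuleHostView
    {
        public abstract void Start();
    }

This overload of Activate spins up a new AppDomain per add-in, which is the per-module isolation mentioned in the list.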
In general, the system has modest performance requirements, so marshalling and crossing process boundaries is not expected to be a performance concern. (Performance requirement is basically summed up by: don't get in the way of the other parts of the system not described here.)
My questions are around the idea of having two different communication and versioning models to work with, which will be an added burden on our developers. System.AddIn seems quite powerful, but also a little unwieldy. (I'm also unsure of Microsoft's commitment to it in the future.) On the other hand, I'm not thrilled with WCF's versioning capabilities. I have a feeling that it would be possible to implement the System.AddIn view/adapter/contract system within WCF, but being fairly new to both technologies, I would have no idea where to start.
So... Am I on the right track here? Am I doing this the hard way? Are there gotchas I need to be aware of on this road?
Thanks.
This sounds too complicated. Consider an architecture where each added module includes the client-side code (use System.AddIn there if you like), but where the server-side module is a new service.svc file. The client would know the URL of the corresponding service.
Alternatively, you should look into the Managed Extensibility Framework (MEF) for the add-in feature. That's what they'll be starting to use for Visual Studio extensibility in the coming release.
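For comparison, here is a minimal MEF sketch (as it later shipped in .NET 4's System.ComponentModel.Composition; previews were on CodePlex at the time); IClientModule and the plugin folder are hypothetical:

    using System;
    using System.ComponentModel.Composition;
    using System.ComponentModel.Composition.Hosting;

    // Hypothetical module contract.
    public interface IClientModule
    {
        string Name { get; }
        void Initialize();
    }

    // A module author exports an implementation...
    [Export(typeof(IClientModule))]
    public class ReportsModule : IClientModule
    {
        public string Name { get { return "Reports"; } }
        public void Initialize() { /* wire up views, WCF proxies... */ }
    }

    // ...and the shell imports every module found in a plugin folder.
    public class Shell
    {
        [ImportMany]
        public IClientModule[] Modules { get; set; }

        public void Compose()
        {
            var catalog = new DirectoryCatalog(@".\Modules");
            var container = new CompositionContainer(catalog);
            container.ComposeParts(this); // fills Modules via [ImportMany]

            foreach (IClientModule module in Modules)
                module.Initialize();
        }
    }

Note that MEF composes everything into one AppDomain, so it trades System.AddIn's isolation and versioning pipeline for a much simpler programming model.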

WCF and embedded systems

I am working on a project that involves an embedded system running a non-Microsoft OS, with the application written in C; I am developing the .NET software for its end-user applications. For remote configuration from the .NET software (which can go across firewalls), I am considering using WCF. I know only a little about WCF so far, but I've read that it is supposed to be interoperable with environments other than .NET. The embedded environment has an HTTP stack, but no built-in support for web services. Does anyone have any experience with this kind of thing, or know if it would be appropriate at all? If so, please provide some advice or point me in the right direction.
Thanks!
WCF is interoperable because it can be exposed over HTTP. Visual Studio can help you build client libraries for WCF very quickly, but client access to a WCF service doesn't require anything other than HTTP calls with the proper payload. If you're looking at a remote server call, and the built-in support in your embedded environment is basic HTTP, look at building your server side as REST-formatted methods. Your debugger will thank you.
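As a rough sketch of what that looks like with WCF's webHttpBinding (the service shape, URI templates, and DeviceConfig type are assumptions, not from the question):

    using System;
    using System.Runtime.Serialization;
    using System.ServiceModel;
    using System.ServiceModel.Web;

    // REST-style WCF service an embedded HTTP client can call with plain
    // GET/PUT requests carrying XML.
    [ServiceContract]
    public class DeviceConfigService
    {
        [OperationContract]
        [WebGet(UriTemplate = "config/{deviceId}",
                ResponseFormat = WebMessageFormat.Xml)]
        public DeviceConfig GetConfig(string deviceId)
        {
            return new DeviceConfig { DeviceId = deviceId, PollSeconds = 60 };
        }

        [OperationContract]
        [WebInvoke(Method = "PUT", UriTemplate = "config/{deviceId}",
                   RequestFormat = WebMessageFormat.Xml)]
        public void PutConfig(string deviceId, DeviceConfig config)
        {
            // Persist the new configuration here.
        }
    }

    [DataContract]
    public class DeviceConfig
    {
        [DataMember] public string DeviceId { get; set; }
        [DataMember] public int PollSeconds { get; set; }
    }

    class Program
    {
        static void Main()
        {
            // WebServiceHost wires up webHttpBinding automatically.
            var host = new WebServiceHost(typeof(DeviceConfigService),
                                          new Uri("http://localhost:8000/"));
            host.Open();
            Console.WriteLine("Try: GET http://localhost:8000/config/kiosk-12");
            Console.ReadLine();
            host.Close();
        }
    }

The embedded side then only needs its existing HTTP stack to issue a GET or PUT with an XML body; no SOAP toolkit required.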
What kinds of data are you planning on transferring back and forth? For something this low level and proprietary I would recommend sticking with good old fashioned Sockets.
I will be passing configuration data back and forth...basically to enable technical support staff to remotely program the device. If I were using sockets this could be binary data, but there is a requirement that customers with firewalls shouldn't need to open any ports. Because of this I was thinking of sending XML over HTTP. So, is it better to use WCF or REST on the server side? Or WCF with REST?
I'm curious about your "customers with firewalls" requirement. Sockets with binary data or XML over HTTP can use any port (not just port 80), and I'm curious if your device will be "listening" on the network, or just making an outbound connection. If your device is listening, you will need to open a port on the firewall. Making an outbound connection ("phoning home") is much easier on the firewall.
So I think you could use sockets and binary data. However, I have faced similar issues on the last two projects, and I really wanted to implement WCF using REST on the embedded device, but no one else wanted to do it - I'm hoping someone else will be first, and publish some results!
Good Luck! (and post your results!)