Why not connect directly to SQL servers from the client? Why do we need application servers in the client-server model? - sql

Many applications use the following model:
Browsers or other clients interact with application servers.
Application servers (web servers or RPC servers) interact with data store servers (SQL servers or non-SQL storage).
For internet applications, application servers make sense because the data servers must be kept simple for performance. But I can't see why application servers are needed on an intranet.
For example, could we develop an Adobe AIR application that connects directly to a PostgreSQL server? I imagine we could deploy a central PostgreSQL server with many stored procedures and strict permissions, and let the Adobe AIR application fetch (and modify) data only by invoking those stored procedures.
Why don't most applications choose this simpler solution?

In general, there is no reason why you couldn't get an independent application to talk to a PostgreSQL server directly. Some applications do this and it works fine.
I'm not familiar enough with Adobe AIR to say whether it's possible in this context. In principle, if you can get a PostgreSQL driver, or if you can write your own using TCP sockets (the PostgreSQL network protocol is documented in detail in the official documentation), you could certainly connect directly.
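By way of illustration, here is a minimal sketch of the asker's locked-down stored-procedure idea, using Node's pg driver. The host, role, database, and get_orders function are all hypothetical; the role would be granted EXECUTE on the stored procedures only, with no direct table privileges.

```typescript
import { Client } from "pg";

async function fetchOrders(customerId: number): Promise<unknown[]> {
  // Hypothetical central server and restricted role.
  const client = new Client({
    host: "db.example.internal",
    user: "air_app",               // EXECUTE-only role, no table privileges
    password: process.env.PGPASSWORD,
    database: "appdb",
  });
  await client.connect();
  try {
    // get_orders is a hypothetical stored procedure; the client can only
    // reach data through functions like this one.
    const res = await client.query("SELECT * FROM get_orders($1)", [customerId]);
    return res.rows;
  } finally {
    await client.end();
  }
}
```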
This being said, having a form of application server between the end-client and the database server isn't purely for performance.
Web-based development allows the SQL queries to be controlled by the server. Instead of exposing complete SQL access, you expose only the features that the client can use. If you need to tweak the queries later (bug, change of data structure, ...), you can do this centrally on your application server, without needing to deploy a new version of the client to each user.
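For instance, a minimal sketch of such a feature-level endpoint (Express plus a pg connection pool; the route and schema are invented for illustration):

```typescript
import express from "express";
import { Pool } from "pg";

const app = express();
const pool = new Pool({ connectionString: process.env.DATABASE_URL });

// Clients see only this feature-level endpoint. The SQL behind it can be
// changed centrally without redeploying a new client version to each user.
app.get("/api/orders/:customerId", async (req, res) => {
  const { rows } = await pool.query(
    "SELECT id, total FROM orders WHERE customer_id = $1", // hypothetical schema
    [req.params.customerId]
  );
  res.json(rows);
});

app.listen(8080);
```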
Of course, you can build some abstraction like this using server-side programming directly in the database, but that isn't suitable for all applications. It may depend on what other features your application needs, for example whether it needs to make use of a library written in another language. You can use procedural language bindings, but they're not always suitable: pl/Python is an "untrusted" language (which may cause security problems) and pl/Java needs an external add-on, for example.
In addition, not all applications are ultimately reserved for intranet usage nowadays. It often makes sense not to restrict yourself to intranet usage when you start designing an application.

I initially started with a direct access design and quickly found it useful to move to an application server where I talked to the DB via web services. Reasons included:
Handling DB restarts, local connection loss, client IP address changes, etc. is much easier when you're talking to the DB over a stateless protocol like HTTP. This is more of an issue for remote workers.
Transactions are clearly demarcated and isolated in server-side transactional methods (I used EJB3 and container managed transactions)
It's much easier to add new clients like a phone app as they can share more of the code and business logic. Stored procedures in the database are very useful, but can be limited and occasionally frustrating.
Some tools/languages don't have built-in support for talking to PostgreSQL directly, but can easily talk to a RESTful web service with an XML or JSON request/response format (see the sketch after this list).
DB admin is easier if you're dealing only with a single application server connection pool
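To illustrate the point about tool support: a client needs nothing more than an HTTP stack to consume such a service, no PostgreSQL driver required. A sketch against a hypothetical endpoint:

```typescript
// Works the same from a browser, phone app, or script.
async function getOrders(customerId: number): Promise<unknown[]> {
  const resp = await fetch(`https://app.example.com/api/orders/${customerId}`);
  if (!resp.ok) throw new Error(`HTTP ${resp.status}`);
  return resp.json();
}
```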
The main downside is of course the extra layer means extra work and extra maintenance.

You can, but...
Browser languages/libraries tend to have poor database support
What happens when someone wants to use this application remotely?
If you're not talking about browser-based applications, then that is exactly what many do. There are plenty of traditional installed client applications talking to a backend database either directly or via a wrapper (odbc/jdbc).

Related

Can we use public STUN servers for creating our commercial applications?

I have just started learning WebRTC for implementing an audio and video application, and I know there are various public STUN servers available for peer connection. But I am a bit confused: can I use these public servers for a commercial application?
I would also like to know if there is any tutorial or guide available from which I can understand how to build and deploy my own STUN or TURN server, if I want to create a commercial app.
Whether you can use public STUN servers for commercial applications depends entirely on the licensing/terms-of-service agreement of the operator of said servers. Peruse those if available. Unless indicated otherwise, I wouldn't distinguish "commercial" use from any other use.
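For what it's worth, pointing a WebRTC peer connection at a public STUN server is just configuration (browser API shown; Google's well-known public server is used here, subject to the caveats above):

```typescript
const pc = new RTCPeerConnection({
  iceServers: [
    { urls: "stun:stun.l.google.com:19302" }, // public server; check its terms
    // A TURN server would additionally require credentials, e.g.:
    // { urls: "turn:turn.example.com:3478", username: "user", credential: "pass" },
  ],
});
```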
There are many implementations of STUN/TURN servers available that you can set up yourself on any machine you happen to have (in practice that probably means an instance on AWS, Azure or the like). Search for and pick one you like. STUN servers use relatively few resources, while TURN servers typically need powerful CPUs and fast internet connections to be useful (they must relay the entire video stream as quickly as possible).
Operating such a server yourself may become expensive, depending on your usage. Using a commercial provider for TURN servers may be the better option; personally I've had good experiences with Twilio in this regard, but do shop around for other offerings.

Custom ADO.NET implementation as a client for a WCF service?

We use a particular ODBC driver here to access a legacy database. Our homemade software (a 2-tier VB.NET WinForms application that connects to a SQL Server database) could really use it for some operations. Unfortunately, due to licensing restrictions we cannot deploy the ODBC driver on more than one computer. I'm looking for a way to work around that.
My initial thought was a WCF service and POCOs. However, since the app references a library with a rich set of generic ADO.NET helper functions, I really want to reuse these to communicate with the server. So I'm thinking of making my own ADO.NET implementation to access the WCF service that will, in turn, expose session objects to process queries sent by the client.
Has anybody done something like this before? What challenges await me in implementing my own ADO.NET provider? Also, does something like this already exist, before I go and reinvent the wheel?
You can use an ODBC-ODBC bridge to access your legacy ODBC driver from any other machine and still access it via ODBC. That sounds to me like a lot less effort.
Update: I can only describe the Easysoft ODBC-ODBC Bridge, as I've not seen the code of any other bridge. At the client end you install the OOB client ODBC driver. On the server end you install a service. The client end effectively sends your ODBC calls and data to the server, where they are redirected to the actual ODBC driver you want to use. Of course, there are loads of optimisations performed both in the ODBC APIs and the protocol. There are a lot of advantages to this:
a) you can use a driver you cannot get for the platform you want to code on
b) you can use a 32-bit application to talk to a 64-bit driver, or vice versa
c) you might only be able to (or only want to) use one licence for the driver/database on the server
d) you can cross networks to access a remote driver; etc.
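The appeal is that application code doesn't change at all; only the DSN points at the bridge. A sketch using Node's odbc package, with a hypothetical DSN name:

```typescript
import odbc from "odbc";

async function queryLegacy(): Promise<unknown> {
  // "LegacyViaOOB" would be a DSN configured against the bridge's client
  // driver, which forwards the calls to the real ODBC driver on the server.
  const connection = await odbc.connect("DSN=LegacyViaOOB");
  try {
    // Identical to querying the real driver locally.
    return await connection.query("SELECT * FROM legacy_table");
  } finally {
    await connection.close();
  }
}
```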
Transactions are handled properly in the Easysoft OOB.

How to call VB6 DLL from another machine (DLL as a service)

I have a VB6-MySQL client-server desktop application which is distributed as a setup file.
It uses a DLL for all logical operations as well as database operations. The EXE and the DLL are installed on the server as well as on the client machines. When I say server, I only mean that the database resides on that machine; there is no other difference in the EXE or DLLs.
Since all the database operations are done in the DLL, performance suffers when connecting from a client machine. It is not possible now to move all the logic into the database.
Is it possible to keep the DLL on the server machine only, and have the client machines use that same DLL, so that the database connection is always made from the server itself?
Is converting the DLL to a Windows service a possible solution for this?
How can I convert it to a service?
And finally, if it is possible to make the DLL act as a service, what would the connection issues be?
You appear to be trying to rediscover n-tier application development.
The usual way this would be done using VB6 within a LAN would be to create an ActiveX EXE instead of a DLL so you can use DCOM. However DCOM isn't something you'd want to expose over the Internet.
For such cases it is more typical to use a commonly-open-port protocol such as HTTP or HTTPS. Almost everyone has firewall settings permitting outbound HTTP and HTTPS connections and most of the major Web servers undergo regular hardening to make them safer to expose to the Internet.
The classic way to do this with VB6 was to use IIS to host the Remote Data Service, which uses a form of Web Service "under the covers" where your program doesn't deal with the gory details. However this is a deprecated approach, and today configuring IIS and the RDS components can be a chore since they are locked down hard by default.
This leaves you with such things as the deprecated SOAP Toolkit or 3rd party tools such as those in the PocketSOAP suite... or you can roll your own.
Doing this from scratch can be a bit of work but is more flexible, allowing REST instead of SOAP which can have advantages in itself. You could use whatever Web server you choose that can work with VB6 (via CGI, etc.).
The hardest approach to justify might seem the simplest on the surface: create your own protocol over TCP and write a Windows Service. This can be the most flexible of all but it can be more work than other options and you are on your own as far as making it and keeping it secure. You'll probably also face firewall issues depending on where your clients are and what the local firewall policies are there.
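To give a feel for how bare that approach is, here is a toy sketch (Node's net module, purely for illustration; framing, authentication, and encryption would all be yours to design):

```typescript
import net from "net";

// A toy line-delimited request/response protocol over raw TCP.
const server = net.createServer((socket) => {
  socket.on("data", (buf) => {
    const request = buf.toString("utf8").trim();
    // ...validate, authenticate, and dispatch to database logic here...
    socket.write(JSON.stringify({ echo: request }) + "\n");
  });
});

server.listen(9000);
```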
When we could rely on DCOM the issues were relatively small aside from security configuration headaches. With the Internet in the picture it is an entirely different story.
This really isn't something you undertake casually. Even the assumption that your database is safe to expose to the Internet is naive and should be rethought.

Start with remoting or with WCF

I'm just starting with distributed application development. I need to create (all by myself) an enterprise application for document management. The application will run on an intranet (within the firewall; no internet access is required now, but it probably will be later).
The application needs to manage images that will be stored in MySQL Server (as blobs); those images will then be retrieved by the app, and eventually one or more of them will be converted to PDF.
Performance is the most important non-functional requirement.
I have a couple of doubts.
What do you suggest using, .NET Remoting or WCF over TCP/IP? (I think the second is best, because the moment I need to expose the business logic over the internet, I can just change the protocol.)
Where do you suggest performing the transformation of the images to PDF files? I'm using iText. (I have thought of keeping the business logic within IIS, exposed via WCF, and making that business logic responsible for fetching the images and transforming them to PDF, since IIS and the MySQL Server are on the same physical machine.) I ask about where to do the transformation because the app must be accessible from multiple devices, and for mobile devices, for example, the PDF may not be necessary.
Thank you very much in advance.
WCF; only consider Remoting if WCF presents some issue such as performance in your use case. You have many more scaling and customisation options available under WCF.
Depends. If sending the images over the net presents an issue, then it may have to be done locally. However, as in (1), your existing suggestion seems OK (a sketch follows below).
See .Net Remoting vs. WCF for a similar question.
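On point (2), one option is to let each device decide whether it wants the PDF at all, e.g. via a format parameter on the service. A sketch, with hypothetical helpers standing in for the MySQL blob access and the iText-style conversion:

```typescript
import express from "express";

// Hypothetical stand-ins for the real data access and conversion code.
async function loadImages(docId: string): Promise<Buffer[]> { return []; }
async function renderPdf(images: Buffer[]): Promise<Buffer> { return Buffer.alloc(0); }

const app = express();

app.get("/documents/:id", async (req, res) => {
  const images = await loadImages(req.params.id);
  if (req.query.format === "pdf") {
    // Desktop clients ask for the assembled PDF...
    res.type("application/pdf").send(await renderPdf(images));
  } else {
    // ...while mobile clients can skip the conversion entirely.
    res.json({ pages: images.length });
  }
});

app.listen(8080);
```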
Definitely remoting if this is an option
Transformation - same box that the service is; since the service is going to funnel the images anyway - this is the best place. I would not put it on DB server, to better distribute the load and to separate non-db load from db specific load.
In addition, look into .NET 4.0 RIA Services. They give you the best combination of .NET Remoting and WCF.

Application Architecture using WCF and System.AddIn

A little background -- we're designing an application that uses a client/server architecture consisting of:
A server which loads server-side modules, potentially developed by other teams.
A client which loads corresponding client-side modules (also potentially developed by those other teams; each client module corresponds with a server module).
The client side communicates with the server side for general coordination, and as well as module specific tasks. (At this point, I think that means client talks to server, client modules talk to server modules.)
Environment is .NET 3.5, and client side is WPF.
The deployment scenario introduces the potential to upgrade the server, any server-side module, the client, and any client-side module independently. However, being able to "work" using mismatched versions is required. I'm therefore concerned about versioning issues.
My thinking so far:
A Windows Service for the server.
Using System.AddIn for the server to load and communicate with the server modules will give us the greatest flexibility in terms of version compatibility between the server and server modules.
The server and each server module vend WCF services for communication to the client side; communication between the server and a server module, or between two server modules use the AddIn contracts. (One advantage of this is that a module can expose a different interface within the server and outside it.)
Similarly, the client uses System.AddIn to find, load, and communicate with the client modules.
Client communications with client modules is via the AddIn interface; communications from the client and from client modules to the server side are via WCF.
For maximum resilience, each module will run in a separate app-domain.
In general, the system has modest performance requirements, so marshalling and crossing process boundaries is not expected to be a performance concern. (Performance requirement is basically summed up by: don't get in the way of the other parts of the system not described here.)
My questions are around the idea of having two different communication and versioning models to work with, which will be an added burden on our developers. System.AddIn seems quite powerful, but also a little unwieldy. (I'm also unsure of Microsoft's commitment to it in the future.) On the other hand, I'm not thrilled with WCF's versioning capabilities. I have a feeling that it would be possible to implement the System.AddIn view/adapter/contract system within WCF, but being fairly new to both technologies, I would have no idea where to start.
So... Am I on the right track here? Am I doing this the hard way? Are there gotchas I need to be aware of on this road?
Thanks.
This sounds too complicated. Consider an architecture where each added module includes the client-side code (use System.AddIn there if you like), and where the server-side module is a new service.svc file. The client would know the URL of the corresponding service.
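In that layout the client only needs a map from module to service URL. A sketch of the idea (real WCF .svc endpoints would normally be invoked through a generated proxy rather than looked up like this, so the names here are purely illustrative):

```typescript
// Hypothetical registry, populated as modules are discovered or loaded.
const moduleServices: Record<string, string> = {
  invoicing: "https://server.example.com/Invoicing/service.svc",
  reporting: "https://server.example.com/Reporting/service.svc",
};

function serviceUrlFor(module: string): string {
  const url = moduleServices[module];
  if (!url) throw new Error(`No service registered for module "${module}"`);
  return url;
}
```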
Alternatively, you should look into the Managed Extensibility Framework (MEF) for the add-in feature. That's what they'll be starting to use for Visual Studio extensibility in the coming release.