Indy Server and AdoQuery are conflicting

I have two programs one is a client the other is a server for the client.
The client sends some data to the server like this and then reads the response:
idtcpclient1.WriteLn(command); //Command contains data that the server needs, e.g. name and surname
progressbar1.StepIt;
sresult := idtcpclient1.ReadLn();
The server then reads the line, manipulates it and creates a SQL Query.
adoquery1.Close;
adoquery1.SQL.Text := 'select * from '+sGrade;
adoquery1.Open;
But as soon as it opens the connection to the database the client gives an error "Connection closed gracefully"
I have tested the server code without the client by simulating the input and it works fine.
I think Indy and AdoQuery are conflicting.
If so, why, and how can I fix it?
If not, what is the problem and how should I fix it?

ADO uses apartment-threaded COM objects that have an affinity to the thread that creates them. They cannot be used across thread boundaries unless they are marshalled.
Indy's TCP server is multi-threaded. Each client runs in its own thread.
A thread must call CoInitialize/Ex() to establish its relationship with COM before it can then access any COM objects, and call CoUninitialize() when it is done using COM.
Your server fails because it is raising an uncaught exception that disconnects the client. Most likely because you did not initialize COM.
You need to create ADO objects on a per-client basis, do not use them from the main thread. In the server's OnConnect event, call CoInitialize/Ex(). In the OnDisconnect event, call CoUninitialize(). In the OnExecute event, dynamically create and use new ADO objects as needed.
This does mean that each client will need its own database connection. If you do not want that, then move your ADO logic to a dedicated thread that clients can post requests to when needed. Stay away from doing the database work in the main thread, it does not belong there.
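The per-client pattern above is language-agnostic; here is a minimal sketch of it in Python, using socketserver's threaded TCP server as a stand-in for Indy's threaded server and sqlite3 as a stand-in for ADO. The setup/finish hooks play the role of OnConnect (where Delphi code would call CoInitialize) and OnDisconnect (CoUninitialize); all names are illustrative, not Indy API.

```python
import socketserver
import sqlite3

class ClientHandler(socketserver.StreamRequestHandler):
    def setup(self):
        # Runs in the per-client thread -- analogous to OnConnect.
        # In Delphi this is where CoInitialize() would be called.
        super().setup()
        self.db = sqlite3.connect(":memory:")  # one connection per client thread

    def handle(self):
        # Analogous to OnExecute: use only this thread's own objects,
        # never a shared object owned by the main thread.
        line = self.rfile.readline().decode().strip()
        row = self.db.execute("SELECT ?", (line,)).fetchone()
        self.wfile.write((row[0] + "\n").encode())

    def finish(self):
        # Analogous to OnDisconnect -- CoUninitialize() in Delphi.
        self.db.close()
        super().finish()
```

The key point mirrored here is that the database object is created, used, and destroyed entirely inside the thread that serves the client.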

If you use datamodules, you can create one instance of the datamodule per client to avoid threading errors; Indy can hold a reference to the client's datamodule in the context. Alternatively, use a pool of datamodule instances, depending on available resources and traffic.
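The pooling variant can be sketched generically. This is an assumed, minimal object pool (not Indy API): a fixed number of pre-built "datamodule-like" instances that client threads borrow and always return, blocking when the pool is exhausted.

```python
import queue
import contextlib

class ModulePool:
    def __init__(self, factory, size):
        # Pre-build a fixed number of instances up front.
        self._q = queue.Queue()
        for _ in range(size):
            self._q.put(factory())

    @contextlib.contextmanager
    def acquire(self, timeout=None):
        mod = self._q.get(timeout=timeout)  # blocks if the pool is exhausted
        try:
            yield mod
        finally:
            self._q.put(mod)  # always return the instance to the pool
```

A client-serving thread would wrap its per-request work in `with pool.acquire() as dm: ...`, so no two threads ever touch the same instance at once.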

Related

Does COM provide methods to delay shutdown until all RPCs are done?

I have two processes: a Client and a Server. Client makes a call that the Server starts processing, but the Server can start shutting down before the call is finished. This can cause objects required by the call to suddenly become destroyed, leading to a crash.
The Client and Server communicate through COM. Something that reports the number of currently active RPCs to and from a given Server process would be extremely helpful in this case.
Does COM, as the layer of communication between these two processes, provide any aid in delaying shutdown while there is active interaction between them?
I don't know which language was used to implement your COM client/server.
But as far as I understand, it looks like you are facing a COM multithreading issue. What is the threading model of your COM server? (I assume it is multithreaded.)
If that's the case, you should synchronize your threads.
The other way would be to change the threading model of your COM server to a single-threaded model. In that case, the server's shutdown call will only execute after the previous client call finishes.
I suspect you really want CoAddRefServerProcess inside your C++ object's constructor (and CoReleaseServerProcess in the destructor).
This will keep your server alive until the C++ objects go away.
However, this won't prevent the client from requesting new instances, so you may also want:
CoRevokeClassObject to revoke the class object, so clients can no longer create new instances.
If you're feeling really nasty, CoDisconnectObject will forcibly disconnect the proxy from the server.
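For illustration, the lifetime-counting idea behind CoAddRefServerProcess/CoReleaseServerProcess can be modeled in a language-agnostic sketch. The real calls are Windows COM APIs; this Python class only mirrors the behavior they provide: a process-wide count of live objects, with shutdown gated until the count reaches zero.

```python
import threading

class ServerLifetime:
    def __init__(self):
        self._count = 0
        self._cond = threading.Condition()

    def add_ref(self):
        # Mirrors CoAddRefServerProcess: called in each object's constructor.
        with self._cond:
            self._count += 1
            return self._count

    def release(self):
        # Mirrors CoReleaseServerProcess: called in each object's destructor.
        with self._cond:
            self._count -= 1
            if self._count == 0:
                self._cond.notify_all()  # last object gone: allow shutdown
            return self._count

    def wait_for_shutdown(self, timeout=None):
        # The server's shutdown path blocks here until no objects remain.
        with self._cond:
            return self._cond.wait_for(lambda: self._count == 0, timeout)
```

In the real COM case, CoReleaseServerProcess additionally suspends the class objects for you when the count hits zero, which is what closes the race against new activations.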

python twisted: enforcing a single connection per id

I have a twisted server using SSL sockets and certificates to identify the different clients that connect. I'd like to enforce a state where there is only one connection per possible id. The two ways I can think of are to keep track of connected ids and refuse a second connection by the same id, or to allow the second connection and immediately terminate the first. I'm trying to do the latter but am having some issues (I'll explain my choice at the end).
I'm storing a list of connections in the factory class, and after the SSL handshake I compare the client's id with that list. If it's already in the list, I call .transport.abortConnection() on the old connection. I then want to do the normal things I do to record the new connection in my database. However, the call to abortConnection() doesn't seem to call connectionLost() synchronously, and connectionLost() is where I do my cleanup and the database calls that record a lost connection. So my code records that the id connected, but a later call to connectionLost() leaves the database showing that id as disconnected.
Is there some sort of way to block the incoming second connection from further processing until the first connection has finished processing the disconnection?
Choice explanation: the whole reason I'm doing this is that I have clients behind NATs whose IP addresses appear to change fairly regularly (once every 1-3 days). The connecting devices have their connections uncleanly severed and then try to reconnect from the new IP. However, my server isn't notified about the disconnect and usually has to time out the connection. Before the server times out the old connection, though, the client sometimes manages to reconnect, and the server then ends up with two apparent connections from the same client. So the first connection is typically the one I really want to terminate.
Once you have determined the ID of the connection, you can call self.transport.pauseProducing() on the "new" connection's transport, which will prevent any further data notifications until you call self.transport.resumeProducing(). You can then call newConnection.transport.resumeProducing() from oldConnection.connectionLost(), if a new connection exists.
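A sketch of that sequencing, using minimal stand-ins rather than real twisted Protocol/Factory/transport classes (real code would subclass twisted's APIs; all names here are illustrative). The point is that the new connection is held paused until the old connection's connectionLost() cleanup has run.

```python
class Transport:                      # stand-in for a twisted transport
    def __init__(self):
        self.paused = False
    def pauseProducing(self):
        self.paused = True            # no data events delivered until resumed
    def resumeProducing(self):
        self.paused = False
    def abortConnection(self):
        pass  # twisted delivers connectionLost() asynchronously, later

class Connection:                     # stand-in for a twisted Protocol
    def __init__(self, client_id):
        self.client_id = client_id
        self.transport = Transport()
        self.pending_replacement = None

class Factory:
    def __init__(self):
        self.by_id = {}               # client id -> active connection

    def client_identified(self, conn):
        old = self.by_id.get(conn.client_id)
        if old is not None:
            conn.transport.pauseProducing()   # hold the new connection
            old.pending_replacement = conn
            old.transport.abortConnection()   # connectionLost() comes later
        else:
            self.by_id[conn.client_id] = conn

    def connection_lost(self, conn):
        # Database cleanup for `conn` happens here, *before* the
        # replacement connection is released.
        if self.by_id.get(conn.client_id) is conn:
            del self.by_id[conn.client_id]
        new = conn.pending_replacement
        if new is not None:
            self.by_id[new.client_id] = new
            new.transport.resumeProducing()   # now safe to proceed
```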

Out of process COM server with MTA

I have an out of proc COM (ATL) Server that has been created as free threaded (CComMultiThreadModel)
I am slightly confused as to how that relates to the re-entrancy of calls into my object. For example, I assumed that I would be allowed to call from multiple clients simultaneously and have those requests processed simultaneously; however, it seems (according to my logs) that each request is serialized.
What am I missing? Does simply declaring a class as MTA mean it truly is, or is there something else I have to do? Note that I am referring here to multiple processes all making concurrent calls, not threads within a single process, so COINIT_MULTITHREADED is not the problem.
This snippet from some MS documentation on MTA would seem everything should work out of the box:
Multiple clients can simultaneously call, from different threads, an object that supports free-threading. In free-threaded out-of-process servers, COM, through the RPC subsystem, creates a pool of threads in the server process and a client call (or multiple client calls) can be delivered by any of these threads at any time
No sooner had I asked than I found the answer: you need to specify #define _ATL_FREE_THREADED in stdafx.h.

How do I properly handle a faulted WCF connection?

In my client program, there is a WCF connection that is opened at startup and is supposed to stay connected until shutdown. However, there is a chance that the server closes due to unforeseeable circumstances (imagine someone pulling the cable).
Since the client uses a lot of contract methods in a lot of places, I don't want to add a try/catch on every method call.
I've got 2 ideas for handling this issue:
Create a method that takes a delegate, executes the delegate inside a try/catch, and returns an Exception in case of a known exception, or null otherwise. The caller has to deal with non-null results.
Listen to the Faulted event of the underlying CommunicationObject. But I don't see how I could handle the event except for displaying some error message and shutting down.
Are there some best practices for faulted WCF connection that exist for app lifetime?
If you do have both ends of the wire under your control - both the server and the client are .NET apps - you could think about this approach instead:
put all your service and data contracts into a shared assembly, that both the server and the client will use
create the ChannelFactory<IYourService> at startup time and cache it; since it needs to have access to the service contract, this only works if you can share the actual service contract between server and client. This operation is the expensive part of building the WCF client
ChannelFactory<IYourService> factory = new ChannelFactory<IYourService>();
create the actual communications channel between client and server each time you make a call, based on the ChannelFactory. This is pretty cheap and doesn't cost much time, and you can skip any thoughts about having to detect or deal with faulted channels.
IYourService client = factory.CreateChannel();
client.CallYourServiceMethod();
Otherwise, what you basically need to do is wrap all service calls into a method, which will first check for a channel's faulted state, and if the client proxy is faulted, aborts the current one and re-creates a new one.
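That wrapper idea can be sketched generically. Here Python stands in for C#, a raised ConnectionError models the channel's faulted state, and `factory` plays the role of ChannelFactory&lt;T&gt;.CreateChannel(); all names are assumptions for illustration, not WCF API.

```python
class SelfHealingProxy:
    def __init__(self, factory):
        # `factory` builds a fresh channel; it is cached and reused,
        # mirroring the cached ChannelFactory<T> in the WCF answer.
        self._factory = factory
        self._channel = factory()

    def call(self, method, *args):
        try:
            return getattr(self._channel, method)(*args)
        except ConnectionError:
            # Faulted channel: discard it, build a new one, retry once.
            self._channel = self._factory()
            return getattr(self._channel, method)(*args)
```

In real WCF code the check would be `proxy.State == CommunicationState.Faulted`, followed by `Abort()` on the old proxy before creating the new one.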
I wrote a blog post on exceptions in WCF that deals with how to handle this: http://jamescbender.com/bendersblog/Default.aspx

In COM, how can I get notified when a client dies?

I've got a COM solution with a client (an EXE) and a server (a service EXE). When the client is killed, or dies abnormally, I'd like to know about it in the server. Unfortunately, it appears that COM doesn't call Release when this happens.
How can I tell that a COM client has died, so that I can clean up for it?
Update: Answers so far require that the server have a pointer to the client, and call a method on it periodically (ping). I'd rather not do this, because:
I don't currently have a callback pointer (server->client), and I'd prefer to avoid introducing one, unless I really have to.
This still won't call Release on the object that represents the client endpoint, which means that I'll have to maintain the other client-specific resources separately, and hold a weak pointer from the "endpoint" to the other resources. I can't really call Release myself, in case COM decides to do it later.
Further to the second: Is there any way, in COM, to kill the client stub, in lieu of calling Release? Can I get hold of the stub manager for my object and interface and tell it to do cleanup?
Killing a process is a rather extreme event, so neither CORBA, nor COM/DCOM, nor Java's RMI has an explicit workaround. Instead you can create a very simple callback to implement a 'ping'. It can be, for example, time-based or run on an occasional basis.
You could also think about a third EXE that acts as a monitor for your client and notifies the server (the service) when the client dies.
Simplest solution is for the server to run a PING test on a timer.
In a multi threaded apartment setup this can run on a background thread.
This test should call from server to client with a call that's guaranteed to make it to the client if it is alive e.g. a QueryInterface call on a client object.
Unexpected failures can be treated as an indication that the client is dead.
The server will need to manage the list of clients it pings intelligently and should ensure that the ping logic itself doesn't keep the client alive.
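A sketch of that ping loop, with `probe` standing in for the call that is guaranteed to reach a live client (QueryInterface in the COM case) and `on_dead` for the per-client cleanup; these are illustrative names, not a COM API.

```python
import threading

class ClientMonitor:
    def __init__(self, probe, on_dead):
        self._clients = {}           # client id -> client endpoint/callback
        self._probe = probe          # e.g. wraps a QueryInterface call
        self._on_dead = on_dead      # per-client cleanup on failure
        self._lock = threading.Lock()

    def register(self, client_id, endpoint):
        with self._lock:
            self._clients[client_id] = endpoint

    def sweep(self):
        # Run this from a background timer thread in a real server.
        with self._lock:
            items = list(self._clients.items())
        for cid, endpoint in items:
            try:
                self._probe(endpoint)
            except Exception:
                # An unexpected failure is treated as "client is dead".
                with self._lock:
                    self._clients.pop(cid, None)
                self._on_dead(cid)
```

Note the caveat from the answer above: the monitor's own reference to the client must be one that does not itself keep the client alive.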