I have an out-of-proc COM (ATL) server that has been created as free-threaded (CComMultiThreadModel).
I am slightly confused as to how that relates to the re-entrancy of calls into my object. For example, I assumed that multiple clients would be able to call simultaneously and have those requests processed simultaneously; however, it seems (according to my logs) that each request is serialized.
What am I missing? Does simply declaring a class as MTA mean it truly is, or is there something else I have to do? Note that I am referring here to multiple processes all making concurrent calls, not threads within a single process, so COINIT_MULTITHREADED is not the problem.
This snippet from some MS documentation on the MTA would seem to suggest that everything should work out of the box:
"Multiple clients can simultaneously call, from different threads, an object that supports free-threading. In free-threaded out-of-process servers, COM, through the RPC subsystem, creates a pool of threads in the server process, and a client call (or multiple client calls) can be delivered by any of these threads at any time."
No sooner had I asked than I found the answer: you need to specify #define _ATL_FREE_THREADED in stdafx.h.
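For reference, a minimal sketch of what that looks like in the precompiled header (the surrounding includes are the usual ATL ones and are assumptions about your project; if the wizard generated an _ATL_APARTMENT_THREADED define, replace it rather than defining both):

    // stdafx.h
    // Build the module as free-threaded so the EXE initializes COM for the
    // multithreaded apartment and serves calls from the RPC thread pool,
    // rather than funnelling every call through a single apartment.
    #define _ATL_FREE_THREADED

    #include <atlbase.h>
    #include <atlcom.h>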
I have read that some web servers' behaviour is described as blocking, whereas Node.js's is said to be non-blocking.
Would a blocking web server get hung up, in the sense that it needs restarting, if many HTTP clients send requests more or less in parallel?
To clarify: I wouldn't say it needs restarting if it potentially works fine again once the flood of parallel requests has stopped.
And I currently don't understand how request buffers and overflows work for web servers.
Although it is technically possible to make a single-threaded, single-process blocking server that can only handle one request at a time, it doesn't make much practical sense. Concurrency is kind of important.
The three main paradigms for parallelism (that I know of) are:
Multi-process/forking
Threading
Using an event loop/reactor pattern
Node falls in the third category, and also a bit in the second category depending on how you look at it.
Most languages can look at a socket and read from it, and immediately move on if there was nothing to read. Therefore most languages can have this non-blocking behavior.
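As a rough illustration of that "look and move on" behaviour, here is a plain POSIX-sockets sketch in C++; nothing about it is Node-specific, and the function name and error handling are invented for the example:

    #include <cerrno>
    #include <cstddef>
    #include <fcntl.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    // Try to read without ever parking the thread. Returns true if data
    // (or an orderly close) was seen; returns false if there was nothing
    // to read yet or a real error occurred (caller can inspect errno).
    bool try_read(int fd, char* buf, size_t len, ssize_t& got)
    {
        // Put the descriptor into non-blocking mode.
        int flags = fcntl(fd, F_GETFL, 0);
        fcntl(fd, F_SETFL, flags | O_NONBLOCK);

        got = recv(fd, buf, len, 0);
        if (got >= 0)
            return true;                          // data arrived, or peer closed
        if (errno == EAGAIN || errno == EWOULDBLOCK)
            return false;                         // nothing to read - move on
        return false;                             // genuine error
    }

An event loop such as Node's automates this: rather than polling each socket by hand, it asks the OS (via epoll, kqueue or IOCP) which descriptors are ready and only then reads from them.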
I have two processes: a Client and a Server. The Client makes a call that the Server starts processing, but the Server can begin shutting down before the call is finished. This can cause objects required by the call to be destroyed suddenly, leading to a crash.
The Client and Server communicate through COM. Something that reports the number of currently active RPCs to and from a given Server process would be extremely helpful in this case.
Does COM, as the layer of communication between these two processes, provide any aid in delaying shutdown while there is active interaction between them?
I don't know which language has been used to implement your COM client/server.
But as far as I understand, it looks like you are facing a COM multithreading issue. What is the threading model of your COM server? (I suppose it is multithreaded.)
If that's the case, you should synchronize your threads.
The other way would be to change the threading model of your COM server to a single-threaded model. In that case, the server's shutdown call will be executed after the previous client call finishes.
I suspect you really want CoAddRefServerProcess inside your C++ object's constructor (and CoReleaseServerProcess in the destructor).
This will keep your server alive until the C++ objects go away.
However, this won't prevent the client from requesting new instances, so you may also want:
CoRevokeClassObject, to prevent clients from obtaining proxies to new instances.
If you're feeling really nasty, CoDisconnectObject will forcibly disconnect the proxy from the server.
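A bare sketch of the first suggestion using the raw COM API; the class name is made up, and in an ATL EXE server the module's Lock/Unlock bookkeeping typically does this for you, so treat it as the underlying idea rather than a finished implementation:

    #include <windows.h>
    #include <objbase.h>

    // Hypothetical per-connection object living in the server process.
    class CClientConnection
    {
    public:
        CClientConnection()
        {
            // Pin the server process for as long as this object exists.
            CoAddRefServerProcess();
        }

        ~CClientConnection()
        {
            // When the global count drops to zero, COM suspends the class
            // objects automatically; begin an orderly shutdown here, e.g.
            // signal the main thread to leave its message loop.
            if (CoReleaseServerProcess() == 0)
            {
                // initiate server shutdown
            }
        }
    };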
I have an issue with a 3rd party DLL, which is NOT thread-safe, but which I need to call within an orchestration.
I'm making the DLL call within an Expression shape. The same DLL is called in a number of different orchestrations.
The problem I have is that for a series of incoming messages, BizTalk will run multiple orchestrations (or multiple instances of an orchestration) in parallel - which leads to exceptions within the DLL.
Is there any way around this, given that refactoring the DLL isn't an option? Or is there a way to throttle BizTalk to run only one orchestration at any one time? (I've seen some hacks restricting the working pool to the number of processors, but this doesn't seem to help. We can't downgrade to a single-core machine!)
I would rather find a way of keeping the DLL happy (though I can't think how) than throttle BizTalk - but if there is a way to throttle, that would be an acceptable short-term solution whilst we discuss it with the 3rd party (who are a large organisation and really should know better!).
Even on a single core machine, BizTalk will run concurrent orchestrations.
You could throttle the orchestration by implementing the singleton pattern in the orchestration.
You do this by creating a loop in the orchestration and having two receive shapes, one before the start of the loop and one inside the loop.
Both these receive are bound to the same inbound logical port.
You create a correlation set which specifies something like BTS.MessageType and set the first receive shape to initiate the correlation and the second receive to follow the correlation.
As long as the loop does not end you can guarantee that any message of a certain type will always be processed by the same instance of the orchestration.
However, using singletons is a design decision which comes with drawbacks. For example, throughput suffers, and you have to ensure that your singleton cannot suspend, else it will create a block for all subsequent messages.
Hope this helps.
I have been tasked with creating a set of web services. We are a Microsoft shop, so I will be using WCF for this project. There is an interesting design consideration that I haven't been able to figure out a solution for yet. I'll try to explain it with an example:
My WCF service exposes a method named Foo().
10 different users call Foo() at roughly the same time.
I have 5 special resources called R1, R2, R3, R4, and R5. We don't really need to know what the resource is, other than the fact that a particular resource can only be in use by one caller at a time.
Foo() is responsible for performing an action using one of these special resources. So, in a round-robin fashion, Foo() needs to find a resource that is not in use. If no resources are available, it must wait for one to be freed up.
At first, this seems like an easy task. I could maybe create a singleton that keeps track of which resources are currently in use. The big problem is the fact that I need this solution to be viable in a web farm scenario.
I'm sure there is a good solution to this problem, but I've just never run across this scenario before. I need some sort of resource tracker / provider that can be shared between multiple WCF hosts.
Any ideas from the architects out there would be greatly appreciated!
Create another central service which only the web services know about. This service takes on the role of the resource manager.
All of the web services in the farm will communicate with this central service to query for resource availability and to "check out" and "check in" resources.
You could track the resource usage in a database table, which all the servers on the farm could access.
Each resource would have a record in the database, with fields that indicate whether (or since when) it is in use. You could get fancy and implement a timeout feature as well.
For troubleshooting purposes, you could also record who is using the resource.
If you record when each resource is being used (in another table), you would be able to verify that your round-robin is functioning as you expect, decide whether you should add more copies of the resource, etc.
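Transport and persistence aside (the WCF plumbing, the database table, timeouts), the check-out/check-in logic such a resource manager centralizes boils down to something like the sketch below. It is plain C++ rather than anything WCF-specific, every name in it is illustrative, and in a farm this state must live in exactly one place (the central service's process or a shared store), otherwise two hosts could hand out the same resource.

    #include <condition_variable>
    #include <mutex>
    #include <string>
    #include <vector>

    // Tracks which resources (e.g. R1..R5) are in use and hands them out
    // round-robin, blocking callers until one is free.
    class ResourcePool
    {
    public:
        explicit ResourcePool(std::vector<std::string> names)
            : names_(std::move(names)), inUse_(names_.size(), false) {}

        // Blocks until some resource is free, then checks it out.
        std::string CheckOut()
        {
            std::unique_lock<std::mutex> lock(mutex_);
            for (;;)
            {
                for (size_t i = 0; i < names_.size(); ++i)
                {
                    size_t idx = (next_ + i) % names_.size();
                    if (!inUse_[idx])
                    {
                        inUse_[idx] = true;
                        next_ = (idx + 1) % names_.size();
                        return names_[idx];
                    }
                }
                freed_.wait(lock);   // wait until someone checks a resource in
            }
        }

        void CheckIn(const std::string& name)
        {
            std::lock_guard<std::mutex> lock(mutex_);
            for (size_t i = 0; i < names_.size(); ++i)
                if (names_[i] == name)
                    inUse_[i] = false;
            freed_.notify_one();
        }

    private:
        std::vector<std::string> names_;
        std::vector<bool> inUse_;
        size_t next_ = 0;
        std::mutex mutex_;
        std::condition_variable freed_;
    };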
There are any number of approaches to solving this problem, each with their own costs and benefits. For example:
Using MSMQ to queue all requests; worker processes pull messages from the queue, pass them to Rn, and post responses back to a response queue for Foo to dispatch back to the appropriate caller.
Using an in-memory or locally persisted message dispatcher to send the next request to an on-box service (e.g. via Named Pipes for perf) based upon some distribution algorithm of your choice.
etc.
Alas, you don't indicate whether your requests have to survive power outages, whether they need to be transactionally aware, whether the callers are also WCF, how quickly these calls will be received, how long it takes for Rn to return with results, etc.
Whichever solution you choose, I strongly encourage you to split your call to Foo() into a RequestFoo() and GetFooResponse() pair or implement a WCF callback hosted by the caller to receive results asynchronously.
If you do NOT do this, you're likely to find that your entire system will grind to a halt the second traffic exceeds your resources' ability to satisfy the workload.
I've got a COM solution with a client (an EXE) and a server (a service EXE). When the client is killed, or dies abnormally, I'd like to know about it in the server. Unfortunately, it appears that COM doesn't call Release when this happens.
How can I tell that a COM client has died, so that I can clean up for it?
Update: Answers so far require that the server have a pointer to the client, and call a method on it periodically (ping). I'd rather not do this, because:
I don't currently have a callback pointer (server->client), and I'd prefer to avoid introducing one, unless I really have to.
This still won't call Release on the object that represents the client endpoint, which means that I'll have to maintain the other client-specific resources separately, and hold a weak pointer from the "endpoint" to the other resources. I can't really call Release myself, in case COM decides to do it later.
Further to the second: Is there any way, in COM, to kill the client stub, in lieu of calling Release? Can I get hold of the stub manager for my object and interface and tell it to do cleanup?
Killing is a rather extreme event, so neither CORBA, nor COM/DCOM, nor Java's RMI has an explicit workaround for it. Instead you can create a very simple callback to implement a 'ping'. It can be, for example, time-based or run on an occasional basis.
You can also think about a third EXE that works as a monitor for your client and provides notification to the server (a service).
The simplest solution is for the server to run a ping test on a timer.
In a multithreaded apartment setup this can run on a background thread.
This test should call from the server to the client with a call that's guaranteed to reach the client if it is alive, e.g. a QueryInterface call on a client object.
Unexpected failures can be treated as an indication that the client is dead.
The server will need to manage the list of clients it pings intelligently and should ensure that the ping logic itself doesn't keep the client alive.
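A sketch of that periodic test, assuming the server already holds proxies (IUnknown pointers) to per-client callback objects; the structure and names are invented for illustration:

    #include <windows.h>
    #include <unknwn.h>
    #include <vector>

    struct ClientEntry
    {
        IUnknown* callback;   // proxy to an object living in the client process
        // ... other per-client resources tracked by the server ...
    };

    // Run periodically from a background thread in an MTA server.
    void PingClients(std::vector<ClientEntry>& clients)
    {
        for (auto it = clients.begin(); it != clients.end(); )
        {
            IUnknown* probe = nullptr;
            // Any call that crosses the proxy will do; QueryInterface for
            // IUnknown is cheap, and releasing the result immediately means
            // the ping itself holds no lasting reference on the client.
            HRESULT hr = it->callback->QueryInterface(
                IID_IUnknown, reinterpret_cast<void**>(&probe));
            if (probe)
                probe->Release();

            if (FAILED(hr))
            {
                // The call never reached the client: treat it as dead,
                // release the proxy and clean up its resources.
                it->callback->Release();
                it = clients.erase(it);
            }
            else
            {
                ++it;
            }
        }
    }

Typical failures from a dead client are RPC_E_DISCONNECTED or the "RPC server unavailable" HRESULT, but as noted above any unexpected failure can be taken as a sign the client is gone.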