I am working on a bug related to an unmanaged MTA COM object. The object has Lock and Unlock methods and uses a mutex that requires the same thread that called Lock to call Unlock.
The problem is that when Lock and Unlock are called from a managed STA thread (using COM interop), the calls come into the COM object on an RPC callback thread, but the callback thread used is not always the same for both calls. When it differs, the Unlock call fails because it cannot unlock the mutex.
In other words:
Managed STA thread 1 -> RPC callback (thread 11) -> Lock
Managed STA thread 1 -> RPC callback (thread 12) -> Unlock -> Error
I am trying to evaluate all possible solutions before making any decisions on a fix. As such, I am trying to find out:
1) Is there a way to prevent an RPC callback thread from being used in the first place? In my testing, if I make the calls to the object from an unmanaged STA thread, the calls seem to come in on the calling thread itself. What is different when the call comes from .NET that necessitates the use of an RPC callback thread? Is there any way to prevent RPC callbacks from being used (other than using an MTA calling thread)?
2) If not, is there a way to force a consistent RPC callback thread to be used from the same managed STA thread?
This is by design for a free-threaded server. COM takes your word for it and allows stubs to use arbitrary RPC threads. You cannot make any assumptions about thread identity; the RPC thread is picked from a pool and is recycled. Unfortunately, it often picks the same one when the calls are sequenced, so it will look like it works fine at first. But trouble starts as soon as more than one concurrent server call is made. There is no option to make it selective; a free-threaded server promises not to care. Nor could that work well in practice: it would either scale horribly or induce deadlock.
You therefore cannot use a mutex to implement the locking, since a mutex has thread affinity. A semaphore is a good choice.
Related
Can anybody tell me what the difference is between loading a dynamic library (which internally calls a COM DLL) on the main thread versus on a worker thread?
Thanks in advance
Mostly, thanks to support from the application development language, little special work is needed to use COM on the main thread.
For example, check the OLE/COM option in the project-creation wizard.
However, when using COM from worker threads, the following steps are necessary.
Worker threads that use COM must initialize OLE at the beginning of the thread (before creating or using COM objects).
With the Win32 API, that is CoInitialize()/CoInitializeEx(). Alternatively, depending on the application development language, there will be equivalent functions or libraries, so call those.
Worker threads that use COM must run their own message-processing loop, independently of the Windows message loop run by the main thread responsible for the UI.
Pay attention to the COM component being used.
If the ThreadingModel value in the registry entry under which the COM component is registered is an empty string (nothing is set), events may not be delivered to the worker thread and an exception may be raised.
If this registry value is missing, set it to "Apartment".
Use a COM object only from the thread that created it.
If a COM object is called from a thread other than the one that created it, an error may occur or it may not operate normally.
Additional notes:
Before terminating a worker thread, it is necessary to perform the cleanup described above:
Terminate and release the COM objects, stop the message-processing loop, call CoUninitialize(), and so on.
Resources created or allocated within the worker thread must be terminated or released.
I have an application that listens for and processes messages from both internet sockets and Unix domain sockets. Now I need to add SSL to the internet sockets. I was using a single io_service object for all the sockets in the application; it seems I now need separate io_service objects for the network sockets and the Unix domain sockets. I don't have any threads in my application, and I use async_send, async_receive, and async_accept to process data and connections. Please point me to any examples using multiple io_service objects with async handlers.
The question carries some uncertainty as to whether multiple io_service objects are actually required. I could not locate anything in the reference documentation, or in the overviews for SSL and UNIX domain sockets, that mandates separate io_service objects. Regardless, here are a few options:
Single io_service:
Try to use a single io_service.
If you do not have a direct handle to the io_service object, but you have a handle to a Boost.Asio I/O object, such as a socket, then a handle to the associated io_service object can be obtained by calling socket.get_io_service().
Use a thread per io_service:
If multiple io_service objects are required, then dedicate a thread to each io_service. This approach is used in Boost.Asio's HTTP Server 2 example.
boost::asio::io_service service1;
boost::asio::io_service service2;

boost::thread_group threads;
// Run service1's event loop on a dedicated thread.
threads.create_thread(boost::bind(&boost::asio::io_service::run, &service1));
// Run service2's event loop on the current thread.
service2.run();
threads.join_all();
One consequence of this approach is that it may require thread-safety guarantees to be made by the application. For example, if service1 and service2 both have completion handlers that invoke message_processor.process(), then message_processor.process() needs to either be thread-safe or be called in a thread-safe manner.
Poll io_service:
io_service provides non-blocking alternatives to run(). Whereas io_service::run() blocks until all work has finished, io_service::poll() runs handlers that are ready to run and does not block. This allows a single thread to drive the event loops of multiple io_service objects:
while (!service1.stopped() &&
       !service2.stopped())
{
  std::size_t ran = 0;
  ran += service1.poll();
  ran += service2.poll();

  // If no handlers ran, then sleep.
  if (0 == ran)
  {
    boost::this_thread::sleep_for(boost::chrono::seconds(1));
  }
}
To prevent a tight busy-loop when there are no ready-to-run handlers, it may be worth adding a sleep, as shown. Be aware that this sleep may introduce latency into the overall handling of events.
Transfer handlers to a single io_service:
One interesting approach is to use a strand to transfer completion handlers to a single io_service. This allows for a thread per io_service while removing the need for the application to make thread-safety guarantees, as all completion handlers will post through a single service whose event loop is processed by a single thread.
boost::asio::io_service service1;
boost::asio::io_service service2;
// strand2 will be used by service2's handlers to post work to service1.
boost::asio::strand strand2(service1);
// work2 keeps service2.run() from returning when service2 runs out of work.
boost::asio::io_service::work work2(service2);

// socket is an I/O object associated with service2; buffer and
// read_some_handler are elided for brevity.
socket.async_read_some(buffer, strand2.wrap(read_some_handler));

boost::thread_group threads;
threads.create_thread(boost::bind(&boost::asio::io_service::run, &service1));
service2.run();
threads.join_all();
This approach does have some consequences:
It requires handlers that are intended to be run by the main io_service to be wrapped via strand::wrap().
The asynchronous chain now runs through two io_services, adding a level of complexity. It is important to account for the case where the secondary io_service no longer has work, causing its run() to return.
It is common for an asynchronous chain to occur within the same io_service. In that case the service never runs out of work, as each completion handler posts additional work onto the io_service.
 .-----------------------------------------------------.
 V                                                     |
read_some_handler()                                    |
{                                                      |
  socket.async_read_some(..., read_some_handler) -----'
}
On the other hand, when a strand is used to transfer work to another io_service, the wrapped handler is invoked within service2, causing it to post the completion handler into service1. If the wrapped handler was the only work in service2, then service2 no longer has work, causing service2.run() to return.
     service1                             service2
=========================================================
               .------------------ wrapped(read_some_handler)
               |                              .
               V                              .
read_some_handler                          NO WORK
               |                              .
               |                              .
               '------------------> wrapped(read_some_handler)
To account for this, the example code uses an io_service::work for service2 so that run() remains blocked until explicitly told to stop().
Looks like you are writing a server and not a client. I don't know if this helps, but I am using ASIO to communicate with 6 servers from my client. It uses TCP/IP with SSL/TLS. You can find a link to the code here
You should be able to use just one io_service object with multiple socket objects. But if you decide that you really want multiple io_service objects, it should be fairly easy to do: in my class, the io_service object is static, so just remove the static keyword along with the constructor logic that ensures only one io_service instance is created. Depending on the number of connections expected for your server, you would probably be better off using a thread pool dedicated to handling socket I/O rather than creating a thread for each new socket connection.
I have an app where the network activity is done in its separate thread (and the network thread continuously gets data from the server and updates the display - the display calls are made back on the main thread). When the user logs out, the main thread calls a disconnect method on the network thread as follows:
[self performSelector:@selector(disconnectWithErrorOnNetworkThread:) onThread:nThread withObject:e waitUntilDone:YES];
This selector gets called most of the time and everything works fine. However, sometimes (maybe 2 out of 10 times) the call never returns (in other words, the selector never gets executed) and the thread and the app just hang. Does anyone know why performSelector is behaving erratically?
Please note that I need to wait until the call gets executed, that's why waitUntilDone is YES, so changing that to NO is not an option for me. Also the network thread has its run loop running (I explicitly start it when the thread is created).
Please also note that due to the continuous nature of the data transfer, I need to explicitly use NSThreads and not GCD or Operations queues.
That'll hang if:
it is attempting to perform the selector on the same thread the method was called from, or
the call to perform the selector targets a thread from which a synchronous call was made that, in turn, triggered this perform-selector.
When your program is hung, have a look at the backtraces of all threads.
Note that when implementing any kind of networking concurrency, it is generally a really bad idea to have synchronous calls from the networking code into the UI layers or onto other threads. The networking thread needs to be very responsive; just as blocking the main thread is bad, anything that can block the networking thread is bad, too.
Note also that some APIs with callbacks don't necessarily guarantee which thread the callback will be delivered on. This can lead to intermittent lockups, as described.
Finally, don't do any active polling. Your networking thread should be fully quiescent unless some event arrives. Any kind of looped polling is bad for battery life and responsiveness.
New to WCF.
I have a client which is deadlocking when calling a WCF service.
The service will invoke a callback to the client at the time of the call which is marked as IsOneWay. I have confirmed that the service is not blocking on the callback.
The client then immediately calls the same service again (in a tight loop), without having yet serviced the callback. The client then deadlocks (and a breakpoint on the service side never gets triggered).
So to recap:
CLIENT                                  SERVICE
Call service ------------------------>  (service breakpoint triggers)
(waiting for dispatch thread) <-------  Invoke callback (IsOneWay - doesn't block)
                                        Service returns
Call service again immediately ----->   (service breakpoint doesn't trigger)
(deadlock)
I am assuming that the callback has grabbed some WCF lock at the client end, and that the second service call from the client also wants that lock, so a deadlock results. But this is just an assumption.
I have read about ConcurrencyMode but I can't decide which mode to use, or where to put it because I'm not 100% clear on what is going on, and what is being blocked exactly.
I would also prefer to keep all callbacks being serviced by the dispatch thread if possible as it keeps the code simpler.
Can any WCF experts shed light on exactly what is going on here?
Many thanks
OK, think I've sussed it.
WCF defaults to single-threaded operation. All calls and callbacks get marshalled to a single thread (or, to be more accurate, to a single SynchronizationContext).
My app is a single threaded WPF app, so the SynchronizationContext gets set to the dispatch thread.
When the callback comes in, it tries to marshal the call to the dispatch thread, which of course is sitting blocked on the original service call. I'm not clear on the exact locking, but there is evidently some global lock that it tries to acquire before waiting for the dispatch thread.
When the dispatch thread then calls the service again, it deadlocks on this global lock.
Two ways around it:
1) Create the service proxy on a different thread in the first place. All calls will get marshalled through this thread instead and it won't matter that the dispatch thread is blocked.
2) Apply [CallbackBehavior(UseSynchronizationContext = false)] attribute to the client class that implements the callback. This means WCF will ignore the synchronisation context when the callback comes in, and it will service it on any available thread.
I went with 2. Obviously this means I need to marshal callbacks that could update the GUI to the dispatch thread myself, but luckily my callback implementation is a small wrapper anyway, so I just do a _dispatcher.BeginInvoke() in each callback method to marshal asynchronously. The dispatch thread will then service the callbacks when it gets a chance, which is what I wanted in the first place.
The sequence you have depicted resembles a synchronous call, whereas in an async call the sequence would be:
Client                          Server
Call service ---------------> ProcessRequest(1)   // your for loop, for instance
Call service ---------------> ProcessRequest(2)
Call service ---------------> ProcessRequest(3)
Call service ---------------> ProcessRequest(4)
Call service ---------------> ProcessRequest(5)
Callback awake <------------- Response1           // responses tend to pour in...
Callback awake <------------- Response2
Callback awake <------------- Response3
Callback awake <------------- Response4 ...
For each async web service call, the system processes the request on a separate I/O (IOCP) thread. With this approach you will seldom find a deadlock.
I have found it to work very well, even when called within a loop.
You can, for instance, register for the .OnProcessComplete event and then call the ProcessCompleteAsync method.
I have a WCF service method that I want to perform some action asynchronously (so that there's little extra delay in returning to the caller). Is it safe to spawn a System.ComponentModel.BackgroundWorker within the method? I'd actually be using it to call one of the other service methods, so if there were a way to call one of them asynchronously, that would work too.
Is a BackgroundWorker the way to go, or is there a better way or a problem with doing that in a WCF service?
BackgroundWorker is really intended for use within a UI. On the server you should look into using the ThreadPool instead.
when-to-use-thread-pool-in-c has a good write-up on when to use thread pools. Essentially, when handling requests on a server it is generally better to use a thread pool, for several reasons: over time you avoid the overhead of repeatedly creating new threads, and the pool places a limit on the total number of active threads at any given time, which helps conserve system resources under load.
Generally BackgroundWorker is discussed when a background task needs to be performed by a GUI application. For example, the MSDN page for System.ComponentModel.BackgroundWorker specifically refers to a UI use case:
The BackgroundWorker class allows you to run an operation on a separate, dedicated thread. Time-consuming operations like downloads and database transactions can cause your user interface (UI) to seem as though it has stopped responding while they are running. When you want a responsive UI and you are faced with long delays associated with such operations, the BackgroundWorker class provides a convenient solution.
That is not to say that it could not be used server-side, but the intent of the class is for use within a UI.