How to receive data from a Python Thread in a greenlet without blocking all greenlets?

We have an existing codebase that is heavily Thread-based, which we're trying to expose through flask-socketio. I can't find a mechanism for a greenlet to wait for data from a Thread without either blocking all of gevent or resorting to a polling loop.
I thought maybe I could use an unbounded gevent Queue and call put from the Thread, but that didn't seem to work. Also, the application is not performant enough with long polling, so we can't use threads for socketio.
Is there a mechanism to receive data from the Thread in a greenlet without the greenlet blocking all of gevent?
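One pattern that does work here, sketched below as an assumption rather than anything from the original question: keep a plain thread-safe queue.Queue on the Thread side, and have the greenlet run the blocking get() on gevent's hub threadpool, which suspends only that greenlet while the rest of gevent keeps running.

    import queue
    import threading

    import gevent
    from gevent.hub import get_hub

    data_q = queue.Queue()  # plain stdlib queue; safe to put() from any Thread

    def worker():
        # Stand-in for the existing Thread-based code producing results.
        for i in range(3):
            data_q.put(f"item {i}")

    def reader():
        pool = get_hub().threadpool
        for _ in range(3):
            # Run the blocking get() on a native pool thread; apply()
            # suspends only this greenlet until a result is available.
            item = pool.apply(data_q.get)
            print("greenlet got:", item)

    threading.Thread(target=worker, daemon=True).start()
    gevent.spawn(reader).join()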

Related

How to figure out if Mule flow message processing is in progress

I have a requirement where I need to make sure only one message is being processed at a time by a Mule flow. The flow is triggered by a Quartz scheduler which reads one file from an FTP server each time.
My proposed solution is to keep a global variable "FLOW_STATUS" which is set to "RUNNING" when a message is received and reset to "STOPPED" once the processing of the message is done.
Any message fed to the flow will check this variable and abort if "FLOW_STATUS" is "RUNNING".
This setup seems to be working, but I was wondering if there is a better way to do it.
Are there any best practices around this, or any built-in Mule helper functions to achieve the same, instead of relying on global variables?
It seems like a simpler solution would be to set maxActiveThreads for the flow to 1. In Mule, each message processed gets its own thread, so setting maxActiveThreads to 1 would effectively make your flow single-threaded. Other pending requests will wait in the receiver threads. You will need to make sure your receiver thread pool is large enough to accommodate all of the potential waiting threads; that may mean throttling back your Quartz scheduler to allow time to process the files so the receiver thread pool doesn't fill up. For more information on thread pools and how to tune performance, here is a good link: http://www.mulesoft.org/documentation/display/current/Tuning+Performance
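As an aside, the asker's FLOW_STATUS flag amounts to a hand-rolled non-blocking try-lock. Outside of Mule configuration the same idea looks like this (a Python analogy, not Mule code; process and on_message are hypothetical names):

    import threading

    flow_lock = threading.Lock()

    def process(message):
        print("processing", message)  # stand-in for the real flow logic

    def on_message(message):
        # Non-blocking acquire mirrors checking FLOW_STATUS and aborting.
        if not flow_lock.acquire(blocking=False):
            print("busy; skipping", message)
            return
        try:
            process(message)
        finally:
            flow_lock.release()

    on_message("file-1")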

Is it possible to dictate use of RPC callback threads?

I am working on a bug related to an unmanaged MTA COM object. The object has Lock and Unlock methods and uses a mutex that requires the same thread that called Lock to call Unlock.
The problem is that when Lock and Unlock are called from a managed STA thread (using COM interop), the calls come into the COM object on a RPC callback thread but the callback thread that is used is not always the same for both calls. When it is not the same, the Unlock call fails because it can't unlock the mutex.
In other words:
Managed STA thread 1 -> RPC callback (thread 11) -> Lock
Managed STA thread 1 -> RPC callback (thread 12) -> Unlock -> Error
I am trying to evaluate all possible solutions before making any decisions on a fix. As such, I am trying to find out:
1) Is there a way to prevent an RPC callback thread from being used in the first place? In my testing, if I make the calls to the object from an unmanaged STA thread, the calls seem to come in on the calling thread itself. What is different when the call is coming from .NET that necessitates the use of an RPC callback thread? Is there any way to prevent RPC callbacks from being used (except for using an MTA calling thread)?
2) If not, is there a way to force a consistent RPC callback thread to be used from the same managed STA thread?
This is by design for a free-threaded server. COM takes your word for it and allows stubs to use arbitrary RPC threads. You cannot make any assumptions about the thread identity, the RPC thread is picked from a pool and is recycled. Unfortunately it often picks the same one when the calls are sequenced so it will look like it works fine at first. But trouble starts as soon as more than one concurrent server call is made. There is no option to make it selective, a free-threaded server promises to not care. Nor could that work well in practice, it would either scale horribly or induce deadlock.
You therefore cannot use a mutex to implement locking, because a mutex has thread affinity. A semaphore, which any thread may release, is a good choice.
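The affinity distinction is easy to demonstrate outside COM. Here is a minimal Python sketch of the same contrast, using threading.RLock as a stand-in for an owned Win32 mutex (an analogy, not the COM fix itself):

    import threading

    def release_from_other_thread(release):
        # Attempt to release a synchronisation primitive from a thread
        # other than the one that acquired it, and report what happened.
        outcome = {}
        def worker():
            try:
                release()
                outcome["result"] = "released"
            except RuntimeError as e:
                outcome["result"] = f"failed: {e}"
        t = threading.Thread(target=worker)
        t.start()
        t.join()
        return outcome["result"]

    rlock = threading.RLock()       # owned, like a Win32 mutex
    rlock.acquire()
    print(release_from_other_thread(rlock.release))  # fails: wrong thread

    sem = threading.Semaphore(1)    # unowned: any thread may release it
    sem.acquire()
    print(release_from_other_thread(sem.release))    # released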

performSelector:onThread:waitUntilDone: not executing the selector all the time

I have an app where the network activity is done on its own separate thread (the network thread continuously gets data from the server and updates the display; the display calls are made back on the main thread). When the user logs out, the main thread calls a disconnect method on the network thread as follows:
[self performSelector:@selector(disconnectWithErrorOnNetworkThread:) onThread:nThread withObject:e waitUntilDone:YES];
This selector gets called most of the time and everything works fine. However, there are times (maybe two out of ten) when this call never returns (in other words, the selector never gets executed) and the thread and the app just hang. Does anyone know why performSelector is behaving erratically?
Please note that I need to wait until the call gets executed; that's why waitUntilDone is YES, so changing it to NO is not an option for me. Also, the network thread has its run loop running (I explicitly start it when the thread is created).
Please also note that, due to the continuous nature of the data transfer, I need to use NSThreads explicitly and not GCD or operation queues.
That'll hang if:
it is attempting to perform a selector on the same thread the method was called from
the target thread is itself blocked in a synchronous call that (directly or indirectly) triggered this performSelector:, creating a deadlock cycle
When your program is hung, have a look at the backtraces of all threads.
Note that when implementing any kind of networking concurrency, it is generally really bad to have synchronous calls from the networking code into the UI layers or onto other threads. The networking thread needs to be very responsive; just as blocking the main thread is bad, anything that can block the networking thread is bad, too.
Note also that some APIs with callbacks don't necessarily guarantee which thread the callback will be delivered on. This can lead to intermittent lockups, as described.
Finally, don't do any active polling. Your networking thread should be fully quiescent unless some event arrives. Any kind of looped polling is bad for battery life and responsiveness.
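The no-polling advice translates directly to any threading API. For instance, a quiescent wait looks like this in Python (a sketch by analogy, not the Cocoa run-loop machinery):

    import threading

    event = threading.Event()

    def network_thread():
        # Fully quiescent: the thread sleeps in the kernel until another
        # thread calls set(); no timer-driven polling of a flag anywhere.
        event.wait()
        print("event arrived; handling it")

    t = threading.Thread(target=network_thread)
    t.start()
    event.set()   # signal from another thread
    t.join()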

boost::asio timeouts example - writing data is expensive

boost::asio provides an example of how to use the library to implement asynchronous timeouts: the client sends periodic heartbeat messages to the server, which echoes each heartbeat back to the client; failure to respond within N seconds causes a disconnect. See boost_asio/example/timeouts/server.cpp.
The pattern outlined in these examples would be a good starting point for part of a project I will be working on shortly, but for one wrinkle:
in addition to heartbeats, both client and server need to send messages to each other.
The timeouts example pushes heartbeat echo messages onto a queue, and a subsequent timeout causes an asynchronous handler for the timeout to actually write the data to the socket.
Introducing data for the socket to write cannot be done on the thread running io_service, because it is blocked in run(). run_one() doesn't help: you still block until there is a handler to run, and you take on the complexity of managing work for the io_service.
In asio, asynchronous handlers - writes to the socket being one of them - are called on the thread running io_service.
Therefore, to introduce messages at arbitrary times, data to be sent is pushed onto a queue from a thread other than the io_service thread, which implies protecting the queue and notification timer with a mutex. There are then two mutex acquisitions per message: one to push the data onto the queue, and one in the handler that dequeues it for writing to the socket.
This is actually a more general question than asio timeouts alone: is there a pattern, when the io_service thread is blocked in run(), by which data can be asynchronously written to the socket without two mutex acquisitions per message?
The following things could be of interest. boost::asio strands are a mechanism for serialising handlers; you only need them, though, if you are calling io_service::run from multiple threads, AFAIK.
Also useful is the io_service::post method, which lets any thread queue a handler for execution on the thread that has invoked io_service::run. Since post is itself thread-safe, a producer thread can hand data to the io_service thread this way without an extra mutex of its own.
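For comparison, the same post-to-the-loop-thread pattern exists in Python's asyncio as loop.call_soon_threadsafe (a sketch by analogy; the original answer concerns boost::asio, not asyncio):

    import asyncio
    import threading

    async def main():
        loop = asyncio.get_running_loop()
        out_q = asyncio.Queue()

        def producer():
            # Like io_service::post, call_soon_threadsafe may be called
            # from any thread; it queues the callback to run on the
            # loop's own thread, so no user-level mutex is needed.
            for i in range(3):
                loop.call_soon_threadsafe(out_q.put_nowait, f"msg {i}")

        threading.Thread(target=producer).start()
        for _ in range(3):
            print(await out_q.get())  # stand-in for writing to the socket

    asyncio.run(main())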

WCF: Is it safe to spawn an asynchronous worker thread on the server?

I have a WCF service method that I want to perform some action asynchronously (so that there's little extra delay in returning to the caller). Is it safe to spawn a System.ComponentModel.BackgroundWorker within the method? I'd actually be using it to call one of the other service methods, so if there were a way to call one of them asynchronously, that would work too.
Is a BackgroundWorker the way to go, or is there a better way or a problem with doing that in a WCF service?
BackgroundWorker is really more for use within a UI. On the server you should look into using the ThreadPool instead.
when-to-use-thread-pool-in-c has a good write-up on when to use thread pools. Essentially, when handling requests on a server it is generally better to use a thread pool for many reasons. For example, over time you will not incur the extra overhead of creating new threads, and the pool places a limit on the total number of active threads at any given time, which helps conserve system resources while under load.
Generally BackgroundWorker is discussed when a background task needs to be performed by a GUI application. For example, the MSDN page for System.ComponentModel.BackgroundWorker specifically refers to a UI use case:
The BackgroundWorker class allows you to run an operation on a separate, dedicated thread. Time-consuming operations like downloads and database transactions can cause your user interface (UI) to seem as though it has stopped responding while they are running. When you want a responsive UI and you are faced with long delays associated with such operations, the BackgroundWorker class provides a convenient solution.
That is not to say that it could not be used server-side, but the intent of the class is for use within a UI.
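The submit-to-a-shared-pool idea the answer recommends is language-agnostic. A minimal sketch in Python rather than .NET (an analogy only; service_method and slow_side_effect are hypothetical names, and this is not WCF code):

    import time
    from concurrent.futures import ThreadPoolExecutor

    pool = ThreadPoolExecutor(max_workers=4)   # shared, bounded pool

    def slow_side_effect(payload):
        time.sleep(0.1)                        # stand-in for the long-running work
        print("finished", payload)

    def service_method(payload):
        # Hand the slow part to the pool and return to the caller at once;
        # the pool caps concurrency and reuses threads across requests.
        pool.submit(slow_side_effect, payload)
        return "accepted"

    print(service_method("job-1"))
    pool.shutdown(wait=True)                   # let background work finish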