Need to hold request for a thread until previous request is finished - vb.net

I am looking for a technique to keep a new thread (BackgroundWorker, Task, etc.) from starting while a previous thread is still processing. The thread uses an object writer, and if that writer is busy I cannot use it in the next thread until the current write finishes.
Note that the processing that occurs before each thread request is long enough that there should not normally be an issue; this is just precautionary.
I am guessing that how I request the thread is critical to getting some sort of signal back that allows the next thread to be started, but I could use some help on how to set this up. If anyone has a specific scenario with a similar design, I would be happy to research the recommended technique. I am fairly new to this sort of thread processing.
vb.net

I'm not sure how you plan on implementing this, but you should use the Task Parallel Library (TPL) rather than using threads directly. With Tasks, you can wait on them to complete.
See the following example https://msdn.microsoft.com/en-us/library/dd537610(v=vs.100).aspx
And read the following on Threads vs. Tasks if you need more information on the differences.
http://blog.slaks.net/2013-10-11/threads-vs-tasks/
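As a rough illustration of that idea, here is a minimal VB.NET sketch that waits for the previous write Task before starting the next one (WriteData and its argument are placeholders for whatever work your object writer actually does):
Imports System.Threading.Tasks

Module WriterExample
    ' Tracks the most recently started write so the next request can wait on it.
    Private writerTask As Task = Task.CompletedTask

    Sub QueueWrite(data As String)
        ' Block until the previous write has finished, then start the next one.
        writerTask.Wait()
        writerTask = Task.Run(Sub() WriteData(data))
    End Sub

    Sub WriteData(data As String)
        ' Placeholder for the long-running write done by the object writer.
        Console.WriteLine("Writing " & data)
    End Sub
End Module
Note that Task.CompletedTask requires .NET Framework 4.6 or later; on older versions you could seed the field with Task.FromResult(0) instead.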

Typically mutexes are used for synchronization.
https://msdn.microsoft.com/en-us/library/windows/desktop/ms684266(v=vs.85).aspx
Note that you'll also need to handle WAIT_ABANDONED, which is the status returned when a thread that held the mutex dies instead of releasing it.
Examples and more info for .Net here: https://msdn.microsoft.com/en-us/library/system.threading.mutex(v=vs.110).aspx
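For illustration, a minimal VB.NET sketch of that pattern might look like the following (WriteSafely and its Console.WriteLine body are placeholders; AbandonedMutexException is the .NET counterpart of WAIT_ABANDONED):
Imports System.Threading

Module MutexExample
    ' One mutex shared by every thread that needs the object writer.
    Private ReadOnly writerMutex As New Mutex()

    Sub WriteSafely(data As String)
        Try
            Try
                ' Block until the previous writer has released the mutex.
                writerMutex.WaitOne()
            Catch ex As AbandonedMutexException
                ' A previous thread died while holding the mutex; this thread now owns it.
            End Try
            Console.WriteLine("Writing " & data)   ' Stand-in for the real write.
        Finally
            writerMutex.ReleaseMutex()
        End Try
    End Sub
End Module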

Related

How does ActiveMQ AMQ_SCHEDULED_DELAY message works?

We want to use the delay feature from ActiveMQ to delay a particular event. How does AMQ_SCHEDULED_DELAY work internally? The documentation mentions a scheduler, but gives no information about what mechanism it uses to delay messages. For that reason we are not sure how delaying is going to affect ActiveMQ. Does ActiveMQ use polling or an asynchronous mechanism to achieve the delay?
I ask this question because people from my organization want to pick a different technology, and I do not have any proof that the delay from ActiveMQ is any better.
Here is a link to the source code. I was thinking of looking through the code, but I'm not good at Java. Can anyone help?
The default implementation of ActiveMQ does use polling.
ActiveMQ internally keeps polling for scheduled (or delayed) messages with a background scheduler thread. This thread reads the list of scheduled events (or messages) and fires the jobs, rescheduling repeating jobs as needed before firing each job event.
The list of scheduled events is stored in sorted order in ActiveMQ's internal storage, so during a poll it just reads the events scheduled for the earliest processing. Since the messages are persisted during enqueuing, scheduling may not have a visible performance impact during processing.
However, before adopting it, you can set up your own benchmark, without worrying too much about internal implementation details, to see whether your performance/SLA requirements are being met.
For more details, you may refer to the Javadoc of the job scheduler API. For the default implementation, you can refer to the code.
Hope this helps.
Looking at the source code mentioned by @skadya, "polling" is not how I would describe it. It appears to use the Java Object class's wait(long timeout) method to determine when to "wake up" the thread that runs the jobs.
So I wouldn't call it polling. I would call it an asynchronous mechanism in which the timeout is set so that the thread wakes up just in time to run the next scheduled job.
Javadoc for Object.wait(long timeout)
Note that the implementation for Object.wait is a native (i.e. non-java) implementation provided by the JDK / JRE / JVM for a given platform. For what that's worth.
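That wake-up-on-a-timed-wait pattern is not specific to Java. Purely for illustration, here is a rough VB.NET sketch of the same mechanism using Monitor.Wait with a timeout; this is not ActiveMQ's code, just the general idea of sleeping until the earliest scheduled job is due instead of polling on a fixed interval:
Imports System.Threading

Module TimedWaitScheduler
    Private ReadOnly lockObj As New Object()
    ' Hypothetical list of scheduled jobs, kept sorted by due time (earliest first).
    Private ReadOnly jobs As New SortedList(Of Date, Action)()

    Sub SchedulerLoop()
        Do
            Dim nextJob As Action = Nothing
            SyncLock lockObj
                If jobs.Count = 0 Then
                    Monitor.Wait(lockObj)              ' Sleep until a job is added (Pulse).
                Else
                    Dim delay As TimeSpan = jobs.Keys(0) - Date.UtcNow
                    If delay > TimeSpan.Zero Then
                        Monitor.Wait(lockObj, delay)   ' Sleep until the earliest job is due.
                    Else
                        nextJob = jobs.Values(0)
                        jobs.RemoveAt(0)
                    End If
                End If
            End SyncLock
            If nextJob IsNot Nothing Then nextJob()    ' Fire the job outside the lock.
        Loop
    End Sub
End Module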
It is possible to do a performance test with the ActiveMQ web console. There is an option to send messages with a configurable delay and a configurable number of messages. It doesn't answer my question, but it seems like the best option for comparing the two approaches.

How to figure out if mule flow message processing is in progress

I have a requirement where I need to make sure only one message is being processed at a time by a Mule flow. The flow is triggered by a Quartz scheduler, which reads one file from an FTP server each time.
My proposed solution is to keep a global variable "FLOW_STATUS" which will be set to "RUNNING" when a message is received and reset to "STOPPED" once the processing of the message is done.
Any message fed to the flow will check this variable and abort if "FLOW_STATUS" is "RUNNING".
This setup seems to be working, but I was wondering if there is a better way to do it.
Are there any best practices around this, or any built-in Mule helper functions to achieve the same thing instead of relying on global variables?
It seems like a simpler solution would be to set maxActiveThreads for the flow to 1. In Mule, each message processed gets its own thread, so setting maxActiveThreads to 1 would effectively make your flow single-threaded. Other pending requests will wait in the receiver threads. You will need to make sure your receiver thread pool is large enough to accommodate all of the potential waiting threads. That may mean throttling back your Quartz scheduler to allow time to process the files so the receiver thread pool doesn't fill up. For more information on the thread pools and how to tune performance, here is a good link: http://www.mulesoft.org/documentation/display/current/Tuning+Performance

Synchronizing dependent asynchronous functions in Objective-C

I am running into a race condition and I have a few ideas for how to fix the issue. I am new to threading, so obviously my opinion and research are limited. I have a large number of asynchronous calls that can happen if a user receives certain messages from the server. Thus, my design is poor due to the dependent nature of my objects.
Let's say I have methods like
- (void)addUser:(NSString *)s {
    // does some asynchronous activity
}

- (void)messageUser:(NSString *)s {
    // does some more asynchronous activity
}
If a user receives a message telling it to addUser "Ryan", it creates a thread and proceeds with looking up Ryan and storing him. However, if the application is suspended, and the buffer of messages waiting to be received contains both an addUser request and a messageUser request, a race condition occurs because addUser takes longer to complete than messageUser. Thus, if messageUser is called and (in our example) "Ryan" has not been fully added, it throws an error.
What would be a possible solution to this issue? I have looked into locks and semaphores, and what I am trying to do is: when messageUser receives a call, check that there is no thread currently processing addUser. If there is none, proceed; otherwise wait, then proceed after it has finished.
Well it depends on how the messages are being issued in the first place and what the async response events are.
If the operations have dependencies (ordering requirements) then perhaps a background serial queue would be appropriate? That is a simple way to ensure the messages are processed in order.
If the async operations take completion blocks, then you could have the completion block issue the request for the next operation to be performed, though you may not know about that ahead of time.
If you need to solve this in a more general way then you need some kind of system for tracking prerequisites so you can skip work items that don't have their prerequisites met yet. That probably means your own background thread that monitors a list of waiting tasks and receives notification of all task completions so it can scan for items waiting on that completion and issue them.
It seems really complicated though... I suspect you don't really have such strong async parallel processing requirements and a much simpler design would be just as effective. Given your situation where you are receiving messages from a server, I think a serial queue would be the best option. Then you can process messages in the order the server sent them and keep things simple.
// do this once at app startup
dispatch_queue_t queue = dispatch_queue_create("com.example.myapp", NULL);

// handle server responses
dispatch_async(queue, ^{
    // handle server message here, one at a time
});
In reality, depending on how you connect to your server you might be able to just move the entire connection handling to the background queue and communicate with it via messages from the UI, and update the UI by dispatching to the dispatch_get_main_queue() which will be the UI thread.

performSelector:OnThread:waitUntilDone not executing the selector all the time

I have an app where the network activity is done in its separate thread (and the network thread continuously gets data from the server and updates the display - the display calls are made back on the main thread). When the user logs out, the main thread calls a disconnect method on the network thread as follows:
[self performSelector:@selector(disconnectWithErrorOnNetworkThread:) onThread:nThread withObject:e waitUntilDone:YES];
This selector gets called most of the time and everything works fine. However, there are times (maybe two out of ten) when this call never returns (in other words, the selector never gets executed) and the thread and the app just hang. Does anyone know why performSelector is behaving erratically?
Please note that I need to wait until the call gets executed, that's why waitUntilDone is YES, so changing that to NO is not an option for me. Also the network thread has its run loop running (I explicitly start it when the thread is created).
Please also note that due to the continuous nature of the data transfer, I need to explicitly use NSThreads and not GCD or Operations queues.
That'll hang if:
it is attempting to perform a selector on the same thread the method was called from
the call to perform the selector is to a thread from which a synchronous call was made that triggered the perform selector
When your program is hung, have a look at the backtraces of all threads.
Note that when implementing any kind of networking concurrency, it is generally really bad to have synchronous calls from the networking code into the UI layers or onto other threads. The networking thread needs to be very responsive and, thus, just like blocking the main thread is bad, anything that can block the networking thread is a bad, too.
Note also that some APIs with callbacks don't necessarily guarantee which thread the callback will be delivered on. This can lead to intermittent lockups, as described.
Finally, don't do any active polling. Your networking thread should be fully quiescent unless some event arrives. Any kind of looped polling is bad for battery life and responsiveness.

WCF: Is it safe to spawn an asynchronous worker thread on the server?

I have a WCF service method that I want to perform some action asynchronously (so that there's little extra delay in returning to the caller). Is it safe to spawn a System.ComponentModel.BackgroundWorker within the method? I'd actually be using it to call one of the other service methods, so if there were a way to call one of them asynchronously, that would work too.
Is a BackgroundWorker the way to go, or is there a better way or a problem with doing that in a WCF service?
BackgroundWorker is really intended for use within a UI. On the server you should look into using the ThreadPool instead.
when-to-use-thread-pool-in-c has a good write-up on when to use thread pools. Essentially, when handling requests on a server it is generally better to use a thread pool for many reasons. For example, over time you will not incur the extra overhead of creating new threads, and the pool places a limit on the total number of active threads at any given time, which helps conserve system resources while under load.
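As a rough VB.NET sketch of that approach (the class and method names are illustrative only, the WCF contract attributes are omitted, and DoFollowUpWork stands in for whatever you would otherwise hand to a BackgroundWorker):
Imports System.Threading

Public Class MyService
    Public Function ProcessRequest(input As String) As String
        ' Queue the follow-up work on the thread pool and return to the caller right away.
        ThreadPool.QueueUserWorkItem(Sub(state) DoFollowUpWork(CStr(state)), input)
        Return "Accepted"
    End Function

    Private Sub DoFollowUpWork(input As String)
        ' Placeholder for the longer-running work done after the response is sent.
    End Sub
End Class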
Generally BackgroundWorker is discussed when a background task needs to be performed by a GUI application. For example, the MSDN page for System.ComponentModel.BackgroundWorker specifically refers to a UI use case:
The BackgroundWorker class allows you to run an operation on a separate, dedicated thread. Time-consuming operations like downloads and database transactions can cause your user interface (UI) to seem as though it has stopped responding while they are running. When you want a responsive UI and you are faced with long delays associated with such operations, the BackgroundWorker class provides a convenient solution.
That is not to say that it could not be used server-side, but the intent of the class is for use within a UI.