How to stop the execution of Executor ThreadPool in java? - concurrent-programming

I am working with Executors in Java to run several threads concurrently.
I have a set of Runnable objects that I assign to an Executor. The Executor works fine
and everything runs. But after all the tasks in the pool have finished, the Java program does not terminate; I think the Executor takes some time to kill its threads. Please, can anyone help me reduce the time the Executor takes after executing all tasks?

The ExecutorService interface has two methods just for this: shutdown() and shutdownNow(). A pool's worker threads are non-daemon by default, which is why the JVM does not exit while they are still alive; you have to shut the pool down explicitly.
After calling shutdown(), you can call awaitTermination() to block until all of the started tasks have completed. You can even provide a timeout to prevent waiting forever.
The ExecutorService javadoc covers all of this and is worth reading up on.
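For example, a minimal, self-contained sketch of an orderly shutdown (the pool size, tasks, and timeout here are placeholders, not from the question):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ShutdownDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService executor = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 10; i++) {
            executor.submit(() -> System.out.println("task on " + Thread.currentThread().getName()));
        }
        executor.shutdown();  // stop accepting new tasks; queued tasks still run
        if (!executor.awaitTermination(30, TimeUnit.SECONDS)) {
            executor.shutdownNow();  // timeout hit: interrupt whatever is still running
        }
        // the worker threads are gone now, so the JVM can exit promptly
    }
}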

executor.shutdown() with awaitTermination(timeout) does not kill threads. shutdown() only stops the pool from accepting new tasks, and awaitTermination() merely waits; neither interrupts a task that is stuck in a polling loop. shutdownNow() does interrupt the running tasks, but even an interrupt kills nothing by itself. So in the code for your runnable class, if you loop on some condition, you may want to change the condition to
while (flagcondition && !Thread.currentThread().isInterrupted()) {}
This ensures that the task stops when its thread is interrupted, because the while loop terminates. Alternatively, you can catch InterruptedException and set
flagcondition = false in the catch block to terminate the task:
try {
    // do some stuff and perform a wait that might throw an InterruptedException
} catch (InterruptedException e) {
    flagcondition = false;
}
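Putting both ideas together, here is a sketch of an interruption-aware task (the loop body is a placeholder for real work):

Runnable task = () -> {
    boolean flagcondition = true;
    while (flagcondition && !Thread.currentThread().isInterrupted()) {
        try {
            // ... do some work, then block in a call that can be interrupted
            Thread.sleep(1000);
        } catch (InterruptedException e) {
            flagcondition = false;              // make the loop terminate
            Thread.currentThread().interrupt(); // restore the interrupt flag
        }
    }
};

A task written like this exits promptly when executor.shutdownNow() interrupts the pool's threads.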
You might also want to use a profiler to examine why some threads have not proceeded to completion.

Related

What's the difference between blocking and busy waiting?

I know how busy waiting is implemented. It's an endless loop, like this:
// main thread
while (true) {
    msg = msgQueue.next();
    msg.runnable.run();
}

// .... msg queue
public Message next() {
    while (true) {
        if (!queue.isEmpty()) {
            return queue.dequeue();
        }
    }
}
so, the method "next()" just looks like blocked, actually it runs all the time.
this was called "busy waiting" on book.
and what's the "process blocked"? what about its implement details?
is a death loop too? or some others? like signal mechanism?
For instance:
cat xxx | grep "abc"
process "cat" read a file and output them.
process "grep" waiting for input from "cat".
so before the "cat" output data, "grep" should be blocked, waiting for input and go on.
what details about this "blocked", a death loop read the input stream all the time? or really stop running, waiting a signal to wake up it to run?
The difference is basically in what happens to the process:
1. Busy Waiting
A process that is busy waiting is essentially continuously running, asking "Are we there yet? Are we there yet? How about now, are we there yet?", consuming 100% of a CPU core with this question:
bool are_we_there = false;
while (!are_we_there)
{
    // ask if we're there (without blocking)
    are_we_there = ask_if_we_are_there();
}
2. A process that is blocked (or that blocks)
A process that is blocked is suspended by the operating system and will be automatically notified when the data that it is waiting on becomes available. This cannot be accomplished without assistance from the operating system.
An example is a process that is waiting for a long-running I/O operation, or waiting for a timer to expire:

// use a system call to create a waitable timer
var timer = CreateWaitableTimer();
// use another system call that waits on a waitable object
WaitFor(timer); // this will block the current thread until the timer is signaled
// ... some time in the future, the timer expires and its object is signaled,
// causing the WaitFor(timer) call to resume operation
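To make the contrast concrete in Java (my own sketch, not part of the original answer), BlockingQueue.take() is a blocking call: the consumer thread is suspended by the JVM/OS instead of spinning:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class BlockingDemo {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();
        Thread consumer = new Thread(() -> {
            try {
                // take() blocks: the thread is descheduled, using no CPU,
                // until an element arrives and the runtime wakes it up
                System.out.println("got: " + queue.take());
            } catch (InterruptedException ignored) { }
        });
        consumer.start();
        Thread.sleep(1000);  // the consumer sits blocked here at ~0% CPU
        queue.put("hello");  // this wakes the consumer
        consumer.join();
    }
}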
UPDATE
Waitable objects may be implemented in different ways at the operating-system level, but generally it's going to be a combination of hardware timers, interrupts, and lists of waitable objects that client code registers with the operating system. When an interrupt occurs, the operating system's interrupt handler is called, which scans through any waitable objects associated with that event and invokes callbacks that eventually signal those objects (put them in the signaled state). This is an over-simplification, but if you'd like to learn more you could read up on interrupts and hardware timers.
When you say "a process is blocked" you actually mean "a thread is blocked", because threads are the only schedulable entities that get CPU time. When a thread is busy waiting, it wastes CPU time in a loop. When a thread is blocked, the kernel code inside the system call sees that the data or lock is not immediately available, so it marks the thread as waiting. It then jumps to the scheduler, which picks another thread that is ready for execution. The code in a blocking system call might look like this:
100: if (data_available()) {
101: return;
102: } else {
103: jump_to_scheduler();
104: }
Later on, the thread is rescheduled and restarts at line 100, but if the data is still unavailable it immediately takes the else branch and gets off the CPU again. When the data becomes available, the system call finally returns.
Don't take this verbatim; it's my guess based on what I know about operating systems, but you should get the idea.

How to ACK celery tasks with parallel code in reactor?

I have a celery task that, when called, simply ignites the execution of some parallel code inside a twisted reactor. Here's some sample (not runnable) code to illustrate:
def run_task_in_reactor():
    # this takes a while to run
    do_something()
    do_something_more()

@celery.task
def run_task():
    print "Started reactor"
    reactor.callFromThread(run_task_in_reactor)
(For the sake of simplicity, please assume that the reactor is already running when the task is received by the worker; I used the @worker_process_init.connect signal to start my reactor in another thread as soon as the worker comes up.)
When I call run_task.delay(), the task finishes pretty quickly (it does not wait for run_task_in_reactor() to finish, it only schedules its execution in the reactor). And when run_task_in_reactor() finally runs, do_something() or do_something_more() can throw an exception, which will go unnoticed.
Using pika to consume from my queue, I could put an ACK inside do_something_more() to make the worker notify the correct completion of the task, for instance. Inside Celery, however, this does not seem to be possible (or at least I don't know how to accomplish the same effect).
Also, I cannot remove the reactor, since it is a requirement of some third-party code I'm using. Other ways to achieve the same result are appreciated as well.
Use twisted.internet.threads.blockingCallFromThread(reactor, run_task_in_reactor) instead. It blocks the calling thread until the function has run inside the reactor and re-raises any exception the function threw, so the task does not finish until the reactor work is actually done and failures no longer go unnoticed.

Strategy for feeding a watchdog in a multitask environment

Having moved some embedded code to FreeRTOS, I'm left with an interesting dilemma about the watchdog. The watchdog timer is a must for our application. Using FreeRTOS has been a huge boon for us too. When the application was more single-tasked, it fed the watchdog at timely points in its logic flow so that we could make sure the task was making logical progress in a timely fashion.
With multiple tasks though, that's not easy. One task could be bound up for some reason, not making progress, but another is doing just fine and making enough progress to keep the watchdog fed happily.
One thought was to launch a separate task solely to feed the watchdog, along with counters that the other tasks increment regularly. When the watchdog task ticks, it would check that all the counters showed progress in all the other tasks and, if so, go ahead and feed the watchdog.
I'm curious what others have done in situations like this?
A watchdog task that monitors the status of all the other tasks is a good solution. But instead of a counter, consider using a status flag for each task. The status flag should have three possible values: UNKNOWN, ALIVE, and ASLEEP. When a periodic task runs, it sets its flag to ALIVE. Tasks that block on an asynchronous event should set their flag to ASLEEP before they block and back to ALIVE when they run. When the watchdog monitor task runs, it should kick the watchdog if every task is either ALIVE or ASLEEP. Then the watchdog monitor task should set all of the ALIVE flags to UNKNOWN. (ASLEEP flags should remain ASLEEP.) The tasks with an UNKNOWN flag must run and set their flags to ALIVE or ASLEEP again before the monitor task will kick the watchdog again.
See the "Multitasking" section of this article for more details: http://www.embedded.com/design/debug-and-optimization/4402288/Watchdog-Timers
This is indeed a big pain with watchdog timers.
My boards have an LED on a GPIO line, so I flash that in a while/sleep loop (750 ms on, 250 ms off) in a next-to-lowest-priority thread (the lowest is the idle thread, which just drops into low-power mode in a loop). I put the watchdog feed in that LED-flash thread.
This helps with complete crashes and with higher-priority threads stuck looping on the CPU, but it doesn't help if the system deadlocks. Luckily, my message-passing designs do not deadlock (well, not often, anyway :).
Do not forget to handle the possible situation where tasks are deleted or dormant for longer periods of time. If those tasks previously checked in with the watchdog task, they also need a 'check out' mechanism.
In other words, the list of tasks a watchdog task is responsible for should be dynamic, and it should be organized so that some wild code cannot easily delete a task from the list.
I know, easier said than done...
I designed a solution using FreeRTOS software timers:
1. A SystemSupervisor SW timer feeds the HW watchdog, so a FreeRTOS failure causes a reset.
2. Each task creates its own SW timer whose callback is a SystemReset function.
3. Each task is responsible for "manually" reloading its timer before it expires.
4. The SystemReset function saves failure data before committing suicide.
Here is a pseudo-code listing:
//---------------------------------
//
// System WD
//
void WD_init(void)
{
    HW_WD_Init();
    // Read saved failure data, send to monitor
    // Create the supervisor timer that feeds the HW watchdog
    TimerHandle_t t = xTimerCreate("System WD",        // Name
                                   HW_WD_INTERVAL / 2, // Reload value
                                   pdTRUE,             // Auto reload
                                   NULL,               // Timer ID (data per timer)
                                   SYS_WD_Feed);       // Callback
    xTimerStart(t, 0);
}

void SYS_WD_Feed(TimerHandle_t xTimer)
{
    HW_WD_Feed();
}

//-------------------------
// Tasks WD
//
TimerHandle_t WD_Create(void)
{
    // One-shot timer; if it ever fires, the owning task missed its deadline
    return xTimerCreate("",                          // Name
                        100,                         // Dummy reload value, set by Task_WD_Feed()
                        pdFALSE,                     // No auto reload (one-shot)
                        xTaskGetCurrentTaskHandle(), // Timer ID: the owning task
                        Task_WD_Reset);              // Callback
}

void Task_WD_Reset(TimerHandle_t pxTimer)
{
    TaskHandle_t th = (TaskHandle_t)pvTimerGetTimerID(pxTimer);
    // Save the task's name and status, then reset
}

void Task_WD_Feed(TimerHandle_t wd, uint32_t ms)
{
    // Changing the period also restarts the timer's countdown
    xTimerChangePeriod(wd, ms / portTICK_PERIOD_MS, 100);
}

What happens when a thread makes kernel disable the interrupts and then that thread goes to sleep

I have this kernel code where I disable interrupts to make the lock Acquire operation atomic. But if you look at the last else branch, i.e. when the lock is not available, the thread goes to sleep, and interrupts are re-enabled only after the thread comes back from sleep. My question: are interrupts disabled for the whole OS until this thread comes out of its sleep?
void Lock::Acquire()
{
    IntStatus oldLevel = interrupt->SetLevel(IntOff); // Disable interrupts to make the following statements atomic
    if (lockOwnerThread == currentThread) // Check whether the requesting thread already owns the lock
    {
        //printf("SM:error:%s already owns the lock\n",currentThread->getName());
        DEBUG('z', "SM:error:%s already owns the lock\n", currentThread->getName());
        (void) interrupt->SetLevel(oldLevel);
        return;
    }
    if (lockOwnerThread == NULL)
    {
        lockOwnerThread = currentThread; // Lock ownership is given to the current thread
        DEBUG('z', "SM:The ownership of the lock %s is given to %s \n", name, currentThread->getName());
    }
    else
    {
        DEBUG('z', "SM:Adding thread %s to request queue and putting it to sleep\n", currentThread->getName());
        queueForLock->Append((void *)currentThread); // Lock is busy, so add the thread to the queue
        currentThread->Sleep(); // And go to sleep
    }
    (void) interrupt->SetLevel(oldLevel); // Re-enable interrupts
}
I don't know Nachos, and I would not make assumptions on my own, so you will have to test this.
The idea is simple. If this interrupt enable/disable state is local to the current process context, then the following should happen when you call Sleep():
the process is marked as not running, i.e. it is excluded from the list of processes the scheduler will consider for CPU time. The Sleep() function then forces the scheduler to do its regular work: find a process to run. If the list of runnable processes is not empty, the scheduler picks the next available process and context-switches to it. After that, the interrupt-management state is restored from the new context.
If there are no processes to run, the scheduler enters its idle loop and usually enables interrupts. While in the idle loop, it keeps polling the queue of runnable processes until it gets something to schedule.
Your process will get control again once it is marked as running. That could happen when some other process calls WakeUp() (or the like; as I said, the API is unknown to me).
When the scheduler picks your process to switch to, it performs the usual (for your system) context switch, whose saved interrupts-enabled flag is false, so execution continues at the statement after the Sleep() call with interrupts disabled.
If the assumptions above are incorrect and the interrupts-enabled flag is global, then there are two possibilities: either the system hangs because it can't service interrupts, or it has some workaround for such situations.
So you need to try it. The best way, of course, is to read the kernel sources, if you have access.

Make Thread sleep first before it runs

How can I make my thread sleep first, before it runs? I know how to get sleep to work; however, whenever my program runs, the thread starts immediately. I want it to wait, right after it is created, before it starts running. (I am using handlers.)
You cannot control when threads are scheduled. If you want the thread to go to sleep first, have the first statement in the thread routine wait on a condition variable or something like that; when you are ready, broadcast on that condition. In pseudo-code:
get-lock
while (we-are-still-supposed-to-sleep)
    pthread_cond_wait()
release-lock
I suppose you could have the parent hold the lock while creating the children and then all they have to do is:
get-lock
release-lock
and avoid the condition thing.
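Since the question is about Java, the same gating idea is simpler with a CountDownLatch (a sketch; the names and the placeholder setup are mine):

import java.util.concurrent.CountDownLatch;

public class GatedStart {
    public static void main(String[] args) throws InterruptedException {
        CountDownLatch startSignal = new CountDownLatch(1);
        Thread worker = new Thread(() -> {
            try {
                startSignal.await();  // the thread parks here until released
            } catch (InterruptedException e) {
                return;
            }
            System.out.println("worker running");
        });
        worker.start();               // started, but immediately waiting

        // ... do whatever setup has to happen first ...
        startSignal.countDown();      // now let the worker run
        worker.join();
    }
}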
What OS? Windoze allows you to create threads in a suspended state. When you have loaded up the thread fields in the ctor, you can resume the thread. Failing that, pass some synchro object in the thread start parameter for the new thread to wait on.
Rgds,
Martin.