When using NSThread's detachNewThreadSelector:toTarget:withObject:, I'm finding that if the user attempts to quit the application while a background thread is executing, the thread fully completes its execution before the application terminates normally.
In this case, that is the behavior I want, but I couldn't find anything in Apple's docs suggesting it will always be the case. The only relevant information I was able to find was the following, from Apple's Threading Programming Guide:
Important: At application exit time, detached threads can be terminated immediately but joinable threads cannot. Each joinable thread must be joined before the process is allowed to exit. Joinable threads may therefore be preferable in cases where the thread is doing critical work that should not be interrupted, such as saving data to disk.
So from this, I know that detached threads can be terminated at the time of application exit, but will they ever be terminated automatically? Or, am I always safe to assume the thread will complete its execution before the application quits?
You cannot assume that any thread -- including the main thread -- will ever complete execution normally, regardless of the documentation.
This is because the user can quit an application at any time, the system may lose power/panic, or the app may crash.
As for detached threads, it would not be unheard of for the system frameworks to automatically terminate the app forcibly after some timeout once the main event loop has given up the ghost.
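To make the guide's joinable-versus-detached distinction concrete, here is a minimal POSIX-threads sketch in plain C (not NSThread; the worker function is made up for illustration). A joined thread is guaranteed to have finished before pthread_join returns, while nothing waits for the detached one at process exit:

#include <pthread.h>
#include <stdio.h>

/* Hypothetical worker doing "critical" work, e.g. flushing data to disk. */
static void *save_data(void *arg)
{
    (void)arg;
    puts("saving data...");
    return NULL;
}

int main(void)
{
    /* Joinable (default) thread: the creator waits for it, so the work
       is guaranteed to complete before main() returns. */
    pthread_t worker;
    pthread_create(&worker, NULL, save_data, NULL);
    pthread_join(worker, NULL);

    /* Detached thread: nobody waits for it; if the process exits first,
       the thread is simply torn down along with the process. */
    pthread_t detached;
    pthread_attr_t attr;
    pthread_attr_init(&attr);
    pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);
    pthread_create(&detached, &attr, save_data, NULL);
    pthread_attr_destroy(&attr);

    return 0;   /* process exit does not wait for the detached thread */
}

detachNewThreadSelector:toTarget:withObject: corresponds to the detached case, which is why the guide's warning applies to it.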
I have a question about the following diagram from Operating Systems Concepts: http://unboltingbinary.in/wp-content/uploads/2015/04/image028.jpg
This diagram seems to imply that after every I/O operation, the process is placed back on the ready queue before being sent to the CPU again. However, is it possible for a process to terminate after I/O but before being sent to the ready queue?
Suppose we have a program that computes a number and then writes it to storage. In this case, does the process really need to return to the CPU after the I/O operation? It seems to me that the process should be allowed to terminate right after I/O. That way, there would be no need for a context switch.
Once one process has successfully executed a termination request on another, the threads of the terminated process should never run again, no matter what state they were in - blocked on I/O, blocked on inter-thread comms, running on a core, sleeping, whatever. Any that are running must be stopped immediately, and all must be put into a state where they will never run again.
Anything else would be a security issue - terminated threads should not be given execution at all (otherwise it might not be possible to terminate the process).
Process termination requires the CPU. Changes to kernel-mode structures on process exit, returning memory resources, and so on all require the CPU.
A process does not simply evaporate. The term you want here is process rundown, I think.
I have a simple thread pool written in pthreads implemented using a pool of locks so I know which threads are available. Each thread also has a condition variable it waits on so I can signal it to do work.
When work comes in, I pick a thread by finding an available one in the lock pool. I then set a data structure associated with the thread that contains the work it needs to do and signal on the condition variable that the thread should start working.
The problem is when the thread completes its work. I need to unlock the thread in the lock pool so it's available for more work. However, the controlling thread is the one that acquired the lock, so the worker thread can't release it itself. (And the controlling thread doesn't know when the work is done.)
Any suggestions?
I could rearchitect my thread pool to use a queue where all threads are signaled when work is added so one thread can grab it. However, thread affinity for incoming work will likely become a concern in the future, and the lock pool makes that easier to implement.
It seems to me that the piece of data that you're trying to synchronize access to is the free/busy status of each thread.
So, have a table (array) that records the free/busy status of each thread, and use a mutex to protect access to that table. Any thread (controller or worker) that wants to examine/change the thread status needs to seize the mutex, but the lock needs to be held only while the status is being examined/changed, not for the entire duration of the thread's work.
To assign work to a thread, you would do:
pthread_mutex_lock(&thread_status_table_lock);
-- search table for available thread
-- assign work to that thread
-- set thread status to "busy"
pthread_mutex_unlock(&thread_status_table_lock);
-- signal the thread
And when the thread finishes its work, it would change its status back to "free":
pthread_mutex_lock(&thread_status_table_lock);
-- set thread status to "free"
pthread_mutex_unlock(&thread_status_table_lock);
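Fleshing that pseudocode out, a minimal runnable sketch might look like the following (plain C; the names, the fixed pool size, and the function-pointer work item are all my own invention, not part of your original pool):

#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

#define NUM_THREADS 4

typedef enum { FREE, BUSY } status_t;

/* One mutex protects the whole free/busy table. */
static pthread_mutex_t thread_status_table_lock = PTHREAD_MUTEX_INITIALIZER;
static status_t status[NUM_THREADS];

/* Each worker has its own condition variable and work slot. */
static pthread_cond_t work_ready[NUM_THREADS];
static void (*work_fn[NUM_THREADS])(void);

static void *worker(void *arg)
{
    int id = (int)(intptr_t)arg;

    pthread_mutex_lock(&thread_status_table_lock);
    for (;;) {
        /* Sleep until the controller marks this slot busy and hands us work. */
        while (status[id] != BUSY)
            pthread_cond_wait(&work_ready[id], &thread_status_table_lock);

        void (*fn)(void) = work_fn[id];
        pthread_mutex_unlock(&thread_status_table_lock);

        fn();                        /* run the work without holding the lock */

        pthread_mutex_lock(&thread_status_table_lock);
        status[id] = FREE;           /* worker frees its own slot */
    }
    return NULL;
}

/* Controller: find a free worker, assign work, signal it.
   Returns 0 on success, -1 if every worker is busy. */
static int assign_work(void (*fn)(void))
{
    int assigned = -1;

    pthread_mutex_lock(&thread_status_table_lock);
    for (int i = 0; i < NUM_THREADS; i++) {
        if (status[i] == FREE) {
            work_fn[i] = fn;
            status[i] = BUSY;
            pthread_cond_signal(&work_ready[i]);
            assigned = i;
            break;
        }
    }
    pthread_mutex_unlock(&thread_status_table_lock);
    return assigned >= 0 ? 0 : -1;
}

static void example_job(void) { puts("working"); sleep(1); }

int main(void)
{
    pthread_t tid[NUM_THREADS];

    for (int i = 0; i < NUM_THREADS; i++) {
        status[i] = FREE;
        pthread_cond_init(&work_ready[i], NULL);
        pthread_create(&tid[i], NULL, worker, (void *)(intptr_t)i);
    }

    for (int i = 0; i < 8; i++)
        while (assign_work(example_job) != 0)
            usleep(1000);            /* all workers busy; retry shortly */

    sleep(3);                        /* crude: give outstanding work time to finish */
    return 0;
}

Note that the mutex is held only while the table is being examined or changed; the work itself runs with the lock released, and the worker flips its own entry back to "free" when it finishes, which is exactly the piece the original design could not do.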
I have a VB.NET application that uses threads to asynchronously process some tasks in a "Scheduled Task" (console application).
We are limiting this application to run 10 threads at once, like so:
(pseudo-code)
- Create a generic list of 10 threads
- Spawn off the threadproc for each one
- Do a thread.join statement for each thread to wait for the longest running one to complete.
I am finding that if the code called by the threadproc contains any "Debug.WriteLine" or "Trace.TraceInformation" statements, the thread hangs. I can see the thread in the Debug > Windows > Threads window and switch to it, but it highlights the Debug.WriteLine statement and never gets past it.
Is there something special about the Debug or Trace statements that makes them non-thread-safe?
Why would this hang things up? If I leave the debug statement in, the thread never completes. If I take the debug statement out, the thread completes in less than 5 seconds.
Yes and no.
Internally, Debug.WriteLine ends up calling into TraceInternal.WriteLine. This particular function does not explicitly stop thread execution, but it does acquire a process-global lock for the duration of the method. This lock protects the list of trace listeners and serializes the processing of WriteLine calls.
It's possible for two threads to hit this WriteLine statement simultaneously and hence have one thread pause for a short period of time. It is also possible for a custom trace listener to be doing a very long-lived or blocking operation, which would essentially freeze every other thread that tries to trace for a noticeable period of time.
Use Visual Studio to check and see what other threads are currently broken in this function. See if that gives you a clue as to what is holding up this process.
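To see why one slow listener can stall threads that merely log, here is a toy illustration of the same shape in C (this is not the .NET internals, just a sketch of a process-wide lock with a hypothetical blocking listener behind it):

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

/* Stand-in for the process-global lock that the trace machinery takes. */
static pthread_mutex_t trace_lock = PTHREAD_MUTEX_INITIALIZER;

/* Hypothetical "custom listener" that blocks for a long time. */
static void slow_listener(const char *msg)
{
    sleep(10);                          /* e.g. waiting on a network log sink */
    puts(msg);
}

static void trace_writeline(const char *msg)
{
    pthread_mutex_lock(&trace_lock);    /* every caller serializes here */
    slow_listener(msg);
    pthread_mutex_unlock(&trace_lock);
}

static void *worker(void *arg)
{
    trace_writeline((const char *)arg); /* all 10 workers queue up behind
                                           whichever one entered first */
    return NULL;
}

int main(void)
{
    pthread_t t[10];
    for (int i = 0; i < 10; i++)
        pthread_create(&t[i], NULL, worker, "debug output");
    for (int i = 0; i < 10; i++)
        pthread_join(t[i], NULL);
    return 0;
}

Every thread that calls the trace function waits behind whichever one is currently inside the slow listener, which is the behavior described above.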
You may have a trace listener that is interfering with Debug.WriteLine.
There is a custom trace listener in this application. Once I commented it out, my locking problems were solved. Now if I could only track down the original developer to find out what they were doing with this custom listener...
I want to write a program that is notified by the OS whenever any running process on that OS dies.
I don't want to poll and compare every time to check whether a previously existing process has died. I want my program to be alerted by the OS whenever a process terminates.
How do I go about it? Some sample code would be very helpful.
PS: Looking for approaches in Java/C++.
Sounds like you want PsSetCreateProcessNotifyRoutine(). See this article to get started:
http://www.codeproject.com/KB/threads/procmon.aspx
Under Unix, you could use the SIGCHLD signal to get notified of the death of the process. This requires, however, that the process being monitored is a child process of the monitoring process.
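For the Unix route, a minimal C sketch (watching a child it spawned itself, and simplified to assume the child outlives the setup) might be:

#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

static volatile sig_atomic_t child_died = 0;

/* Signal handler: just record the fact; do the real work outside it. */
static void on_sigchld(int sig) { (void)sig; child_died = 1; }

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_sigchld;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGCHLD, &sa, NULL);

    pid_t child = fork();
    if (child == 0) {                          /* child: the watched process */
        execlp("sleep", "sleep", "5", (char *)NULL);
        _exit(127);
    }

    /* Simplified: assumes the child outlives the call to pause(). */
    while (!child_died)
        pause();                               /* woken by SIGCHLD */

    int status;
    waitpid(child, &status, 0);                /* reap and inspect exit status */
    printf("child %d terminated (status %d)\n", (int)child, status);
    return 0;
}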
Under Windows, you might need to have a valid handle to the process. If you spawn the process yourself using CreateProcess, you get the handle for free; otherwise you must acquire it by other means. It might then be possible to wait for the process to terminate by calling WaitForSingleObject on the handle.
Sorry, I don't have any example code for this. I am not even sure that waiting on the process handle under Windows really awaits termination of the process (as opposed to some other "significant" condition that causes the process handle to enter the signaled state).
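For what it's worth, a process handle does become signaled when the process terminates, so a sketch along those lines should work (plain C against the Win32 API; it assumes the target PID is known and can be opened with SYNCHRONIZE access):

#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    DWORD pid = (argc > 1) ? (DWORD)atoi(argv[1]) : 0;   /* PID to watch */

    /* SYNCHRONIZE access is all we need in order to wait on the handle. */
    HANDLE h = OpenProcess(SYNCHRONIZE, FALSE, pid);
    if (h == NULL) {
        fprintf(stderr, "OpenProcess failed: %lu\n", GetLastError());
        return 1;
    }

    /* Blocks until the process handle becomes signaled, i.e. the process
       has terminated. */
    WaitForSingleObject(h, INFINITE);
    printf("process %lu has exited\n", (unsigned long)pid);

    CloseHandle(h);
    return 0;
}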
I don't have a code sample ready but one idea – on Linux – might be to find out the ID of the process you'd like to watch when first starting your watcher program (e.g. using $ pgrep) and then using inotify to watch /proc/<PID>/ – which gets deleted when the process dies. In contrast to polling, this doesn't cost any significant CPU resources.
Now, procfs is not completely supported by inotify, so I can't guarantee this approach would actually work but it is certainly worth looking into.
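If you want to experiment with that idea, the inotify side itself is short; whether procfs actually delivers the events is the open question noted above. A rough C sketch with the PID hard-coded for illustration:

#include <stdio.h>
#include <sys/inotify.h>
#include <unistd.h>

int main(void)
{
    const char *dir = "/proc/1234";          /* directory of the watched PID */

    int fd = inotify_init();
    if (fd < 0) { perror("inotify_init"); return 1; }

    /* Ask to be told when the watched directory itself goes away. */
    int wd = inotify_add_watch(fd, dir, IN_DELETE_SELF);
    if (wd < 0) { perror("inotify_add_watch"); return 1; }

    char buf[4096];
    ssize_t len = read(fd, buf, sizeof buf); /* blocks until an event arrives */
    if (len > 0) {
        struct inotify_event *ev = (struct inotify_event *)buf;
        if (ev->mask & (IN_DELETE_SELF | IN_IGNORED))
            printf("%s disappeared - process has probably exited\n", dir);
    }

    close(fd);
    return 0;
}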
Can anyone confirm that this statement "Select to recycle worker processes after a specific period of inactivity" in this Microsoft help file is wrong and should in fact not have the "of inactivity" at the end of it?
Yes, that seems wrong. As far as I'm aware, this option just recycles the processes regardless of whether they are idle or not.
This article seems to confirm that too.
The statement you quoted is correct. It's a way to allow you to free up resources that aren't being used.
When it is necessary to conserve system resources by terminating unused worker processes, you can configure a worker process to gracefully close after a specified period of time. You can use this feature to better manage the resources when the processing load is heavy, when identified applications consistently fall into an idle state, or when new processing space is not available. You can also start additional worker processes to replace a worker process that is finished.
http://www.microsoft.com/technet/prodtechnol/WindowsServer2003/Library/IIS/83b35271-c93c-49f4-b923-7fdca6fae1cf.mspx?mfr=true