Are System.Threading.Tasks capable of running as background threads? - .net-4.0

One feature of threads is that you can set the .IsBackground property to true, and the thread will then not prevent the process from terminating (i.e., the framework calls Thread.Abort() on any still-running background threads at termination).
I can't seem to find a similar feature on Tasks. I use background threads a lot when I create services: if a thread has not ended gracefully after the timeout period, the framework just kills it. This prevents the service manager from hanging and getting into that weird "task failed to stop" scenario.
Is there a way to treat tasks as background? Or do I have to add the necessary code to abort tasks myself?

Tasks already run as background threads: the default scheduler queues them to the ThreadPool, and thread-pool threads have IsBackground set to true, so a pending task will not keep the process alive.
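A quick way to see this (a minimal C# console sketch, not part of the original answer; it assumes .NET 4.0, hence Task.Factory.StartNew rather than Task.Run) is to check Thread.CurrentThread.IsBackground from inside a task:

using System;
using System.Threading;
using System.Threading.Tasks;

class Program
{
    static void Main()
    {
        var task = Task.Factory.StartNew(() =>
        {
            // Thread-pool threads are background threads, so this prints True.
            Console.WriteLine("IsBackground: " + Thread.CurrentThread.IsBackground);
        });
        task.Wait();

        // Without the Wait(), Main could return before the task ever runs and the
        // process would simply exit -- exactly the background-thread semantics
        // the question is asking about.
    }
}

The flip side is that, as with any background thread, you get no graceful-shutdown window: if you want a timeout-then-stop pattern for service shutdown, you still have to code that yourself (e.g. with a CancellationToken and a bounded Wait).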

Related

ProcessPoolExecutor stuck indefinitely when child process dies

I have a script running on one of my Linux servers which handles batch file processing with a ProcessPoolExecutor and generally runs fine for days or even weeks on end without any issue. Sometimes, though, it looks like a few of my child processes just die (I have no error message or exception at all and can't reproduce it even by killing child processes from the shell), which leads to the parent process just waiting for the return indefinitely...
That's the call (the initializer doesn't have any effect in this case; it's just there to handle the reverse scenario described in another very helpful thread on S.O.):
import os
from concurrent.futures import ProcessPoolExecutor

# config, file_list, process_main and the initializer are defined elsewhere in the script
with ProcessPoolExecutor(max_workers=int(config['PERFORMANCE']['NumberOfProcesses']),
                         initializer=start_thread_to_terminate_when_parent_process_dies,
                         initargs=(os.getpid(),)
                         ) as executor:
    executor.map(process_main, file_list)
From what I've gathered, the pool should be able to recover in exactly the described scenario:
https://bugs.python.org/issue9205
Anyone got any idea? (I've thought about switching to the pebble library with its timeout functionality, or creating a separate watchdog script.)

Hangfire 1.3.4 - deleted jobs stuck in queue

We are running Hangfire single-threaded using BackgroundJobServerOptions.WorkerCount = 1 (because we have a requirement for ordered processing).
Most of the time this is no problem, but occasionally a job gets stuck for entirely expected reasons (e.g., the actual code it is running goes into an infinite loop), and because we are running single-threaded this prevents other jobs in the queue from starting.
To try to work around this, we delete the job, but it then stays in the queue, blocking any other job from starting.
The only way I have found to resolve this is to drop and recreate the hangfire DB which is obviously not great.
Why does deleting a running job in Hangfire not also remove it from the queue? Is this weird delete behavior a bug that is to be fixed in a later version, or is it by design because we're running single-threaded?
If this is by design then how do you cancel a processing job in a way which removes it from the queue?
Well it seems that this behavior is by design.
If the IIS app pool worker is recycled, Hangfire will start processing the next task immediately. However, without this restart Hangfire will "hang" indefinitely.
An issue was raised on GitHub about this, but it has not been resolved yet:
https://github.com/HangfireIO/Hangfire/issues/80
With no way to cancel or manually "fail" a job, this makes hangfire a lot less useful in a single threaded scenario.
Update: this has been partially or fully addressed in some later version of Hangfire.
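A cooperative workaround, sketched below, is to give the job method an IJobCancellationToken parameter and poll it at safe checkpoints. This is only a sketch, not the asker's code: OrderedJobs, ProcessBatch, LoadItems and Handle are made-up names, and it assumes a Hangfire version in which deleting a job (or shutting the server down) signals the token so that ThrowIfCancellationRequested throws and frees the single worker:

using System;
using System.Collections.Generic;
using Hangfire;

public class OrderedJobs
{
    public void ProcessBatch(string batchId, IJobCancellationToken cancellationToken)
    {
        foreach (var item in LoadItems(batchId))
        {
            // Throws when the server is shutting down or the job is no longer
            // in the Processing state (e.g. it was deleted from the dashboard),
            // letting the single worker move on to the next queued job.
            cancellationToken.ThrowIfCancellationRequested();
            Handle(item);
        }
    }

    // Placeholder implementations so the sketch compiles.
    private IEnumerable<string> LoadItems(string batchId) { yield return batchId; }
    private void Handle(string item) { Console.WriteLine(item); }
}

// Enqueueing: Hangfire replaces JobCancellationToken.Null with a real token at execution time.
// BackgroundJob.Enqueue<OrderedJobs>(x => x.ProcessBatch("batch-42", JobCancellationToken.Null));

This does not help with a genuinely tight infinite loop that never reaches the checkpoint; in that case only killing the worker process gets the queue moving again.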

Will detached NSThreads always complete prior to application exit?

When using NSThread's detachNewThreadSelector:toTarget:withObject:, I'm finding that the thread fully completes its execution before the application terminates normally when the user quits the application while the background thread is still executing.
In this case, this is the behavior I desire, but I couldn't find anything in Apple's docs that suggests that this will always be the case. The only relevant information I was able to find was the following, from Apple's Threading Programming Guide:
Important: At application exit time, detached threads can be terminated immediately but joinable threads cannot. Each joinable thread must be joined before the process is allowed to exit. Joinable threads may therefore be preferable in cases where the thread is doing critical work that should not be interrupted, such as saving data to disk.
So from this, I know that detached threads can be terminated at the time of application exit, but will they ever be terminated automatically? Or, am I always safe to assume the thread will complete its execution before the application quits?
You cannot assume that any thread -- including the main thread -- will ever complete execution normally, regardless of the documentation.
This is because the user can quit an application at any time, the system may lose power/panic, or the app may crash.
As for detached threads, it would not be unheard of for the system frameworks to automatically terminate the app forcibly after some timeout once the main event loop has given up the ghost.

Does Debug.Writeline in VB.NET stop thread execution?

I have a VB.NET application that uses threads to asynchronously process some tasks in a "Scheduled Task" (console application).
We are limiting this application to run 10 threads at once, like so:
(pseudo-code; a rough sketch follows the list)
- Create a generic list of 10 threads
- Spawn off the threadproc for each one
- Do a Thread.Join on each thread to wait for the longest-running one to complete
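In C# terms, the setup is roughly the following (a sketch with made-up names, not the actual application code; the real app is VB.NET, but the pattern is identical):

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Threading;

class Scheduler
{
    static void Main()
    {
        var threads = new List<Thread>();

        // Spawn the 10 worker threads.
        for (int i = 0; i < 10; i++)
        {
            var t = new Thread(WorkerProc);
            threads.Add(t);
            t.Start();
        }

        // Join each one; this only returns after the longest-running worker finishes.
        foreach (var t in threads)
            t.Join();
    }

    static void WorkerProc()
    {
        // ... actual task processing ...
        Debug.WriteLine("task finished");   // the statement that appears to hang
    }
}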
I am finding that if the code called by the threadproc contains any Debug.WriteLine or Trace.TraceInformation statements, the thread hangs. I can see the thread in the Debug > Windows > Threads window and switch to it, but it highlights the Debug.WriteLine statement and never gets past it.
Is there something special about the Debug or Trace statements that make them non-thread-safe?
Why would this hang things up? If I leave the debug statement in, the thread never completes. If I take the debug statement out, the thread completes in less than 5 seconds.
Yes and no.
Internally, Debug.WriteLine ends up calling into TraceInternal.WriteLine. This particular function does not explicitly stop thread execution but it does acquire a process global lock during the execution of the method. This lock protects both the list of trace listeners and serializes the processing of WriteLine commands.
It's possible for two threads to hit this WriteLine statement simultaneously, in which case one thread pauses for a short period. It is also possible for a custom trace listener to be doing a very long-lived or blocking operation, which would essentially freeze all the other threads for a noticeable period of time.
Use Visual Studio to check which other threads are currently sitting in this function. That may give you a clue as to what is holding up the process.
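To illustrate that failure mode, here is a minimal C# sketch (SlowListener is a made-up stand-in for whatever custom listener is registered; it targets .NET Framework, where the global trace lock is used by default, and needs a Debug build so Debug.WriteLine is compiled in). The first thread blocks inside the listener while holding the shared trace lock, and every other thread that calls Debug.WriteLine then hangs behind it:

using System.Diagnostics;
using System.Threading;

// Hypothetical listener standing in for a slow or blocking custom listener.
class SlowListener : TraceListener
{
    public override void Write(string message)     { Thread.Sleep(Timeout.Infinite); }
    public override void WriteLine(string message) { Write(message); }
}

class Repro
{
    static void Main()
    {
        Debug.Listeners.Add(new SlowListener());

        // The first worker to trace blocks forever inside SlowListener; the
        // others never get past their own Debug.WriteLine call.
        for (int i = 0; i < 3; i++)
        {
            new Thread(() => Debug.WriteLine("work item done")).Start();
        }
    }
}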
You may have a trace listener that is interfering with Debug.WriteLine.
There is a custom trace listener in this application. Once I commented it out, my locking problems were solved. Now if I could only track down the original developer to find out what they were doing with this custom listener...

weblogic 10 TimerManager avoiding propagation of security context to the scheduled tasks

We are using WebLogic 10, and I am using commonj's TimerManager (which is part of WebLogic) to schedule a task. Everything is fine, but I have one problem: the security context of the thread which scheduled the TimerListener task is somehow stored in that task and is used for the work done inside it, and this is causing a problem for me. Can anyone point me to how to avoid propagating the security context from the scheduling thread to the scheduled tasks?
This is way late, but anyway: one way to avoid propagating the context is to use unmanaged threads, i.e. spawn threads without commonj. Of course, this throws the baby out with the bathwater.