I'm working with the new .NET TPL library and have run into some strange behavior that I cannot explain. For some reason a nested task is not started in my case. I've simplified my solution down to the following:
bool flag = false;
for (int i = 0; i < 5; i++)
{
    Task.Factory.StartNew(() =>
    {
        while (true) // a lot of newcoming tasks
        {
            Thread.Sleep(200); // do some work
            Task.Factory.StartNew(() =>
            {
                flag = true;
            });
        }
    });
}
Thread.Sleep(2000);
Assert.IsTrue(flag);
I have 5 tasks running concurrently. Each task retrieves some elements from a pending queue, performs some operation on them, and then tries to run a nested task on the results of that operation. The problem is that if there are too many elements (the while(true) simulates this) and all 5 tasks are constantly running, the nested tasks are not started. They can only start after most of the tasks with the while loop have finished their execution.
It seems that something about the while statements blocks the nested tasks from running, but I don't know what :)
Task.Factory.StartNew doesn't start a task; it adds the task to the list of tasks to be scheduled, and the scheduler decides when to run the task based on things like the number of available cores (the size of the thread pool), the current CPU load, and the throughput of existing work.
You should read the section about task scheduling here:
http://parallelpatterns.codeplex.com/releases/view/48562
Page 63 of the PDF onwards.
The LongRunning option "fixes" your problem by bypassing the thread pool entirely. This has some disadvantages: it allows you to create more threads than your system should really be using, which will degrade performance by causing excessive context switching.
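To illustrate, here is a minimal sketch based on the question's code, with the outer "pump" tasks marked as LongRunning so they get dedicated threads and leave the pool free for the short nested tasks (this only demonstrates the option, it is not a recommendation):

using System;
using System.Threading;
using System.Threading.Tasks;

class LongRunningSketch
{
    static void Main()
    {
        bool flag = false;

        for (int i = 0; i < 5; i++)
        {
            // LongRunning asks the default scheduler for a dedicated thread,
            // so the five busy outer loops no longer tie up thread-pool threads.
            Task.Factory.StartNew(() =>
            {
                while (true)
                {
                    Thread.Sleep(200);                        // stand-in for the real work
                    Task.Factory.StartNew(() => flag = true); // the nested task now gets a pool thread
                }
            }, TaskCreationOptions.LongRunning);
        }

        Thread.Sleep(2000);
        Console.WriteLine(flag); // should now print True well before the outer loops finish
    }
}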
Experiments like the code you have above that use Thread.Sleep are misleading because they "fool" the scheduler: it sees that more work was added and yet the CPU load hasn't increased. You should replace the sleep with a tight loop that does some math (calculating Sqrt(), for example).
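For example, something along these lines (just a sketch; the iteration count is arbitrary) keeps a core genuinely busy:

using System;

static class BusyWork
{
    // A rough stand-in for real CPU work, to use in place of Thread.Sleep(200)
    // in the experiment above.
    public static double Spin(int iterations)
    {
        double junk = 0;
        for (int n = 0; n < iterations; n++)
        {
            junk += Math.Sqrt(n); // keeps a core busy so the scheduler sees real load
        }
        return junk;
    }
}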
Why not simply have a single outer loop which reads items from a queue and executes them on a Task? That way your application will make the most of the available parallelism of the system without overloading it.
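Something along these lines, for example (just a sketch: BlockingCollection stands in for your pending queue, and Process is a placeholder for the per-item work):

using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading.Tasks;

class QueueSketch
{
    // Placeholder for the real per-item work.
    static void Process(int item) { }

    static void Main()
    {
        // Hypothetical pending queue; in the real service, items arrive from elsewhere.
        var queue = new BlockingCollection<int>();
        for (int i = 0; i < 100; i++) queue.Add(i);
        queue.CompleteAdding();

        // A single outer loop takes one item at a time and hands it to the scheduler,
        // which decides how many items actually run in parallel on this machine.
        var inFlight = new List<Task>();
        foreach (var item in queue.GetConsumingEnumerable())
        {
            var captured = item;
            inFlight.Add(Task.Factory.StartNew(() => Process(captured)));
        }

        Task.WaitAll(inFlight.ToArray());
    }
}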
The following answer might be worth a look:
Parallel Task Library WaitAny Design
I think you'll find that the library only starts parallel tasks roughly based on the number of cores you have available, i.e. it's not a good choice for tasks which are I/O bound, where you might actually want to use significantly more threads than you have CPUs.
You're not really saying that the nested tasks don't start, are you? You're just saying that they don't get started at the point you'd like them to, but start later.
How do you debug an RTOS application? I am using KEIL µVision, and when I hit debug, the program steps through the main function until the function that initializes the RTOS kernel, and then you can't step any further. The code itself works, though. It is not mine, by the way, but I have to work on it. Is this normal behavior with RTOS applications, or is it related to this program?
Yes, this is normal. You need to set breakpoints in the source code for the tasks that were created in main(); the only purpose of main() in a FreeRTOS application is to:
initialize the hardware,
create the resources (timers, semaphores...) and tasks your application will need,
start the scheduler
The application should never return from vTaskStartScheduler() if there were enough resources available.
Put breakpoints at the entry point of each task you need to debug. When you step over the scheduler start (or simply run), the debugger will halt at the first task that runs. When that task blocks, some other task will be selected to run according to the scheduling rules.
Generally, when you reach a blocking call while debugging, step over it; other tasks may run, and the debugger will stop at the next line only when the task becomes ready (depending on the nature of the blocking call). Often you will want to predict which task will run as a result of the call and put a breakpoint in that task. For example, if you issue a message send, you might place a breakpoint after the message-receive call of the receiving task.
The point is you cannot "step-through" a context switch unless you have the RTOS source or do it at the assembler level, which is seldom useful or productive, and will not work for preemption.
You get a somewhat better RTOS debug experience and tool support in Keil if you use Keil's own RTX5 RTOS rather than FreeRTOS, but all of the above remains true.
Yes, this is expected behaviour. The best way to debug an RTOS application is to place breakpoints at all tasks and key function entry points, and step through from there.
The debugger supports various methods of single-stepping through an application, as described in the link below.
http://www.keil.com/products/uvision/db_exe_step.asp
Typical challenges in debugging RTOS applications include dealing with interrupt handling, synchronization issues, and register/memory corruption.
Keil µVision's System Analyzer lets you view the program's execution time frame and the status of each thread. It can also help in viewing interrupts and exceptions if the tracer is enabled.
I've got a long-running loop which involves a fair amount of UI functions. This loop therefore must be run on the main thread. However, I also want to display progress of this task, so this must also run on the main thread as displaying the current progress would involve updating the UI. I am really struggling to find a way of allowing the UI to update with current progress on the main thread when the main loop is also running on the main thread. What happens is that the UI is frozen during the loop and then updates to show that the process is finished when it's done.
This is not for a production app, it's for a personal project that will never be released. So it is of no concern that the UI is frozen from a UX perspective. If the solution involves putting the processing in the background then this refactoring is fine, but I'm not sure how to do it when a lot of the heavy lifting during this loop involves UI stuff too.
Isn't it funny how you sometimes come up with a solution just after posting the question?! The key seemed to be, rather than using a for loop for the processing, putting the processing function in a separate method and repeatedly calling it, passing it the array of objects to process. Done this way, you can call the method using [self performSelector:withObject:afterDelay:]. Even if you provide a value of zero for the delay, it causes the method to be called on the next run loop. This means you can update the UI, process the next item, and repeat until the array of items is empty. Here's my completed solution. If anybody knows a better way I'd still love to hear it, but for now this is at least working!
Edit - I packaged this solution up into a class of its own to make it easier to manage, and put it on my Github. Maybe it will help somebody else out :)
Edit 2 - made the processing class more flexible by making it run loops instead of iterating through arrays. You can of course use it to iterate through an array yourself, as per the example in the readme. But if you're not working with an array, you can just run the loop runCount times and do whatever you need to do in the processingBlock.
https://github.com/mashers/BackgroundLoopProcessor
I'm implementing a small service using asyncio, with a loop that is structured as follows:
pending = {...}
while True:
    done, pending = yield from asyncio.wait(
        pending,
        return_when=asyncio.FIRST_COMPLETED,
    )
    for future in done:
        if future is x:
            # ...
        if future is y:
            # ...
This loop is currently controlling a sub-process, but having written a bunch of ZMQ-based services, this style feels very natural, so I'll likely be writing more of these in the near future.
I have something that runs just the way I like, but I'm kind of at a loss as to how I would write automated tests for this.
I would like to have my tests start this loop until it blocks on asyncio.wait() and inject one particular event to test the loop's handling of that particular event. That way, I can test each possible event handling and know I cover all cases as expected.
However, I can't find anything in asyncio that provides for this. If I simply yield from this coroutine, the test does not unblock until the coroutine completes.
Any ideas on how to test this particular kind of loop?
Edit: OK, so I managed to get something running using a socketpair() by patching sys.stdout with the write end and wrapping the other end in a StreamReader.
This works when I run pytest without capture, but as soon as I remove the -s argument, the test seems to deadlock.
Any ideas?
I have a custom MSBuild task for xUnit.net. When the task is running, if I hit Ctrl+C, MSBuild 'tries' to cancel the task, but of course it fails (since my task doesn't support cancellation). No amount of MSDN doc searches or Google-fu has landed on a solution. Since I can't find an obvious interface to implement, I'm guessing maybe cancellation is supported by way of some convention.
Has anybody done this before, and does anyone know what's required to get cancellation to work?
Your task needs to implement ICancelableTask. It's a very simple interface added in 4.0.
Basically you just add a Cancel() method. It must be ready to be called on a different thread, at any time, and return promptly. Your task must then return from Execute promptly. Typically you'd set a boolean flag inside Cancel(). Then inside your task you'd typically have a loop processing each input in turn -- for example, copying one file after another -- and in each iteration, check the flag; if it's true, break out. It doesn't matter whether you return true or false from Execute in this context.
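In code, the pattern looks roughly like this (a sketch only; MyWorkItems and ProcessItem are made-up placeholders for whatever your task actually iterates over):

using System.Collections.Generic;
using Microsoft.Build.Framework;
using Microsoft.Build.Utilities;

public class MyCancelableTask : Task, ICancelableTask
{
    private volatile bool _cancelRequested;

    // MSBuild calls this on another thread when the build is cancelled (Ctrl+C).
    public void Cancel()
    {
        _cancelRequested = true;
    }

    public override bool Execute()
    {
        foreach (string item in MyWorkItems())
        {
            if (_cancelRequested)
            {
                return false; // stop promptly; the return value no longer matters
            }
            ProcessItem(item);
        }
        return !Log.HasLoggedErrors;
    }

    private IEnumerable<string> MyWorkItems()
    {
        yield return "item1"; // placeholder input
    }

    private void ProcessItem(string item)
    {
        // placeholder for the per-item work
    }
}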
If you're deriving from ToolTask -- if your task spawns a tool, it's very strongly recommended that you do this, as it saves a great deal of code, handles async logging, and other things -- then it already handles Cancel automatically. When Cancel happens, it kills the tool it spawned and all its children. The C++ team's tasks in some cases override this default behavior, so that their compiler/linker has a few seconds to clean up their half-written outputs before returning.
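For comparison, a bare-bones ToolTask might look like the sketch below (the tool name and arguments are made up); deriving from ToolTask means you don't write any cancellation code yourself, since killing the spawned process tree is handled by the base class.

using Microsoft.Build.Utilities;

public class RunMyTool : ToolTask
{
    // "mytool.exe" is a made-up example tool name.
    protected override string ToolName
    {
        get { return "mytool.exe"; }
    }

    protected override string GenerateFullPathToTool()
    {
        return ToolName; // assume the tool is on the PATH
    }

    protected override string GenerateCommandLineCommands()
    {
        return "--some-argument"; // hypothetical arguments
    }
}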
(Trivia: when I first implemented this in MSBuild, I accidentally made VS bluescreen the box occasionally. This nearly shipped in VS10 beta but was discovered just in time. The bluescreen was because the logic for figuring out the process tree was wrong, and would sometimes kill a system process. Oops.)
Dan
I know you're well aware of the Task hierarchy, but on the off chance this is what you're looking for and it's just the fact that you're not implementing a ToolTask...
Inside MSBuild, 2nd ed. says (p. 118) of ToolTask.Cancel:
This method is called to cancel the task execution. Once this method is called by MSBuild, if the task does not complete, it will be forcefully terminated
There are no other references to cancellation in it.
I have written a script task using VB.NET that uses threads in the code. The problem is how I can know when all the threads have finished, so that I can return the success result.
Thanks a lot.
I think you need to use a WaitHandle object and the WaitAll method.
More info here: http://msdn.microsoft.com/en-us/library/system.threading.waithandle.aspx
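A minimal sketch of that pattern (shown in C# for brevity; the same classes are available from VB.NET, and the thread body is a placeholder):

using System.Threading;

class WaitAllSketch
{
    static void Main()
    {
        const int workerCount = 4; // however many threads the script task starts
        var doneEvents = new WaitHandle[workerCount];

        for (int i = 0; i < workerCount; i++)
        {
            var done = new ManualResetEvent(false);
            doneEvents[i] = done;

            new Thread(() =>
            {
                // ... the real work for this thread goes here ...
                done.Set(); // signal that this thread has finished
            }).Start();
        }

        // Blocks until every thread has signalled, after which the script task
        // can safely report its success result.
        WaitHandle.WaitAll(doneEvents);
    }
}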
That being said, I suspect you can refactor the design of your package to let the script task handle the execution and let SSIS handle the execution scheduling. This gives you the parallelism you want without any of the hassle of multithreaded programming in .NET.
A simple setup would be n foreach loops (each of which executes serially), each running a partitioned chunk of the workload.
Another, simpler option is to have the package driven by variables and to spawn multiple executions of the package. This could occur across 1-N servers to scale out.