Difference between non-preemptive, cooperative and rate-monotonic schedulers? - embedded

I have read about a cooperative scheduler, which does not let a higher-priority task run until the lower-priority task blocks itself. So if a task never delays or blocks, it will hold the CPU forever - is that correct? I thought non-preemptive was just another name for cooperative, but another article confused me by saying that in a non-preemptive scheduler a higher-priority task can interrupt a lower-priority task at the system tick, just not in the middle between ticks. So which is correct?
Are cooperative and non-preemptive actually the same thing?
And rate-monotonic is one type of preemptive scheduler, right?
Its priorities aren't set manually; the scheduling algorithm decides priority based on execution time or deadline - is that correct?
Is rate-monotonic better than a fixed-priority preemptive kernel (the one FreeRTOS uses)?

These terms can never fully cover the range of possibilities that can exist. The truth is that people can write whatever kind of scheduler they like, and then other people try to put what is written into one or more categories.
Pre-emptive implies that an interrupt (e.g. from a clock or peripheral) can cause a task switch to occur, in addition to the switches that occur when a scheduling OS function is called (like a delay, or taking or giving a semaphore).
Co-operative means that the task function must either return or else call an OS function to cause a task switch.
Some OS might have one specific timer interrupt which causes context switches. The ARM SysTick interrupt is suitable for this purpose. Because the tasks themselves don't have to call a scheduling function, this is one kind of pre-emption.
If a scheduler uses a timer to allow multiple tasks of equal priority to share processor time, then one common name for this is a "round-robin scheduler". I had not come across the term "rate monotonic" before, but it is not the same thing: it refers to assigning fixed priorities according to task rate (the shorter the period, the higher the priority), as the answer below explains.
It sounds like the article you have read describes a very simple pre-emptive scheduler, where tasks do have different priorities, but task switching can only occur when the timer interrupt runs.
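To make the co-operative case concrete, here is a minimal, hypothetical run-to-completion scheduler in C (the task names are invented). The scheduler only regains control when a task function returns, so a task that never returns starves everything else - exactly the situation the question suspects.

/* A minimal run-to-completion cooperative scheduler (hypothetical sketch).
 * Each "task" is a plain function that must return promptly; if one of
 * them loops forever, no other task ever runs again - there is no
 * preemption of any kind. */
#include <stddef.h>

typedef void (*task_fn)(void);

static void task_blink(void) { /* toggle an LED, then return */ }
static void task_uart(void)  { /* poll the UART, then return  */ }

static task_fn task_table[] = { task_blink, task_uart };

int main(void)
{
    for (;;) {
        for (size_t i = 0; i < sizeof task_table / sizeof task_table[0]; ++i) {
            /* Control only comes back here when the task returns. */
            task_table[i]();
        }
    }
}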

Co-operative scheduling is non-preemptive, but "non-preemptive" might describe any scheduler that does not use preemption. It is a rather non-specific term.
The article you describe (without citation), however, seems confused. Context switching on a tick event is preemption if the interrupted task did not explicitly yield. Not everything you read on the Internet is true or authoritative; always check your sources to determine their level of expertise. Enthusiastic amateurs abound.
A fully preemptive priority based scheduler can context switch on "scheduling events" which include not just the timer tick, but also whenever a running thread or interrupt handler triggers an IPC or synchronisation mechanism on which a higher-priority thread than the current thread is waiting.
What you describe as "non-preemptive" I would suggest is in fact a time-triggered preemptive scheduler, where a context switch occurs only on a tick event and not asynchronously on, say, a message queue post or a semaphore give.
A rate-monotonic scheduler does not necessarily determine the priority automatically (in fact I have never come across one that did). Rather the priority is set (manually) according to rate-monotonic analysis of the tasks to be executed. It is "rate-monotonic" in the sense that it supports rate-monotonic scheduling. It is still possible for the system designer to apply entirely inappropriate priorities or partition tasks in such a way that they are insufficiently deterministic for RMS to actually occur.
Most RTOS schedulers support RMS, including FreeRTOS. Most RTOS also support variable task priority as both a priority inversion mitigation, and via an API. But to be honest if your application relies on either I would argue that it is a failed design.
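To illustrate the point that priorities are set manually according to rate-monotonic analysis, here is a small C sketch with a made-up task set: priorities follow the task periods (shortest period gets the highest priority), and total utilisation is checked against the Liu & Layland bound to see whether the set is trivially schedulable under RMS. The task names and numbers are purely illustrative.

/* Rate-monotonic analysis sketch (illustrative task set, not from any
 * real system): priorities are assigned by period - the shorter the
 * period, the higher the priority - and total utilisation is compared
 * against the Liu & Layland bound n*(2^(1/n) - 1).
 * Build with: cc rma.c -lm */
#include <math.h>
#include <stdio.h>

struct task { const char *name; double period_ms; double wcet_ms; };

int main(void)
{
    /* Listed in priority order: shortest period first = highest priority. */
    struct task set[] = {
        { "sensor",   10.0,  2.0 },
        { "control",  50.0, 10.0 },
        { "logger",  200.0, 30.0 },
    };
    const int n = (int)(sizeof set / sizeof set[0]);

    double u = 0.0;
    for (int i = 0; i < n; ++i)
        u += set[i].wcet_ms / set[i].period_ms;     /* Ci / Ti */

    double bound = n * (pow(2.0, 1.0 / n) - 1.0);   /* ~0.780 for n = 3 */

    printf("U = %.3f, bound = %.3f: %s\n", u, bound,
           u <= bound ? "schedulable under RMS"
                      : "bound exceeded - needs response-time analysis");
    return 0;
}

With these made-up numbers U = 0.55, comfortably under the bound.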

Related

WDT in a preemptive RTOS kernel

I heard that the best way to use a watchdog timer in a preemptive kernel is to assign it to the lowest-priority task/idle task and refresh it there. I fail to understand why, though. What if high-priority tasks keep running and the idle task doesn't run before the timeout?
Any clarifications?
Thanks.
I fail to understand why, though. What if high-priority tasks keep running and the idle task doesn't run before the timeout?
Well, that is kind of the point. If the lowest priority thread (or better, the idle thread) is starved, then your system will be missing deadlines, and is either poorly designed or some unexpected condition has occurred.
If it were refreshed at a high priority or from an interrupt, then all lower-priority threads could be in a failed state, either running busy or never running at all, and the watchdog would be uselessly maintained while not providing any protection whatsoever.
It is nonetheless only a partial solution to system integrity monitoring. It addresses the problem of an errant task hogging the CPU, but it does not deal with the issue of a task blocking and never being scheduled as intended. There are any number of ways of dealing with that, but a simple approach is to have "software watchdogs": counters that get reloaded by each task and decremented in a high-priority timer thread or handler. If any thread's counter reaches zero, then the respective thread blocked for longer than intended, and action may be taken. This requires that each thread runs at an interval shorter than its watchdog counter reload value. For threads that otherwise block indefinitely waiting for infrequent aperiodic events, you might use a blocking timeout just to update the software watchdog.
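A minimal sketch of those software watchdogs might look like the following. The task IDs, reload values and wdt_hardware_kick() are placeholders; a real implementation would also need to consider atomicity of the counter updates.

/* Software-watchdog sketch (illustrative only): per-task counters reloaded
 * by the tasks themselves and decremented from a periodic high-priority
 * timer task or handler. wdt_hardware_kick() stands in for whatever
 * refreshes the real hardware watchdog on your part. */
#include <stdbool.h>
#include <stdint.h>

enum { TASK_SENSOR, TASK_CONTROL, TASK_COMMS, NUM_TASKS };

/* Reload values in timer ticks; each task must check in at least this often. */
static const uint32_t reload[NUM_TASKS]   = { 20, 50, 100 };
static volatile uint32_t counter[NUM_TASKS] = { 20, 50, 100 };

extern void wdt_hardware_kick(void);        /* assumed hardware-specific refresh */

/* Called by each task from its main loop (or from a blocking-timeout path). */
void task_checkin(int id)
{
    counter[id] = reload[id];
}

/* Called from the high-priority periodic timer task or handler. */
void watchdog_tick(void)
{
    bool all_alive = true;

    for (int id = 0; id < NUM_TASKS; ++id) {
        if (counter[id] == 0)
            all_alive = false;              /* this task missed its interval */
        else
            --counter[id];
    }

    if (all_alive)
        wdt_hardware_kick();                /* stop kicking -> hardware reset follows */
}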
There is no absolute rule about the priority of a watchdog task. It depends on your design and goals.
Generally speaking, if the watchdog task is the lowest-priority task then it will fail to run (and the watchdog will expire and reset the system) if any higher-priority task becomes stuck or consumes too much of the CPU time. Consider that if the high-priority tasks are running 100% of the time, that is probably too much, because lower-priority tasks are being starved. And maybe you want the watchdog to reset if lower-priority tasks are being starved.
But that general idea isn't a complete design. See this answer, and especially the "Multitasking" section of this article (https://www.embedded.com/watchdog-timers/) for a more complete watchdog task design. The article suggests making the watchdog task the highest priority task but discusses the trade-offs of the alternative.

How to restart a task

How to restart a lower priority task from a higher priority task?
This is a general question about how RTOS in embedded systems work.
I have multiple tasks with different priorities. The lower-priority task has certain steps, e.g. step1, step2, step3.
The highest priority task handles system malfunctions. If a malfunction occurs then an ISR in the system will cause the higher priority task to immediately run.
My question ...
If a system malfunction occurs while the lower-priority task is in the middle, e.g. at step2, and we do not want to run the rest of the steps in the lower-priority task, how do we accomplish that?
My understanding is that when the scheduler is ready to run the lower priority task then it will continue from where it left off prior to system malfunction. So step3 will be executed.
Does an embedded RTOS (e.g., Keil RTX or FreeRTOS) provide a mechanism so that, on a certain signal from the higher-priority task/ISR, the lower-priority task can restart?
The mechanism you are suggesting is unlikely to be supported by an RTOS, since it would be non-deterministic in its behaviour. For example, such an RTOS mechanism would have no knowledge of resource allocation or initialisation within the task, whether it would be safe to simply "restart", or how to clean up if it were not.
Moreover, the RTOS preempts at the machine-instruction level, not between logical functional "steps" - it has no way of determining where the task is in its process.
Any such mechanism must be built into the task's implementation at the application level, not the RTOS level, so that the process is controlled and deterministic. For example, you might have a structure such as:
for(;;)
{
    step1() ;
    if( restart() )          /* malfunction signalled at a safe point? */
    {
        clean_up() ;         /* release any resources acquired so far */
        continue ;           /* back to the top of the task loop */
    }
    step2() ;
    if( restart() )
    {
        clean_up() ;
        continue ;
    }
    step3() ;
}
Where the malfunction requests a restart, and the request is polled through restart() at specific points in the task where a restart is valid or safe. The clean_up() performs any necessary resource management, and the continue causes a jump to the start of the task loop (in a more complex situation, a goto might be used, but this is already probably a bad idea - don't make it worse!).
Fundamentally the point is you have to code the task to handle the malfunction appropriately and the RTOS cannot "know" what is appropriate.
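As a sketch of how the malfunction might request that restart (this is plain application code, not an RTOS feature, and the names are hypothetical): the high-priority task sets a flag which restart() polls and clears at the safe points shown above. A FreeRTOS task notification or event group could be used instead of the bare volatile flag for more robust signalling.

/* One possible wiring for restart()/clean_up() above (a sketch). */
#include <stdbool.h>

static volatile bool restart_requested = false;

/* Called from the high-priority malfunction task (or its deferred ISR handler). */
void request_restart(void)
{
    restart_requested = true;
}

/* Polled by the lower-priority task between steps. */
bool restart(void)
{
    if (restart_requested) {
        restart_requested = false;   /* acknowledge the request */
        return true;
    }
    return false;
}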
While there is no generic RTOS mechanism for what you are suggesting, it is perhaps possible to implement a framework to support the required behaviour, but it would require you to write all your tasks to a specific pattern dictated by such a framework - and implementing a comprehensive solution that handles resource clean-up in a simple manner is non-trivial.
QNX Neutrino has a "High Availability Framework", for example, that supports process restart and automatic clean-up. It is an example of what can be done, but it is of course specific to that RTOS. If this behaviour is critical to your application, then you need to select your RTOS accordingly rather than rely on "traditional" mechanisms available in any RTOS.

operating system - context switches

I have been confused about the issue of context switches between processes, given a round-robin scheduler with a certain time slice (which is what Unix/Windows both use in a basic sense).
So, suppose we have 200 processes running on a single-core machine. If the scheduler is using even a 1 ms time slice, each process would get its share every 200 ms, which is probably not the case (imagine a Java high-frequency app; I would not assume it gets scheduled every 200 ms to serve requests). Having said that, what am I missing in the picture?
Furthermore, Java and other languages allow you to put the running thread to sleep for e.g. 100 ms. Am I correct in saying that this does not cause a context switch, and if so, how is this achieved?
So, suppose we have 200 processes running on a single-core machine. If the scheduler is using even a 1 ms time slice, each process would get its share every 200 ms, which is probably not the case (imagine a Java high-frequency app; I would not assume it gets scheduled every 200 ms to serve requests). Having said that, what am I missing in the picture?
No, you aren't missing anything; it's the same in non-preemptive systems. Note, though, that most of those 200 processes will be blocked waiting on I/O or timers at any given moment, so the time slices are only shared among the processes that are actually runnable. Processes with higher priority can preempt less important ones, to the point that a high-priority process may get, say, ten times as much CPU as the lowest-priority one (the actual ratio depends entirely on the situation and the implementation), so long as it does not starve the lowest-priority process altogether.
For processes of equal priority, it comes down to the round-robin algorithm you mentioned, although which process gets picked first is again implementation-dependent. Windows and traditional Unix schedulers are broadly similar in that both use priority-based scheduling with round-robin among equal priorities, but the default Linux task scheduler is the Completely Fair Scheduler (CFS), which is not a simple round robin.
Furthermore, Java and other languages allow you to put the running thread to sleep for e.g. 100 ms. Am I correct in saying that this does not cause a context switch, and if so, how is this achieved?
Programming languages and libraries implement "sleep" functionality with the aid of the kernel. Without kernel-level support, they'd have to busy-wait, spinning in a tight loop, until the requested sleep duration elapsed. This would wastefully consume the processor.
For threads that are put to sleep (Thread.sleep(long millis)), most systems generally do the following:
Suspend execution of the process and mark it as not runnable.
Set a timer for the given wait time. Systems provide hardware timers that let the kernel register to receive an interrupt at a given point in the future.
When the timer hits, mark the process as runnable.
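To see the difference that kernel support makes, compare a busy-wait with a real sleep using POSIX calls (a sketch; Thread.sleep() ultimately relies on an equivalent kernel facility).

/* Busy-waiting keeps the thread runnable and burns CPU for the whole
 * interval; nanosleep() lets the kernel mark the thread not runnable and
 * wake it from a timer interrupt, consuming no CPU in between. */
#include <time.h>

void busy_wait_ms(long ms)                    /* wasteful: spins until the deadline */
{
    struct timespec start, now;
    clock_gettime(CLOCK_MONOTONIC, &start);
    do {
        clock_gettime(CLOCK_MONOTONIC, &now);
    } while ((now.tv_sec - start.tv_sec) * 1000L +
             (now.tv_nsec - start.tv_nsec) / 1000000L < ms);
}

void sleep_ms(long ms)                        /* kernel-assisted: thread is descheduled */
{
    struct timespec req = { .tv_sec = ms / 1000,
                            .tv_nsec = (ms % 1000) * 1000000L };
    nanosleep(&req, NULL);
}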
I hope you are aware of threading models like one-to-one, many-to-one and many-to-many, so I am not going into much detail; this is just a reference for you.
It might appear that this increases the overhead/complexity, but that is how threads (user threads created in the JVM) are operated on, and the mapping is based on the threading models I mentioned above. Check this Quora question and its answers, and please go through the best answer, given by Robert Love.
For further reading, I'd suggest the Scheduling Algorithms article on OSDev.org and the book Operating System Concepts by Silberschatz, Galvin and Gagne.

What is the difference between scheduler and dispatcher in context of process scheduling

I am currently pursuing an undergraduate-level course in Operating Systems. I'm somewhat confused about the functions of the dispatcher and the scheduler in process scheduling. Based on what I've learnt, the medium-term scheduler selects the processes for swapping out and in, and once the processes are selected, the actual swap operation is performed by the dispatcher via a context switch. Also, the short-term scheduler is responsible for scheduling processes and allocating them CPU time, based on the scheduling algorithm followed.
Please correct me if I'm wrong. I'm really confused about the functions of the medium-term scheduler vs. the dispatcher, and the differences between swapping and context switching.
You are describing things in system-specific terms.
The scheduler and the dispatcher could be the same thing. However, they are frequently divided so that the scheduler maintains a queue of processes and the dispatcher handles the actual context switch.
If you divide the scheduler into long term, medium term, and short term, that division (if it exists at all) is specific to the operating system.
Swapping is the process of removing a process from memory. A process can be descheduled through a context switch without being swapped out. Swapping is generally independent of scheduling; however, a process must be swapped in to run, and the memory management will try to avoid swapping out executing processes.
A scheduler evaluates the requirements of the requests to be serviced and thus imposes an ordering.
Basically, what you have learnt about the scheduler and dispatcher is correct. Sometimes they are referred to as a single unit, or the (short-term) scheduler contains the dispatcher, and together they are responsible for allocating a process to the CPU for execution. Sometimes they are treated as two separate units: the scheduler selects a process according to some algorithm, and the dispatcher is the software responsible for the actual context switch.
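A minimal sketch of that split, with hypothetical types and the architecture-specific routines left as externs: the scheduler is pure policy (which process runs next), while the dispatcher is pure mechanism (the context switch itself).

/* Scheduler = policy (choose the next process); dispatcher = mechanism
 * (perform the context switch). All names here are illustrative. */
#include <stddef.h>

struct process {
    int pid;
    int priority;
    struct process *next;        /* link in the ready queue */
    void *saved_context;         /* saved registers, stack pointer, ... */
};

static struct process *ready_queue;              /* maintained elsewhere */

extern void save_context(struct process *p);     /* architecture-specific */
extern void restore_context(struct process *p);  /* architecture-specific */

/* Scheduler: pick the highest-priority ready process. */
struct process *schedule(void)
{
    struct process *best = ready_queue;
    for (struct process *p = ready_queue; p != NULL; p = p->next)
        if (best == NULL || p->priority > best->priority)
            best = p;
    return best;
}

/* Dispatcher: switch from the current process to the chosen one. */
void dispatch(struct process *from, struct process *to)
{
    if (from == to || to == NULL)
        return;                  /* nothing to switch to */
    save_context(from);
    restore_context(to);         /* execution resumes inside 'to' */
}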

Is there a way to make threads run slow?

This question is probably the opposite of what every developer wants their system to do.
I am creating software that looks into a directory for specific files, reads them in and does certain things. This can create a high CPU load. I use GCD to make threads which are put into an NSOperationQueue. What I was wondering is: is it possible to make this operation not take such a huge CPU load? I want to run things much more slowly, as speed is not an issue, but it is very important that the app plays nicely in the background.
In short: can I make an NSOperationQueue, or threads in general, run slowly without using things like sleep?
The app traverses a directory structure, finds all images and creates thumbnails. Just the traversing of the directories makes the CPU load quite high.
Process priority: nice / renice.
See:
https://superuser.com/questions/42817/is-there-any-way-to-set-the-priority-of-a-process-in-mac-os-x#42819
But you can also do it programmatically, as sketched below.
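For example, a process can raise its own nice value with the POSIX setpriority() call (a sketch; on macOS this is the programmatic equivalent of renice).

/* Raise this process's nice value so it defers to other work.
 * Bigger nice value = lower scheduling priority; only root may lower it. */
#include <sys/resource.h>
#include <errno.h>
#include <stdio.h>

int main(void)
{
    errno = 0;
    int current = getpriority(PRIO_PROCESS, 0);   /* 0 = the calling process */
    if (current == -1 && errno != 0) {
        perror("getpriority");
        return 1;
    }

    if (setpriority(PRIO_PROCESS, 0, current + 10) != 0) {
        perror("setpriority");
        return 1;
    }

    printf("nice value raised from %d to %d\n", current, current + 10);
    return 0;
}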
Your threads are being CPU-intensive. This leads to two questions:
Do they need to be so CPU-intensive? What are they doing that's CPU-intensive? Profile the code. Are you using (say) a quadratic algorithm when you could be using a linear one?
Playing nicely with other processes on the box. If there's nothing else on the box then you /want/ to use all of the available CPU resource: otherwise you're just wasting time. However, if there are other things running then you want to defer to them (within reason), which means giving your process a lower priority (i.e. /higher/ nice value) than other processes. Processes by default have nice value 0, so just make it bigger (say +10). You have to be root to give a process negative niceness.
The Operation Queues section of the Concurrency Programming Guide describes the process for changing the priority of an NSOperation:
Changing the Underlying Thread Priority
In OS X v10.6 and later, it is possible to configure the execution priority of an operation’s underlying thread. Thread policies in the system are themselves managed by the kernel, but in general higher-priority threads are given more opportunities to run than lower-priority threads. In an operation object, you specify the thread priority as a floating-point value in the range 0.0 to 1.0, with 0.0 being the lowest priority and 1.0 being the highest priority. If you do not specify an explicit thread priority, the operation runs with the default thread priority of 0.5.
To set an operation’s thread priority, you must call the setThreadPriority: method of your operation object before adding it to a queue (or executing it manually). When it comes time to execute the operation, the default start method uses the value you specified to modify the priority of the current thread. This new priority remains in effect for the duration of your operation’s main method only. All other code (including your operation’s completion block) is run with the default thread priority. If you create a concurrent operation, and therefore override the start method, you must configure the thread priority yourself.
Having said that, I'm not sure how much of a performance difference you'll really see from adjusting the thread priority. For more dramatic performance changes, you may have to use timers, sleep/suspend the thread, etc.
If you're scanning the file system looking for changed files, you might want to refer to the File System Events Programming Guide for guidance on lightweight techniques for responding to file system changes.