If I have a non-RTOS single-core system, can one task, say taskA, interrupt another task, say taskB, where neither taskA nor taskB is an interrupt routine? Or is interruption of one task by another only possible through ISRs (interrupt service routines) on non-RTOS systems?
For your system to have more than one non-ISR thread implies that there is some sort of multi-tasking, and multi-tasking is not exclusive to an RTOS. One task "interrupting" another is known as preemption. Preemption requires a preemptive scheduler; while an RTOS necessarily has a preemptive scheduler, so do Windows and Linux, for example, but these are not real-time since scheduling and preemption are not deterministic.
Preemptive multi-tasking is necessary to support preemption, but real-time deterministic scheduling is not required. Preemption, however, is not necessary for multi-tasking; some systems (notably the 16-bit versions of Windows prior to Windows 95, and Mac OS prior to OS X) are cooperative multitasking systems, where a running task must yield the CPU to allow other tasks to run.
In a preemptive multitasking system, the scheduler executes on exit from the interrupt context and whenever a task invokes a schedulable event (such as giving a semaphore, queuing a message or releasing a mutex). If, when the scheduler runs, a task has become ready to run and the scheduling policy requires or allows it to preempt the current task, a context switch will occur.
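To make that concrete, here is a minimal sketch of a "schedulable event"; every name in it (ksem_t, task_t, wait_queue_pop, make_ready, schedule) is hypothetical rather than any real kernel's API:

```c
/* Hypothetical kernel internals - a sketch, not a real API. */
typedef struct task task_t;

typedef struct {
    int     count;
    task_t *waiters;                        /* simplified wait list */
} ksem_t;

/* Provided elsewhere by the (hypothetical) kernel. */
task_t *wait_queue_pop(task_t **waiters);   /* highest-priority waiter, or NULL */
void    make_ready(task_t *t);              /* move the task to the ready list */
void    schedule(void);                     /* run the scheduler */

void ksem_give(ksem_t *sem)
{
    sem->count++;

    task_t *waiter = wait_queue_pop(&sem->waiters);
    if (waiter != NULL) {
        make_ready(waiter);     /* a task has just become ready to run */
        schedule();             /* the scheduler may preempt the caller right here */
    }
}
```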
So in short one non-ISR thread or process "interrupting" another requires an OS that supports preemption, which need not be an RTOS.
Control must be given to the task scheduler in order for a context switch to occur. That can happen as a result of an interrupt if the interrupt handler is designed to call the scheduler. Or it can happen as a result of some function call (such as yield, post, or pend) if that function calls the scheduler.
This task scheduler could be part of an RTOS. Or maybe it's some minimal task-switching kernel that you don't consider to be an RTOS. Regardless, some sort of scheduler must get control in order to perform a task context switch.
Related
I have read about a cooperative scheduler, which does not let a higher-priority task run until the lower-priority task blocks itself. So if there is no delay in the task, the lower task will take the CPU forever - is that correct? I had thought that non-preemptive is another name for cooperative, but another article has confused me: it says that in a non-preemptive system a higher task can interrupt a lower task at the sys tick, just not in the middle between ticks. So which is correct?
Are cooperative and non-preemptive actually the same thing?
And rate monotonic is one type of preemptive scheduler, right?
Its priorities are not set manually; the scheduler algorithm decides priority based on execution time or deadline. Is that correct?
Is rate monotonic better than a fixed-priority preemptive kernel (the one which FreeRTOS uses)?
These terms can never fully cover the range of possibilities that can exist. The truth is that people can write whatever kind of scheduler they like, and then other people try to put what is written into one or more categories.
Pre-emptive implies that an interrupt (e.g. from a clock or peripheral) can cause a task switch to occur, in addition to the switches that occur when a scheduling OS function is called (like a delay, or taking or giving a semaphore).
Co-operative means that the task function must either return or else call an OS function to cause a task switch.
Some OSes might have one specific timer interrupt which causes context switches. The ARM SysTick interrupt is suitable for this purpose. Because the tasks themselves don't have to call a scheduling function, this is one kind of pre-emption.
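As a hedged sketch for a Cortex-M port (the device header and the handler bodies are assumptions; SCB->ICSR and SCB_ICSR_PENDSVSET_Msk are standard CMSIS names), the tick handler typically just pends the PendSV exception, and PendSV performs the actual context switch:

```c
#include "stm32f4xx.h"   /* any device header providing the CMSIS SCB definitions (assumption) */

void SysTick_Handler(void)
{
    /* Tick bookkeeping (delays, timeouts) would go here. */
    SCB->ICSR = SCB_ICSR_PENDSVSET_Msk;   /* request a context switch */
}

void PendSV_Handler(void)
{
    /* Save the outgoing task's registers, select the next ready task and
     * restore its registers - real ports write this in assembly. */
}
```

FreeRTOS and most other Cortex-M kernels use this SysTick-plus-PendSV split so that the switch always happens at the lowest exception priority.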
If a scheduler uses a timer to allow multiple tasks of equal priority to share processor time then one common name for this is a "round-robin scheduler". I have not heard the term "rate monotonic" but I assume it means something very similar.
It sounds like the article you have read describes a very simple pre-emptive scheduler, where tasks do have different priorities, but task switching can only occur when the timer interrupt runs.
Co-operative scheduling is non-preemptive, but "non-preemptive" might describe any scheduler that does not use preemption. It is a rather non-specific term.
The article you describe (without citation), however, seems confused. Context switching on a tick event is preemption if the interrupted task did not explicitly yield. Not everything you read on the Internet is true or authoritative; always check your sources to determine their level of expertise. Enthusiastic amateurs abound.
A fully preemptive priority based scheduler can context switch on "scheduling events" which include not just the timer tick, but also whenever a running thread or interrupt handler triggers an IPC or synchronisation mechanism on which a higher-priority thread than the current thread is waiting.
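Since FreeRTOS is already mentioned in the question, here is a hedged sketch of exactly that kind of scheduling event using its documented API (xSemaphoreGiveFromISR and portYIELD_FROM_ISR are real FreeRTOS calls; the IRQ handler name and the semaphore handle are assumptions):

```c
#include "FreeRTOS.h"
#include "semphr.h"

extern SemaphoreHandle_t xDataReadySem;   /* created elsewhere (assumption) */

void UART_IRQHandler(void)                /* hypothetical interrupt vector */
{
    BaseType_t xHigherPriorityTaskWoken = pdFALSE;

    /* ... clear the hardware interrupt flag here ... */

    /* Give the semaphore; if a higher-priority task was blocked on it,
     * xHigherPriorityTaskWoken is set to pdTRUE. */
    xSemaphoreGiveFromISR(xDataReadySem, &xHigherPriorityTaskWoken);

    /* Request a context switch on exit from the ISR if needed. */
    portYIELD_FROM_ISR(xHigherPriorityTaskWoken);
}
```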
What you describe as "non-preemptive" I would suggest is in fact a time-triggered preemptive scheduler, where a context switch occurs only on a tick event and not asynchronously on, say, a message queue post or a semaphore give.
A rate-monotonic scheduler does not necessarily determine the priority automatically (in fact I have never come across one that did). Rather the priority is set (manually) according to rate-monotonic analysis of the tasks to be executed. It is "rate-monotonic" in the sense that it supports rate-monotonic scheduling. It is still possible for the system designer to apply entirely inappropriate priorities or partition tasks in such a way that they are insufficiently deterministic for RMS to actually occur.
Most RTOS schedulers support RMS, including FreeRTOS. Most RTOS also support variable task priority as both a priority inversion mitigation, and via an API. But to be honest if your application relies on either I would argue that it is a failed design.
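As a sketch of how that looks in practice (the task functions, stack sizes and periods below are invented; xTaskCreate is the standard FreeRTOS call), the designer performs the rate-monotonic analysis offline and simply gives the shorter-period tasks the higher priorities:

```c
#include "FreeRTOS.h"
#include "task.h"

/* Hypothetical tasks; shorter period => higher priority per RMA. */
void vSensorTask(void *pvParameters);   /* 10 ms period  */
void vControlTask(void *pvParameters);  /* 50 ms period  */
void vLoggerTask(void *pvParameters);   /* 500 ms period */

void create_tasks(void)
{
    xTaskCreate(vSensorTask,  "sensor",  256, NULL, 3, NULL);  /* highest */
    xTaskCreate(vControlTask, "control", 256, NULL, 2, NULL);
    xTaskCreate(vLoggerTask,  "logger",  256, NULL, 1, NULL);  /* lowest  */
}
```

Priority 0 is reserved for the FreeRTOS idle task, so application priorities start at 1 here.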
Here is the process state diagram from Modern Operating Systems. The transition from running to ready happens when the scheduler picks another process.
Here is the process state diagram from Operating System Concepts.
What does "Interrupt" mean for transition from running to ready? Is it the same as "the scheduler picks another process" in the above?
Thanks.
There are two ways for a process to transition from the running state to the ready state, depending on how the OS implements multitasking:
With preemptive multitasking, the OS uses timer interrupts (there is one timer for each core or processor in the system) to regularly interrupt whatever process is currently running. The interrupt handler then invokes the OS scheduler to determine whether to schedule another process or continue running the same process. If the scheduler decides to run another process, then the current process transitions from the running state to the ready state.
With cooperative multitasking, the OS does not use interrupts to schedule processes. Instead, a running process must voluntarily yield control to the scheduler to allow it to schedule another process. So processes do not transition between the running and ready states because of interrupts, but only voluntarily.
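As a purely illustrative sketch of the cooperative case (nothing here is a real OS; the task functions and the scheduler loop are made up), each "task" does a slice of work and then returns, which is its voluntary yield back to the scheduler:

```c
#include <stddef.h>
#include <stdio.h>

/* Cooperative, run-to-completion tasks: returning is the voluntary yield. */
typedef void (*task_fn)(void);

static void task_a(void) { puts("task A slice"); }
static void task_b(void) { puts("task B slice"); }

static task_fn ready[] = { task_a, task_b };

int main(void)
{
    for (int round = 0; round < 3; round++) {   /* bounded only for the demo */
        for (size_t i = 0; i < sizeof ready / sizeof ready[0]; i++)
            ready[i]();   /* a task that never returned would hang everything */
    }
    return 0;
}
```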
It seems to me that the figure from the Modern Operating Systems book applies to both multitasking methods, while the figure from Operating System Concepts is specifically about preemptive multitasking. If the word "interrupt" were changed to something more inclusive, like "yield", then that figure would also apply to cooperative multitasking.
I have had this question in mind for a long time, and it may sound a little vacuous. We know that the operating system is responsible for handling memory allocation, process management, etc. The CPU can perform only one task at a time (assuming it is single core). Suppose an operating system has allocated a CPU cycle to some user-initiated process and the CPU is executing it. Where is the operating system running at that moment? If some other process is using the CPU, is the operating system not running for that moment, since the OS itself needs the CPU to run? And if the OS is not running, then who is handling process management, device management, etc. for that period?
The question is mixing up who's in control of the memory and who's in control of the CPU. The wording “running” is imprecise: on a single CPU, a single task is running at any given time in the sense that the processor is executing its instructions; but many tasks are executing in the sense that their state is stored in memory and their execution can resume at any time.
While a process is executing on the CPU, the kernel is not executing. Its state is saved in memory. The execution of the kernel can resume:
if the process code makes a jump into kernel code — this is called a system call.
if an interrupt occurs.
If the operating system provides preemptive multitasking, it will schedule an interrupt to happen after an interval of time (called a time slice). On a non-preemptive operating system, the process will run forever if it doesn't yield the CPU. See What mechanisms prevent a process from taking over the processor forever? for an explanation of how preemption works.
Tasks such as process management and device management are triggered by some event. If the event is a request by the process, the request will take the form of a system call, which executes kernel code. If the event is triggered from hardware, it will take the form of an interrupt, which executes kernel code.
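A small concrete example (plain POSIX, nothing invented): the only points at which the kernel runs on behalf of this program are its system calls, here a single write:

```c
#include <unistd.h>

int main(void)
{
    const char msg[] = "entering the kernel via a system call\n";

    /* write() traps into the kernel; the kernel runs on this process's
     * behalf, then returns control to it. */
    write(STDOUT_FILENO, msg, sizeof msg - 1);
    return 0;
}
```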
(Note: in this answer, I use “CPU” and “processor” synonymously, to mean a single execution thread: a single core, or whatever the hardware architecture is.)
The OS kernel does nothing at all until it is entered via an interrupt. It may be entered because of a hardware interrupt that causes a driver to run, with the driver choosing to exit via the OS, or because a running thread makes a system call (a software interrupt).
Unless an interrupt occurs, the OS kernel does nothing at all. It does not need to do anything.
Edit:
DMA is (usually) used for bulk I/O and is handled by a hardware subsystem that services requests issued by a system call (software interrupt). When a DMA operation is complete, the DMA hardware raises a hardware interrupt, which runs a driver that can further signal the OS of the completion, possibly changing the set of running threads; so DMA is managed by interrupts.
A new process/thread can only be loaded by an existing thread that has issued a system call (software interrupt), and so new processes are initiated by interrupts.
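To illustrate with ordinary POSIX calls (fork, execv and waitpid are standard; the echo command is just an example program), a new process comes into existence only because an existing process entered the kernel via system calls:

```c
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();                /* system call: create a new process */
    if (pid == 0) {
        char *argv[] = { "/bin/echo", "child running", NULL };
        execv(argv[0], argv);          /* system call: load a new program   */
        _exit(127);                    /* only reached if execv failed      */
    } else if (pid > 0) {
        waitpid(pid, NULL, 0);         /* system call: wait for the child   */
    } else {
        perror("fork");
    }
    return 0;
}
```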
It's interrupts, all the way down :)
It depends on which type of CPU scheduling you are using (in the case of a single core):
If your process is executing under preemptive scheduling, then the process can be interrupted partway through for some duration so that the CPU can be used for another process or for the OS. But in the case of non-preemptive scheduling, a process is not going to yield the CPU before completing its execution.
In the case of a single core, if there is a single process then it simply executes its instructions; if there are multiple processes, their states are stored in PCBs (process control blocks), which form the process queue, and they execute one after another if no interrupts occur.
The PCB holds the information used for process management.
When a process starts, it calls library functions, which issue system calls; kernel execution is also invoked if some process fails during execution or an interrupt occurs.
I'm doing some fill in the blanks from a sample exam for my class and I was hoping you could double check my terminology.
The various scheduling queues used by the operating system would consist of lists of processes.
Interrupt handling is the technique of periodically checking to see if a condition (such as completion of some requested I/O operation) has been met.
When the CPU is in kernel mode, a running program has access to a restricted set of CPU functionality.
The job of the CPU scheduler is to select a process on the ready queue and change its state.
The CPU normally supports a vector of interrupts so the OS can respond appropriately when some event of interest occurs in the hardware.
Using traps, a device controller can use idle time on the bus to read from or write to main memory.
During a context switch, the state of one process is copied from the CPU and saved, and the state of a different process is restored.
An operating system consists of a kernel and a collection of application programs that run as user processes and either provide OS services to the user or work in the background to keep the computer running smooth.
There are so many terms from our chapters, I am not quite sure if I am using the correct ones.
My thoughts:
1. Processes and/or threads. Jobs and tasks aren't unheard of either. There can be other things. E.g. in MS Windows there are also Deferred Procedure Calls (DPCs) that can be queued.
2. This must be polling.
4. Why CPU scheduler? Why not just scheduler?
6. I'm not sure about traps in the hardware/bus context.
When a process executing in user space issues a system call or triggers an exception, it enters kernel space and the kernel starts executing on behalf of the process. The kernel is said to be executing in process context. Similarly, when an interrupt occurs, the kernel executes in interrupt context. I have also studied kernel execution in kernel threads, where kernel processes run in the background.
My questions are:
Does the kernel execute in any other contexts?
Suppose a process in the user space never executes a system call or triggers an exception or no interrupt occurs, does the kernel code ever execute ?
The kernel runs periodically: it sets a timer to fire an interrupt at some predefined frequency (100 Hz on Linux 2.4/x86, 1000 Hz on early Linux 2.6/x86, 250 Hz on newer Linux 2.6/x86).
The kernel needs to do this in order to do preemptive multitasking. OTOH, OSes that only do cooperative multitasking (Windows 3.1, classic Mac OS) needn't do this, and only switch tasks in response to some call from the running task (which could lead to runaway tasks hanging the whole system).
Note that there is some effort to optimize the use of this timer: newer Linux is smarter when there are no runnable tasks; it sets the timer as far in the future as it can, to allow the CPU to sleep longer and deeper and preserve power (the CONFIG_NOHZ kernel config option). Running powertop will show the number of wakeups per second, which on an idle system can be much lower than the 250 wakeups per second you'd expect of a traditional implementation.
Suppose a process in the user space never executes a system call or triggers an exception or no interrupt occurs, does the kernel code ever execute ?
Assume you have a process p that is running the following code: while(1);. This code will never call into the kernel and won't cause any faults. (It might have set an alarm(3) earlier, causing a signal to be delivered in the future, or it might exceed the setrlimit(2) CPU limit; in either case the kernel will deliver a signal to the process.)
Or, if another process sends p a signal via kill(2), the kernel will deliver that signal to the process as well.
The signal delivery will either cause a signal handler to run, do nothing (if the signal is ignored or masked), or take the default signal action (which might be nothing or termination).
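A hedged illustration of the alarm case mentioned above (standard POSIX calls; the one-second interval and the handler are just for the demo): the loop never enters the kernel on its own, yet the kernel still runs on the process's behalf when the timer expires and delivers SIGALRM:

```c
#include <signal.h>
#include <unistd.h>

static void on_alarm(int sig)
{
    (void)sig;
    /* write() is async-signal-safe, unlike printf(). */
    const char msg[] = "kernel delivered SIGALRM\n";
    write(STDOUT_FILENO, msg, sizeof msg - 1);
    _exit(0);
}

int main(void)
{
    signal(SIGALRM, on_alarm);   /* install a handler (sigaction is preferred) */
    alarm(1);                    /* ask the kernel for a signal in 1 second    */
    for (;;)                     /* spins without ever calling into the kernel */
        ;
}
```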
And, of course, the process execution can be interrupted so the processor can handle interrupts; or a higher-priority process can preempt it.