How does a VxWorks scheduler get executed?

I'd like to know how the scheduler gets called so that it can switch tasks. Whether it's preemptive scheduling or round-robin scheduling, the scheduler has to come into the picture to do any kind of task switching. Suppose a low-priority task has an infinite loop - when does the scheduler intervene and switch to a higher-priority task?
Query is:
1. Who calls the scheduler? [in VxWorks]
2. If it gets called at regular intervals - how is that mechanism implemented?
Thanks in advance.
--Ashwin

The simple answer is that vxWorks takes control through a hardware interrupt from the system timer that occurs continually at fixed intervals while the system is running.
Here's more detail:
When vxWorks starts, it configures your hardware to generate a timer interrupt every n milliseconds, where n is often 10 but completely depends on your hardware. The timer interval is generally set up by vxWorks in your Board Support Package (BSP) when it starts.
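For example, on classic VxWorks the tick rate is visible and adjustable through sysLib. A minimal sketch (whether the hardware honors a requested rate is up to your BSP):

#include <vxWorks.h>
#include <sysLib.h>
#include <stdio.h>

void showSysClk(void)
{
    /* Current system clock (scheduler tick) rate, in interrupts per second. */
    printf("tick rate: %d Hz\n", sysClkRateGet());

    /* Request a 100 Hz tick, i.e. one timer interrupt every 10 ms. */
    if (sysClkRateSet(100) != OK)
        printf("rate not supported by this BSP\n");
}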
Every time the timer fires an interrupt, the system starts executing the timer interrupt handler. The timer interrupt handler is part of vxWorks, so now vxWorks has control. The first thing it does is save the CPU state (such as registers) into the Task Control Block (TCB) of the currently running task.
Then eventually vxWorks runs the scheduler to determine who runs next. To run a task, vxWorks copies the state of the task from its TCB into the machine registers, and after it does that the task has control of the CPU.
Bonus info:
vxWorks provides hooks into the task switching logic so you can have a function get called whenever your task gets preempted.
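For instance, with taskHookLib on classic (5.x/6.x-era) VxWorks the sketch below installs such a hook; check the taskHookLib reference for the exact hook signature on your version:

#include <vxWorks.h>
#include <taskLib.h>
#include <taskHookLib.h>

/* Called by the kernel on every context switch, with the TCBs of the
   task being switched out and the task being switched in. */
void mySwitchHook(WIND_TCB *pOldTcb, WIND_TCB *pNewTcb)
{
    /* Keep this short: it runs inside the kernel's switch path.
       Typical uses: bump a per-task counter, take a timestamp. */
}

STATUS installSwitchHook(void)
{
    return taskSwitchHookAdd((FUNCPTR)mySwitchHook);
}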

indiv provides a very good answer, but it is only partially accurate.
The actual working of the system is slightly more complex.
The scheduler can be executed as a result of either synchronous or asynchronous operations.
Synchronous refers to operations that are caused as a result of the code in the currently executing task. A prime example of this would be to take a semaphore (semTake).
If the semaphore is not available, the currently executing task will pend and no longer be available to execute. At this point, the scheduler will be invoked and determine the next task that should execute and will perform a context switch.
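A minimal sketch of that synchronous path, using the standard semLib calls:

#include <vxWorks.h>
#include <semLib.h>

SEM_ID sem;   /* created elsewhere: sem = semBCreate(SEM_Q_PRIORITY, SEM_EMPTY); */

void consumerTask(void)
{
    /* The semaphore is empty, so this call pends the task. The scheduler
       runs right here, in this task's context, and switches to the
       highest-priority READY task. */
    semTake(sem, WAIT_FOREVER);

    /* Execution resumes only after another context calls semGive(sem)
       and this task is again the highest-priority READY task. */
}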
Asynchronous operations essentially refer to interrupts. Timer interrupts were very well described by indiv. However, a number of different elements can cause an interrupt to execute: network traffic, a sensor, serial data, etc.
It is also good to remember that the timer interrupt does not necessarily cause a context switch! Yes, the interrupt will occur, and the delayed-task and time-slice counters will be decremented. However, if the time slice has not expired, and no higher-priority task transitions from the pended to the ready state, then the scheduler will not actually be invoked, and you will return to the original task, at the exact point where execution was interrupted.
Note that the scheduler does not have its own context; it is not a task. It is simply code that executes in whatever context it is invoked from. Either from the interrupt context (asynchronous) or from the invoking task context (synchronous).

Unless you have a heavily customized target build, the scheduler is invoked by the timer interrupt. The details are platform-specific, though.

The scheduler also gets invoked if the current task completes or blocks.

Related

Process of State

I learned that when an interrupt occurs, the process goes to the ready queue rather than to the blocked queue. However, in this picture, the interrupted process has moved to the blocked queue (the pink circle). I'm confused about which case goes to the ready queue and which goes to the blocked queue.
Process management in general is much more complex than this. A task is often tied to one specific processor core. Several tasks are tied to the same processor core and each of these tasks can be blocked waiting for IO. It means that any task can be interrupted at any time by an interrupt triggered by a device controller even if the task currently running on the core had nothing to do with that specific interrupt.
The diagram is thus incomplete: it doesn't take into account the complete process lifecycle. In your diagram, the process goes on the blocked queue if it is waiting for IO (after a syscall like read()). It goes to the ready queue if it was preempted by the kernel so that another process could have some time on that core.
I think people often have the misconception that each process runs all the time until completion. It cannot be that way, otherwise most processes would never get time on any core.
Instead, if the number of processes is higher than the number of cores, the kernel uses the per-core local APIC's timer (the local APIC is on x86-64, but you will have similar mechanisms on every architecture) to give every process tied to that core a time slice. When a certain process is scheduled on a certain core, the kernel starts the timer with that process's time slice. When the time slice has elapsed, the local APIC triggers an interrupt letting the kernel know that another process should be scheduled on that core.
This is why a process can be preempted in the middle of its execution. The process is still considered ready to run; it is simply that its time slice was exhausted, so the kernel decides to give some time to another process. The preempted process will be given more time later. Since, in human terms, the time slice of each process is very short, it gives the impression that each process is running consistently without interruption, when that is not really the case. (By the way, this diagram is very Linux-kernel specific.)

Vulkan - How to efficiently copy data to CPU *and* wait for it

Let's say I want to execute the following commands:
cmd_buff start
dispatch (write to texture1)
copy (texture1 on gpu to buffer1 host-visible)
dispatch (write to texture2)
cmd_buff end
I'd like to know as soon as possible when buffer1's data are available.
My idea here is to have a waiting thread on which I'd wait for the copy to have completed. What I'd do is first split the above list of cmds into:
cmd_buff_1 start
dispatch (write to texture1)
copy (texture1 on gpu to buffer1 host-visible)
cmd_buff_1 end
and:
cmd_buff_2 start
dispatch (write to texture2)
cmd_buff_2 end
Now, I'd call vkQueueSubmit with cmd_buff_1 and some fence1, followed by another vkQueueSubmit with cmd_buff_2 and a NULL fence.
On the waiting thread I'd call vkWaitForFences( fence1 ).
That's how I see such an operation. However, I'm wondering whether that is optimal, and whether there is any way to put a direct sync inside cmd_buff_1 so that I wouldn't need to split the command buffer in two.
Never break up submit operations just to test fences; submit operations are too heavyweight to do that. If the CPU needs to check to see if work on the GPU has reached a specific point, there are many options other than a fence.
The simplest mechanism for something like this is to use an event. Set the event after the transfer operation, then use vkGetEventStatus on the CPU to see when it is ready. That's a polling function, so a waiting CPU thread won't immediately wake up when the data is ready (but then, there's no guarantee that would happen with a non-polling function either).
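A sketch of that approach; copyDone is a hypothetical VkEvent, and buffer1 is assumed to be bound to HOST_VISIBLE | HOST_COHERENT memory (otherwise add vkInvalidateMappedMemoryRanges before reading):

/* Setup, once: */
VkEventCreateInfo eci = { .sType = VK_STRUCTURE_TYPE_EVENT_CREATE_INFO };
VkEvent copyDone;
vkCreateEvent(device, &eci, NULL, &copyDone);

/* When recording, right after the copy: make the transfer writes visible
   to the host, then signal the event. */
VkMemoryBarrier toHost = {
    .sType         = VK_STRUCTURE_TYPE_MEMORY_BARRIER,
    .srcAccessMask = VK_ACCESS_TRANSFER_WRITE_BIT,
    .dstAccessMask = VK_ACCESS_HOST_READ_BIT,
};
vkCmdPipelineBarrier(cmd_buff,
                     VK_PIPELINE_STAGE_TRANSFER_BIT, VK_PIPELINE_STAGE_HOST_BIT,
                     0, 1, &toHost, 0, NULL, 0, NULL);
vkCmdSetEvent(cmd_buff, copyDone, VK_PIPELINE_STAGE_TRANSFER_BIT);
/* ...the second dispatch can follow in the same command buffer... */

/* Waiting thread: poll until the GPU has passed the event. */
while (vkGetEventStatus(device, copyDone) == VK_EVENT_RESET)
    ;   /* or sleep briefly between polls */
/* buffer1's mapped memory is now safe to read. */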
If timeline semaphores are available to you, you can wait for them to reach a particular counter value on the CPU with vkWaitSemaphores. This requires that you break the batch up into two batches, but they can both be submitted in the same submit command.
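A sketch with a timeline semaphore (Vulkan 1.2, or the VK_KHR_timeline_semaphore extension); the names timeline, cmd_buff_1, cmd_buff_2 and the counter value 1 are illustrative:

/* Create the timeline semaphore with initial value 0. */
VkSemaphoreTypeCreateInfo typeInfo = {
    .sType         = VK_STRUCTURE_TYPE_SEMAPHORE_TYPE_CREATE_INFO,
    .semaphoreType = VK_SEMAPHORE_TYPE_TIMELINE,
    .initialValue  = 0,
};
VkSemaphoreCreateInfo sci = {
    .sType = VK_STRUCTURE_TYPE_SEMAPHORE_CREATE_INFO,
    .pNext = &typeInfo,
};
VkSemaphore timeline;
vkCreateSemaphore(device, &sci, NULL, &timeline);

/* Two batches, one vkQueueSubmit: batch 0 signals value 1 when
   cmd_buff_1 (dispatch + copy) finishes; batch 1 runs cmd_buff_2. */
uint64_t one = 1;
VkTimelineSemaphoreSubmitInfo tsi = {
    .sType                     = VK_STRUCTURE_TYPE_TIMELINE_SEMAPHORE_SUBMIT_INFO,
    .signalSemaphoreValueCount = 1,
    .pSignalSemaphoreValues    = &one,
};
VkSubmitInfo submits[2] = {
    { .sType = VK_STRUCTURE_TYPE_SUBMIT_INFO, .pNext = &tsi,
      .commandBufferCount = 1, .pCommandBuffers = &cmd_buff_1,
      .signalSemaphoreCount = 1, .pSignalSemaphores = &timeline },
    { .sType = VK_STRUCTURE_TYPE_SUBMIT_INFO,
      .commandBufferCount = 1, .pCommandBuffers = &cmd_buff_2 },
};
vkQueueSubmit(queue, 2, submits, VK_NULL_HANDLE);

/* Waiting thread: blocks until the timeline reaches 1. */
VkSemaphoreWaitInfo waitInfo = {
    .sType          = VK_STRUCTURE_TYPE_SEMAPHORE_WAIT_INFO,
    .semaphoreCount = 1,
    .pSemaphores    = &timeline,
    .pValues        = &one,
};
vkWaitSemaphores(device, &waitInfo, UINT64_MAX);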

How to choose proper watchdog timer value

The question is:
How should I configure the Watchdog Timer if I have 3 tasks with different priorities and different execution time?
Say:
Task1: highest priority, execution time = 5 ms
Task2: medium priority, execution time = 10 ms
Task3: lowest priority, execution time = 15 ms
The proper way to do this is:
1. Create a special watchdog task that waits on 3 semaphores/mutexes/message queues (sequentially) in a loop.
2. Feed those three semaphores from your worker tasks (each task feeds one semaphore of the watchdog task).
3. Reset the watchdog timer in the watchdog task's loop, using the sum of the loop timings of all worker tasks (worst case) plus some headroom.
If any of your worker tasks, or the watchdog task itself, hangs, it will eventually block the watchdog task and the watchdog will expire. You want to make sure the watchdog is only re-triggered when all tasks are running properly. Use the simplest inter-task communication means your RTOS provides, to make it as robust as possible against crashes.
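Here is a sketch in VxWorks terms; hwWatchdogKick is a hypothetical BSP routine that feeds your hardware watchdog, and the 60 ms budget is just the question's 5 + 10 + 15 = 30 ms worst case plus headroom:

#include <vxWorks.h>
#include <semLib.h>
#include <sysLib.h>

SEM_ID alive[3];                    /* one "heartbeat" semaphore per worker */

extern void hwWatchdogKick(void);   /* hypothetical: feeds the HW watchdog */

/* Each worker task calls this once per loop iteration with its own id. */
void taskHeartbeat(int id)
{
    semGive(alive[id]);
}

void watchdogTask(void)
{
    int i;
    int timeoutTicks = (60 * sysClkRateGet()) / 1000;   /* ~60 ms in ticks */

    for (i = 0; i < 3; i++)
        alive[i] = semBCreate(SEM_Q_FIFO, SEM_EMPTY);

    for (;;) {
        for (i = 0; i < 3; i++) {
            /* If worker i never checks in, semTake times out, we stop
               feeding, and the hardware watchdog resets the system. */
            if (semTake(alive[i], timeoutTicks) != OK)
                return;
        }
        hwWatchdogKick();           /* all three checked in: feed the dog */
    }
}

The hardware watchdog's own period must of course be longer than one full pass of this loop.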
Look at this definition:
A watchdog timer is an electronic timer that is used to detect and recover from computer malfunctions. During normal operation, the computer regularly resets the watchdog timer to prevent it from elapsing, or "timing out"
So you set the watchdog timer value so that the watchdog triggers when you are sure not all 3 tasks are running. To be more accurate, you reset the timer only when you are sure all of the tasks are running; when a single task has stopped for some unknown reason, you want the watchdog to trigger (you can read more on this).
Now the real question: what should the watchdog period be? You want the system to restart only when something is genuinely wrong, so include all wait times and delays in the tasks, work out the worst-case (maximum) time for all tasks to execute at least once, and set the timer value a little higher than that maximum. For the tasks above, the execution times alone sum to 5 ms + 10 ms + 15 ms = 30 ms, so the watchdog period should sit somewhat above that once waits and jitter are included.

How does the processor know to switch to a higher-priority process?

I read that the process scheduler will replace the process currently being executed by the CPU
with a higher-priority process. At any point only one process is executed by the processor; in that case, where is the scheduler running to notify the CPU about the high-priority process while the CPU is busy executing the low-priority process?
The process scheduler is the component of the operating system that is responsible for deciding whether the currently running process should continue running and, if not, which process should run next.
To help the scheduler monitor processes and the amount of CPU time that they use, a programmable interval timer interrupts the processor periodically (typically 50 or 60 times per second). This timer is programmed when the operating system initializes itself. At each interrupt, the operating system’s scheduler gets to run and decide whether the currently running process should be allowed to continue running or whether it should be suspended and another ready process allowed to run. This is the mechanism used for preemptive scheduling.
So, basically, the process scheduler sits in main memory like the rest of the OS, but it is only activated when invoked by an interrupt. Hence, it isn't running all the time.
BTW, that was a great conceptual question to answer. Best wishes for your topic.
The higher-priority thread/process will preempt the lower-priority thread when an interrupt causes the scheduler to be run to decide on what set of threads to run next, and the scheduler algorithm decides that the lower-priority thread needs to be replaced by the higher-priority one.
Interrupts come in two flavours:
Software interrupts (syscalls) from threads that are already running, which change the state of threads, e.g. by signaling an event, mutex or semaphore upon which another thread is waiting, thereby making it ready to run.
Hardware interrupts that cause a driver to run, where the driver invokes the scheduler on exit because an I/O operation has completed or some timeout interval has expired that needs to change the set of running threads (e.g. disk, NIC, keyboard, mouse, timer).
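In VxWorks terms, the asynchronous path often looks like the sketch below; myDeviceIsr is hypothetical, and semGive is one of the few kernel calls that is legal at interrupt level:

#include <vxWorks.h>
#include <semLib.h>

SEM_ID dataReady;   /* a high-priority task pends on this with semTake() */

/* Hardware ISR, e.g. attached with intConnect(). */
void myDeviceIsr(int arg)
{
    /* ...acknowledge the device here (hardware-specific)... */

    /* Makes the pended task READY. On interrupt exit the scheduler runs,
       and if that task outranks the interrupted one, it gets the CPU. */
    semGive(dataReady);
}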

Who runs the scheduler in operating systems when CPU is given to user processes?

Suppose there are 10 processes P1, P2, ..., P10 that are scheduled using a round-robin policy to access the CPU.
Now when process P1 is using the CPU and the current time slice has expired, P1 needs to be preempted and P2 needs to be scheduled. But since P1 is using the CPU, who preempts P1 and schedules P2?
We may say the scheduler does this, but how does the scheduler run when the CPU is held by P1?
It's exactly like jcoder said, but let me elaborate (and make an answer instead of a comment).
Basically, when your OS boots, it initializes an interrupt vector through which the CPU, upon a given interrupt, calls the appropriate interrupt handler.
The OS, also during boot time, will check for the available hardware and it'll detect that your board has x number of timers.
Timers are simply hardware circuits that tick using a given clock speed and they can be set to send an interrupt after a given time (each with a different precision usually, depending on its clock speed and other things)
After the OS detects the timers, it sets one of them, for example, to send an interrupt every 50 ms; from then on, every 50 ms the CPU will stop whatever it's doing and invoke that interrupt handler, usually the scheduler code, which in turn will check what the currently running process is and decide whether or not to keep it, depending on the scheduling policy.
The scheduler, like most of the OS actually, is a passive thing that acts only when there's some event.
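To make the mechanism concrete, here is a toy round-robin tick handler; it is a sketch, not code from any real kernel, and context_switch and timer_ack stand in for architecture-specific routines:

#define TIME_SLICE_TICKS 5          /* e.g. 5 ticks of a 10 ms clock = 50 ms */

enum state { RUNNING, READY, BLOCKED };

struct task {
    enum state   state;
    int          slice_left;        /* ticks remaining in this turn */
    struct task *next;              /* ready-queue link */
};

static struct task *current;        /* task that owns the CPU right now */
static struct task *ready_head, *ready_tail;

extern void context_switch(struct task *from, struct task *to);  /* arch-specific */
extern void timer_ack(void);                                     /* arch-specific */

/* Entered by hardware on every timer interrupt: this is the only way the
   scheduler gets to run while a user process holds the CPU. */
void timer_interrupt_handler(void)
{
    struct task *prev = current;

    timer_ack();

    if (--current->slice_left > 0)
        return;                     /* slice not expired: resume current */

    /* Round robin: rotate the current task to the back of the ready queue. */
    current->state = READY;
    current->slice_left = TIME_SLICE_TICKS;
    current->next = NULL;
    if (ready_tail) ready_tail->next = current; else ready_head = current;
    ready_tail = current;

    /* Pick the next READY task and switch to it. */
    current = ready_head;
    ready_head = ready_head->next;
    if (ready_head == NULL) ready_tail = NULL;
    current->state = RUNNING;
    context_switch(prev, current);
}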
Based on your question, P1 needs to be preempted and P2 needs to be scheduled, so there is the concept of a CPU scheduler (a component of the operating system that continuously watches the running process), whose responsibility is to select a process from among the processes in memory that are ready to execute and to allocate the CPU to it.
CPU scheduling takes place when a process:
Switches from running to waiting state
Switches from running to ready state
Switches from waiting to ready
Terminates
The dispatcher module then gives control of the CPU to the process selected by the CPU scheduler.