Consider: process(a)
According to the text I have:
A process is first entered at the time of simulation, at which time it is executed until it suspends itself due to a wait statement or a sensitivity list.
Am I right in inferring that a process WILL have to run once even without any events on the sensitivity list? Also, if there are multiple processes inside an architecture, are they all executed once?
AFAIK, the sensitivity list (e.g. process (x, y)) is just shorthand for a wait on x, y; statement just before the end process of the process (p. 152, "A Designer's Guide to VHDL", 3rd edition). So all processes will run at least once.
There are 3 stages involved in running a VHDL simulation. These are elaboration, initialisation and simulation.
At the beginning of the initialisation phase, the current time is set to 0. The simulation kernel then places all of the simulation processes in the active processes queue. Each simulation process is then taken from this queue and executed until it suspends. The order of execution of simulation processes during initialisation is not important. The initial execution of each simulation process ensures that all initial transactions are scheduled so that the simulation can continue.
A simulation process is suspended either implicitly or explicitly. A process with a sensitivity list is suspended implicitly after its sequential statements have been executed to the end of the process. A process with one or more wait statements is suspended explicitly when its first wait statement is executed.
When the active processes queue is empty, the initialisation phase is complete.
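As a toy model of that loop (all names invented for illustration; a real simulation kernel is far more involved), think of something like:

    #include <deque>
    #include <functional>

    // Toy model of the initialisation phase: every process is queued once
    // and run until it suspends. All names here are invented for illustration.
    int main() {
        std::deque<std::function<void()>> active_processes = {
            [] { /* process 1: runs until its first (implicit or explicit) wait */ },
            [] { /* process 2: likewise */ },
        };
        // Current time is 0. Run each process exactly once, in any order.
        while (!active_processes.empty()) {
            auto process = active_processes.front();
            active_processes.pop_front();
            process(); // returning here models the process suspending
        }
        // The queue is empty: initialisation is complete and timed simulation begins.
    }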
So to answer your question, all processes will run once during the initialisation phase.
I have read about a co-operative scheduler, which does not let a higher-priority task run until the lower-priority task blocks itself. So if there is no delay in a task, the lower-priority task will take the CPU forever. Is that correct? I thought non-preemptive was just another name for cooperative, but another article confused me: it says that in a non-preemptive scheduler, a higher-priority task can interrupt a lower-priority task at the sys tick, but not in the middle between ticks. So which is correct?
Are cooperative and non-preemptive actually the same?
And rate monotonic is one type of preemptive scheduler, right?
Its priorities are not set manually; the scheduling algorithm decides priority based on execution time or deadline. Is that correct?
Is rate monotonic better than a fixed-priority preemptive kernel (the one FreeRTOS uses)?
These terms can never fully cover the range of possibilities that can exist. The truth is that people can write whatever kind of scheduler they like, and then other people try to put what is written into one or more categories.
Pre-emptive implies that an interrupt (e.g. from a clock or peripheral) can cause a task switch, in addition to the switches that occur when a scheduling OS function is called (like a delay, or taking or giving a semaphore).
Co-operative means that the task function must either return or else call an OS function to cause a task switch.
Some OSes might have one specific timer interrupt which causes context switches. The ARM SysTick interrupt is suitable for this purpose. Because the tasks themselves don't have to call a scheduling function, this is one kind of pre-emption.
If a scheduler uses a timer to allow multiple tasks of equal priority to share processor time then one common name for this is a "round-robin scheduler". I have not heard the term "rate monotonic" but I assume it means something very similar.
It sounds like the article you have read describes a very simple pre-emptive scheduler, where tasks do have different priorities, but task switching can only occur when the timer interrupt runs.
Co-operative scheduling is non-preemptive, but "non-preemptive" might describe any scheduler that does not use preemption. It is a rather non-specific term.
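To make the co-operative case concrete, here is a minimal sketch (entirely invented, not any particular OS): each task is a function that must return to hand the CPU back, so a single task that never returns starves everything else.

    #include <functional>
    #include <vector>

    // Hypothetical co-operative scheduler: a task switch can only happen
    // when the running task returns (or would call an OS yield function).
    int main() {
        std::vector<std::function<void()>> tasks = {
            [] { /* task A: do a little work, then return */ },
            [] { /* task B: do a little work, then return */ },
            // [] { while (true) {} } // a task like this would hang the system
        };
        while (true) {               // the scheduler's main loop
            for (auto& task : tasks) // run each task to completion, in turn
                task();              // control comes back only when the task returns
        }
    }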
The article you describe (without citation) however, seems confused. Context switching on a tick event is preemption if the interrupted task did not explicitly yield. Not everything you read on the Internet is true or authoritative; always check your sources to determine their level of expertise. Enthusiastic amateurs abound.
A fully preemptive priority based scheduler can context switch on "scheduling events" which include not just the timer tick, but also whenever a running thread or interrupt handler triggers an IPC or synchronisation mechanism on which a higher-priority thread than the current thread is waiting.
What you describe as "non-preemptive" I would suggest is in fact a time-triggered preemptive scheduler, where a context switch occurs only on a tick event and not asynchronously on, say, a message queue post or a semaphore give.
A rate-monotonic scheduler does not necessarily determine the priority automatically (in fact I have never come across one that did). Rather the priority is set (manually) according to rate-monotonic analysis of the tasks to be executed. It is "rate-monotonic" in the sense that it supports rate-monotonic scheduling. It is still possible for the system designer to apply entirely inappropriate priorities or partition tasks in such a way that they are insufficiently deterministic for RMS to actually occur.
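For what it's worth, the classic rate-monotonic recipe is: assign higher priority to tasks with shorter periods, then check the Liu and Layland utilisation bound n(2^(1/n) - 1) as a sufficient schedulability test. A small sketch with made-up task figures:

    #include <cmath>
    #include <cstdio>

    // Sketch of rate-monotonic analysis with made-up task figures: priorities
    // are assigned by hand (shortest period = highest priority), and the Liu &
    // Layland bound n*(2^(1/n) - 1) gives a sufficient schedulability test.
    int main() {
        struct { double period_ms, exec_ms; } tasks[] = {
            {10.0, 2.0},   // shortest period -> assign the highest priority
            {40.0, 10.0},
            {100.0, 20.0}, // longest period -> assign the lowest priority
        };
        const int n = 3;
        double utilisation = 0.0;
        for (auto& t : tasks) utilisation += t.exec_ms / t.period_ms;
        double bound = n * (std::pow(2.0, 1.0 / n) - 1.0); // ~0.780 for n = 3
        std::printf("U = %.3f, bound = %.3f -> %s\n", utilisation, bound,
                    utilisation <= bound ? "schedulable" : "needs exact analysis");
        return 0;
    }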
Most RTOS schedulers support RMS, including FreeRTOS. Most RTOSes also support variable task priority, both as a priority-inversion mitigation and via an API. But to be honest, if your application relies on either, I would argue that it is a failed design.
I read that exceptions are raised by instructions and interrupts are raised by external events.
But I couldn't understand a few things:
In which type do we stop immediately, and in which do we let the current line of code finish?
In both cases, after doing (1), we run the next program. Is that right?
The terminology is used differently by different authors.
The hardware exception mechanism is used both by software-caused exceptions and by external interrupts.
In which type do we stop immediately, and in which do we let the current line of code finish?
The processor can do what its designers want up to a point.
If there is an external interrupt, the processor may choose to finish some of the work in progress before taking the interrupt.
If there is a software-caused exception, the processor simply cannot execute that instruction (or any instructions after it) and must abort its execution, taking the exception without completing the offending instruction.
In both cases, after doing (1), we run the next program. Is that right?
In the case of an external interrupt, the most efficient option is to resume the program that was interrupted, as the caches and page tables are hot for that process. However, an external interrupt may change the runnable status of another, higher-priority process, or may end the timeslice of the interrupted process, indicating that another process should now get CPU time instead of the one interrupted.
In the case of a software-caused exception, it could be a non-resumable situation (null pointer dereference), in which case, yes, another process gets the CPU afterwards. It could be a resumable situation, like an I/O request or a page fault. If servicing the situation requires interfacing with peripherals, that may block the interrupted process, so another process would get some CPU time.
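A rough sketch of that decision logic (the names and the enums below are invented for illustration and mirror no real kernel's API):

    #include <cstdio>

    // Invented names for illustration; no real kernel looks like this.
    enum class Trap { ExternalInterrupt, PageFault, IoRequest, NullDereference };
    enum class Next { ResumeSameProcess, RetryInstruction, BlockAndSwitch, KillAndSwitch };

    Next handle_trap(Trap t) {
        switch (t) {
        case Trap::ExternalInterrupt: return Next::ResumeSameProcess; // caches/TLB still hot
        case Trap::PageFault:         return Next::RetryInstruction;  // resumable: re-run the faulting instruction
        case Trap::IoRequest:         return Next::BlockAndSwitch;    // resumable, but blocks on the device
        case Trap::NullDereference:   return Next::KillAndSwitch;     // non-resumable
        }
        return Next::KillAndSwitch;
    }

    int main() {
        std::printf("page fault -> action %d\n",
                    static_cast<int>(handle_trap(Trap::PageFault)));
        return 0;
    }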
With reference to question SystemC module not working with SC_THREAD, a timed simulation is imitated using next_trigger(). As I understood from this article, this restarts the thread after the specified time:
next_trigger(double, sc_time_unit): The process shall be triggered when specified time has elapsed.
I.e., after the specified time it effectively executes the operations that follow this instruction, but it also re-executes the operations found before that instruction. I have the feeling that the repeated use of next_trigger within an SC_THREAD may result in 'glitches' in the simulation.
Q1: Is my feeling correct?
Q2: Is there another possibility to delay execution (something that suspends the thread for the given time, rather than restarting it)?
First of all, next_trigger can only be used with SC_METHODs, as mentioned here:
next_trigger() is used with process methods, ones which are not threads.
Here are a few pointers in terms of SystemC processes:
SC_METHODs are processes which must complete their execution in one pass (e.g. a simple function call).
Note: do not use while(1) loops in SC_METHODs.
SC_THREADs are processes which run as a separate thread of execution; you must explicitly use wait() statements here to synchronize with the SystemC kernel's simulation. This is where you will mostly find while(1) (infinite) loops in use.
For suspending the thread for some simulation time, you can use the wait() statement to introduce the desired delay.
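For example, here is a minimal sketch (the module and process names are my own invention) contrasting the two: the SC_THREAD suspends in place with wait() and resumes right after it, while the SC_METHOD always restarts from the top when next_trigger() fires.

    #include <systemc.h>

    // Minimal sketch; "delay_demo", "worker" and "poller" are invented names.
    SC_MODULE(delay_demo) {
        SC_CTOR(delay_demo) {
            SC_THREAD(worker);   // a thread: may call wait()
            SC_METHOD(poller);   // a method: runs to completion each trigger
        }

        void worker() {
            while (true) {
                // ... do some work ...
                wait(10, SC_NS); // suspend here; execution resumes on the next line
            }
        }

        void poller() {
            // ... do some work; no infinite loop in a method ...
            next_trigger(10, SC_NS); // re-run this method from the top in 10 ns
        }
    };

    int sc_main(int argc, char* argv[]) {
        delay_demo top("top");
        sc_start(100, SC_NS);    // simulate 100 ns
        return 0;
    }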
But for a better understanding you need to know the difference between static and dynamic sensitivity in SystemC; refer here for more information.
I am reading about process management, and I have a few doubts.
What is meant by an I/O request? For example: a process is in the running state while it is executing, and it is in the waiting state if it is waiting for the completion of an I/O request. I don't understand what is meant by an I/O request. Can you please give an example to elaborate?
Another doubt: let's say a process is executing and suddenly an interrupt occurs, so the process stops its execution and is put in the ready state. Is it possible that some other process begins its execution while the interrupt is being processed?
Regarding the first question:
A simple way to think about it...
Your computer has lots of components: CPU, hard drive, network card, sound card, GPU, etc. All of those work in parallel, independently of each other. They are also generally slower than the CPU.
This means that whenever a process makes a call that down the line (on the OS side) ends up communicating with an external device, there is no point in the OS being stuck waiting for the result, since the time that operation takes to complete is probably an eternity from the CPU's point of view.
So the OS fires off whatever communication the process requested (call it an I/O request), flags the process as waiting for I/O, and switches execution to another process, so the CPU can do something useful instead of sitting around blocked waiting for the I/O request to complete.
When the external device finishes whatever operation was requested, it generates an interrupt, so the OS is informed the work is done, and it can then flag the blocked process as ready again.
This is all a very simplified view of course, but that's the main idea. It allows the CPU to do useful work instead of waiting for IO requests to complete.
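As a toy sketch of that sequence (every name below is invented; nothing here is OS-specific):

    #include <cstdio>

    // Toy model of the lifecycle described above; all names are invented.
    enum class State { Running, WaitingForIO, Ready };

    struct Process { const char* name; State state; };

    void issue_io_request(Process& p) {
        // 1. The OS starts the device operation, then marks the caller as
        //    waiting instead of spinning until the (slow) device finishes.
        p.state = State::WaitingForIO;
        std::printf("%s: blocked waiting for I/O\n", p.name);
    }

    void device_interrupt(Process& p) {
        // 2. The device raises an interrupt when done; the OS marks the
        //    process ready so the scheduler can pick it up again.
        p.state = State::Ready;
        std::printf("%s: I/O done, ready to run\n", p.name);
    }

    int main() {
        Process a{"proc-a", State::Running};
        issue_io_request(a);   // the CPU is now free to run another process
        device_interrupt(a);   // later: device completion brings proc-a back
        return 0;
    }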
Regarding the second question:
It's tricky, even for single CPU machines, and depends on how the OS handles interrupts.
For code simplicity, a simple OS might, for example, process each interrupt in one go as it happens, and only resume whatever process it deems appropriate once the interrupt handling is done. So in this case, no other process would run until the interrupt handling is complete.
In practice, things get a bit more complicated for performance and latency reasons.
If you think of an interrupt's lifetime as just another task for the CPU (from when the interrupt starts to the point the OS considers its handling complete), you can effectively code the interrupt handling to run in parallel with other things.
Just think of the interrupt as a notification for the OS to start another task (the interrupt handling). It grabs whatever context it needs at the point the interrupt started, then keeps processing that task in parallel with other processes.
I/O request generally just means a request to do either input, output, or both. The exact meaning varies depending on your context: HTTP, networks, console operations, or maybe some process in the CPU.
A process is waiting for I/O: say, for example, you were writing a program in C to accept a user's name on the command line and then print 'Hello User' back. Your code will go into the waiting state until the user enters their name and hits Enter. This is a higher-level example, but even a very low-level process executing in your computer's processor works on the same basic principle.
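A minimal version of that program (sketched here in C++ rather than C) blocks exactly as described:

    #include <iostream>
    #include <string>

    int main() {
        std::string name;
        std::cout << "Enter your name: ";
        std::getline(std::cin, name); // the process sits in the waiting state here
                                      // until the user types a line and hits Enter
        std::cout << "Hello " << name << '\n';
        return 0;
    }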
Can the processor work on other processes when the current one is interrupted and waiting on something? Yes! You'd better hope it does. That's what scheduling algorithms and stacks are for. However, the real answer depends on what architecture you are on, whether it supports parallel or serial processing, etc.
When a process executing in user space issues a system call or triggers an exception, it enters kernel space and the kernel starts executing on behalf of the process. The kernel is said to be executing in process context. Similarly, when an interrupt occurs, the kernel executes in interrupt context. I have also read about kernel threads, where kernel processes run in the background.
My Questions are :
Does the kernel execute in any other contexts?
Suppose a process in user space never executes a system call or triggers an exception, and no interrupt occurs; does the kernel code ever execute?
The kernel runs periodically: it sets a timer to fire an interrupt at some predefined frequency (100 Hz on Linux 2.4/x86, 1000 Hz on early Linux 2.6/x86, 250 Hz on newer Linux 2.6/x86).
The kernel needs to do this in order to do preemptive multitasking. OTOH, OSes that only do cooperative multitasking (Windows 3.1, classic Mac OS) needn't do this, and only switch tasks in response to some call from the running task (which could lead to runaway tasks hanging the whole system).
Note that there is some effort to optimize the use of this timer: newer Linux is smarter when there are no runnable tasks, setting the timer as far in the future as it can, to allow the CPU to sleep longer and deeper and preserve power (the CONFIG_NOHZ kernel config option). Running powertop will show the number of wakeups per second, which on an idle system can be much lower than the 250 wakeups per second you'd expect from a traditional implementation.
Suppose a process in user space never executes a system call or triggers an exception, and no interrupt occurs; does the kernel code ever execute?
Assume you have a process p that is running the following code: while(1);. This code will never call into the kernel and won't cause any faults. (It might have set an alarm(3) earlier, causing a signal to be delivered in the future, or it might exceed the setrlimit(2) CPU limit, in which cases the kernel will deliver a signal to the process.)
Or, if another process sends p a signal via kill(2), the kernel will deliver that signal to the process as well.
The signal delivery will either cause a signal handler to run, do nothing (if the signal is ignored or masked), or take the default signal action (which might be nothing or termination).
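For instance, here is a small POSIX sketch of that: the loop itself makes no system calls, yet the kernel still runs on the process's behalf to deliver SIGALRM and invoke the handler.

    #include <csignal>
    #include <cstdio>
    #include <unistd.h>

    volatile std::sig_atomic_t got_alarm = 0;

    void on_alarm(int) { got_alarm = 1; } // async-signal-safe: just set a flag

    int main() {
        std::signal(SIGALRM, on_alarm); // install a handler for SIGALRM
        alarm(2);                       // ask the kernel to send SIGALRM in ~2 s
        while (!got_alarm) {            // spin: no syscalls, no faults
        }
        std::puts("the kernel interrupted the loop to deliver SIGALRM");
        return 0;
    }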
And, of course, the process execution can be interrupted so the processor can handle interrupts; or a higher-priority process can preempt it.