Timed simulation with SystemC - systemc

With reference to question SystemC module not working with SC_THREAD, a timed simulation is imitated using next_trigger(). As I understood from this article, this restarts the thread after the specified time:
next_trigger(double, sc_time_unit): The process shall be triggered when specified time has elapsed.
That is, after the specified time has elapsed it executes the operations that follow this instruction, but it also re-executes the operations that come before it. I have the feeling that repeated use of next_trigger within an SC_THREAD may result in 'glitches' in the simulation.
Q1: Is my feeling correct?
Q2: Is there another way to delay execution (something that suspends the thread for the given time, rather than restarting it)?

First of all, next_trigger() can only be used in SC_METHODs, as mentioned here:
next_trigger() is used with method processes, ones which are not threads.
Here are a few pointers regarding SystemC processes:
SC_METHODs are processes which must complete their execution in one pass (e.g. a simple function call).
Note: Do not use while(1) loops in SC_METHODs.
SC_THREADs are processes which are separate threads of execution; you must explicitly use wait() statements here to synchronize with the SystemC simulation kernel. This is where you will mostly find while(1) (infinite) loops in use.
To suspend a thread for some simulation time, you can use a wait() statement to introduce the desired delay.
For a better understanding, look into the difference between static and dynamic sensitivity in SystemC; refer here for more information.
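To make the difference concrete, here is a minimal sketch (module and instance names are invented for illustration): the SC_METHOD has to re-arm itself with next_trigger() and its whole body re-runs on every trigger, while the SC_THREAD keeps its state and simply resumes after each timed wait().

```cpp
// Minimal sketch: an SC_METHOD re-armed with next_trigger() versus an
// SC_THREAD that suspends itself with timed wait() calls.
#include <systemc.h>

SC_MODULE(timing_demo) {
    SC_CTOR(timing_demo) {
        SC_METHOD(method_proc);   // runs to completion each time it is triggered
        SC_THREAD(thread_proc);   // started once, keeps state across wait() calls
    }

    void method_proc() {
        // The whole body re-executes on every trigger, so any "earlier"
        // statements run again each time -- the behaviour the question describes.
        std::cout << sc_time_stamp() << ": method_proc" << std::endl;
        next_trigger(10, SC_NS);  // dynamic sensitivity: re-trigger in 10 ns
    }

    void thread_proc() {
        while (true) {
            std::cout << sc_time_stamp() << ": thread_proc" << std::endl;
            wait(10, SC_NS);      // suspend here; resume exactly at this point
        }
    }
};

int sc_main(int, char*[]) {
    timing_demo demo("demo");     // hypothetical instance name
    sc_start(50, SC_NS);
    return 0;
}
```

If your logic needs to pick up where it left off after a delay, the SC_THREAD/wait() form avoids the re-execution that the question worries about.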

Related

Difference between non-preemptive, cooperative, and rate-monotonic schedulers?

I have read about cooperative schedulers, which do not let a higher-priority task run until the lower-priority task blocks itself. So if a task contains no delays, the lower-priority task will hold the CPU forever; is that correct? I had thought that non-preemptive was just another name for cooperative, but another article confused me by saying that in a non-preemptive scheduler a higher-priority task can interrupt a lower-priority task at the system tick, just not in the middle between ticks. So which is correct?
Are cooperative and non-preemptive actually the same thing?
And rate-monotonic is one type of preemptive scheduler, right?
Its priorities are not set manually; the scheduling algorithm decides priority based on execution time or deadline. Is that correct?
Is rate-monotonic better than a fixed-priority preemptive kernel (the kind FreeRTOS uses)?
These terms can never fully cover the range of possibilities that can exist. The truth is that people can write whatever kind of scheduler they like, and then other people try to put what is written into one or more categories.
Pre-emptive implies that an interrupt (e.g. from a clock or peripheral) can cause a task switch to occur, in addition to switches that occur when a scheduling OS function is called (such as a delay, or taking or giving a semaphore).
Co-operative means that the task function must either return or else call an OS function to cause a task switch.
Some OSes might have one specific timer interrupt which causes context switches. The ARM SysTick interrupt is suitable for this purpose. Because the tasks themselves don't have to call a scheduling function, this is one kind of pre-emption.
If a scheduler uses a timer to allow multiple tasks of equal priority to share processor time then one common name for this is a "round-robin scheduler". I have not heard the term "rate monotonic" but I assume it means something very similar.
It sounds like the article you have read describes a very simple pre-emptive scheduler, where tasks do have different priorities, but task switching can only occur when the timer interrupt runs.
Co-operative scheduling is non-preemptive, but "non-preemptive" might describe any scheduler that does not use preemption. It is a rather non-specific term.
The article you describe (without citation), however, seems confused. Context switching on a tick event is preemption if the interrupted task did not explicitly yield. Not everything you read on the Internet is true or authoritative; always check your sources to determine their level of expertise. Enthusiastic amateurs abound.
A fully preemptive priority based scheduler can context switch on "scheduling events" which include not just the timer tick, but also whenever a running thread or interrupt handler triggers an IPC or synchronisation mechanism on which a higher-priority thread than the current thread is waiting.
What you describe as "non-preemptive" I would suggest is in fact a time-triggered preemptive scheduler, where a context switch occurs only on a tick event and not asynchronously on, say, a message queue post or a semaphore give.
A rate-monotonic scheduler does not necessarily determine the priority automatically (in fact I have never come across one that did). Rather the priority is set (manually) according to rate-monotonic analysis of the tasks to be executed. It is "rate-monotonic" in the sense that it supports rate-monotonic scheduling. It is still possible for the system designer to apply entirely inappropriate priorities or partition tasks in such a way that they are insufficiently deterministic for RMS to actually occur.
Most RTOS schedulers support RMS, including FreeRTOS. Most RTOS also support variable task priority as both a priority inversion mitigation, and via an API. But to be honest if your application relies on either I would argue that it is a failed design.
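The "analysis" part can be as simple as the following sketch (task names and periods are invented): the designer orders the task set by period and hands out ordinary fixed priorities accordingly; the RTOS itself never computes anything.

```cpp
// Minimal sketch of the rate-monotonic assignment rule: priorities are
// chosen offline by the designer, shortest period = highest priority.
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

struct TaskSpec {
    std::string name;
    int period_ms;   // activation rate of the task
    int priority;    // to be filled in by the analysis
};

int main() {
    std::vector<TaskSpec> tasks = {
        {"sensor_poll", 10, 0},
        {"control_loop", 50, 0},
        {"logging", 500, 0},
    };

    // Rate-monotonic rule: sort by period, shortest first...
    std::sort(tasks.begin(), tasks.end(),
              [](const TaskSpec& a, const TaskSpec& b) {
                  return a.period_ms < b.period_ms;
              });

    // ...and hand out fixed priorities in that order (higher number =
    // higher priority, as in FreeRTOS).
    int prio = static_cast<int>(tasks.size());
    for (auto& t : tasks) {
        t.priority = prio--;
        std::cout << t.name << ": period " << t.period_ms
                  << " ms -> priority " << t.priority << '\n';
    }
}
```

The resulting numbers are then passed to whatever kernel you use as plain fixed priorities.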

One clock cycle delay in communication between one SC_CTHREAD and another SC_CTHREAD

I am trying to model a simple direct-mapped cache with a main memory module, which is an SC_CTHREAD, and a main memory state machine, which is also an SC_CTHREAD. I am observing a one-clock-cycle delay between writing to a signal from my main memory module and receiving it in the state machine.
How can I do it in only one clock cycle?
You cannot avoid the latency between threads when using an SC_CTHREAD. When writing to an sc_signal from one CTHREAD, the value change will only be visible to another CTHREAD at the next clock edge.
If you must use a CTHREAD (i.e. using high-level synthesis), then the only way to avoid the cross-thread latency is to place both functionalities within a single CTHREAD.
If you only need a behavioral model for simulation, then you could use SC_THREADs and sc_events. One thread can generate an sc_event that is being waited on by the second thread. When the second thread wakes on that event, it can observe sc_signal changes done by the first thread, and then produce an output (aligned with the clock edge if desired). Using sc_events gives the opportunity to sample and update signals "between" clock edges.
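A minimal behavioural sketch of that idea (module, signal and event names are invented, not tied to the original cache model): the producer writes the sc_signal and notifies an sc_event one delta cycle later, so the consumer sees the new value at the same simulation time rather than one clock later.

```cpp
// Behavioural-model sketch: producer notifies an sc_event after writing an
// sc_signal; the consumer waits on that event and reads the new value in
// the same simulation time step.
#include <systemc.h>

SC_MODULE(mem_model) {
    sc_signal<int> data;
    sc_event       data_ready;

    SC_CTOR(mem_model) {
        SC_THREAD(producer);
        SC_THREAD(consumer);
    }

    void producer() {
        wait(10, SC_NS);                  // pretend some memory latency
        data.write(42);
        data_ready.notify(SC_ZERO_TIME);  // delta-cycle notification
    }

    void consumer() {
        wait(data_ready);                 // dynamic sensitivity on the event
        // One delta cycle later the signal update is visible,
        // still at the same simulation time (10 ns here).
        std::cout << sc_time_stamp() << ": read " << data.read() << std::endl;
    }
};

int sc_main(int, char*[]) {
    mem_model m("m");
    sc_start();
    return 0;
}
```

The SC_ZERO_TIME (delta-cycle) notification matters: an immediate notify() would wake the consumer before the signal's update phase, and it would still read the old value.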

operating system - context switches

I have been confused about the issue of context switches between processes, given a round-robin scheduler with a certain time slice (which is roughly what Unix/Windows both use in a basic sense).
So, suppose we have 200 processes running on a single core machine. If the scheduler is using even 1ms time slice, each process would get its share every 200ms, which is probably not the case (imagine a Java high-frequency app, I would not assume it gets scheduled every 200ms to serve requests). Having said that, what am I missing in the picture?
Furthermore, Java and other languages allow putting the running thread to sleep for e.g. 100ms. Am I correct in saying that this does not cause a context switch, and if so, how is this achieved?
So, suppose we have 200 processes running on a single core machine. If the scheduler is using even 1ms time slice, each process would get its share every 200ms, which is probably not the case (imagine a Java high-frequency app, I would not assume it gets scheduled every 200ms to serve requests). Having said that, what am I missing in the picture?
No, you aren't missing anything; the same applies in non-pre-emptive systems. Processes with pre-emptive rights (meaning higher priority compared to other processes) can easily displace a less important process, to the extent that a high-priority process might run, say, 10 times as often as the lowest-priority process (actual results depend entirely on the situation and implementation), up to the point where the former would starve the lowest-priority process.
Talking about processes of similar priority, it depends entirely on the round-robin algorithm which you've mentioned, though which process is picked first is again implementation-dependent. Windows and Unix both utilise round-robin scheduling in this basic sense, but the Linux task scheduler is the Completely Fair Scheduler (CFS).
Furthermore, Java and other languages allow putting the running thread to sleep for e.g. 100ms. Am I correct in saying that this does not cause a context switch, and if so, how is this achieved?
Programming languages and libraries implement "sleep" functionality with the aid of the kernel. Without kernel-level support, they'd have to busy-wait, spinning in a tight loop, until the requested sleep duration elapsed. This would wastefully consume the processor.
For threads that are put to sleep (Thread.sleep(long millis)), the following is generally done in most systems (see the sketch after these steps):
Suspend execution of the process and mark it as not runnable.
Set a timer for the given wait time. Systems provide hardware timers that let the kernel register to receive an interrupt at a given point in the future.
When the timer hits, mark the process as runnable.
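A small sketch of the difference this makes (plain C++ here; Java's Thread.sleep() behaves the same way underneath): std::this_thread::sleep_for ends up in a kernel call such as nanosleep on Linux, so the thread is marked not runnable and a timer wakes it, exactly as in the steps above, while a hand-rolled busy-wait keeps the thread runnable and burns CPU the whole time.

```cpp
// Contrast a busy-wait with a kernel-assisted sleep.
#include <chrono>
#include <iostream>
#include <thread>

using namespace std::chrono;

void busy_wait(milliseconds d) {
    auto deadline = steady_clock::now() + d;
    while (steady_clock::now() < deadline) {
        // spinning: the thread stays runnable and keeps consuming its
        // time slices until the deadline passes
    }
}

int main() {
    busy_wait(milliseconds(100));                    // ~100% of one core for 100 ms
    std::this_thread::sleep_for(milliseconds(100));  // ~0% CPU: blocked in the kernel
    std::cout << "done\n";
}
```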
I hope you are aware of threading models like one-to-one, many-to-one, and many-to-many, so I am not getting into much detail; this is just a reference for yourself.
It might appear to you as if this increases the overhead/complexity, but that's how threads (user threads created in the JVM) are operated upon, and the selection is based upon those threading models which I mentioned above. Check this Quora question and the answers to it, and please go through the best answer, given by Robert Love.
For further reading, I'd suggest the Scheduling Algorithms explanation on OSDev.org and the Operating System Concepts book by Silberschatz, Galvin and Gagne.

spin_lock on non-preemptive linux kernels

I read that on a system with 1 CPU and a non-preemptive Linux kernel (2.6.x), a spin_lock call is equivalent to an empty call, and is thus implemented that way.
I can't understand that: shouldn't it be equivalent to a sleep on a mutex? Even on non-preemptive kernels, interrupt handlers may still be executed, for example, or I might call a function that would put the original thread to sleep. So it's not true that an empty spin_lock call is "safe" as it would be if it were implemented as a mutex.
Is there something I don't get?
If you were to use spin_lock() on a non-preemptive kernel to shield data against an interrupt handler, you'd deadlock (on a single-processor machine).
If the interrupt handler runs while other kernel code holds the lock, it will spin forever, as there is no way for the regular kernel code to resume and release the lock.
Spinlocks can only be used if the lock holder can always run to completion.
The solution for a lock that might be wanted by an interrupt handler is to use spin_lock_irqsave(), which disables interrupts while the spinlock is held. With one CPU, no interrupt handler can run, so there will not be a deadlock. On SMP, an interrupt handler might start spinning on another CPU, but since the CPU holding the lock can't be interrupted, the lock will eventually be released.
To answer the two parts of your question:
Even on non-preemtive kernels interrupt handlers may still be executed for example ...
spin_lock() isn't supposed to protect against interrupt handlers - only user context kernel code. spin_lock_irqsave() is the interrupt-disabling version, and this isn't a no-op on a non-preemptive uniprocessor.
...or I might call a function that would put the original thread to sleep.
It is not allowed to sleep while holding a spin lock. This is the "scheduling while atomic" bug. If you want to sleep, you have to use a mutex instead (and again, mutexes are not a no-op on a non-preemptive uniprocessor).
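To make that concrete, here is a minimal kernel-C sketch of the usual pattern (the lock, data and handler names are invented, not from any real driver): the interrupt handler can take the plain spin_lock(), while process-context code must use spin_lock_irqsave() so the handler cannot run on the same CPU while the lock is held.

```c
#include <linux/interrupt.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(buf_lock);   /* hypothetical lock protecting shared_count */
static int shared_count;

/* Interrupt context: interrupts are already disabled on this CPU while a
 * hard-IRQ handler runs, so the plain lock is enough here. */
static irqreturn_t my_irq_handler(int irq, void *dev_id)
{
        spin_lock(&buf_lock);
        shared_count++;
        spin_unlock(&buf_lock);
        return IRQ_HANDLED;
}

/* Process context: use the irqsave variant so the handler above cannot
 * interrupt us while we hold the lock (otherwise: single-CPU deadlock). */
static int read_count(void)
{
        unsigned long flags;
        int val;

        spin_lock_irqsave(&buf_lock, flags);
        val = shared_count;
        spin_unlock_irqrestore(&buf_lock, flags);
        return val;
}
```

On a non-preemptive uniprocessor build, the plain spin_lock()/spin_unlock() pair compiles away to nothing, but the irqsave/irqrestore pair still disables and restores interrupts, which is exactly the distinction the quote below makes.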
Quoted from «Linux Device Drivers», by Jonathan Corbet, Alessandro Rubini and Greg Kroah-Hartman:
If a nonpreemptive uniprocessor system ever went into a spin on a lock, it would spin forever; no other thread would ever be able to obtain the CPU to release the lock (because it couldn't yield). Because of this, spinlock operations on uniprocessor systems without preemption enabled are optimized to do nothing, with the exception of the ones that change the IRQ masking status (in Linux, that would be spin_lock_irqsave()). Because of preemption, even if you never expect your code to run on an SMP system, you still need to implement proper locking.
If you're interested in a spinlock that can be taken by code running in interrupt context (hardware or software), you must use a form of spin_lock_* that disables interrupts. Not doing so will deadlock the system as soon as an interrupt arrives while you have entered your critical section.
By definition, if you're using a non-preemptive kernel, you won't be preempted. If you do your own multitasking, that's not the kernel's problem; that's your problem. Interrupt handlers may still be executed, but they won't cause context switches.

VHDL - When does a process() run for the first time?

Consider : process(a)
According to the text I have:
A process is first entered at the time of simulation, at which time it is executed until it suspends itself due to a wait statement or a sensitivity list.
Am I right in inferring that a process WILL have to run once even without any events on the sensitivity list? Also, if there are multiple processes inside an architecture, are they all executed once?
AFAIK, the sensitivity list (e.g. process (x, y)) is just shorthand for a wait on x, y; placed just before the end process of the process (p. 152, "The Designer's Guide to VHDL", 3rd edition). So all processes will run at least once.
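A short VHDL sketch of that equivalence (entity and signal names are arbitrary): the two processes below behave identically, and both are executed once at time 0 during initialisation before suspending.

```vhdl
-- Sketch: a sensitivity list versus its explicit wait-statement equivalent.
library ieee;
use ieee.std_logic_1164.all;

entity sens_demo is
end entity;

architecture sim of sens_demo is
    signal a  : std_logic := '0';
    signal y1 : std_logic;
    signal y2 : std_logic;
begin
    -- with a sensitivity list
    p1 : process (a)
    begin
        y1 <= not a;
    end process;

    -- equivalent form: explicit wait statement at the end
    p2 : process
    begin
        y2 <= not a;
        wait on a;
    end process;

    -- stimulus so something happens after initialisation
    stim : process
    begin
        wait for 10 ns;
        a <= '1';
        wait;
    end process;
end architecture;
```

Both y1 and y2 get their first value during initialisation, before any event on a has occurred.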
There are 3 stages involved in running a VHDL simulation. These are elaboration, initialisation and simulation.
At the beginning of the initialisation phase, the current time is set to 0. The simulation kernel then places all of the simulation processes in the active processes queue. Each simulation process is then taken from this queue and executed until it suspends. The order of execution of simulation processes during initialisation is not important. The initial execution of each simulation process ensures that all initial transactions are scheduled so that the simulation can continue.
A simulation process is suspended either implicitly or explicitly. A process with a sensitivity list is suspended implicitly after its sequential statements have been executed through to the end of the process. A process with one or more wait statements is suspended explicitly when its first wait statement is executed.
When the active processes queue is empty, the initialisation phase is complete.
So to answer your question, all processes will run once during the initialisation phase.