What are the delta cycle and delta notification in SystemC?

In SystemC, there is a kind of notification called a delta notification, which can be requested in either of the following two ways:
event.notify(SC_ZERO_TIME);
or
event.notify(0, SC_NS);
This means that with a delta notification, processes sensitive to the event will run during the evaluation phase of the next delta cycle.
So what is this so-called "delta cycle"? Is it defined like a clock cycle, i.e. as a certain period of time?

A delta cycle is not a clock cycle, and no simulation time advances across one. Delta cycles exist so that the updates and newly triggered processes produced by the current evaluation phase can be simulated at the same simulation time.
Briefly, the simulation steps are:
1. Evaluation phase: execute all scheduled processes in the current run queue.
2. Update phase: apply pending value updates and add newly triggered runnable processes to the delta queue (t + 0) or to a future queue (t + N).
3. If the delta queue (t + 0) is not empty, move it to the run queue and go back to step 1.
4. If the delta queue (t + 0) is empty, advance time to the closest scheduled time step, move that queue (t + X) to the run queue, and go back to step 1.
5. If every queue is empty, no event needs to be simulated and the simulation ends.
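A rough sketch of that loop, written as illustrative C++ rather than the actual SystemC kernel code (all names here are assumptions for the illustration):

#include <functional>
#include <map>
#include <vector>

struct Kernel {
    using Process = std::function<void()>;
    std::vector<Process> runnable;                 // processes for the current evaluation phase
    std::vector<Process> next_delta;               // woken by delta notifications (t + 0)
    std::map<double, std::vector<Process>> timed;  // woken by timed notifications (t + N)
    double now = 0.0;

    void simulate() {
        while (true) {
            // 1. Evaluation phase: run every process in the run queue.
            for (auto& p : runnable) p();
            runnable.clear();
            // 2. Update phase: primitive-channel updates would be applied here,
            //    filling next_delta and timed with newly triggered processes.
            // 3. Same timestamp, next delta cycle.
            if (!next_delta.empty()) {
                runnable.swap(next_delta);
                continue;
            }
            // 4. No more deltas: advance time to the earliest pending event.
            if (!timed.empty()) {
                auto it = timed.begin();
                now = it->first;
                runnable = std::move(it->second);
                timed.erase(it);
                continue;
            }
            break;  // 5. Nothing left to simulate.
        }
    }
};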
So if you use a delta notification, the event and the processes it triggers are scheduled to run immediately after the current evaluation and update phases. When the current evaluation phase has finished but there are still processes scheduled at the current time, the kernel enters the evaluation phase again to run them, and no time advances because the simulation is still at the same timestamp.
There is another kind of notification called immediate notification, which you get by calling notify() with no arguments. In that case the waiting processes are made runnable in the current evaluation phase, rather than waiting for the next delta cycle.
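As a minimal sketch of the difference (not from the original post; it assumes a standard SystemC installation), the following module notifies an event with SC_ZERO_TIME from one thread and waits on it in another. Both messages print at the same simulation time, but the consumer runs one delta cycle later:

#include <systemc.h>

SC_MODULE(DeltaDemo) {
    sc_event evt;

    SC_CTOR(DeltaDemo) {
        SC_THREAD(producer);
        SC_THREAD(consumer);
    }

    void producer() {
        wait(10, SC_NS);
        evt.notify(SC_ZERO_TIME);   // delta notification; evt.notify() would be immediate
        cout << sc_time_stamp() << " delta " << sc_delta_count() << ": notified" << endl;
    }

    void consumer() {
        wait(evt);
        cout << sc_time_stamp() << " delta " << sc_delta_count() << ": triggered" << endl;
    }
};

int sc_main(int, char*[]) {
    DeltaDemo top("top");
    sc_start();
    return 0;
}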

event.notify(SC_ZERO_TIME); or event.notify(0, SC_NS);
is called a delayed (delta) notification. Processes waiting for a delayed notification (i.e. waiting on the event) will execute only after all processes runnable in the current evaluation phase have executed; in other words, they execute in the next delta cycle (after an update phase).

Related

Process State

I learned that when an interrupt occurs, the process goes to the ready queue rather than through the blocked queue. However, in this picture the interrupted process has moved to the blocked queue (the pink circle). I'm confused about which case goes to the ready queue and which goes to the blocked queue.
Process management in general is much more complex than this. A task is often tied to one specific processor core. Several tasks are tied to the same processor core and each of these tasks can be blocked waiting for IO. It means that any task can be interrupted at any time by an interrupt triggered by a device controller even if the task currently running on the core had nothing to do with that specific interrupt.
The diagram is thus incomplete. It doesn't take into account the complete process lifecycle. In your diagram, the process goes to the blocked queue if it is waiting for IO (after a syscall like read()). It goes to the ready queue if it was preempted by the kernel so that another process can have some time on that core.
I think people often have the misconception that each process will run all the time until completion. It cannot be that way, otherwise most processes would never get time on any core. Instead, if the number of processes is higher than the number of cores, the kernel uses the per-core local APIC timer (the local APIC is x86-64 specific, but you will have similar mechanisms on every architecture) to give every process tied to that core a time slice.

When a certain process is scheduled on a certain core, the kernel starts the timer with its time slice. When the time slice has elapsed, the local APIC triggers an interrupt letting the kernel know that another process should be scheduled on that core. This is why a process can be preempted in the middle of its execution. The process is still considered ready to run; it is simply that its time slice was exhausted, so the kernel decides to give some time to another process. The preempted process will be given some more time later.

Since, in human terms, the time slice of each process is very short, it gives the impression that each process is running continuously without interruption, when that is not really the case. (By the way, this diagram is very Linux kernel specific.)

Print out the next trigger wake-up time after completing a job

I am running Advanced Python Scheduler on a server-side daemon process that has two interval jobs configured:
scheduler = BlockingScheduler()
scheduler.add_job(live_cycle, 'interval', seconds=self.tick_size.to_timedelta.total_seconds(), start_time=start_time + self.tick_offset)
scheduler.add_job(live_positions, 'interval', seconds=self.stats_refresh_frequency.to_timedelta.total_seconds(), start_time=start_time)
scheduler.start()
I am using a BlockingScheduler and single-threaded execution mode, as my use case demands quite deterministic execution. The tasks themselves are short-lived, should never overlap, and nothing bad happens if they are delayed a bit.
My question is about the devops side of BlockingScheduler: there can be long delays before the next scheduled wake-up for a job, sometimes weeks. To make sure the operator of this application has a better idea of when to expect more log output, I would like to print an info-level logging statement along the lines of:
Scheduler is now going to sleep. Next wake up is in X days Y hours, at 2022-XX-ZZ.
What would be a good way to add this functionality to my app? Should I extend BlockingScheduler, or is there an event handler I can use?

Vulkan - How to efficiently copy data to CPU *and* wait for it

Let's say I want to execute the following commands:
cmd_buff start
dispatch (write to texture1)
copy (texture1 on gpu to buffer1 host-visible)
dispatch (write to texture2)
cmd_buff end
I'd like to know as soon as possible when buffer1's data are available.
My idea here is to have a waiting thread on which I'd wait for the copy to have completed. What I'd do is first split the above list of cmds into:
cmd_buff_1 start
dispatch (write to texture1)
copy (texture1 on gpu to buffer1 host-visible)
cmd_buff_1 end
and:
cmd_buff_2 start
dispatch (write to texture2)
cmd_buff_2 end
Now, I'd call vkQueueSubmit with cmd_buff_1 and some fence1, followed by another vkQueueSubmit with cmd_buff_2 and a NULL fence.
On the waiting thread I'd call vkWaitForFences( fence1 ).
That's how I see such an operation. However, I'm wondering whether that is optimal, and whether there is actually any way to put a direct sync point inside the single original command buffer so that I wouldn't need to split it into two.
Never break up submit operations just to test fences; submit operations are too heavyweight to do that. If the CPU needs to check to see if work on the GPU has reached a specific point, there are many options other than a fence.
The simplest mechanism for something like this is to use an event. Set the event after the transfer operation, then use vkGetEventStatus on the CPU to see when it is ready. That's a polling function, so a waiting CPU thread won't immediately wake up when the data is ready (but then, there's no guarantee that would happen with a non-polling function either).
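A rough sketch of that event-based approach, assuming the VkDevice, the command buffer, and a VkEvent created with vkCreateEvent already exist (the handle and function names below are illustrative, not from the question):

#include <vulkan/vulkan.h>

// Recorded into the command buffer right after the copy command: the event is
// signalled once the previously recorded work completes its transfer stage.
void recordCopyDoneSignal(VkCommandBuffer cmdBuff, VkEvent copyDone) {
    vkCmdSetEvent(cmdBuff, copyDone, VK_PIPELINE_STAGE_TRANSFER_BIT);
}

// On the waiting CPU thread: poll until the GPU has signalled the event.
void waitForCopy(VkDevice device, VkEvent copyDone) {
    while (vkGetEventStatus(device, copyDone) != VK_EVENT_SET) {
        // back off or yield here; vkGetEventStatus is purely a polling call
    }
    // buffer1's memory can now be read (invalidate it first if it is not
    // host-coherent)
}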
If timeline semaphores are available to you, you can wait for them to reach a particular counter value on the CPU with vkWaitSemaphores. This requires that you break the batch up into two batches, but they can both be submitted in the same submit command.
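And a sketch of the timeline-semaphore variant, assuming Vulkan 1.2 (or VK_KHR_timeline_semaphore) and that the first batch signals the timeline semaphore with the value 1 after the copy (again, names are illustrative):

#include <vulkan/vulkan.h>
#include <cstdint>

// Blocks the calling CPU thread until the timeline semaphore reaches 1,
// i.e. until the GPU has finished the batch containing the copy.
void waitForCopyBatch(VkDevice device, VkSemaphore timeline) {
    const uint64_t waitValue = 1;  // value the copy batch signals
    VkSemaphoreWaitInfo waitInfo{};
    waitInfo.sType = VK_STRUCTURE_TYPE_SEMAPHORE_WAIT_INFO;
    waitInfo.semaphoreCount = 1;
    waitInfo.pSemaphores = &timeline;
    waitInfo.pValues = &waitValue;
    vkWaitSemaphores(device, &waitInfo, UINT64_MAX);
}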

How to choose proper watchdog timer value

The question is:
How should I configure the watchdog timer if I have 3 tasks with different priorities and different execution times?
Say:
Task1: Highest Priority , Exec. Time = 5 ms
Task2: Medium Priority , Exec. Time = 10 ms
Task3: Lowest Priority , Exec. Time = 15 ms
The proper way to do this is:
Create a special watchdog task that waits on 3 semaphores/mutexes/message queues (sequentially) in a loop.
Feed those three semaphores from your worker tasks (each task feeds one semaphore of the watchdog task).
Reset the hardware watchdog timer in the watchdog task's loop, with the watchdog period set to the sum of the worst-case loop times of all worker tasks plus some headroom.
If any of your worker tasks or the watchdog task itself hangs, the watchdog task's loop will eventually block and the hardware watchdog will expire. You want to make sure the watchdog is only re-triggered when all tasks are running properly. Use the simplest inter-task communication means your RTOS provides, to make it as robust as possible against crashes.
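For illustration, a rough sketch of this scheme using FreeRTOS-style semaphore calls (an assumption; the question does not name an RTOS). kick_hardware_watchdog() is a placeholder for whatever register write your MCU needs:

#include "FreeRTOS.h"
#include "semphr.h"

extern void kick_hardware_watchdog(void);   /* hypothetical, hardware specific */

static SemaphoreHandle_t aliveSem[3];       /* one "I'm alive" flag per worker */

void watchdog_init(void) {
    for (int i = 0; i < 3; ++i)
        aliveSem[i] = xSemaphoreCreateBinary();
}

/* Each worker task calls this once per loop iteration. */
void worker_checkin(int id) {
    xSemaphoreGive(aliveSem[id]);
}

void watchdog_task(void *arg) {
    (void)arg;
    /* Worst-case loop period of the slowest worker plus some headroom. */
    const TickType_t timeout = pdMS_TO_TICKS(100);
    for (;;) {
        int ok = 1;
        for (int i = 0; i < 3; ++i) {
            /* If any worker failed to check in within the timeout, stop kicking. */
            if (xSemaphoreTake(aliveSem[i], timeout) != pdTRUE)
                ok = 0;
        }
        if (ok)
            kick_hardware_watchdog();   /* all three workers checked in */
        /* otherwise the hardware watchdog is left to expire and reset the system */
    }
}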
Look at this definition
A watchdog timer is an electronic timer that is used to detect and recover from computer malfunctions. During normal operation, the computer regularly resets the watchdog timer to prevent it from elapsing, or "timing out"
So you choose the watchdog timeout so that the watchdog only fires when you are sure the tasks are no longer running. To be more accurate: you reset the timer only while you are sure all of the tasks are running, so that when a single task stops for some unknown reason, the watchdog fires (you can read more on this).
Now the real question: what should the watchdog period be? The timer should only fire when you really want to restart the program, so include all the wait times and delays in the tasks and work out the worst-case (maximum) time for all tasks to execute at least once; then set the timer value a little higher than that maximum. For the example above, the raw execution times add up to 5 + 10 + 15 = 30 ms, so after allowing for blocking, jitter and headroom, something above that (say 40-50 ms) would be a reasonable starting point.

How does a VxWorks scheduler get executed?

I would like to know how the scheduler gets called so that it can switch tasks. Whether it is preemptive or round-robin scheduling, the scheduler has to come into the picture to do any kind of task switching. Suppose a low-priority task has an infinite loop: when does the scheduler intervene and switch to a higher-priority task?
My questions are:
1. Who calls the scheduler in VxWorks?
2. If it gets called at regular intervals, how is that mechanism implemented?
The simple answer is that vxWorks takes control through a hardware interrupt from the system timer that occurs continually at fixed intervals while the system is running.
Here's more detail:
When vxWorks starts, it configures your hardware to generate a timer interrupt every n milliseconds, where n is often 10 but completely depends on your hardware. The timer interval is generally set up by vxWorks in your Board Support Package (BSP) when it starts.
Every time the timer fires an interrupt, the system starts executing the timer interrupt handler. The timer interrupt handler is part of vxWorks, so now vxWorks has control. The first thing it does is save the CPU state (such as registers) into the Task Control Block (TCB) of the currently running task.
Then eventually vxWorks runs the scheduler to determine who runs next. To run a task, vxWorks copies the state of the task from its TCB into the machine registers, and after it does that the task has control of the CPU.
Bonus info:
vxWorks provides hooks into the task switching logic so you can have a function get called whenever your task gets preempted.
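For reference, a minimal sketch of such a hook, assuming the classic taskHookLib API (prototypes vary slightly between VxWorks versions, so check yours):

#include <vxWorks.h>
#include <taskHookLib.h>
#include <taskLib.h>

/* Called by the kernel on every context switch, in the context of the switch
 * itself, so it must be short and must not block. */
void mySwitchHook (WIND_TCB *pOldTcb, WIND_TCB *pNewTcb)
    {
    /* e.g. count switches or timestamp preemptions of a task of interest */
    }

STATUS installSwitchHook (void)
    {
    return taskSwitchHookAdd ((FUNCPTR) mySwitchHook);
    }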
indiv provides a very good answer, but it is only partially accurate.
The actual working of the system is slightly more complex.
The scheduler can be executed as a result of either synchronous or asynchronous operations.
Synchronous refers to operations that are caused as a result of the code in the currently executing task. A prime example of this would be to take a semaphore (semTake).
If the semaphore is not available, the currently executing task will pend and no longer be available to execute. At this point, the scheduler will be invoked and determine the next task that should execute and will perform a context switch.
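For example, a synchronous reschedule point might look like the sketch below (standard semLib calls; the task and semaphore names are illustrative):

#include <vxWorks.h>
#include <semLib.h>

/* If the semaphore is empty, semTake() pends the calling task; the scheduler
 * then runs, in this task's context, and switches to the next ready task. */
void consumerTask (SEM_ID dataReady)
    {
    for (;;)
        {
        semTake (dataReady, WAIT_FOREVER);   /* may block -> context switch */
        /* process whatever the producer signalled */
        }
    }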
Asynchronous operations essentially refer to interrupts. Timer interrupts were very well described by indiv. However, a number of different elements could cause an interrupt to execute: network traffic, sensor, serial data, etc...
It is also good to remember that the timer interrupt does not necessarily cause a context switch! Yes, the interrupt will occur, and the delayed-task and time-slice counters will be decremented. However, if the time slice has not expired and no higher-priority task transitions from the pended to the ready state, the scheduler will not actually be invoked, and you will return to the original task at the exact point where execution was interrupted.
Note that the scheduler does not have its own context; it is not a task. It is simply code that executes in whatever context it is invoked from. Either from the interrupt context (asynchronous) or from the invoking task context (synchronous).
Unless you have a heavily customized target build, the scheduler is invoked by the timer interrupt. Details are platform-specific, though.
The scheduler also gets invoked if the current task completes or blocks.