Can you explain the difference between sc_spawn and the other ways of creating a process (SC_METHOD, SC_THREAD, SC_CTHREAD)?
Thanks all.
Hook
To understand this, you first need an idea of the phases of elaboration and simulation in SystemC. The phases of elaboration and simulation run in the following sequence (from IEEE Std 1666-2011):
1. Elaboration—Construction of the module hierarchy
2. Elaboration—Callbacks to function before_end_of_elaboration
3. Elaboration—Callbacks to function end_of_elaboration
4. Simulation—Callbacks to function start_of_simulation
5. Simulation—Initialization phase
6. Simulation—Evaluation, update, delta notification, and timed notification phases (repeated)
7. Simulation—Callbacks to function end_of_simulation
8. Simulation—Destruction of the module hierarchy
Processes are objects derived from sc_object and are created by the SC_METHOD, SC_THREAD, or SC_CTHREAD macros, or by calling the sc_spawn function.
If a process is created during elaboration (1.) or the before_end_of_elaboration callback (2.), it is a static process. If it is created during the end_of_elaboration callback (3.) or during simulation, it is a dynamic process.
A process instance created by the SC_METHOD, SC_THREAD, or SC_CTHREAD macro is an unspawned process instance and is typically a static process. Spawned process instances are processes created by calling sc_spawn. Typically they are dynamic processes, but they can be static if sc_spawn is called before the end_of_elaboration callback.
To wrap this up in simple words: sc_spawn enables you to add processes dynamically during simulation. For example, there can be cases where you only need a certain process if some condition becomes true during your simulation.
Now let's take a look at where processes can be spawned during simulation. The actual simulation phase of SystemC (6.) consists of these steps:
1. Initialization phase—Execute all processes (except SC_CTHREADs) in an unspecified order.
2. Evaluation phase—Select a process that is ready to run and resume its execution. This may cause immediate event notifications to occur, which may result in additional processes being made ready to run in this same phase. Repeat as long as there are still processes ready to run.
3. Update phase—Execute any pending calls to update() resulting from calls to request_update() made in step 1 or 2.
4. Delta notification phase—If there are pending delta notifications (resulting from calls to notify()), determine which processes are ready to run due to the delayed notifications and go to step 2.
5. Timed notification phase—If pending timed notifications or time-outs exist:
   - advance simulation time to the time of the earliest pending timed notification or time-out;
   - determine which process instances are sensitive to the events notified and the time-outs lapsing at this precise time;
   - add all such process instances to the set of runnable processes;
   then go back to the evaluation phase (step 2). If no pending timed notifications or time-outs exist, the simulation ends.
If sc_spawn is called to create a spawned process instance, the new process will be added to the set of runnable processes (unless dont_initialize is specified via the sc_spawn_options). If sc_spawn is called during the evaluation phase (2.), the new process shall be runnable in the current evaluation phase. If it is called during the update phase (3.), it shall be runnable in the next evaluation phase.
If sc_spawn is called during elaboration, the spawned process will be a child of the module instance that calls sc_spawn. If it is called during simulation, it will be a child of the process instance that called sc_spawn. You may call sc_spawn from a method process (SC_METHOD), a thread process (SC_THREAD), or a clocked thread process (SC_CTHREAD).
This tutorial shows the difference between implementing processes through SC_METHOD and SC_THREAD and implementing them through sc_spawn.
How can I model running multiple tasks/branches in parallel and waiting for just the first one to finish? The other (still running) branches should then be cancelled. To illustrate what I'm asking (what should I use instead of the X gateway?):
As far as I know, the exclusive gateway's join behavior is to proceed immediately: it neither stops/cancels the other branches, nor does it stop further executions of the output (so multiple tokens can pass through it).
Is this the answer?
Or perhaps this is even better?
I would do the following:
Starting off from your third diagramme, wrap the tasks ‘a’ and ‘b’ inside your subprocess into another transaction subprocess (but still inside the bigger subprocess that you had already used).
At the boundary of this new subprocess, as well as at the boundary of task ‘c’, add interrupting boundary signal events that lead to a none end event.
After tasks ‘b’ and ‘c’, add a signal end event. Each of these two signal end events should be caught by the interrupting boundary signal event of the subprocess or task that you want to stop: if task ‘c’ is completed, the signal thrown right after it should be caught by the boundary event on the transaction subprocess of tasks ‘a’ and ‘b’; the signal end event after ‘b’ should be caught by the boundary event on task ‘c’.
After the bigger subprocess, which contains task ‘c’ as well as the inner subprocess for ‘a’ and ‘b’, you continue just like in your third diagramme with a merging exclusive gateway and the “Do once” task. I would keep the timer boundary event on the bigger subprocess, as in your third diagramme.
Here is how this would look:
However, you could also draw a simpler diagramme with an additional exclusive gateway before the "Do only once" activity that filters out all remaining process instances if that activity has already been carried out. Your diagramme would be easier to understand, but the process would differ slightly from your requirements: you would allow a situation where activity b is carried out even though activity c has already been completed. So, instead of cancelling one process instance, you would ignore it. Depending on your business context, this might have certain implications.
A third option would be to use a terminate end event instead of a none end event. That way, all remaining tokens will be deleted as soon as the first one reaches the end. However, semantically that might not be the most elegant solution, because a terminate end event is intended to signal that your process has finished abnormally.
We have an orchestrator that is called by a timer trigger every minute. In the orchestrator, multiple activity triggers are called in a function-chaining pattern. However, there was one instance where each activity trigger was called twice, with a time difference of just 7 milliseconds.
What I am assuming is that when the 1st activity trigger was called, the checkpoint was delayed even though the process had done its job, so when the orchestrator restarted, it executed the 1st activity trigger again because it did not find the data in the Azure Storage queue. Can somebody confirm whether this is the case, or is there some issue with the way activity triggers behave?
This is the replay behavior of the orchestrator that you are observing. If an orchestrator function emits log messages, the replay behavior may cause duplicate log messages to be emitted. This is normal and by design. Take a look at this documentation for more information.
When an orchestration function is given more work to do, the orchestrator wakes up and re-executes the entire function from the start to rebuild the local state. During the replay, if the code tries to call a function (or do any other async work), the Durable Task Framework consults the execution history of the current orchestration. If it finds that the activity function has already executed and yielded a result, it replays that function's result and the orchestrator code continues to run. Replay continues until the function code is finished or until it has scheduled new async work.
I have a program where I start several process instances using a cron job. For each process instance I have a maximum time; if the execution time exceeds it, I have to consider the instance a failure and invoke some specific methods.
For now, what I did was simply check, once my process instance has finished, whether the elapsed time exceeds the given maximum time.
But what if my process instance gets blocked for some reason (e.g. a server not responding)? I need to catch this event and perform the failure operations as soon as the process is blocked and the timeout is exceeded.
How can I catch these two conditions?
I had a look at FlowableEngineEventType, but there isn't a PROCESS_BLOCKED/SUSPENDED type of event. And even if there were, how would I fire it only after a certain amount of time has passed?
I assume that this is the same question as this one from the Flowable Forum.
If you are using the Flowable HTTP Task, have a look at the documentation to see how you can set the timeouts on it and how you can react to errors there. If you are firing GET requests from your own code, you would need to write your own business logic that throws some kind of BpmnError, and you would then handle that in your process.
A Flowable process instance does not have the concept of being blocked; you have to handle that manually in your modelling.
How to represent two lanes doing two different actions at the same time?
How to represent that a pool of lanes contacts another pool?
To answer your first question, I would use a parallel gateway so that the following tasks placed in each lane can run concurrently.
For the second question, several solutions exist to represent an exchange between pools:
If you want a pool to start another one and wait (synchronously) for this newly started pool to finish before continuing its own execution, you can use a call activity.
The main pool can also send, for example, a message or signal, and the target pool can use a message or signal start event. This allows starting another pool asynchronously. You can then use messages again to define a synchronization point later in the pools' execution paths.
And a third option would be to use an event sub-process. Depending on the kind of event you choose for the sub-process, it will or will not interrupt the main process (see page 242 of the standard for the list of interrupting and non-interrupting event sub-process start events).
I am rather new to VxWorks (I am using version 6.7), and I find that when I spawn a child task, the parent seems to block until the child task completes. Perhaps my understanding is not correct, and there is some parameter to be set in taskSpawn() telling it not to block until the child task has completed.
Is there such a parameter, or is there some other mechanism so that the parent task does not wait for completion of the child?
You should check the priorities of your tasks. The default VxWorks scheduler is priority-based and preemptive: taskSpawn() itself returns immediately, but if the child task is spawned with a higher priority (a numerically lower priority value) than the parent, it preempts the parent and runs until it blocks or completes, which looks as if taskSpawn() were blocking.