In BPMN, how can I express that two different lanes are doing different actions at the same time?

How can I represent two lanes doing two different actions at the same time?
How can I represent that a pool of lanes contacts another pool?

To answer your first question, I would use a parallel gateway so that the subsequent tasks placed in each lane can run concurrently.
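To make the fork concrete, here is a minimal sketch of a parallel gateway in BPMN 2.0 XML (BPMN 1.x has no standard XML serialization, and all ids and names below are made up for the example); the two tasks after the fork would sit in different lanes and run concurrently:

```xml
<process id="proc_example">
  <startEvent id="start"/>
  <!-- diverging parallel gateway: both outgoing flows are activated -->
  <parallelGateway id="fork" gatewayDirection="Diverging"/>
  <task id="taskLaneA" name="Action performed in lane A"/>
  <task id="taskLaneB" name="Action performed in lane B"/>
  <!-- converging parallel gateway: waits for both branches to arrive -->
  <parallelGateway id="join" gatewayDirection="Converging"/>
  <endEvent id="end"/>
  <sequenceFlow id="f1" sourceRef="start" targetRef="fork"/>
  <sequenceFlow id="f2" sourceRef="fork" targetRef="taskLaneA"/>
  <sequenceFlow id="f3" sourceRef="fork" targetRef="taskLaneB"/>
  <sequenceFlow id="f4" sourceRef="taskLaneA" targetRef="join"/>
  <sequenceFlow id="f5" sourceRef="taskLaneB" targetRef="join"/>
  <sequenceFlow id="f6" sourceRef="join" targetRef="end"/>
</process>
```

The converging gateway also gives you a synchronization point once both lanes have finished.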
For the second question, several solutions exist to represent exchange between pools:
If you want a pool to start another one and wait (synchronously) for this new running pool to finish before continuing its execution, you can use a call activity.
The main pool can also send, for example, a message or a signal, and the target pool can use a message or signal start event. This allows another pool to be started asynchronously. You can then use messages again to define a synchronization point later in the pool's execution path.
A third option would be to use an event sub-process. Depending on the kind of start event you choose for the sub-process, it will or will not interrupt the main process (see page 242 of the standard for the list of interrupting and non-interrupting event sub-process start events).


FreeRTOS stuck in osDelay

I'm working on a project using an STM32F446 with a boilerplate created with STM32CubeMX (for peripheral initialization and middleware such as FreeRTOS with the CMSIS-V1 interface).
I have two threads which communicate using mailboxes, but I encountered a problem. The body of one of the threads is:
void StartDispatcherTask(void const * argument)
{
    mailCommand *commandData = NULL;
    mailCommandResponse *commandResponse = NULL;
    osEvent event;

    for (;;)
    {
        event = osMailGet(commandMailHandle, osWaitForever);
        commandData = (mailCommand *)event.value.p;
        // Here is the problem
        osDelay(5000);
    }
}
It gets to the delay but never gets out. Is there a problem with using the mailbox and the delay in the same thread? I also tried moving the delay before the for(;;), and there it works.
EDIT: I'll add more detail to the problem. The first thread sends a mail of a certain type and then waits for a mail of another type. The thread in which I get the problem receives the mail of the first type, executes some code based on what it receives, and then sends the result as a mail of the second type. Sometimes it has to wait using osDelay, and there it stops working, without going into any fault handler.
I would rather use the standard FreeRTOS API; the ARM CMSIS wrapper is rubbish.
BTW, I rather suspect osMailGet(commandMailHandle, osWaitForever);
The delay is in this case not needed at all. If you wait for the data in the BLOCKED state, the task does not consume any processing power.
My other guesses are:
You are landing in the hard fault handler.
You are stuck in the context switch (wrong interrupt priorities).
Use your debugger and see what is going on.
osStatus osDelay (uint32_t millisec)
The millisec value specifies the number of timer ticks.
The exact time delay depends on the actual time elapsed since the last timer tick.
For a value of 1, the system waits until the next timer tick occurs.
=> You have to check whether the timer tick is running or not.
Check this link.
As P__J__ pointed out in an earlier answer, you shouldn't use the osDelay() call in the loop [1], because your task loop will wait at the osMailGet() call for the next request/mail until it arrives anyhow.
But this hint called my attention to another possible reason for your observation, so I'm opening this new answer. [2]
As the loop execution is interrupted by a delay of 5000 ticks, could it be that the producer of the mails is filling the mailbox faster than the task is consuming them? If so, you should check whether this situation is detected/handled in the producer context.
If the producer ignores "queue full" return values and discards the mails before they have been transmitted, the system will only process a few mails every 5000 ticks (or it may lose all but a few mails after the first fill of the mailbox, if the producer in your example fills the mailbox queue only once).
This could look like the consumer task being stuck, even though the actual problem is in the producer context (task/ISR).
[1] The osDelay() call can only help you if you want to avoid processing another mail within 5000 ticks when request mails are produced faster than the task processes them. But then you'd have a different problem, and you should open a different question...
[2] Edit: I just noticed that Clifford already mentioned this option in one of his comments on the question. I still think this option deserves to be covered by an answer.

sc_spawn and the other processes [SystemC]

Can you explain the difference between sc_spawn and the other process types (SC_METHOD, SC_THREAD, SC_CTHREAD)?
Thanks all.
To understand this, you first have to get an idea of the elaboration and simulation phases of SystemC. They shall run in the following sequence (from IEEE Std 1666-2011):
1. Elaboration: construction of the module hierarchy
2. Elaboration: callbacks to the function before_end_of_elaboration
3. Elaboration: callbacks to the function end_of_elaboration
4. Simulation: callbacks to the function start_of_simulation
5. Simulation: initialization phase
6. Simulation: evaluation, update, delta notification, and timed notification phases (repeated)
7. Simulation: callbacks to the function end_of_simulation
8. Simulation: destruction of the module hierarchy
Processes are objects derived from sc_object and are created by calling SC_METHOD, SC_THREAD, SC_CTHREAD, or the sc_spawn function.
If a process is created during elaboration (1.) or the before_end_of_elaboration callback (2.), it is a static process. If it is created during the end_of_elaboration callback (3.) or during simulation, it is a dynamic process.
A process instance created by the SC_METHOD, SC_THREAD, or SC_CTHREAD macro is an unspawned process instance and is typically a static process. Spawned process instances are processes created by calling sc_spawn. Typically, they are dynamic processes, but they can be static if sc_spawn is called before the end of elaboration.
This means, to wrap this up in simple words, that sc_spawn enables you to dynamically add processes during simulation. For example: there can be cases where you only need a certain process, if a certain condition during your simulation becomes true.
Now let's take a look at when processes run during the simulation. The actual simulation of SystemC (6.) consists of these steps:
1. Initialization phase: execute all processes (except SC_CTHREADs) in an unspecified order.
2. Evaluation phase: select a process that is ready to run and resume its execution. This may cause immediate event notifications to occur, which may result in additional processes being made ready to run in this same phase. Repeat as long as there are still processes ready to run.
3. Update phase: execute any pending calls to update() resulting from request_update() calls made in step 1 or 2.
4. Delta notification phase: if there are pending delta notifications (resulting from calls to notify()), determine which processes are ready to run due to the delayed notifications and go to step 2.
5. Timed notification phase: if pending timed notifications or time-outs exist, advance simulation time to the time of the earliest pending timed notification or time-out, determine which process instances are sensitive to the events notified and the time-outs lapsing at this precise time, add all such process instances to the set of runnable processes, and go back to the evaluation phase (step 2). If no pending timed notifications or time-outs exist, the simulation ends.
If sc_spawn is called to create a spawned process instance, the new process will be added to the set of runnable processes (unless dont_initialize is called). If sc_spawn is called during the evaluation phase (2.), the new process shall be runnable in the current evaluation phase. If it is called during the update phase (3.), it shall be runnable in the next evaluation phase.
If sc_spawn is called during elaboration, the spawned process will be a child of the module instance which calls sc_spawn. If it is called during simulation, it will be a child of the process that called the function sc_spawn. You may call sc_spawn from a method process (SC_METHOD), a thread process (SC_THREAD), or a clocked thread process (SC_CTHREAD).
This tutorial shows the difference between implementing processes through SC_METHOD and SC_THREAD, and sc_spawn.

Two "start" events needed in the same lane in BPMN 1.2

I know that in BPMN there is just one "start event" for each pool. In my case I have a pool that can begin either when a message is caught or because the actor decides to start it of his own accord.
How can I model that? I'm not sure I can use an event-based exclusive (XOR) gateway.
Maybe a complex gateway?
As stated in many best-practice how-tos, it is NOT RECOMMENDED to use multiple start events in a pool. The BPMN 1.2 specification contains this note too (section 9.3.2):
"... It is RECOMMENDED that this feature be used sparingly and that the modeler be aware that other readers of the Diagram may have difficulty understanding the intent of the Diagram. ..."
On the other hand, the common rule for the case of an omitted start event is:
"If the Start Event is not used, then all Flow Objects that do not have an incoming Sequence Flow SHALL be instantiated when the Process is instantiated."
I assume this is fair enough for the case of a manual process start too. Even if the process has only a message start event, it will be started correctly, because a Message Start Event is a regular flow object with no incoming sequence flow and thus complies with the above rule.
However, if you want to be 100% sure the process goes the way you want, then the Event-Based Exclusive Gateway (available since version 1.1) is your choice. Placing it before multiple different start events makes the process choose one of them to start with.
Further explanation can be found in this blog.
Unlimited process instances
If you don't mind that during execution of your process the pool could be instantiated multiple times (e.g. once started by a message and three times by an actor), then you can simply use multiple start events (the BPMN 1.2 spec, section 9.3.2, page 37, allows this):
Single instance
If you can allow only a single run of the pool, you might have to instantiate it manually at the start of your execution and then decide whether and when to use it. Here is an example of how this can be done:
The Event-Based Gateway (spec section 9.5.2.4) will "decide" what to do with your pool:
If the actor decides to start, or a message comes from the main pool, some actions will take place;
If the process is "sure" that the additional pool will not be required, a signal is cast to terminate its instance.

priority control with semaphore

Suppose I have a semaphore to control access to a dispatch_queue_t, and I wait for the semaphore (dispatch_semaphore_wait) before scheduling a block on the dispatch queue:

dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER);
dispatch_async(queue, ^{ /* do work */ dispatch_semaphore_signal(semaphore); });
Suppose I have work waiting in several separate locations, and some "work" has higher priority than other "work". Is there a way to control which "work" will be scheduled next?
Additional information: using a serial queue without a semaphore is not an option for me, because each piece of "work" consists of its own queue with several blocks. All of a work queue's blocks have to run, or none of them, and no work queues can run simultaneously. I have all of this working fine, except for the priority control.
Edit: (in response to Jeremy, moved from comments)
OK, suppose you have a device/file/whatever like a printer. A print job consists of multiple function calls/blocks (print header, then print figure, then print text, ...) grouped together in a transaction. Put these blocks on a serial queue, one queue per transaction.
However, you can have multiple print jobs/transactions, and blocks from different print jobs/transactions cannot be mixed. So how do you ensure that a transaction queue runs all of its jobs, and that a transaction queue is not started before another queue has finished? (I am not actually printing, just using this as an example.)
Semaphores are used to regulate the use of finite resources.
https://www.mikeash.com/pyblog/friday-qa-2009-09-25-gcd-practicum.html
Concurrency Programming Guide
The next step I am trying to figure out is how to run one transaction before another.
You are misusing the API here. You should not be using semaphores to control what gets scheduled to dispatch queues.
If you want to serialize execution of blocks on the queue, then use a serial queue rather than a concurrent queue.
If different blocks that you are enqueuing have different priority, then you should express that priority using the QoS mechanisms added in OS X 10.10 and iOS 8.0. If you need to run on older systems, you can use the global concurrent queues of different priorities for the appropriate work. Beyond that, there isn't much control on older systems.
Furthermore, semaphores inherently work against priority inheritance, since there is no way for the system to determine which thread will signal the semaphore. You can thus easily end up in a situation where a higher-priority thread is blocked for a long time waiting for a lower-priority thread to signal the semaphore. This is called priority inversion.

blocking call on two Queues?

I have an algorithm (a task in VxWorks) that reads data from multiple queues to be able to manage priorities accordingly. The msgQReceive() function can be set to WAIT_FOREVER, which makes it a blocking call until something is available to receive and process. How can I do this if I have multiple queues? Currently I check in a while(1) loop whether any of the queues have contents and receive them if so, but if nothing is there, my algorithm just spins and spins and eats CPU resources for nothing. How can I best prevent this?
You should be able to use VxWorks events coupled with a Message Queue.
See the msgQEvStart function and the Kernel Programmer's Guide, section 7.9.
This is akin to using select() for I/O operations.
You do a blocking eventReceive, which returns a bitmask indicating which queue has content, and you then do a non-blocking msgQReceive to retrieve the data.
Or you can look at "How can a task wait on multiple VxWorks queues?", which I wrote a while ago.
As already mentioned, you could use events; alternatively, if you can use a pipe instead of a msgQ, you could potentially use select().
As another alternative, consider having multiple tasks, each servicing a single msgQ.