Why are observer notifications for timer and input sources delivered before those events actually occur in the run loop sequence of events? - objective-c

I am learning how to use NSRunLoop and reading the Run Loops chapter of Apple's documentation.
I am confused by the doc's description of the run loop sequence of events:
Because observer notifications for timer and input sources are delivered before those events actually occur, there may be a gap between the time of the notifications and the time of the actual events. If the timing between these events is critical, you can use the sleep and awake-from-sleep notifications to help you correlate the timing between the actual events
Here is the doc link.
It says observer notifications for timer and input sources are delivered before those events actually occur. Since those events have not happened yet, how does the run loop know they are about to happen, and how can it send notifications for them in advance?
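For context, a run loop observer is registered for run-loop activities, i.e., fixed stages of each pass of the loop, not for specific events. The "before timers" and "before sources" notifications simply announce that the loop is about to process whatever timers or sources happen to be ready on this pass, which is why they can be delivered before the events themselves. A minimal sketch using the Core Foundation API (observerCallback and installObserver are illustrative names):

    #include <CoreFoundation/CoreFoundation.h>

    // Invoked at the activities requested below. "Before timers/sources"
    // means the loop is about to process whatever is ready on this pass,
    // not that a specific future event has been predicted.
    static void observerCallback(CFRunLoopObserverRef observer,
                                 CFRunLoopActivity activity, void *info) {
        if (activity == kCFRunLoopBeforeTimers)  { /* timers about to be processed */ }
        if (activity == kCFRunLoopBeforeSources) { /* input sources about to fire */ }
        if (activity == kCFRunLoopBeforeWaiting) { /* loop is about to sleep */ }
        if (activity == kCFRunLoopAfterWaiting)  { /* loop just woke up */ }
    }

    // Register the observer on the current thread's run loop.
    static void installObserver(void) {
        CFRunLoopObserverRef obs = CFRunLoopObserverCreate(
            kCFAllocatorDefault,
            kCFRunLoopBeforeTimers | kCFRunLoopBeforeSources |
            kCFRunLoopBeforeWaiting | kCFRunLoopAfterWaiting,
            true,  // repeat on every pass of the loop
            0,     // order
            observerCallback, NULL);
        CFRunLoopAddObserver(CFRunLoopGetCurrent(), obs, kCFRunLoopDefaultMode);
        CFRelease(obs);
    }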

After many searches, I found something that may help.
The doc says the following in the Custom Input Sources section.
In addition to defining the behavior of the custom source when an event arrives, you must also define the event delivery mechanism. This part of the source runs on a separate thread and is responsible for providing the input source with its data and for signaling it when that data is ready for processing. The event delivery mechanism is up to you but need not be overly complex.
More details link1 and link2
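For a version-0 custom source, the "signaling" step the doc mentions boils down to two Core Foundation calls. A rough sketch (postDataToRunLoop is a made-up helper name):

    #include <CoreFoundation/CoreFoundation.h>

    // Sketch: a worker thread hands data to a custom input source that has
    // already been added to the target thread's run loop.
    static void postDataToRunLoop(CFRunLoopSourceRef source, CFRunLoopRef targetLoop) {
        // ... stash the data somewhere the source's perform callback can read ...
        CFRunLoopSourceSignal(source);  // mark the source as ready to fire
        CFRunLoopWakeUp(targetLoop);    // wake the loop if it is currently sleeping
    }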
The event may occur, but the data for that event may not be ready to use yet: the actual data produced by the event is prepared on a separate thread, and it may not yet be sufficient to wake the thread that is listening for the notification. So there is a gap between the notification posted by the run loop and the event actually completing.
Other mechanisms, such as timer sources, can lead to that gap as well.
Does anyone have a better explanation?

Related

How long should a process live in Camunda

How long should a process live in a Camunda BPMN workflow?
I have a process that can run multiple times throughout the life of a product. I need to keep track of and update data points that this workflow handles for the product.
One proposal was to write a looping BPMN that listens for an event to start the process, and ends with it back on the Receive Task listening for the event to fire again.
However, this would result in processes that never actually end because they always loop back, but we have no guarantees about when or how many times this event could be fired.
I have also considered creating a BPMN that just does one run and terminates. This relieves the problem of a long-living process, but I lose all of the process variables that are included.
EDIT:
Here is a simplified diagram of the looping mechanism we're looking at. I don't want to re-check eligibility after the first time, but I want to verify and save the address any time it changes.
Simplified Address Diagram
Honestly, the BPMN file (a.k.a. the process definition) should be what dictates how long the process "lives". For example, if you have a process that requires your user to contact a customer and wait for their answer, the process could easily state that "1 month" is the time to wait before sending a reminder (or reacting in any other way to the timer's expiration).
But we also have to differentiate between the "time to live / life cycle of the real-life process" conceptualized in the BPMN file vs. the "time to live / life cycle of the process in your Camunda engine" (for lack of a better term).
Each instance of a process in Camunda has a unique identifier. You do not have to let the "in-memory instance of the process" live until it is completed ... you could instead instantiate it every time an event is sent to the unique ID of a process instance, handle the event/command, and stop the instance (not the life cycle of the process) once the event/command has been handled.
The only time I worked with Camunda, that is what we did. Basically, we'd send the Camunda API the name of the BPMN file, the ID of the process instance we had previously started, and all the pertinent information needed to handle the event/command that would affect the process (including process variables).
This way, when an event/command is successfully handled by the Camunda API, you can store all the process variables from the "return message" after it has been processed, and you never really lose process variables, since you always "reload" them from the latest "state" of the process (i.e., the response you got the last time you sent an event to that process instance).
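To make that concrete, here is a rough sketch of correlating a message to a known process instance through Camunda's REST API, written in C with libcurl. The host, message name, and variable are made up for illustration, not taken from the question:

    #include <stdio.h>
    #include <curl/curl.h>

    // Sketch: correlate an "AddressChanged" message to a known process
    // instance, passing one process variable along. Endpoint and payload
    // follow Camunda's REST message-correlation format. Assumes
    // curl_global_init() was called at startup.
    int sendAddressChanged(const char *processInstanceId, const char *address)
    {
        CURL *curl = curl_easy_init();
        if (!curl) return -1;

        char body[512];
        snprintf(body, sizeof body,
                 "{\"messageName\":\"AddressChanged\","
                 "\"processInstanceId\":\"%s\","
                 "\"processVariables\":{"
                 "\"address\":{\"value\":\"%s\",\"type\":\"String\"}}}",
                 processInstanceId, address);

        struct curl_slist *headers =
            curl_slist_append(NULL, "Content-Type: application/json");
        curl_easy_setopt(curl, CURLOPT_URL,
                         "http://localhost:8080/engine-rest/message");
        curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
        curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body);

        CURLcode rc = curl_easy_perform(curl);
        curl_slist_free_all(headers);
        curl_easy_cleanup(curl);
        return rc == CURLE_OK ? 0 : -1;
    }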
Hopefully I'm being clear?

FreeRTOS stuck in osDelay

I'm working on a project using an STM32F446 with boilerplate created by STM32CubeMX (for peripheral initialization and middleware such as FreeRTOS with the CMSIS-V1 interface).
I have two threads which communicate using mailboxes, but I've encountered a problem: one of the thread bodies is
void StartDispatcherTask(void const * argument)
{
    mailCommand *commandData = NULL;
    mailCommandResponse *commandResponse = NULL;
    osEvent event;

    for (;;)
    {
        event = osMailGet(commandMailHandle, osWaitForever);
        commandData = (mailCommand *)event.value.p;
        // Here is the problem
        osDelay(5000);
    }
}
It gets to the delay but never comes out of it. Is there a problem with using the mailbox and the delay in the same thread? I also tried moving the delay before the for(;;), and there it works.
EDIT: I'll add more detail to the problem. The first thread sends a mail of a certain type and then waits for a mail of another type. The thread in which I get the problem receives the mail of the first type, executes some code based on what it received, and then sends the result as a mail of the second type. Sometimes it has to wait using osDelay, and that is where it stops working, without going into any fault handler.
I would rather use the standard FreeRTOS API (see the sketch at the end of this answer). The ARM CMSIS wrapper is rubbish.
BTW, I rather suspect osMailGet(commandMailHandle, osWaitForever);
The delay is not needed at all in this case. If you wait for the data in the Blocked state, the task does not consume any processing power.
My other guesses are:
You are landing in the HardFault handler.
You are stuck in the context switch (wrong interrupt priorities).
Use your debugger and see what is going on.
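To illustrate the first suggestion, the same task written against the native FreeRTOS queue API might look roughly like this (commandQueue is an assumed handle created elsewhere with xQueueCreate(); mailCommand is the type from the question):

    #include "FreeRTOS.h"
    #include "queue.h"
    #include "task.h"

    extern QueueHandle_t commandQueue;  // assumed: xQueueCreate(N, sizeof(mailCommand *))

    void StartDispatcherTaskNative(void *argument)
    {
        mailCommand *commandData = NULL;

        for (;;)
        {
            // Block until a command pointer arrives. The task consumes no
            // CPU time while waiting in the Blocked state, so no delay is
            // needed to pace the loop.
            if (xQueueReceive(commandQueue, &commandData, portMAX_DELAY) == pdPASS)
            {
                // ... process commandData ...
            }
        }
    }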
osStatus osDelay (uint32_t millisec)
The millisec value specifies the number of timer ticks.
The exact time delay depends on the actual time elapsed since the last timer tick.
For a value of 1, the system waits until the next timer tick occurs.
=> You have to check whether the timer tick is running or not (see the sketch below).
check this link
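As a quick debugging sketch for that check (watch the value in a live debug session):

    #include "FreeRTOS.h"
    #include "task.h"

    // Call this from a task and watch tickNow in the debugger. If
    // xTaskGetTickCount() never advances, the tick interrupt (SysTick on an
    // STM32) is not running, and every osDelay()/vTaskDelay() call will
    // block forever.
    void checkTick(void)
    {
        volatile TickType_t tickNow = xTaskGetTickCount();
        (void)tickNow;
    }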
As P__J__ pointed out in an earlier answer, you shouldn't use the osDelay() call in the loop [1],
because your task loop will wait at the osMailGet() call for the next request/mail until it arrives anyhow.
But this hint called my attention to another possible reason for your observation, so I'm opening this new answer: [2]
As the loop execution is interrupted by a delay of 5000 ticks, could it be that the producer of the mails is filling the mailbox faster than the task is consuming them? If so, you should inspect whether this situation is detected/handled in the producer context.
If the producer ignores "queue full" return values and discards the mails before they have been transmitted, the system will only process a few mails every 5000 ticks (or it may lose all but a few mails after the first fill of the mailbox, if the producer in your example only fills the mailbox queue once).
This could look like the consumer task being stuck, even if the main problem is about the producer context (task/ISR).
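A producer-side sketch of that check, using the same CMSIS-RTOS v1 calls as the question (the error handling is illustrative):

    #include "cmsis_os.h"

    // Detect a full mail queue instead of silently losing commands. With a
    // zero timeout, osMailAlloc() returns NULL immediately when the block
    // pool is exhausted.
    void sendCommand(void)
    {
        mailCommand *cmd = (mailCommand *)osMailAlloc(commandMailHandle, 0);
        if (cmd == NULL) {
            // Pool/queue full: the consumer is not keeping up. Decide here
            // whether to retry later, drop deliberately, or flag an error.
            return;
        }
        // ... fill in *cmd ...
        if (osMailPut(commandMailHandle, cmd) != osOK) {
            osMailFree(commandMailHandle, cmd);  // don't leak the block
        }
    }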
[1]
The osDelay() call can only help you if you want to avoid processing another mail within 5000 ticks when request mails are produced faster than the task processes them.
But then, you'd have a different problem, and you should open a different question...
[2]
Edit: I just noticed that Clifford already mentioned this option in one of his comments on the question. I think this option should be covered by an answer.

Understanding Eventual Consistency, BacklogItem and Tasks example from Vaughn Vernon

I'm struggling to understand how to implement eventual consistency with the BacklogItem and Task example exposed by Vaughn Vernon. What I've understood so far is the following (considering the case where he splits BacklogItem and Task into separate aggregate roots):
A BacklogItem can contain one or more tasks. When the remaining hours of all the tasks of a BacklogItem reach 0, the status of the BacklogItem should change to "DONE".
I'm aware about the rule that says that you should not update two aggregate roots in the same transaction, and that you should accomplish that with eventual consistency.
Once a Domain Service updates the number of hours of a Task, a TaskRemainingHoursUpdated event should be published to a DomainEventPublisher which lives in the same thread as the executing code. And here is where I'm at a loss, with the following questions:
I suppose that there should be a subscriber (also living in the same thread, I guess) that reacts to TaskRemainingHoursUpdated events. At which point in your desktop/web application do you perform this subscription to the bus? At the very initialization of your app? In the application code? Is there any rationale for placing domain subscribers in a specific place?
Should that subscriber (in the same thread) call a BacklogItem repository and perform the update? (But that would be a violation of the rule of not updating two aggregates in the same transaction, since this would happen synchronously, right?)
If you want to achieve eventual consistency to fulfil the previously mentioned rule, do I really need a Message Broker like RabbitMQ even though both BacklogItem and Task live inside the same Bounded Context?
If I use this message broker, should I have a background thread or something that just consumes events from a RabbitMQ queue and then dispatches the event to update the product?
I'd appreciate it if someone could shed some clear light on this, since it is quite complex to picture in its entirety.
So to start with, you need to recognize that, if the BacklogItem is the authority for whether or not it is "Done", then it needs to have all of the information to compute that for itself.
So somewhere within the BacklogItem is data that is tracking which Tasks it knows about, and the known state of those tasks. In other words, the BacklogItem has a stale copy of information about the task.
That's the "eventually consistent" bit; we're trying to arrange the system so that the cached copy of the data in the BacklogItem boundary includes the new changes to the task state.
That in turn means we need to send a command to the BacklogItem advising it of the changes to the task.
From the point of view of the backlog item, we don't really care where the command comes from. We could, for example, make it a manual process "After you complete the task, click this button here to inform the backlog item".
But for the sanity of our users, we're more likely to arrange an event handler to be running: when you see the output from the task, forward it to the corresponding backlog item.
At which point in your desktop/web application do you perform this subscription to the bus? At the very initialization of your app?
That seems pretty reasonable.
Should that subscriber (in the same thread) call a BacklogItem repository and perform the update? (But that would be a violation of the rule of not updating two aggregates in the same transaction, since this would happen synchronously, right?)
Same thread and same transaction are not necessarily coincident. It can all be coordinated in the same thread; but it probably makes more sense to let the consequences happen in the background. At their core, events and commands are just messages - write the message, put it into an inbox, and let the next thread worry about processing.
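A minimal sketch of that inbox idea, here as a mutex-protected ring buffer drained by a background worker (all names are illustrative, and a real system would persist the messages rather than hold them in memory):

    #include <pthread.h>
    #include <stdio.h>

    #define INBOX_SIZE 64

    typedef struct { int taskId; int remainingHours; } TaskHoursUpdated;

    static TaskHoursUpdated inbox[INBOX_SIZE];
    static int head = 0, tail = 0;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t nonEmpty = PTHREAD_COND_INITIALIZER;

    // The domain-event subscriber only records the message, so the
    // originating transaction stays small; overflow handling is omitted.
    void enqueue(TaskHoursUpdated ev) {
        pthread_mutex_lock(&lock);
        inbox[tail % INBOX_SIZE] = ev;
        tail++;
        pthread_cond_signal(&nonEmpty);
        pthread_mutex_unlock(&lock);
    }

    // Background worker: applies each update to the BacklogItem in its own
    // unit of work, eventually.
    void *worker(void *arg) {
        (void)arg;
        for (;;) {
            pthread_mutex_lock(&lock);
            while (head == tail) pthread_cond_wait(&nonEmpty, &lock);
            TaskHoursUpdated ev = inbox[head % INBOX_SIZE];
            head++;
            pthread_mutex_unlock(&lock);
            printf("update backlog item: task %d, %d hours remaining\n",
                   ev.taskId, ev.remainingHours);
        }
        return NULL;
    }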
If you want to achieve eventual consistency to fulfil the previously mentioned rule, do I really need a Message Broker like RabbitMQ even though both BacklogItem and Task live inside the same Bounded Context?
No; the mechanics of the plumbing matter not at all.

Allowability of Timer start event in an Event Sub-Process

I am unclear as to whether it is permissible in a BPMN 2.0 model for a timer to be the Start Event for an event sub-process, such as in the simplified example below:
The BPMN 2.0 documentation (version 2.0.1 dated 2013-09-02) on page 174 (section 10.3.5, Event Sub-processes) suggests this is not permissible:
The Start Event of an Event Sub-Process MUST have a defined trigger. The Start Event trigger (EventDefinition) MUST be from the following types: Message, Error, Escalation, Compensation, Conditional, Signal, and Multiple (see page 259 for more details).
On page 241 (section 10.5.2, Start Event), the specification states that a Timer is allowed as a Start Event:
A Start Event can also initiate an inline Event Sub-Process (see page 174). In that case, the same Event types as for boundary Events are allowed (see Table 10.86), namely: Message, Timer, Escalation, Error, Compensation, Conditional, Signal, Multiple, and Parallel.
Which of these sections would apply in the case of the above example?
I'm not a BPMN expert, but I have some experience using BPMN 2.0, so I'll give this a go.
The example you posted doesn't look like a completely spec-approved way of doing it, but I can't be entirely sure. I see a few different ways to do this that should be within bounds.
Here are my two suggestions:
Unless you want to model a third event like "Out of stock", I would prefer option A for its simplicity.
Also, I'd like to throw out a recommendation for "BPMN Method and Style, 2nd ed." by Bruce Silver.
I'm going to conclude that this is almost certainly an error in §10.3.5 of the spec, and that a timer as the start event of an event sub-process is allowed.
Tables 10.86 and 10.93 are both explicit that the timer can be the trigger for an event sub-process.
The non-interrupting timer start event is only useful in an event sub-process; that symbol would have no use if a timer event were not allowed to trigger an event sub-process.
Section 10.5.6 consistently allows the use of the timer as the start event trigger.
The issue was reported to OMG in 2010 (Issue 15532), although no further action was taken.
The same principle applies to Parallel Multiple events, which are similarly omitted from the list in §10.3.5 but permitted in the other sections.
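For reference, the shape in question looks roughly like this in BPMN 2.0 XML (IDs and the duration are illustrative):

    <subProcess id="escalation" triggeredByEvent="true">
      <startEvent id="timerStart" isInterrupting="false">
        <timerEventDefinition>
          <timeDuration xsi:type="tFormalExpression">P2D</timeDuration>
        </timerEventDefinition>
      </startEvent>
      <!-- ... handler activities ... -->
    </subProcess>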
I don't remember the terminology now, but what I would do to achieve what you want is put "purchase parts" + "unpack parts" in a sub-process (or sub-task?) and put a timer on it. This seems easier and clearer to read, and it does what you want.
Regarding documentation: I would say one part talks about the trigger and the other about the start of the event sub-process. So a timer can't trigger the event sub-process, but the start event of the event sub-process can be a timer.

Network activity indicator and asynchronous sockets

I have an app which continuously reads status updates from a server connection.
All is working well with a stream delegate to handle all the reading and writing asynchronously.
There's no part of the app that is "waiting" for a specific response from the server, it is just continuously handling status updates as they sporadically arrive from the socket. There are no requests on the client side that are waiting for responses.
I'm wondering what the best practice would be for the network activity indicator in this case.
I could turn it on in the stream event handler and off before leaving the handler, but that would be a very short interval (just long enough for a non-blocking read or write to occur). Trying this, I only see the faintest flicker of the indicator; it needs to be on longer than just the duration of the event handler.
What about turning it on in the stream delegate, and setting a timer to turn it off a short time later? (This would ensure it's on long enough to be seen, rather than the short time spent in the stream delegate.)
Note: I've tried this last idea: turning on the network activity indicator whenever there's stream activity and noting the NSDate; then, in a timer that fires every second, if the time passed is greater than 0.5 seconds, I turn off the indicator. This seems to give a reasonable indication of network activity.
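That debouncing approach might be sketched like this, where setIndicatorVisible() stands in for setting UIApplication's networkActivityIndicatorVisible property:

    #include <CoreFoundation/CoreFoundation.h>
    #include <stdbool.h>

    // Placeholder for toggling UIApplication's networkActivityIndicatorVisible.
    extern void setIndicatorVisible(bool visible);

    static CFAbsoluteTime lastActivity = 0;

    // Call from the stream-delegate event handler on every read/write event.
    void noteStreamActivity(void) {
        lastActivity = CFAbsoluteTimeGetCurrent();
        setIndicatorVisible(true);
    }

    // Repeating 1 s timer callback: turn the indicator off only after the
    // stream has been quiet for more than half a second.
    static void tick(CFRunLoopTimerRef timer, void *info) {
        if (CFAbsoluteTimeGetCurrent() - lastActivity > 0.5)
            setIndicatorVisible(false);
    }

    // Install tick() on the main run loop.
    void installIndicatorTimer(void) {
        CFRunLoopTimerRef t = CFRunLoopTimerCreate(
            kCFAllocatorDefault,
            CFAbsoluteTimeGetCurrent() + 1.0,  // first fire
            1.0,                               // interval
            0, 0, tick, NULL);
        CFRunLoopAddTimer(CFRunLoopGetMain(), t, kCFRunLoopDefaultMode);
        CFRelease(t);
    }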
Any better recommendations?
If the network activity is continuous, then the indicator sounds like it might be somewhat annoying to the user, especially if it's turning on and off all the time.
Perhaps better would be to test for lack of response up to a certain timeout value, and then display an alert view if you aren't getting any response from the server. Even that could be optional if you can provide feedback (like "Last update: 5 mins ago") to the user instead.