Requirements diagram possibilities with use cases and test cases

I am wondering what is allowed (or at least what is the best practice) in a SysML Requirements diagram regarding the use of satisfy/verify links between use-cases, test-cases and requirements.
As I understand it, a use case generally «satisfies» a requirement, and a test case «verifies» it.
Is it possible, though, for a use case to «verify» a requirement?
I found different sources with contradictory statements on the matter.
For the classical Alarm-Clock example, with :
Req1: To be woken at a chosen time.
UseCase1: Set an alarm time & a radio frequency.
Test1: Given there is a station at 101.5 FM and the time is correctly set, when I set a future alarm time and set the frequency to 101.5 FM, then I will hear the station at the given time.
What, then, is the correct and/or best diagram?
(UseCase1) -- satisfy --> [Req1] , [TestCase1] -- verify --> [Req1]
or
(UseCase1) -- satisfy --> [Req1] , [TestCase1] -- verify --> (UseCase1)
or
(UseCase1) -- verify --> [Req1] , [TestCase1] -- verify --> [Req1]
Thanks for any clarifications!

There is no formal constraint in the specification that would disallow this. However, the semantics of the elements make this meaningless.
How would a use case verify a requirement?
SysML: A Verify relationship is a dependency between a requirement and a test case or other model element that can determine whether a system fulfills the requirement.
A use case describes all the ways a system can be used to achieve a certain goal. It describes user actions as well as functions the system must have to be helpful for achieving this goal. It doesn't describe how to test the system functions. You can however derive test cases from a use case description.
How would a use case satisfy a requirement?
SysML: A Satisfy relationship is a dependency between a requirement and a model element that fulfills the requirement.
A use case is an analysis tool to find the functions that the system shall support, in other words, the functional requirements. How can an analysis tool that finds requirements satisfy a requirement?
About your example
What is the goal of the use case "set an alarm time and radio frequency"? The alarm time and the radio frequency are set? Well, forgive me, but this is not really helpful.
The use case refines the stakeholder requirement "Be woken at a chosen time" and has the same name. And this use case has a lot of alternative flows that most clock makers, in their blissful ignorance, forget: I wake early and want to cancel the alarm prematurely (without clearing it for the next day). I pressed the snooze button, but now that I'm awake I decide to get up anyway (and while I'm in the shower, the alarm goes off). I stayed up late and now need to strike a balance between a minimum sleep requirement and a full to-do list (and would like to know, without calculating late at night, how much time would be left). All these alternative flows lead to additional functional requirements.
So the complete list of functional requirements found in this use case would be:
set Alarm time
select Radio or Alarm
set Radio Frequency
control clock for alarming (main function)
play Radio at predefined time
sound alarm at predefined time
snooze alarm
cancel Alarm for today
clear Alarm time
show time until alarm
It is amazing how many alarm clocks fail to have all these functions, given that a use case analysis would find them quickly.
So the diagram could be:
«stakeholder requirement» be woken at chosen time
<-«refine»- «use case» be woken at chosen time
<-«trace»- «functional requirement» cancel Alarm for today
<-«satisfy»- «operation» cancel Alarm
«functional requirement» cancel Alarm for today
<-«verify»- «testcase» cancel Alarm after snooze
You could argue that the stakeholder requirement, and thus, indirectly, the use case, could be verified by a test case. However, I think that a stakeholder requirement would be validated, not verified.

Related

How to handle stream of inputs and generate output based on input combination in UML State machine diagram

Following is a safety controller with input and output
Condition given below for designing a state machine:
Here SignalOk, SignalWeak and SignalLost are measurements of the signal quality of the steering angle. The SteeringAngle signal itself contains the original steering data. In the case of 3 consecutive SignalOk events, the system controller will output ValidSignal with the steering angle data. In other cases, the signal will be considered a CorruptSignal. I am using UML 2 state charts (Harel charts). This is what I have done so far:
N.B.: Parallel states and broadcasting are not supported yet, but nested states are supported.
I don't know how to model this stream of inputs in state machine, any kind of help will be appreciated.
First, I would recommend renaming the states so that they don't resemble actions. I suggest naming them First Ok received, Second Ok received and Ok confirmed.
Since the SteeringAngle shall be ignored the first two times, the only transition triggered by it should be an internal transition in Ok confirmed. This transition will also invoke ValidSignal.
Nothing is specified about the order of SteeringAngle and SignalOk. Therefore, SteeringAngle should be deferred in Second Ok received. This way, even if it comes first, it will stay in the event pool.
Any reception of SignalWeak or SignalLost should return to Ready. You could do this with a local transition from Operational to Ready.
One additional recommendation: Define an Initial state in Operational and target the SystemOk transition to Operational. The effect is the same, but it results in a better separation of the two top level states.
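The state and deferred-event mechanics described above can be sketched in plain Python. This is an illustrative simulation under the question's event names, not code for any state-machine framework; the CorruptSignal handling outside the confirmed state is an assumption based on the question's wording:

```python
class SafetyController:
    """Illustrative simulation of the proposed state machine."""

    def __init__(self):
        self.state = "Ready"
        self.deferred = []   # SteeringAngle events parked in the event pool
        self.output = []     # (signal, data) outputs of the controller

    def dispatch(self, event, data=None):
        if event in ("SignalWeak", "SignalLost"):
            # Any degraded quality returns to Ready (local transition)
            self.state = "Ready"
            self.deferred.clear()
        elif event == "SignalOk":
            if self.state == "Ready":
                self.state = "First Ok received"
            elif self.state == "First Ok received":
                self.state = "Second Ok received"
            elif self.state == "Second Ok received":
                self.state = "Ok confirmed"
                for angle in self.deferred:   # replay deferred events
                    self.output.append(("ValidSignal", angle))
                self.deferred.clear()
        elif event == "SteeringAngle":
            if self.state == "Ok confirmed":
                # internal transition: invoke ValidSignal
                self.output.append(("ValidSignal", data))
            elif self.state == "Second Ok received":
                self.deferred.append(data)    # deferred until the third Ok
            else:
                # assumption: anything else counts as CorruptSignal
                self.output.append(("CorruptSignal", data))
```

Note how a SteeringAngle that arrives between the second and third SignalOk is parked and only emitted once Ok confirmed is entered, which is exactly what deferring the event in Second Ok received achieves.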

Strava - How to detect pause in run/activity

Is there a way to detect a user pausing a run/activity within the strava API?
With Get Activity Streams (getActivityStreams) you can obtain different StreamSets from your activity: to detect pauses, I think you can analyze the CadenceStream or the MovingStream.
Pauses are not available in the Strava API and cannot be extracted consistently through algorithmic processing of the available fields. Moreover, the data contained in the API's streams collection cannot be processed in a way that arrives at the summary distance or time of the run.
The MovingStream contains a bit field which does not flag pauses, but instead (presumably) flags points where the athlete stopped moving. That said, this field cannot be used to arrive at the Moving Time by summing up the time values where the flag is true.
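If you still want to extract stopped intervals from that bit field, a minimal sketch could look like this. It assumes a hypothetical data shape of two parallel lists (time in seconds and moving flags), as the streams come back from the API; it finds stretches where the flag is false, which, per the above, are stop points rather than true pauses:

```python
def stopped_intervals(time_s, moving):
    """Return (start, end) time pairs where the moving flag is False."""
    intervals = []
    start = None
    for t, m in zip(time_s, moving):
        if not m and start is None:
            start = t                    # stop begins
        elif m and start is not None:
            intervals.append((start, t))  # stop ends
            start = None
    if start is not None:                 # activity ended while stopped
        intervals.append((start, time_s[-1]))
    return intervals
```

Treat the result as an approximation only; as explained above, these intervals will not reconcile with Strava's own summary times.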

Trouble with sending some quantity forward in the process, and looping back

The process in a nutshell is that we are trying to recruit for open positions. We are assuming there is more than one position to be filled. There are level 1 & level 2 interviews, if someone passes, we want them to continue forward of course. But if not enough people pass to fill all of the open positions, we still need the ones who did succeed to move forward, while starting the searching process over again.
My question is: how do I close off this process when there are multiple people/units/flows moving through it? Is the circled exclusive gateway at the end enough?
The exclusive gateway at the end of the process is correct.
However, I think the upper part of your diagram might need some clarification. I see two design choices that you might want to rethink:
Does the Search for Candidates task create a list of candidates to interview, or a single candidate? In your diagram, it first looks like there is a list of candidates who get interviewed in parallel during the Level 1 Interview task. However, the subsequent gateway suggests that you decide for each candidate whether he/she has passed that level. If not, you move back to searching. The same applies to the Level 2 Interview task.
The inclusive gateways also suggest that you are deciding for individual candidates whether they have passed each level, because an inclusive gateway can yield a Yes and a No at the same time.
If a search for candidates results in a list of candidates who get interviewed at the same time, then I would put the interviews and the Assign to project tasks inside a sub-process. You would loop through the interviewing and assigning tasks until all posts are filled or all candidates are examined (note the exit condition in the annotation at the top). If one round of interviews has not filled all posts, you would decide whether you need to launch another round.
If you rather interview candidates individually and want to avoid a sub-process, then you could keep your structure but use exclusive instead of inclusive gateways:
Note the data items in both examples that make it explicit whether your search resulted in a list of candidates that get interviewed or a single candidate at a time.

What are the errors in this BPMN?

I have a BPMN diagram (see below) with some errors that I can't seem to figure out. The diagram depicts the Produce Magazine Article Process, where the writer and Researcher are freelancers who work together to write articles for various publications.
Bigger version: BPMN diagram
There is a bunch of errors here, three of them are logical (two are related), one is BPMN syntax.
Let's start with the syntax.
A message is always a communication between two separate pools, so it has to cross pool boundaries. In your case, you have depicted Freelancers as a single pool, so Send information, being between lanes but not pools, is a syntax error. Before suggesting a solution, though, I will focus on the logical errors.
A timer event is not used to show the fact that some time goes by between activities; that is natural in any process. It is used to indicate that the passing of time is the trigger of the next action(s). For instance, 7 days after choosing a topic, the Publication might contact the Researcher to check on the progress. That would be indicated by a timer event. In your case, it seems that the flow continuation is triggered by passing messages, so you should indicate it with an incoming message event. You actually do that in 2 places: one that is obvious (Get article as a "result" of the time event) and a second that correlates with the second problem.
The second thing, which is most probably a logical issue, is that since we are talking about freelancers, the Researcher and the Writer are most probably two separate entities, not one organisation as your current diagram suggests. If that is the case, you should represent them as two separate pools. Then your message flow would be justified, but instead of the "Wait for information" time event you should have a "Receive information" incoming message event (which is, by the way, the starting event for the Writer pool; similarly, the Researcher receiving the Article request should be handled by an incoming message event).
If you prefer to depict the Freelancers as one "organisation", then you should completely abandon the time event (again, you have used it as an indication of time passing, and as I explained earlier, that is not how it should be used). You have a simple flow where, once the Researcher finishes their job, it is passed to the Writer, who carries on from there. In such a case, you should have a simple sequence flow (solid line) between the actions themselves.
It is also good practice to be consistent in using End events (and at least recommended; some BPM engines verify it) and to have an End event for every branch of a process. You are missing one or two, depending on how you are going to approach the Freelancers part. Similarly, you should have a Start event for the Publication.
Below are the two options shown as diagrams. Note that I also made some minor changes to handle the insufficient-information case on the Publication side. Otherwise, it would be stuck forever waiting for the article to come.
Option with Freelancers as separate pools:
Option with Freelancers considered as a single organisation:

Prevent subscribers from reading certain samples temporarily

We have a situation where there are 2 modules, one having a publisher and the other a subscriber. The publisher is going to publish some samples using key attributes. Is it possible for the publisher to prevent the subscriber from reading certain samples? This case would arise when the module with the publisher is currently updating the sample and does not want anybody else to read it until it is done. Something like a mutex.
We are planning on using Opensplice DDS but please give your inputs even if they are not specific to Opensplice.
Thanks.
RTI Connext DDS supplies an option to coordinate writes (described in the documentation as a "coherent write"; see Section 6.3.10 and the PRESENTATION QoS policy):
myPublisher->begin_coherent_changes();
/* writers in that publisher do their writes; data is captured at the publisher */
myPublisher->end_coherent_changes(); /* all writes now leave */
Regards,
rip
If I understand your question properly, then there is no native DDS mechanism to achieve what you are looking for. You wrote:
This case would arise when the module with the publisher is currently updating the sample, which it does not want anybody else to read till it is done. Something like a mutex.
There is no such thing as a "global mutex" in DDS.
However, I suspect you can achieve your goal by adding some information to the data model and adjusting your application logic. For example, you could add an enumeration field to your data; let's say you add a field called status, which can take one of the values CALCULATING or READY.
On the publisher side, instead of "taking the mutex", your application could publish a sample with the status value set to CALCULATING. When the calculation is finished, the new sample can be written with status set to READY.
On the subscriber side, you could use a QueryCondition with status=READY as its expression. Read or take actions should only be done through the QueryCondition, using read_w_condition() or take_w_condition(). Whenever the status is not equal to READY, the subscribing side will not see any samples. This approach takes advantage of the mechanism that newer samples overwrite older ones, assuming that your history depth is set to the default value of 1.
If this results in the behaviour that you are looking for, then two disadvantages to this approach remain. First, the application logic gets somewhat polluted by the use of the status field and the QueryCondition. This could easily be hidden behind an abstraction layer, though; it would even be possible to hide it behind some lock/unlock-like interface. The second disadvantage is the extra sample going over the wire when setting the status field to CALCULATING. But extra communication cannot be avoided anyway if you want to implement distributed mutex-like functionality. This is only an issue if your samples are pretty big and/or high-frequency; in that case, you might have to resort to a dedicated, small Topic for the single purpose of simulating the locking mechanism.
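The status-field pattern can be sketched without any DDS middleware at all. The snippet below is a Python stand-in, not OpenSplice API: FakeTopic mimics a KEEP_LAST, depth-1 reader cache, and read_w_condition mimics a QueryCondition whose expression is status = 'READY'; all names are illustrative:

```python
READY, CALCULATING = "READY", "CALCULATING"

class FakeTopic:
    """Last-value cache keyed by instance key, like KEEP_LAST depth 1."""

    def __init__(self):
        self.cache = {}

    def write(self, key, status, value=None):
        # A new sample for the same key overwrites the old one,
        # just as with a history depth of 1.
        self.cache[key] = {"status": status, "value": value}

    def read_w_condition(self, key):
        # Mimics reading through a QueryCondition "status = 'READY'":
        # samples with any other status are simply not visible.
        sample = self.cache.get(key)
        if sample is not None and sample["status"] == READY:
            return sample["value"]
        return None

topic = FakeTopic()
topic.write("sensor1", CALCULATING)       # "lock": hide the instance
assert topic.read_w_condition("sensor1") is None
topic.write("sensor1", READY, value=42)   # "unlock": publish the result
assert topic.read_w_condition("sensor1") == 42
```

The two write calls correspond to the CALCULATING and READY samples described above; the subscriber only ever sees the instance while its latest sample carries status READY.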
The PRESENTATION QoS is not specific to RTI Connext DDS; it is part of the OMG DDS specification. That said, the ability to write coherent changes across multiple DataWriters/Topics (as opposed to using a single DataWriter) is part of one of the optional profiles (the object model profile), so not all DDS implementations necessarily support it.
Gerardo