How to reflect automatic processes and processes done by different users with BPMN?

Let's say I have the following simplified process:
How should I reflect that the data can be added not only by manual input, but may also be received from another system (without user verification)?
And is there a more correct way to display the same actions done by different users? (See the Verification step done by Manager 1 or Manager 2; in reality there are many more steps than just Verification, and they are all the same in the Manager 1 and Manager 2 columns.)

Obviously there are many open questions regarding your specific requirements, so I can just give you an example:
I am using two lanes, one for the manager, one for the user. I assume that the concrete person (or subrole) needed to carry out the "manager" steps has to be determined within the process. From a process perspective it is just one role, carried out by people with different skill sets or authorizations. I show the "Assign" task here as an automatic step, but it could also be a manual step. A BPMN process can have several start events; I am using two of them here to show the different ways in which the process can start. I am using a collapsed pool "External System" and a message flow to indicate where the automatic message is coming from.
(Please note that BPMN processes are typically modeled from left to right, but may also be modeled from top to bottom. Also note that for more complex processes and a more fine-grained level of detail, it is often preferable to show every process participant in a separate pool with a separate process, with messages exchanged between them. Modeling one process pool with several lanes quite quickly reaches its practical limits!)
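For what it's worth, on an execution engine such as Camunda the two start events simply become two different ways of creating a process instance: one started explicitly by a user, one started by the incoming message. Below is a minimal sketch using Camunda's Java API; the process key, message name and variable names are assumptions invented for illustration, not anything from the question.

```java
import org.camunda.bpm.engine.RuntimeService;
import org.camunda.bpm.engine.variable.Variables;

// Sketch only: the process key, message name and variable names are invented for illustration.
public class ProcessStarter {

    private final RuntimeService runtimeService;

    public ProcessStarter(RuntimeService runtimeService) {
        this.runtimeService = runtimeService;
    }

    // Manual path: a user enters the data and starts the process, e.g. from a form or task list.
    public void startManually(String enteredBy) {
        runtimeService.startProcessInstanceByKey("dataEntryProcess",
                Variables.createVariables().putValue("enteredBy", enteredBy));
    }

    // Automatic path: the external system delivers the data, which triggers the message start event.
    public void startFromExternalSystem(String payload) {
        runtimeService.startProcessInstanceByMessage("DataReceivedFromExternalSystem",
                Variables.createVariables().putValue("payload", payload));
    }
}
```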

Related

How to model a task to be performed by all members of a specific role in BPMN?

How do we model a task that is to be performed by all members of a specific role in BPMN? Especially when the number of members of this role is indeterminate at design time, and may increase or decrease at runtime from time to time.
Scenario: There is one task called "Review draft of new standards document" and it is assigned to the role called "Experts". Whenever this task is executed for a new document, we want all (not just any one of them) the members of the Experts role to review it individually and provide their comments and recommendations.
Now, at design time, I cannot be sure how many experts there are at any point in time in the future. New experts may join later; some may leave at a certain point in time. But however many there are at the time the new document needs to be reviewed, we want all of them to review it. Therefore, I cannot model it with a separate "Review" task for each expert ahead of time.
If I model just one task and assign it to the role "Experts", how do I specify in BPMN that all the members of that role (at that specific time) are to execute that one task, and not just any one of them?
I hope that my scenario is made clear here.
Also, I am interested in the implications in how an execution context might treat such a model (even if it can be modeled correctly in BPMN). If there happened to be 3 experts in the role of "Experts" at the specific time when document X is set to be reviewed, must all 3 of the experts complete their tasks before any subsequent task (for example, "Compile Comments and Modify Standards Document") can start? What if the editor wants to have the flexibility to start their work once the first review comments have been submitted without waiting for the rest to complete their review? What if the editor wants to ignore the review of the last expert if they are taking too long to do their review?
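For reference, the usual BPMN construct for "all current members of a role" is a parallel multi-instance activity, and whether downstream work may start before every instance finishes is governed by a completion condition. Below is a minimal sketch using the Camunda BPMN model fluent builder; the process key, element ids and the "experts" collection variable are assumptions, not anything taken from the scenario.

```java
import org.camunda.bpm.model.bpmn.Bpmn;
import org.camunda.bpm.model.bpmn.BpmnModelInstance;

// Sketch only: ids, names and the "experts" collection are invented for illustration.
public class ReviewModelSketch {

    public static BpmnModelInstance build() {
        return Bpmn.createExecutableProcess("standardsReview")
                .startEvent()
                .userTask("reviewDraft")
                    .name("Review draft of new standards document")
                    // parallel multi-instance: one task instance per current member of the Experts role,
                    // resolved from the "experts" collection at runtime, so the group may grow or shrink
                    .multiInstance()
                        .parallel()
                        .camundaCollection("experts")
                        .camundaElementVariable("expert")
                        // optional: complete as soon as at least one review is in;
                        // the remaining open instances are then cancelled by the engine
                        .completionCondition("${nrOfCompletedInstances >= 1}")
                    .multiInstanceDone()
                .userTask("compileComments")
                    .name("Compile Comments and Modify Standards Document")
                .endEvent()
                .done();
    }
}
```

Without a completion condition, the default behavior is that all instances (all three experts in the example) must complete before a subsequent task such as "Compile Comments and Modify Standards Document" can start. With a completion condition, the multi-instance activity can end early and the still-open instances are cancelled, which covers the "ignore the last, slow reviewer" case.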

How can blockchains be used in audit trails?

I'm currently trying to figure out how to use blockchain in audit trails and potentially in accounting (and if they actually make sense). Both Deloitte and EY mention them.
I somehow cannot understand how this could be of benefit for audits and/or accounting.
To my understanding, to make use of the power of blockchains you need multiple users. With only one user you cannot validate the integrity, since all blocks of that user could be compromised (if one block of a user's blockchain got changed, all of the following blocks may have been changed as well, making it impossible to detect the modification). This means blockchains only make sense if you can share them with different users?
Data, and thus blockchains, however, aren't always shared between multiple users. In accounting you often have only one "user"/"owner" of the data. Sure, you could create multiple users in one company, but there wouldn't be any benefit, since they are in one location (the company) and potentially all compromised. Or if the admin wants to change something, he could easily modify all users, making it useless for audits.
To make it work you would need different partners (supplier/customer) to share the information with. In that case, however, you might only have two users sharing the same blockchain (depending on legal regulations in your country), and then again, whom do you trust if one of the two doesn't validate?
Deloitte mentions that they can be used for files. Again, I don't see the benefit, since you would need multiple users AND files might get compressed with a different algorithm over time, rendering them invalid (the useful information didn't change, but the block will still be invalid). Or is this not an issue in your experience? To me it seems it could be a problem.
The same goes for all the internal data which, from my point of view, may be important for audits. Which company would want to share that information with independent users? Or is it only intended for "public"/"shared" data?
To identify a modification of one block in a blockchain, the user would have to validate every single block (every hash in the header of a block needs to be compared to the data of the previous block). In terms of accounting, a blockchain could be all transactions of one account during one fiscal year. This, however, could easily be thousands of transactions. Wouldn't this be very slow to validate?
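As an aside, that linear walk is cheap to do in code. A rough sketch in Java is below; the block layout (a previous-block hash plus a payload) is an assumption for illustration, not any particular ledger's format.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.List;

// Rough sketch of the linear validation described above; the Block layout is an assumption.
public class ChainValidator {

    record Block(String prevHash, String payload) {}

    static String sha256(String input) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        byte[] digest = md.digest(input.getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }

    // one pass over the chain: each block must carry the hash of its predecessor
    static boolean isValid(List<Block> chain) throws Exception {
        for (int i = 1; i < chain.size(); i++) {
            Block previous = chain.get(i - 1);
            String expected = sha256(previous.prevHash() + previous.payload());
            if (!expected.equals(chain.get(i).prevHash())) {
                return false; // block i does not match block i-1: something after i-1 was altered
            }
        }
        return true;
    }
}
```

Even a chain with thousands of blocks is a single pass with one SHA-256 per block, so the validation itself is fast; the harder problem raised here is who holds independent copies to compare against, not the cost of checking a single copy.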
Maybe I'm misunderstanding the point in terms of audit trails, but as long as the users are not independent, the data can always be modified, making it useless for audits. And you need a critical mass of users to share the blockchain with.
First of all, I think it's necessary to understand the power of blockchain. It gives us the chance to create decentralized databases, i.e. databases that are not controlled by a single authority. Also, the data in a blockchain is immutable and permanent, i.e. it cannot be modified or deleted. Thanks to this you achieve a single decentralized registry in a distributed network, for example for audit trails.
It's true that it makes no sense if you use it only inside your company. But what if you use it among different companies? Each one could encrypt its data, so the other companies couldn't see it. However, all the data would be stored at all the companies, so no one could change it. Moreover, you can have more than one user (node) per company.
Nowadays there are many implementations of blockchain, each with a different objective. To better understand the power of blockchain, I suggest you watch the video explaining the new version (v1.0) of Hyperledger Fabric.

Camunda modular design

I want to manage a huge workflow in Camunda.
I have decided to split this into different processes such as Create, Configuration, and Review & Confirm. Each of these processes has 10 to 15 tasks. These processes should be executed in sequence.
If I want to design my workflow like this, how do I link the processes? What is the proper way to do modular design in Camunda?
You would probably go with some kind of sub-process. If you plan to model different processes, you will most likely use Call Activities and execute them one after another in some kind of root process.
Beware of the fact that each called process starts its own process instance, and thus you have to handle different execution scopes. That becomes relevant when you request information from the system, such as the list of user tasks: you cannot use the processInstanceId of the root process in this case and will have to use a businessKey instead.
You also have to handle the process variables and decide which variables you want to propagate to the called process.
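A small sketch of that businessKey approach, assuming Camunda's Java API; the process key, variable names, and the fact that each call activity passes the business key on via a camunda:in mapping are assumptions.

```java
import java.util.List;

import org.camunda.bpm.engine.RuntimeService;
import org.camunda.bpm.engine.TaskService;
import org.camunda.bpm.engine.runtime.ProcessInstance;
import org.camunda.bpm.engine.task.Task;
import org.camunda.bpm.engine.variable.Variables;

public class RootProcessClient {

    private final RuntimeService runtimeService;
    private final TaskService taskService;

    public RootProcessClient(RuntimeService runtimeService, TaskService taskService) {
        this.runtimeService = runtimeService;
        this.taskService = taskService;
    }

    public ProcessInstance startRoot(String businessKey) {
        // the root process contains the Create / Configuration / Review & Confirm call activities in sequence
        return runtimeService.startProcessInstanceByKey("rootProcess", businessKey,
                Variables.createVariables().putValue("initiator", "demo"));
    }

    public List<Task> openUserTasks(String businessKey) {
        // the called processes run under their own processInstanceIds, so query by business key instead;
        // this only finds their tasks if each call activity maps the key on, e.g.
        //   <camunda:in businessKey="#{execution.processBusinessKey}" />
        return taskService.createTaskQuery()
                .processInstanceBusinessKey(businessKey)
                .list();
    }
}
```

Variable propagation works the same way: camunda:in and camunda:out mappings on each call activity decide which variables flow into the called process and which results flow back out.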

BPMN - system lane/pool?

I am modeling some processes to be used by non-IT people (i.e. they need to be as clear as possible, but I also don't want to break any BPMN rules).
I attached a mockup of what I'm trying to show: a person performs some steps in the system, but it's also important for the people reading the model to understand what the system does after each of the user steps (e.g. that the system automatically calculates a risk score). What's the best practice to model this in BPMN? I assume in any case (read: if this is a good approach in general) it is a pool, not a lane - but in this case the system pool would also need a start and finish, right?
The system is part of your organization so model it as a separate lane in the same pool as the rest of your process.
To indicate whether a step is automated or done by a person, use the task types: a script task for steps done automatically by the system and a user task for those performed by a person.
Activities within the same pool are connected with solid lines (sequence flows) to indicate the business flow.
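If it helps to see the two task types side by side, here is a minimal sketch using the Camunda BPMN model fluent builder; lanes are omitted, and the process key, ids, names and the risk-score script are invented for illustration.

```java
import org.camunda.bpm.model.bpmn.Bpmn;
import org.camunda.bpm.model.bpmn.BpmnModelInstance;

// Sketch only: ids, names and the script are invented for illustration.
public class RiskScoringModel {

    public static BpmnModelInstance build() {
        return Bpmn.createExecutableProcess("riskScoring")
                .startEvent()
                .userTask("enterApplication")
                    .name("Enter application data")       // performed by a person -> user task
                .scriptTask("calculateRiskScore")
                    .name("Calculate risk score")         // done automatically by the system -> script task
                    .scriptFormat("javascript")
                    .scriptText("execution.setVariable('riskScore', 42);")
                .userTask("reviewRiskScore")
                    .name("Review risk score")
                .endEvent()
                .done();
    }
}
```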
If we use MDA/CIM, the system is not modeled as a part of the process (i.e. not as a lane). Software is a tool, not a role....
(PS: two pools, one for the company and a second for the system, is bad; BPMN uses one pool for one process...).
We use an "activity to use case" mapping to show where the system is used.

How to deal with race conditions among jobs with e.g. beanstalkd

I want to set up a job queue with multiple workers. Right now I am looking at beanstalkd, but this is more of a conceptual problem, I believe: how can you ensure that jobs related to a single entity get handled in order?
Let's say the workers manage an email platform for some UI. For a given mailbox, jobs need to be performed serially. For example, sometimes a user will want to re-push their password into the mail platform while troubleshooting. So, they change their password, then change it back right away. That's two password-change jobs submitted to beanstalkd.
Now, most of the time this will go fine, as beanstalkd will hand those jobs out to workers in order. However, some transient error like a DNS lookup delay could cause the second password change (back to the proper one) to go through before the first, leaving the mailbox with an incorrect password.
I have thought about introducing semaphores/mutexes and having a 1:1 worker-machine:beanstalkd-server ratio, but even that would only work if lock requests are granted in the order they were requested, which doesn't seem fully reliable. Having a queue per entity opens some other options, but this needs to support hundreds of thousands of entities.
Judging by how little discussion around this topic I've found, this must not be as common a scenario as I initially thought. Does anyone have experience dealing with this problem?
A couple of potential methods come to mind.
As you point out, unless you are changing priorities, Beanstalkd is a FIFO queue. This means that, if only one worker is dealing with changing the password, it would handle the jobs in order.
If there are multiple workers, then you could store metadata alongside the password: a last-modified time (more exactly, the time when the password change request was made). That time would be set from the job, but if the time already in the database (alongside the password) is newer than the time on the incoming request, the new request would be dropped as out of date.
Depending on the user data storage, you may need additional locking around the database (with an SQL database, this is quite easy, but a file-based store would need additional locking to avoid potential file corruption).
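A rough sketch of that timestamp check is below; the job payload and the in-memory "store" standing in for the database column are assumptions, not anything beanstalkd itself provides.

```java
import java.time.Instant;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal sketch of "drop stale jobs by request timestamp".
// PasswordChangeJob and the in-memory map are stand-ins invented for this example.
public class PasswordChangeWorker {

    record PasswordChangeJob(String mailbox, String newPassword, Instant requestedAt) {}

    // stands in for the last-modified column stored alongside the password
    private final Map<String, Instant> lastApplied = new ConcurrentHashMap<>();

    public void handle(PasswordChangeJob job) {
        // atomically keep the newest request time seen for this mailbox
        // (equal timestamps both pass here; real code would break ties, e.g. with a job id)
        Instant newest = lastApplied.merge(job.mailbox(), job.requestedAt(),
                (stored, incoming) -> incoming.isAfter(stored) ? incoming : stored);

        if (!newest.equals(job.requestedAt())) {
            return; // a newer change request already won; this job is stale, drop it
        }
        applyToMailPlatform(job.mailbox(), job.newPassword());
    }

    private void applyToMailPlatform(String mailbox, String password) {
        // placeholder for the real call into the mail platform
        System.out.printf("setting password for %s%n", mailbox);
    }
}
```

With an SQL store, the same compare-and-set can be done in the database itself with a conditional update (UPDATE ... WHERE last_requested_at < ?), which is the "quite easy" SQL case mentioned above.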