I am modeling some processes to be used by non-IT people (i.e. they need to be as clear as possible, but I also don't want to break any BPMN rules).
I attached a mockup of what I'm trying to show: a person performs some steps in the system, but it's also important for the people reading the model to understand what the system does after each of the user's steps (e.g. that the system automatically calculates a risk score). What's the best practice to model this in BPMN? I assume in any case (read: if this is a good approach in general) it is a pool, not a lane - but in this case the system pool would also need a start and an end, right?
The system is part of your organization, so model it as a separate lane in the same pool as the rest of your process.
To indicate whether a step is automated or done by a user, use the task types: a script task for steps done automatically by the system and a user task for those performed by a person.
Tasks within the same pool are connected with solid sequence flows to indicate the business flow.
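To make this concrete, here is a minimal sketch in BPMN 2.0 XML (all IDs and names are invented for illustration; the surrounding bpmn:definitions wrapper and the diagram-layout elements are omitted): one pool/process with a user lane and a system lane, a user task followed by an automated step, connected by a solid sequence flow.

```xml
<bpmn:process id="applicationProcess"
    xmlns:bpmn="http://www.omg.org/spec/BPMN/20100524/MODEL">
  <bpmn:laneSet id="lanes">
    <bpmn:lane id="clerkLane" name="Clerk">
      <bpmn:flowNodeRef>enterData</bpmn:flowNodeRef>
    </bpmn:lane>
    <bpmn:lane id="systemLane" name="System">
      <bpmn:flowNodeRef>calcRiskScore</bpmn:flowNodeRef>
    </bpmn:lane>
  </bpmn:laneSet>
  <!-- User task: performed by a person. -->
  <bpmn:userTask id="enterData" name="Enter application data"/>
  <!-- Script task: executed automatically by the system
       (a service task is a common alternative for automated steps). -->
  <bpmn:scriptTask id="calcRiskScore" name="Calculate risk score"/>
  <!-- Solid sequence flow: both steps live in the same pool. -->
  <bpmn:sequenceFlow id="flow1" sourceRef="enterData" targetRef="calcRiskScore"/>
</bpmn:process>
```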
If you follow MDA/CIM, the system is not modeled as a part of the process (i.e. not as a lane). Software is a tool, not a role...
(PS: two pools, one for the company and a second for the system, is bad practice; in BPMN, use one pool per process...)
We use an "activity to use case" mapping to show where the system is used.
How do we model a task that is to be performed by all members of a specific role in BPMN? Especially when the number of members of this role is indeterminate at design time and may increase or decrease over time at runtime.
Scenario: There is one task called "Review draft of new standards document" and it is assigned to the role called "Experts". Whenever this task is executed for a new document, we want all the members of the Experts role (not just any one of them) to review it individually and provide their comments and recommendations.
Now, at design time, I cannot be sure how many experts there are at any point in time in the future. New experts may join later; some may leave at a certain point in time. But however many there are at the time the new document needs to be reviewed, we want all of them to review it. Therefore, I cannot model it with a separate "Review" task for each expert ahead of time.
If I model just one task and assign it to the role "Experts", how do I specify in BPMN that all the members of that role (at that specific time) are to execute that one task, rather than just any one of them?
I hope that makes my scenario clear.
Also, I am interested in the implications in how an execution context might treat such a model (even if it can be modeled correctly in BPMN). If there happened to be 3 experts in the role of "Experts" at the specific time when document X is set to be reviewed, must all 3 of the experts complete their tasks before any subsequent task (for example, "Compile Comments and Modify Standards Document") can start? What if the editor wants to have the flexibility to start their work once the first review comments have been submitted without waiting for the rest to complete their review? What if the editor wants to ignore the review of the last expert if they are taking too long to do their review?
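(For illustration: the BPMN construct that seems to fit this scenario is a parallel multi-instance activity, which spawns one task instance per collection element at runtime, so the number of experts need not be known at design time. A minimal sketch follows; the IDs are invented, and the completion-condition variable names follow the Camunda/Activiti convention, so treat them as assumptions rather than part of the question.)

```xml
<bpmn:process id="standardsReview"
    xmlns:bpmn="http://www.omg.org/spec/BPMN/20100524/MODEL"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <!-- One task definition; the parallel multi-instance marker creates
       one instance per member of the experts collection at runtime. -->
  <bpmn:userTask id="reviewDraft" name="Review draft of new standards document">
    <bpmn:multiInstanceLoopCharacteristics isSequential="false">
      <!-- Default semantics: the activity completes only when ALL
           instances complete. A completion condition can relax this,
           e.g. let the process move on after the first review arrives: -->
      <bpmn:completionCondition xsi:type="bpmn:tFormalExpression">
        ${nrOfCompletedInstances >= 1}
      </bpmn:completionCondition>
    </bpmn:multiInstanceLoopCharacteristics>
  </bpmn:userTask>
</bpmn:process>
```

With no completion condition, a subsequent task such as "Compile Comments and Modify Standards Document" would indeed wait for all reviewers; when a completion condition evaluates to true, the remaining active instances are cancelled, which also covers the "ignore the slowest reviewer" case.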
I want to manage a huge workflow in Camunda.
I have decided to split this into different processes, like Create, Configuration, Review & Confirm. Each of these processes has 10 to 15 tasks, and the processes should be executed in sequence.
If I want to design my workflow like this, how do I link the processes? What is the proper way to do modular design in Camunda?
You would probably go with some kind of subprocess. Since you plan to model separate processes, you will most likely use Call Activities and execute them one after another in some kind of root process.
Beware of the fact that each called process starts its own process instance, so you have to handle different execution scopes. That becomes relevant when you request information from the system, e.g. the list of user tasks: you cannot use the processInstanceId of the root process in this case and will have to use a businessKey instead.
You also have to handle the process variables and decide which variables you want to propagate to each called process.
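A minimal sketch of such a root process in Camunda-flavoured BPMN 2.0 XML might look like this (process IDs and variable names are made up; camunda:in/camunda:out control variable propagation, and passing the business key keeps the child instances correlatable with the root):

```xml
<bpmn:process id="rootProcess" isExecutable="true"
    xmlns:bpmn="http://www.omg.org/spec/BPMN/20100524/MODEL"
    xmlns:camunda="http://camunda.org/schema/1.0/bpmn">
  <bpmn:startEvent id="start"/>
  <bpmn:sequenceFlow id="f1" sourceRef="start" targetRef="callCreate"/>
  <!-- Each call activity starts the referenced process as a separate
       process instance with its own execution scope. -->
  <bpmn:callActivity id="callCreate" name="Create" calledElement="createProcess">
    <bpmn:extensionElements>
      <!-- Hand the root's business key down, so task queries can use it
           instead of the root's processInstanceId. -->
      <camunda:in businessKey="#{execution.processBusinessKey}"/>
      <!-- Propagate only the variables the child actually needs. -->
      <camunda:in source="requestId" target="requestId"/>
      <camunda:out source="createResult" target="createResult"/>
    </bpmn:extensionElements>
  </bpmn:callActivity>
  <bpmn:sequenceFlow id="f2" sourceRef="callCreate" targetRef="callConfiguration"/>
  <bpmn:callActivity id="callConfiguration" name="Configuration"
      calledElement="configurationProcess"/>
  <!-- "Review" and "Confirm" call activities would follow the same pattern. -->
  <bpmn:sequenceFlow id="f3" sourceRef="callConfiguration" targetRef="end"/>
  <bpmn:endEvent id="end"/>
</bpmn:process>
```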
Let's say I have the following simplified process:
How should I show there that the data can not only be added by manual input, but can also be received from another system (without user verification)?
And is there a more correct way to display the same actions done by different users? (See the Verification step done by Manager 1 or Manager 2; in reality there are many more steps than just Verification, and all of them are identical in the Manager 1 and Manager 2 columns.)
Obviously there are many open questions regarding your specific requirements, so I can just give you an example:
I am using two lanes, one for the manager, one for the user. I assume that the concrete person (or subrole) necessary to carry out the steps for the "manager" needs to be determined within the process; from a process perspective it's just one role carried out by people with different skill sets or authorizations. I show the "Assign" task here as an automatic step, but it could also be a manual step. A BPMN process can have several start events; I am using two of them here to show the different ways in which the process can start. I am using a collapsed pool "External System" and a message flow to indicate where the automatic message is coming from.
(Please note that BPMN processes are typically modeled from left to right, but may also be modeled from top to bottom. Also note that for more complex processes and a more fine-grained level of detail, it is often preferable to show every process participant in a separate pool with a separate process and an exchange of messages between them. Modeling one process pool with several lanes quite soon reaches its practical limits!)
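The two entry points from that example could be sketched in BPMN 2.0 XML roughly as follows (IDs and names invented for illustration; a collapsed pool is simply a participant without a processRef, and the message flow targets the message start event):

```xml
<bpmn:definitions id="defs" targetNamespace="http://example.org/bpmn"
    xmlns:bpmn="http://www.omg.org/spec/BPMN/20100524/MODEL">
  <bpmn:message id="dataMessage" name="Data received"/>
  <bpmn:collaboration id="collab">
    <bpmn:participant id="mainPool" name="Organization" processRef="mainProcess"/>
    <!-- Collapsed pool: no processRef, its internals stay hidden. -->
    <bpmn:participant id="externalSystem" name="External System"/>
    <!-- Dashed message flow from the external system into our process. -->
    <bpmn:messageFlow id="mf1" sourceRef="externalSystem" targetRef="messageStart"/>
  </bpmn:collaboration>
  <bpmn:process id="mainProcess">
    <!-- Plain (none) start event: the process starts with manual input. -->
    <bpmn:startEvent id="manualStart" name="Data entered manually"/>
    <!-- Message start event: the process starts when the external
         system delivers the data, without user interaction. -->
    <bpmn:startEvent id="messageStart" name="Data received from system">
      <bpmn:messageEventDefinition id="med1" messageRef="dataMessage"/>
    </bpmn:startEvent>
    <!-- Both paths can then converge on the same verification task. -->
  </bpmn:process>
</bpmn:definitions>
```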
My boss wants to have a system that takes continent-wide catastrophic events into account. He wants to have two servers in the US and two servers in Asia (one login server and one worker server on each continent).
In the event that an earthquake breaks the connection between the two continents, both sides should keep working on their own. When the connection is restored, they should sync with each other back to normal.
An external cloud system is not allowed, as he has no confidence in them.
The system should take scalability into account, which means that adding new servers should be easy to configure.
The servers should be load balanced.
The connection between the servers should be very secure (encrypted and sent through SSL, although SSL already takes care of the encryption).
The system should let one and only one user log in with a given account. (Beware of the latency between continents: two users sharing an account may reach both login servers at the same time.)
Please help. I'm already at my wit's end. Thank you in advance.
I imagine that these requirements (if properly analysed) are essentially incompatible, in that they cannot all hold at once according to the CAP theorem.
If you have several datacentres, even if they are close by, partitions WILL happen. If a partition happens, either availability OR consistency MUST be lost, because either:
you have a pre-determined "master", which keeps working, and other "slave" DCs which fail (or go read-only). This keeps consistency at the expense of availability.
OR you lose consistency for the duration of the partition (this means that operations which depend on immediate consistency are also unavailable).
Your single-login requirement is a concrete example: if the transpacific link fails while both login servers stay up, either one side must refuse logins (losing availability) or both sides may accept the same account at the same time (losing consistency).
This is incompatible with your requirements, as far as I can see. What your boss wants is clearly impossible. He needs to understand the CAP theorem.
Now, in YOUR application's case, you may decide that you can bend the rules and redefine what consistency or availability mean, for convenience, and have a system which degrades into an inconsistent but temporarily acceptable state.
You probably want to get product management to have a look at the business case for these requirements. Dropping some of them is probably ok. Consistency is a good requirement to keep, as it makes things behave as people expect - this means dropping availability or partition tolerance. Keeping consistency is definitely easier from an engineering perspective.
This is another one of those things where employers tend not to understand the benefits of using an off-the-shelf solution. If you as a programmer don't really even know where to start with this, then rolling your own is probably going to be a huge money and time sink. There's nothing wrong with not knowing this stuff either; high-availability, failsafe networking that takes catastrophic failure of critical components into consideration is a large problem domain that many people pour a lot of effort and money into. Why not take advantage of what providers have to offer?
Give talking to your boss about using existing cloud providers one more try.
You could contact one of the solid and experienced hosting providers (we use Rackspace) that have data centers in different regions worldwide and get their recommendations based on your requirements.
This will require expert assistance, a large budget, and serious planning.
A better option would be to contact a reputable provider with a global footprint, select a premium solution with a solid SLA backing up their service, and let them tailor a solution that comes close to your needs.
Just realize that even the likes of Google, Yahoo, Microsoft and Amazon (to name a few) have at one time or another had some issue or other that rendered segments of their systems offline for certain users.
My company is planning to implement SAP HR in our organization. We already have the other modules running. We plan to offer ESS/MSS to approximately 200,000 users. Our current configuration is one machine with a Central Instance and 3 machines with Dialog Instances. The DB is on the Central Instance machine. Enterprise Portal + DB is running on a separate machine. We are thinking of separating the HR module onto a separate DB so as not to kill the other modules with load. Is this a valid concern? Is there a better way to architect the system? I was thinking along the lines of separating the DB and the Central Instance onto two different machines. I've tried searching the SAP Marketplace for advice on SAP infrastructure architecture, without any luck.
I'm not quite sure what is meant by "separating" ...
I would put forward the idea of two separate SAP systems, one for HR and one (or possibly multiple others) for the rest. Each of these systems can then be sized/secured according to its different requirements (the HR system: many users, possibly heavy dialog use; the other system: maybe a bit more "batch-oriented").
This would also be suggested by SAP's general strategy, with almost every module being on its own release schedule.
With regard to the DB and the application server (central instance?) being on different machines: that is indeed very common and one of the easiest tuning measures. You can mix and match pretty "ruthlessly", with the AppServer on Solaris and the DB on HP-UX.
Separating HR is a valid option. It's not only the load: the HR module also has very strict security needs, which may cause some difficulties in system copies for the QA and development systems.
Separating the central instance and the DB onto separate machines is a valid option, but I would not do it (we are doing it...). It causes some complications in future operation, like upgrading and database maintenance. It's easier to remove as much load as possible from the central instance: just remove it from the logon group, so that only the message server, the enqueue process and the update process (optional but recommended) are left on it.
Update 1: It's not uncommon to separate the DB from the central instance, but it does introduce some complications that, I think, are unnecessary.