I read the following sentences in an article.
BPMN uses Pools when representing the interaction between an organisation and participants outside of its control. Within a company, a single pool covers its own internal operations. It is only when it interacts with external participants that additional Pools are required.
and I read this in another place:
You may create separate pools for Customer Service Assistant and Logistics Department. But to highlight the fact that they are under
the same company, it would be better to create a pool for The True
Aqua Distilled Water Company and make Customer Service Assistant and
Logistics Department lanes of the pool. Create a pool below Customer.
Name the pool The True Aqua Distilled Water Company.
Which of these is more correct, a single pool or multiple pools, and why?
Both explanations are correct. The first offers a strict way of reasoning about pools in BPMN. Within one organization there may be several interacting processes. The recommendation is to model these processes as lanes of a single pool, with the hand-offs between them shown as ordinary sequence flows, so that all lanes belonging to one organization are placed in one pool. The processes of a separate organization are likewise modelled in their own single pool, and the interactions between the two organisations are modelled as message flows between the pools.
However, these rules can be relaxed, so that instead of modelling each of an organisation's processes as a lane, each can be modelled as its own pool. This is particularly workable where interactions between organisations are not modelled (that is, pools representing whole organisations are unnecessary if only one organisation is being considered). It is worth pointing out that using lanes and pools correctly can prevent future problems. For instance, if you model an organisation's processes as pools and later need to model a second organisation's processes as well, the pools become confusing. It is therefore safer to follow the stricter practice.
When people describe Paxos, they always assume that there are already some proposers in the cluster. But where do the proposers come from, and what decides which processes become proposers?
How the cluster is initially configured, and how it is later changed, is down to the administrator who is trying to optimise the system.
You can run the different roles on different hosts and have different numbers of them: three proposers, five acceptors and seven learners, whatever you choose. Clients that need to write a value only need to connect to proposers. With Multi-Paxos for state replication, connecting to proposers is sufficient; the clients don't need to exchange messages with any other role type. Yet there is nothing to prevent clients from also being learners by seeing messages from acceptors.
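As a concrete illustration, here is a minimal sketch of such a layout in Python; the host names and the shape of the configuration are invented for the example, not part of any real Paxos implementation:

    # Hypothetical role assignment: three proposers, five acceptors,
    # seven learners. Roles are independent labels, and one host may
    # hold several of them at once.
    cluster = {
        "proposer": ["p1", "p2", "p3"],
        "acceptor": ["a1", "a2", "a3", "a4", "a5"],
        "learner":  ["l1", "l2", "l3", "l4", "l5", "l6", "l7"],
    }

    def write_targets(cluster):
        """Clients that want to write a value only talk to proposers."""
        return cluster["proposer"]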
As long as you follow the Paxos algorithm, it all comes down to minimising network hops (latency and bandwidth), the cost of hardware, and the complexity of the software for your particular workload.
From a practical perspective, your clients need to be able to find proposers in the face of failures. A cluster administrator will configure which nodes are to be proposers and make sure that they can be discovered by clients.
It is hard to visualize from descriptions of the abstract algorithm how things might work, as many messaging topologies are possible. When you apply the algorithm to a practical application, it is far more obvious which setup minimises latency, bandwidth, hardware and complexity. An example might be a three-node MySQL cluster running Paxos. You want all three servers to have all the data, so they are all learners. All must be acceptors, as you need three at a minimum to have one node fail and still maintain progress. They may as well all be proposers, too, for the best availability and the simplest software and configuration. Note that one of them will become the distinguished leader. The database administrator doesn't think about the Paxos roles; they just set up a three-node database cluster.
The roles in the cluster may need to change. For example, you might want to expand the capacity of a database cluster, or a server might die, so you need to change the cluster membership to swap the dead node for a fresh one. For the Paxos algorithm to work, every process must have a strongly consistent view of which processes are in which roles. How do you get that consensus? You use Paxos itself to fix a new value of the cluster membership.
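A minimal sketch of that idea, with invented names: the membership list is treated as just another replicated value, and a change to it is chosen through an ordinary round of consensus:

    # Hypothetical: cluster membership is itself a value fixed by Paxos.
    # `run_paxos_round` stands in for a full prepare/promise/accept cycle
    # executed against the *current* membership.
    def change_membership(run_paxos_round, current, dead, replacement):
        """Swap a dead node for a fresh one by fixing a new membership value."""
        proposed = (current - {dead}) | {replacement}
        chosen = run_paxos_round(("membership", frozenset(proposed)))
        return chosen  # every process now agrees on the new roster

    # e.g. change_membership(run_paxos_round,
    #                        {"node1", "node2", "node3"},
    #                        dead="node3", replacement="node4")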
I am modeling some processes to be used by non-IT people (i.e. they need to be as clear as possible, but I also don't want to break any BPMN rules).
I attached a mockup of what I'm trying to show: a person performs some steps in the system, but it's also important for the people reading the model to understand what the system does after each of the user's steps (e.g. that the system automatically calculates a risk score). What's the best practice for modelling this in BPMN? I assume in any case (read: if this is a good approach at all) it would be a pool, not a lane; but then the system pool would also need a start and an end, right?
The system is part of your organization, so model it as a separate lane in the same pool as the rest of your process.
To indicate whether a step is automated or done by a user, use BPMN task types: a script task for steps done automatically by the system and a user task for those performed by a person.
Activities within the same pool are connected with solid lines (sequence flows) to indicate the business flow.
If you use MDA/CIM, the system is not modelled as a part of the process (i.e. not as a lane). Software is a tool, not a role.
(PS: two pools, one for the company and a second for the system, is bad; BPMN uses one pool per process.)
We use an "activity to use case" mapping to show where the system is used.
Let's say I have the following simplified process:
How should I reflect there that the data could be added not only by manual input but could also be received from another system (without user verification)?
And is there a more correct way to display the same actions done by different users? (See the Verification step done by Manager 1 or Manager 2; in reality there are many more steps than just Verification, and all of them are the same in the Manager 1 and Manager 2 columns.)
Obviously there are many open questions regarding your specific requirements, so I can just give you an example:
I am using two lanes: one for the manager, one for the user. I assume that the concrete person (or sub-role) needed to carry out the manager's steps is determined within the process; from a process perspective it's just one role, carried out by people with different skill sets or authorizations. I show the "Assign" task here as an automatic step, but it could also be a manual step. A BPMN process can have several start events; I am using two of them here to show the different ways in which the process can start. I am using a collapsed pool "External System" and a message flow to indicate where the automatic message is coming from.
(Please note that BPMN processes are typically modeled from left to right but may also be modeled from top to bottom. Also note that for more complex processes and a more fine-grained level of detail, it is often preferable to show every process participant in a separate pool, with a separate process for each and an exchange of messages between them. Modelling one process pool with several lanes quite soon reaches its practical limits!)
What if you had one large database to serve all your apps? Then your website that needs to store customer orders could use the same database that your game uses to store registered users. Different applications could have tables reserved for their own use. Some may say that this could be a security issue, because if someone cracks your database, they could attack all your applications. But in a lot of databases you could use a line like the following to restrict access:
DENY SELECT ON aTable TO aUser;
I am wondering if this central database would be considered poor practice, and if so, why?
The way I look at it, a web application is nothing more than a collection of web pages. Because of this, it really doesn't matter if one page is about, say, cooking, while the other is about computer programming.
If you think about it, this is very similar to OpenID, which I use to log into my SO account!
If you have your fundamental security implemented correctly, it doesn't matter how the user is interacting with your website. Where I would make this distinction is in two cases:
Don't mix HTTP with HTTPS. On a shared host this isn't going to be an issue anyway; if you buy a certificate for HTTPS, serve everything that way (excluding the rare case where this might affect performance).
E-commerce or financial data should be handled in a fundamentally different way. If you look at your typical bank, it has multiple log-in protocols, picture verification and short log-in times. This builds users' confidence in the security of the site, but it would be a pain in the butt for a game site or most other non-mission-critical applications.
Regarding structure, if you do mix applications into one large database, you should consider other maintenance issues, such as:
Keep tables separate; consider a prefix for every table that is unique to each application (see the sketch at the end of this answer). Following my example above, you would then start the cooking DB table names with 'ck' and the computer programming DB table names with 'pg'. This would allow you to easily separate the applications if you need to in the future.
Use a matching table to identify which ID goes to which web application.
Consider what you would do and how to handle it if a user decided to register for both applications. Do you want to offer transparency that they can share the same username?
Keep an eye on both your data storage limit AND your bandwidth limit.
If you are counting on these applications to drive revenue, you are putting all your eggs in one basket. Make sure that if it goes down, you have options to restore or move to another host.
These are just a few of the things to consider. But fundamentally, outside of huge (big data) applications there is nothing wrong with sharing resources/databases/hardware between applications.
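Here is a rough sketch of the prefix convention from the list above, using Python's built-in sqlite3; the table names are invented for the example:

    import sqlite3

    # One shared database; each application owns a prefixed namespace.
    conn = sqlite3.connect("shared.db")
    conn.execute("CREATE TABLE IF NOT EXISTS ck_recipes (id INTEGER PRIMARY KEY, title TEXT)")
    conn.execute("CREATE TABLE IF NOT EXISTS pg_articles (id INTEGER PRIMARY KEY, title TEXT)")

    # Splitting the applications apart later means moving every table with
    # the relevant prefix (GLOB matches '_' literally, unlike LIKE).
    cooking_tables = conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table' AND name GLOB 'ck_*'"
    ).fetchall()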
Conceptually, it could be done.
Implementation-wise, to keep the various parts distinct from one another, you could use naming conventions (as per @Sable Foste) and/or separate database schemas (tables Finance.Users, GameApp.Users, etc.).
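As a rough sketch of the schema variant: real schema support varies by engine (e.g. CREATE SCHEMA in PostgreSQL or SQL Server), so this example emulates it with SQLite's ATTACH, and all names are invented:

    import sqlite3

    conn = sqlite3.connect("main.db")
    # Each application lives in its own file, attached under a schema-like name.
    conn.execute("ATTACH DATABASE 'finance.db' AS finance")
    conn.execute("ATTACH DATABASE 'gameapp.db' AS gameapp")

    # The same table name no longer collides across applications.
    conn.execute("CREATE TABLE IF NOT EXISTS finance.users (id INTEGER PRIMARY KEY, name TEXT)")
    conn.execute("CREATE TABLE IF NOT EXISTS gameapp.users (id INTEGER PRIMARY KEY, name TEXT)")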
Management-wise, things could get tricky. Repeating some points, adding others:
One application could use a disproportionately large share of resources (disk space, I/O, CPU)
Tracking versions could be tricky (one app is on v4, finance is on v7) -- it depends on how many application instances you have to support.
Disaster recovery-wise, everything is lumped together. It all gets backed up as one set, it all gets restored as one set. Finance corrupt? Restore from backup... and lose your more recent game data.
Single point of failure. One database goes down, all your applications are down.
These (and other similar issues) are trade-offs you'll want to consider. Plan ahead, to lessen the chance that what's reasonable and economic today becomes a major headache tomorrow.
My boss wants to have a system that takes into account continent-wide catastrophic events. He wants two servers in the US and two servers in Asia (one login server and one worker server on each continent).
In the event that an earthquake breaks the connection between the two continents, each side should keep working on its own. When the connection is revived, they should sync with each other and return to normal.
An external cloud system is not allowed, as he has no confidence in it.
The system should take scalability into account, which means that adding new servers should be easy to configure.
The servers should be load balanced.
The connection between the servers should be very secure (encrypted and sent over SSL, although SSL takes care of the encryption).
The system should let one and only one user log in with a given account. (Beware of the latency between continents: two users sharing an account may reach both login servers at the same time.)
Please help. I'm already at my wit's end. Thank you in advance.
I imagine that these requirements (if properly analysed) are essentially incompatible, in that they cannot all be satisfied, according to the CAP theorem.
If you have several datacentres, even if they are close by, partitions WILL happen. When a partition happens, either availability OR consistency MUST be lost, because either:
you have a pre-determined "master", which keeps working, while the other "slave" DCs fail (or go read-only). This keeps consistency at the expense of availability;
OR you lose consistency for the duration of the partition (which means that operations depending on immediate consistency are also unavailable).
This is incompatible with your requirements, as far as I can see: what your boss wants is clearly impossible. He needs to understand the CAP theorem.
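To make the trade-off concrete, here is a minimal sketch (all names and the mode flag are invented for illustration) of the choice a node faces during a partition: a CP system refuses writes it cannot replicate to a quorum, while an AP system accepts them and reconciles later:

    # Stubs standing in for real replication machinery.
    def replicate(value, replicas): ...
    def queue_for_reconciliation(value): ...

    def write(value, reachable_replicas, total_replicas, mode):
        quorum = total_replicas // 2 + 1
        if mode == "CP":
            # Consistent but unavailable during the partition.
            if len(reachable_replicas) < quorum:
                raise RuntimeError("no quorum: rejecting write")
            replicate(value, reachable_replicas)
        elif mode == "AP":
            # Available, but readers on the other side of the partition
            # may see stale data until it heals.
            replicate(value, reachable_replicas)
            queue_for_reconciliation(value)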
Now, in YOUR application's case, you may decide that you can bend the rules and redefine what consistency or availability mean, for convenience, and have a system which degrades into an inconsistent but temporarily acceptable state.
You probably want to get product management to look at the business case for these requirements; dropping some of them is probably fine. Consistency is a good requirement to keep, as it makes things behave as people expect, which means dropping availability or partition tolerance instead. Keeping consistency is definitely easier from an engineering perspective.
This is another one of those cases where employers tend not to understand the benefits of using an off-the-shelf solution. If you as a programmer don't really know where to start with this, then rolling your own is probably going to be a huge money and time sink. There's nothing wrong with not knowing this stuff, either; high-availability, failsafe networking that takes catastrophic failure of critical components into consideration is a large problem domain that many people pour a lot of effort and money into. Why not take advantage of what providers have to offer?
Give talking to your boss about using existing cloud providers one more try.
You could contact one of the solid and experienced hosting providers (we use Rackspace) that have data centers in different regions worldwide and get their recommendations based on your requirements.
This will require expert assistance, a large budget, and serious planning.
A better option would be to contact a reputable provider with a global footprint, select a premium solution with a solid SLA backing up their service, and let them tailor a solution that comes close to your needs.
Just realize that even the likes of Google, Yahoo, Microsoft and Amazon (to name a few) have at one time or another had issues that rendered segments of their systems offline to certain users.