In the above process, what happens if:
A and B activate
OR-JOIN should now be waiting for 2 tokens
A completes
OR-JOIN now waits for B
B chooses exit 2
Does OR-JOIN now understand that it no longer needs to wait for B? And continue?
This is indeed a tricky one. My initial thought was that this process would not finish if the second token went towards the end2 end event. However, the BPMN specification does not seem to clearly define this case. The only relevant reference to inclusive gateways that I could find in the specification is this:
Upon execution, a token is consumed from each incoming Sequence Flow that *has* a token. (BPMN Specification, Version 2.0, January 2011, page 435; my emphasis.)
The word “has” raised some doubts in me, and I found the following example in a BPMN book (Thomas Allweyer, BPMN 2.0), where the author says that a converging inclusive gateway only waits for the tokens that can still reach it, not necessarily all tokens that were created by the diverging inclusive gateway. That would also be in line with the wording in the BPMN specification.
So in answer to your question, the “OR-JOIN” would indeed understand that it does not have to wait for the second token. The first token would be consumed by end1 and the second token by end2 and the process finishes normally.
Related
I'm building this BPMN in which a user has to fill 6 forms (do 6 tasks). After he's completed all 6 he should get some results, but if any of those tasks are missing, then we do not have the results.
Which gateway should I use? The one I thought suited the most was the inclusive gateway, but all 6 tasks can be completed in any order.
Should I use a complex gateway and just describe the process? Or does the parallel gateway work just fine?
If exactly 6 tasks have to be performed in any desired order, and the flow shall not continue before all 6 are completed, the simplest would be to use a parallel gateway:
As soon as a token arrives at the first gateway, a token is sent on every outgoing flow, which activates each of the tasks. The converging gateway will generate a token to continue the flow once every task has completed and passed its token to the gateway.
The complex gateway could also address your needs, since it allows complex conditions on the incoming and outgoing flows, including conditions involving multiple tokens on multiple branches. That would be necessary if, for example, only 5 out of 6 tasks had to be completed, or if you wanted only certain combinations of completed tasks to continue the flow. But it seems overkill for your problem.
The inclusive gateway is not a good fit for your needs, since its outgoing branches are activated conditionally: only the branches whose conditions evaluate to true receive a token, whereas you always need all 6 tasks to run.
I have a BPMN diagram (see below) with some errors that I can't seem to figure out. The diagram depicts the Produce Magazine Article Process, where the Writer and Researcher are freelancers who work together to write articles for various publications.
Bigger version: BPMN diagram
There are a bunch of errors here: three of them are logical (two are related), and one is a BPMN syntax error.
Let's start with the syntax.
A message is always a communication between two separate pools, as it has to cross pool boundaries. In your case you have depicted Freelancers as a single pool, so Send information, being between lanes but not between pools, is a syntax error. Before suggesting a solution, though, I will focus on the logical errors.
A timer event is not used to show the fact that some time goes by between the activities; that is something natural in any process. It is used to indicate that the flow of time is a trigger of the next action(s). For instance, 7 days after choosing a topic, the Publication might contact the Researcher to check on the progress; that would be indicated by a timer event. In your case, it seems that the flow continuation is triggered by passing messages, so you should indicate it with an incoming message event. You actually need to do that in two places: one is obvious (Get article as a "result" of a timer event), and the second correlates to the second problem.
The second issue, which is most probably a logical error, is that since we are talking here about freelancers, the Researcher and Writer are most probably two separate entities, not one organisation as your current diagram suggests. If that is the case, you should represent them as two separate pools. Then your message flow would be justified, but instead of the "Wait for information" timer event you should have a "Receive information" incoming message event (which is, by the way, the start event for the Writer pool; similarly, the Researcher receiving the article request should be handled by an incoming message event).
If you prefer to depict the Freelancers as one "organisation", then you should completely abandon the timer event (again, you have used it as an indication of time passing, which, as I explained earlier, is not how it should be used). You have a simple flow where, once the Researcher finishes their job, it is passed to the Writer, who carries it on from there. In that case, you should have a simple sequence flow (solid line) between the actions themselves.
It is also good practice (and at least recommended; some BPM engines verify it) to be consistent in using End events and to have an End event for every branch of a process. You are missing one or two, depending on how you are going to approach the Freelancers part. Similarly, you should have a Start event for Publication.
Below are the two options shown as diagrams. Note that I also made some minor changes to let the Publication handle the insufficient-information case; otherwise it would be stuck forever waiting for the article to come.
Option with Freelancers as separate pools:
Option with Freelancers considered as a single organisation:
Does someone know if the following BPMN model is correct?
I'm not sure here because of XOR gateway within the parallel gateway.
After some research I found the solution.
The example above is not correct.
Instead of the normal end event, a termination event must be used. This event terminates the whole process immediately and removes all other tokens.
This would be the correct solution:
I know in BPMN there is just a "start event" for each pool. In my case I have a pool that can begin when a message is caught or because the actor decide to do it by his own decision.
How can I model that? I'm not sure I can use an event-based exclusive XOR.
Maybe a complex gateway?
As stated in many best-practice how-tos, it is NOT RECOMMENDED to use multiple start events in a pool. The BPMN 1.2 specification contains this note too:
9.3.2.
...
It is RECOMMENDED that this feature be used sparingly and that the modeler be aware that other readers of the Diagram may have difficulty understanding the intent of the Diagram.
...
On the other hand, the common rule for the case of an omitted start event is:
If the Start Event is not used, then all Flow Objects that do not have
an incoming Sequence Flow SHALL be instantiated when the Process is instantiated.
I assume this will be fair enough for the case of a manual process start too. Even if the process has only a message start event, it will be started correctly, because a Message Start Event is itself a flow object with no incoming sequence flow and thus complies with the above rule.
However, if you want to be 100% sure the process will go the way you want, then the Event-Based Exclusive Gateway (available since version 1.1) is your choice. Placing it before the multiple different start triggers will make the process choose one of them to start with.
Further explanation can be found in this blog.
Unlimited process instances
If you don't mind that during execution your pool could be instantiated multiple times (e.g. once started by a message and 3 times by an actor), then you can simply use multiple start events (the BPMN 1.2 PDF spec, section 9.3.2, page 37, allows this):
Single instance
If you can only allow a single run of the pool, you might have to instantiate it manually at the start of your execution and then decide whether and when to use it. Here is an example of how this can be done:
The Event-Based Gateway (Spec 9.5.2.4) will "decide" what to do with your pool:
If Actor decides to start or a message comes from the main pool, some actions will take place;
If the process is "sure" that the additional pool will not be required, a signal is sent to terminate its instance.
I have a WCF service that will serve multiple clients.
It will have operations like 'Accept Suggested Match' or 'Reject Suggested Match'.
I would like all the operations to be run serially, but I am afraid this will have an impact on performance.
From what I saw - the default (and most used) instancing is 'Per Call' and the default concurrency mode is single.
So is it true that I would need to use 'Single' concurrency mode?
And how bad of an impact is it?
(I estimate tens of clients using the system at the most).
Is there any way to enjoy both worlds?
Use parallel computing (the service communicates with a database) and still perform the operations serially ?
In this post I read about 'Instance mode = Per Call and Concurrency = Single':
For every client instance, a single thread will be allocated.
For every method call, a new service instance will be created.
A single thread will be used to serve all WCF instances generated from a single client instance.
It seems to me that this doesn't guarantee that operation calls will be performed serially!
For example, if this happens :
CALL 1: CLIENT ALPHA calls 'Operation A'
CALL 2: CLIENT ALPHA calls 'Operation B'
CALL 3: CLIENT BETA calls 'Operation C'
From what I read, 'A single thread will be used to serve all WCF instances generated from a single client instance' sounds to me like CALL 1 and CALL 2 would be performed serially, but CALL 3 might be performed between them, because it comes from a different client.
Is this true? Is there no way to make sure that calls are handled in the order they arrive?
And can someone tell me whether this is bad practice, or whether there are real-world scenarios where it is accepted and recommended to use this method of communication in WCF?
Thank you very much
I think there are three ways to answer this question:
A direct answer
An alternative approach for situations where you control the clients and the server
An alternative approach where you don't control the clients
I'd like to give answers 2 and 3 first (because I think they are better) but I will give a direct answer at the end...I promise!
I believe your problem is not that you need to process messages in the order they are received by the service, it is that since you have independent clients, you cannot guarantee that they are received by the service in the same order they are sent from the clients.
Consider your example of clients ALPHA and BETA. Even though client ALPHA might send the request for operation 2 before client BETA sends the request for operation 3, you have no way of knowing in what order they will be received by the service. This means that even if you process them at the server in the order they are received, this might still be the "wrong" order.
If you agree with this, carry on reading. If not, go straight to the direct answer at the bottom of this post ;o)
Approach 1: Use WS-Transactions
If you control both the service and the clients, and if the clients are capable of implementing WS-* SOAP protocols (e.g. .NET WCF clients), then you could use the WS-Transaction protocol to make sure that operations from a single client that are intended to be processed as a single long-running transaction are definitely done that way.
For details of how to do WS-Transaction with WCF, see
http://msdn.microsoft.com/en-us/library/ms752261.aspx
Following your example with clients ALPHA and BETA, this would enable client ALPHA to:
Start a transaction
Call operation 1
Call operation 2
Commit the transaction
The effect of ALPHA starting a transaction on the client is to flow that transaction through to the server, so it can create a DB transaction which guarantees that operation 1 and operation 2 complete together or not at all.
If client BETA calls operation 3 while this is going on, it will be forced to wait until the transaction commits or aborts. Note that this might cause operation 3 to time out in which case BETA would need to handle that error.
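In WCF, client ALPHA's side of this could be sketched as follows. This is only a sketch: MatchServiceClient and the operation names are placeholders for your generated proxy, and it assumes a binding with transactionFlow="true" plus contract operations marked [TransactionFlow(TransactionFlowOption.Allowed)]:

```csharp
using System;
using System.Transactions;

// Sketch only: MatchServiceClient and the operation names are placeholders.
using (var scope = new TransactionScope())
using (var client = new MatchServiceClient())
{
    client.AcceptSuggestedMatch(firstMatchId);    // operation 1
    client.AcceptSuggestedMatch(secondMatchId);   // operation 2
    scope.Complete();  // commit both; if this line is not reached, both roll back
}
```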
This behaviour will work with any InstanceContextMode or ConcurrencyMode.
However, the drawback is that these kinds of long-running, distributed transactions are not generally good for scalability. Today you have tens of clients, but will that number grow in the future? And will the clients always be able to implement WS-Transaction? If so, then this could be a good approach.
Approach 2: Use an "optimistic" type approach
If you need to support many different client types (e.g. phones, browsers etc.) then you could just allow the operations to happen in any order and make sure that the service and client logic can handle failures.
This might sound bad, but
you can use database transactions on the server side (not the WS-Transaction ones I mentioned above) to make sure your database is always consistent
each client can synchronise its own calls so that, in your example, ALPHA would wait for call 1 to complete successfully before making call 2
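As a sketch of the second point, assuming an async-capable generated proxy (the proxy and operation names here are hypothetical), client ALPHA simply does not issue call 2 until call 1 has completed:

```csharp
// Sketch: proxy and operation names are placeholders.
var client = new MatchServiceClient();
await client.AcceptSuggestedMatchAsync(matchId);   // call 1 must complete OK...
await client.RejectSuggestedMatchAsync(otherId);   // ...before call 2 is sent
```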
The drawback here is that you need to think carefully about what failure logic you need and make sure the client and service code is robust and behaves appropriately.
In your example, I mentioned that ALPHA can synchronise its own calls, but if BETA calls operation 3 after ALPHA calls operation 1 but before ALPHA calls operation 2, it could be that:
Operation 2 will fail (e.g. operation 3 has deleted a record that operation 2 is trying to update), in which case you need to handle that failure.
Operation 2 will overwrite operation 3 (e.g. operation 3 updates a record and then operation 2 updates the same record). In this case you need to decide what to do. Either you let operation 2 succeed, in which case the updates from BETA are lost; or operation 2 can detect that the record has changed since ALPHA read it and fail; or you can inform the user and have them decide whether they want to overwrite the changes.
For a discussion on this in the context of MS Entity Framework see
http://msdn.microsoft.com/en-us/library/bb738618.aspx
for a general discussion of it, see
http://en.wikipedia.org/wiki/Optimistic_concurrency_control
Again, in WCF terms, this will work with any ConcurrencyMode and InstanceContextMode.
This kind of approach is the most common one used where scalability is important. The drawback of it is that the user experience can be poor (e.g. they might have their changes overwritten without being aware of it).
Direct answer to the original question
If you definitely need to make sure messages are processed in series, I think you are looking for InstanceContextMode.Single:
http://msdn.microsoft.com/en-us/library/system.servicemodel.instancecontextmode.aspx
This means that a single instance of the service class is created for the lifetime of the service application. If you also use ConcurrencyMode.Single or ConcurrencyMode.Reentrant, this means that your service will only process one message at a time (because you have a single instance and it is single-threaded), and the messages will be processed in the order they are received.
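In code, that combination would look roughly like this (the service and contract names are placeholders, not your actual types):

```csharp
using System.ServiceModel;

// One service object for the whole application, one message at a time:
// requests queue up and are dispatched in the order they arrive.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                 ConcurrencyMode = ConcurrencyMode.Single)]
public class MatchService : IMatchService   // placeholder names
{
    public void AcceptSuggestedMatch(int matchId) { /* ... */ }
    public void RejectSuggestedMatch(int matchId) { /* ... */ }
}
```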
If you use ConcurrencyMode.Multiple, then your single service instance will process multiple messages at the same time on different threads, so the order of processing is not guaranteed.
How bad the performance impact is depends on how long each service call takes relative to how often calls arrive. A single thread processing calls that each take 0.1 seconds can handle at most 10 calls per second. At, say, 5 calls per second it will be fine; as the arrival rate approaches 10 per second, the queue of waiting calls, and hence the average response time, grows steeply; and at 20 per second the service simply cannot keep up and the backlog grows without bound.