Activiti diagram: parallel tasks in a process

My business process has to be translated into an Activiti BPM diagram (or a diagram for another BPM engine).
Here is my shortened business process:
- user 1 creates the business transaction
- then user 2 performs his task to change the transaction status.
My problem is: after the business transaction is created, user 1 could delete the transaction, in which case user 2 could no longer complete his task (or his task would be removed automatically by user 1).
So how can I express this in a BPM diagram? I'm confused about whether to use a parallel gateway.

The business process segment you describe is actually quite common practice.
Often, an initiator is authorized to cancel a process or portion of a process which may have existing tasks.
The easiest way to handle this is:
1. After initiation of the process, split the flow and send a "Status" task to user 1 and a "Process Transaction" task to user 2.
2. Have the status task (user 1) provide a simple UI option to cancel the process, which either sends a terminate event (everything gets shut down immediately) or, if cleanup is needed, sends a signal event to shut down the process instance.
3. The "Process Transaction" task will have an event listener that picks up the signal event, which automatically closes the task and flows (via cleanup logic) to the end.
Note that if you use a terminate event, step 3 is not required, but that is something of a "crowbar approach". In my experience, instance cleanup is always required (notify users of why their process task went away, undo DB or system-of-record transactions, send messages to other systems).
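As a rough illustration, user 1's "cancel" option could call something like the following through the Activiti Java API (the signal name "transactionCancelled" and the surrounding class are assumptions for illustration, not part of your model):

```java
import org.activiti.engine.ProcessEngine;
import org.activiti.engine.ProcessEngines;
import org.activiti.engine.RuntimeService;
import org.activiti.engine.runtime.Execution;

public class TransactionCancellation {

    // Invoked from the "Status" task UI when user 1 decides to cancel.
    public void cancel(String processInstanceId, boolean cleanupNeeded) {
        ProcessEngine engine = ProcessEngines.getDefaultProcessEngine();
        RuntimeService runtimeService = engine.getRuntimeService();

        if (cleanupNeeded) {
            // Find the execution in this instance that is subscribed to the
            // cancellation signal (e.g. a boundary signal event on the
            // "Process Transaction" task) and throw the signal to it, so the
            // cleanup path can run before the instance ends.
            Execution waiting = runtimeService.createExecutionQuery()
                    .processInstanceId(processInstanceId)
                    .signalEventSubscriptionName("transactionCancelled")
                    .singleResult();
            if (waiting != null) {
                runtimeService.signalEventReceived("transactionCancelled", waiting.getId());
            }
        } else {
            // "Crowbar approach": drop the whole instance immediately.
            runtimeService.deleteProcessInstance(processInstanceId, "Cancelled by user 1");
        }
    }
}
```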
As something of a separate note, BP-3 (http://www.bp-3.com) offers a full suite of Activiti services, including:
- Migration support (to Activiti from other BPM systems)
- Pre-production support for the Activiti BPM engine
- BPM process design, development, and review services
- Production support for the Activiti BPM engine
- General consulting services
They may be able to assist you with a migration strategy.

User 2's task, which is something like "Update Transaction Status", should first check that the specific entity still exists before updating the status, and then, if necessary, place a logical/physical lock indicating that it is being worked with. Similarly, user 1's task should check before deleting whether the entity is held/locked by another user. So each step is not just one action.
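In code, user 2's task handler would therefore look roughly like this sketch (the repository, entity and lock field are invented for illustration and are not part of any BPM engine API):

```java
// Hypothetical service behind user 2's "Update Transaction Status" task.
public class TransactionStatusService {

    private final TransactionRepository repository; // assumed persistence port

    public TransactionStatusService(TransactionRepository repository) {
        this.repository = repository;
    }

    public void updateStatus(String transactionId, String newStatus, String userId) {
        // 1. Check that the entity still exists (user 1 may already have deleted it).
        BusinessTransaction tx = repository.findById(transactionId);
        if (tx == null) {
            throw new IllegalStateException("Transaction " + transactionId + " no longer exists");
        }
        // 2. Place a logical lock so user 1's delete step can see it is being worked on.
        if (tx.getLockedBy() != null && !tx.getLockedBy().equals(userId)) {
            throw new IllegalStateException("Transaction is locked by " + tx.getLockedBy());
        }
        tx.setLockedBy(userId);
        // 3. Only then perform the actual status change.
        tx.setStatus(newStatus);
        repository.save(tx);
    }
}

// Minimal assumed types so the sketch is self-contained.
interface TransactionRepository {
    BusinessTransaction findById(String id);
    void save(BusinessTransaction tx);
}

class BusinessTransaction {
    private String status;
    private String lockedBy;
    String getLockedBy() { return lockedBy; }
    void setLockedBy(String user) { this.lockedBy = user; }
    void setStatus(String status) { this.status = status; }
}
```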

How long should a process live in Camunda

How long should a process live in a Camunda BPMN workflow?
I have a process that can run multiple times throughout the life of a product. I need to keep track of and update data points that this workflow handles for the product.
One proposal was to write a looping BPMN that listens for an event to start the process, and ends with it back on the Receive Task listening for the event to fire again.
However, this would result in processes that never actually end because they always loop back, and we have no guarantees about when or how many times this event could be fired.
I have also considered creating a BPMN that just does one run and terminates. This relieves the problem of a long-living process, but I lose all of the process variables that are included.
EDIT:
Here is a simplified diagram of the looping mechanism we're looking at. I don't want to re-check eligibility after the first time, but I want to verify and save the address any time it changes.
[Image: simplified address diagram]
Honestly, the BPMN file (aka the process definition) should be the one to dictate how long it "lives". For example, if you have a process that requires your user to contact a customer and wait for an answer, the process could easily state that one month is the time to wait before sending a reminder (or reacting in any other way to the timer's expiration).
But we also have to differentiate between the "time to live / life cycle of the real-life process" conceptualized in the BPMN file and the "time to live / life cycle of the process in your Camunda engine" (for lack of a better term).
Each instance of a process in Camunda has a unique identifier. You do not have to keep the in-memory instance of the process alive until it is completed; you can instead load it every time an event is sent to the unique ID of a process instance, handle the event/command, and stop the instance (not the lifecycle of the process) once the event/command has been handled.
The only time I worked with Camunda, that is what we did. Basically, we would send to the Camunda API the name of the BPMN file, the ID of the process instance we had previously started, and all the pertinent information needed to handle the event/command that would affect the process (including process variables).
This way, when an event/command is successfully handled by the Camunda API, you can store all the process variables in the "return message" after it has been processed, and you never really lose process variables, since you always "reload" them from the latest state of the process (i.e. the response you got the last time you sent an event to that specific process instance).
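For what it's worth, a minimal sketch of that pattern with the Camunda Java API could look like this (the message name "AddressChangedMessage" and the class around it are made-up placeholders): the engine reloads the dormant instance by its ID, applies the variables, runs it to the next wait state, and persists it again.

```java
import java.util.Map;
import org.camunda.bpm.engine.ProcessEngine;
import org.camunda.bpm.engine.ProcessEngines;
import org.camunda.bpm.engine.RuntimeService;

public class AddressEventForwarder {

    // Deliver an external event/command to a previously started process instance.
    public void forward(String processInstanceId, Map<String, Object> payload) {
        ProcessEngine engine = ProcessEngines.getDefaultProcessEngine();
        RuntimeService runtimeService = engine.getRuntimeService();

        // The engine loads the sleeping instance, hands it the payload as
        // process variables, advances it to the next wait state and persists it.
        runtimeService.createMessageCorrelation("AddressChangedMessage")
                .processInstanceId(processInstanceId)
                .setVariables(payload)
                .correlate();
    }
}
```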
Hopefully, I'm being clear?

What are some use cases for the Application Log (BC-SRV-BAL)

Hello fellow developers,
I recently stumbled upon the Application Log and find it to be quite handy. Now I am wondering, from a best-practice perspective: what are some use cases for using the Application Log vs. normal messages / class-based exceptions?
Normally the application log is used when the end user does not need to be informed of the information directly. The application log complements normal messages and class-based exceptions, but does not completely replace them.
Imagine a situation where there is an issue with data during background processing. If a developer wants to see what data was being processed (after it has been processed), that is difficult. A developer can therefore write some data to the application log wherever his gut says there is a possibility of failure.
Normally, this application logging is controlled by user parameters, as is the granularity of the data that is stored in the application log.
Hope this helps.
The application log comes in handy to:
- Store messages. Interactive messages and exceptions are lost after the user clicks them away. The application log stores that information for longer periods of time.
- Log background processes. These have no direct means to inform a user because there is no user, only some other process that triggered the batch.
- Provide additional details. Interactive messages are usually minimized so as not to spam the user with too many popups. The application log can provide additional aspects and side information to accompany the main result.
- Log "undercurrents". If a reuse component is unsure what level of detail its consumer wants, it can write an application log with a high level of detail that the consumer can later consume or not, as desired.
It is not appropriate when:
- You want to process the logged details in an automatic way. Application logs are for display to the end user. Application processing should store or hand over data in a more appropriate format.
- You need to process vast amounts of data. Writing the application log is fast, but it takes time for the database roundtrips, such that large numbers of records can slow down the actual application too much.
- You need to store sensitive data. Application logs are secured with authorization checks, but they may still not be the appropriate place for really sensitive information.

Understanding Eventual Consistency, BacklogItem and Tasks example from Vaughn Vernon

I'm struggling to understand how to implement eventual consistency with the BacklogItem and Task example from Vaughn Vernon. The statement I've understood so far is (considering the case where he splits BacklogItem and Task into separate aggregate roots):
A BacklogItem can contain one or more tasks. When the remaining hours of all the tasks of a BacklogItem are 0, the status of the BacklogItem should change to "DONE".
I'm aware of the rule that says you should not update two aggregate roots in the same transaction, and that you should accomplish this with eventual consistency.
Once a Domain Service updates the number of hours of a Task, a TaskRemainingHoursUpdated event should be published to a DomainEventPublisher which lives in the same thread as the executing code. And here is where I'm at a loss, with the following questions:
I suppose that there should be a subscriber (also living in the same thread, I guess) that should react to TaskRemainingHoursUpdated events. At which point in your Desktop/Web application do you perform this subscription to the Bus? At the very initialization of your app? In the application code? Is there any reasoning for placing domain subscribers in a specific place?
Should that subscriber (in the same thread) call a BacklogItem repository and perform the update? (But that would be a violation of the rule of not updating two aggregates in the same transaction, since this would happen synchronously, right?)
If you want to achieve eventual consistency to fulfil the previously mentioned rule, do I really need a Message Broker like RabbitMQ even though both BacklogItem and Task live inside the same Bounded Context?
If I use this message broker, should I have a background thread or something that just consumes events from a RabbitMQ queue and then dispatches the event to update the product?
I'd appreciate it if someone could shed some clear light on this, since it is quite complex to picture in its entirety.
So to start with, you need to recognize that, if the BacklogItem is the authority for whether or not it is "Done", then it needs to have all of the information to compute that for itself.
So somewhere within the BacklogItem is data that is tracking which Tasks it knows about, and the known state of those tasks. In other words, the BacklogItem has a stale copy of information about the task.
That's the "eventually consistent" bit; we're trying to arrange the system so that the cached copy of the data in the BacklogItem boundary includes the new changes to the task state.
That in turn means we need to send a command to the BacklogItem advising it of the changes to the task.
From the point of view of the backlog item, we don't really care where the command comes from. We could, for example, make it a manual process "After you complete the task, click this button here to inform the backlog item".
But for the sanity of our users, we're more likely to arrange an event handler to be running: when you see the output from the task, forward it to the corresponding backlog item.
At which point in your Desktop/Web application do you perform this subscription to the Bus? At the very initialization of your app?
That seems pretty reasonable.
Should that subscriber (in the same thread) call a BacklogItem repository and perform the update? (But that would be a violation of the rule of not updating two aggregates in the same transaction, since this would happen synchronously, right?)
Same thread and same transaction are not necessarily coincident. It can all be coordinated in the same thread; but it probably makes more sense to let the consequences happen in the background. At their core, events and commands are just messages - write the message, put it into an inbox, and let the next thread worry about processing.
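A minimal sketch of that "write the message, put it into an inbox" idea in plain Java (the event, repository and aggregate types are illustrative assumptions, not Vernon's actual API):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Illustrative event carrying just enough data for the backlog item to react.
record TaskRemainingHoursUpdated(String backlogItemId, String taskId, int remainingHours) {}

// Minimal assumed ports so the sketch is self-contained.
interface BacklogItem { void recordTaskHours(String taskId, int remainingHours); }
interface BacklogItemRepository {
    BacklogItem findById(String id);
    void save(BacklogItem item);
}

class BacklogItemEventProcessor implements Runnable {

    // The "inbox": the task-side transaction only appends a message here.
    private final BlockingQueue<TaskRemainingHoursUpdated> inbox = new LinkedBlockingQueue<>();
    private final BacklogItemRepository repository;

    BacklogItemEventProcessor(BacklogItemRepository repository) {
        this.repository = repository;
    }

    // Called by the subscriber registered at application startup.
    void enqueue(TaskRemainingHoursUpdated event) {
        inbox.add(event);
    }

    @Override
    public void run() {
        // A separate thread (and a separate transaction) updates the other aggregate later.
        while (!Thread.currentThread().isInterrupted()) {
            try {
                TaskRemainingHoursUpdated event = inbox.take();
                BacklogItem item = repository.findById(event.backlogItemId());
                item.recordTaskHours(event.taskId(), event.remainingHours()); // may flip status to DONE
                repository.save(item);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }
}
```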
If you want to achieve eventual consistency to fulfil the previously mentioned rule, do I really need a Message Broker like RabbitMQ even though both BacklogItem and Task live inside the same Bounded Context?
No; the mechanics of the plumbing matter not at all.

Long-running workflow in ASP.NET MVC

I'm developing an intranet site using ASP.NET MVC 4 to manage some of our data. One important feature of this site is to trigger import/export jobs. These jobs can take anywhere from 5 minutes to 1 hour. Users of the site need to be able to determine whether a job is currently running as well as the status of prior jobs. Many jobs will often include warning messages concerning duplicate data, and these warnings need to be visible on the site.
My plan is to implement these long running processes as a WCF Workflow Service that the asp.net site will interact with. I've got much of the business logic implemented via activities and have tested it using a simple console application. I should note I'm using a correlation handle in order to partition the service based on specific "Projects" on the site.
My problem is how to query the status of an active job (if one exists) as well as the warning messages of previous jobs. I suspect the best way to do this would be to use the AppFabric tracking service and have my ASP.NET site query a SQL monitoring store and report back on the current status. After setting up AppFabric and adding custom tracking messages, I ran into a few issues. My first issue is that I cannot figure out how to filter out workflow instances that were not using the correct correlation handle, as I'd like to show only workflows for a specific project. The other issue is that the tracking database can be delayed quite a bit, which causes issues when I try to determine whether a workflow is currently running.
Another possible solution could be to have the workflow explicitly update a database with its current status and any error messages. I'm leaning towards this solution but could use some expert advice.
TL;DR: I need to know the best way to query the execution status and any warning messages of a WCF Workflow service.
As you want to query workflow status and messages even after the workflow is finished, I would start by creating a table where you can convert the correlation values a client sends to the related workflow ID. I would create a custom activity to do that and drop it right after the Receive that creates the workflow.
Next I would create a regular WCF service the client app uses to query the status. This WCF service can query the WF persistence store to see if a given workflow is still running. If so, the active bookmarks column will tell you what SOAP messages the workflow is currently waiting for.
As far as messages go, you can either use the AppFabric tracking infrastructure to store and retrieve them, or you could create a custom activity and store them in your own database. It really depends on whether you are also interested in the standard WF tracking messages that are generated.
Update on checking for running workflow instances:
There are several downsides to adding an IsRunning message to your workflow. For one, you would need to make sure one branch keeps looping and waiting for the message but stops as soon as the other, real workflow branch is done. Certainly possible, but it complicates the workflow and is a possible source of errors. And as it is not part of the business problem, it really has no place in the workflow as far as I am concerned. It also means that you will have to load a workflow from disk and persist it back just to tell you that it is there. If it has finished, you will need to wait for a fault to indicate there was no workflow instance, and that usually means you get a timeout exception after, by default, 60 seconds. Add throttling to that and your request might be queued because there are too many other workflow instances or SOAP requests being processed. So a timeout might mean that a workflow instance exists but is unreachable due to system constraints.
Instead I would opt for the simple thing and check whether the record in the instance store is still available. The additional info from the active bookmarks column will tell you what the workflow is waiting on, information I have used in the past to dynamically update the UI by enabling/disabling UI elements.

Grails test JMS messaging

I've got a JMS messaging system implemented with two queues. One is used as a standard queue; the second is an error queue.
This system was implemented to handle database concurrency in my application. Basically, there are users and users have assets. One user can interact with another user, and as a result of this interaction their assets can change. A user can interact with only a single user at a time, so they cannot start another interaction before the first one finishes. However, a user can be involved in interactions with other users multiple times [as long as those other users started the interaction].
What I did was: created an "interaction registry" in redis, where I store the IDs of users who begin an interaction. During an interaction I gather all the changes that should be made to the second user's assets, and after the interaction is finished I send those changes to the queue [the user who started the interaction is saved within the original transaction]. After the interaction is finished I clear the ID from the registry in redis.
The listener on my queue will receive a message with information about the changes that need to be made to the user. The listener will fetch from the database all objects that require a change and update them. Before each update, the listener will check whether there is an interaction started by the user being updated. If there is, the listener will roll back the transaction and put the message back on the queue. However, if something else is wrong, the message will be put onto the error queue and retried several times before it is logged and marked as failed. Phew.
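For clarity, the listener described above boils down to roughly this sketch (InteractionRegistry, AssetChangeService and UserInteractingException are stand-ins for my own components; the Grails/Spring wiring and transaction setup are omitted):

```java
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;

// Stand-ins for the application's own components.
interface InteractionRegistry { boolean isInteracting(String userId); }       // e.g. backed by redis
interface AssetChangeService { void apply(String userId, Message changes); }  // performs the DB updates
class UserInteractingException extends RuntimeException {
    UserInteractingException(String userId) { super("User " + userId + " is in an interaction"); }
}

public class AssetChangeListener implements MessageListener {

    private final InteractionRegistry registry;
    private final AssetChangeService assetChanges;

    public AssetChangeListener(InteractionRegistry registry, AssetChangeService assetChanges) {
        this.registry = registry;
        this.assetChanges = assetChanges;
    }

    @Override
    public void onMessage(Message message) {
        try {
            String userId = message.getStringProperty("targetUserId");
            // If the target user is currently interacting, abort so the transaction
            // rolls back and the message goes back on the queue for a later retry.
            if (registry.isInteracting(userId)) {
                throw new UserInteractingException(userId);
            }
            // Any other failure here (e.g. an optimistic locking exception) ends up
            // on the error queue after the configured number of redeliveries.
            assetChanges.apply(userId, message);
        } catch (JMSException e) {
            throw new RuntimeException(e);
        }
    }
}
```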
Now I'm at the point where I need to create a proper integration test, so that I make sure no future changes will screw this up.
Positive testing is easy; unfortunately, I have to test scenarios where, during updates, there's an OptimisticLockFailureException, my own UserInteractingException, and some other exceptions [catch (Exception e), that is].
I can simulate my UserInteractingException by creating a payload with hundreds of objects to be updated by the listener and changing one of them in the test. Same thing with OptimisticLockFailureException. But I have no idea how to simulate something else [I can't even think of what it could be].
Also, this testing scenario relies on a fluke [well, the chance that the presented scenario will not trigger an error is very low], and that is not something I like. I would like to have something more concrete.
Is there any other, good way to test these scenarios?
Thanks
I did as I described in the original question and it seems to work fine.
Any delays I can test with Camel.