Camunda: how to model a task that can be cancelled?

I want to model a process that can be initiated by receipt of a message (which will be done via a REST call). The process will lead to a task that is assigned to a user. The user will supply some extra information and then the process will terminate.
However, I also want to model the case when additional information is received after the first info has been received. Receipt of this extra information via REST should terminate the process.
This overall model represents a computer system that monitors a flow of information and, if it detects a problem, creates a task for someone to investigate. However, if further information becomes available, the task should be terminated.
What is the best way of modelling this in BPMN and Camunda please?
What I have at the moment:
(MSE) --> (UT) --> (TEE)
(RT) --> (TEE)
Where:
MSE = Message Start Event
UT = User Task
TEE = Terminate End Event
RT = Receive Task
I can successfully start a process instance by using curl to post a message representing the start message. This creates a process instance and the task is allocated to a user.
However, I can't seem to get the receive task to correlate with the running process instance; posting the message just seems to start a new one. The cancel message that the receive task represents should cancel the specific process instance it belongs to, not an arbitrary one.

There are different ways to model this.
You could use an interrupting message boundary event on the user task: if the extra information is received, the boundary event cancels the user task.
Another approach would be to use an interrupting event subprocess: if the message with the extra information is received, the event subprocess is triggered and cancels the process.
You could also use a parallel gateway and a terminate end event, but I would recommend one of the methods mentioned above.
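Regarding the correlation problem: when posting the cancel message you have to correlate it with the specific process instance, for example via a business key. A minimal sketch using Camunda's Java API (the message names and the business key are illustrative assumptions; the REST API's message endpoint accepts a businessKey field for the same purpose):

import org.camunda.bpm.engine.RuntimeService;

public class MonitoringCaseService {

    private final RuntimeService runtimeService;

    public MonitoringCaseService(RuntimeService runtimeService) {
        this.runtimeService = runtimeService;
    }

    public void openCase(String caseId) {
        // Start the process via its message start event, tagging the
        // instance with a business key that identifies this case.
        runtimeService.startProcessInstanceByMessage("ProblemDetectedMessage", caseId);
    }

    public void cancelCase(String caseId) {
        // Correlate the cancel message with exactly that instance.
        runtimeService.createMessageCorrelation("ExtraInfoReceivedMessage")
                .processInstanceBusinessKey(caseId)
                .correlate();
    }
}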

Related

Being able to cancel a process until it is finished

I would like to ask how it is possible to model the following situation in BPMN:
Users can submit a request, which they can cancel at any time until the request is solved. Once the request is solved, it cannot be cancelled. So if the user cancels the request before it is processed, the process ends without further processing; as long as the request has no result, it can still be cancelled.
For example, until a research paper is published, it can be discarded by its author.
I have modelled another example of a BPMN process, which outlines the problem.
Thanks a lot!
This can be modelled using an interrupting message boundary event. The boundary event catches the cancel message from the user and triggers the termination of the process.

AXON framework synchronous response

I am new to the Axon framework and am using it for our development. We have a requirement where a command (command side) is created for persisting data, and for it an event is triggered which is consumed on the query side. Now we need a response from the query side back to the command side saying whether the record was persisted successfully (a custom success message) or, if it failed, the reason for the failure (a custom exception message as the response). Kindly help if there is any way to achieve such a scenario.
Here the command side and query side are two different microservices and we are using RabbitMQ for the event-driven communication.
Thanks in advance
I think what you are asking is whether there is a way for the command and the event to be processed in a single transaction?
If you use a subscribing event processor, running in the same JVM, the event is processed synchronously and the whole transaction is rolled back in case of an exception in an event handler. This is not the case here, because you have loosely coupled separate services, which is good.
It's best practice for the aggregate with the command handler to have all the information available to decide whether or not the command can successfully be processed. When an event is applied, that is a signal that it has happened, and the other services (the query side in this case) have to be informed. It's not good practice for a query module to overrule this ("you say it happened, I say it didn't"). If there is an error on the query side, you fix it and replay the event.
If it really is an error in the event handler that the whole system must know about, that is really a separate event. You can publish such an event directly on the event bus and notify the whole system. Something like this:
// Axon 3/4 packages (assumed)
import org.axonframework.eventhandling.EventBus;
import org.axonframework.eventhandling.GenericEventMessage;

@Autowired
private EventBus eventBus;

// ...

CatastrophicFailureEvent failureEvent = new CatastrophicFailureEvent("OH NO!");
eventBus.publish(GenericEventMessage.asEventMessage(failureEvent));
I think you might need to reconsider your architecture. Keep in mind that events should encapsulate the irreversible state changes of your system. These state changes should not be questioned after they have happened. Your query side should only need to care about projecting these valid state changes that your command side has decided on.
If you need to check whether a user already exists, you need to do this on the command side in your aggregate. The aggregate can keep track of all the existing usernames and throw an exception if an invalid command is given. The command response (tip: the sendAndWait() method on the CommandGateway returns a response) can then be used to inform your user about the success or failure of the action.
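A hedged sketch of such a command-side check (Axon 4 annotations; CreateUserCommand, UserCreatedEvent and the injected ExistingUsernames lookup are illustrative assumptions, not from the original post):

import org.axonframework.commandhandling.CommandHandler;
import org.axonframework.eventsourcing.EventSourcingHandler;
import org.axonframework.modelling.command.AggregateIdentifier;
import org.axonframework.modelling.command.AggregateLifecycle;

public class UserAggregate {

    @AggregateIdentifier
    private String userId;

    protected UserAggregate() { } // required by Axon

    @CommandHandler
    public UserAggregate(CreateUserCommand cmd, ExistingUsernames usernames) {
        // Decide on the command side, before applying any event.
        // ExistingUsernames is a hypothetical lookup, resolvable as an
        // extra handler parameter when Axon's Spring integration is used.
        if (usernames.contains(cmd.getUsername())) {
            throw new IllegalStateException("Username already taken: " + cmd.getUsername());
        }
        AggregateLifecycle.apply(new UserCreatedEvent(cmd.getUserId(), cmd.getUsername()));
    }

    @EventSourcingHandler
    public void on(UserCreatedEvent event) {
        this.userId = event.getUserId();
    }
}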
The following flow might solve your problem, but keep in mind that the user will get a callback on the success of the action even though the query side might not have processed its result yet. This part is eventually consistent.
Command Side:
A request from the frontend is handled by a Controller class, which creates a corresponding command.
The command is dispatched and handled by a command handler, which applies the corresponding event or throws an exception if the user already exists.
The invoker of the command is informed about the success of the command, or the exception is handled and the error shown to the user.
The event is published through the RabbitMQ event bus if the command was successful.
Query Side:
The event published in step 4 is consumed by the event handler on the query side. No checks or validations should be necessary, since they were already handled on the command side.
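A minimal sketch of the controller part of that flow (the command, view and controller names are illustrative assumptions, not from the original post):

import org.axonframework.commandhandling.gateway.CommandGateway;

public class UserController {

    private final CommandGateway commandGateway;

    public UserController(CommandGateway commandGateway) {
        this.commandGateway = commandGateway;
    }

    public String createUser(String userId, String username) {
        try {
            // sendAndWait blocks until the command handler has decided;
            // the aggregate throws if the username already exists.
            commandGateway.sendAndWait(new CreateUserCommand(userId, username));
            return "User created"; // the event is now on its way to the query side
        } catch (Exception e) {
            // Depending on the command bus, this is the aggregate's exception
            // or a CommandExecutionException wrapping it.
            return "Failed: " + e.getMessage();
        }
    }
}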
@Mzzl
Series of activities
Command Side:
1. A request from the frontend is handled by a Controller class, which creates a corresponding command.
2. The command is dispatched and handled by a command handler, which in turn creates the corresponding event.
3. The event is then published through the RabbitMQ event bus.
Query Side:
4. The event published in step 3 is consumed by the event handler on the query side.
5. The event handler performs the DB transaction (let's assume adding a user). Once the user is added, a success message, or a failure message (let's assume the user already exists in the DB, so a duplicate entry could not be created), should flow from the query side back to the command side and eventually back to the UI as a response.
I'm not sure I've fully understood your issue (especially the microservice part :)), but if your problem is related to having the query side up to date after the command execution, then you can have a look at this project.
In this example, you can see that he uses a SubscriptionQueryResult in conjunction with a QueryUpdateEmitter (see here)
Basically you subscribe to query-side changes before the command is issued, and after the command execution you block until the query side sends a notification that it is up to date.
This way you can avoid the effects of eventual consistency.
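A condensed sketch of that pattern (Axon 4 subscription-query API; FindUserQuery, UserView, CreateUserCommand and the surrounding class are illustrative assumptions):

import java.time.Duration;
import org.axonframework.commandhandling.gateway.CommandGateway;
import org.axonframework.messaging.responsetypes.ResponseTypes;
import org.axonframework.queryhandling.QueryGateway;
import org.axonframework.queryhandling.SubscriptionQueryResult;

public class CreateUserService {

    private final CommandGateway commandGateway;
    private final QueryGateway queryGateway;

    public CreateUserService(CommandGateway commandGateway, QueryGateway queryGateway) {
        this.commandGateway = commandGateway;
        this.queryGateway = queryGateway;
    }

    public UserView createAndAwaitProjection(String userId, String username) {
        // Subscribe BEFORE sending the command so the update cannot be missed.
        SubscriptionQueryResult<UserView, UserView> subscription = queryGateway.subscriptionQuery(
                new FindUserQuery(userId),
                ResponseTypes.instanceOf(UserView.class),   // initial result type
                ResponseTypes.instanceOf(UserView.class));  // update type
        try {
            commandGateway.sendAndWait(new CreateUserCommand(userId, username));
            // Block until the query side's event handler signals (via its
            // QueryUpdateEmitter) that the projection is up to date.
            return subscription.updates().blockFirst(Duration.ofSeconds(5));
        } finally {
            subscription.close();
        }
    }
}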

Event sourcing combined with deletable data

I want to develop an Emailer microservice which provides a SendEmail command. Inside the microservice I have an aggregate in mind which represents the whole email process with the following events:
Aggregate Email:
(EmailCreated)
EmailDeliveryStarted
EmailDeliveryFailed
EmailRecipientDelivered when one of the recipients received the email
EmailRecipientDeliveryFailed when one of the recipients could not receive the email
etc.
In the background the email delivery service SendGrid is used; my microservice works like a facade for that with my own events. The incoming webhooks from SendGrid are translated to proper domain events.
The process would look like this:
Command SendEmail ==> EmailCreated
EmailCreatedHandler ==> Email.Send (to SendGrid)
Incoming webhook ==> EmailDeliveryStarted
Further webhooks ==> EmailRecipientDelivered, EmailRecipientDeliveryFailed, etc.
Of course, if I wanted to replace the external web service with one that applies other messaging strategies, I would adapt to that but keep my domain model with its events. I don't want the client to worry about the concrete email delivery strategy.
Now the crucial problem I face: I want to accept SendEmail commands even if SendGrid is not available at that very moment, which entails storing the whole email data (with attachments) and then, with an event handler, starting the sending process. On the other hand, I don't want to bloat my initial EmailCreated event with this BLOB data. And I want to be able to clean up this data after SendGrid has accepted my send request.
I could also try both sending the email to SendGrid and storing an initial EmailDeliveryStarted event in the SendEmail command handler. But this feels like a two-phase commit: if SendGrid accepted my call but somehow my repository was unable to store the EmailDeliveryStarted event, the client would be informed that something went wrong and would try again, which would be a disaster.
So I don't know how to design my aggregate and, more importantly, my EmailCreated event, since it should not contain BLOB data like attachments.
I found this question interesting and it took me a little while to reflect on it.
First things first: I see no obligation to store the email attachments in the event. You can simply store the fully qualified names of the attached files. That keeps the event log smaller and may rule out the need for "deleting" the event (and you know that, in an event-sourced model, you should not do that).
Secondly, assuming the project is not building an e-mail client, I don't see a need to model an e-mail as an aggregate root. Aggregate roots represent business-relevant domain concepts, not utility tasks like sending an e-mail. You could model this far more easily using a database table or document that keeps track of what has been sent and what hasn't. I see sending e-mails through SendGrid as a reaction to a business event, certainly something to be tracked, but not an aggregate root in its own right.
Lastly, if you want to accept SendEmail commands even when SendGrid is offline, the aggregate can emit an EmailQueued event (see the sketch below). The EmailQueuedHandler produces a line in the read model of a sender process that takes all the emails in the queued state and batches them for sending. If the communication with SendGrid fails, you can either:
Do nothing; the sender process will pick the email up at the next attempt.
Emit an EmailSendFailed event, intercepted by a handler that increases the retry count (if you want to stop after a number of retries).
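A minimal sketch of such an event, storing attachment references rather than the BLOBs themselves (the class and field names are assumptions for illustration):

import java.util.List;

// Hypothetical event: carries references to attachments, not their bytes.
// The BLOBs live in separate storage and can be cleaned up once SendGrid
// has accepted the send request, without touching the event log.
public final class EmailQueued {

    private final String emailId;
    private final List<String> attachmentUris; // e.g. "s3://mail-attachments/4711/report.pdf"

    public EmailQueued(String emailId, List<String> attachmentUris) {
        this.emailId = emailId;
        this.attachmentUris = attachmentUris;
    }

    public String getEmailId() { return emailId; }
    public List<String> getAttachmentUris() { return attachmentUris; }
}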
Hope that is sufficiently clear and best of luck with your project.

Why put activities between Receive and SendReply instead of after them in Workflow Service

Most of the examples that I've seen of Workflow Services put activities between the Receive and SendReply activities. However, if the activities take a long time to complete, the service times out. I could increase the timeout or put the activities after the SendReply. Is there a best practice on where to run these activities?
There is no need to keep all activities between Receive and SendReply. Activities placed after the SendReply will still be executed once the reply has been sent. For a long-running process, the SendReply can return a message to the client indicating that the service has started, or report any exception; the workflow keeps executing after the SendReply completes.
You can follow this approach:
1. Put a Receive activity as the first activity in the workflow.
2. Apply validation on the data contract used as the argument.
3. Add a code activity that sets the workflow instance ID in an out parameter, which can be returned as the response from the SendReply. This can be used to control the workflow later.
4. Add another SendReply by right-clicking the Receive activity, and return a response if any validation faults occur.
5. Put the rest of the activities below the SendReply and configure the service behavior for any unhandled exceptions.

Grails test JMS messaging

I've got a JMS messaging system implemented with two queues: one is used as a standard queue, the second is an error queue.
This system was implemented to handle database concurrency in my application. Basically, there are users and users have assets. One user can interact with another user, and as a result of this interaction their assets can change. A user can interact with only one user at a time, so they cannot start another interaction before the first one finishes. However, a user can be part of interactions with other users multiple times [as long as those users started the interaction].
What I did was create an "interaction registry" in Redis, where I store the IDs of users who begin an interaction. During the interaction I gather all changes that should be made to the second user's assets, and after the interaction is finished I send those changes to the queue [the user who started the interaction is saved within the original transaction]. After the interaction is finished I clear the ID from the registry in Redis.
The listener on my queue receives a message with information about the changes that need to be made to a user. The listener fetches all objects requiring a change from the database and updates them, checking before each update whether there is an interaction started by the user being updated. If there is, the listener rolls back the transaction and puts the message back on the queue. If something else goes wrong, the message is put on the error queue and retried several times before it is logged and marked as failed. Phew.
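For reference, a hedged sketch of that dispatch logic in plain Java with Spring's JmsTemplate (InteractionRegistry, AssetService, AssetChangeMessage and UserInteractingException are illustrative stand-ins for the poster's classes; Spring's real exception class is named OptimisticLockingFailureException):

import org.springframework.dao.OptimisticLockingFailureException;
import org.springframework.jms.core.JmsTemplate;

public class AssetChangeListener {

    private final InteractionRegistry interactionRegistry; // Redis-backed registry (assumed)
    private final AssetService assetService;               // applies the DB updates (assumed)
    private final JmsTemplate jmsTemplate;

    public AssetChangeListener(InteractionRegistry interactionRegistry,
                               AssetService assetService,
                               JmsTemplate jmsTemplate) {
        this.interactionRegistry = interactionRegistry;
        this.assetService = assetService;
        this.jmsTemplate = jmsTemplate;
    }

    public void onMessage(AssetChangeMessage message) {
        try {
            if (interactionRegistry.hasActiveInteraction(message.getUserId())) {
                // User is mid-interaction: abort so the surrounding
                // transaction rolls back, then requeue for a later attempt.
                throw new UserInteractingException(message.getUserId());
            }
            assetService.applyChanges(message); // may throw an optimistic locking failure
        } catch (UserInteractingException | OptimisticLockingFailureException e) {
            jmsTemplate.convertAndSend("assetChanges", message);
        } catch (Exception e) {
            // Anything else goes to the error queue for limited retries.
            jmsTemplate.convertAndSend("assetChanges.error", message);
        }
    }
}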
Now I'm at the point where I need to create a proper integration test, so that I make sure no future changes will screw this up.
Positive testing is easy; unfortunately I have to test scenarios where, during updates, there's an OptimisticLockFailureException, my own UserInteractingException, and some other exception [the catch (Exception e) case, that is].
I can simulate my UserInteractingException by creating a payload with hundreds of objects to be updated by the listener and changing one of them in the test. The same goes for OptimisticLockFailureException. But I have no idea how to simulate anything else [I can't even think of what it could be].
Also, this testing scenario is based on a fluke [well, the chance that the presented scenario will not trigger an error is very low], which is not something I like. I would like to have something more concrete.
Is there any other good way to test these scenarios?
Thanks
I did as I described in the original question and it seems to work fine.
Any delays I can test with Camel.