Modelling multiple instances of Camunda workflows - BPMN

I have the following scenario:
I have to create multiple instances (the count is determined by a call to the DB) of 5 different business processes, which are modelled as separate workflows.
For illustration, I'll name them:
Workflow 1
Workflow 2
Workflow 3
Workflow 4
Workflow 5
I will have to start multiple instances of the above workflows based on some data in the database.
I will also need a parent workflow (to manage creating the above workflows) which will essentially do the steps below.
Note: This workflow should never die unless stopped externally. I want this workflow to act as a scheduler that creates instances of the other workflows. It will be started when the container starts.
Step 1: Read data from the database using a REST API in a service task.
The data from Step 1 will tell us the following:
Workflow 1 -> create 5 instances
Workflow 2 -> create 2 instances
Workflow 3 -> create 1 instance
Workflow 4 -> nothing yet to create
Workflow 5 -> nothing yet to create
Note: We have some thresholds set which ensure that this process does not create too many process instances.
Step 2: In the next service task, I try to start these process instances using the Java API of the RuntimeService:
runtimeService.startProcessInstanceByKey("workflow1"); // 5 times
runtimeService.startProcessInstanceByKey("workflow2"); // 2 times
runtimeService.startProcessInstanceByKey("workflow3"); // 1 time
I am not starting workflow 4 and workflow 5, as there is no need in this iteration.
I call these methods the appropriate number of times based on the data from Step 1.
I am expecting all these process instances to be started asynchronously.
The purpose of this workflow is only to start the other workflows.
Step 3: After I have finished starting all the process instances for workflow 1 to workflow 5, I do some process cleanup and send the flow back to Step 1.
This keeps going in a loop, performing the same steps again.
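The scheduling step (Step 2) can be sketched engine-agnostically. A minimal sketch, where the counts map stands for the result of Step 1 and WorkflowStarter is a made-up stand-in for the real runtimeService.startProcessInstanceByKey call:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class SchedulerSketch {

    // Hypothetical stand-in for runtimeService.startProcessInstanceByKey(key)
    interface WorkflowStarter {
        void start(String processKey);
    }

    // Step 2: start as many instances of each workflow as Step 1 requested
    static int startRequestedInstances(Map<String, Integer> counts, WorkflowStarter starter) {
        int started = 0;
        for (Map.Entry<String, Integer> e : counts.entrySet()) {
            for (int i = 0; i < e.getValue(); i++) {
                starter.start(e.getKey()); // in Camunda: runtimeService.startProcessInstanceByKey(...)
                started++;
            }
        }
        return started;
    }

    public static void main(String[] args) {
        Map<String, Integer> counts = new LinkedHashMap<>();
        counts.put("workflow1", 5);
        counts.put("workflow2", 2);
        counts.put("workflow3", 1); // workflow4/workflow5: nothing this iteration
        int n = startRequestedInstances(counts, key -> System.out.println("starting " + key));
        System.out.println(n); // 8 instances started
    }
}
```

One thing worth checking in a setup like this: Camunda executes everything up to the next wait state or explicit asynchronous continuation inside a single transaction, so child instances started this way typically only become runnable once that transaction commits - which may explain the observed behaviour below if the parent loop never reaches a transaction boundary.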
I observed that the workflows (workflow 1 to workflow 5) are not being triggered at all until the main workflow is stopped.
I have tried different mechanisms but have been unsuccessful in achieving this use case.
What is the best approach to model this? I am not sure what has to be done to achieve it. Can someone please help?
I am using the Camunda Spring Boot starter.
I have attached the master workflow, which has 3 service tasks:
Get Data (explained earlier)
Schedule Workflow (start child workflows)
Cleanup

From the answer to my question on the Camunda forum:
In BPMN, you can mark an activity as multi-instance, meaning that it is executed multiple times either based on static configuration or based on a dynamic condition or collection variable. See https://camunda.org/bpmn/reference/#activities-task (scroll to Multiple Instance) for an introduction on this and https://docs.camunda.org/manual/7.7/reference/bpmn20/tasks/task-markers/#multiple-instance for the Camunda implementation reference. In Camunda modeler, the multi-instance marker can be toggled via the context menu of an activity.
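In the BPMN 2.0 XML, that marker corresponds to a multiInstanceLoopCharacteristics element on the activity. A rough illustration (the delegate expression and the instancesToCreate variable are made-up names; the cardinality expression would be filled from the data read in the "Get Data" task):

```xml
<bpmn:serviceTask id="startWorkflow1" name="Start workflow 1"
                  camunda:delegateExpression="${startWorkflowDelegate}">
  <bpmn:multiInstanceLoopCharacteristics isSequential="false">
    <!-- evaluated per process instance at runtime -->
    <bpmn:loopCardinality xsi:type="bpmn:tFormalExpression">${instancesToCreate}</bpmn:loopCardinality>
  </bpmn:multiInstanceLoopCharacteristics>
</bpmn:serviceTask>
```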

Related

Sync Camunda workflow states across multiple bpmn files

We are working on a problem wherein we have multiple Camunda workflows running at the same time. All of them have the same context.
E.g., each of them has the same user tasks, service tasks and process variables; only the ordering can differ a bit. Some workflows can have these tasks in parallel, others can have them sequential.
Context: Each workflow (BPMN file) represents a journey via a different medium. One can be for the journey via PC, another for the journey via mobile.
We need the workflow states to be in sync throughout the life of the application. The business data is decoupled from Camunda and is in a centralised place, so it always remains in sync. But we want to sync the Camunda process variables, task-completion states and all such workflow parameters.
If a person completes step A, it should be reflected in the other workflows as well, in near real time.
Is there a suggested design/way to handle this?
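One common shape for this kind of fan-out - a sketch only, not from the question - is a small broadcast bus keyed by the shared business entity; in Camunda the publish side would typically be a TaskListener on task completion, and the apply side a message correlation or setVariable call on the sibling instances:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.BiConsumer;

public class JourneySync {

    // Subscribers are the sibling workflow instances (PC journey, mobile journey, ...).
    // In Camunda, applyCompletion would correlate a message or complete the matching task.
    private final Map<String, List<BiConsumer<String, String>>> subscribers = new HashMap<>();

    public void subscribe(String businessKey, BiConsumer<String, String> applyCompletion) {
        subscribers.computeIfAbsent(businessKey, k -> new ArrayList<>()).add(applyCompletion);
    }

    // Called from a task-completion listener in whichever workflow finished the step first.
    public void stepCompleted(String businessKey, String stepId, String sourceWorkflow) {
        for (BiConsumer<String, String> s : subscribers.getOrDefault(businessKey, List.of())) {
            s.accept(stepId, sourceWorkflow); // near-real-time fan-out to the other journeys
        }
    }
}
```

Each subscriber should skip events that originated from its own workflow and apply completions idempotently, since the same step can race in from two mediums.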

Locking an SQL Table and waiting for lock to be released - Java

I want to execute a function which takes a lock on a table (or row?), checks whether a record (with specific criteria) exists, and creates the record if it doesn't exist.
I want to use it in my microservice architecture to handle concurrent requests trying to modify the same data.
How can I achieve this?
EDIT 1: Explaining my goal in detail:
Step 1: Microservice instance A locks the table and executes the procedure/transaction.
Step 2: Inside the procedure/transaction, check whether the record exists; if yes, return true and release the lock; if not, create the record and release the lock.
Step 3: While step 2 is being performed, another microservice instance B tries to access the table (i.e. a concurrent request), but since it is locked, instance B will wait until instance A releases the lock.
Step 4: After instance A releases the lock, instance B proceeds with its own step 2.
There could be 5-7 concurrent requests.
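Across microservice instances, the atomicity in steps 1-4 has to come from the database itself (e.g. SELECT ... FOR UPDATE inside a transaction, or an INSERT guarded by a unique constraint). The contract being asked for - check-then-create as one indivisible step - can be illustrated in-process with an atomic put-if-absent; this is only an analogy for the behaviour the DB lock must provide, not a substitute for it:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class CreateIfAbsent {

    private final ConcurrentMap<String, String> records = new ConcurrentHashMap<>();

    // Returns true if the record already existed, false if this call created it.
    // The whole check-then-create is a single atomic step, mirroring what the
    // row lock (or a unique-constraint-guarded INSERT) must guarantee across instances.
    public boolean existsOrCreate(String key, String record) {
        return records.putIfAbsent(key, record) != null;
    }
}
```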
Sorry if I am missing something about the goal you want to achieve, but instead of trying to perform this kind of distributed locking, an alternative solution may be to use some type of message broker to which the different microservices publish messages, with a single consumer that handles them while taking possible duplicates into account. Admittedly with certain restrictions, but services like Kafka or RabbitMQ, for example, will guarantee message ordering.
You can make the publishers of this first write queue the consumers of another queue, in which the aforementioned single consumer publishes the results of the operations performed against the backend, if your microservices need this feedback.
I think this setup will highly decouple your different services without the risk of the suggested locks.
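Assuming at-least-once delivery from the broker, the single consumer in this design has to tolerate redelivered messages. A minimal sketch of such idempotent handling (the message IDs and handler are illustrative; in production the seen-ID set would live in a durable store):

```java
import java.util.HashSet;
import java.util.Set;
import java.util.function.Consumer;

public class IdempotentConsumer {

    private final Set<String> processed = new HashSet<>(); // in production: a durable store
    private final Consumer<String> handler;

    public IdempotentConsumer(Consumer<String> handler) {
        this.handler = handler;
    }

    // Returns true if the message was handled, false if it was a duplicate.
    public boolean onMessage(String messageId, String payload) {
        if (!processed.add(messageId)) {
            return false;            // already seen: broker redelivery, skip
        }
        handler.accept(payload);     // e.g. create the record in the backend
        return true;
    }
}
```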

Is there a way to get the parent execution given an execution id in Camunda?

I'm wondering if there is a way to get the parent execution of an execution in Camunda. What I'm trying to achieve is basically the following:
This is a simple process involving a parallel gateway. Each of the flows is composed of a service task (external) and a user task.
In each "Pre: Task X" service task, I want to set some variables that I will use afterwards in the respective user task. I want each execution flow of the parallel gateway to have its own variables, not accessible from the other flows. How can I achieve this?
I was doing some tests and found the following:
When the process is instantiated, I instantly get 5 execution instances.
What I understand is that one belongs to the process, the next two belong to the two flows of the parallel gateway, and the last two belong to the two service tasks.
If I call "complete" for one of the service tasks on the REST API with localVariables, they instantly disappear and are no longer available, because they are tied to the execution associated with the external task, which is terminated when the task completes.
Is there a way to get the parent execution of the task, which in this case would be the parallel execution flow, so that I can set localVariables at that level?
Thanks in advance for the valuable help.
Regards
First of all, 5 executions doesn't mean they are all active. In your case there should be only 2 active executions when you start a new instance of the process. You can set your variables in the respective executions as return values of the respective service tasks.
You can set variables on the process instance, but bear in mind that you have 2 executions and 1 process instance: you cannot set the same variable for multiple executions.
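One pragmatic workaround, given that the same variable cannot be set for multiple executions, is to namespace each branch's variables so they can safely live at process-instance scope without colliding. A sketch only - the branch IDs and names are illustrative, and in Camunda the resulting map would be what gets passed back when completing the external task:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class BranchScopedVariables {

    // Prefix each variable with its branch id so parallel branches don't
    // overwrite each other when their variables land in process-instance scope.
    static Map<String, Object> namespaced(String branchId, Map<String, Object> vars) {
        Map<String, Object> out = new LinkedHashMap<>();
        vars.forEach((name, value) -> out.put(branchId + "_" + name, value));
        return out;
    }
}
```

The matching user task in each branch then reads only its own prefixed names, which approximates the per-flow isolation asked for in the question.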

How can I get progress info from a java ee 7 batch

I am working with Java EE 7 Batch and - from a different thread of execution - would like to retrieve info about the progress of the job. I know I can always retrieve the JobExecution, but I would like more detailed info, such as how many records have been processed.
Is there a way to feed this information back into the batch framework during processing so that I could retrieve it in the other thread? As far as I've seen, I cannot update the job properties during processing.
Thanks, regards,
gufi.
The Batch API does not mandate any progress monitoring, but it is really easy to roll your own - have a look at batch listeners. That way you have access to all phases of batch execution (read, process, write), and you can plug in some custom reporting - e.g. update the job values in a DB, or send them via JMS to some listener, or a websocket, or anything you like.
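The listener approach can feed a small shared registry that the monitoring thread polls. The registry below is plain Java; the caller on the batch side would be a javax.batch listener (e.g. ItemProcessListener.afterProcess or ChunkListener.afterChunk) using JobContext.getExecutionId() as the key - class and wiring names here are a sketch, not a prescribed API:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicLong;

public class ProgressRegistry {

    private static final ConcurrentMap<Long, AtomicLong> PROCESSED = new ConcurrentHashMap<>();

    // Called from inside the job, e.g. from a batch listener after each item or chunk.
    public static void recordProcessed(long jobExecutionId, long count) {
        PROCESSED.computeIfAbsent(jobExecutionId, id -> new AtomicLong()).addAndGet(count);
    }

    // Called from the monitoring thread, alongside JobOperator.getJobExecution(...).
    public static long processedSoFar(long jobExecutionId) {
        AtomicLong n = PROCESSED.get(jobExecutionId);
        return n == null ? 0L : n.get();
    }
}
```

Persisting the counter to a DB instead of a static map, as the answer suggests, also survives restarts and works across JVMs.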

Implementing GoTo in WF 4

Given a SQL Server-persisted .NET 4 Windows Workflow Foundation (WF) workflow service deployed under AppFabric, how can I "jump" the service from one activity to another? The workflow could be sequential or flowchart.
The use case is administrative. A long-running workflow is idle at Receive activity A. Some client mistakenly calls the service, progressing it to Receive activity B. The workflow (which could be embedded in a larger workflow) has no path back to A. The client calls the support desk and requests that the workflow be set back to A.
We've seen this case occur frequently in production. Our existing BPM system supports a "goto" call. How can this be accomplished in WF 4?
EDIT: If the above is not practical, what is a good design pattern for implementing a "fail" activity off from the "happy path" that can branch to one of a limited number of known prior activities (restart from here) based on a variable? The goal is to avoid creating an unreadable workflow with a multitude of lines.
EDIT 2: We decided not to go this route, but there's a newer MSDN article on doing just this.
EDIT 3: We changed our minds again and are going with Leon Welicki's solution from the MSDN article linked above. :)
This can't be done out of the box.
If it can be done at all, it would mean opening up the workflow state, which is stored in 4 binary columns, and changing it back to the previous state - knowing that any number of activities could have executed and any variables could have been changed, or even dropped because they are no longer in scope.
Suppose I was going to try this: I would try copying the state from the SQL database every time a workflow went idle, so that you get a sort of stack of all previous idle states of a workflow. Then, at some later time when the workflow is idle and not in memory, you can replace the current state with a previous state and reload the workflow. I have never tried it, so I don't know whether it will work, and I see quite a few potential problems - things like DB transactions having completed, or emails having been sent and being executed a second time.