BPMN Model API to edit process diagram

I have a process diagram that directs flow based on threshold variables. For example, for variables x and y: if x < 50 I am directed to service task 1, if y < 40 to service task 2, and if x > 50 && y > 40 to some other task.
As intuition suggests, I am using comparison checks on the sequence flows to determine the next task.
x and y are input by the user, but 50 and 40 (let's call these numbers {n}) are part of the process definition (PD).
Now, for a fixed {n}, I have deployed the process diagram and it runs successfully.
What should I do if my {n} can vary across process instances? Is there a way to keep the same version of the process definition but have it take {n} dynamically?
I read about the BPMN Model API here, but I can't figure out how to use it to edit my PD dynamically. Do I need to redeploy the definition each time on Tomcat, or how does it work?

If you change a process model with the Model API, you have to redeploy it to actually use it. If you want a process definition with variable {n} values, you can instead use a process variable and set it when starting the process instance, via the Java API, the REST API, or the Tasklist.
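As a minimal sketch of the process-variable approach with the Camunda Java API (the process key "thresholdProcess" and the variable names n1, n2 are made up for the example); each sequence flow condition then compares against a variable, e.g. ${x < n1}, instead of a hard-coded number:

import org.camunda.bpm.engine.ProcessEngine;
import org.camunda.bpm.engine.ProcessEngines;
import org.camunda.bpm.engine.RuntimeService;
import org.camunda.bpm.engine.variable.VariableMap;
import org.camunda.bpm.engine.variable.Variables;

public class StartWithThresholds {
    public static void main(String[] args) {
        ProcessEngine engine = ProcessEngines.getDefaultProcessEngine();
        RuntimeService runtimeService = engine.getRuntimeService();

        // Thresholds are passed per instance instead of being baked
        // into the deployed process definition.
        VariableMap vars = Variables.createVariables()
                .putValue("n1", 50)   // threshold for x
                .putValue("n2", 40);  // threshold for y

        // Same deployed definition, different {n} per instance.
        runtimeService.startProcessInstanceByKey("thresholdProcess", vars);
    }
}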


AnyLogic nested agent RandomNumberGenerator

I have an agent that is nested in another agent. This nested agent has a function that calls the AnyLogic probability distribution functions (PDFs) such as gamma(), lognormal(), etc. However, I keep getting a NullPointerException when I call these PDFs inside the nested agent. I am realising this is because the nested agent cannot access the default random number generator. Is there a way I can access the defaultRandomNumberGenerator within the nested agent as well, or is the only solution to create a new generator for each nested agent?
The error is because your agent is outside the model hierarchy of agents.
This is not good practice; there should very rarely be a need to have 'floating' agents outside the model hierarchy; they can always be inside an agent population somewhere.
In the rare cases where there are strong design reasons to do so (or if you use plain Java classes, and thus have Java objects which by definition are not agents and are therefore outside the agent hierarchy), just give them a parameter (a field, in the case of a Java class) that points to some agent that is in the model hierarchy (typically their 'generator'). Then you can call all 'required-to-be-in-model-hierarchy' functions via that parameter; that is, you delegate all such calls to an agent instance which can call them.
For example, the nested agent type (let's say Thing) has a parameter agentRef of type Agent, set by whoever creates it:
Thing t = new Thing(this);
Then, within Thing, you use code such as agentRef.normal(1, 10).
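A sketch of the same delegation idea for a plain Java class (the class name Thing and the sample method are illustrative, and the import path assumes AnyLogic's engine library):

import com.anylogic.engine.Agent;

// Plain Java object outside the agent hierarchy; it delegates every
// random-number call to an agent that is inside the model hierarchy.
public class Thing {
    private final Agent agentRef;

    public Thing(Agent agentRef) {
        this.agentRef = agentRef;
    }

    public double sampleSomething() {
        // Delegated call; agentRef can reach the engine's default RNG.
        return agentRef.normal(1, 10);
    }
}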
Only agents that are connected to the engine in some way have access to the random number generator. And if your experiment is set to run Main - like the example below - then all agents that want to use the random number generator must be connected to Main in some way.
So if you do this, for example, it won't work and you get an NPE (NullPointerException).
If you do this, it will.
The best option is to just create your own random number generator:
lognormal(0.1, 0.1, 5, new Random(0));
(Just store the random number generator somewhere so that you can use it again and again; otherwise you will get the same number every time, since the same (new) Random object is used to get the number.)
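A minimal sketch of that reuse, using the four-argument lognormal(sigma, mean, min, generator) call from the line above (the variable names are illustrative):

import java.util.Random;

// Keep one seeded generator (e.g., as a field in Main) and reuse it;
// calling "new Random(0)" for every draw would repeat the same value.
Random rng = new Random(0);

double v1 = lognormal(0.1, 0.1, 5, rng);
double v2 = lognormal(0.1, 0.1, 5, rng); // a different draw, same generator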
This design is way better - see example here
Why do two flowcharts set up exactly the same end with different results every time the simulation is run even when I use a fixed seed?

Mule 4 batch - how to send On Complete phase response to HTTP listener?

I have a common scenario, but I am not able to figure out the solution with Mule 4 batch. In my flow I have an HTTP listener which invokes the flow; I then run a DB select and use a batch job to upsert the data into Salesforce.
By default, batch creates its stats in the On Complete phase, and my requirement is to send those exact stats as the response, but I am not able to access them outside of the batch job. I tried vars and attributes, and even tried VM publish (in which case the response does not go back to the listener).
Can someone please guide me on this? I'm attaching the flow design for reference.
flow design
Thanks.
You can't. Batch works in the background; your flow will be long gone before your batch is done.
My suggestion is that you (1) store the reporting data somewhere and (2) get to the data with another request.
Here's the documentation: https://docs.mulesoft.com/mule-runtime/4.2/batch-processing-concept
You can store the payload from the On Complete phase in an Object Store and retrieve it later to build your report. The payload in the On Complete phase is a Java object with the properties you would need to build your report (e.g. loadedRecords, failedRecords, etc.).
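A rough sketch of that idea, assuming the Object Store connector (os namespace) is in the project; the job name, store key, and the two properties pulled from the batch result are illustrative:

<batch:job jobName="sfUpsertBatch">
    <batch:process-records>
        <!-- batch steps doing the Salesforce upsert -->
    </batch:process-records>
    <batch:on-complete>
        <!-- copy the stats into a plain map so they store cleanly -->
        <os:store key="lastBatchStats">
            <os:value><![CDATA[#[{
                loaded: payload.loadedRecords,
                failed: payload.failedRecords
            }]]]></os:value>
        </os:store>
    </batch:on-complete>
</batch:job>

<!-- later, e.g. behind a separate HTTP listener -->
<os:retrieve key="lastBatchStats"/>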

Mule app - call the same API multiple times in parallel

In a Mule app (using Mule 4), I am trying to invoke a single API multiple times for an input array of Strings like this:
"input_arr": [
"val1", "val2", "val3"
]
All the invocations can run in parallel as they are independent, but I want to wait and collate the results once they all complete. Also, if one or more result in errors, I want to obtain that as well.
I tried a couple of different ways:
1. A simple foreach - not efficient, since it is sequential.
2. Batch - it is async, and the main flow does not wait.
What would be the best way to achieve this efficiently in Mulesoft?
If you are using Mule 4.2+, then parallel-foreach might achieve what you are looking for.
The Parallel For Each scope enables you to process a collection of messages by splitting the collection into parts that are simultaneously processed in separate routes within the scope of any limitation configured for concurrent-processing.
NOTE: However, because this feature is not available in the Anypoint Studio Mule Palette view, you must manually configure Parallel For Each scope in the XML.
Also there are some differences other than concurrency with the new scope, so make sure to read the documentation:
https://docs.mulesoft.com/mule-runtime/4.2/parallel-foreach-scope
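A minimal sketch of the XML (the flow name, HTTP request config and path are placeholders; if any of the routes fail, the scope raises a composite routing error that you can inspect in an error handler to collate the failures):

<flow name="fanOutFlow">
    <!-- one route per element of the input array, run concurrently;
         inside the scope, payload is the current array element -->
    <parallel-foreach collection="#[payload.input_arr]">
        <http:request method="GET"
                      config-ref="API_Config"
                      path="#['/api/items/' ++ payload]"/>
    </parallel-foreach>
    <!-- here the payload is the collated list of the route results -->
</flow>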
Probably the solution is to use the Parallel Foreach scope from Mule 4.2.

Integration and unit testing NiFi process groups

I have a few NiFi process groups which I want to run integration tests on before promoting to production. The issue is that I can't seem to find any documentation on how to do so.
Data Provenance seems like a promising tool to accomplish what I want; however, over the course of the flowfile's lifecycle, data is published to/from Kafka or the file system. As a result, the flowfile UUID changes, so I cannot query for it using the nifi-api.
Additionally, I know that NiFi offers a TestRunner library to run tests; however, this seems to be only for processors/processor groups generated via code and not the UI.
Does anyone know of a tool, framework, or pattern for integration and unit testing NiFi process groups? Ideally this would be a solution where you can programmatically compare the input/output of the processor/processor group without modifying the existing workflow.
With the introduction of the Apache NiFi Registry, we have seen users promote flows from a development/sandbox environment to a test/QE environment where there are existing "test harness" flows surrounding the "flow under test" so that they can send repeatable and deterministic (or an anonymized sample of real production data) through the flow and compare the results to an expected value.
As you point out, there is a TestRunner class and a whole testing framework provided for unit tests. While it can be difficult to manually translate a UI-constructed flow to the programmatic construction, you could also create something like a translator to accept a flow template or flow.xml.gz file and convert it into something processable by the test framework.
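For reference, the unit-test framework mentioned above (nifi-mock) looks roughly like this for a single processor; MyProcessor and its property/relationship constants are placeholders:

import org.apache.nifi.util.TestRunner;
import org.apache.nifi.util.TestRunners;

// Exercises one processor in isolation; testing a whole UI-built process
// group would need something like the template translator suggested above.
TestRunner runner = TestRunners.newTestRunner(new MyProcessor());
runner.setProperty(MyProcessor.SOME_PROPERTY, "some value");
runner.enqueue("csv row,12,another value,true".getBytes());
runner.run();
runner.assertAllFlowFilesTransferred(MyProcessor.REL_SUCCESS, 1);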
Maybe plumber will help you with flow testing.
We also wanted to test whole NiFi flows, not just single processor, so we created this library and decided to open-source it.
Simple example in Scala:
// read flow previously exported from NiFi
val template = TemplateDeserializer.deserialize(this.getClass.getClassLoader.getResourceAsStream("exported-flow.xml"))
val flow = NifiTemplateFlowFactory(template).create()
// enqueue some data to any processor
flow.enqueueByName("csv row,12,another value,true", "CsvParserProcessor")
// run entire flow once
flow.run(1)
// get the results from any processor
val records = flow.resultsFromProcessorRelation("LastProcessorInFlow", "successRelation")
records should have size 1
This library is still under development so improvements and ideas are welcomed! :)

Are BPMN sequenceFlow allowed to reference specific activities within another process/subprocess?

I am modeling a complex process using BPMN 2.0.
I have split the process into multiple global processes which can reference one another through call activities.
However, in one or two special cases, I would like to call directly into the middle of one of the other processes. I do not want to create an entirely duplicate [sub]process with just the first couple of nodes missing, and I would also prefer not to split those couple of nodes into their own little process.
I don't think common BPMN 2.0 tools support this, but is it explicitly disallowed by the spec? For instance, I read through http://www.omg.org/spec/BPMN/2.0.2/PDF and I don't see anywhere that it claims that a sequenceFlow's targetRef must be within the same FlowElementsContainer. Maybe it is just implied?
The correct way to do this would be to create several "none" start events in the global process and then reference the correct one via the targetRef attribute of a sequence flow incoming to the call activity. The spec says on p. 239:
"If the Process is used as a global Process (a callable Process that
can be invoked from Call Activities of other Processes) and there are
multiple None Start Events, then when flow is transferred from the
parent Process to the global Process, only one of the global Process’s
Start Events will be triggered. The targetRef attribute of a Sequence
Flow incoming to the Call Activity object can be extended to identify
the appropriate Start Event."
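As a sketch of one reading of that clause (all IDs are invented; a targetRef pointing across processes is exactly the extension the spec alludes to, so standard validators may well reject it):

<!-- Global process exposing two none start events -->
<process id="globalProcess">
    <startEvent id="startFromBeginning"/>
    <startEvent id="startMidway"/>
    <!-- ... remaining flow elements ... -->
</process>

<!-- Parent process: the sequence flow into the call activity is extended
     so that its targetRef names the desired start event -->
<process id="parentProcess">
    <callActivity id="callGlobal" calledElement="globalProcess"/>
    <sequenceFlow id="intoCall" sourceRef="someTask" targetRef="startMidway"/>
</process>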