Mule app - call the same API multiple times in parallel

In a Mule app (using Mule 4), I am trying to invoke a single API multiple times for an input array of Strings like this:
"input_arr": [
"val1", "val2", "val3"
]
All the invocations can run in parallel since they are independent, but I want to wait and collate the results once they all complete. Also, if one or more of them result in errors, I want to capture those as well.
I tried a couple of different ways:
1. A simple For Each scope -- not efficient, since it processes sequentially.
2. A Batch Job -- it is asynchronous, and the main flow does not wait for it to complete.
What would be the best way to achieve this efficiently in Mule?

If you are using Mule 4.2+, then the parallel-foreach scope might achieve what you are looking for.
The Parallel For Each scope enables you to process a collection of messages by splitting the collection into parts that are simultaneously processed in separate routes, within the scope of any limitation configured for concurrent processing.
NOTE: However, because this feature is not available in the Anypoint Studio Mule Palette view, you must manually configure Parallel For Each scope in the XML.
Also there are some differences other than concurrency with the new scope, so make sure to read the documentation:
https://docs.mulesoft.com/mule-runtime/4.2/parallel-foreach-scope
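For reference, a minimal sketch of the hand-written XML, assuming an HTTP Request configuration named HTTP_Request_config and an illustrative path (both are assumptions, not from your app):

<flow name="parallel-calls-flow">
    <!-- each element of input_arr is processed concurrently, bound as payload in the route -->
    <parallel-foreach collection="#[payload.input_arr]">
        <http:request method="GET" config-ref="HTTP_Request_config" path="#['/api/' ++ payload]"/>
    </parallel-foreach>
    <!-- after the scope, payload is a list of Message objects, one per element;
         #[payload map $.payload] collapses it to the individual results -->
</flow>

If any iteration fails, the scope raises a composite routing error, so the per-element failures can be inspected in an error handler.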

The solution is probably to use the Parallel For Each scope, available since Mule 4.2.

Related

Mule flow terminate

I have a small question: is there any option to stop/terminate a flow wherever we want in Mule 4? For example, after executing a Transform Message or a Logger processor, I want to stop/terminate the flow based on a business requirement.
Two different ways to achieve that are described in the KB article https://help.mulesoft.com/s/article/How-To-Stop-Or-Start-Flows-In-Mule-4-x-Programmatically
Basically you need to get an instance of the Mule registry, then look up the flow and stop or start it.
Yes, you can use a Groovy script or a Java class to start/stop a flow at runtime. Drag and drop the Scripting component from the palette, choose Groovy as your engine, and use the script below.
def flow = registry.lookupByName('flowName').get()  // 'flowName' is the name of the flow to control
if (flow.isStarted())
    flow.stop()
else
    flow.start()

Parallel execution with multiple users in Karate

My requirement is: I want parallel execution with, say, 5 threads, each creating an entity. I want more threads so that test execution time goes down. But as the number of threads increases, I get an error from the DB saying it is unable to lock the row, because all threads use the same user to create the entity. Is it possible in Karate to use multiple user credentials so that threads can pick a user at random when creating an entity?
Simple solution: write the logic in Java to do this and make it a singleton or a static method. Then call it from your script, something like this:
* var MyCode = Java.type('com.myco.MyCode')
* var entity = MyCode.getEntity()
So you can keep track of the entities created (maybe in a Set or Map) and re-use them as you wish.
Sorry, Karate does not have built-in support for this kind of thing.
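For the multiple-users part, a minimal sketch of such a helper, assuming a fixed pool of pre-provisioned test users (the package, class, and user names are illustrative, not a Karate API):

package com.myco;

import java.util.Arrays;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class MyCode {
    // Hypothetical pool of test users; replace with real credentials
    private static final List<String> USERS =
            Arrays.asList("user1", "user2", "user3", "user4", "user5");
    private static final AtomicInteger NEXT = new AtomicInteger();

    // Hands out users round-robin; AtomicInteger keeps this safe across parallel threads
    public static String getUser() {
        return USERS.get(Math.floorMod(NEXT.getAndIncrement(), USERS.size()));
    }
}

Then each thread picks up its own user in the feature file:
* var MyCode = Java.type('com.myco.MyCode')
* var user = MyCode.getUser()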

How to hit an API in parallel with different input parameters

I am getting doctorCodes as (Dr1124914, Dr1074955).
My clinic API returns the above response for one doctorCode at a time, and I have to extract a value from the response.
But I want to make parallel calls to my API with all the doctorCode values shown above in one go, extract the required field from each response,
and finally accumulate the results into my resultant payload.
You can use the Scatter-Gather component to perform parallel calls and aggregate the results using DataWeave. See the documentation at https://docs.mulesoft.com/mule-runtime/4.1/scatter-gather-concept
Note that it works with a fixed number of parallel routes, not a dynamic number. I don't think there is a way to do a dynamic number of routes in Mule 4. If you are interested in that, you would have to implement it yourself in custom Java or scripting code.
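A minimal sketch with your two doctor codes, one per route (the HTTP Request configuration name and path are assumptions):

<scatter-gather>
    <route>
        <http:request method="GET" config-ref="Clinic_API_config" path="/doctors/Dr1124914"/>
    </route>
    <route>
        <http:request method="GET" config-ref="Clinic_API_config" path="/doctors/Dr1074955"/>
    </route>
</scatter-gather>
<!-- Scatter-Gather returns an object keyed "0", "1", ... with one Message per route;
     this DataWeave collapses it to a list of the route payloads -->
<ee:transform>
    <ee:message>
        <ee:set-payload><![CDATA[%dw 2.0
output application/json
---
payload pluck $.payload]]></ee:set-payload>
    </ee:message>
</ee:transform>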

Integration and unit testing NiFi process groups

I have a few NiFi process groups which I want to run integration tests on before promoting to production. The issue is that I can't seem to find any documentation on how to do so.
Data Provenance seems like a promising tool to accomplish what I want, however, over the course of the flowfile's lifecycle, data is published to/from kafka or the file system. As a result, the flowfile UUID changes so I cannot query for it using the nifi-api.
Additionally, I know that Nifi offers a TestRunner library to run tests, however, this seems to only be for processors/processor groups generated via code and not the UI.
Does anyone know of a tool, framework, or pattern for integration and unit testing NiFi process groups? Ideally this would be a solution where you can programmatically compare input/output of the processor/process group without modifying the existing workflow.
With the introduction of the Apache NiFi Registry, we have seen users promote flows from a development/sandbox environment to a test/QE environment where there are existing "test harness" flows surrounding the "flow under test" so that they can send repeatable and deterministic (or an anonymized sample of real production data) through the flow and compare the results to an expected value.
As you point out, there is a TestRunner class and a whole testing framework provided for unit tests. While it can be difficult to manually translate a UI-constructed flow to the programmatic construction, you could also create something like a translator to accept a flow template or flow.xml.gz file and convert it into something processable by the test framework.
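For reference, unit-testing a single processor with the nifi-mock TestRunner looks roughly like this (MyProcessor and its REL_SUCCESS relationship stand in for your own processor class):

import org.apache.nifi.util.TestRunner;
import org.apache.nifi.util.TestRunners;
import org.junit.Test;

public class MyProcessorTest {
    @Test
    public void testSingleFlowFile() {
        // Wires a mock process session and queue around the processor under test
        TestRunner runner = TestRunners.newTestRunner(new MyProcessor());
        runner.enqueue("some input".getBytes());  // queue one incoming flowfile
        runner.run();                             // trigger the processor once
        runner.assertAllFlowFilesTransferred(MyProcessor.REL_SUCCESS, 1);
    }
}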
Maybe plumber will help you with flow testing.
We also wanted to test whole NiFi flows, not just single processors, so we created this library and decided to open-source it.
Simple example in Scala:
// read flow previously exported from NiFi
val template = TemplateDeserializer.deserialize(this.getClass.getClassLoader.getResourceAsStream("exported-flow.xml"))
val flow = NifiTemplateFlowFactory(template).create()
// enqueue some data to any processor
flow.enqueueByName("csv row,12,another value,true", "CsvParserProcessor")
// run entire flow once
flow.run(1)
// get the results from any processor
val records = flow.resultsFromProcessorRelation("LastProcessorInFlow", "successRelation")
records should have size 1
This library is still under development so improvements and ideas are welcomed! :)

BPMN model API to edit Process diagram

I have a process diagram that directs flow on the basis of threshold variables. For example, for variables x, y: if x<50 I am directed to service task 1, if y<40 to service task 2, and if x>50 && y>40 to some other task.
As intuition suggests, I am using comparison checks on the sequence flows to determine the next task.
x, y are input by the user, but 50, 40 (let's call these numbers {n}) are part of the process definition (PD).
Now, for a fixed {n}, I have deployed a process diagram and it runs successfully.
What should I do if my {n} can vary for different process instances? Is there a way to keep the same version of the process definition but have it take {n} dynamically?
I read about the BPMN Model API here. But I can't seem to figure out how to use it to edit my PD dynamically. Do I need to redeploy it each time on Tomcat, or how does it work?
If you change a process model with the Model API, you have to redeploy it to actually use the changes. If you want a process definition with variable {n} values, you can instead use process variables for them and set them when starting the process instance, using the Java API, the REST API, or the Tasklist.
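A minimal sketch of the second approach with the Camunda Java API (the process key and variable names are assumptions); the sequence-flow conditions then compare against the variables, e.g. ${x < xThreshold} instead of ${x < 50}:

import java.util.HashMap;
import java.util.Map;
import org.camunda.bpm.engine.RuntimeService;
import org.camunda.bpm.engine.runtime.ProcessInstance;

public class StartWithThresholds {
    public static ProcessInstance start(RuntimeService runtimeService) {
        Map<String, Object> vars = new HashMap<>();
        vars.put("xThreshold", 50);  // the {n} values become per-instance data...
        vars.put("yThreshold", 40);  // ...instead of constants baked into the model
        return runtimeService.startProcessInstanceByKey("myProcess", vars);
    }
}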