Mule flow terminate

I have a small question. Is there any option to stop/terminate a flow wherever we want in Mule 4? Example: after executing a Transform Message or a Logger processor, I want to stop/terminate the flow based on our business requirement.

Two different ways to achieve that are described in the KB article https://help.mulesoft.com/s/article/How-To-Stop-Or-Start-Flows-In-Mule-4-x-Programmatically
Basically, you need to get an instance of the Mule registry, then look up the flow and stop or start it.
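For reference, a minimal Java sketch of that approach against the Mule 4 API (the class name, error handling, and the way the flow name is passed in are placeholders; the casts rely on Flow implementing Startable/Stoppable):

import javax.inject.Inject;
import org.mule.runtime.api.artifact.Registry;
import org.mule.runtime.api.lifecycle.Startable;
import org.mule.runtime.api.lifecycle.Stoppable;

public class FlowLifecycleSwitch {

    @Inject
    private Registry registry; // injected by the Mule runtime

    // Stop the flow registered under the given name
    public void stopFlow(String flowName) throws Exception {
        Object flow = registry.lookupByName(flowName)
                .orElseThrow(() -> new IllegalArgumentException("No flow named " + flowName));
        ((Stoppable) flow).stop();
    }

    // Start it again later
    public void startFlow(String flowName) throws Exception {
        Object flow = registry.lookupByName(flowName)
                .orElseThrow(() -> new IllegalArgumentException("No flow named " + flowName));
        ((Startable) flow).start();
    }
}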

Yes, you can use a Groovy script or a Java class to start/stop a Mule flow at runtime. Drag and drop the Scripting component from the palette, choose Groovy as your engine, and use the script below.
// Look up the flow object in the Mule registry ('flowName' is the name of the target flow)
def flow = registry.lookupByName('flowName').get()
// Toggle the flow's lifecycle state
if (flow.isStarted())
    flow.stop()
else
    flow.start()

Related

Mule 4 batch - how to send On-Complete phase response to HTTP listener?

I have a common scenario but I am not able to figure out the solution with Mule 4 batch. In my flow, an HTTP listener invokes the flow, then I call a DB select and use a batch job to upsert data into Salesforce.
By default, batch creates stats in the On-Complete phase, and my requirement is to send those exact stats as the response, but I am not able to access them outside of the batch. I tried with vars, attributes, and even VM publish (in that case the response does not go back to the listener).
Can someone please guide me on this? I'm attaching the flow design for reference.
flow design
Thanks.
You can't. Batch works in the background; your flow will be long gone before the batch is done.
My suggestion is that you (1) store the reporting data somewhere and (2) get to the data using another request/way.
Here's the documentation: https://docs.mulesoft.com/mule-runtime/4.2/batch-processing-concept
You can store the payload of the On-Complete phase in an Object Store and retrieve it later to build your report. The payload in the On-Complete phase is a Java object with the properties you need for the report (e.g. loadedRecords, failedRecords, etc.).
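For illustration, a hedged Java sketch of that pattern using Mule 4's Object Store API (ObjectStoreManager and ObjectStore are the real API; the store name "batch-stats", the stats map, and how you extract the numbers from the On-Complete payload are assumptions):

import java.io.Serializable;
import java.util.HashMap;
import javax.inject.Inject;
import org.mule.runtime.api.store.ObjectStore;
import org.mule.runtime.api.store.ObjectStoreManager;
import org.mule.runtime.api.store.ObjectStoreSettings;

public class BatchStatsStore {

    @Inject
    private ObjectStoreManager objectStoreManager;

    private ObjectStore<HashMap<String, Serializable>> store() {
        // getOrCreateObjectStore is idempotent, so the storing and reporting flows can both call it
        return objectStoreManager.getOrCreateObjectStore("batch-stats",
                ObjectStoreSettings.builder().persistent(true).build());
    }

    // Called from the On-Complete phase with values read from the batch
    // result payload (loadedRecords, failedRecords, ...)
    public void save(String jobInstanceId, HashMap<String, Serializable> stats) throws Exception {
        store().store(jobInstanceId, stats);
    }

    // Called later from a second flow/endpoint that serves the report
    public HashMap<String, Serializable> load(String jobInstanceId) throws Exception {
        return store().retrieve(jobInstanceId);
    }
}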

Integration and unit testing NiFi process groups

I have a few NiFi process groups which I want to run integration tests on before promoting to production. The issue is that I can't seem to find any documentation on how to do so.
Data Provenance seems like a promising tool to accomplish what I want; however, over the course of the flowfile's lifecycle, data is published to/from Kafka or the file system. As a result, the flowfile UUID changes, so I cannot query for it using the nifi-api.
Additionally, I know that NiFi offers a TestRunner library to run tests; however, this seems to be only for processors/process groups constructed in code, not via the UI.
Does anyone know of a tool, framework, or pattern for integration and unit testing NiFi process groups? Ideally this would be a solution where you can programmatically compare the input/output of a processor/process group without modifying the existing workflow.
With the introduction of the Apache NiFi Registry, we have seen users promote flows from a development/sandbox environment to a test/QE environment where existing "test harness" flows surround the "flow under test", so that they can send repeatable and deterministic data (or an anonymized sample of real production data) through the flow and compare the results to an expected value.
As you point out, there is a TestRunner class and a whole testing framework provided for unit tests. While it can be difficult to manually translate a UI-constructed flow into the programmatic construction, you could also create something like a translator that accepts a flow template or flow.xml.gz file and converts it into something processable by the test framework.
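For the unit-test side, a minimal example with the NiFi test framework looks like this (a sketch: MyProcessor and its property, relationship, and attribute names are placeholders for your own processor class):

import java.util.List;
import org.apache.nifi.util.MockFlowFile;
import org.apache.nifi.util.TestRunner;
import org.apache.nifi.util.TestRunners;
import org.junit.Test;

public class MyProcessorTest {

    @Test
    public void routesValidInputToSuccess() throws Exception {
        // Build a runner around the processor class under test
        TestRunner runner = TestRunners.newTestRunner(MyProcessor.class); // placeholder class
        runner.setProperty("My Property", "some value");                  // placeholder property

        // Enqueue input content and run the processor once
        runner.enqueue("csv row,12,another value,true".getBytes());
        runner.run(1);

        // Assert routing and inspect the output flowfiles
        runner.assertAllFlowFilesTransferred("success", 1);
        List<MockFlowFile> out = runner.getFlowFilesForRelationship("success");
        out.get(0).assertAttributeExists("parsed.count"); // placeholder attribute
    }
}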
Maybe plumber will help you with flow testing.
We also wanted to test whole NiFi flows, not just single processor, so we created this library and decided to open-source it.
Simple example in Scala:
// read flow previously exported from NiFi
val template = TemplateDeserializer.deserialize(this.getClass.getClassLoader.getResourceAsStream("exported-flow.xml"))
val flow = NifiTemplateFlowFactory(template).create()
// enqueue some data to any processor
flow.enqueueByName("csv row,12,another value,true", "CsvParserProcessor")
// run entire flow once
flow.run(1)
// get the results from any processor
val records = flow.resultsFromProcessorRelation("LastProcessorInFlow", "successRelation")
records should have size 1
This library is still under development, so improvements and ideas are welcome! :)

How to automate running a Mule application

I have a Mule flow and I want to automate the execution of the application without an HTTP listener.
I want the Mule application to execute without having to hit "localhost:8081/app".
Is there a way to do this?
Screenshots of the flow
As I understand your question, I can suggest the steps below:
1) Add a Composite Source at the start of your flow.
2) Place the existing HTTP inbound endpoint inside the Composite Source scope.
3) In addition, add a Quartz inbound endpoint to the Composite Source scope and configure the time at which you want it to run using a cron expression (e.g. 0 0 6 * * ? fires every day at 06:00).
This approach gives you the option to trigger the flow either through the HTTP URL or as an automated execution through the Quartz component's cron expression.
Please comment on this answer if you feel my understanding is wrong.
Do you simply want the app to run at scheduled intervals? If so, I think the Quartz connector would be your best choice.
Is this the scenario you are after?

Updating the steps in Saga

I am looking for a way to change the steps in a saga, for example to insert a step during processing, preferably at runtime.
Is this possible using sagas?
Sagas (particularly those written using Automatonymous) were not designed to handle dynamic configuration at runtime. They are a codified way to create process monitors and workflows.
If you need to dynamically modify the steps of a workflow, you could use the Courier routing slip, which is built into MassTransit. It allows an activity in the workflow to revise the itinerary, adding or removing steps (activities) as needed.

Is it possible to programmatically add an interceptor before a VM endpoint from within a Mule FunctionalTestCase? If so, how?

I have a flow with two VM endpoints, both configured with the request/response exchange pattern. I want to evaluate the message at the end of the flow, when it reaches the second VM endpoint and before the next flow takes off with the message. I thought I might be able to do this with an interceptor inserted before the VM endpoint. Is this possible from within a Mule FunctionalTestCase? Is it possible to programmatically add an interceptor to a flow at all?
Personally, I think that the flows should not really be altered during testing. In that case you would have another (although just slightly different) version running than the one you deploy to a server.
Instead, I would argue that you divide your flows into testable parts and put the endpoint addresses into separate configuration. That way you can test each VM-based flow separately and verify the behaviour using mock flows or similar:
vm://in-flow1 -> process -> vm://mock
vm://mock -> verify payload -> vm://in-flow2
In the "real" configuration, you change "mock" to something pointing to the second vm flow.
You can also mock the first or second VM flow entirely, isolating them from each other, to create distinct unit tests, as in the sketch below.
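A sketch of what such a test could look like in a Mule 3 FunctionalTestCase (the config file name, endpoint addresses, and payloads are placeholders; muleContext.getClient() and client.send() are the stock Mule 3 test APIs):

import org.junit.Test;
import static org.junit.Assert.assertEquals;
import org.mule.api.MuleMessage;
import org.mule.api.client.MuleClient;
import org.mule.tck.junit4.FunctionalTestCase;

public class Flow1FunctionalTest extends FunctionalTestCase {

    @Override
    protected String getConfigFile() {
        // test config where vm://mock stands in for the second flow's endpoint
        return "flow1-test-config.xml"; // placeholder
    }

    @Test
    public void messageReachesEndOfFirstFlow() throws Exception {
        MuleClient client = muleContext.getClient();
        // request/response send into the first VM endpoint
        MuleMessage response = client.send("vm://in-flow1", "test payload", null);
        assertEquals("expected payload", response.getPayloadAsString());
    }
}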
However, if you really want to go down the "modify code for testing purposes" rabbit hole, you can likely use some aspect-oriented black magic to achieve that.
Look at this blog post for how it's done in Mule.
You could try MUnit and run a spy around the flow (it should work), so that you can run assertions after the flow execution.
https://github.com/mulesoft/munit