Use batch job in sub-flow in Mulesoft - mule

I am implementing a Mule app that fetches data from a database and syncs it to different CRMs based on a condition, e.g. if 10 records are fetched, 3 of them might be synced with HubSpot and 7 with Salesforce. I am using the integration APIs of all the CRMs. I want to make a separate sub-flow for each CRM, but Mule doesn't allow me to drag and drop a batch job into a sub-flow. Is this achievable? What other options are there for this implementation?

As of today, a batch job can only live inside a flow, not a sub-flow. You can, however, reference that flow with a flow-ref from any other flow.
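For example, a minimal sketch of that pattern (the flow and job names are placeholders): the batch job sits in its own flow, and any other flow hands it the fetched records through a flow-ref.
<flow name="crm-sync-batch-flow">
    <batch:job jobName="crm-sync-batch-job">
        <batch:process-records>
            <batch:step name="sync-step">
                <logger level="INFO" message="#[payload]" doc:name="Process record" />
            </batch:step>
        </batch:process-records>
    </batch:job>
</flow>

<flow name="main-flow">
    <!-- fetch the records here; the resulting collection becomes the batch job's records -->
    <flow-ref name="crm-sync-batch-flow" doc:name="Run batch flow" />
</flow>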
Now, regarding your requirement of routing records to different CRMs: I would not recommend creating multiple batch jobs for a single message, since instantiating a batch job has its own overhead. Instead, create a sub-flow per CRM that sends one or more records directly to that CRM; these sub-flows would not contain their own batch jobs. Then, after fetching your records, create a single batch job and, for each CRM sub-flow, add a batch:step with the required acceptExpression so that only the matching records are sent to the corresponding sub-flow.
It will look something like this:
<!-- your logic to fetch the records -->
<batch:job jobName="all-crm-sync-batch-job">
    <batch:process-records>
        <batch:step name="crm1-sync-step" acceptExpression="#[payload.yourCrm1RelatedCondition == true]">
            <flow-ref name="send-record-to-crm1-subflow" />
        </batch:step>
        <batch:step name="crm2-sync-step" acceptExpression="#[payload.yourCrm2RelatedCondition == true]">
            <flow-ref name="send-record-to-crm2-subflow" />
        </batch:step>
    </batch:process-records>
</batch:job>
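One of the referenced sub-flows could then look roughly like the sketch below. The DataWeave mapping, field names, HTTP request configuration, and path are assumptions about your CRM's API, not something Mule prescribes.
<sub-flow name="send-record-to-crm1-subflow">
    <!-- map the record to whatever shape the CRM expects (field names are placeholders) -->
    <ee:transform doc:name="Map record to CRM1 payload">
        <ee:message>
            <ee:set-payload><![CDATA[%dw 2.0
output application/json
---
{
    name: payload.name,
    email: payload.email
}]]></ee:set-payload>
        </ee:message>
    </ee:transform>
    <!-- crm1HttpConfig is a hypothetical HTTP request configuration pointing at the CRM's API -->
    <http:request method="POST" config-ref="crm1HttpConfig" path="/contacts" />
</sub-flow>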

Related

Batch processing in Mule 4

I am performing a batch operation using the batch scope in Mule 4. I am using an Sfdc connector inside my batch step to query the Ids. The batch runs in sets of, say, 200, with 1000 inputs in total. I need to get all 1000 Ids from the Sfdc query outside my batch scope. I tried a few ways to access the payload coming out of the batch step but failed to get the payload outside the scope. I see that the payload is accessible only inside the process stage. Is there any way to achieve this?

Roll back in batch processing

While reading the Mule docs on batch processing, I read that there are 3 ways to handle failures during batch processing. But if I am processing 100 records and the 4th record fails, I want to roll back the whole batch instead of continuing from the 5th record onwards. Is there a way to roll back the first 3 records?
You need to set Max Failed Records to '0':
<batch:job name="accesspayloadBatch" max-failed-records="0">
<batch:process-records>
<batch:step name="Batch_Step"/>
</batch:process-records>
</batch:job>
If you set it to -1, the job keeps processing records and ignores failures. With 0, it stops right at the failed record and does not proceed to the next one. If you are using a database to insert records, make the insert transactional so it can be rolled back.
Also refer to this URL: Error handling in Mule Salesforce Batch

Mule ESB Database Bulk and Batch Process

Hi, I am trying to fetch records from one table and insert them into another table (MS SQL). When I fetch them they come as key-value pairs, and I am using a batch process with a commit size of 1000, but when 2000 records come in, only two rows get inserted into the table.
I tried using bulk mode, but bulk mode asks for a dynamic query.
Below is my query:
INSERT INTO Sample VALUES( #[payload.Index], #[payload.Name])
and the payload is
{Index=1, Name=XX}, {Index=2, Name=XX}, {Index=3, Name=XX}, etc.
Please help me understand how to use bulk mode in Mule ESB and what dynamic query I can write for this. Batch is only posting the first row, so I think using bulk inside batch will solve my problem. Thanks in advance!
You can use Mule bulk mode = "true" with a parameterized query:
<db:insert config-ref="Database_Configuration" bulkMode="true" doc:name="Database">
    <db:parameterized-query><![CDATA[INSERT INTO Sample(column1, column2) VALUES( #[payload.Index], #[payload.Name])]]></db:parameterized-query>
</db:insert>
Check out this bulk mode link from the Mule documentation:
Enables you to submit collections of data with one query, as opposed to executing one query for every parameter set in a collection. Enabling bulk mode improves the performance of your applications as it reduces the number of individual query executions. Bulk mode requires a parameterized query with at least one parameter.
Just make sure your payload is of Collection type going into bulk mode.
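Tying that together, a rough end-to-end sketch for this case is shown below; the poll frequency, config names, and table/column names are placeholders, and the fetch is assumed to use the same Database connector. The select returns a List of Maps, which is exactly the Collection the bulk insert needs, so no batch job is required at all.
<flow name="copy-table-flow">
    <poll frequency="60000" doc:name="Poll">
        <db:select config-ref="Source_Database" doc:name="Fetch rows">
            <db:parameterized-query><![CDATA[SELECT [Index], Name FROM SourceTable]]></db:parameterized-query>
        </db:select>
    </poll>
    <!-- payload here is a List of Maps, e.g. [{Index=1, Name=XX}, {Index=2, Name=XX}, ...] -->
    <db:insert config-ref="Database_Configuration" bulkMode="true" doc:name="Bulk insert">
        <db:parameterized-query><![CDATA[INSERT INTO Sample(column1, column2) VALUES( #[payload.Index], #[payload.Name])]]></db:parameterized-query>
    </db:insert>
</flow>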
In your log4j2.xml, add a logger for the database module at DEBUG level to see the actual query Mule passes to the database. Great for debugging:
<AsyncLogger name="org.mule.module.db" level="DEBUG"/>
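In context, the entry goes inside the <Loggers> section of your log4j2.xml, for example:
<Loggers>
    <!-- existing loggers ... -->
    <AsyncLogger name="org.mule.module.db" level="DEBUG"/>
</Loggers>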
Note that bulk mode will not work together with batch processing. Either of the two would work on its own, but if your intent is just to insert data, bulk mode is far more efficient than batch.

How to send multiple records to DB in Mule ESB

I have an inbound database endpoint where I select records with a condition, which returns 500 rows as the result set. Now I want to insert those columns into another DB. I used the batch process and have two batch steps, one selecting data and one inserting data.
Now, if an error occurs while selecting data I have to send a mail, and if inserting fails I need to log it in a different place. So how can I create two different exception handlers, one for each step? I am not able to use a catch exception strategy in the batch process. For now I am using a flow reference inside the batch step and handling the exception there. Please suggest a better way. I am using batch execute -> batch -> batch step -> flow reference -> exception handling.
<batch:job name="BOMTable_DataLoader">
<batch:process-records>
<batch:step name="SelectData">
<flow-ref name="InputDatabase" doc:name="InputDatabase"/>
</batch:step>
<batch:step name="InsertData">
<batch:commit size="1000" doc:name="Batch Commit">
<flow-ref name="InserDatabase" doc:name="UpdateDataBase"/>
</batch:commit>
</batch:step>
</batch:process-records>
<batch:on-complete>
For only 500 rows I would not use batch processing; you could simply build, with an iteration, a SQL script containing all the inserts and execute it in one shot via the database connector.
If you want to know which insert fails, you could use the foreach processor to iterate over the rows and insert them one by one so that you can control the output; it will be slower, but it all depends on your needs.
Just use the right type of query depending on your needs, and remember you can use MEL in your queries. Just pay attention to SQL injection if your input is coming from untrusted sources (if you use a parameterized query then you are on the safe side because parameters are escaped).
More on the db connector
https://docs.mulesoft.com/mule-user-guide/v/3.7/database-connector
After your question update, I can confirm that the only way to handle exceptions here is the way you are already using. The fact that the Mule module is called "batch processing" does not mean that every time you have something that looks like a batch job you need to use that component. When you have a complex case, just don't use it; use standard Mule components like VM endpoints for async execution and normal tools like foreach, or even better a collection splitter, and get all the freedom and control over exception handling.
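As a rough sketch of the foreach approach (flow names, the connector configuration, and table/column names below are placeholders, not taken from your project): each row is inserted by a separate flow that owns its own exception strategy, so a failed insert can be logged without stopping the whole run.
<flow name="insert-rows-one-by-one-flow">
    <foreach doc:name="For Each row">
        <flow-ref name="insert-single-row-flow" doc:name="Insert row"/>
    </foreach>
</flow>

<flow name="insert-single-row-flow" processingStrategy="synchronous">
    <db:insert config-ref="Target_Database" doc:name="Insert row">
        <db:parameterized-query><![CDATA[INSERT INTO TargetTable(col1, col2) VALUES( #[payload.col1], #[payload.col2])]]></db:parameterized-query>
    </db:insert>
    <catch-exception-strategy doc:name="Log failed row">
        <logger level="ERROR" message="Insert failed for row: #[payload]" doc:name="Logger"/>
    </catch-exception-strategy>
</flow>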

Multiple inbound database endpoints in 1 flow

I am trying to create one flow where there could potentially be more than one database inbound end-point. The flow goes as follows:
Get rows from table X in database A where X.status (table X, column status) = 'new'.
Get rows from table Y in database B where Y.some_id = X.another_id, with X being the rows retrieved in step (1).
Insert new rows into table Z in database B.
I realized that there can only be one database inbound endpoint. Is there any way I could accomplish this with Mule?
Environment: Mule 3.4
You can declare multiple message sources in your flow by leveraging the <composite-source /> scope.
Basically your flow would look like this:
<flow ... >
    <composite-source>
        <jdbc:inbound-endpoint ... />
        <jdbc:inbound-endpoint ... />
    </composite-source>
    ...
</flow>
This documentation page should provide you with more information on the topic.
Your first DB call will be inbound and the second one will be outbound. Since the first DB call triggers the flow by reading from the database, it is inbound, while the second one is made from within the flow to fetch data, which makes it an outbound call.
Step 1) Inbound database call: read from database A (table X, column status).
Step 2) The result (payload) will most likely be a list, or a list of maps. Because multiple rows are coming in, it is important to use a collection splitter to split the payload into individual rows. You can then read the individual fields, for example with a Groovy script or an expression, and set up a flow variable.
To read the status field: #[flowVars['Var']['status']]. Similarly you can read the remaining fields.
Now that you have your dataset from database A, write a query for database B and use the flow variable (explained above) to filter on Y.some_id, e.g. "WHERE Y.some_id = #[flowVars['Var']['another_id']]".
Step 3) Using the same outbound connection you can do the insert into database B (table Z). What makes all this possible is that you are using specific database connectors.
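Putting those three steps together, a rough sketch of the pattern might look like this. Note it is written with the newer db module (available from Mule 3.5) purely for readability, so on Mule 3.4 the equivalent jdbc endpoints would be used instead, and every config, table, and column name below is a placeholder.
<flow name="sync-a-to-b-flow">
    <!-- Step 1: the inbound endpoint (omitted here) delivers the list of 'new' rows from table X in database A -->
    <collection-splitter doc:name="Split rows"/>
    <set-variable variableName="Var" value="#[payload]" doc:name="Store current row"/>
    <!-- Step 2: query table Y in database B using a value from the stored row -->
    <db:select config-ref="Database_B" doc:name="Select from Y">
        <db:parameterized-query><![CDATA[SELECT * FROM Y WHERE some_id = #[flowVars['Var']['another_id']]]]></db:parameterized-query>
    </db:select>
    <!-- Step 3: insert into table Z in database B, again using the stored row -->
    <db:insert config-ref="Database_B" doc:name="Insert into Z">
        <db:parameterized-query><![CDATA[INSERT INTO Z(some_column) VALUES( #[flowVars['Var']['status']])]]></db:parameterized-query>
    </db:insert>
</flow>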
A couple of pieces of advice:
1) Make database connectors and queries global (before the flow)
2) Use an HTTP inbound endpoint so that you can trigger your flow's run from a browser (see the sketch below).
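For example, a minimal sketch of the second suggestion on Mule 3.4, where the host, port, and path values are placeholders:
<flow name="manual-trigger-flow">
    <http:inbound-endpoint exchange-pattern="request-response" host="localhost" port="8081" path="run-sync" doc:name="HTTP"/>
    <!-- select from database A here, then continue with the steps described above -->
</flow>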
Hope this helps.