How to modify a job parameter in Spring Batch? - Kotlin

I've managed to read a job parameter in a Tasklet, but I can't figure out how to change that value so the next Tasklet can access the modified value.
val params = JobParametersBuilder()
    .addString("transaction", UUID.randomUUID().toString())
    .addDouble("amount", 0.0)

jobLauncher.run(
    paymentPipelineJob,
    params.toJobParameters()
)
First task:
override fun beforeStep(stepExecution: StepExecution) {
    logger.info("[$javaClass] - task initialized.")
    this.amount = stepExecution.jobParameters.getDouble("amount")
    // Prints 0.0
    logger.info("before step: ${this.amount}")
}

override fun afterStep(stepExecution: StepExecution): ExitStatus? {
    // Change the execution context to pass the value to the next step
    stepExecution.jobExecution.executionContext.put("amount", this.amount!! + 3)
    // Still prints 0.0
    logger.info("after step: ${stepExecution.jobParameters.getDouble("amount")}")
    logger.info("[$javaClass] - task ended.")
    return ExitStatus.COMPLETED
}
How can I modify a job parameter so all the steps can access it?

While the execution context is a mutable object, job parameters are immutable. Therefore, it is not possible to modify them once the execution is launched.
That said, according to the code you shared, you are putting an attribute amount in the job execution context and expecting to see the modified value from the job parameters instance. This is a wrong expectation. The execution context and the job parameters are two distinct objects and are not "inter-connected".
Edit: Add suggestion about how to address the use case
You can use the execution context to share data between steps. You are already doing this in your example:
stepExecution.jobExecution.executionContext.put("amount", this.amount!! + 3)
Once that is done in a listener after step 1, you can get the value from the execution context in a listener before step 2:
double amount = stepExecution.getJobExecution().getExecutionContext().getDouble("amount");
Please check Passing Data to Future Steps from the reference documentation.
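The distinction between immutable job parameters and the mutable execution context can be illustrated with a stdlib-only sketch. This is not Spring Batch's actual API: plain Java maps stand in for `JobParameters` (frozen at launch) and `ExecutionContext` (shared and mutable):

```java
import java.util.HashMap;
import java.util.Map;

public class ContextVsParams {
    public static void main(String[] args) {
        // "Job parameters": frozen at launch time, read-only afterwards.
        Map<String, Object> jobParameters =
                Map.of("transaction", "uuid-1234", "amount", 0.0);

        // "Execution context": mutable, shared across steps of one job execution.
        Map<String, Object> executionContext = new HashMap<>();

        // Step 1 reads the parameter and publishes a derived value.
        double amount = (double) jobParameters.get("amount");
        executionContext.put("amount", amount + 3);

        // Step 2 reads the updated value from the context, not the parameters.
        System.out.println("params amount:  " + jobParameters.get("amount"));    // still 0.0
        System.out.println("context amount: " + executionContext.get("amount")); // 3.0

        // Attempting to mutate the parameters fails, just as JobParameters
        // cannot be modified once the execution is launched.
        try {
            jobParameters.put("amount", 99.0);
        } catch (UnsupportedOperationException e) {
            System.out.println("parameters are immutable");
        }
    }
}
```

The two objects never synchronize: writing to the context leaves the parameters untouched, which is exactly why reading the job parameters after step 1 still shows 0.0.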


FLOWABLE: How to change 5 minute default interval of Async job

I assume DefaultAsyncJobExecutor is the class which gets picked up by default as the implementation of the AsyncExecutor interface (not sure if this assumption is right or not).
So basically I want to modify the default time-out duration of an asynchronous job. The default time-out duration is 5 minutes, which is the value of two variables: timerLockTimeInMillis and asyncJobLockTimeInMillis in AbstractAsyncExecutor.java.
I tried to change both values with their respective setter methods, and also tried to directly modify the values in the constructor of my custom implementation, like this:
public class AsyncExecutorConfigImpl extends DefaultAsyncJobExecutor
{
    // @Value( "${async.timeout.duration}" )
    private int customAsyncJobLockTimeInMillis = 10 * 60 * 1000;

    AsyncExecutorConfigImpl()
    {
        super();
        setTimerLockTimeInMillis( this.customAsyncJobLockTimeInMillis );
        setAsyncJobLockTimeInMillis( this.customAsyncJobLockTimeInMillis );
        super.timerLockTimeInMillis = this.customAsyncJobLockTimeInMillis;
        super.asyncJobLockTimeInMillis = this.customAsyncJobLockTimeInMillis;
    }
}
But the values remain the same, because the time-out still happens after 5 minutes.
Initialisation is done via an API, like start-new-process-instance. In this API, the following code starts the workflow process instance asynchronously (with processInstanceName and processInstanceId):
ProcessInstance lProcessInstance = mRuntimeService.createProcessInstanceBuilder()
    .processDefinitionId( lProcessDefinition.get().getId() )
    .variables( processInstanceRequest.getVariables() )
    .name( lProcessInstanceName )
    .predefineProcessInstanceId( lProcessInstanceId )
    .startAsync();
Once this is done, the rest of the workflow involves service tasks. While one instance is executing, I guess the time-out occurs and the instance gets restarted.
Since I have a listener configured, I was able to see in the logs that the start-event activity gets restarted every 5 minutes. For example, event-1 is the first event, and this event gets re-started after 5 minutes (the duration is displayed in the console logs).
Not sure what I'm missing at this point; let me know if any other details are required.
If the jar file is not under your control, you cannot change the default value of count, because the classes in the jar are compiled. You can only change the value inside an object, so you can use the super keyword:
class CustomImplementation extends DefaultExecutedClass {
    private int custom_count = 1234;

    CustomImplementation() {
        super();
        super.count = this.custom_count;
    }
}
Otherwise, if you really need to change the original file, you have to extract it from the jar.
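The subclass-and-override pattern above can be verified with a minimal, self-contained sketch. The class and field names here are hypothetical stand-ins for the library class (e.g. DefaultAsyncJobExecutor) and its timeout fields:

```java
// Hypothetical stand-in for a library class whose compiled-in default you cannot edit.
class DefaultExecutedClass {
    protected int count = 300_000; // e.g. the 5-minute default, in millis

    public int getCount() {
        return count;
    }
}

// Subclass that overrides the inherited default from its constructor.
class CustomImplementation extends DefaultExecutedClass {
    private static final int CUSTOM_COUNT = 600_000; // e.g. 10 minutes

    CustomImplementation() {
        super();
        super.count = CUSTOM_COUNT; // the field is protected, so the subclass may set it
    }
}

public class Demo {
    public static void main(String[] args) {
        System.out.println(new DefaultExecutedClass().getCount()); // 300000
        System.out.println(new CustomImplementation().getCount()); // 600000
    }
}
```

Note the catch, which matches the asker's symptom: the override only takes effect if the framework actually uses your instance. If the framework constructs its own default instance because the custom bean was never registered, the new value is never read.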
When you are using the Flowable Spring Boot starters, the SpringAsyncExecutor is used; it uses Spring's TaskExecutor and is provided as a bean. To change its values, you can use properties.
e.g.
flowable.process.async.executor.timer-lock-time-in-millis=600000
flowable.process.async.executor.async-job-lock-time-in-millis=600000
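For completeness, if the application is configured with YAML instead of a properties file, the equivalent settings (a sketch of application.yml, assuming the same Flowable starter property names as above) would be:

```yaml
flowable:
  process:
    async:
      executor:
        timer-lock-time-in-millis: 600000
        async-job-lock-time-in-millis: 600000
```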
Note: Be careful when changing this. If your process start is taking more than 5 minutes, it means you have a transaction open for that entire duration.

JMeter - Avoid threads abrupt shutdown

I have a test plan that has several Transaction Controllers (which I called UserJourneys), and each one is composed of some samplers (JourneySteps).
The problem I'm facing is that once the test duration is over, JMeter kills all the threads and does not take into consideration whether they are in the middle of a UserJourney (Transaction Controller) or not.
On some of these UJs I do some important stuff that needs to be done before the user logs in again, otherwise the next iterations (new test run) will fail.
The question is: is there a way to tell JMeter that it needs to wait for every thread to reach the end of its flow/UJ/Transaction Controller before killing it?
Thanks in advance!
This is not possible as of version 5.1.1; you should request an enhancement at:
https://jmeter.apache.org/issues.html
A workaround is to add, as the first child of the Thread Group, a Flow Control Action containing a JSR223 PreProcessor:
The JSR223 PreProcessor will contain this groovy code:
import org.apache.jorphan.util.JMeterStopTestException;

long startDate = vars.get("TESTSTART.MS").toLong();
long now = System.currentTimeMillis();
String testDuration = Parameters;
if ((now - startDate) >= testDuration.toLong()) {
    log.info("Test duration " + testDuration + " reached");
    throw new JMeterStopTestException("Test duration " + testDuration + " reached");
} else {
    log.info("Test duration " + testDuration + " not reached yet");
}
Finally you can set the property testDuration in millis on command line using:
-JtestDuration=3600000
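The guard that the JSR223 PreProcessor implements can be sketched outside JMeter in plain Java (stdlib only; in the real script, a true result corresponds to throwing JMeterStopTestException):

```java
public class DurationGuard {
    /**
     * Returns true when the configured test duration has elapsed,
     * i.e. when the test should be stopped gracefully.
     */
    static boolean shouldStop(long startMillis, long nowMillis, long testDurationMillis) {
        return (nowMillis - startMillis) >= testDurationMillis;
    }

    public static void main(String[] args) {
        long start = System.currentTimeMillis();
        long oneHour = 3_600_000L; // matches -JtestDuration=3600000

        // Just started: well under the duration, threads keep iterating.
        System.out.println(shouldStop(start, start + 5_000, oneHour));   // false

        // Past the duration: the PreProcessor would stop the test here.
        System.out.println(shouldStop(start, start + oneHour, oneHour)); // true
    }
}
```

Because the check runs at the start of each iteration rather than mid-journey, threads finish their current Transaction Controller before the test stops, which is the behaviour the question asks for.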

Corda: Can the output of one transaction be used in another transaction within the same flow with multiple same signers?

There is a flow as per below scenario.
Initiating Party : PartyA
Responding Party : PartyB
Transaction 1: Input StateA - ContractA results in output StateB - ContractA. Participants are PartyA and PartyB
Transaction 2: Input StateB - ContractA and no output. Participants are PartyA and PartyB
Is this possible in Corda? Please share an example in your response. Thanks.
It sounds like you're getting two different error messages:
If you don't try and initiate a second flow-session to get the second signature, you get something like:
net.corda.core.flows.UnexpectedFlowEndException: Counterparty flow on
O=Mock Company 2, L=London, C=GB has completed without sending data
While if you do initiate a second flow-session to get the second signature, you get something like:
java.lang.IllegalStateException: Attempted to initiateFlow() twice in
the same InitiatingFlow
com.example.flow.ExampleFlow$Initiator@312d7fe4 for the same party
O=Mock Company 2, L=London, C=GB. This isn't supported in this version
of Corda. Alternatively you may initiate a new flow by calling
initiateFlow() in an @InitiatingFlow sub-flow.
In the first case, the error is caused by the fact that the counterparty's flow has already completed. You try and get around this by creating a second flow session, but each Initiating flow can only initiate a single flow-session with a given counterparty.
Instead, you simply need to modify the responder flow to sign twice. For example:
@InitiatedBy(Initiator::class)
class Acceptor(val otherPartyFlow: FlowSession) : FlowLogic<Unit>() {
    @Suspendable
    override fun call() {
        val signTransactionFlow = object : SignTransactionFlow(otherPartyFlow) {
            override fun checkTransaction(stx: SignedTransaction) = requireThat {
                // Transaction checks...
            }
        }
        subFlow(signTransactionFlow)
        subFlow(signTransactionFlow)
    }
}
Yes, it is possible. Please see the link below to learn more:
https://docs.corda.net/key-concepts-transactions.html

Delete Batch operation - SAP UI5

Can anyone please let me know the syntax for the DELETE batch operation using SAP UI5? I have done the insert batch operation and it's working fine, but the delete batch operation is not working.
My INSERT batch operation syntax:
update_entry.YEAR = year;
update_entry.COUNTRY_ID = country;
update_entry.CUSTOMER = name;

var batch_single = insert_model.createBatchOperation('/customers',"POST",update_entry);
batch_changes.push(batch_single);
insert_model.addBatchChangeOperations(batch_changes);
insert_model.submitBatch(function() {
    update_success = "successful";
}, function() {
    update_success = "unsuccessful";
}, true);
insert_model.refresh();
I have modified the above code for the DELETE batch operation as below
var batch_single = insert_model.createBatchOperation('/customers',"DELETE",update_entry);
But the above syntax is not working. Could anyone help me with this issue?
Thanks,
Sathish
As opposed to the create operation, you'll need to pass the ID to the delete operation and not "entry":
var batch_single = insert_model.createBatchOperation('/customers(1234)',"DELETE");
I think changing "POST" to "DELETE" is correct, because you need a DELETE request to remove the data in your backend.
First, check out this thread: SAPUI5 - Batch Operations - how to do it right?
I think the main difference is the entity in your createBatchOperation ("/customers") - I think you have to change it to your service (e.g. "/sap/opu/odata/sap/MY_SERVICE/?$batch"). I found that the batch is then triggered in this sequence:
1) /IWBEP/IF_MGW_CORE_SRV_RUNTIME~CHANGESET_BEGIN: SAP proposal EXIT.
2) /iwbep/if_mgw_appl_srv_runtime~delete_entity (n times).
3) /iwbep/if_mgw_core_srv_runtime~changeset_end: SAP proposal COMMIT WORK.
Second, don't use '' and "" in the same statement (in createBatchOperation) - always use the same style (if possible):
insert_model.createBatchOperation("/customers","POST",update_entry);
Regards,
zY

Bigquery Api Java client intermittently returning bad results

I am executing some long-running queries using the BigQuery Java client.
I construct a BigQuery job and execute it like this:
val queryRequest = new QueryRequest().setQuery(query)
val queryJob = client.jobs().query(ProjectId, queryRequest)
queryJob.execute()
The problem I am facing is that, for the same query, the client sometimes returns before the job is complete, i.e. the number of rows in the result is zero.
I tried printing the response and it shows
{"jobComplete":false,"jobReference":{"jobId":"job_bTLRGrw5_xR26i9Li3a9EQvuA6c","projectId":"analytics-production"},"kind":"bigquery#queryResponse"}
From that I can see that the job is not complete. Then why did the client return before the job was complete?
While building the client, I use the HttpRequestInitializer and in the initialize method I provide the timeout parameters.
override def initialize(request: HttpRequest): Unit = {
  request.setConnectTimeout(...)
  request.setReadTimeout(...)
}
I tried giving high values for the timeout, like 240 seconds, but no luck. The behavior is still the same: it fails intermittently.
Make sure you set the timeout on the BigQuery request body, and not on the HTTP object.
val queryRequest = new QueryRequest().setQuery(query).setTimeoutMs(10000) //10 seconds
The param is timeoutMs. This is documented here: https://cloud.google.com/bigquery/docs/reference/v2/jobs/query
Please also read the docs regarding this field: How long to wait for the query to complete, in milliseconds, before the request times out and returns. Note that this is only a timeout for the request, not the query. If the query takes longer to run than the timeout value, the call returns without any results and with the 'jobComplete' flag set to false. You can call GetQueryResults() to wait for the query to complete and read the results. The default value is 10000 milliseconds (10 seconds).
More about Synchronous queries here
https://cloud.google.com/bigquery/querying-data#syncqueries
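As the docs quoted above say, when jobComplete comes back false you are expected to keep polling (e.g. via GetQueryResults()) until the job finishes. That polling pattern can be sketched with plain Java, stdlib only; checkJob here is a hypothetical stand-in for the real API call, not part of the BigQuery client:

```java
import java.util.Optional;
import java.util.function.Supplier;

public class QueryPoller {
    /**
     * Polls checkJob until it yields a result or timeoutMillis elapses.
     * An empty Optional from checkJob means "jobComplete was false, try again".
     */
    static Optional<String> pollUntilComplete(Supplier<Optional<String>> checkJob,
                                              long timeoutMillis,
                                              long intervalMillis) {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            Optional<String> result = checkJob.get();
            if (result.isPresent()) {
                return result; // job finished, rows are available
            }
            try {
                Thread.sleep(intervalMillis); // back off before asking again
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return Optional.empty();
            }
        }
        return Optional.empty(); // deadline hit without completion
    }

    public static void main(String[] args) {
        // Simulated job that reports "not complete" twice, then completes.
        int[] calls = {0};
        Supplier<Optional<String>> fakeJob = () ->
                ++calls[0] < 3 ? Optional.empty() : Optional.of("42 rows");

        System.out.println(pollUntilComplete(fakeJob, 5_000, 10).orElse("timed out")); // prints "42 rows"
    }
}
```

The key design point matches the answer: timeoutMs bounds only the single HTTP request, so completion must be handled by a loop like this on the caller's side rather than by raising the HTTP read timeout.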