Flowable: How to change the 5 minute default timeout of an async job

I assume DefaultAsyncJobExecutor is the class that gets picked up by default as the implementation of the AsyncExecutor interface (not sure whether this assumption is right).
I want to modify the default timeout duration of an asynchronous job. The default is 5 minutes, which is the value of two fields, timerLockTimeInMillis and asyncJobLockTimeInMillis, in AbstractAsyncExecutor.java.
I tried to change both values with their respective setter methods, and also tried to modify the values directly in the constructor of my custom implementation, like this:
public class AsyncExecutorConfigImpl extends DefaultAsyncJobExecutor
{
    // @Value( "${async.timeout.duration}" )
    private int customAsyncJobLockTimeInMillis = 10 * 60 * 1000;

    AsyncExecutorConfigImpl()
    {
        super();
        setTimerLockTimeInMillis( this.customAsyncJobLockTimeInMillis );
        setAsyncJobLockTimeInMillis( this.customAsyncJobLockTimeInMillis );
        super.timerLockTimeInMillis = this.customAsyncJobLockTimeInMillis;
        super.asyncJobLockTimeInMillis = this.customAsyncJobLockTimeInMillis;
    }
}
But the values appear to remain the same, because the timeout still happens after 5 minutes.
Initialisation is done via an API such as start-new-process-instance; that API contains the following code to start the process instance.
The workflow process instance is started asynchronously (with a processInstanceName and a processInstanceId), something like this:
ProcessInstance lProcessInstance = mRuntimeService.createProcessInstanceBuilder()
        .processDefinitionId( lProcessDefinition.get().getId() )
        .variables( processInstanceRequest.getVariables() )
        .name( lProcessInstanceName )
        .predefineProcessInstanceId( lProcessInstanceId )
        .startAsync();
Once this is done, the rest of the workflow consists of service tasks. While one instance is executing, I guess the timeout occurs and the instance gets restarted.
Since I have a listener configured, I can see in the logs that the start-event activity gets restarted every 5 minutes. For example, if event-1 is the first event, this event gets restarted after 5 minutes (the duration is displayed in the console logs).
Not sure what I'm missing at this point; let me know if any other details are required.

If the jar file is not under your control you cannot change the default value of the field, because the classes in the jar are already compiled. You can only change the value inside an object, which you can do with the super keyword:
class CustomImplementation extends DefaultExecutedClass {
    private int custom_count = 1234;

    CustomImplementation() {
        super();
        super.count = this.custom_count;
    }
}
Otherwise, if you really need to change the original file, you have to extract it from the jar.

When you are using the Flowable Spring Boot starters, the SpringAsyncExecutor is used, which relies on the TaskExecutor from Spring and is provided as a bean. To change its values you can use properties, e.g.:
flowable.process.async.executor.timer-lock-time-in-millis=600000
flowable.process.async.executor.async-job-lock-time-in-millis=600000
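If you configure the application through application.yml instead, the same two settings (just a restatement of the properties above, relying on Spring Boot's relaxed binding) would look like this:
flowable:
  process:
    async:
      executor:
        timer-lock-time-in-millis: 600000
        async-job-lock-time-in-millis: 600000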
Note: Be careful when changing this. If starting your processes takes more than 5 minutes, it means you have a transaction open for that entire duration.

Related

How to modify a job parameter in Spring Batch?

I've managed to read a job parameter in a Tasklet, but I haven't figured out how to change its value so that the next Tasklet can access the modified value.
val params = JobParametersBuilder()
    .addString("transaction", UUID.randomUUID().toString())
    .addDouble("amount", 0.0)

jobLauncher.run(
    paymentPipelineJob,
    params.toJobParameters()
)
First task:
override fun beforeStep(stepExecution: StepExecution) {
    logger.info("[$javaClass] - task initialized.")
    this.amount = stepExecution.jobParameters.getDouble("amount")
    // Prints 0.0
    logger.info("before step: ${this.amount}")
}

override fun afterStep(stepExecution: StepExecution): ExitStatus? {
    // Change the execution context to pass the value to the next step
    stepExecution.jobExecution.executionContext.put("amount", this.amount!! + 3)
    // Still prints 0.0
    logger.info("after step: ${stepExecution.jobParameters.getDouble("amount")}")
    logger.info("[$javaClass] - task ended.")
    return ExitStatus.COMPLETED
}
How can I modify a job parameter so all the steps can access it?
While the execution context is a mutable object, job parameters are immutable. Therefore, it is not possible to modify them once the execution is launched.
That said, according to the code you shared, you are putting an attribute amount in the job execution context and expecting to see the modified value from the job parameters instance. This is a wrong expectation. The execution context and the job parameters are two distinct objects and are not "inter-connected".
Edit: added a suggestion about how to address the use case.
You can use the execution context to share data between steps. You are already doing this in your example:
stepExecution.jobExecution.executionContext.put("amount", this.amount!! + 3)
Once that is done in a listener after step 1, you can get the value from the execution context in a listener before step 2:
double amount = stepExecution.getJobExecution().getExecutionContext().getDouble("amount");
Please check Passing Data to Future Steps from the reference documentation.
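A minimal sketch (in Java) of what that second-step listener could look like, assuming the same "amount" key as above; the class name is just a placeholder:
public class ReadAmountListener implements StepExecutionListener {

    private double amount;

    @Override
    public void beforeStep(StepExecution stepExecution) {
        // Read the value the previous step stored in the *job* execution context
        this.amount = stepExecution.getJobExecution()
                .getExecutionContext()
                .getDouble("amount");
    }

    @Override
    public ExitStatus afterStep(StepExecution stepExecution) {
        return ExitStatus.COMPLETED;
    }
}
Register such a listener on the second step. Alternatively, write to the step execution context and promote the key to the job execution context with an ExecutionContextPromotionListener, as described in Passing Data to Future Steps.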

Testing Flink window

I have a simple Flink application, which sums up the events with the same id and timestamp within the last minute:
DataStream<String> input = env
        .addSource(consumerProps)
        .uid("app");

DataStream<Pixel> pixels = input.map(record -> mapper.readValue(record, Pixel.class));

pixels
        .keyBy("id", "timestampRoundedToMinutes")
        .timeWindow(Time.minutes(1))
        .sum("constant")
        .addSink(dynamoDBSink);

env.execute(jobName);
I am trying to test this application with the approach recommended in the documentation. I have also looked at this Stack Overflow question, but adding the sink didn't help.
I do have a @ClassRule in my test class, as recommended. The test method looks like this:
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.setParallelism(2);
CollectSink.values.clear();

Pixel testPixel1 = Pixel.builder().id(1).timestampRoundedToMinutes("202002261219").constant(1).build();
Pixel testPixel2 = Pixel.builder().id(2).timestampRoundedToMinutes("202002261220").constant(1).build();
Pixel testPixel3 = Pixel.builder().id(1).timestampRoundedToMinutes("202002261219").constant(1).build();
Pixel testPixel4 = Pixel.builder().id(3).timestampRoundedToMinutes("202002261220").constant(1).build();

env.fromElements(testPixel1, testPixel2, testPixel3, testPixel4)
        .keyBy("id", "timestampRoundedToMinutes")
        .timeWindow(Time.minutes(1))
        .sum("constant")
        .addSink(new CollectSink());

JobExecutionResult result = env.execute("AggregationTest");
assertNotEquals(0, CollectSink.values.size());
CollectSink is copied from the documentation.
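For reference, the @ClassRule and the CollectSink follow the Flink testing documentation; a sketch, adapted here to the Pixel type used above:
@ClassRule
public static MiniClusterWithClientResource flinkCluster =
        new MiniClusterWithClientResource(
                new MiniClusterResourceConfiguration.Builder()
                        .setNumberSlotsPerTaskManager(2)
                        .setNumberTaskManagers(1)
                        .build());

// Test sink that collects results into a static, thread-safe list.
private static class CollectSink implements SinkFunction<Pixel> {

    public static final List<Pixel> values = Collections.synchronizedList(new ArrayList<>());

    @Override
    public void invoke(Pixel value, Context context) {
        values.add(value);
    }
}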
What am I doing wrong? Is there also a simple way to test the application with embedded Kafka?
Thanks!
The reason your test is failing is that the window is never triggered: the job runs to completion before the window can reach the end of its allotted time.
The reason for this has to do with the way you are working with time. By specifying
.keyBy("id","timestampRoundedToMinutes")
you are arranging for all the events for the same id and with timestamps within the same minute to be in the same window. But because you are using processing time windowing (rather than event time windowing), your windows won't close until the time of day when the test is running crosses over the boundary from one minute to the next. With only four events to process, your job is highly unlikely to run long enough for this to happen.
What you should do instead is something more like this: set the time characteristic to event time, and provide a timestamp extractor and watermark assigner. Note that by doing this, there's no need to key by the timestamp rounded to minute boundaries -- that's part of what event time windows do anyway.
public static void main(String[] args) throws Exception {
    ...
    env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);

    env.fromElements(testPixel1, testPixel2, testPixel3, testPixel4)
            .assignTimestampsAndWatermarks(new TimestampsAndWatermarks())
            .keyBy("id")
            .timeWindow(Time.minutes(1))
            .sum("constant")
            .addSink(new CollectSink());

    env.execute();
}

private static class TimestampsAndWatermarks extends BoundedOutOfOrdernessTimestampExtractor<Event> {
    public TimestampsAndWatermarks() {
        super(/* delay to handle out-of-orderness */);
    }

    @Override
    public long extractTimestamp(Event event) {
        return event.timestamp;
    }
}
See the documentation and the tutorials for more about event time, watermarks, and windowing.

JMeter - Avoid threads abrupt shutdown

I have a test plan that has several Transaction Controllers (which I call UserJourneys), and each one is composed of some samplers (JourneySteps).
The problem I'm facing is that once the test duration is over, JMeter kills all the threads regardless of whether they are in the middle of a UserJourney (Transaction Controller) or not.
In some of these UserJourneys I do some important work that needs to finish before the user logs in again, otherwise the next iterations (new test run) will fail.
The question is: is there a way to tell JMeter that it needs to wait for every thread to reach the end of its flow/UserJourney/Transaction Controller before killing it?
Thanks in advance!
This is not possible as of version 5.1.1; you should request an enhancement at:
https://jmeter.apache.org/issues.html
The workaround is to add a Flow Control Action as the first child of the Thread Group, with a JSR223 PreProcessor attached to it.
The JSR223 PreProcessor will contain this Groovy code:
import org.apache.jorphan.util.JMeterStopTestException;

long startDate = vars.get("TESTSTART.MS").toLong();
long now = System.currentTimeMillis();
String testDuration = Parameters;
if ((now - startDate) >= testDuration.toLong()) {
    log.info("Test duration " + testDuration + " reached");
    throw new JMeterStopTestException("Test duration " + testDuration + " reached");
} else {
    log.info("Test duration " + testDuration + " not reached yet");
}
In the PreProcessor, the Parameters field should pass the desired duration, e.g. ${__P(testDuration)}.
Finally, you can set the testDuration property (in milliseconds) on the command line using:
-JtestDuration=3600000
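For example, a one-hour run launched in non-GUI mode (the test plan file name is a placeholder):
jmeter -n -t your_test_plan.jmx -JtestDuration=3600000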

Usage of WorkspaceJob

I have an Eclipse plugin which has some performance issues. Looking into the Progress view, there are sometimes multiple jobs waiting, and from the code, most of its architecture is based on classes which extend WorkspaceJob, mixed with Guava EventBus events. The current solution also involves nested jobs...
I read the documentation and I understand their purpose, but I don't get why I would use a WorkspaceJob when I could run syncExec/asyncExec from methods which get triggered when an event is sent on the bus.
For example, instead of creating 3 jobs which wait for one another, I could post an event which triggers what Job 1 would have executed; when that method finishes, it posts a different event type which triggers a method that does what Job 2 would have done, and so on...
So instead of:
WorkspaceJob Job1 = new WorkspaceJob("Job1");
Job1.schedule();
WorkspaceJob Job2 = new WorkspaceJob("Job2");
Job2.schedule();
WorkspaceJob Job3 = new WorkspaceJob("Job3");
Job3.schedule();
I could use:
@Subscribe
public void replaceJob1(StartJob1Event event) {
    // do what runInWorkspace() of Job1 would have done
    com.something.getStaticEventBus().post(new Job1FinishedEvent());
}

@Subscribe
public void replaceJob2(Job1FinishedEvent event) {
    // do what runInWorkspace() of Job2 would have done
    com.something.getStaticEventBus().post(new Job2FinishedEvent());
}

@Subscribe
public void replaceJob3(Job2FinishedEvent event) {
    // do what runInWorkspace() of Job3 would have done
    com.something.getStaticEventBus().post(new Job3FinishedEvent());
}
I haven't tried it yet, because I've simplified the idea as much as I could and the real problem is more complex than that, but I think the EventBus would win in terms of performance over the WorkspaceJobs.
Can anyone confirm my idea, or tell me why I shouldn't try this (apart from the fact that I must have a good architecture for my events)?
WorkspaceJob delays resource change events until the job finishes. This prevents components listening for resource changes from receiving half-completed changes. This may or may not be important to your application.
I can't comment on the Guava code as I don't know anything about it - but note that if your code is long-running you must make sure it runs in a background thread (which WorkspaceJob does).
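For illustration, a minimal sketch of that pattern (the job name and the work inside runInWorkspace are placeholders): resource changes made inside runInWorkspace are batched, and listeners are only notified once the job completes.
WorkspaceJob job1 = new WorkspaceJob("Job1") {
    @Override
    public IStatus runInWorkspace(IProgressMonitor monitor) throws CoreException {
        // Modify resources here; resource change notifications are deferred
        // and delivered to listeners only after this method returns.
        return Status.OK_STATUS;
    }
};
job1.schedule();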

MQL4: How to detect status during a change of account (completed downloading of historical trades)

In MT4 there is a stage/state after we switch from AccountA to AccountB, when the connection is established and init() and start() are triggered by MT4, but before the "blinnnggg" sound, when all the historical/outstanding trades have been loaded from the server.
Switch account > establish connection > trigger init()/start() events > start downloading outstanding/historical trades > downloading completed (the "bliinng" sound is issued).
I need to know (in MQL4) when all the trades have finished downloading from the trade server, to distinguish an account that is truly empty from one that is still downloading history from the trade server.
Any pointer will be appreciated. I've explored IsTradeAllowed(), IsContextBusy() and IsConnected(). All of these are in a "normal" state and the init() and start() events all fire OK, but I cannot figure out whether the history/outstanding trade list has completed downloading.
UPDATE: The workaround I finally implemented was to use OrdersHistoryTotal(). Apparently this number is ZERO (0) while the order history is downloading, and it is NEVER zero afterwards (due to the initial deposit), so I ended up using this as a "flag".
Observation
As the problem is posted, there seems to be no such "integrated" method in the MT4 Terminal.
IsTradeAllowed() reflects an administrative state of the account/access to the execution of the trading services { IsTradeAllowed | !IsTradeAllowed }
IsConnected() reflects a technical state of the visibility / login credentials / connection used upon an attempt to set up/maintain an online connection between a localhost <-> Server { IsConnected() | !IsConnected() }
init() {...} is a one-stop setup facility that is called once an MT4 programme { ExpertAdvisor | Script | TechnicalIndicator } is launched on a localhost machine. This facility is strongly advised to be non-blocking and non-re-entrant. A change from user account_A to another user account_B is typically ( via the MT4 configuration options ) a reason to stop the execution of a previously loaded MQL4 code ( be it an EA / a Script / a Technical Indicator ).
start() {...} is an event-handler facility that waits endlessly for the next FX-market event ( propagated down the line by the broker's MT4-Server automation ), announced via the established connection down to the MT4-Terminal process running on the localhost machine.
A Workaround Solution
As understood, the problem may be detected and handled indirectly.
While the MT4 platform seems to have no direct method to distinguish between a complete and an incomplete refresh of the list of { current | historical } trades, let me propose a method of indirect detection thereof.
Try to launch a "signal" trade ( a pending order, placed in the PriceDOMAIN well away from the current Ask/Bid levels ).
Once this trade is registered end-to-end ( acknowledged on the server side ), the local side has confirmed the valid state of the db.POOL.
Making this a request/response pattern between the localhost and MT4-Server processes, the localhost int init(){...} / int start(){...} functionality may thus detect the moment when both sides have a synchronised state of the records in db.POOL.