Scheduling Java methods to run at specific times in OFBiz using jobs

Is it possible to schedule methods to run at specific times in OFBiz, like jobs in databases?
I have been doing some reading on services in OFBiz and came across the JobSandbox entity. OFBiz provides a very helpful GUI for setting up job runs, which I assume uses the JobSandbox entity.
Is there a reference or manual that shows how to set up such a service through code?

Yes, it is very easy to schedule a service through code. Check this small snippet (it assumes you are inside a service implementation, so dctx is the DispatchContext and context is the incoming service context map):
long startTime = new java.util.Date().getTime();
int frequency = RecurrenceRule.DAILY;
int interval = 1;
int count = 20;
try {
    LocalDispatcher dispatcher = dctx.getDispatcher();
    dispatcher.schedule("myService", context, startTime, frequency,
            interval, count);
} catch (GenericServiceException e) {
    // "module" is the conventional OFBiz static field holding the class name
    Debug.logError("Error trying to schedule My Service: "
            + e.getMessage(), module);
}
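For context, a complete service method doing the scheduling could look roughly like this (a minimal sketch; the method and service names are illustrative):
public static Map<String, Object> scheduleMyService(DispatchContext dctx, Map<String, Object> context) {
    long startTime = new java.util.Date().getTime();
    LocalDispatcher dispatcher = dctx.getDispatcher();
    try {
        // Run "myService" daily (interval 1), for 20 occurrences, starting now.
        dispatcher.schedule("myService", context, startTime, RecurrenceRule.DAILY, 1, 20);
    } catch (GenericServiceException e) {
        return ServiceUtil.returnError("Could not schedule myService: " + e.getMessage());
    }
    return ServiceUtil.returnSuccess();
}
The scheduled run is persisted as a JobSandbox record, which is why it also shows up in the GUI you mentioned.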

Related

Testing Flink window

I have a simple Flink application, which sums up the events with the same id and timestamp within the last minute:
DataStream<String> input = env
        .addSource(consumerProps)
        .uid("app");
DataStream<Pixel> pixels = input.map(record -> mapper.readValue(record, Pixel.class));
pixels
        .keyBy("id", "timestampRoundedToMinutes")
        .timeWindow(Time.minutes(1))
        .sum("constant")
        .addSink(dynamoDBSink);
env.execute(jobName);
I am trying to test this application with the approach recommended in the documentation. I have also looked at this Stack Overflow question, but adding the sink didn't help.
I do have a @ClassRule in my test class, as recommended. The function looks like this:
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.setParallelism(2);
CollectSink.values.clear();
Pixel testPixel1 = Pixel.builder().id(1).timestampRoundedToMinutes("202002261219").constant(1).build();
Pixel testPixel2 = Pixel.builder().id(2).timestampRoundedToMinutes("202002261220").constant(1).build();
Pixel testPixel3 = Pixel.builder().id(1).timestampRoundedToMinutes("202002261219").constant(1).build();
Pixel testPixel4 = Pixel.builder().id(3).timestampRoundedToMinutes("202002261220").constant(1).build();
env.fromElements(testPixel1, testPixel2, testPixel3, testPixel4)
        .keyBy("id", "timestampRoundedToMinutes")
        .timeWindow(Time.minutes(1))
        .sum("constant")
        .addSink(new CollectSink());
JobExecutionResult result = env.execute("AggregationTest");
assertNotEquals(0, CollectSink.values.size());
CollectSink is copied from the documentation.
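For reference, the CollectSink pattern from the Flink testing documentation looks roughly like this (a sketch; the Pixel element type is assumed from the code above):
private static class CollectSink implements SinkFunction<Pixel> {
    // static, so the collected values are shared across all parallel sink instances
    public static final List<Pixel> values = Collections.synchronizedList(new ArrayList<>());

    @Override
    public void invoke(Pixel value, Context context) {
        values.add(value);
    }
}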
What am I doing wrong? Is there also a simple way to test the application with embedded Kafka?
Thanks!
The reason why your test is failing is because the window is never triggered. The job runs to completion before the window can reach the end of its allotted time.
The reason for this has to do with the way you are working with time. By specifying
.keyBy("id","timestampRoundedToMinutes")
you are arranging for all the events for the same id and with timestamps within the same minute to be in the same window. But because you are using processing time windowing (rather than event time windowing), your windows won't close until the time of day when the test is running crosses over the boundary from one minute to the next. With only four events to process, your job is highly unlikely to run long enough for this to happen.
What you should do instead is something more like this: set the time characteristic to event time, and provide a timestamp extractor and watermark assigner. Note that by doing this, there's no need to key by the timestamp rounded to minute boundaries -- that's part of what event time windows do anyway.
public static void main(String[] args) throws Exception {
    ...
    env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);
    env.fromElements(testPixel1, testPixel2, testPixel3, testPixel4)
            .assignTimestampsAndWatermarks(new TimestampsAndWatermarks())
            .keyBy("id")
            .timeWindow(Time.minutes(1))
            .sum("constant")
            .addSink(new CollectSink());
    env.execute();
}
private static class TimestampsAndWatermarks extends BoundedOutOfOrdernessTimestampExtractor<Event> {
    public TimestampsAndWatermarks() {
        super(/* delay to handle out-of-orderness */);
    }

    @Override
    public long extractTimestamp(Event event) {
        return event.timestamp;
    }
}
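Note that BoundedOutOfOrdernessTimestampExtractor's constructor takes the maximum expected out-of-orderness as a Time (for example Time.seconds(1)); the generated watermark trails the largest timestamp seen so far by that amount.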
See the documentation and the tutorials for more about event time, watermarks, and windowing.

JMeter - Avoid threads abrupt shutdown

I have a test plan with several Transaction Controllers (which I call UserJourneys), and each one is composed of several samplers (JourneySteps).
The problem I'm facing is that once the test duration is over, JMeter kills all the threads and does not take into consideration whether they are in the middle of a UserJourney (Transaction Controller) or not.
In some of these UJs I do some important work that needs to be done before the user logs in again, otherwise the next iterations (new test run) will fail.
The question is: is there a way to tell JMeter to wait for every thread to reach the end of its flow/UJ/Transaction Controller before killing it?
Thanks in advance!
This is not possible as of JMeter 5.1.1; you can request an enhancement at:
https://jmeter.apache.org/issues.html
The workaround is to add a Flow Control Action as the first child of the Thread Group, with a JSR223 PreProcessor attached to it.
The JSR223 PreProcessor will contain this Groovy code:
import org.apache.jorphan.util.JMeterStopTestException;

long startDate = vars.get("TESTSTART.MS").toLong();
long now = System.currentTimeMillis();
String testDuration = Parameters; // filled from the Parameters field of the PreProcessor
if ((now - startDate) >= testDuration.toLong()) {
    log.info("Test duration " + testDuration + " reached");
    throw new JMeterStopTestException("Test duration " + testDuration + " reached");
} else {
    log.info("Test duration " + testDuration + " not reached yet");
}
In the Parameters field of the JSR223 PreProcessor, pass the duration as ${__P(testDuration)}.
Finally, you can set the testDuration property, in milliseconds, on the command line using:
-JtestDuration=3600000

Optaplanner not using different values for PlanningVariable

I am trying to implement a job shop scheduling application using the Fisher & Thompson mt10 dataset. Basically it consists of:
10 jobs, each having 10 dependent steps
10 machines
each step of a job is assigned to a specific machine
I have implemented an OptaPlanner use case based on the "Task assigning" example. I removed the speed and priority concepts but kept the skill concept, to make jobs run only on machines where they are able to run. I introduced a "predecessor" concept to build the dependencies between jobs/steps.
As there will be gaps in the schedule (which is different from the task assigning example), I removed startTime and endTime and introduced a startTime PlanningVariable, fed by a list of possible start times.
However, I only get two different start times in the schedule; OptaPlanner does not seem to use my value range provider. As a result, hard constraints are violated because the sequence of dependent steps is not kept.
Job:
private JobType jobType;
private Job predecessor;
private Job successor;
private int indexInJobType;
// Planning variables: changes during planning, between score calculations.
@PlanningVariable(valueRangeProviderRefs = {"machineRange", "jobRange"},
        graphType = PlanningVariableGraphType.CHAINED)
private JobOrMachine previousJobOrMachine;

@AnchorShadowVariable(sourceVariableName = "previousJobOrMachine")
private Machine machine;

@PlanningVariable(valueRangeProviderRefs = {"startTimeRange"})
private StartTime startTime = new StartTime(0); // In minutes
My PlanningSolution has a range provider:
@ValueRangeProvider(id = "startTimeRange")
@ProblemFactCollectionProperty
public List<StartTime> getStartTimeList() {
    return startTimeList;
}
I am relatively new to OptaPlanner and might be missing something very basic. Even after extensive reading of the docs and examples, I am struggling to identify what I am doing wrong.
Any idea?
I found a problem with a hard constraint rule related to the planning variable. This question is no longer valid. Thanks.

Dynamically change the periodic interval of celery task at runtime

I have a periodic celery task running once per minute, like so:
# tasks.py
@periodic_task(run_every=(crontab(hour="*", minute="*", day_of_week="*")))
def scraping_task():
    result = pollAPI()
Where the function pollAPI(), as you might have guessed from the name, polls an API. The catch is that the API has a rate limit that is undisclosed, and sometimes gives an error response, if that limit is hit. I'd like to be able to take that response, and if the limit is hit, decrease the periodic task interval dynamically (or even put the task on pause for a while). Is this possible?
I read in the docs about overriding the is_due method of schedules, but I am lost on exactly what to do to get the behaviour I'm looking for. Could anyone help?
You could try using celery.conf.update to update your CELERYBEAT_SCHEDULE.
You can add a model in the database that stores whether the rate limit has been reached. Before doing an API poll, you can check the information in the database; if there is no limit, just send the API request.
The other approach is to use PeriodicTask from django-celery-beat. You can update its interval dynamically. I created an example project and wrote an article showing how to use dynamic periodic tasks in Celery and Django.
The example code that updates the task when the limit is reached:
from django_celery_beat.models import IntervalSchedule, PeriodicTask

def scraping_task(special_object_id, larger_interval=1000):
    try:
        result = pollAPI()
    except Exception:
        # limit reached: switch the periodic task to a larger interval
        special_object = ModelWithTask.objects.get(pk=special_object_id)
        task = PeriodicTask.objects.get(pk=special_object.task.id)
        new_schedule, created = IntervalSchedule.objects.get_or_create(
            every=larger_interval,
            period=IntervalSchedule.SECONDS,
        )
        task.interval = new_schedule
        task.save()
You can pass the parameters to scraping_task when creating the PeriodicTask object. You will need an additional model in the database to have access to the task:
import json

from django.db import models
from django_celery_beat.models import IntervalSchedule, PeriodicTask

class ModelWithTask(models.Model):
    task = models.OneToOneField(
        PeriodicTask, null=True, blank=True, on_delete=models.SET_NULL
    )

# create periodic task
special_object, _ = ModelWithTask.objects.get_or_create()
schedule, created = IntervalSchedule.objects.get_or_create(
    every=10,
    period=IntervalSchedule.SECONDS,
)
task = PeriodicTask.objects.create(
    interval=schedule,
    name="Task 1",
    task="scraping_task",
    kwargs=json.dumps(
        {
            "special_object_id": special_object.id,
        }
    ),
)
special_object.task = task
special_object.save()
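Note that for interval changes to be picked up at runtime, celery beat must run with the django-celery-beat database scheduler, for example: celery -A project beat --scheduler django_celery_beat.schedulers:DatabaseScheduler (where "project" is your project name).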

Tasks in SQL Server and multiple worker role instances

Consider the following table in SQL Server: Tasks (Payload nvarchar, DateToExecute datetime, DateExecuted datetime null).
Now we have two worker processes (two Azure worker role instances in our case). Both of them periodically try to get records where DateExecuted IS NULL AND DateToExecute <= GETDATE(). Each then processes the record and sets (via SQL UPDATE) DateExecuted to the current date.
The problem is that a single task should be processed only once, by a single worker instance.
What's the best way to provide synchronization or locking for implementing such scenario?
The easiest way to do locking over multiple roles/instances in Windows Azure is by using blob leases. Steve Marx created a great class for this called AutoRenewLease (source, NuGet, blog post). If you already have a timer or while loop, you can write code like this:
using (var arl = new AutoRenewLease(leaseBlob))
{
    if (arl.HasLease)
    {
        // Query Tasks table and do work....
    }
    else
    {
        // Other worker is busy....
    }
}
Or you could use the DoEvery method, which allows you to schedule your code every X minutes:
AutoRenewLease.DoEvery(leaseBlob, TimeSpan.FromMinutes(15), () =>
{
    // Query Tasks table and do work....
});
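For the lease to act as a cluster-wide lock, every instance must point leaseBlob at the same blob in the same storage account; whichever instance acquires the lease does the work, and the others skip that cycle and retry on the next one.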