Grails Domain Embedded - grails-orm

I have this domain class under grails-app/domain, com.portal.Schedule, with these properties:
Subject subject
Room room
Day day
Time timeStart
Time timeEnd
static embedded = ['timeStart', 'timeEnd']
The class com.portal.Time is located in src/groovy and has these properties:
Integer hour
Integer minute
public Time(Integer hour, Integer minute) {
super();
this.hour = hour;
this.minute = minute;
}
The problem occurs when I try to add a record in BootStrap.groovy with this syntax:
new Schedule(subject: Subject.get(1), room: Room.get(1), day: Day.MON,
timeStart: new Time(9, 0), timeEnd: new Time(11, 00)).save(failOnError: true)
I get this error message before start-up finishes:
Message: No default constructor for entity: com.portal.Time; nested
exception is org.hibernate.InstantiationException: No default
constructor for entity: com.portal.Time
How can I resolve this so that BootStrap.groovy can create the Schedule instance with those attributes?

Hibernate needs a default (no-argument) constructor to instantiate the embedded com.portal.Time component, and by declaring only the Time(Integer hour, Integer minute) constructor you removed the implicit no-argument one. That's why you're getting that error: add (or generate) a no-argument constructor on Time.

I've searched thoroughly on Google for how to solve this problem.
It seems Groovy has almost the same feature as Python regarding constructors, in other words something like Python's tuples/keyword arguments.
After adding the annotation to the Time class I can now write the constructor in multiple ways.
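Presumably the annotation referred to above is @groovy.transform.TupleConstructor; a minimal sketch of com.portal.Time under that assumption (the generated constructor has default parameter values, so Time(), Time(9) and Time(9, 0) all work, and Hibernate gets the no-argument constructor it needs):
package com.portal

import groovy.transform.TupleConstructor

// Sketch only: @TupleConstructor generates Time(), Time(hour) and Time(hour, minute),
// which satisfies Hibernate's requirement for a default constructor.
@TupleConstructor
class Time {
    Integer hour
    Integer minute
}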


Testing Flink window

I have a simple Flink application, which sums up the events with the same id and timestamp within the last minute:
DataStream<String> input = env
.addSource(consumerProps)
.uid("app");
DataStream<Pixel> pixels = input.map(record -> mapper.readValue(record, Pixel.class));
pixels
.keyBy("id", "timestampRoundedToMinutes")
.timeWindow(Time.minutes(1))
.sum("constant")
.addSink(dynamoDBSink);
env.execute(jobName);
I am trying to test this application with the approach recommended in the documentation. I have also looked at this Stack Overflow question, but adding the sink hasn't helped.
I do have a @ClassRule in my test class, as recommended. The test function looks like this:
StreamExecutionEnvironment env=StreamExecutionEnvironment.getExecutionEnvironment();
env.setParallelism(2);
CollectSink.values.clear();
Pixel testPixel1 = Pixel.builder().id(1).timestampRoundedToMinutes("202002261219").constant(1).build();
Pixel testPixel2 = Pixel.builder().id(2).timestampRoundedToMinutes("202002261220").constant(1).build();
Pixel testPixel3 = Pixel.builder().id(1).timestampRoundedToMinutes("202002261219").constant(1).build();
Pixel testPixel4 = Pixel.builder().id(3).timestampRoundedToMinutes("202002261220").constant(1).build();
env.fromElements(testPixel1, testPixel2, testPixel3, testPixel4)
.keyBy("id","timestampRoundedToMinutes")
.timeWindow(Time.minutes(1))
.sum("constant")
.addSink(new CollectSink());
JobExecutionResult result = env.execute("AggregationTest");
assertNotEquals(0, CollectSink.values.size());
CollectSink is copied from the documentation.
What am I doing wrong? Is there also a simple way to test the application with embedded Kafka?
Thanks!
The reason why your test is failing is because the window is never triggered. The job runs to completion before the window can reach the end of its allotted time.
The reason for this has to do with the way you are working with time. By specifying
.keyBy("id","timestampRoundedToMinutes")
you are arranging for all the events for the same id and with timestamps within the same minute to be in the same window. But because you are using processing time windowing (rather than event time windowing), your windows won't close until the time of day when the test is running crosses over the boundary from one minute to the next. With only four events to process, your job is highly unlikely to run long enough for this to happen.
What you should do instead is something more like this: set the time characteristic to event time, and provide a timestamp extractor and watermark assigner. Note that by doing this, there's no need to key by the timestamp rounded to minute boundaries -- that's part of what event time windows do anyway.
public static void main(String[] args) throws Exception {
...
env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);
env.fromElements(testPixel1, testPixel2, testPixel3, testPixel4)
.assignTimestampsAndWatermarks(new TimestampsAndWatermarks())
.keyBy("id")
.timeWindow(Time.minutes(1))
.sum("constant")
.addSink(new CollectSink());
env.execute();
}
private static class TimestampsAndWatermarks extends BoundedOutOfOrdernessTimestampExtractor<Event> {
public TimestampsAndWatermarks() {
super(/* delay to handle out-of-orderness */);
}
@Override
public long extractTimestamp(Event event) {
return event.timestamp;
}
}
See the documentation and the tutorials for more about event time, watermarks, and windowing.
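For reference, the CollectSink mentioned in the question follows the test-sink pattern from the Flink testing documentation; a rough sketch adapted to the Pixel type from the question might look like this:
// In the test class (imports: org.apache.flink.streaming.api.functions.sink.SinkFunction,
// java.util.ArrayList, java.util.Collections, java.util.List):
private static class CollectSink implements SinkFunction<Pixel> {
    // static, so all parallel sink instances write to the same list
    public static final List<Pixel> values = Collections.synchronizedList(new ArrayList<>());

    @Override
    public void invoke(Pixel value, SinkFunction.Context context) throws Exception {
        values.add(value);
    }
}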

FLOWABLE: How to change 5 minute default interval of Async job

I assume DefaultAsyncJobExecutor is the class that gets picked up by default as the implementation of the AsyncExecutor interface (not sure if this assumption is right or not).
So basically I want to modify the default time-out duration of an asynchronous job. The default time-out duration is 5 minutes, which is the value of two variables, timerLockTimeInMillis and asyncJobLockTimeInMillis, in AbstractAsyncExecutor.java.
I tried to change both values with their respective setter methods, and also tried to directly modify the values in the constructor of my custom implementation, like this:
public class AsyncExecutorConfigImpl extends DefaultAsyncJobExecutor
{
// @Value( "${async.timeout.duration}" )
private int customAsyncJobLockTimeInMillis = 10 * 60 * 1000;
AsyncExecutorConfigImpl()
{
super();
setTimerLockTimeInMillis( this.customAsyncJobLockTimeInMillis );
setAsyncJobLockTimeInMillis( this.customAsyncJobLockTimeInMillis );
super.timerLockTimeInMillis = this.customAsyncJobLockTimeInMillis;
super.asyncJobLockTimeInMillis = this.customAsyncJobLockTimeInMillis;
}
}
But the values remain the same, because the time-out still happens after 5 minutes.
Initialisation is done via an API, like start-new-process-instance; in this API the following code starts the process instance.
-> It starts a workflow process instance asynchronously, something like this (with processInstanceName and processInstanceId):
ProcessInstance lProcessInstance = mRuntimeService.createProcessInstanceBuilder()
.processDefinitionId( lProcessDefinition.get().getId() )
.variables( processInstanceRequest.getVariables() )
.name( lProcessInstanceName )
.predefineProcessInstanceId( lProcessInstanceId )
.startAsync();
-> Once this is done, the rest of the workflow involves service tasks, and while one instance is executing, I guess the time-out occurs and the instance gets restarted.
-> Since I have a listener configured, I was able to see in the logs that the start-event activity gets started again every 5 minutes.
So, for example, event-1 is the first event, and this event is getting re-started after 5 minutes (the duration is displayed in the console logs).
Not sure what I'm missing at this point; let me know if any other details are required.
If the jar file is not under your control, you cannot change the default value of count, because the classes inside the jar are already compiled. You can only change the value on an instance of an object, so you can use the super keyword:
class CustomImplementation extends DefaultExecutedClass{
private int custom_count=1234;
CustomImplementation(){
super();
super.count = this.custom_count;
}
}
Otherwise, if you really need to change the original file, you have to extract it from the jar.
When you are using the Flowable Spring Boot starters, the SpringAsyncExecutor is used; it uses the TaskExecutor from Spring and is provided as a bean. In order to change its values you can use properties, e.g.:
flowable.process.async.executor.timer-lock-time-in-millis=600000
flowable.process.async.executor.async-job-lock-time-in-millis=600000
Note: Be careful when changing this. If your process start takes more than 5 minutes, it means that you have a transaction open for that duration.
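If you prefer to configure this in code rather than via properties, a sketch using an EngineConfigurationConfigurer bean might look like the following (the setAsyncExecutorTimerLockTimeInMillis and setAsyncExecutorAsyncJobLockTimeInMillis setter names are assumptions based on the corresponding configuration properties):
import org.flowable.spring.SpringProcessEngineConfiguration;
import org.flowable.spring.boot.EngineConfigurationConfigurer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class AsyncExecutorTimeoutConfig {

    // Sketch: raise both lock times to 10 minutes before the engine is built.
    @Bean
    public EngineConfigurationConfigurer<SpringProcessEngineConfiguration> asyncTimeoutConfigurer() {
        return engineConfiguration -> {
            engineConfiguration.setAsyncExecutorTimerLockTimeInMillis(10 * 60 * 1000);
            engineConfiguration.setAsyncExecutorAsyncJobLockTimeInMillis(10 * 60 * 1000);
        };
    }
}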

JMeter - Avoid threads abrupt shutdown

I have a testPlan that has several transacion controllers (that I called UserJourneys) and each one is composed by some samplers (JourneySteps).
The problem I'm facing is that once the test duration is over, JMeter kills all the threads and does not take into consideration whether they are in the middle of a UserJourney (transaction controller) or not.
On some of these UJs I do some important stuff that needs to be done before the user logs in again, otherwise the next iterations (new test run) will fail.
The question is: is there a way to tell JMeter that it needs to wait for every thread to reach the end of its flow/UJ/TransactionController before killing it?
Thanks in advance!
This is not possible out of the box as of version 5.1.1, so you should request an enhancement at:
https://jmeter.apache.org/issues.html
In the meantime, a workaround is to add, as the first child of the Thread Group, a Flow Control Action containing a JSR223 PreProcessor:
The JSR223 PreProcessor will contain this groovy code:
import org.apache.jorphan.util.JMeterStopTestException;
long startDate = vars["TESTSTART.MS"].toLong();
long now = System.currentTimeMillis();
String testDuration = Parameters;
if ((now - startDate) >= testDuration.toLong()) {
log.info("Test duration "+testDuration+" reached");
throw new JMeterStopTestException("Test duration " + testDuration + " reached");
} else {
log.info("Test duration "+testDuration+" not reached yet");
}
The PreProcessor's Parameters field should be set to ${__P(testDuration)} so that the duration is read from a JMeter property.
Finally you can set the property testDuration in millis on command line using:
-JtestDuration=3600000
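For example, a non-GUI run with a one-hour duration (the test plan file name is hypothetical):
jmeter -n -t testplan.jmx -JtestDuration=3600000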

Optaplanner not using different values for PlanningVariable

I am trying to implement a job shop scheduling application using the Fisher & Thompson mt10 dataset. Basically it consists of:
10 jobs, each having 10 dependent steps
10 machines
each step of a job is assigned to a specific machine
I have implemented an Optaplanner use case based on the "Taskassigning" example. I removed the speed and priority concepts but kept the skill concept to make jobs run only on machines where they are able to run. I introduced a "predecessor" concept to build the dependencies between jobs/steps.
As there will be gaps in the schedule (which is different from the Taskassigning example), I removed startTime and endTime and introduced a startTime PlanningVariable, fed by a list of possible start times.
However, I only get two different start times in the schedule - Optaplanner does not seem to utilize my value range provider. Therefore, hard constraints are violated because the sequence of dependent steps is not kept.
Job:
private JobType jobType;
private Job predecessor;
private Job successor;
private int indexInJobType;
// Planning variables: changes during planning, between score calculations.
@PlanningVariable(valueRangeProviderRefs = {"machineRange", "jobRange"},
graphType = PlanningVariableGraphType.CHAINED)
private JobOrMachine previousJobOrMachine;
@AnchorShadowVariable(sourceVariableName = "previousJobOrMachine")
private Machine machine;
@PlanningVariable(valueRangeProviderRefs = {"startTimeRange"})
private StartTime startTime=new StartTime(0); // In minutes
My PlanningSolution has a range provider:
@ValueRangeProvider(id = "startTimeRange")
@ProblemFactCollectionProperty
public List<StartTime> getStartTimeList() {
return startTimeList;
}
I am relatively new to Optaplanner and might be missing something very basic. I am struggling to identify what I am doing wrong, even after extensive reading of the docs and examples.
Any idea?
I found a problem with a hard constraint rule related to the planning variable. This question is no longer valid. Thanks.

Full example of Message Broker in Lagom

I'm trying to implement a Message Broker set up with Lagom 1.2.2 and have run into a wall. The documentation has the following example for the service descriptor:
default Descriptor descriptor() {
return named("helloservice").withCalls(...)
// here we declare the topic(s) this service will publish to
.publishing(
topic("greetings", this::greetingsTopic)
)
....;
}
And this example for the implementation:
public Topic<GreetingMessage> greetingsTopic() {
return TopicProducer.singleStreamWithOffset(offset -> {
return persistentEntityRegistry
.eventStream(HelloEventTag.INSTANCE, offset)
.map(this::convertEvent);
});
}
However, there's no example of what the argument type or return type of the convertEvent() function is, and this is where I'm drawing a blank. On the other end, the subscriber to the message broker seems to be consuming GreetingMessage objects, but when I create a convertEvent function that returns GreetingMessage objects, I get a compilation error:
Error:(61, 21) java: method map in class akka.stream.javadsl.Source<Out,Mat> cannot be applied to given types;
required: akka.japi.function.Function<akka.japi.Pair<com.example.GreetingEvent,com.lightbend.lagom.javadsl.persistence.Offset>,T>
found: this::convertEvent
reason: cannot infer type-variable(s) T
(argument mismatch; invalid method reference
incompatible types: akka.japi.Pair<com.example.GreetingEvent,com.lightbend.lagom.javadsl.persistence.Offset> cannot be converted to com.example.GreetingMessage)
Are there any more thorough examples of how to use this? I've already checked the Chirper sample app and it doesn't seem to have an example of this.
Thanks!
The error message you pasted tells you exactly what map expects:
required: akka.japi.function.Function<akka.japi.Pair<com.example.GreetingEvent,com.lightbend.lagom.javadsl.persistence.Offset>,T>
So, you need to pass a function that takes Pair<GreetingEvent, Offset>. What should the function return? Well, update it to take that, and then you'll get the next error, which once again will tell you what it was expecting you to return, and in this instance you'll find it's Pair<GreetingMessage, Offset>.
To explain what these types are - Lagom needs to track which events have been published to Kafka, so that when you restart a service, it doesn't start from the beginning of your event log and republish all the events from the beginning of time again. It does this by using offsets. So the event log produces pairs of events and offsets, and then you need to transform these events into the messages that will be published to Kafka, and when you return the transformed message to Lagom, it needs to be in a pair with the offset that you got from the event log, so that after publishing to Kafka, Lagom can persist the offset and use that as the starting point the next time the service is restarted.
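Putting that together, a convertEvent sketch might look like this (GreetingEvent's accessor and GreetingMessage's constructor are assumptions based on the names in the question; the types akka.japi.Pair and com.lightbend.lagom.javadsl.persistence.Offset come from the error message above):
private Pair<GreetingMessage, Offset> convertEvent(Pair<GreetingEvent, Offset> pair) {
    // Build the Kafka message from the persisted event (getMessage() is a placeholder accessor)...
    GreetingMessage message = new GreetingMessage(pair.first().getMessage());
    // ...and keep the offset so Lagom can record how far it has published.
    return Pair.create(message, pair.second());
}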
A full example can be seen here: https://github.com/lagom/online-auction-java/blob/a32e696/bidding-impl/src/main/java/com/example/auction/bidding/impl/BiddingServiceImpl.java#L91