Is there a way to keep track of the score from time to time while the solver is running?
I currently instantiate my solver as follows:
SolverFactory solverFactory = SolverFactory.createFromXmlResource("solver/SolverConfig.xml");
Solver solver = solverFactory.buildSolver();
solver.addEventListener(new SolverEventListener() {
@Override
public void bestSolutionChanged(BestSolutionChangedEvent event) {
logger.info("New best score : " + event.getNewBestScore().toShortString());
}
});
solver.solve(planningSolution);
This way I am able to see the logs every time the best score changes.
However, I would like to view the score after every 100 steps or after every 10 seconds. Is that possible?
If you turn on DEBUG (or TRACE) logging, you'll see it.
If you want to listen to it in Java, that's not supported in the public API, but there's PhaseLifecycleListener in the internal implementation, which has no backward compatibility guarantees...
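For completeness, a minimal sketch of that internal-API approach (class and package names as of OptaPlanner 7.x; being internal, they can change between versions without notice):
import org.optaplanner.core.impl.phase.event.PhaseLifecycleListenerAdapter;
import org.optaplanner.core.impl.phase.scope.AbstractStepScope;
import org.optaplanner.core.impl.solver.DefaultSolver;

((DefaultSolver) solver).addPhaseLifecycleListener(new PhaseLifecycleListenerAdapter() {
    @Override
    public void stepEnded(AbstractStepScope stepScope) {
        // Log the score every 100 steps; note that step indexes restart for each phase.
        if (stepScope.getStepIndex() % 100 == 0) {
            logger.info("Step " + stepScope.getStepIndex() + " score: " + stepScope.getScore());
        }
    }
});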
I implemented two different solutions to discover services on my BLE device. One uses a handler and then returns what .discoverServices() found; the other one is very similar but logs the size of the discovered service list, which is always 0. I tried it with my realme buds 2 as a test device and with some other publicly visible devices. The result is always 0. What can the problem be?
Handler(Looper.getMainLooper()).post {
var temp = bluetoothGatt?.discoverServices()
addGlog("discordservice() returned ${temp.toString()}")
}
addGlog("handler discover service reached an end")
val gattServices: List<BluetoothGattService> = gatt.getServices()
addGlog("Services count: " + gattServices.size)
for (gattService in gattServices) {
val serviceUUID = gattService.uuid.toString()
addGlog("Service uuid $serviceUUID")
}
Edit: addGlog is a simple log function to print results
Answer: The code is not wrong, but it takes some time to discover those services, so I put this code behind a button. This way there are 3-4 seconds between connecting to the device and performing the service discovery operation. So one button performs the connecting operations and another one the service discovery operations. I am sorry if my answer is pretty lame, but I am still a noob on this topic.
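A more robust alternative to the fixed delay is to drive everything from the BluetoothGattCallback: start discovery only once the connection is established, and read the services only when the discovery result is actually ready. A minimal sketch in Java (plain Log calls swapped in for addGlog):
import android.bluetooth.BluetoothGatt;
import android.bluetooth.BluetoothGattCallback;
import android.bluetooth.BluetoothGattService;
import android.bluetooth.BluetoothProfile;
import android.util.Log;

private final BluetoothGattCallback gattCallback = new BluetoothGattCallback() {
    @Override
    public void onConnectionStateChange(BluetoothGatt gatt, int status, int newState) {
        if (newState == BluetoothProfile.STATE_CONNECTED) {
            // Asynchronous; the result arrives in onServicesDiscovered.
            gatt.discoverServices();
        }
    }

    @Override
    public void onServicesDiscovered(BluetoothGatt gatt, int status) {
        if (status == BluetoothGatt.GATT_SUCCESS) {
            for (BluetoothGattService service : gatt.getServices()) {
                Log.i("BLE", "Service uuid " + service.getUuid());
            }
        }
    }
};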
I have a piece of code that is essentially executing the following with Infinispan in embedded mode, using version 13.0.0 of the -core and -clustered-lock modules:
@Inject
lateinit var lockManager: ClusteredLockManager
private fun getLock(lockName: String): ClusteredLock {
lockManager.defineLock(lockName)
return lockManager.get(lockName)
}
fun createSession(sessionId: String) {
tryLockCounter.increment()
logger.debugf("Trying to start session %s. trying to acquire lock", sessionId)
Future.fromCompletionStage(getLock(sessionId).lock()).map {
acquiredLockCounter.increment()
logger.debugf("Starting session %s. Got lock", sessionId)
}.onFailure {
logger.errorf(it, "Failed to start session %s", sessionId)
}
}
I take this piece of code and deploy it to Kubernetes. I then run it in six pods distributed over six nodes in the same region. The code exposes createSession with random GUIDs through an API. This API is called and creates sessions in chunks of 500, using a k8s service in front of the pods, which means the load gets balanced over the pods. I notice that the execution time to acquire a lock grows linearly with the number of sessions. In the beginning it's around 10ms; when there are about 20,000 sessions it takes about 100ms, and the trend continues in a stable fashion.
I then take the same code and run it, but this time with twelve pods on twelve nodes. To my surprise, I see that the performance characteristics are almost identical to when I had six pods. I've been digging into the code but still haven't figured out why this is. I'm wondering if there's a good reason why Infinispan doesn't seem to perform better with more nodes here?
For completeness, the configuration of the locks is as follows:
val global = GlobalConfigurationBuilder.defaultClusteredBuilder()
global.addModule(ClusteredLockManagerConfigurationBuilder::class.java)
.reliability(Reliability.AVAILABLE)
.numOwner(1)
and looking at the code, the clustered locks use DIST_SYNC, which should spread the load of the cache out onto the different nodes.
UPDATE:
The two counters in the code above are simply Micrometer counters. It is through them and Prometheus that I can see how the lock creation starts to slow down.
It's correctly observed that there's one lock created per session id; this is by design and what we'd like. Our use case is that we want to ensure that a session is running in at least one place. Without going too deep into detail, this can be achieved by ensuring that we have at least two pods trying to acquire the same lock. The Infinispan library is great in that it tells us directly when the lock holder dies, without any additional chattiness between pods, which means that we have a "cheap" way of ensuring that execution of the session continues when one pod is removed.
After digging deeper into the code I found the following in CacheNotifierImpl in the core library:
private CompletionStage<Void> doNotifyModified(K key, V value, Metadata metadata, V previousValue,
Metadata previousMetadata, boolean pre, InvocationContext ctx, FlagAffectedCommand command) {
if (clusteringDependentLogic.running().commitType(command, ctx, extractSegment(command, key), false).isLocal()
&& (command == null || !command.hasAnyFlag(FlagBitSets.PUT_FOR_STATE_TRANSFER))) {
EventImpl<K, V> e = EventImpl.createEvent(cache.wired(), CACHE_ENTRY_MODIFIED);
boolean isLocalNodePrimaryOwner = isLocalNodePrimaryOwner(key);
Object batchIdentifier = ctx.isInTxScope() ? null : Thread.currentThread();
try {
AggregateCompletionStage<Void> aggregateCompletionStage = null;
for (CacheEntryListenerInvocation<K, V> listener : cacheEntryModifiedListeners) {
// Need a wrapper per invocation since converter could modify the entry in it
configureEvent(listener, e, key, value, metadata, pre, ctx, command, previousValue, previousMetadata);
aggregateCompletionStage = composeStageIfNeeded(aggregateCompletionStage,
listener.invoke(new EventWrapper<>(key, e), isLocalNodePrimaryOwner));
}
The lock library uses a clustered listener on the entry-modified event, and this one uses a filter to only notify when the key for the lock is modified. It seems to me the core library still has to check this condition on every registered listener, which of course becomes a very big list as the number of sessions grows. I suspect this is the reason, and if it is, it would be really awesome if the core library supported a kind of key filter, so that it could use a hashmap for these listeners instead of going through the whole list of listeners.
I believe you are creating a clustered lock per session id. Is this what you need? What is acquiredLockCounter? We are about to deprecate the "lock" method in favour of "tryLock" with a timeout, since the lock method will block forever if the clustered lock is never acquired. Do you ever unlock the clustered lock in another piece of code? If you could share a complete reproducer of the code, it would be very helpful for us. Thanks!
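For illustration, a minimal sketch (in Java) of the tryLock-with-timeout variant mentioned above, following the shape of the question's createSession; the 30-second timeout is an arbitrary assumption:
import java.util.concurrent.TimeUnit;
import org.infinispan.lock.api.ClusteredLock;

void createSession(String sessionId) {
    ClusteredLock lock = getLock(sessionId);
    // tryLock completes with false instead of blocking forever when the lock
    // cannot be acquired within the timeout.
    lock.tryLock(30, TimeUnit.SECONDS).whenComplete((acquired, throwable) -> {
        if (throwable != null) {
            logger.errorf(throwable, "Failed to start session %s", sessionId);
        } else if (acquired) {
            logger.debugf("Starting session %s. Got lock", sessionId);
        } else {
            logger.debugf("Timed out acquiring lock for session %s", sessionId);
        }
    });
}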
I want to read data from BigQuery periodically in Beam; the test code is below:
pipeline.apply("Generate Sequence",
GenerateSequence.from(0).withRate(1, Duration.standardMinutes(2)))
.apply(Window.into(FixedWindows.of(Duration.standardMinutes(2))))
.apply("Read from BQ", new ReadBQ())
.apply("Convert Row",
MapElements.into(TypeDescriptor.of(MyData.class)).via(MyData::fromTableRow))
.apply("Map TableRow", ParDo.of(new MapTableRowV1()))
;
static class ReadBQ extends PTransform<PCollection<Long>, PCollection<TableRow>> {
@Override
public PCollection<TableRow> expand(PCollection<Long> input) {
BigQueryIO.TypedRead<TableRow> rows = BigQueryIO.readTableRows()
.fromQuery("select * from project.dataset.table limit 10")
.usingStandardSql();
return rows.expand(input.getPipeline().begin());
}
}
static class MapTableRowV1 extends DoFn<MyData, Void> {
@ProcessElement
public void processElement(ProcessContext pc) {
LOG.info("String of mydata is " + pc.element().toString());
}
}
Since BigQueryIO.TypedRead is rooted at PBegin, one trick is done in ReadBQ through rows.expand(input.getPipeline().begin()). However, this job does NOT run every two minutes. How can I read data from BigQuery periodically?
Look at using Looping Timers. That provides the right pattern.
As written, your code would only fire once after the sequence is built. For fixed windows you would need an input value coming into the Window for it to trigger. For example, have the pipeline read a Pub/Sub input and then have an agent push events every 2 minutes into the topic/subscription.
I am going to assume that you are running in streaming mode here; however, another way to do this would be to use a batch job and run it every 2 minutes from Composer, the reason being that if your job is idle for effectively 90 secs (2 min minus processing time), your streaming job is wasting some resources.
One other note: look at thinning down the column selection in your BigQuery SQL (to save time and money), at using a filter on your SQL to pick up a partition or cluster, and at using a timestamp filter to only scan records that are new in the last N. This could give you better control over how you deal with latency and variability at the db level.
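A minimal sketch of the looping-timer shape (the two-minute interval, the key type, and the tick value are assumptions; timers require keyed input, and in a real pipeline the ticks would feed whatever does the BigQuery read):
import org.apache.beam.sdk.state.TimeDomain;
import org.apache.beam.sdk.state.Timer;
import org.apache.beam.sdk.state.TimerSpec;
import org.apache.beam.sdk.state.TimerSpecs;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.values.KV;
import org.joda.time.Duration;

static class LoopingTimerFn extends DoFn<KV<String, Long>, Long> {
    @TimerId("loop")
    private final TimerSpec loopSpec = TimerSpecs.timer(TimeDomain.EVENT_TIME);

    @ProcessElement
    public void processElement(ProcessContext c, @TimerId("loop") Timer loopTimer) {
        // The first element for this key arms the timer and starts the loop.
        loopTimer.set(c.timestamp().plus(Duration.standardMinutes(2)));
        c.output(c.element().getValue());
    }

    @OnTimer("loop")
    public void onLoop(OnTimerContext c, @TimerId("loop") Timer loopTimer) {
        c.output(0L); // emit a tick downstream
        loopTimer.set(c.timestamp().plus(Duration.standardMinutes(2))); // re-arm the loop
    }
}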
As you have mentioned in the question, BigQueryIO read transforms start with PBegin, which puts them at the start of the graph. In order to achieve what you are looking for, you will need to make use of the BigQuery client libraries directly within a DoFn.
For an example of this, have a look at this transform.
Using a normal DoFn for this will be OK for small amounts of data, but for a large amount of data you will want to look at implementing that logic in an SDF (splittable DoFn).
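A minimal sketch of that client-library-in-a-DoFn approach (assumes default credentials and reuses the question's query; a real pipeline would map each row to its own type rather than a String):
import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.FieldValueList;
import com.google.cloud.bigquery.QueryJobConfiguration;
import org.apache.beam.sdk.transforms.DoFn;

static class QueryBQFn extends DoFn<Long, String> {
    private transient BigQuery bigquery;

    @Setup
    public void setup() {
        // One client per DoFn instance; created here because it is not serializable.
        bigquery = BigQueryOptions.getDefaultInstance().getService();
    }

    @ProcessElement
    public void processElement(ProcessContext c) throws InterruptedException {
        QueryJobConfiguration query = QueryJobConfiguration
                .newBuilder("select * from project.dataset.table limit 10")
                .setUseLegacySql(false)
                .build();
        // Runs the query on every incoming tick and emits one element per row.
        for (FieldValueList row : bigquery.query(query).iterateAll()) {
            c.output(row.toString());
        }
    }
}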
I have a simple Flink application, which sums up the events with the same id and timestamp within the last minute:
DataStream<String> input = env
.addSource(consumerProps)
.uid("app");
DataStream<Pixel> pixels = input.map(record -> mapper.readValue(record, Pixel.class));
pixels
.keyBy("id", "timestampRoundedToMinutes")
.timeWindow(Time.minutes(1))
.sum("constant")
.addSink(dynamoDBSink);
env.execute(jobName);
I am trying to test this application with the recommended approach in the documentation. I have also looked at this Stack Overflow question, but adding the sink didn't help.
I do have a @ClassRule as recommended in my test class. The function looks like this:
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.setParallelism(2);
CollectSink.values.clear();
Pixel testPixel1 = Pixel.builder().id(1).timestampRoundedToMinutes("202002261219").constant(1).build();
Pixel testPixel2 = Pixel.builder().id(2).timestampRoundedToMinutes("202002261220").constant(1).build();
Pixel testPixel3 = Pixel.builder().id(1).timestampRoundedToMinutes("202002261219").constant(1).build();
Pixel testPixel4 = Pixel.builder().id(3).timestampRoundedToMinutes("202002261220").constant(1).build();
env.fromElements(testPixel1, testPixel2, testPixel3, testPixel4)
.keyBy("id","timestampRoundedToMinutes")
.timeWindow(Time.minutes(1))
.sum("constant")
.addSink(new CollectSink());
JobExecutionResult result = env.execute("AggregationTest");
assertNotEquals(0, CollectSink.values.size());
CollectSink is copied from the documentation.
What am I doing wrong? Is there also a simple way to test the application with embedded Kafka?
Thanks!
The reason why your test is failing is because the window is never triggered. The job runs to completion before the window can reach the end of its allotted time.
The reason for this has to do with the way you are working with time. By specifying
.keyBy("id","timestampRoundedToMinutes")
you are arranging for all the events for the same id and with timestamps within the same minute to be in the same window. But because you are using processing time windowing (rather than event time windowing), your windows won't close until the time of day when the test is running crosses over the boundary from one minute to the next. With only four events to process, your job is highly unlikely to run long enough for this to happen.
What you should do instead is something more like this: set the time characteristic to event time, and provide a timestamp extractor and watermark assigner. Note that by doing this, there's no need to key by the timestamp rounded to minute boundaries -- that's part of what event time windows do anyway.
public static void main(String[] args) throws Exception {
...
env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);
env.fromElements(testPixel1, testPixel2, testPixel3, testPixel4)
.assignTimestampsAndWatermarks(new TimestampsAndWatermarks())
.keyBy("id")
.timeWindow(Time.minutes(1))
.sum("constant")
.addSink(new CollectSink());
env.execute();
}
private static class TimestampsAndWatermarks extends BoundedOutOfOrdernessTimestampExtractor<Event> {
public TimestampsAndWatermarks() {
super(/* delay to handle out-of-orderness */);
}
@Override
public long extractTimestamp(Event event) {
return event.timestamp;
}
}
See the documentation and the tutorials for more about event time, watermarks, and windowing.
I assume DefaultAsyncJobExecutor is the class which gets picked up by default as the implementation of the AsyncExecutor interface (not sure if this assumption is right or not).
So basically I want to modify the default time-out duration of an asynchronous job. The default time-out duration is 5 minutes, which is the value of two variables, timerLockTimeInMillis and asyncJobLockTimeInMillis, in AbstractAsyncExecutor.java.
I tried to change both values with their respective setter methods, and also tried to directly modify the values in the constructor of my custom implementation, like this:
public class AsyncExecutorConfigImpl extends DefaultAsyncJobExecutor
{
// @Value( "${async.timeout.duration}" )
private int customAsyncJobLockTimeInMillis = 10 * 60 * 1000;
AsyncExecutorConfigImpl()
{
super();
setTimerLockTimeInMillis( this.customAsyncJobLockTimeInMillis );
setAsyncJobLockTimeInMillis( this.customAsyncJobLockTimeInMillis );
super.timerLockTimeInMillis = this.customAsyncJobLockTimeInMillis;
super.asyncJobLockTimeInMillis = this.customAsyncJobLockTimeInMillis;
}
}
But the values remain the same, because the time-out still happens after 5 minutes.
Initialisation is done via an API, like start-new-process-instance; in this API the following code starts the process instance:
-> Start a workflow process instance asynchronously, something like this
(processInstanceName, processInstanceId)
ProcessInstance lProcessInstance = mRuntimeService.createProcessInstanceBuilder()
.processDefinitionId( lProcessDefinition.get().getId() )
.variables( processInstanceRequest.getVariables() )
.name( lProcessInstanceName )
.predefineProcessInstanceId( lProcessInstanceId )
.startAsync();
-> Once this is done, the rest of the workflow involves service tasks, and while one instance is executing, I guess the time-out occurs and the instance gets restarted
-> Since I have a listener configured, I was able to see in the logs that the start-event activity gets restarted every 5 minutes
So, for example, event-1 is the first event, and this event is getting re-started after 5 minutes (the duration is displayed in the console logs)
Not sure what I'm missing at this point; let me know if any other details are required
If the jar file is not under your control, you cannot change the default value of count, because the classes in the jar are compiled. You can only change the value inside of an object, so you can use the super keyword:
class CustomImplementation extends DefaultExecutedClass{
private int custom_count=1234;
CustomImplementation(){
super();
super.count = this.custom_count;
}
}
Otherwise, if you really need to change the original file, you have to extract it from the jar.
When you are using the Flowable Spring Boot starters, the SpringAsyncExecutor is used; this uses the TaskExecutor from Spring. It is provided as a bean. In order to change its values, you can use properties.
e.g.
flowable.process.async.executor.timer-lock-time-in-millis=600000
flowable.process.async.executor.async-job-lock-time-in-millis=600000
Note: Be careful when changing this. If your process start is taking more than 5 minutes, then this means that you have a transaction open for that duration of time.
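If you prefer to set this programmatically rather than through properties, below is a sketch of the usual Spring Boot approach; it assumes the Flowable Spring Boot starter, and the 10-minute value matches what the question was trying to set:
import org.flowable.spring.SpringProcessEngineConfiguration;
import org.flowable.spring.boot.EngineConfigurationConfigurer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class FlowableAsyncConfig {
    @Bean
    public EngineConfigurationConfigurer<SpringProcessEngineConfiguration> asyncLockTimeConfigurer() {
        return engineConfiguration -> {
            // Same 10-minute lock times as the properties above.
            engineConfiguration.setAsyncExecutorTimerLockTimeInMillis(10 * 60 * 1000);
            engineConfiguration.setAsyncExecutorAsyncJobLockTimeInMillis(10 * 60 * 1000);
        };
    }
}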