I'm using Ignite.NET 2.7.6. I wrote a very simple implementation of the IAffinityFunction interface:
sealed class AffinityFunction : IAffinityFunction
{
    public int Partitions { get { return 1; } }

    public int GetPartition(object key) { return 0; }

    public void RemoveNode(Guid nodeId) { }

    public IEnumerable<IEnumerable<IClusterNode>> AssignPartitions(AffinityFunctionContext context)
    {
        List<IClusterNode> servers = new List<IClusterNode>();
        foreach (IClusterNode node in context.CurrentTopologySnapshot)
        {
            if (!node.IsClient)
            {
                servers.Add(node);
            }
        }
        m_Partitions = new List<IEnumerable<IClusterNode>>();
        if (servers.Count != 0)
        {
            m_Partitions.Add(servers);
        }
        return m_Partitions;
    }

    private List<IEnumerable<IClusterNode>> m_Partitions;
}
Then I assign it to my cache's CacheConfiguration. I should mention that the cache is in Replicated mode.
First I start the server node, then the client node, and this leads to a server crash with the following error in its log file:
java.lang.IndexOutOfBoundsException: index 0
at java.util.concurrent.atomic.AtomicReferenceArray.checkedByteOffset(Unknown Source)
at java.util.concurrent.atomic.AtomicReferenceArray.get(Unknown Source)
at org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtPartitionTopologyImpl.localPartition0(GridDhtPartitionTopologyImpl.java:911)
at org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtPartitionTopologyImpl.localPartition(GridDhtPartitionTopologyImpl.java:825)
As I understand it, this AffinityFunction should collect all the servers into one partition and return that partition as the one to search in. I also tried this variant of the Partitions property, which derives the count from the computed assignment:
public int Partitions { get { return (m_Partitions != null ? m_Partitions.Count : 0); } }
0 is not a valid value for the Partitions property (it should be positive).
The Partitions property should return a constant value, not one based on any condition; it is evaluated once per cache per node.
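For illustration, a corrected version might look like the sketch below. One assumption on my part: AssignPartitions should always return exactly Partitions entries, whereas the original could return an empty assignment before any server joined, which would not match Partitions == 1.
sealed class AffinityFunction : IAffinityFunction
{
    // Always a positive constant; evaluated once per cache per node.
    public int Partitions { get { return 1; } }

    public int GetPartition(object key) { return 0; }

    public void RemoveNode(Guid nodeId) { }

    public IEnumerable<IEnumerable<IClusterNode>> AssignPartitions(AffinityFunctionContext context)
    {
        List<IClusterNode> servers = new List<IClusterNode>();
        foreach (IClusterNode node in context.CurrentTopologySnapshot)
        {
            if (!node.IsClient)
            {
                servers.Add(node);
            }
        }
        // Exactly one entry, matching Partitions, even when no server is known yet.
        return new List<IEnumerable<IClusterNode>> { servers };
    }
}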
The error message and documentation should be better, I'll file a ticket.
I have this model, which works with GORM 3:
class UserDataPoint implements Serializable {

    User user
    DataPoint dataPoint

    static mapping = {
        id composite: ['user', 'dataPoint']
    }

    static List<Long> getReadDataPointIds(User user, List<Long> locationIds = null) {
        createCriteria().list {
            eq('user', user)
            dataPoint {
                listing {
                    location {
                        inList('id', locationIds)
                    }
                }
            }
            projections {
                property('dataPoint.id')
            }
        } as List
    }
}
but now it throws this exception:
java.lang.IllegalArgumentException: Unable to locate Attribute with the the given
name [dataPoint] on this ManagedType [user.UserDataPoint]
Some debugging helped me understand what is happening: it seems that GORM removes the user and dataPoint properties from the persistent properties because they are part of the composite ID:
public abstract class AbstractPersistentEntity<T extends Entity> implements PersistentEntity {
    .......
    IdentityMapping identifier = mapping != null ? mapping.getIdentifier() : null;
    if (identity == null && identifier != null) {
        final String[] identifierName = identifier.getIdentifierName();
        final MappingContext mappingContext = getMappingContext();
        if (identifierName.length > 1) {
            compositeIdentity = mappingContext.getMappingSyntaxStrategy().getCompositeIdentity(javaClass, mappingContext);
        }
        for (String in : identifierName) {
            final PersistentProperty p = propertiesByName.get(in);
            if (p != null) {
                persistentProperties.remove(p);
            }
            persistentPropertyNames.remove(in);
        }
        disableDefaultId();
    }
    ......
}
So I understand the problem, but I don't know how to fix it.
How do I reference the composite ID and its fields in the projection?
This seems to be fixed in a recent release: https://github.com/grails/gorm-hibernate5/releases/tag/v7.2.1
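If you pull in GORM through Gradle, upgrading might look like the following; the artifact coordinates here are my assumption based on the release page, so check them against your build:
dependencies {
    // Assumed coordinates for GORM for Hibernate 5, release 7.2.1.
    implementation "org.grails:grails-datastore-gorm-hibernate5:7.2.1"
}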
I'm working on a project where we poll files from an SFTP server and stream each one out as an object onto a RabbitMQ queue. When RabbitMQ is down, the flow still polls and deletes the file from the server, so the file is lost if the send to the queue fails while RabbitMQ is down. I'm using ExpressionEvaluatingRequestHandlerAdvice to remove the file after a successful transformation. My code looks like this:
@Bean
public SessionFactory<ChannelSftp.LsEntry> sftpSessionFactory() {
    DefaultSftpSessionFactory factory = new DefaultSftpSessionFactory(true);
    factory.setHost(sftpProperties.getSftpHost());
    factory.setPort(sftpProperties.getSftpPort());
    factory.setUser(sftpProperties.getSftpPathUser());
    factory.setPassword(sftpProperties.getSftpPathPassword());
    factory.setAllowUnknownKeys(true);
    return new CachingSessionFactory<>(factory);
}

@Bean
public SftpRemoteFileTemplate sftpRemoteFileTemplate() {
    return new SftpRemoteFileTemplate(sftpSessionFactory());
}

@Bean
@InboundChannelAdapter(channel = TransformerChannel.TRANSFORMER_OUTPUT, autoStartup = "false",
        poller = @Poller(value = "customPoller"))
public MessageSource<InputStream> sftpMessageSource() {
    SftpStreamingMessageSource messageSource = new SftpStreamingMessageSource(sftpRemoteFileTemplate,
            null);
    messageSource.setRemoteDirectory(sftpProperties.getSftpDirPath());
    messageSource.setFilter(new SftpPersistentAcceptOnceFileListFilter(new SimpleMetadataStore(),
            "streaming"));
    // Note: this second setFilter() call replaces the filter set above;
    // combining the two would need a ChainFileListFilter.
    messageSource.setFilter(new SftpSimplePatternFileListFilter("*.txt"));
    return messageSource;
}

@Bean
@Transformer(inputChannel = TransformerChannel.TRANSFORMER_OUTPUT,
        outputChannel = SFTPOutputChannel.SFTP_OUTPUT,
        adviceChain = "deleteAdvice")
public org.springframework.integration.transformer.Transformer transformer() {
    return new SFTPTransformerService("UTF-8");
}

@Bean
public ExpressionEvaluatingRequestHandlerAdvice deleteAdvice() {
    ExpressionEvaluatingRequestHandlerAdvice advice = new ExpressionEvaluatingRequestHandlerAdvice();
    advice.setOnSuccessExpressionString(
            "@sftpRemoteFileTemplate.remove(headers['file_remoteDirectory'] + headers['file_remoteFile'])");
    advice.setPropagateEvaluationFailures(false);
    return advice;
}
I don't want the files to be removed from (or even polled from) the remote SFTP server when the RabbitMQ server is down. How can I achieve this?
UPDATE
Apologies for not mentioning that I'm using the Spring Cloud Stream Rabbit binder. Here is the transformer service:
public class SFTPTransformerService extends StreamTransformer {

    public SFTPTransformerService(String charset) {
        super(charset);
    }

    @Override
    protected Object doTransform(Message<?> message) throws Exception {
        String fileName = message.getHeaders().get("file_remoteFile", String.class);
        Object fileContents = super.doTransform(message);
        return new customFileDTO(fileName, (String) fileContents);
    }
}
UPDATE-2
I added a TransactionSynchronizationFactory on the customPoller as suggested. Now it doesn't poll the file when the Rabbit server is down, but when the server is back up, it keeps polling the same file over and over again, and I cannot figure out why. I guess I cannot use PollerSpec because I'm on version 4.3.2.
#Bean(name = "customPoller")
public PollerMetadata pollerMetadataDTX(StartStopTrigger startStopTrigger,
CustomTriggerAdvice customTriggerAdvice) {
PollerMetadata pollerMetadata = new PollerMetadata();
pollerMetadata.setAdviceChain(Collections.singletonList(customTriggerAdvice));
pollerMetadata.setTrigger(startStopTrigger);
pollerMetadata.setMaxMessagesPerPoll(Long.valueOf(sftpProperties.getMaxMessagePoll()));
ExpressionEvaluatingTransactionSynchronizationProcessor syncProcessor =
new ExpressionEvaluatingTransactionSynchronizationProcessor();
syncProcessor.setBeanFactory(applicationContext.getAutowireCapableBeanFactory());
syncProcessor.setBeforeCommitChannel(
applicationContext.getBean(TransformerChannel.TRANSFORMER_OUTPUT, MessageChannel.class));
syncProcessor
.setAfterCommitChannel(
applicationContext.getBean(SFTPOutputChannel.SFTP_OUTPUT, MessageChannel.class));
syncProcessor.setAfterCommitExpression(new SpelExpressionParser().parseExpression(
"#sftpRemoteFileTemplate.remove(headers['file_remoteDirectory'] + headers['file_remoteFile'])"));
DefaultTransactionSynchronizationFactory defaultTransactionSynchronizationFactory =
new DefaultTransactionSynchronizationFactory(syncProcessor);
pollerMetadata.setTransactionSynchronizationFactory(defaultTransactionSynchronizationFactory);
return pollerMetadata;
}
I don't know if you need this info, but my CustomTriggerAdvice and StartStopTrigger look like this:
@Component
public class CustomTriggerAdvice extends AbstractMessageSourceAdvice {

    @Autowired private StartStopTrigger startStopTrigger;

    @Override
    public boolean beforeReceive(MessageSource<?> source) {
        return true;
    }

    @Override
    public Message<?> afterReceive(Message<?> result, MessageSource<?> source) {
        if (result == null) {
            if (startStopTrigger.getStart()) {
                startStopTrigger.stop();
            }
        } else {
            if (!startStopTrigger.getStart()) {
                startStopTrigger.stop();
            }
        }
        return result;
    }
}
public class StartStopTrigger implements Trigger {

    private PeriodicTrigger startTrigger;
    private boolean start;

    public StartStopTrigger(PeriodicTrigger startTrigger, boolean start) {
        this.startTrigger = startTrigger;
        this.start = start;
    }

    @Override
    public Date nextExecutionTime(TriggerContext triggerContext) {
        if (!start) {
            return null;
        }
        start = true;
        return startTrigger.nextExecutionTime(triggerContext);
    }

    public void stop() {
        start = false;
    }

    public void start() {
        start = true;
    }

    public boolean getStart() {
        return this.start;
    }
}
Well, it would be great to see your SFTPTransformerService, to determine how an onSuccessExpression can be performed at all when there should be an exception in the case of a down broker.
You should also not only throw an exception and skip the delete, but also consider adding a RequestHandlerRetryAdvice to re-send the file to RabbitMQ: https://docs.spring.io/spring-integration/docs/5.0.6.RELEASE/reference/html/messaging-endpoints-chapter.html#retry-advice
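A minimal sketch of such an advice bean, assuming Spring Retry is on the classpath (the policy values here are illustrative, not a recommendation); it would then be added to the adviceChain of the producing endpoint:
@Bean
public RequestHandlerRetryAdvice retryAdvice() {
    RequestHandlerRetryAdvice advice = new RequestHandlerRetryAdvice();
    RetryTemplate retryTemplate = new RetryTemplate();
    // Retry the failed send a few times with exponential backoff
    // before letting the exception propagate.
    retryTemplate.setRetryPolicy(new SimpleRetryPolicy(3));
    retryTemplate.setBackOffPolicy(new ExponentialBackOffPolicy());
    advice.setRetryTemplate(retryTemplate);
    return advice;
}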
UPDATE
So, since Gary guessed that you use Spring Cloud Stream to send the message to the Rabbit Binder after your internal process (it is a pity you didn't share that information originally), you need to take a look at the Binder's error handling on the matter: https://docs.spring.io/spring-cloud-stream/docs/Elmhurst.RELEASE/reference/htmlsingle/#_retry_with_the_rabbitmq_binder
And it is true that the ExpressionEvaluatingRequestHandlerAdvice applies only to the SFTPTransformerService and nothing more. A downstream error (in the Binder) is not included in that process.
UPDATE 2
Yeah... I think Gary is right: we have no choice but to configure a TransactionSynchronizationFactory at the customPoller level instead of that ExpressionEvaluatingRequestHandlerAdvice.
The DefaultTransactionSynchronizationFactory can be configured with an ExpressionEvaluatingTransactionSynchronizationProcessor, which has a similar goal to the mentioned ExpressionEvaluatingRequestHandlerAdvice, but at the transaction level, which will span your whole process, starting with the SFTP channel adapter and ending at the Rabbit Binder level with the send-to-AMQP attempts.
See Reference Manual for more information: https://docs.spring.io/spring-integration/reference/html/transactions.html#transaction-synchronization.
The point about the ExpressionEvaluatingRequestHandlerAdvice (and any AbstractRequestHandlerAdvice) is that it forms a boundary only around the handleRequestMessage() method, and therefore only around the component on which it is declared.
I wrote a class with a static method that auto-increments the id of a RealmObject by 1.
public class AutoIncrementKey {
    public static int Next(Class<? extends RealmObject> c) {
        Realm realm = Realm.getDefaultInstance();
        Number maxId = realm.where(c).max("id");
        realm.close();
        if (maxId == null) {
            return 0; // no object exists, so return 0
        }
        return maxId.intValue() + 1;
    }
}
However, when I set the default value of a RealmObject's ID like so:
@PrimaryKey private int id = AutoIncrementKey.Next(PresetSelect.class);
It never works! Specifically, the first time it goes to create a new object via realm.createObject(IExtendRealmObject.class), the value is 0, even though AutoIncrementKey.Next(...) returns the id as 1!
So id is never set to 1. It's always 0, and trying to create more objects causes it to throw an error: "index already exists: 0".
What gives?
The AutoIncrementKey.Next() function IS being called. It IS finding the next key to be 1. The returned value simply isn't carried through, though.
Edit:
Now that I've managed to create more than one object in my Realm, I'm finding that setting the id to a default value isn't the only issue.
Setting ANY member of a class extending RealmObject to a default value is IGNORED. What's the deal with that?
That's because instead of
realm.createObject(IExtendRealmObject.class)
you're supposed to use
realm.createObject(IExtendRealmObject.class, primaryKeyValue)
But I think your method
public class AutoIncrementKey {
    public static int Next(Class<? extends RealmObject> c) {
        Realm realm = Realm.getDefaultInstance();
        Number maxId = realm.where(c).max("id");
        realm.close();
        if (maxId == null) {
            return 0; // no object exists, so return 0
        }
        return maxId.intValue() + 1;
    }
}
would be more stable as
public class AutoIncrementKey {
    public static int Next(Realm realm, Class<? extends RealmModel> c) {
        Number maxId = realm.where(c).max("id");
        if (maxId == null) {
            return 0; // no object exists, so return 0
        }
        return maxId.intValue() + 1; // why not long?
    }
}
provided you meet the condition that a write transaction is already in progress whenever you call AutoIncrementKey.Next(realm, Some.class).
Hell, you might even add
public class AutoIncrementKey {
    public static int Next(Realm realm, Class<? extends RealmModel> c) {
        if (!realm.isInTransaction()) {
            throw new IllegalStateException("Realm is not in a transaction.");
        }
        // continue as mentioned
        Number maxId = realm.where(c).max("id");
        if (maxId == null) {
            return 0;
        }
        return maxId.intValue() + 1;
    }
}
It should work well for your needs.
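For completeness, a hypothetical usage sketch (PresetSelect comes from the question; the setName() setter is made up for illustration):
realm.executeTransaction(r -> {
    // Pass the generated primary key to createObject() instead of relying
    // on a field default, which the Realm proxy ignores.
    PresetSelect preset = r.createObject(PresetSelect.class,
            AutoIncrementKey.Next(r, PresetSelect.class));
    preset.setName("default"); // hypothetical field setter
});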
I am using Redis with StackExchange.Redis. I have multiple threads that will at some point access and edit the value of the same key, so I need to synchronize the manipulation of the data.
Looking at the available functions, I see that there are two functions, LockTake and LockRelease. However, these functions take both a key and a value parameter rather than the expected single key to be locked. The IntelliSense documentation and the source on GitHub don't explain how to use the LockTake and LockRelease functions or what to pass in for the key and value parameters.
Q: What is the correct usage of LockTake and LockRelease in StackExchange.Redis?
Pseudocode example of what I'm aiming to do:
// Add items before parallel execution
redis.StringSet("myJSONKey", myJSON);

// Parallel execution
Parallel.For(0, 100, i =>
{
    // Some work here
    // ....

    // Lock
    redis.LockTake("myJSONKey");

    // Manipulate
    var myJSONObject = redis.StringGet("myJSONKey");
    myJSONObject.Total++;
    Console.WriteLine(myJSONObject.Total);
    redis.StringSet("myJSONKey", myNewJSON);

    // Unlock
    redis.LockRelease("myJSONKey");

    // More work here
    // ...
});
There are 3 parts to a lock:
the key (the unique name of the lock in the database)
the value (a caller-defined token which can be used both to indicate who "owns" the lock, and to check that releasing and extending the lock is being done correctly)
the duration (a lock intentionally is a finite duration thing)
If no other value comes to mind, a GUID might make a suitable "value". We tend to use the machine name (or a munged version of the machine name if multiple processes could be competing on the same machine).
Also, note that taking a lock is speculative, not blocking. It is entirely possible that you will fail to obtain the lock, so you may need to test for this and perhaps add some retry logic.
A typical example might be:
RedisValue token = Environment.MachineName;
if (db.LockTake(key, token, duration))
{
    try
    {
        // you have the lock, do work
    }
    finally
    {
        db.LockRelease(key, token);
    }
}
Note that if the work is lengthy (a loop, in particular), you may want to add some occasional LockExtend calls in the middle, again remembering to check for success (in case the lock timed out).
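For instance, a sketch of extending the lock inside a loop (the items list and Process method are hypothetical placeholders for your long-running work):
if (db.LockTake(key, token, duration))
{
    try
    {
        for (int i = 0; i < items.Count; i++)
        {
            Process(items[i]); // hypothetical long-running work

            // Periodically extend the lock, and bail out if it was lost.
            if (i % 10 == 0 && !db.LockExtend(key, token, duration))
                throw new InvalidOperationException("Redis lock lost");
        }
    }
    finally
    {
        db.LockRelease(key, token);
    }
}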
Note also that all individual redis commands are atomic, so you don't need to worry about two discrete operations competing. For more complex multi-operation units, transactions and scripting are options.
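As an illustration of the transaction option, here is a sketch of optimistic concurrency with a condition instead of a lock; the key is taken from the question, and IncrementTotal is a hypothetical helper:
// Commit only if the value is unchanged since we read it; otherwise re-read and retry.
RedisValue current = db.StringGet("myJSONKey");
RedisValue updated = IncrementTotal(current); // hypothetical helper

ITransaction tran = db.CreateTransaction();
tran.AddCondition(Condition.StringEqual("myJSONKey", current));
tran.StringSetAsync("myJSONKey", updated);
bool committed = tran.Execute(); // false means a competitor won the race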
Here is my code for lock -> get -> modify (if required) -> unlock, with comments.
public static T GetCachedAndModifyWithLock<T>(string key, Func<T> retrieveDataFunc, TimeSpan timeExpiration, Func<T, bool> modifyEntityFunc,
    TimeSpan? lockTimeout = null, bool isSlidingExpiration = false) where T : class
{
    int lockCounter = 0; // for logging when there are too many locks per key
    Exception logException = null;
    var cache = Connection.GetDatabase();
    var lockToken = Guid.NewGuid().ToString(); // unique token for this part of the code
    var lockName = key + "_lock"; // unique, key-relative lock name
    T tResult = null;

    while (lockCounter < 20)
    {
        // check for access to the cached object by trying to lock it
        if (!cache.LockTake(lockName, lockToken, lockTimeout ?? TimeSpan.FromSeconds(10)))
        {
            lockCounter++;
            Thread.Sleep(100); // sleep for 100 milliseconds before the next lock attempt; you can tune this
            continue;
        }
        try
        {
            RedisValue result = RedisValue.Null;
            if (isSlidingExpiration)
            {
                // in the sliding-expiration case, get the object together with its expiry time
                var exp = cache.StringGetWithExpiry(key);
                // check the TTL
                if (exp.Expiry.HasValue && exp.Expiry.Value.TotalSeconds >= 0)
                {
                    // read only if not expired
                    result = exp.Value;
                }
            }
            else // in the absolute-expiration case, simply get
            {
                result = cache.StringGet(key);
            }
            // "REDIS_NULL" covers the case where retrieveDataFunc returns null
            // (we cannot store null in redis, but we can store a pre-defined string :) )
            if (result.HasValue && result == "REDIS_NULL") return null;
            // the case where the cache is empty
            if (!result.HasValue)
            {
                // retrieve the data via the caller's function (e.g. from the db)
                tResult = retrieveDataFunc();
                if (tResult != null)
                {
                    // try to modify the entity; if modifyEntityFunc returns true,
                    // the caller wants the modified entity re-saved
                    if (modifyEntityFunc(tResult))
                    {
                        // json serialization
                        var json = JsonConvert.SerializeObject(tResult);
                        cache.StringSet(key, json, timeExpiration);
                    }
                }
                else
                {
                    // save the pre-defined string when the source value is null
                    cache.StringSet(key, "REDIS_NULL", timeExpiration);
                }
            }
            else
            {
                // retrieve from the cache and deserialize to the required object
                tResult = JsonConvert.DeserializeObject<T>(result);
                // try to modify
                if (modifyEntityFunc(tResult))
                {
                    // and save if required
                    var json = JsonConvert.SerializeObject(tResult);
                    cache.StringSet(key, json, timeExpiration);
                }
            }
            // refresh the expiration when the sliding-expiration flag is set
            if (isSlidingExpiration)
                cache.KeyExpire(key, timeExpiration);
        }
        catch (Exception ex)
        {
            logException = ex;
        }
        finally
        {
            cache.LockRelease(lockName, lockToken);
        }
        break;
    }

    if (lockCounter >= 20 || logException != null)
    {
        // log it
    }
    return tResult;
}
and usage:
public class User
{
    public int ViewCount { get; set; }
}

var cachedAndModifiedItem = GetCachedAndModifyWithLock<User>(
    "MyAwesomeKey", // your redis key
    () => // callback to get data from the source when redis's store is empty
    {
        // return from the db or similar
        return new User() { ViewCount = 0 };
    },
    TimeSpan.FromMinutes(10), // object expiration time in Redis
    user => // modify-object callback; return true if you need to save it back to redis
    {
        if (user.ViewCount < 3)
        {
            user.ViewCount++;
            return true; // save it to the cache
        }
        return false; // do not update it in the cache
    },
    TimeSpan.FromSeconds(10), // redis lock timeout; in a race, a competitor holds the lock for up to 10 seconds while the get-from-db/read/modify operations finish
    true // whether the expiration should be sliding
);
That code can be improved (for example, you could use transactions to reduce the number of calls to the cache, and so on), but I hope it will be helpful for you.
I've been dealing with a score corruption error for a few days with no apparent cause. The error appears only in FULL_ASSERT mode and is not related to the constraints defined in the Drools file.
Here is the error:
2014-07-02 14:51:49,037 [SwingWorker-pool-1-thread-4] TRACE Move index (0), score (-4/-2450/-240/-170), accepted (false) for move (EMP4#START => EMP2).
java.util.concurrent.ExecutionException: java.lang.IllegalStateException: Score corruption: the workingScore (-3/-1890/-640/-170) is not the uncorruptedScore (-3/-1890/-640/-250) after completedAction (EMP3#EMP4 => EMP4):
The corrupted scoreDirector has 1 ConstraintMatch(s) which are in excess (and should not be there):
com.abcdl.be.solver/MinimizeTotalTime/level3/[org.drools.core.reteoo.InitialFactImpl#4dde85f0]=-170
The corrupted scoreDirector has 1 ConstraintMatch(s) which are missing:
com.abcdl.be.solver/MinimizeTotalTime/level3/[org.drools.core.reteoo.InitialFactImpl#4dde85f0]=-250
Check your score constraints.
The error appears every time after several steps have completed, for no apparent reason.
I'm developing software to schedule several tasks subject to time and resource constraints.
The whole process is represented by a directed tree diagram in which the nodes of the graph represent the tasks and the edges the dependencies between the tasks.
To do this, the planner changes the parent node of each node until it finds the best solution.
The node is the planning entity and its parent the planning variable:
@PlanningEntity(difficultyComparatorClass = NodeDifficultyComparator.class)
public class Node extends ProcessChain {

    private Node parent; // Planning variable: changes during planning, between score calculations.
    private String delay; // Used to display the delay for nodes of type "And"
    private int id; // Used as an identifier for each node. Different nodes cannot have the same id

    public Node(String name, String type, int time, int resources, String md, int id)
    {
        super(name, "", time, resources, type, md);
        this.id = id;
    }

    public Node()
    {
        super();
        this.delay = "";
    }

    public String getDelay() {
        return delay;
    }

    public void setDelay(String delay) {
        this.delay = delay;
    }

    @PlanningVariable(valueRangeProviderRefs = {"parentRange"}, strengthComparatorClass = ParentStrengthComparator.class, nullable = false)
    public Node getParent() {
        return parent;
    }

    public void setParent(Node parent) {
        this.parent = parent;
    }

    public int getId() {
        return id;
    }

    public void setId(int id) {
        this.id = id;
    }

    /*public String toString()
    {
        if (this.type.equals("AND"))
            return delay;
        if (!this.md.isEmpty())
            return Tools.excerpt(name + " : " + this.md);
        return Tools.excerpt(name);
    }*/

    public String toString()
    {
        if (parent != null)
            return Tools.excerpt(name) + "#" + parent;
        else
            return Tools.excerpt(name);
    }

    public boolean equals(Object o) {
        if (this == o) {
            return true;
        } else if (o instanceof Node) {
            Node other = (Node) o;
            return new EqualsBuilder()
                    .append(name, other.name)
                    .append(id, other.id)
                    .isEquals();
        } else {
            return false;
        }
    }

    public int hashCode() {
        return new HashCodeBuilder()
                .append(name)
                .append(id)
                .toHashCode();
    }

    // ************************************************************************
    // Complex methods
    // ************************************************************************

    public int getStartTime()
    {
        try {
            return Graph.getInstance().getNode2times().get(this).getFirst();
        }
        catch (NullPointerException e)
        {
            System.out.println("getStartTime() is null for " + this);
        }
        return 10;
    }

    public int getEndTime()
    {
        try {
            return Graph.getInstance().getNode2times().get(this).getSecond();
        }
        catch (NullPointerException e)
        {
            System.out.println("getEndTime() is null for " + this);
        }
        return 10;
    }

    @ValueRangeProvider(id = "parentRange")
    public Collection<Node> getPossibleParents()
    {
        Collection<Node> nodes = new ArrayList<Node>(Graph.getInstance().getNodes());
        nodes.remove(this); // We remove this node from the list
        if (Graph.getInstance().getParentsCount(this) > 0)
            nodes.removeAll(Graph.getInstance().getParents(this)); // We remove its parents from the list
        if (Graph.getInstance().getChildCount(this) > 0)
            nodes.removeAll(Graph.getInstance().getChildren(this)); // We remove its children from the list
        if (!nodes.contains(Graph.getInstance().getNt()))
            nodes.add(Graph.getInstance().getNt());
        return nodes;
    }

    /**
     * The normal methods {@link #equals(Object)} and {@link #hashCode()} cannot be used because the rule engine already
     * requires them (for performance in their original state).
     * @see #solutionHashCode()
     */
    public boolean solutionEquals(Object o) {
        if (this == o) {
            return true;
        } else if (o instanceof Node) {
            Node other = (Node) o;
            return new EqualsBuilder()
                    .append(name, other.name)
                    .append(id, other.id)
                    .isEquals();
        } else {
            return false;
        }
    }

    /**
     * The normal methods {@link #equals(Object)} and {@link #hashCode()} cannot be used because the rule engine already
     * requires them (for performance in their original state).
     * @see #solutionEquals(Object)
     */
    public int solutionHashCode() {
        return new HashCodeBuilder()
                .append(name)
                .append(id)
                .toHashCode();
    }
}
Each move must update the graph by removing the previous edge and adding the new edge from the node to its parent, so I'm using a custom change move:
public class ParentChangeMove implements Move {

    private Node node;
    private Node parent;
    private Graph g = Graph.getInstance();

    public ParentChangeMove(Node node, Node parent) {
        this.node = node;
        this.parent = parent;
    }

    public boolean isMoveDoable(ScoreDirector scoreDirector) {
        List<Dependency> dep = new ArrayList<Dependency>(g.getDependencies());
        dep.add(new Dependency(parent.getName(), node.getName()));
        return !ObjectUtils.equals(node.getParent(), parent) && !g.detectCycles(dep) && !g.getParents(node).contains(parent);
    }

    public Move createUndoMove(ScoreDirector scoreDirector) {
        return new ParentChangeMove(node, node.getParent());
    }

    public void doMove(ScoreDirector scoreDirector) {
        scoreDirector.beforeVariableChanged(node, "parent"); // before changes are made
        // The previous edge is removed from the graph
        if (node.getParent() != null)
        {
            Dependency d = new Dependency(node.getParent().getName(), node.getName());
            g.removeEdge(g.getDep2link().get(d));
            g.getDependencies().remove(d);
            g.getDep2link().remove(d);
        }
        node.setParent(parent); // the move
        // The new edge is added to the graph (parent ==> node)
        Link link = new Link();
        Dependency d = new Dependency(parent.getName(), node.getName());
        g.addEdge(link, parent, node);
        g.getDependencies().add(d);
        g.getDep2link().put(d, link);
        g.setStepTimes();
        scoreDirector.afterVariableChanged(node, "parent"); // after changes are made
    }

    public Collection<? extends Object> getPlanningEntities() {
        return Collections.singletonList(node);
    }

    public Collection<? extends Object> getPlanningValues() {
        return Collections.singletonList(parent);
    }

    public boolean equals(Object o) {
        if (this == o) {
            return true;
        } else if (o instanceof ParentChangeMove) {
            ParentChangeMove other = (ParentChangeMove) o;
            return new EqualsBuilder()
                    .append(node, other.node)
                    .append(parent, other.parent)
                    .isEquals();
        } else {
            return false;
        }
    }

    public int hashCode() {
        return new HashCodeBuilder()
                .append(node)
                .append(parent)
                .toHashCode();
    }

    public String toString() {
        return node + " => " + parent;
    }
}
The Graph class defines several methods that the constraints use to calculate the score for each solution, like the following:
rule "MinimizeTotalTime" // Minimize the total process time
when
eval(true)
then
scoreHolder.addSoftConstraintMatch(kcontext, 1, -Graph.getInstance().totalTime());
end
In other environment modes, the error does not appear, but the best score calculated is not equal to the actual score.
I don't have any clue where the problem could come from. Note that I already checked all my equals and hashCode methods.
EDIT: Following ge0ffrey's proposition, I used a collect CE in the "MinimizeTotalTime" rule to check whether the error comes back:
rule "MinimizeTotalTime" // Minimize the total process time
when
ArrayList() from collect(Node())
then
scoreHolder.addSoftConstraintMatch(kcontext, 0, -Graph.getInstance().totalTime());
end
At this point, no error appears and everything seems OK. But when I use "terminate early", I get the following error:
java.util.concurrent.ExecutionException: java.lang.IllegalStateException: Score corruption: the solution's score (-9133) is not the uncorruptedScore (-9765).
Also, I have a rule that doesn't use any method from the Graph class and seems to respect incremental score calculation, but it returns another score corruption error.
The purpose of the rule is to make sure that we don't use more resources than are available:
rule "addMarks" //insert a Mark each time a task starts or ends
when
Node($startTime : getStartTime(), $endTime : getEndTime())
then
insertLogical(new Mark($startTime));
insertLogical(new Mark($endTime));
end
rule "resourcesLimit" // At any time, The number of resources used must not exceed the total number of resources available
when
Mark($startTime: time)
Mark(time > $startTime, $endTime : time)
not Mark(time > $startTime, time < $endTime)
$total : Number(intValue > Global.getInstance().getAvailableResources() ) from
accumulate(Node(getEndTime() >=$endTime, getStartTime()<= $startTime, $res : resources), sum($res))
then
scoreHolder.addHardConstraintMatch(kcontext, 0, (Global.getInstance().getAvailableResources() - $total.intValue()) * ($endTime - $startTime) );
end
Here is the error:
java.util.concurrent.ExecutionException: java.lang.IllegalStateException: Score corruption: the workingScore (-193595) is not the uncorruptedScore (-193574) after completedAction (DWL_CM_XX_101#DWL_PA_XX_180 => DWL_PA_XX_180):
The corrupted scoreDirector has 4 ConstraintMatch(s) which are in excess (and should not be there):
com.abcdl.be.solver/resourcesLimit/level0/[43.0, 2012, 1891]=-2783
com.abcdl.be.solver/resourcesLimit/level0/[45.0, 1870, 1805]=-1625
com.abcdl.be.solver/resourcesLimit/level0/[46.0, 1805, 1774]=-806
com.abcdl.be.solver/resourcesLimit/level0/[45.0, 1774, 1762]=-300
The corrupted scoreDirector has 3 ConstraintMatch(s) which are missing:
com.abcdl.be.solver/resourcesLimit/level0/[43.0, 2012, 1901]=-2553
com.abcdl.be.solver/resourcesLimit/level0/[45.0, 1870, 1762]=-2700
com.abcdl.be.solver/resourcesLimit/level0/[44.0, 1901, 1891]=-240
Check your score constraints.
A score rule whose LHS is just "eval(true)" is inherently broken. Either that constraint is always broken, for the exact same weight, and there really is no reason to evaluate it; or it is sometimes broken (or always broken but with different weights), and then the rule needs to refire accordingly.
Problem: the return value of Graph.getInstance().totalTime() changes as the planning variables change value. But Drools only looks at the LHS as planning variables change, and it sees that nothing in the LHS has changed, so it concludes there is no need to re-evaluate that score rule when the planning variables change. Note: this is called incremental score calculation (see the docs), which is a huge performance speedup.
Subproblem: the method Graph.getInstance().totalTime() is inherently not incremental.
Fix: translate that totalTime() function into a DRL calculation based on Node selections. You'll probably need to use accumulate. If that's too hard (because it's a complex calculation of the critical path or so), try it anyway (for incremental score calculation's sake), or try an LHS that does a collect over all Nodes (which is like eval(true), but it will be refired every time).
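As a sketch of the accumulate direction, assuming totalTime() boils down to the makespan, i.e. the maximum end time over all nodes (if your totalTime() computes something else, adjust accordingly):
rule "MinimizeTotalTime" // Minimize the total process time
    when
        // Binds the max end time; re-fires whenever any Node changes, unlike eval(true).
        accumulate( Node( $end : getEndTime() ); $makespan : max($end) )
    then
        scoreHolder.addSoftConstraintMatch(kcontext, 1, -((Number) $makespan).intValue());
end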