OptaPlanner shadow variable corruption check mechanism

I'm getting a score corruption exception in the construction heuristic phase when running with FULL_ASSERT:
java.lang.IllegalStateException: VariableListener corruption: the entity (Task{6661-30})'s shadow variable (Task.plannedDateTime)'s corrupted value (null) changed to uncorrupted value (2018-06-04T07:00) after all VariableListeners were triggered without changes to the genuine variables. Maybe the VariableListener class (VrpTaskStartTimeListener) for that shadow variable (Task.plannedDateTime) forgot to update it when one of its sources changed after completedAction (Task{6661-30} {Shift{Tech1:2018-06-04} -> Shift{Tech1:2018-06-04}}).
    at org.optaplanner.core.impl.score.director.AbstractScoreDirector.assertShadowVariablesAreNotStale(AbstractScoreDirector.java:462)
    at org.optaplanner.core.impl.solver.scope.DefaultSolverScope.assertShadowVariablesAreNotStale(DefaultSolverScope.java:140)
    at org.optaplanner.core.impl.phase.scope.AbstractPhaseScope.assertShadowVariablesAreNotStale(AbstractPhaseScope.java:171)
    at org.optaplanner.core.impl.phase.AbstractPhase.predictWorkingStepScore(AbstractPhase.java:169)
    at org.optaplanner.core.impl.constructionheuristic.DefaultConstructionHeuristicPhase.doStep(DefaultConstructionHeuristicPhase.java:108)
    at org.optaplanner.core.impl.constructionheuristic.DefaultConstructionHeuristicPhase.solve(DefaultConstructionHeuristicPhase.java:95)
    at org.optaplanner.core.impl.solver.AbstractSolver.runPhases(AbstractSolver.java:87)
    at org.optaplanner.core.impl.solver.DefaultSolver.solve(DefaultSolver.java:173)
    at ...
Now, looking at DefaultConstructionHeuristicPhase.doStep, it does:
private void doStep(ConstructionHeuristicStepScope<Solution_> stepScope) {
    Move<Solution_> nextStep = stepScope.getStep();
    nextStep.doMove(stepScope.getScoreDirector()); // Step 1
    predictWorkingStepScore(stepScope, nextStep);
    ...
}
predictWorkingStepScore() calls AbstractScoreDirector.assertShadowVariablesAreNotStale(), which is:
public void assertShadowVariablesAreNotStale(Score expectedWorkingScore, Object completedAction) {
    SolutionDescriptor<Solution_> solutionDescriptor = getSolutionDescriptor();
    // Step 2
    Map<Object, Map<ShadowVariableDescriptor, Object>> entityToShadowVariableValuesMap = new IdentityHashMap<>();
    ...
        entityToShadowVariableValuesMap.put(entity, shadowVariableValuesMap);
    }
    // Step 3
    variableListenerSupport.triggerAllVariableListeners();
    for (Iterator<Object> it = solutionDescriptor.extractAllEntitiesIterator(workingSolution); it.hasNext();) {
        Object entity = it.next();
        EntityDescriptor<Solution_> entityDescriptor
                = solutionDescriptor.findEntityDescriptorOrFail(entity.getClass());
        Collection<ShadowVariableDescriptor<Solution_>> shadowVariableDescriptors = entityDescriptor.getShadowVariableDescriptors();
        Map<ShadowVariableDescriptor, Object> shadowVariableValuesMap = entityToShadowVariableValuesMap.get(entity);
        for (ShadowVariableDescriptor shadowVariableDescriptor : shadowVariableDescriptors) {
            Object newValue = shadowVariableDescriptor.getValue(entity);
            Object originalValue = shadowVariableValuesMap.get(shadowVariableDescriptor);
            // Step 4
            if (!Objects.equals(originalValue, newValue)) {
                throw new IllegalStateException(VariableListener.class.getSimpleName() + " corruption:" /* ... */);
            }
        }
    }
}
Here's my understanding of each step:
Step 1: execute the step's move (this also fires the shadow variable listeners).
Step 2: snapshot the current shadow variable values of all entities (here the shadow variables won't have valid values yet).
Step 3: trigger all shadow variable listeners (now the shadows will have the right values).
Step 4: read the new values and compare them with the snapshot from Step 2.
Now, the problem: for a custom listener on a genuine variable, the trigger order is:
Inverse relation shadow variable listener
Custom listener
Anchor shadow variable listener
What can I do to change this order so that the custom listener executes last?

Configure the sources attribute of @CustomShadowVariable correctly.
There is this guarantee: a shadow variable's listener is only triggered after the listeners of the variables it declares as sources have been triggered. So declare the other shadow variables (the anchor shadow variable, and the inverse relation shadow variable if your listener reads it) as sources of your custom shadow variable, and the custom listener will execute last.
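A minimal sketch of that configuration (assuming a chained model where Shift is the anchor and "previousTaskOrShift" is the genuine planning variable; those names are assumptions, not from the original post):
import java.time.LocalDateTime;
import org.optaplanner.core.api.domain.entity.PlanningEntity;
import org.optaplanner.core.api.domain.variable.AnchorShadowVariable;
import org.optaplanner.core.api.domain.variable.CustomShadowVariable;
import org.optaplanner.core.api.domain.variable.PlanningVariableReference;

@PlanningEntity
public class Task {
    // Anchor shadow variable, updated from the genuine chained variable
    @AnchorShadowVariable(sourceVariableName = "previousTaskOrShift")
    private Shift shift;
    // Declaring "shift" as a source guarantees that VrpTaskStartTimeListener
    // only fires after the anchor shadow variable has already been updated.
    @CustomShadowVariable(variableListenerClass = VrpTaskStartTimeListener.class,
            sources = {@PlanningVariableReference(variableName = "shift")})
    private LocalDateTime plannedDateTime;
    // genuine variable, getters and setters omitted
}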


Does this saving/loading pattern have a name?

There's a variable-persistence concept I have implemented multiple times:
// Standard initialization
boolean save = true;
Map<String, Object> dataHolder;

// Variables to persist
int number = 10;
String text = "I'm saved";

// Use the variables in various ways in the project
void useVariables() { ... number ... text ... }

// Save the variables into a data structure, for example to write them to a file
public Map<String, Object> getVariables()
{
    Map<String, Object> data = new LinkedHashMap<>();
    persist(data);
    return data;
}

// Load the variables from the data structure
public void setVariables(Map<String, Object> data)
{
    persist(data);
}

void persist(Map<String, Object> data)
{
    // If the given data structure is empty, it means data should be saved
    save = data.isEmpty();
    dataHolder = data;
    number = handleVariable("theNumber", number);
    text = handleVariable("theText", text);
    ...
}

// Generic, so the same call site can both save and restore a typed variable
@SuppressWarnings("unchecked")
private <T> T handleVariable(String name, T value)
{
    if (save) // If currently saving
    {
        dataHolder.put(name, value); // Just add to the data structure
        return value; // Return the given variable (no change)
    }
    return (T) dataHolder.get(name); // If loading: read and return from the data structure
}
The main benefit of this principle is that there is only a single place where you have to mention new variables you add during development, and it's one simple line per variable.
Of course, you can move the handleVariable() function to a different class which also contains the "save" and "dataHolder" variables, so they won't be in the main application.
Additionally, you could pass meta-information for each variable (as required for persisting the data structure to a file or similar) by saving a custom class which contains this information plus the variable, instead of the object itself.
Performance could be improved by keeping track of the order (in another data structure, the first time persist() runs) and using a "dataHolder" based on an array instead of a search-based map (i.e. using an index instead of a name string).
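For illustration, a minimal usage sketch of the save/load round trip (the Settings wrapper class is hypothetical and assumed to contain the fields and methods above):
Settings settings = new Settings();
// Save: getVariables() passes an empty map, so persist() writes every field into it
Map<String, Object> snapshot = settings.getVariables();
settings.number = 42; // state changes while the program runs
// Load: the filled map makes persist() read the values back into the fields
settings.setVariables(snapshot);
// settings.number is now 10 again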
However, I now have to document this for the first time, and so I wondered whether this function-reuse principle has a name.
Does someone recognize this idea?
Thank you very much!

OptaPlanner - The entity was never added to this ScoreDirector error

I am implementing an algorithm similar to the NurseRoster example in OptaPlanner. I need to implement a rule in Drools that checks that an Employee cannot work more days than the number of days in his contract. Since I couldn't figure out how to express this in Drools, I decided to write it as a method in a class and then call it from Drools to check whether the constraint has been broken.

Since I needed a List of ShiftAssignments in the Employee class, I used an @InverseRelationShadowVariable that updates that list automatically when an Employee gets assigned to a Shift. Since my Employee now has to be a @PlanningEntity, the error "The entity was never added to this ScoreDirector" appeared. I believe the error is caused by my ShiftAssignment entity, which has a @ValueRangeProvider of the employees that can work in that Shift. I think ScoreDirector.beforeEntityAdded and ScoreDirector.afterEntityAdded were never called, hence the error. For some reason, when I removed that range provider from ShiftAssignment and put it on NurseRoster, which is the @PlanningSolution, it worked.
Here is the code:
Employee:
@InverseRelationShadowVariable(sourceVariableName = "employee")
public List<ShiftAssignment> getEmployeeAssignedToShiftAssignments() {
    return employeeAssignedToShiftAssignments;
}
ShiftAssignment:
@PlanningVariable(valueRangeProviderRefs = {"employeeRange"},
        strengthComparatorClass = EmployeeStrengthComparator.class, nullable = true)
public Employee getEmployee() {
    return employee;
}

// the value range for this planning entity
@ValueRangeProvider(id = "employeeRange")
public List<Employee> getPossibleEmployees() {
    return getShift().getEmployeesThatCanWorkThisShift();
}
NurseRoster:
@ValueRangeProvider(id = "employeeRange")
@PlanningEntityCollectionProperty
public List<Employee> getEmployeeList() {
    return employeeList;
}
And this is the method I use to update that employeesThatCanWorkThisShift list:
public static void checkIfAnEmployeeCanBelongInGivenShiftAssignmentValueRange(NurseRoster nurseRoster) {
    List<Shift> shiftList = nurseRoster.getShiftList();
    List<Employee> employeeList = nurseRoster.getEmployeeList();
    for (Shift shift : shiftList) {
        List<Employee> employeesThatCanWorkThisShift = new ArrayList<>();
        String shiftDate = shift.getShiftDate().getDateString();
        ShiftTypeDefinition shiftTypeDefinitionForShift = shift.getShiftType().getShiftTypeDefinition();
        for (Employee employee : employeeList) {
            AgentDailySettings agentDailySetting = SearchThroughSolution.findAgentDailySetting(employee, shiftDate);
            List<ShiftTypeDefinition> shiftTypeDefinitions = agentDailySetting.getShiftTypeDefinitions();
            if (shiftTypeDefinitions.contains(shiftTypeDefinitionForShift)) {
                employeesThatCanWorkThisShift.add(employee);
            }
        }
        shift.setEmployeesThatCanWorkThisShift(employeesThatCanWorkThisShift);
    }
}
And the rule that I use:
rule "maxDaysInPeriod"
when
$shiftAssignment : ShiftAssignment(employee != null)
then
int differentDaysInPeriod = MethodsUsedInScoreCalculation.employeeMaxDaysPerPeriod($shiftAssignment.getEmployee());
int maxDaysInPeriod = $shiftAssignment.getEmployee().getAgentPeriodSettings().getMaxDaysInPeriod();
if(differentDaysInPeriod > maxDaysInPeriod)
{
scoreHolder.addHardConstraintMatch(kcontext, differentDaysInPeriod - maxDaysInPeriod);
}
end
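The helper employeeMaxDaysPerPeriod is not shown in the post; a plausible sketch (purely illustrative, assuming it counts the distinct days on which the employee is assigned, via the inverse relation list) could look like this:
import java.util.HashSet;
import java.util.Set;

public static int employeeMaxDaysPerPeriod(Employee employee) {
    // Collect the distinct shift dates of this employee's assignments
    Set<String> distinctDays = new HashSet<>();
    for (ShiftAssignment assignment : employee.getEmployeeAssignedToShiftAssignments()) {
        distinctDays.add(assignment.getShift().getShiftDate().getDateString());
    }
    return distinctDays.size();
}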
How can I fix this error?
This definitely has something to do with the solution cloning that happens when a "new best solution" is created.
I encountered the same error when I implemented custom solution cloning. In my project I have multiple planning entity classes, and all of them have references to each other (either a single value or a List). So when solution cloning happens, the references need to be updated so they point to the cloned values. This is something the default cloning process does without a problem, leaving the solution in a consistent state. It even updates the Lists of planning entity instances in the parent planning entities correctly (covered by the method "cloneCollectionsElementIfNeeded" from the class "FieldAccessingSolutionCloner" in the OptaPlanner core).
Just a demonstration of what I have when it comes to the planning entity classes:
@PlanningEntity
public class ParentPlanningEntityClass {
    List<ChildPlanningEntityClass> childPlanningEntityClassList;
}

@PlanningEntity
public class ChildPlanningEntityClass {
    ParentPlanningEntityClass parentPlanningEntityClass;
}
At first I did not update any of the references and got the error even for "ChildPlanningEntityClass". Then I wrote the code that updates the references. The planning entity instances of "ChildPlanningEntityClass" were fine at that point, because they pointed to the cloned object. What I did wrong in the "ParentPlanningEntityClass" case was that I did not create the "childPlanningEntityClassList" list from scratch with "new ArrayList<>()"; instead I just updated the elements of the existing list (using the "set" method) to point at the cloned instances of "ChildPlanningEntityClass". When I created a "new ArrayList<>()", filled its elements to point to the cloned objects and set it as "childPlanningEntityClassList", everything was consistent (tested with FULL_ASSERT).
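A minimal sketch of that fix (cloneChild() is a hypothetical helper standing in for whatever clones a single child entity):
import java.util.ArrayList;
import java.util.List;

ParentPlanningEntityClass cloneParent(ParentPlanningEntityClass original) {
    ParentPlanningEntityClass parentClone = new ParentPlanningEntityClass();
    // Build a brand-new list for the clone instead of mutating the original list in place
    List<ChildPlanningEntityClass> clonedChildren = new ArrayList<>(original.childPlanningEntityClassList.size());
    for (ChildPlanningEntityClass child : original.childPlanningEntityClassList) {
        ChildPlanningEntityClass childClone = cloneChild(child); // hypothetical helper
        childClone.parentPlanningEntityClass = parentClone; // re-point the back-reference to the clone
        clonedChildren.add(childClone);
    }
    parentClone.childPlanningEntityClassList = clonedChildren; // fresh list, never shared with the working solution
    return parentClone;
}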
So connecting this to my issue: maybe the list "employeeAssignedToShiftAssignments" is not created from scratch with "new ArrayList<>()", and elements instead just get added to or removed from the existing list. What could happen here (if the list is not created from scratch) is that both the working solution and the new best solution (the clone) point to the same list; when the working solution then continues to change this list, it corrupts the best solution.

An NHibernate audit trail that doesn't cause "collection was not processed by flush" errors

Ayende has an article about how to implement a simple audit trail for NHibernate (here) using event handlers.
Unfortunately, as can be seen in the comments, his implementation causes the following exception to be thrown: collection xxx was not processed by flush()
The problem appears to be the implicit call to ToString on the dirty properties, which can cause trouble if the dirty property is also a mapped entity.
I have tried my hardest to build a working implementation but with no luck.
Does anyone know of a working solution?
I was able to solve the same problem using the following workaround: set the processed flag to true on all collections in the current persistence context within the listener:
public void OnPostUpdate(PostUpdateEvent postEvent)
{
    if (IsAuditable(postEvent.Entity))
    {
        // ... application-specific code omitted ...
        foreach (var collection in postEvent.Session.PersistenceContext.CollectionEntries.Values)
        {
            var collectionEntry = collection as CollectionEntry;
            collectionEntry.IsProcessed = true;
        }
        //var session = postEvent.Session.GetSession(EntityMode.Poco);
        //session.Save(auditTrailEntry);
        //session.Flush();
    }
}
Hope this helps.
The fix should be the following. Create a new event listener class and derive it from NHibernate.Event.Default.DefaultFlushEventListener:
[Serializable]
public class FixedDefaultFlushEventListener : DefaultFlushEventListener
{
    private static readonly log4net.ILog log = log4net.LogManager.GetLogger(System.Reflection.MethodBase.GetCurrentMethod().DeclaringType);

    protected override void PerformExecutions(IEventSource session)
    {
        if (log.IsDebugEnabled)
        {
            log.Debug("executing flush");
        }
        try
        {
            session.ConnectionManager.FlushBeginning();
            session.PersistenceContext.Flushing = true;
            session.ActionQueue.PrepareActions();
            session.ActionQueue.ExecuteActions();
        }
        catch (HibernateException exception)
        {
            if (log.IsErrorEnabled)
            {
                log.Error("Could not synchronize database state with session", exception);
            }
            throw;
        }
        finally
        {
            session.PersistenceContext.Flushing = false;
            session.ConnectionManager.FlushEnding();
        }
    }
}
Register it during NHibernate configuration:
cfg.EventListeners.FlushEventListeners = new IFlushEventListener[] { new FixedDefaultFlushEventListener() };
You can read more about this bug in Hibernate JIRA:
https://hibernate.onjira.com/browse/HHH-2763
The next release of NHibernate should include that fix as well.
This is not easy at all. I wrote something like this, but it is very specific to our needs and not trivial.
Some additional hints:
You can test if references are loaded using
NHibernateUtil.IsInitialized(entity)
or
NHibernateUtil.IsPropertyInitialized(entity, propertyName)
You can cast collections to IPersistentCollection. I implemented an IInterceptor where I get the NHibernate type of each property; I don't know where you can get this when using events:
if (nhtype.IsCollectionType)
{
    var collection = previousValue as NHibernate.Collection.IPersistentCollection;
    if (collection != null)
    {
        // just skip uninitialized collections
        if (!collection.WasInitialized)
        {
            // skip
        }
        else
        {
            // read the collection's previous values
            previousValue = collection.StoredSnapshot;
        }
    }
}
When you get the update event from NHibernate, the instance is initialized, so you can safely access properties of primitive types. When you want to use ToString, make sure that your ToString implementation doesn't access any referenced entities or any collections.
You may use NHibernate metadata to find out whether a type is mapped as an entity or not. This could be useful for navigating your object model. When you reference another entity, you will get additional update events on it when it changes.
I was able to determine that this error is thrown when application code loads a lazy property where the entity has a collection.
My first attempt involved watching for new CollectionEntries (which I never want to process, as there shouldn't actually be any changes) and then marking them with IsProcessed = true so they wouldn't cause problems.
var collections = args.Session.PersistenceContext.CollectionEntries;
var collectionKeys = args.Session.PersistenceContext.CollectionEntries.Keys;
var roundCollectionKeys = collectionKeys.Cast<object>().ToList();
var collectionValuesCount = collectionKeys.Count;

// Application code that loads a lazy property where the entity has a collection

var postCollectionKeys = collectionKeys.Cast<object>().ToList();
var newLength = postCollectionKeys.Count;
if (newLength != collectionValuesCount)
{
    foreach (var newKey in postCollectionKeys.Except(roundCollectionKeys))
    {
        var collectionEntry = (CollectionEntry)collections[newKey];
        collectionEntry.IsProcessed = true;
    }
}
However, this didn't entirely solve the issue; in some cases I'd still get the exception.
When OnPostUpdate is called, the values in the CollectionEntries dictionary should all already be set to IsProcessed = true. So I decided to add an extra check to see whether the collections not yet processed matched what I expected.
var valuesNotProcessed = collections.Values.Cast<CollectionEntry>().Where(x => !x.IsProcessed).ToList();
if (valuesNotProcessed.Any())
{
    // Assert: valuesNotProcessed.Count == (newLength - collectionValuesCount)
}
In the cases that my first attempt fixed, these numbers would match exactly. However, in the cases where it didn't work, there were extra items already in the dictionary. In my case I could be sure these extra items also wouldn't result in updates, so I could just set IsProcessed = true for all the valuesNotProcessed.

Resolving HttpRequestScoped Instances outside of a HttpRequest in Autofac

Suppose I have a dependency that is registered as HttpRequestScoped so there is only one instance per request. How could I resolve a dependency of the same type outside of an HttpRequest?
For example:
// Global.asax.cs registration
builder.Register(c => new MyDataContext(connString)).As<IDatabase>().HttpRequestScoped();
_containerProvider = new ContainerProvider(builder.Build());

// This event handler gets fired outside of a request,
// when a cached item is removed from the cache.
public void CacheItemRemoved(string k, object v, CacheItemRemovedReason r)
{
    // I'm trying to resolve like so, but this doesn't work...
    var dataContext = _containerProvider.ApplicationContainer.Resolve<IDatabase>();
    // Do stuff with the data context.
}
The above code throws a DependencyResolutionException when it executes the CacheItemRemoved handler:
No scope matching the expression 'value(Autofac.Builder.RegistrationBuilder`3+<>c__DisplayClass0[MyApp.Core.Data.MyDataContext,Autofac.Builder.SimpleActivatorData,Autofac.Builder.SingleRegistrationStyle]).lifetimeScopeTag.Equals(scope.Tag)' is visible from the scope in which the instance was requested.
InstancePerLifetimeScope(), rather than HttpRequestScoped(), will give the result you need.
There is a caveat though - if IDatabase requires disposal, or depends on something that requires disposal, this won't happen if you resolve it from the ApplicationContainer. Better to do:
using (var cacheRemovalScope = _containerProvider.ApplicationContainer.BeginLifetimeScope())
{
    var dataContext = cacheRemovalScope.Resolve<IDatabase>();
    // Do what y' gotta do...
}

LINQ SQL Attach, Update Check set to Never, but still Concurrency conflicts

In the DBML designer I've set Update Check to Never on all properties, but I still get an exception when calling Attach: "An attempt has been made to Attach or Add an entity that is not new, perhaps having been loaded from another DataContext. This is not supported." This approach seems to have worked for others on here, so there must be something I've missed.
using (TheDataContext dc = new TheDataContext())
{
    test = dc.Members.FirstOrDefault(m => m.fltId == 1);
}

test.Name = "test2";

using (TheDataContext dc = new TheDataContext())
{
    dc.Members.Attach(test, true);
    dc.SubmitChanges();
}
The error message says exactly what is going wrong: you are trying to attach an object that has been loaded from another DataContext, in your case from another instance of the DataContext. Don't dispose your DataContext (at the end of the using statement it gets disposed) before you change values and submit the changes; this should all happen in one using statement. I also saw that you attach the object to the Members collection again, but it is already in there. There is no need to do that; this should work just as well:
using (TheDataContext dc = new TheDataContext())
{
    var test = dc.Members.FirstOrDefault(m => m.fltId == 1);
    test.Name = "test2";
    dc.SubmitChanges();
}
Just change the value and submit the changes.
Latest update:
(Removed all previous 3 updates.)
My previous solution (removed from this post again), found here, is dangerous. I just read this in an MSDN article:
"Only call the Attach methods on new
or deserialized entities. The only way
for an entity to be detached from its
original data context is for it to be
serialized. If you try to attach an
undetached entity to a new data
context, and that entity still has
deferred loaders from its previous
data context, LINQ to SQL will thrown
an exception. An entity with deferred
loaders from two different data
contexts could cause unwanted results
when you perform insert, update, and
delete operations on that entity. For
more information about deferred
loaders, see Deferred versus Immediate
Loading (LINQ to SQL)."
Use this instead:
// Get the object the first time by some id
using(TheDataContext dc = new TheDataContext())
{
test = dc.Members.FirstOrDefault(m => m.fltId == 1);
}
// Somewhere else in the program
test.Name = "test2";
// Again somewhere else
using(TheDataContext dc = new TheDataContext())
{
// Get the db row with the id of the 'test' object
Member modifiedMember = new Member()
{
Id = test.Id,
Name = test.Name,
Field2 = test.Field2,
Field3 = test.Field3,
Field4 = test.Field4
};
dc.Members.Attach(modifiedMember, true);
dc.SubmitChanges();
}
After copying the object, all references are detached and all event handlers (deferred loading from the db) are not connected to the new object. Only the value fields are copied to the new object, which can now be safely attached to the Members table. Additionally, you do not have to query the db a second time with this solution.
It is possible to attach entities from another DataContext.
The only thing that needs to be added to the code in the first post is this:
dc.DeferredLoadingEnabled = false;
But this is a drawback, since deferred loading is very useful. I read somewhere on this page that another solution would be to set Update Check on all properties to Never. This text says the same: http://complexitykills.blogspot.com/2008/03/disconnected-linq-to-sql-tips-part-1.html
But I can't get it to work, even after setting Update Check to Never.
This is a function in my Repository class which I use to update entities:
protected void Attach(TEntity entity)
{
    try
    {
        _dataContext.GetTable<TEntity>().Attach(entity);
        _dataContext.Refresh(RefreshMode.KeepCurrentValues, entity);
    }
    catch (DuplicateKeyException ex) // Data context knows about this entity so just update values
    {
        _dataContext.Refresh(RefreshMode.KeepCurrentValues, entity);
    }
}
Where TEntity is your DB class and, depending on your setup, you might just want to do:
_dataContext.Attach(entity);