OptaPlanner - continuous planning with changing constraints that do not invalidate prior assignments

We are using OptaPlanner 7.0 beta together with GraphHopper to calculate shortest paths in a warehouse where goods have to be collected into boxes by workers (VRPTW). Since the business is about collecting online-ordered goods, approximately 70% of the items to collect are added to the problem during the day. We use ProblemFactChange to add the incoming order items, and already completed order items in the chain are set to immovable (these 'restarts' are performed every full hour). So far everything works.
The question now is about changing restrictions/conditions that can occur due to unbalanced workload across warehouse zones. The warehouse is logically divided into areas, to avoid all workers having to serve all areas (I know your opinion about segmentation of planning problems, but this is how the work is currently organised). The limited assignment of items to available workers within one zone is currently defined by a hard constraint.
The new requirement we are confronted with is that a worker should be temporarily assigned to a different zone if the workload there is higher compared to his current zone. Afterwards he can switch back to his original zone. To my understanding, an update of the constraint condition would result in hard constraint violations for the previously assigned, locked items, which should be avoided. Are there mechanisms to support temporarily changing restrictions, or would a SelectionFilter for items help? (BTW: we are using Drools.)
Hints are welcome, Thank you
Michael

If there are 2 different tenants, each with their own set of employees, tasks, etc., and each in their own Solver, then the Borrow Pattern can be used, especially if the employee borrowing involves some human interaction (usually paperwork or a phone call between managers):
Suppose tenant A has an employee called John and tenant B wants to borrow him. Then assign one or more entities from B to John and make them immovable (usually with a boolean borrowLocked). Then add the same entities to tenant A. Neither the solver of A nor that of B will be able to move them (so they won't change), but both will take them into account: tenant A won't give John work while he's working for tenant B, and tenant B will accept that those entities are assigned (and it won't try to assign John to other entities, as he's not in its value range).


Correct relations to Loans table?

Hello stackers!
I've made a library database and I was wondering: I am making a one-to-one relation between Copies and Loans, and a one-to-many relation from Users to Loans.
A copy of a book should only be allowed to be assigned one loan at a time, and a loan should only be able to contain one copy of a book; if someone rents other books, that's multiple loans.
And a user should be able to make multiple loans, but a loan can only be assigned one specific user.
Are my current relations between these three tables correct?
If not, I would love to know how to fix it, and the reason for my failed logic on the issue.
Thank you in advance!
Following on from the earlier answer: if you add a User_Roles table, it could (and likely will) prevent you from falling into the membership trap. If you assume a user with the Admin role can perform every function of a user with only the Basic role, then every function which requires role-checking has to have a list of acceptable roles (Admin + Basic). Many times it is more efficient to just directly assign all the different roles, i.e. Basic AND Admin, to individual users. Then, when a feature requires Basic role-authorization, all users can be treated the same way. It's simpler in the long run.
The Loans table has a number of issues. First, there's no primary key; to be consistent with the rest of your design, it could be a LoanID. CopyID should be a foreign key to the Copies table (maybe that's what is currently drawn).
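As a rough sketch of those two fixes (the date columns are illustrative assumptions, not from your diagram):
CREATE TABLE Loans (
    LoanID     INT IDENTITY PRIMARY KEY,
    CopyID     INT NOT NULL REFERENCES Copies (CopyID),
    UserID     INT NOT NULL REFERENCES Users (UserID),
    LoanDate   DATE NOT NULL,
    ReturnDate DATE NULL
);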
One 'advanced' or modern approach to your data model could be to use a temporal table to model the Copies. Since a single copy may only be out on one loan at a time, properties of the loan could be added to the Copies table. Then, any time a change is made to the system-versioned Copies table, the Copies_history table would automatically keep a full accounting of all prior loan activity.
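A minimal sketch of that idea using SQL Server's system-versioned (temporal) table syntax; the loan-related columns are assumptions:
CREATE TABLE Copies (
    CopyID     INT PRIMARY KEY,
    BookID     INT NOT NULL,
    LoanUserID INT NULL,    -- NULL while the copy sits on the shelf
    LoanDate   DATE NULL,
    SysStart   DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
    SysEnd     DATETIME2 GENERATED ALWAYS AS ROW END NOT NULL,
    PERIOD FOR SYSTEM_TIME (SysStart, SysEnd)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.Copies_history));
Every UPDATE (lending a copy out, taking it back) automatically writes the prior row version to Copies_history.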
The model looks good to me. You may need to apply some logic to enforce that a user has only one active loan for the same copy of a book.
Will a user be able to loan a copy over and over again? Then the relationship from Copy to Loan is 1:M.

One entity, several models in different bounded contexts with different databases

Let's say we have an Order entity that will be modeled in two different BCs in an e-commerce application.
The first BC is Order Placement. This BC takes care of collecting all orders placed by our customers on our different websites, validates them, and populates its corresponding database with Orders whose state is either Placed or Rejected.
The 2nd BC is Shipment. This allows the employees in the warehouses to mark an Order as Shipped in its database once it leaves the warehouse.
Now since both BCs use different databases which are empty at first, there will be a need to inform the Shipment BC of the orders that were Placed, so that when a scanner wants to Ship an Order it will be there in the Shipment BC.
My initial approach was to create a domain event once an Order is placed in the Order placement BC and have the Shipment BC subscribe to that event and create a corresponding Order entity in its database for every order placed.
However, I can't shake the feeling that I'm duplicating data across different databases.
My second approach is to ask Order Placement for the Order entity each time an order is being Shipped, but I would still need to maintain the state of the Order in case of a failure in the shipment.
Is there a better approach to all this from a DDD POV?
Your first approach is perfectly fine in my opinion. You are not duplicating data, because as you already noticed, that data is from another context. Same data in different contexts means different things.
As Vaughn Vernon pointed out in his book «Implementing Domain-Driven Design»: "A greater degree of autonomy can be achieved when dependent state is already in place in our local system. Some may think of this as a cache of whole dependent objects, but that’s not usually the case when using DDD. Instead we create local domain objects translated from the foreign model, maintaining only the minimal amount of state needed by the local model.”
So copying data is okay as long as it is the only data other BCs need.
But he also mentions that if you use exact copies, it might be a sign of a modeling problem.
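To make the "minimal amount of state" idea concrete, here is a sketch of what the Shipment BC might persist for an Order, populated from the OrderPlaced event; the table, columns, and SQL dialect are assumptions for illustration:
CREATE TABLE ShipmentOrder (
    OrderId   UNIQUEIDENTIFIER PRIMARY KEY,  -- identity shared with Order Placement
    PlacedAt  DATETIME2 NOT NULL,
    Status    VARCHAR(10) NOT NULL DEFAULT 'PLACED'
              CHECK (Status IN ('PLACED', 'SHIPPED')),
    ShippedAt DATETIME2 NULL
);
It deliberately carries no pricing, customer, or website data - only what Shipment needs to mark an order as Shipped.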

Table structure for Scheduling App in SQL DB

I'm working on a database to hold information for an on-call schedule. Currently I have a structure that looks about like this:
Table - Person: (key)ID, LName, FName, Phone, Email
Table - PersonTeam: (from Person)ID, (from Team)ID
Table - Team: (key)ID, TeamName
Table - Calendar: (key dateTime)dt, year, month, day, etc...
Table - Schedule: (from Calendar)dt, (id of Person)OnCall_NY, (id of Person)OnCall_MA, (id of Person)OnCall_CA
My question is: With the Schedule table, should I leave it structured as is, where the dt is a unique key, or should I rearrange it so that dt is non-unique and the table looks like this:
Table - Schedule: (from Calendar)dt, (from Team)ID, (from Person)ID
and have multiple entries for each day, OR would it make sense to just use:
Table - Schedule: (from Calendar)dt, (from PersonTeam)KeyID - [make a key ID on each of the person/team pairings]
A team will always have someone on call, but a person can be on call for more than one team at a time (if they are on multiple teams).
If a completely different setup would work better let me know too!
Thanks for any help! I apologize if my question is unclear. I'm learning fast but nevertheless still fairly new to using SQL daily, so I want to make sure I'm using best practices when I learn so I don't develop bad habits.
The current version, one column per team, is probably not a good idea. Since you're representing teams as a table (and not as an enum or equivalent), it means you expect to add/remove teams over time. That would force you to add/remove columns to the table, which is always a much larger task than adding/removing a few rows.
The 2nd option is the usual solution to a problem like this. A safe choice. You can always define an additional foreign key constraint from Schedule(teamID, personID) to PersonTeam to ensure you don't mistakenly assign schedule duty to a person not belonging to the team.
The 3rd option is pretty much equivalent to the 2nd, only you're swapping a composite natural key for PersonTeam for a surrogate simple key. Since the two components of said composite key are already surrogate, there is no advantage (in terms of immutability, etc.) to adding this additional one. Plus it would turn a very simple N-M relationship (PersonTeam) which most DB managers / ORMs will handle nicely into a more complex object which will need management on its own.
By Occam's razor, I'd do away with the additional surrogate key and use your 2nd option.
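A sketch of that 2nd option with the suggested composite foreign key (column names and types are assumptions):
CREATE TABLE Schedule (
    dt       DATETIME NOT NULL REFERENCES Calendar (dt),
    TeamID   INT      NOT NULL,
    PersonID INT      NOT NULL,
    PRIMARY KEY (dt, TeamID),
    FOREIGN KEY (TeamID, PersonID) REFERENCES PersonTeam (TeamID, PersonID)
);
The composite FK guarantees the scheduled person actually belongs to the scheduled team, and the (dt, TeamID) primary key limits each team to one person per day.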
In my view, the answer may depend on whether the number of teams is fixed and fairly small. Of course, whether the names of the teams are fixed or not may also matter, but that would probably have more to do with column naming.
More specifically, my view is this:
If the business requirement is to always have a small and fixed number of people (say, three) on call, then it may well be more convenient to allocate three columns in Schedule, one for every team to hold the ID of the appointed person, i.e. like your current structure:
dt OnCall_NY OnCall_MA OnCall_CA
--- --------- --------- ---------
with dt as the primary key.
If the number of teams (in the Team table) is fixed too, you could include teams' names/designators in the column names like you are doing now, but if the number of teams is more than three and it's just the number of teams in Schedule that is limited to three, then you could just use names like OnCallID1, OnCallID2, OnCallID3.
But even if that requirement is fixed, it may only be fixed today; tomorrow your boss says, "We no longer work with a fixed number of teams (on call)", or "We need to extend the number of teams supported to four, and we may need to extend it further in the future". So, a more universal approach would be the one you are considering switching to in your question, that is
dt Team Person
--- ---- ------
where the primary key would now be (dt, Team).
That way you could easily extend/reduce the number of people on call on the database level without having to change anything in the schema.
UPDATE
I forgot to address your third option in my original answer (sorry). Here goes.
Your first option (the one actually implemented at the moment) seems to imply that every team can be represented by (no more than) one person only. If you assign surrogate IDs to the Person/Team pairs and use those keys in Schedule instead of separate IDs for Person and Team, you will probably be unable to enforce the mentioned "one person per team in Schedule" requirement at the database level (or, at least, that might prove somewhat troublesome), while, using separate keys, it is enough to make Team part of a composite key (dt, Team) and you are done: no more than one person per team per day now.
Also, you may have difficulties letting a person change teams over time if their presence in the team is captured in this way, i.e. with a Schedule reference to the Person/Team pair. You would probably have to change the Team reference in the PersonTeam table, which would result in misrepresentation of historical info: when looking at the people on call on a certain day in the past, the person's Team shown would be the one they belong to now, not the one they belonged to then.
Using separate IDs for people and teams in Schedule, on the other hand, would allow you to let people change teams freely, provided you do not make (Schedule.Team, Schedule.Person) a reference to (PersonTeam.Team, PersonTeam.Person), of course.

Opinions on sensor / reading / alert database design

I've asked a few questions lately regarding database design, probably too many ;-) However, I believe I'm slowly getting to the heart of the matter with my design and am slowly boiling it down. I'm still wrestling with a couple of decisions regarding how "alerts" are stored in the database.
In this system, an alert is an entity that must be acknowledged, acted upon, etc.
Initially I related readings to alerts like this (very cut down):
[Location]
LocationId
[Sensor]
SensorId
LocationId
UpperLimitValue
LowerLimitValue
[SensorReading]
SensorReadingId
Value
Status
Timestamp
[SensorAlert]
SensorAlertId
[SensorAlertReading]
SensorAlertId
SensorReadingId
The last table associates readings with the alert, because it is the readings that dictate whether the sensor is in alert or not.
The problem with this design is that it allows readings from many sensors to be associated with a single alert - whereas each alert is for a single sensor only and should only have readings for that sensor associated with it (should I be bothered that the DB allows this though?).
I thought to simplify things, why even bother with the SensorAlertReading table? Instead I could do this:
[Location]
LocationId
[Sensor]
SensorId
LocationId
[SensorReading]
SensorReadingId
SensorId
Value
Status
Timestamp
[SensorAlert]
SensorAlertId
SensorId
Timestamp
[SensorAlertEnd]
SensorAlertId
Timestamp
Basically I'm not associating readings with the alert now - instead I just know that an alert was active between a start and end time for a particular sensor, and if I want to look up the readings for that alert I can do so.
Obviously the downside is I no longer have any constraint stopping me from deleting readings that occurred during the alert, but I'm not sure that constraint is necessary.
Now looking in from the outside as a developer / DBA, would that make you want to be sick or does it seem reasonable?
Is there perhaps another way of doing this that I may be missing?
Thanks.
EDIT:
Here's another idea - it works in a different way. It stores each sensor state change (going from normal to alert) in a table, and then readings are simply associated with a particular state. This seems to solve all the problems - what d'ya think? (The only thing I'm not sure about is calling the table "SensorState"; I can't help thinking there's a better name - maybe SensorReadingGroup?):
[Location]
LocationId
[Sensor]
SensorId
LocationId
[SensorState]
SensorStateId
SensorId
Timestamp
Status
IsInAlert
[SensorReading]
SensorReadingId
SensorStateId
Value
Timestamp
There must be an elegant solution to this!
Revised 01 Jan 11 21:50 UTC
Data Model
I think your Data Model should look like this: ▶Sensor Data Model◀. (Page 2 relates to your other question re History.)
Readers who are unfamiliar with the Relational Modelling Standard may find ▶IDEF1X Notation◀ useful.
Business (Rules Developed in the Commentary)
I did identify some early Business Rules, which are now obsolete, so I have deleted them.
These can be "read" in the Relations (read adjacent to the Data Model). The Business Rules and all implied Referential and Data Integrity can be implemented in, and thus guaranteed by, RULES and CHECK Constraints, in any ISO SQL database. This is a demonstration of IDEF1X, in the development of both the Relational Keys, and the Entities and Relations. Note the Verb Phrases are more than mere flourish.
Apart from three Reference tables, the only static, Identifying entities are Location, NetworkSlave, and User. Sensor is central to the system, so I have given it its own heading.
Location
A Location contains one-to-many Sensors
A Location may have one Logger
NetworkSlave
A NetworkSlave collects Readings for one-to-many NetworkSensors
User
A User may maintain zero-to-many Locations
A User may maintain zero-to-many Sensors
A User may maintain zero-to-many NetworkSlaves
A User may perform zero-to-many Downloads
A User may make zero-to-many Acknowledgements, each on one Alert
A User may take zero-to-many Actions, each of one ActionType
Sensor
A SensorType is installed as zero-to-many Sensors
A Logger (houses and) collects Readings for one LoggerSensor
A Sensor is either one NetworkSensor or one LoggerSensor
A NetworkSensor records Readings collected by one NetworkSlave
.
A Logger is periodically Downloaded one-to-many times
A LoggerSensor records Readings collected by one Logger
.
A Reading may be deemed in Alert, of one AlertType
An AlertType may happen on zero-to-many Readings
.
An Alert may be one Acknowledgement, by one User
.
An Acknowledgement may be closed by one Action, of one ActionType, by one User
An ActionType may be taken on zero-to-many Actions
Responses to Comments
Sticking Id columns on everything that moves interferes with the determination of Identifiers, the natural Relational Keys that give your database its relational "power". Id columns are Surrogate Keys, which means an additional key and index; they hinder that relational power, which results in more joins than otherwise necessary. Therefore I use them only when the Relational Key becomes too cumbersome to migrate to the child tables (and accept the imposed extra join).
Nullable keys are a classic symptom of an Unnormalised database. Nulls in the database are bad news for performance, but Nulls in FKs mean each table is doing too many things, has too many meanings, and results in very poor code. Good for people who like to "refactor" their databases; completely unnecessary for a Relational database.
Resolved: An Alert may be Acknowledged; An Acknowledgement may be Actioned.
The columns above the line are the Primary Key (refer Notation document). SensorNo is a sequential number within LocationId; refer Business Rules, it is meaningless outside a Location; the two columns together form the PK. When you are ready to INSERT a Sensor (after you have checked that the attempt is valid, etc.), it is derived as follows (this excludes LoggerSensors, which are zero):
INSERT Sensor (LocationId, SensorNo, SensorCode)
    SELECT  @LocationId,
            ISNULL(MAX(SensorNo), 0) + 1,
            @SensorCode
        FROM  Sensor
        WHERE LocationId = @LocationId
For accuracy or improved meaning, I have changed NetworkSlave monitors NetworkSensor to NetworkSlave collects Readings from NetworkSensor.
Check Constraints. The NetworkSensor and LoggerSensor are exclusive subtypes of Sensor, and their integrity can be set by CHECK constraints. Alerts, Acknowledgements and Actions are not subtypes, but their integrity is set by the same method, so I will list them together.
Every Relation in the Data Model is implemented as a CONSTRAINT in the child (or subtype) as FOREIGN KEY (child_FK_columns) REFERENCES Parent (PK_columns)
A Discriminator is required to identify which subtype a Sensor is. This is SensorNo = 0 for LoggerSensors; and non-zero for NetworkSensors.
The existence of NetworkSensors and LoggerSensors is constrained by the FK CONSTRAINTS to NetworkSlave and Logger, respectively; as well as to Sensor.
In NetworkSensor, include a CHECK constraint to ensure SensorNo is non-zero
In LoggerSensor, include a CHECK constraint to ensure SensorNo is zero
The existence of Acknowledgements and Actions is constrained by the identified FK CONSTRAINTS (an Acknowledgement cannot exist without an Alert; an Action cannot exist without an Acknowledgement). Conversely, an Alert with no Acknowledgement is in an unacknowledged state; an Alert with an Acknowledgement but no Action is in an acknowledged but un-actioned state.
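To make those constraints concrete, here is a sketch of the subtype CHECKs and one of the existence FKs (the constraint names, and the exact key columns of Alert, are assumptions):
ALTER TABLE NetworkSensor ADD CONSTRAINT NetworkSensor_SensorNo_ck
    CHECK (SensorNo != 0);
ALTER TABLE LoggerSensor ADD CONSTRAINT LoggerSensor_SensorNo_ck
    CHECK (SensorNo = 0);
ALTER TABLE Acknowledgement ADD CONSTRAINT Alert_Acknowledgement_fk
    FOREIGN KEY (LocationId, SensorNo, ReadingDtm)
    REFERENCES  Alert (LocationId, SensorNo, ReadingDtm);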
.
Alerts. The concept in a design for this kind of (live monitoring and alert) application is many small programs, running independently, all using the database as the single version of the truth. Some programs insert rows (Readings, Alerts); other programs poll the db for the existence of such rows (and send SMS messages, etc.; or hand-held units pick up Alerts relevant to that unit only). In that sense, the db may be described as a message box (one program puts rows in, which another program reads and actions).
The assumption is that Readings for Sensors are being recorded "live" by the NetworkSlave, and every minute or so a new set of Readings is inserted. A background process executes periodically (every minute or whatever); this is the main "monitor" program, and it will have many functions within its loop. One such function is to monitor Readings and produce the Alerts that have occurred since the last iteration (of the program loop).
The following code segment will be executed within the loop, one for each AlertType. It is a classic Projection:
-- Assume @LoopDtm contains the DateTime of the last iteration
INSERT Alert
    SELECT  s.LocationId,
            s.SensorNo,
            r.ReadingDtm,
            'L'              -- AlertType "Low"
        FROM  Sensor  s,
              Reading r
        WHERE s.LocationId = r.LocationId
          AND s.SensorNo   = r.SensorNo
          AND r.ReadingDtm > @LoopDtm
          AND r.Value      < s.LowerLimit

INSERT Alert
    SELECT  s.LocationId,
            s.SensorNo,
            r.ReadingDtm,
            'H'              -- AlertType "High"
        FROM  Sensor  s,
              Reading r
        WHERE s.LocationId = r.LocationId
          AND s.SensorNo   = r.SensorNo
          AND r.ReadingDtm > @LoopDtm
          AND r.Value      > s.UpperLimit
So an Alert is definitely a fact that exists as a row in the database. Subsequently it may be Acknowledged by a User (another row/fact), and Actioned with an ActionType by a User.
Other than this (the creation-by-Projection act), i.e. in the general and unvarying case, I would refer to an Alert only as a row in Alert; a static object after creation.
Concerns re Changing Users. That is taken care of already, as follows. At the top of my (revised yesterday) Answer, I state that the major Identifying elements are static. I have re-sequenced the Business Rules to improve clarity.
For the reasons you mention, User.Name is not a good PK for User, although it remains an Alternate Key (Unique) and the one that is used for human interaction.
User.Name cannot be duplicated; there cannot be more than one Fred. There can be in terms of FirstName-LastName - two Fred Bloggs - but not in terms of User.Name. Our second Fred needs to choose another User.Name. Note the identified Indices.
UserId is the permanent record, and it is already the PK. Never delete User, it has historical significance. In fact the FK constraints will stop you (never use CASCADE in a real database, that is pure insanity). No need for code or triggers, etc.
Alternately (to delete Users who never did anything, and thus release User.Name for use) allow Delete as long as there are no FK violations (ie. UserId is not referenced in Download, Acknowledgement, Action).
To ensure that only Users who are current perform Actions, add an IsObsolete boolean to User (DM updated), and check that column whenever the table is interrogated for any function (except reports). You can implement a View UserCurrent which returns only those Users.
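A sketch of that view (IsObsolete assumed to be a BIT; User is bracketed because it is a reserved word):
CREATE VIEW UserCurrent AS
    SELECT *
        FROM  [User]
        WHERE IsObsolete = 0;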
Same goes for Location and NetworkSlave. If you need to differentiate current vs historical, let me know, I will add IsObsolete to them as well.
I don't know: you may purge the database of ancient Historical data periodically, delete rows that are (eg) over 10 years old. That has to be done from the bottom (tables) first, working up the Relations.
Feel free to ask Questions.
Note the IDEF1X Notation document has been expanded.
Here are my two cents on the problem.
The AlertType table holds all possible types of alerts. AlertName may be something like high temperature, low pressure, low water level, etc.
The AlertSetup table allows for setup of alert thresholds for a sensor for a specific alert type.
For example, TresholdLevel = 100 and TresholdType = 'HI' should trigger an alert for readings over 100.
The Reading table holds sensor readings as they are streamed into the server (application).
The Alert table holds all alerts. It keeps links to the first reading that triggered the alert and the last one that finished it (FirstReadingId, LastReadingId). IsActive is true if there is an active alert for the (SensorId, AlertTypeId) combination. IsActive can be set to false only by a reading going below the alert threshold. IsAcknowledged means that an operator has acknowledged the alert.
The application layer inserts the new reading into the Reading table and captures the ReadingId.
The application then checks the reading against the alert setups for each (SensorId, AlertTypeId) combination. At this point a collection of objects {SensorId, AlertTypeId, ReadingId, IsAlert} is created and the IsAlert flag is set for each object.
The Alert table is then checked for active alerts for each object {SensorId, AlertTypeId, ReadingId, IsAlert} from the collection.
If the IsAlert is TRUE and there are no active alerts for the (SensorId, AlertTypeId) combination, a new row is added to the Alert table with the FirstReadingID pointing to the current ReadingId. The IsActive is set to TRUE, the IsAcknowledged to FALSE.
If the IsAlert is TRUE and there is an active alert for the (SensorId, AlertTypeId) combination, that row is updated by setting the LastReadingID pointing to the current ReadingId.
If the IsAlert is FALSE and there is an active alert for the (SensorId, AlertTypeId) combination, that row is updated by setting the IsActive FALSE.
If the IsAlert is FALSE and there are no active alerts for the (SensorId, AlertTypeId) combination, the Alert table is not modified.
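A hypothetical T-SQL rendering of those four cases for a single {SensorId, AlertTypeId, ReadingId, IsAlert} object (variable and column names are assumptions):
IF @IsAlert = 1
BEGIN
    IF EXISTS ( SELECT 1 FROM Alert
                WHERE SensorId = @SensorId AND AlertTypeId = @AlertTypeId
                  AND IsActive = 1 )
        -- still in alert: move the last-reading pointer forward
        UPDATE Alert
            SET   LastReadingId = @ReadingId
            WHERE SensorId = @SensorId AND AlertTypeId = @AlertTypeId
              AND IsActive = 1
    ELSE
        -- a new alert begins with the current reading
        INSERT Alert (SensorId, AlertTypeId, FirstReadingId, IsActive, IsAcknowledged)
            VALUES (@SensorId, @AlertTypeId, @ReadingId, 1, 0)
END
ELSE
    -- back in range: close any active alert (if none exists, this matches no rows)
    UPDATE Alert
        SET   IsActive = 0, LastReadingId = @ReadingId
        WHERE SensorId = @SensorId AND AlertTypeId = @AlertTypeId
          AND IsActive = 1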
The main "triangle" you have to deal with here is Sensor, [Sensor]Reading, and Alert. Presuming you have to track activity as it is occuring (as opposed to a "load it all at once" design), your third solution is similar to something we did recently. A few tweaks and it would look like:
[Location]
LocationId
[Sensor]
SensorId
LocationId
CurrentSensorState -- Denormalized data!
[SensorReading]
SensorReadingId
SensorState
Value
Timestamp
[SensorStateLog]
SensorId
Timestamp
SensorState
Status -- Does what?
IsInAlert
(Primary key is {SensorId, Timestamp})
"SensorState" could be SensorStateId, with an associated lookup table listing (and constraining) all possible states.
The idea is, your Sensor table contains one row per sensor and shows its current state. SensorReading is updated continuously with sensor readings. If and when a given sensor's current state changes (i.e. a new Reading's state differs from the Sensor's current state), you change the current state and add a row to SensorStateLog showing the change in state. (Optionally, you could update the "prior" entry for that sensor with a "state ended" timestamp, but that's fussy code to write.)
CurrentSensorState in the Sensor table is denormalized data, but if properly maintained (and if you have millions of rows) it will make querying current state vastly more efficient and so may be worth the effort.
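A sketch of that state-change step (the Status column is left out since its purpose is unclear; variable names are assumptions):
IF @NewState <> ( SELECT CurrentSensorState FROM Sensor WHERE SensorId = @SensorId )
BEGIN
    -- keep the denormalized current state up to date
    UPDATE Sensor
        SET   CurrentSensorState = @NewState
        WHERE SensorId = @SensorId
    -- and log the transition
    INSERT SensorStateLog (SensorId, Timestamp, SensorState, IsInAlert)
        VALUES (@SensorId, @ReadingTimestamp, @NewState, @IsInAlert)
END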
The obvious downside of all this is that Alerts are no longer an entity, and they become that much harder to track and identify. If these must be readily and immediately identifiable and usable, your third scheme won't do what you need it to do.

SSIS Population of Slowly Changing Dimension with outrigger

Working on a data warehouse, a suitable analogy for the problem is that we have Healthcare Practitioners. Healthcare Practitioners have a number of professional attributes and work in an open number of teams and in an open number of clinical areas.
For example, you may have a nurse who works in children's services across a number of teams as a relief/contractor/bank staff person. Or you may have a newly qualified doctor who works in general medicine and is doing time in a special area pending qualifying as a consultant in that special area.
So we have an open number of areas of work and an open number of teams; we can't have team 1, team 2, etc. in our dimensions. The other attributes may change over time also, like base location (where they work out of), and the main team and area they work in.
So, following Kimball, I've gone for outriggers:
Table DimHealthProfessionals:
Key (primary key, identity)
Name
Main Team
Main Area of Work
Base Location
Other Attribute 1
Other Attribute 2
Start Date
End Date
Table OutriggerHealthProfessionalTeam:
HPKey (foreign key to DimHealthProfessionals.Key)
Team Name
Team Type
Other Team Attribute 1
Other Team Attribute 2
Table OutriggerHealthProfessionalAreaOfWork:
HPKey (as above)
Area of Work
Other AoW attribute 1
If any attribute of the HP changes, or the combination of teams or areas of work in which they work changes, we need to create a new entry in the SCD and its outrigger tables to encapsulate this.
And we're doing this in SSIS.
The source data is basically an HP table with the main attributes, a table of areas of work, a table of teams and a pair of mapping tables to map a current set of areas of work to an HP.
I have three data sources, one brings in the HCP information, one the areas of work of all HCPs and one the team memberships.
The problem is how to run over all three datasets to determine if an HP has changed an attribute, and if they have changed an attribute, how we update the DIM and two outriggers appropriately.
Can anyone point me at a best practice for this? OR suggest an alternative way of modelling this dimension?
Admittedly I may not understand everything here, but it seems to me that the relationship in this example should be reversed. Place TeamKey and the WorkAreaKey in the dimHealthProfessionals -- this should simplify things.
With this in place, you simply make sure to deliver outriggers before the dimHealthProfessionals.
Treat outriggers as dimensions in their own right. You may want to treat dimHealthProfessionals as a type 2 dimension, to properly capture the history.
EDIT
Considering that team-to-person is many-to-many, a fact table is more appropriate.
A column in a dimension table is appropriate only if a person can belong to only one team at a time. The same goes for work areas.
The problem is how to run over all three datasets to determine if an HP has changed an attribute, and if they have changed an attribute, how we update the DIM and two outriggers appropriately.
Can anyone point me at a best practice for this? OR suggest an alternative way of modelling this dimension?
I'm not sure I understand your question fully. If you are unsure about change detection, then use Checksums in the package. Build up a temp table with the data as it is in the source, then compare each row to its counterpart (joined via the business keys) by computing the checksum for both rows and comparing those. If they differ, the data has changed.
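For instance, a hypothetical T-SQL sketch of that comparison (table, key, and attribute names are assumptions):
SELECT src.HPBusinessKey
    FROM  #StagingHP             AS src
    JOIN  DimHealthProfessionals AS dim
      ON  dim.HPBusinessKey = src.HPBusinessKey
     AND  dim.ValidThrough IS NULL    -- current dimension row only
    WHERE CHECKSUM(src.MainTeam, src.MainAreaOfWork, src.BaseLocation)
       <> CHECKSUM(dim.MainTeam, dim.MainAreaOfWork, dim.BaseLocation)
Each row returned is a practitioner whose attributes changed and who therefore needs a new type 2 entry.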
If you are talking about cascading updates in a historized dimension hierarchy (and you can treat the outriggers like a hierarchy in this context), then the foreign key lookups will automatically look up the newer entry in DimHealthProfessionals if you have historization (i.e. validFrom / validThrough timestamps in DimHealthProfessionals). Those different foreign keys result in a different checksum.