I have an agent called truck, which performs some actions (e.g. loading packages).
The problem is the random order in which the agents execute their actions. For instance, suppose I have three trucks; the loading sequence differs randomly on each run:
Run-1: truck-1, truck-3, truck-2
Run-2: truck-2, truck-1, truck-3
Run-3: truck-3, truck-1, truck-2
...
How can I make sure the agents (trucks) execute their actions in a fixed sequence, e.g. ordered by their IDs, so that we always get a consistent result from the simulation?
Run-1: truck-1, truck-2, truck-3
Run-2: truck-1, truck-2, truck-3
Run-3: truck-1, truck-2, truck-3
...
There are at least three ways to do this:
1. If you set the random seed, the order of the trucks should be the same across runs, all other things being equal. It most likely won't be ordered by id, but it will be consistent.
2. Add all the trucks to an ArrayList when they are created. Sort this list by id and, on each tick of the simulation, iterate through it, executing the truck action on each truck. A quick google will show you how to order a Java List using a Comparator; there is a sketch after this list.
3. Adapt the scheduling to reflect the truck id: for example, truck 1 executes at 1.0 and every tick thereafter, truck 2 at 1.1 and every tick thereafter, truck 3 at 1.2, and so on.
4. A variation on 3: set the scheduling priority by id. All the trucks could execute at 1.0 and every tick thereafter, but with truck 1 having the highest priority, truck 2 the next, and so on.
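A minimal sketch of option 2, assuming a Truck class with a numeric getId() and a per-tick executeAction() method (both names are placeholders for whatever your own model uses):

    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.List;

    public class TruckScheduler {
        // Truck, getId(), executeAction() stand in for your own model classes.
        private final List<Truck> trucks = new ArrayList<>();

        // Call this when a truck is created.
        public void register(Truck truck) {
            trucks.add(truck);
        }

        // Call this once per simulation tick: sort by id, then step in order.
        public void step() {
            trucks.sort(Comparator.comparingInt(Truck::getId));
            for (Truck truck : trucks) {
                truck.executeAction();
            }
        }
    }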
As a side note, random iteration over the items in a schedule is the default in order to prevent common ABM execution-ordering issues, such as first-mover advantage.
I'm trying to understand an illustrative example of how Lamport's algorithm is applied. In the course I'm taking, we were presented with two representations of the clocks within three [distant] processes, one with the Lamport algorithm applied and the other without.
Without the Lamport algorithm:
With the Lamport algorithm applied:
My question concerns the validity of the change applied to the third entry of the table for process P1. Shouldn't it be, as the Lamport algorithm instructs, max(2, 2) + 1, which is 3, not 4?
When I asked some of my classmates about this, one of them told me that the third entry of P1's table represents a "local" event that happened within P1, so when message A arrives, the entry is updated to max(2, 3) + 1, which is 4. However, if that were the case, shouldn't the receipt of the message get a new entry of its own, instead of being merged into the entry that represents the local event within P1?
Upon further investigation, I found, in the same course material, a figure taken from Tanenbaum's Distributed Systems: Principles and Paradigms, in which the new value of an entry corresponding to the receipt of a message is computed by adding 1 to the max of the preceding entry in the same table and the timestamp of the received message, as shown below. This is quite different from what was done in the first illustration.
I'm unsure whether the problem is a faulty understanding of the algorithm on my part, or whether the two illustrations use different conventions for what the entries represent.
validity of the change applied to the third entry of the table for process P1
In the classical Lamport algorithm, there is no need to increase the local counter before taking the max. If you do that, it still works, but it is a useless extra operation. In the second example, all events are still properly ordered. In general, as long as the counters go up, the algorithm works.
Another way of looking at correctness is to try to rebuild the total order manually. The hard requirement is that if an event A happens before an event B, then A is placed before B in the total order. In both pictures 2 and 3, everything is fine.
Let's look at picture 2. The event (X) in the second cell of P0 happens before the event (Y) in the third cell of P1. To make sure X comes before Y in the total order, the time of Y must be larger than X's, and it is. It doesn't matter whether the time difference is 1, 2, or 100.
in which the new value of an entry corresponding to the receipt of a message is computed by adding 1 to the max of the preceding entry in the same table and the timestamp of the received message, as shown below. This is quite different from what was done in the first illustration
It's actually pretty much the same logic, except for the increment of the local counter before taking the max. Generally speaking, every process has its own clock, and every event increases that clock by one. The only exception is when the clock of a different process is already ahead; then taking the max is required to make sure all events get a correct total order. So, in the third picture, P2 adjusts its clock (taking the max) because P3 is way ahead. The same goes for P1's adjustment.
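For reference, here is a minimal sketch of the classical rule in Java, assuming single-threaded event handling per process (the class and method names are illustrative):

    // One Lamport clock per process; names are illustrative.
    class LamportClock {
        private long time = 0;

        // A local event or a send: just advance the clock.
        long tick() {
            return ++time;
        }

        // A receive: jump past the sender's timestamp if it is ahead,
        // then count the receive event itself.
        long onReceive(long msgTimestamp) {
            time = Math.max(time, msgTimestamp) + 1;
            return time;
        }
    }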
We have many actions players can take in a game. Imagine a card game (like poker) or a board game where there are multiple choices at each decision point and a clear sequence of events. We keep track of each action taken by a player. We care about the action's size (if applicable), the other actions that were possible but not taken, the player who took the action, and the action that player faced before their move. Additionally, we need to know whether some action happened or did not happen before the action we're looking at.
The database helps us answer questions like:
1. How often is action A taken given the opportunity? (sum(actionA) / sum(actionA_opp))
2. How often is action A taken given the opportunity and given that action B took place?
3. How often is action A taken with size X, or made within Y seconds, given the opportunity and given that action B took place and action C did not?
4. How often is action A taken given that action B was performed by player P?
So for each action, we need to keep information about the player who took it, its size, its timing, the action performed, which action opportunities were available, and other characteristics. There is a finite number of actions.
A game has on average 6 actions, with some going up to 15.
There could be millions of games, and we want aggregate queries across all of them to run as fast as possible (seconds).
It could be represented in a document database with an array of embedded documents, like:
game: 123
actions: [
  {
    player: Player1,
    action: deals,
    time: 0.69,
    deal_opp: 1,
    discard_opp: 1
  },
  {
    player: Player2,
    action: discards,
    time: 1.21,
    deal_opp: 0,
    discard_opp: 1
  },
  ...
]
Or in a relational model:
game | player  | seq_n | action | time | deal_opp | discard_opp
123  | Player1 | 1     | deals  | 0.69 | 1        | 1
None of the designs I have come up with satisfies all of these conditions.
In the relational model above, seeing the previous actions taken in the same game requires N inner joins, where N is the number of previous actions we want to filter on. Given that the table would hold billions of rows, this means several self-joins on a billion-row table, which seems very inefficient.
If we instead store it in a wide-column table and represent the entire sequence in one row, aggregates become very easy (we can tell what happened and what didn't by comparing column values, e.g. sum(deal) / sum(deal_opp) where deal_opp = 1 gives the frequency of the deal action given that the player had the opportunity to take it), but we don't know WHO took a given action, which is a necessity. We cannot simply append a player column next to each action column, because an action like call or discard could involve many players in a row (in a poker game, one player raises, and 1 or more players can call).
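To make the wide-row aggregate concrete, here is a hedged JDBC sketch. The table and column names (game_actions_wide, deal, deal_opp) and the connection string are assumptions for illustration, not part of the design above:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class DealFrequency {
        public static void main(String[] args) throws SQLException {
            // game_actions_wide and its columns are assumed names.
            // Frequency of the deal action given the opportunity, over all games.
            String sql = "SELECT SUM(deal) * 1.0 / SUM(deal_opp) AS deal_freq "
                       + "FROM game_actions_wide WHERE deal_opp = 1";
            try (Connection conn = DriverManager.getConnection("jdbc:postgresql://localhost/games");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery(sql)) {
                if (rs.next()) {
                    System.out.println("P(deal | deal_opp) = " + rs.getDouble("deal_freq"));
                }
            }
        }
    }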
More possibilities:
Graph database (overkill, given that each node connects to at most one other? Basically a linked list.)
Closure tables (more efficient querying of previous actions)
??
If I understand you correctly, you're dealing with how to store a decision tree in your database. Right?
I remember programming a chess game years ago, where every action follows from the sequence of previous actions of both players. To keep a record of all the actions, with all the details you need, I think you should check the following:
+ In a relational database, the most efficient way to store a tree is a Modified Preorder Tree Traversal (nested set). It's not easy, to be honest, but you can give it a try.
This will help you: https://gist.github.com/tmilos/f2f999b5839e2d42d751
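To give a rough idea of how querying an MPTT/nested-set tree looks, here is a hedged sketch. The action_tree table and its id/action/lft/rgt columns are assumptions for illustration:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class SubtreeQuery {
        // action_tree and its lft/rgt columns are assumed names.
        // In a nested set, a descendant's (lft, rgt) interval lies inside its
        // ancestor's, so one range join fetches a whole subtree without recursion.
        public static void printSubtree(Connection conn, int nodeId) throws SQLException {
            String sql = "SELECT child.id, child.action "
                       + "FROM action_tree parent "
                       + "JOIN action_tree child "
                       + "  ON child.lft BETWEEN parent.lft AND parent.rgt "
                       + "WHERE parent.id = ?";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setInt(1, nodeId);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getInt("id") + " " + rs.getString("action"));
                    }
                }
            }
        }
    }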
If you are working with access control, you must have faced the issue where the Automatic Record Permission field (with rules) does not update itself when the record is recalculated. You either have to launch a full recalculation or wait a considerable amount of time for the changes to take place.
I am facing this issue where, based on 10 different field values in the record, I have to give read/edit access to 10 different groups respectively.
For instance:
if rule 1 is true, give edit access to the 1st group of users
if rules 1 and 2 are true, give edit access to the 1st AND 2nd groups of users.
I have selected 'No Minimum' and 'No Maximum' in the Auto RP field.
How can I make the Automatic Record Permission field update itself as quickly as possible? Am I missing something important here?
If you are working with access control, you must have faced the issue where the Automatic Record Permission field (with rules) does not update itself when the record is recalculated. You either have to launch a full recalculation or wait a considerable amount of time for the changes to take place.
Tanveer, in general, this is not a correct statement. You should not face this issue with [a] a well-designed architecture (relationships between your applications) and [b] the correct calculation order within the application.
Regarding the case you described, I suggest you check and review the following possibilities:
1. Calculation order. Automatic Record Permissions [ARP from here on] are treated by the Archer platform in the same way as calculated fields. This means you can modify the calculation order in which calculated fields and automatic record permissions are updated when you save the record. So it is possible that your ARP field is calculated before certain calculated fields you use in the ARP rules. For example, let's say you have two rules in the ARP field:
if A>0 then group AAA
if B>0 then group BBB
Now, you will have a problem if the calculation order is the following: "ARP", "A", "B".
ARP will not be updated after you click "Save" or "Apply", but it will be updated after you click "Save" or "Apply" twice on the same record. With the calculation order "A", "B", "ARP", your ARP will be recalculated right away.
2. Full recalculation queue.
Since ARPs are treated as calculated fields, every time an ARP needs to be updated, recalculation job(s) are created on the application server on the back end. If for some reason the calculation queue is full, the record permission will not get updated right away. The job engine recalculation queue can be full if you have a data feed running, or if a massive number of recalculations was triggered via manual data imports. The recalculation job for the ARP update is created, added to the queue, and processed based on the priorities defined for the job queue. You can monitor the job queue and alter the default processing priorities in Archer v5.5 via the Archer Control Panel interface. I suggest you check the job queue state next time you see delays in ARP recalculations.
3. "Avalanche" of recalculations
It is important to design the relationships and security inheritance between your applications so that the recalculation impact is minimal.
For example, let's say we have a Contacts application and a Department application:
- A record in the Contacts application inherits access from the Department record via an Inherited Record Permission.
- The Department record has an automatic record permission, and the Contacts record inherits it.
- Now the best part: Department D1 has 60,000 Contacts records linked to it, and Department D2 has 30,000.
The problem you described is reproducible in this configuration. I go to Department record D1 and update it in a way that forces the ARP in the department record to recalculate. This adds 60,000 jobs to the job engine queue to recalculate the 60k Contacts linked to D1. Now, without waiting, I go to D2 and make a change that forces the ARP in that record to recalculate. After I save record D2, a new job to recalculate D2 and the other 30,000 Contacts records is added to the job engine queue. But record D2 will not be recalculated instantly, because the first set of 60k records has not been recalculated yet and the recalculation of the D2 record is still sitting in the queue.
Unfortunately, there is no good solution available at this point. However, here is what you can do:
- review and minimize inheritance
- review and minimize relationships where 1 record references 1000+ records
- modify the architecture to break inheritance and relationships, replacing them with Archer-to-Archer data feeds where possible
- add more "recalculation" power to your application server(s); you can configure your web servers to process recalculation jobs as well, if they are not already utilized past a certain point, and add more job slots
Tanveer, I hope this helps. Good luck!
Hi, and happy new year to all OptaPlanner users,
We have a requirement to plan tours. These tours contain chained and time-windowed activities (deliveries) executed by a weekly changing number of trucks.
The start time of a single tour can vary and depends on several conditions (e.g. the goods to be delivered must be produced before the tour can start; only a limited number of trucks can be served at the plant's gates at the same time; a truck must be back before starting a new tour). This means the order of tours can also vary, and time gaps can occur between the tours of a truck.
My design plan is to annotate TourStartTime as a second planning variable in OptaPlanner's VRPTW example and to assign TourStartTime to 2-hour time grains (the planning horizon is 1 week, and tours normally do not start during the night, so these time grains reflect a simplified calendar of possible tour starts).
The number of available trucks (from external logistics companies) can vary from week to week. Because of this, I wanted to plan with an 'unlimited' number of trucks, while the number of trucks per logistics company that can actually be assigned deliveries is controlled by a constraint (e.g. 'trucks_to_be_used_in_parallel').
Can anybody tell me whether this is a feasible design approach, and where I have to avoid traps (ca. 1000 deliveries/week, 40-80 trucks per day)?
Thank you
Michael
A second planning variable is possible (and might even be the best design, depending on your requirements), but it will blow up the search space, and custom coarse-grained moves might become necessary to get great results.
Instead, I'd first investigate whether the truck's TourStartTime can be made a shadow variable. For example, give all trucks a unique priority number. Then make a truck's TourStartTime a shadow variable: the soonest time the truck can leave. If there are only 3 lanes and 4 trucks want to leave, the 3 trucks with the highest priority numbers leave first (so they get the original TourStartTime, and the 4th truck gets a later one).
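A minimal plain-Java sketch of that shadow-variable logic (in OptaPlanner it would live in a VariableListener; the Truck class, the slot arithmetic, and all names here are illustrative assumptions):

    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;

    class Truck {
        final String name;
        final int priority; // unique per truck, as suggested above
        Truck(String name, int priority) { this.name = name; this.priority = priority; }
    }

    public class TourStartTimes {
        // With a limited number of gate lanes, the highest-priority trucks depart
        // first; every 'lanes' trucks, the departure shifts one slot later.
        static Map<Truck, Long> compute(List<Truck> trucks, long earliestStart,
                                        long slotLength, int lanes) {
            List<Truck> byPriority = new ArrayList<>(trucks);
            byPriority.sort(Comparator.comparingInt((Truck t) -> t.priority).reversed());
            Map<Truck, Long> start = new LinkedHashMap<>();
            for (int i = 0; i < byPriority.size(); i++) {
                start.put(byPriority.get(i), earliestStart + (long) (i / lanes) * slotLength);
            }
            return start;
        }
    }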
The system should make an entry in the database (let's say a car with a registration number).
The box in the table with the registration number has, for example, the ID ID232. I have no problem verifying the registration number of the first car that comes up in the results (the verification is done via a search that returns results from the database). The problem comes when I want to verify the next car by registration number, because the second registration number box has the same ID.
An example:
Car ID | Registration Number
1      | BS2344 <--- ID232
2      | BS3224 <--- ID232
Selenium IDE can verify the first entry, but the second verifyText will fail because it only ever checks the first box (the second box has the same ID). The only distinguishing value is an automatically incrementing ID (Car ID) that I could use, but then I would have to input it manually (and the whole point of the automation is gone). The whole test process is to create multiple cars and then verify them.
Use a loop and run the verification against the same ID as many times as there are entries in the database. Since the car code is generated randomly, a different car code appears for the same ID on each row, and you will be able to check all of them.
I hope you got my point, and I hope this answer helps you!
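If you can drop from Selenium IDE down to WebDriver code, the duplicate-ID problem becomes easy to work around: findElements returns every matching element in document order, so each row can be verified in a loop. A hedged Java sketch (the ID232 locator and the expected values come from the example above; everything else is an assumption):

    import static org.junit.Assert.assertEquals;

    import java.util.List;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.WebElement;

    public class CarVerification {
        // ID232 comes from the example above; the rest is assumed.
        // Verify every registration-number box, even though they share an ID.
        static void verifyRegistrations(WebDriver driver, List<String> expected) {
            List<WebElement> boxes = driver.findElements(By.id("ID232"));
            assertEquals(expected.size(), boxes.size());
            for (int i = 0; i < expected.size(); i++) {
                assertEquals(expected.get(i), boxes.get(i).getText());
            }
        }
    }

    // Usage: verifyRegistrations(driver, List.of("BS2344", "BS3224"));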