First of all, I am new to AnyLogic and have no idea how to approach this case. If you don't mind, please point me to a resource that covers optimization in AnyLogic thoroughly.
I have the operation process above, and I want to know the minimum number of people needed in each of shifts I and II for Process I and Process II. Should I have a variable for each resource pool, and how do I link it to the optimization objective?
Please help me and explain it step by step. Thank you!
Go to File->New->Experiment and select Optimization from the list. There, on the right side, you need to define your objective function and constraints. This topic is explained here in the AnyLogic documentation. You should have a parameter for each of the resource pools. In the optimization objective you would then say: minimize noResourcePoolProcess1, for example. A good constraint would be keeping utilization below 85%, for example; otherwise the minimization will not care whether your system is throttled.
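For example (just a sketch: the block name resourcePoolProcess1 is made up, and these are expressions typed into the experiment's Objective and Requirements fields, not code added to the model):

// Objective (minimize), where noResourcePoolProcess1 is a parameter on Main that drives the pool's capacity:
root.noResourcePoolProcess1
// Requirement evaluated at the end of each run, so the minimization cannot starve the process:
root.resourcePoolProcess1.utilization() <= 0.85

The optimizer then varies the parameter between the bounds you give it and only accepts runs that satisfy the requirement.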
Could someone please spend a few words explaining, to someone who does not come from a formal methods background, the difference between verifying a specification using Symbolic Model Checking and doing the same using Concrete Model Checking, when the search is bounded in time? I am referring to the definitions of symbolic and concrete model checking used in UPPAAL.
In particular, I wrote a program that uses the UPPAAL Java API to verify a query against a network of timed automata. If the query is satisfied, UPPAAL returns a symbolic trace to parse; if it is not, it returns something else. If the verification takes too long, I halt the verification process, return a message, and move on to the next query to verify. Everything is good so far.
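For reference, the kind of time-out wrapper described above can be done with plain java.util.concurrent; only the call inside the task is a placeholder (verifyWithUppaal is a made-up name standing for whatever UPPAAL Java API call you actually use):

import java.util.concurrent.*;

public class BoundedVerification {
    public static void main(String[] args) throws Exception {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        // Run one verification query in a separate thread so we can bound its wall-clock time.
        Callable<String> job = () -> verifyWithUppaal("A[] not deadlock");
        Future<String> result = executor.submit(job);
        try {
            System.out.println("Result: " + result.get(60, TimeUnit.SECONDS));
        } catch (TimeoutException e) {
            // Give up on this query and move on to the next one.
            // Note: cancel(true) only interrupts the Java thread; if the verifier runs in a
            // separate engine process, it must also be stopped through the API/process handle.
            result.cancel(true);
            System.out.println("Verification timed out; skipping query.");
        } finally {
            executor.shutdownNow();
        }
    }

    // Placeholder for the actual call into the UPPAAL Java API.
    static String verifyWithUppaal(String query) {
        return "satisfied";
    }
}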
Recently, I have been playing around with UPPAAL Stratego, which natively offers the possibility of choosing a maximum time or depth of exploration to bound the search. However, these options can be applied only when the verification is carried out using Concrete Model Checking.
My question is: is there any difference between halting the symbolic verification process, as I do in my Java program, and what UPPAAL Stratego does natively? In both cases I don't get an answer (or a trace), but what about the "reliability" of the exploration?
Which of the two options would be better (i.e. more complete): halting the symbolic verification or halting the concrete verification?
My understanding so far is that in symbolic model checking the possible states are represented using intervals of variables, whilst in concrete model checking variables take actual values. My view is that, in terms of "completeness", halting the symbolic verification after some time is more "complete", since the exploration of the state space happens systematically with a BFS or DFS algorithm, and if I use BFS I can be "sure" that nothing bad happens within N steps. But again, my background in model checking is not rich, so I might have gotten it completely wrong.
Also, if you could drop any reference about these strategies, it would be really appreciated.
Thanks!
According to the documentation, the tsfresh algorithm makes use of the Benjamini-Yekutieli procedure in its final step. To give you a little bit of context,
tsfresh automatically calculates a large number of time series characteristics, the so called features. Further the package contains methods to evaluate the explaining power and importance of such characteristics for regression or classification tasks.
I have tried to read the linked references, but I found them very technical. Can anybody provide a high-level description of the Benjamini-Yekutieli procedure and explain why it is needed? I would like to understand its main purpose.
Even if you don't know what FRESH is, I would still be happy to read an explanation of the Benjamini-Yekutieli test.
I just recently started learning to use OptaPlanner. Please pardon me if there are any technically inaccurate descriptions below.
Basically, I have a problem of assigning several tasks to a bunch of machines. There are precedence restrictions such that some tasks cannot be started before the end of other tasks. In addition, each task can only be run on certain machines. The target is to minimize the makespan of all these tasks.
I modeled this problem with the Chained Through Time pattern, in which each machine is the anchor. But the problem is that tasks on a certain machine might not be executed back to back because of the precedence restrictions. For example, suppose Task B can only be started after Task A completes, while Tasks A and B are executed on machines I and II respectively. Then, during the execution of Task A on machine I, if there is no other task that can run on machine II, machine II can only stay idle until Task A completes, at which point Task B can be started on it. This kind of gap is not deterministic, as it depends on the duration of Task A in this example. According to the OptaPlanner tutorial, it seems that an additional planning variable for gaps should be introduced for this kind of problem, but I am having difficulty modeling this gap variable. In general, how do I integrate the gap variable into the model when using the Chained Through Time pattern? A detailed explanation or even a simple example would be highly appreciated.
Moreover, I'm actually not sure whether the Chained Through Time pattern is suitable for modeling this kind of task assignment problem, or whether I have used an entirely inappropriate method. Could someone please shed some light on this? Thanks in advance.
I am using the Chained Through Time pattern to solve the same kind of problem as yours. To handle the precedence restriction, you can write Drools rules.
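As a complement to the Drools rules, here is a rough sketch (not code from either post: the class and field names are made up, and the annotations follow the OptaPlanner 6/7 documentation, so package paths may differ in your version) of how the "gap" can be obtained without an explicit gap planning variable: startTime is a shadow variable that a VariableListener pushes past both the previous task on the same machine and every precedence predecessor, so the machine simply stays idle until the latest predecessor finishes.

import java.util.List;
import org.optaplanner.core.api.domain.entity.PlanningEntity;
import org.optaplanner.core.api.domain.variable.AnchorShadowVariable;
import org.optaplanner.core.api.domain.variable.CustomShadowVariable;
import org.optaplanner.core.api.domain.variable.PlanningVariable;
import org.optaplanner.core.api.domain.variable.PlanningVariableGraphType;
import org.optaplanner.core.api.domain.variable.PlanningVariableReference;
import org.optaplanner.core.impl.domain.variable.listener.VariableListener;
import org.optaplanner.core.impl.score.director.ScoreDirector;

interface TaskOrMachine {            // a chain element: the Machine anchor or another Task
    Integer getEndTime();
}

class Machine implements TaskOrMachine {
    public Integer getEndTime() { return 0; }   // the machine is free from time 0
}

@PlanningEntity
class Task implements TaskOrMachine {

    int duration;                    // processing time of this task
    List<Task> predecessors;         // tasks that must finish before this one may start

    // Chained planning variable: the previous element on the same machine.
    // "machineRange" and "taskRange" refer to value range providers on the solution class (not shown).
    @PlanningVariable(valueRangeProviderRefs = {"machineRange", "taskRange"},
            graphType = PlanningVariableGraphType.CHAINED)
    TaskOrMachine previousTaskOrMachine;

    // Shadow variable: the machine (anchor) of the chain this task currently sits on.
    @AnchorShadowVariable(sourceVariableName = "previousTaskOrMachine")
    Machine machine;

    // Shadow variable: recomputed by the listener below whenever the chain changes.
    @CustomShadowVariable(variableListenerClass = StartTimeUpdatingVariableListener.class,
            sources = {@PlanningVariableReference(variableName = "previousTaskOrMachine")})
    Integer startTime;

    public Integer getEndTime() {
        return startTime == null ? null : startTime + duration;
    }
}

// Pushes startTime past both the previous element on the machine and every precedence
// predecessor, which produces exactly the idle "gap" described above. A complete listener
// would also propagate the change down the rest of the chain and to dependent tasks.
class StartTimeUpdatingVariableListener implements VariableListener<Task> {

    public void afterEntityAdded(ScoreDirector scoreDirector, Task task) { update(scoreDirector, task); }
    public void afterVariableChanged(ScoreDirector scoreDirector, Task task) { update(scoreDirector, task); }
    public void beforeEntityAdded(ScoreDirector scoreDirector, Task task) {}
    public void beforeVariableChanged(ScoreDirector scoreDirector, Task task) {}
    public void beforeEntityRemoved(ScoreDirector scoreDirector, Task task) {}
    public void afterEntityRemoved(ScoreDirector scoreDirector, Task task) {}

    private void update(ScoreDirector scoreDirector, Task task) {
        Integer start = task.previousTaskOrMachine == null
                ? null : task.previousTaskOrMachine.getEndTime();
        if (task.predecessors != null) {
            for (Task predecessor : task.predecessors) {
                Integer end = predecessor.getEndTime();
                if (end != null && (start == null || end > start)) {
                    start = end;                 // wait for the latest predecessor to finish
                }
            }
        }
        scoreDirector.beforeVariableChanged(task, "startTime");
        task.startTime = start;
        scoreDirector.afterVariableChanged(task, "startTime");
    }
}

The makespan itself is then minimized through the score rules (for example a soft constraint on the largest end time), and the restriction that a task may only run on certain machines can stay a hard Drools rule.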
For my model I have about 120 people and 650 tasks. I now want to allocate those tasks with Choco 3.3.3. For that I have a boolean matrix "assignment" of size 120x650 where there is a 1 if the task is assigned to the person and a 0 otherwise. But now I have to optimize with respect to different criteria, for example minimizing overtime, respecting the people's wishes, and so on. What is the best way to do that?
My intuition: I don't see a way to just accumulate penalties, so my idea is to have a matrix with an array of "penalties" for every person, so that if person i has overtime, penalties[i][0] has penalty 5, for example, and if he doesn't want to do the task, penalties[i][1] has penalty 4. Then I have an IntVar score that is the sum of the penalties, and I optimize over score.
Is the penalty matrix the way to go?
And how can I initialize these variables?
Is that optimizable (every feasible solution has a score) in a reasonable time with Choco?
In the nurse scheduling example this strategy was used:
solver.set(IntStrategyFactory.domOverWDeg(ArrayUtils.flatten(assignment), System.currentTimeMillis()));
What strategy should I use? Reading the Choco user guide didn't give me a good idea...
It seems from your questions that you have not yet tried to implement and test your model, so we cannot help much. Anyway:
Q1) I did not clearly understand your approach, but there may be many ways to go. It is by testing it that you will know whether it solves your problem or not. Maybe you could also use an integer variable x per task, where x = k means the task is done by resource k. You could also use a set variable to collect all the tasks of each resource.
Regarding penalties, you should formalize how they should be computed in a mathematical way (from the problem specifications) before wondering how to encode them with the library. More generally, you must make very clear and formal what you want to get before working on how to get it.
Q2) To create variables, you should use VariableFactory. Initial domains should contain all possible values (see the sketch after this answer).
Q3) It depends on the precise problem and on your model. Presumably, yes, you can get very good solutions in a very short time. If you want a mathematically optimal solution, with a proof that it is optimal, then this could take long.
Q4) It is not mandatory to specify a search strategy. Choosing the best one requires experience and benchmarking, so you should try some of them to figure out which one works best in your case. You can also add LNS (a kind of local search) to boost optimization...
Hope this helps
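To make Q1 and Q2 a bit more concrete, here is a minimal sketch along the lines of the penalty idea from the question, written against the Choco 3.3 API (the per-person capacity and all names are made up; double-check the exact factory methods against your version):

import org.chocosolver.solver.ResolutionPolicy;
import org.chocosolver.solver.Solver;
import org.chocosolver.solver.constraints.IntConstraintFactory;
import org.chocosolver.solver.search.loop.monitors.SearchMonitorFactory;
import org.chocosolver.solver.search.strategy.IntStrategyFactory;
import org.chocosolver.solver.variables.BoolVar;
import org.chocosolver.solver.variables.IntVar;
import org.chocosolver.solver.variables.VariableFactory;
import org.chocosolver.util.tools.ArrayUtils;

public class AssignmentSketch {
    public static void main(String[] args) {
        int nPersons = 120, nTasks = 650, capacity = 6;   // "capacity" (tasks per person) is made up

        Solver solver = new Solver("task assignment");

        // assignment[i][j] == 1 iff task j is assigned to person i
        BoolVar[][] assignment = VariableFactory.boolMatrix("assign", nPersons, nTasks, solver);

        // every task is assigned to exactly one person
        for (int j = 0; j < nTasks; j++) {
            BoolVar[] column = new BoolVar[nPersons];
            for (int i = 0; i < nPersons; i++) {
                column[i] = assignment[i][j];
            }
            solver.post(IntConstraintFactory.sum(column, VariableFactory.fixed(1, solver)));
        }

        // one overtime penalty per person: minimization pushes it down to max(0, load - capacity)
        IntVar[] overtime = VariableFactory.boundedArray("overtime", nPersons, 0, nTasks, solver);
        for (int i = 0; i < nPersons; i++) {
            IntVar load = VariableFactory.bounded("load_" + i, 0, nTasks, solver);
            solver.post(IntConstraintFactory.sum(assignment[i], load));
            solver.post(IntConstraintFactory.arithm(overtime[i], ">=", load, "-", capacity));
        }

        // the objective is simply the sum of all penalties
        IntVar score = VariableFactory.bounded("score", 0, nPersons * nTasks, solver);
        solver.post(IntConstraintFactory.sum(overtime, score));

        // a search strategy, a time limit, and minimization of the score
        solver.set(IntStrategyFactory.domOverWDeg(ArrayUtils.flatten(assignment), 0));
        SearchMonitorFactory.limitTime(solver, "60s");
        solver.findOptimalSolution(ResolutionPolicy.MINIMIZE, score);
    }
}

Every feasible assignment then gets a score, further penalty arrays (for example for ignored wishes) can be summed into score in the same way, and findOptimalSolution keeps improving on the best solution until the time limit is hit or optimality is proven.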
Can anyone give me some tips to make a binary integer programming model faster?
I currently have a model that runs well with a very small number of variables, but as soon as I increase the number of variables in my model, SCIP keeps running without giving me an optimal solution. I'm currently using SCIP with SoPlex to find an optimal solution.
You should have a look at the statistics (type display statistics in the interactive shell). Watch out for time-consuming heuristics that don't find a solution and try disabling them. You should also play around with the parameters to find settings better suited to your instances (a different branching rule or node selection). Without further information, though, we won't be able to help you more.
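For instance, a session in the interactive shell could look roughly like this (model.lp stands for whatever problem file you load; the exact parameter names are best checked by navigating the set menu in the shell itself):

read model.lp
optimize
display statistics
set heuristics emphasis off
set limits time 3600
optimize

display statistics lists, among other things, how much time each heuristic used and how many solutions it found, which is where you spot the expensive ones; set heuristics emphasis off (or fast / aggressive) switches most primal heuristics off or retunes them in one go, and set limits time bounds the run. Individual heuristics, branching rules and node selectors have their own entries under the set menu.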