I am developing a model to solve a MILP problem using Google OR-Tools and Python. I have a problem with t tasks. Every task i needs w_i weeks to be completed and requires p_i_t workers in each week t of its execution. There is a total number of weeks within which all tasks must be completed. I need to minimize the maximum number of workers needed in any single week (the variables to define are the starting week of every task, or similar).
There is also a constraint: once a task starts, it must run to completion without interruption. All tasks can run in parallel if needed.
Is it possible to model this problem using Google OR-Tools? I have been trying to pass Python's max function inside solver.Minimize, but it is not working. How do I implement it correctly in Google OR-Tools?
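For what it's worth, the usual MILP way to express this objective is to introduce an auxiliary variable, bound it from below by every week's total staffing, and minimize that variable instead of calling Python's max(). Below is a minimal sketch with invented data (the durations w, staffing profiles p and the 6-week horizon are made up for illustration):

```python
# Minimal sketch of the min-max linearization with OR-Tools (pywraplp).
# All data below is hypothetical.
from ortools.linear_solver import pywraplp

w = [2, 3, 2]                 # w[i]: duration of task i in weeks
p = [[4, 2], [3, 3, 1], [5, 2]]  # p[i][k]: workers needed in week k of task i
horizon = 6                   # total number of weeks available

solver = pywraplp.Solver.CreateSolver("SCIP")

# x[i, s] = 1 if task i starts in week s (only starts that fit the horizon).
x = {}
for i in range(len(w)):
    for s in range(horizon - w[i] + 1):
        x[i, s] = solver.BoolVar(f"x_{i}_{s}")

# Every task starts exactly once, hence runs to completion without interruption.
for i in range(len(w)):
    solver.Add(sum(x[i, s] for s in range(horizon - w[i] + 1)) == 1)

# Auxiliary variable: max_workers >= total workforce used in every week t.
max_workers = solver.NumVar(0, solver.infinity(), "max_workers")
for t in range(horizon):
    workload = []
    for i in range(len(w)):
        for s in range(horizon - w[i] + 1):
            if s <= t < s + w[i]:
                workload.append(p[i][t - s] * x[i, s])
    solver.Add(max_workers >= sum(workload))

# Minimize the auxiliary variable instead of calling Python's max().
solver.Minimize(max_workers)

if solver.Solve() == pywraplp.Solver.OPTIMAL:
    print("peak workers:", max_workers.solution_value())
    for (i, s), var in x.items():
        if var.solution_value() > 0.5:
            print(f"task {i} starts in week {s}")
```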
We use Python to programmatically grant authorized view / routine access for a large number of views to various datasets.
However, since this week we have been receiving the following error:
Dataset time travel window can only be modified once in 1 hours. The previous change happened 0 hours ago
This is blocking our current deployment process.
So far we have not been able to find a workaround for this error. Note that we do not touch the time travel configuration at all as part of our process.
This seems to be an issue with the BigQuery API.
Google has said that they will be rolling back the breaking change to restore functionality within the day.
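For reference, granting authorized-view access with the Python BigQuery client typically looks like the sketch below; it only updates the dataset's access entries. The project, dataset and view names are placeholders, and this is a generic illustration rather than the poster's actual deployment code:

```python
# Generic sketch: grant an authorized view on a source dataset.
from google.cloud import bigquery

client = bigquery.Client()

# Placeholder dataset and view identifiers.
source_dataset = client.get_dataset("my-project.source_dataset")
view_ref = {
    "projectId": "my-project",
    "datasetId": "reporting_dataset",
    "tableId": "my_authorized_view",
}

# Append the view to the dataset's access entries.
entries = list(source_dataset.access_entries)
entries.append(bigquery.AccessEntry(role=None, entity_type="view", entity_id=view_ref))
source_dataset.access_entries = entries

# Only the access entries are sent in the update; time travel settings are untouched.
client.update_dataset(source_dataset, ["access_entries"])
```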
So I hit a wall pretty hard with my current implementation.
I have two planning entities: a machine and a task.
The task has a custom shadow variable which calculates the task start time.
It was all working well; however, I needed to add a span when a task cannot start at the calculated time because there are no employees available (there is a fixed number of employees per machine).
To implement this last feature, after calculating the start time, if the task cannot start at that time, it searches for the next time at which there are enough employees for the task to start. It does this by looping through the ordered planned tasks of all machines and checking whether, at the end of each of those tasks, enough employees are free for this task.
The problem with this is that the span does not go away when the task whose end it was waiting for changes position.
I'll leave an image trying to explain this:
Is there a better way to add these spans? Or, if I'm in the right direction, is there a way to make sure OptaPlanner invalidates the start times and recalculates them when such a move occurs?
Thank you in advance!
Update: I've managed to trigger the update for every entity after one changes; however, this gets slow very quickly. I do not see any other way around it, since a change to one entity may cause a shortage of employees for an entity on another machine. Does anyone have another idea for this issue?
The problem I'm trying to solve can be expressed as:
A team of N astronauts is preparing for the re-entry of their space shuttle into the atmosphere. There is a set of tasks that must be accomplished by the team before re-entry. Each task must be carried out by exactly one astronaut, and certain tasks cannot be started until other tasks are completed. Which tasks should be performed by which astronaut, and in which order, to ensure that the entire set of tasks is accomplished as quickly as possible?
So:
Each astronaut is able to perform every task
Some tasks might depend on each other (e.g. task i must be completed before task j)
Tasks do not have a specific start time or deadline
The objective is to minimize the makespan (time it takes to complete all tasks)
I have found solutions to similar problems (e.g. the Job Shop problem, RCPSP, etc.), but none of those problems completely captures the problem described above, as they don't involve allocating workers to tasks, i.e. the solution assumes specific workers must work on specific tasks.
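This is not from the question itself, but one way to capture both the precedences and the fact that any astronaut can perform any task is a constraint-programming model with interval variables, a cumulative constraint whose capacity is the number of astronauts, and a makespan objective. A minimal sketch with invented durations, precedences and N = 2, e.g. using OR-Tools CP-SAT:

```python
# Sketch of the astronaut scheduling problem with OR-Tools CP-SAT.
# Durations, precedences and the number of astronauts are invented.
from ortools.sat.python import cp_model

durations = [3, 2, 4, 2, 3]             # hypothetical task durations
precedences = [(0, 2), (1, 2), (2, 4)]  # task a must finish before task b starts
num_astronauts = 2
horizon = sum(durations)

model = cp_model.CpModel()

starts, ends, intervals = [], [], []
for i, d in enumerate(durations):
    start = model.NewIntVar(0, horizon, f"start_{i}")
    end = model.NewIntVar(0, horizon, f"end_{i}")
    intervals.append(model.NewIntervalVar(start, d, end, f"interval_{i}"))
    starts.append(start)
    ends.append(end)

# Precedence constraints.
for a, b in precedences:
    model.Add(starts[b] >= ends[a])

# Each task occupies exactly one astronaut while it runs. With N identical
# astronauts this is a cumulative resource of capacity N; the concrete
# task-to-astronaut assignment can be recovered afterwards by greedily
# assigning non-overlapping tasks to the same astronaut.
model.AddCumulative(intervals, [1] * len(durations), num_astronauts)

# Minimize the makespan.
makespan = model.NewIntVar(0, horizon, "makespan")
model.AddMaxEquality(makespan, ends)
model.Minimize(makespan)

solver = cp_model.CpSolver()
if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    print("makespan:", solver.Value(makespan))
    for i in range(len(durations)):
        print(f"task {i}: start {solver.Value(starts[i])}, end {solver.Value(ends[i])}")
```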
I am running hyperparameter tuning using Google Cloud ML. I am wondering if it is possible to benefit from (possibly partial) previous runs.
One application would be:
I launch a hyperparameter tuning job
I stop it because I want to change the type of cluster I am using
I want to restart my hypertune job on a new cluster, but I want to benefit from previous runs I already paid for.
or another application:
I launch a hypertune campaign
I want to extend the number of trials afterwards, without starting from scratch
and then, for instance, I want to remove one degree of freedom (e.g. training_rate), focusing on the other parameters
Basically, what I need is "how can I have a checkpoint for hypertune?"
Thx!
Yes, this is an interesting workflow -- it's not exactly possible with the current set of APIs, so it's something we'll need to consider in future planning.
However, I wonder if there are some workarounds that can pan out to approximate your intended workflow, right now.
Start with a higher number of trials, given that you can cancel a job but not extend one.
Finish a training job early based on some external input - e.g. once you've arrived at a fixed training_rate, you could record that in a file in GCS and mark subsequent trials with a different training rate as infeasible, so those trials end fast.
To go further, e.g. to launch another job (to add runs, or change the scale tier), you could potentially try reusing the same output directory and, this time, look up previous results for a given set of hyperparameters together with the objective metric (you'll need to record them somewhere you can look them up - e.g. create GCS files to track the trial runs), so that the particular trial completes early and training moves on to the next trial. Essentially rolling your own "checkpoint for hypertune"; a rough sketch of that lookup pattern is below.
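Here is one possible shape of that last workaround, assuming one GCS object is written per evaluated hyperparameter combination; the bucket name, file layout, metric tag and the train_and_evaluate stub are all placeholders, not an official API:

```python
# Sketch: complete a trial early if this hyperparameter combination was
# already evaluated in a previous hypertune job, by looking it up in GCS.
import json
import hypertune                      # pip install cloudml-hypertune
from google.cloud import storage

BUCKET = "my-hypertune-bucket"        # placeholder bucket name

def _cache_blob(learning_rate, batch_size):
    # One JSON object per hyperparameter combination (placeholder layout).
    path = f"hypertune_cache/lr={learning_rate}_bs={batch_size}.json"
    return storage.Client().bucket(BUCKET).blob(path)

def cached_metric(learning_rate, batch_size):
    """Return the previously recorded objective, or None if never run."""
    blob = _cache_blob(learning_rate, batch_size)
    if blob.exists():
        return json.loads(blob.download_as_text())["objective"]
    return None

def train_and_evaluate(learning_rate, batch_size):
    """Placeholder for the real training/evaluation loop."""
    ...  # train the model and return the objective metric
    return 0.0

def run_trial(learning_rate, batch_size):
    objective = cached_metric(learning_rate, batch_size)
    if objective is None:
        objective = train_and_evaluate(learning_rate, batch_size)
        _cache_blob(learning_rate, batch_size).upload_from_string(
            json.dumps({"objective": objective}))
    # Report the (possibly cached) metric, so trials whose result is already
    # known from a previous job finish immediately.
    hypertune.HyperTune().report_hyperparameter_tuning_metric(
        hyperparameter_metric_tag="objective",   # placeholder tag
        metric_value=objective,
        global_step=1)
```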
As I mentioned, all of these are workarounds, and exploratory thoughts on what might be possible from your end with current capabilities.
I am writing an economics project in AnyLogic. I want to sum all the money that flows between two stocks; in fact, I need to sum all the values that a flow takes during the simulation, until a specific condition is met. How can I do that?
Thanks
Well,
you need to be careful to understand how system dynamics works: it is a continuous process!
If you want to track your flows, the easiest option is to use a dataset object which tracks the flow at specific points in time.
For example, a flow "AdoptionRate" can be tracked by a dataset "AdoptersDS" every 0.1 minutes.
However, be aware that this tracks the flow at specific points in time. You can set up similar datasets for your stocks as well.
Alternatively, you could write a recurring event which stores the values at specific points in time into your built-in database.