What optimization algorithm is most suitable for timetable rescheduling?

I'm working on a project where a university course is represented as a to-do list, where:
the course owner (the teacher of the course) can add tasks, each containing a URL to the resource that needs to be learned and two datetime fields: when to start and when to complete the task;
a course subscriber (a student) can mark tasks as complete or incomplete, and these marks are saved individually for each account.
If a student marks a task as complete, his account and the element he marked are shown in the course activity tab, where the teacher can:
initiate a conversation with him in a JavaScript-based chat;
evaluate the result of the conversation.
What optimization algorithm would you recommend for timetable rescheduling here (changing the datetime fields of a to-do element if the student procrastinates)?
As inputs, we can use the student's activity on the resource, the fact that he marked the task as complete, and whether or not he clicked the URL on the to-do element leading to the external learning material (for example, a Google Book).
For example, are genetic algorithms suitable for this model, and what pitfalls do they have? https://medium.com/#vijinimallawaarachchi/time-table-scheduling-2207ca593b4d

I'm not sure I completely understand your problem, but it sounds like you have a feasible timetable to begin with and you just need to improve it.
If so, genetic algorithms will work very well, but I think representing everything as binary 'chromosomes' like in the link might not be practical.
There are many other ways you can represent a timetable, such as a 2D array, or giving each event a slot number.
You could look into algorithms such as Tabu Search, Simulated Annealing, Great Deluge and Hill Climbing. They are all based on similar ideas, but some work better on some problems than others. For example, if you have a very rough search space, Simulated Annealing won't be the best choice, and Hill Climbing usually only finds a local optimum.
The general architecture of the algorithms mentioned above, and of many other metaheuristics, is: select a neighbouring solution using a move operator (e.g. swapping the times of two or three events, or swapping the rooms of two events); check that the move doesn't violate any hard constraints; then use an acceptance strategy, such as Simulated Annealing or Great Deluge, to determine whether the move is accepted. If it is, keep the new solution and repeat these steps until the termination criterion is met. This can be a maximum time, a number of iterations, or no improving move having been found in x iterations.
While this is running, keep a log of the 'best' solution, so that when the algorithm terminates you have the best solution found. You can determine what counts as 'best' by how many soft constraints the timetable violates.
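In code, that architecture might look something like the following minimal sketch, assuming a simulated-annealing acceptance strategy; the Timetable interface, move operator, and scoring are placeholders you would implement for your own problem:

```java
import java.util.Random;

// Minimal sketch of the loop described above. The Timetable interface,
// the move operator, and the scoring are placeholders for your problem.
public class LocalSearch {

    interface Timetable {
        Timetable randomNeighbour(Random random); // apply one move operator
        boolean violatesHardConstraints();
        int softConstraintViolations();           // lower is better
    }

    static Timetable solve(Timetable initial, long maxIterations) {
        Random random = new Random();
        Timetable current = initial;
        Timetable best = initial;
        double temperature = 100.0;

        for (long i = 0; i < maxIterations; i++) {
            Timetable candidate = current.randomNeighbour(random);
            if (candidate.violatesHardConstraints()) {
                continue; // reject moves that break hard constraints
            }
            int delta = candidate.softConstraintViolations()
                      - current.softConstraintViolations();
            // Always accept improvements; accept worsening moves with a
            // probability that shrinks as the temperature cools.
            if (delta <= 0 || random.nextDouble() < Math.exp(-delta / temperature)) {
                current = candidate;
            }
            // Keep a log of the best solution found so far.
            if (current.softConstraintViolations() < best.softConstraintViolations()) {
                best = current;
            }
            temperature *= 0.9999; // simple geometric cooling schedule
        }
        return best;
    }
}
```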
Hope this helps!


Looking for a base for a constraint solving problem

I am new to OptaPlanner, but I have a reasonable understanding of constraint solving, albeit a somewhat dated one.
I have a problem I want to model. On the one hand, the National Grid has requirements to save electricity between defined time slots on specific days in specific locations (postcodes). On the other, individuals with static or mobile batteries charge their batteries at some point during a 24-hour cycle and need to get a specific amount of charge into those batteries. I need to model a set of constraints at the top (the grid) and at the bottom (the individuals) to ensure the individuals get what they need and the grid saves what it requires.
What model should I pick and why?
I am just starting this so I have not tried anything yet. I would prefer a Java/SpringBoot solution.
Many thanks for any help.
Steve T
First read the domain modeling guide in the docs to understand my answer below.
https://www.optaplanner.org/docs/optaplanner/latest/design-patterns/design-patterns.html#domainModelingGuide
I think the maintenance scheduling quickstart might be a good start. Code is here:
https://github.com/kiegroup/optaplanner-quickstarts/tree/stable/use-cases/maintenance-scheduling
Motivation: it sounds like there could be gaps between charging sessions at the charging stations, so a chained-through-time model does not fit. You're not solving a VRP anyway. So I suspect a timegrain model is the right fit, which is what the maintenance scheduling quickstart actually uses.
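For a feel of what that looks like, here is a rough, hypothetical sketch of a timegrain-based planning entity. These are not the quickstart's actual classes; Battery and TimeGrain are placeholder names you would define in your own domain:

```java
import org.optaplanner.core.api.domain.entity.PlanningEntity;
import org.optaplanner.core.api.domain.variable.PlanningVariable;

// Hypothetical sketch: each charging session picks a starting time grain
// (e.g. a 15-minute block of the 24-hour cycle) as its planning variable.
@PlanningEntity
public class ChargingSession {

    private Battery battery;      // the individual's battery (problem fact)
    private int durationInGrains; // grains needed to reach the required charge

    // OptaPlanner assigns this from a value range declared on the solution class.
    @PlanningVariable(valueRangeProviderRefs = "timeGrainRange")
    private TimeGrain startingTimeGrain;

    // getters and setters omitted for brevity
}

// Placeholder problem-fact classes, to be fleshed out for the real domain.
class Battery {}
class TimeGrain {}
```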

OptaPlanner Employee Rostering with VRPTW

I am working on a use case which is a combination of the Nurse Rostering example and a VRP problem. In isolation, I understand and can tweak both to a certain extent, but I'm not quite sure how to merge them.
To illustrate my use case further, I am trying to schedule nurses (considering skills, contract, and preferences) to patients' homes, located within a 20-40 mile radius.
As an example, a nurse with the "insulin" skill would need to travel to a patient, arriving within a certain time window, perform a task for 15 minutes, then travel to another patient, perform the same task, and continue until her 8-hour shift is complete. There are multiple skills and tasks to be considered.
I reviewed the Nurse Rostering example and it is a great fit for my use case, but I don't see how to modify it to account for traveling between "shift locations". The VRPTW example is again a great fit, but it does not account for skills, contracts, and preferences.
Any thoughts on how to go about modelling this problem would be highly appreciated.
Even if my answer is possibly still too general for you to use, I would combine both models in a way more or less similar to the following:
use the nurse rostering example's model as the starting point;
in that model, include all the intervention locations' properties; this includes at least the intervention location data, the time window in which to intervene at the location, and the skill(s) needed to do the intervention locally;
combine both models' constraints, not forgetting at least the constraint(s) penalising a visit to a location with an insufficient skillset (sounds like a hard constraint) and the constraint(s) penalising lengthy travel times/distances.
I also suspect a more intensive usage of shadow variables due to the combination of both models.
This may still sound too vague, but that is the direction I would work towards.
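As a purely illustrative sketch of that direction (all class and field names are hypothetical, not OptaPlanner's or the examples' actual code), the combined planning entity might look like:

```java
import org.optaplanner.core.api.domain.entity.PlanningEntity;
import org.optaplanner.core.api.domain.variable.PlanningVariable;

// Hypothetical combined entity: the VRPTW side contributes the location
// and time window, the rostering side contributes the nurse with skills,
// contract and preferences.
@PlanningEntity
public class Visit {

    private Location location;          // patient's home (problem fact)
    private TimeWindow timeWindow;      // allowed arrival window (problem fact)
    private Skill requiredSkill;        // e.g. the "insulin" skill
    private int serviceDurationMinutes; // e.g. 15

    // Planning variables: which nurse makes the visit and when it starts.
    @PlanningVariable(valueRangeProviderRefs = "nurseRange")
    private Nurse nurse;

    @PlanningVariable(valueRangeProviderRefs = "timeGrainRange")
    private TimeGrain startingTimeGrain;

    // A ConstraintProvider elsewhere would penalise: a nurse lacking the
    // required skill (hard), arrival outside the time window (hard), and
    // long travel between a nurse's consecutive visits (soft).

    // getters and setters omitted for brevity
}

// Placeholder domain classes for the sketch.
class Location {}
class TimeWindow {}
class Skill {}
class Nurse {}
class TimeGrain {}
```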

How to encode inputs like artist or actor

I am currently developing a neural network that tries to make a suggestion for a specific user based on his recent activities. I will try to illustrate my problem with an example.
Let's say I'm trying to suggest new music to a user based on the music he recently listened to. Since people often listen to artists they know, one input of such a neural network might be the artists he recently listened to.
The problem is the encoding of this feature. As the ID of the artist in the database has no meaning for the neural network, the only other option that comes to my mind is one-hot encoding every artist, but that doesn't sound too promising either, given the thousands of different artists out there.
My question is: how can I encode such a feature?
The approach you describe is called content-based filtering. The intuition is to recommend to customer A items similar to previous items liked by A. An advantage of this approach is that you only need data about one user, which tends to result in a "personalized" recommendation. But there are disadvantages: the construction of features (the problem you're dealing with now), the difficulty of building an interesting profile for new users, and the fact that it will never recommend items outside a user's content profile. As for the difficulty of representation, features are usually handcrafted and abstracted afterwards. For music specifically, features would be things like 'artist', 'genre', etc., and abstraction into informative keywords (if necessary) is widely done using tf-idf.
This may go outside the scope of the question, but I think it is also worth mentioning an alternative approach: collaborative filtering. Rather than similar items, here we instead try to find users with similar tastes and recommend products that they liked. The only data you need here is some sort of user rating or value of how much they (dis)liked some items - eliminating the need for feature design. Furthermore, since we analyze similar persons rather than items for recommendation, this approach tends to also work well for new users. The general flow for collaborative filtering looks like:
Measure similarity between user of interest and all other users
(optional) Select a smaller subset consisting of most similar users
Predict ratings as a weighted combination of "nearest neighbors"
Return the highest rated items
A popular approach for the similarity weighting in the algorithm is based on the Pearson correlation coefficient.
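As a minimal sketch of that similarity step (assuming ratings are stored per user as item-to-rating maps, which is an assumption of this example, not a given), the Pearson coefficient over co-rated items could be computed like this:

```java
import java.util.Map;

// Minimal sketch: Pearson correlation between two users' rating maps,
// computed over the items both users have rated.
public class PearsonSimilarity {

    static double pearson(Map<String, Double> a, Map<String, Double> b) {
        // First pass: means over the co-rated items.
        int n = 0;
        double sumA = 0, sumB = 0;
        for (String item : a.keySet()) {
            if (b.containsKey(item)) {
                n++;
                sumA += a.get(item);
                sumB += b.get(item);
            }
        }
        if (n == 0) return 0.0; // no overlap, no evidence of similarity
        double meanA = sumA / n, meanB = sumB / n;

        // Second pass: covariance and variances of the co-rated items.
        double cov = 0, varA = 0, varB = 0;
        for (String item : a.keySet()) {
            if (b.containsKey(item)) {
                double da = a.get(item) - meanA;
                double db = b.get(item) - meanB;
                cov += da * db;
                varA += da * da;
                varB += db * db;
            }
        }
        if (varA == 0 || varB == 0) return 0.0; // a user rated everything equally
        return cov / Math.sqrt(varA * varB);    // result in [-1, 1]
    }

    public static void main(String[] args) {
        Map<String, Double> alice = Map.of("songA", 5.0, "songB", 3.0, "songC", 4.0);
        Map<String, Double> bob   = Map.of("songA", 4.0, "songB", 2.0, "songC", 5.0);
        System.out.println(pearson(alice, bob));
    }
}
```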
Finally, something to consider here is the need for performance/scalability: calculating pairwise similarities for millions of users is not exactly lightweight on a normal computer.

ACO Pheromone update

I'm working on ACO and am a little confused about the probability of choosing the next city. I have read some papers and books, but the idea of choosing is still unclear to me. I am looking for a simple explanation of how this path building works.
Also, how do the heuristic and pheromone values come into this decision making?
Since we have the same pheromone value on every edge in the beginning, and the heuristic (closeness) values remain constant, how will different ants make different decisions based on these values?
Maybe it is too late to answer the question, but I have been working with ACO lately and it was also a bit confusing for me. I didn't find much information on Stack Overflow about ACO when I needed it, so I have decided to answer this question, since this info may be useful for people working on ACO now or in the future.
Swarm intelligence algorithms are a set of techniques based on the emergent social and cooperative behaviour of organisms grouped in colonies, swarms, etc.
The collective intelligence algorithm Ant Colony Optimization (ACO) is an optimization algorithm inspired by ant colonies. In nature, ants of some species initially wander randomly until they find a food source and return to their colony laying down a pheromone trail. If other ants find this trail, they are more likely not to keep travelling at random, but instead to follow the trail, reinforcing it if they eventually find food.
In the Ant Colony Optimization algorithm, the agents (ants) are placed on different nodes (usually the number of ants equals the number of nodes). Each agent chooses the next node (city) using the equation known as the transition rule, which gives the probability for ant k to go from city i to city j on the t-th tour.
In the equation, τ_ij represents the pheromone trail and η_ij the visibility between the two cities, while α and β are adjustable parameters that control the relative weight of trail intensity and visibility.
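For reference, the transition rule in its usual textbook form (as in Dorigo's Ant System) is:

$$
p_{ij}^{k}(t) = \frac{\left[\tau_{ij}(t)\right]^{\alpha}\,\left[\eta_{ij}\right]^{\beta}}{\sum_{l \in \mathrm{allowed}_k} \left[\tau_{il}(t)\right]^{\alpha}\,\left[\eta_{il}\right]^{\beta}}
$$

where $\mathrm{allowed}_k$ is the set of cities ant $k$ has not yet visited; the probability of moving to an already-visited city is zero.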
At the beginning, there is the same amount of pheromone on all the edges. Based on the transition rule, which in turn is based on the pheromone and the visibility (typically the inverse of the distance between nodes), some paths will be more likely to be chosen than others.
When the algorithm runs, each agent (ant) performs a tour (visits each node). The best tour found so far is then reinforced with a new quantity of pheromone, which makes that tour more likely to be chosen by the ants next time.
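To answer the "how do different ants make different decisions" part concretely: the transition rule is sampled, not maximised. A minimal sketch of that roulette-wheel step (the pheromone and visibility matrices are assumed to be filled in elsewhere) could look like:

```java
import java.util.List;
import java.util.Random;

// Minimal sketch of the transition rule as roulette-wheel selection.
// pheromone[i][j] and visibility[i][j] (e.g. 1/distance) are assumed to
// be maintained elsewhere; alpha and beta are the usual tunable exponents.
public class AntStep {

    static final Random RANDOM = new Random();

    static int chooseNextCity(int current, List<Integer> unvisited,
                              double[][] pheromone, double[][] visibility,
                              double alpha, double beta) {
        // Unnormalised attractiveness of each candidate city.
        double[] weights = new double[unvisited.size()];
        double total = 0.0;
        for (int idx = 0; idx < unvisited.size(); idx++) {
            int j = unvisited.get(idx);
            weights[idx] = Math.pow(pheromone[current][j], alpha)
                         * Math.pow(visibility[current][j], beta);
            total += weights[idx];
        }
        // Spin the roulette wheel: even with equal pheromone everywhere,
        // different random draws send different ants down different edges.
        double r = RANDOM.nextDouble() * total;
        for (int idx = 0; idx < weights.length; idx++) {
            r -= weights[idx];
            if (r <= 0) return unvisited.get(idx);
        }
        return unvisited.get(unvisited.size() - 1); // numerical fallback
    }
}
```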
You can find more information about ACO in the following links:
http://dataworldblog.blogspot.com.es/2017/06/ant-colony-optimization-part-1.html
http://dataworldblog.blogspot.com.es/2017/06/graph-optimization-using-ant-colony.html
http://hdl.handle.net/10609/64285
Regards,
Ester

I am looking for a radio advertising scheduling algorithm / example / experience

Tried doing a bit of research on the following with no luck. Thought I'd ask here in case someone has come across it before.
I help a volunteer-run radio station with their technology needs. One of the main things that has come up is that they would like to schedule their advertising programmatically.
There are a lot of neat and complex rule engines out there for advertising, but all we need is something pretty simple (along with any experience that's worth thinking about).
I would like to write something in SQL if possible to deal with these entities. Ideally if someone has written something like this for other advertising mediums (web, etc.,) it would be really helpful.
Entities:
Ads (consisting of a category, # of plays per day, start date, end date or permanent play)
Ad Category (Restaurant, Health, Food store, etc.)
To over-simplify the problem, this will be an elegant SQL statement. Getting there... :)
I would like to be able to generate a playlist per day using the above two entities where:
No two ads in the same category are played within x number of ads of each other.
(nice to have) high promotion ads can be pushed
At this time, there are no "ad slots" to fill. There is no "time of day" considerations.
We queue up the ads for the day and go through them between songs/shows, etc. We know how many per hour we have to fill, etc.
Any thoughts/ideas/links/examples? I'm going to keep on looking and hopefully come across something instead of learning it the long way.
Very interesting question, SMO. Right now it looks like a constraint programming problem, because you aren't looking for an optimal solution, just one that satisfies all the constraints you have specified. In response to those who wanted to close the question, I'd say they need to check out constraint programming a bit. It's far closer to Stack Overflow than any operations research site.
Look into constraint programming and scheduling - I'll bet you'll find an analogous problem toot sweet !
Keep us posted on your progress, please.
Ignoring the T-SQL request for the moment since that's unlikely to be the best language to write this in ...
One of my favorite approaches to tough 'layout' problems like this is simulated annealing. It's a good approach because you don't need to think about HOW to solve the actual problem: all you define is a measure of how good the current layout is (a score, if you will), and then you allow random changes that either increase or decrease that score. Over many iterations you gradually reduce the probability of moving to a worse score. This 'simulated annealing' approach reduces the probability of getting stuck in a local minimum.
So in your case the scoring function for a given layout might be based on the distance to the next advert in the same category and the distance to another advert of the same series. If you later have time of day considerations you can easily add them to the score function.
Initially you allocate the adverts sequentially, evenly or randomly within their time windows (it doesn't really matter which). Now you pick two slots and consider what happens to the score when you switch the contents of those two slots. If either advert moves out of its allowed range, you can reject the change immediately. If both are still in range, does the swap move you to a better overall score? Initially you accept changes randomly even if they make the score worse, but over time you reduce the probability of that happening, so that by the end you are moving monotonically towards a better score.
Easy to implement, easy to add new 'rules' that affect score, can easily adjust run-time to accept a 'good enough' answer, ...
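As a minimal sketch of such a scoring function (the names and the gap rule are illustrative, based on the constraint stated in the question):

```java
// Minimal sketch of a scoring function for a day's playlist, under the
// stated rule: penalise two ads of the same category within minGap
// positions of each other. Lower scores are better; an annealing loop
// would swap two slots and keep or reject the swap based on this score.
public class PlaylistScore {

    record Ad(String name, String category) {}

    static int score(Ad[] playlist, int minGap) {
        int penalty = 0;
        for (int i = 0; i < playlist.length; i++) {
            // Only look ahead within the forbidden window.
            for (int j = i + 1; j < Math.min(i + 1 + minGap, playlist.length); j++) {
                if (playlist[i].category().equals(playlist[j].category())) {
                    penalty++; // same category too close together
                }
            }
        }
        return penalty;
    }

    public static void main(String[] args) {
        Ad[] day = {
            new Ad("Joe's Pizza", "Restaurant"),
            new Ad("City Gym", "Health"),
            new Ad("Corner Deli", "Restaurant"), // within 2 slots of Joe's Pizza
        };
        System.out.println(score(day, 2)); // prints 1
    }
}
```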
Another approach would be to use a genetic algorithm; see this similar question: Best Fit Scheduling Algorithm. This is likely harder to program, but will probably converge more quickly on a good answer.