Flat reservations optimization system

I have a booking system and I need to write a script that optimizes the reservations.
When a customer books a flat, the system assigns the first available flat. The problem is that after a few reservations my "grid" becomes fragmented.
Grid example: (image of the reservation grid)
In practice I need to minimize the white space so that I can accept the maximum number of reservations.
My question is: is there a known problem that fits mine? I had thought of some knapsack problem variations.
I can provide more info if needed.
Thanks.

This is a scheduling problem. One very important question is: can you reassign a reservation to a different flat once it has been made?
If the answer is yes, you will find a solution if and only if no day has more reservations than you have flats: simply take the first idle slot on the very first day d1, and if there is a conflict with a future reservation, reallocate that future reservation by taking the first idle slot on its very first day d2 (note that d2 > d1). The algorithm converges because the days on which a reallocation is needed form a strictly increasing sequence.
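As a minimal sketch of the reassignment-allowed case: rather than the cascading reallocation described above, an equivalent greedy (classic interval partitioning) assigns reservations in order of start day and fails exactly when some day has more overlapping reservations than flats. The function names and half-open day intervals below are my own assumptions, not part of the question:

```python
import heapq

def assign_flats(reservations, num_flats):
    """reservations: list of (start_day, end_day) with end_day exclusive.
    Returns {reservation_index: flat_index}, or None if some day is over capacity."""
    # min-heap of (day the flat becomes free again, flat id)
    free_at = [(0, f) for f in range(num_flats)]
    heapq.heapify(free_at)
    assignment = {}
    # handle reservations in order of start day
    for i, (start, end) in sorted(enumerate(reservations), key=lambda x: x[1][0]):
        day_free, flat = heapq.heappop(free_at)
        if day_free > start:
            return None  # every flat is still occupied on this day -> infeasible
        assignment[i] = flat
        heapq.heappush(free_at, (end, flat))
    return assignment
```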
If the answer is no, we enter a trickier world where your algorithm has to guess what the future reservations will be. A good heuristic, I think, would be to score the placements. For instance, you can check how many idle slots you leave before and after the reservation, and take the option that leaves as few empty slots as possible.
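A hedged sketch of that scoring idea, assuming each flat's bookings are kept as sorted, non-overlapping (start, end) intervals; boundary handling (empty flats, ties) is a policy choice I have left deliberately simplistic:

```python
def best_flat(new_start, new_end, bookings_by_flat):
    """Return the flat whose placement leaves the fewest idle days directly
    before and after the new reservation (smaller gap = tighter packing)."""
    best_gap, best_flat_id = None, None
    for flat_id, bookings in bookings_by_flat.items():
        # skip flats where the new interval would overlap an existing booking
        if any(s < new_end and new_start < e for s, e in bookings):
            continue
        prev_end = max((e for s, e in bookings if e <= new_start), default=new_start)
        next_start = min((s for s, e in bookings if s >= new_end), default=new_end)
        gap = (new_start - prev_end) + (next_start - new_end)
        if best_gap is None or gap < best_gap:
            best_gap, best_flat_id = gap, flat_id
    return best_flat_id  # None if the reservation fits nowhere
```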

Related

Party people come and go: How to use Bayes to know who is in or out?

story/problem
Imagine there are N party guests and a bouncer guarding the only door. Before the party starts, all the guests are outside (that's for sure). However, once the party starts, people come and go. Each time such an event occurs, the bouncer makes a note for each potential guest of the likelihood that it could have been him or her. One could call this score the bouncer's classification confidence. For each event, this yields a list of N candidate scores that adds up to one. All in all, T events have been observed until the next morning.
Unfortunately, some valuables were stolen that night. To narrow down the group of suspects, the host checked the bouncer's notes. However, he soon found contradictory and therefore unreliable data: for example, according to the data, the same person entered the place with high confidence twice in a row, which we know is impossible. He therefore attempts to improve the quality of the classifications by cleaning the contradictions out of the data.
where I am stuck/what I tried
First I solved this by formulating a linear program and solving it in Python. However, as the number of guests increases, this soon becomes computationally infeasible. Therefore I want to use Bayes' theorem to compute the probability of each guest's presence.
Since we have prior beliefs about the guests attending or not, and repeated information updates, I felt that a Bayesian approach would be the natural way to tackle this problem.
Even though this approach seemed intuitive to me, I faced some problems:
(1) The problem of being 100% sure of the first state
Being 100% sure that a person is outside at time t=0 seems to "break" the Bayesian formula. Let A be person A and data be the information available from the bouncer.
If the prior P(A present) is either 0 or 1, the formula collapses to 0 or 1 respectively. Am I wrong here?
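Writing out Bayes' rule for a single guest A and the bouncer's data D makes that collapse explicit:

$$P(A \mid D) = \frac{P(D \mid A)\,P(A)}{P(D \mid A)\,P(A) + P(D \mid \neg A)\,\bigl(1 - P(A)\bigr)}$$

With P(A) = 0 the numerator is 0, so the posterior is 0 whatever D says; with P(A) = 1 the second term of the denominator vanishes and the posterior is 1.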
(2) The problem of using all available information
I did not find a way to use not only the data observed before time step t but also information gathered at a later time. E.g. given three candidates, the bouncer makes three observations with confidences c_t:
c_out→in,1 = (0.5, 0.4, 0.1)^T,
c_out→in,2 = (0.4, 0.4, 0.2)^T,
c_out→in,3 = (0.9, 0.05, 0.05)^T.
Using only the information available up to t, the most plausible solution would be (A entered, B entered, C entered). However, using all the information available, (B entered, C entered, A entered) is a better solution.
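For what it's worth, a tiny brute-force sketch of that "use all the information" idea: treat the three events as an assignment problem and pick the permutation of guests that maximizes the product of the bouncer's confidences (the independence assumption and all names below are mine, not part of the question):

```python
from itertools import permutations

guests = ["A", "B", "C"]
# bouncer confidences per event, columns in guest order (A, B, C)
confidences = [
    (0.5, 0.4, 0.1),    # event 1
    (0.4, 0.4, 0.2),    # event 2
    (0.9, 0.05, 0.05),  # event 3
]

def best_joint_assignment(conf):
    """Score each assignment of one distinct guest per event by the product
    of the matching confidences and keep the best one."""
    best_perm, best_score = None, -1.0
    for perm in permutations(range(len(guests))):
        score = 1.0
        for event, guest in enumerate(perm):
            score *= conf[event][guest]
        if score > best_score:
            best_perm, best_score = perm, score
    return [guests[g] for g in best_perm], best_score

print(best_joint_assignment(confidences))  # (['B', 'C', 'A'], 0.072)
```

Maximizing the product is the same as maximizing the sum of log-confidences, so for larger N a linear-assignment solver (e.g. the Hungarian algorithm via scipy.optimize.linear_sum_assignment) does this in polynomial time instead of brute force.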
(3) The problem of noisy observations.
I was thinking of using a Bernoulli distribution as a prior; however, I do not observe binary events but confidences.
Since I'm stuck, I'm looking forward to your help on Bayesian reasoning and ultimately finding the thief.

What optimization algorithm is more suitable for timetable rescheduling?

I'm working on a project where a university course is represented as a to-do list, where:
the course owner (the teacher of the course) can add tasks (containing the URL of the resource to be learned and two datetime fields: when to start and when to complete the task)
a course subscriber (student) can mark tasks as complete or not complete, and these marks are saved individually for each account.
If a student marks a task as complete, his account and the element he marked are shown in the course activity tab, where the teacher can:
initiate a conversation in JavaScript-based chat with him
evaluate the result of the conversation
What optimization algorithm would you recommend for timetable rescheduling here (changing the datetime fields of a to-do element if the student procrastinates)?
As input we can use the student's activity on the resource, the fact that he marked the task as complete, and whether or not he clicked the URL on the to-do element leading to the external learning material (for example a Google Book).
For example, are genetic algorithms suitable for this model and what pitfalls do they have: https://medium.com/#vijinimallawaarachchi/time-table-scheduling-2207ca593b4d ?
I'm not sure I completely understand your problem but it sounds like you have a feasible timetable to begin with and you just need to improve it.
If so, genetic algorithms will work very well, but I think representing everything as binary 'chromosomes', as in the link, might not be practical.
There are many other ways you can represent a timetable, such as in a 2D array, or giving an event a slot number.
You could look into algorithms such as Tabu Search, Simulated Annealing, Great Deluge and Hill Climbing. They are all based on similar ideas, but some work better on some problems than others. For example, if you have a very rough search space, simulated annealing won't be the best, and hill climbing usually only finds a local optimum.
The general architecture of the algorithms mentioned above, and of many other metaheuristics, is: select a neighbouring solution using a move operator (e.g. swapping the times of one, two or three events, or swapping the rooms of two events, etc.), check that the move doesn't violate any hard constraints, then use an acceptance strategy, such as simulated annealing or Great Deluge, to determine whether the move is accepted. If it is, keep the new solution and repeat these steps until the termination criterion is met. That can be a maximum time, a maximum number of iterations, or no improving move having been found in the last x iterations.
Whilst this is running, keep a log of the 'best' solution, so that when the algorithm terminates you have the best solution found. You can determine what counts as 'best' by how many soft constraints the timetable violates.
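A bare-bones sketch of that loop with simulated-annealing acceptance; `neighbour`, `is_feasible` and `cost` are placeholders for your own move operator, hard-constraint check and soft-constraint count:

```python
import math
import random

def improve(initial, neighbour, is_feasible, cost,
            start_temp=1.0, cooling=0.999, max_iters=100_000):
    current = best = initial
    temp = start_temp
    for _ in range(max_iters):
        candidate = neighbour(current)           # e.g. swap the slots of two events
        if not is_feasible(candidate):           # hard constraints must always hold
            continue
        delta = cost(candidate) - cost(current)  # soft-constraint violations
        # accept improvements always, worse moves with a shrinking probability
        if delta <= 0 or random.random() < math.exp(-delta / temp):
            current = candidate
        if cost(current) < cost(best):
            best = current                       # keep a log of the best solution seen
        temp *= cooling
    return best
```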
Hope this helps!

Using Optaplanner to solve VRPTW with large number of customers and sophisticated constraints

I'm developing a solver for a VRPTW problem using OptaPlanner, and I have run into a problem when a large number of customers needs to be serviced. By a large number I mean up to 10,000 customers. I have tried running the solver for about 48 hours, but no feasible solution was ever reached.
I use a highly customized VRPTW domain model that introduces an additional planning entity, the so-called "Workbreak". Workbreaks are like customers, but their location is itself another planning value, because each day a worker can return home or go to a hotel. Workbreaks have a fixed time of departure (usually the next morning) and a variable time of arrival (because it depends on the previous entity in the chain). A hard constraint forbids "arriving" at a Workbreak after a certain point in time. There are other hard constraints too, like:
multiple service time windows per customer
every week the last customer in the chain must be a special "storage space visit" customer (workers need to gather materials before the next week)
long-job management (when a customer needs to be serviced for longer than a specified time, it should be serviced before a specific hour of the day)
max number of jobs per workday
max total job duration per workday (as worker cannot work longer than specified time)
a workbreak cannot have a hotel location that is too close to the worker's home
jobs cannot be serviced on Sundays
... and many more; there is a total of 19 hard constraints that have to be applied. There are 3 soft constraints too.
All the aforementioned constraints were initially written as Drools rules, but because of the many accumulation-based constraints (max jobs per day, max hours per day, overtime hours per week), the overall speed of the solver (in benchmarks) was about 400 steps/sec.
At first I thought that the solver's speed was too slow to reach a feasible solution in a reasonable time, so I rewrote all the rules as an easy score calculator, which had a decent speed: about 4,600 steps/sec. I knew it would only perform well for a really small number of customers, but I wanted to know whether Drools was the cause of the poor performance. Then I rewrote these rules as an incremental score calculator (and survived the pain of corrupted-score bugs until all of them were successfully fixed). Surprisingly, incremental score calculation is a bit slower for a small number of customers compared to the easy score calculator, but that is not an issue, because the overall speed is about 4,000 steps/sec no matter how many entities I have.
The thing that bugs me most is that above a certain number of customers (problems start at around 1,000 customers) the solver cannot reach a feasible solution. Currently I'm using the Late Acceptance and Step Counting algorithms, because they perform really well for this kind of problem (at least for a smaller number of customers). I tried Simulated Annealing too, but without success, mostly because I could not find good values for its algorithm-specific parameters.
I have implemented some custom moves too:
A composite move that changes a workbreak's location when sibling entities are changed by other moves such as change/swap moves (it helps escape many score traps, as an improving step usually requires at least two moves to be performed in a single step)
A move factory for better long-job assignment (it generates moves that try to put customers with longer service times at the front of a workday chain)
A workbreak-assignment move factory (it generates moves that help put workbreaks in the proper sequence)
Now I'm scratching my head and wondering what I should do to diagnose the source of my problem. I suspected that maybe it was hitting a score trap, so I modified the solver to save a snapshot of the best score every minute. After reading these snapshots I realized that the score was still decreasing. Can the number of hard constraints play a role? I suspect that many moves have to be tried before one that improves the score is found. Maybe 48 hours just isn't that much for this kind of problem, and it should compute for a whole week? Unfortunately I have nothing to compare with.
I would like to know how to find out if it is solely a performance problem, or a solver (algorithm, custom moves, hard/soft score) configuration problem.
I really apologize for my bad English.
TL;DR but FWIW:
To scale above 1k locations you need to use Nearby Selection.
To scale above 10k locations, add Partitioned Search too.

Using Redis for "trending now" functionality

I'm working on a very high-throughput site with many items, and I am looking into implementing "trending now" functionality that would allow users to quickly get a prioritized list of the top N items that have been viewed recently by many people, and that gradually fade away as they get fewer views.
One idea for how to do this is to give more weight to recent views of an item, something like a weight of 16 for every view of an item in the past 15 minutes, a weight of 8 for every view in the past hour, a weight of 4 for views in the past 4 hours, etc., but I do not know if this is the right way to approach it.
I'd like to do this in Redis, as we've had good success with Redis in the past on other projects.
What is the best way to do this, both technologically and the determination of what is trending?
The first answer hints at a solution but I'm looking for more detail -- starting a bounty.
These are both decent ideas, but not quite detailed enough. One got half the bounty, but I'm leaving the question open.
So, I would start with a basic time ordering (a zset of item_ids scored by timestamp, for example), and then float things up based on interactions. You might decide that a single interaction is worth 10 minutes of 'freshness', so each interaction adds that much time to the score of the relevant item. If all interactions are valued equally, you can do this with one zset and just increment the scores as interactions occur.
If you want to have some kind of back-off, say scoring by the square root of the interaction count instead of the interaction count directly, you could build a second zset with your score for interactions, and use zunionstore to combine this with your timestamp index. For this, you'll probably want to pull out the existing score, do some math on it and write a new score over it (zadd will let you overwrite a score).
The zunionstore is potentially expensive, and for sufficiently large sets even the zadd/zincrby gets expensive. To that end, you might want to keep only the N highest-scoring items, for N = 10,000 say, depending on your application's needs.
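A rough redis-py sketch of the simpler single-zset variant described above; the key name, the 10-minute bonus and the trim size are illustrative choices, not part of the answer:

```python
import time
import redis

r = redis.Redis()
KEY = "trending"                 # zset: item_id -> timestamp-based score
BONUS = 600                      # one interaction is "worth" 10 minutes of freshness

def add_item(item_id):
    # base score is the first-seen timestamp
    r.zadd(KEY, {item_id: time.time()})

def record_interaction(item_id):
    # every interaction pushes the item 10 minutes "into the future"
    r.zincrby(KEY, BONUS, item_id)

def top(n=10):
    return r.zrevrange(KEY, 0, n - 1, withscores=True)

def trim(keep=10_000):
    # drop everything except the `keep` highest-scoring items
    r.zremrangebyrank(KEY, 0, -(keep + 1))
```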
These two links are very helpful:
http://stdout.heyzap.com/2013/04/08/surfacing-interesting-content/
http://word.bitly.com/post/41284219720/forget-table
The Reddit ranking algorithm does a pretty good job of what you describe. There is a good write-up here that talks through how it works:
https://medium.com/hacking-and-gonzo/how-reddit-ranking-algorithms-work-ef111e33d0d9
Consider an ordered set with the number of views as the scores. Whenever an item is accessed, increment its score (http://redis.io/commands/zincrby). This way you can get the top items out of the set, ordered by score.
You will need to "fade" the items too, maybe with an external process that decrements the scores.
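One possible shape for that external "fade" process: a periodic pass that multiplies every score by a decay factor (the factor, key name and schedule below are arbitrary examples):

```python
import redis

r = redis.Redis()
VIEWS = "item:views"   # zset: item_id -> view-based score
DECAY = 0.9            # applied e.g. every 15 minutes

def record_view(item_id):
    r.zincrby(VIEWS, 1, item_id)

def decay_pass():
    # fine for moderate set sizes; batch with SCAN-style iteration for very large sets
    for item, score in r.zrange(VIEWS, 0, -1, withscores=True):
        r.zadd(VIEWS, {item: score * DECAY})
```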

I am looking for a radio advertising scheduling algorithm / example / experience

Tried doing a bit of research on the following with no luck. Thought I'd ask here in case someone has come across it before.
I help a volunteer-run radio station with their technology needs. One of the main things that has come up is that they would like to schedule their advertising programmatically.
There are a lot of neat and complex rule engines out there for advertising, but all we need is something pretty simple (along with any experience that's worth thinking about).
I would like to write something in SQL, if possible, to deal with these entities. Ideally, if someone has written something like this for other advertising media (web, etc.) it would be really helpful.
Entities:
Ads (consisting of a category, # of plays per day, start date, end date or permanent play)
Ad Category (Restaurant, Health, Food store, etc.)
To over-simplify the problem, this will be an elegant SQL statement. Getting there... :)
I would like to be able to generate a playlist per day using the above two entities where:
No two ads in the same category are played within x number of ads of each other.
(nice to have) high promotion ads can be pushed
At this time, there are no "ad slots" to fill. There is no "time of day" considerations.
We queue up the ads for the day and go through them between songs/shows, etc. We know how many per hour we have to fill, etc.
Any thoughts/ideas/links/examples? I'm going to keep on looking and hopefully come across something instead of learning it the long way.
Very interesting question, SMO. Right now it looks like a constraint programming problem, because you aren't looking for an optimal solution, just one that satisfies all the constraints you have specified. In response to those who wanted to close the question, I'd say they need to check out constraint programming a bit: it's far closer to Stack Overflow than to any operations research site.
Look into constraint programming and scheduling - I'll bet you'll find an analogous problem toot sweet !
Keep us posted on your progress, please.
Ignoring the T-SQL request for the moment since that's unlikely to be the best language to write this in ...
One of my favorite approaches to tough 'layout' problems like this is Simulated Annealing. It's a good approach because you don't need to think about HOW to solve the actual problem: all you define is a measure of how good the current layout is (a score, if you will), and then you allow random changes that either increase or decrease that score. Over many iterations you gradually reduce the probability of moving to a worse score. This 'simulated annealing' approach reduces the probability of getting stuck in a local minimum.
So in your case the scoring function for a given layout might be based on the distance to the next advert in the same category and the distance to another advert in the same series. If you later have time-of-day considerations, you can easily add them to the score function.
Initially you allocate the adverts sequentially, evenly or randomly within their time windows (it doesn't really matter which). Now you pick two slots and consider what happens to the score when you swap the contents of those two slots. If either advert moves out of its allowed range, you can reject the change immediately. If both are still in range, does the swap move you to a better overall score? Initially you accept changes randomly even if they make the score worse, but over time you reduce the probability of that happening, so that by the end you are moving monotonically towards a better score.
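For illustration only, a hypothetical scoring function along those lines, covering just the category-spacing rule (the minimum gap x and the data layout are my own choices):

```python
MIN_GAP = 3   # "x": minimum number of other ads between two ads of the same category

def score(playlist):
    """Lower is better. playlist is a list of dicts like
    {"id": 7, "category": "Restaurant"}; closer same-category clashes cost more."""
    penalty = 0
    for i, ad in enumerate(playlist):
        for j in range(i + 1, min(i + MIN_GAP + 1, len(playlist))):
            if playlist[j]["category"] == ad["category"]:
                penalty += MIN_GAP + 1 - (j - i)
    return penalty
```

The annealing loop then just swaps two slots and keeps the swap when it lowers this score (or, early on, occasionally even when it doesn't).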
Easy to implement, easy to add new 'rules' that affect the score, easy to adjust the run-time to accept a 'good enough' answer, ...
Another approach would be to use a genetic algorithm; see this similar question: Best Fit Scheduling Algorithm. This is likely harder to program, but will probably converge more quickly on a good answer.