Dispatch Planner Optimization Problem using Google OR-Tools for Open VRP

I have been assigned a project related to VRP in which I have a fleet of trucks scattered across different locations that need to pick up and deliver goods between various locations.
Google OR-Tools builds its optimization around a depot, but in my case there is no starting or ending depot, and I need to create a planner that can find optimized routes for all the vehicles under different constraints.
I'm struggling to find a solution to this problem; any help would be appreciated.
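One common way to model an open VRP with depot-centric solvers is the "dummy depot" trick: add a virtual depot whose arcs to and from every real node cost zero, so vehicles can effectively start and end anywhere. (OR-Tools can also take per-vehicle start and end nodes directly: the `RoutingIndexManager` constructor accepts `starts`/`ends` lists.) Here is a minimal pure-Python sketch of the matrix transformation; the function name is illustrative, not from any library API.

```python
def add_dummy_depot(distance_matrix):
    """Return a new matrix with a zero-cost virtual depot at index 0.

    Row 0 and column 0 are all zeros, so entering or leaving the dummy
    depot is free and routes are effectively open-ended.
    """
    n = len(distance_matrix)
    size = n + 1
    new_matrix = [[0] * size for _ in range(size)]
    for i in range(n):
        for j in range(n):
            # Shift every real node by one to make room for the dummy depot.
            new_matrix[i + 1][j + 1] = distance_matrix[i][j]
    return new_matrix

original = [
    [0, 5, 9],
    [5, 0, 4],
    [9, 4, 0],
]
open_vrp_matrix = add_dummy_depot(original)
```

You would then pass `open_vrp_matrix` to the solver with node 0 as the (virtual) depot; since its arcs cost nothing, the solver is free to place each vehicle's first and last real stop wherever it helps the objective.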

Related

Can Optaplanner handle the Dial-A-Ride Problem (aka Uber Pooling)?

OptaPlanner looks like it's great at vehicle routing for problems involving single entities, such as a taxi fleet transporting single customers between locations.
But what about more complex systems where the vehicle is shared by multiple people at once, such as DARP or Uber pooling, where a route could look something like:
Pick up customer 1 -> Pick up customer 2 -> Drop off customer 1 -> Pick up customer 3 -> Drop off customer 2 -> Drop off customer 3
As per the description of DARP:
The Dial-a-Ride Problem (DARP) consists of designing vehicle routes and schedules for n users who specify pickup and delivery requests between origins and destinations. The aim is to plan a set of m minimum cost vehicle routes capable of accommodating as many users as possible, under a set of constraints. The most common example arises in door-to-door transportation for elderly or disabled people.
Is this sort of thing possible with OptaPlanner?
I looked through the documentation to grasp what OptaPlanner can do, but I'm not sure where its limits lie.
In theory, doing a mixed VRP in OptaPlanner is possible. In practice, we have not yet gotten around to finding the best possible model which we could recommend to users.
We have an old JIRA for it where some proposals were outlined, but no definitive conclusion was reached.
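Whatever model you end up with, the core DARP constraints are easy to state: each pickup must precede its own drop-off, and the onboard load must never exceed vehicle capacity. A small feasibility checker (a hypothetical helper, not OptaPlanner API) makes the interleaved route from the question concrete:

```python
def is_feasible(route, capacity):
    """Check a DARP route: route is a list of ('pick'|'drop', customer_id).

    Feasible iff every pickup precedes its drop-off, onboard count never
    exceeds capacity, and everyone is dropped off by the end.
    """
    onboard = set()
    for action, customer in route:
        if action == 'pick':
            onboard.add(customer)
            if len(onboard) > capacity:
                return False  # too many passengers at once
        else:  # 'drop'
            if customer not in onboard:
                return False  # drop-off before pickup
            onboard.remove(customer)
    return not onboard  # everyone must be dropped off

# The route from the question: customers 1 and 2 ride together for a while.
route = [('pick', 1), ('pick', 2), ('drop', 1),
         ('pick', 3), ('drop', 2), ('drop', 3)]
```

In this route at most two customers are onboard at once, so it is feasible for a two-seat vehicle but not for a single-seat one; in an OptaPlanner model these checks would become hard constraints in the score calculation.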

Choosing a chat-bot framework for data science research project and understanding the hidden costs of the development and rollout?

The question is about using a chat-bot framework in a research study, where one would like to measure the improvement of a rule-based decision process over time.
For example, we would like to understand how to improve the process of medical condition identification (and treatment) using the minimal set of guided questions and patient interaction.
Medical conditions can be formulated into workflow rules by doctors. A possible technical approach for such a study would be to develop an app or website that patients can access, where they ask free-text questions that a predefined rule-based chat-bot addresses. During the study, a doctor will monitor the collected data and improve the rules and the possible responses (and provide new responses when the workflow has reached a dead end). We do plan to collect the conversations and apply machine learning to generate an improved workflow tree (and questions) over time; however, the plan is to do any data analysis and processing offline, and there is no intention of building a full product.
This is a low-budget academic study. The PhD student has good development skills and data science knowledge (Python) and will be accompanied by a fellow student who will work on the engineering side. One of the conversational-AI options recommended for data scientists was Rasa.
I invested the last few days reading about and playing with several chat-bot solutions: Rasa and Botpress. I also looked at Dialogflow and read tons of comparison material, which makes the choice more challenging.
From sources on the internet it seems that Rasa might be a better fit for data science projects; however, it would be great to get a sense of the real learning curve and how fast one can expect to have a working bot, especially one whose rules have to be continuously updated.
A few things to clarify: we do have data to generate the questions and are in touch with doctors to improve their quality. It seems we need a way to present participants with multiple choices and provide answers (not just free text). Being on the research side, there is also no need to align with any specific big provider (i.e. Google, Amazon, or Microsoft) unless it has a benefit. The important considerations are time, money, and flexibility: we would like to have a working approach in a few weeks (and continuously improve it), and the whole experiment will run for no more than 3-4 months. We also need to be able to extract all the data. Finally, we are not sure which channel is best for such a study (WhatsApp? a website? something else?) and what complexities are involved.
Any thoughts about the challenges and considerations about dealing with chat-bots would be valuable.

OptaPlanner Right Tool for Scheduling of Manufacturing Orders

Would you consider OptaPlanner to be the right tool for the planning of manufacturing operations with multiple level routings (final product, subassembly1, subassembly2, subassembly11, subassembly12, ...)?
We are talking about several thousand manufacturing orders with 10-20 operations each.
It looks like project job scheduling, I know. I'm just concerned about the amount of data and the ability to find an optimal solution in a reasonable amount of time...
Are there real world examples for this problem domain and OptaPlanner out there?
See the project job scheduling example. That's not our easiest or prettiest example, but it works and you can make it pretty.
For scaling, if it would end up as a problem (I doubt it for only 1k entities), there are plenty of power tweaking options (multithreaded solving, partitioned search, ...)

Dynamic VRP with Pickup (from one or few places) and Delivery

The problem I'm facing is:
Goods are picked up from many places, not from depots.
There is no main place/depot; all drivers may start driving wherever they want.
Goods locations and their destinations are added dynamically (while drivers are on the road).
To reduce the complexity of the problem, there could be only one driver.
Do you know any implementations to solve that problem?
Here's an implementation with OptaPlanner.
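For the dynamic part, a simple baseline (independent of any particular solver) is cheapest insertion: when a new pickup/delivery pair arrives while the driver is en route, try every position for the pickup and every later position for the delivery, and keep the pair of positions that adds the least distance. This is a greedy sketch under assumed names, not a full re-optimization:

```python
def route_cost(route, dist):
    """Total distance of a route given a distance matrix."""
    return sum(dist[a][b] for a, b in zip(route, route[1:]))

def insert_request(route, pickup, delivery, dist):
    """Insert a pickup/delivery pair at the cheapest feasible positions.

    The pickup is never placed before route[0] (the driver's current
    position), and the delivery always comes after the pickup.
    """
    best = None
    for i in range(1, len(route) + 1):          # pickup slot
        for j in range(i + 1, len(route) + 2):  # delivery slot, after pickup
            candidate = route[:i] + [pickup] + route[i:]
            candidate = candidate[:j] + [delivery] + candidate[j:]
            cost = route_cost(candidate, dist)
            if best is None or cost < best[0]:
                best = (cost, candidate)
    return best[1]

# Tiny example: driver at node 0 heading to node 1; a new request
# (pick up at 2, deliver to 3) arrives mid-trip.
dist = [
    [0, 10, 1, 5],
    [10, 0, 5, 1],
    [1, 5, 0, 1],
    [5, 1, 1, 0],
]
new_route = insert_request([0, 1], 2, 3, dist)
```

A metaheuristic solver like OptaPlanner, running in real-time planning mode, would start from a route like this and keep improving it as further requests arrive; the greedy insertion just guarantees an immediately feasible plan.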

Using OptaPlanner to solve large Vehicle Routing case

I have 4 people to visit 22,000 places, and I need to minimize the total time of the visits.
I have the spatial locations of the places, and I'm thinking of computing the distances between them using either Euclidean distance or the Google Maps API.
Is it possible to solve this problem using OptaPlanner?
I'm thinking of modeling it as a vehicle routing problem. Is this the best option? Would OptaPlanner support this amount of input data?
OptaPlanner has done cases like this, but you'll need to enable "nearby selection" explicitly because it's above 1k locations.
Because it's above 10k locations, it might be interesting to benchmark (using the benchmarker) with Partitioned Search too. For example, to speed up the Construction Heuristic, you might want to wrap that in a Partitioned Search. You probably can't wrap it all, because there are only 4 people.
As for using Google Maps API, first read this blog. Then: 10k locations takes 2GB of RAM IIRC to store the distance matrix in its most efficient form (double array of 32-bits) - this has nothing to do with optaplanner. I suspect 22k will bring you near 10GB of RAM just to load that in memory.
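A quick back-of-the-envelope check of those figures: for a dense n x n matrix the raw primitive storage is just n * n * bytes_per_cell. These are lower bounds; a JVM's actual footprint is higher because of object headers and per-row array overhead, which is presumably where the larger quoted numbers come from.

```python
def matrix_bytes(n, bytes_per_cell):
    """Raw storage for a dense n x n distance matrix, in bytes."""
    return n * n * bytes_per_cell

GIB = 1024 ** 3

# 32-bit cells, as in the "double array of 32-bits" representation above.
raw_10k = matrix_bytes(10_000, 4) / GIB   # ~0.37 GiB raw
raw_22k = matrix_bytes(22_000, 4) / GIB   # ~1.8 GiB raw
```

Note the quadratic growth: going from 10k to 22k locations multiplies the matrix by (22/10)^2, i.e. roughly 4.8x, before any JVM overhead is counted, so a multi-gigabyte heap just for distances is plausible at 22k.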