Find bounds of a Voronoi diagram that result in equal-area regions - optimization

I have a Voronoi diagram that contains 4 sites.
I am trying to find bounds B that divide my Voronoi regions into equal areas, or as close to equal as possible. The only requirement is that B has a constant aspect ratio c; in other words, B's width divided by its height always equals c (c = width/height).
See the images here as an example. I am looking for a general solution that works on any 4 sites. I plan to use this solution in real-time software with constantly changing sites, so it is preferred that it not require a huge number of iterations.
I am curious whether there is any algorithm to solve this. So far I have tried:
Lloyd relaxation, which is used to find equal-area regions, but it modifies the sites.
Reinforcement learning, but I could not get anything relevant out of it.
I managed to solve it for 3 sites, but that did not scale well to 4 sites.
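For what it's worth, here is a rough numeric sketch of one possible approach, not an established algorithm: label a sample grid inside candidate bounds B with its nearest site, and minimize the resulting area imbalance over B's center and height (width is fixed at c * height) with a derivative-free optimizer. The site coordinates, grid resolution and imbalance measure below are illustrative assumptions (Python with NumPy/SciPy):

import numpy as np
from scipy.optimize import minimize

sites = np.array([[0.1, 0.2], [0.8, 0.3], [0.4, 0.9], [0.7, 0.7]])
c = 16 / 9  # required aspect ratio, width / height

def area_imbalance(params, sites, c, n=64):
    cx, cy, h = params
    h = abs(h)  # guard against degenerate heights during the search
    w = c * h
    xs = np.linspace(cx - w / 2, cx + w / 2, n)
    ys = np.linspace(cy - h / 2, cy + h / 2, n)
    gx, gy = np.meshgrid(xs, ys)
    pts = np.column_stack([gx.ravel(), gy.ravel()])
    # Each grid cell votes for its nearest site, so the counts approximate
    # the areas of the Voronoi regions clipped to the candidate bounds.
    d = ((pts[:, None, :] - sites[None, :, :]) ** 2).sum(axis=2)
    counts = np.bincount(d.argmin(axis=1), minlength=len(sites))
    return counts.std() / counts.mean()  # 0 when all regions have equal area

res = minimize(area_imbalance, x0=[0.5, 0.5, 1.0],
               args=(sites, c), method="Nelder-Mead")
cx, cy, h = res.x
print("center:", (cx, cy), "width:", c * abs(h), "height:", abs(h))

With a coarse grid each evaluation is cheap, so this typically converges in a few hundred evaluations; it only finds a local optimum, but it gives a baseline to compare any cleverer method against.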

Related

Experimental blocking design analysis in R

I am having some difficulty choosing the right analysis to perform on my data. The goal of my study is to assess the impact of 3 treatments (walking, ORV use and motorcycle use) on vegetation growth in a coastal ecosystem. My response variables are the density of plants (number of individuals), their height, the biomass of plants and the nutrient content of leaves and soil.

I made an experimental design where I test my 3 treatments along a gradient of 4 intensities of use (from null to very intense), on 4 distinct transect lines. I also want to know whether slope has an effect on vegetation growth, so every transect has been placed so that it has a portion on an ascending, a descending and a flat slope. All 12 transect lines (4 intensities x 3 treatments) have been placed in a block to facilitate sampling, and 8 replicates (8 blocks) were made across the ecosystem. In every block, the 4 intensity transects of a treatment are placed side by side to avoid sampling mistakes, but the arrangement of the intensity transects within the treatment section was randomized.

I do not want to compare the treatments with each other, but rather to study the effect each treatment has on vegetation growth across the 4 intensities and 3 slope options. I have attached an example of a block to facilitate comprehension of the experimental design.
Example of a block
I first thought of a split-plot design, but since I do not compare my treatments with each other, I do not think it is the best option. My idea, analyzing one treatment at a time, is to perform a blocking analysis with an ANOVA on every response variable (density, vegetation height, biomass, nutrients), treating the slope as the block in the analysis, and to compare the intensities within those blocks. In this case, I do not think I can account for the natural environmental variation of the true experimental blocks (e.g. differences in slope steepness between blocks). Is there a way to include a random effect (experimental block) in the analysis? Is this the best way to analyse my data?
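On the random-effect question: yes, a mixed model does exactly that, with intensity and slope as fixed effects and the experimental block as a random intercept, fitted separately for each treatment since the treatments are not being compared. In R this is what lme4's lmer(response ~ intensity * slope + (1 | block), data = d) does; the sketch below uses Python's statsmodels only to show the model's shape, with made-up column names and synthetic data:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "density":   rng.poisson(20, 96).astype(float),           # one response variable
    "intensity": np.tile([0, 1, 2, 3], 24),                   # 4 intensities of use
    "slope":     np.tile(np.repeat(["up", "down", "flat"], 4), 8),
    "block":     np.repeat([f"b{i}" for i in range(8)], 12),  # 8 blocks
})

# Fixed effects: intensity, slope and their interaction.
# Random intercept: the experimental block.
model = smf.mixedlm("density ~ C(intensity) * slope", df, groups=df["block"])
result = model.fit()
print(result.summary())

The same model would be fitted once per response variable (density, height, biomass, nutrients); count-like responses such as density may warrant a generalized mixed model rather than a Gaussian one.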

Optimized containing of same-size squares in rectangles

Suppose that we have several squares of the same size. We want to draw n rectangles (red and yellow rectangles here) to contain these squares.
The goal is to have the least wasted space possible.
In the example below, n = 2 and the solution on the right is preferred because it results in only one cell of wasted space.
Are there any known algorithms already in place to solve this kind of problem?
UPDATE:
The arrangement of the squares is arbitrary and they are always above the X axis!
UPDATE2:
To make the question easier, let's assume that the so-called container rectangles are stacked on top of each other (the red and yellow rectangles here).
A little more complicated case:
Let's assume two rectangles are used for this one too. As can be seen, the third solution results in the least wasted space.
This question is almost identical to a hiring puzzle that ITA Software posed, called "Strawberry Fields" (scroll down for Strawberry Fields; change the greenhouse cost from 10 to 0). I can confirm that integer programming, specifically branch and price where the high-level decisions are whether to put two squares in the same rectangle, works very, very well for this problem. Here's my custom solver, written in C. You'll need to change the greenhouse cost in strawberry_fields.h from 10 to 0.
This type of rectangle cover is hard (NP-hard actually, you can use it to solve the Rectangle Cover Problem), but you can solve this with integer linear programming, as follows:
minimize sum[i] take[i] * area[i]
subject to:
  sum[i] take[i] == n
  for every filled cell (x, y):
    sum[i : rectangle i covers (x, y)] take[i] == 1
  take[i] in { 0, 1 }
where the list of rectangles contains only "reasonable" rectangles that you might need, i.e. only rectangles that cannot be made smaller without uncovering some filled cell; you can also skip certain "interior rectangles" that can never be part of a solution because they would leave a shape that is harder to cover. Generating those rectangles is a fun exercise in its own right, but generating too many isn't a big problem, just slower. In the solution, any take[i] that is 1 corresponds to a rectangle that you take.
You can throw this into any available solver, such as GLPK (free) or Gurobi (commercial and academic licenses available).
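For concreteness, here is a minimal sketch of that model in Python with PuLP (pip install pulp; it ships with the free CBC solver). The filled cells, the candidate rectangles and n below are toy placeholders; real candidates would come from the generation step described above:

import pulp

filled = {(0, 0), (1, 0), (0, 1), (1, 1)}            # occupied cells (toy data)
rects = [(0, 0, 1, 1), (0, 0, 1, 0), (0, 1, 1, 1)]   # candidates as (x0, y0, x1, y1)
n = 2                                                # number of rectangles to use

def area(r):
    x0, y0, x1, y1 = r
    return (x1 - x0 + 1) * (y1 - y0 + 1)

def covers(r, cell):
    x0, y0, x1, y1 = r
    x, y = cell
    return x0 <= x <= x1 and y0 <= y <= y1

prob = pulp.LpProblem("rect_cover", pulp.LpMinimize)
take = [pulp.LpVariable(f"take_{i}", cat="Binary") for i in range(len(rects))]

prob += pulp.lpSum(take[i] * area(r) for i, r in enumerate(rects))   # total area
prob += pulp.lpSum(take) == n                                        # use exactly n
for cell in filled:                                                  # cover each cell once
    prob += pulp.lpSum(take[i] for i, r in enumerate(rects) if covers(r, cell)) == 1

prob.solve()
print("chosen:", [rects[i] for i in range(len(rects)) if take[i].value() == 1])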
This should generally be faster than brute force, because the linear relaxation (the same model as above, but with the last constraint relaxed to 0 <= take[i] <= 1) can be used to guide the search, and various cutting-plane tricks can be applied.
More advanced tricks can be found in this paper, such as tricks that use the fractional solution from the linear relaxation.

Is there a predefined name for the following solution search/optimization algorithm?

Consider a problem whose solution maximizes an objective function.
Problem: from 500 elements, 15 need to be selected (a candidate solution). The value of the objective function depends on the pairwise relationships between the elements in a candidate solution, and on some more factors.
The steps for solving such a problem are described here:
1. Generate a set of candidate solutions in a guided random manner (a population) // not purely random: a direction is given to the generation of the population
2. Evaluate the objective function for the current population
3. If the current_best_solution exceeds the global_best_solution, then replace the global_best with the current_best
4. Repeat steps 1, 2, 3 for N (an arbitrary number) iterations
where the population size and N are small (approx. 50).
After N iterations, the candidate solution stored in global_best_solution is returned.
Is this the description of a well-known algorithm?
If it is, what is the name of that algorithm, and if not, which category do these types of algorithms fit under?
What you have sounds like you are just fishing. Note that you might as well get rid of steps 3 and 4, since running the loop 100 times would be the same as doing it once with an initial population 100 times as large.
If you think of the objective function as a random variable which is a function of random decision variables then what you are doing would e.g. give you something in the 99.9th percentile with very high probability -- but there is no limit to how far the optimum might be from the 99.9th percentile.
To illustrate the difficulty, consider the following sort of Travelling Salesman Problem. Imagine two clusters of points, A and B, each of which has 100 points. Within the clusters, each point is arbitrarily close to every other point (e.g. 0.0000001). But between the clusters the distance is, say, 1,000,000. The optimal tour would clearly have length 2,000,000 (plus a negligible amount). A random tour is just a random permutation of those 200 points. Getting an optimal or near-optimal tour would be akin to shuffling a deck of 200 cards, 100 red and 100 black, and having all of the red cards in the deck in a block (counting blocks that "wrap around") -- vanishingly unlikely (it can be calculated as 99 * 100! * 100! / 200! = 1.09 x 10^-57). Even if you generate quadrillions of tours, it is overwhelmingly likely that each of those tours would be off by millions. This is a min problem, but it is also easy to come up with max problems where it is vanishingly unlikely that you will get a near-optimal solution by purely random settings of the decision variables.
This is an extreme example, but it is enough to show that purely random fishing for a solution isn't very reliable. It would make more sense to use evolutionary algorithms or other heuristics such as simulated annealing or tabu search.
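(As a quick check of that figure, using 100! * 100! / 200! = 1 / C(200, 100):

from math import comb
print(99 / comb(200, 100))   # 99 * 100! * 100! / 200!  ->  ~1.09e-57

so the 1.09 x 10^-57 above checks out.)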
Why do you work with a population if the members of that population do not interact?
What you have there is random search.
If you add mutation, it looks like an evolution strategy: https://en.wikipedia.org/wiki/Evolution_strategy
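To make that concrete, here is the described loop written out as plain random search; objective and the guided sampler are placeholders for the question's problem-specific pieces. Nothing carries over between iterations except the best-so-far, which is why N iterations of population P behave exactly like a single draw of N * P candidates:

import random

def sample_candidate(n_elements=500, k=15):
    # Purely random here; the question's "guided" generation would bias this.
    return random.sample(range(n_elements), k)

def objective(candidate):
    return sum(candidate)  # placeholder for the real pairwise objective

def random_search(iterations=50, population_size=50):
    best, best_value = None, float("-inf")
    for _ in range(iterations):                                            # step 4
        population = [sample_candidate() for _ in range(population_size)]  # step 1
        for cand in population:                                            # steps 2 and 3
            value = objective(cand)
            if value > best_value:
                best, best_value = cand, value
    return best, best_value

print(random_search())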

Combinatorial optimization for puzzle solving

My problem is explained in the following image
http://i.stack.imgur.com/n6mZt.png
I have a finite (but rather large) number of such pieces that need to be stacked so that the REMAINING area is as small as possible. The pieces are locked in the horizontal axis (time) and have fixed heights. They can only be stacked.
The remaining area is defined by the maximum point of the stack, which depends on which pieces have been selected. The best combination in the example image would be [1 1 0]. (The trivial [0 0 0] case is ruled out by other constraints.)
My only variables are binary (yes/no), one for each piece. The objective is a little more complicated than what I am describing, but my biggest problem right now is how to formulate the expression
Max{Stacked_Pieces} - Stacked_Pieces_Profile
in the objective function. The result of this expression is of course a vector (a time series), but it will be further reduced to a number through other manipulations.
Essentially my problem is how to write
Max{A} - A, where A is a 1xN vector
in a way compatible with a linear (or even quadratic) objective. Or am I dealing with a non-linear problem?
EDIT: The problem is like a knapsack problem, the main difference being that there is no fixed knapsack to fill up; i.e. the size of the knapsack varies according to the selected pieces and is always equal to the top of the stacked profile.
Thanks everybody!
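For reference, the standard way to make Max{A} linear is the epigraph trick: introduce one auxiliary variable M with the constraints M >= A[t] for every t; when the objective pushes M downward, M equals Max{A} at the optimum, so no non-linear term is needed. A sketch in Python with PuLP, where the piece height profiles are made-up toy data:

import pulp

heights = [[1, 0, 2],   # piece 0's height contribution at times t = 0..2
           [2, 1, 0],   # piece 1
           [0, 2, 1]]   # piece 2
T = 3

prob = pulp.LpProblem("stacking", pulp.LpMinimize)
y = [pulp.LpVariable(f"y_{j}", cat="Binary") for j in range(len(heights))]
M = pulp.LpVariable("M", lowBound=0)  # stands in for Max{Stacked_Pieces}

def profile(t):  # stacked height at time t, linear in the y's
    return pulp.lpSum(heights[j][t] * y[j] for j in range(len(heights)))

# Remaining area = T * Max{profile} - total stacked area, now linear in y and M.
prob += T * M - pulp.lpSum(profile(t) for t in range(T))
for t in range(T):
    prob += M >= profile(t)    # M upper-bounds the profile at every time step
prob += pulp.lpSum(y) >= 1     # stand-in for the constraints forbidding [0 0 0]

prob.solve()
print([v.value() for v in y], "max height:", M.value())

Because the minimization keeps M pressed against the largest profile value, the constraints are tight at the optimum and M really is Max{A}.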
From what I understand, you can basically solve this as a normal knapsack problem over multiple iterations, searching for the minimal height.
Finding the height of the knapsack is itself a problem: you need to solve the knapsack problem just to see whether a certain height will work, which is why multiple iterations are needed.
Note that you do know an upper and a lower bound for the height. I'm not sure if rotation is applicable, but you can fill in the gaps here:
Min = max(height of the tallest piece, total area / width)
Max = sum(heights of all pieces)
Basically, solving it means finding the smallest height in [Min, Max] that fits all pieces. The easiest way to do that is a 'for' loop over heights, but a bisection is better (a sketch follows these steps):
1. Try half = (min + max) / 2
2. If half fits -> max = half; iterate (go to 1)
3. If half doesn't fit -> min = half; iterate (go to 1)
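In code, that bisection looks roughly like this, where fits(h) is a placeholder for "can all pieces be packed within height h?", i.e. one knapsack-style feasibility solve per probe:

def smallest_height(lo, hi, fits):
    # Assumes integer heights and monotone feasibility:
    # if a height works, every larger height works too.
    while lo < hi:
        half = (lo + hi) // 2
        if fits(half):
            hi = half        # half works -> the answer is at most half
        else:
            lo = half + 1    # half fails -> the answer is above half
    return lo

This needs only about log2(Max - Min) feasibility checks instead of a linear scan over all heights.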
As for solving the knapsack problem, for each iteration, I'd check if all pieces can still be fitted. Use bit-masks and AND/OR/XOR operations if you can to speed things up.
Basically you can do it like this:
Grab bit 'x'. Fill with next block
Check if this leads to a possible solution
Find next bit that can be filled
Note that you might want to use intrinsics in C++ to speed this up. Modern CPUs are quite good at this.
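At its simplest the bit-mask idea looks like this: a row of cells becomes an integer, a placement test is a single AND, and committing a placement is a single OR:

row = 0b00001111        # cells 0-3 of this row are already occupied
piece = 0b00110000      # candidate placement covering cells 4-5

if row & piece == 0:    # AND == 0 means no overlap, so the placement is legal
    row |= piece        # OR commits the placement
print(bin(row))         # 0b111111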
As for code: I've written some code that solves the Bedlam cube in the past; I'm pretty sure that if you google for that, you'll find some fast solvers.
Good luck!

Not a knapsack or bin algorithm

I need to find a combination of rectangles that will maximize the use of the area of a circle. The difference between my situation and the classic problems is that I have a set of rectangles I CAN use, and a subset of those rectangles I MUST use.
By way of an analogy: think of the end of a log and a list of board sizes. I can cut 2x4s, 2x6s, 2x8s and 2x10s from a log, but I must cut at least two 2x4s and one 2x8.
As I understand it, my particular variation is mildly different from other packing optimizations. Thanks in advance for any insight on how I might adapt existing algorithms to solve this problem.
NCDiesel
This is actually a pretty hard problem, even with squares instead of rectangles.
Here's an idea: approach it as a knapsack-style integer program, which can give you some insight into the solution. (Being a heuristic, it won't in general give you the optimal solution.)
IP Formulation Heuristic
Say you have a total of n rectangles, r1, r2, r3, ..., rn
Let the area of each rectangle be a1, a2, a3, ..., an
Let the area of the large circle you are given be A.
Decision Variables
Xi = 1 if rectangle i is selected, 0 otherwise.
Objective
Minimize [A - Sum_over_i (ai * Xi)]
Subject to:
Sum_over_i (ai * Xi) <= A # Area_limit constraint
Xk = 1 for each rectangle k that has to be selected
You can solve this using any solver.
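For instance, here is a minimal sketch of exactly this model in Python with PuLP, with made-up areas and must-use indices:

import pulp

areas = [8, 12, 16, 20, 8]   # a_i: areas of the candidate rectangles (toy data)
circle_area = 50             # A: area of the circle
must_use = [0, 2]            # indices k whose X_k is forced to 1

prob = pulp.LpProblem("log_cutting", pulp.LpMinimize)
x = [pulp.LpVariable(f"x_{i}", cat="Binary") for i in range(len(areas))]

covered = pulp.lpSum(a * xi for a, xi in zip(areas, x))
prob += circle_area - covered      # objective: Minimize [A - Sum_over_i (ai * Xi)]
prob += covered <= circle_area     # Area_limit constraint
for k in must_use:
    prob += x[k] == 1              # mandatory rectangles

prob.solve()
print("chosen:", [i for i, xi in enumerate(x) if xi.value() == 1],
      "uncovered area:", pulp.value(prob.objective))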
Now, the reason this is a heuristic is that this solution totally ignores the arrangement of the rectangles inside the circle. It also implicitly allows rectangles to be "cut" into smaller pieces to fit inside the circle. (That is why the Area_limit constraint is a weak bound.)
Relevant Reference
This Math SE question addresses the "classic" version of it.
And you can look at the links provided in the comments there for several clever solutions involving same-size squares packed inside a circle.