I'm looking for references to algorithms for plotting on a mechanical pen plotter.
Specifically, I have a list of straight vectors, each representing a line to be plotted. First I want to remove duplicate vectors, so each line is only plotted once. That's easy enough.
Second, there are many vectors that intersect, sometimes at endpoints, but not always. They can be plotted in any order, but I want to find an order that reduces the number of times the pen must be lifted, preferably to a minimum though I understand that may take a long time to compute, if it's computable at all. Vectors that intersect can be broken into smaller vectors if that helps. But generally, if the pen is moving in a straight line, it's best to keep it moving that way as long as possible. So, two parallel vectors joined end to end could be combined into a single vector, etc.
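For concreteness, here is a rough sketch of the duplicate removal and collinear end-to-end merge described above (the tuple-of-endpoints segment format and exact coordinate matching are my assumptions, not anything from the plotter toolchain):

```python
# Sketch only: segments are ((x1, y1), (x2, y2)) tuples with exactly matching coordinates.

def normalize(seg):
    """Order the endpoints so that reversed duplicates compare equal."""
    a, b = seg
    return (a, b) if a <= b else (b, a)

def dedupe(segments):
    """Remove duplicate segments regardless of direction."""
    return list({normalize(s) for s in segments})

def collinear(a, b, c, eps=1e-9):
    """True if points a, b, c lie on one line (cross product near zero)."""
    return abs((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])) < eps

def merge_collinear(segments):
    """Greedily merge segments that share an endpoint and are collinear."""
    segs = dedupe(segments)
    merged = True
    while merged:
        merged = False
        for i in range(len(segs)):
            for j in range(i + 1, len(segs)):
                (a1, a2), (b1, b2) = segs[i], segs[j]
                shared = set((a1, a2)) & set((b1, b2))
                if len(shared) == 1 and collinear(a1, a2, b1) and collinear(a1, a2, b2):
                    # The union of two touching collinear segments is the segment
                    # between the two most distant of the four endpoints.
                    pts = [a1, a2, b1, b2]
                    far = max(((p, q) for p in pts for q in pts),
                              key=lambda pq: (pq[0][0] - pq[1][0]) ** 2
                                             + (pq[0][1] - pq[1][1]) ** 2)
                    segs[i] = normalize(far)
                    del segs[j]
                    merged = True
                    break
            if merged:
                break
    return segs
```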
This sounds like some variety of graph theory problem, but I don't know much about that. Can anyone point me to references or algorithms I need to study? Or maybe example code?
Thanks,
Neil
This is a relative of the Chinese postman (route inspection) problem. The classic version, traversing every edge of a single connected graph, is actually solvable in polynomial time, but your variant, where the segments generally don't form one connected figure and the lifted pen can jump anywhere, lands in NP-hard territory, like the better-known Travelling Salesman Problem. NP-complete problems can all be reduced to one another, and no algorithm is known that solves any of them in time polynomial in the input size (NP stands for "nondeterministic polynomial", not "non-polynomial").
For your case I would suggest some simple heuristics. Don't overdo it: pick something quite simple, like continuing in a straight line for as long as possible, then lifting the pen to the closest available starting point and carrying on from there.
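A minimal sketch of that greedy heuristic (the segment format, the starting pen position, and the Euclidean pen-up cost are assumptions on my part):

```python
import math

def greedy_plot_order(segments):
    """Greedy ordering: after finishing a segment, jump to the nearest unused
    endpoint and draw that segment from there. Simple, not optimal."""
    remaining = list(segments)          # each segment is ((x1, y1), (x2, y2))
    pen = (0.0, 0.0)                    # assume the pen starts at the origin
    order = []
    while remaining:
        # Pick the segment whose nearer endpoint is closest to the pen.
        best = min(remaining,
                   key=lambda s: min(math.dist(pen, s[0]), math.dist(pen, s[1])))
        remaining.remove(best)
        a, b = best
        if math.dist(pen, b) < math.dist(pen, a):
            a, b = b, a                 # draw starting from the closer endpoint
        order.append((a, b))
        pen = b                         # pen finishes at the far endpoint
    return order
```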
I am implementing a complex algorithm for determining the n-nearest neighbors in a high-dimensional embedding space from a paper I found online. After I finished the implementation, I wanted to check the results to make sure the code did indeed return the desired n-nearest neighbors. To do so, I compare the results against a brute-force search over every element in the embedding space that finds the n-nearest neighbors.
The issue arises when there are multiple elements with the same distance from the query input.
For example, if I am checking for the 3-nearest neighbors, and I have four points, one of which is closest and the other 3 all equidistant from the search key, one element will necessarily be left out. I'd like to test to ensure that the two implementations are roughly the same, and I am not interested in the exact details of which elements are left out. As a result, I can't just do an element-wise equality check across the complex algorithm and the brute-force solution.
For business reasons, it is actually helpful if the element left out is random, because I want the end user to see a variety of results, as long as all results are equally relevant. I specifically do not want a stable-ordering on the results.
Is there an off-the-shelf solution for this problem? I am implementing this code in Python, but the solution can be language agnostic.
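One tie-aware way to express the check described above is to compare the sorted distances rather than the element identities. The helper below is hypothetical (my own names and tolerance), just a sketch of that idea:

```python
import math

def knn_results_equivalent(query, result_a, result_b, k, tol=1e-9):
    """Two k-NN result sets are considered equivalent if the multisets of
    distances to the query agree, regardless of which tied element was kept."""
    def dists(points):
        return sorted(math.dist(query, p) for p in points)
    da, db = dists(result_a), dists(result_b)
    return len(da) == len(db) == k and all(abs(x - y) <= tol for x, y in zip(da, db))

# Both results are valid 3-NN answers even though they differ in one tied element.
assert knn_results_equivalent((0, 0), [(1, 0), (0, 2), (2, 0)],
                              [(1, 0), (2, 0), (0, -2)], k=3)
```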
I'm working on a problem that will eventually run on an embedded microcontroller (ESP8266). I need to perform some fairly simple operations on linear equations. I don't need much, but I do need to be able to work with points and linear equations to:
Define equations for lines either from two known points, or from one point and a gradient
Calculate a new x,y point on an equation line that is a specific distance from another point on that equation line
Drop a perpendicular onto an equation line from a point
Perform variations of cosine-rule calculations on points and triangle sides defined as equations
I roughed out some code for this a while ago based on high-school "y = mx + c" concepts, but it's flawed (it blows up with infinities when lines are vertical), and it's currently in Scala. Since I suspect I'm reinventing a wheel, and that's not my primary goal, I'd like to build on someone else's work for this!
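Not the original Scala code, but a sketch (Python here for brevity) of the usual fix: represent a line by a point plus a unit direction vector instead of y = mx + c, so vertical lines need no special case. The `Line` class and its methods are illustrative names, not any library's API:

```python
import math

class Line:
    """A line stored as a point plus a unit direction; verticals are not special."""
    def __init__(self, px, py, dx, dy):
        n = math.hypot(dx, dy)
        self.px, self.py = px, py
        self.dx, self.dy = dx / n, dy / n

    @classmethod
    def through(cls, p, q):
        """Line through two known points."""
        return cls(p[0], p[1], q[0] - p[0], q[1] - p[1])

    @classmethod
    def from_gradient(cls, p, gradient):
        """Line through one point with a given gradient (dy/dx)."""
        return cls(p[0], p[1], 1.0, gradient)

    def point_at_distance(self, p, distance):
        """Point a signed distance along the line from a point p already on it."""
        return (p[0] + distance * self.dx, p[1] + distance * self.dy)

    def foot_of_perpendicular(self, p):
        """Drop a perpendicular from p onto the line; return the foot."""
        t = (p[0] - self.px) * self.dx + (p[1] - self.py) * self.dy
        return (self.px + t * self.dx, self.py + t * self.dy)
```

A gradient still cannot describe a vertical line, of course, but the two-point constructor handles it with no division at all.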
I've come across CGAL, and it seems very likely it's capable of all this and more, but I have two questions about it (given that it seems to take ages to get enough understanding of this kind of huge library to actually be able to answer simple questions!)
It seems to assert some kind of mathematical perfection in its calculations, but that's not important to me, and my system will be severely memory constrained. Does it use/offer memory-efficient approximations?
Is it possible (and hopefully easy) to separate out just a limited subset of features, or am I going to find the entire library (or even a very large subset) heading into my memory limited machine?
And, I suppose the inevitable follow up: are there more suitable libraries I'm unaware of?
TIA!
The problems that you are mentioning sound fairly simple indeed, so I'm wondering if you really need any library at all. Maybe if you post your original code we could help you fix it--your problem sounds like you need to redo a calculation avoiding a division by zero.
As for your point (2) about separating a limited number of features from CGAL: given the size and the coding style of that project, from my experience that will be significantly more complicated (if possible at all) than fixing your own code.
In case you want to try a simpler library than CGAL, maybe you could try Boost.Geometry.
Regards,
I have arbitrary SVG paths which I need to pack as efficiently as possible within a given rectangle (wasting as little space as possible). After some research I found the bin packing algorithms, which seem to deal with boxes rather than curved, arbitrary shapes (my SVG shapes are quite complex and include Bézier curves etc.).
AFAIK, there is no deterministic algorithm for actually packing abstract shapes.
I wish to be proven wrong here, which would be ideal (having a deterministic mathematical method for packing them). In case I am right, however, and there is not, what would be the best approach to this problem?
The subject name is Shape Nesting, Nesting Problem or Nesting Process.
In Shape Nesting there is no single/uniform algorithm or mathematical method for nesting shapes and getting the least space waste possible.
The first method is the packing algorithm: it creates an imaginary bounding box for each shape and uses a rectangular 2D algorithm to pack the bounding boxes. This method is fast, but it is the least efficient in terms of wasted space.
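To make that first method concrete, here is a rough sketch of my own (a simple "shelf" packer for the bounding boxes, tallest first; the function name and fixed strip width are assumptions):

```python
def pack_bounding_boxes(boxes, bin_width):
    """Shelf (row) packing of (w, h) bounding boxes into a strip of fixed width.
    Returns (placements, used_height), placements being (x, y, w, h) per box."""
    # Tallest first tends to waste less space on each shelf.
    order = sorted(range(len(boxes)), key=lambda i: boxes[i][1], reverse=True)
    placements = [None] * len(boxes)
    x = y = shelf_height = 0.0
    for i in order:
        w, h = boxes[i]
        if x + w > bin_width:            # box does not fit: start a new shelf
            y += shelf_height
            x, shelf_height = 0.0, 0.0
        placements[i] = (x, y, w, h)
        x += w
        shelf_height = max(shelf_height, h)
    return placements, y + shelf_height
```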
The second method is a kind of incremental rotation: the algorithm rotates the shape in incremental steps and checks whether it fits in the space. This is better than the packing method in terms of wasted space, but it is painstakingly slow.
What are some other textbook approaches for solving this problem?
[Edit1] new answer
As mentioned before, bin packing is NP-complete (hard), so forget about an exact algebraic solution.
Known approaches are:
generate and test
Either you test every possible arrangement and remember the best solution, or you add items incrementally (not all at once), one by one, in the same way. This is basically what you are doing now; without a proper heuristic it is unusably slow, but it has the best space efficiency (the exhaustive variant packs better but is much slower), on the order of O(N!).
take advantage of sorting items by size
Something like this is much faster, almost O(N·log(N)) (depending on the sorting algorithm used). Space efficiency strongly depends on the range of item sizes and their count. For rectangular shapes this is the best approach (fastest, and usable even for N > 1000). For complex shapes it is not a good fit, but look at it anyway; maybe you will get some ideas...
use of a neural network
This is an extremely vague approach with no guarantee of a solution, but it possibly offers the best space-efficiency/runtime ratio.
I think there could also be some field-based approach out there.
I saw a few used for generating graph layouts. All items create fields (both attractive and repulsive), so they move toward a semi-stable state.
At first all items are at random locations.
When the movement stops, remember the best solution and shake all items a little, or randomize their positions again.
Cycle this a few times.
This approach is much faster than generate-and-test and can get very close to its results, but it can get stuck in a local minimum/maximum or oscillate if the fields are not chosen well. For example, all items can have a constant attractive force toward each other and a repulsive force that only grows strong when items are very close. You have to prevent items from overlapping (either by stronger repulsion or by collision tests). You also have to create some rotational moment, for example from that repulsive force: it differs at each vertex, so it creates a torque (which can automatically align similar sides closer together). You can also allow a semi-stable state with large distances between items and, after finding the best solution, just turn off the repulsion fields so the items stick together. Sometimes this gives better results, sometimes not... Here is a nice example of graph layout computation:
Logic to strategically place items in a container with minimum overlapping connections
Demo from the same QA
And here is a solver for placing sliders in 2D:
How to implement a constraint solver for 2-D geometry?
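As a rough illustration of the field idea above, treating each item as a disc (the constants, the fixed step count, and the omission of the rotation moment and the shake/restart step are my simplifications):

```python
import math
import random

def relax(items, attract=0.01, repel=2.0, steps=2000):
    """items: list of [x, y, radius]. Constant attraction toward every other item,
    strong repulsion only when discs overlap, iterated for a fixed number of steps."""
    for _ in range(steps):
        for i, a in enumerate(items):
            fx = fy = 0.0
            for j, b in enumerate(items):
                if i == j:
                    continue
                dx, dy = b[0] - a[0], b[1] - a[1]
                d = math.hypot(dx, dy) or 1e-9
                f = attract                          # weak constant pull together
                if d < a[2] + b[2]:                  # overlapping: push apart hard
                    f -= repel * (a[2] + b[2] - d)
                fx += f * dx / d
                fy += f * dy / d
            a[0] += fx
            a[1] += fy
    return items

# Start from random locations, as described above.
random.seed(0)
discs = [[random.uniform(0, 100), random.uniform(0, 100), random.uniform(3, 8)]
         for _ in range(20)]
relax(discs)
```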
[Edit0] old answer, from before the question was reformulated
I am not sure what you want to achieve:
1. You have an SVG picture and want to separate its parts into rectangular regions that are as filled as possible (the least empty space in them), without changing any shape in the picture.
2. You have an SVG picture and want to change its shapes for some purpose; if this is the case, some additional info is needed.
[solution for 1]
create a list of points for the whole SVG in global SVG space (all points transformed)
for a line add 2 points
for a rectangle 4 points
for a circle/ellipse/Bézier/elliptic arc 8 points
find local centres of mass
use the classical approach
or speed things up by computing the average point density along the x and y axes separately, then check all combinations of the local density maxima to see whether they really are sub-cluster centres or not
each sub-cluster centre is the centre of one of your regions
now find the farthest points that are still part of the cluster (they are close enough to neighbouring points)
create a rectangular area that covers all points of the sub-cluster
you can also remove all used points from the list
repeat for all valid sub-clusters
until all points are used
Another, less precise but simpler approach is (a sketch of it follows after this list):
find the SVG size
create a planar map of the SVG at some resolution, for example int map[256][256]
the map size can be constant or keep the same aspect ratio as the SVG
clear the map to 0
for every point of the SVG set the corresponding map cell to 1 (or increment it, or whatever)
now just segment the map and you will have found your objects
after segmentation you have the position and size of all objects
so finding the bounding boxes should be easy
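A sketch of that grid approach (the function name is mine; it assumes the SVG has already been sampled into points, densely enough that each shape's cells form one connected blob):

```python
def bounding_boxes_from_points(points, cell=256):
    """Rasterize sampled SVG points onto a cell x cell grid, label connected
    components with a flood fill, and return one bounding box per component."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    minx, miny = min(xs), min(ys)
    sx = (cell - 1) / max(max(xs) - minx, 1e-9)
    sy = (cell - 1) / max(max(ys) - miny, 1e-9)

    grid = [[0] * cell for _ in range(cell)]
    for x, y in points:
        grid[int((y - miny) * sy)][int((x - minx) * sx)] = 1

    label = 0
    boxes = {}                      # label -> [min_row, min_col, max_row, max_col]
    for i in range(cell):
        for j in range(cell):
            if grid[i][j] != 1:
                continue
            label += 1
            stack = [(i, j)]
            grid[i][j] = -label
            boxes[label] = [i, j, i, j]
            while stack:            # flood fill one connected component
                ci, cj = stack.pop()
                b = boxes[label]
                b[0], b[1] = min(b[0], ci), min(b[1], cj)
                b[2], b[3] = max(b[2], ci), max(b[3], cj)
                for ni, nj in ((ci + 1, cj), (ci - 1, cj), (ci, cj + 1), (ci, cj - 1)):
                    if 0 <= ni < cell and 0 <= nj < cell and grid[ni][nj] == 1:
                        grid[ni][nj] = -label
                        stack.append((ni, nj))

    # Convert grid cells back to SVG coordinates: ((min_x, min_y), (max_x, max_y)).
    return [((b[1] / sx + minx, b[0] / sy + miny),
             ((b[3] + 1) / sx + minx, (b[2] + 1) / sy + miny))
            for b in boxes.values()]
```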
You can start with a variant of the rectangle bin-packing algorithm and add rotation. There is a method called the "guillotine bin packer"; you can download a paper and a library from GitHub.
Sort of a programming question, sort of a general logic question. Imagine a circular base with a pattern of circles:
And another circle, mounted above and able to rotate, with holes that expose the colored circles below:
There must be an optimal pattern of either the colored circles or the openings (or both) that will allow for all N possible combinations of colors... but I have no idea how to attack the problem! At this point, combinations of 2 seem probably the easiest and would be fine as a starting point (red/blue, red/green, red/white, etc).
I would imagine there will need to be gaps in the colors, unlike the example above. Any suggestions welcome!
Edit: clarified the question (hopefully!) thanks to feedback from Robert Harvey
For two holes, you could look for a perfect matching in a bipartite graph, each permutation described by two nodes, one in each partition. Nodes would be connected if they share one element, i.e. the (blue,red) node from the first partition connected to the (red,green) node of the second. The circles arranged in the same distance would allow for both of these patterns. A perfect matching in that graph would correspond to chains or cycles of permutations where two of them always share a single color. A bit like dominoes. If you had a set of cycles of the same length, you could interleave them to form the pattern on the lower disk. I'm not sure how easy it will be to obtain these same length cycles, though, and I also don't know how to generalize this to more than two elements in each permutation.
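A sketch of how that matching could be set up (networkx is an assumed dependency; the graph construction follows the description above, and whether a perfect matching then translates into a workable disk layout is left open, as noted):

```python
from itertools import permutations
from networkx.algorithms import bipartite
import networkx as nx

colors = ["red", "blue", "green", "white"]
pairs = list(permutations(colors, 2))          # every ordered 2-color combination

G = nx.Graph()
left = [("L", p) for p in pairs]               # one copy of each permutation per partition
right = [("R", p) for p in pairs]
G.add_nodes_from(left, bipartite=0)
G.add_nodes_from(right, bipartite=1)
for u in pairs:
    for v in pairs:
        # Connect permutations that share exactly one color,
        # e.g. (blue, red) -- (red, green), like dominoes.
        if u != v and len(set(u) & set(v)) == 1:
            G.add_edge(("L", u), ("R", v))

matching = bipartite.hopcroft_karp_matching(G, top_nodes=left)
# Follow the matching from node to matched node to read off the chains/cycles
# of overlapping pairs described above.
```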
I have a 3D set of points. These points will undergo a series of tiny perturbations (all points will be perturbed at once). Example: if I have 100 points in a box, each point may be moved up to, but no more than 0.2% of the box width in each iteration of my program.
After each perturbation operation, I want to know the new distance to each point's nearest neighbor.
This needs to use a very fast data structure; I'm optimizing this for speed. It's a somewhat tricky problem because I'm modifying all points at once. Approximate NN algorithms are not suitable for this problem.
I feel like the answer is somewhere between k-d trees and Voronoi tessellations, but I am not an expert on data structures, so I am baffled about what to do. I'm sure this is a very hard problem that would require a lot of research to reach a truly optimal solution, but even something fairly close to optimal will work for me.
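Not a final data structure, but a baseline worth having for comparison: rebuild a SciPy k-d tree every iteration and query each point's nearest neighbour exactly (SciPy is an assumed dependency; the perturbation here is just an illustration of the setup described):

```python
import numpy as np
from scipy.spatial import cKDTree

def nearest_neighbor_distances(points):
    """points: (N, 3) array. Distance from each point to its nearest other point;
    query k=2 because the closest hit is always the point itself."""
    tree = cKDTree(points)
    dists, _ = tree.query(points, k=2)
    return dists[:, 1]

# Per iteration: perturb all points at once, then rebuild and re-query.
pts = np.random.rand(100, 3)
pts += (np.random.rand(*pts.shape) - 0.5) * 0.002   # tiny perturbation, as described
print(nearest_neighbor_distances(pts))
```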
Thanks
You can try a quadkey or a space-filling ("monster") curve. It reduces the dimensionality and fills the plane. Microsoft's Bing Maps quadkey is a good place to start learning.
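A tiny sketch of the idea (the helper name and the 2-D quantization are my own; this orders points along a Z-order/quadkey-style curve, which groups nearby points most of the time but does not by itself give exact nearest neighbours):

```python
def morton_key(x, y, bits=16):
    """Interleave the bits of two non-negative integer coordinates (Z-order /
    quadkey-style index). Nearby keys usually mean nearby points, not always."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (2 * i)
        key |= ((y >> i) & 1) << (2 * i + 1)
    return key

# Quantize floating-point coordinates to a grid first, then sort points by key.
points = [(0.12, 0.80), (0.13, 0.79), (0.90, 0.10)]
scale = (1 << 16) - 1
order = sorted(range(len(points)),
               key=lambda i: morton_key(int(points[i][0] * scale),
                                        int(points[i][1] * scale)))
```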