What are some practical applications of taking the vertical sum of a binary tree?

I came across this interview question and was wondering why you would want to take the vertical sum of a binary tree. Is this algorithm useful?
http://www.careercup.com/question?id=2375661

For a balanced tree the vertical sum can give you a rough insight into the range of the data. Binary search trees, although easier to code than self-balancing trees, can take on pathological shapes depending on the order in which the data is inserted. The vertical sum is a good indicator of that pathology.
Look at the code at vertical sum in binary tree. This algorithm is written assuming a max width for the tree. Using this algorithm you will be able to get a feel for different types of unbalanced trees.
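Here is a minimal Python sketch of the vertical-sum idea, using a dictionary keyed by column (horizontal distance from the root) instead of a fixed maximum width; the Node class and the column convention are just illustrative assumptions, not the linked code:

```python
from collections import defaultdict

class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def vertical_sum(root):
    """Sum node values by column: the root is column 0, a left child is one
    column to the left (-1), a right child one column to the right (+1)."""
    sums = defaultdict(int)

    def walk(node, col):
        if node is None:
            return
        sums[col] += node.val
        walk(node.left, col - 1)
        walk(node.right, col + 1)

    walk(root, 0)
    # Return the sums ordered from the leftmost to the rightmost column.
    return [sums[c] for c in sorted(sums)]

# A skewed insertion order produces a lopsided profile: a right-only chain
# of 1, 2, 3 yields [1, 2, 3] with nothing to the left of the root.
print(vertical_sum(Node(1, right=Node(2, right=Node(3)))))
```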
An interesting variation of this program would be to use permutations of a fixed data set to build binary trees and look at the various vertical sums. The leading and trailing zeroes give you a feel for how the tree is balanced, and the vertical sums give you insight into how data arrival order affects the height of the tree (and the average access time for the data in the tree). An internet search will return implementations of this algorithm using dynamic data structures; with those I think you would want to document which sum includes the root node.
Your question "Is this algorithm useful?" actually begs the question of how useful is a binary tree compared to a balanced tree. The vertical sum of a tree documents whether the implementation is closer to O(N) or O(log N). Here is an article on [balanced binary trees][3]. Put a balanced tree implementation in your personal toolkit, and try to remember if you would use a pre-order, in-order, or post-order traversal of the tree to calculate your vertical sum. You'll get an A+ for this question.

Related

Iterative deepening in minimax - sorting all legal moves, or just finding the PV-move then using MVV-LVA?

After reading the chessprogramming wiki and other sources, I've been confused about the exact purpose of iterative deepening. My original understanding was the following:
It consists of a minimax search performed at depth=1, depth=2, etc., until reaching the desired depth. After the minimax search at each depth, sort the root-node moves according to the results from that search, to get optimal move ordering in the next search at depth+1; so in the next, deeper search the PV-move is searched first, then the next best move, then the next best move after that, and so on.
Is this correct? Doubts emerged when I read about MVV-LVA ordering, specifically about ordering captures, and additionally, using hash tables and such. For example, this page recommends a move ordering of:
1. PV-move of the principal variation from the previous iteration of an iterative deepening framework for the leftmost path, often implicitly done by 2.
2. Hash move from hash tables
3. Winning captures/promotions
4. Equal captures/promotions
5. Killer moves (non capture), often with mate killers first
6. Non-captures sorted by history heuristic and the like
7. Losing captures
If so, then what's the point of sorting the minimax results from each depth, if only the PV-move is needed? On the other hand, if the whole point of ID is the PV-move, isn't it a waste to search every single minimax depth up to the desired depth just to calculate the PV-move of each depth?
What is the concrete purpose of ID, and how much computation does it save?
Correct me if I am wrong, but I think you are mixing two different concepts here.
Iterative deepening is mainly used to set a maximum search time for each move. The AI searches deeper and deeper, and when the allotted time is up it returns the move from the deepest search it finished. Since each increase in depth leads to exponentially longer search times, searching every depth from e.g. 1 to 12 takes almost the same time as searching only at depth 12.
Sorting the moves is done to maximize the effect of alpha-beta pruning. For optimal alpha-beta pruning you want to look at the best move first, which is of course impossible to know beforehand, but the ordering you listed above is a good guess. Just make sure that the sorting doesn't slow down your recursive function so much that it cancels out the gain from alpha-beta.
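A minimal Python sketch of how the two ideas fit together, assuming a hypothetical Position interface (legal_moves, play, evaluate, is_terminal) with evaluate() scoring from the side to move, as negamax requires:

```python
import time

def iterative_deepening(position, max_depth, time_budget):
    """Search depth 1, 2, ... until max_depth or until the time budget runs out,
    seeding each deeper search with the best move from the previous iteration."""
    deadline = time.monotonic() + time_budget
    best_move = None
    for depth in range(1, max_depth + 1):
        if time.monotonic() >= deadline:
            break
        _, move = alphabeta(position, depth, float("-inf"), float("inf"),
                            first_move=best_move)
        if move is not None:
            best_move = move      # PV-move used for ordering at the next depth
    return best_move

def alphabeta(position, depth, alpha, beta, first_move=None):
    """Negamax alpha-beta; evaluate() is assumed to score for the side to move."""
    if depth == 0 or position.is_terminal():
        return position.evaluate(), None
    moves = list(position.legal_moves())
    if first_move in moves:       # try the previous iteration's PV-move first
        moves.remove(first_move)
        moves.insert(0, first_move)
    best = moves[0] if moves else None
    for move in moves:
        score, _ = alphabeta(position.play(move), depth - 1, -beta, -alpha)
        score = -score
        if score > alpha:
            alpha, best = score, move
        if alpha >= beta:
            break                 # cutoff: the remaining moves cannot matter
    return alpha, best
```

A real engine would also order moves below the root (hash move, captures, killers, history), but even this root-only ordering shows why the earlier, cheaper iterations pay for themselves.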
Hope this helps and that I understood your question correctly.

Conditions for binary search to succeed in finding nearest neighbor on array of 3D points?

I have an ordered array of 3D points. The points represent a path in 3D space.
Given an arbitrary point I want to find the nearest point on the path.
If the path were relatively straight this would be a trivial application of binary search, but since the path can have arbitrary curvature (looping back on itself), binary search may fail to find the nearest point.
My question is as follows:
What is the least strict constraint under which binary search will succeed in finding the nearest point? Is it monotonicity in each dimension? Is it related to the path's curvature? Etc.
It depends a little on whether your path is given or whether you are free to use any path you like.
Let's assume your path is given.
To answer your question: a simple binary search cannot be guaranteed to find the closest point. Imagine your path is a circle that is cut open at one place. The first and last points of your curve (the circle) will always be very close, but no binary search can fix that. As Yann Vernier suggested, you can use spatial searches for that; search for "nearest neighbor query". These can usually be done with spatial indexes such as a kd-tree, quadtree or R-tree. You can find Java implementations here.
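For illustration, here is a minimal nearest-neighbor query over the path points with a kd-tree. This sketch assumes Python with NumPy/SciPy rather than the Java libraries mentioned above, and the data is made up:

```python
import numpy as np
from scipy.spatial import cKDTree

# `path` stands in for the ordered (N, 3) array of path points.
path = np.random.rand(1000, 3)
tree = cKDTree(path)                    # build the spatial index once

query = np.array([0.5, 0.5, 0.5])       # arbitrary query point
dist, idx = tree.query(query)           # nearest path point, regardless of curvature
print(idx, dist, path[idx])
```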
In case the path is not predefined, you can order the points along a z-curve (Morton order) or a Hilbert curve (the curve being your path). This gives you a linear ordering which can be searched with a binary search. It does not always give the closest point, but it is very fast, space efficient and will often give you the closest point. A Hilbert curve is more likely than a z-curve to give you the closest point, but it is harder to calculate.
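A sketch of the Morton-ordering idea, assuming the coordinates are already normalized to [0, 1); the 10-bit quantization is an arbitrary choice:

```python
def morton3(x, y, z, bits=10):
    """Interleave the low `bits` bits of integer coordinates x, y, z
    into a single Morton (z-order) code."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (3 * i)        # x bit -> position 3i
        code |= ((y >> i) & 1) << (3 * i + 1)    # y bit -> position 3i + 1
        code |= ((z >> i) & 1) << (3 * i + 2)    # z bit -> position 3i + 2
    return code

# Quantize, sort by Morton code, then binary-search the sorted codes.
points = [(0.1, 0.7, 0.3), (0.9, 0.2, 0.5), (0.4, 0.4, 0.4)]
scale = (1 << 10) - 1
ordered = sorted(
    (morton3(int(x * scale), int(y * scale), int(z * scale)), (x, y, z))
    for x, y, z in points
)
print(ordered)
```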

Computational complexity and shape nesting

I have arbitrary SVG paths which I need to pack as efficiently as possible within a given rectangle (with as little wasted space as possible). After some research I found bin packing algorithms, which seem to deal with boxes rather than curved random shapes (my SVG shapes are quite complex and include beziers etc.).
AFAIK, there is no deterministic algorithm for actually packing abstract shapes.
I wish to be proven wrong here, which would be ideal (having a mathematical, deterministic method for packing them). In case I am right, however, and there is not, what would be the best approach to this problem?
The subject name is Shape Nesting, Nesting Problem or Nesting Process.
In Shape Nesting there is no single/uniform algorithm or mathematical method for nesting shapes and getting the least space waste possible.
The 1st method is the packing algorithm (it creates an imaginary bounding box for each shape and uses a rectangular 2D algorithm to pack the bounding boxes). This method is fast but the least efficient in regards to space waste.
The 2nd method is some kind of incremental rotation. The algorithm rotates the shape in incremental steps and checks whether it fits in the space. This is better than the packing method in regards to space waste, but it is painstakingly slow.
What are some other classroom examples for achieving a solution to this problem?
[Edit1] new answer
As mentioned before, bin packing is NP-complete (NP-hard in its optimization form), so forget about an exact algebraic solution.
Known approaches are:
generate and test
Either you test every possibility of the problem and remember the best solution, or you incrementally add items one by one in the same way. This is basically what you are doing now; without a proper heuristic it is unusably slow, but it has the best space efficiency (the exhaustive variant packs better but is much slower), O(N!).
take advantage of sorting items by size
Something like this is much faster, almost O(N·log(N)) (depending on the sorting algorithm used). Space efficiency strongly depends on the item size range and count. For rectangular shapes this is the best approach (fastest and usable even for N > 1000). For complex shapes it is not a good way, but look at it anyway, maybe you get some idea (a small sketch follows below)...
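A minimal Python sketch of this idea for the bounding-box approximation: sort boxes by height and pack them greedily onto shelves. The fixed bin width and the shelf heuristic are illustrative assumptions, not part of the original answer:

```python
def shelf_pack(rects, bin_width):
    """Greedy shelf packing: sort bounding boxes by height (tallest first),
    then place them left to right on shelves inside a bin of fixed width.
    rects is a list of (width, height); returns per-rect (x, y) positions
    in the original order plus the total height used."""
    order = sorted(range(len(rects)), key=lambda i: rects[i][1], reverse=True)
    positions = [None] * len(rects)
    shelf_y = 0.0      # bottom of the current shelf
    shelf_h = 0.0      # height of the current shelf
    cursor_x = 0.0     # next free x position on the current shelf
    for i in order:
        w, h = rects[i]
        if cursor_x + w > bin_width:    # box does not fit: open a new shelf
            shelf_y += shelf_h
            shelf_h = 0.0
            cursor_x = 0.0
        positions[i] = (cursor_x, shelf_y)
        cursor_x += w
        shelf_h = max(shelf_h, h)
    return positions, shelf_y + shelf_h

print(shelf_pack([(4, 3), (2, 5), (3, 2), (5, 1)], bin_width=8))
```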
use of a neural network
This is an extremely vague approach with no guarantee of a solution, but possibly the best space-efficiency/runtime ratio.
I think there could also be some field approach out there
I saw a few for generating graph layouts. All items create fields (both attractive and repulsive) so they move toward a semi-stable state.
At first all items are at random locations.
When the movement stops, remember the best solution and shake all items a little or randomize their positions again.
Cycle this a few times.
This approach is much faster than generate and test and can get very close to it, but it can hang in a local min/max or oscillate if the fields are not chosen well. For example, all items can have a constant attractive force toward each other and a repulsive force that gets strong only when the items are very close. You have to prevent overlapping of items (either by stronger repulsion or by collision tests). You also have to create some rotation moment, for example with that repulsive force: it differs at each vertex, so it creates a rotation moment (which can automatically align similar sides closer together). Also you can have a semi-stable state with big distances between items and, after finding the best solution, just turn off the repulsion fields so they stick together. Sometimes it gives better results, sometimes not... a toy sketch is given after the links below; here is a nice example for graph layout computation:
Logic to strategically place items in a container with minimum overlapping connections
Demo from the same QA
And here solver for placing sliders in 2D:
How to implement a constraint solver for 2-D geometry?
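To make the field idea above concrete, here is a toy Python sketch that treats each item as a circle with a constant attraction toward the origin and a short-range repulsion once circles overlap; all constants, the circle approximation and the omitted rotation/shake steps are my own simplifications:

```python
import math
import random

def force_layout(centers, radii, iterations=500, dt=0.05):
    """Move circle centers toward a compact, non-overlapping arrangement.
    `centers` is a list of [x, y] lists, modified in place and returned."""
    n = len(centers)
    for _ in range(iterations):
        forces = [[0.0, 0.0] for _ in range(n)]
        for i in range(n):
            # Weak constant pull toward the origin keeps the layout compact.
            d = math.hypot(*centers[i]) or 1e-9
            forces[i][0] -= centers[i][0] / d
            forces[i][1] -= centers[i][1] / d
            for j in range(n):
                if i == j:
                    continue
                dx = centers[i][0] - centers[j][0]
                dy = centers[i][1] - centers[j][1]
                dist = math.hypot(dx, dy) or 1e-9
                # Repulsion ramps up sharply once circles begin to overlap.
                overlap = max(0.0, radii[i] + radii[j] - dist)
                push = 5.0 * overlap / dist
                forces[i][0] += dx * push
                forces[i][1] += dy * push
        for i in range(n):
            centers[i][0] += dt * forces[i][0]
            centers[i][1] += dt * forces[i][1]
    return centers

centers = [[random.uniform(-5, 5), random.uniform(-5, 5)] for _ in range(8)]
print(force_layout(centers, radii=[1.0] * 8))
```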
[Edit0] old answer before reformulating the question
I am not clear what you want to achieve:
1. you have an SVG picture and want to separate its parts into rectangular regions (as filled as can be, least empty space in them, no shape change in the picture)
2. you have an SVG picture and want to change its shapes according to some purpose (if this is the case, some additional info is needed)
[solution for 1]
create a list of points for the whole SVG in global SVG space (all points are transformed)
for a line you need to add 2 points
for rectangles 4 points
for a circle/ellipse/bezier/elliptic arc 8 points
find local centres of mass
use the classical approach
or speed things up by computing the average density of points along the x and y axes separately, and afterwards just check all combinations of the found local density maxima to see whether they really are sub-cluster centers or not
each sub-cluster center is the center of one of your regions
now find the farthest points which are still part of your cluster (they are close enough to neighbouring points)
create a rectangular area that covers all points from the sub-cluster
you can also remove all used points from the list
repeat for all valid sub-clusters
until all points are used
Another less precise but simpler approach is:
find the SVG size
create a planar map of the SVG with some precision, for example int map[256][256]
the size of the map can be constant or have the same aspect ratio as the SVG
clear the map with 0
for every point of the SVG set the related map cell to 1 (or increment it, or whatever)
now just segment the map and you will find your objects
after segmentation you have the position and size of all objects
so finding the bounding boxes should be easy (see the sketch below)
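A minimal Python sketch of this map-and-segment idea, assuming the SVG has already been sampled into a list of (x, y) points; the grid size and the 4-connected flood fill are illustrative choices:

```python
def bounding_boxes(points, size=256):
    """Rasterize sampled SVG points onto a size x size grid, flood-fill the
    connected cells to segment the map, and return one bounding box per
    segment (in grid-cell coordinates)."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    minx, miny = min(xs), min(ys)
    sx = (size - 1) / ((max(xs) - minx) or 1.0)
    sy = (size - 1) / ((max(ys) - miny) or 1.0)
    grid = [[0] * size for _ in range(size)]
    for x, y in points:
        grid[int((y - miny) * sy)][int((x - minx) * sx)] = 1   # mark occupied cell
    boxes = []
    for r in range(size):
        for c in range(size):
            if grid[r][c] != 1:
                continue
            # Flood fill one connected component and track its extent.
            stack, grid[r][c] = [(r, c)], 2
            lo_r = hi_r = r
            lo_c = hi_c = c
            while stack:
                cr, cc = stack.pop()
                lo_r, hi_r = min(lo_r, cr), max(hi_r, cr)
                lo_c, hi_c = min(lo_c, cc), max(hi_c, cc)
                for nr, nc in ((cr + 1, cc), (cr - 1, cc), (cr, cc + 1), (cr, cc - 1)):
                    if 0 <= nr < size and 0 <= nc < size and grid[nr][nc] == 1:
                        grid[nr][nc] = 2
                        stack.append((nr, nc))
            boxes.append((lo_c, lo_r, hi_c, hi_r))
    return boxes
```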
You can start with a variant of the rectangle bin-packing algorithm and add rotation. There is a method called the "Guillotine bin packer", and you can download a paper and a library on GitHub.

Binary Search Tree Density?

I am working on a homework assignment that deals with binary search trees and I came across a question that I do not quite understand. The question asks how density affects the time it takes to search a binary tree. I understand binary search trees and big-O notation, but we have never dealt with density before.
The density of a binary search tree can be defined as the number of nodes accumulated up to a given level. A perfect binary tree has the highest density. So the question basically asks how the number of nodes at each level affects the search time in the tree. Let me know if that's not clear.
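A quick illustrative calculation (my own example, not part of the original answer): compare the average number of comparisons in a dense (perfect) tree with a completely sparse (chain-shaped) tree of the same size.

```python
def average_depth_perfect(n):
    """Average node depth in a perfect BST: level k holds 2**k nodes."""
    total, level, at_level, remaining = 0, 0, 1, n
    while remaining > 0:
        take = min(at_level, remaining)
        total += level * take
        remaining -= take
        level += 1
        at_level *= 2
    return total / n

def average_depth_chain(n):
    """Average node depth when every node has a single child (degenerate tree)."""
    return sum(range(n)) / n

n = 1023
print(average_depth_perfect(n))   # ~8.0  -> searches cost O(log N)
print(average_depth_chain(n))     # 511.0 -> searches cost O(N)
```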

Graphing imaginary numbers with VB.NET

Does anyone have experience doing this? When I say imaginary I mean the square root of negative one. How would I graph this?
http://www.wolframalpha.com/input/?i=sqrt(-1)
Or more specifically, http://www.wolframalpha.com/input/?i=plot+sqrt(-1)
Complex numbers have many applications. They are useful because they store two properties (the real and imaginary parts) that behave sensibly when you apply standard math operators to them, like multiplication. Many problems become easy to solve by transforming them into the complex-number domain, performing an operation there that is easy to calculate, and then transforming them back.
A good example is calculating the behavior of an electronic circuit that has reactive components. In the complex domain, the impedance of a coil is jwL and that of a capacitor is 1/jwC (w = omega). Drive the circuit with a signal expressed in the complex domain and you can easily calculate the response. In this particular case, graphing the response is meaningful by mapping the real part on the X-axis and the imaginary part on the Y-axis. The length of the vector is the amplitude, the angle is the phase.
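For example, a short Python sketch of that circuit calculation using the built-in complex type (the component values and drive frequency are made up):

```python
import cmath
import math

# Series RLC circuit driven at angular frequency w (rad/s)
R, L, C = 100.0, 1e-3, 1e-6            # ohms, henries, farads
w = 2 * math.pi * 5000                 # 5 kHz drive

Z = R + 1j * w * L + 1 / (1j * w * C)  # series impedances simply add
V = 10 + 0j                            # 10 V drive, expressed as a complex number
I = V / Z                              # complex current

# Real part -> X axis, imaginary part -> Y axis;
# the vector length is the amplitude, its angle is the phase.
print(I.real, I.imag)
print(abs(I), cmath.phase(I))
```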
The Laplace transform is another complex-domain transformation, closely related to Euler's identity. It has a very useful graphical representation too: plotting the complex poles of a system lets you predict the stability of a feedback loop (poles must lie in the left half-plane; for the discrete-time z-transform the criterion is the unit circle).
These kinds of transforms are popular because they simplify the math or because their graphical representations are easy to interpret. Whether yours is equally useful really depends on what your transform does.