What is the maximum possible height when a binary search tree has n nodes? - binary-search-tree

Is there a mathematical formula for the maximum possible height of a tree with exactly n nodes?

It can be as large as the number of nodes. If you are not implementing a balanced binary search tree (like an AVL tree or a Red-Black tree), then the height of the tree will depend on the inputs you give. In the worst case the height equals the number of nodes, which happens when the keys arrive in sorted order (each value greater than the previous one, or each value less than the previous one) and the tree degenerates into a chain. If you need more info, please consider describing the specific use case for which this question was asked.
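As a minimal sketch of that worst case (using a hypothetical Node class and a plain, unbalanced insert), feeding already-sorted keys into the tree produces a chain whose height equals the number of nodes:

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def insert(root, key):
    """Plain BST insertion with no rebalancing."""
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

def height(root):
    """Height counted in nodes: an empty tree has height 0."""
    if root is None:
        return 0
    return 1 + max(height(root.left), height(root.right))

root = None
for key in range(1, 8):        # keys arrive already sorted: 1, 2, ..., 7
    root = insert(root, key)

print(height(root))            # 7 -- the tree degenerates into a chain of n nodes
```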

Related

Binary Search tree time complexity

I am currently working with a binary search tree. I wonder what the time complexity for a binary search tree is. More specifically, what is the worst-case time complexity of the operations height, leaves and toString for a binary search tree, and why?
All three operations have O(n) worst-case time complexity.
For height: every node has to be visited, the worst case being a degenerate tree in which all nodes except one have exactly one child.
For leaves: each node has to be visited in order to check whether it is a leaf.
For toString: obviously all nodes need to be visited.
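To illustrate why all three are linear, here is a minimal sketch (with a hypothetical Node class holding key, left and right); each function recurses into both subtrees and therefore touches every node once:

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key = key
        self.left = left
        self.right = right

def height(node):
    """Visits every node once: O(n)."""
    if node is None:
        return 0
    return 1 + max(height(node.left), height(node.right))

def leaves(node):
    """Counts the leaves; every node is checked once: O(n)."""
    if node is None:
        return 0
    if node.left is None and node.right is None:
        return 1
    return leaves(node.left) + leaves(node.right)

def to_string(node):
    """In-order string of all keys; necessarily visits every node: O(n)."""
    if node is None:
        return ""
    return f"{to_string(node.left)} {node.key} {to_string(node.right)}".strip()
```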

An alternative method to create an AVL tree from a sorted array in O(n) time

I need some help with this data structures homework problem. I was asked to write an algorithm that creates an AVL tree from a sorted array in O(n) time.
I read this solution method: Creating a Binary Search Tree from a sorted array
They do it recursively for the two halves of the sorted array and it works.
I found a different solution and I want to check if it's valid.
My solution is to store another property of the root called "root.minimum" that will contain a pointer to the minimum.
Then, for the k'th element, we'll add it recursively to the AVL tree of the previous k-1 elements. We know that the k'th element is smaller than the minimum, so we'll add it to the left of root.minimum to create the new tree.
Now the tree is no longer balanced, but all we need to do to fix it is just one right rotation of the previous minimum.
This way the insertion takes O(1) for every node, and in total O(n).
Is this method valid to solve the problem?
Edit: I meant that I'm starting from the largest element and then adding the rest according to their order. So each element I'm adding is smaller than everything already in the tree, which is why I add it to the left of root.minimum. Then all I have to do to balance the tree is a right rotation, which is O(1). Is this a correct solution?
If you pick a random element as the root in the first place (which is probably not the best idea, since we know the root should be the middle element), you put the root itself in root.minimum. Then, for each new element, if it is smaller than root.minimum, you do as you said and make the tree balanced in O(1) time. But what if it is larger? In that case you need to compare it with the root.minimum of the right child, and if it is also larger, with the root.minimum of the right child of the right child, and so on. This might take O(k) in the worst case, which results in O(n^2) in the end. Also, this way you are not using the sorted property of the array.
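For reference, here is a minimal sketch of the recursive two-halves construction the question links to (not the rotation-based method proposed in the question), assuming a hypothetical Node class that caches its height so the result can serve as an AVL tree:

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None
        self.height = 1   # height in nodes, cached for later AVL rebalancing

def build_avl(arr, lo=0, hi=None):
    """Build a height-balanced BST from the sorted slice arr[lo:hi] in O(n)."""
    if hi is None:
        hi = len(arr)
    if lo >= hi:
        return None
    mid = (lo + hi) // 2
    node = Node(arr[mid])                      # middle element becomes the subtree root
    node.left = build_avl(arr, lo, mid)
    node.right = build_avl(arr, mid + 1, hi)
    node.height = 1 + max(node.left.height if node.left else 0,
                          node.right.height if node.right else 0)
    return node

root = build_avl([1, 2, 3, 5, 8, 13, 21])
print(root.key, root.height)   # 5 3 -- the middle element is the root, the tree is height-balanced
```

Because each half of the array differs in size from the other by at most one element, every node's subtrees differ in height by at most one, so the result satisfies the AVL balance condition without any rotations.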

How is AVL tree insertion O(log n) when you need to recalculate balance factors up the tree after every insertion?

I'm implementing an AVL tree, and I'm trying to wrap my head around the time complexity of the adding process. It's my understanding that in order to achieve O(log n) you need to keep either balance or height state in tree nodes so that you don't have to recalculate them every time you need them (which may require a lot of additional tree traversal).
To solve this, I have a protocol that recursively "walks back up" a trail of parent pointers to the root, balancing if needed and setting heights along the way. This way, the addition algorithm kind of has a "capture" and "bubble" phase down and then back up the tree - like DOM events.
My question is: is this still technically O(log n) time? Technically, you only deal with divisions of half at every level in the tree, but you also need to travel down and then back up every time. What is the exact time complexity of this operation?
Assume the height of the tree is H and the structure stays balanced during all operations.
Then, as you mentioned, inserting a node takes O(H).
However, every time a node is added to the AVL tree, you also need to update the heights of its ancestors all the way up to the root node.
Since the tree is balanced, this update only walks the path from the newly inserted node back to the root, which is equivalent to traversing a linked list of length H.
Therefore, updating heights takes another O(H), and the total update time is 2 * O(H), which is still O(log N) once we drop the constant factor.
Hope this makes sense to you.
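As a rough sketch of the two O(H) passes (a hypothetical parent-pointer implementation; the rotations a real AVL insert performs are only marked in a comment, not implemented):

```python
class Node:
    def __init__(self, key, parent=None):
        self.key = key
        self.parent = parent
        self.left = None
        self.right = None
        self.height = 1          # cached height so it never has to be recomputed

def insert(root, key):
    """Pass 1 (down, O(H)): find the insertion point as in a plain BST."""
    node, parent = root, None
    while node is not None:
        parent = node
        node = node.left if key < node.key else node.right
    new = Node(key, parent)
    if parent is None:
        root = new
    elif key < parent.key:
        parent.left = new
    else:
        parent.right = new

    # Pass 2 (up, O(H)): walk the parent pointers back to the root,
    # refreshing the cached heights; a real AVL insert would also check
    # the balance factor here and rotate when it reaches +/-2.
    node = new.parent
    while node is not None:
        lh = node.left.height if node.left else 0
        rh = node.right.height if node.right else 0
        node.height = 1 + max(lh, rh)
        # balance = lh - rh; if abs(balance) > 1: rotate(node)   # omitted
        node = node.parent
    return root
```

Each pass touches at most one node per level, so the whole insertion is bounded by roughly 2 * H steps, i.e. O(log n) as long as the tree is kept balanced.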
"Technically, you only deal with divisions of half at every level in the tree, but you also need to travel down and then back up every time. What is the exact time complexity of this operation?"
You've stated that you have to travel down and up every time.
So we can say that your function's runtime is upper-bounded by 2 * log n.
It's clear that this is O(log n).
More specifically, we could choose the constant c = 3 and the starting value n0 = 1, such that
2 * log n <= 3 * log n for all values of n >= 1.
This reduces to 2 <= 3, which is of course true.
The idea behind big-O is to understand the basic shape of the function that upper-bounds your function's runtime as the input size moves towards infinity - thus, we can drop the constant factor of 2.

Convert a non-balanced binary search tree to a red-black tree

Is it possible to convert a non-balanced BST (the size of the tree is n and the height is h) to an RBT with time complexity O(n) and space complexity O(h)?
If you know the number of nodes beforehand this is doable: knowing the number of nodes tells you the height of the target RB tree (regardless of the original tree's height).
Therefore you can simply 'peel' nodes off the original tree one by one, starting from the minimum, and place each one in its correct slot in the target tree. The easiest way to do this ends up with every row black except for a possibly incomplete bottom row, which is red. (That is, if you have a tree with 7 nodes they will all be black, but if you have a tree with 6 the first 2 rows will be black and the bottom row will have 3 red nodes.)
This will take O(n) time - to visit each node in the original tree - and O(h) space, because you need some bookkeeping about where you are in the process.
And note this will only work if you know the number of nodes in the original tree, as it depends on knowing which nodes will end up in the bottom row of the produced tree.
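A minimal sketch of that idea in Python (assuming the source tree's nodes expose key, left and right, and that n is known up front): an in-order generator "peels" the keys off the original tree using O(h) stack space, while a recursive rebuild places them into a complete tree whose full rows are black and whose partial bottom row is red.

```python
class RBNode:
    def __init__(self, key, color):
        self.key = key
        self.color = color            # "black" or "red"
        self.left = None
        self.right = None

def inorder(node):
    """Yield the keys of the original (possibly degenerate) BST in sorted
    order; the generator's recursion stack uses O(h) space."""
    if node is not None:
        yield from inorder(node.left)
        yield node.key
        yield from inorder(node.right)

def bst_to_rb(root, n):
    """Rebuild an n-node BST as a red-black tree in O(n) time.
    Levels 1..f of the result are full and black; any leftover nodes sit
    on level f + 1 and are red, where f = floor(log2(n + 1))."""
    full_levels = (n + 1).bit_length() - 1     # number of completely filled rows
    keys = inorder(root)

    def build(count, depth):
        if count == 0:
            return None
        left = build(count // 2, depth + 1)    # smaller keys go to the left subtree
        node = RBNode(next(keys), "black" if depth <= full_levels else "red")
        node.left = left
        node.right = build(count - count // 2 - 1, depth + 1)
        return node

    return build(n, 1)
```

The rebuild recursion is only O(log n) deep, which stays within the O(h) space bound since h >= log2(n) for any BST, and every original node is visited exactly once, giving the O(n) time bound.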

Binary Search Tree queries

I have a couple of questions:
Given a BST of floats, find the highest number just below a given float value
Implement a binary search tree for floating-point values
My ideas: I thought a greedy descent from the given location would give us the right answer for 1), and 2) would basically just be a matter of considering subtrees of depth equal to the precision of the value. This would give us a standard BST but with subtrees to access the floating-point data points.
Let me know if these are correct.
I don't think there is a significant difference between a BST with integer nodes and one with floating-point nodes, so the answers for 1) and 2) are straightforward. Using a BST in-order traversal, keep track of the highest number below the given float value until you encounter a value that is greater than or equal to the given value or the traversal is done.
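A small sketch of that in-order approach (the Node class and the highest_below helper are hypothetical names for illustration): keep the last key seen that is still below the target and stop as soon as a key reaches or exceeds it.

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key = key
        self.left = left
        self.right = right

def highest_below(node, target, best=None):
    """Return the largest key strictly below `target`, or None if there is none.
    In-order traversal visits keys in ascending order, so we can stop as soon
    as a key is >= target."""
    if node is None:
        return best
    best = highest_below(node.left, target, best)     # smaller keys first
    if node.key >= target:
        return best                                   # everything further right is larger
    return highest_below(node.right, target, node.key)

# Example: a small BST of floats
root = Node(2.5, Node(1.5, Node(0.5)), Node(4.0, Node(3.25)))
print(highest_below(root, 3.5))   # 3.25
```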