Time complexity of a "modified" heap

Suppose that I use heap* to denote a heap that need not be left-aligned at the last level, nor complete. Now, given a max heap* (which has the analogous property of a max heap, that is, every parent's value > its children's values), to perform an Extract_Max() I pop the root, compare its two children, and place the larger child's value in the root node. The node that held the larger value is now vacant, so I consider the children of the vacant node and place the larger of their values in it, vacating that child's node in turn. I keep progressing this way until a node in the last layer is vacated, at which point the algorithm terminates. If there are $n$ elements in the max heap*, what is the worst-case time complexity of the above Extract_Max() algorithm?
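A sketch of the described promotion walk on a pointer-based heap* (the class and all names here are mine, not from the question). The walk touches one node per level, so the cost is proportional to the height; since a heap* shape is unconstrained, the height can approach n.

```python
# Sketch of the described Extract_Max() on a pointer-based max heap*.

class HeapNode:
    def __init__(self, val):
        self.val = val
        self.left = None
        self.right = None

def extract_max(root):
    """Pop the root value, then promote the larger child's value into
    each vacated node until a leaf is vacated; unlink that leaf.
    Returns (max_value, new_root)."""
    if root is None:
        raise IndexError("extract_max from an empty heap*")
    max_val = root.val
    parent, node = None, root
    # Walk down, always promoting the larger child's value upward.
    while node.left is not None or node.right is not None:
        if node.right is None or (node.left is not None
                                  and node.left.val >= node.right.val):
            child = node.left
        else:
            child = node.right
        node.val = child.val          # fill the vacancy with the larger child
        parent, node = node, child
    # `node` is now the vacated leaf: unlink it from its parent.
    if parent is None:
        return max_val, None          # the heap* had a single node
    if parent.left is node:
        parent.left = None
    else:
        parent.right = None
    return max_val, root
```

The loop body is O(1) per level, so the total cost is O(height); for an unconstrained heap* a degenerate chain makes that O(n) in the worst case, versus O(log n) for a balanced shape.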

An alternative method to create an AVL tree from a sorted array in O(n) time

I need some help with this data structures homework problem. I was asked to write an algorithm that creates an AVL tree from a sorted array in O(n) time.
I read this solution method: Creating a Binary Search Tree from a sorted array
They do it recursively for the two halves of the sorted array and it works.
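For reference, the recursive method linked above can be sketched as follows (the class and function names are mine): pick the middle element as the root and recurse on the two halves, using indices so no subarrays are copied.

```python
# Build a height-balanced BST from a sorted array in O(n):
# the middle element becomes the root, and the two halves
# become the left and right subtrees.

class Node:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def sorted_array_to_bst(arr, lo=0, hi=None):
    """Build a balanced BST from sorted arr[lo:hi]."""
    if hi is None:
        hi = len(arr)
    if lo >= hi:
        return None
    mid = (lo + hi) // 2
    node = Node(arr[mid])
    node.left = sorted_array_to_bst(arr, lo, mid)
    node.right = sorted_array_to_bst(arr, mid + 1, hi)
    return node
```

Each element is visited exactly once, giving O(n), and the two halves differ in size by at most one at every node, so the sibling heights differ by at most 1 and the result satisfies the AVL invariant.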
I found a different solution and I want to check if it's valid.
My solution is to store another property of the root, called "root.minimum", that contains a pointer to the minimum node.
Then, for the k'th element, we add it recursively to the AVL tree of the previous k-1 elements. We know that the k'th element is smaller than the current minimum, so we add it as the left child of root.minimum to create the new tree.
Now the tree is no longer balanced, but all we need to do to fix it is just one right rotation of the previous minimum.
This way the insertion takes O(1) for every node, and in total O(n).
Is this method valid to solve the problem?
Edit: I meant that I'm starting from the largest element and then continue adding the rest in order. Each element I add is smaller than all the previous ones, so I add it to the left of root.minimum. Then all I have to do to balance the tree is a right rotation, which is O(1). Is this a correct solution?
If you pick a random element as the root in the first place (which is probably not the best idea, since we know the root should be the middle element), you store the root itself in root.minimum. Then for each new element, if it is smaller than root.minimum, you do as you said and rebalance the tree in O(1) time. But what if it is larger? In that case we need to compare it with the root.minimum of the right child, and if it is also larger, with the root.minimum of the right child of the right child, and so on. This might take O(k) in the worst case, resulting in O(n^2) overall. Also, this way you are not using the sorted property of the array.

Compute Maya Output Attr From Previous Frame's Outputs

Does Maya allow one to compute the output attributes at frame N using the output attributes calculated at Frame (N-1) as inputs? With the proviso that at (e.g.) Frame 0 we don't look at the previous frame but use some sort of initial condition. Negative frames would be calculated by looking forward in time.
e.g. The translate of the ball at Frame N is computed to be the translate of the ball at Frame N-1 + 1cm higher. At frame zero the ball is given an initial translate of zero.
The DataBlock has a setContext function but the docs appear to forbid using that to do 'timed evaluation'. I could hit the attribute plugs directly and get value with a different time but that would be using inputs outside of the data block.
Is the Maya dependency API essentially timeless, only allowing calculation using the state at the current time? Is the only solution to use animation curves, which are also essentially timeless (their input state of keyframes remains the same regardless of the time)?
A simple node connection is supposed to be updated on demand, i.e. for the 'current' frame. It's supposed to be ahistorical: you should be able to jump to a given frame directly and get a complete evaluation of the scene state without history.
If you need offset values you can use a frame cache node to access a different point in the value stream. You connect the attribute you want to lag to the frameCache's 'stream' plug, and then connect either the 'future' or 'past' attribute to the plug on your node. The offset is applied by specifying the index value for the connection; i.e., frameCache1.past[5] is 5 frames behind the value that was fed into the frameCache.
You can also do this in a less performant, but more flexible way by using an expression node. The expression can poll an attribute value at a particular time by calling getAttr() with the -t flag to specify the time. This is much slower to evaluate but lets you apply any arbitrary logic to the time offset you might want.

How is AVL tree insertion O(log n) when you need to recalculate balance factors up the tree after every insertion?

I'm implementing an AVL tree, and I'm trying to wrap my head around the time complexity of the adding process. It's my understanding that in order to achieve O(log n) you need to keep either balance or height state in tree nodes so that you don't have to recalculate them every time you need them (which may require a lot of additional tree traversal).
To solve this, I have a protocol that recursively "walks back up" a trail of parent pointers to the root, balancing if needed and setting heights along the way. This way, the addition algorithm kind of has a "capture" and "bubble" phase down and then back up the tree - like DOM events.
My question is: is this still technically O(log n) time? Technically, you only deal with divisions of half at every level in the tree, but you also need to travel down and then back up every time. What is the exact time complexity of this operation?
Assume the height of the tree is H and the structure stays balanced throughout all operations.
Then, as you mentioned, inserting a node will take O(H).
However, every time a node is added to the AVL tree, you need to update the height of the parents all the way up to the root node.
Since the tree is balanced, updating heights only walks the chain of ancestors of the newly inserted node, a linked-list-like path with the new node at its tail.
The height updating can therefore be viewed as traversing a linked list of length H.
So updating heights takes another O(H), and the total update time is 2 * O(H), which is still O(log N) once we drop the constant factor.
Hope this makes sense to you.
"Technically, you only deal with divisions of half at every level in the tree, but you also need to travel down and then back up every time. What is the exact time complexity of this operation?"
You've stated that you have to travel down and then back up every time.
So we can say that your function's runtime is upper bounded by 2 * log n.
It's clear that this is O(log n).
More specifically, we could choose the constant c = 3 and the starting value n0 = 1, such that
2 * log n <= 3 * log n for all values of n >= 1.
This reduces to 2 <= 3, which is of course true.
The idea behind big-O is to understand the basic shape of the function that upper-bounds your function's runtime as the input size moves towards infinity - thus, we can drop the constant factor of 2.
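To make the capture/bubble shape concrete, here is a minimal recursive AVL insertion sketch (the names are mine, and distinct keys are assumed): the downward recursion is the "capture" phase, and the returns are the "bubble" phase, doing an O(1) height update and at most one rebalance per ancestor.

```python
# Minimal AVL insertion: heights are stored per node and refreshed on
# the way back up the recursion, so each insert does O(log n) work
# going down plus O(log n) work coming back up.

class AVLNode:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None
        self.height = 1

def height(node):
    return node.height if node else 0

def update_height(node):
    node.height = 1 + max(height(node.left), height(node.right))

def balance(node):
    return height(node.left) - height(node.right)

def rotate_right(y):
    x = y.left
    y.left, x.right = x.right, y
    update_height(y)
    update_height(x)
    return x

def rotate_left(x):
    y = x.right
    x.right, y.left = y.left, x
    update_height(x)
    update_height(y)
    return y

def insert(node, key):
    # "Capture" phase: walk down to the insertion point.
    if node is None:
        return AVLNode(key)
    if key < node.key:
        node.left = insert(node.left, key)
    else:
        node.right = insert(node.right, key)
    # "Bubble" phase: O(1) height update and rebalance per ancestor.
    update_height(node)
    b = balance(node)
    if b > 1:                            # left-heavy
        if key > node.left.key:          # left-right case
            node.left = rotate_left(node.left)
        return rotate_right(node)
    if b < -1:                           # right-heavy
        if key < node.right.key:         # right-left case
            node.right = rotate_right(node.right)
        return rotate_left(node)
    return node
```

Both phases visit each of the O(log n) ancestors once and do constant work at each, which is exactly the 2 * log n bound discussed above.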

convert non balanced binary search tree to red black tree

Is it possible to convert an unbalanced BST (of size n and height h) to an RBT with time complexity O(n) and space complexity O(h)?
If you know the number of nodes beforehand, this is doable: knowing the number of nodes tells you the height of the target RB tree (regardless of the original tree's height).
Therefore you can simply 'peel' nodes off the original tree one by one, starting from the minimum, and place them in the correct slot of the target tree. The easiest way to do this ends up with every full row black and the (possibly empty) partial bottom row red. (That is, a tree with 7 nodes will be entirely black, but a tree with 6 nodes will have its first two rows black and 3 red nodes in the bottom row.)
This will take O(n) time - to visit each node in the original tree - and O(h) space because you will need to keep track of some bookkeeping depending on where you are in the process.
And note this will only work if you know the number of nodes in the original tree, as it depends on knowing which nodes will be in the bottom row of the produced tree.
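A sketch of the peel-and-place idea under the stated assumptions (all names here are mine): the original tree's keys are streamed in sorted order by a recursive generator, which costs O(h) stack space, while the target complete tree is built shape-first, coloring the possibly partial bottom row red and everything else black. The build recursion adds only O(log n) <= O(h) extra space.

```python
# Convert an arbitrary BST to a red-black tree in O(n) time and
# O(h) auxiliary space, given the node count n.

class RBNode:
    def __init__(self, key, color):
        self.key = key
        self.color = color            # 'B' or 'R'
        self.left = None
        self.right = None

def inorder_keys(node):
    # Recursive in-order generator: O(h) stack frames at any moment.
    if node is not None:
        yield from inorder_keys(node.left)
        yield node.key
        yield from inorder_keys(node.right)

def _build(keys, n, depth, red_depth):
    """Build a complete BST of n nodes from the sorted stream `keys`,
    coloring nodes on the bottom row (depth == red_depth) red."""
    if n == 0:
        return None
    h = (n + 1).bit_length() - 1          # largest h with 2^h - 1 <= n
    bottom = n - (2 ** h - 1)             # nodes on the partial bottom row
    left_n = (2 ** (h - 1) - 1) + min(bottom, 2 ** (h - 1))
    left = _build(keys, left_n, depth + 1, red_depth)
    node = RBNode(next(keys), 'R' if depth == red_depth else 'B')
    node.left = left
    node.right = _build(keys, n - 1 - left_n, depth + 1, red_depth)
    return node

def bst_to_rbt(root, n):
    h = (n + 1).bit_length() - 1
    # If the tree is perfect (no partial bottom row), color everything black.
    red_depth = h if n - (2 ** h - 1) > 0 else -1
    return _build(inorder_keys(root), n, 0, red_depth)
```

Since all full rows are black, every root-to-nil path has the same number of black nodes, and every red node (bottom row only) has a black parent, so the result satisfies the red-black invariants.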

How do you derive the time complexity of alpha-beta pruning?

I understand the basics of minimax and alpha-beta pruning. In all the literature, the best-case time complexity is given as O(b^(d/2)), where b = branching factor and d = depth of the tree, and the best case occurs when all the preferred nodes are expanded first.
In my example of the "best case", I have a binary tree of 4 levels, so out of the 16 terminal nodes, I need to expand at most 7 nodes. How does this relate to O(b^(d/2))?
I don't understand how they come to O(b^(d/2)).
O(b^(d/2)) corresponds to the best-case time complexity of alpha-beta pruning. Explanation:
With an (average or constant) branching factor of b, and a search depth of d plies, the maximum number of leaf node positions evaluated (when the move ordering is pessimal) is O(b*b*...*b) = O(b^d) – the same as a simple minimax search. If the move ordering for the search is optimal (meaning the best moves are always searched first), the number of leaf node positions evaluated is about O(b*1*b*1*...*b) for odd depth and O(b*1*b*1*...*1) for even depth, or O(b^(d/2)). In the latter case, where the ply of a search is even, the effective branching factor is reduced to its square root, or, equivalently, the search can go twice as deep with the same amount of computation.
The explanation of b*1*b*1*... is that all the first player's moves must be studied to find the best one, but for each, only the best second player's move is needed to refute all but the first (and best) first player move – alpha-beta ensures no other second player moves need be considered.
Put simply, with good ordering you effectively "skip" every other level of the tree.
Big-O describes the limiting behavior of a function as the argument tends towards a particular value or infinity, so plugging small values of b and d into O(b^(d/2)) and expecting exact node counts doesn't really make sense.
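To see the b*1*b*1*... pattern concretely, here is a small counting experiment (the tree and its leaf values are my own construction): a binary tree of depth 4 with the best move searched first at every level, on which alpha-beta evaluates exactly b^⌈d/2⌉ + b^⌊d/2⌋ - 1 = 2^2 + 2^2 - 1 = 7 of the 16 leaves, matching the number the asker observed.

```python
# Count how many leaves alpha-beta actually evaluates on a b=2, d=4
# game tree with optimal move ordering.

import math

LEAVES_SEEN = 0

def alphabeta(node, alpha, beta, maximizing):
    global LEAVES_SEEN
    if not isinstance(node, list):        # leaf: a plain number
        LEAVES_SEEN += 1
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:             # beta cutoff
                break
        return value
    else:
        value = math.inf
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:             # alpha cutoff
                break
        return value

# Best moves are listed first at every level; 0s mark leaves that
# end up pruned and are never looked at.
tree = [
    [[[8, 9], [6, 0]], [[9, 8], [0, 0]]],   # best root move, minimax value 8
    [[[5, 0], [4, 0]], [[0, 0], [0, 0]]],   # inferior root move, refuted quickly
]
best = alphabeta(tree, -math.inf, math.inf, True)
print(best, LEAVES_SEEN)   # 8 7
```

Against the formula: with pessimal ordering all 16 = b^d leaves would be evaluated, while with optimal ordering only 7 ≈ b^(d/2) = 4 (up to the constant and the exact b^⌈d/2⌉ + b^⌊d/2⌋ - 1 count) are needed.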