Can anyone explain the time complexity of the below using the master method?
int sum(Node node) {
    if (node == null) {
        return 0; // base case: an empty subtree contributes 0
    }
    // recurse into both subtrees and add this node's value
    return sum(node.left) + node.value + sum(node.right);
}
I know a's value is 2, but it's hard for me to identify b and d. Is b = 1, and is the non-recursive work O(n)? In that case, can anyone explain how b and d should be identified?
Well, to make the recurrence relation less complicated, we can assume a balanced binary tree with 2^i nodes, so we obtain the recurrence T(n) = 2T(n/2) + 1 (ignoring the base case).
From the above, we find a = 2, b = 2, and c = 0, since the non-recursive work is 1 = O(1) = O(n^0). Applying the master method: log_b(a) = log_2(2) = 1 and c = 0 < 1, so case 1 applies and we get T(n) = Θ(n^(log_2 2)) = Θ(n), i.e., O(n).
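To see concretely why case 1 lands on Θ(n) here, the recurrence can also be unrolled by hand (a sketch, assuming n = 2^i as above):

\[
T(n) = 2T(n/2) + 1 = 4T(n/4) + 3 = \cdots = 2^i\,T(1) + (2^i - 1) = n\,T(1) + n - 1 = \Theta(n)
\]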
This is a function that sums up all nodes in a binary tree: it first goes down from the root to the leaves and then comes back up (stack unwinding). So the time complexity is O(N), as it needs to visit each node exactly once.
Related
I got confused calculating the worst-case complexity of a naive DFS from one node to another in a DAG.
For example, in the following DAG,
Node A acts as the start node, and if we always pick the last node (in this case, Node D) as the end node, we use DFS to enumerate all paths from A to D.
In this case, DFS goes through the following paths:
path 1st: A -> B -> C -> D
path 2nd: A -> B -> D
path 3rd: A -> C -> D
path 4th: A -> D
The computational cost is 8, because it takes 3 edge traversals for the first path, 2 for the second, 2 for the third, and 1 for the last one.
Now suppose we expand this graph by adding more nodes in the same manner.
Assuming we have N nodes, what is the complexity in terms of N?
The way I count the total number of paths is: every time we add a new node to an N-node DAG to replace Node A as the new start node, we add N new edges, because we need edges from the new start node to every node that already exists.
If we let P(N) be the total number of paths, we have

P(N) = P(N-1) + P(N-2) + P(N-3) + ... + P(1)

and since the same formula gives P(N-1) = P(N-2) + P(N-3) + ... + P(1), the sum collapses to

P(N) = 2 * P(N-1)

so

P(N) = O(2^N)

However, not every path costs only O(1) to traverse; for example, the longest path goes through all the nodes, so that single path costs O(N) by itself, and the actual cost is higher than O(2^N).
So what could that be then?
Current Algorithm and Time Complexity
As far as I understand, your current algorithm follows the steps below:
1) Start from a start node specified by us.
2) At each node, push the adjacent vertices onto a stack. Then pop the top element from the stack and repeat step 2 until the stack is empty.
3) Terminate when the stack is empty.
Our graph is a DAG, therefore there won't be any cycles in the graph, and the algorithm is guaranteed to terminate eventually.
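To make the naive traversal concrete, here is a minimal recursive sketch in Java (the adjacency-list representation and all names are my own assumptions, not from the question). It counts one unit of work per edge traversal:

import java.util.List;

class NaiveDfs {
    static long steps = 0; // one unit of work per edge traversal

    static void dfs(List<List<Integer>> adj, int u) {
        for (int v : adj.get(u)) {
            steps++;
            dfs(adj, v); // no visited check: shared suffixes are re-explored per path
        }
    }

    public static void main(String[] args) {
        // the 4-node DAG from the question: A=0, B=1, C=2, D=3
        List<List<Integer>> adj = List.of(
            List.of(1, 2, 3), // A -> B, C, D
            List.of(2, 3),    // B -> C, D
            List.of(3),       // C -> D
            List.of()         // D has no successors
        );
        dfs(adj, 0);
        System.out.println(steps); // prints 7 = 2^(4-1) - 1, matching the closed form below
    }
}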
For further examination, you mentioned expanding the graph in the same manner. This means that whenever we add the i-th node to the graph, we have to create an edge from it to each node already present, which means we insert i - 1 edges into the graph.
Let's say you start from the first node, and write T(k) for the cost of the naive DFS launched from node k (with T(n) = 0, since the last node has no outgoing edges). Each edge traversal costs 1, so:

T(1) = [1+T(2)] + [1+T(3)] + ... + [1+T(n-2)] + [1+T(n-1)] + [1+T(n)]

Working backwards from the end:

T(n) = 0
T(n-1) = 1 + T(n) = 1
T(n-2) = [1+T(n-1)] + [1+T(n)] = 2 + 1 = 3
T(n-3) = [1+T(n-2)] + [1+T(n-1)] + [1+T(n)] = 4 + 2 + 1 = 7

(observe the pattern: T(n-k) = 2^k - 1, so each bracketed term [1+T(n-k)] equals 2^k)

T(1) = 2^(n-2) + 2^(n-3) + ... + 2^1 + 2^0
T(1) = 2^(n-1) - 1

Eventually it turns out that the time complexity is O(2^N) for this algorithm. That is pretty bad, and it's because this approach is extraordinarily brute force. Let's come up with an optimized algorithm that stores the information of visited vertices.
Note:
I spotted in the picture that there are two edges from A to B and from B to C, but not from C to D. I couldn't figure out the pattern there, but if that's really the case, it means even more operations than 2^N are required. Either way, naive DFS is a bad algorithm; I strongly recommend you implement the one below.
Optimized Algorithm and Time Complexity
To make your algorithm optimized, you can follow the steps below:
0) Create a boolean array to mark each node as visited. Initialize every index to false.
1) Start from a start node specified by us.
2) At each node, push the adjacent vertices that are not yet marked as visited onto a stack and mark them as visited. Then pop the top element from the stack and repeat step 2 until the stack is empty.
3) Terminate when the stack is empty.
This time, by keeping an array that marks visited nodes, we avoid visiting the same node more than once. This algorithm has the classic DFS time complexity of O(N + E), where N is the number of nodes and E is the number of edges. In your case, the graph is essentially complete (viewed as undirected), so E ~ N^2, meaning your time complexity will be O(N^2).
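And here is the same traversal with the visited array, matching the steps above (again a sketch; the names and the iterative stack formulation are my own):

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

class MarkedDfs {
    public static void main(String[] args) {
        // same 4-node DAG: A=0, B=1, C=2, D=3
        List<List<Integer>> adj = List.of(
            List.of(1, 2, 3),
            List.of(2, 3),
            List.of(3),
            List.of()
        );
        boolean[] visited = new boolean[adj.size()]; // step 0: all false initially
        Deque<Integer> stack = new ArrayDeque<>();
        stack.push(0); // step 1: the start node
        visited[0] = true;
        while (!stack.isEmpty()) { // steps 2-3: expand until the stack is empty
            int u = stack.pop();
            System.out.println("visiting node " + u);
            for (int v : adj.get(u)) {
                if (!visited[v]) {     // each node enters the stack at most once,
                    visited[v] = true; // so total work is O(N + E)
                    stack.push(v);
                }
            }
        }
    }
}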
I think I figured it out.
If we consider that each new node adds a new edge from itself to every node already in the DAG, and traversing each of those edges costs 1, then we have (using C for the computational cost):

C(N) = [1 + C(N-1)] + [1 + C(N-2)] + [1 + C(N-3)] + ... + [1 + C(1)]

Writing out the same sum for C(N-1) and subtracting, everything cancels except one extra copy of C(N-1) and one extra edge, so:

C(N) = 2 * C(N-1) + 1
     = 2^2 * C(N-2) + 2 + 1
     = 2^3 * C(N-3) + 2^2 + 2 + 1
     = 2^(N-1) * C(1) + 2^(N-2) + ... + 2 + 1

At this moment this becomes the sum of a geometric progression with ratio 2, so the highest-order term is of order 2^N.
Thus O(2^N) is the answer.
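A quick sanity check of that recurrence in Java (my own throwaway sketch, assuming C(1) = 0 since a single node has no edges to traverse): it computes C(N) directly from the sum and compares it with the closed form 2^(N-1) - 1 that the unrolling gives.

class CostCheck {
    public static void main(String[] args) {
        long[] c = new long[21];
        c[1] = 0; // a single node: nothing to traverse
        for (int n = 2; n <= 20; n++) {
            long sum = 0;
            for (int k = 1; k < n; k++) {
                sum += 1 + c[k]; // one edge traversal plus the cost of the sub-DFS
            }
            c[n] = sum;
            long closedForm = (1L << (n - 1)) - 1; // 2^(N-1) - 1
            System.out.println(n + ": " + c[n] + " == " + closedForm);
        }
    }
}

Both columns agree, and the doubling from one N to the next is exactly the O(2^N) growth derived above.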
I want to know the time complexity of the code attached.
I get O(n^2 log n), while my friends get O(n log n) and O(n^2).
someMethod() takes log n time.
Here is the code:
j = i**2;
for (k = 0; k < j; k++) {
    for (p = 0; p < j; p++) {
        x += p;
    }
    someMethod();
}
The question is not very clear about the variable N and the statement i**2.
i**2 gives a compilation error in Java.
Assuming someMethod() takes log N time (as mentioned in the question), and completely ignoring the value of N:
Let's call i**2 Z.
someMethod() runs Z times (once per iteration of the outer loop), and the time complexity of the method is log N, so that part becomes:

Z * log N    (expression A)

Now, x += p runs Z^2 times (the k loop times the p loop) and takes constant time to run. That makes the following expression:

(Z^2) * 1 = Z^2    (expression B)

The total run time is the sum of expression A and expression B, which brings us to:

O((Z * log N) + Z^2)

where Z = i**2, so the final expression will be O(((i**2) * log N) + ((i**2)^2)).
If we can assume i**2 means i^2, the expression becomes:

O((i^2 * log N) + i^4)
Considering only the highest-order term, like we keep n^2 in n^2 + 2n + 5, the complexity can be expressed as:
i^4
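If it helps to see those counts concretely, here is a short Java sketch that instruments the loops for one concrete value of i (the stub someMethod and the value i = 5 are my own assumptions):

class LoopCount {
    static long methodCalls = 0, innerSteps = 0;

    static void someMethod() { methodCalls++; } // stub standing in for the log N work

    public static void main(String[] args) {
        int i = 5;
        long j = (long) i * i; // j = i^2, i.e., Z
        long x = 0;
        for (long k = 0; k < j; k++) {
            for (long p = 0; p < j; p++) {
                x += p;
                innerSteps++;
            }
            someMethod();
        }
        // expect Z = 25 calls to someMethod and Z^2 = 625 inner steps
        System.out.println(methodCalls + " calls, " + innerSteps + " inner steps");
    }
}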
Based on the picture, the complexity is O(log N * I^2 + I^4).
We cannot give a complexity class with one variable because the picture does not explain the relationship between N and I. They must be treated as separate variables.
And likewise, we cannot eliminate the log N * I^2 term, because the N variable will dominate the I variable in some regions of the N x I space.
If we treat N as a constant, then the complexity class reduces to O(I^4).
We get the same if we treat N as being the same thing as I; i.e., if there is a typo in the question.
(I think there is a mistake in the way the question was set/phrased. If not, this is a trick question designed to see if you really understand the mathematical principles behind complexity involving multiple independent variables.)
Assume that T is a binary search tree with n nodes and height h. Each node x of T stores a
real number x.Key. Give the worst-case time complexity of the following algorithm Func1(T.root). You
need to justify your answer.
Func1(x)
    if (x == NIL) return 0;
    s1 <- Func1(x.left());
    if (s1 < 100) then
        s2 <- Func1(x.right());
    else
        s2 <- 0;
    end
    s <- s1 + s2 + x.key();
    return (s);
x.left() and x.right() return the left and right child of node x.
x.key() returns the key stored at node x.
For the worst-case run time, I was thinking this would be O(height of tree), since it basically acts like the minimum() or maximum() binary search tree algorithms. However, it's recursive, so I'm slightly hesitant to actually write O(h) as the worst-case run time.
When I think about it, the worst case would be if the function took the if (s1 < 100) branch for every x.left, which would mean that every node is visited. Would that make the run time O(n)?
You're correct that the worst-case runtime of this function is Θ(n), which happens if that if statement always succeeds. In that case, the recursion visits each node: at every node it recursively visits the full left subtree, then recursively visits the full right subtree. It also does O(1) work per node, which is why this sums to O(n).
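For reference, here is that pseudocode as runnable Java (a sketch; the minimal Node class is my own stand-in):

class Func1Demo {
    static class Node {
        double key;
        Node left, right;
        Node(double key, Node left, Node right) {
            this.key = key; this.left = left; this.right = right;
        }
    }

    static double func1(Node x) {
        if (x == null) return 0;
        double s1 = func1(x.left);
        // the right subtree is explored only while the left sum stays below 100,
        // so in the worst case (all small keys) every node is visited: Theta(n)
        double s2 = (s1 < 100) ? func1(x.right) : 0;
        return s1 + s2 + x.key;
    }

    public static void main(String[] args) {
        // small keys, so the branch is always taken and all nodes are visited
        Node root = new Node(10, new Node(5, null, null), new Node(20, null, null));
        System.out.println(func1(root)); // 5 + 10 + 20 = 35.0
    }
}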
I need to write code that will find all pairs of consecutive numbers in a BST.
For example: take the BST T with key 9, T.left.key = 8, and T.right.key = 19. There is only one pair: (8, 9).
The naive solution I thought of is to do any traversal (pre, in, post) of the BST and, for each node, find its successor and predecessor; if one or both of them are consecutive with the node, print them. But the problem is that this will be O(n^2), because we have n nodes and for each of them we call a function that takes O(h), where in the worst case h ~ n.
The second solution is to copy all the elements to an array and find the consecutive numbers in the array. Here we use O(n) additional space, but the runtime is better: O(n).
Can you help me find an efficient algorithm for this? I'm trying to think of an algorithm that doesn't use additional space and whose runtime is better than O(n^2).
*The required output is the number of such pairs (no need to print the pairs).
*Any 2 consecutive integers in the BST form a pair.
*The BST contains only integers.
Thank you!
Why don't you just do an in-order traversal and count pairs on the fly? You'll need a global variable to keep track of the last number visited, and you'll need to initialize it to something that is not one less than the first number (e.g. the value at the root of the tree). I mean:
// Last item
int last;

// Recursive function for in-order traversal
int countPairs(whichever_type treeRoot)
{
    int r = 0; // Return value
    if (treeRoot.leftChild != null)
        r = r + countPairs(treeRoot.leftChild);
    if (treeRoot.value == last + 1)
        r = r + 1;
    last = treeRoot.value;
    if (treeRoot.rightChild != null)
        r = r + countPairs(treeRoot.rightChild);
    return r; // Edit 2016-03-02: This line was missing
}

// Main function
main(whichever_type treeRoot)
{
    int r;
    if (treeRoot == null)
        r = 0;
    else
    {
        last = treeRoot.value; // to make sure this is not one less than the lowest element
        r = countPairs(treeRoot);
    }
    // Done. Now the variable r contains the result
}
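Here is the same idea as compilable Java, run on the example tree from the question (a sketch; the minimal Node class is my own):

class ConsecutivePairs {
    static class Node {
        int value;
        Node leftChild, rightChild;
        Node(int v, Node l, Node r) { value = v; leftChild = l; rightChild = r; }
    }

    static int last;

    static int countPairs(Node t) {
        int r = 0;
        if (t.leftChild != null) r += countPairs(t.leftChild);
        if (t.value == last + 1) r++; // the in-order predecessor is exactly one less
        last = t.value;
        if (t.rightChild != null) r += countPairs(t.rightChild);
        return r;
    }

    public static void main(String[] args) {
        // BST from the question: 9 at the root, 8 on the left, 19 on the right
        Node root = new Node(9, new Node(8, null, null), new Node(19, null, null));
        last = root.value; // safe initialization, as explained above
        System.out.println(countPairs(root)); // prints 1, for the pair (8, 9)
    }
}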
I just wanted to make sure I'm going in the right direction. I want to find the max value of an array by recursively splitting it and finding the max of each half. Because I split the array in two, I get 2*T(n/2), and because I have to make one comparison at the end between the two halves, I add 1.
So my recurrence relation would be like this:

T(n) = 2*T(n/2) + 1, when n >= 2
T(n) = T(1), when n = 1

and therefore my complexity would be Theta(n lg n)?
The recurrence you composed seems about right, but your analysis isn't perfect.

T(n) = 2*T(n/2) + 1 = 2*(2*T(n/4) + 1) + 1 = 4*T(n/4) + 3 = ...

After the i-th expansion you get:

T_i(n) = 2^i * T(n/2^i) + (2^i - 1)

Now you want to know for which i the term n/2^i equals 1 (or just about any constant, if you like), so that you reach the end condition of n = 1. That is the solution of n/2^I = 1, i.e., I = log2(n). Plug it into the equation for T_i and you get:

T_I(n) = 2^(log2 n) * T(n/2^(log2 n)) + (2^(log2 n) - 1) = n*T(1) + n - 1

and you get T(n) = O(n) (just like @bdares said): the per-level costs add up to a linear total, not to n*log(n).
No, no... you are doing O(1) work in each recursive call.
How many calls are there?
There are N leaves, so you know it's at least O(N).
How deep does the recursion that finds the absolute maximum go? That's O(log(N)) levels.
Add them together, don't multiply. O(N + log(N)) = O(N) is your time complexity.
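For completeness, here is a small Java sketch of the divide-and-conquer max being analyzed (my own minimal rendering of the algorithm described in the question):

class RecursiveMax {
    // maximum of a[lo..hi], splitting the range in half
    static int max(int[] a, int lo, int hi) {
        if (lo == hi) return a[lo]; // base case: one element, T(1)
        int mid = (lo + hi) / 2;
        int leftMax = max(a, lo, mid);      // T(n/2)
        int rightMax = max(a, mid + 1, hi); // T(n/2)
        return Math.max(leftMax, rightMax); // the "+1" comparison
    }

    public static void main(String[] args) {
        int[] a = {3, 41, 7, 26, 5, 19};
        System.out.println(max(a, 0, a.length - 1)); // prints 41
    }
}

The recursion tree has N leaves and N - 1 internal comparisons, which is the linear total both answers arrive at.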