Time complexity of BFS algorithm if sorting the queue is necessary

In BFS, you typically have something like
while q:
    popped_node = q.popleft()
    res.append(do_work(popped_node))
    for child in popped_node.children:
        q.append(child)
return res
But say for some reason you needed to iterate over the children in order based on some key, so you would have
while q:
    popped_node = q.popleft()
    res.append(do_work(popped_node))
    for child in sorted(popped_node.children, key=lambda x: x._id):
        q.append(child)
return res
Typically the time complexity of BFS is O(N), where N is the number of nodes. How does adding this sorting step affect the time complexity?

Let's say there are V vertices in a graph G with E edges.
Looking at your code:
while q:                                  # this loop runs V times
    popped_node = q.popleft()
    res.append(do_work(popped_node))      # assuming this is O(1)
    # this loop runs E_i times and costs O(E_i + E_i lg E_i), where E_i is the
    # number of edges incident to the current vertex; E_i lg E_i is the upper
    # bound for a comparison sort
    for child in sorted(popped_node.children, key=lambda x: x._id):
        q.append(child)
return res
The time complexity is as follows:
= O(1) + O(E_1) + O(E_1 lg E_1) + O(1) + O(E_2) + O(E_2 lg E_2) + ... + O(1) + O(E_V) + O(E_V lg E_V)
= ∑_{i=1}^{V} (O(1) + O(E_i)) + ∑_{i=1}^{V} O(E_i lg E_i)
= O(V + E) + ∑_{i=1}^{V} O(E_i lg E_i)
The extra term ∑_{i=1}^{V} O(E_i lg E_i) in the time complexity is due to sorting.
Assuming there is at most a single edge between any two vertices, in the worst case E_i → V, i.e. a fully connected graph.
The time complexity can then be given as:
= O(V + E) + ∑_{i=1}^{V} O(E_i lg E_i)
= O(V + E) + ∑_{i=1}^{V} O(V lg V)
= O(V + E) + O(V^2 lg V)
Two points to mention here:
1. If the sort uses a sorting algorithm other than a comparison sort, like counting sort or bucket sort with linear complexity, then ∑_{i=1}^{V} O(E_i lg E_i) can be replaced by ∑_{i=1}^{V} O(E_i) = O(E), and the whole expression evaluates to O(V + E).
2. Although this is a good tight bound for an adjacency matrix (a dense graph), for an adjacency list (or a sparse graph) it is a loose upper bound.
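To make this concrete, here is a minimal runnable sketch of the sorted variant. The Node class and do_work are assumptions on my part (the question does not define them), and this version assumes a tree; for a graph with cycles you would also need a visited set.

from collections import deque

class Node:
    def __init__(self, _id, children=None):
        self._id = _id
        self.children = children or []

def do_work(node):
    return node._id          # stand-in for the O(1) per-node work

def bfs_sorted(root):
    res = []
    q = deque([root])
    while q:
        popped_node = q.popleft()
        res.append(do_work(popped_node))
        # the sort costs O(E_i lg E_i) and dominates the child loop
        for child in sorted(popped_node.children, key=lambda x: x._id):
            q.append(child)
    return res

# Children are enqueued in _id order regardless of insertion order:
root = Node(0, [Node(2), Node(1, [Node(4), Node(3)])])
print(bfs_sorted(root))      # [0, 1, 2, 3, 4]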

Related

What would be the time complexity of a binary search that makes a call to another helper function?

The helper retrieves the value to be compared in the search function; here mem is an object.
def get_val(mem, c):
    if c == "n":
        return mem.get_name()
    elif c == "z":
        return mem.get_zip()
In the function below, the helper function above is called in each iteration. Will this impact the time complexity of the binary search, or will it still be O(log n)?
def bin_search(array, c, s):
    first = 0
    last = len(array) - 1
    while first <= last:
        mid = (first + last) // 2
        val = get_val(array[mid], c)   # `criteria` in the original is undefined; `c` is meant
        if val == s:
            return array[mid]
        elif s < val:
            last = mid - 1
        else:
            first = mid + 1
    return None
Since you are calling get_val() once per iteration of your binary search, the total time complexity should be
O(log n * f(x)),
where f(x) is the time complexity of get_val(). If this is constant (does not depend on the input, such as the contents of array), then indeed your total time complexity is still O(log n).
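A minimal usage sketch, combined with get_val and bin_search above; the Member class and its data are hypothetical (the question only says mem is an object), but it shows the constant-time case where the search stays O(log n):

class Member:
    def __init__(self, name, zip_code):
        self._name = name
        self._zip = zip_code
    def get_name(self):
        return self._name     # O(1), so f(x) = O(1)
    def get_zip(self):
        return self._zip      # O(1) as well

# array must be sorted by the same key the search compares on:
members = sorted([Member("ann", "30301"), Member("bob", "10001"),
                  Member("cat", "60601")], key=lambda m: m.get_name())
hit = bin_search(members, "n", "bob")
print(hit.get_zip() if hit else None)   # 10001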

What's the time complexity of Dijkstra's Algorithm

Dijkstra((V, E)):
    S = {}                                              // O(1)
    for each vertex v ∈ V:                              // O(V)
        d[v] = ∞                                        // O(1)
    d[source] = 0                                       // O(1)
    while S != V:                                       // O(V)
        v = non-visited vertex with the smallest d[v]   // O(V)
        for each edge (v, u):                           // O(E) in total over all iterations
            if u ∉ S and d[v] + w(v, u) < d[u]:
                d[u] = d[v] + w(v, u)
        S = S ∪ {v}
This question may be a duplicate of some existing posts:
Understanding Time complexity calculation for Dijkstra Algorithm
Complexity Of Dijkstra's algorithm
Complexity in Dijkstras algorithm
I read them, and even some posts on Quora, but I still cannot understand. I put some comments in the pseudocode and tried to work it out, but I am really confused about why it is O(E log V).
The "non visited vertex with the smallest d[v]" is actually O(1) if you use a min heap and insertion in the min heap is O(log V).
Therefore the complexity, as you correctly mentioned for the other loops, is:
O((V log V) + (E log V)) = O(E log V) // assuming E > V, which is reasonable
More precisely, it is O((V log V) + (E log V)) = O(log V * (V + E)) for general graphs. You wouldn't just assume that the graph is dense, i.e. |E| = O(|V|^2), since most graphs in applications are actually sparse, i.e. |E| = O(|V|).
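To illustrate where the O(E log V) comes from, here is a hedged Python sketch using heapq with lazy deletion (one common way to implement the pseudocode above; the adjacency-list format is an assumption): every edge can push at most one heap entry, and each push or pop costs O(log V).

import heapq

def dijkstra(graph, source):
    # graph: {vertex: [(neighbor, weight), ...]} adjacency list (assumed format)
    dist = {v: float("inf") for v in graph}
    dist[source] = 0
    heap = [(0, source)]                # (distance, vertex)
    visited = set()
    while heap:
        d, v = heapq.heappop(heap)      # O(log V) per pop
        if v in visited:
            continue                    # stale entry, skip it
        visited.add(v)
        for u, w in graph[v]:
            if u not in visited and d + w < dist[u]:
                dist[u] = d + w
                heapq.heappush(heap, (dist[u], u))   # O(log V) per push
    return dist

g = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}
print(dijkstra(g, "a"))    # {'a': 0, 'b': 1, 'c': 3}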

Time complexity for all Fibonacci numbers from 0 to n

I was calculating the time complexity of this code, which prints all Fibonacci numbers from 0 to n. According to my calculation, the fib() method takes O(2^n), and since it is called n times (once per value of i), the total came out to O(n*2^n). However, the book says it is O(2^n). Can anyone explain why the time complexity here is O(2^n)?
Here is the code:
void allFib(int n) {
    for (int i = 0; i < n; i++) {
        System.out.println(i + ": " + fib(i));
    }
}

int fib(int n) {
    if (n <= 0) return 0;
    else if (n == 1) return 1;
    return fib(n - 1) + fib(n - 2);
}
I figured out my own way to understand the book's solution; hope it helps those who are still struggling.
Imagine we now call allFib(n).
Since we have a for loop from 0 to n, the following calls will be made:
i = 0, call fib(0)
i = 1, call fib(1)
i = 2, call fib(2)
...
i = n-1, call fib(n-1)
As discussed before, fib(i) takes O(2^i) time, i.e. roughly 2^i steps.
Therefore,
i = 0, call fib(0) takes 2^0 steps
i = 1, call fib(1) takes 2^1 steps
i = 2, call fib(2) takes 2^2 steps
...
i = n-1, call fib(n-1) takes 2^(n-1) steps
Thus, the runtime of allFib(n) will be
2^0 + 2^1 + 2^2 + ... + 2^(n-1).
By the sum-of-powers-of-2 formula, this equals 2^(n-1+1) - 1 = 2^n - 1.
Thus it is O(2^n).
I finally got my answer from my professor and I'll post it here:
According to him: you should not simply look at the for loop iterating from 0 to n; you must find the actual computations by counting the steps.
fib(1) takes 2^1 steps
fib(2) takes 2^2 steps
fib(3) takes 2^3 steps
...
fib(n) takes 2^n steps
Now adding these:
2^1 + 2^2 + 2^3 + ... + 2^n = 2^(n+1) - 2
and, ignoring the constant factor, this is 2^n; hence the time complexity is O(2^n).
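As a quick empirical check (my own sketch, not part of either answer), counting the recursive calls shows that the total work of allFib(n) stays within a small constant factor of the single largest fib call, consistent with O(2^n) rather than O(n*2^n):

def fib_count(n):
    # returns (fib(n), number of calls made), mirroring the book's fib
    if n <= 0:
        return 0, 1
    if n == 1:
        return 1, 1
    a, ca = fib_count(n - 1)
    b, cb = fib_count(n - 2)
    return a + b, ca + cb + 1

for n in (5, 10, 15, 20):
    single = fib_count(n)[1]                        # calls for fib(n) alone
    total = sum(fib_count(i)[1] for i in range(n))  # calls made by allFib(n)
    print(n, single, total, round(total / single, 2))   # the ratio stays bounded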

complexity for recursive functions (Big O notation)

I have two functions which I would like to determine the complexity for.
For the first one, I just need to know whether my solution is correct; for the second one, because of the two recursive calls, I am struggling to find the solution. If possible, it would be good to have the working out so that I can learn how it's done.
First:
def sum(list):
    assert len(list) > 0
    if len(list) == 1:
        return list[0]
    else:
        return sum(list[0:-1]) + list[-1]
Attempted solution:
T(0) = 4
T(n) = T(n-1) + 1 + c          -- true for all n > 0
     = T(n-2) + 2 + 2c
     = T(n-k) + k + kc         -- (n-k = 0 implies k = n)
T(n) = T(0) + n + nc           -- (T(0) is nothing but 4)
     = 4 + n(1 + c)
Complexity = O(n)
Second:
def binSum(list):
    if len(list) == 1:
        return list[0]
    else:
        return binSum(list[:len(list)//2]) + binSum(list[len(list)//2:])
Any help would be greatly appreciated.
Regards
For the first case, you can model the time complexity with the recurrence T(n) = T(n-1) + O(1) and T(0) = O(1), which obviously solves to T(n) = O(n).
Here's a more direct and more formal proof by induction. Choose an absolute constant C > 0 large enough that T(0) <= C and the O(1) per-step cost D satisfies D <= C. The base case is then T(0) <= C*(0+1). For the inductive step, suppose T(n) <= C*(n+1); then T(n+1) <= D + T(n) <= C + C*(n+1) = C*(n+2), completing the induction, so T(n) = O(n).
For the second case, you can model the time complexity with T(n) = T(n/2) + T(n/2) + O(1) = 2T(n/2) + O(1) for n > 1 and T(1) = O(1). This solves to T(n) = O(n) by the master theorem (a = 2, b = 2, f(n) = O(1), so the n^(log_2 2) = n term dominates).
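To see the O(n) bound concretely, here is a small instrumented sketch (my own illustration, counting calls under the same model as the recurrence above): a recursion tree with n leaves has exactly 2n - 1 nodes.

def bin_sum_count(lst):
    # returns (sum, number of calls), mirroring binSum above
    if len(lst) == 1:
        return lst[0], 1
    mid = len(lst) // 2
    left, cl = bin_sum_count(lst[:mid])
    right, cr = bin_sum_count(lst[mid:])
    return left + right, cl + cr + 1

for n in (1, 4, 16, 64):
    total, calls = bin_sum_count(list(range(n)))
    print(n, calls)    # calls == 2n - 1, i.e. O(n) calls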

Calculate the time complexity of the following function

How do I calculate the time complexity of the following function?
int Compute(int n)
{
    int j = 0;
    int i = 0;
    while (i <= n)
    {
        i = 2*j + i + 1;
        j++;
    }
    return j - 1;
}
Now, I know that a simple loop has O(n) time complexity, but in this case i grows at a much faster rate. Working through it iteration by iteration, I found that after the m-th iteration, i = m^2. But I'm still confused about how to calculate the Big-O.
If you look at the values of i and j for a few iterations:
i = 1,  j = 1
i = 4,  j = 2
i = 9,  j = 3
i = 16, j = 4
and so on. By mathematical induction we can prove that i takes square values: if i = n^2 and j = n after n iterations, the next update gives i = 2*n + n^2 + 1 = (n+1)^2.
Since we loop only while i <= n, and i takes the values 1^2, 2^2, 3^2, ..., the loop stops at the first k with k^2 > n, i.e. once k exceeds sqrt(n). Hence the complexity is O(k), which means O(sqrt(n)).
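A quick empirical check (a Python port of the function above, added here as an illustration): the iteration count tracks floor(sqrt(n)) + 1.

import math

def compute(n):
    i = j = iterations = 0
    while i <= n:
        i = 2*j + i + 1          # i becomes 1, 4, 9, 16, ... (perfect squares)
        j += 1
        iterations += 1
    return j - 1, iterations

for n in (10, 100, 10_000, 1_000_000):
    result, iters = compute(n)
    print(n, iters, math.isqrt(n) + 1)   # iters == floor(sqrt(n)) + 1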