What's the time complexity of Dijkstra's Algorithm?

Dijkstra(G = (V, E), source):
    S = {}                                            // O(1)
    for each vertex v ∈ V:                            // O(V)
        d[v] = ∞                                      // O(1)
    d[source] = 0                                     // O(1)
    while S != V:                                     // O(V) iterations
        v = unvisited vertex with the smallest d[v]   // O(V) per scan
        for each edge (v, u):                         // O(E) in total over all iterations
            if u ∉ S and d[v] + w(v, u) < d[u]:
                d[u] = d[v] + w(v, u)
        S = S ∪ {v}
This question may be a duplicate of some existing posts:
Understanding Time complexity calculation for Dijkstra Algorithm
Complexity Of Dijkstra's algorithm
Complexity in Dijkstras algorithm
I read them, and even some posts on Quora, but I still cannot understand it. I put comments in the pseudocode and tried to work it out, but I am still confused about why it is O(E log V).

The "non visited vertex with the smallest d[v]" is actually O(1) if you use a min heap and insertion in the min heap is O(log V).
Therefore the complexity is as you correctly mentioned for the other loops:
O((V logV) + (E logV)) = O(E logV) // Assuming E > V which is reasonable

It is O((V logV) + (E logV)) = O((V + E) logV) for general graphs.
You shouldn't just assume that the graph is dense, i.e. |E| = O(|V|^2), since most graphs in applications are actually sparse, i.e. |E| = O(|V|).
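For reference, here is a minimal, runnable Python sketch of the heap-based variant being discussed (the graph representation, names, and lazy-deletion trick are my own illustration, not from the question):

import heapq

def dijkstra(graph, source):
    # graph: dict mapping vertex -> list of (neighbor, weight) pairs
    dist = {v: float('inf') for v in graph}
    dist[source] = 0
    visited = set()
    heap = [(0, source)]                # each push costs O(log V)
    while heap:
        d, v = heapq.heappop(heap)      # each extraction costs O(log V)
        if v in visited:
            continue                    # skip stale (lazily deleted) entries
        visited.add(v)
        for u, w in graph[v]:           # every edge relaxed at most once: O(E) pushes in total
            if d + w < dist[u]:
                dist[u] = d + w
                heapq.heappush(heap, (dist[u], u))
    return dist

g = {'a': [('b', 1), ('c', 4)], 'b': [('c', 2)], 'c': []}
print(dijkstra(g, 'a'))  # {'a': 0, 'b': 1, 'c': 3}

With O(E) heap operations at O(log V) each, plus O(V) extractions, this gives the O((V + E) log V) = O(E log V) bound from the answers.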


Time complexity of a loop iterating by i = i*i

i = 2;
while (i < n) {
    i = i * i;
    // O(1) complexity here
}
I'm new to time complexity and trying to figure out what this would be.
I know that if the update had been i = 2*i then it'd be O(log(n)), but I don't really know how to count the iterations when i is squared.
Intuitively it'd also be at most O(log(n)) because it "iterates faster", but I don't know how to explain this formally.
Any help would be appreciated, thanks in advance.
You can neatly translate this into the i = 2 * i case you mentioned by considering the mathematical log of i. Pseudocode for how this value changes:
log_i = log(2);
while (log_i < log_n) {
    log_i = 2 * log_i;
    // O(1) stuff here
}
It should be clear from this that the time complexity is O(log log n), assuming constant multiplication cost of course.
I think it's easier to approach this problem just using mathematics.
Consider your variable i. What sequence does it take? It seems to be
2, 4, 16, 256, ...
If you look at it for a bit, you notice that this sequence is just
2, 2^2, (2^2)^2, ((2^2)^2)^2, ... or
2^1, 2^2, 2^4, 2^8, ... which is
2^(2^0), 2^(2^1), 2^(2^2), 2^(2^3), ...
so the general term of this sequence is 2^(2^k) where k = 0, 1, 2, ...
Now, how many iterations does your loop make? It will be on the order of the k for which 2^(2^k) = n. So let us solve this equation:
2^(2^k) = n       (apply log2 to both sides)
2^k = log2(n)     (apply log2 to both sides again)
k = log2(log2(n))
In big-O notation, the base of the logarithm doesn't matter, so we say your algorithm has a time complexity of:
O(log log n).
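As a quick sanity check (a snippet of my own, not from the answers), you can count the iterations empirically and compare against log2(log2(n)):

import math

def count_iterations(n):
    i, steps = 2, 0
    while i < n:
        i = i * i
        steps += 1
    return steps

for n in (10**3, 10**9, 10**38):
    print(n, count_iterations(n), round(math.log2(math.log2(n)), 2))

The iteration count grows by one roughly each time n is squared, matching the O(log log n) bound.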
There are different ways to compute a power. The iterative approach (the one that you have) will have a time complexity of O(N), because we iterate once for each value up to N.
Another method is recursive, something like:
static long pow(int x, int power) {
    // base conditions
    if (power == 0)
        return 1L;
    if (power == 1)
        return x;
    return x * pow(x, power - 1);
}
This will also have a time complexity of O(N), because pow(x, n) is called recursively once for each value from n down to 1.
The last (and more efficient) method is divide and conquer, which improves the time complexity by calling pow(x, power / 2) only once:
static long pow(int x, int power) {
    // base conditions
    if (power == 0)
        return 1L;
    if (power == 1)
        return x;
    long res = pow(x, power / 2);  // computed once, reused below
    if (power % 2 == 0)            // power is even
        return res * res;
    else                           // power is odd
        return x * res * res;
}
The time complexity in this case is O(log N), because pow(x, n/2) is computed once and its result is reused, halving the problem at every step.
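The halving gives the recurrence T(n) = T(n/2) + O(1), which solves to O(log n). For comparison, here is a minimal iterative Python sketch of the same binary-exponentiation idea (my own transcription, not from the answer):

def fast_pow(x, power):
    # Binary exponentiation: O(log power) multiplications
    result = 1
    while power > 0:
        if power % 2 == 1:  # odd exponent: fold one factor of x into the result
            result *= x
        x *= x              # square the base
        power //= 2         # halve the exponent
    return result

print(fast_pow(3, 10))  # 59049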

Time complexity of BFS algorithm if sorting the queue is necessary

In BFS, you typically have something like
while q:
    popped_node = q.popleft()
    res.append(do_work(popped_node))
    for child in popped_node.children:
        q.append(child)
return res
But say for some reason you needed to iterate over the children in order based on some key, so you would have
while q:
    popped_node = q.popleft()
    res.append(do_work(popped_node))
    for child in sorted(popped_node.children, key=lambda x: x._id):
        q.append(child)
return res
Typically the time complexity of BFS is O(N), where N is the number of nodes. How does adding this sorting step affect the time complexity?
Let's say there are V vertices in a graph G with E edges.
Looking at your code:

# This loop runs V times
while q:
    popped_node = q.popleft()
    res.append(do_work(popped_node))  # assuming this is O(1)
    # This loop does O(E_i + E_i*lg(E_i)) work, where E_i is the number of edges
    # incident to the current vertex; E_i*lg(E_i) is the upper bound for the
    # comparison sort
    for child in sorted(popped_node.children, key=lambda x: x._id):
        q.append(child)
return res
The time complexity is as follows:
= O(1) + O(E_1) + O(E_1*lg(E_1)) + O(1) + O(E_2) + O(E_2*lg(E_2)) + ... + O(1) + O(E_V) + O(E_V*lg(E_V))
= Σ_{i=1..V} (O(1) + O(E_i)) + Σ_{i=1..V} O(E_i*lg(E_i))
= O(V + E) + Σ_{i=1..V} O(E_i*lg(E_i))
The extra term Σ_{i=1..V} O(E_i*lg(E_i)) in the time complexity is due to the sorting.
Assuming there is at most a single edge between any two vertices, in the worst case E_i -> V, i.e. a fully connected graph.
The time complexity can then be given as:
= O(V + E) + Σ_{i=1..V} O(E_i*lg(E_i))
= O(V + E) + Σ_{i=1..V} O(V*lg(V))
= O(V + E) + O(V^2*lg(V))
Two points to mention here:
If the sorting uses something other than a comparison sort, such as counting sort or bucket sort with linear complexity, then Σ_{i=1..V} O(E_i*lg(E_i)) can be replaced by Σ_{i=1..V} O(E_i) = O(E), and the whole thing evaluates to O(V + E).
Although this is a good tight bound for an adjacency matrix (dense graph), for an adjacency list (sparse graph) it is a loose upper bound.
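For concreteness, here is a self-contained, runnable version of the sorted BFS from the question (the Node class and the test tree are my own illustration):

from collections import deque

class Node:
    def __init__(self, _id, children=None):
        self._id = _id
        self.children = children or []

def bfs_sorted(root):
    res, q = [], deque([root])
    while q:
        popped_node = q.popleft()
        res.append(popped_node._id)  # stand-in for do_work
        # sorting the E_i children costs O(E_i*lg(E_i)) at this node
        for child in sorted(popped_node.children, key=lambda x: x._id):
            q.append(child)
    return res

root = Node(0, [Node(2), Node(1, [Node(4), Node(3)])])
print(bfs_sorted(root))  # [0, 1, 2, 3, 4]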

What would be the time complexity of a binary search that calls another helper function?

The helper retrieves the value to be compared in the search function; here mem is an object.
def get_val(mem, c):
    if c == "n":
        return mem.get_name()
    elif c == "z":
        return mem.get_zip()
In the function below, the helper function above is called in each iteration. Will this impact the time complexity of the binary search, or will it still be O(log n)?
def bin_search(array, c, s):
    first = 0
    last = len(array) - 1
    while first <= last:
        mid = (first + last) // 2
        val = get_val(array[mid], c)
        if val == s:
            return array[mid]
        elif s < val:
            last = mid - 1
        else:
            first = mid + 1
    return None
Since you are calling get_val() once per iteration of your binary search, the total time complexity should be
O(log n * f(x)),
where f(x) is the time complexity of get_val(). If this is constant (does not depend on the input, such as the contents of array), then indeed your total time complexity is still O(log n).
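To make this concrete, a quick test scaffold (the Member class and its getters are hypothetical stand-ins of my own for the asker's mem object; get_val() is O(1) here, so the search stays O(log n)):

class Member:
    def __init__(self, name, zip_code):
        self.name, self.zip_code = name, zip_code
    def get_name(self):  # O(1) accessor
        return self.name
    def get_zip(self):   # O(1) accessor
        return self.zip_code

members = sorted([Member("ann", "30301"), Member("bob", "10001"),
                  Member("cid", "60601")], key=lambda m: m.get_name())
hit = bin_search(members, "n", "bob")
print(hit.get_zip() if hit else None)  # 10001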

Time complexity of a function with function*function in a loop

Can you please help me find the complexity of the following function?
proc(int n)
{
    for (i = 0; i < n; i++)
    {
        x = g(n) + f(n);
        for (j = 0; j < n; j++)
        {
            y = h(j) * g(j);
        }
    }
    return x + y;
}
With f = O(n^2), g = O(n), h = Θ(log(n)).
The things I am not sure about:
What is the complexity of "y = h(j) * g(j)"? To me it is n*log(n).
Is there a difference in complexity if the loop body is "y = h(j) * g(j)" rather than just "h(j) * g(j)"?
Is it right that the complexity of "x = g(n) + f(n)" is n + n^2?
Thank you!
Complexity of the inner loop (sum of h*g)
Since h(j) = Θ(log(j)) and g(j) = O(j), the complexity of h(j)*g(j) is O(j*log(j)), that is, <= K*j*log(j) for some K > 0. Therefore the inner loop yields
K*(1*log(1) + 2*log(2) + ... + (n-1)*log(n-1)) <= K'*n^2*log(n)
for a well-chosen constant K', and the inner loop gives a complexity of O(n^2*log(n)).
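As a quick numerical check of this bound (a snippet of my own, not from the answer):

import math

n = 1000
s = sum(j * math.log(j) for j in range(1, n))
print(s <= n * n * math.log(n))  # True: the sum stays below n^2*log(n)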
Complexity of g + f
g = O(n) and f = O(n^2), so the complexity of f + g is O(n^2).
Overall complexity
A: The sum of n terms of O(n^2) for f+g gives O(n^3).
B: The sum of the terms j^2*log(j) for 0 <= j < n gives O(n^3*log(n)).
Therefore the complexity of your method is O(n^3*log(n)).
Big O of x = O(n^2)
Big O of y = O(n*log(n))
Now, to compute the Big O of the whole algorithm, we have to look at the innermost loop. In this case, the innermost loop contains y = h(j) * g(j).
The Big O can be computed starting from the lowest level and going up.
The Big O of x is added, not multiplied, because it is at the same level as the inner for loop:
Big O = O(n*log(n)) * O(n) * O(n) + O(n^2) * O(n)
It can be written as:
Big O = O((n*log(n) * n * n) + (n^2 * n))
Big O = O(n^3*log(n) + n^3)
Neglecting the smaller term gives us:
Big O = O(n^3*log(n))

What is the complexity of int multiplyRec(int m, int n){ if(n == 1) return m; return m + multiplyRec(m, n - 1); }

What is the time complexity of the following recursive function, and how is it derived?
int multiplyRec(int m, int n) {
    if (n == 1)
        return m;
    return m + multiplyRec(m, n - 1);
}
I suppose it's O(n), but not if the function is called with n < 1; in that case you'll get a stack overflow error.
If each call of a recursive algorithm takes O(m) space and the maximum depth of the recursion tree is n, then the space complexity of the recursive algorithm is O(n*m).
It is O(n), because here we are calculating T(n) = K + T(n-1), where K is a constant.
More precisely, T(n) = k1 + k2 + T(n-1); writing K = k1 + k2, we have T(n) = K + T(n-1).
By the substitution method we get T(n) = K*n (up to an additive constant).
Ignoring the constant factor K, the time complexity is O(n).
O(n).
The recurrence relation is T(n) = K + T(n-1),
where K is a constant term,
and the recursion unwinds in a linear manner.
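To see the linear recursion depth, and to guard against the n < 1 pitfall mentioned above, here is a minimal Python transcription (the guard is my own addition, not in the original code):

def multiply_rec(m, n):
    if n < 1:
        raise ValueError("n must be >= 1")  # prevents unbounded recursion
    if n == 1:                   # base case, reached after n - 1 recursive calls
        return m
    return m + multiply_rec(m, n - 1)  # T(n) = K + T(n-1)  =>  O(n)

print(multiply_rec(7, 4))  # 28, computed via 4 calls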