a)
sum = 0;
for (i = 1; i < 2*n; i++)
    for (j = 1; j < i*i; j++)
        for (k = 1; k < j; k++)
            if (j % i == 1)
                sum++;
b)
sum = 0;
for (i = 1; i < 2*n; i++)
    for (j = 1; j < i*i; j++)
        for (k = 1; k < j; k++)
            if (j % i)
                sum++;
I came across the two pseudocode snippets above while looking for algorithm analysis questions to practice on. The given answers for these two snippets are O(n^4) and O(n^5) respectively.
Note that the running time corresponds here to the number of times the operation
sum++ is executed.
How is it that the time complexities of the two algorithms differ by a factor of n when the only difference is the if statement testing for equality to 1? How would I go about counting operations to work out the big-O complexity for such a question?
Algorithm A
Let's call f(n) the number of operations aggregated at the level of the outer loop, g(i) the number for the first inner loop, and h(j) the number for the innermost loop.
We can see that
f(n) = sum(i=1; i < 2n; g(i))
g(i) = sum(j=1, j < i*i; h(j))
h(j) = sum(k=1; k < j; 1 if j%i = 1, else 0)
How many times is j%i = 1 while j varies from 1 to i*i? Exactly i times, for the following values of j:
j = 0*i + 1
j = 1*i + 1
j = 2*i + 1
...
j = (i-1)*i + 1
So:
h(j) = sum(k=1; k < j; 1 if j%i = 1, else 0)
= j - 1 if j%i = 1, and 0 otherwise
<= i*i, since j < i*i
=> g(i) = sum(j=1, j < i*i; h(j))
<= i * i*i // i non-zero terms, each at most i*i
= i^3
=> f(n) = sum(i=1; i < 2n; g(i))
<= sum(i=1; i < 2n; i^3)
<= sum(i=1; i < 2n; 8*n^3) // Here just cap every i^3 with (2n)^3 = 8*n^3
<= 16*n^4
=> f(n) = O(n^4)
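For what it's worth (my own arithmetic, not part of the original answer), you can even get the exact count for algorithm A: for a fixed i, the qualifying values j = 1, i+1, ..., (i-1)*i + 1 contribute j - 1 increments each, i.e. 0 + i + 2*i + ... + (i-1)*i = i^2 * (i-1) / 2, so the total is sum(i=1; i < 2n; i^2 * (i-1) / 2), which grows like (2n)^4 / 8 = 2*n^4 and is consistent with the O(n^4) bound.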
Algorithm B
(With the same naming conventions as algorithm A)
How many times does j%i evaluate to true? Every time j%i is different from 0. And how many times does that happen? We need to remove the values of j that are multiples of i, which are i, 2*i, ..., (i-1)*i. Over the range of integers 1 to i*i - 1 (since j < i*i), which has i*i - 1 numbers, this leaves i*i - 1 - (i-1) = i^2 - i values of j.
As a result,
h(j) = sum(k=1; k < j; 1 if j%i != 0, else 0)
= j - 1 if j%i != 0, and 0 otherwise
<= i*i, since j < i*i
=> g(i) = sum(j=1, j < i*i; h(j))
<= (i^2 - i) * i*i // about i^2 non-zero terms, each at most i*i
= i^4 // dropping the lower-order terms, which are dominated by i^4 as i -> +Inf
=> f(n) = sum(i=1; i < 2n; g(i))
<= sum(i=1; i < 2n; i^4)
<= sum(i=1; i < 2n; 16*n^4) // Here just cap every i^4 with (2n)^4 = 16*n^4
<= 32*n^5
=> f(n) = O(n^5)
Bottom line
The difference in complexity between algorithms A and B comes from the fact that:
You have i values of j for which j%i = 1
You have roughly i^2 values of j for which j%i != 0
In both cases each qualifying j contributes up to about i^2 increments from the innermost loop, so algorithm B does roughly i (that is, about n) times more work for each value of i.
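If you want to double-check the two bounds empirically, here is a minimal Python sketch (function names are mine); it brute-forces both snippets and divides the counts by n^4 and n^5, and the ratios should level off to constants as n grows:

```python
# Brute-force both snippets and compare the counts against n^4 and n^5.
def count_a(n):
    total = 0
    for i in range(1, 2 * n):
        for j in range(1, i * i):
            for k in range(1, j):
                if j % i == 1:
                    total += 1
    return total

def count_b(n):
    total = 0
    for i in range(1, 2 * n):
        for j in range(1, i * i):
            for k in range(1, j):
                if j % i:
                    total += 1
    return total

for n in (4, 8, 12):
    print(n, count_a(n) / n**4, count_b(n) / n**5)
```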
Related
res = 0
for i in range(1, n):
    j = i
    while j % 2 == 0:
        j = j / 2
        res = res + j
I understand that an upper bound is O(n log n); however, I'm wondering if it's possible to find a tighter bound. I'm stuck with the analysis.
Some ideas that may be helpful:
You could create a function g(n) that annotates your function f(n) to count how many operations occur when running f(n):
def f(n):
    res = 0
    for i in range(1, n):
        j = i
        while j % 2 == 0:
            j = j / 2
            res = res + j
    return res
def g(n):
    comparisons = 0
    operations = 0
    assignments = 0
    assignments += 1
    res = 0
    assignments += 1  # i = 1
    comparisons += 1  # i < n
    for i in range(1, n):
        assignments += 1
        j = i
        operations += 1
        comparisons += 1
        while j % 2 == 0:
            operations += 1
            assignments += 1
            j = j / 2
            operations += 1
            assignments += 1
            res = res + j
            operations += 1
            comparisons += 1
        operations += 1   # i + 1
        assignments += 1  # assign to i
        comparisons += 1  # i < n ?
    return operations + comparisons + assignments
For n = 1, the code runs without hitting any loops: assigning the value of res; assigning i as 1; comparing i to n and skipping the loop as a result.
For n > 1, you get into the for loop, and the for statement is all that is changing the loop variable, so the complexity of the rest of the code is at least O(n).
Once in the loop:
if i is odd, then you only assign j, perform the mod operation and compare to zero. That will be the case for half the values of i, so each run of the loop from 2 to n will (half the time) add a fixed number of a few operations (including the loop operations). So, that's still O(n), just with a larger constant.
if i is even, then we divide by 2 until it is odd. This is what we need to work out the impact of.
Based on my counting of the different operations, I get:
g_initial_setup = 3 (every time)
g_for_any_i = 6 (half the time, it is just this)
g_for_even_i = 6 for each time we divide by two (the other half of the time)
For the even values of i, half of them need to divide by two only once, half of the remainder need to divide by two again, and so on. So as n goes to infinity, the total number of halvings across all i is about n * sum(1/2^m) for m >= 1, and each halving of j costs the 6 operations counted above.
I would expect from this:
g(n) = 3 + (n * 6) + (n * 6) * sum( 1 / pow(2,m) for m between 1 and n )
Given that the infinite series sum(1/2^m) converges to 1, we simplify that to:
g(n) = 3 + 12n as n approaches infinity.
That implies that the algorithm is O(n). Huh. I did not expect that.
Let's try out the function g(n) from above, counting all the operations that are occurring as f(n) is computed.
g(1) = 3 operations
g(2) = 9
g(3) = 21
g(4) = 27
g(5) = 45
g(10) = 123
g(100) = 1167
g(1000) = 11943
g(10000) = 119943
g(100000) = 1199931
g(1000000) = 11999919
g(10000000) = 119999907
Okay, unless I've really made a serious error here, it's O(n).
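Another way to see the O(n) bound (my own framing, not from the counting above): for each i the while body runs exactly v(i) times, where v(i) is the number of times 2 divides i, and summing v(i) over i = 1..n-1 gives less than n, because at most n/2 values of i contribute a first halving, at most n/4 a second, and so on. A minimal sketch to check that:

```python
# Count the total number of halvings performed by f(n); it stays below n,
# so the while loop adds only a constant factor, not a log factor.
def total_halvings(n):
    total = 0
    for i in range(1, n):
        j = i
        while j % 2 == 0:
            j //= 2          # integer halving; same iteration count as j = j/2
            total += 1
    return total

for n in (10, 100, 1000, 10000, 100000):
    print(n, total_halvings(n))
```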
I am trying to find the time complexity from the lines of code below, but I am not really getting it.
int codeMT(int n) {
    int a = 1;
    int i = 1;
    while (i < n * n) {
        int j = 1;
        while (j < n * n * n) {
            j = j * 3;
            a = a + j + i;
        }
        i = i + 2;
    }
    return a;
}
In this case, this is my thought process.
Firstly, we need to start from the inner loop.
while (j < n * n * n) {
    j = j * 3;
    a = a + j + i;
}
In here, there is j = j * 3, which makes the inner loop logarithmic (log base 3), but the loop guard is n * n * n, which gives n^3. Therefore, this part of the code has log3(n^3) time complexity.
Secondly, the outer while loop has a loop guard of n * n, which makes it n^2.
Thus, O(n^2 * log3(n^3)).
Is my approach okay, or is there anything that I am missing or could improve in my thought process?
Thank you
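One way I sanity-check this kind of reasoning is to count the inner statement directly and compare it with what the analysis predicts, here (n^2 / 2) * ceil(log3(n^3)). A rough Python sketch of mine (not from any reference):

```python
import math

# Count how many times "a = a + j + i" would run in codeMT and compare
# with (n^2 / 2) * ceil(log base 3 of n^3).
def count_codeMT(n):
    count = 0
    i = 1
    while i < n * n:
        j = 1
        while j < n * n * n:
            j = j * 3
            count += 1        # stands in for a = a + j + i
        i = i + 2
    return count

for n in (10, 50, 100):
    predicted = ((n * n) // 2) * math.ceil(math.log(n ** 3, 3))
    print(n, count_codeMT(n), predicted)
```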
sort(S);
for i=0 to n-2 do
    a = S[i];
    start = i+1;
    end = n-1;
    while (start < end) do
        b = S[start];
        c = S[end];
        if (a+b+c == 0) then
            output a, b, c;
            start = start + 1;
            end = end - 1;
        else if (a+b+c > 0) then
            end = end - 1;
        else
            start = start + 1;
    end
end
Here sort(S) sorts the given integers with time complexity O(n^2). How do I find the complexity of the above algorithm? Do we need any higher-order mathematics to do this question?
Let's simplify the pseudocode by considering the worst case scenario.
sort(S);                      # O(N log(N))
for i=0 to n-2 do             # O(N)
    start = i+1;              # O(1)
    end = n-1;                # O(1)
    while (start < end) do    # O(N - i)
        start = start + 1;    # O(1)
    end
end
which can be also written as:
sort(S);
for i=0 to n-2 do
    for j = i+1 to n-1 do:
        ...
    end
end
So the number of iterations is
N * (N-1) / 2 = O(N^2)
which dominates the cost of the sorting step.
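To make the count concrete, here is a minimal Python sketch (variable names mine) of the two-pointer scan with a counter on the inner while loop; every iteration moves start or end, so for each i it runs at most end - start times:

```python
# The two-pointer pass from the pseudocode, instrumented to count iterations
# of the inner while loop.
def three_sum_iterations(S):
    S = sorted(S)                  # stands in for sort(S)
    n = len(S)
    iterations = 0
    for i in range(0, n - 1):      # i = 0 .. n-2
        a = S[i]
        start, end = i + 1, n - 1
        while start < end:
            iterations += 1
            b, c = S[start], S[end]
            if a + b + c == 0:
                # output a, b, c in the original
                start += 1
                end -= 1
            elif a + b + c > 0:
                end -= 1
            else:
                start += 1
    return iterations

# All-positive input forces the maximum number of iterations, close to n^2 / 2.
print(three_sum_iterations(list(range(1, 101))))
```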
In the book Programming Interviews Exposed it says that the complexity of the program below is O(N), but I don't understand how this is possible. Can someone explain why this is?
int var = 2;
for (int i = 0; i < N; i++) {
    for (int j = i+1; j < N; j *= 2) {
        var += var;
    }
}
You need a bit of math to see that. The inner loop iterates Θ(1 + log [N/(i+1)]) times (the 1 + is necessary since for i >= N/2, [N/(i+1)] = 1 and the logarithm is 0, yet the loop iterates once). j takes the values (i+1)*2^k until it is at least as large as N, and
(i+1)*2^k >= N <=> 2^k >= N/(i+1) <=> k >= log_2 (N/(i+1))
using mathematical division. So the update j *= 2 is called ceiling(log_2 (N/(i+1))) times and the condition is checked 1 + ceiling(log_2 (N/(i+1))) times. Thus we can write the total work
sum(i=0; i <= N-1; 1 + log(N/(i+1))) = N + N*log N - sum(j=1; j <= N; log j)
= N + N*log N - log N!
Now, Stirling's formula tells us
log N! = N*log N - N + O(log N)
so we find the total work done is indeed O(N).
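If you'd rather see it numerically (a quick sketch of mine, not from the book), counting the executions of the inner statement shows the ratio to N levelling off near a small constant (around 2) instead of growing like log N:

```python
# Count how many times the inner statement runs and divide by N.
def inner_count(N):
    count = 0
    for i in range(N):
        j = i + 1
        while j < N:
            j *= 2
            count += 1          # stands in for var += var
    return count

for N in (10**3, 10**4, 10**5, 10**6):
    c = inner_count(N)
    print(N, c, round(c / N, 3))
```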
Outer loop runs n times. Now it all depends on the inner loop.
The inner loop now is the tricky one.
Let's follow:
i=0 --> j=1 ---> log(n) iterations
...
...
i=(n/2)-1 --> j=n/2 ---> 1 iteration.
i=(n/2) --> j=(n/2)+1 --->1 iteration.
i > (n/2) ---> 1 iteration
(n/2)-1 >= i > (n/4) ---> 2 iterations
(n/4) >= i > (n/8) ---> 3 iterations
(n/8) >= i > (n/16) ---> 4 iterations
(n/16) >= i > (n/32) ---> 5 iterations
Summing over all i:
(n/2)*1 + (n/4)*2 + (n/8)*3 + (n/16)*4 + ... + [n/(2^i)]*i
<= n * sum(i=1; i -> infinity; i/2^i) = 2*n
--> O(n)
Daniel Fischer's answer is correct.
What's the time complexity of this loop?
j = 2
while (j <= n)
{
    // body of the loop is Θ(1)
    j = j^2
}
You have
j = 2
and each loop:
j = j^2
The pattern is:
2 = 2^(2^0)
2*2 = 2^(2^1)
4*4 = 2^(2^2)
16*16 = 2^(2^3)
Which can be seen as :
2^(2^k), with k being the number of iterations
Hence loop stops when :
2^(2^k) >= n
log2(2^(2^k)) >= log2(n)
2^k >= log2(n)
log2(2^k) >= log2(log2(n))
k >= log2(log2(n))
Hence the complexity is O(log2(log2(n))).
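A quick way to convince yourself (a small check of mine): count the iterations for a few values of n and compare with log2(log2(n)); the count tracks it up to a small additive constant.

```python
import math

# Count iterations of the squaring loop and compare with log2(log2(n)).
def squaring_iterations(n):
    j, count = 2, 0
    while j <= n:
        j = j * j
        count += 1
    return count

for n in (10, 100, 10**4, 10**8, 10**16):
    print(n, squaring_iterations(n), round(math.log2(math.log2(n)), 2))
```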