I am using this algorithm in my program:
for( i=0 ; i<N ; i++ )
for( j=i+1 ; j<N+1 ; j++ )
for( k=0 ; k<i ; k++ )
doWork();
Can anyone help me find the time complexity of this snippet?
I guess for the first two loops alone it is
N*(N+1)/2
right? What about all three loops together?
Thanks to Tim Meyer for correcting me:
Simply counting the iterations gives, for N = 0, 1, 2, 3, 4, 5, 6, 7, 8, ..., the series 0, 0, 1, 4, 10, 20, 35, 56, 84, ..., which is generated by the formula:
u(N) = (N - 1)N(N + 1)/6
So the snippet performs (N - 1)N(N + 1)/6 iterations, which simplifies to O(N^3) time complexity.
Formally, each value of i contributes (N - i) iterations of the j loop, each running i iterations of the k loop, so the total is
sum_{i=0}^{N-1} i(N - i) = N * (N-1)N/2 - (N-1)N(2N-1)/6 = (N - 1)N(N + 1)/6
I am trying to calculate the Big-O complexity for this code, but I keep failing.
I tried nesting SUMs, and also counting the number of steps for each case, like:
i=1 j=1 k=1 (1 step)
i=2 j=1,2 k=1,2,3,4 (4 steps)
. . . . . . . . . . . . . . .
i=n (I wrote n = 2^(log n))   j = 1,2,4,8,16,...,n   k = 1,2,3,4,...,n^2 (n^2 steps)
Then I tried to sum all the steps together. I need help.
for (int i=1; i<=n; i*=2)
for (int j=1; j<=i; j*=2)
for(int k=1; k<=j*j; k++)
// a line of code with O(1) complexity
Let's take a look at the number of times the inner loop runs: j^2. But j steps through powers of 2 up to i, and i in turn steps through powers of 2 up to n. So let's "draw" a little graphic of the terms of the sum that gives us the total number of iterations:
  ----   1
   ^     1  4
   |     1  4  16
log2(n)          ...
   |     1  4  16 ... n^2/16
   v     1  4  16 ... n^2/16  n^2/4
  ----   1  4  16 ... n^2/16  n^2/4  n^2
         |<---------log2(n)--------->|
The graphic can be interpreted as follows: each value of i corresponds to a row. Each value of j is a column within that row. The number itself is the number of iterations k goes through. The values of j are the square roots of the numbers. The values of i are the square roots of the last element in each row. The sum of all the numbers is the total number of iterations.
Looking at the bottom row, the terms of the sum are (2^z)^2 = 2^(2z) for z = 1 ... log2(n). The number of times each term appears in the sum is given by the height of its column. The height for a given term is log2(n) + 1 - z (basically a countdown from log2(n)).
So the final sum is
log2(n)
  Σ    2^(2z) * (log2(n) + 1 - z)
z = 1
Here is what Wolfram Alpha has to say about evaluating the sum: http://m.wolframalpha.com/input/?i=sum+%28%28log%5B2%2C+n%5D%29+%2B+1+-+z%29%282%5E%282z%29%29%2C+z%3D1+to+log%5B2%2C+n%5D:
C1*n^2 - C2*log(n) - C3
Cutting out all the less significant terms and constants, the result is
O(n^2)
For the outermost loop:
sum_{i in {1, 2, 4, 8, 16, ...}} 1, i <= n (+)
<=>
sum_{i in {2^0, 2^1, 2^2, ... }} 1, i <= n
Let 2^I = i:
2^I = i <=> e^{I log 2} = i <=> I log 2 = log i <=> I = (log i)/(log 2)
Thus, (+) is equivalent to
sum_{I in {0, 1, ... }} 1, I <= floor((log n)/(log 2)) ~= log n (*)
Second outermost loop:
sum_{j in {1, 2, 4, 8, 16, ...}} 1, j <= i (++)
As above, 2^I = i, and let 2^J = j. Similarly to above,
(++) is equivalent to:
sum_{J in {0, 1, ... }} 1, J <= floor((log (2^I))/(log 2)) = I (**)
To touch base: the outermost and second-outermost loops alone
have now been reduced to
sum_{I in {0, 1, ... }}^{log n} sum_{J in {0, 1, ...}}^{I} ...
which (if there were no innermost loop) would be O((log n)^2).
The innermost loop is trivial once we express its largest bound in terms of `n`.
sum_{k in {1, 2, 3, 4, ...}} 1, k <= j^2 (+)
As above, let 2^J = j and note that j^2 = 2^(2J)
sum_{k in {1, 2, 3, 4, ...}} 1, k <= 2^(2J)
Thus, k is bounded by 2^(2 max(J)) = 2^(2 max(I)) = 2^(2 log2(n)) = n^2 (***)
Combining (*), (**) and (***), and noting that the innermost loop was bounded by its largest value n^2 throughout, the asymptotic complexity of the three nested loops is at most:
O(n^2 log^2 n) (or, O((n log n)^2)).
This is an upper bound rather than a tight one; summing the innermost term exactly gives Θ(n^2).
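For completeness, the innermost sum can also be evaluated tightly rather than bounded by its maximum; a sketch of the geometric-series evaluation (taking n a power of two for simplicity):

```latex
\sum_{I=0}^{\log_2 n} \sum_{J=0}^{I} 2^{2J}
  = \sum_{I=0}^{\log_2 n} \frac{4^{I+1} - 1}{3}
  = \Theta\!\left(4^{\log_2 n}\right)
  = \Theta(n^2)
```

which matches the O(n^2) result obtained graphically above.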
I have tried to find the complexity, but my answer doesn't match the solution in the text.
Could anyone explain how to find the time complexity?
for (int i=0; i<n; i++)
for (int j=i; j< i*i; j++)
if (j%i == 0)
{
for (int k=0; k<j; k++)
printf("*");
}
Let f(n) be the number of operations aggregated over the outer loop.
Let g(i) be the number of operations aggregated at the level of the first inner loop.
Let h(j) be the number of operations performed by the third (innermost) loop.
Looking at the innermost loop:
for (int k=0; k<j; k++)
printf("*");
We can say that h(j) = j.
Now, as j varies from i to i*i, the following values of j satisfy j % i == 0, i.e. j is a multiple of i:
j = 1*i
j = 2*i
j = 3*i
...
j = (i-1)*i
So
g(i) = sum(j=i, j<i^2, h(j) if j % i == 0, else 0)
     = h(i) + h(2*i) + ... + h((i-1)*i)
     = i + 2*i + ... + (i-1)*i
     = i*(1 + 2 + ... + (i-1)) = i*i*(i-1)/2
     ≈ 0.5*i^3  // dropping the -0.5*i^2 term, dominated by i^3 as i -> +Inf
=> f(n) = sum(i=0, i<n, g(i))
= sum(i=0, i<n, 0.5i^3)
<= sum(i=0, i<n, 0.5n^3)
<= 0.5n^4
=> f(n) = O(n^4)
A posted claim of O( N^5 ) was not supported by experimental data.
Best to start with experimentation at a low scale:
for ( int aScaleOfBigO_N = 1;
      aScaleOfBigO_N < 2147483646;
      aScaleOfBigO_N *= 2
      ){
      printf( "START: running experiment for a scale of N( %d ) produces this:\n",
              aScaleOfBigO_N
              );
      long long letsAlsoExplicitlyCountTheVisits = 0; // long long: counts overflow int past N ~ 512
      for ( long long i = 0; i < aScaleOfBigO_N; i++ )
          for ( long long j = i; j < i*i; j++ )       // long long: i*i overflows int for large i
              if ( j % i == 0 )
              {
                  for ( long long k = 0; k < j; k++ )
                  {
                   // printf( "*" );                  // avoid devastating UI
                      letsAlsoExplicitlyCountTheVisits++;
                  }
              }
      printf( " END: running experiment visits this many( %lld ) times the code\n",
              letsAlsoExplicitlyCountTheVisits
              );
      }
Having collected a reasonably large set of datapoints ( N, countedVisits ), your next step may be to fit the observed datapoints and formulate the best-matching O( f(N) ) function of N.
That can be as simple as this:
START: running experiment for a scale of N( 1 )
END: running experiment visits this many( 0 ) times the code.
START: running experiment for a scale of N( 2 )
END: running experiment visits this many( 0 ) times the code.
START: running experiment for a scale of N( 4 )
END: running experiment visits this many( 11 ) times the code.
START: running experiment for a scale of N( 8 )
END: running experiment visits this many( 322 ) times the code.
START: running experiment for a scale of N( 16 )
END: running experiment visits this many( 6580 ) times the code.
START: running experiment for a scale of N( 32 )
END: running experiment visits this many( 117800 ) times the code.
START: running experiment for a scale of N( 64 )
END: running experiment visits this many( 1989456 ) times the code.
START: running experiment for a scale of N( 128 )
END: running experiment visits this many( 32686752 ) times the code.
START: running experiment for a scale of N( 256 )
END: running experiment visits this many( 529904960 ) times the code.
START: running experiment for a scale of N( 512 )
END: running experiment visits this many( 8534108800 ) times the code.
START: running experiment for a scale of N( 1024 )
END: running experiment visits this many( 136991954176 ) times the code.
START: running experiment for a scale of N( 2048 )
...
Experimental data show this algorithm's time-complexity behaviour in vivo: each doubling of N multiplies the visit count by roughly 16 = 2^4 ( for example 529904960 / 32686752 ~ 16.2 ), which matches the O( N^4 ) analysis.
These are the for loops whose time complexity I have to find, but I don't clearly understand how to calculate it.
for (int i = n; i > 1; i /= 3) {
for (int j = 0; j < n; j += 2) {
... ...
}
for (int k = 2; k < n; k = (k * k) {
...
}
}
For the first line, (int i = n; i > 1; i /= 3): it keeps dividing i by 3, and when i is no longer greater than 1 the loop stops, right?
But what is its time complexity? I think it is n, but I am not really sure. The reason I think it is n is: if I assume n is 30, then i will be 30, 10, 3, 1 and then the loop stops. It runs n times, doesn't it?
And for the last for loop, I think its time complexity is also n, because
k starts at 2 and keeps multiplying itself by itself until k is greater than n.
So if n is 20, k will be 2, 4, 16 and then stop. It runs n times too.
I don't think I really understand this kind of question, because time complexity can be log(n) or n^2 etc., but all I ever see is n.
I don't really know when a log or a square comes in.
Every for loop runs n times, I think. How can a log or a square be involved?
Can anyone help me understand this? Please.
If you want to calculate the time complexity of an algorithm, go through this post here: How to find time complexity of an algorithm
That said, the way you're thinking about algorithmic complexity is small-scale and linear. It helps to think about it in orders of magnitude, then plot it that way. If you take:
int x = 0, z = 0;
for (int i = n; i > 1; i /= 3) {
    for (int j = 0; j < n; j += 2) {
        x = x + 1;
    }
    for (int k = 2; k < n; k = k * k) {
        z = z + 1;
    }
}
and plot x and z on a graph where n goes from 1 -> 10 -> 100 -> 1000 -> 10^15 or so, you'll see that x dominates: the outer loop runs about log3(n) times and the j loop contributes about n/2 iterations on each pass, while the k loop only adds about log2(log2(n)). When analyzing algorithmic complexity you're primarily interested in the maximum number of times, in either the worst or the most common case, your inputs are looped through, omitting constants. So in this case I would expect your algorithm to be O(n log n).
For further reading, I suggest https://en.wikipedia.org/wiki/Introduction_to_Algorithms ; it's not exactly easy but covers this in depth.
I thought the time complexity of the following code is O(log N), but the answer says it's O(N). I wonder why:
int count = 0;
for (int i = N; i > 0; i /= 2) {
for (int j = 0; j < i; j++) {
count += 1;
}
}
The inner for-loop runs this many times in total:
N + N/2 + N/4 + ...
which seems like log N to me. Please help me understand why it isn't. Thanks.
1, 1/2, 1/4, 1/8, ..., 1/2^n is a geometric sequence with a = 1, r = 1/2 (a is the first term, and r is the common ratio).
Its sum can be calculated with the formula for an infinite geometric series, S = a / (1 - r).
In this case, the limit of the sum is 1 / (1 - 1/2) = 2, so:
N + N/2 + N/4 + ... = N(1 + 1/2 + 1/4 + ...) -> N * 2
Thus the complexity is O(N).
Here is a nested for loop. I worked out its time complexity as n*lg(n).
int sum = 0;
for(int k = 1; k < n; k*=2){
for(int i = 1; i < n; i++){
sum++;
}
}
My thinking is as follows.
The outer for loop: k takes the values 1, 2, 4, 8, ..., so it performs lg(n) iterations.
The inner for loop: i performs n iterations.
Hence, the overall number of operations is n*lg(n).
Am I right?
Yes, the order of growth you suggested is correct. You can show it as follows: for n a power of two the outer loop executes exactly lg(n) times (k = 1, 2, 4, ..., n/2), and the inner loop executes n - 1 times on each of those passes, so the total count is (n - 1)*lg(n) = Θ(n lg n).