Order of growth of triple for loop - time-complexity

I'm practising algorithm complexity and I came across this code online but I cannot figure out the order of growth for it. Any ideas?
int counter = 0;
for (int i = 0; i*i < N; i++)
    for (int j = 0; j*j < 4*N; j++)
        for (int k = 0; k < N*N; k++)
            counter++;

Take it one step (or loop in this case) at a time:
The first loop increments i as long as its square is lower than N, so this must be O(sqrt N), because int(sqrt(N)) (or int(sqrt(N)) - 1, when N is a perfect square) is the largest integer value whose square is lower than N;
The same holds for the second loop. We can ignore the 4 because it is a constant, and we do not care about those when dealing with big-oh notation. So the first two loops together are O(sqrt N)*O(sqrt N) = O(sqrt(N)^2) = O(N). You can multiply the complexities because the loops are nested, so the second loop will fully execute for each iteration of the first;
The third loop is obviously O(N^2), because k goes up to the square of N.
So the whole thing has to be O(N) * O(N^2) = O(N^3). You can usually solve problems like this by figuring out the complexity of the first loop, then the second, then the first two and so on.
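The step-by-step arithmetic above is easy to sanity-check empirically. Here is a small Python harness (my own, not part of the original question) that counts the iterations directly; since the loops run about sqrt(N) * 2*sqrt(N) * N^2 = 2*N^3 times, the ratio counter / N^3 should sit near 2:

```python
def count_ops(N):
    """Count iterations of the triple loop from the question."""
    counter = 0
    i = 0
    while i * i < N:            # ~sqrt(N) iterations
        j = 0
        while j * j < 4 * N:    # ~2*sqrt(N) iterations
            k = 0
            while k < N * N:    # N^2 iterations
                counter += 1
                k += 1
            j += 1
        i += 1
    return counter

for N in (4, 16, 64):
    print(N, count_ops(N) / N**3)   # exactly 2.0 for these perfect squares
```

The constant factor 2 comes from the 4*N bound in the middle loop; big-O discards it, leaving O(N^3).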

sqrt(n) x 2*sqrt(n) x n^2
Which gives:
O(n^3)
Explanation:
For the first loop, take the square root of both sides of the boundary equation
i^2 = n
For the second loop, take the square root of both sides of the boundary equation
j^2 = 4n
The third loop is straightforward: k runs up to n^2.

Related

I can find time complexity for 'for' loop but when the condition is changed I get stuck. Is there any mathematical way to calculate time complexity? [duplicate]

This question already has answers here:
Big O, how do you calculate/approximate it?
(24 answers)
Closed 2 years ago.
Q1)
int a = 0, i = N;
while (i > 0) {
    a += i;
    i /= 2;
}
Q2)
int i, j, k = 0;
for (i = n / 2; i <= n; i++) {
    for (j = 2; j <= n; j = j * 2) {
        k = k + n / 2;
    }
}
If you merely approach it mathematically, things can get very complex. Most of the time it's better to walk through the logic of the algorithm.
Q1) i changes from N to 0. Every time the loop runs, i (which was initially N) halves, which gives you logarithmic time complexity: O(log N).
Q2) The outer loop runs n/2 times so O(N/2) and according to time complexity rules you drop the constant so O(N).
Every time the inner loop runs, you are doubling the value of j, so j reaches n after about log2(n) doublings, which is logarithmic: O(log N).
And since the inner loop runs for each element in the outer loop, the total time complexity is O(N log N).
And notice that I am not taking k = k + n / 2 and a += i into consideration, because these operations run in constant time, O(1), which is dominated by the loop terms and so has no effect on the result.
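A quick empirical check (the function names are mine, not from the question) confirms both answers: the Q1 loop runs floor(log2(N)) + 1 times, and the Q2 loops run about (n/2) * log2(n) times:

```python
def q1_steps(N):
    """Count iterations of the Q1 while loop."""
    a, i, steps = 0, N, 0
    while i > 0:
        a += i
        i //= 2
        steps += 1
    return steps                     # floor(log2(N)) + 1 for N >= 1

def q2_steps(n):
    """Count iterations of the Q2 inner loop body."""
    k, steps = 0, 0
    for i in range(n // 2, n + 1):   # outer loop: ~n/2 iterations
        j = 2
        while j <= n:                # inner loop: ~log2(n) iterations
            k = k + n // 2
            steps += 1
            j *= 2
    return steps

print(q1_steps(1024))   # 11 = log2(1024) + 1
print(q2_steps(16))     # 36 = (16/2 + 1) * log2(16)
```

Doubling N adds only one step to q1, while q2 grows a little faster than linearly, matching O(log N) and O(N log N).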

Time complexity on nested for loop

function(n):
{
    for (i = 1; i <= n; i++)
    {
        for (j = 1; j <= n / 2; j++)
            output("")
    }
}
Now I have calculated the time complexity for the first for loop, which is O(n). The second for loop has j <= n / 2, so for any given n (say the range [1,2,...,10]) I expected O(log(n)), since it seemed to give a series n, n/2, n/4, n/8, ..., k.
So if we wanted to compare the relationship it must look something like this 2^n = k.
My question is will it give me O(log(n))?
The correct summation according to the code is:
sum_{i=1}^{n} sum_{j=1}^{n/2} 1 = n * (n/2) = n^2/2
So, it's not O(log n). It's O(n^2).
No, it does not give you O(log n).
The first for loop is O(n). The second loop is O(n) as well, as the number of iterations grows as a function of n (the growth rate is linear).
It would be the same even by changing the second loop to something like
for (j=1; j<=n/2000; j++)
or in general if you replace the denominator with any constant k.
To conclude, the time complexity is quadratic, i.e., O(n^2).
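Counting the iterations directly (a throwaway check of mine, not part of the original answer) shows the quadratic growth: the body runs exactly n * floor(n/2) times, so the count divided by n^2 settles at 1/2:

```python
def steps(n):
    """Count iterations of the nested loop body."""
    total = 0
    for i in range(1, n + 1):            # outer loop: n iterations
        for j in range(1, n // 2 + 1):   # inner loop: n/2 iterations, independent of i
            total += 1
    return total

for n in (10, 100, 1000):
    print(n, steps(n) / n**2)   # 0.5 for even n
```

Replacing n // 2 with n // 2000 only changes that 0.5 to a smaller constant; the growth stays quadratic.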

What is the time complexity of below func?

What is the running time complexity of fun()?
int fun(int n)
{
    int count = 0;
    for (int i = n; i > 0; i = i - 2)
        for (int j = 2; j < i; j = j * j)
            for (int k = j; k > 0; k = k / 2)
                count += 1;
    return count;
}
is it O(n * lglgn * lglglgn)?
----
Edit:
The middle loop runs loglog(i) times, but the largest value j takes can be almost n (for example, n = 17 gives max(j) = 16).
The inner loop runs log(j) times, and since the max value of j is at most n, it can run up to log(n) times.
So we can say that the big O of this question is O(n * lglg n * lg n).
Since the values of j and k depend on the enclosing loop variables (i and j), maybe there is a tighter answer to this question.
We need to count this carefully, because the number of iterations of each inner loop depends in a non-trivial way on the outer loop's variable. Simply averaging for each loop and multiplying the results together will not give the right answer.
The outer loop runs O(n) times, because i counts from n down to 0 in constant steps.
The middle loop's values of j are 2, then 2*2 = 4, then 4*4 = 16, and on the m'th iteration, j = 2^2^m. The last iteration will be when 2^2^m >= i, in which case m >= log log i. So this runs O(log log i) times.
The innermost loop runs O(log j) times, because on the m'th iteration, k = j / 2^m. The last iteration will be when k <= 1, in which case m >= log j. So this runs O(log j) times.
However, it is not correct to multiply these together to get O(n * log log n * log log log n): j grows up to almost i, not log log n, so the inner loop is up to O(log n). Even the product O(n * log log n * log n), obtained by substituting the maximum values, is an upper bound but not a tight one, because i is not n on every iteration and j is not near its maximum on every iteration. To calculate the true time complexity, you will need to write it as a double summation, and simplify it algebraically.
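To see the gap concretely, here is a Python translation of fun (my own sketch, not from the answer). For these sample sizes the count stays below n * log2(n), well under the naive product; pinning down the exact bound still takes the double summation:

```python
import math

def fun(n):
    """Direct translation of the C function fun(n)."""
    count = 0
    i = n
    while i > 0:            # outer loop: ~n/2 iterations
        j = 2
        while j < i:        # middle loop: j squares each time (2, 4, 16, 256, ...)
            k = j
            while k > 0:    # inner loop: ~log2(j) iterations
                count += 1
                k //= 2
            j = j * j
        i -= 2
    return count

for n in (10, 100, 10000):
    print(n, fun(n), round(n * math.log2(n)))   # count stays below n*log2(n) here
```

The inner counts 2, 3, 5, 9, ... for j = 2, 4, 16, 256 form a near-geometric sum, which is why multiplying worst-case factors overestimates.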
As a simpler example to think about, consider the following code:
for (i = 1; i < n; i *= 2) {
    for (j = 0; j < i; j += 1) {
        // do something
    }
}
The outer loop runs O(log n) times, and the inner loop runs O(i) times, but the overall complexity is actually O(n). To see this, count how many times // do something is reached; the first time the outer loop iterates it'll be 1, then it'll be 2, then 4, then 8, and so on up to n. This is a geometric progression with a sum <= 2n, giving a total number of steps which is O(n).
Note that if we naively multiply the two loops' complexities we get O(n log n) instead, which is an upper bound, but not a tight one.
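The geometric-progression argument is easy to verify with a quick check of my own: the body runs 1 + 2 + 4 + ... + 2^floor(log2(n-1)) < 2n times, so the count divided by n always stays below 2:

```python
def body_count(n):
    """Count how many times '// do something' runs in the simpler example."""
    total = 0
    i = 1
    while i < n:             # outer loop: ~log2(n) iterations
        for j in range(i):   # inner loop: i iterations
            total += 1
        i *= 2
    return total

for n in (10, 1000, 100000):
    print(n, body_count(n) / n)   # always below 2
```

If the loops' complexities were simply multiplied, the ratio would grow like log n instead of staying bounded.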
Using Big-O notation:
With the outer loop we get O(N/2): you have a range of N values and you reduce i by 2 every time, so you get a total of N/2 iterations.
With the middle loop we get O(Log(I)).
With the innermost loop we have O(Log(J)), because you divide your iterator by 2 on every pass.
If we multiply the three complexities, because the loops are nested:
O(N/2) * O(Log(I)) * O(Log(J)) ~ O(N/2 * Log(I) * Log(J)) ~ O(N/2 * Log^2(N)) ~ O(N * Log^2(N)).
We get: O(N * Log^2(N)).

What is the time complexity of the code below?

sum = 0;
for (int i = 1; i < n; i++) {
    for (int j = 1; j < n / i; j++) {
        sum = sum + j;
    }
}
In the above code, the outer loop's variable i runs from 1 to n, making the complexity of the outer loop O(n).
This explains the n part of the O(n log n) complexity.
But for the inner loop, j runs from 1 to n/i, so whenever i is 1 the inner loop does n iterations; that made me guess the inner loop should also be O(n),
making the total time complexity O(n*n) = O(n^2).
This is what you can do using Sigma notation:
sum_{i=1}^{n-1} sum_{j=1}^{n/i - 1} 1 = sum_{i=1}^{n-1} (n/i - 1) = n*H_{n-1} - (n-1),
where H_{n-1} is the harmonic number, and H_{n-1} = Theta(log n), so the total is O(n log n).
Finding Big O of the Harmonic Series
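As a rough check (a script of mine, using integer division like the C code), the total iteration count does track n times the harmonic number, i.e. roughly n * log(n):

```python
import math

def steps(n):
    """Count iterations of the inner loop body, with C-style integer division."""
    total = 0
    for i in range(1, n):
        for j in range(1, n // i):   # n/i - 1 iterations for this i
            total += 1
    return total

n = 1000
print(steps(n), round(n * math.log(n)))   # same order of magnitude
```

The per-i counts n/1, n/2, n/3, ... shrink, so the harmonic sum is far below the n^2 that the asker's worst-case-i reasoning suggests.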

Big-Oh notation for code fragment

I am lost on these code fragments and I'm having a hard time finding any other similar examples.
//Code fragment 1
sum = 0;
for (i = 0; i < n; i++)
    for (j = 1; j < i*i; j++)
        for (k = 0; k < j; k++)
            sum++;
I'm guessing it is O(n^4) for fragment 1.
//Code fragment 2
sum = 0;
for (i = 1; i < n; i++)
    for (j = 1; j < i * i; j++)
        if (j % i == 0)
            for (k = 0; k < j; k++)
                sum++;
I am very lost on this one. Not sure how the if statement affects the loop.
Thank you for the help ahead of time!
The first one is in fact O(n^5). The sum++ line is executed 1^4 times, then 2^4 times, then 3^4, and so on. The sum of powers-of-k has a term in n^(k+1) (see e.g. Faulhaber's formula), so in this case n^5.
For the second one, the way to think about it is that the inner loop only executes when j is a multiple of i. So the second loop may as well be written for (j = i; j < i * i; j += i); the inner loop then runs i, 2i, ..., up to about i^2 times, and those counts sum to roughly i^3 / 2. So we now have a sum of cubes rather than fourth powers. The highest term is therefore n^4.
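Both claims are easy to check numerically. Below is a direct Python translation of mine (the garbled initializer in fragment 2's outer loop is assumed to be i = 1); doubling n should multiply the first count by roughly 2^5 = 32 and the second by roughly 2^4 = 16:

```python
def fragment1(n):
    s = 0
    for i in range(n):                 # i = 0 .. n-1
        for j in range(1, i * i):      # j = 1 .. i^2 - 1
            for k in range(j):         # k = 0 .. j-1
                s += 1
    return s

def fragment2(n):
    s = 0
    for i in range(1, n):              # assuming the loop starts at i = 1
        for j in range(1, i * i):
            if j % i == 0:             # only multiples of i fall through
                for k in range(j):
                    s += 1
    return s

for n in (10, 20):
    print(n, fragment1(n), fragment2(n))
```

For small n the measured growth factors overshoot 32 and 16 a little because lower-order terms still matter; they settle toward the predicted values as n grows.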
I'm fairly sure the 1st fragment is actually O(n^5).
Because:
n times,
i^2 times, where i averages about half of n (for each value x there is a corresponding n - x, and together they sum to n), which is therefore about n^2 / 4 times (call this a);
then a times again;
and when you multiply: n * a * a = n * (n^2/4) * (n^2/4) = n^5 / 16, or O(n^5).
I believe the second is O(n^4), because:
It's iterated n times.
Then it's iterated n*n times (literally n*n/4, but constants drop out in O notation).
Then only about 1/n of those iterations are let through by the if (j must be a multiple of i).
Then n*n iterations are repeated in the innermost loop.
So: n * n*n * n*n / n = n^4.
With a sum so handy to compute, you could run these for n=10, n=50, and so on, and just look which of O(N^2), O(N^3), O(N^4), O(N^6) is a better match. (Note that the index for the inner-most loop also runs to n*n...)
First off, I agree with your assumption for the first scenario. Here is my breakdown for the second.
The if statement will cause the last loop to run only part of the time, since the third inner loop is reached only for the values of j that divide evenly by i. Bottom line: in big-O we ignore constants, so I think you're looking at O(n^3).