Running time of nested while loops

Function f(n)
    s = 0
    i = 1
    while i < 7 * n^(1/2) do
        j = i
        while j > 5 do
            s = s + i - j
            j = j - 2
        end
        i = 5 * i
    end
    return s
end f
I am trying to work out the big-Theta running time of the code above. I have been looking all over for a worked example, but everything I find covers for loops or a single while loop. How would you go about this problem with nested while loops?

Let's break this down into two key points:
i starts at 1 and is multiplied by 5 on each pass, until it is greater than or equal to 7 sqrt(n). That is an exponential increase, so the number of outer iterations is logarithmic in n. Thus we can change the code to the following equivalent:
m = floor(log_5(7 * n^(1/2)))
k = 0
while k < m do
    j = 5^k
    // ... inner loop ...
    k = k + 1
end
For each iteration of the outer loop, j starts from i and decreases in steps of 2 until it is less than or equal to 5. Note that in the first execution of the outer loop i = 1, and in the second i = 5, so the inner loop is not executed until the third iteration. Since i = 5^k is always odd and j decreases by 2, j stays odd, so the inner loop exits with j = 5 after (5^k - 5)/2 iterations (you can check this with pen and paper).
Combining the above steps: the outer loop runs m = Θ(log n) times, and the inner loop in the k-th outer iteration does (5^k - 5)/2 work. The total is a geometric sum dominated by its last term, which is Θ(5^m) = Θ(7 sqrt(n)), so f runs in Θ(sqrt(n)) time.
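To see the logarithmic outer-loop count concretely, here is a small Python sketch (my own illustration, not part of the original answer) that counts the outer iterations and prints log_5(7 sqrt(n)) next to the count; the count is just that logarithm rounded up:

import math

def outer_iterations(n):
    """Count iterations of the outer loop: i = 1, 5, 25, ... while i < 7*sqrt(n)."""
    count = 0
    i = 1
    while i < 7 * math.sqrt(n):
        count += 1
        i *= 5
    return count

for n in (16, 1000, 10**6):
    print(n, outer_iterations(n), math.log(7 * math.sqrt(n), 5))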

The first loop's bound is 7 * sqrt(n) (exponent 1/2 is the same as sqrt() of a number), but since i is multiplied by 5 each time, it only does about m = log_5(7 * sqrt(n)) iterations.
The second loop only gets to run in m - 2 of those iterations, since the first two values of i are 1 and 5 respectively, not passing the comparison j > 5.
i is multiplied by 5 on every pass.
Take an example where n = 16, so 7 * sqrt(n) = 28:
i = 1, n = 16;
while (i < 7 * 4) {    // 7 * sqrt(16) = 28
    // Do something
    i *= 5;
}
First value of i = 1: the outer loop has run 1 time; the inner loop runs 0 times.
Second value of i = 5: the outer loop has run 2 times; the inner loop runs 0 times.
Third value of i = 25: the outer loop has run 3 times; the inner loop runs 10 times.
The fourth value would be i = 125, which fails the test (125 > 28), so the loop stops.
The outer loop does m iterations, and the work of the inner loop grows by a factor of 5 with each one, so the total is dominated by the last inner loop, which does at most about (7 * sqrt(n)) / 2 iterations. That gives O(sqrt(n)) overall.
IMO, this one is tricky.
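As an empirical cross-check (a Python sketch of my own, not from either answer), the following counts how many times s = s + i - j executes and divides by sqrt(n); the ratio bounces around as i jumps in powers of 5, but it stays bounded by a constant, which is consistent with Θ(sqrt(n)):

import math

def count_ops(n):
    """Count executions of s = s + i - j in the function from the question."""
    count = 0
    i = 1
    while i < 7 * math.sqrt(n):
        j = i
        while j > 5:
            count += 1
            j -= 2
        i *= 5
    return count

for n in (10**2, 10**4, 10**6, 10**8):
    ops = count_ops(n)
    print(n, ops, ops / math.sqrt(n))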


What is the complexity of this function in terms of n?

f(n):
    s = 1
    m = n
    while m >= 1:
        for j in 1, 2, 3, ..., m:
            s = s + 1
        m = m / 2
My attempt
How many times does the statement s = s + 1 run? Well, let's try a number or two for n and see what we get.
n = 32
m = n = 32
m = 32 => inner for loop, loops 32 times
m = 16 => -.- 16 times
m = 8 => -.- 8 times
m = 4 => -.- 4 times
m = 2 => -.- 2 times
m = 1 => -.- 1 time
In total 32+16+8+4+2+1 = 63 times for n = 32
Doing the same for n = 8 gives 15 times
Doing the same for n = 4 gives 7 times
Doing the same for n = 64 gives 127 times
Notice how it always seems to be 2 * n - 1. I believe this is no coincidence, because what we're really observing is that the total number of times the statement gets executed equals the geometric sum of 2^k from k = 0 to k = log(n), which has the closed-form solution 2n - 1, given that we're using a base-2 logarithm.
As such, my guess is that the complexity is O(n).
Is this correct?
Well what you just observed is true indeed. We can prove it mathematically as well.
Inner loop execution for first iteration -> n
Inner loop execution for second iteration -> n/2
Inner loop execution for third iteration -> n/(2^2)
and so on....
Therefore the total time is n + n/2 + n/(2^2) + ... + n/(2^k), where n/(2^k) should equal 1, which implies k = log(n). Taking n common from all terms gives a geometric progression, and using the GP sum formula the total time comes out to n * (1 - (1/2)^(k+1)) / (1 - 1/2) = 2n - n/2^k. Putting in k = log(n), the total time is 2n - 1.
Note: this gives us a time complexity of O(n); the log is base 2.
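To make the counting concrete, here is a small Python sketch (my own check, assuming integer halving m = m // 2 and a power-of-two n, which matches the worked examples) that counts the s = s + 1 executions and prints 2n - 1 next to them:

def count_increments(n):
    """Count how many times s = s + 1 runs in the pseudocode above."""
    count = 0
    m = n
    while m >= 1:
        for _ in range(m):   # inner for loop runs m times
            count += 1
        m = m // 2           # integer halving
    return count

for n in (4, 8, 32, 64):
    print(n, count_increments(n), 2 * n - 1)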

Why is code like i=i*2 considered O(logN) when in a loop?

Why, because of the i = i * 2, is the runtime of the loop below considered O(log N)?
for (int i = 1; i <= N;) {
    code with O(1);
    i = i * 2;
}
Look at 1024 = 2^10. How many times do you have to double the number 1 to get 1024?
Times    1   2   3   4    5    6    7    8    9    10
Result   2   4   8   16   32   64   128  256  512  1024
So you would have to run your doubling loop ten times to get 2^10. And in general, you have to run your doubling loop n times to get 2^n. But what is n? It's log_2(2^n), so in general, if N is some power of 2, the loop has to run log_2(N) times to reach it.
To have the algorithm in O(logN), for any N it would need (around) log N steps. In the example of N=32 (where log 32 = 5) this can be seen:
i = 1 (start)
i = 2
i = 4
i = 8
i = 16
i = 32 (5 iterations)
i = 64 (abort after 6 iterations)
In general, after x iterations, i=2^x holds. To reach i>N you need x = log N + 1.
PS: When talking about complexities, the log base (2, 10, e, ...) is not relevant. Furthermore, it is not relevant if you have i <= N or i < N as this only changes the number of iterations by one.
You can prove it pretty simply.
Claim:
For the tth (0 base) iteration, i=2^t
Proof, by induction.
Base: 2^0 = 1, and indeed in the first iteration, i=1.
Step: For some t+1, the value of i is 2*i(t) (where i(t) is the value of i in the t-th iteration). From the induction hypothesis we know that i(t) = 2^t, and thus i(t+1) = 2*i(t) = 2*2^t = 2^(t+1), and the claim holds.
Now, let's examine our stop criterion. We iterate the loop while i <= N, and from the above claim, that means we iterate while 2^t <= N. Taking log_2 of both sides, we get log_2(2^t) <= log_2(N), and since log_2(2^t) = t, we get that we iterate while t <= log_2(N) - so we iterate Theta(log_2(N)) times. (And that concludes the proof.)
i starts at 1. In each iteration you multiply i by 2, so in the K-th iteration, i will be 2^(K-1).
After some number K of iterations, 2^(K-1) will be bigger than (or equal to) N.
This means N ≤ 2^(K-1),
which means log2(N) ≤ K-1.
K-1 will be the number of iterations your loop will run, and since K-1 is greater or equal to log(N), your algorithm is logarithmic.
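A quick way to convince yourself is to count the iterations directly. This Python sketch (my own, not from the answers above) runs the doubling loop for a few values of N and prints floor(log2(N)) + 1 next to the observed count:

import math

def doubling_iterations(N):
    """Count iterations of: for (i = 1; i <= N; i = i * 2)."""
    count = 0
    i = 1
    while i <= N:
        count += 1
        i *= 2
    return count

for N in (32, 1000, 10**6):
    print(N, doubling_iterations(N), math.floor(math.log2(N)) + 1)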

Calculating the time complexity

Can somebody help with the time complexity of the following code:
for (i = 0; i <= n; i++)
{
    for (j = 0; j <= i; j++)
    {
        for (k = 2; k <= n; k = k^2)
            print("")
    }
}
According to me, the first loop will run n times, the second will run (1 + 2 + 3 + ... + n) times, and the third will run log log n times,
but I am not sure about the answer.
We start from the inside and work out. Consider the innermost loop:
for (k = 2; k <= n; k = k^2)
    print("")
How many iterations of print("") are executed? First note that n is constant. What sequence of values does k assume?
iter | k
--------
1 | 2
2 | 4
3 | 16
4 | 256
We might find a formula for this in several ways. I used guess and prove to get iter = log(log(k)) + 1. Since the loop won't execute the next iteration if the value is already bigger than n, the total number of iterations executed for n is floor(log(log(n)) + 1). We can check this with a couple of values to make sure we got this right. For n = 2, we get one iteration which is correct. For n = 5, we get two. And so on.
The next level does i + 1 iterations, where i varies from 0 to n. We must therefore compute the sum 1 + 2 + ... + (n + 1), which gives the total number of iterations of the outermost and middle loops together: this sum is (n + 1)(n + 2) / 2. Multiplying it by the cost of the inner loop gives (n + 1)(n + 2)(log(log(n)) + 1) / 2, the total cost of the snippet. The fastest-growing term in the expansion is n^2 log(log(n)), and so that is what would typically be given as the asymptotic complexity.
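The formula is easy to sanity-check by brute force. Here is a Python sketch (my own, with k = k * k standing in for the pseudocode's k = k^2) that counts the print calls and prints the closed-form value (n + 1)(n + 2)/2 * (floor(log2(log2(n))) + 1) beside them:

import math

def count_prints(n):
    """Count print("") executions in the triple loop from the question."""
    count = 0
    for i in range(0, n + 1):
        for j in range(0, i + 1):
            k = 2
            while k <= n:
                count += 1
                k = k * k        # the pseudocode's k = k^2
    return count

def formula(n):
    inner = math.floor(math.log2(math.log2(n))) + 1
    return (n + 1) * (n + 2) // 2 * inner

for n in (2, 5, 16, 100):
    print(n, count_prints(n), formula(n))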

Time complexity of nested while loop with if-else

I am getting confused about how to analyze the time complexity of a nested while loop that splits into odd and even cases. Could anyone explain how to handle this situation?
i = 1
while (i < n) {
    k = i
    while (k < n) {
        if (k % 2 == 1)
            k++
        else
            k = k + 0.01 * n
    }
    i = i + 0.1 * n
}
So in a problem like this, the factors 0.01 and 0.1 play a huge role.
First let's consider the inner while loop: if k is odd, we increment k by 1; if k is even, we increment k by one-hundredth of n. How many iterations can this inner while loop run?
Clearly, if all iterations were of type 1 (the odd case), the inner while loop would run n - k times, and similarly, if all the iterations were of type 2 (the even case), the inner while loop would run at most 100 times (as we increment the value of k by one-hundredth of n each time).
Given a value of k, the number of iterations of the inner while loop is at most:
max(n - k, 100). From now on, we will assume the value of n - k to be greater than 100 always, without loss of generality.
Okay, how does the outer loop iterate? In each iteration of the outer loop, the value of i increases by one-tenths of n each time, so the outer while will run at most 10 times.
Making the running times explicit and calculating the overall running time:
Running time for the 1st iteration of the outer loop:    n - k
Running time for the 2nd iteration:                    + n - (k + 0.1*n)
Running time for the 3rd iteration:                    + n - (k + 0.2*n)
...
Running time for the 10th iteration:                   + n - (k + 0.9*n)
                                                       ------------------
                                                       = 10n - 10k - 4.5n
Plugging in k = 1 (as this is the start value of k):
10n - 10 - 4.5n = 5.5n - 10 = O(n)
Hence complexity is O(n) time.
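As a sanity check, here is a small Python simulation (my own sketch, treating 0.1*n and 0.01*n as real-valued steps, so k may become fractional) that counts the inner-loop iterations for a few values of n and prints the 5.5n - 10 upper bound next to them; the observed counts always stay within the bound:

def count_inner(n):
    """Count inner-loop iterations of the snippet above."""
    count = 0
    i = 1
    while i < n:
        k = i
        while k < n:
            count += 1
            if k % 2 == 1:
                k += 1
            else:
                k = k + 0.01 * n
        i = i + 0.1 * n
    return count

for n in (100, 1000, 10000):
    print(n, count_inner(n), 5.5 * n - 10)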

What is the order of growth of this function?

I'm new to algorithms and big O. What is the order of growth of this function?
I do a println and f(10) runs 15 times. f(20) runs 31 times.
It looks to me like log(N)*N/2. So it is logarithmic or linearithmic?
static long f(long N) {
    long sum = 0;
    for (long i = 1; i < N; i *= 2)
        for (long j = 0; j < i; j++)
            sum++;
    return sum;
}
The runtime is O(n). To see this, note that the inner loop runs 1 time on the first iteration, 2 times on the next iteration, 4 times on the next iteration, and more generally 2^i times on the ith iteration. The outer loop stops after lg n iterations because it keeps doubling, so the total work done is
1 + 2 + 4 + 8 + ... + 2^(lg n)
This is the sum of a geometric series and works out to 2^(lg n + 1) - 1 = 2 · 2^(lg n) - 1 = 2n - 1 = O(n).
Hope this helps!
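Here is a quick empirical check (a Python translation of the Java snippet, my own sketch) that computes the sum for a few values of N and prints sum / N; the ratio stays below 2, consistent with the O(n) bound above, and note that f(10) = 15 and f(20) = 31, matching the counts in the question:

def f(N):
    """Python translation of the Java function in the question."""
    total = 0
    i = 1
    while i < N:
        for _ in range(i):
            total += 1
        i *= 2
    return total

for N in (10, 20, 1000, 10**6):
    s = f(N)
    print(N, s, s / N)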
Inner loop: j counts up i times, so the max is n.
Outer loop: i goes from 1 to n, multiplying by 2 each time, so it runs lg n times.
So the total is O(n lg n).
Proceeding formally, you obtain:
O(2^(lg n)) should be the complexity. The growth of an exponential function is more than that of a linear function. Hence 2 · 2^(lg n) = O(2^(lg n)) instead of O(n).