I'm new to algorithms and big O. What is the order of growth of this function?
I added a println and the innermost statement runs 15 times for f(10) and 31 times for f(20).
It looks to me like log(N) * N/2. So is it logarithmic or linearithmic?
static long f(long N) {
    long sum = 0;
    for (long i = 1; i < N; i *= 2)
        for (long j = 0; j < i; j++)
            sum++;
    return sum;
}
The runtime is O(n). To see this, note that the inner loop runs 1 time on the first iteration of the outer loop, 2 times on the next iteration, 4 times on the next, and more generally 2^k times on the k-th iteration (counting from k = 0). The outer loop stops after about lg n iterations because it keeps doubling, so the total work done is
1 + 2 + 4 + 8 + ... + 2^(lg n)
This is the sum of a geometric series and works out to 2^(lg n + 1) - 1 = 2 * 2^(lg n) - 1 = 2n - 1 = O(n).
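As a sanity check on that derivation, here is a small harness (my own sketch, not part of the original question) that compares f(N) against the geometric-sum closed form 2^(floor(lg(N-1)) + 1) - 1 and against 2N:

public class GrowthCheck {
    // Illustrative harness; the class name is my own. f is the function from the question.
    static long f(long N) {
        long sum = 0;
        for (long i = 1; i < N; i *= 2)
            for (long j = 0; j < i; j++)
                sum++;
        return sum;
    }

    public static void main(String[] args) {
        for (long N : new long[] {10, 20, 1_000, 1_000_000}) {
            long k = 63 - Long.numberOfLeadingZeros(N - 1);   // floor(lg(N - 1))
            long closedForm = (1L << (k + 1)) - 1;            // 1 + 2 + ... + 2^k
            System.out.printf("N=%d  f(N)=%d  closedForm=%d  2N=%d%n",
                              N, f(N), closedForm, 2 * N);
        }
    }
}

The counts it prints match the 15 and 31 you observed, and they never exceed 2N, the linear bound from the geometric series.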
Hope this helps!
Inner loop: j counts up to i times, so the max is n.
Outer loop: i goes from 1 to n, multiplying by 2 each time, so it runs lg n times.
So the total is O(n lg n).
Proceeding formally, you obtain:
O(2^lg n) should be the complexity. The growth of an exponential function is greater than that of a linear function. Hence 2 * 2^lg n = O(2^lg n) instead of O(n).
Related
What is the running time complexity of fun()?
int fun(int n)
{
    int count = 0;
    for (int i = n; i > 0; i = i - 2)
        for (int j = 2; j < i; j = j * j)
            for (int k = j; k > 0; k = k / 2)
                count += 1;
    return count;
}
is it O(n * lglgn * lglglgn)?
----
Edit:
The middle loop runs about loglog(i) times, but the largest value j reaches can be almost n (for example, for n = 17, max(j) = 16).
The innermost loop runs about log(j) times; since the max value of j is at most n, its maximum number of iterations can be log(n).
So we can say that a big O bound for this code is O(n * lglg n * lg n).
Since the values of j and k depend on the enclosing loops' variables (i and j respectively), maybe there is a tighter answer to this question.
We need to count this carefully, because the number of iterations of each inner loop depends in a non-trivial way on the outer loop's variable. Simply averaging for each loop and multiplying the results together will not give the right answer.
The outer loop runs O(n) times, because i counts from n down to 0 in constant steps.
The middle loop's values of j are 2, then 2*2 = 4, then 4*4 = 16, and on the m'th iteration, j = 2^2^m. The last iteration will be when 2^2^m >= i, in which case m >= log log i. So this runs O(log log i) times.
The innermost loop runs O(log j) times, because on the m'th iteration, k = j / 2^m. The last iteration will be when k <= 1, in which case m >= log j. So this runs O(log j) times.
However, it is not correct to multiply these together to get O(n * log log n * log log log n), because i is not n on every iteration, and j is not log log n on every iteration; j grows nearly as large as i, so log j can be as large as log n. Multiplying the worst case of each loop only gives the upper bound O(n * log log n * log n), and it is not a tight one. To calculate the true time complexity, you will need to write it as a double summation and simplify it algebraically.
As a simpler example to think about, consider the following code:
for (i = 1; i < n; i *= 2) {
    for (j = 0; j < i; j += 1) {
        // do something
    }
}
The outer loop runs O(log n) times, and the inner loop runs O(i) times, but the overall complexity is actually O(n). To see this, count how many times // do something is reached; the first time the outer loop iterates it'll be 1, then it'll be 2, then 4, then 8, and so on up to n. This is a geometric progression with a sum <= 2n, giving a total number of steps which is O(n).
Note that if we naively multiply the two loops' complexities we get O(n log n) instead, which is an upper bound, but not a tight one.
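To see how this plays out on the fun() from the question, here is an empirical companion (my own sketch, not part of the answer above) that counts the exact number of increments and prints it next to a couple of candidate growth rates, so you can judge which bound looks tight:

public class FunCount {
    // Illustrative harness; same loops as fun(n) in the question, but with long
    // counters so that larger n and the j = j*j step do not overflow.
    static long fun(long n) {
        long count = 0;
        for (long i = n; i > 0; i -= 2)
            for (long j = 2; j < i; j *= j)
                for (long k = j; k > 0; k /= 2)
                    count++;
        return count;
    }

    public static void main(String[] args) {
        for (long n : new long[] {1_000, 10_000, 100_000, 1_000_000}) {
            long c = fun(n);
            double lg = Math.log(n) / Math.log(2);
            double lglg = Math.log(lg) / Math.log(2);
            System.out.printf("n=%d  count=%d  count/(n lg n)=%.3f  count/(n lglg n lg n)=%.3f%n",
                              n, c, c / (n * lg), c / (n * lglg * lg));
        }
    }
}

If you do carry out the double summation, the two inner loops cost roughly a geometric series per outer iteration (the innermost loop runs lg j + 1 times, and lg j doubles from one middle iteration to the next), which is Θ(lg i); summed over i = n, n-2, ..., 2 that works out to Θ(n lg n), and the printed ratios should be consistent with that.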
Using Big O notation:
With the outer loop we get O(N/2): you have a loop over N items and you reduce the counter by 2 every time, so you get a total of N/2 iterations.
With the middle loop we get O(Log(I)).
With the innermost loop we have O(Log(J)), because you are dividing your iterator by 2 on every pass.
If we multiply the three complexities because they are nested:
O(N/2) * O(Log(I)) * O(Log(J)) ~ O(N/2 * Log(I) * Log(J)) ~ O(N/2 * Log^2(N)) ~ O(N * Log^2(N)).
We get a complexity of O(N * Log^2(N)).
Given
for (int i = 1; i <= n - 1; i++)
    for (int j = i + 1; j <= n; j++)
        Console.WriteLine(i, j);
I understand that the outer for loop runs 4n - 1 times and the inner runs 3n^2 - 3 times; however, I don't understand why the print statement runs n(n - 1)/2 times. I am only getting n(n - 1), yet the slides say n(n - 1)/2. What am I missing?
for i = 1, j varies from 2 to n => n-1 times
for i = 2, j varies from 3 to n => n-2 times
...
...
for i=n-1 j varies from n to n => 1 time
So the number of operations is (n-1) + (n-2) + (n-3) + ... + 1,
which sums to n(n-1)/2 (recall the formula for the sum of the first n natural numbers: https://cseweb.ucsd.edu/groups/tatami/handdemos/sum/).
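If it helps to see the formula in action, here is a tiny check (my own sketch, not from the answer) that counts how many times the WriteLine line would execute and compares it with n(n - 1)/2:

public class PairCount {
    // Illustrative harness; the class name is my own.
    public static void main(String[] args) {
        for (int n : new int[] {5, 10, 100, 1_000}) {
            long count = 0;
            for (int i = 1; i <= n - 1; i++)
                for (int j = i + 1; j <= n; j++)
                    count++;                              // stands in for Console.WriteLine(i, j)
            System.out.printf("n=%d  count=%d  n(n-1)/2=%d%n",
                              n, count, (long) n * (n - 1) / 2);
        }
    }
}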
You are not missing much because the big O bound of both n(n - 1) and n(n - 1)/2 is O(n^2). The double loop you showed will be upper bounded by O(n^2), and this is the main point here, I think.
Function f(n)
    s = 0
    i = 1
    while i < 7n^(1/2) do
        j = i
        while j > 5 do
            s = s + i - j
            j = j - 2
        end
        i = 5i
    end
    return s
end f
I am trying to work out the big-theta running time of the code above. I have been looking all over the place for an example to help me, but everything I find uses for loops or only a single while loop. How would you go about this problem with nested while loops?
Let's break this down into two key points:
i starts from 1 and is multiplied by 5 each time, until it is greater than or equal to 7 sqrt(n). This is an exponential increase, so the number of steps is logarithmic. Thus we can rewrite the outer loop as the following equivalent:
m = floor(log(5, 7n^(1/2)))
k = 0
while k < m do
    j = 5^k
    // ... inner loop on j ...
    k = k + 1
end
For each iteration of the outer loop, j starts from i, and decreases in steps of 2, until it is less than or equal to 5. Note that in the first execution of the outer loop i = 1, and in the second i = 5, so the inner loop is not executed until the third iteration. The loop limit means that the final value of j is 7 if k is odd, and 6 if even (you can check this with pen and paper).
Combining the above steps: on the k-th outer iteration the inner loop does about (5^k - 5)/2 steps, so the total work is a geometric series dominated by its last term, which is proportional to 5^m ≈ 7 sqrt(n). The overall running time is therefore Θ(sqrt(n)).
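Here is a small simulation (my own sketch, not part of either answer) that counts the inner-loop steps of the pseudocode and prints them next to sqrt(n), so you can see the growth rate directly:

public class WhileLoopCount {
    // Illustrative harness; class and method names are my own.
    static long steps(long n) {
        long count = 0;
        for (long i = 1; i < 7 * Math.sqrt(n); i *= 5)   // outer loop: i = 1, 5, 25, ...
            for (long j = i; j > 5; j -= 2)              // inner loop: j = i, i-2, ..., 7
                count++;                                 // stands in for s = s + i - j
        return count;
    }

    public static void main(String[] args) {
        for (long n : new long[] {100, 10_000, 1_000_000, 100_000_000}) {
            System.out.printf("n=%d  steps=%d  sqrt(n)=%.0f%n", n, steps(n), Math.sqrt(n));
        }
    }
}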
The first loop runs while i < 7 * sqrt(n) (exponent 1/2 is the same as taking the square root).
The second loop only does work in m - 2 of the outer iterations, since the first two values of i are 1 and 5 respectively and do not pass the comparison j > 5.
i is multiplied by 5 on every pass.
Take an example where n = 16:
i = 1; n = 16;
while (i < 7 * 4)   // i *= 5 each pass
    // Do something
First value of i is 1 (outer iteration 1); the inner loop runs 0 times.
Second value of i is 5 (outer iteration 2); the inner loop runs 0 times.
Third value of i is 25 (outer iteration 3); the inner loop runs 10 times.
The fourth value of i would be 125, which fails the comparison, so the loop stops.
Multiplying the outer iteration count by the inner iteration count gives O(7 * sqrt(n) * (m - 2)).
IMO, this one is complex.
Can somebody help with the time complexity of the following code:
for (i = 0; i <= n; i++)
{
    for (j = 0; j <= i; j++)
    {
        for (k = 2; k <= n; k = k^2)
            print("")
    }
}
According to me, the first loop will run n times, the second will run (1 + 2 + 3 + ... + n) times, and the third about loglog n times, but I'm not sure about the answer.
We start from the inside and work out. Consider the innermost loop:
for (k = 2; k <= n; k = k^2)
    print("")
How many iterations of print("") are executed? First note that n is constant. What sequence of values does k assume?
iter | k
--------
1 | 2
2 | 4
3 | 16
4 | 256
We might find a formula for this in several ways. I used guess and prove to get iter = log(log(k)) + 1. Since the loop won't execute the next iteration if the value is already bigger than n, the total number of iterations executed for n is floor(log(log(n)) + 1). We can check this with a couple of values to make sure we got this right. For n = 2, we get one iteration which is correct. For n = 5, we get two. And so on.
The next level does i + 1 iterations, where i varies from 0 to n. We must therefore compute the sum 1 + 2 + ... + (n + 1), which gives the total number of iterations of the outermost and middle loops: this sum is (n + 1)(n + 2)/2. We must multiply this by the cost of the inner loop, giving (n + 1)(n + 2)(log(log(n)) + 1)/2 as the total cost of the snippet. The fastest-growing term in the expansion is n^2 log(log(n)), and so that is what would typically be given as the asymptotic complexity.
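As a quick empirical check (my own harness, not part of the answer), the following counts the print calls directly and compares the result with (n + 1)(n + 2)/2 multiplied by the inner-loop count:

public class TripleLoopCount {
    // Illustrative harness; the class name is my own. k = k^2 is read as squaring,
    // as in the answer above.
    public static void main(String[] args) {
        for (long n : new long[] {4, 16, 100, 1_000}) {
            long count = 0;
            for (long i = 0; i <= n; i++)
                for (long j = 0; j <= i; j++)
                    for (long k = 2; k <= n; k = k * k)
                        count++;                          // stands in for print("")
            long innerCount = 0;                          // floor(log(log(n))) + 1
            for (long k = 2; k <= n; k = k * k)
                innerCount++;
            long formula = (n + 1) * (n + 2) / 2 * innerCount;
            System.out.printf("n=%d  count=%d  formula=%d%n", n, count, formula);
        }
    }
}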
I am getting confused about how to analyze the time complexity of a nested while loop that splits into odd and even cases. Could anyone explain how to deal with this situation?
i = 1
while (i < n) {
    k = i
    while (k < n) {
        if (k % 2 == 1)
            k++
        else
            k = k + 0.01*n
    }
    i = i + 0.1*n
}
So in a problem like this, the factors 0.01 and 0.1 play a huge role.
First let's consider the inner while loop: if k is odd, we increment k by 1; if k is even, we increment k by one-hundredth of n. How many iterations can this inner while loop run?
Clearly, if all iterations were of type 1 (the odd case), the inner while loop would run n - k times; similarly, if all iterations were of type 2 (the even case), the inner while loop would run at most 100 times (as we increment the value of k by one-hundredth of n each time).
Given a value of k, the number of iterations of the inner while loop is at most
max(n - k, 100). From now on, we will assume the value of n - k is always greater than 100, without loss of generality.
Okay, how does the outer loop iterate? In each iteration of the outer loop, the value of i increases by one-tenth of n, so the outer while loop will run at most 10 times.
Making the running times explicit and calculating the overall running time:
Running time for the 1st iteration of the outer loop:  n - k
Running time for the 2nd iteration of the outer loop:  n - (k + 0.1*n)
Running time for the 3rd iteration of the outer loop:  n - (k + 0.2*n)
...
Running time for the 10th iteration of the outer loop: n - (k + 0.9*n)
Adding these up gives 10n - 10k - (0.1 + 0.2 + ... + 0.9)n = 10n - 10k - 4.5n.
Plugging in k = 1 (as this is the starting value of k):
10n - 10 - 4.5n = 5.5n - 10 = O(n)
Hence the complexity is O(n).