I'm having trouble understanding the time complexity of this pseudocode.
p = 10;
num = 0;
plimit = 100000;
for (i = p; i <= plimit; i++)
    for (j = 1; j <= i; j++)
        num = num + 1;
I think it runs in linear time, but I just wanted to confirm.
It's not linear time. The inner loop's cost grows as i increases on each outer iteration, so the total work is 1 + 2 + 3 + ... + n = n(n+1)/2, which is O(n^2).
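One way to convince yourself is to count the increments directly and compare against the closed form. Here is a minimal sketch in Java (I shrank plimit so it finishes quickly; the variable closedForm is mine, added for illustration):

// Count how many times num++ executes and compare with the triangular-number formula.
int p = 10, plimit = 10000;   // smaller limit than the original, just for speed
long num = 0;
for (int i = p; i <= plimit; i++)
    for (int j = 1; j <= i; j++)
        num++;
// Closed form: sum of i from p to plimit = plimit(plimit+1)/2 - (p-1)p/2
long closedForm = (long) plimit * (plimit + 1) / 2 - (long) (p - 1) * p / 2;
System.out.println(num == closedForm);   // prints true; both are 50004955

The count grows quadratically in plimit, which is exactly the O(n^2) behavior described above.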
I assume this code is an ideal representation of O(n^2) complexity, since it is a for loop nested inside another for loop:
for (int i = 0; i < array.length; i++)
    for (int j = 0; j < array.length; j++)
        System.out.println(array[i] + "," + array[j]);
I also read that the code below represents O(ab) time complexity. But why is that? I don't understand, because if (arrayA[i] < arrayB[j]) is constant, and we can ignore constants.
for (int i = 0; i < arrayA.length; i++)
    for (int j = 0; j < arrayB.length; j++)
        if (arrayA[i] < arrayB[j])
            System.out.println(arrayA[i] + "," + arrayB[j]);
This is also given as O(ab), although the loop for (int k = 0; k < 160800; k++) is likewise constant:
for (int i = 0; i < arrayA.length; i++)
    for (int j = 0; j < arrayB.length; j++)
        for (int k = 0; k < 160800; k++)
            System.out.println(arrayA[i] + "," + arrayB[j]);
Different sites give different information about this.
In the first case, each array is the same length (n), and n*n prints are done.
In the second, the sizes of the arrays are a & b, and a*b ifs are done, and (potentially) that many prints are done (maybe everything in A is less than everything in B).
In the third, the sizes of the arrays are a & b, and (a*b)*160800 prints are done, but the constant can be ignored.
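To see the a*b count concretely, here is a small sketch (the array sizes 300 and 500 are arbitrary values I picked):

int[] arrayA = new int[300];   // a = 300
int[] arrayB = new int[500];   // b = 500
long comparisons = 0;
for (int i = 0; i < arrayA.length; i++)
    for (int j = 0; j < arrayB.length; j++) {
        if (arrayA[i] < arrayB[j]) { /* the print would happen here */ }
        comparisons++;
    }
System.out.println(comparisons);   // 150000, which is exactly 300 * 500 = a*b

Every pair (i, j) is visited exactly once, so the comparison count is always a*b regardless of whether the if fires.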
I also think that the OP is missing an important point. It's not just that the first algorithm is O(n^2); it's that the first algorithm is O(n^2) where n is the length of the array. Though it is usually implicit, the variables we use in big-O notation must have some relationship to the inputs to the algorithm.
Likewise, the second algorithm is O(ab), where a=length(arrayA) and b=length(arrayB).
Regarding the if statement: if the condition is false, those two lines run in some small constant time. If it is true, they run in some slightly larger, but still small, constant time. The goal of big-O notation is to ignore constants and just see how the running time of the algorithm relates to the inputs. So a constant is a constant is a constant.
Likewise for the third program. The loop is run a constant number of times. Hence it takes a constant amount of time. A constant is a constant, even if it's large.
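To make that concrete, here is a sketch of the count for the third program (the sizes 20 and 30 are arbitrary stand-ins for the array lengths):

long prints = 0;
int a = 20, b = 30;   // stand-ins for arrayA.length and arrayB.length
for (int i = 0; i < a; i++)
    for (int j = 0; j < b; j++)
        for (int k = 0; k < 160800; k++)
            prints++;
System.out.println(prints);   // 96480000 = 20 * 30 * 160800

The count is a*b times the fixed factor 160800; since that factor does not depend on the input sizes, the complexity is still O(ab).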
I am having a problem identifying the time complexity of this nested-loop code. Some of my friends say O(n^3) and others say O(n^5).
sum = 0;
for (int i = 0; i < n; i++)
    for (int j = 0; j < i*i; j++)
        for (int k = 0; k < j; k++)
            sum++;
WolframAlpha gives the total count of increments to sum as
sum_(i=0)^(n-1) sum_(j=0)^(i^2 - 1) sum_(k=0)^(j-1) 1
= 1/20 (n - 2)(n - 1) n (n + 1)(2n - 1)
= n^5/10 - n^4/4 + n^2/4 - n/10
which is in Θ(n^5).
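You can verify the closed form empirically. Here is a small sketch that compares the loop count against the polynomial for several values of n (keeping n small so the runtime stays reasonable):

for (int n = 1; n <= 30; n++) {
    long sum = 0;
    for (int i = 0; i < n; i++)
        for (int j = 0; j < i * i; j++)
            for (int k = 0; k < j; k++)
                sum++;
    long formula = (long) (n - 2) * (n - 1) * n * (n + 1) * (2 * n - 1) / 20;
    System.out.println(n + ": " + sum + " == " + formula);   // the two columns agree
}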
I would say the time complexity is about N * (N*N)/2 * N/2. Combined, that would be O(N^4).
Edit: it's O(N^5), because the inner loop's bound j itself grows up to N^2 (it comes from the middle loop, which is already quadratic)!
But don't take my word for it. For these kinds of questions, why not run your code for a few different values of N and compare the sums? You will figure out the time complexity soon enough.
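For instance, a rough sketch of that experiment: if the count is Θ(n^5), doubling n should multiply the count by roughly 2^5 = 32 (the range of n is kept small here so it runs in about a second):

long prev = 0;
for (int n = 10; n <= 80; n *= 2) {
    long sum = 0;
    for (int i = 0; i < n; i++)
        for (int j = 0; j < i * i; j++)
            for (int k = 0; k < j; k++)
                sum++;
    if (prev > 0)
        System.out.println("n = " + n + ", ratio = " + (double) sum / prev);
    prev = sum;
}

The printed ratios drift down toward 32, consistent with Θ(n^5); they would approach 16 for Θ(n^4) and 8 for Θ(n^3).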
Question 1
for (i = 0; i < n; i++) {
    for (j = 0; j < i * i; j++) {
    }
}
Answer: O(n^3)
At first glance, O(n^3) made sense to me, but I remember a previous problem I did:
Question 2
for (int i = n; i > 0; i /= 2) {
    for (int j = 0; j < i; j++) {
        //statement
    }
}
Answer: O(n)
For Question 2, the outer loop runs O(log n) times and the inner loop averages O(2n / log n) iterations, resulting in O(n). For why the inner loop is O(2n / log n), see the explanation here: Big O of Nested Loop (int j = 0; j < i; j++)
Why don't we treat Question 1 like Question 2? In Question 1, j also depends on i, which means we should really take the average number of iterations of the inner loop (as we do in Question 2).
My answer would be: O(n) for the outer loop and O(n^2 / n) for the inner loop, which results in O(n^2) for Question 1.
Your answer is wrong. The code is Θ(n³).
To see that, note that the inner loop takes i² steps, which is at most n², but for half of the outer-loop iterations it is at least (n/2)² = n²/4.
Therefore the total number of inner iterations is at most n * n² = n³, but at least (n/2) * (n²/4) = n³/8.
Your reasoning is wrong in that the inner loop takes, on average, a number of iterations proportional to n², not n²/n.
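A quick empirical check of those bounds (n = 1000 is an arbitrary test size):

int n = 1000;
long iterations = 0;
for (int i = 0; i < n; i++)
    for (int j = 0; j < i * i; j++)
        iterations++;
long cube = (long) n * n * n;
System.out.println(cube / 8 + " <= " + iterations + " <= " + cube);   // both bounds hold

The measured count lands at roughly n³/3, squarely between n³/8 and n³, so the total is Θ(n³).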
What your inner for loop is doing, in combination with the outer for loop, is calculating the sum of i^2. If you write it out, you are adding the following terms:
1 + 4 + 9 + 16 + ...
The result of that is (2n^3+3n^2+n)/6. If you want to calculate the average number of iterations of the inner for loop, you divide by n, the number of iterations of the outer for loop. That gives (2n^2+3n+1)/6, which in big-O terms is O(n^2). And having that gives you... nothing. You have not gained any new information, since you already knew the complexity of the inner for loop is O(n^2). Having O(n^2) run n times gives you O(n^3) total complexity, which you also already knew.
So, you can calculate the average number of iterations of the inner for loop, but you will not gain any new information. There were no cuts in the number of iteration steps as there were in your previous question (the i /= 2 stuff).
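If you want to see those numbers anyway, here is a short sketch (it sums i^2 directly rather than looping, since only the count matters; n = 2000 is arbitrary):

int n = 2000;
long total = 0;
for (int i = 1; i <= n; i++)
    total += (long) i * i;          // iterations the inner loop would perform
double average = (double) total / n;
System.out.println(average / ((long) n * n));   // about 1/3: the average is Θ(n^2)
System.out.println(total == (long) n * (n + 1) * (2L * n + 1) / 6);   // true: matches (2n^3+3n^2+n)/6

The average is about n²/3, so multiplying back by the n outer iterations just returns the Θ(n³) total.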
void fun(int n, int k)
{
    for (int i = 1; i <= n; i++)
    {
        int p = pow(i, k);
        for (int j = 1; j <= p; j++)
        {
            // Some O(1) work
        }
    }
}
The time complexity of the above function can be written as 1^k + 2^k + 3^k + ... + n^k.
In your case, k = 2:
Sum = 1^2 + 2^2 + 3^2 + ... + n^2
    = n(n+1)(2n+1)/6
    = n^3/3 + n^2/2 + n/6
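As a sanity check, here is a sketch that counts the inner-loop iterations for k = 2 and compares against n(n+1)(2n+1)/6 (the counter variable is mine, added for illustration):

int n = 1000, k = 2;
long count = 0;
for (int i = 1; i <= n; i++) {
    long p = (long) Math.pow(i, k);   // p = i^k, exact for these magnitudes
    for (long j = 1; j <= p; j++)
        count++;                       // stands in for the O(1) work
}
System.out.println(count);                                    // 333833500
System.out.println((long) n * (n + 1) * (2 * n + 1) / 6);     // 333833500, matches

Since the dominant term is n^3/3, the function is Θ(n^3) for k = 2, and Θ(n^(k+1)) in general.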
I was going through some practice problems on this page. The question asks for the time complexity of the code below, and the answer is O(n). However, as per my understanding, the outer loop runs log(n) times and the inner one O(n) times, so the complexity should be O(n*log(n)).
int count = 0;
for (int i = N; i > 0; i /= 2) {
    for (int j = 0; j < i; j++) {
        count += 1;
    }
}
Please clarify what am I missing here.
The inner statement runs N + N/2 + N/4 + N/8 + ... times, which sums to at most 2*N = O(N). The outer loop does run log(N) times, but the inner loop's bound shrinks geometrically, so it does not run N times on each pass.
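A quick way to see it (choosing N as a power of two makes the halving exact):

int N = 1 << 20;         // N = 1048576
long count = 0;
for (int i = N; i > 0; i /= 2)
    for (int j = 0; j < i; j++)
        count++;
System.out.println(count);   // 2097151 = 2*N - 1, just under 2*N

The geometric series N + N/2 + ... + 1 sums to 2N - 1, which is O(N).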
I'm trying to calculate the complexity of some program fragments, but I'm worried I'm making things too simple. If I put my fragments and answers down, can you tell me if I'm doing anything wrong?
(a)
sum = 0;
for (i = 0; i < n; i++)
    sum++;
ANSWER: n, only one for loop
(b)
sum = 0;
for (i = 0; i < n; i++)
    for (k = 0; k < n*n; k++)
        sum++;
ANSWER: n^2 because of the nested loop, although I wonder if the n*n in the nested loop makes it n^3
(c)
sum = 0;
for (i = 0; i < n; i++)
    for (k = 0; k < i; k++)
        sum++;
ANSWER: n^2
(d)
sum = 0;
for (i = 0; i < n; i++)
    for (k = 0; k < i*i; k++)
        sum++;
ANSWER: n^2, but I have the same concern as b
(e)
sum = 0;
for (i = 0; i < n; i++)
    for (k = i; k < n; k++)
        sum++;
ANSWER: n^2
Since in all your examples the main operation is sum++, we need to count the number of times this basic operation is performed.
Also, in all cases there is the i++ operation, which also counts, as does k++. Finally, these counters have to be compared with their limits at every step, and we should take these comparisons into account as well. Now, these additional operations don't change the number of iterations; they simply make each iteration more expensive. For instance,
(a)
sum = 0;
for (i = 0; i < n; i++)
    sum++;
repeats n times: i++, sum++ and i<n, which gives 3n operations of similar complexity. This is why the total complexity is O(n).
Once this has been understood, it is no longer necessary to analyze the complexity in as much detail, because the big-O notation will take care of these additional calculations.
The second example
sum = 0;
for (i = 0; i < n; i++)
    for (k = 0; k < n*n; k++)
        sum++;
repeats n times the operation
for (k = 0; k < n*n; k++)
    sum++;
Because of the previous case, this operation has complexity O(n*n), since here the limit is n*n rather than n. Thus the total complexity is O(n*n*n).
The third example is similar, except that this time the operation being executed n times is
for (k = 0; k < i; k++)
    sum++;
which has a complexity that changes with i. Therefore, instead of multiplying by n we have to sum n different things:
O(1) + O(2) + ... + O(n)
and since the constant factor implicit in the O is always the same (= number of variables being increased or compared at every elementary step), we can rewrite it as
O(1 + 2 + ... + n) = O(n(n+1)/2) = O(n*n)
The other examples are similar in that they can be analyzed following these very same ideas.
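For instance, here is a hedged sketch that counts sum++ for fragments (d) and (e) (n = 500 is an arbitrary test size):

int n = 500;
long d = 0, e = 0;
for (int i = 0; i < n; i++)
    for (int k = 0; k < i * i; k++)
        d++;              // fragment (d): the limit i*i grows with i
for (int i = 0; i < n; i++)
    for (int k = i; k < n; k++)
        e++;              // fragment (e): the inner loop shrinks as i grows
System.out.println(d + " vs n^3/3 = " + (long) n * n * n / 3);   // (d) is O(n*n*n), not n^2
System.out.println(e + " vs n^2/2 = " + (long) n * n / 2);       // (e) is O(n*n), as you said

Summing i*i over the outer loop gives roughly n*n*n/3, so (d) follows the same pattern as (b), while the triangular count in (e) confirms your O(n^2) answer.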