Am I Oversimplifying Calculating Complexity - time-complexity

I'm just trying to calculate the complexity of some program fragments, but I'm worried I'm making things too simple. If I put my fragments and answers down, can you tell me if I'm doing anything wrong?
(a)
sum = 0;
for (i = 0; i < n; i++)
sum++;
ANSWER: n, only one for loop
(b)
sum = 0;
for (i = 0; i < n; i++)
for (k = 0; k < n*n; k++)
sum++;
ANSWER: n^2 because of the nested loop, although I wonder if the n*n in the nested loop makes it n^3
(c)
sum = 0;
for (i = 0; i < n; i++)
for (k = 0; k < i; k++)
sum++;
ANSWER: n^2
(d)
sum = 0;
for (i = 0; i < n; i++)
for (k = 0; k < i*i; k++)
sum++;
ANSWER: n^2, but I have the same concern as in (b)
(e)
sum = 0;
for (i = 0; i < n; i++)
for (k = i; k < n; k++)
sum++;
ANSWER: n^2

Since in all your examples the main operation is sum++, what we need to count is the number of times this basic operation is performed.
In all cases there is also the i++ operation, which counts too, as does k++. Finally, these counters have to be compared with their limits at every step, and we should take those comparisons into account as well. Now, these additional operations don't change the number of iterations; they simply make each iteration more expensive. For instance,
(a)
sum = 0;
for (i = 0; i < n; i++)
sum++;
repeats i++, sum++ and the comparison i < n a total of n times each, which gives 3n operations of similar cost. This is why the total complexity is O(n).
Once this has been understood, it is no longer necessary to analyze the complexity in as much detail, because the big-O notation absorbs these additional constant-factor costs.
The second example
sum = 0;
for (i = 0; i < n; i++)
for (k = 0; k < n*n; k++)
sum++;
repeats n times the operation
for (k = 0; k < n*n; k++)
sum++;
By the same reasoning as in the previous case, this operation has complexity O(n*n), since here the limit is n*n rather than n. Thus the total complexity is O(n*n*n) = O(n^3).
The third example is similar, except that this time the operation being executed n times is
for (k = 0; k < i; k++)
sum++;
which has a complexity that changes with i. Therefore, instead of multiplying by n we have to sum n different things:
O(1) + O(2) + ... + O(n)
and since the constant factor implicit in the O is always the same (= number of variables being increased or compared at every elementary step), we can rewrite it as
O(1 + 2 + ... + n) = O(n(n+1)/2) = O(n*n)
The other examples are similar in that they can be analyzed following these very same ideas.
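If you want to double-check these counts empirically, here is a minimal Java harness (my own sketch, not part of the original answer; the class name and test sizes are arbitrary) that tallies how many times sum++ runs in each of the five fragments. The closed forms in the comments follow from the sums derived above.

public class CountOperations {
    public static void main(String[] args) {
        for (int n : new int[]{10, 100, 500}) {
            long a = 0, b = 0, c = 0, d = 0, e = 0;
            for (int i = 0; i < n; i++)
                a++;                                   // (a): n
            for (int i = 0; i < n; i++)
                for (long k = 0; k < (long) n * n; k++)
                    b++;                               // (b): n^3
            for (int i = 0; i < n; i++)
                for (int k = 0; k < i; k++)
                    c++;                               // (c): n(n-1)/2
            for (int i = 0; i < n; i++)
                for (long k = 0; k < (long) i * i; k++)
                    d++;                               // (d): (n-1)n(2n-1)/6
            for (int i = 0; i < n; i++)
                for (int k = i; k < n; k++)
                    e++;                               // (e): n(n+1)/2
            System.out.printf("n=%3d a=%d b=%d c=%d d=%d e=%d%n", n, a, b, c, d, e);
        }
    }
}

The counts for (a) grow linearly, (c) and (e) quadratically, and (b) and (d) cubically, confirming the analysis.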

Related

Difference between time complexity O(ab) and O(N^2)

I assume that this code is an ideal representation of O(n^2) complexity, the reason being a for loop nested inside another for loop:
for (int i = 0; i < array.length; i++)
for (int j = 0; j < array.length; j++)
System.out.println(array[i] + "," + array[j]);
Also, I read that the code below represents O(ab) time complexity. But why is that? I don't understand, because if (arrayA[i] < arrayB[j]) is constant and we can ignore it.
for (int i = 0; i < arrayA.length; i++)
for (int j = 0; j < arrayB.length; j++)
if (arrayA[i] < arrayB[j])
System.out.println(arrayA[i] + "," + arrayB[j]);
This is also mentioned as O(ab), although for (int k = 0; k < 160800; k++) is also a constant:
for (int i = 0; i < arrayA.length; i++)
for (int j = 0; j < arrayB.length; j++)
for (int k = 0; k < 160800; k++)
System.out.println(arrayA[i] + "," + arrayB[j]);
Different sites write different information about it.
In the first case, each array is the same length (n), and n*n prints are done.
In the second, the sizes of the arrays are a & b, and a*b ifs are done, and (potentially) that many prints are done (maybe everything in A is less than everything in B).
In the third, the sizes of the arrays are a & b, and (a*b)*160800 prints are done, but the constant can be ignored.
I also think that the OP is missing an important point. It's not just that the first algorithm is O(n^2); it's that the first algorithm is O(n^2) where n is the length of the array. Though it is usually implicit, the variables we use in big-O notation must have some relationship to the inputs of the algorithm.
Likewise, the second algorithm is O(ab), where a=length(arrayA) and b=length(arrayB).
Regarding the if statement: if it is false, then those two lines run in some small constant time. If it is true, then they run in some slightly larger, but still constant, time. The goal of big-O notation is to ignore constants and just see how the running time of the algorithm relates to its inputs. So a constant is a constant is a constant.
Likewise for the third program. The loop is run a constant number of times. Hence it takes a constant amount of time. A constant is a constant, even if it's large.
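To see the a*b count directly, here is a hedged Java sketch (the class name ABDemo and the random test data are my own choices, not from the answer). It counts one unit of work per (i, j) pair; the if only changes what each iteration does, never how many iterations there are.

import java.util.Random;

public class ABDemo {
    public static void main(String[] args) {
        Random rnd = new Random(42);
        int[][] sizes = {{10, 20}, {100, 50}, {300, 700}};
        for (int[] s : sizes) {
            int a = s[0], b = s[1];
            int[] arrayA = rnd.ints(a, 0, 1000).toArray();
            int[] arrayB = rnd.ints(b, 0, 1000).toArray();
            long work = 0;
            for (int i = 0; i < arrayA.length; i++)
                for (int j = 0; j < arrayB.length; j++) {
                    work++;                        // one constant-time body per pair
                    if (arrayA[i] < arrayB[j]) {
                        // the print would add at most another constant per pair;
                        // omitted here to keep the output readable
                    }
                }
            System.out.printf("a=%d b=%d work=%d (a*b=%d)%n", a, b, work, (long) a * b);
        }
    }
}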

Will this code have Time Complexity O(n^5) or O(n^3) or anything else?

I am having a problem identifying the time complexity of this nested-loop code. Some of my friends say O(n^3) and some say O(n^5).
sum = 0;
for(int i=0; i<n; i++)
for(int j=0; j<i*i; j++)
for(int k=0; k<j; k++)
sum++;
WolframAlpha gives the total count of increments to sum as
sum_(i=0)^(n-1) sum_(j=0)^(i^2-1) sum_(k=0)^(j-1) 1
= (1/20) (n-2)(n-1) n (n+1)(2n-1)
= n^5/10 - n^4/4 + n^2/4 - n/10
which is in Θ(n^5).
I would say the time complexity is about N * (N*N)/2 * N/2; combined that would be O(N^4).
Edit: it's O(N^5), because the inner loop's limit is itself squared by the middle loop: the bound j runs up to i*i, not just i.
But don't take my word for it. For these kinds of questions, why not run your code with a few different values of N and compare the sums? You will figure out the time complexity soon enough.
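Following that suggestion, here is a minimal Java sketch (mine; the class name and the chosen values of n are arbitrary) that runs the fragment for growing n and prints sum/n^5, which should settle near the 1/10 from the closed form above.

public class TripleLoop {
    public static void main(String[] args) {
        for (int n : new int[]{10, 20, 40, 80}) {
            long sum = 0;
            for (int i = 0; i < n; i++)
                for (long j = 0; j < (long) i * i; j++)
                    for (long k = 0; k < j; k++)
                        sum++;
            // sum/n^5 should approach 0.1 as n grows
            System.out.printf("n=%2d sum=%12d sum/n^5=%.4f%n", n, sum, sum / Math.pow(n, 5));
        }
    }
}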

Why does the following code have complexity O(n)?

I was going through some practice problems on this page. The question asks for the time complexity of the code below, and the answer given is O(n). However, as per my understanding the outer loop runs log(n) times and the inner one O(n) times, so it should have complexity O(n*log(n)).
int count = 0;
for (int i = N; i > 0; i /= 2) {
for (int j = 0; j < i; j++) {
count += 1;
}
}
Please clarify what I am missing here.
The inner statement is run N + N/2 + N/4 + N/8 + ... times, which is at most 2*N = O(N).
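A minimal Java sketch of that claim (my own; the names and test sizes are arbitrary): count the inner-loop executions and compare against the 2*N bound.

public class HalvingLoop {
    public static void main(String[] args) {
        for (int N : new int[]{1000, 100000, 10000000}) {
            long count = 0;
            for (int i = N; i > 0; i /= 2)
                for (int j = 0; j < i; j++)
                    count++;
            // count/N stays below 2, matching the geometric series bound
            System.out.printf("N=%9d count=%9d count/N=%.3f%n", N, count, (double) count / N);
        }
    }
}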

What is the runtime complexity of this pseudocode?

I'm having trouble understanding the time complexity of pseudocode.
p=10;
num=0;
plimit=100000;
for (i = p; i <= plimit; i++)
for (j = 1; j <= i; j++)
num = num + 1;
I think it will be linear time, but I just wanted to confirm.
It's not linear time. The inner loop's cost grows as i increments on each outer iteration, so the total work is 1 + 2 + 3 + ... + n, which gives you O(n^2) because 1 + 2 + ... + n = (n+1)*(n/2).
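To confirm, here is a small Java sketch (mine, with arbitrary limits): it runs the pseudocode and compares num against plimit^2/2, the dominant term of the triangular sum.

public class TriangularLoop {
    public static void main(String[] args) {
        int p = 10;
        for (int plimit : new int[]{1000, 10000, 50000}) {
            long num = 0;
            for (int i = p; i <= plimit; i++)
                for (int j = 1; j <= i; j++)
                    num++;
            // num/plimit^2 should approach 0.5
            System.out.printf("plimit=%6d num=%12d num/plimit^2=%.3f%n",
                    plimit, num, num / ((double) plimit * plimit));
        }
    }
}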

Calculate the time complexity of the following function

How do I calculate the time complexity of the following function?
int Compute (int n)
{
int j = 0;
int i = 0;
while (i<=n)
{
i = 2*j + i + 1;
j++;
}
return j-1;
}
Now, I know that a simple loop has O(n) time complexity, but in this case i grows at a much faster rate. Taking this iteration by iteration, I found out that after the m-th iteration i = m^2. But I'm still confused about how to calculate the big-O.
If you look at the values of i and j for a few iterations:
i=1, j=1
i=4, j=2
i=9, j=3
i=16, j=4
and so on. By mathematical induction we can prove that i takes square values: if i = m^2 and j = m after m iterations, then the next iteration gives i = 2*j + i + 1 = 2*m + m^2 + 1 = (m+1)^2.
Since we loop only while i <= n, and since i takes the values 1^2, 2^2, 3^2, ..., k^2 <= n, we stop after the k-th iteration, when k goes past sqrt(n). Hence the complexity is O(k), which means O(sqrt(n)).
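A quick empirical check (my own sketch; the class name and test values are arbitrary): count the while-loop iterations of Compute(n) and compare with sqrt(n).

public class SqrtLoop {
    // same body as the Compute function from the question
    static int compute(int n) {
        int j = 0, i = 0;
        while (i <= n) {
            i = 2 * j + i + 1;   // after the m-th pass, i = m^2
            j++;
        }
        return j - 1;
    }

    public static void main(String[] args) {
        for (int n : new int[]{100, 10000, 1000000}) {
            int iterations = compute(n) + 1;   // the loop ran j times; compute returns j-1
            System.out.printf("n=%8d iterations=%d sqrt(n)=%.1f%n",
                    n, iterations, Math.sqrt(n));
        }
    }
}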