Time complexity: simple loop calculation

I have a simple loop like this:
for (int i = 0; i < n; i++) {
    // constant time operation
}
It’s very easy to see that it’s of O(n) time complexity, but if we calculate it, why is it 2*n + 2 + c*n (given answer) and not (1+ (n+1) + 2*n + c*n) = (3+c)*n + 2? I see i++ as 2 operations: addition and assignment; thus, it should be 2*n, and the constant operation is executed n times, so it’s c*n.

2*n == n comparisons (i < n) + n increments (i++)
2   == 1 for the assignment i = 0 + 1 for the allocation of i
c*n == n constant-time operations

From what I can tell, increment/decrement is treated as a single operation. This makes sense because in assembly, you can perform an increment or decrement with a single line of assembly code. Furthermore, most lines of assembly code translate directly to a single binary instruction. Thus, increment/decrement is effectively a constant time operation.
Therefore, we have n operations from incrementing. We also run the body of the loop n times, and the body performs a constant-time operation, so we have an additional c * n operations. When we enter the loop for the first time, there is one additional assignment operation (i = 0). Finally, after the body runs for the nth time, the loop checks its condition one more time, so the loop performs n + 1 comparisons.
Adding these up, we have n + c * n + 1 + (n + 1) = 2 * n + c * n + 2, which is the answer you saw.
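If it helps to see the tally concretely, here is a minimal sketch (my own, not from the question; the counter and the choice c = 1 are illustrative assumptions) that counts each assignment, comparison, increment, and body execution, and compares the total with 2*n + c*n + 2:
public class LoopOps {
    public static void main(String[] args) {
        int n = 10;
        int c = 1; // assume the body costs one "operation" for this tally
        long ops = 0;

        ops++;                  // i = 0: one assignment
        for (int i = 0; i < n; i++) {
            ops++;              // the comparison i < n that let us in
            ops += c;           // the constant-time body
            ops++;              // the increment i++
        }
        ops++;                  // the final comparison i < n that exits the loop

        System.out.println(ops);                       // 32
        System.out.println(2L * n + (long) c * n + 2); // 32
    }
}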

Related

How to calculate time complexity of this code?

char[] chars = String.valueOf(num).toCharArray();
int n = chars.length;
for (int i = n - 1; i > 0; i--) {
    if (chars[i - 1] > chars[i]) {
        chars[i - 1]--;
        Arrays.fill(chars, i, n, '9');
    }
}
return Integer.parseInt(new String(chars));
What is the time complexity of this code? Could you teach me how to calculate it? Thank you!
Time complexity is a measure of how a program's run-time changes as input size grows. The first thing you have to determine is what (if any) aspect of your input can cause run-time to vary, and to represent that aspect as a variable. Framing run-time as a function of that variable (or those variables) is how you determine the program's time complexity. Conventionally, when only a single aspect of the input causes run-time to vary, we represent that aspect by the variable n.
Your program primarily varies in run-time based on how many times the for-loop runs. We can see that the for-loop runs the length of chars times (note that the length of chars is the number of digits of num). Conveniently, that length is already denoted as n, which we will use as the variable representing input size. That is, taking n to be the number of digits in num, we want to determine exactly how run-time varies with n by expressing run-time as a function of n.
Also note that when doing complexity analysis, you are primarily concerned with the growth in run-time as n gets arbitrarily large (how run-time scales with n as n goes to infinity), so you typically ignore constant factors and focus only on the highest-order terms, writing run-time in "Big O" notation. That is, because 3n, n, and n/2 all grow in about the same way as n goes to infinity, we represent them all as O(n). The primary goal is to distinguish this kind of linear growth from the quadratic O(n^2) growth of n^2, 5n^2 + 10, or n^2 + n; from the logarithmic O(log n) growth of log(n), log(2n), or log(n) + 1; and from constant O(1) time (time that doesn't scale with n), represented by 1, 5, 100000, etc.
So, let's try to express the total number of operations the program does in terms of n. It can be helpful at first to just go line-by-line for this and to add everything up in the end.
The first line, turning num into an n-character string and then turning that string into a length-n array of chars, does O(n) work (n work to turn each of the n digits into a character, then another n work to put each of those characters into an array; n + n = 2n = O(n) total work).
char[] chars = String.valueOf(num).toCharArray(); // O(n)
The next line just reads the length value from the array and saves it as n. This operation takes the same amount of time no matter how long the array is, so it is O(1).
int n = chars.length; // O(1)
For every digit of num, our program runs one iteration of the for-loop, so O(n) loop iterations in total:
for (int i = n - 1; i > 0; i--) { // O(n)
Inside the for loop, a conditional check is performed, and then, if it returns true, a value may be decremented and the array from i to n filled.
if (chars[i - 1] > chars[i]) { // O(1)
    chars[i - 1]--; // O(1)
    Arrays.fill(chars, i, n, '9'); // O(n-i) = O(n)
}
The fill operation is O(n-i) because that is how many characters may be changed to '9'. Since n - i is at most n (the worst case is i = 1, when almost the whole array is filled), each fill is O(n).
Finally, you parse the n characters of chars as an int, and return it. Altogether:
static Integer foo(int num) {
    char[] chars = String.valueOf(num).toCharArray(); // O(n)
    int n = chars.length; // O(1)
    for (int i = n - 1; i > 0; i--) { // O(n)
        if (chars[i - 1] > chars[i]) { // O(1)
            chars[i - 1]--; // O(1)
            Arrays.fill(chars, i, n, '9'); // O(n)
        }
    }
    return Integer.parseInt(new String(chars)); // O(n)
}
When we add everything up, we get the total time complexity as a function of n, T(n):
T(n) = O(n) + O(1) + O(n)*(O(1) + O(1) + O(n)) + O(n)
There is a product in the expression to represent the total work done across all iterations of the for-loop: O(n) iterations times O(1) + O(1) + O(n) work in each iteration. In reality, in some iterations the for-loop might only do O(1) work (when the condition is false), but in the worst case the whole body is executed in every iteration, and complexity analysis is typically done for the worst case unless otherwise specified.
You can simplify this function for run-time by using the fact that big O strips constants and lower-order terms, along with the facts that O(a) + O(b) = O(a + b) and a*O(b) = O(a*b).
T(n) = O(n+1+n) + O(n)*O(1 + 1 + n)
= O(2n+1) + O(n)*O(n+2)
= O(n) + O(n)*O(n)
= O(n) + O(n^2)
= O(n^2 + n)
= O(n^2)
So you would say that the overall time complexity of the program is O(n^2), meaning that run-time scales quadratically with input size in the worst case.
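To see the quadratic worst case concretely, here is a minimal sketch (my own; the counter, the class and method names, and the choice of input are illustrative assumptions, not part of the original answer) that counts the characters written by Arrays.fill when the digits of num strictly decrease, so the condition is true in every iteration. The count grows as 1 + 2 + ... + (n-1) = n(n-1)/2, which is O(n^2).
import java.util.Arrays;

public class WorstCaseFill {
    // Counts how many characters Arrays.fill writes over the whole run.
    static long filledChars(int num) {
        char[] chars = String.valueOf(num).toCharArray();
        int n = chars.length;
        long work = 0;
        for (int i = n - 1; i > 0; i--) {
            if (chars[i - 1] > chars[i]) {
                chars[i - 1]--;
                Arrays.fill(chars, i, n, '9');
                work += n - i; // characters touched by this fill
            }
        }
        return work;
    }

    public static void main(String[] args) {
        // 987654321 has 9 strictly decreasing digits, so every check is true:
        // the fills touch 1 + 2 + ... + 8 = 36 characters in total.
        System.out.println(filledChars(987654321)); // 36
    }
}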

What is the time complexity of this nested for loop?

What would the time complexity be of these lines of code?
Begin
    sum = 0
    For i = 1 to n do
        For j = i to n do
            sum = sum + 1
End
I want to say O(n) = 2n^2 + 1 but I'm unsure because of the i=j part.
Your answer is correct!
A nested loop instinctively leads you to the right answer, which is O(n^2). In doing Big-O analysis, it usually doesn't matter how precise you are, i.e. saying the time complexity is O(2n^2) or O(3n^2 + 1) -- saying O(n^2) is enough, since that is the dominating term of the function.
The j = i starting condition simply makes it so that there are...
i=1: n operations
i=2: (n-1) operations
...
i=n: 1 operation
So, the sum of all operations you do is 1 + 2 + ... + n = n(n+1)/2, which is O(n^2).
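If you want to check the n(n+1)/2 count empirically, here is a minimal sketch (a hypothetical Java rendering of the pseudocode, not part of the original answer) that counts how often sum = sum + 1 runs:
public class TriangularCount {
    public static void main(String[] args) {
        int n = 100;
        long count = 0;
        for (int i = 1; i <= n; i++) {
            for (int j = i; j <= n; j++) {
                count++; // stands in for sum = sum + 1
            }
        }
        System.out.println(count);                  // 5050
        System.out.println((long) n * (n + 1) / 2); // 5050
    }
}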

What is the time complexity of the given code

I want to know the time complexity of the code attached.
I get O(n^2logn), while my friends get O(nlogn) and O(n^2).
SomeMethod() = log n
Here is the code:
j = i**2;
for (k = 0; k < j; k++) {
    for (p = 0; p < j; p++) {
        x += p;
    }
    someMethod();
}
The question is not very clear about the variable N and the statement i**2.
i**2 gives a compilation error in Java.
Assuming someMethod() takes log N time (as mentioned in the question), and leaving the actual value of N aside,
let's call i**2 Z.
someMethod() runs Z times, and the time complexity of the method is log N, so that becomes:
Z * log N ----------------------------------------- A
Let's call this expression A.
Now, x += p runs Z^2 times (the k loop times the p loop) and takes constant time to run. That makes the following expression:
( Z^2 ) * 1 = ( Z^2 ) ---------------------- B
Let's call this expression B.
The total run time is the sum of expression A and expression B, which brings us to:
O((Z * log N) + (Z^2))
where Z = i**2,
so the final expression will be O(((i**2) * log N) + ((i**2)^2)).
If we assume i**2 means i^2, the expression becomes
O(((i^2) * log N) + (i^4))
Considering only the highest-order terms, just as we keep n^2 in n^2 + 2n + 5, the complexity can be expressed as follows:
O(i^4)
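Here is a minimal sketch (a hypothetical Java rendering of the pseudocode; the counters and the empty someMethod body are my assumptions) that confirms the counts behind A and B for a fixed i: the inner statement runs Z^2 = i^4 times and someMethod() is called Z = i^2 times.
public class CountCalls {
    static void someMethod() { /* stands in for the O(log N) call */ }

    public static void main(String[] args) {
        int i = 7;
        long j = (long) i * i; // j = i**2, written i * i in Java
        long inner = 0, calls = 0;
        for (long k = 0; k < j; k++) {
            for (long p = 0; p < j; p++) {
                inner++;       // stands in for x += p
            }
            someMethod();
            calls++;
        }
        System.out.println(inner); // 2401 == 7^4
        System.out.println(calls); // 49   == 7^2
    }
}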
Based on the picture, the complexity is O(I^2 * log N + I^4).
We cannot give a complexity class with one variable because the picture does not explain the relationship between N and I. They must be treated as separate variables.
And likewise, we cannot eliminate the I^2 * log N term, because the N variable will dominate the I variable in some regions of the N x I space.
If we treat N as a constant, then the complexity class reduces to O(I^4).
We get the same if we treat N as being the same thing as I; i.e. there is a typo in the question.
(I think there is a mistake in the way the question was set / phrased. If not, this is a trick question designed to see if you really understood the mathematical principles behind complexity involving multiple independent variables.)

What is the time complexity of a nested for loop that iterates n - 1 - i times?

So if I have a loop like this?
int x, y, z;
for (int i = 0; i < n - 1; i++) {
    for (int j = 0; j < n - 1 - i; j++) {
        x = 1;
        y = 2;
        z = 3;
    }
}
so we start with the x, y, z definition so we have 4 operations there,
int i = 0 occurs once, i < n - 1 and i++ iterate n - 1 times, int j = 0, iterates n - 1 times and j < n - 1 - i and j++ iterates (n - 1) * (n - 1 - i) and xyz = 1 would iterate (n - 1) * (n - 1 - i) as well. So if I were to simplify this, would the above code run at O(n^2)?
so we start with the x, y, z definition so we have 4 operations there
This is not necessary; we need only count the critical operations (i.e., in this case, how often the loop body executes).
So if I were to simplify this, would the above code run at O(n²)?
A function T(n) is in O(g(n)) if T(n) <= c*g(n) (under the assumption n >= n0) for some constants c > 0, n0 > 0.
So for your code, the loop body is executed n - 1 - i times for each i from 0 to n - 2. Summing over all iterations of the outer loop, we have:
T(n) = (n - 1) + (n - 2) + ... + 1 = n(n - 1)/2 <= (1/2) * n²
Which is indeed true for c = 1/2, n0 = 1. Therefore T(n) ∈ O(n²).
You are correct that the complexity is O(n^2). There is more than one way to approach the question of why.
The formal way is to count the number of iterations of the inner loop, which will be n-1 the first time, then n-2, then n-3, ... all the way down to 1, giving a total of n*(n-1)/2 iterations, which is O(n^2).
An informal way is to say the outer loop runs O(n) times, and "on average", i is roughly n/2, so the inner loop runs on average about (n - n/2) = n/2 times, which is also O(n). So the total number of iterations is O(n) * O(n) = O(n^2).
With both of these techniques, it's not enough to just say that the loop body iterates O(n^2) times - we also need to check the complexity of the inner loop body. In this code, the body of the inner loop just does a few assignments, so it has a complexity of O(1). This means the overall complexity of the code is O(n^2) * O(1) = O(n^2). If instead the inner loop body did e.g. a binary search over an array of length n, then that would be O(log n) and the overall complexity of the code would be O(n^2 log n), for example.
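As a quick sanity check (a sketch of my own, not part of either answer), counting the inner-loop iterations directly and comparing with n(n-1)/2:
public class NestedCount {
    public static void main(String[] args) {
        int n = 100;
        long iterations = 0;
        for (int i = 0; i < n - 1; i++) {
            for (int j = 0; j < n - 1 - i; j++) {
                iterations++; // one execution of the O(1) body
            }
        }
        System.out.println(iterations);             // 4950
        System.out.println((long) n * (n - 1) / 2); // 4950
    }
}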
Yes, you are right. The time complexity of this program will be O(n^2) in the worst case.

How do I find the number of times x = x + 1 is executed in terms of N?

I'm also having trouble finding Omega() and Theta(), as appropriate.
x = 0;
for k = 1 to n
    for j = 1 to n - k
        x = x + 1;
The inner loop executes (n-1) + (n-2) + (n-3) + ... + 1 + 0 times. Use the formula for the sum of an arithmetic series to find the solution. The outer loop is obviously just n.
This will be the big-Theta. The big-O will be the same as the big-Theta once you keep only the highest-order term and remove its multiplier, e.g. Theta(2*log(n) + 5) becomes O(log(n)). Omega is the same as big-O in this case, because the best case and worst case are identical; or you can cheat and say that big-Omega is constant time, because every function is trivially Omega(1).
First, look at your boundaries. k=1 and k=n.
For k=1, the inside loop is executed (n-1) times.
For k=n the inside loop is executed (0) times.
So, 0 + 1 + ... + (n-1) is an arithmetic sum => (n-1)(n)/2 times.
Now, test it on a few small values :)
The answer is like this:
(n-1) + (n-2) + (n-3) + ... + 0 = n*n - (1 + 2 + 3 + ... + n) = n^2 - n(n+1)/2 = n(n-1)/2
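As a quick check (my own sketch, not part of the answers), counting the executions of x = x + 1 directly and comparing with n(n-1)/2:
public class XCount {
    public static void main(String[] args) {
        int n = 100;
        long x = 0;
        for (int k = 1; k <= n; k++) {
            for (int j = 1; j <= n - k; j++) {
                x = x + 1; // the statement we are counting
            }
        }
        System.out.println(x);                      // 4950
        System.out.println((long) n * (n - 1) / 2); // 4950
    }
}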