Why is the time complexity of the below snippet O(n) while the space complexity is O(1)?

The code below has a space complexity of O(1). I know it has something to do with the call stack, but I am unable to visualize it correctly. If somebody could explain this a little more clearly, that would be great.
int pairSumSequence(int n) {
    int sum = 0;
    for (int i = 0; i < n; i++) {
        sum += pairSum(i, i + 1);
    }
    return sum;
}

int pairSum(int a, int b) {
    return a + b;
}

How much space does it need in relation to the value of n?
The only variable used is sum.
sum does not grow with n; its storage is constant.
If it's constant, then it's O(1).
How many instructions will it execute in relation to the value of n?
Let's first simplify the code, then analyze it row by row.
int pairSumSequence(int n) {
    int sum = 0;
    for (int i = 0; i < n; i++) {
        sum += 2 * i + 1;
    }
    return sum;
}
The declaration and initialization of a variable takes constant time and doesn't change with the value of n. Therefore this line is O(1):
int sum = 0;
Similarly, returning a value takes constant time, so it's also O(1):
return sum;
Finally, let's analyze the inside of the for loop:
sum += 2 * i + 1;
This is also constant time, since it's basically one multiplication and two additions. Again O(1).
But this O(1) operation is executed inside a for loop:
for (int i = 0; i < n; i++) {
    sum += 2 * i + 1;
}
This for loop will execute exactly n times.
Therefore the total complexity of this function is:
C = O(1) + n * O(1) + O(1) = O(n)
Meaning that this function will take time proportional to the value of n.
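The argument above can be checked by running the corrected snippet end to end. Since each iteration adds 2i + 1, the result for input n is exactly n², which makes it easy to confirm the loop really ran n times (the class name below is mine, not from the original post):

```java
public class PairSumDemo {
    static int pairSum(int a, int b) {
        return a + b;
    }

    // Only two local variables (sum, i) are ever alive, so space is O(1);
    // the loop body runs exactly n times, so time is O(n).
    static int pairSumSequence(int n) {
        int sum = 0;
        for (int i = 0; i < n; i++) {
            sum += pairSum(i, i + 1);
        }
        return sum;
    }

    public static void main(String[] args) {
        // sum of (2i + 1) for i = 0..n-1 is n^2
        System.out.println(pairSumSequence(10)); // prints 100
    }
}
```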

Time/space complexity O(1) means constant complexity; the constant is not necessarily 1, it can be an arbitrary number, but it has to be constant and not dependent on n. For example, if you always had 1000 variables (independent of n), it would still give you O(1). Sometimes it may even happen that the constant is so big compared to your n that O(n) would be much better than O(1) with that constant.
Now in your case, the time complexity is O(n) because you enter the loop n times and each iteration has constant time complexity, so it is linearly dependent on your n. Your space complexity, however, is independent of n (you always keep the same number of variables) and is constant, hence it is O(1).

Related

How to calculate time complexity of this code?

char[] chars = String.valueOf(num).toCharArray();
int n = chars.length;
for (int i = n - 1; i > 0; i--) {
    if (chars[i - 1] > chars[i]) {
        chars[i - 1]--;
        Arrays.fill(chars, i, n, '9');
    }
}
return Integer.parseInt(new String(chars));
return Integer.parseInt(new String(chars));
What is the time complexity of this code? Could you teach me how to calculate it? Thank you!
Time complexity is a measure of how a program's run-time changes as input size grows. The first thing you have to determine is what (if any) aspect of your input can cause run-time to vary, and to represent that aspect as a variable. Framing run-time as a function of that variable (or those variables) is how you determine the program's time complexity. Conventionally, when only a single aspect of the input causes run-time to vary, we represent that aspect by the variable n.
Your program primarily varies in run-time based on how many times the for-loop runs. We can see that the for-loop runs the length of chars times (note that the length of chars is the number of digits of num). Conveniently, that length is already denoted as n, which we will use as the variable representing input size. That is, taking n to be the number of digits in num, we want to determine exactly how run-time varies with n by expressing run-time as a function of n.
Also note that when doing complexity analysis, you are primarily concerned with the growth in run-time as n gets arbitrarily large (how run-time scales with n as n goes to infinity), so you typically ignore constant factors and focus only on the highest-order terms, writing run-time in "Big O" notation. That is, because 3n, n, and n/2 all grow in about the same way as n goes to infinity, we represent them all as O(n). The primary goal is to distinguish this kind of linear growth from the quadratic O(n^2) growth of n^2, 5n^2 + 10, or n^2 + n; from the logarithmic O(log n) growth of log(n), log(2n), or log(n) + 1; and from the constant O(1) time (time that doesn't scale with n) represented by 1, 5, 100000, etc.
So, let's try to express the total number of operations the program does in terms of n. It can be helpful at first to just go line-by-line for this and to add everything up in the end.
The first line, turning num into an n-character string and then turning that string into a length-n array of chars, does O(n) work. (n work to turn each of n digits into a character, then another n work to put each of those characters into an array; n + n = 2n = O(n) total work.)
char[] chars = String.valueOf(num).toCharArray(); // O(n)
The next line just reads the length value from the array and saves it as n. This operation takes the same amount of time no matter how long the array is, so it is O(1).
int n = chars.length; // O(1)
For every digit of num, our program runs 1 for loop, so O(n) total loops:
for (int i = n - 1; i > 0; i--) { // O(n)
Inside the for loop, a conditional check is performed, and then, if it returns true, a value may be decremented and the array from i to n filled.
if (chars[i - 1] > chars[i]) { // O(1)
    chars[i - 1]--; // O(1)
    Arrays.fill(chars, i, n, '9'); // O(n-i) = O(n)
}
The fill operation is O(n-i) because that is how many characters may be changed to '9'. In the worst case (i = 1) that is n - 1 positions, so we bound each fill by O(n), which, as previously mentioned, is what matters in big O.
Finally, you parse the n characters of chars as an int, and return it. Altogether:
static Integer foo(int num) {
    char[] chars = String.valueOf(num).toCharArray(); // O(n)
    int n = chars.length; // O(1)
    for (int i = n - 1; i > 0; i--) { // O(n)
        if (chars[i - 1] > chars[i]) { // O(1)
            chars[i - 1]--; // O(1)
            Arrays.fill(chars, i, n, '9'); // O(n)
        }
    }
    return Integer.parseInt(new String(chars)); // O(n)
}
When we add everything up, we get the total time complexity as a function of n, T(n):
T(n) = O(n) + O(1) + O(n)*(O(1) + O(1) + O(n)) + O(n)
There is a product in the expression to represent the total work done across all iterations of the for loop: O(n) iterations times O(1) + O(1) + O(n) work in each iteration. In reality, in some iterations the loop might do only O(1) work (when the condition is false), but in the worst case the whole body is executed every iteration, and complexity analysis is typically done for the worst case unless otherwise specified.
You can simplify this function for run-time by using the fact that big O strips constants and lower-order terms, along with the facts that O(a) + O(b) = O(a + b) and a*O(b) = O(a*b).
T(n) = O(n+1+n) + O(n)*O(1 + 1 + n)
= O(2n+1) + O(n)*O(n+2)
= O(n) + O(n)*O(n)
= O(n) + O(n^2)
= O(n^2 + n)
= O(n^2)
So you would say that the overall time complexity of the program is O(n^2), meaning that run-time scales quadratically with input size in the worst case.
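The quadratic worst case can be seen concretely by feeding the function a number whose digits strictly decrease: the condition is then true on every iteration, and Arrays.fill writes 1 + 2 + ... + (n-1) = n(n-1)/2 characters in total. A small sketch with an added write counter (the counter and class name are mine, not part of the original code):

```java
import java.util.Arrays;

public class MonotoneDemo {
    // Same loop as the program above, but counts how many characters
    // Arrays.fill writes in total across all iterations.
    static long fillWrites(int num) {
        char[] chars = String.valueOf(num).toCharArray();
        int n = chars.length;
        long writes = 0;
        for (int i = n - 1; i > 0; i--) {
            if (chars[i - 1] > chars[i]) {
                chars[i - 1]--;
                Arrays.fill(chars, i, n, '9');
                writes += n - i; // fill touches n - i positions
            }
        }
        return writes;
    }

    public static void main(String[] args) {
        // 8 strictly decreasing digits: condition fires every iteration,
        // so total writes = 1 + 2 + ... + 7 = 28
        System.out.println(fillWrites(98765432)); // prints 28
    }
}
```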

Time complexity on nested for loop

function(n):
{
for (i = 1; i <= n; i++)
{
for (j = 1; j <= n / 2; j++)
output("")
}
}
I have calculated the time complexity of the first for loop, which is O(n). The second for loop's condition is j <= n / 2, so I thought that for any given n (for example, a range [1, 2, ..., 10]) it would give me O(log(n)), since repeated halving produces the series n, n/2, n/4, n/8, ..., k.
So if we wanted to express the relationship, it would look something like 2^x = k.
My question is: will it give me O(log(n))?
The correct summation according to the code is: the inner loop body runs n/2 times for each of the n iterations of the outer loop, so the total count is n · (n/2) = n²/2.
So, it's not O(log n). It's O(n^2).
No, it does not give you O(log n).
The first for loop is O(n). The second loop is O(n) as well, as the number of iterations grows as a function of n (the growth rate is linear).
It would be the same even by changing the second loop to something like
for (j=1; j<=n/2000; j++)
or in general if you replace the denominator with any constant k.
To conclude, the time complexity is quadratic, i.e., O(n^2).
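Counting the iterations directly makes the quadratic growth visible: doubling n quadruples the count (a small sketch; the class and method names are illustrative):

```java
public class NestedLoopDemo {
    // Counts iterations of the nested loop from the question: the outer
    // loop runs n times and the inner loop runs n/2 times per outer pass,
    // so the total is n * (n/2).
    static long countIterations(int n) {
        long count = 0;
        for (int i = 1; i <= n; i++) {
            for (int j = 1; j <= n / 2; j++) {
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(countIterations(10)); // 10 * 5  = 50
        System.out.println(countIterations(20)); // 20 * 10 = 200
        // doubling n quadruples the count: quadratic, not logarithmic
    }
}
```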

Time complexity of the for-loop

I need to calculate the time complexity of the following loop:
for (i = 1; i < n; i++)
{
statements;
}
Assuming n = 10:
Is the i < n control statement going to run n times and the i++ statement n - 1 times? And I know that the i = 1 statement runs for one unit of time.
Calculating the total for the three statements of the for loop yields 1 + n + (n - 1) = 2n, and the loop together with its body yields 2n + (n - 1) = 3n - 1 = O(n).
Are my calculations correct to this point?
Yes, your calculations are correct; a for loop like that is O(n).
Similarly, you could analyze a loop like this:
for (int i = 0; i < n * 2; i++) {
    // calculations
}
In this case the loop runs 2n times, and since big O notation drops constant factors, O(2n) = O(n); the loop is still linear. You would only get O(n^2) if the iteration count itself grew like n * n, for example with a second loop nested inside. Counting iterations this way also lets you estimate how long a loop needs for n = 10, 100, or 1000, and to build growth graphs for loops like these.
As DAle mentioned in the comments, big O notation is not affected by constant-time calculations within the loop, only by the number of iterations of the loop itself.
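A quick iteration count confirms the linear growth of the `i < n * 2` loop: doubling n doubles the count rather than quadrupling it (class and method names are illustrative):

```java
public class LinearLoopDemo {
    // Counts how many times a loop with condition i < n * 2 executes.
    static long countIterations(int n) {
        long count = 0;
        for (int i = 0; i < n * 2; i++) {
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(countIterations(10)); // 20
        System.out.println(countIterations(20)); // 40
        // doubling n doubles the count: O(2n) = O(n), not O(n^2)
    }
}
```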

finding time complexity formula of an algorithm for checking if number is prime

I have some difficulties finding the time complexity formula T(n) of an algorithm that checks whether a number is prime.
Here is the function :
Is_prime_number(n)
{
    if (n == 1) return 0;
    if (n == 2) return 1;
    if (n mod 2 == 0) return 0;
    for (i = 3; i*i <= n; i += 2)   // odd candidates only; even divisors are already ruled out
        if (n mod i == 0)
            return 0;
    return 1;
}
Now, I know there are 3 comparisons outside the loop, and the loop runs on the order of sqrt(n) times, therefore
T(n) = 3 + c·sqrt(n), but I am not sure about the value of c in this equation.
The main operation inside the for loop is computing the modulo. According to this article, for finding a mod b (where a = q·b + r), the time taken is roughly proportional to the product of the operands' bit lengths. Since a has about log(a) bits and b has about log(b) bits, one modulo costs roughly O(log(a)·log(b)). So in this case, c corresponds to the cost of one modulo operation, about O(log(a)·log(b)).
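Putting the corrected loop together (trial divisor starting at 3, since even divisors are already handled), a runnable sketch of the same algorithm in Java (the class and method names are mine):

```java
public class PrimeDemo {
    // Trial division up to sqrt(n): after handling 1, 2, and even numbers,
    // only odd divisors 3, 5, 7, ... need testing, so the loop body runs
    // at most about sqrt(n)/2 times.
    static boolean isPrime(int n) {
        if (n == 1) return false;
        if (n == 2) return true;
        if (n % 2 == 0) return false;
        for (int i = 3; i * i <= n; i += 2) {
            if (n % i == 0) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        // starting at i = 2 with step 2 would only test even divisors
        // and wrongly report 9 as prime; starting at 3 catches it
        System.out.println(isPrime(9));  // false
        System.out.println(isPrime(97)); // true
    }
}
```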

Calculate function time complexity

I am trying to calculate the time complexity of this function
Code
int Almacen::poner_items(id_sala s, id_producto p, int cantidad) {
    it_prod r = productos.find(p);
    if (r != productos.end()) {
        int n = salas[s - 1].size();
        int m = salas[s - 1][0].size();
        for (int i = n - 1; i >= 0 && cantidad > 0; --i) {
            for (int j = 0; j < m && cantidad > 0; ++j) {
                if (salas[s - 1][i][j] == "NULL") {
                    salas[s - 1][i][j] = p;
                    r->second += 1;
                    --cantidad;
                }
            }
        }
    } else {
        displayError();
        return -1;
    }
    return cantidad;
}
The variable productos is a std::map, whose find method has a time complexity of O(log n), and the other variable salas is a std::vector.
I calculated the time and found it was log(n) + nm, but I am not sure if that is the correct expression, or whether I should leave it as nm because that is the worst case, or whether I should use n² only.
Thanks
The overall function is O(nm). Big-O notation is all about "in the limit of large values" (and ignores constant factors). "Small" overheads (like an O(log n) lookup, or even an O(n log n) sort) are ignored.
Actually, the O(n log n) sort case is a bit more complex. If you expect m to typically be the same sort of size as n, then O(nm + n log n) == O(nm); if you expect n ≫ m, then O(nm + n log n) == O(n log n).
Incidentally, this is not a question about C++.
In general when using big O notation, you only leave the most dominant term when taking all variables to infinity.
n by itself is much larger than log n at infinity, so even without m you can (and generally should) drop the log n term, so O(nm) looks fine to me.
In non-theoretical use cases, it is sometimes important to understand the actual complexity (for non-infinite inputs), since algorithms that are slow in the limit can produce better results for shorter inputs (there are cases where an O(1) algorithm has such a terrible constant that an exponential algorithm does better in real life). Quicksort is a practical example: its worst case is O(n^2), yet it often does better in practice than its O(n log n) counterparts.
Read about "Big O Notation" for more info.
let
k = productos.size()
n = salas[s - 1].size()
m = salas[s - 1][0].size()
your algorithm is O(log(k) + nm). You need to use a distinct name for each independent variable
Now it might be the case that there is a relation between k, n, and m that lets you re-label with a reduced set of variables, but that is not discernible from your code; you need to know about the data.
It may also be the case that some of these terms won't grow large, in which case they are actually constants, i.e., O(1).
E.g., you may know that k ≪ n, k ≪ m, and n ≈ m, which allows you to describe it as O(n^2).