Complexity of single loop that changes its index - time-complexity

Can someone please explain how to evaluate the complexity of the following code? Consider that the array_of_size_n is made of positive random numbers in ascending order.
for (i = 0; i < n; i++) {
    temp = array_of_size_n[i] + last;
    if (temp > last) {
        do_something_else(temp); // doesn't change the complexity
        last = temp;
        i = 0;
    }
}

According to my tests the growth is linear, with a huge constant factor.
Assume last is 0 at the beginning.
The loop never re-examines the first value: after the reset i = 0, the loop's i++ immediately brings it back to 1.
So the loop gets stuck at the second value. Since the numbers are positive, temp > last holds, and each pass adds the second value to last until last overflows past INT_MAX. From then on if (temp > last) is false forever, and the loop simply walks the remaining n - 2 elements, hence linear.
The size of the second value determines how fast last reaches INT_MAX, i.e. the size of the constant factor.

Why is the time complexity of the below snippet O(n) while the space complexity is O(1)

The below-given code has a space complexity of O(1). I know it has something to do with the call stack but I am unable to visualize it correctly. If somebody could make me understand this a little bit clearer that would be great.
int pairSumSequence(int n) {
    int sum = 0;
    for (int i = 0; i < n; i++) {
        sum += pairSum(i, i + 1);
    }
    return sum;
}

int pairSum(int a, int b) {
    return a + b;
}
How much space does it need in relation to the value of n?
The only variables used are sum and i, plus pairSum's constant-size stack frame, which is popped after every call.
None of these grows with n, therefore the space is constant.
If it's constant then it's O(1).
How many instructions will it execute in relation to the value of n?
Let's first simplify the code, then analyze it row by row.
int pairSumSequence(int n) {
    int sum = 0;
    for (int i = 0; i < n; i++) {
        sum += 2 * i + 1;
    }
    return sum;
}
The declaration and initialization of a variable takes constant time and doesn't change with the value of n. Therefore this line is O(1):
int sum = 0;
Similarly, returning a value takes constant time, so it's also O(1):
return sum;
Finally, let's analyze the inside of the for:
sum += 2 * i + 1;
This is also constant time, since it's basically one multiplication and two additions. Again O(1).
But this O(1) operation is executed inside a for loop:
for (int i = 0; i < n; i++) {
    sum += 2 * i + 1;
}
This for loop will execute exactly n times.
Therefore the total complexity of this function is:
C = O(1) + n * O(1) + O(1) = O(n)
Meaning that this function will take time proportional to the value of n.
Time/space complexity O(1) means constant complexity; the constant is not necessarily 1, it can be an arbitrary number, but it has to be constant and not depend on n. For example, if you always had 1000 variables (independently of n), it would still be O(1). It may even happen that the constant is so big compared to your n that O(n) would be much better than O(1) with that constant.
Now in your case, the time complexity is O(n) because you enter the loop n times and each iteration has constant time complexity, so it is linearly dependent on n. Your space complexity, however, is independent of n (you always keep the same number of variables) and is constant, hence it is O(1).

What is the difference between these two loops?

I want to write a loop that keeps increasing my variable i while arr[i] is less than v.
I've tried these two loops, but only the first loop works as expected, and I can't tell the difference.
first loop:
do {
    i++;
    if (arr[i] >= v)
        break;
} while (true);
second loop:
do {
    i++;
} while (arr[i] <= v);
I was wondering what exactly the second loop does differently, since it doesn't give the expected result.
In the first one you are breaking when the value is greater than or equal to v.
In the second one you only stop when the value is strictly greater than v.
The break conditions are different for each loop.
For the second one to work like the first, the continue condition must be the negation of the first loop's break condition:
do {
    i++;
} while (arr[i] < v);

Time complexity of the for-loop

I need to calculate the time complexity of the following loop:
for (i = 1; i < n; i++)
{
    statements;
}
Assuming n = 10,
Is the i < n control statement going to run n times and the i++ statement n-1 times? And I know that the i = 1 statement runs once, taking a unit of time.
Calculating the total for the three statements of the for loop yields 1 + n + (n - 1) = 2n, and adding the n - 1 executions of the loop body yields 2n + n - 1 = 3n - 1 = O(n).
Are my calculations correct to this point?
Yes, your calculations are correct; a for loop like this has O(n) complexity.
Similarly, you could analyze a loop like this:
for (int i = 0; i < n * n; i++) {
    //calculations
}
In this case the loop body runs n * n times, so the loop is O(n^2). (A bound of n * 2, on the other hand, would still be O(n): the loop runs 2n times, and constant factors are dropped.)
This way you can estimate how long your loop needs for n = 10 or 100 or 1000, and build graphs for loops and such.
As DAle mentioned in the comments, the big-O notation is not affected by constant-time calculations within the loop, only by the loop structure itself.

finding time complexity formula of an algorithm for checking if number is prime

I am having some difficulty finding the time complexity formula T(n) of an algorithm that checks whether a number is prime.
Here is the function :
Is_prime_number(n)
{
    if (n == 1) return 0;
    if (n == 2) return 1;
    if (n mod 2 == 0) return 0;
    for (i = 3; i*i <= n; i += 2)
        if (n mod i == 0)
            return 0;
    return 1;
}
Now, I know there are 3 comparisons outside the loop, and therefore
T(n)= 3 + c*sqrt(n), but I am not sure about the value of c in this equation.
The main operation inside the for loop is finding the modulo. According to this article, computing a mod b, with a = q·b + r, takes roughly O(m·log(b)) time, where m = log(a) is the number of bits of a.
So it depends on the number of bits a and b have, and is roughly O(log(a)·log(b)). In this case a = n and b = i ≤ sqrt(n), so c would be on the order of log(n)·log(sqrt(n)) rather than a true constant. (With fixed-width machine integers, though, each mod is a single instruction and c really is constant.)

Big-Oh notation for code fragment

I am lost on these code fragments and having a hard time finding any similar examples.
//Code fragment 1
sum = 0;
for (i = 0; i < n; i++)
    for (j = 1; j < i * i; j++)
        for (k = 0; k < j; k++)
            sum++;
I'm guessing it is O(n^4) for fragment 1.
//Code fragment 2
sum = 0;
for (i = 1; i < n; i++)
    for (j = 1; j < i * i; j++)
        if (j % i == 0)
            for (k = 0; k < j; k++)
                sum++;
I am very lost on this one. I'm not sure how the if statement affects the loop.
Thank you for the help ahead of time!
The first one is in fact O(n^5). For each i, the sum++ line is executed roughly i^4/2 times: there are about i^2 values of j, and for each j the innermost loop runs j times, so summing j from 1 to i^2 gives about (i^2)^2/2 = i^4/2. The sum of fourth powers has a term in n^(4+1) (see e.g. Faulhaber's formula), so in this case n^5.
For the second one, the way to think about it is that the inner loop only executes when j is a multiple of i. So the j values that matter are i, 2i, ..., up to about i^2, roughly i of them, and the inner loop runs j times for each, contributing i + 2i + ... + (i-1)·i ≈ i^3/2. So we now have a sum of cubes rather than fourth powers, and the highest term is therefore n^4.
I'm fairly sure the 1st fragment is actually O(n^5).
Because:
the outer loop runs n times;
the middle loop runs i^2 times, and the average i over the run is about n/2 (for each x there is a corresponding n - x), so that's roughly n^2 / 4 times (call it a);
then the innermost loop runs about a times again;
and n * a * a = n * (n^2 / 4) * (n^2 / 4) = n^5 / 16, which is O(n^5).
I believe the second is O(n^4), because:
it's iterated n times;
then the j loop is iterated n * n times (literally about n * n / 4, but constants drop in O notation);
then only a 1/n fraction of the j values are let through by the if (j must be a multiple of i, and i is on the order of n);
then the k loop runs up to n * n times for each surviving j.
So, n * n * n * n * n / n = n^4.
With a sum so handy to compute, you could run these for n = 10, n = 50, and so on, and just look at which of O(N^3), O(N^4), O(N^5), O(N^6) is a better match. (Note that the index for the inner-most loop also runs to n*n...)
First off, as the other answers note, the first fragment is actually O(n^5), not O(n^4). Here is my breakdown for the second:
the if statement causes the innermost loop to run only when j is a multiple of i, i.e. for roughly a 1/i fraction of the j values (not half of them), and that filter removes one factor of n. Bottom line: in big-O we ignore constants, so you're looking at O(n^4), not O(n^3).