Cost function of exponent loop?

How do you calculate the cost function for this loop:
for (int i = 3; i*i < N; i = i + 2) {   /* i*i is "i squared"; in C, i^2 would be XOR */
    // One operation here
}
Along with the cost function, what is the Big-O notation?

The number of iterations is determined by the loop condition, i² < N. The loop starts at i = 3 and increases i by 2 each iteration, and it keeps running until i² becomes equal to or greater than N. Since i² grows quadratically while i only grows linearly, the condition fails as soon as i reaches āˆšN. With i starting at 3 and stepping by 2, the loop therefore runs about (āˆšN āˆ’ 3)/2 times, so the cost function is T(N) ā‰ˆ āˆšN/2 and the time complexity is O(āˆšN).
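A quick way to sanity-check this is to count the iterations directly. Here is a minimal sketch in Python (the loop translated literally, with i*i standing in for i squared):

import math

def iterations(N):
    # Count iterations of: for (i = 3; i*i < N; i = i + 2)
    count = 0
    i = 3
    while i * i < N:
        count += 1
        i += 2
    return count

for N in [100, 10_000, 1_000_000]:
    print(N, iterations(N), (math.isqrt(N) - 3) // 2 + 1)

For these values the count matches (āˆšN āˆ’ 3)/2 + 1 exactly, which grows as O(āˆšN).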

Related

Running Time Calculation

I am trying to learn time complexities. I have come across a problem that is confusing me, and I need help understanding it:
my_func takes an array input of size n. It runs three nested for loops, and inside the third loop it calls another function that runs in O(1) time.
def my_func(A):
    n = len(A)                       # A has size n
    for i in range(1, n):
        for j in range(1, i):
            for k in range(1, j):
                some_other_func()    # O(1)
My Questions:
Am I right if I say that the total number of steps performed by my_func() is O(n^3), because:
the first for loop goes from 1 to n-1
the second for loop goes at most from 1 to n-2
the third loop goes at most from 1 to n-3
What is asymptotic run time and what is the asymptotic run time for the above algorithm?
What is the meaning of the following: Ī£_{i=1}^{n-1} Ī£_{j=1}^{i-1} Ī£_{k=1}^{j-1} O(1)?
Am I right if I say that total number of steps performed by my_func() is O(n^3)?
Yes, its time-complexity is O(n^3).
What is asymptotic run time and what is the asymptotic run time for the above algorithm?
The limiting behavior of the execution time of an algorithm as the size of the problem goes to infinity. For example, here the running time behaves like:
lim (n^3) as n → infinity
What is the meaning of the following
First, it writes out the dependent variables exactly as the loops nest them: k depends on j, and j depends on i. If instead all of the variables (i, j, k) were independent of each other, say each loop iterated a constant x times, the total would simply be x * x * x; the nested sums Ī£_i Ī£_j Ī£_k O(1) express the same nesting, but with the dependencies k → j → i kept.
Second, time complexity is investigated on large inputs. Therefore, whether the variables are dependent or independent, the Big-O is O(n^3).
Yes, it's O(n^3), and that IS the asymptotic run time. The sum expression at the end means the same thing as the three nested loops at the top, assuming "some_other_func()" is sum = sum + 1.
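To see where the n^3 comes from exactly, you can count the calls directly. A small sketch (assuming, as above, that some_other_func() is O(1), so we only count how often it runs):

from math import comb

def count_steps(n):
    # Count how many times some_other_func() runs in my_func
    steps = 0
    for i in range(1, n):
        for j in range(1, i):
            for k in range(1, j):
                steps += 1
    return steps

for n in [10, 50, 100]:
    print(n, count_steps(n), comb(n - 1, 3))

The count is exactly C(n-1, 3) = (n-1)(n-2)(n-3)/6, roughly n^3/6, hence O(n^3).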

How recursion reduces the time complexity in merge sort

As I understand it, time complexity is derived from how the number of operations grows as the input size increases.
In merge sort, there are two phases:
Divide the input array into smaller arrays.
Sort and merge those arrays.
As per a video lecture, the time complexity to divide an array for merge sort is O(log n).
But there he is not counting the number of operations to calculate the time complexity; he is counting the number of decompositions, i.e. the number of times the recursive function is called.
*He used recursion to divide the array.
Talking purely in terms of pseudo code, recursion takes more than n operations in this case, whereas this code always takes n operations:
function divide(arr) {
  // wrap each element in its own single-element array
  for (let i = 0; i < arr.length; i++) {
    arr[i] = [arr[i]];
  }
}
So how is the complexity of the recursive code lower than that of the loop?
Recursion doesn't reduce time complexity. You've already shown a diagram for top-down merge sort. For the original bottom-up merge sort, the code treats an array of n elements as n runs of size 1, so the "divide" step takes O(1) time.
Most libraries use some variation of a hybrid of insertion sort and bottom-up merge sort. Top-down merge sort is mostly used for academic purposes.
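For reference, here is a minimal bottom-up merge sort sketch in Python (my own illustration, not any particular library's code). Note there is no divide phase at all: the array is treated as n runs of width 1 from the start, and each pass just merges adjacent runs:

def merge_sort_bottom_up(arr):
    n = len(arr)
    width = 1                          # start with n runs of size 1
    while width < n:
        for lo in range(0, n, 2 * width):
            mid = min(lo + width, n)
            hi = min(lo + 2 * width, n)
            # merge the adjacent sorted runs arr[lo:mid] and arr[mid:hi]
            merged, a, b = [], lo, mid
            while a < mid and b < hi:
                if arr[a] <= arr[b]:
                    merged.append(arr[a]); a += 1
                else:
                    merged.append(arr[b]); b += 1
            merged.extend(arr[a:mid])
            merged.extend(arr[b:hi])
            arr[lo:hi] = merged
        width *= 2                     # runs double in size each pass
    return arr

print(merge_sort_bottom_up([5, 2, 4, 7, 1, 3, 2, 6]))   # [1, 2, 2, 3, 4, 5, 6, 7]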

What is the complexity of this program? Is it O(n)?

This is a simple program, and I want to know its complexity. I assume it is O(n), as it has only a single operation inside one for loop.
a = int(input("Enter a:"))
b = int(input("Enter b:"))
sol = a
for i in range(a, b):
    sol = sol & (i + 1)   # & binds looser than + in Python, so this is sol & (i+1)
print("\nSol", sol)
Yes, it is O(n), sort of. You have to remember that O(n) means the number of operations grows with the size of the input. Perhaps you're worried about the & and (i+1) operations in the for loop. What you need to keep in mind is that these operations are constant-time, since they are performed on a 32-bit integer. Therefore, the only parameter changing how long the program runs is the actual number of iterations of the for loop.
If you're assuming n = b - a, then this program is O(n). In fact, if you break down the actual runtime:
per loop: 1 AND operation, 1 addition operation
now do (b-a) iterations, so 2 operations per loop, (b-a) times = 2*(b-a)
If we assume n = b-a, then this runtime becomes 2*n, which is O(n).
I assume you define n := b - a. The complexity is actually n log(n). There is only one operation in the loop, so the complexity is n * Time(operation in loop); but since i consists of log(n) bits, the complexity is O(n log(n)).
EDIT:
I now regard n := b. It does not affect my original answer, and it makes more sense as it's the size of the input. (It doesn't make sense to say that n = 1 for some big pair a, a+1.)
To make it more efficient, notice that you are calculating a & (a+1) & (a+2) & ... & b.
So we just need to set 0's instead of 1's in the binary representation of b, in every position where there is a 0 in that position for some a <= k < b. How can we know whether to set a digit to 0 or not? I'll leave that to you :)
This can be done in log(n) time, where log(n) is the size of the binary representation of b.
So in this case the time is O(log(n)^2) = o(n).
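To spell out one standard way of doing this (my reading of the hint, not necessarily what the answerer had in mind): the AND of every integer from a to b is exactly the common binary prefix of a and b padded with zeros, since every bit below the highest differing bit is 0 somewhere in the range. Finding that prefix takes one shift per differing bit:

def range_and(a, b):
    # AND of all integers in [a, b]: keep the common high-bit prefix of a and b
    shift = 0
    while a != b:      # strip bits until only the shared prefix remains
        a >>= 1
        b >>= 1
        shift += 1
    return a << shift

# Same result as the original loop, without iterating b - a times:
a, b = 37, 53
sol = a
for i in range(a, b):
    sol = sol & (i + 1)
print(sol, range_and(a, b))   # both print 32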

Time complexity of probabilistic algorithm

The game is simple: a program has to 'guess' some given n such that 1 < n < N, where n and N are positive integers.
Assuming I'm using a true random number generator function ( rng() ) to generate the computer's 'guess', the probability that it will 'guess' the given n on any single try is 1/N.
Example pseudo-code:
n = a    // we assign some positive integer value to n
N = b    // we assign some positive integer value to N
check loop
{
    if rng(N) = n
        print some success message
        exit program
    else
        go back to the beginning of the check loop
}
OK, I'm interested in how to determine the time complexity (especially the upper bound) of this algorithm. I'm kind of confused: how does the probability aspect affect it? If I understand correctly, the worst-case scenario (in theory) is that the program runs forever?
Even if theoretically your program can run forever, the expected complexity here is O(N): doubling N halves the probability of guessing the particular value on each step, and thus doubles the expected number of steps. Even if the program could run forever for a given N, it would run "twice forever" if N were 2 times bigger.
Complexity in O notation doesn't tell you how many operations will be performed. It tells you how the number of operations depends on the input size.
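You can see the linear expected behaviour in a quick simulation. A sketch using Python's random.randrange as a stand-in for rng() (the names are mine):

import random

def guesses_until_hit(n, N):
    # Count tries until rng(N) hits n; the expected value is N
    tries = 1
    while random.randrange(1, N + 1) != n:
        tries += 1
    return tries

for N in [10, 100, 1000]:
    trials = 10_000
    avg = sum(guesses_until_hit(3, N) for _ in range(trials)) / trials
    print(N, round(avg, 1))   # average number of tries grows linearly with N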

T(n) of the nested loop: I get the answer as (log n + 1)(log n + 2); am I right?

i = n;
while (i >= 1) {
    j = i;
    while (j <= n) {
        theta(1);    // one constant-time unit of work
        j = j * 2;
    }
    i = i / 2;
}
Edit: changed the code because of the OP's comment below.
Yes, you are correct in that the outer loop is Log(n) and the inner loop is Log(n), which yields (log n)(log n).
The reason for the Log(n) complexity is that the number of remaining iterations in each loop is halved at each step. Whether this is achieved by dividing the iterating variable i by 2 or by multiplying the variable j by 2 is irrelevant: the time taken to complete each loop grows as Log(n).
The multiplication in (log n)(log n) comes from the fact that each iteration of the outer loop executes up to Log(n) iterations of the inner loop.
The additive terms are unnecessary because in big-O notation we are only concerned with the rate at which a function grows relative to another function. Offsetting by a constant (or multiplying by a constant) does not change the relative complexity of the functions, so the end result is (log n)(log n).
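If you want to check the count empirically, here is a direct translation of the loops into Python that tallies how many times theta(1) executes. For n a power of two the exact count comes out to (log n + 1)(log n + 2)/2, which is still Θ((log n)^2):

import math

def count_theta(n):
    count = 0
    i = n
    while i >= 1:
        j = i
        while j <= n:
            count += 1   # one theta(1) unit of work
            j *= 2
        i //= 2
    return count

for n in [2**5, 2**10, 2**20]:
    lg = math.log2(n)
    print(n, count_theta(n), (lg + 1) * (lg + 2) / 2)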
(Note: this answer analyzes the original version of the code, in which the inner condition was j <= 1.)
In the while (i >= 1) { ... } loop, i is at least 1, and strictly greater than 1 except in the last iteration. Thus, after j = i, j is also at least 1, and strictly greater than 1 except in the last iteration, so the inner loop's body can only execute during that last iteration.
For that reason, your code is more or less equivalent to:
i = n;
while (i > 1) {
    i = i / 2;
}
if (i == 1) {
    j = i;
    while (j <= 1) {
        theta(1);
        j = j * 2;
    }
}
Which can be rewritten as:
i = n;
while (i > 1) {
    i = i / 2;
}
if (i == 1) {
    theta(1);
}
And the overall complexity is the complexity of the while loop, which is log(n).
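For completeness, a quick count of the pre-edit code this answer analyzes (assuming the inner condition really was j <= 1, as the rewritten version above suggests):

def count_pre_edit(n):
    # Count work in the pre-edit code, where the inner condition is j <= 1
    outer = inner = 0
    i = n
    while i >= 1:
        outer += 1
        j = i
        while j <= 1:    # body runs only on the last pass, when i == 1
            inner += 1
            j *= 2
        i //= 2
    return outer, inner

for n in [2**5, 2**10, 2**20]:
    print(n, count_pre_edit(n))   # outer ~ log2(n) + 1; inner is always 1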