What is the complexity of this program? Is it O(n)? - time-complexity

This is a simple program and I want to know its complexity. I assume it is O(n) as it has only a single operation in one for loop.
a = int(input("Enter a:"))
b = int(input("Enter b:"))
sol = a
for i in range(a, b):
    sol = sol & (i + 1)   # bitwise AND of a, a+1, ..., b
print("\nSol", sol)

Yes, it is O(n), sort of. You have to remember that O(n) means the number of operations grows with the size of the input. Perhaps you're worried about the & and (i+1) operations in the for loop. What you need to keep in mind is that these operations are constant-time, since they all operate on fixed-width (e.g. 32-bit) integers. Therefore, the only parameter changing how long the program runs is the actual number of iterations of the for loop.
If you're assuming n = b - a, then this program is O(n). In fact, if you break down the actual runtime:
per loop: 1 AND operation, 1 addition operation
now do (b-a) iterations, so 2 operations per loop, (b-a) times = 2*(b-a)
If we assume n = b-a, then this runtime becomes 2*n, which is O(n).
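To make the counting concrete, here is a small instrumented version of the loop (the function name masked_and and the sample inputs are my own, purely for illustration):
import random  # not needed here; only the loop below matters

def masked_and(a, b):
    # same loop as above, but also count iterations explicitly
    sol = a
    iterations = 0
    for i in range(a, b):
        sol = sol & (i + 1)   # one AND and one addition per pass
        iterations += 1
    return sol, iterations

print(masked_and(8, 16))   # 16 - 8 = 8 iterations
print(masked_and(8, 24))   # doubling b - a doubles the iteration count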

I assume you define n := b - a. The complexity is actually n log(n). There is only 1 operation in the loop, so the complexity is n * Time(operation in loop), but as i consists of log(n) bits, the complexity is O(n log(n)).
EDIT:
I now regard n := b. It does not affect my original answer, and it makes more sense as it's the size of the input. (It doesn't make sense to say that n = 1 for some big a and a+1.)
To make it more efficient, notice you calculate a & (a+1) & (a+2) & ... & b.
So we just need to set 0's instead of 1's in the binary representation of b, in every bit position where some a <= k < b has a 0. How can we know whether to set a digit to 0 or not, then? I'll leave that
to you :)
It is possible to do this in log(n) time, the size of the binary representation of b.
So in this case we get that the time is O(log(n)^2) = o(n).
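For what it's worth, one common way to realize this idea is to keep only the common high-order prefix of a and b; here is a minimal Python sketch (the function name range_and is mine):
def range_and(a, b):
    # AND of all integers in [a, b]: every bit below the highest position
    # where a and b differ flips somewhere in the range, so it becomes 0;
    # only the common high-order prefix survives.
    shift = 0
    while a != b:        # at most log2(b) iterations
        a >>= 1
        b >>= 1
        shift += 1
    return a << shift

print(range_and(5, 7))   # 5 & 6 & 7 == 4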

Running Time Calculation

I am trying to learn time complexities. I have come across a problem that is confusing me. I need help to understand this:
my_func takes an array input of size n. It runs three for loops. In the third loop it calls another function that runs in O(1) time.
def my_func(A):
    n = len(A)
    for i in range(1, n):
        for j in range(1, i):
            for k in range(1, j):
                some_other_func()   # assumed O(1)
My Questions:
Am I right if I say that the total number of steps performed by my_func() is O(n^3), because:
the first for loop goes from 1 to n-1
the second for loop goes from 1 to n-2
the third loop goes from 1 to n-3
What is asymptotic run time and what is the asymptotic run time for the above algorithm?
What is the meaning of the following:
Am I right if I say that total number of steps performed by my_func() is O(n^3)?
Yes, its time-complexity is O(n^3).
What is asymptotic run time and what is the asymptotic run time for the above algorithm?
The limiting behavior of the execution time of an algorithm as the size of the problem goes to infinity. For example:
lim (n^3) as n -> infinity
What is the meaning of the following
First, it shows the dependent loop variables k -> j -> i as written. What if all of the variables (i, j, k) were independent of each other, for example each loop running a constant x times? Then the total count would simply be x * x * x. Here, however, k depends on j and j depends on i, so the count is the nested sum:
sum_{i=1..n-1} sum_{j=1..i-1} sum_{k=1..j-1} O(1)
Second, time complexity is investigated on large inputs. Therefore, whether the variables are dependent or not, big O would be O(n^3).
Yes, it's O(n^3), and that IS the asymptotic run time. The sum expression at the end means the same thing as the three nested loops at the top, assuming "some_other_func()" is sum = sum + 1.
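As a sanity check (my own snippet, not from the question), counting the iterations directly shows the cubic growth; the exact count is (n-1)(n-2)(n-3)/6, which is Theta(n^3):
def count_iterations(n):
    # count how many times some_other_func() would be called
    count = 0
    for i in range(1, n):
        for j in range(1, i):
            for k in range(1, j):
                count += 1
    return count

for n in (10, 100, 200):
    print(n, count_iterations(n), n**3 // 6)   # same order of growth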

How can I compare the time-complexity O(n^2) with O(N+log(M))?

My Lua function:
for y = userPosY + radius, userPosY - radius, -1 do
  for x = userPosX - radius, userPosX + radius, 1 do
    local oneNeighborFound = redis.call('lrange', userPosZone .. x .. y, '0', '0')
    if next(oneNeighborFound) ~= nil then
      table.insert(neighborsFoundInPosition, userPosZone .. x .. y)
      neighborsFoundInPositionCount = neighborsFoundInPositionCount + 1
    end
  end
end
This leads to the formula (2n+1)^2, with n being the radius.
If I understand correctly, that would be a time complexity of O(n^2).
How can I compare this to the time complexity of the GEORADIUS (Redis) with O(N+log(M))? https://redis.io/commands/GEORADIUS
Time complexity: O(N+log(M)) where N is the number of elements inside the bounding box of the circular area delimited by center and radius and M is the number of items inside the index.
My time complexity does not have an M. I do not know how many items are in the index (M) because I do not need to know that. My index changes often, almost with every request, and it can be large.
Which time complexity is better, and when?
Assuming N and M are independent variables, I would treat O(N + log M) the same way you treat O(N^3 - 7N^2 - 12N + 42): the latter becomes O(N^3) simply because that's the term that has the most effect on the outcome.
This is especially true since time complexity analysis is not really about runtime. Actual runtime has to take the lesser terms into account for specific ranges of N. For example, if your algorithm's runtime can be expressed as runtime = N^2 + 9999999999N, and N is always in the range [1, 4], it's the second term that's more important, not the first.
It's better to think of complexity analysis as what happens as N approaches infinity. With the O(N + log M) one, think about what happens when you:
double N?
double M?
The first has a much greater impact so I would simply convert the complexity to O(N).
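To make the doubling comparison concrete, here is a rough Python sketch (the cost function is my own abstraction of the operation count, not actual Redis code):
import math

def cost(N, M):
    # abstract operation count for something that is O(N + log M)
    return N + math.log2(M)

print(cost(1000, 10**6), cost(2000, 10**6))      # doubling N roughly doubles the cost
print(cost(1000, 10**6), cost(1000, 2 * 10**6))  # doubling M adds only about 1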
However, you'll hopefully have noticed the use of the word "independent" in my first paragraph. The only sticking point to my suggestion would be if M was actually some function of N, in which case it may become the more important term.
Any function that reversed the impact of the log M would do this, such as the equality M = 10^(10^(10N)).

Time complexity and integer inputs

I came across a question asking to describe the computational complexity in Big O of the following code:
i = 1;
while (i < N) {
    i = i * 2;
}
I found this Stack Overflow question asking for the answer, with the most voted answer saying it is Log2(N).
On first thought that answer looks correct, however I remember learning about pseudo-polynomial runtimes, and how computational complexity measures difficulty with respect to the length of the input rather than its value.
So for integer inputs, the complexity should be in terms of the number of bits in the input.
Therefore, shouldn't this function be O(N)? After all, every iteration of the loop increases the number of bits in i by 1, until i has roughly as many bits as N.
This code might be found in a function like the one below:
function FindNextPowerOfTwo(N) {
    i = 1;
    while (i < N) {
        i = i * 2;
    }
    return i;
}
Here, the input can be thought of as a k-bit unsigned integer, which we might as well imagine as a string of k bits. The input size is therefore k = floor(log2(N)) + 1 bits.
The assignment i = 1 should be interpreted as creating a new bit string and assigning it the length-one bit string 1. This is a constant time operation.
The loop condition i < N compares the two bit strings to see which represents the larger number. If implemented intelligently, this will take time proportional to the length of the shorter of the two bit strings which will always be i. As we will see, the length of i's bit string begins at 1 and increases by 1 until it is greater than or equal to the length of N's bit string, k. When N is not a power of two, the length of i's bit string will reach k + 1. Thus, the time taken by evaluating the condition is proportional to 1 + 2 + ... + (k + 1) = (k + 1)(k + 2)/2 = O(k^2) in the worst case.
Inside the loop, we multiply i by two over and over. The complexity of this operation depends on how multiplication is to be interpreted. Certainly, it is possible to represent our bit strings in such a way that we could intelligently multiply by two by performing a bit shift and inserting a zero on the end. This could be made to be a constant-time operation. If we are oblivious to this optimization and perform standard long multiplication, we scan i's bit string once to write out a row of 0s and again to write out i with an extra 0, and then we perform regular addition with carry by scanning both of these strings. The time taken by each step here is proportional to the length of i's bit string (really, say that plus one), so the whole thing is proportional to i's bit-string length. Since the bit-string length of i assumes the values 1, 2, ..., (k + 1), the total time is 2 + 3 + ... + (k + 2) = (k + 2)(k + 3)/2 - 1 = O(k^2).
Returning i is a constant time operation.
Taking everything together, the runtime is bounded from above and from below by functions of the form c * k^2, and so the worst-case complexity is Theta(k^2) = Theta(log(N)^2).
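Here is a rough Python sketch of that counting argument (my own code; it uses int.bit_length() as a stand-in for the bit-string length and only counts the dominant bit operations):
def find_next_power_of_two_ops(N):
    # count bit operations for the comparison and the naive doubling
    bit_ops = 0
    i = 1
    while True:
        bit_ops += i.bit_length()      # cost of evaluating i < N
        if i >= N:
            break
        bit_ops += i.bit_length()      # cost of naive long multiplication by 2
        i = i * 2
    return i, bit_ops

for N in (2**8, 2**16, 2**32):
    k = N.bit_length()
    print(k, find_next_power_of_two_ops(N)[1], k * k)   # ops grow quadratically in k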
In the given example, you are not increasing the value of i by 1 but doubling it each time, so it moves toward N much faster. By multiplying by two you are shrinking the remaining range between i and N by roughly half at every step, i.e., reducing it by a factor of 2. Thus the complexity of your program is O(log_2(N)).
If instead you were doing
i = i * 3;
the complexity of your program would be O(log_3(N)).
It depends on an important question: is multiplication a constant-time operation?
In the real world it is usually considered constant, because you have fixed 32- or 64-bit numbers and multiplying them always takes the same (constant) time.
On the other hand, you then have the limitation that N must fit in 32/64 bits (or whatever width you use).
In theory, where you do not consider multiplication a constant operation, or for special algorithms where N can grow too large to ignore the cost of multiplication, you are right: you have to start thinking about the complexity of multiplying.
The complexity of multiplying by a constant (in this case 2): you have to go through each bit, and you have log_2(N) bits.
And you have to do this log_2(N) times before you reach N,
which ends with a complexity of log_2(N) * log_2(N) = O(log_2^2(N)).
PS: Akash has a good point that multiplying by 2 can be implemented as a constant-time operation, because the only thing you need to do in binary is append a zero (similar to multiplying by 10 in "human-readable" decimal, where you just append a zero: 4333 * 10 = 43330).
However, if multiplying is not that simple (you have to go through all the bits), the previous answer is correct.
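A quick Python illustration of that point (my own snippet):
i = 4333
print(i * 2 == i << 1)        # True: doubling is a single left shift
print(bin(i), bin(i << 1))    # the binary string just gains a trailing 0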

Time complexity of probabilistic algorithm

The game is simple: a program has to 'guess' some given n such that 1 < n < N, where n and N are positive integers.
Assuming I'm using a true random number generator function (rng()) to generate the computer's 'guess', the probability that it will 'guess' the given n on any particular try is 1/N.
Example pseudo-code:
import random

def rng(N):
    # stand-in for a true random number generator: uniform integer in [1, N]
    return random.randint(1, N)

n = a   # we assign some positive integer value to n
N = b   # we assign some positive integer value to N

while True:                         # check loop
    if rng(N) == n:
        print("some success message")
        break                       # exit program
    # else: go back to the beginning of the check loop
OK, I'm interested in how to determine the time complexity (especially the upper bound) of this algorithm. I'm kind of confused: how does the probability aspect affect this? If I understand correctly, the worst-case scenario (in theory) is that the program runs forever?
Even if theoretically your program can run forever, the complexity here is O(N) in expectation: doubling N halves the probability of guessing the particular value on every step, and thus doubles the expected number of steps. Even if the program could run forever for a given N, it would run "twice forever" if N is 2 times bigger.
Complexity in O notation doesn't tell you how many operations will be performed. It tells you how the number of operations depends on the input size.
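A small Monte Carlo check of that claim (my own snippet; the trial counts are arbitrary):
import random

def tries_until_guess(N, n):
    # count how many uniform guesses in [1, N] it takes to hit n
    tries = 0
    while True:
        tries += 1
        if random.randint(1, N) == n:
            return tries

def average_tries(N, trials=2000):
    return sum(tries_until_guess(N, random.randint(1, N)) for _ in range(trials)) / trials

for N in (100, 200, 400):
    print(N, round(average_tries(N)))   # the average roughly doubles as N doubles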

Time complexity of enumerating all the subsets

for (i = 0; i < n; i++)
{
    enumerate all subsets of size i (2^n subsets in total)
    each subset of size i takes o(n log n) to search for a solution
    from all these solutions I want to find the minimum subset of size S
}
I want to know the complexity of this algorithm. Is it 2^n O(n log n * n) = o(2^n n^2)?
If I understand you right:
You iterate over all subsets of a sorted set of n numbers.
For each subset you test in O(n log n) whether it is a solution (however you do this).
After you have all these solutions, you look for the one with exactly S elements and the smallest sum.
The way you write it, the complexity would be O(2^n * n log n) * O(log(2^n)) = O(2^n * n^2 log n). The O(log(2^n)) = O(n) factor is for searching for the minimum solution, and you do this in every round of the for loop, with the worst case at i = n/2 where every subset is a solution.
Now I'm not sure if you are mixing up O() and o().
2^n O(n log n * n) = o(2^n n^2) is only right if you mean 2^n O(n log(n*n)).
f = O(g) means the complexity of f is not bigger than the complexity of g.
f = o(g) means the complexity of f is strictly smaller than the complexity of g.
So 2^n O(n log(n*n)) = O(2^n n log(n^2)) = O(2^n n * 2 log n) = O(2^n n log n) < O(2^n n^2).
Notice: O(g) = o(h) is never good notation. You will (most likely every time) find a function f with f = o(h) but f != O(g), even if g = o(h).
Improvements:
If I understand your algorithm right, you can speed it up a little. You know the size of the subset you are looking for, so only look at the subsets that have size S. The worst case is S = n/2, and C(n, n/2) ~ 2^(n-1), so this will not reduce the complexity, but it saves you at least a factor of 2.
You can also just keep the best solution found so far and check whether the next solution is smaller. This way you get the smallest solution without searching for it again, so the complexity would be O(2^n * n log n).
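A minimal Python sketch of that improvement (my own code; is_solution stands in for whatever O(n log n) test the question has in mind):
from itertools import combinations

def best_subset(numbers, S, is_solution):
    # look only at subsets of size S and keep the best solution seen so far,
    # instead of collecting all solutions and searching them afterwards
    best = None
    for subset in combinations(numbers, S):
        if is_solution(subset) and (best is None or sum(subset) < sum(best)):
            best = subset
    return best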