I have an algorithm that only needs to run in O(N/2) time (basically it is a for loop that runs over only half of the elements).
How does this simplify in big-O notation? To O(log n)?
Big-O notation drops constant factors: O(N/2) = O((1/2) * N), which simplifies to O(N).
If you want to know why the factor is dropped I would refer you to this other SO question.
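To make the factor concrete, here is a hypothetical loop of the kind described (my own illustration, not the asker's code); it touches only N/2 elements and is still O(N):

    def sum_first_half(items):
        # Visits only len(items) // 2 elements: roughly N/2 iterations.
        # The iteration count still grows linearly in N, so this is O(N).
        total = 0
        for i in range(len(items) // 2):
            total += items[i]
        return total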
Related
Would O(n * 2^n ) simplify to O(2^n) in Big-O notation?
My intuition is that it does not, even though O(2^n) is significantly worse than O(n).
O(n * 2^n) is not equal to O(2^n) and is much worse than O(2^n).
Your intuition is correct: O(n * 2^n) is not equal to O(2^n), and you can see that from the definition of big-O. For n * 2^n to be O(2^n), there would have to be some constant k with n * 2^n <= k * 2^n for all sufficiently large n, i.e. n <= k.
But whatever k you pick, taking n = k + 1 shows that the inequality is not true.
One easy way to see this is to compute the quotient of the two quantities and let n tend to infinity. In this case the quotient is
(n * 2^n) / 2^n = n,
which tends to infinity as n goes to infinity. Since the quotient is not bounded by any constant, n * 2^n grows much faster than 2^n, so O(n * 2^n) is not the same as O(2^n).
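A quick numeric check of that quotient argument (a throwaway sketch; c stands for an arbitrary constant):

    # The quotient (n * 2**n) / (c * 2**n) is just n / c, which is unbounded,
    # so no constant c can keep c * 2**n above n * 2**n for all n.
    c = 100
    for n in (10, 100, 1000, 10000):
        print(n, (n * 2**n) / (c * 2**n))   # prints n / c: 0.1, 1.0, 10.0, 100.0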
This is a question that my Data Structures course teacher asked in a class test. What would be the proper answer here? Since log n^2 = 2 log n, as far as I know it could be written as O(log n) in a time complexity, since constant multipliers cancel out. Is one better than the other in any specific way?
Asymptotically they are the same.
Your reasoning is right: O(log n^2) can be simplified to O(log n), and they are obviously equal.
It's like having two algorithms that work on an array, where the first is O(n) and the second is O(2n).
If you look at the number of operations performed, the second performs twice as many as the first, but this does not matter for asymptotic notation.
They are of the same order, that is, O(n).
In your specific example the order is O(log n) and the two can be considered the same.
I would agree with you that any O(log(x^k)) is O(log(x)). The computational complexity scales the same.
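One hypothetical way to see that constant factor of 2 concretely (my own example, not from the question): repeatedly halving a value of size n^2 takes about twice as many steps as halving a value of size n, and both counts are on the order of log n.

    def halving_steps(m):
        # Number of times m can be halved before reaching 1: about log2(m).
        steps = 0
        while m > 1:
            m //= 2
            steps += 1
        return steps

    for n in (2**10, 2**20):
        # Roughly log2(n) versus 2 * log2(n): a constant factor, same order.
        print(halving_steps(n), halving_steps(n * n))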
I am trying to answer the following question:
Given n = 2k, find the complexity of
func(n)
    if (n == 2) return 1;
    else n = 1 + func(sqrt(n))
end
I think that because of the if-else statement it will loop n times for sure, but I'm confused by the recursive call func(sqrt(n)). Since n is square-rooted, I think the time complexity would be
O(sqrt(n) * n) = O(n^(1/2) * n) = O(n^(3/2)) = O(2k^(3/2)). However, the possible answer choices are only
O(k)
O(2^n)
O(n*k)
Can O(2k^(3/2)) be considered O(k)? I'm confused because, although time complexity is often simplified, O(n) and O(n^2) are different, so I thought O(2k^(3/2)) could only be simplified to O(k^(3/2)).
I’d argue that none of the answers here are the best answer to this question.
If you have a recursive function that does O(1) work per iteration and then reduces the size of its argument from n to √n, as is the case here, the runtime works out to O(log log n). The intuition behind this is that taking the square root of a number throws away half the digits of the number, roughly, so the runtime will be O(log d), where d is the number of digits of the input number. The number of digits of the number n is O(log n), hence the overall runtime of O(log log n).
In your case, you have n = 2k, so O(log log n) = O(log log k). (Unless you meant n = 2^k, in which case O(log log n) = O(log log 2^k) = O(log k).)
Notice that we don’t get O(n × √n) as our result. That would be what happens if we do O(√n) work O(n) times. But here, that’s not what we’re doing. The size of the input shrinks by a square root on each iteration, but that’s not the same thing as saying that we’re doing a square root amount of work. And the number of times that this happens isn’t O(n), since the value of n shrinks too rapidly for that.
Reasoning by analogy, the runtime of this code would be O(log n), not O(n × n / 2):
func(n):
    if n <= 2 return 1
    return func(n / 2)
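A small sketch (my own, not from the question) that counts the recursion depth of the n -> sqrt(n) recurrence and compares it with log log n:

    import math

    def depth(n):
        # Recursion depth when the argument shrinks from n to sqrt(n) each call.
        if n <= 2:
            return 1
        return 1 + depth(math.sqrt(n))

    for n in (16, 256, 65536, 2**32):
        # The measured depth tracks log2(log2(n)) up to a small additive constant.
        print(n, depth(n), math.log2(math.log2(n)))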
Isn't O(n) an improvement over O(1 + n)?
This is my conception of the difference:
O(n):
for i=0 to n do ; print i ;
O(1 + n):
a = 1;
for i=0 to n do ; print i+a ;
... which would just reduce to O(n) right?
If the target time complexity is O(1 + n), but I have a solution in O(n),
does this mean I'm doing something wrong?
Thanks.
O(1+n) and O(n) are mathematically identical, as you can straightforwardly prove from the formal definition or using the standard rule that O( a(n) + b(n) ) is equal to the bigger of O(a(n)) and O(b(n)).
In practice, of course, if you do n+1 things it'll (usually, dependent on compiler optimizations/etc) take longer than if you only do n things. But big-O notation is the wrong tool to talk about those differences, because it explicitly throws away differences like that.
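A minimal worked version of that proof, using the formal definition (my own filling-in):

    n \le 1 + n \le 2n \quad \text{for all } n \ge 1,

so each of n and 1 + n is at most a constant multiple of the other, which gives O(1 + n) = O(n).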
It's not an improvement, because big-O doesn't describe the exact running time of your algorithm but rather its growth rate. Big-O therefore describes a class of functions, not a single function. O(n^2) doesn't mean that your algorithm will run in 4 operations on an input of size 2; it means that if you were to plot the running time of your application as a function of n, it would be asymptotically upper-bounded by c*n^2 starting at some n0. This is nice because we know how much slower our algorithm gets as the input grows, but we don't really know exactly how fast it will be. Why use the c? Because, as I said, we don't care about exact numbers but about the shape of the function, and when we multiply by a constant factor the shape stays the same.
Isn't O(n) an improvement over O(1 + n)?
No, it is not. Asymptotically these two are identical. In fact, O(n) is identical to O(n+k) where k is any constant value.
for (i = 0; i < n; i++)
{
    enumerate all subsets of size i   (2^n subsets in total)
    each subset of size i takes o(nlogn) to search for a solution
    from all these solutions I want to find the minimum subset of size S
}
I want to know the complexity of this algorithm. Is it 2^n O(nlogn*n) = o(2^n n²)?
If I understand you right:
You iterate over all subsets of a sorted set of n numbers.
For each subset you test in O(n log n) whether it is a solution (however you do this).
After you have all these solutions, you look for the one with exactly S elements and the smallest sum.
The way you write it, the complexity would be O(2^n * n log n) * O(log(2^n)) = O(2^n * n^2 log n). The O(log(2^n)) = O(n) is for searching for the minimum solution, and you do this in every round of the for loop, with worst case i = n/2 and every subset being a solution.
Now I'm not sure whether you are mixing up O() and o().
2^n O(nlogn*n) = o(2^n n²) is only right if you mean 2^n O(n log(n*n)).
f=O(g) means, the complexity of f is not bigger than the complexity of g.
f=o(g) means the complexity of f is smaller than the complexity of g.
So 2^n O(n log(n*n)) = O(2^n n log(n^2)) = O(2^n n * 2 log n) = O(2^n n log n) < O(2^n n^2).
Notice: O(g) = o(h) is never a good notation. If g = o(h), you will (most likely always) find a function f with f = o(h) but f != O(g).
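For reference, the standard textbook definitions behind these two statements (not quoted from the answer above):

    f = O(g) \iff \exists\, c > 0,\ n_0 : \; f(n) \le c \cdot g(n) \ \text{for all } n \ge n_0
    f = o(g) \iff \lim_{n \to \infty} f(n) / g(n) = 0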
Improvements:
If I understand your algorithm right, you can speed it up a little. You know the size of the subset you are looking for, so only look at the subsets that have size S. The worst case is S = n/2, and C(n, n/2) ~ 2^(n-1), so this will not reduce the complexity but saves you a factor of 2.
You can also just keep the best solution found so far and check whether the next solution is smaller. This way you get the smallest solution without searching for it again. So the complexity would be O(2^n * n log n).
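A rough sketch of that last suggestion (hypothetical names; is_solution stands in for whatever O(n log n) check the question performs):

    from itertools import combinations

    def is_solution(subset):
        # Placeholder for the question's O(n log n) per-subset test.
        return True

    def best_subset(nums, S):
        # Enumerate only the C(n, S) subsets of size S and keep a running
        # minimum, so no second search over the solutions is needed.
        best = None
        for subset in combinations(nums, S):
            if is_solution(subset) and (best is None or sum(subset) < sum(best)):
                best = subset
        return best

    print(best_subset([5, 1, 4, 2, 3], 2))   # -> (1, 2)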