for (i = 0; i < n; i++)
{
    enumerate all subsets of size i        // 2^n subsets over all i
    each subset of size i takes o(n log n) to search for a solution
    from all these solutions I want to find the minimum subset of size S
}
I want to know the complexity of this algorithm. Is it 2^n O(n log n * n) = o(2^n n^2)?
If I understand you right:
You iterate over all subsets of a sorted set of n numbers.
For each subset you test in O(n log n) whether it is a solution (however you do this).
After you have all these solutions, you look for the one with exactly S elements and the smallest sum.
The way you write it, the complexity would be O(2^n * n log n) * O(log(2^n)) = O(2^n * n^2 log n). The factor O(log(2^n)) = O(n) is for searching the minimum solution, and you do this in every round of the for loop; the worst case is i = n/2 with every subset being a solution.
Now I'm not sure whether you are mixing up O() and o().
2^n O(n log n * n) = o(2^n n^2) is only right if you mean 2^n O(n log(n*n)).
f = O(g) means the complexity of f is not bigger than the complexity of g.
f = o(g) means the complexity of f is strictly smaller than the complexity of g.
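Formally (the standard definitions, written in the same notation as above):

    f = O(g)  iff  there are constants C > 0 and n0 such that f(n) <= C * g(n) for all n >= n0
    f = o(g)  iff  f(n) / g(n) -> 0 as n -> infinity

For example, n = o(n^2) and n log n = O(n^2), but n^2 is not o(n^2).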
So 2^n O(n log(n*n)) = O(2^n n log(n^2)) = O(2^n n * 2 log n) = O(2^n n log n) < O(2^n n^2).
Notice: O(g) = o(h) is never a good notation. Even if g = o(h), you will (most likely every time) find a function f with f = o(h) but f != O(g).
Improvements:
If I understand your algorithm right, you can speed it up a little. You know the size S of the subset you are looking for, so only look at the subsets that have exactly size S. The worst case is S = n/2, and C(n, n/2) is still on the order of 2^n / sqrt(n), so this will not reduce the exponential complexity, but it does save you a noticeable factor.
You can also just keep the best solution found so far and check whether each new solution is smaller. This way you get the smallest solution without searching for it again. So the complexity would be O(2^n * n log n).
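A minimal sketch of both improvements in Python (is_solution is a hypothetical callback standing in for your O(n log n) check):

    from itertools import combinations

    def smallest_solution(numbers, S, is_solution):
        # Enumerate only the subsets of exactly size S: C(n, S) of them instead of 2^n.
        # A running minimum replaces any separate search over all solutions afterwards.
        best = None
        for subset in combinations(numbers, S):
            if is_solution(subset):            # stands in for the O(n log n) test
                total = sum(subset)
                if best is None or total < best[0]:
                    best = (total, subset)     # keep the smallest solution so far
        return best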
Related
Suppose that you found a solution to problem A and are trying to get some idea of its complexity. You solve A by calling your B subroutine a total of n^2 times and also doing a constant amount of additional work.
If B is selection sort, what is the time complexity of this solution?
If B is merge sort, what is the time complexity of this solution?
My answer to the 1st question is n^2 and to the 2nd one is n log n. Any feedback on my answers would be appreciated.
I assume that by "solution" you mean "algorithm", and by "this solution", you mean the algorithm that solves problem A by calling B n^2 times. Furthermore, I assume that by n you mean the size of the input.
Then if B is selection sort, which is an O(n^2) algorithm, the algorithm for solving A would be O(n^2 * n^2) = O(n^4).
If B is merge sort, which is O(n log n), the algorithm for solving A would be O(n^2 * n log n) = O(n^3 log n).
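Schematically, the structure being analyzed looks like this (a hypothetical skeleton; B is passed in as a parameter and is assumed to always receive an input of size n):

    def solve_A(items, B):
        # B is called a total of n^2 times, plus O(1) additional work
        n = len(items)
        for _ in range(n * n):
            B(list(items))   # O(n^2) per call for selection sort, O(n log n) for merge sort
        return n             # stand-in for the constant amount of extra work

For example, solve_A(data, sorted) would use Python's built-in O(n log n) sort as B.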
Yeah, you are right about the complexity of B itself:

O(B) = n^2 -> selection sort
O(B) = n log n -> merge sort

Multiplied by the n^2 calls, this gives the totals above.
I am trying to answer the following question:
Given n=2k find the complexity
func(n)
    if (n == 2) return 1;
    else return 1 + func(sqrt(n));
end
I think that because of the if-else statement it will loop n times for sure, but I'm confused by the recursive call func(sqrt(n)). Since the argument is square-rooted, I think the time complexity would be O(sqrt(n) * n) = O(n^(1/2) * n) = O(n^(3/2)) = O((2k)^(3/2)). However, the possible answer choices are only
O(k)
O(2^n)
O(n*k)
Can O((2k)^(3/2)) be considered as O(k)? I'm confused because although time complexities are often simplified, O(n) and O(n^2) are different, so I thought O((2k)^(3/2)) could only be simplified to O(k^(3/2)).
I’d argue that none of the answers here are the best answer to this question.
If you have a recursive function that does O(1) work per iteration and then reduces the size of its argument from n to √n, as is the case here, the runtime works out to O(log log n). The intuition behind this is that taking the square root of a number throws away half the digits of the number, roughly, so the runtime will be O(log d), where d is the number of digits of the input number. The number of digits of the number n is O(log n), hence the overall runtime of O(log log n).
In your case, you have n = 2k, so O(log log n) = O(log log k). (Unless you meant n = 2^k, in which case O(log log n) = O(log log 2^k) = O(log k).)
Notice that we don’t get O(n × √n) as our result. That would be what happens if we do O(√n) work O(n) times. But here, that’s not what we’re doing. The size of the input shrinks by a square root on each iteration, but that’s not the same thing as saying that we’re doing a square root amount of work. And the number of times that this happens isn’t O(n), since the value of n shrinks too rapidly for that.
Reasoning by analogy, the runtime of this code would be O(log n), not O(n × n / 2):
func(n):
    if n <= 2: return 1
    return func(n / 2)
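To see the O(log log n) bound concretely, here is a small Python check (an illustration, not part of the question) that counts the recursion depth of the square-root version:

    import math

    def depth(n):
        # the recursion from the question: shrink n to sqrt(n) on each call
        if n <= 2:
            return 1
        return 1 + depth(math.sqrt(n))

    # squaring n adds only one level: the depth grows like log2(log2(n)) + 1
    for n in [4, 16, 256, 65536, 2**32, 2**64]:
        print(n, depth(n), math.log2(math.log2(n)))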
In big-O notation for time complexity in algorithmic analysis, is O(n + k log n) the same as O(n log n) if k is larger than n? I am not entirely sure about this.
I am not 100% sure what you mean by N + K log N. I'm used to seeing K as a subset of N, for example "the top K items in N": for large N it is common to simply return the top K items, because then the big-O time is N log K, which is much faster than N log N (because K is the smaller number).
If you literally mean N + K log N, then that is not the same as N log N, since K adds to the total. As K goes to zero you are left with just N; and once K is larger than N, as in your question, the K log N term alone already exceeds N log N, which I hope makes it obvious that it is more complex.
I hope that does something to answer the question. I confess I feel like I might be missing the point here and if so I apologize.
No, in the specific case you’re mentioning these are not the same. For example, consider this algorithm: given an array of length N and a number K ≥ N, do a linear scan over the array, then do K binary searches on the array. How much work is done here? Well, the linear search takes time O(N), and the K binary searches collectively take time O(K log N), so the total work done is O(N + K log N).
However, the work here is not O(N log N). Since K can be arbitrarily large, the value of K log N can exceed the value of N log N by an arbitrary amount. A different way of seeing this: a bound of O(N log N) means that the runtime depends purely on N and not on K. But that can’t be the case here, since cranking K way, way up definitely increases the runtime, independently of what N is.
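A runnable sketch of exactly that algorithm (the function name is made up for illustration):

    from bisect import bisect_left

    def scan_then_search(arr, queries):
        # arr must be sorted; len(arr) = N, len(queries) = K
        total = sum(arr)                                     # the linear scan: O(N)
        positions = [bisect_left(arr, q) for q in queries]   # K binary searches: O(K log N)
        return total, positions                              # total work: O(N + K log N)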
Hope this helps!
I read it as N + (K log N), where N is the total count and K is the subset count. Now, assuming K is very small compared to N (possibly a constant, e.g. to get the top K numbers from a varying N), it reduces to linear time.
For example, to get top 100 items from array of 10000 elements
10000 + (100 * log2(10000)) = 10000 + 1300 (approximately)
Now when N is 20000, the K log N term only grows to about 1400.
So as N increases linearly, the K log N term increases only logarithmically, which keeps the overall complexity linear.
O(N + (K log N)) is then approximately O(N).
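A quick check of that claim in Python, with K fixed at 100:

    import math

    K = 100
    for N in [10_000, 20_000, 40_000, 80_000]:
        print(N, round(N + K * math.log2(N)))   # grows essentially linearly in N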
My Lua function:
local neighborsFoundInPosition = {}
local neighborsFoundInPositionCount = 0

for y = userPosY + radius, userPosY - radius, -1 do
  for x = userPosX - radius, userPosX + radius, 1 do
    -- peek at the first element of the list stored under this grid cell
    local oneNeighborFound = redis.call('lrange', userPosZone .. x .. y, '0', '0')
    if next(oneNeighborFound) ~= nil then
      table.insert(neighborsFoundInPosition, userPosZone .. x .. y)
      neighborsFoundInPositionCount = neighborsFoundInPositionCount + 1
    end
  end
end
Which leads to this formula for the number of cells checked: (2n + 1)^2, with n = radius.
If I understand it correctly, that would be a time complexity of O(n^2).
How can I compare this to the time complexity of the GEORADIUS (Redis) with O(N+log(M))? https://redis.io/commands/GEORADIUS
Time complexity: O(N+log(M)) where N is the number of elements inside the bounding box of the circular area delimited by center and radius and M is the number of items inside the index.
My time complexity does not have a M. I do not know how many items are in the index (M) because I do not need to know that. My index changes often, almost with every request and can be large.
Which time complexity is better, and in which situations?
Assuming N and M were independent variables, I would treat O(N + log M) the same way you treat O(N^3 - 7N^2 - 12N + 42): the latter becomes O(N^3) simply because that's the term that has the most effect on the outcome.
This is especially true as time complexity analysis is not really a case of considering runtime. Runtime has to take into account the lesser terms for specific limitations of N. For example, if your algorithm runtime can be expressed as runtime = N^2 + 9999999999N, and N is always in the range [1, 4], it's the second term that's more important, not the first.
It's better to think of complexity analysis as what happens as N approaches infinity. With the O(N + log M) one, think about what happens when you:
double N?
double M?
The first has a much greater impact so I would simply convert the complexity to O(N).
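For instance, a throwaway Python illustration of that doubling comparison:

    import math

    def cost(N, M):
        return N + math.log2(M)

    print(cost(1000, 10**6), cost(2000, 10**6))    # doubling N roughly doubles the cost
    print(cost(1000, 10**6), cost(1000, 10**12))   # even squaring M barely moves it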
However, you'll hopefully have noticed the use of the word "independent" in my first paragraph. The only sticking point to my suggestion would be if M was actually some function of N, in which case it may become the more important term.
Any function that reversed the impact of the log M would do this, such as the equality M = 10^(10^(10N)).
What I am trying to do is to sort the following functions:
n, n^3, n log n, n / log n, n / log^2 n, sqrt(n), sqrt(n^3)
in increasing order of asymptotic growth.
What I did is,
n / log n, n / log^2 n, sqrt(n), n, sqrt(n^3), n log n, n^3.
1) Is my answer correct?
2) I know the time complexity of basic functions such as n, n log n, n^2, but I am really confused about functions like n / log n and sqrt(n^3).
How should I figure out which one grows faster or slower? Is there any way to do this with mathematical calculations?
3) Are big-O time complexity and asymptotic growth different things?
I would really appreciate it if anyone could clear up my confusion... Thanks!
An important result we need here is:
log n grows more slowly than n^a for any exponent a > 0.
For a proof of the above, apply L'Hopital's rule to log n / n^a: the limit is (1/n) / (a n^(a-1)) = 1 / (a n^a) -> 0 as n -> infinity.
If we re-write sqrt(n^3) as n^1.5, we can see that n log n grows more slowly: divide both by n, leaving log n versus n^0.5, and apply the result above.
Similarly, n / log n grows more quickly than any n^b with b < 1; again this follows directly from the result above. Note that it is, however, slower than n by a factor of log n; the same goes for n / log^2 n.
Combining the above, we find the increasing order to be:
sqrt(n)
n / log^2 n
n / log n
n
n log n
sqrt(n^3)
n^3
So I'm afraid to say you got only a few of the orderings right.
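As a quick numerical sanity check (just an illustration; the asymptotic order only becomes visible for large n, so small inputs can show crossovers):

    import math

    funcs = {
        "sqrt(n)":    lambda n: math.sqrt(n),
        "n/log^2 n":  lambda n: n / math.log(n) ** 2,
        "n/log n":    lambda n: n / math.log(n),
        "n":          lambda n: float(n),
        "n log n":    lambda n: n * math.log(n),
        "sqrt(n^3)":  lambda n: n ** 1.5,
        "n^3":        lambda n: float(n) ** 3,
    }

    n = 10**6
    for name in sorted(funcs, key=lambda f: funcs[f](n)):
        print(name, funcs[name](n))

At n = 10^6 this prints the seven functions in exactly the increasing order listed above.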
EDIT: to answer your other questions:
If you take the limit of f(n) / g(n) as n -> infinity, then f(n) is asymptotically greater than g(n) if this limit is infinite, and asymptotically smaller if the limit is zero. This comes directly from the definitions of big-O and little-o.
Big-O is a method of classifying asymptotic growth, typically as the parameter approaches infinity.