I have an algorithm that is mainly composed of k-NN, followed by a computation involving finding permutations, followed by some for loops. Line by line, my computational complexity is:
O(n) - for k-NN
O(2^k) - for a part that computes singlets, pairs, triplets, etc.
O(k!) - for a part that deals with combinatorics.
O(k*k!) - for the final part.
k here is a parameter that can be chosen by the user; in general it is somewhat small (10-100). n is the number of examples in my dataset, and this can get very large.
What is the overall complexity of my algorithm? Is it simply O(n)?
As k <= 100, f(k) = O(1) for every function f: its value is bounded above by a constant, namely the maximum of f over the allowed range of k.
In your case the overall time is O(n + f(k)) for some function f of k alone, so it is simply O(n).
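Written out with explicit constants (a sketch, assuming the four stages run one after the other so their costs add, and using the stated bound k <= 100):

\[
T(n,k) \;\le\; c_1 n + c_2 2^{k} + c_3 k! + c_4\,k\cdot k!
       \;\le\; c_1 n + \underbrace{c_2 2^{100} + c_3\,100! + c_4\cdot 100\cdot 100!}_{\text{a (huge) constant}}
       \;=\; O(n).
\]

The constant is astronomically large, which is why the O(n) label can hide a lot of practical cost for the larger values of k.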
I found from various online sources that the time complexity for DTW is quadratic. On the other hand, I also found that standard kNN has linear time complexity. However, when pairing them together, does kNN-DTW have quadratic or cubic time?
In essence, does the time complexity of kNN solely depend on the metric used? I have not found any clear answer for this.
You need to be careful here. Let's say you have n time series in your 'training' set (let's call it that, even though you are not really training with kNN), each of length l. Computing the DTW between a pair of time series has an asymptotic complexity of O(l * m), where m is your maximum warping window. As m <= l, O(l^2) also holds (although there might be more efficient implementations, I don't think they are actually faster in practice in most cases, see here). Classifying a time series using kNN requires you to compute the distance between that time series and all time series in the training set, which means n comparisons, i.e. linear with respect to n.
So your final complexity would be in O(l * m * n) or O(l^2 * n). In words: the complexity is quadratic with respect to time series length and linear with respect to the number of training examples.
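To make the n, l, and m concrete, here is a minimal 1-NN-with-DTW sketch in Python (the function and variable names are mine and the band handling is simplified; it is meant to show where each factor of the O(n * l * m) bound comes from, not to be a production implementation):

import numpy as np

def dtw(a, b, window=None):
    # One DTW computation: O(l * m), with l = len(a) and m the warping window.
    l, lb = len(a), len(b)
    window = lb if window is None else window
    D = np.full((l + 1, lb + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, l + 1):
        lo, hi = max(1, i - window), min(lb, i + window)
        for j in range(lo, hi + 1):                      # at most m cells per row
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[l, lb]

def knn_dtw_classify(query, train_series, train_labels, k=1, window=None):
    # n DTW computations, one per training series: O(n * l * m) overall.
    dists = [dtw(query, s, window) for s in train_series]
    nearest = np.argsort(dists)[:k]
    labels = [train_labels[i] for i in nearest]
    return max(set(labels), key=labels.count)            # majority vote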
My Lua function:
for y = userPosY + radius, userPosY - radius, -1 do
    for x = userPosX - radius, userPosX + radius, 1 do
        -- peek at the first element of the list stored under this grid cell's key
        local oneNeighborFound = redis.call('lrange', userPosZone .. x .. y, '0', '0')
        if next(oneNeighborFound) ~= nil then
            table.insert(neighborsFoundInPosition, userPosZone .. x .. y)
            neighborsFoundInPositionCount = neighborsFoundInPositionCount + 1
        end
    end
end
Which leads to this formula: (2n+1)^2
If I understand it correctly, that would be a time complexity of O(n^2).
How can I compare this to the time complexity of the GEORADIUS (Redis) with O(N+log(M))? https://redis.io/commands/GEORADIUS
Time complexity: O(N+log(M)) where N is the number of elements inside the bounding box of the circular area delimited by center and radius and M is the number of items inside the index.
My time complexity does not have an M. I do not know how many items are in the index (M) because I do not need to know that. My index changes often, almost with every request, and can be large.
Which time complexity is better, and when?
Assuming N and M were independent variables, I would treat O(N + log M) the same way you treat O(N^3 - 7N^2 - 12N + 42): the latter becomes O(N^3) simply because that's the term that has the most effect on the outcome.
This is especially true as time complexity analysis is not really a case of considering runtime. Runtime has to take into account the lesser terms for specific limitations of N. For example, if your algorithm runtime can be expressed as runtime = N^2 + 9999999999N, and N is always in the range [1, 4], it's the second term that's more important, not the first.
It's better to think of complexity analysis as what happens as N approaches infinity. With the O(N + log M) one, think about what happens when you:
double N?
double M?
The first has a much greater impact so I would simply convert the complexity to O(N).
However, you'll hopefully have noticed the use of the word "independent" in my first paragraph. The only sticking point to my suggestion would be if M was actually some function of N, in which case it may become the more important term.
Any function that reverses the impact of the log M would do this, such as the equality M = 10^(10^(10^N)).
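The "double N, double M" thought experiment above is easy to check numerically; a throwaway sketch (the concrete numbers are made up, only the shape of N + log M matters):

import math

def cost(n, m):
    # the GEORADIUS-style bound: N + log(M)
    return n + math.log2(m)

base = cost(10_000, 1_000_000)
print(cost(20_000, 1_000_000) / base)   # doubling N: cost roughly doubles
print(cost(10_000, 2_000_000) / base)   # doubling M: cost barely moves (the log gains +1)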
I understand why, in the worst case (where T is the running time of the algorithm), using the median-of-medians algorithm with blocks of size three gives the recurrence relation
T(n) = T(2n / 3) + T(n / 3) + O(n)
The Wikipedia article for the median-of-medians algorithm says that with blocks of size three the runtime is not O(n) because it still needs to check all n elements. I don't quite understand this explanation, and in my homework it says I need to show it by induction.
How would I show that median-of-medians takes time Ω(n log n) in this case?
Since this is a homework problem I'm going to let you figure out a rigorous proof of this result on your own, but it might be helpful to think about this one by looking at the shape of the recursion tree, which will be something like this:
                    n                          Total work: n
             2n/3        n/3                   Total work: n
        4n/9   2n/9   2n/9   n/9               Total work: n
Essentially, each node's children collectively will do the exact same amount of work as the node itself, so if you sum up the work done across the layers, you should see roughly linear work done per level. It won't be exactly linear work per level because eventually the smaller call starts to bottom out, but for the top layers you'll see this pattern hold.
You can formalize this by induction by guessing that the runtime is something of the form cn log n, possibly with some lower-order terms added in, but (IMHO) it's more important and instructive to see where the runtime comes from than it is to be able to prove it inductively.
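If you want a quick sanity check before writing the induction, you can also evaluate the recurrence numerically; a rough sketch (the floors and the base case are my own choices, and this is evidence, not a proof):

from functools import lru_cache
import math

@lru_cache(maxsize=None)
def T(n):
    # T(n) = T(2n/3) + T(n/3) + n, with a trivial base case
    if n <= 1:
        return 1
    return T(2 * n // 3) + T(n // 3) + n

for n in (10**3, 10**4, 10**5, 10**6):
    print(n, T(n) / (n * math.log2(n)))   # the ratio settles near a constant

Seeing T(n) / (n log n) level off is consistent with the Theta(n log n) growth you are asked to prove.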
If we add the fractional parts of the two recursive calls, 2/3 + 1/3 = 1, so the total work does not shrink from one level of the recursion to the next. The standard Master theorem does not apply directly to T(n) = T(2n/3) + T(n/3) + O(n) because there are two different subproblem sizes, but the Akra-Bazzi method does: the exponent p solving (2/3)^p + (1/3)^p = 1 is p = 1, and the remaining integral contributes a logarithmic factor, so T(n) = Theta(n log n).
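Spelled out, that calculation looks like this (my own write-up of the Akra-Bazzi step, not part of the original argument):

\[
\left(\tfrac{2}{3}\right)^p + \left(\tfrac{1}{3}\right)^p = 1 \;\Rightarrow\; p = 1,
\qquad
T(n) = \Theta\!\left(n^{p}\left(1 + \int_{1}^{n}\frac{u}{u^{p+1}}\,du\right)\right)
     = \Theta\!\left(n\,(1 + \ln n)\right)
     = \Theta(n \log n).
\]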
Isn't O(n) an improvement over O(1 + n)?
This is my conception of the difference:
O(n):
for i=0 to n do ; print i ;
O(1 + n):
a = 1;
for i=0 to n do ; print i+a ;
... which would just reduce to O(n), right?
If the target time complexity is O(1 + n), but I have a solution in O(n),
does this mean I'm doing something wrong?
Thanks.
O(1+n) and O(n) are mathematically identical, as you can straightforwardly prove from the formal definition or using the standard rule that O( a(n) + b(n) ) is equal to the bigger of O(a(n)) and O(b(n)).
In practice, of course, if you do n+1 things it'll (usually, dependent on compiler optimizations/etc) take longer than if you only do n things. But big-O notation is the wrong tool to talk about those differences, because it explicitly throws away differences like that.
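For completeness, one way to write the formal-definition argument out (just a sketch):

\[
f(n) \le c\,(1+n) \le c\,(n+n) = 2c\,n \quad\text{for all } n \ge \max(n_0, 1),
\]

so any f in O(1 + n) is also in O(n); the reverse inclusion is immediate because n <= 1 + n, hence O(1 + n) = O(n).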
It's not an improvement because Big-O doesn't describe the exact running time of your algorithm but rather its growth rate. Big-O therefore describes a class of functions, not a single function. O(n^2) doesn't mean that your algorithm for an input of size 2 will run in 4 operations; it means that if you were to plot the running time of your application as a function of n, it would be asymptotically upper bounded by c*n^2 starting at some n0. This is nice because we know how much slower our algorithm will get as the input grows, but we don't really know exactly how fast it will be. Why use the c? Because as I said we don't care about exact numbers but about the shape of the function: when we multiply by a constant factor, the shape stays the same.
Isn't O(n) an improvement over O(1 + n)?
No, it is not. Asymptotically these two are identical. In fact, O(n) is identical to O(n+k) where k is any constant value.
for (i = 0; i < n; i++)
{
    enumerate all subsets of size i        // 2^n subsets in total over the loop
    each subset of size i takes o(nlogn) to search a solution
    from all these solutions I want to search the minimum subset of size S
}
I want to know the complexity of this algorithm. Is it 2^n O(nlogn*n) = o(2^n n²)?
If I understand you right:
You iterate over all subsets of a sorted set of n numbers.
For each subset you test in O(n log n) whether it is a solution (however you do this).
Once you have all these solutions, you look for the one with exactly S elements and the smallest sum.
The way you write it, the complexity would be O(2^n * n log n) * O(log(2^n)) = O(2^n * n^2 log n). The O(log(2^n)) = O(n) factor is for searching the minimum solution, and you do this in every round of the for loop; the worst case is i = n/2 with every subset being a solution.
Now I'm not sure whether you are mixing up O() and o().
2^n O(nlogn*n) = o(2^n n²) is only right if you mean 2^n O(n log(n*n)).
f = O(g) means the complexity of f is not bigger than the complexity of g.
f = o(g) means the complexity of f is strictly smaller than the complexity of g.
So 2^n O(n log(n*n)) = O(2^n n log(n^2)) = O(2^n n * 2 log n) = O(2^n n log n) < O(2^n n^2).
Notice: O(g) = o(h) is never a good notation. If g = o(h), you will (most likely) be able to find a function f with f = o(h) but f != O(g).
Improvements:
If I understand your algorithm right, you can speed it up a little. You know the size of the subset you are looking for, so only look at the subsets that have size S. The worst case is S = n/2, and C(n, n/2) is still on the order of 2^n / sqrt(n), so this does not change the exponential complexity, but it does save you a noticeable factor.
You can also just keep the best solution found so far and check whether the next solution is smaller. This way you get the smallest solution without searching for it again, so the complexity would be O(2^n * n log n).
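A sketch of that improvement in Python (is_solution is a placeholder, since the original post doesn't say what "search a solution" actually does; the point is the combination of size-S enumeration and a running minimum):

from itertools import combinations

def smallest_valid_subset(items, S, is_solution):
    # Check only the C(n, S) subsets of size S and keep the best one seen so far,
    # instead of enumerating all 2^n subsets and searching the results again.
    best = None
    for subset in combinations(items, S):     # C(n, S) subsets
        if is_solution(subset):               # assumed O(n log n) test per subset
            if best is None or sum(subset) < sum(best):
                best = subset                 # running minimum: no second pass needed
    return best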