Time complexity of sequential operations with different parameters

I have a function that has two sequential operations.
I computed the time complexity of each operation:
O(n) + O(kn^(1-1/k))
What is the total time complexity of the function? Is it correct to say that it is O(n + kn^(1-1/k))?
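For what it's worth, here is the sum worked out (a sketch only; it assumes the two operations run one after the other, so their costs simply add):

    T(n) = O(n) + O(k\,n^{1-1/k}) = O\left(n + k\,n^{1-1/k}\right)

Whether this simplifies further depends on k: if k is a fixed constant, then n^{1-1/k} = o(n) and the bound collapses to O(n); if k is allowed to grow with n (say k = log n), the second term can dominate, so both terms should be kept.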

Related

Amortized time complexity

I was working on a problem where we are supposed to give an example of an algorithm whose time complexity is O(n^2), but whose amortized time complexity is less than that. My immediate thought is nested loops, but I'm not exactly sure of what an example of that would look like where the result was amortized. Any insights would be greatly appreciated!
Consider the Add method on a Vector (resizable array) data structure. Once the current capacity of the array is exceeded, we must increase the capacity by making a larger array and copying everything over. Typically, you'd just double the capacity in such cases, giving rise to a worst-case O(n) Add but an O(1) amortized Add. Instead of doubling, we're of course free to increase the capacity by squaring it (provided the initial capacity is greater than one). This means that, every now and then, an Add will take O(n^2) time, since just allocating and initializing the new array of capacity n^2 costs that much; but the resizes become so rare, and such an overwhelming majority of Adds take O(1) time, that the amortized cost stays far below the O(n^2) worst case.
Combining variations on this idea with the multiplicative effect that putting code into loops has on complexity, it's probably possible to find an example where the worst-case time complexity is O(f) and the amortized complexity is O(g), for any f and g where g is O(f).
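A minimal Python sketch of the squaring-capacity vector described above (the class name and the cost accounting are mine, purely for illustration; a resize is charged for allocating and initializing the new array plus copying the old elements):

    class SquaringVector:
        """Resizable array that squares its capacity whenever it overflows (illustrative sketch)."""

        def __init__(self, initial_capacity=2):
            assert initial_capacity > 1, "squaring only grows the array if capacity > 1"
            self._data = [None] * initial_capacity
            self._size = 0
            self.last_append_cost = 0  # rough count of elementary steps taken by the last append

        def append(self, value):
            cost = 1  # writing the new element
            if self._size == len(self._data):
                new_capacity = len(self._data) ** 2
                new_data = [None] * new_capacity       # allocating/initializing: ~ size^2 steps
                new_data[:self._size] = self._data     # copying the old elements: ~ size steps
                self._data = new_data
                cost += new_capacity + self._size
            self._data[self._size] = value
            self._size += 1
            self.last_append_cost = cost

    v = SquaringVector()
    expensive = []
    for i in range(1_000):
        v.append(i)
        if v.last_append_cost > 1:
            expensive.append((i, v.last_append_cost))
    # Only a handful of appends are expensive (they happen at sizes 2, 4, 16, 256, ...),
    # and the gap between consecutive expensive appends squares each time.
    print(expensive)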

kNN-DTW time complexity

I found from various online sources that the time complexity for DTW is quadratic. On the other hand, I also found that standard kNN has linear time complexity. However, when pairing them together, does kNN-DTW have quadratic or cubic time?
In essence, does the time complexity of kNN solely depend on the metric used? I have not found any clear answer for this.
You need to be careful here. Let's say you have n time series of length l in your 'training' set (let's call it that, even though kNN does not really train anything). Computing the DTW between a pair of time series has an asymptotic complexity of O(l * m), where m is your maximum warping window. Since m <= l, O(l^2) also holds. (Although there might be more efficient implementations, I don't think they are actually faster in practice in most cases, see here.) Classifying a time series using kNN requires you to compute the distance between that time series and every time series in the training set, which means n comparisons, i.e. linear with respect to n.
So your final complexity would be O(l * m * n), or O(l^2 * n). In words: the complexity is quadratic with respect to the time series length and linear with respect to the number of training examples.
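A rough Python sketch of where those factors come from (the function names are mine, and this is a plain dynamic-programming DTW with a Sakoe-Chiba band, not an optimized implementation):

    import math

    def dtw(a, b, window):
        """DTW distance between sequences a and b, restricted to a warping window.
        The double loop fills O(len(a) * window) cells, i.e. O(l * m)."""
        la, lb = len(a), len(b)
        window = max(window, abs(la - lb))  # the band must at least cover the length difference
        INF = float("inf")
        D = [[INF] * (lb + 1) for _ in range(la + 1)]
        D[0][0] = 0.0
        for i in range(1, la + 1):
            for j in range(max(1, i - window), min(lb, i + window) + 1):
                cost = (a[i - 1] - b[j - 1]) ** 2
                D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
        return math.sqrt(D[la][lb])

    def knn_dtw_classify(query, train_series, train_labels, window, k=1):
        """k-NN by DTW: one O(l * m) DTW per training series, so O(n * l * m) per query."""
        dists = sorted((dtw(query, s, window), y) for s, y in zip(train_series, train_labels))
        top = [y for _, y in dists[:k]]
        return max(set(top), key=top.count)  # majority vote among the k nearest

    # Toy usage: two classes of short series.
    train = [[0, 1, 2, 3], [1, 2, 3, 4], [3, 2, 1, 0], [4, 3, 2, 1]]
    labels = ["up", "up", "down", "down"]
    print(knn_dtw_classify([0, 1, 1, 3], train, labels, window=2))  # -> "up"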

Computational complexity depending on two variables

I have an algorithm and it is mainly composed of k-NN, followed by a computation involving finding permutations, followed by some for loops. Line by line, my computational complexity is:
O(n) - for k-NN
O(2^k) - for a part that computes singlets, pairs, triplets, etc.
O(k!) - for a part that deals with combinatorics.
O(k*k!) - for the final part.
k here is a parameter that can be chosen by the user; in general it is fairly small (10-100). n is the number of examples in my dataset, and this can get very large.
What is the overall complexity of my algorithm? Is it simply O(n) ?
Since k <= 100, f(k) = O(1) for every function f.
In your case, there is a function f such that the overall running time is O(n + f(k)), so it is O(n).
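To spell that out (a sketch only; it assumes the four parts listed above run sequentially, so their costs add):

    T(n, k) = O(n) + O(2^k) + O(k!) + O(k \cdot k!) = O(n + k \cdot k!)

Because k is bounded by a constant (at most 100 here), k * k! is itself a constant, albeit an astronomically large one, so the whole expression is O(n). If k were allowed to grow with n, you would have to keep the k * k! term.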

How to determine the time complexity (e.g. O(n)) of an algorithm?

What I have done:
I measured the time spent processing 100, 1000, 10000, 100000, 1000000 items.
Measurements here: https://github.com/DimaBond174/cache_single_thread
Then I assumed that O(n) increases in proportion to n, and calculated the remaining algorithms with respect to O(n).
Having time measurements for processing 100, 1000, 10000, 100000, and 1000000 items, how can we now attribute the algorithm to O(1), O(log n), O(n), O(n log n), or O(n^2)?
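One way to turn such measurements into a guess (a sketch only; the timing numbers below are made up and are not the ones from the linked repository): if t(n) is roughly c * f(n) for some candidate growth function f, then t(n) / f(n) should be roughly the same constant for every n, so pick the candidate whose ratios vary the least.

    import math

    # Hypothetical measurements: (n, seconds). Replace them with your own numbers.
    measurements = [(100, 0.00023), (1_000, 0.0035), (10_000, 0.046),
                    (100_000, 0.58), (1_000_000, 6.9)]

    candidates = {
        "O(1)":       lambda n: 1.0,
        "O(log n)":   lambda n: math.log(n),
        "O(n)":       lambda n: float(n),
        "O(n log n)": lambda n: n * math.log(n),
        "O(n^2)":     lambda n: float(n) ** 2,
    }

    def spread(f):
        """How much the implied constant t(n) / f(n) varies across the measurements."""
        ratios = [t / f(n) for n, t in measurements]
        return max(ratios) / min(ratios)

    for name, f in candidates.items():
        print(f"{name:12s} spread = {spread(f):12.2f}")
    print("best fit:", min(candidates, key=lambda name: spread(candidates[name])))

With the made-up numbers above, the smallest spread belongs to O(n log n); real measurements are noisier, so this only works as a rough guide.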
Let's define N as the size of one of the possible inputs. An algorithm can have different big-O bounds depending on which input you're referring to, but generally there's only one big input that you care about. Without the algorithm in question, you can only guess. However, there are some guidelines that will help you determine which it is.
General Rule:
O(1) - the speed of the program barely changes regardless of the size of the data. To get this, a program must not have loops operating on the data in question at all.
O(log N) - the program slows down only slightly when N increases dramatically, following a logarithmic curve. To get this, loops must only go through a fraction of the data (for example, binary search).
O(N) - the program's speed is directly proportional to the size of the data input. You get this if you perform an operation on each unit of the data. You must not have any kind of nested loops (that act on the data).
O(N log N) - the program's speed is significantly reduced by larger input. This occurs when you have an O(log N) operation NESTED in a loop that would otherwise be O(N). For example, a loop that does a binary search for each unit of data.
O(N^2) - the program will slow to a crawl with larger input and eventually stall with large enough data. This happens when you have NESTED loops. Same as above, but this time the nested loop is O(N) instead of O(log N).
So, try to think of a looping operation as O(N) or O(log N). Then, whenever you have nesting, multiply them together. If the loops are NOT nested, they are not multiplied like this. So two loops separate from each other would simply be O(2N), which is just O(N), and not O(N^2).
Also remember that you may have loops under the hood, so you should think about them too. For example, if you do something like Arrays.sort(X) in Java, that is an O(N log N) operation. So if you have that inside a loop for some reason, your program is going to be a lot slower than you think.
Hope that answers your question.
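To make the nesting rule concrete, a small Python sketch (toy functions of my own, not taken from the question):

    import bisect

    def separate_loops(data):
        """O(N) + O(N) = O(2N) = O(N): the two loops are not nested."""
        total = sum(x for x in data)
        count = sum(1 for x in data if x > 0)
        return total, count

    def loop_with_binary_search(data, queries):
        """O(N log N): an O(log N) binary search nested inside an O(N) loop
        (assuming len(queries) is about len(data)); the sort alone is already O(N log N)."""
        data = sorted(data)
        return [bisect.bisect_left(data, q) for q in queries]

    def nested_loops(data):
        """O(N^2): an O(N) loop nested inside an O(N) loop."""
        return sum(1 for x in data for y in data if x < y)

    data = list(range(50))
    print(separate_loops(data))
    print(loop_with_binary_search(data, data)[:5])
    print(nested_loops(data))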

Time complexity and number of operations

As defined (on Wikipedia), the time complexity of an algorithm quantifies the amount of time taken by the algorithm to run as a function of the length of the string representing the input.
Then how is it that we count the number of elementary operations and call that time complexity?
Doing so, we are not even thinking about the length of the string representing the input, are we?