O(n) + O(n^2) = O(n^2)? - time-complexity

We know that O(n) + O(n) = O(n), and even that O(n) + O(n) + ... + O(n), summed n times, is O(n^2).
But what happens with O(n) + O(n^2)?
Is it O(n) or O(n^2)?

Big O notation (https://en.wikipedia.org/wiki/Big_O_notation) is used to describe the limiting behavior of a specific algorithm, i.e. how fast its complexity grows. Therefore, when considering the growth of a linear and a quadratic component of an algorithm, what remains in Big O notation is only the quadratic component.
As you can see from the attached image, the quadratic curve grows much faster (along the y-axis) than the linear curve, so the overall tendency of the complexity for that algorithm is determined by the quadratic curve alone, hence O(n^2).
The case O(n) + O(n) = O(n) is due to the fact that any constant factor in Big O notation can be discarded: the curves y = n and y = 2n grow just as fast asymptotically (although with a different slope).
The case O(n) + ... + O(n) = O(n^2) is not generally true! With k terms, the actual complexity would be O(k*n). Only if the number of terms k equals the size of your input n do you end up with the quadratic case.
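A minimal numerical sketch (using a hypothetical cost function, counting abstract "operations" rather than timing real code) makes the dominance of the quadratic term concrete: the ratio of n + n^2 to n^2 alone approaches 1 as n grows, so the linear term is asymptotically negligible.

```python
# Compare a hypothetical cost of n + n^2 operations against n^2 alone:
# the ratio approaches 1 as n grows, which is why
# O(n) + O(n^2) = O(n^2).
def linear_plus_quadratic(n):
    return n + n * n

for n in (10, 1_000, 1_000_000):
    ratio = linear_plus_quadratic(n) / (n * n)
    print(n, ratio)
# The ratio is 1 + 1/n, so for n = 1_000_000 it is 1.000001.
```

The same check with, say, 1000n + n^2 would converge more slowly but still converge to 1, which is why constant factors on the linear term don't change the conclusion.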

Related

an algorithm that is Theta(n) is also O(n^2), is this correct?

As Theta(n) is about both the upper and the lower bound, this question confuses me.
I am sure O(n) is contained in O(n^2), but is Omega(n) also in O(n^2)?
Bear in mind that O(f), Theta(f), and Omega(f) are sets of functions.
O(n) is the set of functions that asymptotically grow at most as fast as n (modulo a constant factor), so O(n) is a proper subset of O(n^2).
Omega(n) is the set of functions that asymptotically grow at least as fast as n, so it is definitely not a subset of O(n^2). But it has a non-empty intersection with it, for example 0.5n and 7n^2 are in both sets.

Computational complexity depending on two variables

I have an algorithm that is mainly composed of k-NN, followed by a computation involving finding permutations, followed by some for loops. Line by line, my computational complexity is:
O(n) - for k-NN
O(2^k) - for a part that computes singlets, pairs, triplets, etc.
O(k!) - for a part that deals with combinatorics.
O(k*k!) - for the final part.
k here is a parameter that can be chosen by the user; in general it is somewhat small (10-100). n is the number of examples in my dataset, and this can get very large.
What is the overall complexity of my algorithm? Is it simply O(n) ?
As k <= 100, f(k) = O(1) for every function f.
In your case, there is a function f such that the overall time is O(n + f(k)), so it is O(n).
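A sketch of the operation count makes this concrete. The function below models the hypothetical pipeline from the question (an O(n) k-NN pass plus parts whose cost depends only on k); once k is capped by a constant, the k-dependent work is O(1) and the total grows linearly in n.

```python
# Model the total operation count n + f(k), where
# f(k) = 2^k + k! + k*k! covers the k-dependent parts.
from math import factorial

def total_ops(n, k):
    knn = n                                                 # O(n)
    combinatorial = 2**k + factorial(k) + k * factorial(k)  # f(k)
    return knn + combinatorial

# For a fixed (capped) k, doubling n roughly doubles the total
# once n dominates the constant f(k) term:
k = 10
ratio = total_ops(2 * 10**9, k) / total_ops(10**9, k)
print(ratio)  # close to 2, confirming linear growth in n
```

Note the flip side: for fixed n, this count explodes in k (k! alone exceeds 10^157 at k = 100), so "O(n)" is only a useful summary when k really is bounded.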

Is O(nm + n^2 log n) polynomial time?

If an algorithm runs in O(nm + n^2 log n) time, can you then say it runs in polynomial time?
I know that O(n log n) is O(n^2), and therefore polynomial time. I'm just not sure how the n^2 log n part works.
Remember that O(n) means "upper-bounded by n". If a function T(n) is O(n), then n*T(n) is O(n^2).
Of course, you can also multiply T(n) by some other function that is O(n), not necessarily f(n) = n. So T(n)*O(n) is also O(n^2).
If you know that O(n log n) is O(n^2), then you can multiply both by a function that is O(n) and arrive at the conclusion that O(n^2 log n) is O(n^3), which is polynomial.
Finally, if O(a), O(b) and O(c) are all polynomial, then O(a + b + c) is also polynomial, because the sum can be upper-bounded by the term that grows fastest.
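The chain of bounds above can be checked numerically for the cost function from the question (a sketch under the assumption that m grows no faster than n, so that nm <= n^2 and the whole expression is upper-bounded by n^3):

```python
# Verify numerically that n*m + n^2*log(n) stays below n^3
# when m <= n: then n*m <= n^2 and n^2*log(n) <= n^3 (since
# log(n) <= n for n >= 1), so the sum is polynomial, O(n^3).
from math import log

def T(n, m):
    return n * m + n * n * log(n)

for n in (10, 100, 1000):
    m = n  # assumption for this sketch: m grows no faster than n
    assert T(n, m) <= n**3
```

If m can grow independently of n, the bound is simply stated in both variables: nm + n^2 log n is polynomial in n and m jointly, which is still polynomial time.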

Big O time complexity

I have a time complexity T(n) = 6n + xn, and apparently the Big O complexity is O(n^2), but I thought it would be O(n). I would like to understand why it is O(n^2).
T(n) = O(g(n)) in computer science means that there exist positive constants c and n0 such that T(n) <= c*g(n) for all n >= n0.
So evidently your T(n) function belongs to the set O(n^2).
But the main question is: does your x in T(n) depend on the input size n?
If the answer is yes (say x grows like n), then it is clear that T(n) = 6n + xn belongs to the set O(n^2).
If the answer is no and x is just a constant factor, then T(n) of course still belongs to O(n^2) (a loose upper bound), but the tighter bound is that T(n) belongs to O(n), because T(n) = O(n) + O(n), which is just O(n).
Because we are talking about upper limits (big O notation), it is correct to say that an O(n) function also belongs to the set O(n^2), if all we care about is that our algorithm, even in its worst case, performs within O(n^2) time.
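The two cases can be sketched side by side (hypothetical values for x; counting abstract cost, not real runtime): with a constant x, T(n) = 6n + xn grows linearly, while with x = n it grows quadratically.

```python
# Case 1: x is a constant (here x = 5) -> tight bound O(n).
def T_const_x(n, x=5):
    return 6 * n + x * n

# Case 2: x depends on n (here x = n) -> tight bound O(n^2).
def T_x_is_n(n):
    return 6 * n + n * n

# With constant x, doubling n exactly doubles T (linear growth):
assert T_const_x(2000) == 2 * T_const_x(1000)
# With x = n, doubling n roughly quadruples T for large n:
assert T_x_is_n(2000) / T_x_is_n(1000) > 3.9
```

The doubling test is a handy empirical heuristic: an O(n) cost scales by ~2 when n doubles, an O(n^2) cost by ~4.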
Hope this helps

Practical difference between O(n) and O(1 + n)?

Isn't O(n) an improvement over O(1 + n)?
This is my conception of the difference:
O(n):
for i=0 to n do ; print i ;
O(1 + n):
a = 1;
for i=0 to n do ; print i+a ;
... which would just reduce to O(n) right?
If the target time complexity is O(1 + n), but I have a solution in O(n),
does this mean I'm doing something wrong?
Thanks.
O(1+n) and O(n) are mathematically identical, as you can straightforwardly prove from the formal definition or using the standard rule that O( a(n) + b(n) ) is equal to the bigger of O(a(n)) and O(b(n)).
In practice, of course, if you do n+1 things it'll (usually, dependent on compiler optimizations/etc) take longer than if you only do n things. But big-O notation is the wrong tool to talk about those differences, because it explicitly throws away differences like that.
It's not an improvement, because Big O doesn't describe the exact running time of your algorithm but rather its growth rate. Big O therefore describes a class of functions, not a single function. O(n^2) doesn't mean that your algorithm will run in 4 operations for an input of size 2; it means that if you were to plot the running time of your application as a function of n, it would be asymptotically upper-bounded by c*n^2 starting at some n0. This is nice because we know how much slower our algorithm gets as the input size grows, but we don't really know exactly how fast it will be. Why the c? Because, as I said, we don't care about exact numbers but about the shape of the function: when we multiply by a constant factor, the shape stays the same.
Isn't O(n) an improvement over O(1 + n)?
No, it is not. Asymptotically these two are identical. In fact, O(n) is identical to O(n+k) where k is any constant value.
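The equality O(1 + n) = O(n) can be sketched directly from the formal definition: each side upper-bounds the other with a suitable constant (example constants chosen by hand, valid for n >= 1).

```python
# f is O(g) (over the sampled range) if f(n) <= c*g(n) for all n.
def bounded_by(f, g, c, ns):
    return all(f(n) <= c * g(n) for n in ns)

ns = range(1, 10_000)
f = lambda n: 1 + n          # the "1 + n" cost
# 1 + n <= 2n for all n >= 1, so f is O(n) with c = 2:
assert bounded_by(f, lambda n: n, 2, ns)
# and trivially n <= 1*(1 + n), so any O(n) cost is also O(1 + n):
assert bounded_by(lambda n: n, f, 1, ns)
```

Since each set is contained in the other, they are the same set, which is why neither notation is an "improvement" over the other.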