Big O time complexity

I have a time complexity T(n) = 6n + xn, and apparently the Big O complexity is O(n^2), but I thought it would be O(n). I would like to understand why it is O(n^2).

T(n) = O(g(n)) in computer science means that there exist positive constants c and n0 such that T(n) <= c*g(n) for all n >= n0.
So evidently your T(n) function belongs to the set O(n^2).
But the main question is: does the 'x' in your T(n) depend on the input size n?
If the answer is yes (for example x = n), then it is clear that T(n) = 6n + xn belongs to the set O(n^2).
If the answer is no and x is just a constant factor, then T(n) of course also belongs to O(n^2) (a loose upper bound). But the tighter upper bound is that T(n) belongs to O(n), because T(n) = O(n) + O(n), which is just O(n).
Because we are talking about upper bounds (big O notation), it is correct to say that an O(n) function also belongs to the set O(n^2), if all we are interested in is that our algorithm, even in its worst case, performs within O(n^2) time.
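To make the two cases concrete, here is a small worked bound (my own illustration, assuming "x depends on n" means x = n and "constant" means x = 5):

If x = n:  T(n) = 6n + n*n = 6n + n^2 <= 7n^2 for all n >= 1, so T(n) is O(n^2).
If x = 5:  T(n) = 6n + 5n = 11n, so T(n) is O(n) (and, loosely, also O(n^2)).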
Hope this helps

Related

O(n) + O(n^2) = O(n^2)?

We know that O(n) + O(n) = O(n), and even that O(n) + O(n) + ... + O(n) = O(n^2).
But what happens with O(n) + O(n^2)?
Is it O(n) or O(n^2)?
Big O notation (https://en.wikipedia.org/wiki/Big_O_notation) is used to understand the limiting behaviour of a specific algorithm, that is, how fast its complexity grows. Therefore, when considering the growth of a linear and a quadratic component of an algorithm, what remains in Big O notation is only the quadratic component.
If you plot both curves, the quadratic curve grows much faster (along the y-axis) than the linear curve, so the general tendency of the complexity of that algorithm is determined by the quadratic curve alone, hence O(n^2).
The case O(n) + O(n) = O(n) is due to the fact that any constant in Big O notation can be discarded: the curves y = n and y = 2n grow just as fast (although with a different slope).
The case O(n) + ... + O(n) = O(n^2) is not generally true! For k such terms the actual complexity would be O(k*n). Only if the parameter k equals the size of your input n do you end up with the specific quadratic case.
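As a quick worked bound (my own illustration, with assumed constants a and b): if f(n) <= a*n and g(n) <= b*n^2 for all n >= n0, then

f(n) + g(n) <= a*n + b*n^2 <= (a + b)*n^2 for all n >= max(n0, 1)

so the sum is O(n^2); the linear part is absorbed into the quadratic bound.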

Is O(n * 2^n) the same as O(2^n)?

Would O(n * 2^n) simplify to O(2^n) in Big-O notation?
My intuition is that it would not, even though O(2^n) is significantly worse than O(n).
O(n * 2^n) is not equal to O(2^n) and is much worse than O(2^n).
Your intuition is correct: O(n * 2^n) is not equal to O(2^n), and you can see that from the definition of big-O. If n * 2^n were O(2^n), there would have to be constants k and n0 such that
n * 2^n <= k * 2^n for all n >= n0
which simplifies to n <= k. But for any fixed k, taking n = k + 1 (or larger, if needed to exceed n0) shows that the inequality cannot hold.
One easy way to elucidate this is to compute the quotient of the two quantities and let n tend to infinity. In this case the quotient is
n*2^n / 2^n = n
which tends to infinity as n goes to infinity. Since the limit is not bounded by any constant, n*2^n grows much faster than 2^n.
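A tiny Python check (purely illustrative, not from the original answer) makes the gap visible; the printed ratio is exactly n and keeps growing:

# compare n * 2^n against 2^n for a few values of n
for n in (5, 10, 20, 40):
    print(n, 2**n, n * 2**n, (n * 2**n) // 2**n)   # the ratio is just n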

An algorithm that is Theta(n) is also O(n^2), is this correct?

As Theta(n) gives both an upper and a lower bound, this question confused me.
I am sure that something which is O(n) is also O(n^2), but is Omega(n) also O(n^2)?
Bear in mind that O(f), Theta(f), Omega(f) are sets of functions.
O(n) is the set of functions that asymptotically grow at most as fast as n (modulo a constant factor), so O(n) is a proper subset of O(n^2).
Omega(n) is the set of functions that asymptotically grow at least as fast as n, so it is definitely not a subset of O(n^2). But it has a non-empty intersection with it, for example 0.5n and 7n^2 are in both sets.
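To tie this back to the original question, a small worked example (my own illustration): take f(n) = 3n, which is Theta(n). Then

3n <= 3*n^2 for all n >= 1

so f is also in O(n^2). More generally, Theta(n) is the intersection of O(n) and Omega(n), and since O(n) is a subset of O(n^2), every Theta(n) algorithm is indeed also O(n^2).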

Practical difference between O(n) and O(1 + n)?

Isn't O(n) an improvement over O(1 + n)?
This is my conception of the difference:
O(n):
for i=0 to n do ; print i ;
O(1 + n):
a = 1;
for i=0 to n do ; print i+a ;
... which would just reduce to O(n), right?
If the target time complexity is O(1 + n), but I have a solution in O(n),
does this mean I'm doing something wrong?
Thanks.
O(1+n) and O(n) are mathematically identical, as you can straightforwardly prove from the formal definition or using the standard rule that O( a(n) + b(n) ) is equal to the bigger of O(a(n)) and O(b(n)).
In practice, of course, if you do n+1 things it'll (usually, dependent on compiler optimizations/etc) take longer than if you only do n things. But big-O notation is the wrong tool to talk about those differences, because it explicitly throws away differences like that.
It's not an improvement, because big-O doesn't describe the exact running time of your algorithm but rather its growth rate. Big-O therefore describes a class of functions, not a single function. O(n^2) doesn't mean that your algorithm will run in 4 operations for an input of size 2; it means that if you were to plot the running time of your application as a function of n, it would be asymptotically bounded above by c*n^2 starting at some n0. This is nice because we know how the running time scales with the input size, even though we don't know exactly how fast the algorithm is. Why use the constant c? Because, as I said, we don't care about exact numbers but rather about the shape of the function: when we multiply by a constant factor, the shape stays the same.
Isn't O(n) an improvement over O(1 + n)?
No, it is not. Asymptotically these two are identical. In fact, O(n) is identical to O(n+k) where k is any constant value.
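A one-line worked bound (my own illustration): for all n >= 1,

1 + n <= n + n = 2n

so anything that is O(1 + n) is also O(n) with constant factor 2; and since n <= 1 + n, the reverse containment holds too, so the two classes coincide.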

Is O(1000n) = O(n) when n >> 1000

If it is explicitly given that n >> 1000, can O(1000n) be considered O(n)?
In other words, if we are asked to solve a problem (which also states that n >> 1000) in O(n) and my solution's complexity is O(1000n), is my solution acceptable?
If the function is O(1000n), then it is automatically also O(n).
After all, if f(n) is O(1000n), then there exist a constant M and an n0 such that
f(n) <= M*1000n
for all n > n0. But if that is true, then we can take N = 1000*M and
f(n) <= N*n
for all n > n0. Therefore, f is O(n) as well.
Constant factors "drop out" in big-O notation. See Wikipedia, under "multiplication by a constant".
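For a concrete instance of the proof above (my own numbers, not from the answer): take f(n) = 1000n + 42. With M = 2 we have

f(n) = 1000n + 42 <= 2*1000*n whenever n >= 1

so with N = 1000*M = 2000 we get f(n) <= N*n for all n >= 1, and f is O(n).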
Your solution is in polynomial time, so any constants won't matter when n is arbitrarily large. So yes, your solution is acceptable.
Yes, provided n is much larger than 1000.