Big O calculation - time-complexity

I was studying Big O notation. I know that Big O is denoted by:
f(n) ∈ O(g(n)) or f(n) = O(g(n))
It means the function f(n) has a growth rate no greater than that of g(n).
Now let's say I have an equation:
5n + 2 ∈ O(n)
By the above, shouldn't 'n' be g(n) and '5n + 2' be f(n)?
Now, for any value of n, f(n) is always greater than g(n). So how is Big O true in this case?

You should read the concept of Big Oh in more detail.
The relation
f(n) ∈ O(g(n))
says
for some constant C,
f(n) <= C * g(n)
In this case, C is some value for which 5n + 2 is always less than or equal to C*n.
If you solve it:
5n + 2 <= Cn
2 <= (C - 5)*n
From this you can easily find that if C = 6,
then the inequality holds for every n ≥ 2 (and with C = 7 it holds for every n ≥ 1)!
Hope this helps!
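
A quick way to check this numerically (a minimal Python sketch of my own; the names f, g and the range of n are just illustrative, not part of the answer):

def f(n): return 5 * n + 2          # the function being bounded
def g(n): return n                  # the proposed bound
C = 6                               # candidate constant from the answer
for n in range(2, 1000):            # n0 = 2 is enough for C = 6
    assert f(n) <= C * g(n)         # never fails: 5n + 2 <= 6n for every n >= 2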

That's not a correct definition of big O notation. If f(x) is O(g(x)), then there must exist some constants C and N such that |f(x)| <= C|g(x)| for all x > N. So, if f(x) is always less than or equal to some constant times g(x) after some x value N, then f(x) is O(g(x)). Effectively, this means that constant factors are irrelevant, because you can choose C to be any value. So, for your example, f(n) = 5n + 2 <= C*g(n) = 10000n, and therefore f(n) is O(g(n)).

Considering what the Big-O notation stands for you have the statement
5n + 2 ∈ O(n)
or, equivalently,
5n + 2 = O(n)
Given that Big-O notation states an upper bound on our function, that is, it establishes an upper limit on the values the function can take, the problem can be reconsidered in the following way:
5n + 2 <= c*n, for some constant c
We can see that the statement holds true because it is possible to find some constant c for which c*n is greater than or equal to our function for all sufficiently large n (making that constant as big as we need).
In a more general way, we can say that a polynomial f(n) will belong to O(g(n)) if the degree of g(n) is greater than or equal to the degree of f(n), that is, the highest degree among its terms.
Formally:
Let f(n) = n^x;
Let g(n) = n^y; so that x <= y
Then f(n) = O(g(n)).
The same applies to Big-Omega, the other way around.
Hope it works for you
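
To see the degree rule in action, here's a small Python sketch (my own illustration, not part of the answer), comparing f(n) = n^2 against g(n) = n^3:

def f(n): return n ** 2             # lower-degree polynomial
def g(n): return n ** 3             # higher-degree polynomial

# The ratio f(n)/g(n) only shrinks as n grows, so c = 1 and n0 = 1
# already witness f(n) <= c*g(n), i.e. f(n) = O(g(n)).
for n in (1, 10, 100, 1000):
    print(n, f(n) / g(n))           # 1.0, 0.1, 0.01, 0.001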

Related

Time complexity of an algorithm in different cases

An old man trying to learn more, and I got stuck on this exercise from an old exam:
Specify the complexity, in Θ(.) notation, of the Test(n) function, detailed below, in each of the following three cases:
1/ n is even.
2/ n is a perfect square, that is, there exists an integer i such that i² = n.
3/ n is a prime number.
Function Test(n : Integer) : Integer
Variable
    i : Integer
Start
    for i := 2 to n do
        if n mod i = 0 Return(i) End-if
    End-for
    Return(n)
End
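In case it helps, here is how I would translate the pseudocode into Python (my own sketch, assuming the Pascal-style loop runs from 2 up to and including n):

def test(n):
    # Returns the smallest divisor of n that is >= 2, or n itself if the loop never finds one.
    for i in range(2, n + 1):
        if n % i == 0:
            return i
    return n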
I think the comments have answered your general question, but a note about proving Big Theta time complexity:
To show f(n) ∈ Θ(g(n)), you don't necessarily have to prove f(n) ∈ O(g(n)) and f(n) ∈ Ω(g(n)) through the method you alluded to: showing there exists a constant c and an integer n0 such that f(n) < c·g(n) for all n > n0, and then finding another c and n0 such that f(n) > c·g(n) for all n > n0. Although this is perfectly valid and widely taught, there is an alternative that is equally mathematically rigorous but generally much cleaner and more practical, which is just to show that:
0 < lim_{n→∞} f(n)/g(n) < ∞
That is, f(n) ∈ Θ(g(n)) if the limit of f(n)/g(n) as n goes to infinity exists and is some positive, finite constant.
If you showed only that
lim_{n→∞} f(n)/g(n) < ∞
you would have shown that f(n) grows no faster than g(n): that is, that f(n) ∈ O(g(n)).
Similarly:
0 < lim_{n→∞} f(n)/g(n)
implies that f(n) grows at least as fast as g(n): that f(n) ∈ Ω(g(n)). So together, they imply f(n) ∈ Θ(g(n)).
Generally I think this is a lot less tedious than proofs of the form you mentioned, which involve actually finding c values and n0 values for the big O case, proving some needlessly particular statement involving those values, and then repeating the whole process for the Ω case, but that is just my opinion: whatever style works for you is good.
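
As a quick numerical illustration of the limit criterion (my own sketch; f and g are just example functions, not taken from the question):

def f(n): return 5 * n + 2          # example numerator
def g(n): return n                  # example denominator

# The ratio settles toward a positive, finite constant (5), so f(n) ∈ Θ(g(n)).
for n in (10, 10**3, 10**6, 10**9):
    print(n, f(n) / g(n))           # 5.2, 5.002, 5.000002, ...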

Is O(n*d) similar to O(n) where d is the constant

I'm new to time complexity and so on, and I'm trying to figure out which algorithm is better. Might not be the best question of all time, but yeah :/
If d is a constant, then O(d*n) and O(n) are the same thing. This is what Big-O is all about: the fact that these two are considered the same is part of the definition of Big-O.
The definition of Big-O is basically that, for large n, a function f(n) is O(g(n)) if there exists a constant k such that f(n) ≤ k * g(n).
In your case, d is just absorbed by the constant k in that definition. A suitable constant k clearly exists: d*n ≤ k*n as long as k is greater than or equal to d.
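
A tiny sketch of the idea (my own example, assuming d is fixed and does not grow with n):

def work(n, d=3):
    # d passes over n items: about d*n basic steps in total.
    steps = 0
    for _ in range(d):
        for _ in range(n):
            steps += 1
    return steps

# steps == d*n, and d*n <= k*n for any k >= d, so for a fixed d this is O(n).
print(work(10), work(100), work(1000))   # 30 300 3000: linear growth in n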

Comparison of functions asymptotically

I have 2 functions:
f(n) = n*log(n)
g(n) = n^(1.1) * log(log(log(n)))
I want to know how these functions compare to each other. From what I understand, f(n) will always grow faster than g(n). In other words: f(n) ∈ ω(g(n))
I am assuming log base 10, but it really does not matter as any base could be used. I tried a number of combinations of n and c, as the following relation seems to hold:
f(n) ≥ c g(n) ≥ 0
The one combination that seemed to stick out to me was the following:
c = 0
n = 10^10
In this instance:
f(10^10) = (10^10) log(10^10) = (10^10)*(10) = 10^11
c*g(n) = 0 * (10^10)^(1.1) * log(log(log(10^10)))
= 0 * (10^11) * log(log(10))
= 0 * (10^11) * log(1)
= 0 * (10^11) * 0 = 0
Hence f(n) will always be greater than g(n), and the relationship will be f(n) ∈ ω(g(n)).
Would my understanding be correct here?
edited: for correction
First of all, the combination sticking out to you doesn't work because it's invalid. A function f(x) is said to be O(g(x)) if and only if there exists a real number x' and a positive real number c such that f(x) ≤ c·g(x) for all x ≥ x'. You use c = 0, which is not positive, and so using it to understand asymptotic complexity isn't going to be helpful.
But more importantly, in your example, it's not the case that f(x) = Ω(g(x)). In fact, it's actually f(x) = O(g(x)). You can see this because log(n) = O(n^0.1) (proof here), so n·log(n) = O(n^1.1), so n·log(n) = O(n^1.1 · log(log(log(n)))), and thus f(x) = O(g(x)).
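
A quick numerical check of that conclusion (my own Python sketch; log is base 10 to match the question, although the base doesn't matter):

from math import log10

def f(n): return n * log10(n)
def g(n): return n ** 1.1 * log10(log10(log10(n)))

# The ratio f(n)/g(n) keeps shrinking toward 0 as n grows,
# consistent with f(n) = O(g(n)) (in fact f(n) = o(g(n))).
for n in (10**12, 10**15, 10**20, 10**30):
    print(n, f(n) / g(n))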

Is T(n) = n/x sublinear in n?

I know a sublinear-time algorithm is expressed by o(n).
Is T(n) = n/x sublinear in n for a positive number x?
In other words, is n/x = o(n)?
No.
T(n) = n/x is linear, in the same way as T(n) = xn is linear. If your function is just n multiplied by some constant c, then it's linear. In this particular case, c=1/x.
You can also check this using the formal definition of small o.
Formally, f(n) = o(g(n)) as n → ∞ means that for
every positive constant ε there exists a constant N such that |f(n)| <= ε|g(n)| for all n >= N.
In this case, pick ε = 1/(2x) and you won't be able to find an N satisfying the condition, so n/x is not o(n).
Intuitively, one says f(n) = o(g(n)) if and only if f(n) is eventually dominated by g(n), even if you "slow g(n) down" by multiplying it by a very small constant.
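
A small numerical sketch of that choice of ε (my own example; x = 4 is picked arbitrarily):

x = 4
eps = 1 / (2 * x)                   # the ε from the answer: 1/(2x)

def T(n): return n / x

# T(n) = n/4 always exceeds eps*n = n/8, so no threshold N can make
# T(n) <= eps*n hold for all n >= N; hence n/x is not o(n).
for n in (10, 10**3, 10**6):
    print(n, T(n), eps * n, T(n) <= eps * n)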

How to prove a constant is O(1)

How would you possibly show that 2 is O(1)?
Moreover, how would you show that a constant is Θ(1), and hence Ω(1) and O(1)?
For O, I am under the impression that you can simplify f(n) so that it reduces to 1, but then how can this prove that 2 is O(1) for some n0? What would the n0 value be in this case?
By definition, a function f is in O(1) if there exist constants n0 and M such that f(n) ≤ M · 1 = M for all n ≥ n0.
If f(n) is defined as 2, then just set M = 2 (or any greater value; it doesn't matter) and n0 = 1 (or any greater value; it doesn't matter), and the condition is met.
[…] that 2 is O(1) for some n0? What would be the n0 value in this case?
n0 is not a parameter here; it's not meaningful to say "O(1) for some n0". You can arbitrarily choose any value of n0 that makes f satisfy the condition; if one exists, then f is O(1), period.
Big Oh and Theta do not indicate the time taken by an algorithm. They indicate the rate of increase in time as the input increases for the algorithm. When you understand this, things become very easy and less mathematical. f(x) = 2 {for all and any x} is always O(1) since the output value (2) does not depend on the input value (x) at all! O(1) represents this independence. So do Θ(1) and Ω(1).
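
For completeness, here's the definition spelled out in code (my own sketch; M and n0 are the witnesses from the definition quoted above):

def f(n): return 2                  # constant function: ignores its input entirely

M, n0 = 2, 1                        # witnesses for the O(1) definition
for n in range(n0, 1000):
    assert f(n) <= M * 1            # f(n) <= M·1 for every n >= n0, so f ∈ O(1)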