An old man trying to learn more, and I got stuck on this exercise from an old exam:
Specify the complexity, in Θ(·) notation, of the Test(n) function detailed below, in each of the following three cases:
1/ n is even.
2/ n is a perfect square, that is, there exists an integer i such that i² = n.
3/ n is a prime number.
Function Test(n : Integer) : Integer
Variable
    i : Integer
Start
    for i := 2 to n do
        if n mod i = 0 Return(i) End-if
    End-for
    Return(n)
End
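For reference, here is my attempt at a direct Python translation of that pseudocode (my own sketch; the name test mirrors the exam's Test):

    def test(n: int) -> int:
        # Scan candidate divisors i = 2, 3, ..., n in order.
        for i in range(2, n + 1):
            if n % i == 0:
                return i  # the smallest divisor of n that is >= 2
        return n  # reached only when n < 2, since i = n always divides n

So for n >= 2 the function returns the smallest divisor of n that is at least 2, and the number of loop iterations is what distinguishes the three cases.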
I think the comments have answered your general question, but a note about proving Big Theta time complexity:
To show f(n) ∈ Θ(g(n)), you don't necessarily have to prove f(n) ∈ O(g(n)) and f(n) ∈ Ω(g(n)) separately by the method you alluded to: finding a constant c and an integer n0 such that f(n) ≤ c·g(n) for all n > n0, and then finding another c and n0 such that f(n) ≥ c·g(n) for all n > n0. That approach is perfectly valid and widely taught, but there is an alternative that is equally rigorous and usually much cleaner and more practical, which is simply to show that:
0 < lim_{n→∞} f(n)/g(n) < ∞
That is, if the limit of f(n)/g(n) as n goes to infinity exists and is a positive finite constant, then f(n) ∈ Θ(g(n)). (Strictly speaking this is a sufficient condition rather than a characterization: f(n) ∈ Θ(g(n)) can hold even when the limit does not exist.)
If you showed only that
lim_{n→∞} f(n)/g(n) < ∞
you would have shown that f(n) grows no faster than g(n): that is, that f(n) ∈ O(g(n)).
Similarly:
0 < lim_{n→∞} f(n)/g(n)
implies that f(n) grows at least as fast as g(n): that f(n) ∈ Ω(g(n)). So together, they imply f(n) ∈ Θ(g(n)).
Generally I think this is a lot less tedious than proofs of the form you mentioned, which involve actually finding c and n0 values for the big-O case, proving a needlessly particular statement involving those values, and then repeating the whole process for the Ω case. But that's just my opinion: whatever style works for you is good.
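As a quick worked instance of the limit technique (my own example, not from the question): to check 5n^2 + 3n ∈ Θ(n^2), compute

    \lim_{n \to \infty} \frac{5n^2 + 3n}{n^2}
      = \lim_{n \to \infty} \left( 5 + \frac{3}{n} \right) = 5,
    \qquad 0 < 5 < \infty
    \quad \Longrightarrow \quad 5n^2 + 3n \in \Theta(n^2).

One short limit computation replaces both halves of the c/n0 argument.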
Does the following statement follow from Big O transitivity: if g(n) = O(f(n)) and h(n) = O(f(n)), then g(n) = O(h(n))?
I am new to Big O notation and time complexity, so I am struggling with the basics.
Any help would be greatly appreciated!
Think of this counterexample:
f(n) = n^3
g(n) = n^2
h(n) = n
Indeed, g = O(f) and h = O(f). But is g = O(h)?
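To spell that out (my own verification): both g = O(f) and h = O(f) hold, since n^2 ≤ n^3 and n ≤ n^3 for n ≥ 1, but

    \frac{g(n)}{h(n)} = \frac{n^2}{n} = n \;\longrightarrow\; \infty,

so no constant c can satisfy n^2 ≤ c·n for all large n, and therefore g ≠ O(h).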
A good way to attack this problem is to leverage the definition of Big O. I say "good" because you'll be better off in the long run by developing a deeper understanding of the theory behind this question.
First, note that f(n) = O(g(n)) if and only if there exist constants c > 0 and n0 such that f(n) ≤ c∙g(n) for all n ≥ n0.
Next, apply that definition to the statements in the question.
If g(n) = O(f(n)) then g(n) ≤ c0∙f(n) for all n ≥ n0 and some c0 > 0...
Similarly, if h(n) = O(f(n)), then h(n) ≤ c1∙f(n) ... etc.
So, given these two facts, are you able to prove that g(n) = O(h(n))? Well, what does that actually mean?
It means that (again!):
g(n) ≤ c2∙h(n) for all n ≥ n2 and some c2 > 0
So what you have is that:
g(n) is "less" than f(n)
h(n) is also "less" than f(n)
Can you conclude that g(n) is "less" than h(n)? No, you cannot. Now, after breaking this down and figuring out what you think the right answer is, you can try to find a counterexample (which has already been provided).
I generally take this approach when I'm trying to answer one of those tricky "True or False? If true, prove it. If false, give a counterexample" problems since I find that it enhances my understanding of the concepts that I'm studying!
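If it helps, here is a small numeric sanity check of the counterexample from the other answer (an illustration only; the sample values are mine): with f(n) = n^3, g(n) = n^2, h(n) = n, the ratios g/f and h/f stay bounded while g/h = n grows without bound, so no constant c2 can work:

    # f(n) = n**3, g(n) = n**2, h(n) = n: g and h are both O(f),
    # but g(n)/h(n) = n is unbounded, so g is not O(h).
    for n in (10, 100, 1_000, 10_000):
        f, g, h = n**3, n**2, n
        print(f"n={n:>6}  g/f={g/f:.4f}  h/f={h/f:.8f}  g/h={g/h:.0f}")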
I'm new to time complexity and so on, and I'm trying to figure out which algorithm is better: one that runs in O(d*n) time for a constant d, or one that runs in O(n)? Might not be the best question of all time, but yeah :/
If d is a constant, then O(d*n) and O(n) are the same thing. This is what Big-O is all about: the fact that these two are considered the same is part of the point of the definition of Big-O.
The definition of Big-O is basically that f(n) is O(g(n)) if there exists a constant k such that f(n) ≤ k * g(n) for all large n.
In your case, d is just absorbed by the constant k in that definition. A suitable constant k clearly exists: d*n ≤ k*n as long as k ≥ d.
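As a concrete instance (d = 7 is my own arbitrary choice):

    d = 7:\qquad 7n \le 7 \cdot n \ \text{for all } n \ge 1
    \quad \Longrightarrow \quad 7n \in O(n) \ \text{with } k = 7.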
Say I'm told that the processing time of an algorithm is Ω(n) and O(n^3), and I'm asked whether I can conclude that it is Θ(n^2). How would I go about answering this question?
f(n) = Ω(n) and f(n) = O(n^3) does not imply f(n) = Θ(n^2).
To justify it, you can consider the following counterexamples:
f(n) = n. Since n <= f(n) <= n^3 for n >= 1, f(n) = Ω(n) and f(n) = O(n^3); but because f(n) = n < n^2 for all n >= 2, f(n) is not Θ(n^2).
f(n) = n^3. Since n <= f(n) <= n^3 for n >= 1, f(n) = Ω(n) and f(n) = O(n^3); but because f(n) = n^3 > n^2 for all n >= 2, f(n) is not Θ(n^2).
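To expand the last step of the first counterexample (my own addition): f(n) = n being Θ(n^2) would require f(n) = Ω(n^2), i.e. some constant c > 0 with

    n \;\ge\; c \cdot n^2 \quad \text{for all } n \ge n_0
    \quad \Longleftrightarrow \quad n \;\le\; \frac{1}{c},

which fails for every n > 1/c, so no such constant exists. The second counterexample fails O(n^2) symmetrically.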
For the example given, the conclusion would be that Θ(n^2) is not an accurate conclusion. In words: since the algorithm is only known to be bounded above by O(n^3) and below by Ω(n), a tight Θ bound cannot be stated from those two facts alone; the true growth rate could lie anywhere between the two bounds, so the algorithm would need to be analyzed further (for example, against multiple datasets) to pin it down. In general, when studying the best- and worst-case runtimes of an algorithm, if these two are the same (meaning the algorithm is bounded on both sides by the same running time), then the Big-Theta is that common bound. Otherwise, more information about how the algorithm runs is needed.
I know that a sublinear-time algorithm is one whose running time is o(n).
Is T(n) = n/x sublinear in n for a positive number x?
In other words, is n/x = o(n)?
No.
T(n) = n/x is linear, in the same way as T(n) = xn is linear. If your function is just n multiplied by some constant c, then it's linear. In this particular case, c=1/x.
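A tiny numeric illustration (my own sketch; x = 4 is an arbitrary choice): doubling n doubles T(n), which is the signature of linear growth:

    x = 4  # an arbitrary positive constant, assumed for the demo
    def T(n):
        return n / x
    for n in (1_000, 2_000, 4_000):
        print(n, T(n), T(2 * n) / T(n))  # the ratio is always 2.0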
You can also check this using the formal definition of small o.
Formally, f(n) = o(g(n)) as n → ∞ means that for every positive constant ε there exists a constant N such that |f(n)| <= ε|g(n)| for all n >= N.
In this case, pick ε = 1/(2x) and you won't be able to find an N that satisfies the condition, so n/x = o(n) fails.
Intuitively, one says f(n) = o(g(n)) if and only if f(n) is eventually dominated by g(n), even if you "slow g(n) down" by multiplying it by a very small constant.
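Spelling out the contradiction with that choice of ε (my own expansion): |n/x| ≤ ε·n with ε = 1/(2x) would mean

    \frac{n}{x} \;\le\; \frac{1}{2x} \cdot n = \frac{n}{2x}
    \quad \Longleftrightarrow \quad 1 \;\le\; \frac{1}{2},

which is false, so no N can satisfy the condition and n/x ≠ o(n).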
I was studying Big O notation. I know that Big O is denoted by:
f(n) ∈ O(g(n)) or f(n) = O(g(n))
It means the function f(n) has growth rate no greater than that of g(n).
Now let's say I have the statement:
5n + 2 ∈ O(n)
By the above, shouldn't g(n) be n and f(n) be 5n + 2?
Now for any value of n, f(n) is always greater than g(n). So how is the Big O claim true in this case?
You should read up on the concept of Big O in more detail.
The relation
f(n) ∈ O(g(n))
says that, for some constant C and all sufficiently large n,
f(n) <= C * g(n)
In this case, C is some value for which 5n + 2 is eventually no greater than Cn.
If you solve it:
5n + 2 <= Cn
2 <= (C - 5)*n
From this you can easily see that if C = 6,
then the inequality holds for every n >= 2 (so n0 = 2 works)!
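A quick numeric check of those constants (my own sketch):

    # Verify 5n + 2 <= 6n for a sample of n values starting at n0 = 2.
    assert all(5 * n + 2 <= 6 * n for n in range(2, 100_000))
    print("5n + 2 <= 6n holds for every tested n >= 2")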
Hope this helps!
That's not a correct definition of big O notation. If f(x) is O(g(x)), then there must exist constants C and N such that |f(x)| <= C |g(x)| for all x > N. So, if f(x) is less than or equal to some constant times g(x) for every x beyond some value N, then f(x) is O(g(x)). Effectively, this means that constant factors are irrelevant, because you can choose C to be any fixed value. So, for your example, f(n) = 5n + 2 <= 10000n = C*g(n) with C = 10000 for all n >= 1, so f(n) is O(g(n)).
Considering what Big-O notation stands for, you have the statement
5n + 2 ∈ O(n)
or, equivalently,
5n + 2 = O(n)
Given that Big-O notation states an upper bound on our function, that is, it establishes an upper limit on the growth of the function, the problem can be restated in the following way:
5n + 2 <= c*n, for some constant c and all sufficiently large n
We can see that the statement holds true, given that it is possible to pick a constant c large enough (c = 6 works, for instance) that c*n is eventually greater than or equal to our function.
In a more general way, we can say that a polynomial f(n) will belong to O(g(n)) if the degree of g(n) is greater than or equal to the degree of f(n), that is, the highest exponent among its terms.
Formally:
Let f(n) = n^x;
Let g(n) = n^y, so that x <= y;
Then f(n) = O(g(n)).
The same applies to Big-Omega, the other way around.
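A concrete instance of the degree rule (my own numbers):

    f(n) = 3n^2 + n, \quad g(n) = n^3: \qquad
    3n^2 + n \;\le\; 4n^3 \ \text{for all } n \ge 1
    \quad \Longrightarrow \quad f(n) \in O(n^3).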
Hope it works for you!