I know a sublinear-time algorithm is expressed by o(n).
Is T(n) = n/x sublinear in $n$ for a positive number x?
In other words, is n/x=o(n)?
No.
T(n) = n/x is linear, in the same way as T(n) = xn is linear. If your function is just n multiplied by some constant c, then it's linear. In this particular case, c=1/x.
You can also check this using the formal definition of small o.
Formally, f(n) = o(g(n)) as n → ∞ means that for every positive constant ε there exists a constant N such that |f(n)| <= ε|g(n)| for all n >= N.
In this case, pick ε = 1/(2x) and you won't be able to find an N satisfying that condition, so n/x ≠ o(n).
Intuitively, one says f(n) = o(g(n)) if and only if f(n) is eventually dominated by g(n), even if you "slow g(n) down" by multiplying it by a very small constant.
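You can also see it through the limit form of this definition: for f(n) = n/x and g(n) = n,
lim_{n→∞} (n/x) / n = 1/x > 0
whereas f(n) = o(g(n)) would require that limit to be 0. Since 1/x is a positive constant, n/x is not o(n).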
An old man trying to learn more, I got stuck on this exercise from an old exam:
Specify the complexity, in Θ(·) notation, of the Test(n) function, detailed below, in each of the following three cases:
1/ n is even.
2/ n is a perfect square, that is, there exists an integer i such that i² = n.
3/ n is a prime number.
Function Test(n : Integer) : Integer
Variable
    i : Integer
Start
    for i := 2 to n do
        if n mod i = 0 Return(i) End-if
    End-for
    Return(n)
End
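For anyone who wants to run it, here is a rough Python translation of that pseudocode (my own rendering, not part of the exam; the name test is arbitrary):

    def test(n):
        # Scan i = 2, 3, ..., n looking for the first divisor of n.
        for i in range(2, n + 1):
            if n % i == 0:
                return i      # smallest divisor >= 2 found, the loop stops here
        return n              # only reached when n < 2, since i = n always divides n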
I think the comments have answered your general question, but a note about proving Big Theta time complexity:
To show f(n) ∈ Θ(g(n)), you don't necessarily have to prove f(n) ∈ O(g(n)) and f(n) ∈ Ω(g(n)) through the method you alluded to: finding a constant c and an integer n0 such that f(n) < cg(n) for all n > n0, and then finding another c and n0 such that f(n) > cg(n) for all n > n0. Although this is perfectly valid and widely taught, there is an alternative, equally rigorous but generally much cleaner and more practical approach, which is just to show that:
0 < lim_{n→∞} f(n)/g(n) < ∞
That is, if the limit as n goes to infinity of f(n) over g(n) exists and is a positive finite constant, then f(n) ∈ Θ(g(n)). (The converse does not quite hold, because the limit may fail to exist even when f(n) ∈ Θ(g(n)), but when the limit does exist this is usually the quickest route.)
If you showed only that
lim_{n→∞} f(n)/g(n) < ∞
you would have shown that f(n) grows no faster than g(n): that is, that f(n) ∈ O(g(n)).
Similarly:
0 < lim_{n→∞} f(n)/g(n)
implies that f(n) grows at least as fast as g(n): that f(n) ∈ Ω(g(n)). So together, they imply f(n) ∈ Θ(g(n)).
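For a concrete, purely illustrative example of the limit test, take f(n) = 3n² + 5n and g(n) = n²:
lim_{n→∞} (3n² + 5n) / n² = 3
which is a positive finite constant, so 3n² + 5n ∈ Θ(n²), with no hunting for c and n0 required.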
Generally I think this is a lot less tedious than proofs of the form you mentioned, which involve actually finding c values and n0 values for the big O case, proving some needlessly particular statement involving those values, and then repeating the whole process for the Ω case, but that is just my opinion: whatever style works for you is good.
I'm new to time complexity and I'm trying to figure out which algorithm is better. Might not be the best question of all time, but yeah :/
If d is a constant then O(d*n) and O(n) are the same thing. This is what Big-O is all about: the fact that these two are considered the same is part of the definition of Big-O.
The definition of Big-O is basically that a function f(n) is O(g(n)) if there exists a constant k such that f(n) ≤ k * g(n) for all large enough n.
In your case, d is just absorbed by the constant k in that definition. A suitable constant k clearly exists: d*n ≤ k*n as long as k is at least d.
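As a purely illustrative instance, take d = 7, so f(n) = 7n and g(n) = n: choosing k = 7 gives 7n ≤ 7*n for all n ≥ 1, so 7n is O(n); the factor 7 is simply swallowed by k.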
How would you possibly show that 2 is O(1)?
Moreover, how would you show that a constant is Θ(1), hence Ω(1) and O(1)?
For O, I am under the impression that you are able to do a simplification for f(n), whereby it can be reduced down to 1, but then how can this prove that 2 is O(1) for some n0? What would be the n0 value in this case?
By definition, a function f is in O(1) if there exist constants n0 and M such that f(n) ≤ M · 1 = M for all n ≥ n0.
If f(n) is defined as 2, then just set M = 2 (or any greater value; it doesn't matter) and n0 = 1 (or any greater value; it doesn't matter), and the condition is met.
[…] that 2 is O(1) for some n0? What would be the n0 value in this case?
n0 is not a parameter here; it's not meaningful to say "O(1) for some n0". You can arbitrarily choose any value of n0 that makes f satisfy the condition; if one exists, then f is O(1), period.
Big-O and Theta do not indicate the time taken by an algorithm. They indicate the rate of increase in time as the input increases for the algorithm. When you understand this, things become much easier and less mathematical. f(x) = 2 {for all and any x} is always O(1), since the output value (2) does not depend on the input value (x) at all! O(1) represents this independence. So do Θ(1) and Ω(1).
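Spelled out against the definitions (a small sanity check, nothing beyond what the answers above already state): for f(n) = 2 and g(n) = 1 we have 1·1 ≤ 2 ≤ 2·1 for every n ≥ 1, so the constants c1 = 1, c2 = 2 and n0 = 1 witness f(n) ∈ Ω(1), f(n) ∈ O(1), and therefore f(n) ∈ Θ(1).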
I was studying Big O notation. I know that Big O is denoted by:
f(n) ∈ O(g(n)) or f(n) = O(g(n))
It means the function f(n) has a growth rate no greater than that of g(n).
Now let's say I have an equation:
5n + 2 ∈ O(n)
By the above, shouldn't 'n' be g(n) and '5n + 2' be f(n)?
Now for any value of n, f(n) is always greater than g(n). So how is Big O true in this case?
You should read about the concept of Big-O in more detail.
The relation
f(n) ∈ O(g(n))
says
for some constant C
f(n) <= C * g(n)
In this case C is some value for which 5n + 2 is never larger than C*n.
If you solve it:
5n + 2 <= Cn
2 <= (C - 5)*n
From this you can easily see that with C = 6 the inequality holds for every n >= 2 (and with C = 7 it holds for every n >= 1).
Hope this helps!
That's not a correct definition of big O notation. If f(x) is O(g(x)), then there must exist some constants C and N such that |f(x)| <= C |g(x)| for all x > N. So, if f(x) is always less than or equal to some constant times g(x) after some x value N, then f(x) is O(g(x)). Effectively, this means that constant factors are irrelevant, because you can choose C to be any value. So, for your example, f(n) = 5n + 2 <= 10000n = C*g(n) for all n >= 1, so f(n) is O(g(n)).
Considering what the Big-O notation stands for, you have the statement
5n + 2 ∈ O(n)
or, equivalently,
5n + 2 = O(n)
Given that Big-O notation states an upper bound on our function, that is, it establishes an upper limit on the growth of the given function, the problem can be restated in the following way:
5n + 2 <= c*n, for some constant c
We can see that the statement holds true because it is possible to find some constant c such that c*n is greater than or equal to our function for every n >= 1 (we can make that constant as big as we need, e.g. c = 7).
In a more general way, we can say that, for polynomial functions, any given f(n) will belong to O(g(n)) if the degree of g(n) is greater than or equal to the degree of f(n), that is, the highest degree among its terms.
Formally:
Let f(n) = n^x;
Let g(n) = n^y; so that x <= y
Then f(n) = O(g(n)).
The same applies to Big-Omega, the other way around.
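As a quick illustration of that degree rule with the function from the question: f(n) = 5n + 2 and g(n) = n both have degree 1, so 5n + 2 ∈ O(n). By contrast, n² is not O(n), because n² <= c*n would force n <= c, which fails as soon as n exceeds any fixed constant c.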
Hope it works for you
Working through the recurrences, you can derive that during each call to this function, the time complexity will be: T(n) = 2T(n/2) + O(1)
And the height of the recurrence tree would be log2(n), where n is the total number of calls (i.e. nodes in the tree).
It was said by the instructor that this function has a time complexity of O(n), but I simply cannot see why.
Further, when you substitute O(n) into the time complexity equation there are strange results. For example,
T(n) <= cn
T(n/2) <= (cn)/2
Back into the original equation:
T(n) <= cn + 1
Where this is obviously not true because cn + 1 !< cn
Your instructor is correct. This is an application of the Master theorem.
You can't substitute O(n) like you did in the time complexity equation; a correct substitution would be a polynomial form like an + b, since O(n) only shows the highest significant degree (there can be terms of lower degree).
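For example (a standard guess-and-check sketch, not taken from the question itself): write the constant hidden in the O(1) term as c and guess T(n) <= a*n - b for constants a, b > 0. Then
T(n) = 2T(n/2) + c <= 2(a*n/2 - b) + c = a*n - 2b + c <= a*n - b, as long as b >= c,
so the induction goes through and T(n) ∈ O(n); the contradiction in the question disappears once the lower-order term -b is carried along.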
To expand on the answer: you correctly recognize a time complexity equation of the form
T(n) = aT(n/b) + f(n), with a = 2, b = 2 and f(n) asymptotically O(1).
With this type of equation, you have three cases, depending on how n^{log_b(a)} (the cost of the recursion) compares with f(n) (the cost of the non-recursive work on a problem of length n):
1° f(n) dominates the recursion itself (f(n) grows polynomially faster than n^{log_b(a)}), for instance a = 2, b = 2 and f(n) asymptotically Θ(n^16). Then the recursion is of negligible complexity and the total time complexity is that of f(n):
T(n) = Θ(f(n))
2° The recursion dominates f(n) (f(n) grows polynomially slower than n^{log_b(a)}), which is the case here. Then the complexity is Θ(n^{log_b(a)}), in your example Θ(n^{log_2(2)}) = Θ(n), i.e. O(n).
3° The critical case where f(n) and n^{log_b(a)} are comparable, i.e. there exists k >= 0 such that f(n) = Θ(n^{log_b(a)} log^k(n)); then the complexity is:
T(n) = Θ(n^{log_b(a)} log^{k+1}(n))
This is the ugly case in my opinion.
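Applied to the recurrence from the question: a = 2, b = 2 and f(n) = O(1) = O(n^0), while n^{log_b(a)} = n^{log_2(2)} = n. Since n^0 grows polynomially slower than n, case 2° applies and
T(n) = Θ(n^{log_2(2)}) = Θ(n)
which matches the instructor's O(n).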