I need to prove or disprove the following conjecture:
if f(n) = O(h(n)) AND g(n) = O(k(n)) then (f − g)(n) = O(h(n) − k(n))
I am aware of the sum and product theorems for combining growth rates, but I could not find a way to apply them here, even though I know that subtraction can be rewritten as addition. Every source I looked at defined those theorems but gave no examples involving subtraction.
Your statement is not true; consider the following counterexample:
Take f(n) = 2n^2 = O(n^2) and g(n) = n^2 = O(n^2), with h(n) = k(n) = n^2, so that h(n) − k(n) = 0. We have:
(f − g)(n) = n^2, which grows without bound, so it is certainly not O(h(n) − k(n)) = O(0); it is not even O(1).
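If it helps, here is a quick numeric sanity check of that counterexample (a minimal Python sketch of my own, not part of the proof): (f − g)(n) keeps growing while h(n) − k(n) is identically 0, so no constant c can give (f − g)(n) ≤ c·(h(n) − k(n)).

    # Counterexample check: f(n) = 2n^2, g(n) = n^2, h(n) = k(n) = n^2.
    def f(n): return 2 * n**2
    def g(n): return n**2
    def h(n): return n**2
    def k(n): return n**2

    for n in (10, 100, 1000):
        # (f - g)(n) grows without bound, while h(n) - k(n) stays 0.
        print(n, f(n) - g(n), h(n) - k(n))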
Does the following statement follow from Big O transitivity: if g(n) = O(f(n)) and h(n) = O(f(n)), then g(n) = O(h(n))?
I am new to Big O notation and Time Complexity so I am struggling with the basics.
Any help would be greatly appreciated!
Think of the counterexample:
f(n) = n^3
g(n) = n^2
h(n) = n.
Indeed, g = O(f) and h = O(f). But is g = O(h)?
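If you want to see it numerically, here is a small Python sketch (my own illustration, using the same f, g, h as above): g(n)/f(n) and h(n)/f(n) stay bounded, but g(n)/h(n) grows without bound, so no constant c can make g(n) ≤ c·h(n) for all large n.

    def f(n): return n**3
    def g(n): return n**2
    def h(n): return n

    for n in (10, 1000, 100000):
        # Bounded ratios witness g = O(f) and h = O(f);
        # the unbounded ratio g(n)/h(n) = n shows g is not O(h).
        print(n, g(n) / f(n), h(n) / f(n), g(n) / h(n))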
A good way to attack this problem is to leverage the definition of Big O. I say "good" because you'll be better off in the long run by developing a deeper understanding of the theory behind this question.
First, note that f(n) = O(g(n)) if and only if f(n) ≤ c∙g(n) for all n ≥ n0 and some c > 0.
Next, apply that definition to the statements in the question.
If g(n) = O(f(n)) then g(n) ≤ c0∙f(n) for all n ≥ n0 and some c0 > 0...
Similarly, if h(n) = O(f(n)), then h(n) ≤ c1∙f(n) ... etc.
So, given these two facts, are you able to prove that g(n) = O(h(n))? Well, what does that actually mean?
It means that (again!):
g(n) ≤ c2∙h(n) for all n ≥ n2 and some c2 > 0
So what you have is that:
g(n) is "less" than f(n)
h(n) is also "less" than f(n)
Can you conclude that g(n) is "less" than h(n)? No, you cannot. Now, after breaking this down and figuring out what you think the right answer is, you can try to find a counterexample (which has already been provided).
I generally take this approach when I'm trying to answer one of those tricky "True or False? If true, prove it. If false, give a counterexample" problems since I find that it enhances my understanding of the concepts that I'm studying!
I have 2 functions:
f(n) = n*log(n)
g(n) = n^(1.1) * log(log(log(n)))
I want to know how these functions compare to each other. From what I understand, f(n) will always grow faster than g(n). In other words: f(n) in ω(g(n))
I am assuming log base 10, but it really does not matter as any base could be used. I tried a number of combinations of n and c, as the following relation seems to hold:
f(n) ≥ c g(n) ≥ 0
The one combination that seemed to stick out to me was the following:
c = 0
n = 10^10
In this instance:
f(10^10) = (10^10) log(10^10) = (10^10)*(10) = 10^11
c*g(n) = 0 * (10^10)^(1.1) * log(log(log(10^10)))
= 0 * (10^11) * log(log(10))
= 0 * (10^11) * log(1)
= 0 * (10^11) * 0 = 0
Hence f(n) will always be greater than g(n) and the relationship will be f(n) is ω(g(n)).
Would my understanding be correct here?
First of all, the combination sticking out to you doesn't work because it's invalid. A function f(n) is said to be O(g(n)) if and only if there exist a real number n' and a positive real number c such that f(n) ≤ c·g(n) for all n ≥ n'. You used c = 0, which is not positive, so it tells you nothing about asymptotic growth.
But more importantly, in your example it's not the case that f(n) = ω(g(n)) (nor even f(n) = Ω(g(n))). In fact, it's the other way around: f(n) = O(g(n)). You can see this because log(n) = O(n^0.1), so n·log(n) = O(n^1.1), and therefore n·log(n) = O(n^1.1 · log(log(log(n)))), i.e. f(n) = O(g(n)).
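A rough numeric illustration (my own Python sketch, using natural logs; the base only changes constant factors): the ratio g(n)/f(n) keeps climbing, slowly but without bound, which is consistent with f(n) = O(g(n)) rather than f(n) = ω(g(n)).

    import math

    def f(n): return n * math.log(n)
    def g(n): return n**1.1 * math.log(math.log(math.log(n)))

    # n must be large enough that log(log(log(n))) is positive.
    for n in (10.0**3, 10.0**6, 10.0**9, 10.0**12):
        # The ratio g(n)/f(n) increases with n, so g eventually dominates f.
        print(int(n), g(n) / f(n))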
I know that a sublinear-time algorithm is expressed as o(n).
Is T(n) = n/x sublinear in n for a positive constant x?
In other words, is n/x = o(n)?
No.
T(n) = n/x is linear, in the same way as T(n) = xn is linear. If your function is just n multiplied by some constant c, then it's linear. In this particular case, c=1/x.
You can also check this using the formal definition of small o.
Formally, f(n) = o(g(n)) as n → ∞ means that for every positive constant ε there exists a constant N such that |f(n)| ≤ ε·|g(n)| for all n ≥ N.
In this case, pick ε = 1/(2x) and you won't be able to find an N satisfying the condition, so n/x ≠ o(n).
Intuitively, f(n) = o(g(n)) if and only if f(n) is eventually dominated by g(n), even if you "slow g(n) down" by multiplying it by a very small constant.
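Here is a small Python sketch of that argument (my own example, with x = 5 picked arbitrarily): the ratio T(n)/n is the constant 1/x, so it never drops below ε = 1/(2x), no matter how large n gets.

    x = 5.0
    eps = 1 / (2 * x)      # the epsilon from the argument above

    def T(n): return n / x

    for n in (10, 10**4, 10**8):
        # T(n)/n is always 1/x = 0.2, which never gets below eps = 0.1,
        # so the small-o condition T(n) <= eps*n fails for every large n.
        print(n, T(n) / n, T(n) <= eps * n)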
The increasing order of the following functions in terms of asymptotic complexity (the original picture is not reproduced here; from the answers below, the functions are f1(n) = (n^0.9999)(log n), f2(n) = n, f3(n) = (1.00001)^n and f4(n) = n^2) is:
(A) f1(n); f4(n); f2(n); f3(n)
(B) f1(n); f2(n); f3(n); f4(n);
(C) f2(n); f1(n); f4(n); f3(n)
(D) f1(n); f2(n); f4(n); f3(n)
a) The time complexity order for this (supposedly easy) question was given as (n^0.99)*(log n) < n. How? log may be a slowly growing function, but it still grows faster than a constant.
b) Consider function f1 and suppose it is f1(n) = (n^1.0001)(log n); what would the answer be then?
Whenever there is an expression which involves a multiplication between a logarithmic and a polynomial expression, does the logarithmic function outweigh the polynomial expression?
c) How to check in such cases? Suppose:
1) (n^2)*log n vs (n^1.5): which has higher time complexity?
2) (n^1.5)*log n vs (n^2): which has higher time complexity?
If we consider C_1 and C_2 such that C_1 < C_2, then we can say the following with certainty:
(n^C_2)*log(n) grows faster than (n^C_1)
This is because (n^C_1) grows slower than (n^C_2) (obviously), and also, for values of n larger than 2 (for log in base 2), log(n) is greater than 1. In fact, log(n) is asymptotically greater than any constant C, because log(n) -> inf as n -> inf.
If (n^C_2) is asymptotically greater than (n^C_1) AND log(n) is asymptotically greater than 1, then we can certainly say that
(n^2)*log(n) has greater complexity than (n^1.5)
We think of log(n) as a "slowly growing" function, but it still eventually exceeds any constant, which is the key here.
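To make the first comparison from the question concrete, here is a short Python sketch (my own illustration) with C_1 = 1.5 and C_2 = 2: the ratio (n^2 * log n) / n^1.5 = n^0.5 * log n blows up, so (n^2)*log(n) has the higher complexity.

    import math

    for n in (10.0**2, 10.0**4, 10.0**6):
        # (n^2 * log n) / n^1.5 = sqrt(n) * log n grows without bound.
        print(int(n), (n**2 * math.log(n)) / n**1.5)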
coder101 asked an interesting question in the comments, essentially,
is n^e = Ω((n^c)*log_d(n))?
where e = c + ϵ for arbitrarily small ϵ
Let's do some algebra.
n^e = (n^c)*(n^ϵ)
so the question boils down to
is n^ϵ = Ω(log_d(n))
or is it the other way around, namely:
is log_d(n) = Ω(n^ϵ)
In order to do this, let us find the value of ϵ that satisfies n^ϵ > log_d(n).
n^ϵ > log_d(n)
ϵ*ln(n) > ln(log_d(n))
ϵ > ln(log_d(n)) / ln(n)
Because we know for a fact that, for every constant c > 0,
c*ln(n) > ln(ln(n))    (1)
for all sufficiently large n, we can say that, for an arbitrarily small ϵ, the inequality ϵ > ln(log_d(n)) / ln(n) holds for all sufficiently large n: note that ln(log_d(n)) = ln(ln(n)) − ln(ln(d)), so by (1), ln(log_d(n)) / ln(n) -> 0 as n -> infinity.
With this knowledge, we can say that
n^ϵ = Ω(log_d(n))
for arbitrarily small ϵ,
which means that
n^(c + ϵ) = Ω((n^c)*log_d(n))
for arbitrarily small ϵ.
In layperson's terms:
n^1.1 > n * ln(n)
for all n beyond some point;
also
n^1.001 > n * ln(n)
for all n beyond some much, much bigger point;
and even
n^1.0000000000000001 > n * ln(n)
for all n beyond some very, very big point.
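If you want to see roughly where those crossover points are, here is a Python sketch of my own; it compares in log space (n^(1+ϵ) > n*ln(n) iff ϵ*ln(n) > ln(ln(n))) so that nothing overflows.

    import math

    # n^(1+eps) > n*ln(n)  <=>  eps*ln(n) > ln(ln(n)).
    for eps in (0.1, 0.001):
        for exp10 in (2, 10, 100, 1000, 4000, 10000):
            ln_n = exp10 * math.log(10)          # ln(n) for n = 10^exp10
            if eps * ln_n > math.log(ln_n):
                print("eps = %g: n^(1+eps) beats n*ln(n) by n = 10^%d" % (eps, exp10))
                break

For question c) 2) above, this is exactly the case c = 1.5, ϵ = 0.5: n^2 = n^(1.5+0.5) = Ω((n^1.5)*log(n)), so n^2 has the higher complexity.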
Replacing f1 = (n^0.9999)(log n) by f1 = (n^1.0001)(log n) would yield answer (C): n, (n^1.0001)(log n), n^2, 1.00001^n.
The reasoning is as follows:
- (n^1.0001)(log n) has higher complexity than n, obviously.
- n^2 is higher than (n^1.0001)(log n) because the extra polynomial factor (here n^0.9999) asymptotically dominates the logarithmic factor, so the higher-degree polynomial n^2 wins.
- 1.00001^n dominates n^2 because 1.00001^n has exponential growth while n^2 has polynomial growth, and exponential growth asymptotically wins.
BTW, 1.00001^n is of the form (1+Ɛ)^n. However small Ɛ is, this is still exponential growth, and it dominates any polynomial growth.
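For a feel of how late that exponential takeover happens, here is a small Python sketch of my own (again comparing in log space: 1.00001^n > n^2 iff n*ln(1.00001) > 2*ln(n)):

    import math

    for n in (10**4, 10**6, 10**7, 10**9):
        # 1.00001^n > n^2  <=>  n*ln(1.00001) > 2*ln(n).
        wins = n * math.log(1.00001) > 2 * math.log(n)
        print("n = %d: 1.00001^n > n^2? %s" % (n, wins))

With these parameters the takeover happens between n = 10^6 and n = 10^7; from then on the exponential never looks back.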
The crux of this problem is comparing f1(n) and f2(n).
For f(n) = n^c, where 0 < c < 1, the growth eventually becomes negligible compared with a linear growth curve.
For f(n) = log_c(n), where c > 1, the growth likewise eventually becomes negligible compared with a linear growth curve.
The product of two such functions also eventually becomes negligible compared with a linear growth curve.
Hence, Theta(n^c * log_c(n)) is asymptotically less complex than Theta(n), which puts f1(n) before f2(n) = n in the ordering.
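To check that last claim against f1 from the question, here is a Python sketch of my own; it compares in log space (n^0.9999 * log2(n) < n iff ln(log2(n)) < 0.0001*ln(n)) because the crossover is astronomically far out, but it does happen, and that is all an asymptotic comparison cares about.

    import math

    for exp10 in (10, 1000, 100000):
        ln_n = exp10 * math.log(10)                 # ln(n) for n = 10^exp10
        # f1(n) < n  <=>  ln(log2(n)) < 0.0001*ln(n)
        f1_smaller = math.log(ln_n / math.log(2)) < 0.0001 * ln_n
        print("n = 10^%d: n^0.9999*log2(n) < n? %s" % (exp10, f1_smaller))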
Working through the recursion, you can derive that the running time of this function satisfies the recurrence: T(n) = 2T(n/2) + O(1)
And the height of the recursion tree would be log2(n), where the nodes of the tree are the individual recursive calls.
It was said by the instructor that this function has a time complexity of O(n), but I simply cannot see why.
Further, when you substitute O(n) into the time complexity equation there are strange results. For example,
T(n) <= cn
T(n/2) <= (cn)/2
Back into the original equation:
T(n) <= cn + 1
This is obviously not true, because cn + 1 ≤ cn never holds.
Your instructor is correct. This is an application of the Master theorem.
You can't substitute O(n) the way you did into the recurrence; a correct substitution uses an explicit linear form with a lower-order term, such as a·n + b (or a·n − b), since O(n) only captures the highest-order term and hides lower-order constants.
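For example, here is a sketch of that substitution with an explicit lower-order term (my own working, not part of the original answer): guess T(n) ≤ a·n − b for constants a > 0 and b ≥ 1. Then

    T(n) = 2*T(n/2) + 1
         <= 2*(a*(n/2) - b) + 1     (apply the guess to T(n/2))
         =  a*n - 2*b + 1
         <= a*n - b                 (since b >= 1)

So the guess is consistent and T(n) = O(n); the subtracted lower-order term b is what absorbs the +1 that broke the plain guess T(n) ≤ cn.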
To expand on the answer: you correctly recognized a recurrence of the form
T(n) = aT(n/b) + f(n), with a = 2, b = 2 and f(n) = O(1).
With this type of recurrence, there are three cases depending on how f(n) (the cost of the non-recursive work on an input of length n) compares with n^(log_b(a)) (the cost contributed by the recursion):
1° f(n) grows polynomially faster than n^(log_b(a)) (for instance a = 2, b = 2 and f(n) = O(n^16)). Then the recursion is of negligible complexity and the total time complexity is that of f(n):
T(n) = Theta(f(n))
2° The recursion outweighs f(n), i.e. f(n) grows polynomially slower than n^(log_b(a)), which is the case here. Then the complexity is Theta(n^(log_b(a))), in your example Theta(n^(log_2(2))), i.e. Theta(n).
3° The critical case where f(n) and n^(log_b(a)) are comparable, i.e. there exists k >= 0 such that f(n) = Theta(n^(log_b(a)) * log^k(n)); then the complexity is:
T(n) = Theta(n^(log_b(a)) * log^(k+1)(n))
This is the ugly case in my opinion.
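If you want a numeric sanity check as well, here is a tiny Python sketch of my own that evaluates the exact recurrence T(n) = 2T(n/2) + 1 with T(1) = 1 on powers of two and compares it with n:

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def T(n):
        # Exact form of T(n) = 2*T(n/2) + 1 for powers of two, with T(1) = 1.
        if n == 1:
            return 1
        return 2 * T(n // 2) + 1

    for k in (4, 10, 20):
        n = 2**k
        # T(n) = 2n - 1 here, i.e. Theta(n), matching case 2 of the Master theorem.
        print(n, T(n), T(n) / n)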