Equation
I know that the solution is what is in green, but I don't understand how to compute it.
I would appreciate it if somebody could explain it to me, or just give me a link where I can read up on it.
Thanks.
For the general case (where n>1), the recursion is n + T(n/2) + T(n/2).
This can be simplified to 2T(n/2) + n.
By the Master Method of solving recurrences, let a = 2, b = 2 and f(n) = O(n).
According to the theorem, log (base b) of a is log (base 2) of 2, which is clearly 1. So O(n^(log(base b) of a)) is O(n^1), which is O(n).
Case 2 of the Master Theorem says that if f(n) is equal in complexity to O(n^(log(base b) of a)), then the entire recurrence has a complexity of O(n^(log(base b) of a) * log(n)).
Therefore the overall complexity is O(n^(log(base b) of a) * log(n)) which is O(n * log(n)). When dealing with complexity, we can use log(n) and lg(n) interchangeably. So choice C is correct.
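To make the recurrence concrete, here is a small JavaScript sketch (my own, not part of the original problem) of a procedure whose cost is T(n) = 2T(n/2) + n; instrumenting it with a work counter shows the n*log(n) growth:

// A hedged sketch (not the original exercise): a recursion with cost
// T(n) = 2*T(n/2) + n, instrumented to count the work done.
let work = 0;

const solve = (n) => {
  if (n <= 1) return;        // base case: constant work
  work += n;                 // the f(n) = n term
  solve(Math.floor(n / 2));  // first recursive call
  solve(Math.floor(n / 2));  // second recursive call
};

for (const n of [1024, 4096, 16384]) {
  work = 0;
  solve(n);
  console.log(n, work, n * Math.log2(n)); // work matches n*log2(n) for powers of 2
}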
P.S. A really good overview of how to apply the master method is here.
Related
For the following algorithm (this algorithm doesn't really do anything useful besides being an exercise in analyzing time complexity):
const dib = (n) => {
  if (n <= 1) return;
  dib(n - 1);
  dib(n - 1);
};
I'm watching a video where they say the time complexity is O(2^n). If I count the nodes I can see they're right (the tree has around 32 nodes); however, in my head I thought it would be O(n*2^n), since n is the height of the tree and each level has 2^n nodes. Can anyone point out the flaw in my thinking?
Each level i of the tree has 2^i nodes, not 2^n.
So the levels together contain 1 + 2 + 4 + 8 + ... + 2^(n-1) nodes.
The deepest level is the decider in the complexity.
The total number of calls made by dib(i) satisfies f(i) = 1 + 2*f(i-1), with f(1) = 1.
This solves to 2^n - 1 for dib(n), which is O(2^n), not O(n*2^n).
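As a quick sanity check (my addition, not part of the original answer; the counter and the dibCounted name are mine), you can instrument dib with a call counter and compare it against 2^n - 1:

// Instrumented variant of dib that counts how many calls are made.
let calls = 0;

const dibCounted = (n) => {
  calls += 1;
  if (n <= 1) return;
  dibCounted(n - 1);
  dibCounted(n - 1);
};

for (const n of [5, 10, 15]) {
  calls = 0;
  dibCounted(n);
  console.log(n, calls, 2 ** n - 1); // calls === 2^n - 1, so O(2^n)
}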
Derek's answer is great; it gives the intuition behind the estimate. If you want a formal proof, you can use the Master theorem for decreasing functions.
The master theorem for decreasing functions is a formula for solving recurrences of the form
T(n) = aT(n - b) + f(n), where a ≥ 1, b > 0, and f(n) is
asymptotically positive. (Asymptotically positive means that the function is positive for all sufficiently large n.)
The recurrence for the above algorithm is T(n) = 2*T(n-1) + O(1). Do you see why? You can see the solution for the various cases (a = 1, a > 1, a < 1) here: http://cs.uok.edu.in/Files/79755f07-9550-4aeb-bd6f-5d802d56b46d/Custom/Ten%20Master%20Method.pdf
For our case a > 1, so T(n) = O(a^(n/b) * f(n)), or equivalently O(a^(n/b) * n^k), which gives O(2^n).
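For intuition, you can also unroll the recurrence by hand (a quick derivation, my addition, writing the O(1) term as a constant c):

\begin{aligned}
T(n) &= 2\,T(n-1) + c \\
     &= 2^2\,T(n-2) + 2c + c \\
     &= \dots \\
     &= 2^{n-1}\,T(1) + c\,(2^{n-1} - 1) \\
     &= O(2^n).
\end{aligned}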
The increasing order of the following functions shown in the picture below, in terms of asymptotic complexity, is:
(A) f1(n); f4(n); f2(n); f3(n)
(B) f1(n); f2(n); f3(n); f4(n);
(C) f2(n); f1(n); f4(n); f3(n)
(D) f1(n); f2(n); f4(n); f3(n)
a) The time complexity order for this easy question was given as (n^0.99)*(log n) < n. How? log might be a slow-growing function, but it still grows faster than a constant.
b) Consider function f1: suppose it is f1(n) = (n^1.0001)(log n); then what would be the answer?
Whenever an expression involves a multiplication between a logarithmic and a polynomial expression, does the logarithmic function outweigh the polynomial expression?
c) How to check in such cases? Suppose:
1) (n^2)*log n vs (n^1.5): which has higher time complexity?
2) (n^1.5)*log n vs (n^2): which has higher time complexity?
If we consider constants C_1 and C_2 such that C_1 < C_2, then we can say the following with certainty:
(n^C_2)*log(n) grows faster than (n^C_1)
This is because:
(n^C_1) grows slower than (n^C_2) (obviously);
also, for values of n larger than 2 (for log base 2), log(n) is greater than 1;
in fact, log(n) is asymptotically greater than any constant C, because log(n) -> inf as n -> inf.
If (n^C_2) is asymptotically greater than (n^C_1) AND log(n) is asymptotically greater than 1, then we can certainly say that
(n^2)*log(n) has greater complexity than (n^1.5).
We think of log(n) as a "slowly growing" function, but it still grows faster than 1, which is the key here.
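To make sub-question (c) concrete, here are the corresponding limit computations (my addition, using the same reasoning as above):

\lim_{n \to \infty} \frac{n^2 \log n}{n^{1.5}} = \lim_{n \to \infty} n^{0.5} \log n = \infty,
\qquad\text{so } n^2 \log n \text{ has higher complexity than } n^{1.5};

\lim_{n \to \infty} \frac{n^2}{n^{1.5} \log n} = \lim_{n \to \infty} \frac{n^{0.5}}{\log n} = \infty,
\qquad\text{so } n^2 \text{ has higher complexity than } n^{1.5} \log n.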
coder101 asked an interesting question in the comments, essentially,
is n^e = Ω((n^c)*log_d(n))?
where e = c + ϵ for arbitrarily small ϵ
Let's do some algebra.
n^e = (n^c)*(n^ϵ)
so the question boils down to
is n^ϵ = Ω(log_d(n))
or is it the other way around, namely:
is log_d(n) = Ω(n^ϵ)
In order to do this, let us find the value of ϵ that satisfies n^ϵ > log_d(n).
n^ϵ > log_d(n)
ϵ*ln(n) > ln(log_d(n))
ϵ > ln(log_d(n)) / ln(n)
Because we know for a fact that
ln(n) * c > ln(ln(n)) (1)
as n -> infinity
We can say that, for an arbitrarily small ϵ, there exists an n large enough to
satisfy ϵ > ln(log_d(n)) / ln(n)
because, by (1), ln(log_d(n)) / ln(n) ---> 0 as n -> infinity.
With this knowledge, we can say that
n^ϵ = Ω(log_d(n))
for arbitrarily small ϵ,
which means that
n^(c + ϵ) = Ω((n^c)*log_d(n))
for arbitrarily small ϵ.
in layperson's terms
n^1.1 > n * ln(n)
for some n
also
n ^ 1.001 > n * ln(n)
for some much, much bigger n
and even
n ^ 1.0000000000000001 > n * ln(n)
for some very very big n.
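A small numeric check of the crossover (my own sketch, not part of the answer); it scans powers of 2 until n^1.1 exceeds n*ln(n):

// Find the first power of 2 at which n^1.1 > n * ln(n).
// For exponent 1.1 this happens around n = 2^52 (about 4.5e15);
// smaller exponents need astronomically larger n.
for (let n = 2; n <= 2 ** 60; n *= 2) {
  if (n ** 1.1 > n * Math.log(n)) {
    console.log(`n^1.1 first beats n*ln(n) (among powers of 2) at n = ${n}`);
    break;
  }
}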
Replacing f1 = (n^0.9999)(logn) by f1 = (n^1.0001)(logn) will yield answer (C): n, (n^1.0001)(logn), n^2, 1.00001^n
The reasoning is as follows:
- (n^1.0001)(log n) has higher complexity than n, obviously.
- n^2 is higher than (n^1.0001)(log n) because the polynomial part asymptotically dominates the logarithmic part, so the higher-degree polynomial n^2 wins.
- 1.00001^n dominates n^2 because 1.00001^n has exponential growth while n^2 has polynomial growth, and exponential growth asymptotically wins.
BTW, 1.00001^n belongs to the family usually denoted (1+Ɛ)^n; even for a tiny Ɛ it is still genuinely exponential growth, and however small Ɛ is, it still dominates any polynomial growth.
The complexity of this problem lies between f1(n) and f2(n).
For f(n) = n^c where 0 < c < 1, the growth eventually becomes so slow that it is trivial compared with a linear growth curve.
For f(n) = log_c(n), where c > 1, the growth eventually becomes so slow that it is trivial compared with a linear growth curve.
The product of two such functions also eventually becomes trivial compared with a linear growth curve.
Hence, Θ(n^c * log_c(n)) is asymptotically less complex than Θ(n).
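To see this explicitly for f1(n) = (n^0.99)(log n) versus n (a short limit computation, my addition):

\lim_{n \to \infty} \frac{n^{0.99} \log n}{n} = \lim_{n \to \infty} \frac{\log n}{n^{0.01}} = 0,
\qquad\text{so } n^{0.99} \log n = o(n).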
I need to prove or disprove the following conjecture:
if f(n) = O(h(n)) AND g(n) = O(k(n)) then (f − g)(n) = O(h(n) − k(n))
I am aware of the sum and product theorems for growth combination, but I could not find a way to apply them here, even though I know that subtraction can be rewritten as addition. Everywhere I looked defined the mentioned theorems, but lacked examples of subtraction.
Your statement is not true; consider the following counter-example:
Take f(n) = 2n^2 = O(n^2) and g(n) = n^2 = O(n^2). We have:
(f-g)(n) = n^2, while h(n) - k(n) = n^2 - n^2 = 0, and n^2 is certainly not bounded by a constant (let alone by 0), hence (f-g)(n) ≠ O(h(n) - k(n)).
Working through the recurrences, you can derive that during each call to this function, the time complexity will be: T(n) = 2T(n/2) + O(1)
And the height of the recursion tree would be log2(n), where n is (roughly) the total number of calls (i.e. nodes in the tree).
It was said by the instructor that this function has a time complexity of O(n), but I simply cannot see why.
Further, when you substitute O(n) into the time complexity equation there are strange results. For example,
T(n) <= cn
T(n/2) <= (cn)/2
Back into the original equation:
T(n) <= cn + 1
which is obviously not true, because cn + 1 is not ≤ cn.
Your instructor is correct. This is an application of the Master theorem.
You can't substitute O(n) like you did into the time complexity equation; a correct substitution would be a polynomial form like an + b, since O(n) only captures the highest-order term (there can be lower-order terms and constants).
To expand on the answer, you correctly recognize a time complexity equation of the form
T(n) = aT(n/b) + f(n), with a = 2, b = 2 and f(n) asymptotically O(1).
With this type of equation, you have three cases that depend on comparing n^(log_b(a)) (the cost of the recursion) with f(n) (the cost of solving the basic problem of length n):
1° f(n) dominates the recursion itself (n^(log_b(a)) grows more slowly than f(n)), for instance a = 2, b = 2 and f(n) asymptotically O(n^16). Then the recursion has negligible cost and the total time complexity can be assimilated to the complexity of f(n):
T(n) = Θ(f(n))
2° The recursion dominates f(n) (n^(log_b(a)) grows faster than f(n)), which is the case here. Then the complexity is O(n^(log_b(a))); in your example, O(n^(log_2(2))) = O(n^1), i.e. O(n).
3° The critical case where f(n) and n^(log_b(a)) are comparable, i.e. there exists k >= 0 such that f(n) = Θ(n^(log_b(a)) * log^k(n)); then the complexity is:
T(n) = Θ(n^(log_b(a)) * log^(k+1)(n))
This is the ugly case in my opinion.
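As a sanity check (my own sketch, not from the lecture), a recursion with cost T(n) = 2T(n/2) + O(1) makes about 2n calls in total, which is O(n):

// Count the calls of a recursion of the form T(n) = 2*T(n/2) + O(1).
let nodes = 0;

const halve = (n) => {
  nodes += 1;                 // O(1) work per call
  if (n <= 1) return;
  halve(Math.floor(n / 2));
  halve(Math.floor(n / 2));
};

for (const n of [1024, 4096, 16384]) {
  nodes = 0;
  halve(n);
  console.log(n, nodes, 2 * n - 1); // nodes === 2n - 1 for powers of 2
}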
I think it's interesting but I'm not sure about my solution. This algorithm calculates x^n.
If I use the master theorem my reasoning goes like this
T(n) = 2 T(n/2) + f(n)
But f(n) in this case is 1, right? Because the n <= 4 base case takes constant time. That gives me:
T(n) = Θ(n)
If I use substitution I get this answer
T(n) = Θ(n + log(n))
I think I'm doing lots of things wrong. Can someone point me in the right direction?
T(n) = Θ(n + log(n)) is just T(n) = Θ(n). The lower order term (log(n)) can be omitted when using theta.
Also, f(n) is O(1) because you are only doing one multiplication (rek(x, n/2) * rek(x, (n + 1)/2)) for each recursion.
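For reference, here is a minimal JavaScript sketch of the kind of recursion being discussed (the original code is not shown in the question; the function name rek comes from the answer above, and the base cases below are my assumptions):

// Hypothetical reconstruction: computes x^n by splitting the exponent in half.
const rek = (x, n) => {
  if (n === 0) return 1;  // assumed base case
  if (n === 1) return x;  // assumed base case
  // floor(n/2) + floor((n+1)/2) === n, so the product is x^n.
  return rek(x, Math.floor(n / 2)) * rek(x, Math.floor((n + 1) / 2));
};

console.log(rek(2, 10)); // 1024, using T(n) = 2T(n/2) + O(1) = Θ(n) multiplications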
The complexity is O(n). Explanation: the recursion ends up performing all the multiplications you would do with a simple loop, and no other operation is repeated more than a constant number of times per multiplication. So the complexity is about const * O(n), which is O(n).