How can I order the following functions by rate of growth? n^(logn), 3^n, (logn)^n, n choose n-4, and n^3 ?
What I have is: n^3, n choose n-4, n^logn, 3^n, (logn)^n but I'm not sure if this is right.
Your ordering looks correct to me.
n^3 is obviously the smallest polynomial in the list.
n choose (n-4) is n! / ((n-4)! 4!) = n (n-1) (n-2) (n-3) / 4!. It's O(n^4), and is the second smallest function.
n^log n = exp((log n)^2) is not even exponential, it's quasi-polynomial.
3^n is a classic exponential.
(log n)^n grows faster than 3^n since its base, not just its exponent, increases with n (log n eventually exceeds 3). By the way, it is still bounded by a simple exponential-type expression, because (log n)^n = exp(n log log n) = O(exp(n^2)), for example.
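As a quick numerical sanity check (not a proof), you can compare the natural logarithm of each function at a couple of values of n; comparing logs avoids overflow for 3^n and (log n)^n. This is just an illustrative Python sketch:

```python
import math

# Compare ln of each function at a moderately large n; a larger ln means faster growth.
# Using logarithms avoids overflow for 3^n and (log n)^n.
for n in (10**3, 10**6):
    ln_n = math.log(n)
    values = {
        "n^3":        3 * ln_n,
        "C(n, n-4)":  math.log(n * (n - 1) * (n - 2) * (n - 3) / 24),
        "n^(log n)":  ln_n ** 2,
        "3^n":        n * math.log(3),
        "(log n)^n":  n * math.log(ln_n),
    }
    print(n, sorted(values, key=values.get))
```

Both runs print the functions in the order given above, from slowest to fastest growth.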

Why is the growth of n^1.001 greater than n log n in Big O notation?
The n^0.001 doesn't seem significant...
For any exponent x greater than 1, n^x is eventually greater than n * log(n). In the case of x = 1.001, the n in question is unbelievably large. Even if you raise x to 1.01, n^x doesn't get bigger than n * log(n) until beyond n = 1E+128 (but before you reach 1E+256).
So, for problems where n is less than astronomical, n^1.001 will be less than n * log(n), but you will eventually reach a point where it will be greater.
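To get a feel for just how astronomical that crossover is, here is a rough sketch that scans powers of ten in log10 space (so nothing overflows); log base 10 is assumed, and the exact crossover shifts somewhat with a different base:

```python
import math

def n_log_n_still_ahead(x, k):
    """True if n * log10(n) > n^x at n = 10^k, compared via base-10 logs."""
    return k + math.log10(k) > x * k     # log10(n * log10(n)) vs log10(n^x)

for x in (1.01, 1.001):
    # For tiny n (where log10(n) < 1), n^x is already ahead, so look for the
    # last power of ten at which n * log10(n) is still the larger of the two.
    last = max(k for k in range(2, 100_000) if n_log_n_still_ahead(x, k))
    print(f"n * log10(n) stays ahead of n^{x} up to roughly n = 1e{last}")
```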
In case someone is interested, here is a formal proof:
For the sake of simplicity, let's assume we are using logarithms in base e.
Let a > 1 be any exponent (e.g., a = 1.001). Then a-1 > 0. Now consider the function
f(x) = x^(a-1)/log(x)
Using L'Hôpital's rule it is not hard to see that this function is unbounded. Moreover, computing the derivative of f(x), one can also see that the function is increasing for x > exp(1/(a-1)).
Therefore, there must exist an integer N such that, for all n > N, f(n) > 1. In other words
n^(a-1)/log(n) > 1
or
n^(a-1) > log(n)
so
n^a > n log(n).
This shows that O(n^a) >= O(n log(n)).
But wait a minute. We wanted >, not >=, right? Fortunately this is easy to see. For instance, in the case a = 1.001, we have
O(n^1.001) > O(n^1.0001) >= O(n log(n))
and we are done.
Assume that function f is in the complexity class O(N (log N)^2), and that for N = 1,000 the program runs in 8 seconds.
How do I write a formula T(N) that computes the approximate time it takes to run f for any input of size N?
Here is the answer:
8 = c (1000 x 10)
c = 8x10^-4
T(N) = 8x10^-4 * (N log2 N)
I don't understand the first line: where does the 10 come from?
Can anybody explain the answer to me please? Thanks!
T(N) is the estimated worst-case running time. c is the constant of proportionality: the per-unit cost that does not depend on the size of the input. The 10 comes from rounding to simplify the math; it's actually 9.965784, which is log2 of 1000. That is,
N x log2 N is
1000 x 10 or
1000 x 9.965784
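Taking the answer key's model at face value (runtime ≈ c * N * log2 N seconds with c = 8x10^-4), a small Python sketch shows what it predicts; the values of N below are just examples:

```python
import math

def predicted_seconds(n, c=8e-4):
    """The fitted model from the answer key: T(N) = c * N * log2(N)."""
    return c * n * math.log2(n)

for n in (1_000, 10_000, 100_000, 1_000_000):
    print(f"N = {n:>9,}  ->  about {predicted_seconds(n):,.0f} seconds")
```

The N = 1,000 line comes out at about 8 seconds, which is the measurement the constant was fitted to.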
O(N (log N)^2) describes how the runtime scales with N, but it's not a formula for calculating runtime in seconds. In fact, Big-O notation doesn't generally give the exact scaling function itself, but an upper bound on it as N becomes large. See here (there's a nice picture showing this last point).
If you're interested in a function's runtime in practice (particularly in the non-asymptotic regime, i.e. small N), one option is to actually run the function and measure it. Do this for multiple values of N, chosen on some grid (possibly with nonlinear spacing). Then, you can interpolate between these points.
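Here is a minimal sketch of that measure-and-interpolate approach in Python; the workload f below and the grid of sizes are placeholders for whatever you actually want to time:

```python
import time
import numpy as np

def f(n):
    # placeholder workload; substitute the real function you care about
    return sorted(range(n, 0, -1))

sizes = np.geomspace(1_000, 1_000_000, num=8, dtype=int)   # nonlinearly spaced grid
times = []
for n in sizes:
    start = time.perf_counter()
    f(n)
    times.append(time.perf_counter() - start)

# estimate the runtime at an unmeasured size by interpolating in log-log space
n_query = 250_000
estimate = np.exp(np.interp(np.log(n_query), np.log(sizes), np.log(times)))
print(f"estimated runtime at N = {n_query}: {estimate:.4f} s")
```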
Define S(N) = N (log N)^2.
If you can assume that S(N) bounds your program's running time for all N >= 1000,
then you can bound your execution time by the good ol' rule of three:
S(1000) corresponds to T(1000)
S(N) corresponds to T(N)
T(N) <= S(N) * T(1000) / S(1000) for all N >= 1000
S(1000) approx 10E4
T(1000) = 8
T(N) <= N(log N)^2 * 8 / 10E4
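In Python, that rule-of-three bound looks like the sketch below; log base 2 is assumed in S(N) (the answer doesn't say which base, but in the ratio S(N)/S(1000) the base cancels out anyway):

```python
import math

def S(n):
    # scaling model from this answer: N * (log N)^2, with log base 2 assumed
    return n * math.log2(n) ** 2

T_1000 = 8.0   # measured: the program takes 8 seconds at N = 1000

def bound_seconds(n):
    """Rule of three: T(N) <= S(N) * T(1000) / S(1000) for N >= 1000."""
    return S(n) * T_1000 / S(1000)

print(f"T(1e6) <= about {bound_seconds(10**6):,.0f} seconds")
```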
The increasing order of the following functions, shown in the picture below, in terms of asymptotic complexity is:
(A) f1(n); f4(n); f2(n); f3(n)
(B) f1(n); f2(n); f3(n); f4(n);
(C) f2(n); f1(n); f4(n); f3(n)
(D) f1(n); f2(n); f4(n); f3(n)
a) The time complexity order for this easy question was given as (n^0.99)*(log n) < n. How? log might be a slowly growing function, but it still grows faster than a constant.
b) Consider function f1: suppose it were f1(n) = (n^1.0001)(log n) instead; then what would the answer be?
Whenever an expression involves a product of a logarithmic and a polynomial factor, does the logarithmic factor outweigh the polynomial one?
c) How do you check in cases such as these:
1) (n^2)*log n vs (n^1.5): which has higher time complexity?
2) (n^1.5)*log n vs (n^2): which has higher time complexity?
If we consider C_1 and C_2 such that C_1 < C_2, then we can say the following with certainty
(n^C_2)*log(n) grows faster than (n^C_1)
This is because
(n^C_1) grows slower than (n^C_2) (obviously), and,
for values of n larger than 2 (for log in base 2), log(n) is greater than 1.
In fact, log(n) is asymptotically greater than any constant C,
because log(n) -> inf as n -> inf.
If (n^C_2) is asymptotically greater than (n^C_1) AND log(n) is asymptotically greater than 1, then we can certainly say that
(n^2)*log(n) has greater complexity than (n^1.5).
We think of log(n) as a "slowly growing" function, but it still outgrows any constant, which is the key here.
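For the concrete pairs in part (c), a quick numeric look (log base 2 assumed, values of n chosen arbitrarily) shows the two ratios heading in opposite directions, which is what the argument above predicts:

```python
import math

# ratio_1 = n^2 * log(n) / n^1.5 grows without bound  -> n^2 * log n dominates n^1.5
# ratio_2 = n^1.5 * log(n) / n^2 shrinks towards zero -> n^2 dominates n^1.5 * log n
for n in (10**3, 10**6, 10**9, 10**12):
    ratio_1 = n**2 * math.log2(n) / n**1.5
    ratio_2 = n**1.5 * math.log2(n) / n**2
    print(f"n = 1e{round(math.log10(n)):>2}:  ratio_1 = {ratio_1:10.3g}   ratio_2 = {ratio_2:.3g}")
```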
coder101 asked an interesting question in the comments, essentially,
is n^e = Ω((n^c)*log_d(n))?
where e = c + ϵ for arbitrarily small ϵ
Let's do some algebra.
n^e = (n^c)*(n^ϵ)
so the question boils down to
is n^ϵ = Ω(log_d(n))
or is it the other way around, namely:
is log_d(n) = Ω(n^ϵ)
In order to do this, let us find the value of ϵ that satisfies n^ϵ > log_d(n).
n^ϵ > log_d(n)
ϵ*ln(n) > ln(log_d(n))
ϵ > ln(log_d(n)) / ln(n)
Because we know for a fact that, for any constant c > 0,
c * ln(n) > ln(ln(n))    (1)
for all sufficiently large n,
we can say that, for an arbitrarily small ϵ, the inequality ϵ > ln(log_d(n)) / ln(n) holds for all sufficiently large n,
because, by (1), ln(log_d(n)) / ln(n) -> 0 as n -> infinity.
With this knowledge, we can say that
n^ϵ = Ω(log_d(n))
for arbitrarily small ϵ
which means that
n^(c + ϵ) = Ω((n^c)*log_d(n))
for arbitrarily small ϵ.
in layperson's terms
n^1.1 > n * ln(n)
for some n
also
n ^ 1.001 > n * ln(n)
for some much, much bigger n
and even
n ^ 1.0000000000000001 > n * ln(n)
for some very very big n.
Replacing f1 = (n^0.9999)(logn) by f1 = (n^1.0001)(logn) will yield answer (C): n, (n^1.0001)(logn), n^2, 1.00001^n
The reasoning is as follows:
- (n^1.0001)(log n) has higher complexity than n: obvious.
- n^2 is higher than (n^1.0001)(log n) because the polynomial part asymptotically dominates the logarithmic part, so the higher-degree polynomial n^2 wins.
- 1.00001^n dominates n^2 because 1.00001^n has exponential growth while n^2 has polynomial growth, and exponential growth asymptotically wins.
BTW, 1.00001^n looks a little like a family sometimes called "sub-exponential" growth, usually denoted (1+Ɛ)^n. Still, however small Ɛ is, this kind of growth still dominates any polynomial growth.
The complexity of this problem lies between f1(n) and f2(n).
For f(n) = n^c where 0 < c < 1, the curve eventually grows so slowly that it becomes negligible compared with a linear growth curve.
For f(n) = log_c(n), where c > 1, the curve eventually grows so slowly that it becomes negligible compared with a linear growth curve.
The product of two such functions also eventually becomes negligible compared with a linear growth curve.
Hence, Theta(n^c * log_c(n)) is asymptotically less complex than Theta(n).
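A small numeric sketch of that last claim, using c = 0.5 as an example (for c very close to 1 the same thing happens, but only at astronomically large n, in line with the n^1.001 discussion above):

```python
import math

# For 0 < c < 1, n^c * log(n) / n = log(n) / n^(1-c) tends to 0,
# so n^c * log(n) eventually becomes negligible next to n. Example with c = 0.5:
for n in (10**2, 10**4, 10**8, 10**12):
    ratio = math.sqrt(n) * math.log2(n) / n
    print(f"n = 1e{round(math.log10(n)):>2}:  (n^0.5 * log2 n) / n = {ratio:.2e}")
```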
Consider a tree where the cost of an insertion is in O(log n). Say you start from an empty tree and add N elements iteratively. We want to know the total time complexity. I did this:
nb of operations in iteration i = log i
nb of operations in all iterations from 1 to N = log 1 + log 2 + ... + log N = log( N! )
total complexity = O(log(N!)) ~ O(N log N)
(cf the Stirling approximation http://en.wikipedia.org/wiki/Stirling%27s_approximation )
Is this correct?
Yes, it's nearly correct.
A small correction: in the ith step, the number of operations is not log i, as most of the time that's an irrational number, it's O(log i). So for a mathematically tight proof you have to work a bit harder, but in short, what you wrote is the essence of the proof.
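You can also check the log(N!) ≈ N log N step numerically; math.lgamma(N + 1) gives ln(N!) directly, so this sketch just looks at the ratio:

```python
import math

# ln(N!) / (N ln N) tends to 1, consistent with log(N!) = Theta(N log N)
for n in (10**2, 10**4, 10**6, 10**8):
    ratio = math.lgamma(n + 1) / (n * math.log(n))   # lgamma(n + 1) = ln(n!)
    print(f"N = 1e{round(math.log10(n))}:  ln(N!) / (N ln N) = {ratio:.4f}")
```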
I've got a homework question that's been puzzling me. It asks that you prove that the function Sum[log(i)*i^3, {i, n}] (i.e. the sum of log(i)*i^3 from i=1 to n) is big-theta (log(n)*n^4).
I know that Sum[i^3, {i, n}] is ( (n(n+1))/2 )^2 and that Sum[log(i), {i, n}] is log(n!), but I'm not sure if 1) I can treat these two separately since they're part of the same product inside the sum, and 2) how to start getting this into a form that will help me with the proof.
Any help would be really appreciated. Thanks!
The series looks like this: log 1 + (log 2) * 2^3 + (log 3) * 3^3 + ... (up to n terms),
and it does not converge, it just keeps growing. So estimate its growth with the integral
Integral from 1 to n of (log x) * x^3 dx, which (by integration by parts)
gives 1/4 * log n * n^4 - 1/16 * n^4 (plus a constant).
It is clear that the dominating term there is log n * n^4, therefore the sum belongs to Big Theta(log n * n^4).
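A quick numeric check of that dominating-term claim (the ratio of the sum to n^4 log n should settle near 1/4, a constant, as a Big Theta relation requires):

```python
import math

# sum_{i=2}^{n} log(i) * i^3 compared with n^4 * log(n): the ratio approaches 1/4
for n in (10**2, 10**3, 10**4):
    s = sum(math.log(i) * i**3 for i in range(2, n + 1))
    print(f"n = {n}:  sum / (n^4 log n) = {s / (n**4 * math.log(n)):.4f}")
```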
The other way you could look at it is this:
The series looks like log 1 + (log 2) * 8 + (log 3) * 27 + ... + (log n) * n^3.
You could think of log n as the factor with the highest value (log i <= log n for every term in the sum),
so you could bound the above series by log n * (1 + 2^3 + 3^3 + ... + n^3), which is
log n * [n^2 (n + 1)^2] / 4
Assuming f(n) = log n * n^4
and g(n) = log n * [n^2 (n + 1)^2] / 4,
you could show that the limit of f(n)/g(n) as n tends to infinity is a nonzero constant (applying L'Hôpital's rule if needed).
That's another way to show that the function g(n) belongs to Big Theta(f(n)).
Hope that helps.
Hint for one part of your solution: how large is the sum of the last two summands of your left sum?
Hint for the second part: If you divide your left side (the sum) by the right side, how many summands do you get? How large is the largest one?
Hint for the first part again: Find a simple lower estimate for the sum from n/2 to n in your first expression.
Try the Big-O limit definition and use calculus.
For the calculus you might like to use a Computer Algebra System.
In the following answer, I've shown how to do this with the open-source CAS Maxima:
Asymptotic Complexity of Logarithms and Powers
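If you don't have Maxima handy, the same limit-based check can be sketched with SymPy in Python (this is an analogous computation, not the Maxima session the linked answer shows):

```python
import sympy as sp

n = sp.symbols('n', positive=True)
x = sp.symbols('x', positive=True)

# Integral comparison for the sum of log(i) * i^3 (see the integral answer above)
integral = sp.integrate(sp.log(x) * x**3, (x, 1, n))
print(sp.expand(integral))                                # n**4*log(n)/4 - n**4/16 + 1/16

# Big-O/Theta limit definition: the ratio tends to a finite nonzero constant
print(sp.limit(integral / (n**4 * sp.log(n)), n, sp.oo))  # 1/4
```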