What is the performance of log(n!) compared to n? - time-complexity

I was wondering if log(n!) is O(n), or if n is O(log(n!)). I have seen that log(n!) is O(n log(n)), but I can't find anything that addresses my particular question.

Stirling's approximation implies that log(n!) = Θ(n log(n)). In particular, it is not true that log(n!) = O(n), and it is true that n = O(log(n!)).
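For a quick numerical sanity check of both claims, here is a small sketch in Python (math.lgamma(n + 1) computes ln(n!)):

import math

# ln(n!) via the log-gamma function: lgamma(n + 1) == ln(n!)
for n in (10, 100, 1000, 10_000, 100_000):
    log_fact = math.lgamma(n + 1)
    print(n, log_fact / (n * math.log(n)), log_fact / n)
# The first ratio, ln(n!) / (n ln n), creeps toward 1 (Stirling),
# while the second, ln(n!) / n, keeps growing, so log(n!) is not O(n).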

log(n!) grows much faster than n, as the plot (not reproduced here) shows.
Example:
When n = 50,
n is just 50,
but log(n!) ≈ 64.48 (base-10 logarithm).
Update:
I tried plotting n and log(n!) on the same graph.
n ≥ log(n!) for 0 < n < 25, and
log(n!) ≥ n for n > 25,
so asymptotically log(n!) dominates: n = O(log(n!)), but log(n!) is not O(n).
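If you want to reproduce that crossover without plotting, here is a small sketch (assuming the base-10 logarithm used for the numbers above):

import math

def log10_factorial(n):
    # log10(n!) via the log-gamma function
    return math.lgamma(n + 1) / math.log(10)

for n in range(20, 31):
    side = "log10(n!) > n" if log10_factorial(n) > n else "n > log10(n!)"
    print(n, round(log10_factorial(n), 2), side)
# The crossover sits at about n = 25; beyond it, log10(n!) stays above n.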

Related

Proving or Refuting Time Complexity

I have an exam soon and I haven't been at university for a long time because I was in the hospital.
Prove or refute the following statements:
(1) log(n) = O(√n)
(2) 3^(n-1) = O(2^n)
(3) f(n) + g(n) = O(f(g(n)))
(4) 2^(n+1) = O(2^n)
Could someone please help me and explain these to me?
(1) is true because log(n) grows asymptotically slower than any polynomial, including sqrt(n) = n^(1/2). To prove this we can observe that both log(n) and sqrt(n) are strictly increasing functions for n > 0 and then focus on a sequence where both evaluate easily, e.g., 2^(2k). Now we see log(2^(2k)) = 2k, but sqrt(2^(2k)) = 2^k. For k = 2, 2k = 2^k, and for k > 2, 2k < 2^k. This glosses over some details but the idea is sound. You can finish this by arguing that between 2^(2k) and 2^(2(k+1)) both functions have values greater than one for k >= 2 and thus any crossings can be eliminated by multiplying sqrt(n) by some constant.
(2) it is not true that 3^(n-1) is O(2^n). Suppose it were. Then there exist n0 and c such that for n > n0, 3^(n-1) <= c*2^n. First, eliminate the -1 by writing 3^(n-1) = (1/3)*3^n, so (1/3)*3^n <= c*2^n. Next, divide through by 2^n: (1/3)*(3/2)^n <= c. Multiply by 3: (3/2)^n <= 3c. Finally, take the log of both sides with base 3/2: n <= log_{3/2}(3c). The RHS is a constant and n is a variable, so this cannot hold for arbitrarily large n as required. This is a contradiction, so our supposition was wrong; that is, 3^(n-1) is not O(2^n).
(3) this is not true. f(n) = 1 and g(n) = n is an easy counterexample. In this case, f(n) + g(n) = 1 + n, but O(f(g(n))) = O(f(n)) = O(1).
(4) this is true. Rewrite 2^(n+1) as 2*2^n and it becomes obvious that this is true for n >= 1 by choosing c > 2.
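The proofs above stand on their own, but here is a quick numeric sketch (mine, in Python) of the ratios behind (1), (2) and (4); (3) is covered by the counterexample already given:

import math

for n in (10, 100, 1000):
    print("n =", n)
    # (1) log(n) vs sqrt(n): the ratio heads to 0, consistent with log(n) = O(sqrt(n))
    print("  log2(n) / sqrt(n) =", math.log2(n) / math.sqrt(n))
    # (2) 3^(n-1) vs 2^n: the log of the ratio, (n-1)*ln 3 - n*ln 2, grows without bound
    print("  ln(3^(n-1) / 2^n) =", (n - 1) * math.log(3) - n * math.log(2))
    # (4) 2^(n+1) / 2^n is exactly 2, a constant, so c >= 2 works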

Big O time complexity of n^1.001

Why is the growth of n^1.001 greater than n log n in Big O notation?
The n^0.001 doesn't seem significant...
For any exponent x greater than 1, n^x is eventually greater than n * log(n). In the case of x = 1.001, the n in question is unbelievably large. Even if you lower x to 1.01, n^x doesn't get bigger than n * log(n) until beyond n = 1E+128 (but before you reach 1E+256).
So, for problems where n is less than astronomical, n^1.001 will be less than n * log(n), but you will eventually reach a point where it will be greater.
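As a rough check (a sketch of mine; note the exact crossover depends on the log base, and the figures above match the base-10 logarithm), you can compare the two functions in log10-space to avoid overflow:

import math

def log10_of_pow(exp10, x):
    # log10(n^x) where n = 10**exp10
    return x * exp10

def log10_of_n_logn(exp10):
    # log10(n * log10(n)) where n = 10**exp10
    return exp10 + math.log10(exp10)

for exp10 in (64, 128, 256, 512):
    lhs = log10_of_pow(exp10, 1.01)
    rhs = log10_of_n_logn(exp10)
    print(f"n = 1e{exp10}: larger is", "n^1.01" if lhs > rhs else "n*log10(n)")
# The winner flips between 1e128 and 1e256, consistent with the answer above.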
In case someone is interested, here is a formal proof:
For the sake of simplicity, let's assume we are using logarithms in base e.
Let a > 1 be any exponent (e.g., a = 1.001). Then a-1 > 0. Now consider the function
f(x) = x^(a-1)/log(x)
Using L'Hôpital's rule it is not hard to see that this function is unbounded. Moreover, computing the derivative of f(x), one can also see that the function is increasing for x > exp(1/(a-1)).
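For completeness, here is the derivative computation sketched out (my own working, under the same natural-log assumption):

f'(x) = \frac{(a-1)\,x^{a-2}\ln x - x^{a-1}\cdot\frac{1}{x}}{(\ln x)^2}
      = \frac{x^{a-2}\bigl((a-1)\ln x - 1\bigr)}{(\ln x)^2},

which is positive exactly when (a-1)·ln x > 1, i.e., when x > exp(1/(a-1)).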
Therefore, there must exist an integer N such that, for all n > N, f(n) > 1. In other words
n^(a-1)/log(n) > 1
or
n^(a-1) > log(n)
so
n^a > n log(n).
This shows that n log(n) = O(n^a), i.e., O(n^a) ⊇ O(n log(n)).
But wait a minute. We wanted strict domination, not just ⊇, right? Fortunately this is easy to see. For instance, in the case a = 1.001, we have
O(n^1.001) ⊋ O(n^1.0001) ⊇ O(n log(n))
and we are done.

Time Complexity of nested loops including if statement

I'm unsure of the general time complexity of the following code.
Sum = 0
for i = 1 to N
    if i > 10
        for j = 1 to i do
            Sum = Sum + 1
Assuming i and j are incremented by 1.
I know that the first loop is O(n) but the second loop is only going to run when N > 10. Would the general time complexity then be O(n^2)? Any help is greatly appreciated.
Consider the definition of Big O Notation.
________________________________________________________________
Let f: ℜ → ℜ and g: ℜ → ℜ.
Then, f(x) = O(g(x))
⟺
∃ k ∈ ℜ ∋ ∃ M > 0 ∈ ℜ ∋ ∀ x ≥ k, |f(x)| ≤ M ⋅ |g(x)|
________________________________________________________________
Which can be read less formally as:
________________________________________________________________
Let f and g be functions defined on a subset of the real numbers.
Then, f is O of g if, for big enough x's (this is what the k is for in the formal definition), there is a constant M (from the real numbers, of course) such that M times g(x) will always be greater than or equal to f(x) (really, you could just increase M and make it strictly greater, but I digress).
________________________________________________________________
(You may note that if a function is O(n), then it is also O(n²) and O(e^n), but of course we are usually interested in the "smallest" function g such that it is O(g). In fact, when someone says f is O of g then they almost always mean that g is the smallest such function.)
Let's translate this to your problem. Let f(N) be the amount of time your process takes to complete as a function of N. Now, pretend that addition takes one unit of time to complete (and checking the if statement and incrementing the for-loop take no time), then
f(1) = 0
f(2) = 0
...
f(10) = 0
f(11) = 11
f(12) = 23
f(13) = 36
f(14) = 50
We want to find a function g(N) such that for big enough values of N, f(N) ≤ M ⋅g(N). We can satisfy this by g(N) = N² and M can just be 1 (maybe it could be smaller, but we don't really care). In this case, big enough means greater than 10 (of course, f is still less than M⋅g for N <11).
tl;dr: Yes, the general time complexity is O(n²) because Big O assumes that your N is going to infinity.
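Here is a direct Python translation of the pseudocode (my sketch), which reproduces the f(N) values above and checks them against g(N) = N² with M = 1:

def f(N):
    # Count only the additions to Sum, as in the analysis above
    total = 0
    for i in range(1, N + 1):
        if i > 10:
            for j in range(1, i + 1):
                total += 1
    return total

for N in (10, 11, 12, 13, 14, 100, 1000):
    print(N, f(N), N * N, f(N) <= N * N)
# Prints f(10) = 0, f(11) = 11, f(12) = 23, f(13) = 36, f(14) = 50,
# and f(N) <= N^2 holds in every row.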
Let's assume your code is
Sum = 0
for i = 1 to N
    for j = 1 to i do
        Sum = Sum + 1
There are 1 + 2 + ... + N = N(N+1)/2 sum operations in total, which is Θ(N^2). Your code with if i > 10 does 1 + 2 + ... + 10 = 55 sum operations fewer. As a result, for big enough N we have
N(N+1)/2 - 55
operations. That is still
O(N^2) - O(1) = O(N^2)
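A small sketch (mine) confirming that the version with the if statement does exactly 55 fewer additions:

def count_with_if(N):
    sum_ops = 0
    for i in range(1, N + 1):
        if i > 10:
            sum_ops += i              # the inner loop runs i times
    return sum_ops

def count_without_if(N):
    return sum(range(1, N + 1))       # 1 + 2 + ... + N = N(N+1)/2

for N in (20, 100, 1000):
    print(N, count_without_if(N) - count_with_if(N))   # 55 for every N >= 10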

Best case and worst case time complexity

Given the following pseudo code for an array A
x = 0
for i = 0 to n - 2
    for j = i to n - 1
        if A[i] > A[j]:
            x = x + 1
return x
Is the worst case complexity O(n^2) or Theta(n^2) and why? I don't seem to understand the difference between the two.
As for the best case complexity, is it not the same as the worst case complexity because the algorithm still has to run through the same lines?
The dominating operation in this algorithm is the comparison A[i] > A[j]. This comparison is done the same number of times, about n²/2 (exactly n(n+1)/2 − 1), no matter what the contents of A are.
Saying the complexity is O(n^2) only gives an upper bound, i.e., a worst-case guarantee; O notation leaves open the possibility that the best case is better.
Theta(n^2) means the running time is n^2, up to constant factors, in all cases.
So the answer is: the complexity is Theta(n^2), because the number of comparisons is Θ(n^2) in both the best and the worst case.
See: Big-Theta notation and Big-O notation
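A quick sketch (mine) counting the comparisons for a best-case-looking input (already sorted) and a worst-case-looking one (reverse sorted):

def count_comparisons(A):
    n = len(A)
    x = 0
    comparisons = 0
    for i in range(0, n - 1):       # i = 0 .. n-2
        for j in range(i, n):       # j = i .. n-1
            comparisons += 1
            if A[i] > A[j]:
                x = x + 1
    return comparisons

n = 100
print(count_comparisons(list(range(n))))          # sorted input
print(count_comparisons(list(range(n, 0, -1))))   # reverse-sorted input
# Both print the same count, n(n+1)/2 - 1, so best case = worst case = Theta(n^2).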

Asymptotic complexity for typical expressions

The increasing order of the following functions (given in a picture that is not reproduced here) in terms of asymptotic complexity is:
(A) f1(n); f4(n); f2(n); f3(n)
(B) f1(n); f2(n); f3(n); f4(n);
(C) f2(n); f1(n); f4(n); f3(n)
(D) f1(n); f2(n); f4(n); f3(n)
a) The time complexity order for this easy question was given as (n^0.99)*(log n) < n. How? log might be a slow-growing function, but it still grows faster than a constant.
b) Consider function f1: suppose it were f1(n) = (n^1.0001)(log n); then what would the answer be?
Whenever an expression involves a multiplication between a logarithmic and a polynomial expression, does the logarithmic factor outweigh the polynomial expression?
c) How to check in such cases, for example:
1) (n^2)*log n vs (n^1.5): which has higher time complexity?
2) (n^1.5)*log n vs (n^2): which has higher time complexity?
If we consider C_1 and C_2 such that C_1 < C_2, then we can say the following with certainty:
(n^C_2)*log(n) grows faster than (n^C_1).
This is because
(n^C_1) grows slower than (n^C_2) (obviously);
also, for values of n larger than 2 (for log in base 2), log(n) is greater than 1;
in fact, log(n) is asymptotically greater than any constant C,
because log(n) -> inf as n -> inf.
If (n^C_2) is asymptotically greater than (n^C_1) AND log(n) is asymptotically greater than 1, then we can certainly say that
(n^2)log(n) has greater complexity than (n^1.5).
We think of log(n) as a "slowly growing" function, but it still grows past any constant, which is the key here.
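For the two concrete comparisons in part c), a quick numeric sketch (mine, using base-2 logs) shows the polynomial part deciding both:

import math

for n in (10**3, 10**6, 10**9):
    log2n = math.log2(n)
    # c) 1: n^2 * log(n) vs n^1.5 -> the n^2 side wins
    print(n, (n**2) * log2n > n**1.5)
    # c) 2: n^1.5 * log(n) vs n^2 -> equivalent to log(n) vs sqrt(n), so n^2 wins
    print(n, n**2 > (n**1.5) * log2n)
# Both comparisons print True for every n above; the higher-degree polynomial dominates.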
coder101 asked an interesting question in the comments, essentially,
is n^e = Ω((n^c)*log_d(n))?
where e = c + ϵ for arbitrarily small ϵ
Let's do some algebra.
n^e = (n^c)*(n^ϵ)
so the question boils down to
is n^ϵ = Ω(log_d(n))
or is it the other way around, namely:
is log_d(n) = Ω(n^ϵ)
In order to do this, let us find the value of ϵ that satisfies n^ϵ > log_d(n).
n^ϵ > log_d(n)
ϵ*ln(n) > ln(log_d(n))
ϵ > ln(log_d(n)) / ln(n)
Because we know for a fact that, for any constant c > 0,
c * ln(n) > ln(ln(n))    (1)
for all sufficiently large n.
We can say that, for an arbitrarily small ϵ, there exists an n large enough to
satisfy ϵ > ln(log_d(n)) / ln(n)
because, by (1), ln(log_d(n)) / ln(n) ---> 0 as n -> infinity.
With this knowledge, we can say that
n^ϵ = Ω(log_d(n))
for arbitrarily small ϵ,
which means that
n^(c + ϵ) = Ω((n^c)*log_d(n))
for arbitrarily small ϵ.
In layperson's terms:
n^1.1 > n * ln(n)
for large enough n;
also
n^1.001 > n * ln(n)
for some much, much bigger n;
and even
n^1.0000000000000001 > n * ln(n)
for some very, very big n.
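To put rough numbers on "some n", here is a sketch (mine) that searches for the crossover of n^(1+eps) vs n*ln(n) in log-space, i.e., it finds where eps*ln(n) first exceeds ln(ln(n)):

import math

def crossover_exponent(eps):
    # Work with y = ln(n): find the smallest y (past the trivial small-y region)
    # with eps * y > ln(y), then report n as a power of ten, n ~ 10^(y / ln 10).
    y = 2.0
    while eps * y <= math.log(y):
        y *= 1.0001
    return y / math.log(10)

for eps in (0.1, 0.01, 0.001):
    print(f"n^(1 + {eps}) overtakes n*ln(n) around n ~ 1e{crossover_exponent(eps):.0f}")
# eps = 0.1   -> around 1e16
# eps = 0.01  -> around 1e281
# eps = 0.001 -> around 1e3960, far beyond anything practical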
Replacing f1 = (n^0.9999)(logn) by f1 = (n^1.0001)(logn) will yield answer (C): n, (n^1.0001)(logn), n^2, 1.00001^n
The reasoning is as follows:
- (n^1.0001)(logn) has higher complexity than n: obvious.
- n^2 is higher than (n^1.0001)(logn) because the polynomial part asymptotically dominates the logarithmic part, so the higher-degree polynomial n^2 wins.
- 1.00001^n dominates n^2 because 1.00001^n has exponential growth while n^2 has polynomial growth, and exponential growth asymptotically wins.
BTW, 1.00001^n, whose base is only barely above 1, is sometimes loosely called "sub-exponential" growth, usually denoted (1+Ɛ)^n. However small Ɛ is, this growth still dominates any polynomial growth.
The crux of this problem lies in comparing f1(n) and f2(n).
For f(n) = n^c where 0 < c < 1, the growth is eventually so slow that it becomes negligible compared with a linear growth curve.
For f(n) = log_c(n), where c > 1, the growth is likewise eventually negligible compared with a linear growth curve.
The product of these two particular functions is still asymptotically smaller than n, because log_c(n) = o(n^(1-c)) and hence n^c * log_c(n) = o(n).
Hence, Theta(n^c * log_c(n)) is asymptotically less complex than Theta(n).
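A compact way to state that last step (my own note, written as a limit):

\frac{n^{c}\,\log_c(n)}{n} \;=\; \frac{\log_c(n)}{n^{1-c}} \;\longrightarrow\; 0
\quad\text{as } n \to \infty \qquad (0 < c < 1),

since any positive power of n eventually outgrows any logarithm.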