Does O(N(logN)^4) grow faster than O(N^3)?

Does an O(N(logN)^4) function grow faster than O(N^3)? I can't really get an intuition for it and can't work out which grows faster.

O(N(logN)^4) < O(N^3)
The rule for comparing two O notations is to take the ratio and see where it goes as n goes to infinity.
In our example: N(logN)^4 / N^3 = (logN)^4 / N^2 -> 0, so near infinity you will have N(logN)^4 / N^3 < 1, i.e. N(logN)^4 < N^3. This translates to O(N(logN)^4) < O(N^3).
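To make the limit concrete, here is a small numeric check (a sketch in Python; natural logarithms are used purely for illustration, the limit is 0 for any log base):

import math

# A sketch: evaluate the ratio N*(log N)^4 / N^3 = (log N)^4 / N^2 for a few N.
for N in (10**2, 10**4, 10**6, 10**8):
    ratio = math.log(N) ** 4 / N**2
    print(f"N = {N:>9}  (log N)^4 / N^2 = {ratio:.3g}")

The printed ratios shrink rapidly toward 0, so N(logN)^4 is the slower-growing function.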

Related

What is the time complexity (Big-O) of this while loop (Pseudocode)?

This is written in pseudocode.
We have an array A of length n (n >= 2):
int i = 1;
while (i < n) {
    if (A[i] == 0) {
        break;      // terminates the while-loop
    }
    i = i * 2;      // doubles i
}
I am new to this whole subject and to coding, so I am having a hard time grasping it and need an "explain like I'm 5".
I know the code doesn't make a lot of sense, but it is just an exercise; I have to determine the best case and the worst case.
So in the best case, Big O would be O(1), if the value at A[1] is 0.
For the worst-case scenario I thought the time complexity of this loop would be O(log(n)) as i doubles.
Is that correct?
Thanks in advance!
For Big O notation you take the worst-case scenario. In the case where A[i] never evaluates to zero, your loop is effectively this:
int i = 1;
while (i < n) {
    i *= 2;
}
i is doubled on each iteration, i.e. it grows exponentially.
Given an example of n = 16, the values of i would be:
1, 2, 4, 8
(it wouldn't get to 16), which is 4 iterations, and 2^4 = 16.
To work out the power, you take the log to base 2 of n, i.e. log2(16) = 4.
So the worst case takes log(n) iterations, and the complexity would be stated as O(log(n)).
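If it helps, here is a small simulation (a Python sketch of the worst case described above, where A[i] never becomes 0) that counts the iterations and compares them with log2(n):

import math

# A sketch of the worst case: count the doubling-loop iterations.
def iterations(n):
    i, count = 1, 0
    while i < n:
        i *= 2          # doubles i
        count += 1
    return count

for n in (2, 16, 1000, 10**6):
    print(f"n = {n:<8}  iterations = {iterations(n):<3}  log2(n) = {math.log2(n):.1f}")

For n = 16 it reports 4 iterations, matching the walkthrough above.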

Big O time complexity of n^1.001

Why is the growth of n^1.001 greater than n log n in Big O notation?
The n^0.001 doesn't seem significant...
For any exponent x greater than 1, n^x is eventually greater than n * log(n). In the case of x = 1.001, the n in question is unbelievably large. Even if you lower x to 1.01, n^x doesn't get bigger than n * log(n) until beyond n = 1E+128 (but before you reach 1E+256).
So, for problems where n is less than astronomical, n^1.001 will be less than n * log(n), but you will eventually reach a point where it will be greater.
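As a rough numerical illustration (a Python sketch; base-10 logarithms are assumed here, which is consistent with the figures quoted above, and the crossover point shifts for other bases), you can compare the two functions at a few very large n:

import math

# A sketch: compare n^1.01 with n * log10(n) at a few very large n.
for exp in (64, 128, 192, 256):
    n = 10.0 ** exp
    bigger = ">" if n ** 1.01 > n * math.log10(n) else "<"
    print(f"n = 1e{exp}:  n^1.01 {bigger} n * log10(n)")

The comparison only flips somewhere between 1E+192 and 1E+256 in this setup.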
In case someone is interested, here is a formal proof:
For the sake of simplicity, let's assume we are using logarithms in base e.
Let a > 1 be any exponent (e.g., a = 1.001). Then a-1 > 0. Now consider the function
f(x) = x^(a-1)/log(x)
Using L'Hôpital's rule it is not hard to see that this function is unbounded. Moreover, computing the derivative of f(x), one can also see that the function is increasing for x > exp(1/(a-1)).
Therefore, there must exist an integer N such that f(n) > 1 for all n > N. In other words,
n^(a-1)/log(n) > 1
or
n^(a-1) > log(n)
so
n^a > n log(n).
This shows that O(n^a) >= O(n log(n)).
But wait a minute. We wanted >, not >=, right? Fortunately this is easy to see. For instance, in the case a = 1.001, we have
O(n^1.001) > O(n^1.0001) >= O(n log(n))
and we are done.
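For anyone who wants to see the numbers behind the proof, here is a small sketch (Python, natural logs, a = 1.001). Because x itself is astronomically large near the crossover, it works with t = ln(x) instead of x:

import math

# A sketch: evaluate f(x) = x^(a-1) / ln(x) for a = 1.001, using t = ln(x)
# so that astronomically large x can still be handled as floats.
a = 1.001
for t in (1e2, 1e3, 1e4, 1e5):                 # t = ln(x), i.e. x = e^t
    f = math.exp((a - 1) * t) / t
    print(f"ln(x) = {t:>8.0f}   f(x) = {f:.3g}")
# f eventually exceeds 1 (around ln(x) ~ 9.1e3) and then grows without
# bound, consistent with the L'Hopital argument above.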

Best case and worst case time complexity

Given the following pseudo code for an array A
x = 0
for i = 0 to n - 2
    for j = i to n - 1
        if A[i] > A[j]:
            x = x + 1
return x
Is the worst case complexity O(n^2) or Theta(n^2) and why? I don't seem to understand the difference between the two.
As for the best case complexity, is it not the same as the worst case complexity because the algorithm still has to run through the same lines?
The dominating operation in this algorithm is the comparison A[i] > A[j]. This comparison is performed the same number of times (roughly n^2/2, which is on the order of n^2) regardless of the input.
O(n^2) is an upper bound, usually quoted for the worst case. If you use O notation, you leave open the possibility that the complexity is better in the best case.
Theta(n^2) means that this is the complexity in all cases.
So the answer is: the complexity is Theta(n^2), because in both the best and the worst case it is on the order of n^2.
See: Big-Theta notation and Big-O notation
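As a quick empirical check (a Python sketch; count_comparisons is just a hypothetical helper mirroring the pseudocode above), the number of comparisons is the same for any input of a given length:

def count_comparisons(A):
    # Mirrors the pseudocode, but also counts the A[i] > A[j] tests.
    n = len(A)
    comparisons = 0
    x = 0
    for i in range(n - 1):            # i = 0 .. n-2
        for j in range(i, n):         # j = i .. n-1
            comparisons += 1
            if A[i] > A[j]:
                x = x + 1
    return comparisons

n = 100
print(count_comparisons(list(range(n))))         # already sorted input
print(count_comparisons(list(range(n, 0, -1))))  # reverse-sorted input
# Both print 5049 (about n^2 / 2), so the running time is on the order of
# n^2 in the best case and the worst case alike.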

Asymptotic complexity for typical expressions

The increasing order, in terms of asymptotic complexity, of the following functions (the picture is not reproduced here; from the discussion below they appear to be f1(n) = (n^0.9999)(logn), f2(n) = n, f3(n) = 1.00001^n, and f4(n) = n^2) is:
(A) f1(n); f4(n); f2(n); f3(n)
(B) f1(n); f2(n); f3(n); f4(n);
(C) f2(n); f1(n); f4(n); f3(n)
(D) f1(n); f2(n); f4(n); f3(n)
a) The complexity order for this question was given as (n^0.99)*(logn) < n. How? log may be a slow-growing function, but it still grows faster than a constant.
b) Consider function f1: suppose it is f1(n) = (n^1.0001)(logn); then what would be the answer?
Whenever an expression involves a product of a logarithmic and a polynomial expression, does the logarithmic factor outweigh the polynomial one?
c) How do you check in such cases? Suppose:
1) (n^2)logn vs (n^1.5): which has higher time complexity?
2) (n^1.5)logn vs (n^2): which has higher time complexity?
If we consider C_1 and C_2 such that C_1 < C_2, then we can say the following with certainty
(n^C_2)*log(n) grows faster than (n^C_1)
This is because
(n^C_1) grows slower than (n^C_2) (obviously)
also, for values of n larger than 2 (for log in base 2), log(n) is greater than 1;
in fact, log(n) is asymptotically greater than any constant C,
because log(n) -> inf as n -> inf.
If (n^C_2) is asymptotically greater than (n^C_1) AND log(n) is asymptotically greater than 1, then we can certainly say that
(n^2)log(n) has greater complexity than (n^1.5)
We think of log(n) as a "slowly growing" function, but it still grows faster than 1, which is the key here.
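Here is a quick ratio check (a Python sketch using natural logarithms) for the two concrete cases asked in the question:

import math

# A sketch: ratios for the two cases from the question.
#   case 1:  n^2 * log(n)   vs  n^1.5   (ratio blows up -> left side wins)
#   case 2:  n^1.5 * log(n) vs  n^2     (ratio goes to 0 -> right side wins)
for n in (10**3, 10**6, 10**9, 10**12):
    r1 = n**2 * math.log(n) / n**1.5
    r2 = n**1.5 * math.log(n) / n**2
    print(f"n = {n:>13}  case 1 ratio = {r1:.3g}   case 2 ratio = {r2:.3g}")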
coder101 asked an interesting question in the comments, essentially,
is n^e = Ω((n^c)*log_d(n))?
where e = c + ϵ for arbitrarily small ϵ
Let's do some algebra.
n^e = (n^c)*(n^ϵ)
so the question boils down to
is n^ϵ = Ω(log_d(n))
or is it the other way around, namely:
is log_d(n) = Ω(n^ϵ)
In order to do this, let us find the value of ϵ that satisfies n^ϵ > log_d(n).
n^ϵ > log_d(n)
ϵ*ln(n) > ln(log_d(n))
ϵ > ln(log_d(n)) / ln(n)
We know for a fact that, for any constant c > 0,
c * ln(n) > ln(ln(n))    (1)
for all sufficiently large n.
We can say that, for an arbitrarily small ϵ, there exists an n large enough to
satisfy ϵ > ln(log_d(n)) / ln(n)
because, by (1), ln(log_d(n)) / ln(n) ---> 0 as n -> infinity.
With this knowledge, we can say that
n^ϵ = Ω(log_d(n))
for arbitrarily small ϵ,
which means that
n^(c + ϵ) = Ω((n^c)*log_d(n))
for arbitrarily small ϵ.
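A quick numerical look at the key quantity (a Python sketch, using d = 2 as an example base) shows ln(log_d(n)) / ln(n) indeed shrinking toward 0:

import math

# A sketch: ln(log2(n)) / ln(n) for n = 10^exp, worked in log space so that
# very large n can be used. The ratio tends to 0, so any fixed eps > 0
# eventually satisfies eps > ln(log_d(n)) / ln(n).
for exp in (10, 100, 1000, 10000):
    ln_n = exp * math.log(10)         # ln(10^exp)
    log2_n = exp * math.log2(10)      # log2(10^exp)
    print(f"n = 1e{exp:<5}  ln(log2(n)) / ln(n) = {math.log(log2_n) / ln_n:.5f}")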
In layperson's terms:
n^1.1 > n * ln(n)
from some n onwards; also
n^1.001 > n * ln(n)
from some much, much bigger n onwards; and even
n^1.0000000000000001 > n * ln(n)
from some very, very big n onwards.
Replacing f1 = (n^0.9999)(logn) by f1 = (n^1.0001)(logn) will yield answer (C): n, (n^1.0001)(logn), n^2, 1.00001^n
The reasoning is as follows:
- (n^1.0001)(logn) has higher complexity than n: obvious.
- n^2 is higher than (n^1.0001)(logn) because the polynomial part asymptotically dominates the logarithmic part, so the higher-degree polynomial n^2 wins.
- 1.00001^n dominates n^2 because 1.00001^n has exponential growth while n^2 has polynomial growth, and exponential growth asymptotically wins.
By the way, 1.00001^n belongs to a family sometimes called "sub-exponential" growth, usually denoted (1+ε)^n. Still, however small ε is, this kind of growth still dominates any polynomial growth.
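To see the last point numerically, here is a sketch (Python, worked in log space to avoid overflow) of 1.00001^n against n^2:

import math

# A sketch: compare 1.00001^n with n^2 in log space to avoid overflow.
for n in (10**4, 10**6, 4 * 10**6, 10**7):
    log_exp = n * math.log(1.00001)     # ln(1.00001^n)
    log_poly = 2 * math.log(n)          # ln(n^2)
    print(f"n = {n:>9}  1.00001^n {'>' if log_exp > log_poly else '<'} n^2")
# The exponential overtakes n^2 somewhere between n = 10^6 and n = 4*10^6.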
The crux of this problem lies in comparing f1(n) and f2(n).
For f(n) = n^c where 0 < c < 1, the curve eventually grows so slowly that it becomes trivial compared with a linear growth curve.
For f(n) = log_c(n), where c > 1, the curve eventually grows so slowly that it becomes trivial compared with a linear growth curve.
The product of two such functions also eventually becomes trivial compared with a linear growth curve.
Hence, Theta(n^c * log_c(n)) is asymptotically less complex than Theta(n).
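As a concrete illustration (a Python sketch; the exponent 0.5 and the base-2 log are just example values, not the ones from the figure), the ratio of such a product to n shrinks toward 0:

import math

# A sketch with example values c = 0.5 (exponent) and base-2 logs:
# (n^0.5 * log2(n)) / n tends to 0, so the product grows slower than n.
for n in (10**2, 10**4, 10**6, 10**8):
    ratio = n**0.5 * math.log2(n) / n
    print(f"n = {n:>9}  (sqrt(n) * log2(n)) / n = {ratio:.4g}")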

n^(1/n) growth rate compared to n^a

I have a function n^a, where 0 < a < 1, and n^(1/n). Which of these grows faster? They both have n raised to a fractional power, so technically at some instance a = 1/n. So how do I rank them? If a < 1/n or a > 1/n it would be obvious, but all I have for a is that it is between 0 and 1 exclusive. So how do I know which has the greater growth rate?
O(n^(1/n)) is in fact O(1).
log(n^(1/n)) = (1/n)*log(n) = log(n)/n
and this tends to zero (because n grows much faster than log(n)), so:
log(n^(1/n)) -> 0 ==> O(n^(1/n)) = O(1)
O(n^a) > O(1) when a > 0,
so you have O(n^(1/n)) < O(n^a).
In simple words: whatever a is, even if it is very small, there is some k such that a > 1/n for every n > k, and that is what is behind this answer.
(Suppose a is a small number: you just need to consider values of n greater than 1/a, and for those, a > 1/n.)
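If it helps to see the numbers, here is a small sketch (Python; a = 0.01 is just an example value for a small exponent, not part of the original question) comparing the two:

import math

# A sketch: n^(1/n) tends to 1, while n^a with a fixed a > 0 keeps growing.
a = 0.01
for n in (10, 10**3, 10**6, 10**9):
    print(f"n = {n:>10}  n^(1/n) = {n ** (1 / n):.8f}   n^a = {n ** a:.4f}")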