Comparison of functions asymptotically - time-complexity

I have 2 functions:
f(n) = n*log(n)
g(n) = n^(1.1) * log(log(log(n)))
I want to know how these functions compare to each other. From what I understand, f(n) will always grow faster than g(n). In other words: f(n) in ω(g(n))
I am assuming log base 10, but it really does not matter since any base could be used. I tried a number of combinations of n and c, and the following relation seems to hold:
f(n) ≥ c g(n) ≥ 0
The one combination that seemed to stick out to me was the following:
c = 0
n = 10^10
In this instance:
f(10^10) = (10^10) log(10^10) = (10^10)*(10) = 10^11
c*g(n) = 0 * (10^10)^(1.1) * log(log(log(10^10)))
= 0 * (10^11) * log(log(10))
= 0 * (10^11) * log(1)
= 0 * (10^11) * 0 = 0
Hence f(n) will always be greater than g(n) and the relationship will be f(n) is ω(g(n)).
Would my understanding be correct here?

First of all, the combination sticking out to you doesn't work because it's invalid. A function f(n) is said to be ω(g(n)) if and only if for every positive real constant c there exists a real number n0 such that f(n) ≥ c·g(n) for all n ≥ n0. You used c = 0, which is not positive, and in any case a single pair of values for n and c cannot establish an asymptotic bound, so using it to understand asymptotic growth isn't going to be helpful.
But more importantly, in your example it's not the case that f(n) = Ω(g(n)). In fact, it's actually f(n) = O(g(n)). You can see this because log(n) = O(n^0.1) (a logarithm grows more slowly than any positive power of n), so n·log(n) = O(n^1.1); and since log(log(log(n))) ≥ 1 for all sufficiently large n, n^1.1 = O(n^1.1 · log(log(log(n)))), and thus f(n) = O(g(n)).
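If seeing numbers helps, here is a small Python sketch (an illustration, not a proof; the helper functions and sample exponents are my own choices) showing that the ratio f(n)/g(n) shrinks toward 0, exactly as f(n) = O(g(n)) predicts:

    import math

    # Ratio of f(n) = n*log(n) to g(n) = n^1.1 * log(log(log(n))), base-10 logs.
    # Note the triple log is only positive once n > 10^10, so start well above that.
    def f(n):
        return n * math.log10(n)

    def g(n):
        return n ** 1.1 * math.log10(math.log10(math.log10(n)))

    for k in (16, 32, 64, 128, 256):
        n = 10.0 ** k
        print(f"n = 1e{k}: f(n)/g(n) = {f(n) / g(n):.3e}")

The printed ratios fall steadily toward 0, so g eventually dwarfs f.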


Proving or Refuting Time Complexity

I have an exam soon and I haven't been at university for a long time because I was in the hospital.
Prove or refute the following statements:
(1) log(n) = O(√n)
(2) 3^(n-1) = O(2^n)
(3) f(n) + g(n) = O(f(g(n)))
(4) 2^(n+1) = O(2^n)
Could someone please help me and explain these to me?
(1) is true because log(n) grows asymptotically slower than any polynomial, including sqrt(n) = n^(1/2). To prove this we can observe that both log(n) and sqrt(n) are strictly increasing functions for n > 0 and then focus on a sequence where both evaluate easily, e.g., n = 2^(2k) (taking log base 2 here; the base only changes a constant factor, which Big-O ignores). Now we see log(2^(2k)) = 2k, but sqrt(2^(2k)) = 2^k. For k = 2, 2k = 2^k, and for k > 2, 2k < 2^k. This glosses over some details but the idea is sound. You can finish by arguing that between 2^(2k) and 2^(2(k+1)) both functions take values greater than one for k >= 2, so any crossings can be eliminated by multiplying sqrt(n) by a suitable constant.
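A quick numeric illustration of this (a Python sketch, not a proof, with the sample points chosen to match the argument above):

    import math

    # Compare log2(n) with sqrt(n) at n = 2^(2k): log2(n) = 2k while sqrt(n) = 2^k.
    for k in range(1, 8):
        n = 2 ** (2 * k)
        print(k, n, math.log2(n), math.isqrt(n))

The two columns tie at k = 1 and k = 2, and sqrt(n) pulls ahead for every k > 2.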
(2) it is not true that 3^(n-1) is O(2^n). Suppose this were true. Then there exist an n0 and a constant c such that for n > n0, 3^(n-1) <= c*2^n. First, eliminate the -1 by writing 3^(n-1) = (1/3)*3^n, so (1/3)*3^n <= c*2^n. Next, divide through by 2^n: (1/3)*(3/2)^n <= c. Multiply by 3: (3/2)^n <= 3c. Finally, take the log of both sides with base 3/2: n <= log_{3/2}(3c). The RHS is a constant and n is a variable, so this cannot hold for arbitrarily large n as required. This is a contradiction, so our supposition was wrong; that is, 3^(n-1) is not O(2^n).
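Plugging in numbers makes the contradiction visible (a small Python sketch, nothing more):

    # The ratio 3^(n-1) / 2^n keeps growing, so no single constant c can satisfy
    # 3^(n-1) <= c * 2^n for all large n.
    for n in (1, 10, 20, 40, 80):
        print(n, 3 ** (n - 1) / 2 ** n)

The ratio climbs from 0.5 past 19, past 1100, and on toward astronomically large values.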
(3) this is not true. f(n) = 1 and g(n) = n is an easy counterexample. In this case, f(n) + g(n) = 1 + n, but O(f(g(n))) = O(f(n)) = O(1), and 1 + n is certainly not O(1).
(4) this is true. Rewrite 2^(n+1) as 2*2^n and it becomes obvious that 2^(n+1) <= c*2^n holds for all n >= 0 by choosing c = 2 (or any larger constant).
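For (3) and (4), tiny numeric sketches in Python (again illustrations only, using the counterexample functions above):

    # (3): with f(n) = 1 and g(n) = n, f(n) + g(n) grows but f(g(n)) stays at 1.
    f = lambda n: 1
    g = lambda n: n
    for n in (1, 10, 100, 1000):
        print(n, f(n) + g(n), f(g(n)))

    # (4): 2^(n+1) / 2^n is exactly 2 for every n, so c = 2 works as the constant.
    for n in (1, 10, 100):
        print(n, 2 ** (n + 1) / 2 ** n)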

Big O time complexity of n^1.001

Why is the growth of n^1.001 greater than n log n in Big O notation?
The n^0.001 doesn't seem significant...
For any exponent x greater than 1, n^x is eventually greater than n * log(n). In the case of x = 1.001, the n in question is unbelievably large. Even if you lower x to 1.01, n^x doesn't get bigger than n * log(n) until beyond n = 1E+128 (but before you reach 1E+256), taking logarithms in base 10.
So, for problems where n is less than astronomical, n^1.001 will be less than n * log(n), but you will eventually reach a point where it is greater.
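If you want to see roughly where the crossover happens, here is a short Python sketch (it assumes base-10 logarithms, and the loop and variable names are my own):

    # n^1.01 > n*log10(n)  iff  n^0.01 > log10(n).  Writing n = 10^k this becomes
    # 10^(0.01*k) > k, which keeps the numbers small enough to compute directly.
    k = 1
    while 10 ** (0.01 * k) <= k:
        k += 1
    print("n^1.01 first beats n * log10(n) near n = 10^%d" % k)

This prints a k in the neighborhood of 238, consistent with the 1E+128 to 1E+256 range above.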
In case someone is interested, here is a formal proof:
For the sake of simplicity, let's assume we are using logarithms in base e.
Let a > 1 be any exponent (e.g., a = 1.001). Then a-1 > 0. Now consider the function
f(x) = x^(a-1)/log(x)
Using L'Hôpital's rule it is not hard to see that this function is unbounded: differentiating numerator and denominator gives (a-1)·x^(a-2) / (1/x) = (a-1)·x^(a-1), which tends to infinity because a-1 > 0. Moreover, computing the derivative of f(x), one can also see that the function is increasing for x > exp(1/(a-1)).
Therefore, there must exist an integer N such that f(n) > 1 for all n > N. In other words
n^(a-1)/log(n) > 1
or
n^(a-1) > log(n)
so
n^a > n log(n).
This shows that n log(n) = O(n^a), i.e., O(n log(n)) ⊆ O(n^a).
But wait a minute. We wanted a strict separation, not mere containment, right? Fortunately this is easy to see. For instance, in the case a = 1.001, we can run the argument above with a = 1.0001 to get
n log(n) = O(n^1.0001), and n^1.0001 = o(n^1.001), hence n log(n) = o(n^1.001),
and we are done.
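A numeric illustration of the key fact (a Python sketch; a = 1.1 is chosen just to keep the numbers small, since for a = 1.001 the relevant x are astronomically large):

    import math

    # f(x) = x^(a-1)/log(x) with natural logs: it dips until x = exp(1/(a-1)) ~ 2.2e4,
    # then climbs without bound; once f(x) > 1 we have x^a > x*log(x).
    a = 1.1
    def f(x):
        return x ** (a - 1) / math.log(x)

    for k in (2, 5, 10, 15, 16, 20):
        print(f"x = 1e{k}: f(x) = {f(10.0 ** k):.3f}")

For a = 1.1, f(x) passes 1 somewhere between 1e15 and 1e16, which is where x^1.1 overtakes x*log(x).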

Asymptotic notation and Growth of Combinations of Functions: Difference

I need to prove or disprove the following conjecture:
if f(n) = O(h(n)) AND g(n) = O(k(n)) then (f − g)(n) = O(h(n) − k(n))
I am aware of the sum and product theorems for growth combination, but I could not find a way to apply them here, even though I know that subtraction can be rewritten as addition. Everywhere I looked defined the mentioned theorems, but lacked examples of subtraction.
Your statement is not true; consider the following counterexample:
Take f(n) = 2n^2 = O(n^2) and g(n) = n^2 = O(n^2), so that h(n) = k(n) = n^2. We have:
(f-g)(n) = n^2, while h(n) - k(n) = 0; n^2 is certainly not O(0) (it is not even O(1)), hence (f-g)(n) ≠ O(h(n) - k(n)).
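A quick Python check of the counterexample (just plugging in the functions above):

    # f and g are both O(n^2) (take h = k = n^2), yet (f - g)(n) = n^2 grows
    # while h(n) - k(n) is identically 0.
    f = lambda n: 2 * n * n
    g = lambda n: n * n
    h = k = lambda n: n * n
    for n in (1, 10, 100, 1000):
        print(n, f(n) - g(n), h(n) - k(n))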

Big O calculation

I was studying Big O notation. I know that Big O is denoted by:
f(n) ∈ O(g(n)) or f(n) = O(g(n))
It means the function f(n) has a growth rate no greater than that of g(n).
Now let's say I have an equation:
5n + 2 ∈ O(n)
by the above, shouldn't 'n' be g(n) and '5n + 2' be f(n)?
Now for any value of n, f(n) is always greater than g(n). So how is Big O true in this case?
You should read up on the concept of Big O in more detail.
The relation
f(n) ∈ O(g(n))
says that for some constant C and all sufficiently large n,
f(n) <= C * g(n)
In this case C is some value for which 5n + 2 is at most C·n for all large enough n
If you solve it:
5n + 2 <= Cn
2 <= (C - 5)*n
From this you can easily find out that if C = 6
then the inequality holds for every n >= 2, which is all Big O asks for!
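A quick Python check of those constants (just an illustration):

    # With C = 6, 5n + 2 <= 6n holds for every n >= 2 (it fails only at n = 1),
    # which is all that Big O requires.
    for n in range(1, 8):
        print(n, 5 * n + 2, 6 * n, 5 * n + 2 <= 6 * n)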
Hope this helps!
That's not a correct definition of big O notation. If f(x) is O(g(x)), then there must exist some constants C and N such that |f(x)| <= C |g(x)| for all x > N. So, if f(x) is less than or equal to some constant times g(x) after some x value N, then f(x) is O(g(x)). Effectively, this means that constant factors are irrelevant, because you can choose C to be any value. So, for your example, f(n) = 5n + 2 <= C*g(n) = 10000n for all n >= 1, so f(n) is O(g(n)).
Considering what the Big-O notation stands for, you have the statement
5n + 2 ∈ O(n)
or, equivalently,
5n + 2 = O(n)
Given that Big-O notation states an upper bound for our function, that is, an upper limit on how fast it can grow, the problem can be reconsidered in the following way:
5n + 2 <= c*n, for some constant c and all sufficiently large n
We can see that the statement holds true because it is possible to find a constant c for which c*n is greater than or equal to our function from some point on (making that constant as big as we need).
In a more general way, for polynomial functions we can say that f(n) belongs to O(g(n)) if the degree of g(n) is greater than or equal to the degree of f(n), the degree being the highest power among the terms.
Formally:
Let f(n) = n^x;
Let g(n) = n^y, so that x <= y.
Then f(n) = O(g(n)).
The same applies to Big-Omega the other way around.
Hope it works for you

How is this algorithm O(n)?

Working through the recurrences, you can derive that the running time of this function satisfies: T(n) = 2T(n/2) + O(1)
And the height of the recurrence tree would be log2(n), where n is the total number of calls (i.e. nodes in the tree).
It was said by the instructor that this function has a time complexity of O(n), but I simply cannot see why.
Further, when you substitute O(n) into the time complexity equation there are strange results. For example,
T(n) <= cn
T(n/2) <= (cn)/2
Back into the original equation:
T(n) <= cn + 1
Where this is obviously not true, because cn + 1 is not <= cn.
Your instructor is correct. This is an application of the Master theorem.
You can't substitute O(n) like you did into the time complexity equation; a correct substitution would be a concrete linear form like an + b, since O(n) only records the highest significant degree (there can be constants and lower-order terms hiding beneath it).
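To make that concrete, here is a sketch of the substitution (writing the O(1) work as a single unit of cost; the exact constant doesn't matter): guess T(n) <= a*n + b. Then
T(n) = 2*T(n/2) + 1 <= 2*(a*(n/2) + b) + 1 = a*n + 2b + 1
and a*n + 2b + 1 <= a*n + b exactly when b <= -1. So a guess of the form T(n) <= a*n - 1 goes through (with a chosen large enough to cover the base case), giving T(n) = O(n). The guess T(n) <= c*n on its own fails, as you observed, because the +1 has nowhere to be absorbed.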
To expand on the answer, you correctly recognize a time complexity equation of the form
T(n) = aT(n/b) + f(n), with a = 2, b = 2 and f(n) = O(1).
With this type of equation, there are three cases that depend on how n^(log_b(a)) (the cost contributed by the recursion) compares with f(n) (the cost of the non-recursive work on a problem of size n):
1° f(n) dominates the recursion itself (f(n) = Ω(n^(log_b(a) + ε)) for some ε > 0), for instance a = 2, b = 2 and f(n) = Θ(n^16). Then the recursion contributes a negligible share and the total time complexity is that of f(n):
T(n) = Θ(f(n))
2° The recursion dominates f(n) (f(n) = O(n^(log_b(a) - ε)) for some ε > 0), which is the case here. Then the complexity is Θ(n^(log_b(a))), in your example Θ(n^(log_2(2))) = Θ(n), i.e. O(n).
3° The critical case where f(n) is comparable to n^(log_b(a)), i.e. there exists k >= 0 such that f(n) = Θ(n^(log_b(a)) * log^k(n)); then the complexity is:
T(n) = Θ(n^(log_b(a)) * log^(k+1)(n))
This is the ugly case in my opinion.
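To see the linear bound concretely, here is a small Python sketch (not the code from the question, just a stand-in with the same recurrence shape) that counts the calls made by a recursion of the form T(n) = 2T(n/2) + O(1):

    # Count the calls made by a recursion of the shape T(n) = 2T(n/2) + O(1).
    def calls(n):
        if n <= 1:
            return 1                                 # base case: one call
        return 1 + calls(n // 2) + calls(n // 2)     # this call plus two half-size calls

    for n in (1, 2, 4, 8, 16, 1024):
        print(n, calls(n))                           # 2*n - 1 when n is a power of two

For n a power of two the count is exactly 2n - 1, i.e. Θ(n) calls each doing O(1) work, which matches the instructor's O(n).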