So during my lecture my professor demonstrated how to solve this problem...
Prove n^2 + 2n + lgn = O(n^2)
So for this problem, if I'm correct, I can replace lgn and 2n with n^2 terms
because n^2 grows faster than both. After doing that, we'd end up with n^2 + 2n^2 + n^2 = 4n^2 as our g(n):
0 <= n^2 + 2n + lgn <= n^2 + 2n^2 + n^2 for all n >= 1
However, is this only allowed when proving Big O questions, or can the same methodology be used when trying to prove Big Omega?
So with that being said that leads us to this problem...
Prove 2n^3 - 3n^2 + 2n is Big Omega (n^3)
For this problem he got rid of 2n completely, and only swapped -3n^2 with -n^3.
So after doing all of this, g(n) would be this:
2n^3 - 3n^2 + 2n >= 2n^3 - n^3 = n^3 for all n >= 3
Why wasn't -3n^2 replaced with n^3?
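Both inequalities are easy to check numerically in Python, for example (an illustration, not a proof; lgn is taken as log base 2):

    # Big O:     n^2 + 2n + lg(n) <= n^2 + 2n^2 + n^2 = 4n^2   for n >= 1
    # Big Omega: 2n^3 - 3n^2 + 2n >= 2n^3 - n^3 = n^3          for n >= 3
    import math

    for n in range(1, 10001):
        assert 0 <= n**2 + 2*n + math.log2(n) <= 4 * n**2
        if n >= 3:
            assert 2*n**3 - 3*n**2 + 2*n >= n**3
    print("both inequalities hold on the tested range")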
Quicksort's recurrence equation is
T(n) = T(n/2) + T(n/2) + Θ(n)
if the pivot always divides the original array into two equal-sized subarrays,
so the overall time complexity would be O(n log n).
But what if the ratio of the two sub-lists is always 1:99?
The equation would then be T(n) = T(n/100) + T(99n/100) + Θ(n).
But how can I derive time complexity from the above equation?
I've read another answer which says that we can ignore T(n/100), since T(99n/100) will dominate the overall time complexity.
But I cannot quite fully understand why.
Any advice would be appreciated!
Plug in T(n) = n log(n) + Θ(n) and you get
n log(n) + Θ(n) = n/100 log(n/100) + 99n/100 log(99n/100) + Θ(n/100) + Θ(99n/100) + Θ(n)
= n log(n)/100 + 99n log(n)/100 - n/100 log(100) + 99n/100 log(99/100) + Θ(n)
= n log(n) + Θ(n)
In fact any constant fraction will work: for any fixed 0 < p < 1,
T(n) = T(pn) + T((1-p)n) + Θ(n)
is solved by O(n log(n)).
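If you want to see this numerically, here is a small Python sketch (an illustration, not a proof) that solves the 1:99 recurrence with memoization, using T(1) = 1 as an assumed base case, and compares it to n log2(n); the ratio settles near a constant, which is what Θ(n log n) predicts:

    # Solve T(n) = T(n/100) + T(99n/100) + n numerically and compare with n*log2(n).
    from functools import lru_cache
    import math
    import sys

    sys.setrecursionlimit(20000)   # the 99/100 chain is at most a couple thousand calls deep

    @lru_cache(maxsize=None)
    def T(n):
        if n <= 1:
            return 1               # assumed base case
        return T(n // 100) + T(99 * n // 100) + n

    for n in [10**3, 10**4, 10**5, 10**6]:
        print(n, T(n) / (n * math.log2(n)))   # ratio stays roughly constant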
Let's say we have the following complexity:
T(n, k) = n^2 + n + k^2 + 15*k + 123
Where we do not know anything about the relation between n and k.
I could say that, in terms of Big O, the complexity would be the following:
T(n, k) = O(n^2 + n + k^2 + 15*k)
Can I simplify it further and drop only the constant 15, or can I drop n and 15*k as well?
UPDATE: according to this link, Big O is not a valid notation for two or more variables.
Yes, you can.
O(n^2 + n + k^2 + 15*k) = O(n^2 + n) + O(k^2 + 15*k) = O(n^2) + O(k^2) = O(n^2 + k^2)
This does assume that k and n are both positive, looking at the behavior as they get large.
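As a quick numerical illustration (not a proof), a Python check under the assumption n, k >= 1 shows that one concrete constant, say C = 80, already witnesses the O(n^2 + k^2) bound for the original expression including its constants:

    # Check n^2 + n + k^2 + 15k + 123 <= 80*(n^2 + k^2) on a grid with n, k >= 1.
    def f(n, k):
        return n**2 + n + k**2 + 15*k + 123

    for n in range(1, 300):
        for k in range(1, 300):
            assert f(n, k) <= 80 * (n**2 + k**2)
    print("bound holds on the tested grid")

The constant 80 is just one convenient choice: n^2 + n <= 2n^2, k^2 + 15k <= 16k^2 and 123 <= 62*(n^2 + k^2) already add up to a valid C for all n, k >= 1.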
I'm trying to prove that this formula (n^2 + 1)/(n + 1) is O(n).
As you know, we need to come up with n0 and C.
So I'm confused a little bit about how to choose an appropriate C, since the expression here is a division.
So with C = 1, I need (n^2 + 1)/(n + 1) <= n,
(n^2 + n)/(n + n) >= (n^2 + 1)/(n + 1)
but I'm stuck here on how to simplify the division.
As n tends to infinity, your original expression behaves like n^2/n = n, which is O(n).
Choosing c = 1:
(n^2 + 1)/(n + 1) <= 1*n definition of Big-Oh with c = 1
n^2 + 1 <= n^2 + n multiplying both sides by n + 1
1 <= n subtracting n^2 from both sides
n >= 1 rearranging
Since each step above is reversible, the choice n0 = 1 therefore works for c = 1.
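For a quick empirical confirmation (illustrative only), the same inequality can be checked in exact integer arithmetic in Python by multiplying through by n + 1:

    # Check n^2 + 1 <= n*(n + 1), i.e. (n^2 + 1)/(n + 1) <= n, for n >= 1.
    for n in range(1, 10**5 + 1):
        assert n**2 + 1 <= n * (n + 1)
    print("holds for 1 <= n <= 10^5")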
Working through the recurrence, you can derive that the running time of this function satisfies: T(n) = 2T(n/2) + O(1)
And the height of the recursion tree would be log2(n), where n is the total number of calls (i.e. nodes) at the bottom level of the tree.
It was said by the instructor that this function has a time complexity of O(n), but I simply cannot see why.
Further, when you substitute O(n) into the time complexity equation there are strange results. For example,
T(n) <= cn
T(n/2) <= (cn)/2
Back into the original equation:
T(n) <= cn + 1
which is obviously not the bound we assumed, because cn + 1 > cn.
Your instructor is correct. This is an application of the Master theorem.
You can't substitute O(n) like you did into the recurrence; a correct substitution would be a linear form like an + b, since O(n) only captures the highest-order term (there can be lower-order terms and constants hidden inside it).
To expand on the answer, you correctly recognized a time-complexity recurrence of the form
T(n) = aT(n/b) + f(n), with a = 2, b = 2 and f(n) = O(1).
With this type of recurrence, you have three cases depending on how f(n) (the cost of the non-recursive work on a problem of size n) compares with n^{log_b(a)} (the cost contributed by the recursion):
1° f(n) grows polynomially faster than the recursion itself (f(n) = Ω(n^{log_b(a) + ε}) for some ε > 0), for instance a = 2, b = 2 and f(n) = Θ(n^16). Then the recursion has negligible cost and the total time complexity is simply that of f(n):
T(n) = Θ(f(n))
2° The recursion dominates f(n) (f(n) = O(n^{log_b(a) - ε}) for some ε > 0), which is the case here. Then the complexity is Θ(n^{log_b(a)}); in your example that is n^{log_2(2)} = n, i.e. O(n).
3° The critical case where f(n) and n^{log_b(a)} are comparable, i.e. there exists k >= 0 such that f(n) = Θ(n^{log_b(a)} log^k(n)); then the complexity is:
T(n) = Θ(n^{log_b(a)} log^{k+1}(n))
This is the ugly case in my opinion.
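If the Master theorem still feels abstract, here is a small Python sketch (an illustration under the assumption that each call does constant work and recurses on the two halves) that simply counts the nodes of the recursion tree for T(n) = 2T(n/2) + O(1); the count is about 2n - 1, which is why the total is O(n):

    # Count the calls made by a procedure that does O(1) work and recurses on halves.
    def calls(n):
        if n <= 1:
            return 1
        return 1 + calls(n // 2) + calls(n - n // 2)   # two half-sized subproblems

    for n in [2**10, 2**14, 2**18]:
        print(n, calls(n), calls(n) / n)   # ratio approaches 2, i.e. Θ(n) calls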
What are O(log(n!)) and O(n!)? I believe they are O(n log(n)) and O(n^n). Why?
I think it has to do with Stirling's approximation, but I don't get the explanation very well.
Am I wrong that O(log(n!)) = O(n log(n))? How can the math be explained in simpler terms? Really, I just want an idea of how this works.
O(n!) isn't equivalent to O(n^n): n! grows asymptotically slower than n^n, so O(n!) is a proper subset of O(n^n).
O(log(n!)) is equal to O(n log(n)). Here is one way to prove that:
Note that by using the log rule log(mn) = log(m) + log(n) we can see that:
log(n!) = log(n*(n-1)*...*2*1) = log(n) + log(n-1) + ... + log(2) + log(1)
Proof that O(log(n!)) ⊆ O(n log(n)):
log(n!) = log(n) + log(n-1) + ... + log(2) + log(1)
Which is less than:
log(n) + log(n) + log(n) + log(n) + ... + log(n) = n*log(n)
So O(log(n!)) is a subset of O(n log(n))
Proof that O(n log(n)) ⊆ O(log(n!)):
log(n!) = log(n) + log(n-1) + ... + log(2) + log(1)
Which is greater than the left half of that expression, with each term log(n - x) replaced by log(n/2):
log(n/2) + log(n/2) + ... + log(n/2) = floor(n/2)*log(n/2), which is Ω(n log(n))
So O(n log(n)) is a subset of O(log(n!)).
Since O(n log(n)) ⊆ O(log(n!)) ⊆ O(n log(n)), they are equivalent big-Oh classes.
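Here is a short Python illustration (not a proof) of the sandwich used above, computing log2(n!) directly as a sum of logs and comparing it with the two bounds; the upper-bound ratio also creeps toward 1 as n grows:

    # Check floor(n/2)*log2(n/2) <= log2(n!) <= n*log2(n) numerically.
    import math

    def log2_factorial(n):
        return sum(math.log2(i) for i in range(2, n + 1))

    for n in [10, 100, 1000, 10000]:
        lo = (n // 2) * math.log2(n / 2)
        hi = n * math.log2(n)
        mid = log2_factorial(n)
        print(n, lo <= mid <= hi, mid / hi)   # bounds hold; ratio slowly approaches 1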
By Stirling's approximation,
log(n!) = n log(n) - n + O(log(n))
For large n, the right side is dominated by the term n log(n). That implies that O(log(n!)) = O(n log(n)).
More formally, one definition of "Big O" is that f(x) = O(g(x)) if and only if
lim sup|f(x)/g(x)| < ∞ as x → ∞
Using Stirling's approximation, it's easy to show that log(n!) ∈ O(n log(n)) using this definition.
A similar argument applies to n!. By taking the exponential of both sides of Stirling's approximation, we find that, for large n, n! behaves asymptotically like sqrt(2πn) * n^n / exp(n). Since sqrt(2πn)/exp(n) → 0 as n → ∞, we can conclude that n! ∈ O(n^n), but O(n!) is not equivalent to O(n^n): there are functions in O(n^n) that are not in O(n!) (such as n^n itself).
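A quick Python sketch of the Stirling-based claims (illustrative only): math.lgamma(n + 1) gives the natural log of n!, which tracks n*log(n) - n closely, while the ratio n!/n^n collapses toward 0:

    # Compare log(n!) with Stirling's n*log(n) - n, and watch n!/n^n shrink.
    import math

    for n in [10, 50, 100]:
        log_fact = math.lgamma(n + 1)                   # ln(n!)
        stirling = n * math.log(n) - n                  # leading Stirling terms
        ratio = math.exp(log_fact - n * math.log(n))    # n! / n^n
        print(n, log_fact, stirling, ratio)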