Is O(log n) the same as O(log 2n)?
By the laws of logarithms, log(2N) = log(2) + log(N), and since log(2) is just a constant, what you get in big-O is O(log(2) + log(N)) = O(log(N)).
Yes, O(log n) and O(log 2n) mean the same thing. This is because
log 2n = log 2 + log n,
and since log 2 is a constant, it's ignored by big-O notation.
Going a bit broader than this, properties of logarithms mean that logs of many common expressions end up being equivalent to O(log n). For example, log n^k, for any fixed constant k, is O(log n) because
log n^k = k log n = O(log n).
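As a quick numerical sanity check (my own sketch, not part of the original answer), the following C program prints log(2n) − log(n) and log(n^k)/log(n) for growing n; both settle to constants (log 2 and k), which is exactly what big-O notation discards. The function and variable names are illustrative choices.

#include <math.h>
#include <stdio.h>

/* Sketch: compare log(2n), log(n), and log(n^k) numerically.
   The difference and the ratio settle to the constants log(2) and k,
   which big-O notation ignores. */
int main(void) {
    const double k = 3.0;                 /* arbitrary fixed exponent */
    for (double n = 10; n <= 1e7; n *= 10) {
        double base  = log(n);
        double twice = log(2 * n);        /* = log 2 + log n */
        double power = k * log(n);        /* = log(n^k)      */
        printf("n=%.0e  log(2n)-log(n)=%.4f  log(n^k)/log(n)=%.2f\n",
               n, twice - base, power / base);
    }
    return 0;
}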
Related
I was learning the merge sort algorithm and found that the time complexity of merge sort is O(n log n).
I want to know whether we can say O(n log n) = O(n) * O(log n)?
No, it doesn't really make sense to do that. The Big-O function yields sets of functions and sets cannot be multiplied together.
More generally, you don't normally perform any operations on O(...) results. There's no adding them, subtracting them, multiplying them. No algebra. O(...) typically shows up at the conclusion of a proof: "Based on the analysis above, I conclude that the worst case complexity of Finkle's Algorithm is O(whatever)." It doesn't really show up in the middle, where one might subject it to algebraic manipulation.
(You could perform set operations, I suppose. I've never seen anybody do that.)
To formalise what it means to do O(n) * O(log n), let's make the following definition:
A function f is in O(n) * O(log n) if and only if it can be written as a product f(n) = g(n) h(n) where g is in O(n) and h is in O(log n).
Now we can prove that the set O(n) * O(log n) is equal to the set O(n log n) by showing that the functions in both sets are the same:
Given g in O(n) and h in O(log n), there are N_g, c_g, N_h, c_h such that for all n >= max(N_g, N_h) we have |g(n)| <= c_g n and |h(n)| <= c_h log n. It follows that |g(n) h(n)| <= c_g c_h n log n, and so max(N_g, N_h) and c_g c_h are sufficient to show that f is in O(n log n).
Conversely, given f in O(n log n), there are N_f >= 1, c_f such that |f(n)| <= c_f n log n for all n >= N_f. Define g(n) = max(1, n) and h(n) = f(n) / max(1, n); clearly g is in O(n), and we can also see that for n >= N_f we have |h(n)| <= c_f n log n / max(1, n) where the bound on the right hand side is equal to c_f log n because n >= 1, so N_f, c_f are sufficient to show that h is in O(log n). Since we have f(n) = g(n) h(n), it follows that f is in O(n) * O(log n) as we defined it.
The choice of N_f >= 1 and g(n) = max(1, n) is to avoid dividing by zero when n is zero.
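For reference, the key inequalities from the two directions above can be written compactly in LaTeX (this is just a restatement of the argument):

|g(n)\,h(n)| \le c_g c_h \, n \log n \quad \text{for } n \ge \max(N_g, N_h),
\qquad
f(n) = \underbrace{\max(1,n)}_{\in\,O(n)} \cdot \underbrace{\frac{f(n)}{\max(1,n)}}_{\in\,O(\log n)}.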
Actually, the '=' used in Big-O notation is not symmetric; let's see an example:
Let f be defined as f(n) = n.
Then f(n) = O(n^2) and f(n) = O(n^3), but O(n^2) != O(n^3).
That's because the equals sign is not used accurately here; we should really say f(n) is in O(g).
Anyway, allowing a little imprecision, here is the definition of Big-O as given by Sipser:
Say that f(n) = O(g(n)) if positive integers c and n0 exist such that for every integer n ≥ n0,
f(n) ≤ c g(n).
When f(n) = O(g(n)), we say that g(n) is an upper bound for f(n), or more precisely, that g(n) is an asymptotic upper bound for f(n), to emphasize that we are suppressing constant factors.
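As a quick worked instance of that definition (my example, not Sipser's): take f(n) = n and g(n) = n^2. The constants c = 1 and n0 = 1 witness the definition, since

n \le 1 \cdot n^2 \quad \text{for every integer } n \ge 1,

so f(n) = O(n^2), even though O(n) and O(n^2) are different classes.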
So to prove what you state, you must first define what * means in your equation, and then show that every function which is in O(n log n) is also in O(n) * O(log n), and vice versa.
But, being imprecise again and treating * as symbolic multiplication, we have the following for some positive constants c and d:
O(n log n) = O(c n log n) = O(log(n^(cn))) = O(d log(n^(cn))) = O(log((n^(cn))^d)) = O(log(n^(cdn))) ≈ log(n^(cdn)) = cdn log n
O(n) * O(log n) = O(cn) * O(d log n) = O(cn) * O(log(n^d)) ≈ cn log(n^d) = cn · d log n = cdn log n
Both sides reduce to the same expression.
void func(int n) {
    int i = 1, k = n;
    while (i <= k) {
        k = k / 2;
        i = i * 2;
    }
}
How do I calculate the time complexity of this function? I understand that the assignments i=1 and k=n take two basic steps, and that dividing k and multiplying i take two basic steps per iteration as well, but because the values of i and k are increasing and decreasing exponentially, will the time complexity be O(log base 4 N) or O(log base 2 sqrt(N))?
Your answer is O(log √n), in the comments @Eraklon says it's O((log₂ n)/2), and @matri70boss says it's O(log₄ n). All three of you are correct, but the answer in its simplest form is O(log n).
log √n = log n^0.5 = 0.5 log n, and we discard the constant factor 0.5 when we write it in big-O notation.
(log₂ n)/2 = (log n)/(2 log 2) by the change-of-base identity, and 1/(2 log 2) is another constant factor we can discard.
Likewise, log₄ n = (log n)/(log 4), and we can discard the constant factor 1/(log 4).
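As an empirical check (my own sketch, not part of the answer), the loop can be instrumented to count iterations and compare them with (log₂ n)/2; the helper name count_iterations is just an illustrative choice.

#include <math.h>
#include <stdio.h>

/* Sketch: count how many times the loop body runs and compare it
   with (log2 n)/2, i.e. log4 n. */
static int count_iterations(int n) {
    int i = 1, k = n, steps = 0;
    while (i <= k) {
        k = k / 2;
        i = i * 2;
        steps++;
    }
    return steps;
}

int main(void) {
    for (int n = 16; n <= 1 << 24; n <<= 4) {
        printf("n=%9d  iterations=%2d  (log2 n)/2=%.1f\n",
               n, count_iterations(n), log2((double)n) / 2.0);
    }
    return 0;
}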
Equation
I know that the solution is the one shown in green, but I don't understand how to compute it.
I would appreciate it if somebody could explain it to me, or just give me a link where I can read up on it.
Thanks.
For the general case (where n>1), the recursion is n + T(n/2) + T(n/2).
This can be simplified to 2T(n/2) + n.
By the Master Method of solving recurrences, let a = 2, b = 2 and f(n) = O(n).
According to the theorem, log_b a is log_2 2, which is clearly 1. So O(n^(log_b a)) is O(n^1), which is O(n).
Case 2 of the Master Theorem says that if f(n) is equal in complexity to O(n^(log_b a)), then the entire recurrence has a complexity of O(n^(log_b a) * log n).
Therefore the overall complexity is O(n^(log_b a) * log n), which is O(n log n). When dealing with complexity we can use log(n) and lg(n) interchangeably, since they differ only by a constant factor. So choice C is correct.
P.S. A really good overview of how to apply the master method is here.
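As an illustration (my own sketch, not part of the answer above), the recurrence T(n) = 2T(n/2) + n can be evaluated directly and compared with n log₂ n; the base case T(1) = 1 is an arbitrary choice.

#include <math.h>
#include <stdio.h>

/* Sketch: evaluate T(n) = 2*T(n/2) + n with T(1) = 1 and compare
   the result with n*log2(n). */
static double T(long n) {
    if (n <= 1) return 1.0;
    return 2.0 * T(n / 2) + (double)n;
}

int main(void) {
    for (long n = 2; n <= 1L << 20; n <<= 2) {
        printf("n=%8ld  T(n)=%12.0f  n*log2(n)=%12.0f\n",
               n, T(n), (double)n * log2((double)n));
    }
    return 0;
}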
Consider a tree where the cost of an insertion is in O(log n). Say you start from an empty tree and add N elements iteratively. We want to know the total time complexity. I did this:
number of operations in iteration i = log i
number of operations in all iterations from 1 to N = log 1 + log 2 + ... + log N = log(N!)
total complexity = O(log(N!)) ~ O(N log N)
(cf. the Stirling approximation http://en.wikipedia.org/wiki/Stirling%27s_approximation)
Is this correct?
Yes, it's nearly correct.
A small correction: in the ith step, the number of operations is not log i (most of the time that's an irrational number); it's O(log i). So for a mathematically tight proof you have to work a bit harder, but in short, what you wrote is the essence of the proof.
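As a small numerical illustration (my sketch, not part of the answer), summing an O(log i) per-insertion cost and comparing the total with N log₂ N shows how close the two are; N = 1000000 is an arbitrary choice.

#include <math.h>
#include <stdio.h>

/* Sketch: add up a log2(i) cost per insertion for i = 1..N and
   compare the total (which is log2(N!)) with N*log2(N). */
int main(void) {
    const long N = 1000000;
    double total = 0.0;
    for (long i = 1; i <= N; i++) {
        total += log2((double)i);
    }
    printf("sum of log2(i) = %.0f\n", total);
    printf("N * log2(N)    = %.0f\n", (double)N * log2((double)N));
    return 0;
}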
What are O(log(n!)) and O(n!)? I believe they are O(n log(n)) and O(n^n), but why?
I think it has to do with Stirling's approximation, but I don't understand the explanation very well.
Am I wrong about O(log(n!)) = O(n log(n))? How can the math be explained in simpler terms? Really, I just want an idea of how this works.
O(n!) isn't equivalent to O(n^n): n! grows asymptotically slower than n^n, so O(n!) is a proper subset of O(n^n).
O(log(n!)) is equal to O(n log(n)). Here is one way to prove that:
Note that by using the log rule log(mn) = log(m) + log(n) we can see that:
log(n!) = log(n·(n-1)·...·2·1) = log(n) + log(n-1) + ... + log(2) + log(1)
Proof that O(log(n!)) ⊆ O(n log(n)):
log(n!) = log(n) + log(n-1) + ... + log(2) + log(1)
Which is less than:
log(n) + log(n) + log(n) + log(n) + ... + log(n) = n*log(n)
So O(log(n!)) is a subset of O(n log(n))
Proof that O(n log(n)) ⊆ O(log(n!)):
log(n!) = log(n) + log(n-1) + ... + log(2) + log(1)
Which is greater than the left half of that expression, with every term (n-x) replaced by n/2:
log(n/2) + log(n/2) + ... + log(n/2) = floor(n/2)·log(floor(n/2)), which is in Θ(n log(n))
So O(n log(n)) is a subset of O(log(n!)).
Since O(n log(n)) ⊆ O(log(n!)) ⊆ O(n log(n)), they are equivalent big-Oh classes.
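The two bounds above can be summarized in one line (just a restatement of the argument, in LaTeX, with the floor details kept):

\left\lfloor \tfrac{n}{2} \right\rfloor \log \left\lfloor \tfrac{n}{2} \right\rfloor \;\le\; \log(n!) \;\le\; n \log n, \qquad \text{and both outer expressions are } \Theta(n \log n).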
By Stirling's approximation,
log(n!) = n log(n) - n + O(log(n))
For large n, the right side is dominated by the term n log(n). That implies that O(log(n!)) = O(n log(n)).
More formally, one definition of "Big O" is that f(x) = O(g(x)) if and only if
lim sup|f(x)/g(x)| < ∞ as x → ∞
Using Stirling's approximation, it's easy to show that log(n!) ∈ O(n log(n)) using this definition.
A similar argument applies to n!. By taking the exponential of both sides of Stirling's approximation, we find that, for large n, n! behaves asymptotically like √(2πn) · n^n / e^n. Since √(2πn) / e^n → 0 as n → ∞, we can conclude that n! ∈ O(n^n), but O(n!) is not equivalent to O(n^n). There are functions in O(n^n) that are not in O(n!) (such as n^n itself).
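Putting numbers on both conclusions, here is a sketch using the limit definition above together with Stirling's formula:

\lim_{n\to\infty} \frac{\log(n!)}{n\log n} = \lim_{n\to\infty} \frac{n\log n - n + O(\log n)}{n\log n} = 1 \quad\Rightarrow\quad \log(n!) \in O(n\log n),
\qquad
\lim_{n\to\infty} \frac{n!}{n^{n}} = \lim_{n\to\infty} \frac{\sqrt{2\pi n}}{e^{n}} = 0 \quad\Rightarrow\quad n! \in O(n^{n}) \text{ but } n^{n} \notin O(n!).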