Big-Theta: multiplying Theta(n) and Theta(n^2) = Theta(n^3)? - time-complexity

If f(n) = Θ(n) and
g(n) = Θ(n^2),
then f(n) * g(n) = Θ(n^3)?

Problem
Technically, Θ(n) is a set of functions, so we say that f is in Θ(n), rather than f(n) being equal to Θ(n).
Hence, the problem we want to investigate is:
Let
h(n) = g(n) · f(n) (*)
Does f ∈ ϴ(n) and g ∈ ϴ(n^2) imply that h ∈ ϴ(n^3)?
Preparations
Let's start by loosely stating the definition of Big-ϴ notation
f ∈ ϴ(g(n))
⇔ For some positive constants k1, k2, and n0, the following holds:
k1 · |g(n)| ≤ |f(n)| ≤ k2 · |g(n)|, for all n ≥ n0 (+)
We will make use of this definition below but assume, without loss of generality, that both f(n) and g(n) above are non-negative for all n.
Solution
From the above we can state, for some sets of positive constants (c1, c2, n0) and (d1, d2, m0), that the following holds:
f ∈ ϴ(n): c1 · n ≤ f(n) ≤ c2 · n, for all n ≥ n0 (i)
g ∈ ϴ(n^2): d1 · n^2 ≤ g(n) ≤ d2 · n^2, for all n ≥ m0 (ii)
Now, the set of constants (c1, c2, n0) (as well as (d1, d2, m0)) is not unique; if such a set exists, an infinite number of such sets exist. Since f ∈ ϴ(n) and g ∈ ϴ(n^2) both hold, such sets do exist, and we can, without loss of generality, choose a common set of constants with c1 = d1, c2 = d2 and n0 = m0 (for instance, take the smaller of the two lower constants, the larger of the two upper constants, and the larger of the two thresholds). Hence, we can re-state (i)-(ii) as:
f ∈ ϴ(n): c1 · n ≤ f(n) ≤ c2 · n, for all n ≥ n0 (I)
g ∈ ϴ(n^2): c1 · n^2 ≤ g(n) ≤ c2 · n^2, for all n ≥ n0 (II)
for some set of positive constants (c1, c2, n0).
Now, since n ≥ n0 > 0, all terms in the inequalities (I)-(II) above are positive, so we can multiply the two inequalities term by term:
(I) * (II):
c1^2 · n^3 ≤ f(n) · g(n) ≤ c2^2 · n^3, for all n ≥ n0 (iii)
Now, let k1 = c1^2 and k2 = c2^2, and substitute these, along with h(n) = f(n) · g(n) from (*), into (iii), yielding
k1 · n^3 ≤ h(n) ≤ k2 · n^3, for all n ≥ n0 (III)
This is, by (+), the very definition of h ∈ ϴ(n^3), and we have hence solved our problem by showing that:
For h(n) as in (*): f ∈ ϴ(n) and g ∈ ϴ(n^2) implies that h ∈ ϴ(n^3)
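As a concrete sanity check (the functions and constants below are illustrative choices of mine, not part of the proof), one can verify the sandwich (III) numerically in Python, using one shared set of constants (c1, c2, n0) as in (I)-(II):

def f(n): return 2 * n + 1        # c1*n   <= f(n) <= c2*n    for n >= 1, with c1 = 2, c2 = 4
def g(n): return 3 * n * n + n    # c1*n^2 <= g(n) <= c2*n^2  for n >= 1

c1, c2, n0 = 2, 4, 1
k1, k2 = c1 ** 2, c2 ** 2
for n in range(n0, 10_000):
    h = f(n) * g(n)
    assert k1 * n ** 3 <= h <= k2 * n ** 3   # the sandwich (III)
print("bounds hold for", n0, "<= n < 10000")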

Related

Time complexity of an algorithm in different cases

I'm an old man trying to learn more, and I got stuck on this exercise from an old exam:
Specify the complexity, in Θ(·) notation, of the Test(n) function, detailed below, in each of the following three cases:
1/ n is even.
2/ n is a perfect square, that is, there exists an integer i such that i² = n.
3/ n is a prime number.
Function Test(n : Integer) : Integer
Variable
    i : Integer
Start
    for i := 2 to n do
        if n mod i = 0 Return(i) End-if
    End-for
    Return(n)
End
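For reference, here is a minimal Python transcription of the pseudocode (the name test and the type hints are mine), which can be handy for experimenting with the three cases:

def test(n: int) -> int:
    # Return the smallest i in [2, n] that divides n; if no such i exists
    # (i.e. n < 2), return n itself, mirroring the pseudocode above.
    for i in range(2, n + 1):
        if n % i == 0:
            return i
    return n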
I think the comments have answered your general question, but a note about proving Big Theta time complexity:
To show f(n) ∈ Θ(g(n)), you don't necessarily have to prove f(n) ∈ O(g(n)) and f(n) ∈ Ω(g(n)) separately via the method you alluded to: finding a constant c and an integer n0 such that f(n) < c·g(n) for all n > n0, and then finding another c and n0 for which f(n) > c·g(n) for all n > n0. Although that is perfectly valid and widely taught, there is an alternative approach, equally rigorous but generally much cleaner and more practical, which is simply to show that:
0 < lim n→∞ f(n)/g(n) < ∞
That is, if the limit of f(n)/g(n) as n goes to infinity exists and is a positive, finite constant, then f(n) ∈ Θ(g(n)). (Strictly speaking this is a sufficient condition rather than an equivalence, since the limit may fail to exist even when f(n) ∈ Θ(g(n)), but in practice it is usually the quickest route.)
If you showed only that
lim n→∞ f(n)/g(n) < ∞
you would have shown that f(n) grows no faster than g(n): that is, that f(n) ∈ O(g(n)).
Similarly:
0 < lim n→∞ f(n)/g(n)
implies that f(n) grows at least as fast as g(n): that f(n) ∈ Ω(g(n)). So together, they imply f(n) ∈ Θ(g(n)).
Generally I think this is a lot less tedious than proofs of the form you mentioned, which involve actually finding c values and n0 values for the big O case, proving some needlessly particular statement involving those values, and then repeating the whole process for the Ω case, but that is just my opinion: whatever style works for you is good.
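For instance (an illustrative example of my own, not from the question), the limit test is easy to check mechanically with sympy:

import sympy as sp

n = sp.symbols('n', positive=True)
f = 3 * n**2 + 5 * n
g = n**2
print(sp.limit(f / g, n, sp.oo))   # prints 3, a finite positive constant,
                                   # so f is in Theta(g) by the criterion above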

Comparing time complexity using ∈ notation

If there are two functions f(n) and g(n),
and f(n) ∈ O(g(n)),
is it then true that 3^(f(n)) ∈ O(3^(g(n)))?

What is the product of O(n) and O(log n)?

I was learning the merge sort algorithm and found that its time complexity is O(n log n).
I want to know: can we say O(n log n) = O(n) * O(log n)?
No, it doesn't really make sense to do that. The Big-O function yields sets of functions and sets cannot be multiplied together.
More generally, you don't normally perform any operations on O(...) results. There's no adding them, subtracting them, multiplying them. No algebra. O(...) typically shows up at the conclusion of a proof: "Based on the analysis above, I conclude that the worst case complexity of Finkle's Algorithm is O(whatever)." It doesn't really show up in the middle of a derivation, where one might subject it to algebraic manipulation.
(You could perform set operations, I suppose. I've never seen anybody do that.)
To formalise what it means to do O(n) * O(log n), let's make the following definition:
A function f is in O(n) * O(log n) if and only if it can be written as a product f(n) = g(n) h(n) where g is in O(n) and h is in O(log n).
Now we can prove that the set O(n) * O(log n) is equal to the set O(n log n) by showing that the functions in both sets are the same:
Given g in O(n) and h in O(log n), there are N_g, c_g, N_h, c_h such that for all n >= max(N_g, N_h) we have |g(n)| <= c_g n and |h(n)| <= c_h log n. It follows that |g(n) h(n)| <= c_g c_h n log n, and so max(N_g, N_h) and c_g c_h are sufficient to show that f is in O(n log n).
Conversely, given f in O(n log n), there are N_f >= 1, c_f such that |f(n)| <= c_f n log n for all n >= N_f. Define g(n) = max(1, n) and h(n) = f(n) / max(1, n); clearly g is in O(n), and we can also see that for n >= N_f we have |h(n)| <= c_f n log n / max(1, n) where the bound on the right hand side is equal to c_f log n because n >= 1, so N_f, c_f are sufficient to show that h is in O(log n). Since we have f(n) = g(n) h(n), it follows that f is in O(n) * O(log n) as we defined it.
The choice of N_f >= 1 and g(n) = max(1, n) is to avoid dividing by zero when n is zero.
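As a small numeric illustration of that construction (the example function and the constants c_f, N_f here are my own choices), one can check the decomposition directly:

import math

def f(n): return 2 * n * math.log(n) + n   # in O(n log n): f(n) <= 3*n*log(n) for n >= 3
def g(n): return max(1, n)                 # in O(n)
def h(n): return f(n) / max(1, n)          # should be in O(log n)

c_f, N_f = 3, 3
for n in range(N_f, 10_000):
    assert math.isclose(f(n), g(n) * h(n))   # f = g * h
    assert abs(h(n)) <= c_f * math.log(n)    # |h(n)| <= c_f * log n
print("decomposition holds for", N_f, "<= n < 10000")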
Actually, note that the "=" in Big-O notation is not symmetric. Let's see an example:
let f be defined as f(n) = n
f(n) = O(n^2) and f(n) = O(n^3), but O(n^2) ≠ O(n^3)
That's because using the equals sign is not accurate here; we should really say f(n) is in O(g(n)).
Anyway, allowing a little inaccuracy, here is the definition of Big-O as given by Sipser:
Say that f(n) = O(g(n)) if positive integers c and n0 exist such that for every integer n ≥ n0,
f(n) ≤ c · g(n).
When f(n) = O(g(n)), we say that g(n) is an upper bound for f(n), or more precisely, that g(n) is an asymptotic upper bound for f(n), to emphasize that we are suppressing constant factors.
So to prove what you state, you must first define what * means in your equation, and then show that every function which is O(n log n) is also in O(n) * O(log n), and vice versa.
But, being a little inaccurate again and treating * as symbolic multiplication, we have the following for some positive constants c and d:
O(n log n) = O(c·n·log n) = O(log(n^(cn))) = O(d·log(n^(cn))) = O(log((n^(cn))^d)) = O(log(n^(cdn))) ≈ log(n^(cdn)) = cdn · log n
O(n) * O(log n) = O(cn) * O(d·log n) = O(cn) * O(log(n^d)) ≈ cn · log(n^d) = cn · d · log n = cdn · log n
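The key identity used above, c·n·log n = log(n^(cn)), can be sanity-checked with sympy (an illustrative check of mine, assuming sympy is available):

import sympy as sp

n, c = sp.symbols('n c', positive=True)
print(sp.expand_log(sp.log(n ** (c * n))))   # prints c*n*log(n)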

I need help proving that if f(n) = O(g(n)), then log f(n) = O(log g(n)) is FALSE

Let f and g be functions such that f(n) is O(g(n)), and consider the following statements:
I. log f(n) is O(log g(n))
II. 2f(n) is O(2g(n))
III. f(n)² is O(g(n)²)
Which of the following statement(s) is/are false?
A. I and II
B. I and III
C. II and III
D. All I, II, III
Explanation:
Only statement (III), f(n)² is O(g(n)²), is correct.
Option (A) is true.
The solution says that only statement III is correct; the other two are wrong.
I understand that II is wrong because f(n) can be 2n and g(n) can be n; then 2^f(n) is not O(2^g(n)). But how is statement I false?
Statement I is false and here's why. Let f(n) = 2 and g(n) = 1. Then f(n) = O(g(n)). However, log(f(n)) = 1 and log(g(n))= 0. There is no n0 nor any c such that 1 <= c * 0.
EDIT: presumably, statement II is not formatted properly and should read 2^f(n) = O(2^g(n)), which is false if f(n) = 2n and g(n) = n, e.g.
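A quick numeric illustration of that counterexample (a sketch of mine): with f(n) = 2n and g(n) = n, the ratio 2^f(n) / 2^g(n) = 2^n grows without bound, so no constant c can ever cap it:

for n in (1, 10, 20, 40):
    ratio = 2 ** (2 * n) / 2 ** n      # equals 2**n, since f(n) = 2n and g(n) = n
    print(n, ratio)                    # the ratio keeps growing, so no constant c works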

If f(n) is O(g(n)) and g(n) is O(h(n)), how can I prove that f(n) + g(n) is O(h(n))?

I understand that f(n) is less than or equal to some constant c times g(n) for all n greater than or equal to some k, where c and k are positive. I also understand the same for g(n) and h(n), i.e. how g(n) is O(h(n)). However, I am unsure how to show that f(n) + g(n) is O(h(n)).
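One way to continue from those definitions (a sketch only, with constant and threshold names chosen here for illustration): from f(n) ≤ c1 · g(n) for all n ≥ k1, and g(n) ≤ c2 · h(n) for all n ≥ k2, it follows that f(n) ≤ c1 · c2 · h(n), and therefore
f(n) + g(n) ≤ (c1 · c2 + c2) · h(n), for all n ≥ max(k1, k2)
which is exactly the definition of f(n) + g(n) ∈ O(h(n)), with constant c1 · c2 + c2 and threshold max(k1, k2).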