Time complexity bounded by omega

Hi, I was wondering if this statement is true:
if f(n) = Ω(g(n)) and g(n) = Ω(f(n)),
does it mean that f(n) = Θ(g(n)), or equivalently g(n) = Θ(f(n))?
Could anyone clarify this for me?

You can read the two Omega statements informally as >= if you want. That is basically how Omega works in terms of growth rates (it is not literal algebra, so you cannot use >= directly, but the idea is the same):
f(n) >= g(n)
g(n) >= f(n)
Yes, together these mean that g(n) = f(n) in terms of complexity (you can read it as: g(n) has the same growth rate as f(n)).
In formal notation, that is exactly what Θ expresses: f(n) = Θ(g(n)), and equivalently g(n) = Θ(f(n)).
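To spell out the step this relies on, using the standard definitions (c1, c2, n1, n2 are whatever constants the two Omega statements provide):
f(n) = Ω(g(n)) means f(n) >= c1 * g(n) for all n >= n1, for some c1 > 0
g(n) = Ω(f(n)) means g(n) >= c2 * f(n) for all n >= n2, for some c2 > 0
The second line rearranges to f(n) <= (1/c2) * g(n), so for all n >= max(n1, n2):
c1 * g(n) <= f(n) <= (1/c2) * g(n)
which is exactly the definition of f(n) = Θ(g(n)); by symmetry, g(n) = Θ(f(n)) as well.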

Related

Are big O and big Omega mutually exclusive

Are big O and big Omega exclusive? In the sense that, for a function f(n), can g(n) be both big Omega and big O of that function? For example, let's say f(n) = n + n*log(n) and g(n) = sqrt(n) + n^2. I know for sure that f(n) = O(g(n)), because n^2 dominates all of the other terms. But would there be a case where f(n) = Omega(g(n))? Or would there be another example showing a function g(n) that is both a big-O and a big-Omega of a function f(n)?
They are not mutually exclusive. A simple example is f(n) = g(n): in that case f(n) is both O(g(n)) and Ω(g(n)), and we say that f(n) is big Theta of g(n). See 'Big Theta notation' for more information.
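To see this with numbers, here is a quick sketch (my own addition, not part of the original answer) using the functions from the question:
# f(n) = n + n*log(n) and g(n) = sqrt(n) + n^2 from the question.
# The ratio f(n)/g(n) tends to 0, so f(n) = O(g(n)) but f(n) is NOT Omega(g(n)).
# A function is both O and Omega of another only when the two share the same
# growth rate (the Theta case), e.g. when f(n) = g(n).
import math

def f(n): return n + n * math.log(n)
def g(n): return math.sqrt(n) + n ** 2

for n in (10, 1_000, 100_000):
    print(n, f(n) / g(n))   # ratio shrinks toward 0 as n grows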

Master theorem where f(n) = cn^k

I'm wondering about a specific case of the master theorem that arises when f(n) = x*n^k and n^(log_b a) = n^k, where x is an integer greater than 1. In this case f(n) is larger than n^(log_b a); however, it is not polynomially larger, so case 3 cannot apply.
For a case like this I would assume you use case 2, since the big-O of both is the same, but that doesn't seem to fit with the equation that I can find. It seems possible that I'm making a mistake in taking f(n) directly out of the original recurrence rather than its big-O, as that seems to make sense to me, yet I can't find any clarification on this, or any examples where f(n) in the equation is not already its own big-O.
Edit: When I say "the equation that I can find", I mean that this assumption doesn't fit with the master theorem as I work it out. As I have it, the condition for case 2, which I am talking about, looks like f(n) = Θ(n^(log_b a)). I think the important bit is really whether, out of a recurrence ending in + x*n^k, I pull out f(n) = x*n^k or f(n) = n^k. Apologies for the poor wording.
I think the important bit is really whether, out of a recurrence ending in + x*n^k, I pull out f(n) = x*n^k or f(n) = n^k.
Normally you should take f(n) = x * n^k, because the master theorem defines T(n) to be of the form a*T(n/b) + f(n). But in your example it doesn't really matter.
The growth of f(n) and x * f(n) is the same if x is a positive constant. In the case of f(n) = x*n^k, they are both Θ(n^k). (Or you could say they are both Θ(x * n^k); that is the same set as Θ(n^k).)
Since f(n) = Θ(n^(log_b a)), case 2 of the master theorem applies here. The theorem says T(n) = Θ(n^(log_b a) * lg n) in this case.
Again, it doesn't matter here whether you write Θ(n^(log_b a) * lg n) or Θ(5 * n^(log_b a) * lg n) or Θ(x * n^(log_b a) * lg n). Multiplying a function by a positive constant doesn't change its asymptotic bounds. The master theorem gives you only the asymptotic bounds of the function, not its exact value.
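As a concrete illustration (the recurrence below is made up for this sketch, not taken from the question): take T(n) = 2*T(n/2) + 3*n, so a = 2, b = 2, n^(log_b a) = n and f(n) = 3*n = Θ(n). Case 2 predicts T(n) = Θ(n * lg n); the constant 3 only shows up in the hidden constant factor.
# Evaluate the recurrence directly and compare against n * lg n.
import math

def T(n):
    if n <= 1:
        return 1                      # constant-cost base case
    return 2 * T(n // 2) + 3 * n      # a*T(n/b) + f(n)

for n in (2 ** 10, 2 ** 14, 2 ** 18):
    print(n, T(n) / (n * math.log2(n)))   # ratio settles near a constant (about 3)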

Which constants can be ignored in Big O for time complexity - exponential cases?

The obvious one is a constant on a linear term: for example, 2n, 4n and 8n are all just n, or O(n).
But what about exponential terms with different bases, like 1.6^n and 2^n? In this case the constant base seems to have a greater effect on the time complexity.
Also, there is not really a convenient way to write a catch-all for exponential time complexity.
O(K^n) perhaps.
In this cheat sheet they seem to use O(2^n); does that mean that all exponential complexities should be written that way?
Probably not.
You're right that 2n, 4n and 8n are all just O(n), and you're also right that O(1.6^n) is not the same as O(2^n). To understand why, we need to refer to the definition of big O notation.
The notation O(...) means a set of functions. A function f(n) is in the set O(g(n)) if and only if, for some constants c and n0, we have f(n) ≤ c * g(n) whenever n ≥ n0. Given this definition:
The function f(n) = 8n is in the set O(n) because if we choose c = 8 and n0 = 1, we have 8n ≤ 8 * n for all n ≥ 1.
The function f(n) = 2^n is not in the set O(1.6^n), because whichever c and n0 we choose, 2^n > c * 1.6^n for some sufficiently large n. Concretely, dividing both sides by 1.6^n, we need (2/1.6)^n = 1.25^n > c, which holds for every n > max(n0, log2(c) / log2(1.25)) (see the numeric sketch below).
Note however that f(n) = 1.6^n is in the set O(2^n), because 1.6^n ≤ 1 * 2^n for all n ≥ 1.
For a "catch-all" way of writing exponential complexity, you can write 2^O(n). This includes exponential functions with arbitrary bases, e.g. the function f(n) = 16^n, since this equals 2^(4n), and 4n is in the set O(n). It's an abuse of notation, since raising the number 2 to the power of a set of functions doesn't really make sense in this context, but it is common enough to be understood.
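The numeric sketch referenced above (the sample constants and the loop are my own illustration, not part of the original answer): no matter how large a constant c you pick, 2^n eventually exceeds c * 1.6^n, because 1.25^n grows without bound.
for c in (10, 1_000, 1_000_000):
    n = 0
    while 2 ** n <= c * 1.6 ** n:
        n += 1
    print(c, n)   # first n where 2^n > c * 1.6^n; it grows with c, but it always exists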
That is correct. The cheat sheet you linked to cannot show all the different complexities, so it picks the most common ones.
Simply put, if you have a function growing at 3^n, it cannot be classified as 2^n, because that would break the definition of Big O.
The somewhat complex-looking math that describes Big O simply says that f(n) can never be bigger than a constant multiple of g(n) once n is large enough, which is also why constant factors are ignored:
f(n) ≤ c * g(n) whenever n ≥ n0
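To make that concrete (a worked instance of the definition, not from the original answer): for any constant c, 3^n exceeds c * 2^n as soon as (3/2)^n > c, i.e. once n > log(c) / log(1.5), so no choice of c and n0 can make 3^n ≤ c * 2^n hold for all n ≥ n0, and 3^n is therefore not O(2^n).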

Analysing algorithm with big theta

I have to give the time complexity of these three algorithms.
Could someone check whether they're correct?
I'm also unsure how to find theta.
I know theta is the average of big-O and Omega. But I feel like it's basically the same when it comes to analysing code and writing it in big-O notation.
The first one seems correct, with the explanation below. The definition of Θ notation is:
Θ(g(n)) = {f(n) : there exist constants c1, c2, n0 > 0 such that
0 <= c1*g(n) <= f(n) <= c2*g(n) for all n >= n0}
Here, in the 1st snippet, we should look at f(n), which is
f(n) = n/3 + n/5 = (8/15)*n
To find g(n), assume that c1 = 0.5, c2 = 2 and n0 = 15 (15 being divisible by both 3 and 5);
then the cases below follow:
when n = 15: 0 <= c1*g(n) <= f(n) <= c2*g(n) => 0 <= 0.5*g(n) <= (8/15)*15 <= 2*g(n) => 0 <= 0.5*g(n) <= 8 <= 2*g(n)
when n = 30: 0 <= 0.5*g(n) <= 16 <= 2*g(n)
when n = 90: 0 <= 0.5*g(n) <= 48 <= 2*g(n) ... and so on
when n = 17: 0 <= 0.5*g(n) <= ~9.07 <= 2*g(n)
when n = 20: 0 <= 0.5*g(n) <= ~10.67 <= 2*g(n)
Hence g(n) = n seems an appropriate choice, and since we can exhibit one combination of c1, c2 and n0 that satisfies the definition, g(n) = n is an acceptable answer.
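As a quick sanity check of those constants (this snippet is my own addition, not part of the original answer):
# Verify that c1*g(n) <= f(n) <= c2*g(n) holds for every n >= n0 = 15,
# with f(n) = n/3 + n/5 = (8/15)*n, g(n) = n, c1 = 0.5 and c2 = 2.
def f(n): return n / 3 + n / 5      # = (8/15)*n, about 0.533*n
def g(n): return n

print(all(0.5 * g(n) <= f(n) <= 2 * g(n) for n in range(15, 100_000)))   # True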

Is O(1000n) = O(n) when n>>1000

If it is explicitly given that n >> 1000, can O(1000n) be considered as O(n)?
In other words, if we are asked to solve a problem (which also states that n >> 1000) in O(n) and my solution's complexity is O(1000n), is my solution acceptable?
If the function is O(1000n), then it is automatically also O(n).
After all, if f(n) is O(1000n), then there exists a constant M and an n0 such that
f(n) <= M*1000n
for all n > n0. But if that is true, then we can take N = 1000*M and
f(n) <= N*n
for all n > n0. Therefore, f is O(n) as well.
Constant factors "drop out" in big-O notation. See Wikipedia, under "multiplication by a constant".
Your solution only differs from an O(n) one by a constant factor, and constant factors don't matter when n is arbitrarily large. So yes, your solution is acceptable.
Yes. In fact the condition n >> 1000 isn't even needed: O(1000n) = O(n) holds unconditionally, because big-O absorbs constant factors.
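As a rough sketch of why the factor 1000 never changes the growth class (the routine below is hypothetical, just a stand-in for any algorithm doing 1000 constant-time steps per element):
# An algorithm doing 1000 constant-time steps per element costs 1000*n operations,
# which is O(1000n) = O(n): doubling n doubles the total work, regardless of the 1000.
def work(n):
    ops = 0
    for _ in range(n):
        ops += 1000          # 1000 constant-time steps per element
    return ops

for n in (1_000, 2_000, 4_000):
    print(n, work(n), work(2 * n) / work(n))   # the ratio stays at exactly 2.0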