Are big O and big Omega mutually exclusive - time-complexity

Are big O and big Omega exclusive? In the sense that, for a function f(n), can g(n) be both big Omega and big O of that function? For example, let's say f(n) = n + n log(n) and g(n) = sqrt(n) + n^2. I know for sure that f(n) = O(g(n)) because n^2 is going to dominate all of the other terms. But would there be a case where f(n) = big Omega(g(n))? Or is there another example that could show a function g(n) being both a big-O and a big-Omega of a function f(n)?

They are not mutually exclusive. An example is f(n) = g(n): in that case f(n) is both O(g(n)) and Omega(g(n)). When both hold, we say that f(n) is big Theta of g(n); see 'Big Theta notation' for more information.
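To make the non-exclusivity concrete, here is a minimal Python sketch (the function and the witness constants are hypothetical, hand-picked just for illustration) that checks the defining inequalities of both big O and big Omega for the same pair of functions over a range of n:

    # Sketch: f(n) = 2n + 3 is both O(n) and Omega(n), hence Theta(n).
    # c_upper, c_lower, n0 are hand-picked witness constants.
    def f(n):
        return 2 * n + 3

    c_upper, c_lower, n0 = 5, 1, 1

    for n in range(n0, 10_000):
        assert f(n) <= c_upper * n   # f(n) <= 5n  => f is O(n)
        assert f(n) >= c_lower * n   # f(n) >= 1n  => f is Omega(n)
    print("2n + 3 satisfies both bounds, so it is Theta(n).")

A finite loop is of course not a proof, but it shows how the two definitions can hold simultaneously with different witness constants.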


What exactly does small-oh notation mean? [duplicate]

What is the difference between Big-O notation O(n) and Little-O notation o(n)?
f ∈ O(g) says, essentially
For at least one choice of a constant k > 0, you can find a constant a such that the inequality 0 <= f(x) <= k g(x) holds for all x > a.
Note that O(g) is the set of all functions for which this condition holds.
f ∈ o(g) says, essentially
For every choice of a constant k > 0, you can find a constant a such that the inequality 0 <= f(x) < k g(x) holds for all x > a.
Once again, note that o(g) is a set.
In Big-O, it is only necessary that you find a particular multiplier k for which the inequality holds beyond some minimum x.
In Little-o, it must be that there is a minimum x after which the inequality holds no matter how small you make k, as long as it is not negative or zero.
These both describe upper bounds, although somewhat counter-intuitively, Little-o is the stronger statement. There is a much larger gap between the growth rates of f and g if f ∈ o(g) than if f ∈ O(g).
One illustration of the disparity is this: f ∈ O(f) is true, but f ∈ o(f) is false. Therefore, Big-O can be read as "f ∈ O(g) means that f's asymptotic growth is no faster than g's", whereas "f ∈ o(g) means that f's asymptotic growth is strictly slower than g's". It's like <= versus <.
More specifically, if the value of g(x) is a constant multiple of the value of f(x), then f ∈ O(g) is true. This is why you can drop constants when working with big-O notation.
However, for f ∈ o(g) to be true, g must grow strictly faster than f (for example, by including a higher power of x in its formula), so the relative separation between f(x) and g(x) must actually get larger as x gets larger.
To use purely math examples (rather than referring to algorithms):
The following are true for Big-O, but would not be true if you used little-o:
x² ∈ O(x²)
x² ∈ O(x² + x)
x² ∈ O(200 * x²)
The following are true for little-o:
x² ∈ o(x³)
x² ∈ o(x!)
ln(x) ∈ o(x)
Note that if f ∈ o(g), this implies f ∈ O(g). e.g. x² ∈ o(x³) so it is also true that x² ∈ O(x³), (again, think of O as <= and o as <)
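If you like checking this numerically rather than symbolically, here is a small Python sketch (using the example functions from the lists above) that looks at the ratio f(x)/g(x) for large x: for Big-O membership the ratio stays bounded, while for little-o it tends to 0.

    import math

    # Ratio f(x)/g(x) at increasingly large x:
    # bounded  -> f in O(g);  tends to 0 -> f in o(g) (and therefore also O(g)).
    def ratios(f, g, xs):
        return [f(x) / g(x) for x in xs]

    xs = [10, 100, 1000, 10_000]
    print(ratios(lambda x: x**2, lambda x: 200 * x**2, xs))  # constant 0.005: O, not o
    print(ratios(lambda x: x**2, lambda x: x**3, xs))        # shrinks to 0: o(x^3)
    print(ratios(lambda x: math.log(x), lambda x: x, xs))    # shrinks to 0: ln(x) in o(x)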
Big-O is to little-o as ≤ is to <. Big-O is an inclusive upper bound, while little-o is a strict upper bound.
For example, the function f(n) = 3n is:
in O(n²), o(n²), and O(n)
not in O(lg n), o(lg n), or o(n)
Analogously, the number 1 is:
≤ 2, < 2, and ≤ 1
not ≤ 0, < 0, or < 1
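A quick numeric ratio check of those memberships (a rough sketch, reusing f(n) = 3n from above):

    import math

    # f(n) = 3n: look at the ratio against each candidate bound as n grows.
    def f(n):
        return 3 * n

    for n in [10**3, 10**6, 10**9]:
        print(f(n) / n**2,          # -> 0       : f in o(n^2), hence also O(n^2)
              f(n) / n,             # constant 3 : f in O(n) but not o(n)
              f(n) / math.log2(n))  # grows      : f not in O(lg n), so not in o(lg n)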
Here's a table, showing the general idea:

    f ∈ o(g)   ~   f < g   ~   lim f(x)/g(x) = 0
    f ∈ O(g)   ~   f ≤ g   ~   lim f(x)/g(x) < ∞
    f ∈ Θ(g)   ~   f = g   ~   0 < lim f(x)/g(x) < ∞
    f ∈ Ω(g)   ~   f ≥ g   ~   lim f(x)/g(x) > 0
    f ∈ ω(g)   ~   f > g   ~   lim f(x)/g(x) = ∞

(Note: the table is a good guide, but its limit definition should be in terms of the superior limit instead of the normal limit. For example, 3 + (n mod 2) oscillates between 3 and 4 forever. It's in O(1) despite not having a normal limit, because it still has a lim sup: 4.)
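A quick numeric illustration of that lim sup point (a sketch, not a proof):

    # 3 + (n mod 2) has no ordinary limit, but it is bounded above by 4,
    # so its lim sup is 4 and the function is in O(1).
    values = [3 + (n % 2) for n in range(20)]
    print(values)        # oscillates: 3, 4, 3, 4, ...
    print(max(values))   # 4 -- a finite upper bound witnesses membership in O(1)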
I recommend memorizing how the Big-O notation converts to asymptotic comparisons. The comparisons are easier to remember, but less flexible because you can't say things like n^O(1) = P.
I find that when I can't conceptually grasp something, thinking about why one would use X is helpful to understand X. (Not to say you haven't tried that, I'm just setting the stage.)
Stuff you know: A common way to classify algorithms is by runtime, and by citing the big-Oh complexity of an algorithm, you can get a pretty good estimation of which one is "better" -- whichever has the "smallest" function in the O! Even in the real world, O(N) is "better" than O(N²), barring silly things like super-massive constants and the like.
Let's say there's some algorithm that runs in O(N). Pretty good, huh? But let's say you (you brilliant person, you) come up with an algorithm that runs in O(N / log log log log N). YAY! It's faster! But you'd feel silly writing that over and over again when you're writing your thesis. So you write it once, and you can say "In this paper, I have proven that algorithm X, previously computable in time O(N), is in fact computable in o(N)."
Thus, everyone knows that your algorithm is faster --- by how much is unclear, but they know it's faster. Theoretically. :)
In general
Asymptotic notation is something you can understand as: how do functions compare when zooming out? (A good way to test this is simply to use a tool like Desmos and play with your mouse wheel). In particular:
f(n) ∈ o(n) means: at some point, the more you zoom out, the more f(n) will be dominated by n (it will progressively diverge from it).
g(n) ∈ Θ(n) means: at some point, zooming out will not change how g(n) compares to n (if we removed the ticks from the axes, you couldn't tell the zoom level).
Finally, h(n) ∈ O(n) means that the function h can be in either of these two categories: it can either look a lot like n, or it can become smaller and smaller relative to n as n increases. Basically, both f(n) and g(n) are also in O(n).
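If you prefer a quick numeric check over Desmos, here is a small Python sketch that "zooms out" by evaluating the ratio against n at ever larger inputs (the example functions are hypothetical, chosen to land in each category):

    import math

    # "Zooming out" numerically: watch f(n)/n as n grows.
    # o(n): ratio -> 0;  Theta(n): ratio settles near a constant;  O(n): either behaviour.
    examples = {
        "sqrt(n)      (o(n))":     lambda n: math.sqrt(n),
        "3n + 7       (Theta(n))": lambda n: 3 * n + 7,
        "n/2 + log n  (O(n))":     lambda n: n / 2 + math.log(n),
    }

    for name, f in examples.items():
        print(name, [round(f(10**k) / 10**k, 4) for k in range(1, 7)])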
I think this Venn diagram (adapted from this course) could help:
It's the exact same as what we use for comparing numbers:
In computer science
In computer science, people will usually prove that a given algorithm admits both an upper bound O and a lower bound 𝛺. When both bounds meet, that means we have found an asymptotically optimal algorithm to solve that particular problem, with complexity Θ.
For example, if we prove that the complexity of an algorithm is both in O(n) and 𝛺(n) it implies that its complexity is in Θ(n). (That's the definition of Θ and it more or less translates to "asymptotically equal".) Which also means that no algorithm can solve the given problem in o(n). Again, roughly saying "this problem can't be solved in (strictly) less than n steps".
Usually, o is used within a lower-bound proof to derive a contradiction. For example:
Suppose algorithm A can find the minimum value in an array of size n in o(n) steps. Since A ∈ o(n), it can't inspect all items of the input. In other words, there is at least one item x which A never saw. Algorithm A can't tell the difference between two similar input instances that differ only in x's value. If x is the minimum in one of these instances and not in the other, then A will fail to find the minimum on (at least) one of them. In other words, finding the minimum in an array is in 𝛺(n) (no algorithm in o(n) can solve the problem).
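As a concrete companion to that argument, here is a sketch of the obvious linear scan (my own illustration): it inspects every item exactly once, so it runs in Θ(n) and matches the 𝛺(n) lower bound, which is why it is asymptotically optimal.

    # Linear scan: looks at each of the n items once -> Theta(n),
    # matching the Omega(n) lower bound for finding the minimum.
    def find_min(items):
        best = items[0]
        for x in items[1:]:   # n - 1 comparisons
            if x < best:
                best = x
        return best

    print(find_min([7, 3, 9, 1, 4]))  # 1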
Details about lower/upper bound meanings
An upper bound of O(n) simply means that even in the worst case, the algorithm will terminate in at most n steps (ignoring all constant factors, both multiplicative and additive). A lower bound of 𝛺(n) is a statement about the problem itself: it says that we built some example(s) where the given problem couldn't be solved by any algorithm in fewer than n steps (ignoring multiplicative and additive constants). The number of steps is at most n and at least n, so this problem's complexity is "exactly n". Instead of saying "ignoring constant multiplicative/additive factors" every time, we just write Θ(n) for short.
The big-O notation has a companion called small-o notation. The big-O notation says that one function is asymptotically no more than another. To say that one function is asymptotically less than another, we use small-o notation. The difference between the big-O and small-o notations is analogous to the difference between <= (less than or equal) and < (less than).

Master theorem where f(n) = cn^k

I'm wondering about a specific case of the master theorem that arises when f(n) = x*n^k and n^(log_b a) = n^k, where x is an integer greater than 1. In this case f(n) is larger than n^(log_b a); however, it is not polynomially larger, so case 3 cannot apply.
For a case like this I would assume you use case 2, since the big-O of both is the same, but that doesn't seem to fit with the equation that I can find. It seems possible that I'm making a mistake in taking f(n) directly out of the original recurrence rather than its big-O, as that seems to make sense to me, yet I can't find any clarification on this or any examples where the f(n) term in the equation is not already its own big-O.
Edit: When I say "the equation that I can find", what I mean is that this assumption doesn't fit with the master theorem as I can work it out. As I have it, case 2 of the master theorem, which I am talking about, requires f(n) = Θ(n^(log_b a)). I think the important bit really is whether out of an equation ending in + x*n^k I pull out f(n) = x*n^k or f(n) = n^k. Apologies for the poor wording.
I think the important bit really is whether out of an equation ending in + x*n^k I pull out f(n) = x*n^k or f(n) = n^k.
Normally you should take f(n) = x * n^k, because the master theorem defines T(n) to be of the form a*T(n/b) + f(n). But in your example, it doesn't really matter.
The growth of f(n) and x * f(n) is the same, if x is a positive constant. In the case of f(n) = x*n^k, they are both Θ(n^k). (Or you could say they are both Θ(x * n^k); this is the same set as Θ(n^k).)
Since f(n) = Θ(n^(log_b a)), case 2 of the master theorem should be used here. The theorem says T(n) = Θ(n^(log_b a) * lg n) in this case.
Again, it doesn't matter here whether you write Θ(n^(log_b a) * lg n) or Θ(5 * n^(log_b a) * lg n) or Θ(x * n^(log_b a) * lg n). Multiplying a function by a positive constant doesn't change its asymptotic bounds. The master theorem gives you only the asymptotic bounds of the function, not its exact value.
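Here is a quick numeric sketch of that case (the recurrence and the constant 5 are hypothetical, chosen to match the situation in the question): T(n) = 2*T(n/2) + 5n, so a = 2, b = 2, f(n) = 5n and n^(log_b a) = n. Case 2 predicts T(n) = Θ(n * lg n), and the constant 5 only shows up as the hidden constant:

    import math
    from functools import lru_cache

    # T(n) = 2*T(n/2) + 5n; case 2 of the master theorem gives Theta(n * lg n).
    @lru_cache(maxsize=None)
    def T(n):
        if n <= 1:
            return 1
        return 2 * T(n // 2) + 5 * n

    for n in [2**10, 2**14, 2**18]:
        print(n, round(T(n) / (n * math.log2(n)), 3))  # ratio settles near 5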

Which constants can be ignored in Big O for time complexity - exponential cases?

The obvious one is a constant on a linear term: for example, 2n, 4n and 8n are all just n, or O(n).
But what about the constants in exponentials, such as 1.6^n and 2^n? In this case the constant seems to have a greater effect on the time complexity.
Also there is not really a convenient way to write a catch-all for exponential time complexity.
O(K^n) perhaps.
In this cheat sheet, they seem to use O(2^n). Does that mean that all exponential complexities should be written that way?
Probably not.
You're right that 2n, 4n and 8n are all just O(n), and you're also right that O(1.6^n) is not the same as O(2^n). To understand why, we need to refer to the definition of big O notation.
The notation O(...) means a set of functions. A function f(n) is in the set O(g(n)) if and only if, for some constants c and n0, we have f(n) ≤ c * g(n) whenever n ≥ n0. Given this definition:
The function f(n) = 8n is in the set O(n) because if we choose c = 8 and n0 = 1, we have 8n ≤ 8 * n for all n ≥ 1.
The function f(n) = 2^n is not in the set O(1.6^n), because whichever c and n0 we choose, 2^n > c * 1.6^n for some sufficiently large n; any n > max(n0, log(c) / log(2/1.6)) gives a concrete counterexample.
Note however that f(n) = 1.6^n is in the set O(2^n), because 1.6^n ≤ 1 * 2^n for all n ≥ 1.
For a "catch-all" way of writing exponential complexity, you can write 2^O(n). This includes exponential functions with arbitrary bases, e.g. the function f(n) = 16^n, since this equals 2^(4n), and 4n is in the set O(n). It's an abuse of notation, since raising the number 2 to the power of a set of functions doesn't really make sense in this context, but it is common enough to be understood.
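A quick numeric sketch of why the base matters (my own illustration, following the definition above):

    # 2^n / 1.6^n = 1.25^n grows without bound, so no constant c can make
    # 2^n <= c * 1.6^n hold for all large n: 2^n is not in O(1.6^n).
    # The other direction is fine: 1.6^n / 2^n = 0.8^n -> 0, so 1.6^n is in O(2^n).
    for n in [10, 50, 100]:
        print(n, 2**n / 1.6**n, 1.6**n / 2**n)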
That is correct. The cheat sheet you linked to cannot show all the different complexities, so it picks the most common ones.
Simply put, if you have a function growing at 3^n, it cannot be classified as O(2^n), because that would break the definition of Big O.
The somewhat complex-looking math that describes Big O is simply saying that f(n) can't ever be bigger than c * g(n), and that constant multiplicative factors are ignored:
f(n) ≤ c * g(n) whenever n ≥ n0

Time complexity bounded by omega

Hi, I was wondering if this statement is true:
if f(n) = omega(g(n)) and g(n) = omega(f(n))
does it mean that f(n) = theta(g(n)) or g(n) = theta(f(n))?
Could anyone clarify this for me?
You can read these symbols as inequality signs if you want; that is basically how they work in terms of complexity (it is not exact algebra, so you cannot use the inequality signs directly):
f(n) >= g(n)   (from f(n) = Ω(g(n)))
g(n) >= f(n)   (from g(n) = Ω(f(n)))
Yes, together these mean that g(n) = f(n) in terms of complexity, so you can read it as: g(n) has the same asymptotic growth as f(n).
In the formal complexity world, you use Theta for that.
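For the record, here is the short derivation behind that answer, written out with the usual constant-witness definition of Ω (a sketch; c1, c2, n1, n2 are the witnesses the definition provides):

    % f(n) = \Omega(g(n)) means: \exists c_1 > 0,\ n_1 with f(n) \ge c_1 g(n) for all n \ge n_1.
    % g(n) = \Omega(f(n)) means: \exists c_2 > 0,\ n_2 with g(n) \ge c_2 f(n) for all n \ge n_2.
    % Combining both, for all n \ge \max(n_1, n_2):
    c_1 \, g(n) \;\le\; f(n) \;\le\; \frac{1}{c_2} \, g(n)
    % which is exactly the definition of f(n) = \Theta(g(n)), and symmetrically g(n) = \Theta(f(n)).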

Is O(1000n) = O(n) when n>>1000

If it is explicitly given that n >> 1000, can O(1000n) be considered O(n)?
In other words, if we are to solve a problem (which also states that n >> 1000) in O(n) and my solution's complexity is O(1000n), is my solution acceptable?
If the function is O(1000n), then it is automatically also O(n).
After all, if f(n) is O(1000n), then there exists a constant M and an n0 such that
f(n) <= M*1000n
for all n > n0. But if that is true, then we can take N = 1000*M and
f(n) <= N*n
for all n > n0. Therefore, f is O(n) as well.
Constant factors "drop out" in big-O notation. See Wikipedia, under "multiplication by a constant".
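Here is a tiny sketch of that constant-folding step (the function and the constant M are hypothetical, chosen only to illustrate the argument):

    # If f(n) <= M * 1000 * n for n > n0, fold the factor into N = 1000 * M.
    M, n0 = 3, 1                # hypothetical witnesses for the O(1000n) bound
    def f(n):                   # hypothetical running time within that bound
        return 2500 * n + 42

    N = 1000 * M
    for n in range(n0 + 1, 10_000):
        assert f(n) <= M * 1000 * n   # the O(1000n) witness
        assert f(n) <= N * n          # the same witness, read as O(n)
    print("Same constant works: O(1000n) and O(n) coincide.")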
Constant factors never affect big-O classifications, no matter how large n gets. So yes, your solution is acceptable.
Yes, and in fact the condition n >> 1000 is not even needed: O(1000n) and O(n) describe the same set of functions regardless.