Big O Notation example claiming 2n^2 = O(n^3)? - time-complexity

While I was studying time complexity, I found a web page explaining about Big O notation.
While I was reading I found the example they used and it got me confused.
In the example they said something like
2n^{2} = O(n^{3}) because for all n greater than 2 (that is, n_{0} = 2), there exists a c (namely c = 1) that satisfies 0 <= 2n^{2} <= cn^{3}
At first I thought the Big O should be O(n^2). But after reading through a few more texts I can see that n^2 is smaller than n^3, so one can theoretically say that the big O is O(n^3).
But is O(n^2) wrong for the example above?

You're absolutely correct (though you should write out the proof for O(n^2) to convince yourself). Technically the example as written is not wrong, but it's also not a good example. You can think of O(g) as meaning that the function grows as slowly or slower than g. So if a function is O(n^2) it is also O(n^3). There are other variations on Big-O notation (Theta and Omega) which make stronger statements about the asymptotic behavior.
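The proof the answer suggests writing out is short. A sketch, directly from the definition used in the question:

```latex
% Claim: 2n^2 = O(n^2).
% Take c = 2 and n_0 = 1. Then for all n \ge n_0:
0 \le 2n^2 \le c \cdot n^2 = 2n^2.
% The same function also satisfies the O(n^3) definition:
% with c = 1 and n_0 = 2, we have 0 \le 2n^2 \le n^3 whenever n \ge 2.
```

So both statements are true; O(n^2) is simply the tighter of the two bounds.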
This thread has some nice information, even though the focus of the question is slightly different: Difference between Big-O and Little-O Notation.

Related

Does it make sense to use big-O to describe the best case for a function?

I have an extremely pedantic question on big-O notation that I would like some opinions on. One of my uni subjects states “Best O(1) if their first element is the same” for a question on checking if two lists have a common element.
My qualm with this is that it does not describe the function on the entire domain of large inputs, rather the restricted domain of large inputs that have two lists with the same first element. Does it make sense to describe a function by only talking about a subset of that function’s domain? Of course, when restricted to that domain, the time complexity is Ω(1), O(1) and therefore Θ(1), but this isn’t describing the original function. From my understanding it would be more correct to say the entire function is bounded below by Ω(1) (and above by O(m*n), where m, n are the sizes of the two input lists).
What do all of you think?
It is perfectly correct to discuss cases (as you correctly point out, a case is a subset of the function's domain) and bounds on the runtime of algorithms in those cases (Omega, Oh or Theta). Whether or not it's useful is a harder question and one that is very situation-dependent. I'd generally think that Omega-bounds on the best case, Oh-bounds on the worst case and Theta bounds (on the "universal" case of all inputs, when such a bound exists) are the most "useful". But calling the subset of inputs where the first elements of each collection are the same the "best case" seems like reasonable usage. The "best case" for bubble sort is the subset of inputs which are pre-sorted arrays, and is bounded by O(n), better than unmodified merge sort's best-case bound.
Fundamentally, big-O notation is a way of talking about how some quantity scales. In CS we so often see it used for talking about algorithm runtimes that we forget that all the following are perfectly legitimate use cases for it:
The area of a circle of radius r is O(r^2).
The volume of a sphere of radius r is O(r^3).
The expected number of 2’s showing after rolling n dice is O(n).
The minimum number of 2’s showing after rolling n dice is O(1).
The number of strings made of n pairs of balanced parentheses is O(4^n).
With that in mind, it’s perfectly reasonable to use big-O notation to talk about how the behavior of an algorithm in a specific family of cases will scale. For example, we could say the following:
The time required to sort a sequence of n already-sorted elements with insertion sort is O(n). (There are other, non-sorted sequences where it takes longer.)
The time required to run a breadth-first search of a tree with n nodes is O(n). (In a general graph, this could be larger if the number of edges were larger.)
The time required to insert n items in sorted order into an initially empty binary heap is O(n). (This can be Θ(n log n) for a sequence of elements that is reverse-sorted.)
In short, it’s perfectly fine to use asymptotic notation to talk about restricted subcases of an algorithm. More generally, it’s fine to use asymptotic notation to describe how things grow as long as you’re precise about what you’re quantifying. See @Patrick87’s answer for more about whether to use O, Θ, Ω, etc. when doing so.
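The insertion-sort case above is easy to verify empirically. A small sketch (the class and method names are my own) that counts the inner-loop work of insertion sort on a pre-sorted versus a reverse-sorted input:

```java
// Hypothetical demo: counting insertion sort's inner-loop shifts to see
// the O(n) best case vs. the Θ(n^2) worst case on specific input families.
public class InsertionSortCases {

    // Sorts a copy of the array and returns the number of element shifts
    // the inner while-loop performed.
    static long sortAndCountShifts(int[] input) {
        int[] a = input.clone();
        long shifts = 0;
        for (int i = 1; i < a.length; i++) {
            int key = a[i];
            int j = i - 1;
            while (j >= 0 && a[j] > key) {
                a[j + 1] = a[j]; // shift one element to the right
                j--;
                shifts++;
            }
            a[j + 1] = key;
        }
        return shifts;
    }

    public static void main(String[] args) {
        int n = 1000;
        int[] sorted = new int[n], reversed = new int[n];
        for (int i = 0; i < n; i++) {
            sorted[i] = i;
            reversed[i] = n - i;
        }
        // Already-sorted input: the inner loop never runs -> 0 shifts, O(n) overall.
        System.out.println(sortAndCountShifts(sorted));   // 0
        // Reverse-sorted input: n(n-1)/2 shifts -> Θ(n^2).
        System.out.println(sortAndCountShifts(reversed)); // 499500
    }
}
```

On the restricted domain of sorted inputs the count is exactly 0, which is what licenses the "O(n) in the best case" statement.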

Time complexity - understanding big-Theta

I'm currently taking algorithms and data structure. After nearly two months of studying, I still find time complexity extremely confusing.
I was told (by my professor) that if the big-omega and big-O of some program aren't equal, big-theta doesn't exist.
I now literally question everything I've learned so far. I'll take BubbleSort as an example, with big-omega(n), big-theta(n^2) and big-O(n^2). Big-theta indeed does exist (and it makes sense when I analyze it).
Can anyone explain to me whether my professor is wrong or perhaps I misunderstood something?
There exists big O, Big ϴ, and Big Ω.
O is the upper bound
Ω is the lower bound
ϴ exists if and only if O = Ω
In essence, Big-O is the most useful in that it tells us the WORST a function can behave.
Big-Ω indicates the BEST it can behave.
When WORST = BEST, you get ALWAYS. That's Big-ϴ: when a function always behaves the same.
Example (optimized bubble sort, which has a boolean flag for when a swap occurs):
bubbleSort() ∈ Ω(n)
bubbleSort() ∈ O(n^2)
This means the best bubble sort can do is linear time, but in the worst case it DEGRADES to quadratic time. The reason: on a pre-sorted list, bubble sort behaves quite well, as it does one loop through the list (n iterations) and exits. Whereas, on a list in descending order, it would do roughly n(n-1)/2 iterations, which is proportional to n^2. Depending on the input, bubble sort behaves DRASTICALLY differently (a different order of magnitude).
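The optimized bubble sort described here can be sketched roughly like this (class and method names are my own), with a pass counter to make the early exit visible:

```java
// Sketch of the optimized bubble sort with a swap flag: the flag lets it
// exit after a single pass on already-sorted input (the Ω(n) best case).
public class BubbleSortFlag {

    // Sorts the array in place and returns the number of passes made.
    static int sortCountingPasses(int[] a) {
        int passes = 0;
        boolean swapped = true;
        for (int end = a.length - 1; end > 0 && swapped; end--) {
            swapped = false;
            passes++;
            for (int i = 0; i < end; i++) {
                if (a[i] > a[i + 1]) {
                    int tmp = a[i]; a[i] = a[i + 1]; a[i + 1] = tmp;
                    swapped = true;
                }
            }
        }
        return passes;
    }

    public static void main(String[] args) {
        // Pre-sorted: one pass, no swaps, immediate exit -> ~n work.
        System.out.println(sortCountingPasses(new int[]{1, 2, 3, 4, 5})); // 1
        // Descending: a pass per element -> roughly n^2/2 comparisons.
        System.out.println(sortCountingPasses(new int[]{5, 4, 3, 2, 1})); // 4
    }
}
```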
Whereas:
mergeSort() ∈ Ω(n * log n)
mergeSort() ∈ O(n * log n)
This means merge sort takes n * log n time in the best case and in the worst case; it is ALWAYS n * log n. That is because, no matter what the input is, merge sort will always recursively divide the list into half-size sub-arrays, and put them back together once each sub-array is of size 1. However, you can only break something in half so many times (log2(n) times). Then the merge() routine, which is O(n) per level of recursion, runs once at each of the log2(n) levels. So, for ALL executions of mergeSort() you get n * log2(n).
Therefore we can make a STRONGER statement and say that:
mergeSort() ∈ ϴ(n * log n)
We can only make such definitive statements (use big theta) if the runtime of a function is bounded above and below by functions of the exact same order of magnitude.
How I remember it:
ϴ is an end all be all, whereas O, and Ω are simply limits.
Let's take this problem in two ways:
First Way:
We say that O(bubble sort) is O(n^2) and Ω(bubble sort) is Ω(n).
The above statement holds true because, when the elements are laid out in the array in a sorted or almost-sorted manner, bubble sort's running time has the form an + b, where a and b are constants (a positive).
But when the elements are laid out in a not-so-sorted or very unsorted manner, the running time has the form an^2 + bn + c, where a, b, c are constants (a positive).
Thus, bubble sort ∈ O(n^2) and bubble sort ∈ Ω(n).
PS: exactly what constitutes an almost-sorted manner versus a not-so-sorted manner is a talk for another day, but you can easily find it with a quick search.
Now, the second way:
Rather than taking Big-O and Big-Ω of bubble sort as a whole, we divide bubble sort into its worst case and best case.
Now we ask what the Big-O of the worst case of bubble sort is, and we get O(n^2), i.e.,
worst case of bubble sort ∈ O(n^2).
But now, since we have specified that we are dealing with only the worst case,
the Big-Ω of the worst case of bubble sort will also be Ω(n^2).
Thus, in this case, ϴ(n^2) exists, because O = Ω.
Using a similar approach, we can see that the best case of bubble sort will have ϴ(n).
Note: the worst case will be of the form an^2 + bn + c,
and the best case of the form an + b.
-------------------------------------------------------
In the second way we bifurcated the worst case and the best case, and only then were we able to reach ϴ notations for both cases.
Bubble sort as a whole has running-time polynomials of different degrees depending on the input, which is why it does not have a ϴ notation.

What is the average case time-complexity for quick sort?

I see a google search pulls up a lot here but there is a lot of ambiguity on the web. Please do not mark as a duplicate.
I have seen
n log n
and
n ln n
and finally
n log(base2) n
One of the SO answers treats Big O as the worst case and states n^2. The worst case is in fact n^2, but Big O does not imply worst case, so in this sense it is highly voted but wrong.
To be clear, I want to know the average case and to be clear, this is for time-complexity.
In math, a logarithm written without a base is usually taken to be base 10 (or the natural log, depending on the field).
In computer science, if the base is omitted from the logarithm, convention dictates that the base is equal to 2.
Bit of a silly question really; I'm surprised your teacher didn't address this. You can use any of the three Big-O runtimes in your original post because they all mean exactly the same thing.
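The reason the three forms are interchangeable: logarithms of different bases differ only by a constant factor, and big-O ignores constant factors.

```latex
\log_2 n \;=\; \frac{\ln n}{\ln 2} \;=\; \frac{\log_{10} n}{\log_{10} 2}
\quad\Longrightarrow\quad
O(\log_2 n) = O(\ln n) = O(\log_{10} n) = O(\log n).
```

So n log n, n ln n, and n log2 n all denote the same complexity class.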

Complexity of binary tree search

I am finding conflicting information on searching a binary heap. According to this, https://en.wikipedia.org/wiki/Binary_heap, it's O(n) (edit: it's actually O(log n)), and according to this, Search an element in a heap, it's O(n/2).
Wikipedia was just wrong about that. Binary heaps are not designed to be searched for individual elements, and are optimized just to give access to the smallest element. This is what enables them, for example, to be constructed in time Θ(n); the ordering they require isn't nearly as strict as a binary search tree.
It looks like someone has updated Wikipedia, which is good. Thanks for pointing that out!
One note - the terminology O(n / 2), while technically correct, is considered a poor use of big-O notation. Big-O notation ignores constant factors, so O(n / 2) is the same as O(n). If you want to count the specific number of operations you'll end up doing, then avoid big-O notation and say something like "exactly n / 2 comparisons are required."

What is Big O notation? Do you use it? [duplicate]

This question already has answers here:
What is a plain English explanation of "Big O" notation?
What is Big O notation? Do you use it?
I missed this university class I guess :D
Does anyone use it and give some real life examples of where they used it?
See also:
Big-O for Eight Year Olds?
Big O, how do you calculate/approximate it?
Did you apply computational complexity theory in real life?
One important thing most people forget when talking about Big-O, so I feel the need to mention it:
You cannot use Big-O to compare the speed of two algorithms. Big-O only says how much slower an algorithm will get (approximately) if you double the number of items processed, or how much faster it will get if you cut the number in half.
However, if you have two entirely different algorithms and one (A) is O(n^2) while the other (B) is O(log n), that does not mean A is slower than B. Actually, with 100 items, A might be ten times faster than B. It only says that as the input grows, A's runtime will grow like n^2 and B's like log n. So, if you benchmark both and you know how much time A takes to process 100 items, and how much time B needs for the same 100 items, and A is faster than B, you can calculate at what number of items B will overtake A in speed (as B's runtime grows much more slowly than A's, it will overtake A sooner or later; this is for sure).
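This crossover effect can be sketched with made-up cost models (the constants and names below are invented for illustration, not measurements of real algorithms):

```java
// Hypothetical demo: algorithm A is O(n^2) with a tiny constant factor,
// B is O(log n) with a large one. Despite the better growth rate,
// B only wins past some crossover input size.
public class CrossoverDemo {

    // Invented cost models in arbitrary "time units".
    static double costA(int n) { return n * n / 100.0; }                       // O(n^2)
    static double costB(int n) { return 100.0 * (Math.log(n) / Math.log(2)); } // O(log n)

    // Smallest n at which B becomes cheaper than A.
    static int crossover() {
        for (int n = 2; ; n++) {
            if (costB(n) < costA(n)) return n;
        }
    }

    public static void main(String[] args) {
        // At n = 100 the "worse" algorithm A is actually faster.
        System.out.println(costA(100) < costB(100)); // true
        // But B overtakes A at some larger input size.
        System.out.println(crossover());
    }
}
```

Big-O tells you the crossover must exist; only the constants (which Big-O discards) tell you where it is.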
Big O notation denotes the limiting factor of an algorithm. It's a simplified expression of how the run time of an algorithm scales in relation to the input.
For example (in Java):
/** Takes an array of strings and concatenates them.
 * This is a silly way of doing things but it gets the
 * point across hopefully.
 * @param strings the array of strings to concatenate
 * @return a string that is the result of concatenating all the strings
 *         in the array
 */
public static String badConcat(String[] strings) {
    String totalString = "";
    for (String s : strings) {
        for (int i = 0; i < s.length(); i++) {
            totalString += s.charAt(i);
        }
    }
    return totalString;
}
Now think about what this is actually doing. It is going through every character of input and adding them together. This seems straightforward. The problem is that String is immutable. So every time you add a letter onto the string you have to create a new String. To do this you have to copy the values from the old string into the new string and add the new character.
This means you will be copying the first character n times, where n is the number of characters in the input, the second character n-1 times, and so on; in total there will be n(n-1)/2 copies.
This is (n^2-n)/2 and for Big O notation we use only the highest magnitude factor (usually) and drop any constants that are multiplied by it and we end up with O(n^2).
Using something like a StringBuilder will be along the lines of O(n log n). If you calculate the number of characters at the beginning and set the capacity of the StringBuilder, you can get it down to O(n).
So if we had 1000 characters of input, the first example would perform roughly a million operations, the StringBuilder version would perform 10,000, and the StringBuilder with setCapacity would perform 1000 operations to do the same thing. This is a rough estimate, but Big O notation is about orders of magnitude, not exact runtimes.
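The O(n) variant mentioned above might look like this (the method name is my own invention):

```java
// Sketch of the linear-time alternative: StringBuilder with a pre-set
// capacity avoids the repeated full-string copies that make the naive
// += loop quadratic.
public class ConcatDemo {

    static String goodConcat(String[] strings) {
        // Pre-compute the total length so the internal buffer is
        // allocated once and never has to grow.
        int total = 0;
        for (String s : strings) total += s.length();
        StringBuilder sb = new StringBuilder(total);
        for (String s : strings) sb.append(s);
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(goodConcat(new String[]{"foo", "bar", "baz"})); // foobarbaz
    }
}
```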
It's not something I use per se on a regular basis. It is, however, constantly in the back of my mind when trying to figure out the best algorithm for doing something.
A very similar question has already been asked at Big-O for Eight Year Olds?. Hopefully the answers there will answer your question, although the asker there did have a bit of mathematical background which you may not have, so clarify if you need a fuller explanation.
Every programmer should be aware of what Big O notation is, how it applies to actions on common data structures and algorithms (and thus pick the correct DS and algorithm for the problem they are solving), and how to calculate it for their own algorithms.
1) It's an order of measurement of the efficiency of an algorithm when working on a data structure.
2) Actions like 'add' / 'sort' / 'remove' can take different amounts of time with different data structures (and algorithms), for example 'add' and 'find' are O(1) for a hashmap, but O(log n) for a binary tree. Sort is O(n log n) for QuickSort, but O(n^2) for BubbleSort, when dealing with a plain array.
3) Calculations can be done by looking at the loop depth of your algorithm generally. No loops, O(1), loops iterating over all the set (even if they break out at some point) O(n). If the loop halves the search space on each iteration? O(log n). Take the highest O() for a sequence of loops, and multiply the O() when you nest loops.
Yeah, it's more complex than that. If you're really interested get a textbook.
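The loop-depth rules in point 3 can be made concrete with a toy sketch (operation counts here are just inner-loop iteration counts; the names are my own):

```java
// Toy illustration of the loop-depth heuristics: one loop -> O(n),
// nested loops multiply, and halving the search space -> O(log n).
public class LoopDepthDemo {

    // One loop over n items -> O(n).
    static int linear(int n) {
        int ops = 0;
        for (int i = 0; i < n; i++) ops++;
        return ops;
    }

    // Two nested loops over n items -> O(n) * O(n) = O(n^2).
    static int quadratic(int n) {
        int ops = 0;
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++) ops++;
        return ops;
    }

    // Halving the remaining range each iteration -> O(log n).
    static int logarithmic(int n) {
        int ops = 0;
        for (int i = n; i > 1; i /= 2) ops++;
        return ops;
    }

    public static void main(String[] args) {
        System.out.println(linear(1024));      // 1024
        System.out.println(quadratic(1024));   // 1048576
        System.out.println(logarithmic(1024)); // 10
    }
}
```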
'Big-O' notation is used to compare the growth rates of two functions of a variable (say n) as n gets very large. If function f grows at least as quickly as function g, we say that g = O(f) to mean that for large enough n, g will never exceed f up to a scaling factor.
It turns out that this is a very useful idea in computer science and particularly in the analysis of algorithms, because we are often precisely concerned with the growth rates of functions which represent, for example, the time taken by two different algorithms. Very coarsely, we can determine that an algorithm with run-time t1(n) is more efficient than an algorithm with run-time t2(n) if t1 = O(t2) for large enough n which is typically the 'size' of the problem - like the length of the array or number of nodes in the graph or whatever.
This stipulation, that n gets large enough, allows us to pull a lot of useful tricks. Perhaps the most often used one is that you can simplify functions down to their fastest growing terms. For example n^2 + n = O(n^2) because as n gets large enough, the n^2 term gets so much larger than n that the n term is practically insignificant. So we can drop it from consideration.
However, it does mean that big-O notation is less useful for small n, because the slower growing terms that we've forgotten about are still significant enough to affect the run-time.
What we now have is a tool for comparing the costs of two different algorithms, and a shorthand for saying that one is quicker or slower than the other. Big-O notation can be abused which is a shame as it is imprecise enough already! There are equivalent terms for saying that a function grows less quickly than another, and that two functions grow at the same rate.
Oh, and do I use it? Yes, all the time - when I'm figuring out how efficient my code is it gives a great 'back-of-the-envelope' approximation of the cost.
The "Intuition" behind Big-O
Imagine a "competition" between two functions over x, as x approaches infinity: f(x) and g(x).
Now, if from some point on (some x) one function always has a higher value than the other, then let's call this function "faster" than the other.
So, for example, if for every x > 100 you see that f(x) > g(x), then f(x) is "faster" than g(x).
In this case we would say g(x) = O(f(x)). f(x) poses a sort of "speed limit" for g(x), since eventually it passes it and leaves it behind for good.
This isn't exactly the definition of big-O notation, which also states that f(x) only has to be larger than C*g(x) for some constant C (which is just another way of saying that you can't help g(x) win the competition by multiplying it by a constant factor - f(x) will always win in the end). The formal definition also uses absolute values. But I hope I managed to make it intuitive.
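For reference, the formal definition the intuition above is approximating:

```latex
g(x) = O(f(x)) \iff
\exists\, C > 0,\ \exists\, x_0 \ \text{such that}\
|g(x)| \le C\,|f(x)| \quad \text{for all } x > x_0.
```

The constant C is exactly the "you can't win by multiplying g by a constant" clause, and x_0 is the point from which f stays ahead.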
It may also be worth considering that the complexity of many algorithms is based on more than one variable, particularly in multi-dimensional problems. For example, I recently had to write an algorithm for the following. Given a set of n points, and m polygons, extract all the points that lie in any of the polygons. The complexity is based around two known variables, n and m, and the unknown of how many points are in each polygon. The big O notation here is quite a bit more involved than O(f(n)) or even O(f(n) + g(m)).
Big O is good when you are dealing with large numbers of homogenous items, but don't expect this to always be the case.
It is also worth noting that the actual number of iterations over the data is often dependent on the data. Quicksort is usually quick, but give it presorted data and it slows down. My points-and-polygons algorithm ended up quite fast, close to O(n + m log(m)), based on prior knowledge of how the data was likely to be organised and the relative sizes of n and m. It would fall down badly on randomly organised data of different relative sizes.
A final thing to consider is that there is often a direct trade-off between the speed of an algorithm and the amount of space it uses. Pigeon hole sorting is a pretty good example of this. Going back to my points and polygons, let's say that all my polygons were simple and quick to draw, and I could draw them filled on screen, say in blue, in a fixed amount of time each. So if I draw my m polygons on a black screen it would take O(m) time. To check if any of my n points was in a polygon, I simply check whether the pixel at that point is blue or black. So the check is O(n), and the total analysis is O(m + n). The downside, of course, is that I need near-infinite storage if I'm dealing with real-world coordinates to millimetre accuracy... ho hum.
It may also be worth considering amortized time, rather than just worst case. This means, for example, that if you run the algorithm n times, it will be O(1) on average, but it might be worse sometimes.
A good example is a dynamic table, which is basically an array that expands as you add elements to it. A naïve implementation would increase the array's size by 1 for each element added, meaning that all the elements need to be copied every time a new one is added. This would result in an O(n^2) algorithm if you were concatenating a series of arrays using this method. An alternative is to double the capacity of the array every time you need more storage. Even though appending is an O(n) operation sometimes, you will only need to copy O(n) elements in total for every n elements added, so the operation is O(1) on average. This is how things like StringBuilder or std::vector are implemented.
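A minimal sketch of such a doubling table (the class and field names are my own, not a real library API), with a counter to make the amortized claim checkable:

```java
// Sketch of a dynamic table that doubles its capacity when full.
// 'copies' records the total elements moved by all grow operations.
public class DynamicTable {
    private int[] data = new int[1];
    private int size = 0;
    long copies = 0;

    void add(int value) {
        if (size == data.length) {
            // Doubling: this copy is O(n), but it happens so rarely that
            // the amortized cost per add is O(1).
            int[] bigger = new int[data.length * 2];
            System.arraycopy(data, 0, bigger, 0, size);
            copies += size;
            data = bigger;
        }
        data[size++] = value;
    }

    public static void main(String[] args) {
        DynamicTable t = new DynamicTable();
        int n = 1_000_000;
        for (int i = 0; i < n; i++) t.add(i);
        // Total copies stay below 2n (geometric series 1+2+4+... < 2n),
        // versus roughly n^2/2 copies for the grow-by-one strategy.
        System.out.println(t.copies < 2L * n); // true
    }
}
```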
What is Big O notation?
Big O notation is a method of expressing the relationship between the number of steps an algorithm will require and the size of the input data. This is referred to as the algorithmic complexity. For example, sorting a list of size N using Bubble Sort takes O(N^2) steps.
Do I use Big O notation?
I do use Big O notation on occasion to convey algorithmic complexity to fellow programmers. I use the underlying theory (e.g. Big O analysis techniques) all of the time when I think about what algorithms to use.
Concrete Examples?
I have used the theory of complexity analysis to create algorithms for efficient stack data structures which require no memory reallocation, and which support average time of O(N) for indexing. I have used Big O notation to explain the algorithm to other people. I have also used complexity analysis to understand when linear time sorting O(N) is possible.
From Wikipedia.....
Big O notation is useful when analyzing algorithms for efficiency. For example, the time (or the number of steps) it takes to complete a problem of size n might be found to be T(n) = 4n² − 2n + 2.
As n grows large, the n² term will come to dominate, so that all other terms can be neglected — for instance when n = 500, the term 4n² is 1000 times as large as the 2n term. Ignoring the latter would have negligible effect on the expression's value for most purposes.
Obviously I have never used it..
You should be able to evaluate an algorithm's complexity. This combined with a knowledge of how many elements it will take can help you to determine if it is ill suited for its task.
It says how many iterations an algorithm takes in the worst case.
To search for an item in a list, you can traverse the list until you find the item. In the worst case, the item is in the last position.
Let's say there are n items in the list. In the worst case you take n iterations. In Big O notation that is O(n).
It says, factually, how efficient an algorithm is.
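The linear search just described can be sketched as follows (names are my own):

```java
// Linear search: up to n iterations, hence O(n) in the worst case,
// which occurs when the target sits in the last position (or is absent).
public class LinearSearch {

    // Returns the index of target in list, or -1 if it is not present.
    static int find(int[] list, int target) {
        for (int i = 0; i < list.length; i++) {
            if (list[i] == target) return i; // best case: first position
        }
        return -1; // worst case: every one of the n items was examined
    }

    public static void main(String[] args) {
        int[] items = {7, 3, 9, 1};
        System.out.println(find(items, 1)); // 3: last position, n iterations
        System.out.println(find(items, 7)); // 0: first position, one iteration
    }
}
```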