I'm taking a course about big O notation on Coursera. I watched a video about the big O of a Fibonacci algorithm (the non-recursive method), which goes like this:
Operation                        Runtime
create an array F[0..n]          O(n)
F[0] <-- 0                       O(1)
F[1] <-- 1                       O(1)
for i from 2 to n:               Loop O(n) times
    F[i] <-- F[i-1] + F[i-2]     O(n)  => I don't understand this line, isn't it O(1)?
return F[n]                      O(1)

Total: O(n) + O(1) + O(1) + O(n)*O(n) + O(1) = O(n^2)
I understand every part except F[i] <-- F[i-1] + F[i-2] being O(n). Isn't it O(1), since it's just a simple addition? Isn't it the same as F[i] <-- 1+1?
The explanation they give is: "But the addition is a bit worse. And normally additions are constant time. But these are large numbers. Remember, the nth Fibonacci number has about n over 5 digits to it, they're very big, and they often won't fit in the machine word."
"Now if you think about what happens if you add two very big numbers together, how long does that take? Well, you sort of add the tens digit and you carry, and you add the hundreds digit and you carry, and add the thousands digit, you carry and so on and so forth. And you sort of have to do work for each digits place.
And so the amount of work that you do should be proportional to the number of digits. And in this case, the number of digits is proportional to n, so this should take O(n) time to run that line of code".
I'm still a bit confused. Does it mean that large numbers affect time complexity too? For example, is a = n+1 O(1) while a = n^50 + n^50 isn't O(1) anymore?
Video link for anyone who needs more information (4:56 to 6:26)
Big-O is just a notation for keeping track of orders of magnitude. But when we apply that in algorithms, we have to remember "orders of magnitude of WHAT"? In this case it is "time spent".
CPUs are set up to execute basic arithmetic on basic arithmetic types in constant time. For most purposes, we can assume we are dealing with those basic types.
However, if n is a very large positive integer, we can't assume that. A very large integer needs O(log(n)) bits to represent, which means that whether we store it as bits, bytes, etc., we need an array of O(log(n)) things to store it. (We would need fewer bytes than bits, but that is just a constant factor.) And when we do a calculation, we have to think about what we will actually do with that array.
Now suppose that we're trying to calculate n+m. We're going to need to generate a result of size O(log(n+m)), which must take at least that much time to allocate. Luckily, the grade-school method of long addition, where you add digits and keep track of carries, can be adapted for big-integer libraries and runs in O(log(n+m)) time.
So when you're looking at addition, the log of the size of the answer is what matters. Since log(50^n) = n * log(50), that means that operations on numbers like 50^n are at least O(n). (Computing 50^n in the first place might take longer...) And it means that calculating n+1 takes O(log(n)) time.
Now in the case of the Fibonacci sequence, F(n) is roughly φ^n where φ = (1 + sqrt(5))/2 so log(F(n)) = O(n).
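To see this concretely, here is a minimal Python sketch of the iterative algorithm from the question (my own illustration, not from the course). Python integers are arbitrary precision, so each addition in the loop genuinely costs time proportional to the number of digits involved:

def fib(n):
    # Array-based Fibonacci, as in the question's pseudocode.
    if n < 2:
        return n
    F = [0] * (n + 1)               # create an array F[0..n]: O(n)
    F[1] = 1
    for i in range(2, n + 1):       # loop runs O(n) times
        # Big-integer addition: cost grows with the number of digits of F[i-1],
        # which is roughly i/5 decimal digits, so this line is O(i), not O(1).
        F[i] = F[i - 1] + F[i - 2]
    return F[n]

print(len(str(fib(1000))))          # about 1000/5 = 200 decimal digits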
Related
Let's say I have to iterate over every character in an array of strings, in which every string has a different length, so arr[0].length != arr[1].length and so on, as in this example:
# prints every char in all the array
for str in arr:
    for c in str:
        print(c)
How should the time complexity of an algorithm of this nature be represented? As a summation of the lengths of the elements in the array? Or just as O(N*M), taking N as the number of elements and M as the maximum string length, which overbounds accordingly?
There is a precise mathematical theory called complexity theory which answers your question and many more. In complexity theory, we have what is called a Turing machine, which is a type of computer. The time complexity of a Turing machine doing a computation is then defined as the function f on the natural numbers such that f(n) is the worst-case running time of the machine on inputs of length n. In your case the machine just needs to copy its input somewhere else, which clearly has O(n) time complexity (n here is the combined length of your array). Since NM is at least n, a Turing machine running the algorithm you described will not run longer than some constant times NM, but it may halt sooner due to irregularities in the lengths of the elements of the array.
If you are interested in learning about complexity theory, I recommend the book Introduction to the Theory of Computation by Michael Sipser, which explains these concepts from scratch.
There are many ways you could do this. Your bound of O(NM) is a conservative upper bound. You could also define a parameter L indicating the total length of all the strings and say that the runtime is Θ(N + L), which is essentially your sum idea made a bit cleaner by assigning a name to the summation. That’s a more precise bound that more clearly indicates where the work is being done.
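As a small sanity check of the Θ(N + L) view (my own sketch, not part of the answer above), you can count the loop iterations directly and compare them with N and the total length L:

arr = ["hello", "hi", "a much longer string"]

outer_steps = 0                     # one step per string: N of these
inner_steps = 0                     # one step per character: L of these in total
for s in arr:
    outer_steps += 1
    for c in s:
        inner_steps += 1

N = len(arr)
L = sum(len(s) for s in arr)
print(outer_steps == N, inner_steps == L)   # True True: the work is Theta(N + L)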
I know we should drop the non-dominant terms when calculating the time complexity of an algorithm. I am wondering if we should also drop them when calculating space complexity. For example, if I have a string of N letters, I'd like to:
construct a list of letters from this string -> Space: O(N);
sort this list -> Worst-case space complexity for Timsort (I use Python): O(N).
In this case, would the entire solution take O(N) + O(N) space or just O(N)?
Thank you.
Welcome to SO!
First of all, I think you misunderstand complexity: complexity is defined independently of constant factors. It depends only on the large-scale behavior as the data set size N grows. Thus, O(N) + O(N) is the same complexity as O(N).
Thus, your question might have been:
If I construct a list of letters using an algorithm with O(N) space complexity, followed by a sort algorithm with O(N) space complexity, would the entire solution use twice as much space?
But this question cannot be answered, since a complexity does not give you any measure of how much space is actually used.
A well-known example: a brute-force sorting algorithm such as BubbleSort, with time complexity O(N^2), can be faster for small data sets than a very good sorting algorithm such as QuickSort, with average time complexity O(N log N).
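If you want to see this effect yourself, here is a rough benchmark sketch (my own; the exact numbers, and even the winner for small inputs, depend on your machine and on how each algorithm is implemented):

import random
import timeit

def bubble_sort(a):
    # Textbook O(N^2) bubble sort on a copy of the input.
    a = list(a)
    for i in range(len(a)):
        for j in range(len(a) - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a

def quick_sort(a):
    # Simple O(N log N) average-case quicksort (first element as pivot).
    if len(a) <= 1:
        return list(a)
    pivot, rest = a[0], a[1:]
    return (quick_sort([x for x in rest if x < pivot])
            + [pivot]
            + quick_sort([x for x in rest if x >= pivot]))

for n, reps in [(8, 20000), (1000, 20)]:
    data = [random.random() for _ in range(n)]
    t_bubble = timeit.timeit(lambda: bubble_sort(data), number=reps)
    t_quick = timeit.timeit(lambda: quick_sort(data), number=reps)
    print(f"N={n}: bubble {t_bubble:.3f}s, quick {t_quick:.3f}s")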
EDIT:
It is not a contradiction that one can compute a space complexity and that it does not say how much space is actually used.
A simple example:
Say that, for a certain problem, algorithm 1 has linear space complexity O(n) and algorithm 2 has space complexity O(n^2).
One could thus assume (but this is wrong) that algorithm 1 would always use less space than algorithm 2.
First, it is clear that for large enough n algorithm 2 will use more space than algorithm 1, because n^2 grows faster than n.
However, consider the case where n is small enough, say n = 1, and algorithm 1 is implemented on a computer that uses storage in doubles (64 bits), whereas algorithm 2 is implemented on a computer that uses bytes (8 bits). Then, obviously, the O(n^2) algorithm uses less space than the O(n) algorithm.
This is a doubt I constantly have. For example, say I have a 2-D array of size n^2 (n being the number of rows and columns). Suppose I want to print all the elements of the 2-D array. When I calculate the time complexity of the algorithm with respect to n it's O(n^2). But if I calculate the time with respect to the input size (n^2) it's linear. Are both these calculations correct? If so, why do people only use O(n^2) everywhere regarding 2-D arrays?
That is not how time complexity works. You cannot do "simple math" like that.
A two-dimensional square array of extent x has n = x*x elements. Printing these n elements takes n operations (or n/m if you print m items at a time), which is O(n). The necessary work increases linearly with the number of elements (which is, incidentally, quadratic with respect to the array extent -- but if you arranged the same number of items in a 4-dimensional array, would it be any different? Obviously not. That doesn't magically make it O(n^4)).
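A tiny sketch of that point (my own, with a made-up extent), comparing a 2-D layout with a flat one holding the same n = x*x items:

x = 100                                   # array extent
grid = [[0] * x for _ in range(x)]        # 2-D: x rows of x columns, n = x*x items
flat = [0] * (x * x)                      # 1-D: the same n items

visits_2d = sum(1 for row in grid for _ in row)   # one visit per element
visits_1d = sum(1 for _ in flat)

print(visits_2d, visits_1d)               # 10000 10000: both linear in n,
                                          # even though n is quadratic in x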
What you use time complexity for is not stuff like that anyway. What you want time complexity to tell you is an approximate idea of how some particular algorithm may change its behavior if you grow the number of inputs beyond some limit.
So, what you want to know is, if you do XYZ on one million items or on two million items, will it take approximately twice as long, or will it take approximately sixteen times as long, for example.
Time complexity analysis is irrespective of "small details" such as how much time an actual operation takes. Which tends to make the whole thing more and more academic and practically useless in modern architectures, because constant factors (such as memory latency or bus latency, cache misses, faults, access times, etc.) play an ever-increasing role as they stay mostly the same over decades while the actual cost-per-step (instruction throughput, ALU power, whatever) goes down steadily with every new computer generation.
In practice, it happens quite often that the dumb, linear, brute force approach is faster than a "better" approach with better time complexity simply because the constant factor dominates everything.
Is it possible to optimize an algorithm for computing the GCD of the numbers in an array if the array is sorted?
Thanks!
So, let's see. The general method of finding the GCD of an array of numbers is:
result = a[0]
for i = 1 to length(a)-1
    result = gcd(result, a[i])
So what's the complexity of the gcd algorithm? Well, that's a rather involved question. See, for example, Time complexity of Euclid's Algorithm
If we pretend, as posited in the accepted answer, that the GCD algorithm is constant time (i.e. O(1)), then the complexity of the loop above is O(n). That's a reasonable assumption for numbers that fit into computer registers. And if that's the case then spending O(n log n) time to sort the array would almost certainly be a loser.
But in reality the GCD calculation is linear in the number of digits in the two numbers. If your input data consists of lots of large numbers, it's possible that sorting the array first will give you an advantage. The reasoning is that the result of gcd(a, b) will by definition give you a number that's no larger than min(a,b). So by getting the GCD of the two smallest numbers first, you limit the number of digits you have to deal with. Whether that limiting will overcome the cost of sorting the array is unclear.
If the numbers are larger than will fit into a computer register (hundreds of digits), then the GCD calculation is more expensive. But then again, so is sorting.
So the answer to your question is that sorting will almost certainly speed up the GCD portion of the calculation, but whether that improvement will offset the cost of sorting itself is unclear.
I think the only way you'll know for sure is to test it with representative data.
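If you do want to test it, here is a minimal Python sketch of the loop above (my own, using the standard library's math.gcd), with the sort made optional and an early exit once the running GCD reaches 1, since it can never grow again after that:

from math import gcd

def gcd_of_array(a, sort_first=False):
    if sort_first:
        a = sorted(a)              # start with the smallest numbers
    result = a[0]
    for x in a[1:]:
        result = gcd(result, x)
        if result == 1:            # gcd(result, anything) can never exceed result,
            return 1               # so once it hits 1 we can stop early
    return result

print(gcd_of_array([1071, 462, 1029], sort_first=True))   # 21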
Does every algorithm have a Big Omega?
Is it possible for algorithms to have both a Big O and a Big Omega that are not equal to each other (i.e., not Big Theta)?
For instance, Quicksort's Big O is O(n log n). But does it have a Big Omega? If it does, how do I calculate it?
First, it is of paramount importance that one not confuse the bound with the case. A bound - like Big-Oh, Big-Omega, Big-Theta, etc. - says something about a rate of growth. A case says something about the kinds of input you're currently considering being processed by your algorithm.
Let's consider a very simple example to illustrate the distinction above. Consider the canonical "linear search" algorithm:
LinearSearch(list[1...n], target)
    for i := 1 to n do
        if list[i] = target then return i
    return -1
There are three broad kinds of cases one might consider: best, worst, and average cases for inputs of size n. In the best case, what you're looking for is the first element in the list (really, within any fixed number of positions from the start of the list). In such cases, it will take no more than some constant amount of time to find the element and return from the function. Therefore, the Big-Oh and Big-Omega happen to be the same for the best case: O(1) and Omega(1). When both O and Omega apply, we also say Theta, so this is Theta(1) as well.
In the worst case, the element is not in the list, and the algorithm must go through all n entries. Since f(n) = n happens to be a function that is bound from above and from below by the same class of functions (linear ones), this is Theta(n).
Average case analysis is usually a bit trickier. We need to define a probability space for viable inputs of length n. One might say that all valid inputs (where integers can be represented using 32 bits in unsigned mode, for instance) are equally probable. From that, one could work out the average performance of the algorithm as follows:
1. Find the probability that target is not represented in the list. Multiply by n.
2. Given that target is in the list at least once, find the probability that it appears at position k, for each 1 <= k <= n. Multiply each P(k) by k.
3. Add up all of the above to get a function in terms of n.
Notice that in step 1 above, if the probability is non-zero, we will definitely get at least a linear function (exercise: we can never get more than a linear function). However, if the probability in step 1 is indeed zero, then the assignment of probabilities in step 2 makes all the difference in determining the complexity: you can have best-case behavior for some assignments, worst-case for others, and possibly end up with behavior that isn't the same as best (constant) or worst (linear).
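As a tiny worked instance of those steps (my own, under the common simplifying assumption that the target is always present and equally likely to be at each position), the expected number of comparisons comes out to (n+1)/2, which is linear:

n = 1000
p_absent = 0.0                                        # step 1: target assumed always present
term1 = p_absent * n                                  # step 1's contribution
term2 = sum(k * (1 / n) for k in range(1, n + 1))     # step 2: P(k) = 1/n for each position k
print(term1 + term2, (n + 1) / 2)                     # step 3: both roughly 500.5 -> Theta(n)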
Sometimes, we might speak loosely of a "general" or "universal" case, which considers all kinds of input (not just the best or the worst), but that doesn't give any particular weighting to inputs and doesn't take averages. In other words, you consider the performance of the algorithm in terms of an upper-bound on the worst-case, and a lower-bound on the best-case. This seems to be what you're doing.
Phew. Now, back to your question.
Are there functions which have different O and Omega bounds? Definitely. Consider the following function:
f(n) = 1 if n is odd, n if n is even.
The best case is "n is odd", in which case f is Theta(1); the worst case is "n is even", in which case f is Theta(n); and if we assume for the average case that we're talking about 32-bit unsigned integers, then f is Theta(n) in the average case, as well. However, if we talk about the "universal" case, then f is O(n) and Omega(1), and not Theta of anything. An algorithm whose runtime behaves according to f might be the following:
Strange(list[1...n], target)
    if n is odd then return target
    else return LinearSearch(list, target)
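For what it's worth, here is a direct Python transcription of the two routines (my own; note that Python indices are 0-based, unlike the 1-based pseudocode):

def linear_search(lst, target):
    for i, value in enumerate(lst):
        if value == target:
            return i                       # found: index of the first match
    return -1                              # not found: all n elements were checked

def strange(lst, target):
    if len(lst) % 2 == 1:                  # odd n: constant work, the Omega(1) side
        return target
    return linear_search(lst, target)      # even n: up to n comparisons, the O(n) side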
Now, a more interesting question might be whether there are algorithms for which some case (besides the "universal" case) cannot be assigned some valid Theta bound. This is interesting, but not overly so. The reason is that you, during your analysis, are allowed to choose the cases that constitute best- and worst-case behavior. If your first choice for the case turns out not to have a Theta bound, you can simply exclude the inputs that are "abnormal" for your purposes. The case and the bound aren't completely independent, in that sense: you can often choose a case such that it has "good" bounds.
But can you always do it?
I don't know, but that's an interesting question.
Does every algorithm have a Big Omega?
Yes. Big Omega is a lower bound. Any algorithm can be said to take at least constant time, so any algorithm is Ω(1).
Does every algorithm have a Big O?
No. Big O is an upper bound. Algorithms that don't (reliably) terminate don't have a Big O.
An algorithm has an upper bound if we can say that, in the absolute worst case, the algorithm will not take longer than this. I'm pretty sure O(∞) is not valid notation.
When will the Big O and Big Omega of an algorithm be equal?
There is actually a special notation for when they can be equal: Big Theta (Θ).
They will be equal if the algorithm scales perfectly with the size of the input (meaning there aren't input sizes where the algorithm is suddenly a lot more efficient).
This is assuming we take Big O to be the smallest possible upper bound and Big Omega to be the largest possible lower bound. This is not actually required from the definition, but they're commonly informally treated as such. If you drop this assumption, you can find a Big O and Big Omega that aren't equal for any algorithm.
Brute force prime number checking (where we just loop through all smaller numbers and try to divide them into the target number) is perhaps a good example of when the smallest upper bound and largest lower bound are not equal.
Assume you have some number n. Let's also for the time being ignore the fact that bigger numbers take longer to divide (a similar argument holds when we take this into account, although the actual complexities would be different). And I'm also calculating the complexity based on the number itself instead of the size of the number (which can be the number of bits, and could change the analysis here quite a bit).
If n is divisible by 2 (or some other small prime), we can very quickly check whether it's prime with 1 division (or a constant number of divisions). So the largest lower bound would be Ω(1).
Now if n is prime, we'll need to try to divide n by each of the numbers up to sqrt(n) (I'll leave the reason we don't need to go higher than this as an exercise). This would take O(sqrt(n)), which would also then be our smallest upper bound.
So the algorithm would be Ω(1) and O(sqrt(n)).
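Here is a minimal sketch of that brute-force check (my own), counting the trial divisions so the two bounds are visible: a small factor is found almost immediately, while a prime forces roughly sqrt(n) divisions:

from math import isqrt

def is_prime(n):
    # Brute-force trial division up to sqrt(n), counting the divisions made.
    divisions = 0
    for d in range(2, isqrt(n) + 1):
        divisions += 1
        if n % d == 0:
            return False, divisions        # composite: often found after 1 division
    return n > 1, divisions                # prime: about sqrt(n) divisions

print(is_prime(54))    # (False, 1)  -> the Omega(1) side
print(is_prime(53))    # (True, 6)   -> about sqrt(53) divisions, the O(sqrt(n)) side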
Exact complexity also may be hard to calculate for some particularly complex algorithms. In such cases it may be much easier and acceptable to simply calculate some reasonably close lower and upper bounds and leave it at that. I don't however have an example on hand for this.
How does this relate to best case and worst case?
Do not confuse upper and lower bounds for best and worst case. This is a common mistake, and a bit confusing, but they're not the same. This is a whole other topic, but as a brief explanation:
The best and worst (and average) cases can be calculated for every single input size. The upper and lower bounds can then be used for each of those 3 cases (separately). You can think of each of those cases as a line on a graph with input size on the x-axis and time on the y-axis and then, for each of those lines, the upper and lower bounds are lines which need to be strictly above or below that line as the input size tends to infinity (this isn't 100% accurate, but it's a good basic idea).
Quick-sort has a worst-case of Θ(n^2) (when we pick the worst possible pivot at every step) and a best-case of Θ(n log n) (when we pick good pivots). Note the use of Big Theta, meaning each of those are both lower and upper bounds.
Let's compare quick-sort with the above prime checking algorithm:
Say you have a given number n, and n is 53. Since it's prime, it will (always) take around sqrt(53) steps to determine whether it's prime. So the best and worst cases are all the same.
Say you want to sort some array of size n, and n is 53. Now those 53 elements can be arranged such that quick-sort ends up picking really bad pivots and running in around 53^2 steps (the worst case), or really good pivots and running in around 53 log 53 steps (the best case). So the best and worst cases are different.
Now take n as 54 for each of the above:
For prime checking, it will only take around 1 step to determine that 54 is not prime. The best and worst cases are the same again, but they're different from what they were for 53.
For quick-sort, you'll again have a worst case of around 54^2 steps and a best case of around 54 log 54 steps.
So for quick-sort, the worst case always takes around n^2 steps and the best case always takes around n log n steps. So the lower and upper (or "tight") bound of the worst case is Θ(n^2) and the tight bound of the best case is Θ(n log n).
For our prime checking, sometimes the worst case takes around sqrt(n) steps and sometimes it takes around 1 step. So the lower bound for the worst case would be Ω(1) and the upper bound would be O(sqrt(n)). It would be the same for the best case.
Note that above I simply said "the algorithm would be Ω(1) and O(sqrt(n))". This is slightly ambiguous, as it's not clear whether the algorithm always takes the same amount of time for some input size, or the statement is referring to one of the best, average or worst case.
How do I calculate this?
It's hard to give general advice for this since proofs of bounds are greatly dependent on the algorithm. You'd need to analyse the algorithm similar to what I did above to figure out the worst and best cases.
Big O and Big Omega can be calculated for every algorithm, as you can see in Big-oh vs big-theta