Is it possible to optimize an algorithm for computing the GCD of the numbers in an array if the array is sorted?
Thanks!
So, let's see. The general method of finding the GCD of an array of numbers is (in Python):

from math import gcd

# Fold gcd over the array, left to right.
result = a[0]
for i in range(1, len(a)):
    result = gcd(result, a[i])
So what's the complexity of the gcd algorithm? Well, that's a rather involved question. See, for example, Time complexity of Euclid's Algorithm
If we pretend, as posited in the accepted answer, that the GCD algorithm is constant time (i.e. O(1)), then the complexity of the loop above is O(n). That's a reasonable assumption for numbers that fit into computer registers. And if that's the case then spending O(n log n) time to sort the array would almost certainly be a loser.
But in reality the GCD calculation is linear in the number of digits in the two numbers. If your input data consists of lots of large numbers, it's possible that sorting the array first will give you an advantage. The reasoning is that the result of gcd(a, b) will by definition give you a number that's no larger than min(a,b). So by getting the GCD of the two smallest numbers first, you limit the number of digits you have to deal with. Whether that limiting will overcome the cost of sorting the array is unclear.
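To make that concrete, here's a minimal sketch of the sort-first variant in Python (the function name is mine):

from functools import reduce
from math import gcd

def gcd_of_array_sorted_first(a):
    # Sort ascending so the running GCD is computed from the smallest
    # values first, which keeps the intermediate results (and their
    # digit counts) small.
    return reduce(gcd, sorted(a))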
If the numbers are larger than will fit into a computer register (hundreds of digits), then the GCD calculation is more expensive. But then again, so is sorting.
So the answer to your question is that sorting will almost certainly speed up the GCD computation itself, but whether that improvement will offset the cost of the sort is unclear.
I think the only way you'll know for sure is to test it with representative data.
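As a rough starting point for such a test (the data generation below is just a placeholder; swap in your real inputs):

from functools import reduce
from math import gcd
from random import getrandbits
from timeit import timeit

data = [getrandbits(2048) for _ in range(10_000)]    # stand-in for representative data

plain_time  = timeit(lambda: reduce(gcd, data), number=10)
sorted_time = timeit(lambda: reduce(gcd, sorted(data)), number=10)   # includes the cost of sorting
print(plain_time, sorted_time)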
Related
I'm learning a course about big O notation on Coursera. I watched a video about the big O of a Fibonacci algorithm (non-recursion method), which is like this:
Operation                         Runtime
create an array F[0..n]           O(n)
F[0] <-- 0                        O(1)
F[1] <-- 1                        O(1)
for i from 2 to n:                loop runs O(n) times
    F[i] <-- F[i-1] + F[i-2]      O(n)   (this is the line I don't understand; isn't it O(1)?)
return F[n]                       O(1)

Total: O(n) + O(1) + O(1) + O(n)*O(n) + O(1) = O(n^2)
I understand every part except F[i] <-- F[i-1] + F[i-2], which is listed as O(n). Isn't it O(1), since it's just a simple addition? Is it the same as F[i] <-- 1 + 1?
The explanation they give me is: "But the addition is a bit worse. And normally additions are constant time. But these are large numbers. Remember, the nth Fibonacci number has about n over 5 digits to it, they're very big, and they often won't fit in the machine word."
"Now if you think about what happens if you add two very big numbers together, how long does that take? Well, you sort of add the tens digit and you carry, and you add the hundreds digit and you carry, and add the thousands digit, you carry and so on and so forth. And you sort of have to do work for each digits place.
And so the amount of work that you do should be proportional to the number of digits. And in this case, the number of digits is proportional to n, so this should take O(n) time to run that line of code".
I'm still a bit confused. Does it mean a large number affects time complexity too? For example, a = n+1 is O(1), while a = n^50 + n^50 isn't O(1) anymore?
Video link for anyone who needs more information (4:56 to 6:26)
Big-O is just a notation for keeping track of orders of magnitude. But when we apply that in algorithms, we have to remember "orders of magnitude of WHAT"? In this case it is "time spent".
CPUs are set up to execute basic arithmetic on basic arithmetic types in constant time. For most purposes, we can assume we are dealing with those basic types.
However, if n is a very large positive integer, we can't assume that. A very large integer needs O(log(n)) bits to represent, so whether we store it as bits, bytes, or words, we need an array of O(log(n)) pieces to hold it. (We would need fewer bytes than bits, but that is just a constant factor.) And when we do a calculation, we have to think about what we will actually do with that array.
Now suppose that we're trying to calculate n+m. We're going to need to generate a result of size O(log(n+m)), which takes at least that much time just to allocate. Luckily, the grade-school method of long addition, where you add digits and keep track of carries, adapts directly to big-integer libraries and runs in O(log(n+m)) time.
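To make the digit-by-digit picture concrete, here's a toy sketch of grade-school addition over base-10 digit lists (not how real big-integer libraries are implemented, just an illustration); the loop runs once per digit, so the work is proportional to the number of digits:

def add_digit_lists(x, y):
    # x and y are lists of base-10 digits, least significant digit first.
    result, carry = [], 0
    for i in range(max(len(x), len(y))):
        s = carry + (x[i] if i < len(x) else 0) + (y[i] if i < len(y) else 0)
        result.append(s % 10)
        carry = s // 10
    if carry:
        result.append(carry)
    return result

print(add_digit_lists([7, 5, 1], [6, 8]))   # 157 + 86 = 243 -> [3, 4, 2]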
So when you're looking at addition, the log of the size of the answer is what matters. Since log(50^n) = n * log(50), operations on numbers around 50^n are at least O(n). (Getting 50^n in the first place might take longer...) And it means that calculating n+1 takes time O(log(n)). For your example, log(n^50) = 50 * log(n), so adding two numbers of size n^50 takes O(log(n)) time; still not constant, but much cheaper than O(n).
Now in the case of the Fibonacci sequence, F(n) is roughly φ^n where φ = (1 + sqrt(5))/2 so log(F(n)) = O(n).
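You can see that growth directly with Python's arbitrary-precision integers: the number of decimal digits of F(n), and hence the cost of the addition that produces it, grows linearly in n.

F = [0, 1]
for _ in range(2, 501):
    F.append(F[-1] + F[-2])

for n in (100, 200, 300, 400, 500):
    # Number of decimal digits of F(n); it grows roughly like n/5.
    print(n, len(str(F[n])))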
I have read many explanations of amortized analysis and how it differs from average-case analysis. However, I have not found a single explanation that showed how, for a particular example for which both kinds of analysis are sensible, the two would give asymptotically different results.
The most wide-spread example of amortized running time analysis shows that appending an element to a dynamic array takes O(1) amortized time (where the running time of the operation is O(n) if the array's length is an exact power of 2, and O(1) otherwise). I believe that, if we consider all array lengths equally likely, then the average-case analysis will give the same O(1) answer.
So, could you please provide an example to show that amortized analysis and average-case analysis may give asymptotically different results?
Consider a dynamic array supporting push and pop from the end. In this example, the array capacity will double when push is called on a full array and halve when pop leaves the array size 1/2 of the capacity. pop on an empty array does nothing.
Note that this is not how dynamic arrays are "supposed" to work. To maintain O(1) amortized complexity, the array capacity should only halve when the size is alpha times the capacity, for alpha < 1/2.
In the bad dynamic array, neither operation has O(1) amortized complexity: alternating push and pop right at the doubling/halving boundary forces a full resize on every operation, so both operations cost Θ(n) repeatedly.
However, if you consider all sequences of push and pop to be equally likely, both operations have O(1) average time complexity, for two reasons:
First, since the sequence is random, I believe the size of the array will mostly be O(1); the size performs a random walk on the natural numbers.
Second, the array size will only rarely be near a power of 2.
This shows an example where amortized complexity is strictly greater than average complexity.
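Here's a minimal sketch of that bad resizing policy (the class and the copy counter are mine, purely for illustration); alternating push and pop right at the doubling/halving boundary forces a full copy on every single operation:

class BadDynamicArray:
    def __init__(self):
        self.capacity = 1
        self.data = [None]
        self.size = 0
        self.copies = 0                      # count element copies to expose the cost

    def _resize(self, new_capacity):
        new_data = [None] * new_capacity
        for i in range(self.size):
            new_data[i] = self.data[i]
            self.copies += 1
        self.data, self.capacity = new_data, new_capacity

    def push(self, x):
        if self.size == self.capacity:       # full: double the capacity
            self._resize(2 * self.capacity)
        self.data[self.size] = x
        self.size += 1

    def pop(self):
        if self.size == 0:
            return None
        self.size -= 1
        x = self.data[self.size]
        if self.capacity > 1 and self.size == self.capacity // 2:   # halve at exactly 1/2
            self._resize(self.capacity // 2)
        return x

arr = BadDynamicArray()
for i in range(1024):
    arr.push(i)                              # ends exactly full: size == capacity == 1024
before = arr.copies
for _ in range(1000):
    arr.push(0)                              # doubles: copies 1024 elements
    arr.pop()                                # halves:  copies 1024 elements
print(arr.copies - before)                   # about 2 * 1000 * 1024, i.e. Theta(n) per operation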
They never have asymptotically different results. Average-case means that weird data might not trigger the average case and might be slower; amortized analysis means that even weird data will have the same performance. But on average they'll always have the same complexity.
Where they differ is the worst-case analysis. For algorithms where slowdowns come every few items regardless of their values, the worst case and the average case are the same, and we often call this amortized analysis. For algorithms that can have slowdowns based on the data itself, the worst case and average case are different, and we do not call either amortized.
In "Pairing Heaps with Costless Meld", the author gives a priority queue with O(0) time per meld. Obviously, the average time per meld is greater than that.
Consider any data structure whose inserts always take I time and whose removes always take R time (so worst case and best case coincide). Now use the physicist's argument and give the structure a potential of nR, where n is the number of values in the structure. Each insert increases the potential by R, so the total amortized cost of an insert is I+R. However, each remove decreases the potential by R. Thus, each removal has an amortized cost of R-R=0!
The average cost is R; the amortized cost is 0; these are different.
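Spelling out that arithmetic with the usual potential-method definition, taking the potential to be Φ = nR where n is the number of stored values:

amortized(op)     = actual(op) + Φ_after - Φ_before
amortized(insert) = I + (n+1)R - nR = I + R
amortized(remove) = R + (n-1)R - nR = 0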
Given an array A of size n of real numbers. It consists of n/log n sorted sequences (each sequence of size log n).
Prove that it's not possible to sort the array A in time complexity of o(n log n) (little-o notation) in the worst case.
Make the assumption that it's possible, then contradict it with the lower-bound theorem.
I need help just understanding the question. What I have concluded is that they are asking me to prove that no sorting algorithm can do better than O(n log n) here. Is that right?
This is a doubt I keep having. For example, I have a 2-d array of size n^2 (n being the number of rows and columns). Suppose I want to print all the elements of the 2-d array. When I calculate the time complexity of the algorithm with respect to n, it's O(n^2). But if I calculate the time with respect to the input size (n^2), it's linear. Are both these calculations correct? If so, why do people only use O(n^2) everywhere regarding 2-d arrays?
That is not how time complexity works. You cannot do "simple math" like that.
A two-dimensional square array of extent x has n = x*x elements. Printing these n elements takes n operations (or n/m if you print m items at a time), which is O(n). The necessary work increases linearly with the number of elements (which is, incidentally, quadratic with respect to the array extent -- but if you arranged the same number of items in a 4-dimensional array, would it be any different? Obviously not. That doesn't magically make it O(n^4)).
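A tiny sketch of that point (x is the extent, so there are n = x*x prints):

def print_all(grid):
    # One print per element: the work is linear in the number of elements n,
    # which happens to be quadratic in the extent x.
    for row in grid:
        for item in row:
            print(item)

x = 3
grid = [[x * r + c for c in range(x)] for r in range(x)]
print_all(grid)    # exactly x*x = 9 prints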
What you use time complexity for is not stuff like that anyway. What you want time complexity to tell you is an approximate idea of how some particular algorithm may change its behavior if you grow the number of inputs beyond some limit.
So, what you want to know is, if you do XYZ on one million items or on two million items, will it take approximately twice as long, or will it take approximately sixteen times as long, for example.
Time complexity analysis ignores "small details" such as how much time an actual operation takes. That tends to make the whole thing increasingly academic and less useful in practice on modern architectures, because constant factors (memory latency, bus latency, cache misses, faults, access times, etc.) play an ever-increasing role: they stay mostly the same over decades while the actual cost per step (instruction throughput, ALU power, whatever) goes down steadily with every new computer generation.
In practice, it happens quite often that the dumb, linear, brute force approach is faster than a "better" approach with better time complexity simply because the constant factor dominates everything.
The task to complete: find gcd(a, b) for integers a > b > 0.
Consider an algorithm that checks all of the numbers up to b and keeps track of the max number that divides a and b. It would use the % operator twice per check (for a and b). What would the complexity of this algorithm be?
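In code, the algorithm I have in mind looks roughly like this (the function name is just for illustration):

def brute_force_gcd(a, b):
    # Check every candidate from 1 to b, keeping the largest one that
    # divides both a and b: two % operations per candidate.
    best = 1
    for d in range(1, b + 1):
        if a % d == 0 and b % d == 0:
            best = d
    return best

print(brute_force_gcd(48, 36))    # 12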
I have not yet taken any formal CS courses in complexity theory (I will soon) so I am just looking for a quick answer.
The modulo operation is implemented in hardware, and it's effectively O(1). Strictly speaking it is not constant; it depends on the number of bits of a and b. However, for operands that fit in a machine word the number of bits is bounded by the word size, so we usually ignore this factor.
The complexity of this brute-force GCD is O(min(a, b)), which is O(b) here since a > b: the loop checks every candidate from 1 to b with a constant number of modulo operations per candidate, so it does Θ(b) work regardless of what the GCD turns out to be.