Time complexity with respect to input

This is a doubt I keep having. For example, I have a 2-d array of size n^2 (n being the number of rows and columns). Suppose I want to print all the elements of the 2-d array. When I calculate the time complexity of the algorithm with respect to n, it's O(n^2). But if I calculate it with respect to the input size (n^2 elements), it's linear. Are both these calculations correct? If so, why do people only use O(n^2) everywhere regarding 2-d arrays?

That is not how time complexity works. You cannot do "simple math" like that.
A two-dimensional square array of extent x has n = x*x elements. Printing these n elements takes n operations (or n/m if you print m items at a time), which is O(n). The necessary work increases linearly with the number of elements (which is, incidentally, quadratic with respect to the array extent -- but if you arranged the same number of items in a 4-dimensional array, would it be any different? Obviously not. That doesn't magically make it O(n^4)).
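As a minimal sketch (in Python; the grid construction and the function name are just for illustration), printing an x-by-x array touches each of its n = x*x elements exactly once, so the work is linear in the number of elements:

def print_grid(grid):
    count = 0
    for row in grid:           # x rows
        for item in row:       # x items per row
            print(item)
            count += 1
    return count               # equals the total number of elements n

x = 3
grid = [[r * x + c for c in range(x)] for r in range(x)]
assert print_grid(grid) == x * x   # O(n) in the number of elements n = x*x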
That is not what you use time complexity for anyway. What you want time complexity to give you is an approximate idea of how a particular algorithm's behavior changes if you grow the number of inputs beyond some limit.
So, what you want to know is, if you do XYZ on one million items or on two million items, will it take approximately twice as long, or will it take approximately sixteen times as long, for example.
Time complexity analysis disregards "small details" such as how much time an actual operation takes. This tends to make the whole exercise more and more academic and practically less useful on modern architectures, because constant factors (such as memory latency, bus latency, cache misses, page faults, access times, etc.) play an ever-increasing role: they stay mostly the same over decades, while the actual cost per step (instruction throughput, ALU power, whatever) goes down steadily with every new computer generation.
In practice, it happens quite often that the dumb, linear, brute force approach is faster than a "better" approach with better time complexity simply because the constant factor dominates everything.

Related

Time Complexity of Algorithms With Addition

I'm learning a course about big O notation on Coursera. I watched a video about the big O of a Fibonacci algorithm (non-recursion method), which is like this:
Operation                          Runtime
create an array F[0..n]            O(n)
F[0] <-- 0                         O(1)
F[1] <-- 1                         O(1)
for i from 2 to n:                 loop runs O(n) times
    F[i] <-- F[i-1] + F[i-2]       O(n)
return F[n]                        O(1)
Total: O(n) + O(1) + O(1) + O(n)*O(n) + O(1) = O(n^2)
I understand every part except F[i] <-- F[i-1] + F[i-2] being O(n). I don't understand why that line isn't O(1), since it's just a simple addition. Isn't it the same as F[i] <-- 1 + 1?
The explanation they give me is: "But the addition is a bit worse. And normally additions are constant time. But these are large numbers. Remember, the nth Fibonacci number has about n over 5 digits to it, they're very big, and they often won't fit in the machine word."
"Now if you think about what happens if you add two very big numbers together, how long does that take? Well, you sort of add the tens digit and you carry, and you add the hundreds digit and you carry, and add the thousands digit, you carry and so on and so forth. And you sort of have to do work for each digits place.
And so the amount of work that you do should be proportional to the number of digits. And in this case, the number of digits is proportional to n, so this should take O(n) time to run that line of code".
I'm still a bit confused. Does it mean that large numbers affect time complexity too? For example, a = n+1 is O(1) while a = n^50 + n^50 isn't O(1) anymore?
Video link for anyone who needs more information (4:56 to 6:26)
Big-O is just a notation for keeping track of orders of magnitude. But when we apply that in algorithms, we have to remember "orders of magnitude of WHAT"? In this case it is "time spent".
CPUs are set up to execute basic arithmetic on basic arithmetic types in constant time. For most purposes, we can assume we are dealing with those basic types.
However if n is a very large positive integer, we can't assume that. A very large integer will need O(log(n)) bits to represent. Which, whether we store it as bits, bytes, etc, will need an array of O(log(n)) things to store. (We would need fewer bytes than bits, but that is just a constant factor.) And when we do a calculation, we have to think about what we will actually do with that array.
Now suppose that we're trying to calculate n+m. We're going to need to generate a result of size O(log(n+m)), which must take at least that much time to allocate. Luckily the grade-school method of long addition, where you add digits and keep track of carrying, can be adapted for big-integer libraries and takes O(log(n+m)) time.
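A rough sketch of that grade-school method (in Python; digits are stored least-significant first, and the function is made up for illustration) shows that the work is one loop step per digit position:

def add_digits(a, b, base=10):
    # a and b are lists of digits, least-significant digit first.
    result, carry = [], 0
    for i in range(max(len(a), len(b))):   # one step per digit position
        s = carry + (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
        result.append(s % base)
        carry = s // base
    if carry:
        result.append(carry)
    return result                          # O(number of digits) steps in total

assert add_digits([9, 9, 9], [7, 2]) == [6, 2, 0, 1]   # 999 + 27 = 1026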
So when you're looking at addition, the log of the size of the answer is what matters. Since log(50^n) = n * log(50) that means that operations with 50^n are at least O(n). (Getting 50^n might take longer...) And it means that calculating n+1 takes time O(log(n)).
Now in the case of the Fibonacci sequence, F(n) is roughly φ^n where φ = (1 + sqrt(5))/2 so log(F(n)) = O(n).
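A quick, informal way to see that with Python's built-in big integers (this snippet is only an illustration, not part of the course material) is to check that the number of decimal digits of F(n) grows roughly linearly with n, so each addition in the loop has O(n) digits to process:

def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b    # this addition works on ever-larger big integers
    return a

# The digit count grows roughly linearly in n (about n/5 decimal digits).
for n in (100, 200, 400, 800):
    print(n, len(str(fib(n))))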

An example to show that amortized analysis and average-case analysis may give asymptotically different results

I have read many explanations of amortized analysis and how it differs from average-case analysis. However, I have not found a single explanation that showed how, for a particular example for which both kinds of analysis are sensible, the two would give asymptotically different results.
The most widespread example of amortized running time analysis shows that appending an element to a dynamic array takes O(1) amortized time (where the running time of the operation is O(n) if the array's length is an exact power of 2, and O(1) otherwise). I believe that, if we consider all array lengths equally likely, then the average-case analysis will give the same O(1) answer.
So, could you please provide an example to show that amortized analysis and average-case analysis may give asymptotically different results?
Consider a dynamic array supporting push and pop from the end. In this example, the array capacity will double when push is called on a full array and halve when pop leaves the array size 1/2 of the capacity. pop on an empty array does nothing.
Note that this is not how dynamic arrays are "supposed" to work. To maintain O(1) amortized complexity, the array capacity should only halve when the size is alpha times the capacity, for alpha < 1/2.
In the bad dynamic array, when considering both operations, neither has O(1) amortized complexity, because alternating between them while the size sits right at the resize threshold (with the capacity near 2x the size) triggers an Ω(n) resize on every operation, over and over.
However, if you consider all sequences of push and pop to be equally likely, both operations have O(1) average time complexity, for two reasons:
First, since the sequences are random, I believe the size of the array will mostly be O(1). This is a random walk on the natural numbers.
Second, the array size will be near a power of 2 only rarely.
This shows an example where amortized complexity is strictly greater than average complexity.
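Here is a rough sketch of that "bad" dynamic array (in Python; the class, the copy counter, and the chosen sizes are made up for illustration), showing how alternating push and pop right at the resize threshold forces a full copy on every single operation:

class BadArray:
    # Doubles the capacity when pushing onto a full array,
    # halves it when a pop leaves size == capacity // 2.
    def __init__(self):
        self.capacity = 1
        self.data = [None] * self.capacity
        self.size = 0
        self.copies = 0                      # elements copied during resizes

    def _resize(self, new_capacity):
        new_data = [None] * new_capacity
        for i in range(self.size):           # the copy dominates the cost
            new_data[i] = self.data[i]
        self.copies += self.size
        self.data, self.capacity = new_data, new_capacity

    def push(self, x):
        if self.size == self.capacity:
            self._resize(2 * self.capacity)
        self.data[self.size] = x
        self.size += 1

    def pop(self):
        if self.size == 0:
            return None
        self.size -= 1
        x = self.data[self.size]
        if self.size == self.capacity // 2 and self.capacity > 1:
            self._resize(self.capacity // 2)
        return x

a = BadArray()
for i in range(1024):                        # fill to exactly the capacity
    a.push(i)
a.copies = 0
for _ in range(100):                         # each push and each pop now copies ~1024 items
    a.push(0)
    a.pop()
print(a.copies)                              # 204800 element copies for 200 operations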
They never have asymptotically different results. Average-case means that weird data might not trigger the average case and might be slower. Asymptotic analysis means that even weird data will have the same performance. But on average they'll always have the same complexity.
Where they differ is in worst-case analysis. For algorithms where slowdowns come every few items regardless of their values, the worst-case and the average-case are the same, and we often call this "asymptotic analysis". For algorithms that can have slowdowns based on the data itself, the worst-case and average-case are different, and we do not call either "asymptotic".
In "Pairing Heaps with Costless Meld", the author gives a priority queue with O(0) time per meld. Obviously, the average time per meld is greater than that.
Consider any data structure whose inserts always take I time and whose removes always take R time (best case and worst case alike). Now use the physicist's (potential) argument and give the structure a potential of nR, where n is the number of values in the structure. Each insert increases the potential by R, so the total amortized cost of an insert is I+R. However, each remove decreases the potential by R. Thus, each removal has an amortized cost of R-R=0!
The average cost is R; the amortized cost is 0; these are different.
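To make that concrete (a hypothetical instance, not taken from the paper): take a priority queue stored in an unsorted array that never holds more than N items, so an insert appends in I = O(1) actual time and a remove-min scans in R = O(N) actual time. With a potential of n*R, an insert pays its own I plus the R increase in potential, for O(N) amortized, while a remove-min pays at most R but releases R of potential, for an amortized cost of 0 -- even though its average actual cost stays around Θ(N) whenever the queue is typically near full.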

Amortized time complexity

I was working on a problem where we are supposed to give an example of an algorithm whose time complexity is O(n^2), but whose amortized time complexity is less than that. My immediate thought is nested loops, but I'm not exactly sure of what an example of that would look like where the result was amortized. Any insights would be greatly appreciated!
Consider the Add method on a Vector (resizable array) data structure. Once the current capacity of the array is exceeded, we must increase the capacity by making a larger array and copying stuff over. Typically, you'd just double the capacity in such cases, giving rise to a worst-case O(n) Add but an O(1) amortized Add. Instead of doubling, we're of course free to increase the capacity by squaring it (provided the initial capacity is greater than one). This means that, every now and then, an Add will take O(n^2) time (since the new array that must be created and initialized has capacity n^2); but such an increasingly large majority of them will take O(1) time that the amortized complexity will be O(1) as well.
Combining variations on this idea with the multiplicative effect that putting code into loops has on complexity, it's probably possible to find an example where the worst-case time complexity is O(f) and the amortized complexity is O(g), for any f and g where g is O(f).
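Here is a rough sketch of such a squaring-capacity vector (in Python; the class name and the way the resize cost is charged -- counting the allocation and initialization of the new array -- are assumptions made for illustration):

class SquaringVector:
    # The capacity is squared (instead of doubled) whenever an add overflows it.
    def __init__(self):
        self.capacity = 2
        self.data = [None] * self.capacity
        self.size = 0
        self.work = 0                              # total cost, charging new-array initialization

    def add(self, x):
        if self.size == self.capacity:
            new_capacity = self.capacity * self.capacity
            new_data = [None] * new_capacity       # O(capacity^2) = O(n^2) for this one add
            for i in range(self.size):
                new_data[i] = self.data[i]
            self.work += new_capacity
            self.data, self.capacity = new_data, new_capacity
        self.data[self.size] = x
        self.size += 1
        self.work += 1

v = SquaringVector()
N = 65536                  # keep N <= 65536 here: the next squaring would allocate ~4*10^9 slots
for i in range(N):
    v.add(i)
print(v.work / N)          # about 2.0: amortized O(1) per add, despite the 65536-step add at size 256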

Time complexity of two algorithms running together?

Imagine T1(n) and T2(n) are the running times of programs P1 and P2, and
T1(n) ∈ O(f(n))
T2(n) ∈ O(g(n))
What is T1(n)+T2(n) when P1 is running alongside P2?
The answer is O(max{f(n), g(n)}), but why?
When we think about big-O notation, we generally think about what the algorithm does as the size of the input n gets really big. A lot of the time, we can fall back on some sort of mathematical intuition. Consider two functions, one that is O(n^2) and one that is O(n). As n gets really large, both increase without bound. The difference is that the O(n^2) algorithm grows much, MUCH faster than the O(n) one. So much faster, in fact, that if you combine the algorithms into something that would be O(n^2+n), the n term by itself is so small that it can be ignored, and the combination is still in the class O(n^2).
That's why, when you add together two algorithms, the combined running time is in O(max{f(n), g(n)}). There's always one that 'dominates' the runtime, making the effect of the other negligible.
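For instance, for n >= 1 we have n^2 + n <= n^2 + n^2 = 2*n^2, so anything bounded by O(n^2 + n) is also bounded by O(n^2); the lower-order n term simply disappears into the constant factor.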
The Answer is O(max{f(n), g(n)})
This is only correct if the programs run independently of each other. Anyhow, let's assume this is the case.
In order to answer the why, we need to take a closer look at what big-O notation represents. Contrary to the way you stated it, it does not represent time but an upper bound on the complexity.
So while running both programs might take more time, the upper bound on the complexity won't increase.
Let's consider an example: P_1 calculates the product of all pairs of n numbers in a vector; it is implemented using nested loops and therefore has a complexity of O(n*n). P_2 just prints the numbers in a single loop and therefore has a complexity of O(n).
Now if we run both programs at the same time, the nested loops of P_1 are the most 'complex' part, leaving the combination with a complexity of O(n*n).
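As a minimal sketch of that example (in Python; the function names and step counters are just for illustration):

def p1_pair_products(v):
    steps = 0
    for a in v:                # nested loops over all pairs: O(n*n) steps
        for b in v:
            _ = a * b
            steps += 1
    return steps

def p2_print(v):
    steps = 0
    for x in v:                # single loop: O(n) steps
        print(x)
        steps += 1
    return steps

v = list(range(100))
print(p1_pair_products(v) + p2_print(v))   # 10000 + 100: the O(n*n) part dominates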

How to calculate time complexity O(n) of the algorithm?

What I have done:
I measured the time spent processing 100, 1000, 10000, 100000, 1000000 items.
Measurements here: https://github.com/DimaBond174/cache_single_thread
Then I assumed that O(n) increases in proportion to n, and calculated the remaining algorithms with respect to O(n).
Having time measurements for processing 100, 1000, 10000, 100000, and 1000000 items, how can we now attribute the algorithm to O(1), O(log n), O(n), O(n log n), or O(n^2)?
Let's define N as the size of one of the algorithm's inputs. An algorithm can have different big-O values depending on which input you're referring to, but generally there's only one big input that you care about. Without the algorithm in question, you can only guess. However, there are some guidelines that will help you determine which it is.
General Rule:
O(1) - the speed of the program barely changes regardless of size of data. To get this, a program must not have loops operating on the data in question at all.
O(log N) - the program slows down slightly when N increases dramatically, in a logarithmic curve. To get this, loops must only go through a fraction of the data. (for example, binary search).
O(N) - the program's speed is directly proportional to the size of the data input. If you perform an operation on each unit of the data, you get this. You must not have any kind of nested loops (that act on the data).
O(N log N) - the program's speed is significantly reduced by larger input. This occurs when you have an O(log N) operation NESTED in a loop that would otherwise be O(N). So for example, you had a loop that did a binary search for each unit of data.
O(N^2) - the program will slow down to a crawl with larger input and eventually stall with large enough data. This happens when you have NESTED loops. Same as above, but this time the nested loop is O(N) instead of O(log N).
So, try to think of each looping operation as O(N) or O(log N). Then, whenever you have nesting, multiply them together. If the loops are NOT nested, they are not multiplied like this. So two loops separate from each other would simply be O(2N), which is just O(N), and not O(N^2).
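For instance (a tiny sketch in Python; the variable names are arbitrary), two separate loops do about 2N units of work while nesting them does N*N:

N = 1000
data = list(range(N))

sequential = 0
for x in data:              # O(N)
    sequential += 1
for x in data:              # another O(N); together O(2N) = O(N)
    sequential += 1

nested = 0
for x in data:              # O(N) iterations...
    for y in data:          # ...each doing O(N) work: O(N^2)
        nested += 1

print(sequential, nested)   # 2000 vs 1000000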
Also remember that you may have loops under the hood, so you should think about them too. For example, if you did something like Arrays.sort(X) in Java, that would be an O(N log N) operation. So if you have that inside a loop for some reason, your program is going to be a lot slower than you think.
Hope that answers your question.
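One rough, non-rigorous way to attribute such measurements to one of those classes (a sketch in Python; the sample numbers below are made up and should be replaced with your own timings) is to check, for each candidate growth function g, how stable the ratio time / g(N) stays across the measurements:

import math

# Replace these with your measured (N, seconds) pairs.
measurements = [(100, 0.01), (1000, 0.11), (10000, 1.0), (100000, 10.5), (1000000, 98.0)]

candidates = {
    "O(1)":       lambda n: 1.0,
    "O(log n)":   lambda n: math.log(n),
    "O(n)":       lambda n: n,
    "O(n log n)": lambda n: n * math.log(n),
    "O(n^2)":     lambda n: n * n,
}

def best_fit(measurements, candidates):
    # If the algorithm really is O(g(N)), then time / g(N) should stay roughly
    # constant; pick the candidate whose ratios vary the least.
    best_name, best_spread = None, float("inf")
    for name, g in candidates.items():
        ratios = [t / g(n) for n, t in measurements]
        spread = max(ratios) / min(ratios)
        if spread < best_spread:
            best_name, best_spread = name, spread
    return best_name

print(best_fit(measurements, candidates))   # "O(n)" for the sample data above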