Complexity Analysis Interpreter - time-complexity

Is it possible to write a program that determines the worst case running time of an algorithm in big-O notation? One that would, for example, be able to determine that a single for loop iterating over every element in an array has a time complexity of O(n), where n is the size of the array?

Do you mean a program X that takes as input a program Y and outputs Y's time complexity?
The answer is simple: it is not possible. In fact, it is not even possible to decide whether Y will terminate at all; see the famous Halting Problem.
That said, there is a lot you can do to automate the process of finding the time complexity of an algorithm, and such programs can be very useful. But they cannot guarantee a correct answer for every possible algorithm.
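To see why a perfectly general analyzer cannot exist, here is a minimal sketch of the standard reduction in Python. Every name in it (worst_case_complexity, would_decide_halting, run_program) is hypothetical and made up for illustration; the stub exists only to keep the sketch self-contained.

    def worst_case_complexity(source: str) -> str:
        """Hypothetical oracle: returns a big-O bound for the given program,
        or the string 'does not terminate'. No total, always-correct version
        of this function can exist."""
        raise NotImplementedError("no such analyzer can be written")

    def would_decide_halting(program_source: str, program_input: str) -> bool:
        # Build a wrapper that ignores its own input and simply runs the
        # target program on the fixed input (run_program is a stand-in name).
        # If the target halts, the wrapper's running time is a constant,
        # independent of the wrapper's input size; otherwise it never stops.
        wrapper_source = f"run_program({program_source!r}, {program_input!r})"
        # A finite bound for the wrapper would therefore mean "the target
        # halts on this input", so the oracle would decide the Halting
        # Problem -- a contradiction.
        return worst_case_complexity(wrapper_source) != "does not terminate"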

Related

When calculating the time complexity of an algorithm, can we count the addition of two numbers of any size as requiring 1 "unit" of time, or O(1) units?

I am working on analysing the time complexity of an algorithm. I am not certain of the correct way to account for the time complexity of basic operations such as adding and subtracting two numbers. I have learnt that the time complexity of adding two n-digit numbers is O(n), because that is how many elementary bit operations the addition requires. However, I have recently heard that on modern processors the time taken to add two numbers of any size (that is still manageable by a computer) is constant: it does not depend on the size of the two numbers. Hence, in the time complexity analysis of an algorithm, you should count adding two numbers of any size as O(1). Which approach is correct? Or, if both approaches are "correct" in the appropriate context, which one is more acceptable in a research paper? Thank you in advance for any answer.
It depends on the kind of algorithm you are analyzing, but in the general case you just assume that the inputs to the algorithm will fit into the word size of the machine it runs on (be that 32 bits, 128 bits, whatever). Under that assumption, any single arithmetic operation will probably be executed as a single machine instruction and complete in one or a small constant number of CPU clock cycles, regardless of the underlying complexity of the hardware implementation, so you treat the operation as O(1). That is, assume O(1) for arithmetic operations unless there is a particular reason to believe they cannot be handled in constant time.
You would really only drop the O(1) assumption in two situations. The first is when you are designing an algorithm for numerical inputs of arbitrary precision, so that you plan to compute the arithmetic operations programmatically yourself rather than hand them off entirely to the hardware (your algorithm expects overflows/underflows and is designed to handle them). The second is when you are working down at the level of implementing these operations yourself in an ALU or FPU circuit. Only then does it become relevant to your complexity analysis whether multiplication runs in O(n log n) or O(n log n log log n) time in the number of bits, because the number of bits involved is no longer bounded by a constant, or because you are specifically analyzing the complexity of an algorithm or piece of hardware that itself implements an arithmetic operation.
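If it helps to see the two cost models side by side, here is a rough timing sketch in Python, whose built-in integers have arbitrary precision, so the cost of a + b grows with the number of digits. The digit counts and the repetition count are arbitrary choices for illustration.

    import random
    import time

    # Python ints have arbitrary precision, so adding two n-digit numbers
    # costs roughly O(n) in the number of digits. For operands that fit in a
    # machine word, an analysis would normally charge O(1) per addition.
    random.seed(0)
    for digits in (1_000, 10_000, 100_000, 1_000_000):
        a = random.randrange(10 ** (digits - 1), 10 ** digits)
        b = random.randrange(10 ** (digits - 1), 10 ** digits)
        start = time.perf_counter()
        for _ in range(1_000):
            _ = a + b
        elapsed = time.perf_counter() - start
        print(f"{digits:>9}-digit operands: {elapsed:.4f}s for 1000 additions")

The timings should grow roughly linearly with the digit count, which is exactly the O(n) bit-cost model; on word-sized operands the same loop would take essentially constant time per addition.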

To what extent shall we optimize time complexity?

Theory vs practice here.
Regarding time complexity: I have a conceptual question that we didn't get to explore deeper in class.
Here it is:
There's a barbaric brute-force algorithm, O(n^3)... and we got it down to O(n), and that was considered good enough. If we dive in deeper, it is actually O(n) + O(n): two separate iterations over the input. I came up with another way that is actually O(n/2). But those two algorithms are considered the same, since both are O(n) and, as n approaches infinity, it makes no difference; so further improvement is deemed unnecessary once we reach O(n).
My question is:
In reality, in practice, we always have a finite number of inputs (admittedly occasionally in the trillions). So following the time complexity logic, O(n/2) is four times as fast as O(2n). So if we can make it faster, why not?
Time complexity is not everything. As you already noticed, Big-O can hide a lot, and it also assumes that all operations cost the same.
In practice you should always try to find a fast (ideally the fastest) solution to your problem. Sometimes that means using an algorithm with a worse complexity but good constants, if you know your problem instances are always small. Depending on your use case, you may also want to implement optimizations that exploit hardware properties, such as cache-friendly memory access.
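To illustrate the constant-factor point from the question (an O(n) + O(n) two-pass version versus a single pass over the same data), here is a hedged sketch; both function names are made up for the example.

    from typing import Sequence, Tuple

    def min_max_two_passes(xs: Sequence[float]) -> Tuple[float, float]:
        # Two separate O(n) passes: O(n) + O(n), still O(n) overall.
        return min(xs), max(xs)

    def min_max_one_pass(xs: Sequence[float]) -> Tuple[float, float]:
        # A single O(n) pass that tracks both values at once.
        lo = hi = xs[0]
        for x in xs[1:]:
            if x < lo:
                lo = x
            elif x > hi:
                hi = x
        return lo, hi

Both functions are Theta(n), yet their real running times differ only by constants. In CPython the two-pass version can even be the faster one, because min and max iterate in C while the hand-written loop pays interpreter overhead per element -- exactly the kind of hidden constant the answer is talking about, and a reason to measure before assuming.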

How to analyze the complexity of a program whose behavior might change in different situations?

When I came across this question about using two stacks to implement a queue, I wondered how to analyze its complexity.
Take this as an example:
For queue(), the complexity is always O(1), as it simply pushes onto the inbox.
For dequeue(), most of the time the complexity is also O(1), but when the outbox is empty, it needs a loop to move all elements from the inbox to the outbox. So what is the complexity of that operation?
What is the general idea when analyzing this kind of problem?
As Dave L. states in his explanation, "each element will be pushed twice and popped twice, giving amortized constant time operations". This is because each dequeue that has to move n elements from one stack to the other (taking O(n) time) is followed by n-1 dequeues that take only O(1) time.
So one way to express the complexity of dequeue() is to say that it runs in amortized constant time, with a best case of O(1) and a worst case of O(n).
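For reference, here is a minimal sketch of the two-stack queue under discussion; the method names mirror the question's queue()/dequeue(), and the empty-queue error handling is my own assumption.

    class TwoStackQueue:
        def __init__(self):
            self.inbox = []    # receives newly added elements
            self.outbox = []   # serves elements in FIFO order

        def queue(self, item):
            # Always O(1): just push onto the inbox.
            self.inbox.append(item)

        def dequeue(self):
            # Usually O(1); O(n) only when the outbox has to be refilled.
            if not self.outbox:
                while self.inbox:
                    self.outbox.append(self.inbox.pop())
            if not self.outbox:
                raise IndexError("dequeue from empty queue")
            return self.outbox.pop()

    q = TwoStackQueue()
    for i in range(3):
        q.queue(i)
    print(q.dequeue(), q.dequeue(), q.dequeue())   # prints: 0 1 2

Each element is appended and popped at most twice in total (once per stack), so any sequence of m operations costs O(m), i.e. amortized O(1) per operation.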

Is there any time complexity difference between recursive and iterative approach?

I am aware that there is a space complexity difference between a recursive and an iterative algorithm. But do we also have time complexity differences between them?
For example: if I have a program that counts the number of nodes in a list recursively, and then I implement the same program iteratively, will there be any difference in its time complexity, i.e. O(n)? Thank you.
Short answer: no.
Unless you optimize the algorithm itself, for example with memoization or dynamic programming, there is no change to the time complexity; the recursive and iterative versions do the same asymptotic amount of work.
Space is where they can differ: in many programming languages there is an inherent overhead to recursion, since every call stores a stack frame, so the recursive version uses extra memory and can be slower in practice, especially if it is not tail recursion.
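To make the comparison concrete, here is a sketch of counting the nodes of a singly linked list both ways; the Node class and function names are my own. Both versions are O(n) in time, while the recursive one also uses O(n) call-stack frames in Python, which does not eliminate tail calls.

    from typing import Optional

    class Node:
        def __init__(self, value, next_node: "Optional[Node]" = None):
            self.value = value
            self.next = next_node

    def count_recursive(node: Optional[Node]) -> int:
        # O(n) time, O(n) call-stack space (one frame per node).
        if node is None:
            return 0
        return 1 + count_recursive(node.next)

    def count_iterative(node: Optional[Node]) -> int:
        # O(n) time, O(1) extra space.
        count = 0
        while node is not None:
            count += 1
            node = node.next
        return count

    head = Node(1, Node(2, Node(3)))
    assert count_recursive(head) == count_iterative(head) == 3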

Need Help Studying Running Times

At the moment, I'm studying for a final exam for a Computer Science course. One of the questions that will be asked is most likely a question on how to combine running times, so I'll give an example.
I was wondering, if I created a program that preprocessed inputs using Insertion Sort, and then searched for a value "X" using Binary Search, how would I combine the running times to find the best, worst, and average case time complexities of the over-all program?
For example...
Insertion Sort
Worst Case O(n^2)
Best Case O(n)
Average Case O(n^2)
Binary Search
Worst Case O(log n)
Best Case O(1)
Average Case O(log n)
Would the Worst Case be O(n^2 + log n), or would it be O(n^2), or neither?
Would the Best Case be O(n)?
Would the Average Case be O(n log n), O(n + log n), O(log n), O(n^2 + log n), or none of these?
I tend to over-think solutions, so if I can get any guidance on combining running times, it would be much appreciated.
Thank you very much.
You usually don't "combine" (as in add) the running times to determine the overall efficiency class; rather, you take the one that takes the longest for each of the worst, average, and best cases. (Formally the costs of the sequential steps do add, but the dominant term decides the class.)
So if you perform insertion sort and then do a binary search to find an element X in the array, the worst case is O(n^2) and the best case is O(n), both coming from insertion sort, since it takes the longest.
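To make the composition concrete (my own illustration, not part of the answer above): the costs of the two sequential phases add, and the dominant term determines the class, so the worst case is O(n^2 + log n) = O(n^2) and the best case is O(n + 1) = O(n).

    from typing import List, Optional

    def insertion_sort(a: List[int]) -> None:
        # Worst/average case O(n^2); best case O(n) on already-sorted input.
        for i in range(1, len(a)):
            key = a[i]
            j = i - 1
            while j >= 0 and a[j] > key:
                a[j + 1] = a[j]
                j -= 1
            a[j + 1] = key

    def binary_search(a: List[int], x: int) -> Optional[int]:
        # Worst/average case O(log n); best case O(1).
        lo, hi = 0, len(a) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            if a[mid] == x:
                return mid
            if a[mid] < x:
                lo = mid + 1
            else:
                hi = mid - 1
        return None

    def preprocess_and_find(a: List[int], x: int) -> Optional[int]:
        # Sequential composition: the sort dominates in every case.
        insertion_sort(a)
        return binary_search(a, x)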
Based on my limited study (we haven't reached amortization yet, so that might be where Jim has the rest covered), you basically go by whichever part of the overall algorithm is slowest.
This seems to be a good book on the subject of algorithms (I don't have much to compare it to):
http://www.amazon.com/Introduction-Algorithms-Third-Thomas-Cormen/dp/0262033844/ref=sr_1_1?ie=UTF8&qid=1303528736&sr=8-1
MIT also has a full course on algorithms on their site; here is the link for that too:
http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-046j-introduction-to-algorithms-sma-5503-fall-2005/
I've actually found it helpful. It might not answer your question specifically, but I think seeing some of the topics explained a few times will help you feel more confident.