Complexity Analysis: how to identify the "basic operation"? - time-complexity

I am taking a class on complexity analysis and we try to determine the basic operations of algorithms.
We defined it as the following:
A basic operation is one that best characterises the efficiency of the
particular algorithm of interest
For time analysis it is the operation that we expect to have the most
influence on the algorithm’s total running time:
- Key comparisons in a searching algorithm
- Numeric multiplications in a matrix multiplication algorithm
- Visits to nodes (or arcs) in a graph traversal algorithm
For space analysis it is an operation that increases memory usage
- A procedure call that adds a new frame to the run-time stack
- Creation of a new object or data structure in the run-time heap
The basic operation may occur in more than one place in the algorithm.
So I'm trying to figure out the basic operation of the ReverseArray Algorithm.
ReverseArray(A[0..n-1])
    for i = 0 to floor(n/2) - 1 do
        temp <- A[i]
        A[i] <- A[n-1-i]
        A[n-1-i] <- temp
My tutor mentioned that a basic operation is a "kind of operation", like assignment, addition, or division, and that in the case of this algorithm I could choose either assignment or subtraction.
Now I have an exercise asking about the basic operation of the given algorithm. Is it then correct to say that the basic operation is "assignment" and then list all 3 lines of code inside the for loop?
In my opinion it could be subtraction too, because there are 4 of them in the loop body.
I'm not really sure if "basic operation" is a commonly recognized term or just an expression my lecturer chose.

You can take any operation (assignment, reading array access, subtraction) as the basic operation. All would lead to the same result:
Assignment: 3 * n/2 -> O(n)
Reading access: 2 * n/2 -> O(n)
Complete for-block: n/2 -> O(n)
It would make no difference in your example. Here is a contrived example (not optimized code) where it does make a difference:
for i = 1 to n do
    x = a[i]
    for j = 1 to n do
        b[j] += x
Obviously, the reading accesses to array a take O(n) steps, whereas the number of writing operations or additions is O(n^2).
The basic operation is the operation on the basis of which you have calculated the complexity. It can be any operation in your code, but different choices can lead to different results, as shown in the example.
For this reason, one often sees phrases like:
The code needs O(n) multiplications and O(n^2) additions.
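To make the difference concrete, here is a small Python sketch (mine, not from the answer) that instruments the nested-loop example above by counting reads of array a separately from additions into b:
def count_operations(n):
    a = list(range(n))
    b = [0] * n
    reads_of_a = 0
    additions = 0
    for i in range(n):
        x = a[i]
        reads_of_a += 1        # one read of a per outer iteration -> n in total
        for j in range(n):
            b[j] += x
            additions += 1     # one addition per inner iteration -> n^2 in total
    return reads_of_a, additions

print(count_operations(100))   # (100, 10000): O(n) reads but O(n^2) additions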

Related

What does n mean in big-oh complexity?

In Big-Oh notation, what does n mean? I've seen input size and length of a vector. If it's input size, does it mean memory space on the computer? I often see n used interchangeably with input size.
Examples of Big-Oh,
O(n) is linear running time
O(logn) is logarithmic running time.
A code complexity analysis example (I'm changing the input n to m):
def factorial(m):
    product = 1
    for i in range(1, m+1):
        product = product*i
    return product
This is O(n). What does n mean? Is it how much memory it takes? Maybe n means the number of elements in a vector? Then how do you explain it when n=3, a single number?
When somebody says O(n), the n can refer to different things depending on context. When it isn't obvious what n refers to, people ideally point it out explicitly, but several conventions exist:
When the name of the variable(s) used in the O-notation also exist in the code, they almost certainly refer to the value of the variable with that name (if they refer to anything else, that should be pointed out explicitly). So in your original example where you had a variable named n, O(n) would refer to that variable.
When the code does not contain a variable named n and n is the only variable used in the O notation, n usually refers to the total size of the input.
When multiple variables are used, starting with n and then continuing the alphabet (e.g. O(n*m)), n usually refers to the size of the first parameter, m the second and so on. However, in my opinion, it's often clearer to use something like | | or len( ) around the actual parameter names instead (e.g. O(|l1| * |l2|) or O(len(l1) * len(l2)) if your parameters are called l1 and l2).
In the context of graph problems v is usually used to refer to the number of vertices and e to the number of edges.
In all other cases (and also in some of the above cases if there is any ambiguity), it should be explicitly mentioned what the variables mean.
In your original code you had a variable named n, so the statement "This is O(n)" almost certainly referred to the value of the parameter n. If we further assume that we're only counting the number of multiplications or the number of times the loop body executes (or we measure the time and pretend that multiplication takes constant time), that statement is correct.
In your edited code, there is no longer a variable named n. So now the statement "This is O(n)" must refer to something else. Usually one would then assume that it refers to the size of the input (which would be the number of bits in m, i.e. log m). But then the statement is blatantly false (it'd be O(2^n), not O(n)), so the original statement clearly referred to the value of n and you broke it by editing the code.
n usually means the amount of input data.
For example, take an array of 10 elements. To iterate all elements you will need ten iterations. n is 10 in this case.
In your example, n is also a value that describes the size of the input data. As you can see, your factorial implementation requires about n iterations, so the asymptotic complexity of this implementation is around O(n) (I omitted the constant offset since it doesn't change the picture much). If you increase the value passed to your function, it will require more iterations to compute the result.
O(1) describes an algorithm that will always execute in the same time (or space) regardless of the size of the input data set.
O(N) describes an algorithm whose performance will grow linearly and in direct proportion to the size of the input data set.
O(N^2) represents an algorithm whose performance is directly proportional to the square of the size of the input data set. This is common with algorithms that involve nested iterations over the data set.
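A minimal sketch (my own examples, not from the answer) of what those three growth rates look like in code:
def first_element(data):        # O(1): one step regardless of input size
    return data[0]

def contains(data, target):     # O(N): in the worst case every element is checked
    for item in data:
        if item == target:
            return True
    return False

def has_duplicate(data):        # O(N^2): nested iteration over the data set
    for i in range(len(data)):
        for j in range(i + 1, len(data)):
            if data[i] == data[j]:
                return True
    return False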
I hope this helps.

Time complexity for a divide and conquer algorithm that creates two uneven subproblems.

I am working with a very specific divide and conquer algorithm that always divides a problem with n elements into two subproblems with n/2 - 1 and n/2 + 1 elements.
I am pretty sure the time complexity remains O(n log n), but I wonder how I could formally prove it.
Take the "useful work done" at each recursion level to be some function f(n):
Let's observe what happens when we repeatedly substitute this back into itself.
T(n) terms:
Spot the pattern?
At recursion depth m:
There are recursive calls to T
The first term in each parameter for T is
The second term ranges from to , in steps of
Thus the sum of all T-terms at each level is given by:
f(n) terms:
Look familiar?
The f(n) terms are exactly one recursion level behind the T(n) terms. Therefore adapting the previous expression, we arrive at the following sum:
However note that we only start with one f-term, so this sum has an invalid edge case. However this is simple to rectify - the special-case result for m = 1 is simply f(n).
Combining the above, and summing the f terms for each recursion level, we arrive at the (almost) final expression for T(n):
We next need to find when the first summation for T-terms terminates. Let's assume that is when n ≤ c.
The last call to terminate intuitively has the largest argument, i.e the call to:
Therefore the final expression is given by:
Back to the original problem, what is f(n)?
You haven't stated what this is, so I can only assume that the amount of work done per call is ϴ(n) (proportional to the array length). Thus:
Your hypothesis was correct.
Note that even if we had something more general like
Where a is some constant not equal to 1, we would still have ϴ(n log n) as the result, since the terms in the above equation cancel out:
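If you want a numerical sanity check of this result, here is a small Python sketch (my addition; it assumes a base case of T(n) = 1 for n ≤ 2 and f(n) = n). The ratio T(n) / (n log2 n) should level off near a constant if T(n) really is Θ(n log n):
import math

def T(n):
    if n <= 2:
        return 1                               # constant work at the base case
    return T(n // 2 - 1) + T(n // 2 + 1) + n   # two uneven subproblems plus f(n) = n

for n in [2**10, 2**14, 2**18]:
    print(n, round(T(n) / (n * math.log2(n)), 3))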

Amortized complexity of a balanced binary search tree

I'm not entirely sure what amortized complexity means. Take a balanced binary search tree data structure (e.g. a red-black tree). The cost of a normal search is naturally log(N) where N is the number of nodes. But what is the amortized complexity of a sequence of m searches in, let's say, ascending order? Is it just log(N)/m?
Well, you can consider asymptotic analysis as a strict method to set an upper bound for the running time of an algorithm, whereas amortized analysis is a somewhat more liberal method.
For example consider an algorithm A with two statements S1 and S2. The cost of executing S1 is 10 and S2 is 100. Both the statements are placed inside a loop as follows.
n = 0;
while (n < 100)
{
    if (n % 10 != 0)
    {
        S1;
    }
    else
    {
        S2;
    }
    n++;
}
Here S1 executes 90 times and S2 only 10 times (once every tenth iteration). A plain worst-case analysis only considers that the most expensive statement, S2, costs 100 units and that the loop runs 100 times, so it bounds the running time by 100 * 100 = 10,000. Amortized analysis instead averages over how often each statement actually executes: 90 * 10 + 10 * 100 = 1,900, i.e. about 19 units per iteration. Thus amortized analysis gives a better estimate of the upper limit for executing the algorithm.
I think it is m*log(N), because you have to do m search operations (each time from the root node down to the target node), while the complexity of a single operation is log(N).
EDIT: #user1377000 you are right, I mistook amortized complexity for asymptotic complexity. But I don't think it is log(N)/m... because it is not guaranteed that you can finish all m search operations in O(log N) time.
What is amortized analysis of algorithms?
I think this might help.
In the case of a balanced search tree the amortized complexity is equal to the asymptotic one. Each search operation takes O(log n) time, both asymptotically and on average. Therefore for m searches the average complexity will be O(m log n).
Pass in the items to be found all at once.
You can think of it in terms of divide-and-conquer.
Take the item x in the root node.
Binary-search for x into your array of m items.
Partition the array into things less than x and greater than x. (Ignore things equal to x, since you already found it.)
Recursively search for the former partition in your left child, and for the latter in your right child.
One worst case: your array of items is just the list of things in the leaf nodes. (n is roughly 2m.) You'd have to visit every node. Your search would cost lg(n) + 2*lg(n/2) + 4*lg(n/4) + .... That's linear. Think of it as doing smaller and smaller binary searches until you hit every element in the array once or twice.
I think there's also a way to do it by keeping track of where you are in the tree after a search. C++'s std::map and std::set return iterators which can move left and right within the tree, and they might have methods which can take advantage of an existing iterator into the tree.
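For what it's worth, here is a rough Python sketch of the batched divide-and-conquer search described above (my own code; it assumes nodes expose key, left and right, and that the m items are given in sorted order so they can be binary-searched and split around each key):
import bisect

class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def batched_search(node, items, found):
    """Append every value from the sorted list `items` that occurs in the tree."""
    if node is None or not items:
        return
    # Binary-search for the root's key and split the items around it.
    lo = bisect.bisect_left(items, node.key)
    hi = bisect.bisect_right(items, node.key)
    if lo != hi:                                     # node.key is one of the sought items
        found.append(node.key)
    batched_search(node.left, items[:lo], found)     # items smaller than the key
    batched_search(node.right, items[hi:], found)    # items larger than the key

# Example: a small balanced tree containing 1..7
root = Node(4, Node(2, Node(1), Node(3)), Node(6, Node(5), Node(7)))
found = []
batched_search(root, [2, 3, 8, 9], found)
print(found)   # [2, 3] (order follows the tree traversal)
The list slicing copies items at each level; a real implementation would pass index ranges instead, but the recursion structure is the point here.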

Time Complexity confusion

I've always been a bit confused on this, possibly due to my lack of understanding of compilers. But let's use Python as an example. If we had some large list of numbers called numlist and wanted to get rid of any duplicates, we could call set() on the list, e.g. set(numlist). In return we would have a set of our numbers. This operation, to the best of my knowledge, will be done in O(n) time. Though if I were to create my own algorithm to handle this operation, the absolute best I could ever hope for is O(n^2).
What I don't get is: what allows an internal operation like set() to be so much faster than an algorithm external to the language? The checking still needs to be done, doesn't it?
You can do this in Θ(n) average time using a hash table. Lookup and insertion in a hash table are Θ(1) on average. Thus, you just run through the n items, and for each one check whether it is already in the hash table, inserting it if not.
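In Python that approach is essentially the following sketch (order-preserving, unlike set(), but the same Θ(n) idea):
def remove_duplicates(numlist):
    seen = set()          # hash table: membership test and insert are O(1) on average
    result = []
    for item in numlist:
        if item not in seen:
            seen.add(item)
            result.append(item)
    return result

print(remove_duplicates([3, 1, 3, 2, 1]))   # [3, 1, 2]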
What I don't get is: what allows an internal operation like set() to be so much faster than an algorithm external to the language? The checking still needs to be done, doesn't it?
The asymptotic complexity of an algorithm does not change if it is implemented by the language implementers versus being implemented by a user of the language. As long as both are implemented in a Turing-complete language with a random-access memory model, they have the same capabilities, and algorithms implemented in each will have the same asymptotic complexity. If an algorithm is theoretically O(f(n)), it does not matter whether it is implemented in assembly language, C#, or Python: it will still be O(f(n)).
You can do this in O(n) in any language, basically as:
# Get min and max values, O(n).
min = oldList[0]
max = oldList[0]
for i = 1 to oldList.size() - 1:
    if oldList[i] < min:
        min = oldList[i]
    if oldList[i] > max:
        max = oldList[i]

# Initialise boolean list, O(n).
isInList = new boolean[max - min + 1]
for i = min to max:
    isInList[i - min] = false

# Change booleans for values in old list, O(n).
for i = 0 to oldList.size() - 1:
    isInList[oldList[i] - min] = true

# Create new list from booleans, O(n) (or O(1) based on integer range).
newList = []
for i = min to max:
    if isInList[i - min]:
        newList.append(i)
I'm assuming here that append is an O(1) operation, which it should be unless the implementer was brain-dead. So with k steps each O(n), you still have an O(n) operation.
Whether the steps are explicitly done in your code or whether they're done under the covers of a language is irrelevant. Otherwise you could claim that the C qsort was one operation and you now have the holy grail of an O(1) sort routine :-)
As many people have discovered, you can often trade off space complexity for time complexity. For example, the above only works because we're allowed to introduce the isInList and newList variables. If this were not allowed, the next best solution may be sorting the list (probably no better than O(n log n)) followed by an O(n) (I think) operation to remove the duplicates.
An extreme example: you can use that same extra-space method to sort an arbitrary number of 32-bit integers (say with each value occurring 255 times or fewer) in O(n) time, provided you can allocate about four billion bytes for storing the counts.
Simply initialise all the counts to zero and run through each position in your list, incrementing the count based on the number at that position. That's O(n).
Then start at the beginning of the list and run through the count array, placing that many of the correct value in the list. That's O(1), with the 1 being about four billion of course but still constant time :-)
That's also O(1) space complexity but a very big "1". Typically trade-offs aren't quite that severe.
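Here is a small sketch of that counting idea on a much smaller range (values 0..255 instead of the full 32-bit range, so the count array stays tiny); the principle is the same: O(n) to tally, O(range) to write the output:
def counting_sort_bytes(values):            # values assumed to be ints in 0..255
    counts = [0] * 256
    for v in values:                        # O(n): tally each value
        counts[v] += 1
    result = []
    for v in range(256):                    # O(range): emit each value `count` times
        result.extend([v] * counts[v])
    return result

print(counting_sort_bytes([5, 200, 5, 0, 73]))   # [0, 5, 5, 73, 200]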
The complexity bound of an algorithm is completely unrelated to whether it is implemented 'internally' or 'externally'.
Taking a list and turning it into a set through set() is O(n).
This is because set is implemented as a hash set. That means that to check if something is in the set or to add something to the set only takes O(1), constant time. Thus, to make a set from an iterable (like a list for example), you just start with an empty set and add the elements of the iterable one by one. Since there are n elements and each insertion takes O(1), the total time of converting an iterable to a set is O(n).
To understand how the hash implementation works, see the Wikipedia article on hash tables.
Off hand I can't think of how to do this in O(n), but here is the cool thing:
The difference between n^2 and n is so massive that the difference between you implementing it and Python implementing it is tiny compared to the algorithm used to implement it. For large inputs, an O(n^2) implementation is always worse than an O(n) one, even if the O(n^2) one is in C and the O(n) one is in Python. You should never think that kind of difference comes from the fact that you're not writing in a low-level language.
That said, if you want to implement your own, you can do a sort and then remove duplicates: the sort is O(n log n) and removing the duplicates is O(n)...
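A short sketch of that sort-then-scan approach (mine): O(n log n) for the sort, then one O(n) pass that skips values equal to their predecessor:
def dedup_by_sorting(numlist):
    result = []
    for value in sorted(numlist):            # O(n log n)
        if not result or value != result[-1]:
            result.append(value)             # single O(n) pass overall
    return result

print(dedup_by_sorting([3, 1, 3, 2, 1]))     # [1, 2, 3]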
There are two issues here.
Time complexity (which is expressed in big O notation) is a formal measure of how long an algorithm takes to run for a given set size. It's more about how well an algorithm scales than about the absolute speed.
The actual speed (say, in milliseconds) of an algorithm is the time complexity multiplied by a constant (in an ideal world).
Two people could implement the same duplicate-removal algorithm with O(n log n) complexity, but if one writes it in Python and the other writes it in optimised C, then the C program will be faster.

What is Big O notation? Do you use it? [duplicate]

This question already has answers here:
What is a plain English explanation of "Big O" notation?
(43 answers)
Closed 9 years ago.
What is Big O notation? Do you use it?
I missed this university class I guess :D
Does anyone use it and give some real life examples of where they used it?
See also:
Big-O for Eight Year Olds?
Big O, how do you calculate/approximate it?
Did you apply computational complexity theory in real life?
One important thing most people forget when talking about Big-O, so I feel the need to mention it:
You cannot use Big-O to compare the speed of two algorithms. Big-O only says how much slower an algorithm will get (approximately) if you double the number of items processed, or how much faster it will get if you cut the number in half.
However, if you have two entirely different algorithms and one (A) is O(n^2) and the other one (B) is O(log n), that does not mean A is slower than B. Actually, with 100 items, A might be ten times faster than B. It only says that when you go from 100 to 200 items, A will slow down by a factor given by n^2 (roughly four times) while B slows down by a factor given by log n. So, if you benchmark both and you know how much time A takes to process 100 items, and how much time B needs for the same 100 items, and A is faster than B, you can calculate at what number of items B will overtake A in speed (since B's running time grows much more slowly than A's, it will overtake A sooner or later; this is for sure).
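As a back-of-the-envelope illustration of that last point (my own sketch, with made-up timings): fit the constants from a single benchmark at n = 100, then walk forward to find where the O(log n) algorithm overtakes the O(n^2) one:
import math

time_A_at_100 = 0.1      # seconds for algorithm A (O(n^2)) on 100 items (assumed)
time_B_at_100 = 1.0      # seconds for algorithm B (O(log n)) on 100 items (assumed)

c_A = time_A_at_100 / 100**2            # A(n) ~ c_A * n^2
c_B = time_B_at_100 / math.log(100)     # B(n) ~ c_B * log n

n = 100
while c_A * n**2 < c_B * math.log(n):   # walk forward until B becomes the cheaper one
    n += 1
print("B overtakes A at roughly n =", n)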
Big O notation denotes the limiting factor of an algorithm. It's a simplified expression of how the run time of an algorithm scales in relation to the input.
For example (in Java):
/** Takes an array of strings and concatenates them.
 * This is a silly way of doing things but it gets the
 * point across hopefully.
 * @param strings the array of strings to concatenate
 * @return a string that is the result of concatenating all the strings
 *         in the array
 */
public static String badConcat(String[] strings) {
    String totalString = "";
    for (String s : strings) {
        for (int i = 0; i < s.length(); i++) {
            totalString += s.charAt(i);
        }
    }
    return totalString;
}
Now think about what this is actually doing. It is going through every character of input and adding them together. This seems straightforward. The problem is that String is immutable. So every time you add a letter onto the string you have to create a new String. To do this you have to copy the values from the old string into the new string and add the new character.
This means you will be copying the first letter n times, where n is the number of characters in the input. You will be copying the second character n-1 times, and so on, so in total there will be n(n-1)/2 copies.
This is (n^2 - n)/2, and for Big O notation we keep only the highest-order term (usually) and drop any constants multiplying it, so we end up with O(n^2).
Using something like a StringBuilder will be along the lines of O(n log n). If you calculate the number of characters at the beginning and set the capacity of the StringBuilder you can get it to be O(n).
So if we had 1000 characters of input, the first example would perform roughly a million operations, StringBuilder would perform 10,000, and StringBuilder with setCapacity would perform 1000 operations to do the same thing. This is a rough estimate, but O(n) notation is about orders of magnitude, not exact runtime.
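If you want to see where the n(n-1)/2 figure comes from, here is a tiny sketch (mine) that just counts character copies under the "copy the whole string on every append" model described above:
def naive_concat_copies(n):
    copies = 0
    length = 0
    for _ in range(n):        # appending one character at a time
        copies += length      # copying the old string into the new one
        length += 1
    return copies

n = 1000
print(naive_concat_copies(n), n * (n - 1) // 2)   # both 499500, roughly n^2/2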
It's not something I use per se on a regular basis. It is, however, constantly in the back of my mind when trying to figure out the best algorithm for doing something.
A very similar question has already been asked at Big-O for Eight Year Olds?. Hopefully the answers there will answer your question although the question asker there did have a bit of mathematical knowledge about it all which you may not have so clarify if you need a fuller explanation.
Every programmer should be aware of what Big O notation is, how it applies for actions with common data structures and algorithms (and thus pick the correct DS and algorithm for the problem they are solving), and how to calculate it for their own algorithms.
1) It's an order of measurement of the efficiency of an algorithm when working on a data structure.
2) Actions like 'add' / 'sort' / 'remove' can take different amounts of time with different data structures (and algorithms), for example 'add' and 'find' are O(1) for a hashmap, but O(log n) for a binary tree. Sort is O(n log n) for QuickSort, but O(n^2) for BubbleSort, when dealing with a plain array.
3) Calculations can be done by looking at the loop depth of your algorithm generally. No loops, O(1), loops iterating over all the set (even if they break out at some point) O(n). If the loop halves the search space on each iteration? O(log n). Take the highest O() for a sequence of loops, and multiply the O() when you nest loops.
Yeah, it's more complex than that. If you're really interested get a textbook.
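As a rough illustration of rule 3 (these are my own toy examples, not from the answer):
def constant(data):            # no loop -> O(1)
    return len(data)

def linear(data):              # one loop over the whole set -> O(n)
    total = 0
    for x in data:
        total += x
    return total

def logarithmic(n):            # search space halves every iteration -> O(log n)
    steps = 0
    while n > 1:
        n //= 2
        steps += 1
    return steps

def quadratic(data):           # nested loops -> O(n) * O(n) = O(n^2)
    pairs = 0
    for x in data:
        for y in data:
            pairs += 1
    return pairs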
'Big-O' notation is used to compare the growth rates of two functions of a variable (say n) as n gets very large. If function f grows much more quickly than function g we say that g = O(f) to imply that for large enough n, f will always be larger than g up to a scaling factor.
It turns out that this is a very useful idea in computer science and particularly in the analysis of algorithms, because we are often precisely concerned with the growth rates of functions which represent, for example, the time taken by two different algorithms. Very coarsely, we can determine that an algorithm with run-time t1(n) is more efficient than an algorithm with run-time t2(n) if t1 = O(t2) for large enough n which is typically the 'size' of the problem - like the length of the array or number of nodes in the graph or whatever.
This stipulation, that n gets large enough, allows us to pull a lot of useful tricks. Perhaps the most often used one is that you can simplify functions down to their fastest growing terms. For example n^2 + n = O(n^2) because as n gets large enough, the n^2 term gets so much larger than n that the n term is practically insignificant. So we can drop it from consideration.
However, it does mean that big-O notation is less useful for small n, because the slower growing terms that we've forgotten about are still significant enough to affect the run-time.
What we now have is a tool for comparing the costs of two different algorithms, and a shorthand for saying that one is quicker or slower than the other. Big-O notation can be abused which is a shame as it is imprecise enough already! There are equivalent terms for saying that a function grows less quickly than another, and that two functions grow at the same rate.
Oh, and do I use it? Yes, all the time - when I'm figuring out how efficient my code is it gives a great 'back-of-the-envelope' approximation to the cost.
The "Intuitition" behind Big-O
Imagine a "competition" between two functions over x, as x approaches infinity: f(x) and g(x).
Now, if from some point on (some x) one function always has a higher value than the other, then let's call this function "faster" than the other.
So, for example, if for every x > 100 you see that f(x) > g(x), then f(x) is "faster" than g(x).
In this case we would say g(x) = O(f(x)). f(x) poses a sort of "speed limit" for g(x), since eventually it passes it and leaves it behind for good.
This isn't exactly the definition of big-O notation, which also states that f(x) only has to be larger than C*g(x) for some constant C (which is just another way of saying that you can't help g(x) win the competition by multiplying it by a constant factor - f(x) will always win in the end). The formal definition also uses absolute values. But I hope I managed to make it intuitive.
It may also be worth considering that the complexity of many algorithms is based on more than one variable, particularly in multi-dimensional problems. For example, I recently had to write an algorithm for the following. Given a set of n points, and m polygons, extract all the points that lie in any of the polygons. The complexity is based around two known variables, n and m, and the unknown of how many points are in each polygon. The big O notation here is quite a bit more involved than O(f(n)) or even O(f(n) + g(m)).
Big O is good when you are dealing with large numbers of homogenous items, but don't expect this to always be the case.
It is also worth noting that the actual number of iterations over the data is often dependent on the data itself. Quicksort is usually quick, but give it presorted data and it slows down. My points-and-polygons algorithm ended up quite fast, close to O(n + m log(m)), based on prior knowledge of how the data was likely to be organised and the relative sizes of n and m. It would fall down badly on randomly organised data of different relative sizes.
A final thing to consider is that there is often a direct trade-off between the speed of an algorithm and the amount of space it uses. Pigeonhole sorting is a pretty good example of this. Going back to my points and polygons, let's say that all my polygons were simple and quick to draw, and I could draw them filled on screen, say in blue, in a fixed amount of time each. So if I draw my m polygons on a black screen it would take O(m) time. To check if any of my n points was in a polygon, I simply check whether the pixel at that point is blue or black. So the check is O(n), and the total analysis is O(m + n). The downside of course is that I need near-infinite storage if I'm dealing with real-world coordinates to millimetre accuracy... ho hum.
It may also be worth considering amortized time, rather than just worst case. This means, for example, that if you run the algorithm n times, it will be O(1) on average, but it might be worse sometimes.
A good example is a dynamic table, which is basically an array that expands as you add elements to it. A naïve implementation would increase the array's size by 1 for each element added, meaning that all the existing elements need to be copied every time a new one is added. This would result in an O(n^2) algorithm if you were concatenating a series of arrays using this method. An alternative is to double the capacity of the array every time you need more storage. Even though appending is an O(n) operation sometimes, you will only need to copy O(n) elements in total for every n elements added, so the operation is O(1) on average. This is how things like StringBuilder or std::vector are implemented.
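A quick sketch (my own) that compares the two growth strategies by counting how many element copies each needs while appending n items:
def copies_grow_by_one(n):
    copies, capacity = 0, 0
    for size in range(n):
        if size == capacity:       # array is full: grow by 1, copy everything over
            copies += size
            capacity += 1
    return copies                  # about n^2 / 2 copies

def copies_double(n):
    copies, capacity = 0, 1
    for size in range(n):
        if size == capacity:       # array is full: double the capacity, copy everything
            copies += size
            capacity *= 2
    return copies                  # fewer than 2n copies, i.e. O(1) amortized per append

print(copies_grow_by_one(10000), copies_double(10000))   # 49995000 vs 16383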
What is Big O notation?
Big O notation is a method of expressing the relationship between the number of steps an algorithm requires and the size of the input data. This is referred to as the algorithmic complexity. For example, sorting a list of size N using Bubble Sort takes O(N^2) steps.
Do I use Big O notation?
I do use Big O notation on occasion to convey algorithmic complexity to fellow programmers. I use the underlying theory (e.g. Big O analysis techniques) all of the time when I think about what algorithms to use.
Concrete Examples?
I have used the theory of complexity analysis to create algorithms for efficient stack data structures which require no memory reallocation, and which support average time of O(N) for indexing. I have used Big O notation to explain the algorithm to other people. I have also used complexity analysis to understand when linear time sorting O(N) is possible.
From Wikipedia.....
Big O notation is useful when analyzing algorithms for efficiency. For example, the time (or the number of steps) it takes to complete a problem of size n might be found to be T(n) = 4n² − 2n + 2.
As n grows large, the n² term will come to dominate, so that all other terms can be neglected — for instance when n = 500, the term 4n² is 1000 times as large as the 2n term. Ignoring the latter would have negligible effect on the expression's value for most purposes.
Obviously I have never used it..
You should be able to evaluate an algorithm's complexity. This combined with a knowledge of how many elements it will take can help you to determine if it is ill suited for its task.
It says how many iterations an algorithm has in the worst case.
To search for an item in a list, you can traverse the list until you find the item. In the worst case, the item is in the last place.
Let's say there are n items in the list. In the worst case you take n iterations. In Big O notation that is O(n).
In effect, it tells you how efficient an algorithm is.