Let's say I have to iterate over every character in an array of strings, in which every string has a different length, so arr[0].length != arr[1].length and so on, as in this example:
    # prints every char in the array
    for str in arr:
        for c in str:
            print(c)
How should the time complexity of an algorithm like this be expressed? As a summation of the lengths of all the elements in the array? Or just as O(N*M), taking N as the number of elements and M as the maximum string length, which overbounds it accordingly?
There is a precise mathematical theory called complexity theory which answers your question and many more. In complexity theory we have what is called a Turing machine, which is a type of computer. The time complexity of a Turing machine performing a computation is then defined as the function f on the natural numbers such that f(n) is the worst-case running time of the machine on inputs of length n. In your case the machine just needs to copy its input somewhere else, which clearly has O(n) time complexity (n here is the combined length of your array). Since NM is greater than n, a Turing machine executing the algorithm you described will not run longer than some constant times NM, but it may halt sooner because of the varying lengths of the elements of the array.
If you are interested in learning about complexity theory, I recommend the book Introduction to the Theory of Computation by Michael Sipser, which explains these concepts from scratch.
There are many ways you could do this. Your bound of O(NM) is a conservative upper bound. You could also define a parameter L indicating the total length of all the strings and say that the runtime is Θ(N + L), which is essentially your sum idea made a bit cleaner by assigning a name to the summation. That’s a more precise bound that more clearly indicates where the work is being done.
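As a rough illustration (a minimal sketch, assuming a Python list of strings named arr), you can count the work directly and see that it tracks N + L rather than N*M:

    arr = ["a", "hello", "hi"]              # example input; lengths differ

    N = len(arr)                            # number of strings
    L = sum(len(s) for s in arr)            # total number of characters
    M = max(len(s) for s in arr)            # longest string

    steps = 0
    for s in arr:                           # N iterations of the outer loop
        for c in s:                         # one step per character, L in total
            steps += 1

    print(steps, N + L, N * M)              # steps == L; N*M can be much larger

Here N*M only matches the actual work when all the strings have the same length.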
Imagine T1(n) and T2(n) are the running times of programs P1 and P2, and
T1(n) ∈ O(f(n))
T2(n) ∈ O(g(n))
What is T1(n) + T2(n) when P1 is running alongside P2?
The answer is O(max{f(n), g(n)}), but why?
When we think about Big-O notation, we generally think about what the algorithm does as the size of the input n gets really big. A lot of the time we can fall back on mathematical intuition. Consider two functions, one that is O(n^2) and one that is O(n). As n gets really large, both running times increase without bound. The difference is that the O(n^2) algorithm grows much, MUCH faster than the O(n) one. So much faster, in fact, that if you combine the algorithms into something that would be O(n^2 + n), the n term by itself is so small that it can be ignored, and the algorithm is still in the class O(n^2).
That's why when you add together two algorithms, the combined running time is in O(max{f(n), g(n)}). There's always one that 'dominates' the runtime, making the effect of the other negligible.
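A quick numeric check (purely illustrative) shows how fast the lower-order term becomes negligible:

    for n in (10, 1000, 1000000):
        combined = n**2 + n                 # cost of running both algorithms
        share = n / combined                # fraction contributed by the O(n) part
        print(n, share)                     # the share shrinks toward 0 as n grows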
The answer is O(max{f(n), g(n)})
This is only correct if the programs run independently of each other. Anyhow, let's assume this is the case.
In order to answer the why, we need to take a closer look at what Big-O notation represents. Contrary to the way you stated it, it does not represent time but an upper bound on the complexity.
So while running both programs might take more time, the upper bound on the complexity won't increase.
Let's consider an example: P_1 calculates the product of all pairs of n numbers in a vector; it is implemented using nested loops and therefore has a complexity of O(n*n). P_2 just prints the numbers in a single loop and therefore has a complexity of O(n).
Now if we run both programs at the same time, the nested loops of P_1 are the most 'complex' part, leaving the combination with a complexity of O(n*n).
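A minimal sketch of the two programs (assuming a Python list named numbers) makes the comparison concrete:

    numbers = [3, 1, 4, 1, 5]

    # P_1: product of every pair of numbers -- nested loops, O(n*n)
    for x in numbers:
        for y in numbers:
            print(x * y)

    # P_2: print each number -- single loop, O(n)
    for x in numbers:
        print(x)

    # Running both back to back costs O(n*n + n) = O(n*n).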
Task to complete: find gcd(a, b) for integers a > b > 0.
Consider an algorithm that checks all of the numbers up to b and keeps track of the max number that divides a and b. It would use the % operator twice per check (for a and b). What would the complexity of this algorithm be?
I have not yet taken any formal CS courses in complexity theory (I will soon) so I am just looking for a quick answer.
The modulo operation is implemented in hardware, and it's essentially O(1). Strictly speaking it is not constant but depends on the number of bits of a and b; however, for fixed-width machine integers the number of bits is the same regardless of the input values, so we usually ignore this factor.
The worst-case complexity of brute-force GCD is just O(n) (also O(a), O(b), or O(min(a, b)); they're all the same here), and it happens when the GCD is either 1, a, or b.
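A minimal sketch of the brute-force algorithm described in the question (the helper name naive_gcd is just for illustration):

    def naive_gcd(a, b):
        # Check every candidate divisor up to b: O(b) iterations,
        # with two % operations per iteration.
        best = 1
        for d in range(1, b + 1):
            if a % d == 0 and b % d == 0:
                best = d
        return best

    print(naive_gcd(36, 24))                # 12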
Thanks for your willingness to help.
Straight to the point: I'm confused about the use of big O notation when analyzing the worst-case time complexity of search algorithms.
For example, the worst-case time complexity of Alpha-Beta Pruning is O(b^d), where ^ means "to the power of", b represents the average branching factor, and d represents the depth of the search tree.
I do get that the worst-case time complexity would be less than or equal to a positive constant multiplied by b^d, but why is the use of big O notation permitted here? Where did the variable n, the input size, go? I do know that inputs of the same size might lead to significantly different running times of an algorithm.
All of the research I've done only explains "the use of big O notation in the analysis of worst case time complexity" in terms of the growth function, a function that has time complexity as variable y and input size as variable x. There are also formal definitions of big O notation, which make me even more confused about the question above (definition 1, definition 2).
Any attempts to answer my question would be greatly appreciated.
The input size you refer to as n is in this case d. If n is the number of entries in your tree, d can be calculated as log_2(n), assuming your tree is a balanced binary tree.
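As a quick numeric illustration (a sketch only, for a complete binary tree), the depth d can be recovered from the node count n with a base-2 logarithm:

    import math

    for d in range(1, 6):
        n = 2**d - 1                             # nodes in a complete binary tree of depth d
        print(n, math.ceil(math.log2(n + 1)))    # prints n and d recovered from it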
Big O notation implies that you are discussing what the runtime would be for a very large n. In the case you noted, O(b^d), the n is the variable that changes with input size. In this case, d would be your n. As you've found, some notations make use of many variables.
n is just a general term for the number of elements, but the runtime can depend on many factors: the depth of a tree, or a different list entirely. For example, to traverse nested lists like this:
    for n in firstList:
        for k in secondList:
            pass  # do stuff
the cost would be O(n*k), with n and k the sizes of the two lists.
I had a job interview today and was asked about the complexity of std::set_intersection. When I was answering, I mentioned that
O(n+m)
is equal to:
O(max(n,m))
I was told that this is incorrect. I was unsuccessfully trying to show equivalence with:
O(0.5*(n+m)) ≤ O(max(n,m)) ≤ O(n+m)
My question is: am I really incorrect?
For all m, n ≥ 0 it holds that max(m, n) ≤ m + n, hence max(m, n) ∈ O(m + n); and m + n ≤ 2·max(m, n), hence m + n ∈ O(max(m, n)).
Thus O(max(m, n)) = O(m + n).
ADDENDUM: If f belongs to O(m + n), then there exists a constant D > 0 such that f(n, m) < D·(m + n) for m and n large enough. Thus f(n, m) < 2D·max(m, n), and O(m + n) must be a subset of O(max(m, n)). The proof that O(max(m, n)) is a subset of O(m + n) is analogous.
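As a small sanity check (purely illustrative, in Python), the two inequalities the argument relies on can be verified numerically:

    for m in range(6):
        for n in range(6):
            assert max(m, n) <= m + n            # gives max(m, n) in O(m + n)
            assert m + n <= 2 * max(m, n)        # gives m + n in O(max(m, n))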
You are completely right that O(n+m) is equal to O(max(n,m)); even more precisely, we can prove Θ(n+m) = Θ(max(n,m)), which is tighter and also proves your statement. The mathematical proof (for both big-O and Θ) is very simple and easy to understand with common sense, and a proof states the claim in a well-defined, strict way that leaves no ambiguity.
Still, you were (wrongly) told that this is incorrect, presumably because, if we want to be very precise, this is not the most appropriate mathematical way to express that the order of max(m,n) is the same as that of m+n. You used the words "is equal" referring to big-O notation, but what is the definition of big-O notation?
It refers to sets. Saying max(n,m) belongs to O(m+n) is the most correct way, and vice versa m+n belongs to O(max(m,n)). In big-O notation it is commonly used and accepted to write m+n = O(max(n,m)).
The problem is that you did not refer to the order of a function, as in "f is O(g)", but tried to compare the sets O(f) and O(g). And proving that two infinite sets are equal is not easy (which may have confused the interviewer).
We say sets A and B are identical (or equal) when they contain the same elements (comparing element by element only really works when they are finite), and even this notion of identity cannot easily be applied when talking about big-O sets.
Big-O of F is used to denote the set that contains all functions whose order of growth is at most that of F. How many functions are there? Infinitely many, since F + c is contained for every constant c. How could you say two different sets are identical (or equal) when they are infinite? Well, it is not that simple.
So I understand you are thinking that n+m and max(n,m) have the same order, but **the right way to express that** is by saying n+m is O(max(n,m)) and max(n,m) is O(m+n) (saying that O(m+n) is equal to O(max(m,n)) really requires a proof).
One more thing: we said that these functions have the same order, and that is absolutely mathematically correct, but when optimizing an algorithm you may need to take some lower-order factors into account; then the two expressions may give you slightly different results, although the asymptotic behavior is provably the same.
CONCLUSION
As you can read on Wikipedia (and in CS courses at every university, and in every algorithms book), the big O/Θ/Ω/ω/o notations help us compare functions and find their order of growth; they are not usually treated as sets of functions, and this is why you were told you were wrong. Although it is easy to say that O(n) is a subset of O(n^2), comparing infinite sets to decide whether they are identical is much harder in general. Cantor worked on categorizing infinite sets; for example, we know that the natural numbers are countably infinite and the real numbers are uncountably infinite, so there are more reals than naturals even though both sets are infinite. Ordering and categorizing infinite sets gets very complicated, and doing so is more a topic of mathematical research than a way of comparing functions.
UPDATE
It turns out you can in fact prove that O(n+m) equals O(max(n,m)):
For every function F which belongs to O(n+m), there are constants c and k such that:
F <= c·(n+m) for every n >= k and m >= k
Then it also holds that:
F <= c·(n+m) <= 2c·max(n,m)
so F belongs to O(max(n,m)), and as a result O(m+n) is a subset of O(max(n,m)).
Now consider an F which belongs to O(max(n,m)); then there are constants c and k such that:
F <= c·max(n,m) for every n >= k and m >= k
and we also have:
F <= c·max(n,m) <= 2c·(m+n) for every n >= k and m >= k
so with c' = 2c and the same k, by definition F is O(m+n), and as a result O(max(n,m)) is a subset of O(n+m). Since both inclusions hold, O(max(m,n)) and O(m+n) are equal, and this proof shows that you were completely right, without any doubt.
Finally, note that proving that m+n is O(max(n,m)) and max(n,m) is O(m+n) does not by itself immediately prove that the sets are equal (that still needs a proof, as above); it only proves that the functions have the same order, without examining the sets. It is easy to see, though, that in the general case where f is O(g) and g is O(f), you can prove the equality of the big-O sets in the same way as in the previous paragraph.
We'll show by rigorous Big-O analysis that you are indeed correct, given one possible choice of parameter of growth in your analysis. However, this does not necessarily mean that the viewpoint of the interviewer is incorrect, rather that his/her choice of parameter of growth differs. His/her claim that your answer was outright incorrect, however, is questionable: you've possibly simply used two slightly different approaches to analyzing the asymptotic complexity of std::set_intersection, both leading to the general consensus that the algorithm runs in linear time.
Preparations
Let's start by looking at the reference for std::set_intersection at cppreference (emphasis mine):
http://en.cppreference.com/w/cpp/algorithm/set_intersection
Parameters
first1, last1 - the first range of elements to examine
first2, last2 - the second range of elements to examine
Complexity
At most 2·(N1+N2-1) comparisons, where
N1 = std::distance(first1, last1)
N2 = std::distance(first2, last2)
std::distance itself is naturally linear (worst case: no random access)
std::distance
...
Returns the number of elements between first and last.
We'll proceed to briefly recall the basics of Big-O notation.
Big-O notation
We loosely state the definition of a function or algorithm f being in O(g(n)) (to be picky, O(g(n)) being a set of functions, hence f ∈ O(...), rather than the commonly misused f(n) ∈ O(...)).
If a function f is in O(g(n)), then c · g(n) is an upper bound on f(n), for some non-negative constant c such that f(n) ≤ c · g(n) holds, for sufficiently large n (i.e., n ≥ n0 for some constant n0).
Hence, to show that f ∈ O(g(n)), we need to find a set of (non-negative) constants (c, n0) that fulfils
f(n) ≤ c · g(n), for all n ≥ n0, (+)
We note, however, that this set is not unique; the problem of finding the constants (c, n0) such that (+) holds is degenerate. In fact, if any such pair of constants exists, there will exist an infinite amount of different such pairs.
We proceed with the Big-O analysis of std::set_intersection, based on the already known worst case number of comparisons of the algorithm (we'll consider one such comparison a basic operation).
Applying Big-O asymptotic analysis to the set_intersection example
Now consider two ranges of elements, say range1 and range2, and assume that the number of elements contained in these two ranges are m and n, respectively.
Note! Already at this initial stage of the analysis do we make a choice: we choose to study the problem in terms of two different parameters of growth (or rather, focusing on the largest one of these two). As we shall see ahead, this will lead to the same asymptotic complexity as the one stated by the OP. However, we could just as well choose to let k = m+n be the parameter of choice: we would still conclude that std::set_intersection is of linear-time complexity, but rather in terms of k (which is m+n, which is not max(m, n)) than the largest of m and n. These are simply the preconditions we freely choose to set prior to proceeding with our Big-O notation/asymptotic analysis, and it's quite possible that the interviewer had a preference for analyzing the complexity using k as the parameter of growth rather than the largest of its two components.
Now, from above we know that in the worst case, std::set_intersection will perform 2 · (m + n - 1) comparisons/basic operations. Let
h(n, m) = 2 · (m + n - 1)
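For intuition, here is a minimal sketch (in Python rather than C++, assuming two sorted input lists) of the merge-style intersection behind this bound; each loop iteration performs at most two comparisons and advances at least one index, giving on the order of m + n comparisons in total:

    def intersect_sorted(a, b):
        # Merge-style intersection of two sorted lists (illustrative only).
        i, j, out = 0, 0, []
        while i < len(a) and j < len(b):
            if a[i] < b[j]:            # first comparison
                i += 1
            elif b[j] < a[i]:          # second comparison
                j += 1
            else:                      # equal: element is in the intersection
                out.append(a[i])
                i += 1
                j += 1
        return out

    print(intersect_sorted([1, 2, 4, 6], [2, 3, 4, 7]))   # [2, 4]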
Since the goal is to find an expression of the asymptotic complexity in terms of Big-O (upper bound), we may, without loss of generality, assume that n > m, and define
f(n) = 2 · (n + n - 1) = 4n - 2 > h(n,m) (*)
We proceed to analyze the asymptotic complexity of f(n), in terms of Big-O notation. Let
g(n) = n
and note that
f(n) = 4n - 2 < 4n = 4 · g(n)
Now (choose to) let c = 4 and n0 = 1, and we can state the fact that:
f(n) < 4 · g(n) = c · g(n), for all n ≥ n0, (**)
Given (**), we know from (+) that we've now shown that
f ∈ O(g(n)) = O(n)
Furthermore, since (*) holds, naturally
h ∈ O(g(n)) = O(n), assuming n > m (i)
holds.
If we switch our initial assumption and assume that m > n, re-tracing the analysis above will, conversely, yield the similar result
h ∈ O(g(m)) = O(m), assuming m > n (ii)
Conclusion
Hence, given two ranges range1 and range2 holding m and n elements, respectively, we've shown that the asymptotic complexity of std::set_intersection applied to these two ranges is indeed
O(max(m, n))
where we've chosen the largest of m and n as the parameter of growth in our analysis.
This is, however, not really valid notation (or at least not common) when speaking about Big-O. When we use Big-O notation to describe the asymptotic complexity of some algorithm or function, we do so with regard to some single parameter of growth (not two of them).
Rather than answering that the complexity is O(max(m, n)) we may, without loss of generality, assume that n describes the number of elements in the range with the most elements, and given that assumption, simply state that an upper bound for the asymptotic complexity of std::set_intersection is O(n) (linear time).
A speculation as to the interview feedback: as mentioned above, it's possible that the interviewer simply had a firm view that the Big-O notation/asymptotic analysis should have been based on k = m+n as the parameter of growth rather than the largest of its two components. Another possibility could, naturally, be that the interviewer simply, and confusingly, asked about the actual worst-case number of comparisons of std::set_intersection, mixing this up with the separate matter of Big-O notation and asymptotic complexity.
Final remarks
Finally, note that the analysis of the worst-case complexity of std::set_intersection is not at all representative of the commonly studied unordered set intersection problem: the former is applied to ranges that are already sorted (see the quote from Boost's set_intersection below: the origin of std::set_intersection), whereas in the latter we study the computation of the intersection of unordered collections.
Boost: set_intersection
Description
set_intersection constructs a sorted range that is the intersection
of the sorted ranges rng1 and rng2. The return value is the
end of the output range.
As an example of the latter, the intersection method of Python's set type applies to unordered collections; applied to, say, sets s and t, it has an average-case and a worst-case complexity of O(min(len(s), len(t))) and O(len(s) * len(t)), respectively. The huge difference between the average and worst case in this implementation stems from the fact that hash-based solutions generally work very well in practice but can, for some inputs, theoretically have very poor worst-case performance.
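A quick usage sketch of that built-in, for comparison:

    s = {1, 2, 4, 6}
    t = {2, 3, 4, 7}

    print(s & t)                   # {2, 4} -- hash-based intersection
    print(s.intersection(t))       # same result via the method form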
For additional details of the latter problem, see e.g.
Intersection of two unsorted sets or lists # SE-CSTheory