If I have an algorithm where one part of it has complexity O(n log n) and another part has complexity O(n), what would the final complexity of the algorithm be? As far as I am aware, it would be O(n log n).
You are right: the dominant term is what counts, in your case O(n log n).
It depends on what you mean by "part of it" ...
Let's assume you have a for loop with a complexity of O(n) and a binary search with a complexity of O(log n).
If your program looks like this:
for (int i = 0; i < n; i++) { // O(n)
    // some stuff here
}
binarySearch(); // O(log n)
Time complexity would be O(n) + O(log n) = O(n).
However, if you have this situation:
for (int i = 0; i < n; i++) { // O(n)
    binarySearch(); // O(log n) per call, so O(n * log n) in total
}
Time complexity would be O(n log n).
Edit:
If the algorithm is composed of different blocks executed one after another, each with its own time complexity, then algorithm time complexity = max(O(block1), O(block2), ...)
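For instance, here is a minimal Java sketch of the situation in the original question (the method and its purpose are made up for illustration): an O(n log n) block followed by an O(n) block, executed in sequence, for an overall complexity of max(O(n log n), O(n)) = O(n log n).

import java.util.Arrays;

// Illustrative only: returns the largest gap between consecutive sorted values.
static int maxGap(int[] a) {
    Arrays.sort(a);                      // block 1: O(n log n)
    int gap = 0;
    for (int i = 1; i < a.length; i++) { // block 2: O(n)
        gap = Math.max(gap, a[i] - a[i - 1]);
    }
    return gap;                          // total: O(n log n) + O(n) = O(n log n)
}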
for (let i = 0; i < n; i += 2) {
    // ...operation
}
I have read various docs on time complexity, but I didn't properly understand them. What is the time complexity of this loop?
The loop increments i by 2, so it runs n/2 times. Constant factors are dropped in Big O notation, so the time complexity is still O(n).
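As a quick sketch to convince yourself (the counting harness is made up for illustration), count the iterations for a few values of n; the count grows linearly, at roughly n/2:

public class StrideLoop {
    public static void main(String[] args) {
        for (int n : new int[]{10, 100, 1000}) {
            int count = 0;
            for (int i = 0; i < n; i += 2) {
                count++; // stand-in for the loop body's operation
            }
            System.out.println("n = " + n + ", iterations = " + count); // ~n/2, i.e. linear in n
        }
    }
}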
I am new to studying Big O notation and have thought of this question: what is the name for the complexity O(a * b)? Is it linear complexity? Polynomial? Or something else? The code for the implementation is below.
function twoInputsMult(a, b) {
for (let i = 0; i < a; i++) {
for (let j = 0; j < b; j++) {
// do something
}
}
}
Edit: According to the course I'm going through, it is not n^2 or quadratic, since it uses two different numbers for the loops.
O(ab) is just O(ab). Technically, ab is a multivariate polynomial of degree 2. But this is not equivalent to a quadratic polynomial in a single variable, such as a^2.
If you know more about a and b, you may be able to deduce more about their relationship. For instance, if a = O(b), then O(ab) = O(b^2), which is quadratic. On the other hand, if a is a constant, then we can reduce it to O(b), which is linear.
Notice, by the way, that O(a + b) is just O(max(a, b)).
And if the real world interests you, I might also mention that both of these complexity classes show up a lot, e.g. in graph theory, where we have the number of vertices |V| and the number of edges |E|, and typically |E| = O(|V|^2) but not necessarily. For instance, depth-first search has a time complexity of O(|V| + |E|), which just means that it is linear in terms of whichever there is more of: vertices or edges.
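As an illustration, here is a minimal sketch of an iterative depth-first search over an adjacency-list graph (the representation is an assumption for the example). It processes each vertex once and scans each adjacency list once, which is where the O(|V| + |E|) bound comes from:

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// adj.get(v) holds the neighbors of vertex v, for vertices 0..n-1.
static boolean[] dfs(List<List<Integer>> adj, int start) {
    boolean[] visited = new boolean[adj.size()];
    Deque<Integer> stack = new ArrayDeque<>();
    stack.push(start);
    while (!stack.isEmpty()) {
        int v = stack.pop();
        if (visited[v]) continue;
        visited[v] = true;           // each vertex is processed once: O(|V|) overall
        for (int w : adj.get(v)) {   // each adjacency list is scanned once: O(|E|) overall
            if (!visited[w]) stack.push(w);
        }
    }
    return visited;
}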
Say that I have a block of code like
for (int i = 0; i < n; i++) {
    randomFunction(); // this function's runtime is O(log n)
}
Would this have an overall worst-case runtime of O(n) or would it just be O(n log n)?
O(n log n) for sure, because the O(log n) function is executed n times.
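As a concrete sketch of the same shape (the array and queries are made up for illustration): running a binary search, which is O(log n), once for each of n values gives n * O(log n) = O(n log n) total work.

import java.util.Arrays;

public class RepeatedSearch {
    public static void main(String[] args) {
        int n = 1000;
        int[] sorted = new int[n];
        for (int i = 0; i < n; i++) sorted[i] = 2 * i; // sorted input: 0, 2, 4, ...

        int hits = 0;
        for (int i = 0; i < n; i++) {                  // n iterations...
            if (Arrays.binarySearch(sorted, i) >= 0) { // ...each doing O(log n) work
                hits++;
            }
        }
        System.out.println(hits); // 500; total work is O(n log n)
    }
}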
For the following problem I came up with the algorithm below. I am just wondering whether I have calculated the complexity of the algorithm correctly or not.
Problem:
Given a list of integers as input, determine whether or not two integers (not necessarily distinct) in the list have a product k. For example, for k = 12 and list [2,10,5,3,7,4,8], there is a pair, 3 and 4, such that 3×4 = 12.
My solution:
// Imagine A is the list containing the integer numbers
for (int i = 0; i < A.size(); i++)             // O(n)
{
    for (int j = i + 1; j < A.size() - 1; j++) // O(n-1)*O(n-(i+1))
    {
        if (A.get(i) * A.get(j) == K)          // O(n-2)*O(n-(i+1))
            return "Success";                  // O(1)
    }
}
return "FAILURE";                              // O(1)
O(n) + O(n-1)*O(n-i-1) + O(n-2)*O(n-i-1) + 2*O(1) =
O(n) + O(n^2-ni-n) + O(-n+i+1) + O(n^2-ni-n) + O(-2n+2i+2) + 2*O(1) =
O(n) + O(n^2) + O(n) + O(n^2) + O(2n) + 2*O(1) =
O(n^2)
Apart from my semi-algorithm, is there any more efficient algorithm?
Let's break down what your proposed algorithm is essentially doing.
For every index i (such that 0 ≤ i < n) you compare the element at index i with the element at each subsequent index j (i ≠ j) to determine whether A.get(i) * A.get(j) == K.
An invariant for this algorithm would be that at every iteration, the pair {i,j} being compared hasn't been compared before.
This implementation (assuming it compiles and runs without the runtime exceptions mentioned in the comments) makes a total of nC2 comparisons (where nC2 is the binomial coefficient of n and 2, for choosing all possible unique pairs) and each such comparison would compute at a constant time (O(1)). Note it can be proven that nCk is not greater than n^k.
So O(nC2) makes for a more accurate upper bound for this algorithm - though by common big O notation this would still be O(n^2) since nC2 = n*(n-1)/2 = (n^2-n)/2 which is still order of n^2.
Per your question from the comments:
Is it correct to use "i" in the complexity, as I have used O(n-(i+1))?
i is a running index, whereas the complexity of your algorithm is only affected by the size of your sample, n.
In other words, the total complexity is calculated over all iterations of the algorithm, while i refers to a specific iteration. Therefore it is incorrect to use i in your complexity calculations.
Apart from my semi-algorithm, is there any more efficient algorithm?
Your "semi-algorithm" seems to me the most efficient way to go about this. Any comparison-based algorithm would require querying all pairs in the array, which translates to the runtime complexity detailed above.
Though I have not calculated a lower bound and would be curious to hear if someone knows of a more efficient implementation.
edit: The other answer here shows a good solution to this problem which is (generally speaking) more efficient than this one.
Your algorithm looks like O(n^2) worst case and O(n*log(n)) average case, because the longer the list is, the more likely the loops will exit before evaluating all n^2 pairs.
An algorithm with O(n) worst case and O(log(n)) average case is possible. In real life it would be less efficient than your algorithm for lists where the factors of K are right at the start or the list is short, and more efficient otherwise. (pseudocode not written in any particular language)
var h = new HashSet();
for(int i=0; i<A.size(); i++)
{
var x = A.get(i);
if(x != 0 && K%x == 0) // If x is a nonzero factor of K
{
h.add(x); // Store x in h
if(h.contains(K/x))
{
return "Success";
}
}
}
return "FAILURE";
HashSet.add and HashSet.contains are O(1) on average (but slower than List.get even though it is also O(1)). For the purpose of this exercise I am assuming they always run in O(1) (which is not strictly true but close enough for government work). I have not accounted for edge cases, such as the list containing a 0.
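For completeness, here is a runnable Java version of the sketch above (the class and method names are made up for illustration), tried against the example from the question:

import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class PairProduct {
    static String findPairWithProduct(List<Integer> a, int k) {
        Set<Integer> h = new HashSet<>();
        for (int x : a) {
            if (x != 0 && k % x == 0) { // x is a nonzero factor of k
                h.add(x);
                if (h.contains(k / x)) {
                    return "Success";   // some seen value y (possibly x itself) satisfies x * y == k
                }
            }
        }
        return "FAILURE";
    }

    public static void main(String[] args) {
        // k = 12 and list [2,10,5,3,7,4,8]: 3 * 4 = 12
        System.out.println(findPairWithProduct(List.of(2, 10, 5, 3, 7, 4, 8), 12)); // Success
        System.out.println(findPairWithProduct(List.of(2, 10, 5, 7, 8), 12));       // FAILURE
    }
}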
How does one calculate the time complexity of code with conditional statements that may or may not lead to higher-order terms?
For example:
for(int i = 0; i < n; i++){
//an elementary operation
for(int j = 0; j < n; j++){
//another elementary operation
if (i == j){
for(int k = 0; k < n; k++){
//yet another elementary operation
}
} else {
//elementary operation
}
}
}
And what if the contents in the if-else condition were reversed?
Your code takes O(n^2). The first two loops take O(n^2) operations. The k loop takes O(n) operations and gets entered n times (once for each i == j), giving O(n^2). The total complexity of your code is O(n^2) + O(n^2) = O(n^2).
Another try:
- The first 'i' loop runs n times.
- The second 'j' loop runs n times, giving n^2 (i, j) combinations. For each of them:
- If i == j, the 'k' loop runs n times. There are n combinations with i == j,
so this part of the code runs in O(n^2).
- If not, it performs an elementary operation. There are n^2 - n combinations like that,
so this part takes O(n^2) time.
- The above proves that this code will take O(n^2) operations.
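As a quick sanity check, here is a sketch that counts the elementary operations for a few values of n (the harness is made up for illustration); the count grows quadratically, at exactly 3n^2 here:

public class OpCounter {
    public static void main(String[] args) {
        for (int n : new int[]{10, 100, 1000}) {
            long ops = 0;
            for (int i = 0; i < n; i++) {
                ops++;                             // the elementary operation in the i loop
                for (int j = 0; j < n; j++) {
                    ops++;                         // the elementary operation in the j loop
                    if (i == j) {
                        for (int k = 0; k < n; k++) {
                            ops++;                 // the k-loop operation, reached n times in total
                        }
                    } else {
                        ops++;                     // the else-branch operation
                    }
                }
            }
            System.out.println("n = " + n + ", ops = " + ops); // 3n^2
        }
    }
}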
That depends on the kind of analysis you are performing. If you are analysing worst-case complexity, then take the worst complexity of both branches. If you're analysing average-case complexity, you need to calculate the probability of entering one branch or another and multiply each complexity by the probability of taking that path.
If you change the branches, just switch the probability coefficients.
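For the example loop above, a sketch of that average-case calculation: each of the n^2 (i, j) pairs takes the i == j branch with probability 1/n (costing n operations) and the else branch otherwise (costing 1), so the expected total is

n^2 * ((1/n) * n + (1 - 1/n) * 1) = n^2 * (2 - 1/n) = O(n^2),

which in this particular case matches the worst case.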