Running Time Calculation - time-complexity

I am trying to learn time complexities and have come across a problem that is confusing me. I need help understanding this:
my_func takes an array input of size n. It runs three nested for loops, and in the innermost loop it calls another function that runs in O(1) time.
def my_func(A):
    n = len(A)
    for i in range(1, n):          # i goes from 1 to n-1
        for j in range(1, i):      # j goes from 1 to i-1
            for k in range(1, j):  # k goes from 1 to j-1
                some_other_func()  # O(1)
My Questions:
Am I right if I say that total number of steps performed by my_func() is O(n^3) because:
the first for loop goes from 1 to n-1
the second for loop goes from 1 to n-2
the third loop goes from 1 to n-3
What is asymptotic run time and what is the asymptotic run time for the above algorithm?
What is the meaning of the following:
sum_{i=1}^{n-1} sum_{j=1}^{i-1} sum_{k=1}^{j-1} O(1)
Am I right if I say that total number of steps performed by my_func()
is O(n^3)
Yes, its time-complexity is O(n^3).
What is asymptotic run time and what is the asymptotic run time for
the above algorithm?
The asymptotic run time is the limiting behavior of an algorithm's execution time as the problem size goes to infinity. For this algorithm, the running time behaves like n^3 as n goes to infinity, so its asymptotic run time is O(n^3).
What is the meaning of the following
First, the expression shows the dependent variables k -> j -> i exactly as they appear in the loops. If all of the variables (i, j, k) were independent of each other (say each loop ran a constant x times), then the total count would simply be x*x*x. But here k depends on j, and j depends on i, which is exactly what the nested sums express:
x(x(x)) = sigma(sigma(sigma(O(1))))
Second, time complexity is judged on large inputs, so whether the variables are dependent or independent, the big O is O(n^3).

Yes, it's O(n^3), and that IS the asymptotic run time. The sum expression at the end means the same thing as the three nested loops at the top, assuming "some_other_func()" is sum = sum + 1.
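For intuition, here is a small Python sketch (my own illustration, not part of the original question or answers) that counts how many times the innermost call would run. The exact count is (n-1)(n-2)(n-3)/6, which is on the order of n^3/6 and therefore O(n^3):

def count_inner_calls(n):
    # Mirrors the three nested loops in my_func and counts the some_other_func() calls.
    count = 0
    for i in range(1, n):
        for j in range(1, i):
            for k in range(1, j):
                count += 1
    return count

for n in (10, 50, 200):
    exact = count_inner_calls(n)
    closed_form = (n - 1) * (n - 2) * (n - 3) // 6
    print(n, exact, closed_form, exact / n**3)   # the ratio approaches 1/6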

Related

How to calculate time complexity of this loop?

for (let i = 0; i < n; i += 2) {
    // ...operation
}
I have read various docs on time complexity but didn't properly understand them.
The loop counter is simply incremented by a constant step of 2, so the body runs about n/2 times. Big O notation drops constant factors, so the time complexity is therefore O(n).
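A quick way to see it (my own sketch, not from the original answer) is to count the iterations directly; the count is about n/2, which grows linearly in n:

def count_iterations(n):
    # Same loop shape as above, with a counter instead of the operation.
    count = 0
    i = 0
    while i < n:
        count += 1
        i += 2
    return count

for n in (10, 100, 1000):
    print(n, count_iterations(n))   # 5, 50, 500: linear growth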

How recursion reduces the time complexity in merge sort

As per my understanding, time complexity is derived by calculating how the number of operations grows as the input size increases.
In merge sort, there are 2 phases:
Divide the input array into smaller arrays
Sort and merge those arrays
As per a video lecture, the time complexity to divide an array for merge sort is O(log n).
But there he is not counting the number of operations to derive the time complexity, but the number of decompositions, i.e. the number of times the recursive function is called.
*He used recursion to divide the array.
Talking purely in terms of pseudocode, the recursion takes more than n operations in this case, whereas this code always takes n operations:
function divide(arr) {
    for (let i = 0; i < arr.length; i++) {
        arr[i] = [arr[i]];
    }
}
So how is the complexity of the recursive code less than that of the loop?
Recursion doesn't reduce time complexity. You've already shown a diagram for top-down merge sort. For the original bottom-up merge sort, the code treats an array of n elements as n runs of size 1, so the "divide" step takes O(1) time.
Most libraries use some variation of a hybrid of insertion sort and bottom-up merge sort. Top-down merge sort is mostly used for academic purposes.
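To illustrate the answer's point, here is a minimal bottom-up merge sort sketch in Python (my own illustration, not code from the answer). There is no explicit divide step at all: the array is treated as n runs of size 1, and runs of doubling width are merged in O(log n) passes, each doing O(n) merge work.

def merge(left, right):
    # Standard two-way merge of two sorted lists.
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i])
            i += 1
        else:
            out.append(right[j])
            j += 1
    out.extend(left[i:])
    out.extend(right[j:])
    return out

def bottom_up_merge_sort(a):
    a = list(a)
    n = len(a)
    width = 1                        # n runs of size 1: the "divide" is free
    while width < n:
        for lo in range(0, n, 2 * width):
            mid = min(lo + width, n)
            hi = min(lo + 2 * width, n)
            a[lo:hi] = merge(a[lo:mid], a[mid:hi])
        width *= 2                   # O(log n) passes of O(n) work each
    return a

print(bottom_up_merge_sort([5, 2, 9, 1, 7, 3]))   # [1, 2, 3, 5, 7, 9]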

What is the complexity of this program. Is it O(n)?

This is a simple program and I want to know its complexity. I assume it is O(n) as it has only a single operation inside one for loop.
a = int(input("Enter a:"))
b = int(input("Enter b:"))
sol = a
for i in range(a, b):
    sol = sol & (i + 1)
print("\nSol", sol)
Yes, it is O(n), sort of. You have to remember that O(n) means the number of operations grows at most linearly with the size of the input. Perhaps you're worried about the & and (i+1) operations in the for loop. What you need to keep in mind here is that these operations take constant time, since they all operate on a 32-bit integer. Therefore, the only parameter that changes how long the program runs is the actual number of iterations of the for loop.
If you're assuming n = b - a, then this program is O(n). In fact, if you break down the actual runtime:
per loop: 1 AND operation, 1 addition operation
now do (b-a) iterations, so 2 operations per loop, (b-a) times = 2*(b-a)
If we assume n = b-a, then this runtime becomes 2*n, which is O(n).
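As a tiny check (my own sketch), the loop body really does run exactly b - a times, which is what the O(n) claim rests on:

a, b = 7, 42
iterations = sum(1 for _ in range(a, b))
print(iterations, b - a)   # both print 35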
I assume you define n := b - a. The complexity is actually n log(n): there is only one operation in the loop, so the complexity is n * Time(operation in loop), but as i consists of log(n) bits, the complexity is O(n log(n)).
EDIT:
I now regard n := b. It does not affect my original answer, and it makes more sense as it's the size of the input. (It doesn't make sense to say that n = 1 for some big pair a, a+1.)
To make it more efficient, notice that you calculate (a) & (a+1) & (a+2) & ... & (b).
So we just need to set 0's instead of 1's in the binary representation of b in every position where some a <= k < b has a 0 in that position. How can we know whether to set a digit to 0 or not, then? I'll leave it
to you :)
It is possible to do this in log(n) time, the size of the binary representation of b.
So in this case we get that the time is O(log(n)^2) = o(n).
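Filling in the hint with a sketch of my own (not the answerer's code): the AND of every integer in [a, b] keeps only the common binary prefix of a and b, because any bit position below the highest differing bit flips to 0 somewhere inside the range. Stripping the differing low bits takes one step per bit, which is the logarithmic-time idea mentioned above.

def range_and(a, b):
    # Bitwise AND of all integers in [a, b]: drop the differing low bits,
    # then restore the common prefix with zeros below it.
    shift = 0
    while a != b:
        a >>= 1
        b >>= 1
        shift += 1
    return a << shift

# Cross-check against the original loop for one small example.
a, b = 100, 110
sol = a
for i in range(a, b):
    sol = sol & (i + 1)
print(sol, range_and(100, 110))   # both print 96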

Is the approach I have used to find the time complexity correct?

For the following problem I came up with the following algorithm. I am just wondering whether I have calculated the complexity of the algorithm correctly or not.
Problem:
Given a list of integers as input, determine whether or not two integers (not necessarily distinct) in the list have a product k. For example, for k = 12 and list [2,10,5,3,7,4,8], there is a pair, 3 and 4, such that 3×4 = 12.
My solution:
// Imagine A is the list containing integer numbers
for(int i=0; i<A.size(); i++)                       O(n)
{
    for(int j=i+1; j<A.size()-1; j++)               O(n-1)*O(n-(i+1))
    {
        if(A.get(i) * A.get(j) == K)                O(n-2)*O(n-(i+1))
            return "Success";                       O(1)
    }
}
return "FAILURE";                                   O(1)
O(n) + O(n-1)*O(n-i-1) + O(n-2)*O(n-i-1)) + 2*O(1) =
O(n) + O(n^2-ni-n) + O(-n+i+1) + O(n^2-ni-n) + O(-2n+2i+2) + 2O(1) =
O(n) + O(n^2) + O(n) + O(n^2) + O(2n) + 2O(2) =
O(n^2)
Apart from my semi-algorithm, is there any more efficient algorithm?
Let's break down what your proposed algorithm is essentially doing.
For every index i (s.t. 0 ≤ i < n) you compare A.get(i) with the value at every other unique index j (i ≠ j) to determine whether A.get(i) * A.get(j) == K.
An invariant for this algorithm would be that at every iteration, the pair {i,j} being compared hasn't been compared before.
This implementation (assuming it compiles and runs without the runtime exceptions mentioned in the comments) makes a total of nC2 comparisons (where nC2 is the binomial coefficient of n and 2, for choosing all possible unique pairs) and each such comparison would compute at a constant time (O(1)). Note it can be proven that nCk is not greater than n^k.
So O(nC2) makes for a more accurate upper bound for this algorithm - though by common big O notation this would still be O(n^2) since nC2 = n*(n-1)/2 = (n^2-n)/2 which is still order of n^2.
Per your question from the comments:
Is it correct to use "i" in the complexity, as I have used O(n-(i+1))?
i is a running index, whereas the complexity of your algorithm is only affected by the size of your sample, n.
In other words, the total complexity is calculated over all iterations of the algorithm, while i refers to a specific iteration. Therefore it is incorrect to use 'i' in your complexity calculations.
Apart from my semi-algorithm, is there any more efficient algorithm?
Your "semi-algorithm" seems to me the most efficient way to go about this. Any comparison-based algorithm would require querying all pairs in the array, which translates to the runtime complexity detailed above.
Though I have not calculated a lower bound and would be curious to hear if someone knows of a more efficient implementation.
edit: The other answer here shows a good solution to this problem which is (generally speaking) more efficient than this one.
Your algorithm looks like O(n^2) worst case and O(n*log(n)) average case, because the longer the list is, the more likely the loops will exit before evaluating all n^2 pairs.
An algorithm with O(n) worst case and O(log(n)) average case is possible. In real life it would be less efficient than your algorithm for lists where the factors of K are right at the start or the list is short, and more efficient otherwise. (pseudocode not written in any particular language)
var h = new HashSet();
for(int i=0; i<A.size(); i++)
{
    var x = A.get(i);
    if(K % x == 0) // If x is a factor of K
    {
        h.add(x); // Store x in h
        if(h.contains(K/x))
        {
            return "Success";
        }
    }
}
return "FAILURE";
HashSet.add and HashSet.contains are O(1) on average (but slower than List.get even though it is also O(1)). For the purpose of this exercise I am assuming they always run in O(1) (which is not strictly true but close enough for government work). I have not accounted for edge cases, such as the list containing a 0.
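Here is the same idea as runnable Python (my own sketch, not the answerer's code). Unlike the pseudocode above it also skips zero elements, so the divisibility check cannot divide by zero:

def has_pair_with_product(A, K):
    # Single pass with a hash set: for each value x that divides K,
    # check whether its complement K // x has already been seen.
    seen = set()
    for x in A:
        if x != 0 and K % x == 0:
            seen.add(x)              # add before checking, mirroring the pseudocode,
            if K // x in seen:       # so x * x == K succeeds even for one occurrence of x
                return "Success"
    return "FAILURE"

print(has_pair_with_product([2, 10, 5, 3, 7, 4, 8], 12))   # Success (3 * 4 == 12)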

Time complexity with conditional statements

How does one calculate the time complexity of code with conditional statements that may or may not lead to higher-order terms?
For example:
for(int i = 0; i < n; i++){
    // an elementary operation
    for(int j = 0; j < n; j++){
        // another elementary operation
        if (i == j){
            for(int k = 0; k < n; k++){
                // yet another elementary operation
            }
        } else {
            // elementary operation
        }
    }
}
And what if the contents in the if-else condition were reversed?
Your code takes O(n^2). The first two loops take O(n^2) operations. The "k" loop takes O(n) operations and is entered n times (once for each i == j), which gives another O(n^2). The total complexity of your code is therefore O(n^2) + O(n^2) = O(n^2).
Another try:
- The first 'i' loop runs n times.
- The second 'j' loop runs n times, so there are n^2 combinations of (i, j). For each combination:
- if i == j, the 'k' loop runs n times. There are n combinations with i == j,
  so this part of the code runs in O(n^2).
- if not, it performs an elementary operation. There are n^2 - n combinations like that,
  so it takes O(n^2) time.
- The above shows that this code takes O(n^2) operations.
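A quick empirical check (my own sketch, with each elementary operation replaced by a counter increment) shows the total growing like a constant times n^2, consistent with the breakdown above:

def count_ops(n):
    ops = 0
    for i in range(n):
        ops += 1                     # outer elementary operation
        for j in range(n):
            ops += 1                 # inner elementary operation
            if i == j:
                for k in range(n):
                    ops += 1         # the O(n) branch, taken n times in total
            else:
                ops += 1             # the O(1) branch
    return ops

for n in (10, 100, 500):
    print(n, count_ops(n), count_ops(n) / n**2)   # the ratio comes out to exactly 3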
That depends on the kind of analysis you are performing. If you are analysing worst-case complexity, then take the worst complexity of both branches. If you're analysing average-case complexity, you need to calculate the probability of entering one branch or another and multiply each complexity by the probability of taking that path.
If you change the branches, just switch the probability coefficients.