calculating the time complexity of a recursive algorithm - time-complexity

I'm trying to work out the time complexity of this algorithm but I'm not sure how. Will be glad for any help.
int g(int arr[], int start, int end, int k)
{
    if (start > end) return 0;
    int mid = (start + end) / 2;
    if (arr[mid] < k) return 1 + g(arr, mid + 2, end, k);
    if (arr[mid] > k) return 1 + g(arr, start, mid - 1, k);
    return g(arr, start, mid - 1, k) + 1 +
           g(arr, mid + 1, end, k);
}
The answer is O(n).

This is recursion that uses the mechanism of binary search.
Every call compares arr[mid] with the value k: if arr[mid] is less than k we search the right half of the array (the mid + 2 should be mid + 1), if it is greater we search the left half, and if it is equal to k we search both halves of the array.
The worst case is the one where both halves are searched every time, so each call does constant work and spawns two recursive calls, each on half the input (half the array).
Thus we can write something like this:
T(n) = 2*T(n/2) + 1
T(n) = 4*T(n/4) + 2 + 1
T(n) = 8*T(n/8) + 4 + 2 + 1
...continue expanding
T(n) = 2^k*T(n/(2^k)) + (2^k - 1)
The expansion stops at the base case: n/(2^k) = 1 ==> k = log(n)
T(n) = 2^(log(n))*T(1) + 2^(log(n)) - 1 = n + n - 1 = O(n), using the log rule 2^(log(n)) = n.
Even though you didn't ask: the space complexity would be O(log(n)), since the maximum depth of the recursion tree is log(n).
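A quick way to see the linear worst case concretely is to run the recursion on an array where every element equals k, so both halves are searched at every level. Below is a Python sketch of the same function (with the mid + 1 fix applied, and with a call counter added; the counter and the Python names are mine, not part of the original code). Every non-empty call examines one element and spawns two child calls, so on n equal elements there are n non-empty calls and n + 1 empty ones:

```python
def g(arr, start, end, k, calls):
    """Binary-search-style recursion; calls[0] counts total invocations."""
    calls[0] += 1
    if start > end:
        return 0
    mid = (start + end) // 2
    if arr[mid] < k:
        return 1 + g(arr, mid + 1, end, k, calls)    # search right half
    if arr[mid] > k:
        return 1 + g(arr, start, mid - 1, k, calls)  # search left half
    # arr[mid] == k: search BOTH halves -- this is the worst case
    return (g(arr, start, mid - 1, k, calls) + 1
            + g(arr, mid + 1, end, k, calls))

n = 1024
calls = [0]
result = g([7] * n, 0, n - 1, 7, calls)
print(result, calls[0])
```

With all elements equal to k, the result is n and the call count is 2n + 1, i.e. linear in n, matching the O(n) answer.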

Related

How to calculate time complexity of this code?

char[] chars = String.valueOf(num).toCharArray();
int n = chars.length;
for (int i = n - 1; i > 0; i--) {
    if (chars[i - 1] > chars[i]) {
        chars[i - 1]--;
        Arrays.fill(chars, i, n, '9');
    }
}
return Integer.parseInt(new String(chars));
What is the time complexity of this code? Could you teach me how to calculate it? Thank you!
Time complexity is a measure of how a program's run-time changes as input size grows. The first thing you have to determine is what (if any) aspect of your input can cause run-time to vary, and to represent that aspect as a variable. Framing run-time as a function of that variable (or those variables) is how you determine the program's time complexity. Conventionally, when only a single aspect of the input causes run-time to vary, we represent that aspect by the variable n.
Your program primarily varies in run-time based on how many times the for-loop runs. We can see that the for-loop runs the length of chars times (note that the length of chars is the number of digits of num). Conveniently, that length is already denoted as n, which we will use as the variable representing input size. That is, taking n to be the number of digits in num, we want to determine exactly how run-time varies with n by expressing run-time as a function of n.
Also note that when doing complexity analysis, you are primarily concerned with the growth in run-time as n gets arbitrarily large (how run-time scales with n as n goes to infinity), so you typically ignore constant factors and only focus on the highest order terms, writing run-time in "Big O" notation. That is, because 3n, n, and n/2 all grow in about the same way as n goes to infinity, we would represent them all as O(n), because our primary goal is to distinguish this kind of linear growth, from the quadratic O(n^2) growth of n^2, 5n^2 + 10, or n^2 + n, from the logarithmic O(logn) growth of log(n), log(2n), or log(n) + 1, or the constant O(1) time (time that doesn't scale with n) represented by 1, 5, 100000 etc.
So, let's try to express the total number of operations the program does in terms of n. It can be helpful at first to just go line-by-line for this and to add everything up in the end.
The first line, turning num into an n-character string and then turning that string into a length-n array of chars, does O(n) work. (n work to turn each of n digits into a character, then another n work to put each of those characters into an array; n + n = 2n = O(n) total work.)
char[] chars = String.valueOf(num).toCharArray(); // O(n)
The next line just reads the length value from the array and saves it as n. This operation takes the same amount of time no matter how long the array is, so it is O(1).
int n = chars.length; // O(1)
The for-loop runs once for (almost) every digit of num, n - 1 iterations in total, so O(n) loops:
for (int i = n - 1; i > 0; i--) { // O(n)
Inside the for loop, a conditional check is performed, and then, if it is true, a value is decremented and the array from i to n is filled.
if (chars[i - 1] > chars[i]) { // O(1)
    chars[i - 1]--; // O(1)
    Arrays.fill(chars, i, n, '9'); // O(n-i) = O(n)
}
The fill operation is O(n-i) because that is how many characters may be changed to '9'. In the worst case (i close to 0) that is O(n), so we use O(n) as the per-iteration bound.
Finally, you parse the n characters of chars as an int, and return it. Altogether:
static Integer foo(int num) {
    char[] chars = String.valueOf(num).toCharArray(); // O(n)
    int n = chars.length;                             // O(1)
    for (int i = n - 1; i > 0; i--) {                 // O(n)
        if (chars[i - 1] > chars[i]) {                // O(1)
            chars[i - 1]--;                           // O(1)
            Arrays.fill(chars, i, n, '9');            // O(n)
        }
    }
    return Integer.parseInt(new String(chars));       // O(n)
}
When we add everything up, we get the total time-complexity as a function of n, T(n).
T(n) = O(n) + O(1) + O(n)*(O(1) + O(1) + O(n)) + O(n)
There is a product in the expression to represent the total work done across all iterations of the for-loop: O(n) iterations times O(1) + O(1) + O(n) work in each iteration. In reality, some iterations the for loop might only do O(1) work (when the condition is false), but in the worst case the whole body is executed every iteration, and complexity analysis is typically done for the worst case unless otherwise specified.
You can simplify this function for run-time by using the fact that big O strips constants and lower-order terms, along with the facts that O(a) + O(b) = O(a + b) and a*O(b) = O(a*b).
T(n) = O(n+1+n) + O(n)*O(1 + 1 + n)
= O(2n+1) + O(n)*O(n+2)
= O(n) + O(n)*O(n)
= O(n) + O(n^2)
= O(n^2 + n)
= O(n^2)
So you would say that the overall time complexity of the program is O(n^2), meaning that run-time scales quadratically with input size in the worst case.
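As a sanity check on the analysis, here is a Python translation of the snippet (the function name monotone is mine). The worst case for the fill is a digit string where the condition fires on every iteration, e.g. strictly decreasing digits:

```python
def monotone(num):
    """Python translation of the Java snippet analysed above."""
    chars = list(str(num))
    n = len(chars)
    for i in range(n - 1, 0, -1):       # O(n) iterations
        if chars[i - 1] > chars[i]:     # O(1) comparison
            # decrement the digit and fill the tail with '9's
            chars[i - 1] = chr(ord(chars[i - 1]) - 1)
            chars[i:] = ['9'] * (n - i)  # O(n - i) fill
    return int(''.join(chars))
```

For example, monotone(332) returns 299 (the fill runs twice), while monotone(1234) returns 1234 unchanged because the condition never fires.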

Determine time-complexity of recursive function

Can anybody help me find the time complexity of this recursive function?
int test(int m, int n) {
    if (n == 0)
        return m;
    else
        return (3 + test(m + n, n - 1));
}
test(m + n, n - 1) is called n times before reaching the base case n == 0 (n decreases by one on each call), so the complexity is O(n).
Also, this is a duplicate of Determining complexity for recursive functions (Big O notation)
It is really important to understand recursion and the time-complexity of recursive functions.
The first step to understanding easy recursive functions like this one is to be able to write the same function in an iterative way. This is not always easy and not always reasonable, but in the case of an easy function like yours this shouldn't be a problem.
So what happens in your function in every recursive call?
is n > 0?
If yes:
m = m + n + 3
n = n - 1
If no:
return m
Now it should be pretty easy to come up with the following (iterative) alternative:
int testIterative(int m, int n) {
    while (n != 0) {
        m = m + n + 3;
        n = n - 1;
    }
    return m;
}
Please note: you should pay attention to negative n. Do you see what the problem is? (For a negative n, the condition n != 0 never becomes false, so the function never terminates.)
Time complexity
After looking at the iterative version, it is easy to see that the run-time depends on n: the time-complexity therefore is O(n).

How are they calculating the Time Complexity for this Problem

Problem 6: Find the complexity of the below program: 
void function(int n)
{
    int i = 1, s = 1;
    while (s <= n)
    {
        i++;
        s += i;
        printf("*");
    }
}
Solution: We can define the terms s according to the relation s_i = s_{i-1} + i. The value of i increases by one for each iteration, so the value contained in s at the i-th iteration is the sum of the first i positive integers. If k is the total number of iterations taken by the program, then the while loop terminates once 1 + 2 + 3 + ... + k = k(k+1)/2 > n, i.e. k = O(√n).
The time complexity of the above function is therefore O(√n).
FROM: https://www.geeksforgeeks.org/analysis-algorithms-set-5-practice-problems/
I've looked it over and over. Apparently they are saying the time complexity is O(√n), but I don't understand how they are getting to this result. Can anyone break it down in detail?
At the start of the while-loop, we have s = 1; i = 1, and n is some (big) number. In each step of the loop, the following is done,
Take the current i, and increment it by one;
Add this new value for i to the sum s.
It is not difficult to see that the successive values of i form the sequence 1, 2, 3, ..., and those of s the sequence 1, 1 + 2, 1 + 2 + 3, .... By a result attributed to the young Gauss, the sum of the first k natural numbers 1 + 2 + 3 + ... + k is k(k + 1) / 2. You should recognise that the sequence s fits this description, where k indicates the number of iterations!
The while-loop terminates when s > n, which is now equivalent to finding the lowest iteration number k such that (k(k + 1) / 2) > n. Simplifying for the asymptotic case, this gives a result such that k^2 > n, which we can simplify for k as k > sqrt(n). It follows that this algorithm runs in a time proportional to sqrt(n).
It is clear that k is the first integer such that k(k+1)/2 > n (otherwise the loop would have stopped earlier).
Then k-1 cannot have this same property, which means that (k-1)((k-1)+1)/2 <= n or (k-1)k/2 <= n. And we have the following sequence of implications:
(k-1)k/2 <= n → (k-1)k <= 2n
            → (k-1)^2 < 2n            ; since k-1 < k
            → k < sqrt(2n) + 1        ; solve for k
            → k < sqrt(2n) + sqrt(2n) ; since 1 <= sqrt(2n)
            = 2*sqrt(2)*sqrt(n)
            = O(sqrt(n))
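Both derivations can be checked empirically: run the loop, count its iterations, and compare against sqrt(2n). A small Python sketch (the counter k is mine, added for measurement):

```python
import math

def count_iterations(n):
    """Run the while loop from the problem and count the iterations."""
    i, s = 1, 1
    k = 0
    while s <= n:
        i += 1
        s += i
        k += 1
    return k

n = 1_000_000
k = count_iterations(n)
print(k, math.sqrt(2 * n))  # k lands within a couple of steps of sqrt(2n)
```

For n = 1,000,000, sqrt(2n) ≈ 1414.2 and the loop runs 1413 times, exactly the √n scaling the solution claims.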

Calculate function time complexity

I am trying to calculate the time complexity of this function
Code
int Almacen::poner_items(id_sala s, id_producto p, int cantidad) {
    it_prod r = productos.find(p);
    if (r != productos.end()) {
        int n = salas[s - 1].size();
        int m = salas[s - 1][0].size();
        for (int i = n - 1; i >= 0 && cantidad > 0; --i) {
            for (int j = 0; j < m && cantidad > 0; ++j) {
                if (salas[s - 1][i][j] == "NULL") {
                    salas[s - 1][i][j] = p;
                    r->second += 1;
                    --cantidad;
                }
            }
        }
    }
    else {
        displayError();
        return -1;
    }
    return cantidad;
}
The variable productos is a std::map and its find method has a time complexity of O(log n), and the other variable salas is a std::vector.
I calculated the time and found it to be log(n) + nm, but I'm not sure whether that is the correct expression, or whether I should leave it as nm because that is the dominant term, or whether I should use n² only.
Thanks
The overall function is O(nm). Big-O notation is all about "in the limit of large values" (and ignores constant factors). "Small" overheads (like an O(log n) lookup, or even an O(n log n) sort) are ignored.
Actually, the O(n log n) sort case is a bit more complex. If you expect m to be typically the same sort of size as n, then O(nm + nlogn) == O(nm), if you expect n ≫ m, then O(nm + nlogn) == O(nlogn).
Incidentally, this is not a question about C++.
In general when using big O notation, you only leave the most dominant term when taking all variables to infinity.
n by itself is much larger than log n at infinity, so even without m you can (and generally should) drop the log n term, so O(nm) looks fine to me.
In non-theoretical use cases, it is sometimes important to understand the actual complexity (for non-infinite inputs), since algorithms that are slow at infinity can sometimes produce better results for shorter inputs (there are examples where O(1) algorithms have such a terrible constant that an exponential algorithm does better in real life). Quicksort is a practical example of an O(n^2) algorithm that often does better than its O(n log n) counterparts.
Read about "Big O Notation" for more info.
let
k = productos.size()
n = salas[s - 1].size()
m = salas[s - 1][0].size()
your algorithm is O(log(k) + nm). You need to use a distinct name for each independent variable.
Now it might be the case that there is a relation between k, n, m and you can re-label with a reduced set of variables, but that is not discernible from your code, you need to know about the data.
It may also be the case that some of these terms won't grow large, in which case they are actually constants, i.e. O(1).
E.g. you may know that k << n, k << m, and n ~= m, which allows you to describe it as O(n^2).
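To get a feel for why the log(k) term is dropped next to nm, it helps to plug in concrete sizes (the numbers below are arbitrary examples of mine, not values from the question):

```python
import math

n = m = 1000          # example grid dimensions
k = 1_000_000         # example number of entries in the map

lookup = math.log2(k)  # cost of the single productos.find(p): ~20 comparisons
scan = n * m           # cost of the double loop over the grid: 1,000,000 steps

print(lookup, scan, lookup / scan)
```

The one-off logarithmic lookup is five orders of magnitude smaller than the scan, and the gap only widens as the sizes grow, which is exactly what O(nm) expresses.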

Time complexity of this loop with multiplication

I wonder what the complexity of this loop is in terms of n
for (int i = 1; i <= n; i++) {
    for (int j = 1; j * i <= n; j++) {
        minHeap.offer(arr1[i - 1] + arr2[j - 1]);
    }
}
What I did was follow the concept of Big-O and give it an upper bound of O(n^2).
This will involve some math, so get ready :)
Let's first count how many times the line minHeap.offer(arr1[i - 1] + arr2[j - 1]); gets invoked. For each i from the outer loop, the number of iterations of the inner loop is n/i, because the condition j * i <= n is equivalent to j <= n/i. Therefore, the total number of inner-loop iterations is n/1 + n/2 + n/3 + ... + 1, or, formally written, the sum of floor(n/i) for i = 1 to n.
This sum is n times the n-th harmonic number H_n, and the well-known approximation H_n ≈ ln(n) shows it is Θ(n * logn). Since we are interested only in asymptotic complexity, we take only the highest order term, which is n * logn. If there were some O(1) operation instead of minHeap.offer(arr1[i - 1] + arr2[j - 1]);, that would be the solution to your problem. However, the complexity of the offer method in Java is O(logk), where k denotes the current size of the priority queue. In our case, the priority queue gets larger and larger, so the total running time is log1 + log2 + ... + log(n * logn) = log(1 * 2 * ... * (n * logn)) = log((n * logn)!).
We can additionally simplify this by using Stirling's approximation, so the final complexity is O(n * logn * log(n * logn)).
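The first step of the argument (that the sum n/1 + n/2 + ... + 1 is Θ(n * logn)) is easy to check numerically. A small Python sketch (the function name is mine):

```python
import math

def inner_iterations(n):
    """Total executions of the inner loop body: sum of floor(n/i) for i = 1..n."""
    return sum(n // i for i in range(1, n + 1))

n = 1000
total = inner_iterations(n)
print(total, round(n * math.log(n)))  # the sum tracks n*ln(n) closely
```

For n = 1000 the sum comes out near 7000 while n*ln(n) ≈ 6908; the two agree up to a lower-order term, consistent with the harmonic-number approximation.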