Calculating Time Complexity of a recursive function

T(n) = T(cn) + T((1 - c)n) + 1,   where 0 < c < 1
Base case: if (n <= 1) return;
n ranges over the positive integers.
I have to find the Big-Theta function of this recursive function.
I've tried to expand the recurrence level by level, but it gets complicated and no pattern emerges.
I also tried this -
assume that c<(1-c).
so -
2T(cn) + 1 <= T(cn) + T((1-c)n)+1 <= 2T((1-c)n)+1
This gave me a lower bound and an upper bound, but not a Θ bound :(

As c approaches either 0 or 1, the recursion approaches T(n) = T(n-1) + 2 (assuming that T(0) = 1 as well). This has as a solution the linear function T(n) = 2n - 1 for n > 0.
For c = 1/2, the recursion becomes T(n) = 2T(n/2) + 1. It looks like T(n) = 2n - 1 is a solution to this for n > 0.
This seems like strong evidence that T(n) = 2n - 1 is a solution for all c: it works at both ends and in the middle. If we substitute it into the recurrence:
2n - 1 = (2cn - 1) + (2(1 - c)n - 1) + 1
       = 2cn - 1 + 2n - 2cn - 1 + 1
       = 2n - 1
we find that T(n) = 2n - 1 is a solution in the general case, so T(n) = Θ(n).
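As a quick numerical sanity check (my own addition, not part of the argument above), the following C program evaluates the recurrence directly for a few values of c and compares the result with 2n - 1. Because the base case T(x) = 1 applies to every real x <= 1, the computed values only approximate 2n - 1, but the linear growth is easy to see:

#include <stdio.h>

/* Evaluate T(n) = T(cn) + T((1-c)n) + 1 with T(x) = 1 for x <= 1.
 * The base case makes the result only approximate 2n - 1, but the
 * growth is clearly linear for every 0 < c < 1. */
static double T(double n, double c)
{
    if (n <= 1.0)
        return 1.0;
    return T(c * n, c) + T((1.0 - c) * n, c) + 1.0;
}

int main(void)
{
    const double cs[] = { 0.1, 0.3, 0.5 };
    for (int i = 0; i < 3; i++)
        for (double n = 100.0; n <= 1600.0; n *= 2.0)
            printf("c = %.1f  n = %6.0f  T(n) = %8.0f  2n - 1 = %6.0f\n",
                   cs[i], n, T(n, cs[i]), 2.0 * n - 1.0);
    return 0;
}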


How do I properly calculate this time complexity

I'm examining this code as preparation for my test, and I'm having some trouble figuring out the correct time complexity:
a = 1;
while (a < n) {
    b = 1;
    while (b < a * a)   /* a^2 in the original, i.e. a squared */
        b++;
    a = a * 2;
}
The values of a are 1, 2, 4, 8, ..., 2^(log n) = n,
so the outer loop runs log n times.
Each inner loop runs a^2 iterations, so basically what I've come up with is:
T(n) = 1 + 4 + 16 + 64 + ... + (2^logn)^2
I'm having problems finding the general term of this series and therefore getting to a final result.
(maybe due to being completely off in my calculations though)
Would appreciate any help, thank you.
Your calculations are alright; your analysis of the inner while-loop is correct. We can demonstrate this with a small table:
  outer iteration k :  0   1   2   ...  lg n
  a = 2^k           :  1   2   4   ...  n
  inner iterations  :  1   4   16  ...  n^2
This is basically the sum of a geometric progression with:
a1 = 1, q = 4, #n = lg n + 1
Where #n is the number of elements.
We have: Sn = 1 * (4^(lg n + 1) - 1)/(4 - 1) = (4*n^2 - 1)/3
Thus we can say your code's running time is Θ(n^2).
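If you want to convince yourself empirically (this check is my addition, not part of the original answer), count the inner-loop iterations directly and watch count / n^2 settle to a constant, which is exactly what Θ(n^2) means up to constant factors:

#include <stdio.h>

/* Count the inner-loop iterations exactly.  The ratio count / n^2
 * approaches a constant (about 1/3 with these loop bounds); the exact
 * bounds only change the constant, not the Theta(n^2) growth. */
int main(void)
{
    for (long n = 16; n <= 4096; n *= 4) {
        long count = 0;
        for (long a = 1; a < n; a *= 2)
            for (long b = 1; b < a * a; b++)
                count++;
        printf("n = %5ld  count = %10ld  count/n^2 = %.4f\n",
               n, count, (double)count / ((double)n * (double)n));
    }
    return 0;
}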

How are they calculating the Time Complexity for this Problem

Problem 6: Find the complexity of the below program: 
void function(int n)
{
    int i = 1, s = 1;
    while (s <= n)
    {
        i++;
        s += i;
        printf("*");
    }
}
Solution: We can define the terms of s according to the relation s_i = s_{i-1} + i. The value of i increases by one in each iteration, so the value contained in s at the i-th iteration is the sum of the first i positive integers. If k is the total number of iterations taken by the program, the while loop terminates once 1 + 2 + 3 + ... + k = k(k+1)/2 > n, which gives k = O(√n).
The time complexity of the above function is therefore O(√n).
FROM: https://www.geeksforgeeks.org/analysis-algorithms-set-5-practice-problems/
I've been looking this over and over. Apparently they are saying the time complexity is O(√n), but I don't understand how they get to this result. Can anyone break it down in detail?
At the start of the while-loop we have s = 1 and i = 1, and n is some (big) number. In each step of the loop, the following is done:
Take the current i, and increment it by one;
Add this new value for i to the sum s.
It is not difficult to see that the successive values of i form the sequence 1, 2, 3, ..., and those of s the sequence 1, 1 + 2, 1 + 2 + 3, .... By a result attributed to the young Gauss, the sum of the first k natural numbers is 1 + 2 + 3 + ... + k = k(k + 1)/2. You should recognise that s fits this description, where k is the number of iterations!
The while-loop terminates when s > n, which is equivalent to finding the lowest iteration number k such that k(k + 1)/2 > n. Simplifying for the asymptotic case, this gives k^2 > n, i.e. k > sqrt(n). It follows that this algorithm runs in time proportional to sqrt(n).
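To see this concretely (my own check, not part of the quoted solution), instrument the loop and compare the iteration count with sqrt(2n):

#include <stdio.h>
#include <math.h>

/* Run the loop from the problem, counting iterations, and compare the
 * count with sqrt(2n); the two track each other closely. */
static long iterations(long n)
{
    long i = 1, s = 1, k = 0;
    while (s <= n) {
        i++;
        s += i;
        k++;
    }
    return k;
}

int main(void)
{
    for (long n = 10; n <= 10000000; n *= 10)
        printf("n = %9ld  iterations = %6ld  sqrt(2n) = %9.1f\n",
               n, iterations(n), sqrt(2.0 * (double)n));
    return 0;
}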
It is clear that k is the first integer such that k(k+1)/2 > n (otherwise the loop would have stopped earlier).
Then k-1 cannot have this same property, which means that (k-1)((k-1)+1)/2 <= n or (k-1)k/2 <= n. And we have the following sequence of implications:
(k-1)k/2 <= n → (k-1)k <= 2n
             → (k-1)^2 < 2n            ; since k-1 < k
             → k <= sqrt(2n) + 1       ; solve for k
               <= sqrt(2n) + sqrt(2n)  ; since 1 < sqrt(2n)
               = 2*sqrt(2)*sqrt(n)
               = O(sqrt(n))

Big-O notation proof

I'm trying to prove that the formula (n^2 + 1)/(n + 1) is O(n).
As you know, we need to come up with n0 and C.
I'm a little confused about how to choose an appropriate C, since the expression here is a quotient.
So with C = 1, I want (n^2 + 1)/(n + 1) <= n.
I tried (n^2 + n)/(n + n) >= (n^2 + 1)/(n + 1),
but I'm stuck here on how to simplify the division.
As n tends to infinity, your original expression behaves like n^2/n = n, so it is O(n).
Choosing c = 1:
(n^2 + 1)/(n + 1) <= 1*n definition of Big-Oh with c = 1
n^2 + 1 <= n^2 + n multiplying both sides by n + 1
1 <= n subtracting n^2 from both sides
n >= 1 rearranging
Therefore, the choice n0 = 1 works for c = 1.
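As a sanity check (my addition, under the same choice c = 1 and n0 = 1), you can tabulate both sides of the inequality and confirm that (n^2 + 1)/(n + 1) never exceeds n for n >= 1:

#include <stdio.h>

/* Tabulate (n^2 + 1)/(n + 1) against 1*n; the left side stays at or
 * below n for every n >= 1, matching the proof above. */
int main(void)
{
    for (double n = 1.0; n <= 1000000.0; n *= 10.0)
        printf("n = %9.0f  (n^2+1)/(n+1) = %12.2f  1*n = %9.0f\n",
               n, (n * n + 1.0) / (n + 1.0), n);
    return 0;
}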

Finding Big O of the Harmonic Series

Prove that
1 + 1/2 + 1/3 + ... + 1/n is O(log n).
Assume n = 2^k
I put the series into a summation, but I have no idea how to tackle this problem. Any help is appreciated.
This follows easily from a simple fact in calculus, namely that the sum can be bracketed by integrals of 1/x, and we have the following inequality:
log(n + 1) = ∫[1, n+1] dx/x  <=  1 + 1/2 + ... + 1/n  <=  1 + ∫[1, n] dx/x = 1 + log(n)
Here we can conclude that S = 1 + 1/2 + ... + 1/n is both Ω(log n) and O(log n), thus it is Θ(log n); the bound is actually tight.
Here's a formulation using discrete mathematics: group the terms by powers of two,
1 + (1/2 + 1/3) + (1/4 + 1/5 + 1/6 + 1/7) + ... + 1/n
Each parenthesized group contains 2^j terms, each at most 1/2^j, so each group sums to at most 1. With n = 2^k there are k + 1 = log(n) + 1 groups.
So, H(n) = O(log n).
If the problem were changed to
1 + 1/2 + 1/4 + ... + 1/n
the series could be written as
1/2^0 + 1/2^1 + 1/2^2 + ... + 1/2^k
How many terms are there? From 0 to k, that is k + 1 terms. Comparing the two forms we see that 2^k = n, hence k = log(n). So the number of terms is log(n) + 1 = O(log n).
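To see the Θ(log n) bound numerically (my own illustration, not part of the answers above), accumulate H(n) and compare it with log(n); the difference converges to the Euler-Mascheroni constant (about 0.5772), so the two stay within a constant of each other:

#include <stdio.h>
#include <math.h>

/* Accumulate the harmonic sum H(n) and compare it with log(n).  The
 * difference H(n) - log(n) converges, so H(n) = Theta(log n). */
int main(void)
{
    double h = 0.0;
    long next = 10;
    for (long i = 1; i <= 10000000; i++) {
        h += 1.0 / (double)i;
        if (i == next) {
            printf("n = %9ld  H(n) = %8.5f  log(n) = %8.5f  diff = %.5f\n",
                   i, h, log((double)i), h - log((double)i));
            next *= 10;
        }
    }
    return 0;
}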

Understanding recurrence relation

I have this recurrence relation
T(n) = T(n-1) + n, for n ≥ 2
T(1) = 1
Practice exercise: Solve recurrence relation using the iteration method and give an asymptotic running time.
So I solved it like this:
T(n) = T(n-1) + n
= T(n-2) + (n - 1) + n
= T(n-3) + (n - 2) + (n - 1) + n
= …
= T(1) + 2 + … + (n - 2) + (n - 1) + n **
= 1 + 2 + … + (n - 2) + (n - 1) + n
= O(n^2)
I have some questions:
1) How can I find the asymptotic running time?
**2) At this stage of the problem, T(1) means that n was reduced step by step until the argument reached 1, right?
3) What if T(0) = 1, and what if T(2) = 1?
Edit: 4) Why is n ≥ 2 useful?
I really need to understand this for my mid-term test.
T(n) = T(n-1) + n, for n ≥ 2
T(1) = 1
If T(n) represents the running time: you have already found the asymptotic running time, O(n^2) (quadratic). In fact, the sum 1 + 2 + … + n equals n(n+1)/2 exactly, so the bound is tight: Θ(n^2).
If the relation is changed to T(0) = 1 or T(2) = 1, then the running time is still quadratic. The asymptotic behavior does not change if you add a constant or multiply by a constant, and changing the initial condition only adds a constant to the following terms.
n ≥ 2 is present in the relation so that T(n) is defined exactly once for every positive n. Otherwise, both lines would apply to T(1). You cannot compute T(1) from T(0) using T(n) = T(n-1) + n, since T(0) is not defined; and even if you could, T(1) would then be defined in two different (and potentially inconsistent) ways.
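A small check (my addition, not part of the answer above): unroll the recurrence iteratively and compare it with the closed form n(n+1)/2 that the sum 1 + 2 + … + n produces:

#include <stdio.h>

/* Unroll T(n) = T(n-1) + n from T(1) = 1 and compare with n(n+1)/2,
 * the closed form obtained by the iteration method. */
int main(void)
{
    long t = 1;                       /* T(1) = 1 */
    for (long n = 2; n <= 15; n++) {
        t += n;                       /* T(n) = T(n-1) + n */
        printf("T(%2ld) = %4ld   n(n+1)/2 = %4ld\n",
               n, t, n * (n + 1) / 2);
    }
    return 0;
}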