What will be the complexity of the recurrence relation T(n) = 0.5T(n/2) + 1/n?

How do I calculate the complexity in this case, since the master method fails for a < 1?

Let's solve the recurrence relation below:
T(n) = (1/2) * T(n/2) + (1/n)
T(n/2) = (1/2) * T(n/4) + (1/(n/2)) = (1/2) * T(n/4) + (2/n)
Therefore:
T(n) = (1/4) * T(n/4) + (1/n) + (1/n)
Notice the pattern. In general, after k substitutions:
T(n) = (1/2^k) * T(n/2^k) + (1/n)*k
2^k = n --> k = log(n)
T(n) = (1/n) * T(1) + (1/n)*log(n)
T(n) = (log(n) + c)/n, where c is a constant
T(n) ~ log(n)/n
Notice that as n goes to infinity, the total number of operations goes to zero.
Recall the formal definition of Big-Oh notation:
f(n) = O(g(n)) means there are positive constants c and k, such that 0 ≤ f(n) ≤ cg(n) for all n ≥ k.
T(n) = O(1) since 0 <= T(n) <= c*1 for all n >= k for some k>0 and c>0.
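As a quick numerical sanity check (my own sketch, assuming the base case T(1) = 1; any other constant only shifts c), you can iterate the recurrence directly and compare it to log(n)/n:

#include <math.h>
#include <stdio.h>

/* Iterate T(n) = 0.5*T(n/2) + 1/n for n = 2^k and compare it
 * against the derived asymptotic log2(n)/n. */
double T(double n) {
    if (n <= 1.0)
        return 1.0;                       /* assumed base case T(1) = 1 */
    return 0.5 * T(n / 2.0) + 1.0 / n;
}

int main(void) {
    for (int k = 1; k <= 20; k++) {
        double n = pow(2.0, k);           /* n = 2^k, so log2(n) = k */
        printf("n = %8.0f  T(n) = %.6e  log2(n)/n = %.6e\n",
               n, T(n), k / n);
    }
    return 0;
}

Both columns shrink toward zero and their ratio tends to 1, matching T(n) ~ log(n)/n and hence T(n) = O(1).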


Is log(n!) equivalent to nlogn? (Big O notation)

In my book there is a multiple choice question:
What is the big O notation of the following function:
n^log(2) +log(n^n) + nlog(n!)
I know that log(n!) belongs to O(nlogn), but I read online that they are equivalent. How is log(n!) the same thing as nlogn?
how is:
log(n!) = logn + log(n-1) + ... + log2 + log1
equivalent to nlogn?
Let n/2 be the quotient of the integer division of n by 2. We have:
log(n!) = log(n) + log(n-1) + ... + log(n/2) + log(n/2 - 1) + ... + log(2) + log(1)
>= log(n/2) + log(n/2) + ... + log(n/2) + log(n/2 - 1) + ... + log(2)
>= (n/2)log(n/2) + (n/2)log(2)
= (n/2)(log(n) - log(2)) + (n/2)log(2)
= (n/2)log(n)
then
n log(n) <= 2log(n!) = O(log(n!))
and n log(n) = O(log(n!)). Conversely,
log(n!) <= log(n^n) = n log(n)
and log(n!) = O(n log(n)).
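As a numerical illustration (my own sketch, not part of the proof), you can accumulate log(n!) as a running sum of log(i) and watch the ratio log(n!)/(n log n) climb toward 1 (slowly, since Stirling's correction term is Θ(n)):

#include <math.h>
#include <stdio.h>

/* Accumulate log(n!) = log(1) + log(2) + ... + log(n) and compare
 * it to n*log(n); by the argument above the ratio tends to 1. */
int main(void) {
    double log_fact = 0.0;
    for (int n = 1; n <= 1000000; n++) {
        log_fact += log((double)n);       /* log(n!) so far */
        if (n % 100000 == 0)
            printf("n = %7d  log(n!)/(n*log(n)) = %.6f\n",
                   n, log_fact / (n * log((double)n)));
    }
    return 0;
}

At n = 10^6 the ratio is about 0.93 and still rising, consistent with log(n!) = Θ(n log n).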

Solving recurrence relation T(n) = T(n-1) + n

I see from a previous answer to this question that the person gave:
T(n) = T(n-2) + n-1 + n
T(n) = T(n-3) + n-2 + n-1 + n
T(n) = T(n-k) + kn - k(k-1)/2
I don't completely understand the third line. I can see they may have derived it from the arithmetic series formula n(n+1)/2, but how did they get kn and the minus sign in front of k(k-1)/2?
starting from:
T(n) = T(n-2) + n-1 + n
we may rewrite it as follows:
T(n) = T(n-2) + 2n - 1
The second formula says:
T(n) = T(n-3)+n-2+n-1+n
let us rewrite it the same way we did with the first one:
T(n) = T(n-3)+n+n+n-2-1
T(n) = T(n-3)+3n-2-1
By expanding more terms, we notice that the number subtracted from n in the recursive term T(n-3) is always the same as the number multiplied by n. We may rewrite it as follows:
T(n) = T(n-k)+kn+...
We also notice that -2 - 1 is the arithmetic series, negated, starting from k-1. The sum 1 + 2 + ... + (k-1) is (k-1)*k/2, just like n(n+1)/2. So the relation would be
T(n) = T(n-k) + kn - k*(k-1)/2
Hope this helps ;)
The k(k-1)/2 term is just the sum of the numbers 0 to k-1. You can see why you need to subtract it from the following calculation:
T(n) =
T(n-k) + n + (n-1) + (n-2) + ... + (n-(k-1)) =
T(n-k) + (n-0) + (n-1) + (n-2) + ... + (n-(k-1)) =
T(n-k) + n + n + n + ... + n - 0 - 1 - 2 ... - (k-1) =
T(n-k) + kn - (0 + 1 + 2 + ... + (k-1)) =
T(n-k) + kn - k*(k-1)/2
If you look closely:
T(n) = T(n-2) + n-1 + n = T(n-2) + 2n -1
T(n)= T(n-3) + n-2 + n-1 + n = T(n-3)+ 3n -(2+1)
.
.
.
T(n) = T(n-k) + n-(k-1) + n-(k-2) + ... + n
     = T(n-k) + k*n - (1 + 2 + ... + (k-2) + (k-1))
     = T(n-k) + kn - k(k-1)/2
You can prove it formally by induction.
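A small check of the expansion (my own sketch, assuming the base case T(0) = 0) confirms that T(n) = T(n-k) + kn - k(k-1)/2 holds for every k:

#include <stdio.h>

/* T(n) = T(n-1) + n with the assumed base case T(0) = 0. */
long T(int n) {
    return (n <= 0) ? 0 : T(n - 1) + n;
}

int main(void) {
    int n = 20;
    for (int k = 1; k <= n; k++) {
        /* the partial expansion derived above */
        long expanded = T(n - k) + (long)k * n - (long)k * (k - 1) / 2;
        printf("k = %2d  T(n) = %ld  expansion = %ld\n",
               k, T(n), expanded);
    }
    return 0;
}

Setting k = n gives T(n) = T(0) + n^2 - n(n-1)/2 = n(n+1)/2, i.e. T(n) = Θ(n^2).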

Time complexity of recursive function inside for loop

If we have a function:
int x = 0;
int fun(int n)
{
    if (n == 0)
        return 1;
    for (int i = 0; i < n; i++)
        x += fun(i);
    return x; // note: this return is missing in the original snippet
}
According to me, the time complexity can be calculated as:
T(n) = T(n-1) + T(n-2) +...+ T(0).
T(n) = nT(n-1).
T(n) = O(n!).
Am I correct?
1. T(n) = T(n-1) + T(n-2) + T(n-3) + .... + T(0)
// Replace n with n-1
2. T(n-1) = T(n-2) + T(n-3) + .... + T(0)
// Replace T(n-2) + T(n-3) + .... + T(0) with T(n-1) in the 1st equation
T(n) = T(n-1) + T(n-1)
= 2 * T(n-1)
= 2 * 2 * T(n-2) // Using T(n-1) = 2 * T(n-2)
...
= 2^n * T(n-n)
= 2^n * T(0) // Consider T(0) = 1
= 2^n
If you're measuring the number of function calls (or additions -- it turns out the same), the correct recurrence relations are:
T(0) = 0
T(n) = T(0) + T(1) + T(2) + ... + T(n-1) + n
You can compute the first few values:
T(0) = 0
T(1) = 1
T(2) = 3
T(3) = 7
T(4) = 15
You can guess from this that T(n) = 2^n - 1, and that's easy to check by a proof by induction.
In some sense you are right that T(n) = O(n!) since n! > 2^n for n>3, but T(n) = O(2^n) is a tighter bound.
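You can confirm the call counts empirically (my own sketch built on the question's code, with the missing return added; the counter increments once per recursive invocation):

#include <stdio.h>

static long calls = 0;            /* number of recursive calls made */
static int x = 0;

int fun(int n) {
    if (n == 0)
        return 1;
    for (int i = 0; i < n; i++) {
        calls++;                  /* about to make one recursive call */
        x += fun(i);
    }
    return x;
}

int main(void) {
    for (int n = 0; n <= 10; n++) {
        calls = 0;
        x = 0;
        fun(n);
        printf("n = %2d  calls = %5ld  2^n - 1 = %5ld\n",
               n, calls, (1L << n) - 1);
    }
    return 0;
}

The two columns agree: fun(n) makes exactly 2^n - 1 recursive calls, which is Θ(2^n).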

Complexity of the program [duplicate]

In the book Programming Interviews Exposed it says that the complexity of the program below is O(N), but I don't understand how this is possible. Can someone explain why this is?
int var = 2;
for (int i = 0; i < N; i++) {
    for (int j = i + 1; j < N; j *= 2) {
        var += var;
    }
}
You need a bit of math to see that. The inner loop iterates Θ(1 + log [N/(i+1)]) times (the 1 + is necessary since for i >= N/2, [N/(i+1)] = 1 and the logarithm is 0, yet the loop iterates once). j takes the values (i+1)*2^k until it is at least as large as N, and
(i+1)*2^k >= N <=> 2^k >= N/(i+1) <=> k >= log_2 (N/(i+1))
using mathematical division. So the update j *= 2 is called ceiling(log_2 (N/(i+1))) times and the condition is checked 1 + ceiling(log_2 (N/(i+1))) times. Thus we can write the total work
∑_{i=0}^{N-1} (1 + log(N/(i+1))) = N + N*log N - ∑_{j=1}^{N} log j
= N + N*log N - log N!
Now, Stirling's formula tells us
log N! = N*log N - N + O(log N)
so we find the total work done is indeed O(N).
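To watch the cancellation happen (my own sketch, not part of the original answer), evaluate the estimate N + N*log N - log N! for growing N; per Stirling, the work per element settles near the constant 1 + 1/ln 2 ≈ 2.44:

#include <math.h>
#include <stdio.h>

/* Evaluate (N + N*log2(N) - log2(N!)) / N at powers of two;
 * Stirling's formula predicts it approaches 1 + 1/ln(2). */
int main(void) {
    double log2_fact = 0.0;                    /* running log2(N!) */
    for (int n = 1; n <= (1 << 20); n++) {
        log2_fact += log2((double)n);
        if (n >= 1024 && (n & (n - 1)) == 0)   /* print at powers of two */
            printf("N = %8d  work/N = %.4f\n",
                   n, (n + n * log2((double)n) - log2_fact) / n);
    }
    return 0;
}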
The outer loop runs n times. Everything then depends on the inner loop, which is the tricky one.
Let's follow it:
i=0 --> j=1 ---> log(n) iterations
...
...
i = (n/2)-1 --> j = n/2 ---> 1 iteration.
i = n/2 --> j = (n/2)+1 ---> 1 iteration.
Grouping the values of i by how many times the inner loop runs:
i > n/2 ---> 1 iteration
(n/2)-1 >= i > (n/4) ---> 2 iterations
(n/4) >= i > (n/8) ---> 3 iterations
(n/8) >= i > (n/16) ---> 4 iterations
(n/16) >= i > (n/32) ---> 5 iterations
(n/2)*1 + (n/4)*2 + (n/8)*3 + (n/16)*4 + ... + [n/(2^i)]*i
= n * ∑_{i=1}^{log n} i/(2^i) <= n * ∑_{i=1}^{∞} i/(2^i) = 2*n
--> O(n)
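To confirm the linear bound empirically, here is a small counting harness (my own sketch) around the loops from the question:

#include <stdio.h>

/* Count the inner-loop iterations of the question's nested loops
 * and report the count divided by N; it stays bounded, approaching 2,
 * matching the 2*n bound above. */
int main(void) {
    for (long N = 1024; N <= (1L << 22); N *= 4) {
        long count = 0;
        for (long i = 0; i < N; i++)
            for (long j = i + 1; j < N; j *= 2)
                count++;
        printf("N = %8ld  iterations = %9ld  ratio = %.4f\n",
               N, count, (double)count / N);
    }
    return 0;
}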
Daniel Fischer's answer is correct.
I would like to add the fact that this algorithm's exact running time can be given in closed form:
[the closed-form expressions appeared as equation images in the original answer]

Complexity classes examples

I wanted to know if my answers are indeed correct for the following statements:
3(n^3) + 5(n^2) + 25n + 10 = BigOmega(n^3) -> T
-> grows at a rate equal to or slower
3(n^3) + 5(n^2) + 25n + 10 = Theta(n^3) -> T
-> grows at exactly the same rate
3(n^3) + 5(n^2) + 25n + 10 = BigO(n^3) -> T
-> grows at a rate equal to or faster
Thanks!!!
The formal definitions of the asymptotic notations are:
f(n) = O(g(n)) means that there exist positive constants c and n0 such that
f(n) <= c*g(n) for all n >= n0.
f(n) = Ω(g(n)) means that there exist positive constants c and n0 such that
f(n) >= c*g(n) for all n >= n0.
f(n) = Θ(g(n)) means that there exist positive constants c1, c2 and n0 such that
c1*g(n) <= f(n) <= c2*g(n) for all n >= n0.
Proof for O:
3(n^3) + 5(n^2) + 25n + 10 < 3*n^3 + n^3 + n^3 = 5*n^3
You can check that this inequality holds for all n >= 10.
So there exist c = 5 and n0 = 10, thus it is O(n^3).
Proof for Ω:
3(n^3) + 5(n^2) + 25n + 10 > 3*n^3
So c = 3 and n0 = 1, thus it is Ω(n^3).
Because both the O and Ω bounds hold, the Θ statement is also true.
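A quick numeric check (my own sketch, not part of the original answer) that the constants c = 3, c = 5 and n0 = 10 really work:

#include <stdio.h>

/* Verify the sandwich 3*n^3 <= f(n) <= 5*n^3 for n >= 10,
 * where f(n) = 3n^3 + 5n^2 + 25n + 10. */
int main(void) {
    for (long n = 10; n <= 1000000; n *= 10) {
        long double f = 3.0L*n*n*n + 5.0L*n*n + 25.0L*n + 10.0L;
        printf("n = %7ld  f(n)/n^3 = %.6Lf  (must lie in [3, 5])\n",
               n, f / ((long double)n * n * n));
    }
    return 0;
}

The ratio starts at 3.76 for n = 10 and decreases toward 3, so both bounds (and therefore Θ(n^3)) hold.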