Recurrence Relation without using Master Theorem - series

I can easily solve some recurrence relations using the master theorem, but I want to understand how to solve them without using the theorem.
EX:
T(n) = 5T(n/2) + O(n), T(1) = 1
Answer: O(n^{log_2(5)})
Expanding,
T(n) = 5T(n/2) + cn = 5(5T(n/4) + c(n/2)) + cn =
..... = 5^i * T(n/(2^i)) + cn*(1 + (5/2) + (5/2)^2 + ... + (5/2)^(i-1))
Now let i = log_2(n); then
T(n) = 5^(log_2(n)) * T(1) + cn*(1 + (5/2) + (5/2)^2 + ... + (5/2)^(log_2(n)-1))
After this I am lost. How do I get something similar to n^{log_2(5)}?
Update:
Using the formula for the sum of a geometric series (Sum = a*(1 - r^k)/(1 - r), with a = 1, r = 5/2, k = log_2(n)),
I get Sum = (1 - (5/2)^{log_2(n)})/(1 - 5/2) = (2/3)*((5/2)^{log_2(n)} - 1), so cn*Sum = (2c/3)*(5^{log_2(n)} - n), using n*(5/2)^{log_2(n)} = 5^{log_2(n)}.
How are 5^{log_2(n)} and n^{log_2(5)} related?
Thanks :D

I did not check the rest of your calculation, but note that
a^b = exp(b * ln(a))
and
log_b(a) = ln(a) / ln(b)
And thus
5^{log_2(n)} = exp(log_2(n) * ln(5)) = exp(ln(n) / ln(2) * ln(5))
and also
n^{log_2(5)} = exp(log_2(5) * ln(n)) = exp(ln(5) / ln(2) * ln(n))
Since both equal exp(ln(5) * ln(n) / ln(2)), the two expressions are equal.
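A quick numeric sanity check of this identity (a standalone Python sketch, not part of the original answer):

```python
import math

# Verify the identity a^(log_b n) == n^(log_b a) for a = 5, b = 2.
# Both sides equal exp(ln(5) * ln(n) / ln(2)).
for n in [2, 16, 1000, 10**6]:
    lhs = 5 ** math.log2(n)   # 5^(log_2 n)
    rhs = n ** math.log2(5)   # n^(log_2 5)
    assert math.isclose(lhs, rhs), (n, lhs, rhs)
```

The same swap works for any positive base pair, which is why the answer O(n^{log_2(5)}) and the expanded form 5^{log_2(n)} describe the same growth rate.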

How to find time and space complexity for this algorithm?

I need to find time and space complexity for this java code for recursive calculation of the determinant of a matrix:
public int determinant(int[][] m) {
    int n = m.length;
    if (n == 1) {
        return m[0][0];
    } else {
        int det = 0;
        for (int j = 0; j < n; j++) {
            det += Math.pow(-1, j) * m[0][j] * determinant(minor(m, 0, j));
        }
        return det;
    }
}
public int[][] minor(final int[][] m, final int i, final int j) {
    int n = m.length;
    int[][] minor = new int[n - 1][n - 1];
    int r = 0, s = 0;
    for (int k = 0; k < n; k++) {
        int[] row = m[k];
        if (k != i) {
            for (int l = 0; l < row.length; l++) {
                if (l != j) {
                    minor[r][s++] = row[l];
                }
            }
            r++;
            s = 0;
        }
    }
    return minor;
}
Help: Determine the number of operations and memory consumption of the algorithm with respect to n, and after dividing by n^2 you will get the desired result.
I'm confused. I calculated the time complexity as the sum of the input size (n^2) and the number of steps, and the space complexity as the input size, so I got O(n^2), but I don't think I'm doing it right. And why should I divide it? (The hint is from the teacher.)
Can someone explain to me how to calculate this in this case?
Let us see. For an input matrix of size n, denote the time complexity as T(n).
So, for a matrix of size n:
if n = 1, we have the answer right away: T(1) = O(1);
otherwise, we loop over j for a total of n times, and for each j, we:
construct a minor in O(n^2) (the minor function), and then
run the function recursively for that minor: this one takes T(n-1) time.
Putting it all together, we have T(1) = O(1) and T(n) = n * (n^2 + T(n-1)).
To understand what is going on, write down what is T(n-1) there:
T(n) = n * (n^2 + T(n-1)) =
= n * (n^2 + (n-1) * ((n-1)^2 + T(n-2)))
And then, do the same for T(n-2):
T(n) = n * (n^2 + T(n-1)) =
= n * (n^2 + (n-1) * ((n-1)^2 + T(n-2))) =
= n * (n^2 + (n-1) * ((n-1)^2 + (n-2) * ((n-2)^2 + T(n-3)))) = ...
Then we write what is T(n-3), and so on.
Now, open the brackets:
T(n) = n * (n^2 + (n-1) * ((n-1)^2 + (n-2) * ((n-2)^2 + T(n-3)))) =
= n^3 + n * (n-1) * ((n-1)^2 + (n-2) * ((n-2)^2 + T(n-3))) =
= n^3 + n * (n-1)^3 + n * (n-1) * (n-2) * ((n-2)^2 + T(n-3)) =
= n^3 + n * (n-1)^3 + n * (n-1) * (n-2)^3 + n * (n-1) * (n-2) * T(n-3) = ...
As we can see going further, the highest term among these, in terms of n, will be
n * (n-1) * (n-2) * ... * 3 * 2 * 1^3, which is just n! (n factorial).
The above is about time complexity.
To address space complexity, consider how recursion uses memory.
Note that it is much different from what happens to the time.
The reason is that, at each point of time, we are in some single particular branch of the recursion tree, and at this time other branches don't use the memory they need.
Therefore, unlike with time, we can't just add up the total memory used by all recursive calls.
Instead, we have to consider all branches of the recursion tree, and find the one that uses maximum memory.
In this particular case, it is just any deepest branch of the recursion.
Denote the memory consumption by M(n) where n is the matrix size.
if n = 1, we have the answer right away: M(1) = O(1);
otherwise, we loop over j for a total of n times, and for each j, we:
construct a minor that takes O(n^2) memory (the minor function), and then
run the function recursively for that minor: this one takes M(n-1) memory.
Note that the loop does not accumulate memory used by minors.
Instead, when we construct the minor for next j, the minor for previous j is not needed anymore.
Thus we have M(n) = n^2 + M(n-1).
Again, writing down what is M(n-1), then M(n-2), and so on gets us to
M(n) = n^2 + (n-1)^2 + (n-2)^2 + ... + 3^2 + 2^2 + 1^2, which is O(n^3) with some constant factor we need not care about.
So, by the above reasoning, the answer is: T(n) = O(n!) and M(n) = O(n^3).
As for the hint about dividing by n^2, I don't have a clue, sorry!
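To see these bounds concretely, one can iterate both recurrences numerically; the ratio T(n)/n! settles toward a constant, and M(n) matches the sum-of-squares closed form (a Python sketch, taking the O(1) base-case constants as 1):

```python
import math

def T(n):
    # Time recurrence from above: T(1) = 1, T(n) = n * (n^2 + T(n-1))
    return 1 if n == 1 else n * (n ** 2 + T(n - 1))

def M(n):
    # Space recurrence: M(1) = 1, M(n) = n^2 + M(n-1), i.e. a sum of squares
    return sum(k * k for k in range(1, n + 1))

# T(n)/n! approaches a constant, so T(n) = O(n!)
ratios = [T(n) / math.factorial(n) for n in (8, 9, 10)]
assert all(13 < r < 14 for r in ratios)   # the ratio stabilizes near 13.59

# M(n) = n(n+1)(2n+1)/6 = O(n^3)
assert M(10) == 10 * 11 * 21 // 6
```

The bounded ratio is exactly the claim that n! is the dominant term of the expanded product sum.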

DAX If Value is 0 then dont use it in calculation

I have four columns with respective weights (0.1667 or 0.3333), and if one of them (Ta, Tb, Tc, Td) is 0 I don't want it to be involved in my end calculation. Can someone help me with the denominator part, since it needs to be dynamic depending on the output?
var Tb = ('T2'[Col2] * 0.1667)
var Tc = ('T3'[Col3] * 0.3333)
var Td = ('T4'[Col4] * 0.3333)
Return end_calculation = (Ta + Tb + Tc + Td) / (0.1667 + 0.1667 + 0.3333 + 0.3333)
(Only include a weight in the denominator if its corresponding value is not empty.)
Use the IF function, and use DIVIDE to add support for divide-by-zero.
divide(Ta + Tb + Tc + Td, if(Ta>0,0.1667,0) + ...
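The same dynamic-denominator logic can be sketched outside DAX; here is a hypothetical Python version for illustration (the function name and sample weights are my own, not from the post):

```python
def weighted_score(values, weights):
    # Keep only the (value, weight) pairs whose value is non-zero,
    # so zero entries contribute to neither numerator nor denominator.
    kept = [(v, w) for v, w in zip(values, weights) if v != 0]
    denom = sum(w for _, w in kept)
    if denom == 0:                 # mirrors DAX DIVIDE's divide-by-zero guard
        return 0
    return sum(v * w for v, w in kept) / denom

weights = [0.1667, 0.1667, 0.3333, 0.3333]
assert abs(weighted_score([5, 5, 5, 5], weights) - 5) < 1e-9
assert abs(weighted_score([5, 0, 5, 5], weights) - 5) < 1e-9  # zero column drops out
assert weighted_score([0, 0, 0, 0], weights) == 0
```

Dropping a zero column from both numerator and denominator is what keeps the result a proper weighted average of the remaining columns.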

Solve: T(n) = T(n/2) + n/2 + 1

I struggle to define the running time of the following algorithm in O notation. My first guess was O(n), but the relationship between the iterations and the argument I pass isn't steady. Where am I going wrong?
public int function(int n) {
    if (n == 0) {
        return 0;
    }
    int i = 1;
    int j = n;
    while (i < j) {
        i = i + 1;
        j = j - 1;
    }
    return function(i - 1) + 1;
}
The while loop executes about n/2 times.
The recursive call receives an argument that is about half of the original n, so:
n/2 (first iteration)
n/4 (second iteration, equal to (n/2)/2)
n/8
n/16
n/32
...
This is similar to a geometric series.
In fact, the total work can be represented as
n * (1/2 + 1/4 + 1/8 + 1/16 + ...)
which converges to n * 1 = n.
So the running time is O(n).
Another approach is to write it down as T(n) = T(n/2) + n/2 + 1.
The while loop does n/2 work. Argument passed to next call is n/2.
Solving this using the master theorem where:
a = 1
b = 2
f(n) = n/2 + 1
Since f(n) = Θ(n) and n^(log_b(a)) = n^0 = 1, we are in case 3 and need the regularity condition a*f(n/b) <= c*f(n) for some constant c < 1. Let c = 0.9:
1 * f(n/2) <? c * f(n)
n/4 + 1 <? 0.9 * (n/2 + 1)
0.25n + 1 <? 0.45n + 0.9
0 < 0.2n - 0.1
which holds for all n >= 1. Therefore:
T(n) = Θ(n)
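The linear bound can be confirmed by instrumenting the function and counting loop iterations across all recursive calls (a Python port of the Java above; the counter parameter is added only for measurement):

```python
def function(n, counter):
    # Same structure as the Java version, plus a loop-iteration counter
    if n == 0:
        return 0
    i, j = 1, n
    while i < j:
        counter[0] += 1
        i += 1
        j -= 1
    return function(i - 1, counter) + 1

counter = [0]
function(1024, counter)
# Total iterations: 512 + 256 + ... + 2 + 1 = 1023, i.e. Theta(n)
assert counter[0] == 1023
```

Each level halves the argument, so the iteration counts form the geometric series described above and sum to about n.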

Time complexity analysis - how to simplify the expression

My algorithm's complexity has the below expression. But I am not sure how to simplify this further to express in Big-O notation.
T(n) = 3 * T(n-1) + 3 * T(n-2) + 3 * T(n-3) + ... + 3 *T(1)
T(1) takes constant time.
Appreciate any help.
Calculating T(n-1), we get:
T(n-1) = 3*T(n-2) + 3*T(n-3) + ... + 3*T(1)
So effectively,
T(n) = 3*T(n-1) + T(n-1) = 4*T(n-1) = 4*(4*T(n-2)) = ...
Thus T(n) = 4^(n-2) * T(2) = 3 * 4^(n-2) * T(1), which is O(4^n).
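A brute-force evaluation of the original recurrence confirms both the 4*T(n-1) reduction and the closed form (Python sketch, taking T(1) = 1):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # T(1) = 1; T(n) = 3*(T(n-1) + T(n-2) + ... + T(1))
    return 1 if n == 1 else 3 * sum(T(k) for k in range(1, n))

# Telescoping step: T(n) = 4*T(n-1) for n >= 3
assert all(T(n) == 4 * T(n - 1) for n in range(3, 12))

# Closed form: T(n) = 3 * 4^(n-2) for n >= 2, hence O(4^n)
assert all(T(n) == 3 * 4 ** (n - 2) for n in range(2, 12))
```

Subtracting consecutive instances of a full-history recurrence like this is a standard trick for collapsing it into a one-term recurrence.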

complexity for recursive functions (Big O notation)

I have two functions which I would like to determine the complexity for.
For the first one I just need to know whether my solution is correct; for the second one, because of the two recursive calls, I am struggling to find the solution. If possible it would be good to see the working so that I can learn how it's done.
First:
def sum(list):
    assert len(list) > 0
    if len(list) == 1:
        return list[0]
    else:
        return sum(list[0:-1]) + list[-1]
Attempted solution :
T(0) = 4
T(n) = T(n-1) + 1 + c -- True for all n >0
T(n) = T(n-1) + 1 + c
= T(n-2) + 2 + 2C
= T(n-k) + k + kC --(n-k = 0 implies that k=n)
T(n) = T(0) + n + nC
= T(0) + 2nC --(T0 is nothing but 4)
= 6nC
Complexity = O(n)
Second:
def binSum(list):
    if len(list) == 1:
        return list[0]
    else:
        return binSum(list[:len(list)//2]) + binSum(list[len(list)//2:])
Any help would be greatly appreciated.
Regards
For the first case, you can model the time complexity with the recursive function T(n) = T(n-1) + O(1) and T(0) = O(1), which obviously solves to T(n) = O(n).
Here's a more direct and more formal proof by induction. The base case is easy: T(0)<=C by the definition of O(1). For the inductive step, suppose that T(k) <= C*k for all k<=n for some absolute constant C>0. Now T(n+1) <= D + T(n) <= D + C*n <= max(C,D)*(n+1) by the inductive hypothesis, where D>0 is an absolute constant.
For the second case, you can model the time complexity with T(n) = T(n/2) + T(n/2) + O(1) = 2T(n/2) + O(1) for n > 1 and T(1) = O(1). This solves to T(n) = O(n) by the master theorem.
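Counting calls makes the O(n) claim concrete: on a list of length n (n a power of two), binSum is invoked 2n - 1 times (an instrumented variant of the code above; the global counter is added for illustration):

```python
calls = 0

def binSum(list):
    # Instrumented copy of the recursive halving sum: count every call
    global calls
    calls += 1
    if len(list) == 1:
        return list[0]
    else:
        return binSum(list[:len(list)//2]) + binSum(list[len(list)//2:])

data = [1] * 16
assert binSum(data) == 16    # sums the list correctly
assert calls == 2 * 16 - 1   # 31 calls: 16 leaves + 15 internal nodes
```

One caveat worth noting: the call-count model ignores the cost of the slices themselves, which copy O(n) elements per level in CPython, so the measured wall-clock behavior is closer to O(n log n) even though the number of additions is O(n).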