Suppose I have two factors, N and M, with the constraint M <= N, and an operation with O(log(N)) time complexity that needs to be run M times. However, N decreases by 1 on each iteration, so the total cost looks roughly like this:
O(log(N) + log(N - 1) + ... + log(N - (M - 2)) + log(N - (M - 1)))
How do I reduce this to a simple expression?
As a bonus: I simplified things a bit above. N doesn't necessarily decrease by 1 on each iteration; that only happens in the worst case (where M = N). It actually decreases by the result of the prior log(N) operation, which is some series of M numbers. Let's call it series R; series R sums to N, so it's really like:
O(log(N) + log(N - R(0)) + log(N - R(0) - R(1)) + ... + log(N - R(0) - R(1) - ... - R(M - 2)) + log(N - R(0) - R(1) - ... - R(M - 2) - R(M - 1)))
where it's a summation with sub-summations... can this be simplified?
Since log(a) + log(b) = log(a*b) it follows that your equation equals:
O( log( N * (N-1) * (N-2) * ... * (N-(M-1)) ) )
So the worst-case scenario M = N gives the upper bound O(log(N!)).
In the general case the complexity is O(log(N!/(N-M)!)), which increases with M, as expected. Since each factor is at most N, this is also O(M log N).
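As a quick sanity check, here is a small Python sketch (the helper names are just for illustration) that compares the term-by-term sum of logs against the closed form, using math.lgamma to evaluate log-factorials stably:

import math

def sum_of_logs(N, M):
    # direct evaluation of log(N) + log(N-1) + ... + log(N-(M-1))
    return sum(math.log(N - i) for i in range(M))

def closed_form(N, M):
    # log(N!/(N-M)!), computed via lgamma(x + 1) = log(x!)
    return math.lgamma(N + 1) - math.lgamma(N - M + 1)

for N, M in [(10, 3), (1000, 500), (1000, 1000)]:
    assert math.isclose(sum_of_logs(N, M), closed_form(N, M))
print("sum of logs matches log(N!/(N-M)!)")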
T(n) = T(cn) + T((1 - c)n) + 1, where 0 < c < 1
Base case:
if(n<=1) return;
Data type: positive integers.
I have to find a Big-Theta bound for this recursive function.
I've tried to expand the recurrence, but it gets more complicated at each level and no pattern emerges.
I also tried this: assume that c < (1 - c), so:
2T(cn) + 1 <= T(cn) + T((1-c)n)+1 <= 2T((1-c)n)+1
This gave me a lower bound and an upper bound, but not a Theta bound :(
As c approaches either 0 or 1, the recursion approaches T(n) = T(n-1) + 2 (assuming that T(0) = 1 as well). This has as a solution the linear function T(n) = 2n - 1 for n > 0.
For c = 1/2, the recursion becomes T(n) = 2T(n/2) + 1. It looks like T(n) = 2n - 1 is a solution to this for n > 0.
This seems like strong evidence that the function T(n) = 2n - 1 is a solution for all c: it works on both ends and in the middle. If we substitute it into the recurrence...
2n - 1 = 2cn - 1 + 2(1-c)n - 1 + 1
= 2cn - 1 + 2n - 2cn - 1 + 1
= 2n - 1
We find that T(n) = 2n - 1 is a solution for the general case.
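We can also check this numerically. Here is a minimal Python sketch, assuming the split sizes are rounded to integers with at least one element on each side (those rounding rules are my assumption, not part of the original recurrence):

def T_table(n_max, c):
    # bottom-up table for T(n) = T(cn) + T((1-c)n) + 1, with T(n) = 1 for n <= 1
    T = [1] * (n_max + 1)
    for n in range(2, n_max + 1):
        a = max(1, int(c * n))  # size of the T(cn) part, at least 1
        T[n] = T[a] + T[n - a] + 1
    return T

for c in (0.1, 0.25, 0.5, 0.9):
    table = T_table(1000, c)
    assert all(table[n] == 2 * n - 1 for n in range(1, 1001))
print("T(n) == 2n - 1 for every tested c and n")

This is exactly what the algebra predicts: any split with a + b = n gives T(a) + T(b) + 1 = (2a - 1) + (2b - 1) + 1 = 2n - 1.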
1) … + n/2^k + … + n/16 + n/8 + n/4 + n/2 + n/1 = ?
2) … + n/k + … + n/5 + n/4 + n/3 + n/2 + n/1 = ?
I am working on finding the time complexity of a few algorithms, and I came across these series.
I believe the 1st, a geometric series, comes to log(n). What is the time complexity of the 2nd series?
Assuming that (1) is n * (… + 1/2^k + … + 1/16 + 1/8 + 1/4 + 1/2 + 1/1), the answer is 2n because the sum 1 + 1/2 + 1/4 + … + 1/2^k + … converges to the value 2. To see this:
1/1 + 1/2 + … + 1/2^k + … = S
(1/1 + 1/2 + … + 1/2^k + …)/2 = S/2
1/2 + 1/4 + … + 1/2^(k+1) + … = S/2
S - 1 = S/2
S/2 = 1
S = 2
The key step above was recognizing the LHS of the third line is one less than the LHS of the first line.
For (2), n * (… + 1/k + … + 1/5 + 1/4 + 1/3 + 1/2 + 1/1) is n times the harmonic series. The harmonic series diverges, so the infinite sum is undefined, tending toward infinity. To see this, compare the two series:
1/1 + 1/2 + 1/3 + 1/4 + 1/5 + 1/6 + 1/7 + 1/8 + …
1/1 + 1/2 + 1/4 + 1/4 + 1/8 + 1/8 + 1/8 + 1/8 + …
The second is the same as the first but all terms have had denominators increased to the next higher power of two. Thus the second series cannot sum to a larger value than the first. But the second series clearly diverges since we can group two 1/4s, four 1/8s, etc., to get the sum 1 + 1/2 + 1/2 + … + 1/2 + …
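A quick numeric illustration of both claims (a Python sketch, not a proof):

import math

# partial sums of 1 + 1/2 + 1/4 + ... quickly settle at 2
geometric = sum(1 / 2**k for k in range(60))
print(geometric)  # ~2.0

# partial sums of 1 + 1/2 + 1/3 + ... keep growing, tracking ln(n)
for n in (10**3, 10**6):
    harmonic = sum(1 / k for k in range(1, n + 1))
    print(n, harmonic, math.log(n))

In the algorithmic setting, where the harmonic series is cut off at n terms, the partial sum grows like ln(n), so n * (1/1 + 1/2 + … + 1/n) grows like n log n.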
1) n{(1/1) + (1/2) + (1/4) + (1/8) + ...} ==> O(n), because the series {(1/1) + (1/2) + (1/4) + ...} sums to 2.
2) n{(1/1) + (1/2) + (1/3) + (1/4) + ...} ==> O(n ln n), because the partial sums of {(1/1) + (1/2) + (1/3) + ...} grow as ln n (this is the harmonic series).
1) … + n/16 + n/8 + n/4 + n/2 + n/1 = ? is a geometric series, and its sum will always be less than or equal to 2n. So it is O(n).
2) … + n/5 + n/4 + n/3 + n/2 + n/1 = ? is n times a harmonic series. The harmonic sum 1 + 1/2 + … + 1/n is about log n (there are standard mathematical derivations of this), so the total is about n log n. So it is O(n log n).
I have a question in my data structures course homework, and I thought of 2 algorithms to solve it: one is O(n^2) time and the other is:
T(n) = 3 * n + 1*1 + 2*2 + 4*4 + 8*8 + 16*16 + ... + log n * log n
And I'm not sure which one is better.
I know that the sum of the geometric progression from 1 to log n is O(log n), because I can use the geometric series formula for that. But here I have the squares of the geometric progression, and I have no idea how to calculate this.
You can rewrite it as:
log n * log n + ((log n) / 2) * ((log n) / 2) + ((log n) / 4) * ((log n) / 4) ... + 1
If you substitute (for easier understanding) x for log^2 n, you get:
x + x/4 + x/16 + x/64 + ... + 1
You can use a formula to sum the series, but if you don't have to be formal, basic logic is enough. Just imagine you have 1/4 of a pie and then add 1/16 of a pie, then 1/64, etc.; you can clearly see it will never amount to a whole pie. Therefore:
x + x/4 + x/16 + x/64 + ... + 1 < 2x
which means it's O(x).
Substituting log^2 n back for x:
T(n) = O(3*n + log^2 n) = O(n), since log^2 n grows more slowly than n.
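Here is a small check of that bound (a Python sketch; it assumes, as the answer does, that the terms double cleanly from 1 up to log n * log n, so n is chosen so that log2(n) is itself a power of two):

import math

def squares_of_powers(n):
    # 1*1 + 2*2 + 4*4 + ... + L*L, where L = log2(n)
    total, term = 0, 1
    L = int(math.log2(n))
    while term <= L:
        total += term * term
        term *= 2
    return total, L

for exponent in (4, 8, 16, 32):  # n chosen so log2(n) is a power of two
    n = 2**exponent
    s, L = squares_of_powers(n)
    assert s < 2 * L * L  # the whole sum stays below 2 * log^2(n)
    print(n, s, L * L)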
Prove that
1 + 1/2 + 1/3 + ... + 1/n is O(log n).
Assume n = 2^k
I put the series into a summation, but I have no idea how to tackle this problem. Any help is appreciated.
This follows easily from a simple fact in calculus: the integral of 1/x from 1 to n is ln(n). Comparing the sum with that integral (each term 1/k is sandwiched between adjacent slices of the area under 1/x), we have the following inequality:
ln(n + 1) <= 1 + 1/2 + 1/3 + ... + 1/n <= ln(n) + 1
From this we can conclude that S = 1 + 1/2 + ... + 1/n is both Ω(log(n)) and O(log(n)), thus it is Θ(log(n)); the bound is actually tight.
Here's a formulation using discrete mathematics. With n = 2^k, group the terms of H(n) = 1 + 1/2 + ... + 1/n as:
1 + (1/2) + (1/3 + 1/4) + (1/5 + ... + 1/8) + ... + (1/(2^(k-1) + 1) + ... + 1/2^k)
Each parenthesized group sums to at most 1 (a group ending at 1/2^j has 2^(j-1) terms, each at most 1/2^(j-1)), and there are k + 1 = log(n) + 1 groups. So, H(n) <= log(n) + 1, and H(n) = O(log n).
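A quick empirical check of the grouping bound (a Python sketch):

import math

def H(n):
    # the harmonic sum 1 + 1/2 + ... + 1/n
    return sum(1 / k for k in range(1, n + 1))

for k in range(1, 21):
    n = 2**k
    assert H(n) <= math.log2(n) + 1  # at most 1 per group, log(n) + 1 groups
print("H(n) <= log2(n) + 1 for n = 2, 4, ..., 2^20")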
If the problem were changed to:
1 + 1/2 + 1/4 + ... + 1/n
the series can now be written as:
1/2^0 + 1/2^1 + 1/2^2 + ... + 1/2^(k)
How many times will the loop run? From 0 to k, that is, k + 1 times. From both series we can see that 2^k = n, hence k = log(n). So the number of iterations is log(n) + 1 = O(log n).
I have this recurrence relation
T(n) = T(n-1) + n, for n ≥ 2
T(1) = 1
Practice exercise: Solve the recurrence relation using the iteration method and give an asymptotic running time.
So I solved it like this:
T(n) = T(n-1) + n
= T(n-2) + (n - 1) + n
= T(n-3) + (n - 2) + (n - 1) + n
= …
= T(1) + 2 + … + (n - 2) + (n - 1) + n **
= 1 + 2 + … + (n - 2) + (n - 1) + n
= n(n + 1)/2 = O(n^2)
I have some questions:
1) How can I find the asymptotic running time?
** 2) At this point in the expansion, T(1) means we kept subtracting 1 from n until the argument reached 1, right?
3) What if T(0) = 1, or T(2) = 1, instead?
Edit: 4) Why is the condition n ≥ 2 useful?
I really need to understand this for my mid-term test.
T(n) = T(n-1) + n, for n ≥ 2
T(1) = 1
If T(x) represents the running time, then you have already found the asymptotic running time: O(n^2) (quadratic).
If the relation is changed to T(0) = 1 or T(2) = 1, then the running time is still quadratic. The asymptotic behavior does not change if you add a constant or multiply by a constant, and changing the initial condition only adds a constant to the following terms.
n ≥ 2 is present in the relation so that T(n) is defined exactly once for every positive n. Otherwise, both lines would apply to T(1). You cannot compute T(1) from T(0) using T(n) = T(n-1) + n, since T(0) is never defined; and even if you could, T(1) would then be defined in two different (and potentially inconsistent) ways.
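To make points 1 and 3 concrete, here is a small Python sketch that evaluates the recurrence iteratively and compares different base cases (the closed form n(n+1)/2 comes from the expansion above):

def T(n, base_n, base_val):
    # iterate T(n) = T(n-1) + n upward from the base case T(base_n) = base_val
    val = base_val
    for i in range(base_n + 1, n + 1):
        val += i
    return val

for n in (10, 100, 1000):
    assert T(n, 1, 1) == n * (n + 1) // 2  # matches the closed form
    # changing the base case shifts every later value by the same constant,
    # so the growth stays quadratic:
    assert T(n, 0, 1) - T(n, 1, 1) == 1
    assert T(n, 2, 1) - T(n, 1, 1) == -2
print("T(n) is n(n+1)/2 up to an additive constant, i.e., Theta(n^2)")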