What is the time complexity of the SearchInWindow algorithm?

I wonder what the time complexity of the following algorithm is.
function SearchInWindow(p)
    for l ← 1 ... L do
        c_l ← CountOccurrences(p, l)
    end for
end function
CountOccurrences returns the number of occurrences of a fragment of length l, positioned at position p, within the window (p + 1 ... p + W − 1), where W is the window length.
CountOccurrences runs in O(l × W). Here p is a pointer into the data and does not affect the time complexity.
My guess is binom(L+1, 2) × W, but I am not at all sure.

Let's take a look at the time complexity.
CountOccurrences takes a parameter l (and also p, but since p does not affect the time complexity, we will disregard it) and completes in O(l × W).
You're running a loop from 1 to L and calling CountOccurrences with each value.
The time complexity of that is:
O(1 × W) + O(2 × W) + O(3 × W) + ... + O(L × W)
= O(W × (1 + 2 + 3 + ... + L))
= O(W × 1/2 × (L² + L))
Note that we can disregard the constant factor 1/2. Additionally, L² dominates L, so O(L² + L) = O(L²).
So we have O(W × L²) as the final answer.
Note: Your answer of binom(L+1, 2) × W from the question is technically equivalent to mine, since binom(L+1, 2) = L(L+1)/2 = Θ(L²); dropping the constant factors simplifies it to O(W × L²).
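
For concreteness, here is a minimal C sketch of the structure being analyzed. The count_occurrences body is a hypothetical stand-in that merely matches the stated O(l × W) cost, not the asker's actual routine:

#include <stddef.h>

/* Hypothetical stand-in for CountOccurrences: counting occurrences of the
   length-l fragment at p costs on the order of l * W character compares. */
static size_t count_occurrences(const char *p, size_t l, size_t W) {
    size_t count = 0;
    for (size_t start = 1; start + l <= W; start++) {  /* ~W window positions */
        size_t k = 0;
        while (k < l && p[start + k] == p[k])          /* up to l compares */
            k++;
        if (k == l)
            count++;
    }
    return count;
}

/* SearchInWindow: total work is W*(1 + 2 + ... + L) = O(W * L^2).
   The caller provides c with room for L counts. */
void search_in_window(const char *p, size_t L, size_t W, size_t *c) {
    for (size_t l = 1; l <= L; l++)
        c[l - 1] = count_occurrences(p, l, W);
}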

Related

Assess the time complexity of the following code in terms of Theta

I know the time complexity is n·log(n); however, I could only assess it with an integral for the inner loop, which gives an upper bound. How do I get a lower bound, to make it Theta and not just O?
S = 0;
for (i = 1; i < n; i++)
    for (j = 0; j < n; j += i)
        S++;
So line 1 is executed once, line 2 is executed n − 1 times (plus one final check without entering the loop), and for each of those n − 1 values of i, line 3 is executed about n/i times, and we get:
T = 1 + n + (n/1 + n/2 + ... + n/(n−1)) ≤ 1 + n + n · (integral of 1/x from 1 to n) = 1 + n + n·log(n). And that's big O; how about Omega?
Let's decompose the function in the following way: T(n) = 1 + n + n + n·S(n), where S(n) = sum(1/x for x = 2 to n−1). Note that this is identical to what you wrote.
The function f(x) = 1/x is monotonically decreasing, therefore you can bound the sum S(n) from above by int(1/x from x = 1 to n−1) and from below by int(1/x from x = 2 to n). In both cases you get log(n) up to constant terms: for the upper bound, log(n−1) − log(1) = log(n−1), and for the lower bound, log(n) − log(2).
If these bounds are not obvious, picture the left and right Riemann sums of the integrals for a decreasing function.
You did use the lower bound, not the upper one, in your question, by the way. (Because 1/x is decreasing, not increasing.)
Then, plugging that back into the expression for T(n), we have T(n) ≥ 1 + 2n + n·log(n) − n·log(2) and T(n) ≤ 1 + 2n + n·log(n−1). Both are asymptotically proportional to n·log(n), giving you the Theta class.
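If you want to sanity-check the bound numerically (an illustration, not part of the proof), you can count the inner-loop iterations directly and compare against n·log(n); the ratio should settle near a constant:

#include <stdio.h>
#include <math.h>

/* Count the exact number of inner-loop iterations S and compare it
   against n*ln(n); the ratio should approach a constant. */
int main(void) {
    for (long n = 1000; n <= 1000000; n *= 10) {
        long S = 0;
        for (long i = 1; i < n; i++)
            for (long j = 0; j < n; j += i)
                S++;
        printf("n = %7ld  S = %9ld  S / (n ln n) = %.3f\n",
               n, S, (double)S / ((double)n * log((double)n)));
    }
    return 0;
}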

Complexity of the sum of the squares of a geometric progression

I have a question in my data structures course homework, and I thought of two algorithms to solve it. One of them is O(n²) time, and the other one is:
T(n) = 3n + 1·1 + 2·2 + 4·4 + 8·8 + 16·16 + ... + log n · log n
And I'm not sure which one is better.
I know that the sum of the geometric progression from 1 to log n is O(log n), because I can use the geometric series formula for that. But here I have the squares of the geometric progression, and I have no idea how to calculate this.
You can rewrite it as:
log n · log n + (log n / 2) · (log n / 2) + (log n / 4) · (log n / 4) + ... + 1
If you substitute x for log² n (for easier understanding), you get:
x + x/4 + x/16 + x/64 + ... + 1
You can use the geometric series formula to sum this, but if you don't have to be formal, basic intuition is enough. Imagine you have 1/4 of a pie, then add 1/16 of a pie, then 1/64, and so on: you can clearly see it will never amount to a whole extra piece, therefore:
x + x/4 + x/16 + x/64 + ... + 1 < 2x
which means it is O(x).
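If you do want the formal version, the geometric series formula gives the same bound: x + x/4 + x/16 + ... + 1 ≤ x · (1 + 1/4 + 1/16 + ...) = x · 1/(1 − 1/4) = 4x/3 < 2x.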
Changing back the x for log^2 n:
T(n) = O(3n + log² n) = O(n), since the linear term dominates.

How do I calculate the feature scaling in Viola Jones algorithm?

I'm confused about how to calculate feature scaling in the Viola-Jones algorithm. For example, in "An Analysis of the Viola-Jones Face Detection Algorithm" by Yi-Qing Wang, the following is proposed for feature type "a":
set the original feature support size a ← 2wh
i ← ⌊ie/24⌉, j ← ⌊je/24⌉, h ← ⌊he/24⌉, where ⌊z⌉ denotes the nearest integer to z ∈ R+
w ← max{κ ∈ N : κ ≤ ⌊1 + 2we/24⌉/2, 2κ ≤ e − j + 1}
compute the sum S1 of the pixels in [i, i + h − 1] × [j, j + w − 1]
compute the sum S2 of the pixels in [i, i + h − 1] × [j + w, j + 2w − 1]
return the scaled feature (S1 − S2) · a / (2wh)
In this case, I don't understand how to calculate w (line 3). Do you know another way to calculate feature scaling?
On the other hand, we know that a strong classifier is made up of weak classifiers, each with a polarity and a threshold, and that the weak classifiers depend on features. When we scale a feature, does its threshold change as well?
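For what it's worth, a literal C transcription of the scaling pseudocode above might look like the sketch below. It assumes e is the side length of the scaled detection window (24 in the original) and that round() plays the role of the nearest-integer operator; it is only a reading of the formula, not an answer to the threshold question:

#include <math.h>

/* A feature's position (i, j) and extent (w, h) in the 24x24 base window. */
typedef struct { int i, j, w, h; } Feature;

/* Literal transcription of the scaling step for feature type "a". */
Feature scale_feature_a(Feature f, int e) {
    Feature s;
    s.i = (int)round(f.i * e / 24.0);
    s.j = (int)round(f.j * e / 24.0);
    s.h = (int)round(f.h * e / 24.0);
    /* w is the largest kappa satisfying both constraints in line 3:
       kappa <= round(1 + 2we/24)/2 and 2*kappa <= e - j + 1. */
    int kappa = (int)round(1 + 2.0 * f.w * e / 24.0) / 2;
    if (2 * kappa > e - s.j + 1)
        kappa = (e - s.j + 1) / 2;
    s.w = kappa;
    return s;
}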

Finding Big O of the Harmonic Series

Prove that
1 + 1/2 + 1/3 + ... + 1/n is O(log n).
Assume n = 2^k.
I put the series into summation notation, but I have no idea how to tackle this problem. Any help is appreciated.
This follows easily from a simple fact in Calculus: for a positive decreasing function f, int(f(x) from x = k to k+1) ≤ f(k) ≤ int(f(x) from x = k−1 to k). Applying this to f(x) = 1/x and summing over k, we have the following inequality:
log(n + 1) = int(1/x from x = 1 to n+1) ≤ 1 + 1/2 + ... + 1/n ≤ 1 + int(1/x from x = 1 to n) = 1 + log(n)
Here we can conclude that S = 1 + 1/2 + ... + 1/n is both Ω(log n) and O(log n), thus it is Θ(log n); the bound is actually tight.
Here's a formulation using Discrete Mathematics: group the terms by powers of two, H(n) = 1 + 1/2 + (1/3 + 1/4) + (1/5 + ... + 1/8) + ... Each group sums to at most 1 (the group ending at 1/2^j has 2^(j−1) terms, each at most 1/2^(j−1)), and with n = 2^k there are k + 1 groups.
So, H(n) ≤ k + 1 = log(n) + 1, i.e., H(n) = O(log n).
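As a quick numerical illustration (not part of the proof), the partial sums track log(n) closely; the difference approaches the Euler-Mascheroni constant, about 0.5772:

#include <stdio.h>
#include <math.h>

/* Print H(n) next to ln(n) for powers of ten; their difference
   approaches the Euler-Mascheroni constant. */
int main(void) {
    double H = 0.0;
    long next = 10;
    for (long i = 1; i <= 10000000; i++) {
        H += 1.0 / i;
        if (i == next) {
            printf("n = %8ld  H(n) = %8.5f  ln(n) = %8.5f  diff = %.5f\n",
                   i, H, log((double)i), H - log((double)i));
            next *= 10;
        }
    }
    return 0;
}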
If the problem were changed to:
1 + 1/2 + 1/4 + ... + 1/n
the series can now be written as:
1/2^0 + 1/2^1 + 1/2^2 + ... + 1/2^k
How many terms are there? From 0 to k, that is, k + 1 terms. In both series we can see 2^k = n, hence k = log(n). So the number of terms is log(n) + 1 = O(log n).

Understanding recurrence relation

I have this recurrence relation
T(n) = T(n-1) + n, for n ≥ 2
T(1) = 1
Practice exercise: Solve recurrence relation using the iteration method and give an asymptotic running time.
So I solved it like this:
T(n) = T(n-1) + n
= T(n-2) + (n - 1) + n
= T(n-3) + (n - 2) + (n - 1) + n
= …
= T(1) + 2 + … + (n - 2) + (n - 1) + n **
= 1 + 2 + … + (n - 2) + (n - 1) + n
= n(n + 1)/2
= O(n^2)
I have some questions:
1) How can I find the asymptotic running time?
** 2) At this point in the derivation, T(1) means the argument has been reduced step by step until it reached 1, right?
3) What if T(0) = 1, or if T(2) = 1?
Edit: 4) Why is the condition n ≥ 2 useful?
I really need to understand this for my mid-term test.
T(n) = T(n-1) + n, for n ≥ 2
T(1) = 1
If T(n) represents the running time: you have already found the asymptotic running time, O(n^2) (quadratic).
If the relation is changed to T(0) = 1 or T(2) = 1, then the running time is still quadratic. The asymptotic behavior does not change if you add a constant or multiply by a constant, and changing the initial condition only adds a constant to the following terms.
The condition n ≥ 2 is present in the relation so that T(n) is defined exactly once for every positive n. Otherwise, both lines would apply to T(1). You cannot compute T(1) from T(0) using T(n) = T(n-1) + n, and even if you could, T(1) would then be defined in two different (and potentially inconsistent) ways.
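A tiny sanity check, if it helps: iterating the recurrence directly and comparing against the closed form n(n + 1)/2 (which is Θ(n^2)):

#include <stdio.h>

/* Iterate T(n) = T(n-1) + n with T(1) = 1 and compare against the
   closed form n(n+1)/2; the two columns should match exactly. */
int main(void) {
    long T = 1;                              /* T(1) = 1 */
    for (long n = 2; n <= 10; n++) {
        T += n;                              /* T(n) = T(n-1) + n */
        printf("T(%ld) = %2ld   n(n+1)/2 = %2ld\n", n, T, n * (n + 1) / 2);
    }
    return 0;
}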