Time complexity of an iteration algorithm

I have an iterative algorithm where the amount of computation decreases gradually at each iteration. Here is an illustration of my algorithm:
Input size: n and total iterations = k
iter 1: time taken -> f1 * n
iter 2: time taken -> f2 * n
iter 3: time taken -> f3 * n
...
iter k: time taken -> fk * n
where f1 > f2 > f3 >...> fk and 0 <= f1, f2,...,fk <= 1
Question: What is the time complexity of this algorithm? Is it O(k log n)?
Update:
I think the question seems vague, so I'll explain it in words:
The input to my algorithm has size n and the algorithm runs for k iterations, but on each iteration the input size is reduced by an unknown factor. There is no pattern in the reduction.
eg :
iter 1: input size = n (always n)
iter 2: input size = n/2 (can change)
iter 3: input size = n/5 (can change)
iter 4: input size = n/8 (can change)
...
iter k: input size = n/10 (can change)

The given information is not enough; all we can determine is that the complexity is O((f1 + ... + fk) * n) (1).
Why? Consider two example choices of fi, each giving a different complexity:
Case 1: fi = 1/2^i
In this case, we get n*1/2 + n*1/4 + ... + n*1/2^k < n, and the algorithm is O(n).
Case 2: fi = 1/i
In this case, we get a harmonic series: n*1/2 + n*1/3 + ... + n*1/k = n(1/2 + 1/3 + ... + 1/k) = O(n log k).
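As a quick sanity check, here is a minimal Python sketch that evaluates both sums numerically (the values of n and k are arbitrary, chosen just for illustration):

import math

n, k = 1_000_000, 20

case1 = sum(n / 2**i for i in range(1, k + 1))  # geometric series: stays below n
case2 = sum(n / i for i in range(2, k + 1))     # harmonic series: grows like n*log(k)

print(case1)                    # < n, consistent with O(n)
print(case2, n * math.log(k))   # same order of magnitude, consistent with O(n log k)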
EDIT:
Based on your comments and edit, it seems that the worst case for the algorithm as described (if I understood you correctly) is:
iter1 -> n ops
iter2 -> n/2 ops
iter3 -> n/3 ops
...
iterk -> n/k ops
If this is indeed the case, it matches case 2 above: the total run time is a harmonic series, n + n/2 + n/3 + ... + n/k = n(1 + 1/2 + 1/3 + ... + 1/k), which is O(n log k).
(1) Strictly mathematically speaking, big O is an asymptotic upper bound, and since fi <= 1 we can deduce the algorithm is O(nk); but this is NOT a tight bound, as the examples show: different fi values can give different tight bounds.

EDIT
More specifically:
If the denominators of your example:
iter 1: input size = n (always n)
iter 2: input size = n/2 (can change)
iter 3: input size = n/5 (can change)
iter 4: input size = n/8 (can change)
...
iter k: input size = n/10 (can change)
are strictly increasing integers, then it is O(n*log k).
Here's why. For a sequence Xn to be O(Yn), there must exist some real number M and some integer m such that Xn < M*|Yn| for all n > m.
Now consider the sequence K = {1, 1/2, 1/3, ..., 1/k}, and let Yn be the sequence whose ith term is n times the ith term of K, i.e., Yn = {n, n/2, n/3, ..., n/k}. Because the denominators are strictly increasing integers, the ith denominator is at least i, so Yn is, term by term, always at least as large as your sequence, regardless of the values of the fi's.
So Xn < 1 * |Yn|, where the total of Yn is the harmonic series times n. As amit pointed out, that total falls into O(n*log k), so Xn does also. Since we couldn't have bounded Xn any more tightly from above, our best limiting approximation for Xn is also O(n*log k).
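Here is a minimal Python sketch of that domination argument (the denominator list is a hypothetical example, assuming strictly increasing integers as required above):

n = 1_000_000
denoms = [1, 2, 5, 8, 9, 10]  # hypothetical strictly increasing integer denominators
k = len(denoms)

# The ith denominator is at least i, which is exactly why Yn dominates Xn:
assert all(d >= i for i, d in enumerate(denoms, start=1))

x_total = sum(n / d for d in denoms)           # the algorithm's actual total work
y_total = sum(n / i for i in range(1, k + 1))  # n times the harmonic series
print(x_total, "<=", y_total)                  # y_total is O(n*log k)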

Related

Why does the following algorithm have runtime log(log(n))?

I don't understand how the runtime of the algorithm can be log(log(n)). Can someone help me?
s = 1
while s <= (log n)^2 do
    s = 3s
Notation note: log(n) indicates log2(n) throughout the solution.
Well, I suppose (log n)^2 indicates the square of log(n), which means log(n)*log(n). Let us try to analyze the algorithm.
It starts from s=1 and goes like 1,3,9,27...
Since it goes by powers of 3, after each iteration s can be written as 3^m, where m is the number of iterations, counted from 1.
We will do these iterations until s becomes bigger than log(n)*log(n). So at some point 3^m will reach (approximately) log(n)*log(n).
Solve the equation:
3^m = log(n) * log(n)
m = log3(log(n) * log(n))
Time complexity of the algorithm can be shown as O(m). We have to express m in terms of n.
log3(log(n) * log(n)) = log3(log(n)) + log3(log(n))
= 2 * log3(log(n))
For Big-Oh notation, constants do not matter, so let us get rid of the 2.
Time complexity = O(log3(log(n)))
Well, OK, here is the deal: by definition, Big-Oh notation represents an upper bound on the runtime of our function. Therefore, for example, O(n) ⊆ O(n^2).
Notice that log3(a) < log2(a) for all a > 1.
By the same logic we can conclude that O(log3(log(n))) ⊆ O(log(log(n))).
So the time complexity of the algorithm is O(log(log n)).
Not the most scientific explanation, but I hope you got the point.
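To make this concrete, here is a minimal Python sketch that runs the loop and compares the iteration count against 2 * log3(log2(n)); the value of n is an arbitrary large test input:

import math

n = 2 ** 1024
limit = math.log2(n) ** 2  # (log n)^2, with log base 2 per the notation note

s, m = 1, 0
while s <= limit:
    s = 3 * s
    m += 1

print(m, 2 * math.log(math.log2(n), 3))  # 13 vs ~12.6: m tracks O(log(log n))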
This follows as a special case of a more general principle. Consider the following loop:
s = 1
while s < k:
    s = 3s
How many times will this loop run? Well, the values taken on by s will be 1, 3, 9, 27, 81, ... = 3^0, 3^1, 3^2, 3^3, .... And more generally, on the ith iteration of the loop, the value of s will be 3^i.
This loop stops running as soon as 3^i overshoots k. To figure out where that is, we can equate and solve:
3^i = k
i = log3 k
So this loop will run a total of log3 k times.
Now, what do you think would happen if we used this loop instead?
s = 1
while s < k:
    s = 4s
Using similar logic, the number of loop iterations would be log4 k. And more generally, if we have the following loop:
s = 1
while s < k:
    s = c * s
Then assuming c > 1, the number of iterations will be logc k.
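Here is a minimal Python sketch that checks this count empirically; the helper iterations() and the test values of k and c are arbitrary choices for illustration:

import math

def iterations(k, c):
    # Count how many times s = c * s runs before s reaches k.
    s, count = 1, 0
    while s < k:
        s = c * s
        count += 1
    return count

for c in (2, 3, 4, 10):
    print(c, iterations(10**6, c), math.log(10**6, c))  # count is log_c(k), rounded up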
Given this, let's look at your loop:
s = 1
while s <= (log n)^2 do
    s = 3s
Using the reasoning from above, the number of iterations of this loop works out to log3 (log n)^2. Using properties of logarithms, we can simplify this to
log3 (log n)^2
= 2 log3 log n
= O(log log n).

Assess the time complexity of the following code in terms of Theta

I know the time complexity is n*log(n); however, I could only assess the inner loop with an integral, which gives an upper bound. How do I get a lower bound, to make it Theta and not just O?
S = 0;
for (i = 1; i < n; i++)
    for (j = 0; j < n; j += i)
        S++;
So line 1 is executed once; line 2 is executed n-1 times, plus one final check without entering; and on each of those n-1 passes, line 3 is executed about n/i times. We get:
T = 1 + n + (n/1 + n/2 + ... + n/(n-1)) <= 1 + n + n*(integral of 1/x from 1 to n) = 1 + n + n*log(n). And that's big O; how about Omega?
Let's decompose the function in the following way: T(n) = 1 + n + n + n*S(n), where S(n) = sum(1/x for x = 2 to n-1). Note that this is identical to what you wrote.
The function f(x) = 1/x is monotonically decreasing, so you can bound the sum S from above by int(1/x from x=1 to n-1) and from below by int(1/x from x=2 to n). In both cases you get log(n) up to constant terms: for the upper bound, log(n-1) - log(1) = log(n-1), and for the lower bound, log(n) - log(2).
If these bounds are not obvious picture the left and right Riemann sums of the integrals for a decreasing function.
You did use the lower bound, not the upper one, in your question, by the way. (Because 1/x is decreasing, not increasing.)
Then adding that back into the expression for T(n) we have T(n) >= 1 + 2n + n log(n) - n log(2) and T(n) <= 1 + 2n + n log(n-1). Both are asymptotically proportional to n log(n), giving you the Theta class.
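To see the Theta class emerge numerically, here is a minimal Python sketch that counts the actual work done by the nested loops and divides by n*log(n); the test sizes are arbitrary:

import math

def T(n):
    # The nested loops from the question, counting increments of S.
    S = 0
    for i in range(1, n):
        for j in range(0, n, i):
            S += 1
    return S

for n in (1_000, 10_000, 100_000):
    print(n, T(n) / (n * math.log(n)))  # the ratio stays bounded, as Theta(n log n) predicts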

Proving or Refuting Time Complexity

I have an exam soon, and I wasn't at university for a long time because I was in the hospital.
Prove or refute the following statements:
(1) log(n) = O(√n)
(2) 3^(n-1) = O(2^n)
(3) f(n) + g(n) = O(f(g(n)))
(4) 2^(n+1) = O(2^n)
Could someone please help me and explain these?
(1) is true because log(n) grows asymptotically slower than any polynomial, including sqrt(n) = n^(1/2). To prove this we can observe that both log(n) and sqrt(n) are strictly increasing functions for n > 0 and then focus on a sequence where both evaluate easily, e.g., 2^(2k). Now we see log(2^(2k)) = 2k, but sqrt(2^(2k)) = 2^k. For k = 2, 2k = 2^k, and for k > 2, 2k < 2^k. This glosses over some details but the idea is sound. You can finish this by arguing that between 2^(2k) and 2^(2(k+1)) both functions have values greater than one for k >= 2 and thus any crossings can be eliminated by multiplying sqrt(n) by some constant.
(2) it is not true that 3^(n-1) is O(2^n). Suppose this were true. Then there exist n0 and c such that for n > n0, 3^(n-1) <= c*2^n. First, eliminate the -1 by rewriting 3^(n-1) as (1/3)*3^n, so (1/3)*3^n <= c*2^n. Next, divide through by 2^n: (1/3)*(3/2)^n <= c. Multiply by 3: (3/2)^n <= 3c. Finally, take the log of both sides with base 3/2: n <= log_{3/2}(3c). The RHS is a constant expression and n is a variable, so this cannot hold for arbitrarily large n as required. This is a contradiction, so our supposition was wrong; that is, 3^(n-1) is not O(2^n).
(3) this is not true. f(n) = 1 and g(n) = n is an easy counterexample: in this case, f(n) + g(n) = 1 + n, but f(g(n)) = f(n) = 1, so O(f(g(n))) = O(1).
(4) this is true. Rewrite 2^(n+1) as 2*2^n and it becomes obvious that the bound holds for n >= 1 by choosing any c >= 2.
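For intuition (not a proof), here is a minimal Python sketch that checks the ratios behind each claim numerically; a ratio that stays bounded is consistent with Big-Oh, and one that blows up refutes it:

import math

for n in (10, 100, 1000):
    print(math.log2(n) / math.sqrt(n))  # (1): the ratio shrinks toward 0
    print(3**(n - 1) / 2**n)            # (2): the ratio explodes, so not O(2^n)
    print(2**(n + 1) / 2**n)            # (4): the ratio is the constant 2

# (3): with the counterexample f(n) = 1, g(n) = n, the left side is n + 1
# while f(g(n)) = 1, so no constant c makes n + 1 <= c * 1 hold for large n.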

How are they calculating the Time Complexity for this Problem

Problem 6: Find the complexity of the below program: 
void function(int n)
{
    int i = 1, s =1;
    while (s <= n)
    {
        i++;
        s += i;
        printf("*");
    }
}
Solution: We can define the value of s by the relation s_i = s_{i-1} + i. The value of i increases by one at each iteration, so the value contained in s at the ith iteration is the sum of the first i positive integers. If k is the total number of iterations taken by the program, the while loop terminates once 1 + 2 + 3 + ... + k = k(k+1)/2 > n, so k = O(√n).
The time complexity of the above function is O(√n).
FROM: https://www.geeksforgeeks.org/analysis-algorithms-set-5-practice-problems/
I've looked it over and over. Apparently they are saying the time complexity is O(√n), but I don't understand how they arrive at this result. Can anyone break it down in detail?
At the start of the while-loop, we have s = 1 and i = 1, and n is some (big) number. In each step of the loop, the following is done:
Take the current i, and increment it by one;
Add this new value for i to the sum s.
It is not difficult to see that successive updates of i forms the sequence 1, 2, 3, ..., and s the sequence 1, 1 + 2, 1 + 2 + 3, .... By a result attributed to the young Gauss, the sum of the first k natural numbers 1 + 2 + 3 + ... + k is k(k + 1) / 2. You should recognise that the sequence s fits this description, where k indicates the number of iterations!
The while-loop terminates when s > n, which is now equivalent to finding the lowest iteration number k such that (k(k + 1) / 2) > n. Simplifying for the asymptotic case, this gives a result such that k^2 > n, which we can simplify for k as k > sqrt(n). It follows that this algorithm runs in a time proportional to sqrt(n).
It is clear that k is the first integer such that k(k+1)/2 > n (otherwise the loop would have stopped earlier).
Then k-1 cannot have this same property, which means that (k-1)((k-1)+1)/2 <= n or (k-1)k/2 <= n. And we have the following sequence of implications:
(k-1)k/2 <= n → (k-1)k <= 2n
→ (k-1)^2 < 2n ; k-1 < k
→ k <= sqrt(2n) + 1 ; solve for k
<= sqrt(2n) + sqrt(2n) ; 1 < sqrt(2n)
= 2sqrt(2)sqrt(n)
= O(sqrt(n))
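Here is a minimal Python sketch that runs the loop from the problem and compares the iteration count k with sqrt(2n); the helper iterations() and the test values of n are arbitrary:

import math

def iterations(n):
    # The loop from the problem, instrumented to count its iterations.
    i, s, k = 1, 1, 0
    while s <= n:
        i += 1
        s += i
        k += 1
    return k

for n in (100, 10_000, 1_000_000):
    print(n, iterations(n), math.sqrt(2 * n))  # k tracks sqrt(2n), i.e., Theta(sqrt(n))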

How is the time complexity of gcd Θ(log n)?

I was solving a time-complexity question on Interview Bit as given in the below image.
The answer given is Θ(log n), and I am not able to grasp how the log n term arrives in the time complexity of this program.
Can someone please explain how the answer is Θ(log n)?
Theorem: given any x, gcd(n, m) where n <= fib(x) is called recursively at most x times.
Note: fib(x) is fibonacci(x), where fib(x) = fib(x-1) + fib(x-2)
Proof
Basis
For every n <= fib(1), gcd(n, m) is gcd(1, m), which recurses only once.
Inductive step
Assume the theorem holds for every number up to x, which means:
calls(gcd(n, m)) <= x for every n <= fib(x)
Consider n where n <= fib(x+1).
if m > fib(x)
calls(gcd(n, m))
= calls(gcd(m, (n-m))) + 1
= calls(gcd(n-m, m%(n-m))) + 2
<= (x - 1) + 2 because n - m <= fib(x+1) - fib(x) = fib(x-1)
= x + 1
if m <= fib(x)
calls(gcd(n, m))
= calls(gcd(m, (n%m))) + 1 because m <= fib(x)
<= x + 1
So the theorem also holds for x + 1, and by mathematical induction, the theorem holds for every x.
Conclusion
The number of recursive calls of gcd(n, m) is Θ(fib^-1(n)), which is Θ(log n) since fib grows exponentially.
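Here is a minimal Python sketch of the worst case the theorem describes: consecutive Fibonacci numbers. The call-counting wrapper gcd_calls is ad hoc, added only so we can watch the number of calls grow linearly in x while fib(x) grows exponentially:

def gcd_calls(n, m, calls=1):
    # Euclid's gcd, instrumented to count its recursive calls.
    if m == 0:
        return calls
    return gcd_calls(m, n % m, calls + 1)

a, b = 1, 1  # fib(1), fib(2)
for x in range(3, 15):
    a, b = b, a + b  # now b = fib(x)
    print(x, b, gcd_calls(b, a))  # calls grow like x, while fib(x) grows like phi^x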
This algorithm generates a decreasing sequence of integer (m, n) pairs. We can try to prove that such a sequence decays fast enough.
Let's say we start with m_1 and n_1, with m_1 < n_1.
At each step we take n_1 % m_1, which is < m_1, and repeat recursively on the pair m_2 = n_1 % m_1 and n_2 = m_1.
Now, let's say n_1 % m_1 = m_1 - p for some p where 0 < p < m_1, so that m_2 = m_1 - p (and n_2 = m_1).
Let's take another step (m_2, n_2) -> (m_3, n_3). Since a remainder is strictly smaller than its modulus, m_3 < m_2 = m_1 - p; and writing n_2 = m_1 = q*(m_1 - p) + m_3 with q >= 1, we also get m_3 <= p.
So we can write m_3 <= min(m_1 - p, p), and min(m_1 - p, p) <= m_1 / 2. This expresses the fact that the smaller element of the pair at least halves every two steps: the sequence decreases geometrically, so the algorithm has to terminate in at most about 2*log_2(m_1) steps, which is O(log m_1).
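Here is a minimal Python sketch of this geometric decay: it records the larger element of each successive pair and checks that it more than halves every two steps (the starting pair is arbitrary, larger element first):

def remainder_sequence(n, m):
    # Successive values of the larger element of each (m, n) pair.
    seq = []
    while m:
        seq.append(n)
        n, m = m, n % m
    return seq

seq = remainder_sequence(7_654_321, 1_234_567)
print(len(seq))  # number of steps: far below 2*log2(7654321) ~ 45.7
print(all(seq[i + 2] < seq[i] / 2 for i in range(len(seq) - 2)))  # True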