How do I properly calculate this time complexity?

I'm examining this code in preparation for my test and I'm having some trouble figuring out what the correct time complexity is:
a = 1;
while (a < n) {          // outer loop: a doubles on every pass
    b = 1;
    while (b < a*a)      // written "a^2" in the original; ^ is XOR in C, a*a is the intent
        b++;
    a = a*2;
}
The values for a are as follows: 1, 2, 4, 8, ..., 2^(log n) = n
Therefore we have logn iterations for the outer loop.
On each outer iteration, the inner loop runs about a^2 times, so what I've come up with is:
T(n) = 1 + 4 + 16 + 64 + ... + (2^logn)^2
I'm having problems finding the general term of this series and therefore getting to a final result.
(maybe due to being completely off in my calculations though)
Would appreciate any help, thank you.

Your calculations are alright; you are correct in your analysis of the inner while-loop. We can demonstrate this with a small table:

a            | inner iterations ≈ a^2
1            | 1
2            | 4
4            | 16
...          | ...
2^(lg n) = n | n^2
This is basically the sum of a geometric progression with:
a1 = 1, q = 4, #n = lg n + 1
where #n is the number of terms. We have:
Sn = a1 * (q^#n - 1)/(q - 1) = (4^(lg n + 1) - 1)/(4 - 1) = (4*n^2 - 1)/3
(using 4^(lg n) = (2^(lg n))^2 = n^2)
Thus we can say the running time of your code is Θ(n^2).
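If you want to sanity-check this, here is a small sketch (my own, not part of the original answer) that counts the inner-loop steps directly. Note that the loop exits once a reaches n, so the last outer value is n/2 and the measured ratio settles near 1/3 rather than 4/3; either way, steps/n^2 approaches a constant, which is exactly the Θ(n^2) claim:

public class GeometricLoopCount {
    public static void main(String[] args) {
        for (long n : new long[]{256, 1024, 4096}) {
            long steps = 0;
            long a = 1;
            while (a < n) {
                long b = 1;
                while (b < a * a) {   // ~a^2 iterations per outer pass
                    b++;
                    steps++;
                }
                a *= 2;
            }
            System.out.printf("n = %d, steps = %d, steps / n^2 = %.3f%n",
                    n, steps, (double) steps / ((double) n * n));
        }
    }
}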


How are they calculating the Time Complexity for this Problem

Problem 6: Find the complexity of the below program: 
void function(int n)
{
    int i = 1, s = 1;
    while (s <= n)
    {
        i++;
        s += i;
        printf("*");
    }
}
Solution: We can define the terms of 's' according to the relation s_i = s_{i-1} + i. The value of 'i' increases by one on each iteration, so the value contained in 's' at the i-th iteration is the sum of the first 'i' positive integers. If k is the total number of iterations taken by the program, then the while loop terminates once 1 + 2 + 3 + ... + k = k(k+1)/2 > n, so k = O(√n).
Time complexity of the above function: O(√n).
FROM: https://www.geeksforgeeks.org/analysis-algorithms-set-5-practice-problems/
Apparently they are saying the time complexity is O(√n). I don't understand how they are getting to this result, and I've tried looking at this problem over and over. Can anyone break it down in detail?
At the start of the while-loop, we have s = 1; i = 1, and n is some (big) number. In each step of the loop, the following is done,
Take the current i, and increment it by one;
Add this new value for i to the sum s.
It is not difficult to see that the successive updates of i form the sequence 1, 2, 3, ..., and s the sequence 1, 1 + 2, 1 + 2 + 3, .... By a result attributed to the young Gauss, the sum of the first k natural numbers 1 + 2 + 3 + ... + k is k(k + 1)/2. You should recognise that the sequence s fits this description, where k indicates the number of iterations!
The while-loop terminates when s > n, which is now equivalent to finding the lowest iteration number k such that (k(k + 1) / 2) > n. Simplifying for the asymptotic case, this gives a result such that k^2 > n, which we can simplify for k as k > sqrt(n). It follows that this algorithm runs in a time proportional to sqrt(n).
It is clear that k is the first integer such that k(k+1)/2 > n (otherwise the loop would have stopped earlier).
Then k-1 cannot have this same property, which means that (k-1)((k-1)+1)/2 <= n, i.e. (k-1)k/2 <= n. And we have the following sequence of implications:

(k-1)k/2 <= n
→ (k-1)k <= 2n
→ (k-1)^2 <= 2n           ; since k-1 < k
→ k <= sqrt(2n) + 1       ; take square roots and solve for k
  <= sqrt(2n) + sqrt(2n)  ; since 1 <= sqrt(2n)
  = 2*sqrt(2)*sqrt(n)
  = O(sqrt(n))
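A quick way to convince yourself of this bound is to run the loop and print the iteration count next to sqrt(2n). This is my own sketch, not part of the original answers:

public class SqrtLoopCheck {
    public static void main(String[] args) {
        for (long n : new long[]{1_000, 1_000_000, 1_000_000_000}) {
            long i = 1, s = 1, k = 0;
            while (s <= n) {   // the loop from the problem statement
                i++;
                s += i;
                k++;           // count iterations
            }
            System.out.printf("n = %d, iterations k = %d, sqrt(2n) = %.1f%n",
                    n, k, Math.sqrt(2.0 * n));
        }
    }
}

For n = 1,000,000 this prints k = 1413 against sqrt(2n) ≈ 1414.2, matching the O(√n) analysis.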

Understanding time complexity: iterative algorithm

I'm new to working out time complexities, and I can't seem to understand the logic behind arriving at this expression at the end:
100 (n(n+1) / 2)
For this function:
function a() {
    int i, j, k, n;              // n is assumed to be initialized somewhere
    for (i = 1; i <= n; i++) {
        for (j = 1; j <= i; j++) {
            for (k = 1; k <= 100; k++) {
                print("hello");
            }
        }
    }
}
Here's how I understand its algorithm:
i = 1, 2, 3, 4, ..., n
j = 1, 2, 3, 4, ... (dependent on i, which can be n)
k = 1(100), 2(100), 3(100), 4(100), ..., n(100)
= 100 [1, 2, 3, 4, ...]
If I use the algorithm above to simulate the end equation, I get this result:
End equation:
100 (n(n+1) / 2)
Simulation:
i = 1, 2, 3, 4, ..., n
j = 1, 2, 3, 4, ..., n
k = 100, 300, 600, 10000
I usually study these on YouTube and get the idea of Big O, Omega & Theta, but when it comes to this one, I can't figure out how they end up with the equation I've given above. Please help, and if you have some best practices, please share.
EDIT:
As for my own assumed answer, I think it should be this one:
100 ((n+n)/2) or 100 (2n / 2)
Source:
https://www.youtube.com/watch?v=FEnwM-iDb2g
At around: 15:21
I think you've got i and j correct, except that it's not clear why you say k = 100, 300, 600, ...; in every pass of the two outer loops, k simply runs from 1 to 100.
So let's think through the inner loop first:
k from 1 to 100:
// Do something
The inner loop is O(100) = O(1) because its runtime does not depend on n. Now we analyze the outer loops:
i from 1 to n:
j from 1 to i:
// Do inner stuff
Now let's count how many times "Do inner stuff" executes:

i = 1  →  1 time
i = 2  →  2 times
i = 3  →  3 times
...
i = n  →  n times
This is our classic triangular sum 1 + 2 + 3 + ... + n = n(n+1)/2. Therefore, the time complexity of the two outer loops is O(n(n+1)/2), which reduces to O(n^2).
The time complexity of the entire thing is O(1 * n^2) = O(n^2) because nesting loops multiplies the complexities (assuming the runtime of the inner loop is independent of the variables in the outer loops). Note here that if we had not reduced at various phases, we would be left with O(100(n)(n+1)/2), which is equivalent to O(n^2) because of the properties of big-O notation.
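To make this concrete, here is a short sketch (mine, not from the original answer) that counts the executions of the innermost statement and compares the count against the closed form 100 * n(n+1)/2:

public class TripleLoopCount {
    public static void main(String[] args) {
        int n = 500;
        long count = 0;
        for (int i = 1; i <= n; i++)
            for (int j = 1; j <= i; j++)
                for (int k = 1; k <= 100; k++)
                    count++;                      // stands in for print("hello")
        long closedForm = 100L * n * (n + 1) / 2;
        System.out.println(count + " == " + closedForm);   // both sides print 12525000
    }
}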
SOME TIPS:
You asked for some best practices. Here are some "rules" that I made use of in analyzing the example you posted.
In time complexity analysis, we can ignore multiplication by a constant. This is why the inner loop is still O(1) even though it executes 100 times. Understanding this is the basis of time complexity. We are analyzing runtime on a large scale, not counting the number of clock cycles.
With nested loops where the runtime is independent of each other, just multiply the complexity. Nesting the O(1) loop inside the outer O(N^2) loops resulted in O(N^2) code.
Some more reduction rules: http://courses.washington.edu/css162/rnash/quarters/current/labs/bigOLab/lab9.htm
If you can break code up into smaller pieces (in the same way we analyzed the k loop separately from the outer loops) then you can take advantage of the nesting rule to find the combined complexity.
Note on Omega/Theta:
Theta is the "exact bound" for time complexity whereas Big-O and Omega are upper and lower bounds respectively. Because there is no random data (like there is in a sorting algorithm), we can get an exact bound on the time complexity and the upper bound is equal to the lower bound. Therefore, it does not make any difference if we use O, Omega or Theta in this case.

Time complexity of this loop with multiplication

I wonder what the complexity of this loop is in terms of n:
for (int i = 1; i <= n; i++) {
    for (int j = 1; j * i <= n; j++) {
        minHeap.offer(arr1[i - 1] + arr2[j - 1]);
    }
}
What I did was to follow the concept of Big-O and give it an upper bound: O(n^2).
This will involve some math, so get ready :)
Let's first count how many times the line minHeap.offer(arr1[i - 1] + arr2[j - 1]); gets invoked. For each i from the outer loop, the number of iterations of the inner loop is n/i, because the condition j * i <= n is equivalent to j <= n/i. Therefore, the total number of iterations of the inner loop is n/1 + n/2 + n/3 + ... + 1, or, formally written,
SUM_{i=1}^{n} floor(n/i)
There is a good approximation for this sum explained in detail e.g. here, so take a look. Since we are interested only in asymptotic complexity, we can take only the highest-order term, which is n * log n. If the loop body were some O(1) operation instead of minHeap.offer(arr1[i - 1] + arr2[j - 1]);, that would already solve your problem. However, the complexity of the offer method in Java is O(log k), where k denotes the current size of the priority queue. In our case, the priority queue gets larger and larger, so the total running time is log 1 + log 2 + ... + log(n * log n) = log(1 * 2 * ... * (n * log n)) = log((n * log n)!).
We can additionally simplify this by using Stirling's approximation, so the final complexity is O(n * logn * log(n * logn)).
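To see the first part of that argument numerically, here is a small sketch (my own, not from the original answer) that counts the offer() calls and compares them with n * ln(n); the measured count is slightly larger because of the lower-order terms of the harmonic sum:

import java.util.PriorityQueue;

public class HarmonicOfferCount {
    public static void main(String[] args) {
        int n = 100_000;
        int[] arr1 = new int[n], arr2 = new int[n];    // contents are irrelevant for the count
        PriorityQueue<Integer> minHeap = new PriorityQueue<>();
        long offers = 0;
        for (int i = 1; i <= n; i++) {
            for (int j = 1; j * i <= n; j++) {
                minHeap.offer(arr1[i - 1] + arr2[j - 1]);
                offers++;
            }
        }
        System.out.printf("offers = %d, n*ln(n) = %.0f%n", offers, n * Math.log(n));
    }
}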

Time complexity for the loop

The outer loop executes n times, but how many times does the inner loop execute? So the total time is n * something.
Do I need to learn summation? If yes, is there any book to refer to?
for (int i = 1; i <= n; i++)
    for (int j = 1; j <= n; j += i)
        printf("*");
This question can be approached by inspection:
n = 16

i  | j values         | # terms
1  | 1, 2, 3, ..., 16 | n
2  | 1, 3, 5, ..., 15 | n / 2
3  | 1, 4, 7, ..., 16 | ~n / 3
.. | ..               | ..
16 | 1                | n / n
In the above table, i is the outer-loop value, and the j values show the iterations of the inner loop. By inspection, we can see that the loops will take n * (1 + 1/2 + 1/3 + ... + 1/n) steps. The factor in parentheses is a partial sum of the harmonic series. As this Math Stack Exchange article shows, there is no closed form for it in terms of n. However, as this SO article shows, there is an upper bound of O(n*ln(n)).
So, the running time for your two loops is O(n*ln(n)).
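Here is a short empirical check (my own sketch, not from the original answer) that counts the inner-loop steps and compares them with n * ln(n):

public class HarmonicStepCount {
    public static void main(String[] args) {
        for (int n : new int[]{1_000, 100_000, 10_000_000}) {
            long steps = 0;
            for (int i = 1; i <= n; i++)
                for (int j = 1; j <= n; j += i)
                    steps++;                  // stands in for printf("*")
            System.out.printf("n = %d, steps = %d, n*ln(n) = %.0f%n",
                    n, steps, n * Math.log(n));
        }
    }
}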
I believe the time complexity of that is O(n*log(n)). Here is why:
Let us pick some arbitrary natural number i and see how many steps the inner loop takes for this given i. For this i, you are going from j = 1 to j <= n with a jump of i in between, so j runs through the values
1, (1+i), (1+2i), ..., (1+ki)
where k is the largest integer such that 1 + ki <= n. That is, k counts the steps, and it is what we want to solve for. Solving the inequality for k gives k <= (n-1)/i and thus k = ⌊(n-1)/i⌋, the integer division of (n-1) by i. Since we are dealing with time complexities, the floor doesn't matter, so we will just say k = n/i for simplicity. This is the number of steps the inner loop takes for a given i, so we need to add all of these up for i = 1 to i <= n.
So numsteps will be this addition:
numsteps = n/1 + n/2 + n/3 + ... + n/n
         = n(1 + 1/2 + 1/3 + ... + 1/n)
So we need to estimate the sum 1 + 1/2 + ... + 1/n to finish this. There is no nice closed form for this sum, but it is on the order of ln(n); you can read more about this here, or guess it from the fact that the integral of 1/x from 1 to n is ln(n). Again, since we are dealing with time complexity, we can just use ln(n). Thus we have:
numsteps = n(ln(n))
And so the time complexity is O(n*log(n)).
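To see why ln(n) is the right stand-in for the harmonic sum, here is a tiny sketch (mine, not from the answer) comparing the two; the difference converges to the Euler-Mascheroni constant, about 0.577:

public class HarmonicVsLog {
    public static void main(String[] args) {
        double h = 0;                              // running harmonic sum H(n)
        for (int n = 1; n <= 10_000_000; n++) {
            h += 1.0 / n;
            if (n % 2_000_000 == 0)
                System.out.printf("n = %d, H(n) = %.4f, ln(n) = %.4f, diff = %.4f%n",
                        n, h, Math.log(n), h - Math.log(n));
        }
    }
}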

Loop Invariant for Proving Partial Correctness

I'm trying to find a loop invariant so that we can prove this program partially correct:
{ n >= 1 }    (pre-condition)
i = 1;
z = 1;
while (i != n) {
    i = i + 1;
    z = z + i*i;
}
{ z = n*(n+1)*(2*n + 1)/6 }    (post-condition)
I am really stuck. Some of the invariants I've tried so far are:
z <= n*(n+1)*(2*n + 1)/6 ^ i <= n
and
z = i*(i+1)*(2*i + 1)/6 ^ i <= n
I would really appreciate some advice.
To find an appropriate invariant, you need an intuition for what the investigated code actually does. In your example, the value i^2 is successively added to the accumulator z. So the loop computes (just going through the first few iterations of the while-loop by hand and then generalizing):
1^2 + 2^2 + 3^2 + 4^2 + 5^2 + ... + n^2
or written a bit more formally
SUM_{i=1}^{n} i^2
i.e., the sum of all squares of i ranging from 1 to n.
At first sight this might not look similar to your post-condition. However, it can be shown by induction on n that the above sum is equal to
(n*(n+1)*(2*n + 1))/6
which I guess is the intended post-condition. Since we now know that the post-condition is equal to this sum, it should be easy to read off the invariant from the sum.
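If you want to test a candidate invariant before attempting the proof, here is a small sketch (my own; it checks your second attempted invariant, which matches the sum-of-squares reading above) that asserts the invariant at the top of every iteration. Run it with java -ea so assertions are enabled:

public class InvariantCheck {
    public static void main(String[] args) {
        long n = 1000;                                 // pre-condition: n >= 1
        long i = 1, z = 1;
        while (i != n) {
            // candidate invariant: z is the sum of the first i squares, and i <= n
            assert z == i * (i + 1) * (2 * i + 1) / 6 && i <= n : "invariant broken";
            i = i + 1;
            z = z + i * i;
        }
        assert z == n * (n + 1) * (2 * n + 1) / 6 : "post-condition broken";
        System.out.println("Invariant and post-condition hold for n = " + n);
    }
}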