Loop Invariant for Proving Partial Correctness - while-loop

I'm trying to find a loop invariant so that we can prove this program partially-correct:
{ n >= 1 } pre-condition
i = 1;
z = 1;
while (i != n) {
    i = i + 1;
    z = z + i*i;
}
{ z = n*(n+1)*(2*n + 1)/6 } post-condition
I am really stuck. Some of the invariants I've tried so far are:
z <= n*(n+1)*(2*n + 1)/6 ^ i <= n
and
z = i*(i+1)*(2*i + 1)/6 ^ i <= n
I would really appreciate some advice.

To find an appropriate invariant, you first need an intuition for what the program actually computes. In your example, the value i^2 is successively added to the accumulator z. So the loop computes (just go through the first few iterations of the while-loop by hand and then generalize):
1^2 + 2^2 + 3^2 + 4^2 + 5^2 + ... + n^2
or written a bit more formally
SUM_{i=1}^{n} i^2
i.e., the sum of all squares of i ranging from 1 to n.
At first sight this might not look similar to your post-condition. However, it can be shown by induction on n that the above sum is equal to
(n*(n+1)*(2*n + 1))/6
which is exactly the intended post-condition. Since we now know that the post-condition equals this sum, it should be easy to read off the invariant from the sum.
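For a quick empirical sanity check (not a proof), here is a minimal Java sketch, assuming assertions are enabled via java -ea, that tests the candidate invariant z = i*(i+1)*(2*i + 1)/6 ^ i <= n at loop entry and after every iteration:
public class InvariantCheck {
    // Candidate invariant: z == i*(i+1)*(2*i+1)/6 && i <= n.
    static long sumOfSquares(long n) {
        long i = 1, z = 1;
        assert z == i * (i + 1) * (2 * i + 1) / 6 && i <= n;  // holds on entry
        while (i != n) {
            i = i + 1;
            z = z + i * i;
            assert z == i * (i + 1) * (2 * i + 1) / 6 && i <= n;  // preserved by the body
        }
        return z;  // at exit i == n, so the invariant yields the post-condition
    }
    public static void main(String[] args) {
        for (long n = 1; n <= 1000; n++) {
            assert sumOfSquares(n) == n * (n + 1) * (2 * n + 1) / 6;
        }
        System.out.println("invariant held for n = 1..1000");
    }
}
Passing this check is only evidence, not a proof; the actual proof obligation is that the invariant holds initially, is preserved by the loop body, and together with the negated loop condition (i == n) implies the post-condition.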

Related

Determine time-complexity of recursive function

Can anybody help me find the time complexity of this recursive function?
int test(int m, int n) {
    if (n == 0)
        return m;
    else
        return (3 + test(m + n, n - 1));
}
test(m + n, n - 1) is called n times before the base case (n == 0) is reached, so the complexity is O(n).
Also, this is a duplicate of Determining complexity for recursive functions (Big O notation)
It is really important to understand recursion and the time complexity of recursive functions.
The first step to understanding simple recursive functions like this one is being able to write the same function iteratively. This is not always easy and not always reasonable, but in the case of a simple function like yours it shouldn't be a problem.
So what happens in your function in every recursive call?
is n != 0?
If yes:
    m = m + n + 3
    n = n - 1
If no:
    return m
Now it should be pretty easy to come up with the following (iterative) alternative:
int testIterative(int m, int n) {
    while (n != 0) {
        m = m + n + 3;
        n = n - 1;
    }
    return m;
}
Please note: you should pay attention to negative n. Do you see what the problem is here?
Time complexity
After looking at the iterative version, it is easy to see that the running time depends linearly on n: the loop body executes exactly n times (for n >= 0). The time complexity therefore is O(n).
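As an aside, once the iterative form is written down you can even read off a closed form. A minimal sketch, assuming n >= 0 (the helper name is hypothetical, not part of the original question):
// Each iteration adds (current n) + 3, so the total added is
// (n + (n-1) + ... + 1) + 3n = n*(n+1)/2 + 3n.
static int testClosedForm(int m, int n) {
    return m + n * (n + 1) / 2 + 3 * n;
}
This does not change the complexity analysis of the recursion (still O(n)), but it is a handy way to double-check the loop's arithmetic.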

What is the time complexity of the given code

I want to know the time complexity of the code attached.
I get O(n^2 log n), while my friends get O(n log n) and O(n^2).
someMethod() takes log N time.
Here is the code:
j = i**2;
for (k = 0; k < j; k++) {
    for (p = 0; p < j; p++) {
        x += p;
    }
    someMethod();
}
The question is not very clear about the variable N and the statement i**2 (i**2 gives a compilation error in Java). Assuming someMethod() takes log N time (as mentioned in the question), and completely ignoring the value of N:
Let's call i**2 Z.
someMethod() runs Z times, and the time complexity of the method is log N, so that gives:
Z * log N
Let's call this expression A.
Now, x += p runs Z^2 times (the k loop times the p loop) and takes constant time per execution. That gives:
(Z^2) * 1 = Z^2
Let's call this expression B.
The total run time is the sum of expression A and expression B, which brings us to:
O((Z * log N) + (Z^2))
where Z = i**2, so the final expression is O(((i**2) * log N) + ((i**2)^2)).
If we can assume i**2 means i^2, the expression becomes
O((i^2) * log N + i^4)
Considering only the highest-order term, as we consider n^2 in n^2 + 2n + 5, the complexity can be expressed as
O(i^4)
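If you want to sanity-check these counts, here is a small sketch (assuming Java, with a counter standing in for both x += p and someMethod(); the harness is hypothetical, not from the question):
// For a given i, expect inner == (i^2)^2 = i^4 and calls == i^2.
static void countIterations(int i) {
    long j = (long) i * i;              // Z = i**2
    long inner = 0, calls = 0;
    for (long k = 0; k < j; k++) {
        for (long p = 0; p < j; p++) {
            inner++;                    // stands in for x += p
        }
        calls++;                        // stands in for someMethod()
    }
    System.out.println("i=" + i + "  inner=" + inner + " (i^4=" + j * j
            + ")  calls=" + calls + " (i^2=" + j + ")");
}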
Based on the code, the complexity is O(I^2 * log N + I^4).
We cannot give a complexity class with one variable, because the question does not explain the relationship between N and I. They must be treated as separate variables.
And likewise, we cannot eliminate the I^2 * log N term, because the N variable will dominate the I variable in some regions of the N x I space.
If we treat N as a constant, then the complexity class reduces to O(I4).
We get the same if we treat N as being the same thing as I; i.e. there is a typo in the question.
(I think there is a mistake in the way the question was set / phrased. If not, this is a trick question designed to see if you really understood the mathematical principles behind complexity involving multiple independent variables.)

How are they calculating the Time Complexity for this Problem

Problem 6: Find the complexity of the below program: 
void function(int n)
{
    int i = 1, s =1;
    while (s <= n)
    {
        i++;
        s += i;
        printf("*");
    }
}
Solution: We can define the terms s_i according to the relation s_i = s_{i-1} + i. The value of i increases by one for each iteration. The value contained in s at the i-th iteration is the sum of the first i positive integers. If k is the total number of iterations taken by the program, then the while loop terminates once 1 + 2 + 3 + ... + k = k(k+1)/2 > n, so k = O(√n).
The time complexity of the above function is therefore O(√n).
FROM: https://www.geeksforgeeks.org/analysis-algorithms-set-5-practice-problems/
I've been looking it over and over. Apparently they are saying the time complexity is O(√n). I don't understand how they are getting to this result. Can anyone break it down in detail?
At the start of the while-loop, we have s = 1, i = 1, and n is some (big) number. In each step of the loop, the following is done:
Take the current i, and increment it by one;
Add this new value for i to the sum s.
It is not difficult to see that successive updates of i form the sequence 1, 2, 3, ..., and s the sequence 1, 1 + 2, 1 + 2 + 3, .... By a result attributed to the young Gauss, the sum of the first k natural numbers is 1 + 2 + 3 + ... + k = k(k + 1) / 2. You should recognise that the sequence s fits this description, where k indicates the number of iterations!
The while-loop terminates when s > n, which is now equivalent to finding the lowest iteration number k such that (k(k + 1) / 2) > n. Simplifying for the asymptotic case, this gives a result such that k^2 > n, which we can simplify for k as k > sqrt(n). It follows that this algorithm runs in a time proportional to sqrt(n).
It is clear that k is the first integer such that k(k+1)/2 > n (otherwise the loop would have stopped earlier).
Then k-1 cannot have this same property, which means that (k-1)((k-1)+1)/2 <= n or (k-1)k/2 <= n. And we have the following sequence of implications:
(k-1)k/2 <= n  →  (k-1)k <= 2n
               →  (k-1)^2 < 2n           ; since k-1 < k
               →  k < sqrt(2n) + 1       ; solve for k
                     <= sqrt(2n) + sqrt(2n)  ; since 1 <= sqrt(2n) for n >= 1
                     = 2*sqrt(2)*sqrt(n)
                     = O(sqrt(n))
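To see the bound concretely, here is a small counting sketch (assuming Java; the harness is illustrative, not part of the original problem):
// Counts the loop iterations k and compares against the sqrt(2n) bound.
static void compareWithBound(long n) {
    long i = 1, s = 1, k = 0;
    while (s <= n) {
        i++;
        s += i;
        k++;
    }
    System.out.printf("n=%d  iterations k=%d  sqrt(2n)=%.1f%n",
            n, k, Math.sqrt(2.0 * n));
}
For n = 100 this prints k = 13 against sqrt(200) ≈ 14.1, matching the derivation above.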

Time complexity of this loop with multiplication

I wonder the complexity for this loop in terms of n
for (int i = 1; i <= n; i++) {
    for (int j = 1; j * i <= n; j++) {
        minHeap.offer(arr1[i - 1] + arr2[j - 1]);
    }
}
What I did was follow the definition of Big-O and give it a loose upper bound -- O(n^2).
This will involve some math, so get ready :)
Let's first count how many times the line minHeap.offer(arr1[i - 1] + arr2[j - 1]); gets invoked. For each i from the outer loop, the number of iterations of the inner loop is n/i, because the condition j * i <= n is equivalent to j <= n/i. Therefore, the total number of iterations of the inner loop is n/1 + n/2 + n/3 + ... + 1, or, formally written,
SUM_{i=1}^{n} n/i
There is a good approximation for this sum (it is n times the n-th harmonic number, hence roughly n * ln n) explained in detail e.g. here, so take a look. Since we are interested only in asymptotic complexity, we can take only the highest-order term, which is n * log n. If there were some O(1) operation instead of minHeap.offer(arr1[i - 1] + arr2[j - 1]);, that would already be the answer to your problem. However, the complexity of the offer method in Java is O(log k), where k denotes the current size of the priority queue. In our case, the priority queue gets larger and larger, so the total running time is log 1 + log 2 + ... + log(n * log n) = log(1 * 2 * ... * (n * log n)) = log((n * log n)!).
We can additionally simplify this by using Stirling's approximation, so the final complexity is O(n * log n * log(n * log n)).
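If you want to verify the count of offer calls empirically, here is a small sketch (assuming Java; the arrays arr1/arr2 are hypothetical stand-ins for the ones in your code):
import java.util.PriorityQueue;

// Counts offer() invocations and compares with n*ln(n), the leading
// term of the harmonic-sum bound derived above.
static void countOffers(int n) {
    int[] arr1 = new int[n], arr2 = new int[n];
    PriorityQueue<Integer> minHeap = new PriorityQueue<>();
    long offers = 0;
    for (int i = 1; i <= n; i++) {
        for (int j = 1; j * i <= n; j++) {
            minHeap.offer(arr1[i - 1] + arr2[j - 1]);
            offers++;
        }
    }
    System.out.printf("n=%d  offers=%d  n*ln(n)=%.0f%n",
            n, offers, n * Math.log(n));
}
Note this only measures how often offer runs; the extra log factor from the heap itself does not show up in the count, only in the running time.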

How to calculate the complexity of the Algorithm?

I have to calculate the complexity of this algorithm. I tried to solve it and found the answer to be O(n log n). Is it correct? If not, please explain.
for (i = 5; i < n/2; i += 5)
{
    for (j = 1; j < n; j *= 4)
        op;
    x = 3*n;
    while (x > 6)
        { op; x--; }
}
Katrina, in this example we've got an O(n*log(n)):
for (int i = 0; i < N; i++) {
    c = i;
    while (c > 0) {
        c = c / 2;
    }
}
However, you have another for loop that encloses both of these. I'm not quite sure I understand how the algorithm works, but the standard way, considering another enclosing for loop, should be O(n*n*log n):
for (int j = 0; j < N; j++) {        --> O(n)
    for (int i = 0; i < N; i++) {    --> O(n)
        c = i;
        while (c > 0) {              --> O(log n)
            c = c / 2;               --> O(1)
        }
    }
}
Altogether, this standard version would be O(n) * O(n) * O(log n) * O(1).
So, I think you forgot to include another O(n).
Hope it helps
Let's count the number of iterations in each loop.
The outermost loop for (i=5; i<n/2; i+=5) steps through all values between 5 and n / 2 in steps of 5. It will, thus, require approximately n / 10 - 1 iterations.
There are two inner loops. Let's consider the first one: for (j=1; j<n; j*=4). This steps through all values of the form 4^x between 1 and n for integer x. The lowest value of x for which this holds is 0, and the highest is the largest x that fulfills 4^x < n -- i.e., approximately log_4(n). Thus, labelling the iterations by x, we have iterations 0, 1, ..., log_4(n). In other words, we have approximately log_4(n) + 1 iterations for this loop.
Now consider the second inner loop. It steps through all values from 3 * n down to 7. Thus the number of iterations is approximately 3n - 6.
All other operations have constant run time and can therefore be ignored.
How do we put this together? The two inner loops are run sequentially (i.e., they are not nested) so the run time for both of them together is simply the sum:
(log_4(n) + 1) + (3 n - 6) = 3 n + log_4(n) - 5.
The outer loop and the two inner loops are, however, nested. For every iteration of the outer loop, both the inner ones are run. Therefore we multiply the number of iterations of the outer with the total of the inner:
(n / 10 - 1) * (3 n + log_4(n) - 5) =
= 3 n^2 / 10 + n log_4(n) / 10 - 7 n / 2 - log_4(n) + 5.
Finally, complexity is often expressed in Big-O notation -- that is, we're only interested in the order of the run time. This means two things. First, we can ignore all constant factors in all terms. For example, O(3 n^2 / 10) becomes just O(n^2). Thereby we have:
O(3 n^2 / 10 + n log_4(n) / 10 - 7 n / 2 - log_4(n) + 5) =
= O(n^2 + n log_4(n) - n - log_4(n) + 1).
Second, we can ignore all terms that have a lower order than the term with the highest order. For example, n is of a higher order than 1, so we have O(n + 1) = O(n). Thereby we have:
O(n^2 + n log_4(n) - n - log_4(n) + 1) = O(n^2).
Finally, we have the answer. The complexity of the algorithm your code describes is O(n^2).
(In practice, one would never calculate the (approximate) number of iterations as we did here. The simplification we did in the last step can be done earlier which makes the calculations much easier.)
As Fredrik mentioned, the time complexity of the first and third loops is O(n), and the time for the second loop is O(log(n)). So the complexity of the following algorithm is O(n^2):
for (i = 5; i < n/2; i += 5)
{
    for (j = 1; j < n; j *= 4)
        op;
    x = 3*n;
    while (x > 6)
        { op; x--; }
}
Note that the complexity of the following algorithm is O(n^2 * log(n)), which is not the same as the above algorithm:
for (i = 5; i < n/2; i += 5)
{
    for (j = 1; j < n; j *= 4)
    {
        op;
        x = 3*n;
        while (x > 6)
            { op; x--; }
    }
}
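To make the difference tangible, here is a small counting sketch (assuming Java, with a counter standing in for op; the harness is illustrative):
// Counts executions of `op` in both variants to show the different growth.
static long[] countOps(long n) {
    long flat = 0, nested = 0;
    // Variant 1: the inner loops run one after the other -> O(n^2).
    for (long i = 5; i < n / 2; i += 5) {
        for (long j = 1; j < n; j *= 4)
            flat++;
        long x = 3 * n;
        while (x > 6) { flat++; x--; }
    }
    // Variant 2: the while loop is nested inside the j loop -> O(n^2 * log n).
    for (long i = 5; i < n / 2; i += 5) {
        for (long j = 1; j < n; j *= 4) {
            nested++;
            long x = 3 * n;
            while (x > 6) { nested++; x--; }
        }
    }
    return new long[] { flat, nested };
}
Doubling n should roughly quadruple flat, while nested grows by a bit more than a factor of four because of the extra log n factor.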