I am trying to find the loop invariant in the following code:
Find Closest Pair Iter(A):
    # Precondition: A is a non-empty list of 2D points and len(A) > 1.
    # Postcondition: Returns a pair of points which are the two closest points in A.
    min = infinity
    p = -1
    q = -1
    for i = 0, ..., len(A) - 1:
        for j = i + 1, ..., len(A) - 1:
            if Distance(A[i], A[j]) < min:
                min = Distance(A[i], A[j])
                p = i
                q = j
    return (A[p], A[q])
I think the loop invariant is min = Distance(A[i], A[j]), so the closest pair in A is A[p] and A[q].
I'm trying to show program correctness. Here I want to prove the inner loop correct by letting i be some constant; then, once I've proven the inner loop, replace it by its loop invariant and prove the outer loop. By the way, this is homework. Any help will be much appreciated.
I'm not sure I fully understand what you mean by replacing the inner loop by its loop invariant. A loop invariant is a condition that holds before the loop and after every iteration of the loop (including the last one).
That being said, I wouldn't like to spoil your homework, so I'll try my best to help you without giving too much of the answer away. Let me try:
There are three variables in your algorithm that hold very important values (min, p and q). You should ask yourself: what is true about these values as the algorithm goes through each pair of points (A[i], A[j])?
In a simpler example: if you were designing an algorithm to sum values in a list, you would create a variable called sum before the loop and assign 0 to it. You would then sum the elements one by one through a loop, and then return the variable sum.
Since it is true that this variable holds the sum of every single element "seen" in the loop, and since after the main loop the algorithm will have "seen" every element in the list, the sum variable necessarily holds the sum of all values in the list. In this case the loop invariant would be: The sum variable holds the sum of every element "seen" so far.
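For instance, here is a minimal Python sketch of that sum example (the function and variable names are just for illustration), with the invariant written as a comment:

```
def sum_list(A):
    total = 0
    for k in range(len(A)):
        # Invariant: total holds the sum of A[0], ..., A[k-1],
        # i.e. the sum of every element "seen" so far.
        total = total + A[k]
    return total
```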
Good luck with your homework!
for(i=1;i<=n;i=i*2)
{
    for(j=1;j<=i;j++)
    {
    }
}
How is the complexity of the above code O(n log n)?
Time complexity in terms of what? If you want to know how many inner-loop iterations the algorithm performs, it is not O(n log n). If you also want to take the arithmetic operations of the loops into account, then see further below. If you literally plug that code into a programming language, chances are the compiler will notice that it does nothing and optimise the loops away, resulting in constant O(1) time complexity. Based only on what you've given us, though, I would interpret it as time complexity in terms of whatever might be inside the inner loop, not counting the arithmetic operations of the loops themselves. If so:
Consider an iteration of your inner loop to be a constant-time operation; then we just need to count how many iterations the inner loop makes.
You will find that it makes
1 + 2 + 4 + 8 + ... + n
iterations if n is a power of two. If it is not, it will stop a bit sooner, but this will be our upper limit.
We can write this more generally as
the sum of 2^i where i ranges from 0 to log2(n).
Now, if you do the math, e.g. using the formula for geometric sums, you will find that this sum equals
2n - 1.
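For reference, here is that geometric-sum step worked out (assuming n is a power of two, so that log2(n) is an integer):

$$\sum_{i=0}^{\log_2 n} 2^i = 2^{\log_2 n + 1} - 1 = 2 \cdot 2^{\log_2 n} - 1 = 2n - 1.$$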
So we have a time complexity of O(2n - 1) = O(n), if we don't take the arithmetic operations of the loops into account.
If you wish to verify this experimentally, the best way is to write code that counts how many times the inner loop runs. In JavaScript, you could write it like this:
function f(n) {
    let c = 0;
    for (let i = 1; i <= n; i = i * 2) {
        for (let j = 1; j <= i; j++) {
            ++c;
        }
    }
    console.log(c);
}
f(2);
f(4);
f(32);
f(1024);
f(1 << 20);
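If the analysis above holds, each of these calls should print 2n - 1 for its power-of-two input, i.e. 3, 7, 63, 2047 and 2097151 respectively.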
If you do want to take the arithmetic operations into account, then it depends a bit on your assumptions, but you can indeed end up with some logarithmic factors. It depends on how you formulate the question and how you define an operation.
First, we need to estimate the number of high-level operations executed for different n. In this case the inner loop body is the operation you want to count, if I understood the question right.
If doing that by hand is difficult, you can automate it. I used Matlab for the example code since there was no tag for a specific language. The testing code will look like this:
% Reasonable amount of input elements placed in array, change it to fit your needs
x = 1:1:100;
% Plot linear function
plot(x, x, 'DisplayName', 'O(n)', 'LineWidth', 2);
hold on;
% Plot n*log(n) function
plot(x, x.*log(x), 'DisplayName', 'O(nln(n))', 'LineWidth', 2);
hold on;
% Apply our function to each element of x
measured = arrayfun(@(v) test(v), x);
% Plot number of high level operations performed by our function for each element of x
plot(x, measured, 'DisplayName', 'Measured', 'LineWidth', 2);
legend

% Our function
function k = test(n)
    % Counter for operations
    k = 0;
    % Outer loop, same as for(i=1;i<=n;i=i*2)
    i = 1;
    while i <= n
        % Inner loop
        for j = 1:1:i
            % Count operations
            k = k + 1;
        end
        i = i*2;
    end
end
And the result will look like this (a plot of the measured operation counts against the O(n) and O(n·ln(n)) reference curves):
Our complexity is worse than linear but not worse than O(n log n), so we choose O(n log n) as an upper bound.
Furthermore, the upper bound should be:
O(n*log2(n))
The worst case is n being a power of two, i.e. n = 2^x for some integer x.
The inner loop is evaluated n times, the outer loop log2(n) (logarithm base 2) times.
Given an integer n such that (1<=n<=10^18)
We need to calculate f(1)+f(2)+f(3)+f(4)+....+f(n).
f(x) is defined as follows:
Say, x = 1112222333,
then f(x) = 1002000300.
Whenever we see a contiguous run of equal digits, we keep the first digit of the run and replace all the digits after it with zeros.
Formally, f(x) = sum over all contiguous runs of (first digit of the run * 10^i), where i is the index of the leftmost digit of that run (zero-based, counted from the right).
f(x)=1*10^9 + 2*10^6 + 3*10^2 = 1002000300.
In x = 1112222333, the digit at index 9 is 1, and so on...
We follow zero-based indexing :-)
For x = 1234: the digit at index 0 is 4, the digit at index 1 is 3, the digit at index 2 is 2, and the digit at index 3 is 1.
How to calculate f(1)+f(2)+f(3)+....+f(n)?
I want to generate an algorithm which calculates this sum efficiently.
There is nothing to calculate.
Multiplying each digit by its place value and adding them up will yield the same number.
So all you want to do is end up with 0s on a repeated digit.
I.e. let's populate some static values in an array, in pseudo code:
```
$As[1]='0'
$As[2]='00'
$As[3]='000'
...etc
$As[18]='000000000000000000'
```
These are the "results" of 10^index.
Given a value n of `1234`,
`1&'000' + 2&'00' + 3&'0' + 4` (where & is string concatenation)
results in `1234`.
So, if you are putting this on a chip, then probably your most efficient method is to do a bitwise XOR between each register and the next one up the line as a single operation.
Then you will have 0s in all the spots you care about, and you just retrieve the values in the registers with a 1.
In code, I think it would be most efficient to do the following:
```
$n = arbitrary value 11223334
$x = $n*10
$zeros = ($x-$n)/10
```
Okay yeah we can just do bit shifting to get a value like 100200300400 etc.
To approach this problem, it could help to begin with one-digit numbers and see what sum you get.
I mean like this:
Let's say we define F(k) as the sum of f(x) over all k-digit strings x, i.e. over x = 0, ..., 10^k - 1 written with leading zeros. Then we have:
F(1) = 45                  # = 10*9/2 by the Gauss sum formula
F(2) = F(1)*9 + F(1)*100   # F(1)*9 is the part that comes from the last digit,
                           # because for each of the 10 possible digits in the
                           # first position, we have 9 digits in the last
                           # (both can't be equal, so one out of ten
                           # becomes zero). F(1)*100 comes from the leading digit,
                           # which is multiplied by 100 (10 because we add the
                           # second digit and another factor of 10 because we
                           # get the digit ten times in that position).
If you now continue with this scheme, for k>=1 in general you get
F(k+1)= F(k)*100+10^(k-1)*45*9
The rest is probably straightforward.
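As a quick sanity check, here is a small Python sketch (assuming the definition of F(k) above, i.e. the sum of f(x) over all k-digit strings with leading zeros allowed; the function names are made up) that compares the recurrence against brute force for small k:

```
def f(x):
    # Keep the first digit of each run of equal digits, zero out the rest.
    s = str(x)
    kept = [s[0]] + [d if d != prev else '0' for prev, d in zip(s, s[1:])]
    return int(''.join(kept))

def F_bruteforce(k):
    # Sum of f over all k-digit strings 00...0 to 99...9 (leading zeros allowed).
    return sum(f(str(x).zfill(k)) for x in range(10 ** k))

def F_recurrence(k):
    # F(1) = 45, F(k+1) = F(k)*100 + 10^(k-1)*45*9
    F = 45
    for i in range(1, k):
        F = F * 100 + 10 ** (i - 1) * 45 * 9
    return F

for k in range(1, 5):
    assert F_bruteforce(k) == F_recurrence(k)   # matches for k = 1..4
```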
Can you tell me which HackerRank task this is? I guess it's one of the Project Euler tasks, right?
I am trying to rank these functions — 2^n, n^100, (n + 1)^2, n·lg(n), 100^n, n!, lg(n), and n^99 + n^98 — so that each function is the big-O of the next function, but I do not know a method of determining if one function is the big-O of another. I'd really appreciate it if someone could explain how I would go about doing this.
Assuming you have some programming background, say you have the code below:
void SomeMethod(int x)
{
    for(int i = 0; i < x; i++)
    {
        // Do Some Work
    }
}
Notice that the loop runs for x iterations. Generalizing, we say that you will get the solution after N iterations (where N is the value of x, e.g. the number of items in the array/input, etc.).
So this type of implementation/algorithm is said to have a time complexity of order N, written as O(n).
Similarly, a nested for (2 loops) is O(n squared) => O(n^2).
If you make binary decisions that cut the possibilities in half and pick only one half to continue with, then the complexity is O(log n).
Found this link to be interesting.
For: Himanshu
While the link explains very well how log2(N) complexity comes into the picture, let me put the same in my own words.
Suppose you have a pre-sorted list like:
1,2,3,4,5,6,7,8,9,10
Now, you have been asked to find whether 10 exists in the list. The first solution that comes to mind is to loop through the list and find it, which means O(n). Can it be made better?
Approach 1:
As we know the list is already sorted in ascending order, so:
Break the list at the center (say at 5).
Compare the value at the center (5) with the search value (10).
If center value == search value => item found.
If center value < search value => do the above steps for the right half of the list.
If center value > search value => do the above steps for the left half of the list.
For this simple example we will find 10 after doing 3 or 4 breaks (at 5, then 8, then 9), depending on how you implement it.
That means for N = 10 items the search took 3 (or 4) lookups. Putting some mathematics in here:
2^3 + 2 = 10; for simplicity's sake let's say
2^3 = 10 (nearly equal; this is just to keep the base-2 logarithm simple).
This can be re-written as:
log2(10) = 3 (again, nearly).
We know 10 was the number of items and 3 was the number of breaks/lookups we had to do to find the item. It becomes
log N = K
That is the complexity of the algorithm above: O(log N).
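Here is a minimal Python sketch of the approach described above (iterative binary search on a sorted list), just to make the halving concrete; the function name and the step counter are only for illustration:

```
def binary_search(sorted_list, target):
    lo, hi = 0, len(sorted_list) - 1
    steps = 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2           # break the list at the center
        if sorted_list[mid] == target:
            return mid, steps          # item found
        elif sorted_list[mid] < target:
            lo = mid + 1               # continue in the right half
        else:
            hi = mid - 1               # continue in the left half
    return -1, steps                   # not found

print(binary_search([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], 10))
# -> (9, 4): found at index 9 after 4 comparisons
```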
Generally, when a loop is nested we multiply the bounds, i.e. O(outer-loop max value * inner-loop max value), and so on. E.g. for (i to n){ for (j to k){} }: for i = 1, j runs from 1 to k (that is, 1 * k); for i = 2, j again runs from 1 to k; and so on, so O(max(i) * max(j)) gives O(n*k). Further, if you want to find the order, you need to recall how the basic operations compare, e.g. O(n + n) (addition) < O(n * n) (multiplication); the logarithm shrinks its argument, so O(log n) < O(n) < O(n + n) (addition) < O(n * n) (multiplication), and so on. In this way you can rank the other functions as well.
A better approach is to first generalise the expression for the running time, like n! = n*(n-1)*(n-2)*...*(n-(n-1)). Somewhere O(n^k) would be the generalised form of the worst-case complexity; this way you can compare, e.g. if k = 2 then O(n^k) = O(n*n).
I'm new to Scilab (and programming in general). I'm trying to write Scilab code to solve the cutting stock problem, aka 'bin packing'.
The problem: given 'n' items of sizes[] (a vector from s1 to sn), and same capacity for all bins (c=1000), I need to minimize the number of bins required to fit all items.
I'm trying the 'next item' algorithm, i.e., pick the first item from the vector and put it in the bin, then pick the next item and try to put it in the same bin; in case there is not enough space, create another bin.
Actually I don't need help improving the algorithm, but rather implementing the code for this specific one.
Here is what I've tried so far:
// 'n' is the number of items to be packed
// 'c' is the capacity (how much items fit in the bin)
// 'sizes' a vector containing the size of n items
// 'nbins' number of bins used so far
// 'bin_rem' space left in current bin
sizes=[400,401,402,403,404,405,406,408,409,411,428,450,482]
c=1000
n=length(sizes)
nbins = 0
bin_rem = c
function y = bins(sizes,c,n)
for i=0; i<n; i=i+1
if sizes[i] > bin_rem
nbins=nbins+1
bin_rem = c - sizes(i)
bin_rem = bin_rem - sizes(i)
end
endfunction
disp ("Number of bins needed "+string(bins([sizes,c,n])))
end
I'm stuck with the error below and have no idea how to solve it.
at line 20 of executed file
endfunction
^~~~~~~~~~~^
Error: syntax error, unexpected endfunction, expecting end
Any help?
First, seems like you still don't quite understand Scilab's syntax, since I see you using sizes[i], instead of sizes(i), and calling bins([sizes,c,n]). So, for now, try not to use functions. As for the error you get, it happens because you forgot one end. The way you wrote your code, only the if statement is closed, and for loop is still open.
Secondly, when you correct that, you'll notice that your program does not work properly, and that is because you defined the loop wrong. In Scilab, the for loop is actually a "for each" loop, therefore, you need to provide a full range of values for each iteration, instead of starting value (i=0), condition (i<n) and increment function (i=i+1).
Thirdly, seems like you understand the algorithm you're trying to use, but you implemented it wrong: the last line inside the loop should be the else statement.
Solving all those, you have the following piece of code:
sizes=[400,401,402,403,404,405,406,408,409,411,428,450,482]
c=1000
n=length(sizes)
nbins = 0 //should start as 0
bin_rem = 0 //should start as 0
for i = 1:n
    if sizes(i) > bin_rem
        nbins = nbins + 1;
        bin_rem = c - sizes(i);
    else
        bin_rem = bin_rem - sizes(i);
    end
end
disp ("Number of bins needed "+string(nbins))
Just to clarify, 1:n means a vector from 1 to n with a step of 1. You could also have written the vector explicitly, e.g. for i = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13], since n is 13 here.
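If I traced the loop correctly, this next-fit pass reports 7 bins for the sizes above: the items pack pairwise as (400, 401), (402, 403), (404, 405), (406, 408), (409, 411), (428, 450), with 482 alone in the last bin.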
Possible Duplicate:
Finding three elements in an array whose sum is closest to a given number
How can I write Objective-C code to check whether the sum of any three numbers in an array/list matches a given number?
Step 1: sort, O(n lg n).
Step 2: iterate over every number, say A (this costs O(n)), then check whether the sum of any two of the remaining numbers equals the given number minus A (this is a classic problem which costs O(n) on a sorted array).
Total complexity: O(n^2).
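Here is a small Python sketch of that approach, only as an illustration of the idea since the question asks for Objective-C; it sorts once and then, for each fixed element, solves the remaining two-sum with two pointers (the names are made up):

```
def has_three_sum(nums, target):
    a = sorted(nums)               # step 1: sort, O(n log n)
    n = len(a)
    for i in range(n - 2):         # fix the first element A = a[i]
        lo, hi = i + 1, n - 1      # two-sum on the rest: O(n) per i
        while lo < hi:
            s = a[i] + a[lo] + a[hi]
            if s == target:
                return True
            if s < target:
                lo += 1
            else:
                hi -= 1
    return False

print(has_three_sum([12, 3, 4, 1, 6, 9], 24))  # True: 3 + 9 + 12
print(has_three_sum([1, 2, 3], 10))            # False
```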
Here is another way.
Let X, Y, Z be elements of the array and P the given number.
If the condition were X + Y = P,
then we would sort the array,
and then pick each element Y and binary search for P - Y in the remaining array. If the search is successful then fine, else return false.
Searching takes log(n) time (binary search), so for n elements it takes O(n log(n)) time.
Now our condition is X + Y + Z = P.
We reduce it to X + Y = P - Z.
Now pick a number Z, calculate P - Z and call it R.
Now the problem is reduced to X + Y = R, so the time complexity is O(n log(n)).
Since R varies over the n possible picks of Z, the total complexity is O(n^2 log(n)).
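And a rough Python sketch of this variant, again only as an illustration (sort once, then for each pick of Z binary search for R - Y, taking care not to reuse the same element); the names are made up:

```
from bisect import bisect_left

def three_sum_binary_search(nums, p):
    a = sorted(nums)
    n = len(a)
    for k in range(n):              # pick Z = a[k], so R = p - Z
        r = p - a[k]
        for i in range(n):          # pick Y = a[i], search for X = R - Y
            if i == k:
                continue
            x = r - a[i]
            j = bisect_left(a, x)   # binary search: O(log n)
            # Accept a match only at an index different from i and k.
            while j < n and a[j] == x:
                if j != i and j != k:
                    return True
                j += 1
    return False

print(three_sum_binary_search([12, 3, 4, 1, 6, 9], 24))  # True
```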
Here's a brute-force solution in Python, valuable only for its succinctness, not at all for its efficiency:
import itertools
def anyThreeEqualTo(list, value):
    return any([sum(c) == value for c in itertools.combinations(list, 3)])
Another idea:
import itertools
def anyThreeEqualTo(list, value):
    for c in itertools.combinations(list, 3):
        if sum(c) == value:
            return True
    return False
These solutions try each of the triplets in turn until one is found with the desired sum.