I'm building a model using the or-tools CP-SAT solver (cp_model). The values I want to find are placed in a vector X, and I want to add a constraint saying that each position of X cannot take a value bigger than the maximum found in the prefix X[:i], plus 1.
It would be something like this:
X[i] <= (max(X[:i]) + 1)
Of course, I cannot add this as a linear constraint because of the max(). Creating one extra variable for the upper bound of each value of X seems excessive, and I would also need to minimize each one to make it the actual "max"; otherwise those are just upper bounds that could be huge and would not prune my search space (and I already have an objective function).
I know that one trick to model, for instance, a min-max problem (min(max(X[i]))) is to create another variable that is an upper bound of every X[i] and minimize that one. It would be something like this:
model = cp_model.CpModel()
lb = 0
ub = 100  # placeholder bounds on the values
z = model.NewIntVar(lb, ub, 'z')
X = [model.NewIntVar(lb, ub, f'x{i}') for i in range(n)]  # n = number of variables
for i in range(n):
    model.Add(X[i] <= z)
model.Minimize(z)
In case you don't want to program this yourself, you can use the method in or-tools:
model.AddMaxEquality(z, X)
Now I want to add a constraint that, for each element of X, sets an upper limit equal to the maximum value found up to the previous element. It would be something like this:
X[i] <= max(X[:i]) + 1
I was thinking of replicating the previous idea, but that would require creating a "z" for each x... I'm not sure whether that is the best approach or how much it will reduce my search space. At the same time, I couldn't find a method in or-tools to do this.
Any suggestions, please?
PS: I already have as an objective function min(z) like it is in the example presented.
Example:
For instance, you can have as a result of the model:
[0, 1, 2, 0, 2, 3]
But you shouldn't have:
[0, 1, 1, 2, 4]
Since the max of X[:4] (the first four values) is 2, the upper bound of X[4] should be 2 + 1 = 3, yet it is 4.
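To state the rule concretely, here is a tiny Python check of it against those two vectors (illustrative only, not part of the model):

def respects_rule(xs):
    # every element after the first may exceed the running maximum by at most 1
    for i in range(1, len(xs)):
        if xs[i] > max(xs[:i]) + 1:
            return False
    return True

print(respects_rule([0, 1, 2, 0, 2, 3]))   # True
print(respects_rule([0, 1, 1, 2, 4]))      # False: 4 > max([0, 1, 1, 2]) + 1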
Thanks!
I have no specific hints except:
you need to experiment; one modeling trick may work on one kind of model and not on another
make sure to reuse the max variable at index i - 1. With X the array of variables and M the array of running maxima, i.e. M[i] = max(X[0], ..., X[i - 1]):
M[i] = max(M[i - 1], X[i - 1])
X[i] <= M[i] + 1
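A minimal CP-SAT sketch of that recurrence (my own illustration: n and ub are placeholder values, and the question's other constraints and objective are omitted):

from ortools.sat.python import cp_model

n, ub = 6, 10                      # placeholder size and value bound
model = cp_model.CpModel()
X = [model.NewIntVar(0, ub, f'x{i}') for i in range(n)]
# M[i] stands for max(X[0], ..., X[i-1]); it is only needed from i = 1 on
M = [model.NewIntVar(0, ub, f'm{i}') for i in range(n)]

model.Add(M[1] == X[0])                               # max of the one-element prefix
model.Add(X[1] <= M[1] + 1)
for i in range(2, n):
    model.AddMaxEquality(M[i], [M[i - 1], X[i - 1]])  # reuse the previous max variable
    model.Add(X[i] <= M[i] + 1)

solver = cp_model.CpSolver()
solver.Solve(model)
print([solver.Value(x) for x in X])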
In Cracking the Coding Interview, 6th edition, page 6, the amortized time for insertion is explained as:
As we insert elements, we double the capacity when the size of the array is a power of 2. So after X elements, we double the capacity at
array sizes 1, 2, 4, 8, 16, ... , X.
That doubling takes, respectively, 1, 2, 4, 8, 16, 32, 64, ... , X
copies. What is the sum of 1 + 2 + 4 + 8 + 16 + ... + X?
If you read this sum left to right, it starts with 1 and doubles until
it gets to X. If you read right to left, it starts with X and halves
until it gets to 1.
What then is the sum of X + X/2 + X/4 + ... + 1? This is roughly 2X.
Therefore, X insertions take O(2X) time. The amortized time for each
insertion is O(1).
While for this code snippet (a recursive algorithm),
int f(int n) {
    if (n <= 1) {
        return 1;
    }
    return f(n - 1) + f(n - 1);
}
The explanation is:
The tree will have depth N. Each node has two children. Therefore,
each level will have twice as many calls as the one above it.
Therefore, there will be 2^0 + 2^1 + 2^2 + 2^3 + ... + 2^N (which is
2^(N+1) - 1) nodes. In this case, this gives us O(2^N).
My question is:
In the first case, we have the GP 1 + 2 + 4 + 8 + ... + X. In the second case we have the same GP 1 + 2 + 4 + 8 + ... + 2^N. Why is the sum 2X in one case, while it is 2^(N+1) - 1 in the other?
I think it might be because we can't represent X as 2^N, but I'm not sure.
Because in the second case N is the depth of the tree and not the total number of nodes. The two sums are the same: the largest term is X = 2^N, as you already stated, so 2X = 2^(N+1), which differs from 2^(N+1) - 1 only by one.
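A quick numeric check of that identity (illustrative Python; N is chosen arbitrarily):

N = 10
X = 2 ** N                                    # the largest term
gp_sum = sum(2 ** i for i in range(N + 1))    # 1 + 2 + 4 + ... + 2^N
assert gp_sum == 2 ** (N + 1) - 1             # closed form in terms of the depth N
assert gp_sum == 2 * X - 1                    # "roughly 2X" in terms of the largest term X
print(gp_sum, 2 * X)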
So there was a puzzle:
This equation is incomplete: 1 2 3 4 5 6 7 8 9 = 100. One way to make
it accurate is by adding seven plus and minus signs, like so: 1 + 2 +
3 – 4 + 5 + 6 + 78 + 9 = 100.
How can you do it using only 3 plus or minus signs?
I'm quite new to Prolog. I solved the puzzle, but I wonder how to optimize it.
makeInt(S,F,FinInt):-
    getInt(S,F,0,FinInt).

getInt(Start, Finish, Acc, FinInt):-
    0 =< Finish - Start,
    NewAcc is Acc*10 + Start,
    NewStart is Start + 1,
    getInt(NewStart, Finish, NewAcc, FinInt).
getInt(Start, Finish, A, A):-
    0 > Finish - Start.

itCounts(X,Y,Z,Q):-
    member(XLastDigit,[1,2,3,4,5,6]),
    FromY is XLastDigit+1,
    numlist(FromY, 7, ListYLastDigit),
    member(YLastDigit, ListYLastDigit),
    FromZ is YLastDigit+1,
    numlist(FromZ, 8, ListZLastDigit),
    member(ZLastDigit,ListZLastDigit),
    FromQ is ZLastDigit+1,
    member(YSign,[-1,1]),
    member(ZSign,[-1,1]),
    member(QSign,[-1,1]),
    0 is XLastDigit + YSign*YLastDigit + ZSign*ZLastDigit + QSign*9,
    makeInt(1, XLastDigit, FirstNumber),
    makeInt(FromY, YLastDigit, SecondNumber),
    makeInt(FromZ, ZLastDigit, ThirdNumber),
    makeInt(FromQ, 9, FourthNumber),
    X is FirstNumber,
    Y is YSign*SecondNumber,
    Z is ZSign*ThirdNumber,
    Q is QSign*FourthNumber,
    100 =:= X + Y + Z + Q.
I'm not sure this counts as an optimization; the code is just shorter:
sum_123456789_eq_100_with_3_sum_or_sub(L) :-
    append([G1,G2,G3,G4], [0'1,0'2,0'3,0'4,0'5,0'6,0'7,0'8,0'9]),
    maplist([X]>>(length(X,N), N>0), [G1,G2,G3,G4]),
    maplist([G,F]>>(member(Op, [0'+,0'-]), F=[Op|G]), [G2,G3,G4], [F2,F3,F4]),
    append([G1,F2,F3,F4], L),
    read_term_from_codes(L, T, []),
    100 is T.
It took me a while, but I got what your code is doing. It's something like this:
itCounts(X,Y,Z,Q) :-   % generate X, Y, Z, and Q s.t. X+Y+Z+Q=100, etc.
    generate X as a list of digits
    do the same for Y, Z, and Q
    pick the signs for Y, Z, and Q
    convert all those lists of digits into numbers
    verify that, with the signs, they add to 100.
The inefficiency here is that the testing is all done at the last minute. You can improve the efficiency if you can throw out some possible solutions as soon as you pick one of your numbers, that is, testing earlier.
itCounts(X,Y,Z,Q) :-   % generate X, Y, Z, and Q s.t. X+Y+Z+Q=100, etc.
    generate X as a list of digits, and convert it to a number
    if it's so big or so small the rest can't possibly bring the sum back to 100, fail
    generate Y as a list of digits, convert to a number, and pick its sign
    if it's so big or so small the rest can't possibly bring the sum to 100, fail
    do the same for Z
    do the same for Q
Your function is running pretty fast already, even if I search all possible solutions. It only picks 6 X's; 42 Y's; 224 Z's; and 15 Q's. I don't think optimizing will be worth your while.
But if you really wanted to: I tested this by putting a testing function immediately after selecting an X. It reduced the 6 X's to 3 (all before finding the solution); 42 Y's to 30; 224 Z's to 184; and 15 Q's to 11. I believe we could reduce it further by testing immediately after a Y is picked, to see whether X + YSign*Y is already so large or so small that there can be no solution.
In Prolog programs that are more computationally intensive, putting parts of the 'test' earlier in 'generate and test' algorithms can help a lot.
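For illustration only (this is not the original Prolog), the same generate-and-test-with-early-pruning idea can be sketched in Python; here the three cut positions play the role of choosing the last digits, and the partial-sum test runs as soon as the first number is known:

from itertools import combinations, product

DIGITS = "123456789"

def solutions():
    for i, j, k in combinations(range(1, 9), 3):        # three cuts -> four groups
        groups = [int(DIGITS[:i]), int(DIGITS[i:j]),
                  int(DIGITS[j:k]), int(DIGITS[k:])]
        first, rest = groups[0], groups[1:]
        # early test: even with the best choice of signs, can we still reach 100?
        if not (first - sum(rest) <= 100 <= first + sum(rest)):
            continue
        for signs in product((1, -1), repeat=3):
            terms = [first] + [s * g for s, g in zip(signs, rest)]
            if sum(terms) == 100:
                yield terms

print(list(solutions()))   # contains [123, -45, -67, 89]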
I'm trying to get a good grasp of this problem, but I'm struggling.
Let's say that I have S = {1,2,3,4,5}, L = {(1,3,4),(2,3),(4,5),(1,3),(2),(5)}, and another tuple with the costs of the subsets in L, like C = {10,20,12,15,4,10}.
I want to write a constraint program in Prolog that finds the solution with the minimum cost (here, the total sum of the costs of the subsets I pick).
My problem is that I cannot understand how to model it. What I know is that I should choose a model with binary {0,1} variables, but I don't really understand how to express that in Prolog.
There is an easy way to do it: You can use Boolean indicators to denote which elements comprise a subset. For example, in your case:
subsets(Sets) :-
    Sets = [[1,0,1,1,0]-10,  % {1,3,4}
            [0,1,1,0,0]-20,  % {2,3}
            [0,0,0,1,1]-12,  % {4,5}
            [1,0,1,0,0]-15,  % {1,3}
            [0,1,0,0,0]-4,   % {2}
            [0,0,0,0,1]-10]. % {5}
I now use SICStus Prolog and its Boolean constraint solver to express set covers:
:- use_module(library(lists)).
:- use_module(library(clpb)).
setcover(Cover, Cost) :-
    subsets(Sets),
    keys_and_values(Sets, Rows, Costs0),
    transpose(Rows, Cols),
    same_length(Rows, Coeffs),
    maplist(cover(Coeffs), Cols),
    labeling(Coeffs),
    phrase(coeff_is_1(Coeffs, Rows), Cover),
    phrase(coeff_is_1(Coeffs, Costs0), Costs),
    sumlist(Costs, Cost).

cover(Coeffs, Col) :-
    phrase(coeff_is_1(Col,Coeffs), Cs),
    sat(card([1],Cs)).
coeff_is_1([], []) --> [].
coeff_is_1([1|Cs], [L|Ls]) --> [L], coeff_is_1(Cs, Ls).
coeff_is_1([0|Cs], [_|Ls]) --> coeff_is_1(Cs, Ls).
For each subset, a Boolean variable is used to denote whether that subset is part of the cover. Cardinality constraints make sure that each element is covered exactly once.
Example query and its result:
| ?- setcover(Cover, Cost).
Cover = [[0,0,0,1,1],[1,0,1,0,0],[0,1,0,0,0]],
Cost = 31 ? ;
Cover = [[1,0,1,1,0],[0,1,0,0,0],[0,0,0,0,1]],
Cost = 24 ? ;
no
I leave picking a cover with minimum cost as an easy exercise.
Maybe an explicit model for your problem instance makes things a bit clearer:
cover(SetsUsed, Cost) :-
    SetsUsed = [A,B,C,D,E,F],  % a Boolean for each set
    SetsUsed #:: 0..1,
    A + D #= 1,                % use one set with element 1
    B + E #= 1,                % use one set with element 2
    A + B + D #= 1,            % use one set with element 3
    A + C #= 1,                % use one set with element 4
    C + F #= 1,                % use one set with element 5
    Cost #= 10*A + 20*B + 12*C + 15*D + 4*E + 10*F.
You can solve this e.g. in ECLiPSe:
?- cover(SetsUsed,Cost), branch_and_bound:minimize(labeling(SetsUsed), Cost).
SetsUsed = [1, 0, 0, 0, 1, 1]
Cost = 24
Yes (0.00s cpu)
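For comparison only (this is my own translation, not part of either answer), the same explicit model can be written with OR-Tools CP-SAT in Python, where Minimize replaces the branch-and-bound call:

from ortools.sat.python import cp_model

model = cp_model.CpModel()
a, b, c, d, e, f = used = [model.NewBoolVar(s) for s in "abcdef"]
model.Add(a + d == 1)          # use one set with element 1
model.Add(b + e == 1)          # use one set with element 2
model.Add(a + b + d == 1)      # use one set with element 3
model.Add(a + c == 1)          # use one set with element 4
model.Add(c + f == 1)          # use one set with element 5
model.Minimize(10*a + 20*b + 12*c + 15*d + 4*e + 10*f)

solver = cp_model.CpSolver()
solver.Solve(model)
print([solver.Value(v) for v in used], solver.ObjectiveValue())   # [1, 0, 0, 0, 1, 1] 24.0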
In NTRU encryption, I have seen truncated polynomials, but I cannot understand how the truncated polynomial calculation works.
So, could anyone tell me how we calculate with truncated polynomials?
The polynomials are truncated in the sense that they only have coefficients up to a certain degree.
Here is how you truncate the product of two truncated polynomials (the sum is trivial):
Assume you have two truncated polynomials, i.e. two polynomials of degree no greater than n-1
a = a[0] + a[1]X + ... + a[n-1]X^(n-1)
b = b[0] + b[1]X + ... + b[n-1]X^(n-1)
Then their "truncated" product is defined as the polynomial
a * b = c[0] + c[1]X + ... +c[n-1]X^(n-1)
where the c[k] coefficients are computed as follows:
Reverse b[0]..b[n-1] to get b[n-1]..b[0].
Rotate the result of step 1 above k+1 times to the right and get b[k]..b[0]b[n-1]..b[k+1]
Denote with b_k[0]..b_k[n-1] the array calculated in 2.
Now define
c[k] = a[0]b_k[0] + a[1]b_k[1] + ... + a[n-1]b_k[n-1].
This operation can also be performed by multiplying the polynomials a and b in the usual way and then reducing the result modulo X^n - 1, i.e. replacing each X^(n+k) by X^k and adding its coefficient to that of X^k. The point of the algorithm above is that it computes each final coefficient directly, without first forming the full product of degree 2(n-1).
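A small Python sketch of that coefficient formula, c[k] = sum over j of a[j]*b[(k - j) mod n], with made-up coefficient lists:

def star_multiply(a, b):
    # product in the ring of polynomials modulo X^n - 1 (cyclic convolution);
    # the index wrap-around is what the reverse-and-rotate steps above compute
    n = len(a)
    return [sum(a[j] * b[(k - j) % n] for j in range(n)) for k in range(n)]

# (1 + 2X + 3X^2) * (4 + 5X + 6X^2) modulo X^3 - 1
print(star_multiply([1, 2, 3], [4, 5, 6]))   # [31, 31, 28]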