Explicitly write the sum of bilinear or trilinear terms in GAMS? - gams-math

I am working on writing a constraint in GAMS which involves a polynomial with the sum of 1st, 2nd, and 3rd order terms. Since there are 7 variables (from x('1') to x('7')), there are in total 119 terms (7 first-order terms, 28 second-order terms, and 84 third-order terms).
The first-order terms are easy to write out:
c('1')*x('1') + c('2')*x('2') ...
However, the second-order terms are more difficult to write:
c('8')*x('1')*x('1')+c('9')*x('1')*x('2') ... +c('14')*x('1')*x('7')+c('15')*x('2')*x('2') ... + c('35')*x('7')*x('7')
The third-order terms are just too many and too long:
c('36')*x('1')*x('1')*x('1') + c('37')*x('1')*x('1')*x('2') + ... + c('119')*x('7')*x('7')*x('7')
It is really challenging to write these equations out manually. I was wondering if there is a way to use the sum and conditional ($) operators to write out these 119 terms explicitly.
For instance,
Set
i /1 * 7/
num /1 * 119/
;
Variable
x(i)
func
;
Parameters
c(num)
;
Equation
Eq1;
Eq1.. func =e= c('1')*x('1') + c('2')*x('2') + {many terms in between} + c('8')*x('1')*x('1') + c('9')*x('1')*x('2') + {many terms in between} + c('36')*x('1')*x('1')*x('1') + c('37')*x('1')*x('1')*x('2') + {many terms in between} + c('119')*x('7')*x('7')*x('7');
Thanks!
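One common way to do this (a sketch, assuming the coefficients are re-indexed as c1(i), c2(i,j), c3(i,j,k) instead of the flat set num; these names are illustrative, not from the original post) is to use alias sets and ord() conditions so that each unordered term is generated exactly once:

Set i /1*7/;
Alias (i,j,k);
Parameters c1(i), c2(i,j), c3(i,j,k);
Variables x(i), func;
Equation Eq1;
Eq1.. func =e= sum(i, c1(i)*x(i))
             + sum((i,j)$(ord(i) <= ord(j)), c2(i,j)*x(i)*x(j))
             + sum((i,j,k)$(ord(i) <= ord(j) and ord(j) <= ord(k)), c3(i,j,k)*x(i)*x(j)*x(k));

The $ conditions keep only index tuples with non-decreasing positions, which yields exactly 7 + 28 + 84 = 119 terms.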

Related

What does O(nm/8 * log(nm/8)) + O(nm/9 * log(nm/9)) + ... + O(nm/m * log(nm/m)) equal to?

I'm sorry for the question title but I can't find a simpler way to put it. Basically, my algorithm involves quicksort for O(nm/k) elements, where k ranges from 8 to m. I wonder what the total complexity for this is, and how to deduce it? Thank you!
Drop the divisions inside the logarithms and we get nm*log(nm) * (1/8 + ... + 1/m) = O(nm*log(nm)*log(m)) = O(nm*log(m)^2 + nm*log(m)*log(n)). [I used the fact that the partial sums of the harmonic series grow like ln(m).]
Note that because we dropped the divisions inside the logarithms, we get an upper bound rather than a tight bound (but a better one than the naive approach of multiplying the biggest term by m).
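Written out, the estimate is (a sketch of the same bound; each log(nm/k) is bounded by log(nm)):

\[
\sum_{k=8}^{m} \frac{nm}{k}\log\frac{nm}{k}
\;\le\; nm\,\log(nm)\sum_{k=8}^{m}\frac{1}{k}
\;=\; O\bigl(nm\,\log(nm)\,\log m\bigr)
\;=\; O\bigl(nm\,\log(m)^2 + nm\,\log(m)\,\log(n)\bigr),
\]

using \(\sum_{k=1}^{m} 1/k = \Theta(\log m)\).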

Time complexity of Simpson's rule for simple integral calculus

I am looking for a reference and a proof for the time complexity of Simpson's rule for integral calculus.
I am not sure whether the complexity class of that rule is O(N).
Could you point me in the right direction?
Thanks
First of all, Simpson's Rule requires three inputs:
The function f(x); assume each evaluation takes O(1) time.
The bounds of integration, (a, b).
The number of subdivisions, n. The width of each "bar" is then d = (b - a) / n. Note that n must be an even positive integer.
Simpson's Rule states that

∫_a^b f(x) dx ≈ (d/3) [f(x_0) + 4f(x_1) + 2f(x_2) + 4f(x_3) + ... + 2f(x_{n-2}) + 4f(x_{n-1}) + f(x_n)]
             = (d/3) [f(x_0) + f(x_n) + Σ_{k=1}^{n/2} 4 f(x_{2k-1}) + Σ_{k=1}^{n/2 - 1} 2 f(x_{2k})]

where x_k = a + kd; note x_0 = a and x_n = a + nd = b.
The bracketed sum evaluates f once at every node x_0, ..., x_n, so it contains exactly n + 1 terms: the number of terms is linear in n.
Assuming each multiplication and each evaluation of f takes constant time, the summation shows that the time complexity of Simpson's rule is O(n): it runs in linear time.
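To make the count concrete, here is a short Python sketch (the function name simpson is mine, not from the answer); it performs exactly n + 1 evaluations of f, so it runs in O(n) time when f is O(1):

import math

def simpson(f, a, b, n):
    # Composite Simpson's rule with n subdivisions; n must be even.
    if n <= 0 or n % 2:
        raise ValueError("n must be an even positive integer")
    d = (b - a) / n
    total = f(a) + f(b)                    # the two endpoint terms
    for k in range(1, n):                  # the n - 1 interior nodes
        total += (4 if k % 2 else 2) * f(a + k * d)
    return d / 3 * total

print(simpson(math.sin, 0.0, math.pi, 100))  # ~2.0, using n + 1 = 101 evaluations of f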

Set partitioning

I'm trying to get a good grasp of this problem but I'm struggling.
Let's say that I have S = {1,2,3,4,5}, L = {(1,3,4), (2,3), (4,5), (1,3), (2), (5)}, and another tuple with the costs of the sets in L, C = {10, 20, 12, 15, 4, 10}.
I want to write a constraint program in Prolog that picks the solution with minimum cost (in this case, the total sum of the costs of the subsets I select).
My problem is that I don't understand how to set up the model. I know I should use binary {0,1} variables, but I can hardly see how to express this in Prolog.
There is an easy way to do it: You can use Boolean indicators to denote which elements comprise a subset. For example, in your case:
subsets(Sets) :-
        Sets = [[1,0,1,1,0]-10,  % {1,3,4}
                [0,1,1,0,0]-20,  % {2,3}
                [0,0,0,1,1]-12,  % {4,5}
                [1,0,1,0,0]-15,  % {1,3}
                [0,1,0,0,0]-4,   % {2}
                [0,0,0,0,1]-10]. % {5}
I now use SICStus Prolog and its Boolean constraint solver to express set covers:
:- use_module(library(lists)).
:- use_module(library(clpb)).
setcover(Cover, Cost) :-
        subsets(Sets),
        keys_and_values(Sets, Rows, Costs0),
        transpose(Rows, Cols),            % one column per element of S
        same_length(Rows, Coeffs),        % one Boolean per subset
        maplist(cover(Coeffs), Cols),
        labeling(Coeffs),
        phrase(coeff_is_1(Coeffs, Rows), Cover),
        phrase(coeff_is_1(Coeffs, Costs0), Costs),
        sumlist(Costs, Cost).

cover(Coeffs, Col) :-
        phrase(coeff_is_1(Col, Coeffs), Cs),
        sat(card([1], Cs)).               % exactly one chosen subset covers this element

coeff_is_1([], []) --> [].
coeff_is_1([1|Cs], [L|Ls]) --> [L], coeff_is_1(Cs, Ls).
coeff_is_1([0|Cs], [_|Ls]) --> coeff_is_1(Cs, Ls).
For each subset, a Boolean variable is used to denote whether that subset is part of the cover. Cardinality constraints make sure that each element is covered exactly once.
Example query and its result:
| ?- setcover(Cover, Cost).
Cover = [[0,0,0,1,1],[1,0,1,0,0],[0,1,0,0,0]],
Cost = 31 ? ;
Cover = [[1,0,1,1,0],[0,1,0,0,0],[0,0,0,0,1]],
Cost = 24 ? ;
no
I leave picking a cover with minimum cost as an easy exercise.
Maybe an explicit model for your problem instance makes things a bit clearer:
cover(SetsUsed, Cost) :-
        SetsUsed = [A,B,C,D,E,F],  % a Boolean for each set
        SetsUsed #:: 0..1,
        A + D #= 1,                % use one set with element 1
        B + E #= 1,                % use one set with element 2
        A + B + D #= 1,            % use one set with element 3
        A + C #= 1,                % use one set with element 4
        C + F #= 1,                % use one set with element 5
        Cost #= 10*A + 20*B + 12*C + 15*D + 4*E + 10*F.
You can solve this e.g. in ECLiPSe:
?- cover(SetsUsed,Cost), branch_and_bound:minimize(labeling(SetsUsed), Cost).
SetsUsed = [1, 0, 0, 0, 1, 1]
Cost = 24
Yes (0.00s cpu)
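For comparison, here is a brute-force sketch of the same 0/1 model in Python (not from the original answers; it simply enumerates all 2^6 assignments, which is fine at this size):

from itertools import product

S = [1, 2, 3, 4, 5]
L = [(1, 3, 4), (2, 3), (4, 5), (1, 3), (2,), (5,)]
C = [10, 20, 12, 15, 4, 10]

best = None
for used in product((0, 1), repeat=len(L)):  # one 0/1 variable per subset
    # require every element of S to be covered exactly once
    if all(sum(u for u, sub in zip(used, L) if e in sub) == 1 for e in S):
        cost = sum(u * c for u, c in zip(used, C))
        if best is None or cost < best[0]:
            best = (cost, used)

print(best)  # (24, (1, 0, 0, 0, 1, 1)): pick {1,3,4}, {2} and {5}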

What is the definition of a truncated polynomial?

In NTRU encryption, I have seen truncated polynomials, but I cannot understand the truncated polynomial calculation.
So, could anyone tell me how we calculate with truncated polynomials?
The polynomials are truncated in the sense that they only have coefficients up to a certain degree.
Here is how you truncate the product of two truncated polynomials (the sum is trivial):
Assume you have two truncated polynomials, i.e. two polynomials of degree no greater than n-1
a = a[0] + a[1]X + ... + a[n-1]X^(n-1)
b = b[0] + b[1]X + ... + b[n-1]X^(n-1)
Then their "truncated" product is defined as the polynomial
a * b = c[0] + c[1]X + ... +c[n-1]X^(n-1)
where the c[k] coefficients are computed as follows:
Reverse b[0]..b[n-1] to get b[n-1]..b[0].
Rotate the result of step 1 k+1 positions to the right to get b[k]..b[0]b[n-1]..b[k+1].
Denote by b_k[0]..b_k[n-1] the array calculated in step 2.
Now define
c[k] = a[0]b_k[0] + a[1]b_k[1] + ... + a[n-1]b_k[n-1].
This operation can also be performed by multiplying the polynomials a and b in the usual way and then reducing the result modulo X^n - 1, i.e. replacing each X^(n+j) by X^j and adding the coefficients that land on the same power. (Note that c[k] above is a cyclic convolution: high-order terms wrap around rather than being discarded.) The reason for the algorithm above is to avoid computing coefficients that would only be folded back in afterwards.
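A minimal Python sketch of this multiplication, assuming the wrap-around (mod X^n - 1) interpretation described above:

def truncated_mul(a, b):
    # a[i] and b[i] are the coefficients of X^i; both have length n.
    n = len(a)
    c = [0] * n
    for i in range(n):
        for j in range(n):
            c[(i + j) % n] += a[i] * b[j]  # X^(i+j) wraps to X^((i+j) mod n)
    return c

# (1 + X) * (1 + X^2) = 1 + X + X^2 + X^3; with n = 3, X^3 wraps to 1:
print(truncated_mul([1, 1, 0], [1, 0, 1]))  # [2, 1, 1], i.e. 2 + X + X^2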

How do I find the number of times x=x+1 is executed in terms of N?

I'm also having trouble finding Omega() and Theta() as appropriate.
x = 0;
for k = 1 to n
    for j = 1 to n-k
        x = x + 1;
The inner loop body runs n-1 + n-2 + n-3 + ... + 1 + 0 times in total. Use this tutorial on calculating the sum of an arithmetic series to find the solution. The outer loop obviously runs just n times.
This will be the big-Theta. The big-O will be the same as big-Theta once you drop everything but the dominant term and remove the constant multiplier, e.g. Theta(2*log(n) + 5) becomes O(log(n)). Omega is the same as big-O in this case, because the best case and worst case are identical; or you can cheat and say the algorithm is Omega(1), since constant time is a trivial lower bound for every function.
First, look at your boundaries: k=1 and k=n.
For k=1, the inner loop is executed (n-1) times.
For k=n, the inner loop is executed 0 times.
So 0 + 1 + ... + (n-1) is an arithmetic sum => it executes n(n-1)/2 times.
Now, test it on a few small values :)
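For example, a quick Python check of the closed form (a sketch, not part of the original answer):

def count(n):
    # Count how many times the innermost statement runs.
    x = 0
    for k in range(1, n + 1):
        for j in range(1, n - k + 1):
            x += 1
    return x

for n in (1, 2, 5, 10):
    print(n, count(n), n * (n - 1) // 2)  # the last two columns always agree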
The answer is like this:
n-1 + n-2 + n-3 + ... + 0 = n*n - (1+2+3+ ... + n) = n^2 - n(n+1)/2 = n(n-1)/2