Unsupported type for LinExpr addition argument - gurobi

I'm new to Gurobi and I'm trying to solve a simple model, but I'm getting the following error: Unsupported type (<class 'gurobipy.TempConstr'>) for LinExpr addition argument.
The model is the following:
from gurobipy import *

precios = {1: {1: 5, 2: 5, 3: 5},
           2: {1: 2, 2: 2.2, 3: 2.1},
           3: {1: 3, 2: 3.3, 3: 3.1},
           4: {1: 4, 2: 4.4, 3: 4.1}}
vitaminas = {1: {1: 0.1, 2: 0.1, 3: 0.1},
             2: {1: 0.05, 2: 0.05, 3: 0.05},
             3: {1: 0.07, 2: 0.07, 3: 0.07},
             4: {1: 0.09, 2: 0.09, 3: 0.09}}
carbohidratos = {1: {1: 0.1, 2: 0.1, 3: 0.1},
                 2: {1: 0.07, 2: 0.07, 3: 0.07},
                 3: {1: 0.08, 2: 0.08, 3: 0.08},
                 4: {1: 0.09, 2: 0.09, 3: 0.09}}
hmax = {1: 24, 2: 27.5, 3: 30}
vmin = {1: 22, 2: 22, 3: 22}
bdown = {1: 0.02, 3: 0.2}
bup = {1: 0.04, 3: 0.3}
componentes = [1, 2, 3, 4]
comp_control = [1, 3]
paises = [1, 2, 3]
tav = 22
demanda = 1000
model = Model("Produccion")
x = model.addVars(componentes, paises, vtype=GRB.CONTINUOUS, name="x")
model.addConstrs((quicksum(carbohidratos[i][j]*x[i,j] <= hmax[j] for i in componentes) for j in paises), name='Maximo de carbohidratos')
model.addConstrs((quicksum(vitaminas[i][j]*x[i,j] >= vmin[j] for i in componentes) for j in paises), name='Minimo de vitaminas')
model.addConstrs(((bdown[i]*quicksum(x[i,j] for i in componentes) - x[i,j] <= 0 for i in comp_control) for j in paises), name='Limite inferior por componentes')
model.addConstrs(((x[i,j] - bup[j]*quicksum(x[i,j] for i in componentes) <= 0 for i in comp_control) for j in paises), name='Limite inferior por componentes')
model.addConstr((quicksum(x[1,j]) <= tav), name='Maximo de formula disponible')
model.addConstr((quicksum(x[i,j] for i in componentes for j in paises) == demanda), name='Demanda total')
model.addConstrs((x[i,j] >= 0 for i in componentes for j in paises), name='Producciones no negativas')
obj = quicksum(quicksum(precios[i][j] * x[i, j] for i in componentes) for j in paises)
model.setObjective(obj, GRB.MINIMIZE)
model.optimize()
I think that the complicating constraints are the 3rd and 4th ones.
Thanks in advance.
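This error typically means a comparison ended up inside quicksum: in the first two constraints the <= and >= sit inside the generator, so quicksum receives TempConstr objects instead of linear terms. A minimal sketch of those two constraints with the comparison moved outside the sum (constraints 3 and 4 additionally wrap one generator inside another, and quicksum(x[1,j]) presumably needs a generator such as quicksum(x[1,j] for j in paises)):
model.addConstrs((quicksum(carbohidratos[i][j]*x[i,j] for i in componentes) <= hmax[j] for j in paises), name='Maximo de carbohidratos')
model.addConstrs((quicksum(vitaminas[i][j]*x[i,j] for i in componentes) >= vmin[j] for j in paises), name='Minimo de vitaminas')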

Related

Nested dictionaries getting key error/ defaultdict problem in python with Gurobi

It is my first time implementing an optimization model in Python with Gurobi, and I ran into issues building up a decision variable.
At first I tried the following method with a defaultdict:
from gurobipy import *
from collections import defaultdict

def make_dict():
    return defaultdict(make_dict)

decvary = defaultdict(make_dict)
for k in K:
    for d in D:
        for i in V_L:
            for w in V_D:
                if (w != i):
                    for j in V:
                        if (w != j) and (i != j):
                            decvary[k][d][i][w][j] = m.addVar(lb=0, ub=1, obj=0, vtype=GRB.BINARY, name="y.%d.%d.%d.%d.%d" % (k, d, i, w, j))
But later, when I try to add constraints in the optimization model, the variable decvary[k][d][i][w][j] is of type <class 'collections.defaultdict'>, when it should actually be a binary (0 or 1) variable.
So then I tried the old silly way to build the nested dictionary:
for k in K:
    decvary[k] = {}
    for d in D:
        decvary[k][d] = {}
        for i in V_L:
            decvary[k][d][i] = {}
            for w in V_D:
                if (w != i):
                    decvary[k][d][i][w] = {}
                    for j in V:
                        if (w != j) and (i != j):
                            decvary[k][d][i][w][j] = m.addVar(lb=0, ub=1, obj=0, vtype=GRB.BINARY, name="y.%d.%d.%d.%d.%d" % (k, d, i, w, j))
But this time I am getting a KeyError when adding constraints, and the KeyError always happens at the last key, [j].
Does anyone have any idea of what's going on? Many thanks!
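For what it's worth, the first symptom matches how defaultdict behaves: merely reading a missing key inserts and returns a new empty defaultdict, so looking up an index combination that was skipped during creation (e.g. one with w == i) silently yields a defaultdict instead of raising a KeyError. A tiny demonstration:
from collections import defaultdict

def make_dict():
    return defaultdict(make_dict)

d = defaultdict(make_dict)
print(type(d[1][2][3]))  # <class 'collections.defaultdict'>: created on read, no error raised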
Gurobi's Python API has a built-in method to create a dictionary very easily: the model's addVars method. E.g. you could do
decvary = m.addVars(K, D, V_L, V_D, V, ub=1, vtype=GRB.BINARY, name="y")
or (to also respect your exceptions)
decvary = m.addVars(((k, d, i, w, j) for k in K for d in D for i in V_L for w in V_D for j in V if w!=i and w!=j and i!=j), ub=1, vtype=GRB.BINARY, name="y")
to create that dictionary.
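The object addVars returns is a tupledict, so constraints can later be built with its sum and select helpers instead of nested loops; a hypothetical sketch (the constraint itself is made up for illustration, assuming m and the index sets from the question):
m.addConstrs((decvary.sum(k, d, i, '*', '*') <= 1 for k in K for d in D for i in V_L), name="at_most_once")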

LoadError using approximate bayesian criteria

I am getting an error that is confusing me.
using DifferentialEquations
using RecursiveArrayTools # for VectorOfArray
using DiffEqBayes

f2 = @ode_def_nohes LotkaVolterraTest begin
    dx = x*(1 - x - A*y)
    dy = rho*y*(1 - B*x - y)
end A B rho

u0 = [1.0; 1.0]
tspan = (0.0, 10.0)
p = [0.2, 0.5, 0.3]
prob = ODEProblem(f2, u0, tspan, p)
sol = solve(prob, Tsit5())
t = collect(linspace(0, 10, 200))
randomized = VectorOfArray([(sol(t[i]) + .01randn(2)) for i in 1:length(t)])
data = convert(Array, randomized)
priors = [Uniform(0.0, 2.0), Uniform(0.0, 2.0), Uniform(0.0, 2.0)]
bayesian_result_abc = abc_inference(prob, Tsit5(), t, data, priors; num_samples=500)
Returns the error
ERROR: LoadError: DimensionMismatch("first array has length 400 which does not match the length of the second, 398.")
while loading..., in expression starting on line 20.
I have not been able to locate any array of size 400 or 398.
Thanks for your help.
Take a look at https://github.com/JuliaDiffEq/DiffEqBayes.jl/issues/52; that was due to an error in passing the t. It has been fixed on master, so you can use that, or wait a little while: we will have a new release soon with the 1.0 upgrades, which will include this fix too.
Thanks!

Efficient implementation of factorization machine with matrix operations?

Link is here : https://www.csie.ntu.edu.tw/~r01922136/slides/ffm.pdf (slides 5-6)
Given the following matrices:
X : n * d
W : d * k
Is there an efficient way to calculate the n x 1 result using only matrix operations (e.g. numpy, tensorflow), where the jth element is the pairwise-interaction term sum_{a,b} x[j,a] * x[j,b] * (W Wᵀ)[a,b]?
EDIT:
My current attempt is below, but obviously it's not very space efficient, as it requires storing intermediate arrays of size n*d*d:
n = 1000
d = 256
k = 32
x = np.random.normal(size=[n,d])
w = np.random.normal(size=[d,k])
xxt = np.matmul(x.reshape([n,d,1]),x.reshape([n,1,d]))
wwt = np.matmul(w.reshape([1,d,k]),w.reshape([1,k,d]))
output = xxt*wwt
output = np.sum(output,(1,2))
Avoid large temporary arrays
Not all algorithms are that easy or obvious to vectorize. The np.sum(xxt*wwt) step can be rewritten using np.einsum, which avoids the n*d*d temporary. This should be faster than your solution, but einsum has some limitations of its own (e.g. no multithreading).
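A one-line sketch of that einsum rewrite (it reuses the same wwt construction as the Numba code below, so the result matches the question's computation):
wwt = np.dot(w.reshape((d, k)), w.reshape((k, d)))
output = np.einsum('ij,il,jl->i', x, x, wwt)  # output[i] = sum over j,l of x[i,j]*x[i,l]*wwt[j,l]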
I would therefore suggest using a compiler like Numba.
Example
import numpy as np
import numba as nb
import time

@nb.njit(fastmath=True, parallel=True)
def factorization_nb(w, x):
    n = x.shape[0]
    d = x.shape[1]
    k = w.shape[1]
    output = np.empty(n, dtype=w.dtype)
    wwt = np.dot(w.reshape((d, k)), w.reshape((k, d)))
    for i in nb.prange(n):
        sum = 0.
        for j in range(d):
            for jj in range(d):
                sum += x[i, j]*x[i, jj]*wwt[j, jj]
        output[i] = sum
    return output
def factorization_orig(w, x):
    n = x.shape[0]
    d = x.shape[1]
    k = w.shape[1]
    xxt = np.matmul(x.reshape([n, d, 1]), x.reshape([n, 1, d]))
    wwt = np.matmul(w.reshape([1, d, k]), w.reshape([1, k, d]))
    output = xxt*wwt
    output = np.sum(output, (1, 2))
    return output
Measuring Performance
n = 1000
d = 256
k = 32
x = np.random.normal(size=[n, d])
w = np.random.normal(size=[d, k])

# first call has some compilation overhead
res_1 = factorization_nb(w, x)

t1 = time.time()
for i in range(100):
    res_1 = factorization_nb(w, x)
    #res_2 = factorization_orig(w, x)
print(time.time() - t1)
Timings
factorization_nb: 4.2 ms per iteration
factorization_orig: 460 ms per iteration (110x speedup)
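As a quick sanity check that the two versions agree (using the arrays defined above):
print(np.allclose(factorization_nb(w, x), factorization_orig(w, x)))  # expected: True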
For an einsum implementation in PyTorch, it would be something like
V = torch.randn([50, 10])
x = torch.randn([50])
result = (torch.einsum('ik,jk,i,j->', V, V, x, x)-torch.einsum('ik,ik,i,i->', V, V, x, x))/2
where we subtract the contribution from the feature weight being dotted with itself.

cardinality constraint in portfolio optimisation

I am using cvxpy to work on a simple portfolio optimisation problem. The only constraint I can't get my head around is the cardinality constraint on the number of non-zero portfolio holdings. I tried two approaches: a MIP approach and a traditional convex one.
Here is some dummy code for a working traditional example.
import numpy as np
import cvxpy as cvx

np.random.seed(12345)
n = 10
k = 6
mu = np.abs(np.random.randn(n, 1))
Sigma = np.random.randn(n, n)
Sigma = Sigma.T.dot(Sigma)
w = cvx.Variable(n)
ret = mu.T*w
risk = cvx.quad_form(w, Sigma)
objective = cvx.Maximize(ret - risk)
constraints = [cvx.sum_entries(w) == 1, w >= 0, cvx.sum_smallest(w, n-k) >= 0, cvx.sum_largest(w, k) <= 1]
prob = cvx.Problem(objective, constraints)
prob.solve()
print prob.status
output = []
for i in range(len(w.value)):
    output.append(round(w[i].value, 2))
print 'Number of non-zero elements : ', sum(1 for i in output if i > 0)
I had the idea to use sum_smallest and sum_largest (see the cvxpy manual): my thought was to constrain the smallest n-k entries to 0 and let my target range of k entries sum up to one. I know I can't change the direction of the inequality and stay convex, but maybe someone knows a clever way of constraining the problem while still keeping it simple.
My second idea was to make this a mixed-integer problem, something along the lines of
import numpy as np
import cvxpy as cvx

np.random.seed(12345)
n = 10
k = 6
mu = np.abs(np.random.randn(n, 1))
Sigma = np.random.randn(n, n)
Sigma = Sigma.T.dot(Sigma)
w = cvx.Variable(n)
binary = cvx.Bool(n)
integer = cvx.Int(n)
ret = mu.T*w
risk = cvx.quad_form(w, Sigma)
objective = cvx.Maximize(ret - risk)
constraints = [cvx.sum_entries(w) == 1, w >= 0, cvx.sum_entries(binary) == k]
prob = cvx.Problem(objective, constraints)
prob.solve()
print prob.status
output = []
for i in range(len(w.value)):
    output.append(round(w[i].value, 2))
print sum(1 for i in output if i > 0)
for i in range(len(w.value)):
    print round(binary[i].value, 2)
print output
Looking at my binary vector, it seems to be doing the right thing, but the sum_entries constraint doesn't work. Looking into the binary vector values, I noticed that 0 isn't exactly 0; it's very small, e.g. around 1e-20, and I assume this will mess things up. Can anyone give me guidance on whether this is the right way to go? I can use the standard solvers, as well as Mosek if that helps. I would prefer a non-MIP implementation, as I understand this is a combinatorial problem and will get very slow for larger problems. Ultimately I would like to constrain either an exact number of target holdings or a range, e.g. 20-30.
Also, the cvxpy documentation around MIP is very short. Thanks.
A bit chaotic, this question.
So first: this kind of cardinality constraint is NP-hard. This means you can't express it in cvxpy without using integer programming (or else it would imply P=NP)!
That being said, it would have been nicer if there were a pure version of the code without the attempt at this constraint. I'll just assume it's the first code block without the sum_smallest and sum_largest constraints.
So let's tackle the MIP approach:
Your code trying to do this makes no sense at all:
You introduce some binary vars, but they have no connection to any other variable at all (so a constraint on their sum is useless)!
You introduce some integer vars, but they have no use at all!
So here is a MIP approach:
import numpy as np
import cvxpy as cvx

np.random.seed(12345)
n = 10
k = 6
mu = np.abs(np.random.randn(n, 1))
Sigma = np.random.randn(n, n)
Sigma = Sigma.T.dot(Sigma)
w = cvx.Variable(n)
ret = mu.T*w
risk = cvx.quad_form(w, Sigma)
objective = cvx.Maximize(ret - risk)
binary = cvx.Bool(n)  # !!!
constraints = [cvx.sum_entries(w) == 1, w >= 0, w - binary <= 0., cvx.sum_entries(binary) == k]  # !!!
prob = cvx.Problem(objective, constraints)
prob.solve(verbose=True)
print(prob.status)
output = []
for i in range(len(w.value)):
    output.append(round(w[i].value, 2))
print('Number of non-zero elements : ', sum(1 for i in output if i > 0))
So we just added some binary variables and connected them to w to indicate whether w is nonzero or not.
If w is nonzero:
w will be > 0 because of constraint w >= 0
binary needs to be 1, or else constraint w - binary <= 0. is not fulfilled
So it's just these binaries plus one indicator-style constraint.
Now cvx.sum_entries(binary) == k does what it should do.
Be careful with the implication direction we used here. It might be relevant when changing the constraint on k (like <=).
Keep in mind that the default MIP solver is awful. I also fear that Mosek's interface (sub-optimal within cvxpy) won't solve this, but I might be wrong.
Edit: Your in-range requirement can easily be formulated using two more indicators:
(k >= a) <= ind_0
(k <= b) <= ind_1
and adding a constraint which equals a logical_and:
ind_0 + ind_1 >= 2
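A minimal related sketch (n, a, b and eps are assumptions here, not from the answers above): if every counted position must have at least a minimum size eps, the binaries can be linked from both sides, which makes a two-sided count constraint straightforward in current cvxpy syntax:
import numpy as np
import cvxpy as cvx

n, a, b = 10, 3, 6                  # assumed: between a and b nonzero holdings
eps = 1e-3                          # assumed minimum holding size for a position to count
w = cvx.Variable(n)
ind = cvx.Variable(n, boolean=True)
constraints = [cvx.sum(w) == 1, w >= 0,
               w <= ind,            # w[i] > 0 forces ind[i] = 1
               w >= eps * ind,      # ind[i] = 1 forces w[i] >= eps
               cvx.sum(ind) >= a, cvx.sum(ind) <= b]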
I've had a similar problem where my weights could be negative and did not need to sum to 1 (but still needed to be bounded), so I modified sascha's example to accommodate relaxing these constraints using the cvxpy absolute value function. This should allow a more general approach to tackling cardinality constraints with MIP.
import numpy as np
import cvxpy as cvx

np.random.seed(12345)
n = 10
k = 6
mu = np.abs(np.random.randn(n, 1))
Sigma = np.random.randn(n, n)
Sigma = Sigma.T.dot(Sigma)
w = cvx.Variable(n)
ret = mu.T*w
risk = cvx.quad_form(w, Sigma)
objective = cvx.Maximize(ret - risk)
binary = cvx.Variable(n, boolean=True)  # !!!
maxabsw = 2
constraints = [w >= -maxabsw, w <= maxabsw, cvx.abs(w)/maxabsw - binary <= 0., cvx.sum(binary) == k]  # !!!
prob = cvx.Problem(objective, constraints)
prob.solve(verbose=True)
print(prob.status)
output = []
for i in range(len(w.value)):
    output.append(round(w[i].value, 2))
print('Number of non-zero elements : ', sum(1 for i in output if i > 0))
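Given the near-zero solver values mentioned in the question (e.g. around 1e-20 instead of an exact 0), counting holdings with a tolerance is safer than the i > 0 test above; a small sketch:
w_val = np.asarray(w.value).flatten()
num_holdings = int(np.sum(np.abs(w_val) > 1e-6))  # the tolerance absorbs solver noise
print('Number of non-zero elements :', num_holdings)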

Any way to have a loop to build a variable array in tensorflow?

I'm new to TensorFlow. What if I want to do something like
x_pl=tf.placeholder([None,n])
y_pl=tf.placeholder([None,m])
b_0=tf.Variable(tf.zeros(n))
k=tf.Variable([n,n])
b_1=tf.matmul(b_0,k)
b_2=tf.matmul(b_1,k)
...
b_m=tf.matmul(b_(m-1),k)
y_prd=tf.matmul(x_pl,[b_0,...b_m])
loss=tf.reduce_mean(tf.square(y_prd-y_pl))
What's the best way to do this?
It seems to me that I need a loop that can generate an array of variables before the session initializes all the variables.
Any help will be highly appreciated.
Just use a regular python loop:
x_pl = tf.placeholder(tf.float32, [None, n])   # tf.placeholder takes a dtype as its first argument
y_pl = tf.placeholder(tf.float32, [None, m])
b_0 = tf.Variable(tf.zeros([1, n]))            # a 1 x n row vector so tf.matmul is defined
k = tf.Variable(tf.zeros([n, n]))
b_list = [b_0]
for i in xrange(1, m):                         # b_1 .. b_{m-1}: m vectors in total, matching y_pl
    b_list.append(tf.matmul(b_list[i-1], k))
B = tf.concat(b_list, axis=0)                  # stack the rows into an m x n matrix
y_prd = tf.matmul(x_pl, B, transpose_b=True)   # shape [None, m]
loss = tf.reduce_mean(tf.square(y_prd - y_pl))
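A sketch of actually running it, assuming TF1-style sessions (x_batch and y_batch are hypothetical numpy arrays of shapes [batch, n] and [batch, m]):
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())  # initialize b_0 and k once the graph is built
    loss_val = sess.run(loss, feed_dict={x_pl: x_batch, y_pl: y_batch})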