How to extract variable values that equal a certain value (pyomo)?

I am building a routing optimization model using pyomo on python.
I have solved my model, but now I am trying to extract the decision variable information from it. My variables are binary, and the values I am looking for are the values of my model.z decision variable that equal 1.
When I write instance.pprint() I can see all the variable values. I therefore want to code something that gives me only the decision variables that are equal to 1, such as z(1,4).
Sample of my code is shown below:
model.I = RangeSet(5)
model.J = RangeSet(5)
model.z = Var(model.I, model.J, domain = Binary)
def constraint(model, i):
    return sum(model.z[i,j] - model.z[j,i] for j in model.J if i != j) == 0
model.constraint = Constraint(model.I, rule=constraint)
print()
z_values = pd.Series(model.z[i,j].extract_values(), name = model.z.name)
print(z_values)
I have tried the above code, but as some of my values are 0 (because those arcs have not been visited), I get the following error message:
ValueError: Error retrieving component z[5,4]: The component has not been constructed.
Ideally the output should be something like this:
(0,3) -- 1
(1,2) -- 1
(2,4) -- 1
(3,1) -- 1
(4,5) -- 1
(5,0) -- 1
Any ideas?

This should work (and answer your other derivative question)
# value extract
import pyomo.environ as pyo
nodes = [1,2,3,4,5,6]
model = pyo.ConcreteModel()
model.N = pyo.Set(initialize=nodes)
model.Z = pyo.Var(model.N, model.N, domain=pyo.Binary, initialize=0) # only initializing here for demo...
# blah blah constraints & solve
# stuff in some fake results...
model.Z[1, 2] = 1
model.Z[2, 6] = 1
model.Z[3, 5] = 1
model.Z[6, 3] = 1
# model.display()
# make a dictionary of the route ...
# recall that binary "1" variables evaluate as True
route = {start: stop for (start, stop) in model.Z.index_set() if pyo.value(model.Z[start, stop])}
# print(route)
start_node = 1
print(f'from {start_node} ', end='')
while start_node in route.keys():
    end_node = route.get(start_node)
    print(f'-> {end_node} ', end='')
    start_node = end_node
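If you would rather keep the pandas Series from your original attempt, a minimal sketch (assuming pandas is installed): call extract_values() on the Var component itself, not on a single index like model.z[i,j], and then filter the Series:
import pandas as pd
# Series over every (i, j) index; extract_values() belongs to the component
z_values = pd.Series(model.Z.extract_values(), name=model.Z.name)
print(z_values[z_values == 1])   # only the arcs that are switched "on"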

Related

GEKKO - MINLP in Matrix Form - Errors using m.axb()

I am trying to solve a MINLP problem using GEKKO. My code is the following:
m = GEKKO(remote = True)
m.options.SOLVER = 3
m.solver_options = ['minlp_maximum_iterations 500', \
                    # minlp iterations with integer solution
                    'minlp_max_iter_with_int_sol 10', \
                    # treat minlp as nlp
                    'minlp_as_nlp 0', \
                    # nlp sub-problem max iterations
                    'nlp_maximum_iterations 50', \
                    # 1 = depth first, 2 = breadth first
                    'minlp_branch_method 1', \
                    # maximum deviation from whole number
                    'minlp_integer_tol 0.05', \
                    # convergence tolerance
                    'minlp_gap_tol 0.01']
# Array Variable
rows = nb_phases + 3*b_max*(nb_phases+1)  # 48
columns = 1
x = np.empty((rows,columns),dtype=object)
for i in range(3*nb_phases*b_max+nb_phases+1):
    for j in range(columns):
        x[i,j] = m.Var(value = xinit[i,j], lb = LB[i,j], ub = UB[i,j], integer = False)
for i in range(3*nb_phases*b_max+nb_phases+1, (3*nb_phases+3)*b_max+nb_phases):
    for j in range(columns):
        x[i,j] = m.Var(value = xinit[i,j], lb = LB[i,j], ub = UB[i,j], integer = True)
# Constraints
#m.axb(A = A,b = B, x = x, etype = '<=', sparse = False)
m.axb(A,B, etype = '<=',sparse=False)
#m.axb(A = A_eq,b = B_eq, x = x, etype = '=', sparse = False)
m.axb(A_eq,B_eq, etype = '=',sparse=False)
for i in range(rows):
    for j in range(columns):
        m.Minimize((x[i,j]-i*j)**2)
#Solver
m.solve(disp = True)
When calling the axb function, if I declare the variable x in the arguments as the following:
m.axb(A = A,b = B, x = x, etype = '<=', sparse = False)
I get the error: List x must be composed of GEKKO parameters or variables. I don't really understand why I get this error, since x is a GEKKO variable.
If I don't declare the variable x in the arguments of the axb function:
m.axb(A,B, etype = '<=',sparse=False)
I get the following error: AXB Missing Configuration File, Error: AXB object missing: axb1.txt, Example config file: axb1.txt
I was thinking maybe the issue is that x is not defined as an array. Therefore, considering x[i,j], I tried to write the equation Ax <= b explicitly, coding the matrix product A.x in a loop to avoid calling m.axb, but I am not sure how to declare the equations afterwards. My code is the following:
Ax = []
for i in range(rows):
    temp = []
    for j in range(columns):
        temp.append(A[i,j]*x[j,0])
    Ax.append(sum(temp))
for i in range(rows):
    m.Equations(Ax[i] <= B[i])
I get the error: 'int' object is not subscriptable
Is anyone able to help me figure out how to solve this problem?
Is there a way of defining x as an array? (Since some of its elements are integers and some aren't)
Thanks a lot!
Here is a solution that works with the newer version of GEKKO that is not yet released but is available on GitHub. You'll need to put the newest version of gekko.py (v1.0) in the Lib/site-packages/gekko folder and the local executable (apm.exe for Windows, apm_mac for MacOS, apm for Linux) in the Lib/site-packages/gekko/bin folder to use remote=False.
from gekko import GEKKO
import numpy as np
m = GEKKO(remote = False)
m.options.SOLVER = 3
nb_phases = 2
b_max = 3
m.solver_options = ['minlp_maximum_iterations 500', \
                    # minlp iterations with integer solution
                    'minlp_max_iter_with_int_sol 10', \
                    # treat minlp as nlp
                    'minlp_as_nlp 0', \
                    # nlp sub-problem max iterations
                    'nlp_maximum_iterations 50', \
                    # 1 = depth first, 2 = breadth first
                    'minlp_branch_method 1', \
                    # maximum deviation from whole number
                    'minlp_integer_tol 0.05', \
                    # convergence tolerance
                    'minlp_gap_tol 0.01']
# Array Variable
rows = nb_phases + 3*b_max*(nb_phases+1)  # 48
columns = 1
xinit = np.ones(rows)
LB = np.zeros(rows)
UB = np.ones(rows)*10.0
#x = m.Array(m.Var,(rows))
x = np.empty(rows,dtype=object)
for i in range(3*nb_phases*b_max+nb_phases+1):
    x[i] = m.Var(value = xinit[i], lb = LB[i], ub = UB[i], integer = False)
for i in range(3*nb_phases*b_max+nb_phases+1, (3*nb_phases+3)*b_max+nb_phases):
    x[i] = m.Var(value = xinit[i], lb = LB[i], ub = UB[i], integer = True)
# Constraints
#m.axb(A = A,b = B, x = x, etype = '<=', sparse = False)
A = np.ones((1,rows)); B = np.zeros(1)
m.axb(A,B,x,etype = '<=',sparse=False)
#m.axb(A = A_eq,b = B_eq, x = x, etype = '=', sparse = False)
m.axb(A,B,x,etype = '=',sparse=False)
for i in range(rows):
    m.Minimize((x[i]-i)**2)
#Solver
m.options.SOLVER = 1
m.solve(disp = True)
This produces the solution:
----------------------------------------------------------------
APMonitor, Version 1.0.0
APMonitor Optimization Suite
----------------------------------------------------------------
--------- APM Model Size ------------
Each time step contains
Objects : 2
Constants : 0
Variables : 29
Intermediates: 0
Connections : 58
Equations : 29
Residuals : 29
Number of state variables: 29
Number of total equations: - 2
Number of slack variables: - 0
---------------------------------------
Degrees of freedom : 27
----------------------------------------------
Steady State Optimization with APOPT Solver
----------------------------------------------
Iter: 1 I: 0 Tm: -0.00 NLPi: 2 Dpth: 0 Lvs: 0 Obj: 7.71E+03 Gap: 0.00E+00
Successful solution
---------------------------------------------------
Solver : APOPT (v1.0)
Solution time : 0.019000000000000003 sec
Objective : 7714.
Successful solution
---------------------------------------------------
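On the remaining question of defining x as a single array when some elements are integer and some are not: a sketch of one possible approach (not from the answer above, and assuming an m.Array that forwards keyword arguments such as integer=True to m.Var, as v1.0 does) is to build two homogeneous blocks and concatenate them with NumPy:
from gekko import GEKKO
import numpy as np
m = GEKKO(remote=False)
n_cont, n_int = 10, 5   # illustrative sizes, not the ones from the question
# one homogeneous block per variable type, then a single mixed object array
xc = m.Array(m.Var, n_cont, lb=0, ub=10)               # continuous block
xi = m.Array(m.Var, n_int, lb=0, ub=10, integer=True)  # integer block
x = np.concatenate([xc, xi])   # usable like the hand-built loop version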

Numpy: Construct Slice A La Carte

Suppose I have the following:
# in pseudo code
# function input 1
chord = [0,1,17,35,47,0]
dims = [0,1,2,4,5,6]
x_axis = 3
t_axis = 7
# what I'd like to return
np.squeeze(arr[0,1,17,:,35,47,0,:])
# function input 2
chord = [0,3,4,5,6,7]
dims = [0,2,3,4,5,6]
x_axis = 1
t_axis = 7
# desired return
np.squeeze(arr[0,:,3,4,5,6,7,:])
How do I construct these numpy slices, given inputs where I can arbitrarily specify a pair of axes and a chord coordinate?
I implemented a reflection-based solution:
def reflection_window(arr: np.ndarray, chord: list, dim0, dim1):
    var = "arr"
    bra = "["
    ket = "]"
    coord = [str(int(i)) for i in chord]
    coord.insert(dim0, ':')
    coord.insert(dim1, ':')
    chordstr = ','.join(coord)
    slicer = var + bra + chordstr + ket
    return eval(slicer)
Staying native to numpy is probably better, but since Python doubles as a scripting language, it can make sense to treat it that way when necessary.
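For completeness, a native-numpy sketch that avoids eval entirely (assuming, as in the examples above, that dim0 < dim1 give the positions of the free axes in the final index tuple): build the index as a tuple containing slice(None) objects where ':' would go.
import numpy as np
def window(arr: np.ndarray, chord: list, dim0: int, dim1: int):
    # fixed coordinates everywhere, slice(None) (i.e. ':') at the free axes
    idx = [int(c) for c in chord]
    idx.insert(dim0, slice(None))
    idx.insert(dim1, slice(None))
    return np.squeeze(arr[tuple(idx)])
# window(arr, [0,1,17,35,47,0], 3, 7) is np.squeeze(arr[0,1,17,:,35,47,0,:])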

How to optimize the linear coefficients for numpy arrays in a maximization function?

I have to optimize the coefficients for three numpy arrays so that they maximize my evaluation function.
I have a target array called train['target'] and three prediction arrays named array1, array2 and array3.
I want to find the best linear coefficients, i.e. x, y, z, for these three arrays, which will maximize the function
roc_auc_score(train['target'], x*array1 + y*array2 + z*array3)
The above function is at its maximum when the prediction is close to the target,
i.e. x*array1 + y*array2 + z*array3 should be close to train['target'].
The range of x, y, z is >= 0 and <= 1.
Basically, I am trying to find the weights x, y, z for each of the three arrays which make
x*array1 + y*array2 + z*array3 as close as possible to train['target'].
Any help in getting this would be appreciated.
I used pulp.LpProblem('Giapetto', pulp.LpMaximize) to do the maximization. It works for ordinary numbers, integers, etc., but fails when I try it with arrays.
import numpy as np
import pulp
# create the LP object, set up as a maximization problem
prob = pulp.LpProblem('Giapetto', pulp.LpMaximize)
# set up decision variables
x = pulp.LpVariable('x', lowBound=0)
y = pulp.LpVariable('y', lowBound=0)
z = pulp.LpVariable('z', lowBound=0)
score = roc_auc_score(train['target'],x*array1+ y*array2 + z*array3)
prob += score
coef = x+y+z
prob += (coef==1)
# solve the LP using the default solver
optimization_result = prob.solve()
# make sure we got an optimal solution
assert optimization_result == pulp.LpStatusOptimal
# display the results
for var in (x, y, z):
    print('Optimal weekly number of {} to produce: {:1.0f}'.format(var.name, var.value()))
Getting error at the line
score = roc_auc_score(train['target'],x*array1+ y*array2 + z*array3)
TypeError: unsupported operand type(s) for /: 'int' and 'LpVariable'
Can't progress beyond this line when using arrays. Not sure if my approach is correct. Any help in optimizing the function would be appreciated.
When you add sums of array elements to a PuLP model, you have to use built-in PuLP constructs like lpSum to do it -- you can't just add arrays together (as you discovered).
So your score definition should look something like this:
score = pulp.lpSum([train['target'][i] - (x * array1[i] + y * array2[i] + z * array3[i]) for i in arr_ind])
A few notes about this:
[+] You didn't provide the definition of roc_auc_score so I just pretended that it equals the sum of the element-wise difference between the target array and the weighted sum of the other 3 arrays.
[+] I suspect your actual calculation for roc_auc_score is nonlinear; more on this below.
[+] arr_ind is a list of the indices of the arrays, which I created like this:
# build array index
arr_ind = range(len(array1))
[+] You also didn't include the arrays, so I created them like this:
array1 = np.random.rand(10, 1)
array2 = np.random.rand(10, 1)
array3 = np.random.rand(10, 1)
train = {}
train['target'] = np.ones((10, 1))
Here is my complete code, which compiles and executes, though I'm sure it doesn't give you the result you are hoping for, since I just guessed about target and roc_auc_score:
import numpy as np
import pulp
# create the LP object, set up as a maximization problem
prob = pulp.LpProblem('Giapetto', pulp.LpMaximize)
# dummy arrays since arrays weren't in OP code
array1 = np.random.rand(10, 1)
array2 = np.random.rand(10, 1)
array3 = np.random.rand(10, 1)
# build array index
arr_ind = range(len(array1))
# set up decision variables
x = pulp.LpVariable('x', lowBound=0)
y = pulp.LpVariable('y', lowBound=0)
z = pulp.LpVariable('z', lowBound=0)
# dummy roc_auc_score since roc_auc_score wasn't in OP code
train = {}
train['target'] = np.ones((10, 1))
score = pulp.lpSum([train['target'][i] - (x * array1[i] + y * array2[i] + z * array3[i]) for i in arr_ind])
prob += score
coef = x + y + z
prob += coef == 1
# solve the LP using the default solver
optimization_result = prob.solve()
# make sure we got an optimal solution
assert optimization_result == pulp.LpStatusOptimal
# display the results
for var in (x, y, z):
    print('Optimal weekly number of {} to produce: {:1.0f}'.format(var.name, var.value()))
Output:
Optimal weekly number of x to produce: 0
Optimal weekly number of y to produce: 0
Optimal weekly number of z to produce: 1
Process finished with exit code 0
Now, if your roc_auc_score function is nonlinear, you will have additional troubles. I would encourage you to try to formulate the score in a way that is linear, possibly using additional variables (for example, if you want the score to be an absolute value).
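If the score has to stay linear, the absolute-value idea above can be written out explicitly. A hedged sketch (dummy data as before; the dev_i variables are invented for illustration): bound each deviation from both sides with two inequalities and minimize the total:
import numpy as np
import pulp
# dummy data, mirroring the placeholders above
array1 = np.random.rand(10)
array2 = np.random.rand(10)
array3 = np.random.rand(10)
target = np.ones(10)
arr_ind = range(len(array1))
# minimize total absolute deviation, subject to the weights summing to 1
prob = pulp.LpProblem('abs_fit', pulp.LpMinimize)
x = pulp.LpVariable('x', lowBound=0, upBound=1)
y = pulp.LpVariable('y', lowBound=0, upBound=1)
z = pulp.LpVariable('z', lowBound=0, upBound=1)
dev = [pulp.LpVariable('dev_{}'.format(i), lowBound=0) for i in arr_ind]
for i in arr_ind:
    pred = x * float(array1[i]) + y * float(array2[i]) + z * float(array3[i])
    # dev[i] >= |target[i] - pred| via two linear inequalities
    prob += dev[i] >= float(target[i]) - pred
    prob += dev[i] >= pred - float(target[i])
prob += x + y + z == 1
prob += pulp.lpSum(dev)  # the objective
prob.solve()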

Finding loss mask of variable length in keras tensorflow

I am trying to build a loss function that captures the behaviour below: it masks the output values once 'end of sequence' is encountered.
Given a tensor of shape [BatchSize, MaxSequenceLength, OutputNodes], consider the example below:
batch size = 3
Max Sequence Length=4
OutputNodes = 3
predicted = [[[0.1,0.3,0.2],[0.4,0.6,0.8],[0.5,0.2,0.3],[0.0,0.0,0.99]],
             [[0.1,0.3,0.2],[0.4,0.9,0.8],[0.5,0.2,0.9],[0.4,0.6,0.8]],
             [[0.1,0.3,0.2],[0.4,0.9,0.8],[0.5,0.2,0.1],[0.4,0.6,0.1]]]
I am dedicating the last output node to symbolise 'end of sequence' (EOS), here node=2. Nodes are labelled 0, 1 and 2.
Based on the predicted value, I have to return a mask which tries to find the first occurrence of EOS.
In the above example,
first row has the following sequence (argmax) => 1,2,0,2
second row has the following sequence => 1,1,2,2
third row has the following sequence => 1,1,0,1
So my mask should be
[[1,0,0,0],
 [1,1,0,0],
 [1,1,1,1]]
The mask will ensure that the values after the EOS are ignored, i.e. not considered when calculating the loss.
Below is the code snippet I tried:
sequence_cluster_asign = keras.backend.argmax(sequence_values,axis=-1)
loss_mask = []
for seq in K.tf.unstack(sequence_cluster_asign):
    # append EOS to make sure tf.where is not empty
    seq = tf.concat([seq,endOfSequenceTensor],axis=0)
    endOfSequenceLocation = K.tf.where(K.tf.equal(seq,endOfSequence))[0][0]
    loss_mask.append(tf.sequence_mask(endOfSequenceLocation,max_decoder_seq_length,dtype=tf.float32))
final_mask = K.stack(loss_mask)
Error encountered : ValueError: Cannot infer num from shape (?,?)
If you want to get the mask described in your question, you can use the following method.
import tensorflow as tf
import keras
from keras import backend as K
sequence_values = K.placeholder(shape=(None, 4, 3))
sequence_cluster_asign = keras.backend.argmax(sequence_values,axis=-1)
# keras version
result = K.cast(K.less(sequence_cluster_asign,sequence_values.get_shape().as_list()[-1]-1),dtype='int32')
result = K.cumprod(result,axis=-1)
# tensorflow version
# result = tf.cast(tf.less(sequence_cluster_asign,sequence_values.get_shape().as_list()[-1]-1),dtype=tf.int32)
# result = tf.cumprod(result,axis=-1)
predicted = [[[0.1,0.3,0.2],[0.4,0.6,0.8],[0.5,0.2,0.3],[0.0,0.0,0.99]],
             [[0.1,0.3,0.2],[0.4,0.9,0.8],[0.5,0.2,0.9],[0.4,0.6,0.8]],
             [[0.1,0.3,0.2],[0.4,0.9,0.8],[0.5,0.2,0.1],[0.4,0.6,0.1]]]
with tf.Session() as sess:
    print(result.eval(feed_dict={sequence_values:predicted}))
[[1 0 0 0]
[1 1 0 0]
[1 1 1 1]]
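The answer above only builds the mask; to actually ignore the post-EOS values in a loss, one common pattern (sketched here with an assumed placeholder targets of true class indices; this is not from the original answer) is to multiply a per-timestep loss by the mask and renormalize:
targets = K.placeholder(shape=(None, 4), dtype='int32')  # hypothetical true labels
per_step = K.sparse_categorical_crossentropy(targets, sequence_values)  # (batch, time)
mask_f = K.cast(result, 'float32')
loss = K.sum(per_step * mask_f) / K.maximum(K.sum(mask_f), 1.0)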

How to keep calculated values in a Tensorflow graph (on the GPU)?

How can we make sure that a calculated value will not be copied back to CPU/python memory, but is still available for calculations in the next step?
The following code obviously doesn't do it:
import tensorflow as tf
a = tf.Variable(tf.constant(1.),name="a")
b = tf.Variable(tf.constant(2.),name="b")
result = a + b
stored = result
with tf.Session() as s:
    val = s.run([result,stored],{a:1.,b:2.})
    print(val) # 3
    val = s.run([result],{a:4.,b:5.})
    print(val) # 9
    print(stored.eval()) # 3 NOPE:
Error : Attempting to use uninitialized value _recv_b_0
The answer is to store the value in a tf.Variable, writing to it with the assign operation.
Working code:
import tensorflow as tf
with tf.Session() as s:
    a = tf.Variable(tf.constant(1.),name="a")
    b = tf.Variable(tf.constant(2.),name="b")
    result = a + b
    stored = tf.Variable(tf.constant(0.),name="stored_sum")
    assign_op = stored.assign(result)
    val,_ = s.run([result,assign_op],{a:1.,b:2.})
    print(val) # 3
    val = s.run(result,{a:4.,b:5.})
    print(val) # 9
    print(stored.eval()) # ok, still 3
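As an added illustration (a sketch, not part of the original answer): because stored now lives in the graph, a later step can consume it without feeding the value back in from Python, e.g. continuing inside the same with block:
    # still inside the session: reuse the kept value on-device
    doubled = stored * 2.   # a graph op that consumes the stored Variable
    print(s.run(doubled))   # 6.0, computed from the kept value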