I want to use Gurobi to solve a very simple LP:
minimize z
s.t. x + y <= z
where x, y, z are decision variables created with gp.Model().addVar() using the default settings. The objective is set with m.setObjective(1.0*z, GRB.MINIMIZE).
Then I solved the model, and the program reports that the optimal value for z is 0.000. I don't understand why this is the optimal value. Is there some constraint on Gurobi's default decision variables, e.g. that they are non-negative? Otherwise, why would 0.0 be the optimal value for this LP when x, y, and z are unbounded?
The convention in Gurobi and other LP/MIP solvers is that decision variables have a default lower bound of zero. If you want another lower bound, either set the LB attribute or pass it when you call Model.addVar(), e.g.:
from gurobipy import Model

m = Model()
x = m.addVar(lb=-20, name='x')  # lower bound -20 instead of the default 0
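To see why the default lower bound pins the optimum at zero, here is a minimal sketch of the model from the question (the status checks and the free-variable variant are my additions, not part of the original post):

import gurobipy as gp
from gurobipy import GRB

# Default variables: lower bound 0, upper bound +infinity.
m = gp.Model()
x = m.addVar(name="x")
y = m.addVar(name="y")
z = m.addVar(name="z")
m.addConstr(x + y <= z)
m.setObjective(1.0 * z, GRB.MINIMIZE)
m.optimize()
print(m.Status == GRB.OPTIMAL, z.X)  # True 0.0

# Free variables: the same LP becomes unbounded, since z can go to -infinity.
m2 = gp.Model()
x2 = m2.addVar(lb=-GRB.INFINITY, name="x")
y2 = m2.addVar(lb=-GRB.INFINITY, name="y")
z2 = m2.addVar(lb=-GRB.INFINITY, name="z")
m2.addConstr(x2 + y2 <= z2)
m2.setObjective(1.0 * z2, GRB.MINIMIZE)
m2.optimize()
# Gurobi may report INF_OR_UNBD instead of UNBOUNDED unless DualReductions is set to 0.
print(m2.Status in (GRB.UNBOUNDED, GRB.INF_OR_UNBD))  # True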
I'm running functions to create cyclical datetime features, so I have converted timestamps to sine and cosine representations for ML model training.
In one sample, x = 305.2116709309027, giving np.sin(x) = -0.459279 and np.cos(x) = -0.888292. My question is: how can I retrieve x from these sin and cos features later?
I assumed np.arcsin(-0.459279) == 305.2116709309027 and that I could then decode the timestamp from there, but I'm not having any luck.
You should be aware that, mathematically, sin(x) and cos(x) are periodic functions, meaning that many different inputs yield the same output.
For example, x = 0, x = 2*pi, and x = 4*pi all yield the same value. So you cannot decode x from y unless you know the input is restricted to a single period, such as [0, 2*pi).
HOWEVER, arcsin only inverts sin on a restricted interval: np.arcsin always returns a value in [-pi/2, pi/2], where each output corresponds to a unique input, so it only gives you back x if x already lies in that interval.
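As a small sketch (my addition, not part of the answer above, using only numpy and the sample value from the question): combining both features with np.arctan2 recovers the angle up to a multiple of 2*pi, which is the most you can hope for with periodic features:

import numpy as np

x = 305.2116709309027
s, c = np.sin(x), np.cos(x)

# arctan2(sin, cos) returns the angle in (-pi, pi]; shift it into [0, 2*pi).
angle = np.arctan2(s, c) % (2 * np.pi)

# The original x is only recoverable modulo 2*pi.
print(np.isclose(angle, x % (2 * np.pi)))  # True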
I have a curve where Y initially increases linearly with X and then reaches a plateau at point C.
In other words, the curve can be defined as:
if X < C:
    Y = k * X + b
else:
    Y = k * C + b
The training data is a list of X ~ Y values. I need to determine k, b and C through a machine learning approach (or similar), since the data is noisy and the breakpoint C changes over time. I want something more robust than estimating C by eyeballing the current sample data.
How can I do it using sklearn or maybe scipy?
WLOG you can say the second equation is simply
Y = constant
where the constant is the plateau value k*C + b.
It looks like you have a linear regression to fit the line, plus a change-point detection to find where the constant part starts.
You know that for high values of X, i.e. X > C, you are already on the plateau. So just walk back down the values of X and check how far the same constant persists.
Then fit a linear regression on the points with X <= C; a rough sketch of this idea follows.
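Here is a minimal sketch of that two-step heuristic (the tolerance and the way the plateau level is estimated are arbitrary choices I made for illustration, not part of the answer above):

import numpy as np
from sklearn.linear_model import LinearRegression

def fit_line_then_plateau(X, Y, tol=0.05):
    # Sort by X so we can walk down from the largest X values.
    order = np.argsort(X)
    X, Y = np.asarray(X, dtype=float)[order], np.asarray(Y, dtype=float)[order]

    # Estimate the plateau level from the right-hand tail of the data.
    plateau = np.median(Y[-max(5, len(Y) // 10):])

    # Walk back down from the largest X while Y stays within tol of the plateau.
    i = len(Y) - 1
    while i > 0 and abs(Y[i] - plateau) <= tol:
        i -= 1
    C = X[i]  # estimated breakpoint

    # Fit the linear part on the points with X <= C.
    mask = X <= C
    reg = LinearRegression().fit(X[mask].reshape(-1, 1), Y[mask])
    return reg.coef_[0], reg.intercept_, C  # k, b, C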
Your model is nonlinear.
I think the smartest way to solve this is to do these steps:
find the maximum value of Y, which is equal to k*C + b:
M = max(Y)
drop this maximum value from your dataset:
df1 = df[df.Y != M]
and then you have a simple dataset on which to fit your X to Y, and you can use sklearn for that.
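Since the question also mentions scipy: here is a minimal sketch (my addition, not part of either answer above) that fits the piecewise model from the question directly with scipy.optimize.curve_fit, estimating k, b and C together from the noisy data:

import numpy as np
from scipy.optimize import curve_fit

def piecewise(X, k, b, C):
    # Y = k*X + b for X < C, and the plateau value k*C + b for X >= C.
    return np.where(X < C, k * X + b, k * C + b)

def fit_piecewise(X, Y):
    # X, Y are the noisy training data as 1-D numpy arrays.
    # p0 is a rough initial guess for (k, b, C); the median of X is an arbitrary start for C.
    p0 = [1.0, 0.0, float(np.median(X))]
    (k, b, C), _ = curve_fit(piecewise, np.asarray(X, float), np.asarray(Y, float), p0=p0)
    return k, b, C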
I'd like to write an LP problem in the standard form with MathOptInterface, i.e.:
min c'*x
s.t. A*x == b
x >= 0
Now, how can one write this problem with MathOptInterface? I'm having many issues; one of them is how to define the variable model. For example, if I try to run:
x = add_variables(model,3)
I would first need to declare this model variable, but I don't know how one is supposed to do this in MathOptInterface.
IIUC, in your situation model has to be an argument specified by the user of your function.
The user can then pass GLPK.Optimizer(), Tulip.Optimizer() or any other optimizer inheriting from MathOptInterface.AbstractOptimizer.
See e.g. Manual#A complete example.
Alternatively, you can look at MOI.Utilities.Model, but I don't know how to get an optimizer to solve that model.
Here is how to implement the LP solver for the standard simplex format:
function SolveLP(c, A, b, model::MOI.ModelLike)
    # One decision variable per entry of the cost vector c.
    x = MOI.add_variables(model, length(c))

    # Objective: minimize c'x.
    MOI.set(model, MOI.ObjectiveFunction{MOI.ScalarAffineFunction{Float64}}(),
            MOI.ScalarAffineFunction(MOI.ScalarAffineTerm.(c, x), 0.0))
    MOI.set(model, MOI.ObjectiveSense(), MOI.MIN_SENSE)

    # Bound constraints: x >= 0.
    for xi in x
        MOI.add_constraint(model, MOI.SingleVariable(xi), MOI.GreaterThan(0.0))
    end

    # Equality constraints: A*x == b, added row by row.
    for (i, row) in enumerate(eachrow(A))
        row_function = MOI.ScalarAffineFunction(MOI.ScalarAffineTerm.(row, x), 0.0)
        MOI.add_constraint(model, row_function, MOI.EqualTo(b[i]))
    end

    MOI.optimize!(model)

    # Return the primal solution vector.
    return MOI.get(model, MOI.VariablePrimal(), x)
end
For the model argument, just pass something like GLPK.Optimizer().
Given variables y and z, both of which depend on a tensor x: by the product rule, tf.gradients(y*z, x) gives me y'(x)*z(x) + z'(x)*y(x). Is there a way I can treat y as a constant with respect to x so that tf.gradients(y*z, x) only gives me z'(x)*y(x)?
I know y_ = tf.constant(sess.run(y)) would give me y as a constant, but I cannot use that solution in my code.
You can use tf.stop_gradient() to block backpropagation. To block gradients in your example:
y = function1(x)
z = function2(x)
blocked_y = tf.stop_gradient(y)
product = blocked_y * z
When you backpropagate through product, the gradient will continue to flow into z but not into y.
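Here is a small runnable sketch of the same idea (my addition, using the TF1 graph API from the question; function1 and function2 are replaced with simple stand-ins):

import tensorflow as tf  # TF1-style graph mode, as in the question

x = tf.placeholder(tf.float32)
y = tf.square(x)   # stand-in for function1(x): y = x^2
z = 3.0 * x        # stand-in for function2(x): z = 3x

blocked_y = tf.stop_gradient(y)  # treat y as a constant w.r.t. x
product = blocked_y * z

# d(product)/dx = y * dz/dx = 3*x^2; the y'(x)*z(x) term is dropped.
grad = tf.gradients(product, x)[0]

with tf.Session() as sess:
    print(sess.run(grad, feed_dict={x: 2.0}))  # 12.0 (without stop_gradient it would be 36.0)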
Given that x and y are tensors, I know I can do
with tf.name_scope("abc"):
    z = tf.add(x, y, name="z")
So that z is named "abc/z".
I am wondering if there exists a function f which assigns the name directly in the following case:
with tf.name_scope("abc"):
    z = x + y
    f(z, name="z")
The stupid f I am using now is z = tf.add(0, z, name="z")
If you want to "rename" an op, there is no way to do that directly, because a tf.Operation (or tf.Tensor) is immutable once it has been created. The typical way to rename an op is therefore to use tf.identity(), which has almost no runtime cost:
with tf.name_scope("abc"):
z = x + y
z = tf.identity(z, name="z")
Note however that the recommended way to structure your name scope is to assign the name of the scope itself to the "output" from the scope (if there is a single output op):
with tf.name_scope("abc") as scope:
# z will get the name "abc". x and y will have names in "abc/..." if they
# are converted to tensors.
z = tf.add(x, y, name=scope)
This is how the TensorFlow libraries are structured, and it tends to give the best visualization in TensorBoard.
It seems this also works without tf.name_scope, using just z = tf.identity(z, name="z_name"). If you additionally run z = tf.identity(z, name="z_name_new"), you can then access the same tensor under both names: tf.get_default_graph().get_tensor_by_name("z_name:0") or tf.get_default_graph().get_tensor_by_name("z_name_new:0").