With OpenMDAO, I am using FD derivatives and trying to solve a nonlinearly constrained optimization problem with the SLSQP method. Any time the optimizer arrives at a point that violates one of the constraints, it just crashes with the message:
Optimization FAILED. Positive directional derivative for linesearch
For instance, if I intentionally set the initial point to an infeasible design point, the optimizer performs 1 iteration and exits with the above error (the same happens when I start from a feasible point, but the optimizer then arrives at an infeasible point after a few iterations).
Based on the answer to the question "In OpenMDAO, is there a way to ensure that the constraints are respected before proceeding with a computation?", I'm assuming that raising the AnalysisError exception will not work in my case, is that correct? Is there any other way to prevent the optimizer from going into infeasible regions, or at least to make it backtrack on the linesearch and try a different direction/distance? Or should the SLSQP method only be used when analytic derivatives are available?
Reproducible test case:
import numpy as np
import openmdao.api as om
class d1(om.ExplicitComponent):
def setup(self):
# Global design variables
self.add_input('r', val= [3,3,3])
self.add_input('T', val= 20)
# Coupling output
self.add_output('M', val=0)
self.add_output('cost', val=0)
def setup_partials(self):
# Finite difference all partials.
self.declare_partials('*', '*', method='fd')
def compute(self, inputs, outputs):
# define inputs
r = inputs['r']
T = inputs['T'][0]
cost = 174.42 * T * (r[0]**2 + 2*r[1]**2 + r[2]**2 + r[0]*r[1] + r[1]*r[2])
M = 456.19 * T * (r[0]**2 + 2*r[1]**2 + r[2]**2 + r[0]*r[1] + r[1]*r[2]) - 599718
outputs['M'] = M
outputs['cost'] = cost
class MDA(om.Group):
class ObjCmp(om.ExplicitComponent):
def setup(self):
# Global Design Variable
self.add_input('cost', val=0)
# Output
self.add_output('obj', val=0.0)
def setup_partials(self):
# Finite difference all partials.
self.declare_partials('*', '*', method='fd')
def compute(self, inputs, outputs):
outputs['obj'] = inputs['cost']
class ConCmp(om.ExplicitComponent):
def setup(self):
# Global Design Variable
self.add_input('M', val=0)
# Output
self.add_output('con', val=0.0)
def setup_partials(self):
# Finite difference all partials.
self.declare_partials('*', '*', method='fd')
def compute(self, inputs, outputs):
# assemble outputs
outputs['con'] = inputs['M']
def setup(self):
self.add_subsystem('d1', d1(), promotes_inputs=['r','T'],
promotes_outputs=['M','cost'])
self.add_subsystem('con_cmp', self.ConCmp(), promotes_inputs=['M'],
promotes_outputs=['con'])
self.add_subsystem('obj_cmp', self.ObjCmp(), promotes_inputs=['cost'],
promotes_outputs=['obj'])
# Build the model
prob = om.Problem(model=MDA())
model = prob.model
model.add_design_var('r', lower= [3,3,3], upper= [10,10,10])
model.add_design_var('T', lower= 20, upper= 220)
model.add_objective('obj', scaler=1)
model.add_constraint('con', lower=0)
# Setup the optimization
prob.driver = om.ScipyOptimizeDriver(optimizer='SLSQP', tol=1e-3, disp=True)
prob.setup()
prob.set_solver_print(level=0)
prob.run_driver()
# Printout
print('minimum found at')
print(prob.get_val('T')[0])
print(prob.get_val('r'))
print('constraint')
print(prob.get_val('con')[0])
print('minimum objective')
print(prob.get_val('obj')[0])
Based on your provided test case, the problem here is that you have a really poorly scaled objective and constraint (you also have some very strange coding choices ... which I modified).
Running the OpenMDAO scaling report shows that your objective and constraint values are both around 1e6 in magnitude.
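(In case it helps, the report itself can be generated like this — a sketch based on recent OpenMDAO versions, so check the docs for the exact API in yours:)
# after run_driver(), write the driver scaling report (an HTML file)
prob.run_driver()
prob.driver.scaling_report()   # writes driver_scaling_report.html by default
# or, from the command line:
#   openmdao scaling your_script.py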
This is quite large, and is the source of your problems. A (very rough) rule of thumb is that your objectives and constraints should be around order 1. That's not a hard and fast rule, but it is generally a good starting point. Sometimes other scaling will work better, if you have very large or very small derivatives ... but there are parts of SQP methods that are sensitive to the scaling of the objective and constraint values directly. So trying to keep them roughly in the range of 1 is a good idea.
Adding ref=1e6 to both the objective and the constraint gave enough resolution for the numerical methods to converge the problem:
Current function value: [0.229372]
Iterations: 8
Function evaluations: 8
Gradient evaluations: 8
Optimization Complete
-----------------------------------
minimum found at
20.00006826587515
[3.61138704 3. 3.61138704]
constraint
197.20821903413162
minimum objective
229371.99547899762
Here is the code I modified (including removing the extra class definitions inside your group that didn't seem to be doing anything):
import numpy as np
import openmdao.api as om
class d1(om.ExplicitComponent):
def setup(self):
# Global design variables
self.add_input('r', val= [3,3,3])
self.add_input('T', val= 20)
# Coupling output
self.add_output('M', val=0)
self.add_output('cost', val=0)
def setup_partials(self):
# Finite difference all partials.
self.declare_partials('*', '*', method='cs')
def compute(self, inputs, outputs):
# define inputs
r = inputs['r']
T = inputs['T'][0]
cost = 174.42 * T * (r[0]**2 + 2*r[1]**2 + r[2]**2 + r[0]*r[1] + r[1]*r[2])
M = 456.19 * T * (r[0]**2 + 2*r[1]**2 + r[2]**2 + r[0]*r[1] + r[1]*r[2]) - 599718
outputs['M'] = M
outputs['cost'] = cost
class MDA(om.Group):
def setup(self):
self.add_subsystem('d1', d1(), promotes_inputs=['r','T'],
promotes_outputs=['M','cost'])
# self.add_subsystem('con_cmp', self.ConCmp(), promotes_inputs=['M'],
# promotes_outputs=['con'])
# self.add_subsystem('obj_cmp', self.ObjCmp(), promotes_inputs=['cost'],
# promotes_outputs=['obj'])
# Build the model
prob = om.Problem(model=MDA())
model = prob.model
model.add_design_var('r', lower= [3,3,3], upper= [10,10,10])
model.add_design_var('T', lower= 20, upper= 220)
model.add_objective('cost', ref=1e6)
model.add_constraint('M', lower=0, ref=1e6)
# Setup the optimization
prob.driver = om.ScipyOptimizeDriver(optimizer='SLSQP', tol=1e-3, disp=True)
prob.setup()
prob.set_solver_print(level=0)
prob.set_val('r', 7.65)
prob.run_driver()
# Printout
print('minimum found at')
print(prob.get_val('T')[0])
print(prob.get_val('r'))
print('constraint')
print(prob.get_val('M')[0])
print('minimum objective')
print(prob.get_val('cost')[0])
Which SLSQP method are you using? There is one implementation in pyOptSparse and one in ScipyOptimizeDriver. The one in pyOptSparse is older and doesn't respect bound constraints. The one in SciPy is newer and does. (Yes, it's very confusing that they have the same name and share some lineage ... but they are not the same optimizer any more.)
You shouldn't raise an analysis error when you go outside the bounds. If you need strict bound enforcement, I suggest using IPOPT from within pyOptSparse (if you can get it to compile) or switching to ScipyOptimizeDriver and its SLSQP implementation.
Based on your question, it's not totally clear to me if you're talking about bound constraints or inequality/equality constraints. If it's the latter, then there isn't any optimizer that can guarantee you remain in a feasible region all the time. Interior point methods like IPOPT will stay inside the feasible region much better, but not 100% of the time.
In general, with gradient-based optimization it's pretty critical that you make your problem smooth and continuous even when it's outside the feasible region. If there are parts of the space that you absolutely cannot go into, then you need to make those quantities into design variables and use bound constraints. This sometimes requires reformulating your problem a little bit, possibly by adding a kind of compatibility constraint that says "design variable = computed_value". Then you can make sure that the design variable is passed into anything that requires the value to be strictly within a bound, and (hopefully) a converged answer will also satisfy your compatibility constraint.
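Here is a rough sketch of that pattern in OpenMDAO (names like x_dv, x_computed, and the ExecComps are purely illustrative, not taken from your model):
import openmdao.api as om

prob = om.Problem()
model = prob.model
# component that computes the quantity that must stay within bounds
model.add_subsystem('calc', om.ExecComp('x_computed = 2.0*a'), promotes=['*'])
# compatibility residual: ties the new design variable to the computed value
model.add_subsystem('compat', om.ExecComp('resid = x_dv - x_computed'), promotes=['*'])

model.add_design_var('a', lower=0.0, upper=10.0)
model.add_design_var('x_dv', lower=1.0, upper=5.0)   # the strict bounds live here
model.add_constraint('resid', equals=0.0)            # compatibility constraint
model.add_objective('x_computed')

prob.driver = om.ScipyOptimizeDriver(optimizer='SLSQP')
prob.setup()
prob.run_driver()
Anything downstream that must only ever see an in-bounds value would then take 'x_dv' as its input, while the equality constraint ties it back to 'x_computed' at convergence.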
If you provide some kind of a test case or example, I can amend my answer with a more specific suggestion.
Related
I am researching an application of actor-critic RL in a nonstationary environment and the loss of the Q-network (or, if I also implement a value function network, that loss too) quickly converges to zero, well before the network finds the optimal policy.
The architecture is somewhat successful in finding a good policy, even though it is not very robust to perturbations, and I suspect that the Q-loss converging this fast reveals an inability to estimate the action-value or state-value function correctly. The environment being nonstationary makes it even more suspect, since there should always be some degree of error in the estimation. Any ideas as to what might be causing this?
Specifically, I am using soft actor-critic, and my implementation is based on OpenAI's Spinning Up repo. The optimization targets are as described in the paper [0], but I honestly find their code much more understandable - math in RL is usually not rigorous enough to really make sense of it. Anyway these are the expressions for the value function target:
and for the Q-function target:
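(For reference, these are the targets as written in the SAC paper [0], reproduced here in the paper's notation:)
$$ J_V(\psi) = \mathbb{E}_{s_t \sim \mathcal{D}}\Big[\tfrac{1}{2}\big(V_\psi(s_t) - \mathbb{E}_{a_t \sim \pi_\phi}[Q_\theta(s_t, a_t) - \log \pi_\phi(a_t \mid s_t)]\big)^2\Big] $$
$$ J_Q(\theta) = \mathbb{E}_{(s_t, a_t) \sim \mathcal{D}}\Big[\tfrac{1}{2}\big(Q_\theta(s_t, a_t) - \hat{Q}(s_t, a_t)\big)^2\Big], \qquad \hat{Q}(s_t, a_t) = r(s_t, a_t) + \gamma\, \mathbb{E}_{s_{t+1} \sim p}\big[V_{\bar{\psi}}(s_{t+1})\big] $$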
where \theta, \psi and \bar{\psi} are neural networks (the Q-function, main value, and target value networks, respectively). I slightly modify these equations to optimize for the average reward rate, since my task is continuous, see [3], and include entropy regularization when taking the log probability of the action given by the policy.
My Q- and value functions are simple MLPs:
# Soft Actor-Critic from OpenAI https://github.com/openai/spinningup/tree/master/spinup/algos/pytorch/sac
def mlp(sizes, activation, output_activation=nn.Identity):
layers = []
for j in range(len(sizes) - 1):
act = activation if j < len(sizes) - 2 else output_activation
layers += [nn.Linear(sizes[j], sizes[j + 1]), act()]
return nn.Sequential(*layers)
class MLPQFunction(nn.Module):
def __init__(self, obs_dim, act_dim, hidden_sizes, activation):
super().__init__()
self.q = mlp([obs_dim + act_dim] + list(hidden_sizes) + [1], activation)
def forward(self, obs, act):
q = self.q(torch.cat([obs, act], dim=-1))
return torch.squeeze(q, -1) # Critical to ensure q has right shape.
class MLPValueFunction(nn.Module):
def __init__(self, obs_dim, hidden_sizes, activation):
super().__init__()
self.v = mlp([obs_dim] + list(hidden_sizes) + [1], activation)
def forward(self, obs):
v = self.v(obs)
return v.squeeze()
and I compute the losses this way, after sampling a tuple of batches (o, a, r, o2) from a replay buffer. Each variable is [batch x dim(S)] if it's an observation, where dim(S) is the dimension of the state space, 2 in my case, or [batch x 1] if it's an action or reward.
q1 = net.q1(o, a)
q2 = net.q2(o, a)
# Bellman backup for Q functions
with torch.no_grad():
# Target actions come from *current* policy
a2, logp_a2 = net.pi(o2)
# Target Q-values
q1_pi_targ = target_net.q1(o2, a2)
q2_pi_targ = target_net.q2(o2, a2)
q_pi_targ = torch.min(q1_pi_targ, q2_pi_targ)
backup = r - avg_reward + (q_pi_targ - temp * logp_a2)
# MSE loss against Bellman backup
loss_q1 = F.smooth_l1_loss(q1, backup)
loss_q2 = F.smooth_l1_loss(q2, backup)
loss_q = loss_q1 + loss_q2
q_optimizer.zero_grad(set_to_none=True)
loss_q.backward()
q_optimizer.step()
# Compute value function loss
v_optimizer.zero_grad(set_to_none=True)
vf = net.v(o)
with torch.no_grad():
vf_target = q_pi - temp * logp_pi
loss_v = F.smooth_l1_loss(vf, vf_target)
loss_v.backward()
v_optimizer.step()
Where avg_reward is estimated as a running average:
avg_reward += AVG_REW_LR * (R - avg_reward + target_net.v(next_state.squeeze()) - target_net.v(state.squeeze()))
[0] Haarnoja, T., Zhou, A., Abbeel, P., & Levine, S. (2018). Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor. Proceedings of the 35th International Conference on Machine Learning, 1861–1870. https://proceedings.mlr.press/v80/haarnoja18b.html
[3] Naik, A., Shariff, R., Yasui, N., Yao, H., & Sutton, R. S. (2019). Discounted Reinforcement Learning Is Not an Optimization Problem. ArXiv:1910.02140 [Cs]. http://arxiv.org/abs/1910.02140
I am working on a repo that makes use of the maskrcnn_benchmark repo. I have extensively explored the benchmarking repo to find the cause of its slower performance on a CPU relative to the linked implementation.
In order to benchmark the individual forward passes, I have put a time counter on each part, which gives me the time required to compute each component. I have had a tough time pinpointing exactly which component of the entire architecture is the slowest. I believe it to be the BottleneckWithFixedBatchNorm class in the maskrcnn_benchmark/modeling/backbone/resnet.py file.
I would really appreciate any help in locating the biggest bottleneck in this architecture.
I have faced the same problem. The best possible solution is to look inside the main code, go through the forward pass of each module, and set up a timer to log the time spent in each module's computations. Our approach was to create a time logger for each class, so that every instance of the class logs its own execution time. After a thorough comparison, at least in our case, we found that the reason for the delay was the depth of the ResNet module (which, given the computational cost of ResNet, is not a surprising factor at all). The only remedy is more parallelization, so either use a bigger GPU for the task or reduce the depth of the ResNet network.
I should also mention that maskrcnn_benchmark has been deprecated and an updated version is available in the form of detectron2. Consider migrating your code for significant speed improvements.
BottleneckWithFixedBatchNorm is not the most expensive operation in the architecture and, despite its name, is certainly not what creates the bottleneck. The class is not particularly computationally expensive and is computed in parallel even on a lower-end CPU machine (at least in the inference stage).
An example of better tracking the performance of each module is shown below, using the code from maskrcnn_benchmark/modeling/backbone/resnet.py:
class ResNet(nn.Module):
def __init__(self, cfg):
super(ResNet, self).__init__()
# If we want to use the cfg in forward(), then we should make a copy
# of it and store it for later use:
# self.cfg = cfg.clone()
# Translate string names to implementations
stem_module = _STEM_MODULES[cfg.MODEL.RESNETS.STEM_FUNC]
stage_specs = _STAGE_SPECS[cfg.MODEL.BACKBONE.CONV_BODY]
transformation_module = _TRANSFORMATION_MODULES[cfg.MODEL.RESNETS.TRANS_FUNC]
# Construct the stem module
self.stem = stem_module(cfg)
# Constuct the specified ResNet stages
num_groups = cfg.MODEL.RESNETS.NUM_GROUPS
width_per_group = cfg.MODEL.RESNETS.WIDTH_PER_GROUP
in_channels = cfg.MODEL.RESNETS.STEM_OUT_CHANNELS
stage2_bottleneck_channels = num_groups * width_per_group
stage2_out_channels = cfg.MODEL.RESNETS.RES2_OUT_CHANNELS
self.stages = []
self.return_features = {}
for stage_spec in stage_specs:
name = "layer" + str(stage_spec.index)
stage2_relative_factor = 2 ** (stage_spec.index - 1)
bottleneck_channels = stage2_bottleneck_channels * stage2_relative_factor
out_channels = stage2_out_channels * stage2_relative_factor
stage_with_dcn = cfg.MODEL.RESNETS.STAGE_WITH_DCN[stage_spec.index -1]
module = _make_stage(
transformation_module,
in_channels,
bottleneck_channels,
out_channels,
stage_spec.block_count,
num_groups,
cfg.MODEL.RESNETS.STRIDE_IN_1X1,
first_stride=int(stage_spec.index > 1) + 1,
dcn_config={
"stage_with_dcn": stage_with_dcn,
"with_modulated_dcn": cfg.MODEL.RESNETS.WITH_MODULATED_DCN,
"deformable_groups": cfg.MODEL.RESNETS.DEFORMABLE_GROUPS,
}
)
in_channels = out_channels
self.add_module(name, module)
self.stages.append(name)
self.return_features[name] = stage_spec.return_features
# Optionally freeze (requires_grad=False) parts of the backbone
self._freeze_backbone(cfg.MODEL.BACKBONE.FREEZE_CONV_BODY_AT)
def _freeze_backbone(self, freeze_at):
if freeze_at < 0:
return
for stage_index in range(freeze_at):
if stage_index == 0:
m = self.stem # stage 0 is the stem
else:
m = getattr(self, "layer" + str(stage_index))
for p in m.parameters():
p.requires_grad = False
def forward(self, x):
start_timer=time.time()
outputs = []
x = self.stem(x)
for stage_name in self.stages:
x = getattr(self, stage_name)(x)
if self.return_features[stage_name]:
outputs.append(x)
print("ResNet time :: ", time.time()-start_timer,file=open("timelogger.log","a"))
return outputs
The only change that has to be made is in the forward pass (plus an import time at the top of the file); every instance created from this class will then log its execution time (here it is written to a file rather than to plain stdout).
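If editing every forward() by hand becomes tedious, a generic per-module timer can also be attached with PyTorch forward hooks — a sketch (the function name and log file are illustrative):
import time
import torch.nn as nn

def attach_timers(model: nn.Module, logfile="timelogger.log"):
    # register a pre-hook/post-hook pair on each child module and log the
    # elapsed time of every forward pass to a file
    def make_hooks(name):
        def pre_hook(module, inputs):
            module._t0 = time.time()
        def post_hook(module, inputs, output):
            with open(logfile, "a") as f:
                print(name, "::", time.time() - module._t0, file=f)
        return pre_hook, post_hook

    for name, module in model.named_children():  # use named_modules() for full depth
        pre, post = make_hooks(name)
        module.register_forward_pre_hook(pre)
        module.register_forward_hook(post)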
I am using the ScipyOptimizerInterface in tensorflow. I provide a minimal example below, where I optimize function f(x)=p*x**2+x for some placeholder p.
Now, I would like to gradually change the value of the placeholder during optimization, i.e. I want to change p in every step of the optimizer. Because I am using ScipyOptimizerInterface, however, I only get the final result of the optimization, not the intermediate values after each step.
Question: How can I gradually change p over time? Of course, I still want the optimization to run efficiently.
Motivation
In my actual use case, I want the final result of my optimization to satisfy some non-linear constraints. To ensure this, I introduce a penalty for violations of those constraints, and weight this penalty with p. By increasing p over time, I allow some violations initially, but ensure that in the end, the constraints are satisfied.
Minimal example
import tensorflow as tf
from tensorflow.contrib.opt import ScipyOptimizerInterface
# setup variables
x=tf.get_variable("x",initializer=[1.0])
p=tf.placeholder(dtype=tf.float32)
val=10.0
# setup optimization
f=p*x**2+x
optimizer = ScipyOptimizerInterface(f, options={'maxiter': 100})
# run
with tf.Session() as session:
init = tf.global_variables_initializer()
session.run(init, feed_dict={p:val})
optimizer.minimize(session, feed_dict={p:val})
ret=session.run(x)
print(ret)
If it matters: My tensorflow version is 1.4.1.
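One rough way to approximate the continuation idea from the motivation above (a sketch, untested on TF 1.4, and it changes p per solve rather than per optimizer step) is to reuse the same ScipyOptimizerInterface and call minimize repeatedly with an increasing penalty weight:
# run several shorter solves, feeding a larger penalty weight each time
penalty_schedule = [1.0, 10.0, 100.0]
with tf.Session() as session:
    session.run(tf.global_variables_initializer(), feed_dict={p: penalty_schedule[0]})
    for val in penalty_schedule:
        optimizer.minimize(session, feed_dict={p: val})
    print(session.run(x))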
I could not find documentation regarding the aggregation_method argument of the TensorFlow optimizer.
I have the following line of code
train_op = optimizer.minimize(loss, global_step=batch, aggregation_method = tf.AggregationMethod.EXPERIMENTAL_TREE)
However, this property can be changed to be
tf.AggregationMethod.EXPERIMENTAL_ACCUMULATE_N
Does anyone know what it does? (I just know that when I used the default with an LSTM, it did not have enough memory to run.)
For AggregationMethod, EXPERIMENTAL_ACCUMULATE_N is to ADD_N (DEFAULT) as accumulate_n is to add_n. add_n waits for all of its arguments to be available before doing any summation, while accumulate_n sums as soon as its inputs are available. This may save memory, but has some picky shape information limitations because its current implementation requires creating a temporary variable.
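A minimal illustration of the two ops this maps to (TF 1.x; the shapes here are arbitrary):
import tensorflow as tf

a = tf.ones([2, 3])
b = 2 * tf.ones([2, 3])
c = 3 * tf.ones([2, 3])
sum_add_n = tf.add_n([a, b, c])                       # waits for all inputs, then sums
sum_acc_n = tf.accumulate_n([a, b, c], shape=[2, 3])  # can add terms as they arrive; shape must be known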
There is a bit of documentation in the comments:
# The benefit of using AccumulateN is that its inputs can be combined
# in any order and this can allow the expression to be evaluated with
# a smaller memory footprint. When used with gpu_allocator_retry,
# it is possible to compute a sum of terms which are much larger than
# total GPU memory.
# AccumulateN can currently only be used if we know the shape for
# an accumulator variable. If this is not known, or if we only have
# 2 grads then we fall through to the "tree" case below.
Suppose we have weights
x = tf.Variable(np.random.random((5,10)))
cost = ...
And we use the GD optimizer:
upds = tf.train.GradientDescentOptimizer(lr).minimize(cost)
session.run(upds)
How can we implement, for example, non-negativity on the weights?
I tried clipping them:
upds = tf.train.GradientDescentOptimizer(lr).minimize(cost)
session.run(upds)
session.run(tf.assign(x, tf.clip_by_value(x, 0, np.infty)))
But this slows down my training by a factor of 50.
Does anybody know a good way to implement such constraints on the weights in TensorFlow?
P.S.: in the equivalent Theano algorithm, I had
T.clip(x, 0, np.infty)
and it ran smoothly.
You can take the Lagrangian approach and simply add a penalty for properties of the variable that you don't want.
e.g., to encourage theta to be non-negative, you could add the following to the optimizer's objective function:
added_loss = -tf.minimum( tf.reduce_min(theta),0)
If any theta are negative, then added_loss will be positive; otherwise it is zero. Scaling that to a meaningful value is left as an exercise to the reader. Scaling too little will not exert enough pressure; too much may make things unstable.
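For concreteness, here is one way this could be wired into the total loss (penalty_weight, base_loss, theta, and lr are placeholders you would define and tune yourself):
penalty_weight = 10.0  # illustrative; tune per problem
added_loss = -tf.minimum(tf.reduce_min(theta), 0)
total_loss = base_loss + penalty_weight * added_loss
train_op = tf.train.GradientDescentOptimizer(lr).minimize(total_loss)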
As of TensorFlow 1.4, there is a new argument to tf.get_variable that allows to pass a constraint function that is applied after the update of the optimizer. Here is an example that enforces a non-negativity constraint:
with tf.variable_scope("MyScope"):
v1 = tf.get_variable("v1", …, constraint=lambda x: tf.clip_by_value(x, 0, np.infty))
constraint: An optional projection function to be applied to the variable after being updated by an Optimizer (e.g. used to implement norm constraints or value constraints for layer weights). The function must take as input the unprojected Tensor representing the value of the variable and return the Tensor for the projected value (which must have the same shape). Constraints are not safe to use when doing asynchronous distributed training.
By running
sess.run(tf.assign(x, tf.clip_by_value(x, 0, np.infty)))
you keep adding new nodes to the graph, making it slower and slower.
Actually you may just define a clip_op when building the graph and run it each time after updating the weights:
# build the graph
x = tf.Variable(np.random.random((5,10)))
loss = ...
train_op = tf.train.GradientDescentOptimizer(lr).minimize(loss)
clip_op = tf.assign(x, tf.clip_by_value(x, 0, np.infty))
# train
sess.run(train_op)
sess.run(clip_op)
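A small variant, if you prefer a single session.run() call per step, is to chain the clip onto the training op (a sketch, reusing x, train_op, and np from above):
# run the update first, then project back onto the feasible set, in one call
with tf.control_dependencies([train_op]):
    train_and_clip_op = tf.assign(x, tf.clip_by_value(x, 0, np.infty))

sess.run(train_and_clip_op)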
I recently had this problem as well. I discovered that you can import keras, which has nice weight constraint functions, and use them directly as the kernel constraint in TensorFlow. Here is an example of my code. You can do similar things with the kernel regularizer.
from keras.constraints import non_neg
conv1 = tf.layers.conv2d(
inputs=features['x'],
filters=32,
kernel_size=[5,5],
strides = 2,
padding='valid',
activation=tf.nn.relu,
kernel_regularizer=None,
kernel_constraint=non_neg(),
use_bias=False)
There is a practical solution: you can write your own cost function to put a high cost on negative weights. I did this in a matrix factorization model in TensorFlow with Python, and it worked well enough. It's obvious, but nobody else mentioned it, so here you go. EDIT: I just saw that Mark Borderding also gave another loss- and cost-based solution implementation before I did.
And if "the best way" is wanted, as the OP asked, what then? Well "best" might actually be application-specific, in which case you'd need to try a few different ways with your dataset and consider your application requirements.
Here is working code for increasing the cost for unwanted negative solution variables:
cost = tf.reduce_sum(keep_loss) + Lambda * reg # Cost = sum of losses for training set, except missing data.
if prefer_nonneg: # Optionally increase cost for negative values in rhat, if you want that.
negs_indices = tf.where(rhat < tf.constant(0.0))
neg_vals = tf.gather_nd(rhat, negs_indices)
cost += 2. * tf.reduce_sum(tf.abs(neg_vals)) # 2 is a magic number (empirical parameter)
You are free to use my code but please give me some credit if you choose to use it. Give a link to this answer on stackoverflow.com please.
This design would be considered a soft constraint, because you can still get negative weights, if you let it, depending on your cost definition.
It seems that constraint= is also available in TF v1.4+ as a parameter to tf.get_variable(), where you can pass a function like tf.clip_by_value. This seems like another soft constraint, not a hard constraint, in my opinion, because it depends on your function to work well or not. It also might be slow, as the other answerer tried the same clipping function and reported that it slowed training, although they didn't use the constraint= parameter to do it. I don't see any reason why one would be faster than the other, since they both use the same clipping approach. So if you use the constraint= parameter, you should expect slowdowns in the context of the original poster's application.
It would be nicer if TF also provided true hard constraints in the API and let TF figure out both how to implement them and how to make them efficient on the back end. I have seen this done in linear programming solvers for a long time: the application declares a constraint, and the back end makes it happen.