Keras custom loss function with binary outcome (round) with TensorFlow backend

I'm currently trying to implement a custom loss function (precision) with a binary outcome, but the TensorFlow backend refuses to use the round function, which is needed to generate a '0' or '1'.
As far as I have investigated, this is because TensorFlow defines the gradient of round as None, and the loss function can't return None.
I have currently implemented this custom loss in the R Keras interface to produce values as close as possible to '0' or '1'.
precision_loss <- function(y_true, y_pred){
  y_pred_pos = K$clip(y_pred, 0, 1)
  # Custom "sigmoid" to push values towards '0' or '1'
  y_pred_pos = K$maximum(0, K$minimum(1, (y_pred_pos + 0.0625) / 0.125))
  y_pred_neg = 1 - y_pred_pos
  y_pos = K$clip(y_true, 0, 1)
  # Custom "sigmoid" to push values towards '0' or '1'
  y_pos = K$maximum(0, K$minimum(1, (y_pos + 0.0625) / 0.125))
  y_neg = 1 - y_pos
  # Generate confusion matrix counts
  tp = K$sum(y_pos * y_pred_pos)
  tn = K$sum(y_neg * y_pred_neg)
  fp = K$sum(y_neg * y_pred_pos)
  fn = K$sum(y_pos * y_pred_neg)
  return(1 - (tp / (tp + fp + K$epsilon())))
}
Notice the "sigmoid" : K$maximum(0,K$minimum(1,(y_pos+0.0625)/0.125))
What I actually wanted to implement is a workaround for this:
precision_loss <- function(y_true, y_pred){
  y_pred_pos = K$round(K$clip(y_pred, 0, 1))
  y_pred_neg = 1 - y_pred_pos
  y_pos = K$round(K$clip(y_true, 0, 1))
  y_neg = 1 - y_pos
  # Generate confusion matrix counts
  tp = K$sum(K$clip(y_pos * y_pred_pos, 0, 1))
  tn = K$sum(K$clip(y_neg * y_pred_neg, 0, 1))
  fp = K$sum(K$clip(y_neg * y_pred_pos, 0, 1))
  fn = K$sum(K$clip(y_pos * y_pred_neg, 0, 1))
  return(1 - (tp / (tp + fp + K$epsilon())))
}
Does anyone have an alternative implementation that produces binary outcomes in the loss function without using round?
P.S.: In custom metric functions, round is allowed.

In order to build a binary loss function, it wouldn't be enough to just build the custom loss function itself. You would also have to pre-define the gradients.
Your high-dimensional loss function would be zero for some points and one for all others. For all non-continuous points in this space, it would be impossible to analytically compute a gradient (i.e. the concept of a gradient doesn't even exist for such points), so you would have to just define one. And for all the continuous points in this space (e.g. an open set in which all loss values are 1), the gradient would exist, but it would be zero, so you would also have to pre-define the gradient values, otherwise your weights wouldn't move at all.
That means either way you would have to define your own custom "gradient" computation function that replaces Keras' (i.e. TensorFlow's) automatic differentiation engine for that particular node in the graph (the loss function node).
You could certainly achieve this by modifying your local copy of Keras or TensorFlow, but nothing good can come from it.
Also, even if you managed to do this, consider this: If your loss function returns only 0 or 1, that means it can only distinguish between two states: The model's prediction is either 100% correct (0 loss) or it is not 100% correct (1 loss). The magnitude of the gradient would have to be the same for all non-100% cases. Is that a desirable property?
Your quasi-binary sigmoid solution has the same problem: The gradient will be almost zero almost everywhere, and in the few points where it won't be almost zero, it will be almost infinity. If you try to train a model with that loss function, it won't learn anything.

As you noticed, a custom loss function needs to be based on functions which have their gradients defined (in order to minimise the loss), which is not necessary for a simple metric. Some functions like "round" and "sign" are difficult to use in a loss function, since their gradients are either zero almost everywhere or infinite, which is not helpful for minimisation. That's probably why their gradients are not defined by default.
Then, you have two options:
Option 1: you keep the round function, but you need to add your own custom gradient for round to substitute for the undefined one in the backend (see the sketch after this list).
Option 2: you define another loss function without using round
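For what it's worth, here is a minimal sketch of what option 1 could look like in newer TensorFlow versions using tf.custom_gradient; the identity gradient below is an assumption (it effectively turns round into a straight-through estimator), not something prescribed by this answer:
import tensorflow as tf

# Round in the forward pass, but back-propagate as if the op were the identity.
@tf.custom_gradient
def round_with_surrogate_grad(x):
    def grad(dy):
        return dy  # assumed surrogate: treat d(round(x))/dx as 1
    return tf.round(x), grad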
You chose option 2, which is the best option I think. But your "sigmoid" is piecewise linear, so it is probably not a good approximation of the round function. You could use an actual sigmoid (slower because of the exponential), or you could obtain a similar result with a modified softsign:
max_gradient=100
K$maximum(0,K$minimum(1,0.5*(1+(max_gradient*y_pos)/(1+ max_gradient*abs(y_pos)))))
The max_gradient coefficient can be used to make the edge sharper around 0.5; it sets the maximum gradient at 0.5.
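For readers on the Python side, here is a minimal sketch of the same idea using a steep standard sigmoid centred at 0.5 instead of the piecewise-linear approximation (the question's code is R, but the backend calls map one-to-one; the steepness of 20 is an arbitrary illustration, not a value from this answer):
from tensorflow.keras import backend as K

def precision_loss(y_true, y_pred, steepness=20.0):
    # Smooth, differentiable surrogate for round: ~0 below 0.5, ~1 above.
    y_pred_pos = K.sigmoid(steepness * (K.clip(y_pred, 0, 1) - 0.5))
    y_pos = K.sigmoid(steepness * (K.clip(y_true, 0, 1) - 0.5))
    y_neg = 1 - y_pos
    tp = K.sum(y_pos * y_pred_pos)
    fp = K.sum(y_neg * y_pred_pos)
    return 1 - tp / (tp + fp + K.epsilon())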

Related

Is tf.fake_quant_with_min_max_vars a differentiable function?

Quantization schemes are generally non-differentiable because they pass through a thresholding function such as round or sign. This means that we cannot get the gradients of the trainable variables, because of the chain rule.
Instead, we can use a trick called the 'straight-through estimator', which enables us to back-propagate gradients to the individual trainable variables.
One such method is tf.fake_quant_with_min_max_vars. The advantages of this format are that it can represent arbitrary magnitudes of ranges, they don't have to be symmetrical, it can represent signed and unsigned values, and the linear spread makes multiplications straightforward (Blog, Paper).
So, my question is, can we differentiate the fake_quant function? And if so, does this function apply 'straight-through-estimator'?
I experimented with this a little using the following snippet:
import numpy as np
import tensorflow as tf

x = tf.cast(np.random.normal(0, 1, (10, 10)), tf.float32)
x_q = tf.fake_quant_with_min_max_vars(x, min=tf.reduce_min(x), max=tf.reduce_max(x), num_bits=3)
grad = tf.gradients(x_q, x)
In that case, almost every element of grad has the value 1 (i.e. a gradient of 1), which means the gradient is passed straight through.
However, a few samples sometimes have a gradient of 0, or some other constant such as 2, 3, 4...
Am I missing what's going on?
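For context, the 'straight-through estimator' mentioned above is commonly written with a stop-gradient trick; the sketch below illustrates the idea only and is not the internal implementation of tf.fake_quant_with_min_max_vars:
import tensorflow as tf

def ste_round(x):
    # Forward pass: round(x). Backward pass: gradient of the identity,
    # because the (round(x) - x) term is wrapped in stop_gradient.
    return x + tf.stop_gradient(tf.round(x) - x)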

Tensorflow: What exact formula is applied in `tf.nn.sparse_softmax_cross_entropy_with_logits`?

I tried to manually recompute the outputs of this function so I created a minimal example:
logits = tf.pack(np.array([[[[0,1,2]]]],dtype=np.float32)) # img of shape (1, 1, 1, 3)
labels = tf.pack(np.array([[[1]]],dtype=np.int32)) # gt of shape (1, 1, 1)
softmaxCrossEntropie = tf.nn.sparse_softmax_cross_entropy_with_logits(logits,labels)
softmaxCrossEntropie.eval() # --> output is [1.41]
Now according to my own calculation I only get [1.23]
When manually calculating, I'm simply applying the softmax
sigma(x_j) = exp(x_j) / sum_k exp(x_k)
and the cross-entropy
H(p, q) = -sum_x p(x) * log(q(x))
where q(x) = sigma(x_j) or (1 - sigma(x_j)), depending on whether j is the correct ground-truth class or not, and p(x) = labels, which are then one-hot encoded.
I'm not sure where the difference might originate from. I cannot really imagine that some epsilon causes such a big difference. Does someone know where I can lookup, which exact formula is used by tensorflow?
Is the source code of that exact part available?
I could only find nn_ops.py, but it only uses another function called gen_nn_ops._sparse_softmax_cross_entropy_with_logits which I couldn't find on github...
Well, usually p(x) in the cross-entropy equation is the true distribution, while q(x) is the distribution obtained from the softmax. So if p(x) is one-hot (and it is here, otherwise sparse cross-entropy could not be applied), the cross-entropy is just the negative log of the probability of the true category.
In your example, softmax(logits) is a vector with values [0.09003057, 0.24472847, 0.66524096], so the loss is -log(0.24472847) = 1.4076059, which is exactly what you got as output.
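A quick numpy check of those numbers (just re-deriving the arithmetic for logits [0, 1, 2] and true class 1):
import numpy as np

logits = np.array([0.0, 1.0, 2.0])
probs = np.exp(logits) / np.exp(logits).sum()  # [0.09003057, 0.24472847, 0.66524096]
loss = -np.log(probs[1])                       # 1.4076059
print(probs, loss)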

How does TensorFlow handle complex gradients?

Let z be a complex variable and C(z) its conjugate.
In complex analysis, the derivative of C(z) w.r.t. z does not exist. But in TensorFlow we can calculate dC(z)/dz, and the result is just 1.
Here is an example:
import numpy as np
import tensorflow as tf

x = tf.placeholder('complex64', (2, 2))
y = tf.reduce_sum(tf.conj(x))
z = tf.gradients(y, x)
sess = tf.Session()
X = np.random.rand(2, 2) + 1.j * np.random.rand(2, 2)
X = X.astype('complex64')
Z = sess.run(z, {x: X})[0]
The input X is
[[0.17014372+0.71475762j 0.57455420+0.00144318j]
[0.57871044+0.61303568j 0.48074263+0.7623235j ]]
and the result Z is
[[1.-0.j 1.-0.j]
[1.-0.j 1.-0.j]]
I don't understand why the gradient is set to 1.
And I want to know how TensorFlow handles complex gradients in general.
How?
The equation used by TensorFlow for the gradient is:
grad(f) = (df/dz + d(f*)/dz)*
where the '*' means conjugate.
When using the definition of the partial derivatives w.r.t. z and z*, this is Wirtinger calculus. Wirtinger calculus makes it possible to calculate the derivative with respect to a complex variable for non-holomorphic functions. The Wirtinger definition is:
d/dz = 1/2 (d/dx - i d/dy),   d/dz* = 1/2 (d/dx + i d/dy)
Why this definition?
When using, for example, Complex-Valued Neural Networks (CVNN), the gradients are taken of non-holomorphic, real-valued scalar functions of one or several complex variables; TensorFlow's definition of the gradient can then be written as:
grad(L) = 2 dL/dz*
(for a real-valued L, the two terms of the general formula coincide, giving twice the conjugate Wirtinger derivative).
This definition corresponds to the CVNN literature, for example chapter 4, section 4.3 of this book, or Amin et al. (among countless examples).
Bit late, but I came across this issue recently too.
The key point is that TensorFlow defines the "gradient" of a complex-valued function f(z) of a complex variable as "the gradient of the real map F: (x,y) -> Re(f(x+iy)), expressed as a complex number" (the gradient of that real map is a vector in R^2, so we can express it as a complex number in the obvious way).
Presumably the reason for that definition is that in TF one is usually concerned with gradients for the purpose of running gradient descent on a loss function, and in particular for identifying the direction of maximum increase/decrease of that loss function. Using the above definition of gradient means that a complex-valued function of complex variables can be used as a loss function in a standard gradient descent algorithm, and the result will be that the real part of the function gets minimised (which seems to me a somewhat reasonable interpretation of "optimise this complex-valued function").
Now, to your question, an equivalent way to write that definition of gradient is
gradient(f) := dF/dx + idF/dy = conj(df/dz + dconj(f)/dz)
(you can easily verify that using the definition of d/dz). That's how TensorFlow handles complex gradients. As for the case of f(z):=conj(z), we have df/dz=0 (as you mention) and dconj(f)/dz=1, giving gradient(f)=1.
I wrote up a longer explanation here, if you're interested: https://github.com/tensorflow/tensorflow/issues/3348#issuecomment-512101921
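A small numerical check of that definition, without TensorFlow: for f(z) = conj(z) the real map is F(x, y) = Re(x - iy) = x, so dF/dx + i dF/dy should come out as 1, matching the value observed in the question. The helper below is purely illustrative:
import numpy as np

def tf_style_gradient(f, z, eps=1e-6):
    # Gradient of F(x, y) = Re(f(x + iy)), expressed as a complex number.
    x, y = z.real, z.imag
    F = lambda a, b: np.real(f(a + 1j * b))
    dFdx = (F(x + eps, y) - F(x - eps, y)) / (2 * eps)
    dFdy = (F(x, y + eps) - F(x, y - eps)) / (2 * eps)
    return dFdx + 1j * dFdy

print(tf_style_gradient(np.conj, 0.3 + 0.7j))  # ~ (1+0j)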

Tensorflow: opt.compute_gradients() returns values different from the weight difference of opt.apply_gradients()

Question: What is the most efficient way to get the delta of my weights in a TensorFlow network?
Background: I've got the operators hooked up as follows (thanks to this SO question):
self.cost = `the rest of the network`
self.rmsprop = tf.train.RMSPropOptimizer(lr,rms_decay,0.0,rms_eps)
self.comp_grads = self.rmsprop.compute_gradients(self.cost)
self.grad_placeholder = [(tf.placeholder("float", shape=grad[1].get_shape(), name="grad_placeholder"), grad[1]) for grad in self.comp_grads]
self.apply_grads = self.rmsprop.apply_gradients(self.grad_placeholder)
Now, to feed in information, I run the following:
feed_dict = `training variables`
grad_vals = self.sess.run([grad[0] for grad in self.comp_grads], feed_dict=feed_dict)
feed_dict2 = `feed_dict plus gradient values added to self.grad_placeholder`
self.sess.run(self.apply_grads, feed_dict=feed_dict2)
The command run(self.apply_grads) will update the network weights, but when I compute the differences between the starting and ending weights (run(self.w1)), those numbers are different from what is stored in grad_vals[0]. I figure this is because the RMSPropOptimizer does more to the raw gradients, but I'm not sure what, or where to find out what it does.
So back to the question: how do I get the delta of my weights in the most efficient way? Am I stuck running self.w1.eval(sess) multiple times to get the weights and calculating the difference? Is there something I'm missing about the tf.train.RMSPropOptimizer function?
Thanks!
RMSprop does not simply subtract the gradient from the parameters; it uses a more complicated formula involving a combination of:
a momentum term, if the corresponding parameter is not 0
a gradient step, rescaled non-uniformly (on each coordinate) by the square root of the running average of the squared gradients.
For more information you can refer to these slides or this recent paper.
The delta is first computed in memory by tensorflow in the slot variable 'momentum' and then the variable is updated (see the C++ operator).
Thus, you should be able to access it and construct a delta node with delta_w1 = self.rmsprop.get_slot(self.w1, 'momentum'). (I have not tried it yet.)
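For reference, here is a rough numpy sketch of the update described above (following the kernel the answer links to; the names and defaults are illustrative). Note that the 'momentum' slot ends up holding exactly the delta that gets subtracted from the weights:
import numpy as np

def rmsprop_step(w, grad, ms, mom, lr, rms_decay=0.9, momentum=0.0, eps=1e-10):
    ms = rms_decay * ms + (1.0 - rms_decay) * grad ** 2    # running average of squared gradients
    mom = momentum * mom + lr * grad / np.sqrt(ms + eps)   # the delta (stored in the 'momentum' slot)
    return w - mom, ms, mom                                # updated weights and new state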
You can add the weights to the list of things to fetch on each run call. Then you can compute the deltas outside of TensorFlow, since you will have the iterates. This should be reasonably efficient, although it incurs an extra element-wise difference; to avoid that you would have to dig around in the guts of the optimizer, find where it stores the update before applying it, and fetch that each step. Fetching the weights on each call shouldn't cause wasteful extra evaluations of parts of the graph, at least.
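A conservative sketch of that approach, using the question's own names and simply snapshotting the weights around each update:
w_prev = self.sess.run(self.w1)                         # weights before the step
self.sess.run(self.apply_grads, feed_dict=feed_dict2)   # apply one update
w_curr = self.sess.run(self.w1)                         # weights after the step
delta_w1 = w_curr - w_prev                              # the applied delta, computed in numpy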
RMSProp does complicated scaling of the learning rate for each weight. Basically it divides the learning rate for a weight by a running average of the magnitudes of recent gradients of that weight.

What is the best way to implement weight constraints in TensorFlow?

Suppose we have weights
x = tf.Variable(np.random.random((5,10)))
cost = ...
And we use the GD optimizer:
upds = tf.train.GradientDescentOptimizer(lr).minimize(cost)
session.run(upds)
How can we implement for example non-negativity on weights?
I tried clipping them:
upds = tf.train.GradientDescentOptimizer(lr).minimize(cost)
session.run(upds)
session.run(tf.assign(x, tf.clip_by_value(x, 0, np.infty)))
But this slows down my training by a factor of 50.
Does anybody know a good way to implement such constraints on the weights in TensorFlow?
P.S.: in the equivalent Theano algorithm, I had
T.clip(x, 0, np.infty)
and it ran smoothly.
You can take the Lagrangian approach and simply add a penalty for features of the variable you don't want.
e.g. To encourage theta to be non-negative, you could add the following to the optimizer's objective function.
added_loss = -tf.minimum( tf.reduce_min(theta),0)
If any theta are negative, then added_loss will be positive; otherwise it is zero. Scaling it to a meaningful value is left as an exercise to the reader. Scaling too little will not exert enough pressure; too much may make things unstable.
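A minimal sketch of wiring that penalty into the objective; penalty_scale is an illustrative knob, not a value from this answer:
penalty_scale = 10.0  # illustrative; tune for your problem
added_loss = -tf.minimum(tf.reduce_min(theta), 0)
total_cost = cost + penalty_scale * added_loss
upds = tf.train.GradientDescentOptimizer(lr).minimize(total_cost)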
As of TensorFlow 1.4, there is a new argument to tf.get_variable that allows you to pass a constraint function which is applied after the optimizer's update. Here is an example that enforces a non-negativity constraint:
with tf.variable_scope("MyScope"):
v1 = tf.get_variable("v1", …, constraint=lambda x: tf.clip_by_value(x, 0, np.infty))
constraint: An optional projection function to be applied to the variable after being updated by an Optimizer (e.g. used to implement norm constraints or value constraints for layer weights). The function must take as input the unprojected Tensor representing the value of the variable and return the Tensor for the projected value (which must have the same shape). Constraints are not safe to use when doing asynchronous distributed training.
By running
sess.run(tf.assign(x, tf.clip_by_value(x, 0, np.infty)))
you are constantly adding new nodes to the graph, making it slower and slower.
Actually you may just define a clip_op when building the graph and run it each time after updating the weights:
# build the graph
x = tf.Variable(np.random.random((5,10)))
loss = ...
train_op = tf.train.GradientDescentOptimizer(lr).minimize(loss)
clip_op = tf.assign(x, tf.clip_by_value(x, 0, np.infty))
# train
sess.run(train_op)
sess.run(clip_op)
I recently had this problem as well. I discovered that you can import keras, which has nice weight-constraint functions, and use them directly via the kernel_constraint argument in TensorFlow. Here is an example of my code. You can do similar things with kernel_regularizer.
from keras.constraints import non_neg

conv1 = tf.layers.conv2d(
    inputs=features['x'],
    filters=32,
    kernel_size=[5, 5],
    strides=2,
    padding='valid',
    activation=tf.nn.relu,
    kernel_regularizer=None,
    kernel_constraint=non_neg(),
    use_bias=False)
There is a practical solution: you can write your cost function yourself to put a high cost onto negative weights. I did this in a matrix factorization model in TensorFlow with Python, and it worked well enough. Right? I mean, it's obvious. But nobody else mentioned it, so here you go. EDIT: I just saw that Mark Borderding also gave another loss- and cost-based solution implementation before I did.
And if "the best way" is wanted, as the OP asked, what then? Well "best" might actually be application-specific, in which case you'd need to try a few different ways with your dataset and consider your application requirements.
Here is working code for increasing the cost for unwanted negative solution variables:
cost = tf.reduce_sum(keep_loss) + Lambda * reg  # Cost = sum of losses for training set, except missing data.
if prefer_nonneg:  # Optionally increase cost for negative values in rhat, if you want that.
    negs_indices = tf.where(rhat < tf.constant(0.0))
    neg_vals = tf.gather_nd(rhat, negs_indices)
    cost += 2. * tf.reduce_sum(tf.abs(neg_vals))  # 2 is a magic number (empirical parameter)
You are free to use my code but please give me some credit if you choose to use it. Give a link to this answer on stackoverflow.com please.
This design would be considered a soft constraint, because you can still get negative weights, if you let it, depending on your cost definition.
It seems that constraint= is also available in TF v1.4+ as a parameter to tf.get_variable(), where you can pass a function like tf.clip_by_value. This seems like another soft constraint, not a hard constraint, in my opinion, because it depends on your function working well or not. It also might be slow, as the other answerer tried the same function and reported it was slow to converge, although they didn't use the constraint= parameter to do it. I don't see any reason why one would be faster than the other, since they both use the same clipping approach. So if you use the constraint= parameter, you should expect slow convergence in the context of the original poster's application.
It would be nicer if TF also provided true hard constraints in the API and let TF figure out both how to implement them and how to make them efficient on the back end. I mean, I have seen this done in linear programming solvers for a long time already: the application declares a constraint, and the back end makes it happen.