Based on the example quoted on TensorFlow's website here: https://www.tensorflow.org/api_docs/python/tf/custom_gradient
@tf.custom_gradient
def op_with_fused_backprop(x):
    y, x_grad = fused_op(x)
    def first_order_gradient(dy):
        @tf.custom_gradient
        def first_order_custom(unused_x):
            def second_order_and_transpose(ddy):
                return second_order_for_x(...), gradient_wrt_dy(...)
            return x_grad, second_order_and_transpose
        return dy * first_order_custom(x)
    return y, first_order_gradient
There is a lack of details on why second_order_and_transpose(ddy) returns two objects. Based on the documentation of tf.custom_gradient, the grad_fn (here second_order_and_transpose()) should return a list of Tensors that are the derivatives of dy w.r.t. unused_x. It is also not clear why they named it unused_x. Does anyone have any idea about this example, or about creating custom gradients for higher-order derivatives in general?
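For reference, the plain first-order example from the same documentation page (log1pexp) returns exactly one gradient per input from its grad_fn, which is what makes the higher-order version above confusing:

import tensorflow as tf

@tf.custom_gradient
def log1pexp(x):
    e = tf.exp(x)
    def grad(dy):
        # exactly one gradient tensor, one per input of log1pexp
        return dy * (1 - 1 / (1 + e))
    return tf.math.log(1 + e), grad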
There is a lack of details on why second_order_and_transpose(ddy) returns two objects.
Based on playing with some examples, I believe you are correct. The official doc is somewhat ambiguous (or incorrect). second_order_and_transpose(ddy) should return only one object, which is the calculated second-order gradient.
It is also not clear why they named it unused_x.
That is the tricky part. The name explains itself: you are never going to use unused_x. The goal here is to wrap your second-order calculation in a function called first_order_custom, and to return the gradient of x already computed by fused_op (x_grad) as the return value instead of anything derived from unused_x.
To make this clearer, here is an example extended from the official document that defines a second-order gradient for log1pexp:
NOTE: The real second-order gradient is not numerically stable, so just to make our life easier let's use (1 - tf.exp(x)) in its place.
@tf.custom_gradient
def log1pexp2(x):
    e = tf.exp(x)
    y = tf.math.log(1 + e)
    x_grad = 1 - 1 / (1 + e)
    def first_order_gradient(dy):
        @tf.custom_gradient
        def first_order_custom(unused_x):
            def second_order_gradient(ddy):
                # Let's define the second-order gradient to be (1 - e)
                return ddy * (1 - e)
            return x_grad, second_order_gradient
        return dy * first_order_custom(x)
    return y, first_order_gradient
To test the script, simply run:
import tensorflow as tf

@tf.custom_gradient
def log1pexp2(x):
    e = tf.exp(x)
    y = tf.math.log(1 + e)
    x_grad = 1 - 1 / (1 + e)
    def first_order_gradient(dy):
        @tf.custom_gradient
        def first_order_custom(unused_x):
            def second_order_gradient(ddy):
                # Let's define the second-order gradient to be (1 - e)
                return ddy * (1 - e)
            return x_grad, second_order_gradient
        return dy * first_order_custom(x)
    return y, first_order_gradient

x1 = tf.constant(1.)
y1 = log1pexp2(x1)
dy1 = tf.gradients(y1, x1)
ddy1 = tf.gradients(dy1, x1)

x2 = tf.constant(100.)
y2 = log1pexp2(x2)
dy2 = tf.gradients(y2, x2)
ddy2 = tf.gradients(dy2, x2)

with tf.Session() as sess:
    print('x=1, dy1:', dy1[0].eval(session=sess))
    print('x=1, ddy1:', ddy1[0].eval(session=sess))
    print('x=100, dy2:', dy2[0].eval(session=sess))
    print('x=100, ddy2:', ddy2[0].eval(session=sess))
Result:
x=1, dy1: 0.7310586
x=1, ddy1: -1.7182817
x=100, dy2: 1.0
x=100, ddy2: -inf
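If you are on TF 2.x, where tf.Session is gone, the same check can be done with nested GradientTapes. This is just a sketch assuming eager execution and the log1pexp2 defined above:

import tensorflow as tf

x = tf.constant(1.)
with tf.GradientTape() as outer_tape:
    outer_tape.watch(x)
    with tf.GradientTape() as inner_tape:
        inner_tape.watch(x)
        y = log1pexp2(x)
    dy = inner_tape.gradient(y, x)      # first-order gradient
ddy = outer_tape.gradient(dy, x)        # triggers second_order_gradient
print(dy.numpy(), ddy.numpy())          # expect roughly 0.7310586 and -1.7182817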
I am working on a project where I want to use Nesterov's accelerated gradient method on the Ackley function below, starting from the initial point (25, 20), and get within (2e-4, 5e-4) of the global minimizer and global minimum.
import numpy as np
from scipy.optimize import OptimizeResult

def nag(func, x, lr, num_iters, jac, tol, callback, gamma=0.9, *args, **kwargs):
    vals = [func(x)]
    opt_res = OptimizeResult()
    update = np.zeros(x.size)
    for i in range(1, num_iters + 1):
        grad = jac(x - gamma * update)
        prev_x = x
        update = gamma * update + lr(i) * grad
        x = x - update
        vals.append(func(x))
        callback(x)
        if np.linalg.norm(x - prev_x) <= tol:
            break
    opt_res.x = x
    opt_res.nit = i
    return opt_res, np.array(vals, dtype=object)
Using gamma = 0.2, num_iters = 5000, and a customized learning rate function
def lr_(t):
    if t < 5:
        return 1e-4
    elif t < 10:
        return 1e-2
    else:
        return 0.1
I was able to get to around (0.0008, 0.0022), but couldn't get any closer to the desired global minimizer and minimum despite playing around with different values for a long time. Does anyone know what I could try to get closer to the desired result?
Or are there other optimization methods that would work better than NAG? I heard Adam or Adagrad should work, but I haven't had much success with them.
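For anyone trying to reproduce this: the post refers to the Ackley function, but its definition is not shown above, so here is a sketch using the standard 2-D Ackley function and a simple central-difference jac that nag can consume. The exact constants, the tol value, and the no-op callback are my assumptions:

import numpy as np

def ackley(x):
    # standard 2-D Ackley function: global minimum f(0, 0) = 0
    a, b, c = 20.0, 0.2, 2 * np.pi
    return (-a * np.exp(-b * np.sqrt(0.5 * np.sum(x ** 2)))
            - np.exp(0.5 * np.sum(np.cos(c * x))) + a + np.e)

def ackley_grad(x, h=1e-6):
    # central-difference gradient, one coordinate at a time
    g = np.zeros_like(x, dtype=float)
    for i in range(x.size):
        e = np.zeros_like(x, dtype=float)
        e[i] = h
        g[i] = (ackley(x + e) - ackley(x - e)) / (2 * h)
    return g

res, vals = nag(ackley, np.array([25.0, 20.0]), lr_, 5000,
                ackley_grad, 1e-10, lambda x: None, gamma=0.2)
print(res.x, ackley(res.x))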
I am trying to implement a custom loss function in Tensorflow 2.4 using the Keras backend.
The loss function is a ranking loss; I found the following paper with a log-likelihood-style loss: Chen et al., Single-Image Depth Perception in the Wild.
Similarly, I wanted to sample some (in this case 50) points from an image to compare the relative order between ground-truth and predicted depth maps, using the NYU-Depth dataset. Being a fan of Numpy, I started working with that but ran into the following exception:
ValueError: No gradients provided for any variable: [...]
I have learned that this is caused by the arguments not holding actual data when the loss function is called; instead the function is compiled and used later. So while I know the dimensions of my tensors (4, 480, 640, 1), I cannot work with the data as I wanted and have to use the keras.backend functions throughout, so that in the end (if I understood correctly) there is a path between the input tensors from the TF graph and the output tensor, which has to provide a gradient.
So my question now is: Is this a feasible loss function within keras?
I have already tried a few ideas and different approaches with different variations of my original code, which was something like:
def ranking_loss_function(y_true, y_pred):
    # Chen et al. loss
    y_true_np = K.eval(y_true)
    y_pred_np = K.eval(y_pred)
    if y_true_np.shape[0] != None:
        num_sample_points = 50
        total_samples = num_sample_points ** 2
        err_list = [0 for x in range(y_true_np.shape[0])]
        for i in range(y_true_np.shape[0]):
            sample_points = create_random_samples(y_true, y_pred, num_sample_points)
            for x1, y1 in sample_points:
                for x2, y2 in sample_points:
                    if y_true[i][x1][y1] > y_true[i][x2][y2]:
                        # image_relation_true = 1
                        err_list[i] += np.log(1 + np.exp(-1 * y_pred[i][x1][y1] + y_pred[i][x2][y2]))
                    elif y_true[i][x1][y1] < y_true[i][x2][y2]:
                        # image_relation_true = -1
                        err_list[i] += np.log(1 + np.exp(y_pred[i][x1][y1] - y_pred[i][x2][y2]))
                    else:
                        # image_relation_true = 0
                        err_list[i] += np.square(y_pred[i][x1][y1] - y_pred[i][x2][y2])
        err_list = np.divide(err_list, total_samples)
        return K.constant(err_list)
As you can probably tell, the main idea was to first create the sample points and then based on the existing relation between them in y_true/y_pred continue with the corresponding computation from the cited paper.
Can anyone help me and provide some more helpful information or tips on how to correctly implement this loss using keras.backend functions? Trying to include the ordinal relation information really confused me compared to standard regression losses.
EDIT: Just in case this causes confusion: create_random_samples() just creates 50 random sample points (x, y) coordinate pairs based on the shape[1] and shape[2] of y_true (image width and height)
EDIT(2): After finding this variation on GitHub, I tried a version using only TF functions to retrieve data from the tensors and compute the output. The adjusted and probably more correct version still throws the same exception though:
def ranking_loss_function(y_true, y_pred):
    # In the Wild ranking loss
    y_true_np = K.eval(y_true)
    y_pred_np = K.eval(y_pred)
    if y_true_np.shape[0] != None:
        num_sample_points = 50
        total_samples = num_sample_points ** 2

        bs = y_true_np.shape[0]
        w = y_true_np.shape[1]
        h = y_true_np.shape[2]
        total_samples = total_samples * bs
        num_pairs = tf.constant([total_samples], dtype=tf.float32)

        output = tf.Variable(0.0)
        for i in range(bs):
            sample_points = create_random_samples(y_true, y_pred, num_sample_points)
            for x1, y1 in sample_points:
                for x2, y2 in sample_points:
                    y_true_sq = tf.squeeze(y_true)
                    y_pred_sq = tf.squeeze(y_pred)
                    d1_t = tf.slice(y_true_sq, [i, x1, y1], [1, 1, 1])
                    d2_t = tf.slice(y_true_sq, [i, x2, y2], [1, 1, 1])
                    d1_p = tf.slice(y_pred_sq, [i, x1, y1], [1, 1, 1])
                    d2_p = tf.slice(y_pred_sq, [i, x2, y2], [1, 1, 1])
                    d1_t_sq = tf.squeeze(d1_t)
                    d2_t_sq = tf.squeeze(d2_t)
                    d1_p_sq = tf.squeeze(d1_p)
                    d2_p_sq = tf.squeeze(d2_p)
                    if d1_t_sq > d2_t_sq:
                        # --> Image relation = 1
                        output.assign_add(tf.math.log(1 + tf.math.exp(-1 * d1_p_sq + d2_p_sq)))
                    elif d1_t_sq < d2_t_sq:
                        # --> Image relation = -1
                        output.assign_add(tf.math.log(1 + tf.math.exp(d1_p_sq - d2_p_sq)))
                    else:
                        output.assign_add(tf.math.square(d1_p_sq - d2_p_sq))
        return output / num_pairs
EDIT(3): This is the code for create_random_samples():
(FYI: Because getting the shape from y_true was awkward in this case, I hard-coded it here, since I know it for the dataset I am currently using.)
def create_random_samples(y_true, y_pred, num_points=50):
    y_true_shape = (4, 480, 640, 1)
    y_pred_shape = (4, 480, 640, 1)
    if y_true_shape[0] != None:
        num_samples = num_points
        population = [(x, y) for x in range(y_true_shape[1]) for y in range(y_true_shape[2])]
        sample_points = random.sample(population, num_samples)
        return sample_points
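For what it's worth, here is a minimal sketch of how the same pairwise idea could be written with only differentiable TF ops (no K.eval, no Python if on tensor values), so that a gradient path to y_pred exists. The name ranking_loss_tf and the use of tf.gather_nd / tf.where are my assumptions, not code from the paper:

import tensorflow as tf

def ranking_loss_tf(y_true, y_pred, num_points=50):
    # Sample num_points random (row, col) coordinates, shared across the batch.
    shape = tf.shape(y_true)                                   # (B, H, W, 1)
    rows = tf.random.uniform((num_points,), 0, shape[1], dtype=tf.int32)
    cols = tf.random.uniform((num_points,), 0, shape[2], dtype=tf.int32)
    coords = tf.stack([rows, cols], axis=1)                    # (P, 2)
    coords = tf.tile(coords[tf.newaxis], tf.stack([shape[0], 1, 1]))  # (B, P, 2)

    # Gather the sampled depths for every image in the batch: (B, P).
    t = tf.gather_nd(tf.squeeze(y_true, -1), coords, batch_dims=1)
    p = tf.gather_nd(tf.squeeze(y_pred, -1), coords, batch_dims=1)

    # All pairwise differences within the sample: (B, P, P).
    dt = t[:, :, None] - t[:, None, :]
    dp = p[:, :, None] - p[:, None, :]

    # The ground-truth relation (1 / -1 / 0) selects the per-pair term:
    # softplus(-dp) = log(1 + exp(-(p1 - p2))), matching the loop above.
    loss = tf.where(dt > 0, tf.math.softplus(-dp),
           tf.where(dt < 0, tf.math.softplus(dp), tf.square(dp)))
    return tf.reduce_mean(loss)

Because every op here is a TF op, Keras can backpropagate through it; the random coordinates simply mean each batch evaluates the loss on a different subset of pairs.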
I would like to implement the following custom loss function, with argument x as the output of the last layer. Until now I implemented this function as a Lambda layer coupled with the keras mae loss, but I do not want to do that anymore.
def GMM_UNC2(self, x):
    tmp = self.create_mr(x)  # get mr series
    mr = k.sum(tmp, axis=1)  # sum over time
    tmp = k.square((1 / self.T_i) * mr)
    tmp = k.dot(tmp, k.transpose(self.T_i))
    tmp = (1 / (self.T * self.N)) * tmp
    f = self.create_factor(x)  # get factor
    std = k.std(f)
    mu = k.mean(f)
    tmp = tmp + std / mu

    def loss(y_true, y_pred=tmp):
        return k.abs(y_true - y_pred)

    return loss
self.y_true = np.zeros((1,1))
self.sdf_net = Model(inputs=[self.in_ma, self.in_mi, self.in_re, self.in_si], outputs=w)
self.sdf_net.compile(optimizer=self.optimizer, loss=self.GMM_UNC2(w))
self.sdf_net.fit([self.macro, self.micro, self.R, self.R_sign], self.y_true, epochs=epochs, verbose=1)
The code runs, but it doesn't actually use tmp as input to the loss (I multiplied it by some number, and the loss stays the same).
What am I doing wrong?
It is not completely clear from your question whether you want to apply the GMM_UNC2 function to the predictions, or whether it is applied only once to build the loss. If it is the first option, then all that code should go inside the loss and be applied to y_pred, like:
def GMM_UNC2(self):
    def loss(y_true, y_pred):
        tmp = self.create_mr(y_pred)  # get mr series
        mr = k.sum(tmp, axis=1)       # sum over time
        tmp = k.square((1 / self.T_i) * mr)
        tmp = k.dot(tmp, k.transpose(self.T_i))
        tmp = (1 / (self.T * self.N)) * tmp
        f = self.create_factor(y_pred)  # get factor
        std = k.std(f)
        mu = k.mean(f)
        tmp = tmp + std / mu
        # compare y_true with the transformed predictions
        return k.abs(y_true - tmp)
    return loss
If it is the second option, then note that, in general, passing objects as default values in a Python function definition is not a good idea, because the default is bound once, at function definition time. Also, you are assuming that the second argument to the loss is named y_pred, but when Keras calls the loss it passes the prediction positionally, not by name, so the default is never used. In summary, you could try an explicit check inside the loss, like:
def loss(y_true, y_pred):
    if y_pred is None:
        y_pred = tmp
    return k.abs(y_true - y_pred)
If what you want is to ignore the predictions and always use tmp, then you can ignore the y_pred argument of the loss and use only tmp, like:
def loss(y_true, _):
    return k.abs(y_true - tmp)
I am trying to train the MNIST data (which I downloaded from Kaggle) with simple multi-class logistic regression, but the scipy.optimize functions hang.
Here's the code:
import csv
from math import exp
from numpy import *
from scipy.optimize import fmin, fmin_cg, fmin_powell, fmin_bfgs

# Prepare the data

def getIiter(ifname):
    """
    Get the iterator from a csv file with filename ifname
    """
    ifile = open(ifname, 'r')
    iiter = csv.reader(ifile)
    iiter.__next__()
    return iiter

def parseRow(s):
    y = [int(x) for x in s]
    lab = y[0]
    z = y[1:]
    return (lab, z)

def getAllRows(ifname):
    iiter = getIiter(ifname)
    x = []
    l = []
    for row in iiter:
        lab, z = parseRow(row)
        x.append(z)
        l.append(lab)
    return x, l

def cutData(x, y):
    """
    70% training
    30% testing
    """
    m = len(x)
    t = int(m * .7)
    return [(x[:t], y[:t]), (x[t:], y[t:])]

def num2IndMat(l):
    t = array(l)
    tt = [vectorize(int)((t == i)) for i in range(10)]
    return array(tt).T

def readData(ifname):
    x, l = getAllRows(ifname)
    t = [[1] + y for y in x]
    return array(t), num2IndMat(l)

# Calculate the cost function

def sigmoid(x):
    return 1 / (1 + exp(-x))

vSigmoid = vectorize(sigmoid)
vLog = vectorize(log)

def costFunction(theta, x, y):
    sigxt = vSigmoid(dot(x, theta))
    cm = (- y * vLog(sigxt) - (1 - y) * vLog(1 - sigxt)) / m / N
    return sum(cm)

def unflatten(flatTheta):
    return [flatTheta[i * N : (i + 1) * N] for i in range(n + 1)]

def costFunctionFlatTheta(flatTheta):
    return costFunction(unflatten(flatTheta), trainX, trainY)

def costFunctionFlatTheta1(flatTheta):
    return costFunction(flatTheta.reshape(785, 10), trainX, trainY)

x, y = readData('train.csv')
[(trainX, trainY), (testX, testY)] = cutData(x, y)

m = len(trainX)
n = len(trainX[0]) - 1
N = len(trainY[0])

initTheta = zeros(((n + 1), N))
flatInitTheta = ndarray.flatten(initTheta)
flatInitTheta1 = initTheta.reshape(1, -1)
In the last two lines we flatten initTheta because the fmin{,_cg,_bfgs,_powell} functions seem to take only vectors as the initial value argument x0. I also flattened initTheta using reshape in the hope that this answer can be of help.
There is no problem computing the cost function which takes up less than 2 seconds on my computer:
print(costFunctionFlatTheta(flatInitTheta), costFunctionFlatTheta1(flatInitTheta1))
# 0.69314718056 0.69314718056
But all the fmin functions hang, even if I set maxiter=0.
e.g.
newFlatTheta = fmin(costFunctionFlatTheta, flatInitTheta, maxiter=0)
or
newFlatTheta1 = fmin(costFunctionFlatTheta1, flatInitTheta1, maxiter=0)
When I interrupt the program, it seems to me it all hangs at lines in optimize.py calling the cost functions, lines like this:
return function(*(wrapper_args + args))
For example, if I use fmin_cg, this would be line 292 in optimize.py (Version 0.5).
How do I solve this problem?
OK I found a way to stop fmin_cg from hanging.
Basically I just need to write a function that computes the gradient of the cost function, and pass it to the fprime parameter of fmin_cg.
def gradient(theta, x, y):
    return dot(x.T, vSigmoid(dot(x, theta)) - y) / m / N

def gradientFlatTheta(flatTheta):
    return ndarray.flatten(gradient(flatTheta.reshape(785, 10), trainX, trainY))
Then
newFlatTheta = fmin_cg(costFunctionFlatTheta, flatInitTheta, fprime=gradientFlatTheta, maxiter=0)
terminates within seconds, and by setting maxiter to a higher number (say 100) one can train the model within a reasonable amount of time.
The documentation of fmin_cg says the gradient will be numerically computed if no fprime is given, which is what I suspect caused the hanging.
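A rough back-of-the-envelope estimate (my numbers, based on the ~2 s per cost-function call measured above) shows why that looks like a hang rather than mere slowness:

# A finite-difference gradient needs roughly one cost evaluation per parameter.
n_params = 785 * 10           # shape of theta
secs_per_eval = 2             # measured above for one cost-function call
hours = n_params * secs_per_eval / 3600
print(hours)                  # ~4.4 hours for a single numerical gradient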
Thanks to this notebook by zgo2016@Kaggle, which helped me find the solution.
I have a loss function I would like to try and minimize:
import numpy as np

def lossfunction(X, b, lambs):
    B = b.reshape(X.shape)
    penalty = np.linalg.norm(B, axis=1) ** 0.5
    return np.linalg.norm(np.dot(X, B) - X) + lambs * penalty.sum()
Gradient descent, or similar methods, might be useful. I can't calculate the gradient of this function analytically, so I am wondering how I can numerically calculate the gradient for this loss function in order to implement a descent method.
Numpy has a gradient function, but it requires me to pass a scalar field at predetermined points.
You could try scipy.optimize.minimize
For your case a sample call would be:
from scipy.optimize import minimize

# x0 is the initial guess for the variable being optimised (a flat 1-D array);
# the remaining loss arguments are passed through args.
result = minimize(lossfunction, x0, args=(b, lambs), method='Nelder-Mead')
You could estimate the derivative numerically by a central difference:
def derivative(fun, X, b, lambs, h):
    return (fun(X + 0.5 * h, b, lambs) - fun(X - 0.5 * h, b, lambs)) / h
And use it like this:
# assign values to X, b, lambs
# set the value of h
h = 0.001
print(derivative(lossfunction, X, b, lambs, h))
The code above is valid for dim X = 1; some modifications are needed to account for a multidimensional vector X:
def gradient(fun, X, b, lambs, h):
    res = []
    for i in range(len(X)):
        t1 = list(X)
        t1[i] = t1[i] + 0.5 * h
        t2 = list(X)
        t2[i] = t2[i] - 0.5 * h
        res = res + [(fun(t1, b, lambs) - fun(t2, b, lambs)) / h]
    return res
Forgive the naivety of the code, I barely know how to write Python :-)
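As an aside, scipy ships a helper that does essentially the same per-coordinate finite-difference loop (using forward rather than central differences), which may save writing it by hand. The flattening below is my assumption about how to adapt it to the matrix-valued X:

import numpy as np
from scipy.optimize import approx_fprime

# approx_fprime perturbs one coordinate of a flat 1-D vector at a time,
# so flatten X on the way in and reshape it inside the wrapper.
def loss_flat(x_flat, X_shape, b, lambs):
    return lossfunction(x_flat.reshape(X_shape), b, lambs)

# grad has the same length as X.ravel():
# grad = approx_fprime(X.ravel(), loss_flat, 1e-6, X.shape, b, lambs)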