This layer is static: it acts as a pseudo-function. In the forward pass it does nothing (the identity function); in the backward pass, however, it multiplies the gradient by -1. There are plenty of implementations on GitHub, but they don't work with TF 2.0.
Here's one for reference.
import tensorflow as tf
from tensorflow.python.framework import ops

class FlipGradientBuilder(object):
    def __init__(self):
        self.num_calls = 0

    def __call__(self, x, l=1.0):
        # Register a fresh gradient function for every call
        grad_name = "FlipGradient%d" % self.num_calls

        @ops.RegisterGradient(grad_name)
        def _flip_gradients(op, grad):
            return [tf.negative(grad) * l]

        # Forward pass is the identity; the override swaps in the flipped gradient
        g = tf.get_default_graph()
        with g.gradient_override_map({"Identity": grad_name}):
            y = tf.identity(x)

        self.num_calls += 1
        return y

flip_gradient = FlipGradientBuilder()
Dummy op that reverses the gradients
This can be done using the decorator tf.custom_gradient, as described in this example:
@tf.custom_gradient
def grad_reverse(x):
    y = tf.identity(x)

    def custom_grad(dy):
        return -dy

    return y, custom_grad
Then you can use it as if it were a normal TensorFlow op, for example:
z = encoder(x)
r = grad_reverse(z)
y = decoder(r)
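To sanity-check the reversal, a quick test I'd run (the values here are just for illustration) is to compare the forward output and the gradient under tf.GradientTape:

x = tf.constant([1.0, 2.0, 3.0])
with tf.GradientTape() as tape:
    tape.watch(x)
    loss = tf.reduce_sum(grad_reverse(x))

# Forward pass is the identity, but the gradient comes back negated:
print(tape.gradient(loss, x))  # [-1. -1. -1.]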
Keras API?
A great convenience of TF 2.0 is its native support for the Keras API. You can define a custom GradReverse op and enjoy the convenience of Keras:
class GradReverse(tf.keras.layers.Layer):
    def __init__(self):
        super().__init__()

    def call(self, x):
        return grad_reverse(x)
Then you can use this layer like any other Keras layer, for example:
inp = tf.keras.layers.Input(shape=(...))
conv = tf.keras.layers.Conv2D(...)(inp)
cust = GradReverse()(conv)
flat = tf.keras.layers.Flatten()(cust)
fc = tf.keras.layers.Dense(num_classes)(flat)

model = tf.keras.models.Model(inputs=[inp], outputs=[fc])
model.compile(loss=..., optimizer=...)
model.fit(...)
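If you also want the scale factor l from FlipGradientBuilder (often called lambda in domain-adversarial training), one possible sketch is to close over it; this variant is my own assumption, not part of the answer above:

def make_grad_reverse(lam=1.0):
    @tf.custom_gradient
    def _grad_reverse(x):
        def custom_grad(dy):
            return -lam * dy  # negate and scale the incoming gradient
        return tf.identity(x), custom_grad
    return _grad_reverse

class ScaledGradReverse(tf.keras.layers.Layer):
    def __init__(self, lam=1.0):
        super().__init__()
        self._op = make_grad_reverse(lam)

    def call(self, x):
        return self._op(x)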
Related
This is a similar question to: How to create a keras layer with a custom gradient in TF2.0?
The difference is that I would like to introduce a learnable parameter into the custom layer that I am training.
Here's a toy example of my current approach:
# Method for calculating the custom gradient
@tf.custom_gradient
def scaler(x, s):
    def grad(upstream):
        dy_dx = s
        dy_ds = x
        return dy_dx, dy_ds
    return x * s, grad

# Keras layer with a trainable parameter
class TestLayer(tf.keras.layers.Layer):
    def build(self, input_shape):
        self.scale = self.add_weight("scale",
                                     shape=[1,],
                                     initializer=tf.keras.initializers.Constant(value=2.0),
                                     trainable=True)

    def call(self, inputs):
        return scaler(inputs, self.scale)

# Creates a Keras model that uses the layer
def Model():
    x_in = tf.keras.layers.Input(shape=(1,))
    x_out = TestLayer()(x_in)
    return tf.keras.Model(inputs=x_in, outputs=x_out, name="fp8_test")

# Create a toy dataset; we want to learn `scale` such that 5 = 2 * scale (i.e., `scale` should learn ~2.5)
def Dataset():
    inps = tf.ones(shape=(10**5,)) * 2      # inputs
    expected = tf.ones(shape=(10**5,)) * 5  # targets
    data_in = tf.data.Dataset.from_tensors(inps)
    data_exp = tf.data.Dataset.from_tensors(expected)
    dataset = tf.data.Dataset.zip((data_in, data_exp))
    return dataset

model = Model()
model.summary()

dataset = Dataset()

# Use `MSE` loss and `SGD` optimizer
model.compile(
    loss=tf.keras.losses.MSE,
    optimizer=tf.keras.optimizers.SGD(),
)

model.fit(dataset, epochs=100)
This is failing with the following shape-related error in the optimizer:
ValueError: Shapes must be equal rank, but are 1 and 2 for '{{node SGD/SGD/update/ResourceApplyGradientDescent}} = ResourceApplyGradientDescent[T=DT_FLOAT, use_locking=true](fp8_test/test_layer_1/ReadVariableOp/resource, SGD/Identity, SGD/IdentityN)' with input shapes: [], [], [100000,1].
I've been staring at the docs for a while, and I'm a bit stumped as to why this isn't working. I would really appreciate any input on how to fix this toy example.
Thanks in advance.
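One hedged reading of that error: tf.custom_gradient expects grad to return one gradient per input, each matching that input's shape and folded together with the upstream gradient. Here dy_ds = x has shape [batch, 1], while scale has shape [1]. A sketch of a corrected scaler under that assumption (not tested against the exact snippet above):

@tf.custom_gradient
def scaler(x, s):
    def grad(upstream):
        dy_dx = upstream * s                         # same shape as x
        dy_ds = tf.reduce_sum(upstream * x, axis=0)  # collapse batch dim -> shape [1], like s
        return dy_dx, dy_ds
    return x * s, grad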
Following the TensorFlow Keras example, I can create a custom layer that contains Linear layers recursively:
class MLPBlock(layers.Layer):
    def __init__(self):
        super(MLPBlock, self).__init__()
        self.linear_1 = Linear(32)
        self.linear_2 = Linear(32)
        self.linear_3 = Linear(1)

    def call(self, inputs):
        x = self.linear_1(inputs)
        x = tf.nn.relu(x)
        x = self.linear_2(x)
        x = tf.nn.relu(x)
        return self.linear_3(x)
How do I access all the component layers of a custom layer? I want to access the weights and biases of all the component layers. For example:

MLPBlock (parent layer):
    linear_1
    linear_2
    linear_3

I have looked into the TensorFlow Keras API, version r1.14 (https://www.tensorflow.org/guide/keras), but could not find any way to do this.
I assume that you are following this tutorial. Based on that, here is how you can access the weights:
class MLPBlock(tf.keras.Model):
    def __init__(self):
        super(MLPBlock, self).__init__()
        self.linear_1 = tf.keras.layers.Dense(32)
        self.linear_2 = tf.keras.layers.Dense(32)
        self.linear_3 = tf.keras.layers.Dense(1)

    def call(self, inputs):
        x = self.linear_1(inputs)
        x = tf.nn.relu(x)
        x = self.linear_2(x)
        x = tf.nn.relu(x)
        return self.linear_3(x)

mlp_block = MLPBlock()
y = mlp_block(tf.ones(shape=(3, 64)))

for layer in mlp_block.layers:
    weights, biases = layer.get_weights()
Please note that I slightly modified the example so that you can access the layers' weights and biases. Namely, instead of subclassing tf.keras.layers.Layer, I subclassed tf.keras.Model so that the stack of layers can be treated as a model, and then you can access that model's layers. I also used tf.keras.layers.Dense instead of the custom Linear layer for simplicity; using the custom layer should not make a difference.
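As a hedged alternative, if you want to keep subclassing tf.keras.layers.Layer: in TF 2.x a Layer also tracks the variables of sub-layers assigned to its attributes, so you can read them from the weights collections instead of .layers. A minimal sketch:

mlp_block = MLPBlock()                 # the Layer-subclass version from the question
_ = mlp_block(tf.ones(shape=(3, 64)))  # call once so the variables get built

# Kernels and biases of linear_1/2/3 are tracked automatically:
for var in mlp_block.trainable_weights:
    print(var.name, var.shape)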
I am attempting to use the Gamma distribution from tfp in a custom Keras loss function, using the log_prob method, but the function always returns nan when training starts.
I have tested the loss function, and it seems to work fine:
import tensorflow as tf
import tensorflow_probability as tfp
tf.enable_eager_execution()

def gamma_loss(y_true, alpha, beta):
    gamma_distr = tfp.distributions.Gamma(concentration=alpha, rate=beta)
    log_lik_gamma = gamma_distr.log_prob(y_true)
    return -tf.reduce_mean(log_lik_gamma)

gamma_loss(100, 2, 2).numpy()
# 194.00854
The problem may be related to the parameters (alpha and beta) that I am passing to the function, which are produced by the final (custom) layer of the model I am using.
This is the full snippet:
import tensorflow as tf
from tensorflow.keras import backend as K
from tensorflow.keras.layers import Input, Dense, Layer, Concatenate
from tensorflow.keras.models import Model
from tensorflow.keras.initializers import glorot_normal
import tensorflow_probability as tfp
from sklearn.datasets import make_regression

class GammaLayer(Layer):
    def __init__(self, output_dim, **kwargs):
        self.output_dim = output_dim
        super(GammaLayer, self).__init__(**kwargs)

    def build(self, input_shape):
        n_weight_rows = 4
        self.kernel_2 = self.add_weight(name='kernel_2',
                                        shape=(n_weight_rows, self.output_dim),
                                        initializer=glorot_normal(),
                                        trainable=True)
        self.kernel_3 = self.add_weight(name='kernel_3',
                                        shape=(n_weight_rows, self.output_dim),
                                        initializer=glorot_normal(),
                                        trainable=True)
        self.bias_2 = self.add_weight(name='bias_2',
                                      shape=(self.output_dim,),
                                      initializer=glorot_normal(),
                                      trainable=True)
        self.bias_3 = self.add_weight(name='bias_3',
                                      shape=(self.output_dim,),
                                      initializer=glorot_normal(),
                                      trainable=True)
        super(GammaLayer, self).build(input_shape)

    def call(self, x):
        # Here I use softplus to make the parameters strictly positive
        alpha = tf.math.softplus(K.dot(x, self.kernel_2) + self.bias_2)
        beta = tf.math.softplus(K.dot(x, self.kernel_3) + self.bias_3)
        return [alpha, beta]

    def compute_output_shape(self, input_shape):
        """
        The assumption is that the output is always one-dimensional
        """
        return [(input_shape[0], self.output_dim), (input_shape[0], self.output_dim)]

def gamma_loss(y_true, y_pred):
    alpha, beta = y_pred[0], y_pred[1]
    gamma_distr = tfp.distributions.Gamma(concentration=alpha, rate=beta)
    return -tf.reduce_mean(gamma_distr.log_prob(y_true))

X, y = make_regression(n_samples=1000, n_features=3, noise=0.1)

inputs = Input(shape=(3,))
x = Dense(6, activation='relu')(inputs)
x = Dense(4, activation='relu')(x)
x = GammaLayer(1, name='main_output')(x)
output_params = Concatenate(1, name="pvec")(x)

model = Model(inputs, output_params)
model.compile(loss=gamma_loss, optimizer='adam')
model.fit(X, y, epochs=30, batch_size=10)
Can you try adding an additional 1e-6 or so outside the softplus? For very negative values, softplus becomes quite close to zero.
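Concretely, that suggestion would look like this in GammaLayer.call (1e-6 is just an assumed placeholder; any small constant serves the same purpose):

def call(self, x):
    # Shift softplus away from zero so the Gamma parameters can never hit 0
    alpha = tf.math.softplus(K.dot(x, self.kernel_2) + self.bias_2) + 1e-6
    beta = tf.math.softplus(K.dot(x, self.kernel_3) + self.bias_3) + 1e-6
    return [alpha, beta]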
Thanks for reading my question.
I was using Keras to develop my reinforcement learning agent based on keras-rl, but I want to upgrade the agent to pick up some updates from the OpenAI Baselines code for better action exploration. That code uses TensorFlow only, and this is my first time using TensorFlow, so I am confused. I built my Keras deep learning model using its Model API and have never had to worry about the inside of the model. The code I referenced, however, is full of code that reaches inside the deep learning model, changes its weights, and gets intermediate layer outputs using tf.Session(). The framework is very flexible: as shown below, with tf.Session() a tensor (which is recognized as a tensor and is not callable) can produce a result when fed feed_dict data. In Keras this is impossible, as far as I know.
Once I allow using tf.Session(), my architecture will become complex, and nobody will want to understand or use it, even though it would let me adapt the reference code more easily.
On the other hand, if I don't allow it, I need to break down my existing model and use tons of K.function calls to get intermediate layers' outputs, or other things I can't get from a Keras model.
import numpy as np
from keras.layers import Dense, Input, BatchNormalization
from keras.models import Model
import tensorflow as tf
import keras.backend as K
import rl2.tf_util as U

def normalize(x, stats):
    if stats is None:
        return x
    return (x - stats.mean) / (stats.std + 1e-8)

class RunningMeanStd(object):
    # https://en.wikipedia.org/wiki/Algorithms_for_calculating_variance#Parallel_algorithm
    def __init__(self, my, epsilon=1e-2, shape=()):
        self._sum = K.variable(value=np.zeros(shape), dtype=tf.float32, name=my+"_runningsum")
        self._sumsq = K.variable(value=np.zeros(shape) + epsilon, dtype=tf.float32, name=my+"_runningsumsq")
        self._count = K.variable(value=np.zeros(()) + epsilon, dtype=tf.float32, name=my+"_count")
        self.mean = self._sum / self._count
        self.std = K.sqrt(K.maximum((self._sumsq / self._count) - K.square(self.mean), epsilon))

        newsum = K.variable(value=np.zeros(shape), dtype=tf.float32, name=my+'_sum')
        newsumsq = K.variable(value=np.zeros(shape), dtype=tf.float32, name=my+'_var')
        newcount = K.variable(value=np.zeros(()), dtype=tf.float32, name=my+'_count')
        self.incfiltparams = K.function([newsum, newsumsq, newcount], [],
                                        updates=[K.update_add(self._sum, newsum),
                                                 K.update(self._sumsq, newsumsq),
                                                 K.update(self._count, newcount)])

    def update(self, x):
        x = x.astype('float64')
        n = int(np.prod(self.shape))
        totalvec = np.zeros(n*2+1, 'float64')
        addvec = np.concatenate([x.sum(axis=0).ravel(), np.square(x).sum(axis=0).ravel(), np.array([len(x)], dtype='float64')])
        self.incfiltparams(totalvec[0:n].reshape(self.shape),
                           totalvec[n:2*n].reshape(self.shape),
                           totalvec[2*n])

i = Input(shape=(1,))
# h = BatchNormalization()(i)
h = Dense(4, activation='relu', kernel_initializer='he_uniform')(i)
h = Dense(10, activation='relu', kernel_initializer='he_uniform')(h)
o = Dense(1, activation='linear', kernel_initializer='he_uniform')(h)
model = Model(i, o)

obs_rms = RunningMeanStd(my='obs', shape=(1,))
normalized_obs0 = K.clip(normalize(i, obs_rms), 0, 100)
tf2 = model(normalized_obs0)
# print(model.predict(np.asarray([2,2,2,2,2]).reshape(5,)))
# print(tf(np.asarray([2,2,2,2,2]).reshape(5,)))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run([tf2], feed_dict={i: U.adjust_shape(i, [np.asarray([2,]).reshape(1,)])}))
I am trying to write some custom TensorFlow functions in Python (using tf.py_func) where I want to calculate both the results and the gradients in Python. I'm using the gradient_override_map trick (for example, from https://gist.github.com/harpone/3453185b41d8d985356cbe5e57d67342 and How to make a custom activation function with only Python in Tensorflow?).
However, while the function in the forward direction gets a numpy array as input, the function for the gradient gets Tensors. This is a problem, depending on when the function gets called, because there may not be a default session and/or there may not be a feed_dict with all the required values yet (for example, in a tf.train optimizer).
How do I write a py_func where both the forward and backward functions get (and return) numpy arrays?
Sample code:
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt

def sin_func(x):
    return np.sin(x)

def sin_grad_func(op, grad):
    x = op.inputs[0].eval()
    grad = grad.eval()  # <--- this is what I'd like to avoid
    output_grad = np.cos(x) * grad
    return tf.convert_to_tensor(output_grad)

def py_func(func, inp, Tout, stateful=True, name=None, grad_func=None):
    grad_name = 'PyFuncGrad_' + str(np.random.randint(0, 1E+8))
    tf.RegisterGradient(grad_name)(grad_func)
    g = tf.get_default_graph()
    with g.gradient_override_map({"PyFunc": grad_name}):
        return tf.py_func(func, inp, Tout, stateful=stateful, name=name)

with tf.Session() as sess:
    np_x = np.linspace(0, np.pi, num=1000, dtype=np.float32)
    x = tf.constant(np_x)

    y = py_func(sin_func,
                [x],
                [tf.float32],
                name='np_sin',
                grad_func=sin_grad_func)
    y = y[0]
    gr = tf.gradients(y, [x])

    tf.global_variables_initializer().run()
    plt.plot(y.eval())
    plt.plot(gr[0].eval())
If you want to include arbitrary Python code in your gradient function, the easiest solution is to create another tf.py_func() inside sin_grad_func():
def sin_grad_func_impl(x, grad):
    return np.cos(x) * grad

def sin_grad_func(op, grad):
    # op.inputs[0] is the original input tensor; py_func hands it to the
    # implementation as a numpy array, so no session or feed_dict is needed
    return tf.py_func(sin_grad_func_impl, [op.inputs[0], grad], grad.dtype)
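This grad_func drops straight into the py_func wrapper from the question's sample code; a minimal sketch of the wiring, reusing those names:

with tf.Session() as sess:
    x = tf.constant(np.linspace(0, np.pi, num=1000, dtype=np.float32))
    y = py_func(sin_func, [x], [tf.float32],
                name='np_sin', grad_func=sin_grad_func)[0]
    gr = tf.gradients(y, [x])
    # Both the forward and the backward pass now run as numpy code:
    print(sess.run(gr[0])[:3])  # ~cos(x) near 0, i.e. values close to 1.0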