TensorFlow dense layers worse than Keras Sequential - tensorflow

I am trying to train an agent on the inverted-pendulum problem (similar to cart-pole), which is a standard reinforcement learning benchmark. I use the neural fitted Q-iteration algorithm, which uses a multi-layer neural network to approximate the Q function.
I built the network once with Keras Sequential and once with tf.layers.dense, keeping everything else the same. However, Keras gives good results while the TensorFlow version does not work at all: its loss keeps increasing and the agent learns nothing from training.
Here is the Keras code:
def build_model():
    model = Sequential()
    model.add(Dense(5, input_dim=3))
    model.add(Activation('sigmoid'))
    model.add(Dense(5))
    model.add(Activation('sigmoid'))
    model.add(Dense(1))
    model.add(Activation('sigmoid'))
    adam = Adam(lr=1E-3)
    model.compile(loss='mean_squared_error', optimizer=adam)
    return model
and the TensorFlow version is:
class NFQ_fit(object):
    """
    Neural network approximator for NFQ iteration.
    """
    def __init__(self, sess, N_feature, learning_rate=1E-3, batch_size=100):
        self.sess = sess
        self.N_feature = N_feature
        self.learning_rate = learning_rate
        self.batch_size = batch_size
        # DNN structure
        self.inputs = tf.placeholder(tf.float32, [None, N_feature], 'inputs')
        self.labels = tf.placeholder(tf.float32, [None, 1], 'labels')
        self.l1 = tf.layers.dense(inputs=self.inputs,
                                  units=5,
                                  activation=tf.sigmoid,
                                  use_bias=True,
                                  kernel_initializer=tf.truncated_normal_initializer(0.0, 1E-2),
                                  bias_initializer=tf.constant_initializer(0.0),
                                  kernel_regularizer=tf.contrib.layers.l2_regularizer(1E-4),
                                  name='hidden-layer-1')
        self.l2 = tf.layers.dense(inputs=self.l1,
                                  units=5,
                                  activation=tf.sigmoid,
                                  use_bias=True,
                                  kernel_initializer=tf.truncated_normal_initializer(0.0, 1E-2),
                                  bias_initializer=tf.constant_initializer(0.0),
                                  kernel_regularizer=tf.contrib.layers.l2_regularizer(1E-4),
                                  name='hidden-layer-2')
        self.outputs = tf.layers.dense(inputs=self.l2,
                                       units=1,
                                       activation=tf.sigmoid,
                                       use_bias=True,
                                       kernel_initializer=tf.truncated_normal_initializer(0.0, 1E-2),
                                       bias_initializer=tf.constant_initializer(0.0),
                                       kernel_regularizer=tf.contrib.layers.l2_regularizer(1E-4),
                                       name='outputs')
        # optimization
        # self.mean_loss = tf.losses.mean_squared_error(self.labels, self.outputs)
        self.mean_loss = tf.reduce_mean(tf.square(self.labels - self.outputs))
        self.regularization_loss = tf.losses.get_regularization_loss()
        self.loss = self.mean_loss  # + self.regularization_loss
        self.train_op = tf.train.AdamOptimizer(learning_rate=self.learning_rate).minimize(self.loss)
The two models are meant to be identical: both have two hidden layers of the same size. I suspect the problem comes from the kernel initialization, but I don't know how to fix it.

Using Keras is great. If you want tighter TensorFlow integration, check out tf.keras; there is no particular reason to use tf.layers if the Keras (or tf.keras) defaults work better.
In this case glorot_uniform looks like the default Keras initializer, and it is also the global TensorFlow default. So instead of the explicit truncated-normal initialization in your question, consider removing the kernel_initializer argument entirely (or passing Glorot explicitly).
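For example, a minimal sketch of the first hidden layer with Glorot initialization, assuming TF 1.x where tf.layers and tf.glorot_uniform_initializer are available (this is only an illustration, not your full model):
# Sketch only: either drop kernel_initializer to fall back to the default
# Glorot-uniform initializer, or pass it explicitly as shown here.
self.l1 = tf.layers.dense(inputs=self.inputs,
                          units=5,
                          activation=tf.sigmoid,
                          kernel_initializer=tf.glorot_uniform_initializer(),
                          name='hidden-layer-1')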

Related

How to merge ReLU after quantization aware training

I have a network which contains Conv2D layers followed by ReLU activations, declared as such:
x = layers.Conv2D(self.hparams['channels_count'], kernel_size=(4,1))(x)
x = layers.ReLU()(x)
And it is ported to TFLite with the following representation:
Basic TFLite network without Q-aware training
However, after performing quantization-aware training on the network and porting it again, the ReLU layers are now explicit in the graph:
TFLite network after Q-aware training
This results in them being processed separately on the target instead of during the evaluation of the Conv2D kernel, inducing a 10% performance loss in my overall network.
Declaring the activation with the following implicit syntax does not produce the problem:
x = layers.Conv2D(self.hparams['channels_count'], kernel_size=(4,1), activation='relu')(x)
Basic TFLite network with implicit ReLU activation
TFLite network with implicit ReLU after Q-aware training
However, this restricts the network to basic ReLU activation, whereas I would like to use ReLU6 which cannot be declared in this way.
Is this a TFLite issue? If not, is there a way to prevent the ReLU layer from being split? Or alternatively, is there a way to manually merge the ReLU layers back into the Conv2D layers after the quantization-aware training?
Edit:
QA training code:
def learn_qaware(self):
    quantize_model = tfmot.quantization.keras.quantize_model
    self.model = quantize_model(self.model)
    training_generator = SCDataGenerator(self.training_set)
    validate_generator = SCDataGenerator(self.validate_set)
    self.model.compile(
        optimizer=self.configure_optimizers(qa_learn=True),
        loss=self.get_LLP_loss(),
        metrics=self.get_metrics(),
        run_eagerly=config['eager_mode'],
    )
    self.model.fit(
        training_generator,
        epochs=self.hparams['max_epochs'],
        batch_size=1,
        shuffle=self.hparams['shuffle_curves'],
        validation_data=validate_generator,
        callbacks=self.get_callbacks(qa_learn=True),
    )
Quantized TFLite model generation code:
def tflite_convert(classifier):
    output_file = get_tflite_filename(classifier.model_path)
    # Convert the model to the TensorFlow Lite format without quantization
    saved_shape = classifier.model.input.shape.as_list()
    fixed_shape = saved_shape
    fixed_shape[0] = 1
    classifier.model.input.set_shape(fixed_shape)  # Force batch size to 1 for generation
    converter = tf.lite.TFLiteConverter.from_keras_model(classifier.model)
    classifier.model.input.set_shape(saved_shape)
    # Set the optimization flag.
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    # Enforce integer-only quantization
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.int8
    converter.inference_output_type = tf.int8
    # Provide a representative dataset to ensure we quantize correctly.
    if config['eager_mode']:
        tf.executing_eagerly()

    def representative_dataset():
        for x in classifier.validate_set.get_all_inputs():
            rs = x.reshape(1, x.shape[0], 1, 1).astype(np.float32)
            yield([rs])

    converter.representative_dataset = representative_dataset
    model_tflite = converter.convert()
    # Save the model to disk
    open(output_file, "wb").write(model_tflite)
    return TFLite_model(output_file)
I have found a workaround: instantiate a non-trained version of the model, then copy over the weights from the quantization-aware trained model before converting to TFLite.
This seems like quite a hack, so I'm still on the lookout for a cleaner solution.
Code for the workaround:
def dequantize(self):
    if not hasattr(self, 'fp_model') or not self.fp_model:
        self.fp_model = self.get_default_model()

    def find_layer_in_model(name, model):
        for layer in model.layers:
            if layer.name == name:
                return layer
        return None

    def find_weight_group_in_layer(name, layer):
        # search within the layer passed as argument
        for weight_group in layer.trainable_weights:
            if weight_group.name == name:
                return weight_group
        return None

    for layer in self.fp_model.layers:
        if 'input' in layer.name or 'quantize_layer' in layer.name:
            continue
        QUANT_TAG = "quant_"
        quant_layer = find_layer_in_model(QUANT_TAG + layer.name, self.model)
        if quant_layer is None:
            raise RuntimeError('Failed to match layer ' + layer.name)
        for i, weight_group in enumerate(layer.trainable_weights):
            quant_weight_group = find_weight_group_in_layer(QUANT_TAG + weight_group.name, quant_layer)
            if quant_weight_group is None:
                quant_weight_group = find_weight_group_in_layer(weight_group.name, quant_layer)
            if quant_weight_group is None:
                raise RuntimeError('Failed to match weight group ' + weight_group.name)
            layer.trainable_weights[i].assign(quant_weight_group)
    self.model = self.fp_model
You can pass activation=tf.nn.relu6 to use ReLU6 activation.
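For example, a minimal sketch reusing the Conv2D call from the question (assuming tensorflow is imported as tf and keras.layers as layers, as in your code; not verified against your full model):
# ReLU6 declared inline on the Conv2D layer, the same implicit style as activation='relu'
x = layers.Conv2D(self.hparams['channels_count'],
                  kernel_size=(4, 1),
                  activation=tf.nn.relu6)(x)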

<NameError: name 'categorical_crossentropy' is not defined> when trying to load a model

I have built a custom Keras model:
def create_model(input_dim,
                 filters,
                 kernel_size,
                 strides,
                 padding,
                 rnn_units=256,
                 output_dim=30,
                 dropout_rate=0.5,
                 cell=GRU,
                 activation='tanh'):
    """
    Creates a simple Conv-Bi-RNN model used for the word classification approach.
    :params:
        input_dim - Integer, size of inputs (Example: 161 if using spectrogram, 13 for mfcc)
        filters - Integer, number of filters for the Conv1D layer
        kernel_size - Integer, size of kernel for Conv layer
        strides - Integer, stride size for the Conv layer
        padding - String, padding version for the Conv layer ('valid' or 'same')
        rnn_units - Integer, number of units/neurons for the RNN layer(s)
        output_dim - Integer, number of output neurons/units at the output layer
            NOTE: For the speech-to-text approach, this is the number of characters that may occur
        dropout_rate - Float, percentage of dropout regularization at each RNN layer, between 0 and 1
        cell - Keras function, type of RNN layer * Valid options: LSTM, GRU, BasicRNN
        activation - String, activation type at the RNN layer
    :returns:
        model - Keras Model object
    """
    keras.losses.custom_loss = 'categorical_crossentropy'

    # Defines Input layer for the model
    input_data = Input(name='inputs', shape=input_dim)

    # Defines 1D Conv block (Conv layer + batch norm)
    conv_1d = Conv1D(filters,
                     kernel_size,
                     strides=strides,
                     padding=padding,
                     activation='relu',
                     name='layer_1_conv',
                     dilation_rate=1)(input_data)
    conv_bn = BatchNormalization(name='conv_batch_norm')(conv_1d)

    # Defines Bi-Directional RNN block (Bi-RNN layer + batch norm)
    layer = cell(rnn_units, activation=activation,
                 return_sequences=True, implementation=2, name='rnn_1', dropout=dropout_rate)(conv_bn)
    layer = BatchNormalization(name='bt_rnn_1')(layer)

    # Defines Bi-Directional RNN block (Bi-RNN layer + batch norm)
    layer = cell(rnn_units, activation=activation,
                 return_sequences=True, implementation=2, name='final_layer_of_rnn')(layer)
    layer = BatchNormalization(name='bt_rnn_final')(layer)
    layer = Flatten()(layer)

    # Squish RNN features to match the number of classes
    time_dense = Dense(output_dim)(layer)

    # Define model predictions with softmax activation
    y_pred = Activation('softmax', name='softmax')(time_dense)

    # Defines the Model itself, and uses a lambda function to define output length based on inputs
    model = Model(inputs=input_data, outputs=y_pred)
    model.output_length = lambda x: cnn_output_length(x, kernel_size, padding, strides)

    # Adds categorical crossentropy loss for the classification model
    model = add_categorical_loss(model, output_dim)

    # Compile the model with the chosen loss and optimizer
    model.compile(loss={'categorical_crossentropy': lambda y_true, y_pred: y_pred},
                  optimizer=keras.optimizers.RMSprop(), metrics=['accuracy'])
    print("\r\ncompiled the model with the chosen loss and optimizer\r\n")
    print(model.summary())
    return model
and after training the model:
checkpointer = ModelCheckpoint(filepath=save_path + 'tst_model.hdf5')

# Train the chosen model with the data generator
hist = model.fit_generator(generator=generator.next_train(),      # Calls the generator's next_train function, which generates a new batch of training data
                           steps_per_epoch=steps_per_epoch,       # Defines how many training steps there are
                           epochs=epochs,                         # Defines how many epochs the training process takes
                           validation_data=generator.next_valid(),# Calls the generator's next_valid function, which generates a new batch of validation data
                           validation_steps=validation_steps,     # Defines how many validation steps there are
                           callbacks=[checkpointer],              # Defines all callbacks (here only a model checkpointer that saves the model)
                           verbose=verbose)
After that I try to load the latest checkpoint model as follows:
from keras.models import load_model
model = load_model(filepath=save_path+'tst_model.hdf5')
and get:
NameError: name 'categorical_crossentropy' is not defined
What am I doing wrong?
Using:
Ubuntu 18.04
Python 3.6.8
TensorFlow 2.0
TensorFlow backend 2.3.1
You must import the loss function:
from tensorflow.keras.losses import categorical_crossentropy
When you load your model, TensorFlow will automatically try to compile it (see the compile argument of tf.keras.models.load_model). There are two ways to avoid this error (see the sketch below):
If you provided a custom loss for the model, you must pass it to tf.keras.models.load_model() via the custom_objects argument (a dict mapping names to objects).
Or set the compile argument to False.
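For example, a minimal sketch of both options (the custom-loss mapping below is a placeholder; use whatever name and callable your add_categorical_loss helper actually registered):
from tensorflow.keras.models import load_model

# Option 1: resolve the custom loss by name (placeholder mapping, adjust to your code)
model = load_model(save_path + 'tst_model.hdf5',
                   custom_objects={'categorical_crossentropy': lambda y_true, y_pred: y_pred})

# Option 2: skip compilation entirely and compile manually afterwards if needed
model = load_model(save_path + 'tst_model.hdf5', compile=False)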

How to get evaluated gradients from keras model using tensorflow 2?

I'm trying to obtain the gradients from a Keras model. The backend function keras.backend.gradients creates a symbolic function which needs to be evaluated on some specific input. The following code works for this problem, but it makes use of the old TensorFlow sessions and, in particular, of feed_dict.
import numpy as np
import keras
from keras import backend as K
import tensorflow as tf
model = keras.Sequential()
model.add(keras.layers.Dense(16, activation='relu', input_shape = (49, )))
model.add(keras.layers.Dense(11, activation='softmax'))
model.compile(optimizer='rmsprop', loss='mse')
trainingExample = np.random.random((1, 49))
gradients = K.gradients(model.output, model.trainable_weights)
sess = tf.InteractiveSession()
sess.run(tf.initialize_all_variables())
evaluated_gradients = sess.run(gradients,
                               feed_dict={model.input: trainingExample})
sess.close()
How can I rewrite this in TensorFlow 2 style, i.e. without sessions? There is an alternative method described here. However, I don't understand why it should be necessary to provide some explicit output to evaluate the gradients, or how to make that solution work without these outputs.
In TensorFlow 2 you can get gradients very easily using tf.GradientTape.
I am citing the official tutorial code here:
@tf.function
def train_step(images, labels):
    with tf.GradientTape() as tape:
        predictions = model(images)
        loss = loss_object(labels, predictions)
    gradients = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))
    train_loss(loss)
    train_accuracy(labels, predictions)
You can find the complete tutorial on the official TensorFlow website: https://www.tensorflow.org/tutorials/quickstart/advanced
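Applied to the snippet from the question, a minimal sketch without sessions might look like this (assuming you want the gradients of the model output with respect to the trainable weights, as in the original K.gradients call; like K.gradients, tape.gradient sums over the output components):
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation='relu', input_shape=(49,)),
    tf.keras.layers.Dense(11, activation='softmax'),
])

trainingExample = np.random.random((1, 49)).astype(np.float32)
with tf.GradientTape() as tape:
    output = model(trainingExample)  # forward pass is recorded by the tape
evaluated_gradients = tape.gradient(output, model.trainable_weights)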

tensorflow - initialize duplicate nets with the same weights

I want to create two models with the same architecture and exactly the same initial weights, using tf.layers.dense:
m1 = tf.layers.dense(input, units=10, activation='relu', name='m1')
m2 = tf.layers.dense(input, units=10, activation='relu', name='m2')
How can I set m2's weights and biases to the same values as m1's? (The same values, not shared variables, so that training m1 does not change m2's weights and vice versa.)
You should first create the kernel and bias initializers:
import tensorflow as tf
tf.enable_eager_execution()
init_kernel = tf.constant_initializer([[1,2,3],[4,5,6]])
init_bias = tf.constant_initializer([7,8,9])
Then you can create the two dense layers and pass the same initializers to kernel_initializer and bias_initializer:
# `inputs` is assumed to be an existing input tensor
layer1 = tf.layers.dense(inputs=inputs, units=3, kernel_initializer=init_kernel, bias_initializer=init_bias)
layer2 = tf.layers.dense(inputs=inputs, units=3, kernel_initializer=init_kernel, bias_initializer=init_bias)
If I understood your question correctly, this is what you need.
gorjan's answer is correct, but I also found another solution that can be easier to use in more complex scenarios:
with tf.Session() as sess:
    with tf.variable_scope("m1", reuse=True):
        weights_m1 = sess.run(tf.get_variable("kernel"))
    with tf.variable_scope("m2", reuse=True):
        sess.run(tf.get_variable("kernel").assign(weights_m1))
        weights_m2 = sess.run(tf.get_variable("kernel"))
print(np.array_equal(weights_m1, weights_m2))  # True

Calling a Keras model on a TensorFlow tensor but keep weights

In the "Keras as a simplified interface to TensorFlow" tutorial, they describe how one can call a Keras model on a TensorFlow tensor.
import tensorflow as tf
from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(32, activation='relu', input_dim=784))
model.add(Dense(10, activation='softmax'))

# this works!
x = tf.placeholder(tf.float32, shape=(None, 784))
y = model(x)
They also say:
Note: by calling a Keras model, you are reusing both its architecture and its weights. When you are calling a model on a tensor, you are creating new TF ops on top of the input tensor, and these ops are reusing the TF Variable instances already present in the model.
I interpret this to mean that the weights of the model will be the same in y as in model. However, it seems to me that the weights in the resulting TensorFlow node are reinitialized. A minimal example can be seen below:
import numpy as np
import tensorflow as tf
from keras.models import Sequential
from keras.layers import Dense

# Create model with weights initialized to 1
model = Sequential()
model.add(Dense(1, input_dim=1, kernel_initializer='ones',
                bias_initializer='zeros'))
model.compile(loss='binary_crossentropy', optimizer='adam',
              metrics=['accuracy'])

# Save the weights
model.save_weights('file')

# Create another identical model, except with weights initialized to 0
model2 = Sequential()
model2.add(Dense(1, input_dim=1, kernel_initializer='zeros',
                 bias_initializer='zeros'))
model2.compile(loss='binary_crossentropy', optimizer='adam',
               metrics=['accuracy'])

# Load the weights from the first model
model2.load_weights('file')

# Call the model with a TensorFlow tensor
v = tf.Variable([[1, ], ], dtype=tf.float32)
node = model2(v)
sess = tf.Session()
init = tf.global_variables_initializer()
sess.run(init)
print(sess.run(node), model2.predict(np.array([[1, ], ])))
# Prints (array([[ 0.]], dtype=float32), array([[ 1.]], dtype=float32))
Why I want to do this:
I want to use a trained network in another minimization scheme where the network "punishes" places in the search space that are not allowed. If you have ideas that do not involve this specific approach, that would also be very much appreciated.
Finally found the answer. There are two problems in the example from the question.
1:
The first and most obvious was that I called the tf.global_variables_initializer() function, which re-initializes all variables in the session. Instead I should have called tf.variables_initializer(var_list), where var_list is a list of the variables to initialize.
2:
The second problem was that Keras did not use the same session as the native TensorFlow objects. This meant that to run the TensorFlow object model2(v) with my session sess, it needed to be reinitialized. Again, the "Keras as a simplified interface to TensorFlow" tutorial was able to help:
We should start by creating a TensorFlow session and registering it with Keras. This means that Keras will use the session we registered to initialize all variables that it creates internally.
import tensorflow as tf
sess = tf.Session()
from keras import backend as K
K.set_session(sess)
If we apply these changes to the example from my question, we get the following code, which does exactly what is expected of it.
import numpy as np
import tensorflow as tf
from keras import backend as K
from keras.models import Sequential
from keras.layers import Dense

sess = tf.Session()
# Register session with Keras
K.set_session(sess)

model = Sequential()
model.add(Dense(1, input_dim=1, kernel_initializer='ones',
                bias_initializer='zeros'))
model.compile(loss='binary_crossentropy', optimizer='adam',
              metrics=['accuracy'])
model.save_weights('test')

model2 = Sequential()
model2.add(Dense(1, input_dim=1, kernel_initializer='zeros',
                 bias_initializer='zeros'))
model2.compile(loss='binary_crossentropy', optimizer='adam',
               metrics=['accuracy'])
model2.load_weights('test')

v = tf.Variable([[1, ], ], dtype=tf.float32)
node = model2(v)
init = tf.variables_initializer([v, ])
sess.run(init)
print(sess.run(node), model2.predict(np.array([[1, ], ])))
# prints: (array([[ 1.]], dtype=float32), array([[ 1.]], dtype=float32))
Conclusion:
The lesson is that when mixing TensorFlow and Keras, make sure everything uses the same session.
Thanks for asking this question and answering it; it helped me! In addition to setting the same TF session in the Keras backend, it is also important to note that if you want to load a Keras model from a file, you need to run a global variable initializer op before you load the model.
import tensorflow as tf
from tensorflow.keras import models

sess = tf.Session()
# make sure Keras uses the same session as this code
tf.keras.backend.set_session(sess)

# Do this BEFORE loading a Keras model
init_op = tf.global_variables_initializer()
sess.run(init_op)
model = models.load_model('path/to/your/model.h5')