What is the structure of a Keras model if input_shape is omitted and why does it perform better?

I omitted the input_shape in the first layer of my Keras model by mistake. Eventually I noticed this and fixed it – and my model's performance dropped dramatically.
Looking at the structure of the model with and without input_shape, I discovered that the better-performing model reports an output shape of multiple. Moreover, plotting it with plot_model shows no connections between the layers:
When it comes to performance, the model I understand (with input_shape) achieves a validation loss of 4.0513 (MSE) after 10 epochs with my test code (below), while the "weird" model manages 1.3218 – and the difference only increases with more epochs.
Model definition:
model = keras.Sequential()
model.add(keras.layers.Dense(64, activation=tf.nn.relu, input_shape=(1001,)))
# add or remove this ^^^^^^^^^^^^^^^^^^^
model.add(keras.layers.Dropout(0.05))
...
(never mind the details, this is just a model that demonstrates the difference in performance with and without input_shape)
So what is happening in the better-performing model? What is multiple? How are the layers really connected? How could I build this same model while also specifying input_shape?
Complete script:
import tensorflow as tf
from tensorflow import keras
import numpy as np
from collections import deque
import math, random
def func(x):
    return math.sin(x)*5 + math.sin(x*1.8)*4 + math.sin(x/4)*5

def get_data():
    x = 0
    dx = 0.1
    q = deque()
    r = 0
    data = np.zeros((100000, 1002), np.float32)
    while True:
        x = x + dx
        sig = func(x)
        q.append(sig)
        if len(q) < 1000:
            continue
        arr = np.array(q, np.float32)
        for k in range(10):
            xx = random.uniform(0.1, 9.9)
            data[r, :1000] = arr[:1000]
            data[r, 1000] = 5*xx  # scale for easier fitting
            data[r, 1001] = func(x + xx)
            r = r + 1
            if r >= data.shape[0]:
                break
        if r >= data.shape[0]:
            break
        q.popleft()
    inputs = data[:, :1001]
    outputs = data[:, 1001]
    return (inputs, outputs)
np.random.seed(1)
tf.set_random_seed(1)
random.seed(1)
model = keras.Sequential()
model.add(keras.layers.Dense(64, activation=tf.nn.relu, input_shape=(1001,)))
# add or remove this ^^^^^^^^^^^^^^^^^^^
model.add(keras.layers.Dropout(0.05))
model.add(keras.layers.Dense(64, activation=tf.nn.relu))
model.add(keras.layers.Dropout(0.05))
model.add(keras.layers.Dense(64, activation=tf.nn.relu))
model.add(keras.layers.Dropout(0.05))
model.add(keras.layers.Dense(64, activation=tf.nn.relu))
model.add(keras.layers.Dropout(0.05))
model.add(keras.layers.Dense(1))
model.compile(
    loss = 'mse',
    optimizer = tf.train.RMSPropOptimizer(0.0005),
    metrics = ['mae', 'mse'])
inputs, outputs = get_data()
hist = model.fit(inputs, outputs, epochs=10, validation_split=0.1)
print("Final val_loss is", hist.history['val_loss'][-1])

TL;DR
The reason the results are different is that the two models have different initial weights. The fact that one performs (significantly) better than the other is purely by chance and, as @today mentioned, the results they obtain are approximately similar.
Details
As the documentation for tf.set_random_seed explains, random operations use two seeds, the graph-level seed and the operation-specific seed; tf.set_random_seed sets the graph-level seed:
Operations that rely on a random seed actually derive it from two seeds: the graph-level and operation-level seeds. This sets the graph-level seed.
Taking a look at the definition of Dense we see that the default kernel initializer is 'glorot_uniform' (let's only consider the kernel initializer here, but the same holds for the bias initializer). Walking further through the source code we eventually find that this fetches GlorotUniform with default arguments. Specifically, the random number generator seed for that specific operation (namely weight initialization) is set to None. If we check where this seed is used, we find it is passed to random_ops.truncated_normal, for example. This in turn (as do all random operations) now fetches the two seeds, one being the graph-level seed and the other the operation-specific seed: seed1, seed2 = random_seed.get_seed(seed). Checking the definition of the get_seed function, we find that if the operation-specific seed is not given (which is our case) then it is derived from properties of the current graph: op_seed = ops.get_default_graph()._last_id. The corresponding part of the tf.set_random_seed docs reads:
If the graph-level seed is set, but the operation seed is not: The system deterministically picks an operation seed in conjunction with the graph-level seed so that it gets a unique random sequence.
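For illustration, the internal helper that the random ops use can be called directly. This is only a quick sketch against TF 1.x internals (random_seed and _last_id are private, so details may vary between versions):
import tensorflow as tf
from tensorflow.python.framework import ops, random_seed

tf.set_random_seed(1)
_ = tf.random_uniform(())                 # add some nodes to the default graph

# No operation-level seed given, so it is derived from the graph state:
seed1, seed2 = random_seed.get_seed(None)
print(seed1, seed2)                       # graph-level seed and the derived op seed
print(ops.get_default_graph()._last_id)   # the value the op seed is based on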
Now coming back to the original problem: whether or not input_shape is defined makes a difference for the graph structure. Again looking at a bit of source code we find that Sequential.add builds the inputs and outputs of the network incrementally only if input_shape was specified; otherwise it just stores a list of layers (model._layers); compare model.inputs and model.outputs for the two definitions. The output is incrementally built by calling the layers directly, which dispatches to Layer.__call__. This wrapper builds the layer, sets the layer's inputs and outputs and adds some metadata to the outputs; it also uses an ops.name_scope to group operations. We can see this in the visualization provided by Tensorboard (example for the simplified model architecture of Input -> Dense -> Dropout -> Dense):
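This difference is also visible directly from the Python side; a minimal sketch (TF 1.12-era tf.keras, exact representations may vary between versions):
import numpy as np
import tensorflow as tf
from tensorflow import keras

# With input_shape: the graph is built incrementally while layers are added.
m1 = keras.Sequential([keras.layers.Dense(4, input_shape=(8,))])
print(m1.built, m1.inputs, m1.outputs)    # True, plus concrete input/output tensors

# Without input_shape: only the list of layers is stored, nothing is built yet.
m2 = keras.Sequential([keras.layers.Dense(4)])
print(m2.built, m2.inputs, m2.outputs)    # False, no input/output tensors yet

# The second model is only built "on the fly" once data is passed in.
m2.compile(loss='mse', optimizer='sgd')
m2.predict(np.zeros((1, 8), np.float32))
print(m2.built)                           # True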
Now in the case where we didn't specify input_shape, all the model has is a list of layers. Even after having called compile the model is actually not compiled (only attributes such as the optimizer are set). Instead it is compiled "on the fly" when data is passed to the model for the first time. This happens in model._standardize_weights: the model output is obtained via self.call(dummy_input_values, training=training). Checking this method we find that it builds the layers (note that the model is not yet built) and then computes the output incrementally by using Layer.call (not __call__). This leaves out all the metadata and also the grouping of operations and hence results in a different structure of the graph (though its computational operations are all the same). Again checking Tensorboard we find:
Expanding both graphs we would find that they contain the same operations, grouped together differently. However this has the effect that keras.backend.get_session().graph._last_id is different for the two definitions and hence results in a different seed for the random operations:
# With `input_shape`:
>>> keras.backend.get_session().graph._last_id
303
# Without `input_shape`:
>>> keras.backend.get_session().graph._last_id
7
Performance results
I used the OP's code with some modifications in order to have similar random operations:
Added the steps described here to ensure reproducibility in terms of randomization,
Set random seeds for Dense and Dropout variable initialization,
Removed validation_split since the splitting happens before "on the fly" compilation of the model without input_shape and hence might interfere with the seed,
Set shuffle = False since this might use a separate operation specific seed.
This is the complete code (in addition I performed export PYTHONHASHSEED=0 before running the script):
from collections import deque
from functools import partial
import math
import random
import sys
import numpy as np
import tensorflow as tf
from tensorflow import keras
seed = int(sys.argv[1])
np.random.seed(1)
tf.set_random_seed(seed)
random.seed(1)
session_conf = tf.ConfigProto(intra_op_parallelism_threads=1,
                              inter_op_parallelism_threads=1)
sess = tf.Session(graph=tf.get_default_graph(), config=session_conf)
keras.backend.set_session(sess)
def func(x):
    return math.sin(x)*5 + math.sin(x*1.8)*4 + math.sin(x/4)*5

def get_data():
    x = 0
    dx = 0.1
    q = deque()
    r = 0
    data = np.zeros((100000, 1002), np.float32)
    while True:
        x = x + dx
        sig = func(x)
        q.append(sig)
        if len(q) < 1000:
            continue
        arr = np.array(q, np.float32)
        for k in range(10):
            xx = random.uniform(0.1, 9.9)
            data[r, :1000] = arr[:1000]
            data[r, 1000] = 5*xx  # scale for easier fitting
            data[r, 1001] = func(x + xx)
            r = r + 1
            if r >= data.shape[0]:
                break
        if r >= data.shape[0]:
            break
        q.popleft()
    inputs = data[:, :1001]
    outputs = data[:, 1001]
    return (inputs, outputs)
Dense = partial(keras.layers.Dense, kernel_initializer=keras.initializers.glorot_uniform(seed=1))
Dropout = partial(keras.layers.Dropout, seed=1)
model = keras.Sequential()
model.add(Dense(64, activation=tf.nn.relu,
                # input_shape=(1001,)
                ))
model.add(Dropout(0.05))
model.add(Dense(64, activation=tf.nn.relu))
model.add(Dropout(0.05))
model.add(Dense(64, activation=tf.nn.relu))
model.add(Dropout(0.05))
model.add(Dense(64, activation=tf.nn.relu))
model.add(Dropout(0.05))
model.add(Dense(1))
model.compile(
    loss = 'mse',
    optimizer = tf.train.RMSPropOptimizer(0.0005)
)
inputs, outputs = get_data()
shuffled = np.arange(len(inputs))
np.random.shuffle(shuffled)
inputs = inputs[shuffled]
outputs = outputs[shuffled]
hist = model.fit(inputs, outputs[:, None], epochs=10, shuffle=False)
np.save('without.{:d}.loss.npy'.format(seed), hist.history['loss'])
With this code I'd actually expect to obtain similar results for both approaches; however, it turns out that they are not equal:
for i in $(seq 1 10)
do
    python run.py $i
done
Plot the mean loss +/- 1 std. dev.:
Initial weights and initial prediction
I verified that the initial weights and an initial prediction (before fitting) are the same for the two versions:
inputs, outputs = get_data()
mode = 'without'
pred = model.predict(inputs)
np.save(f'{mode}.prediction.npy', pred)
for i, layer in enumerate(model.layers):
    if isinstance(layer, keras.layers.Dense):
        w, b = layer.get_weights()
        np.save(f'{mode}.{i:d}.kernel.npy', w)
        np.save(f'{mode}.{i:d}.bias.npy', b)
and
for i in 0 2 4 8
do
    for data in bias kernel
    do
        diff -q "with.$i.$data.npy" "without.$i.$data.npy"
    done
done
Influence of Dropout
[ ! ] I checked the performance after removing all Dropout layers and in that case the performance is actually equal. So the crux seems to lie with the Dropout layers. Actually the performance of the models without Dropout layers is the same as for the model with Dropout layers but without specifying input_shape. So it seems that without input_shape the Dropout layers are not effective.
Basically the difference between the two versions is that one uses __call__ and the other uses call to compute the outputs (as explained above). Since the performance is similar to the case without Dropout layers, a possible explanation could be that the Dropout layers don't drop anything when input_shape is not specified. This could be caused by training=False, i.e. the layers don't recognize they are in training mode. However I don't see a reason why this would happen. Also we can consider again the Tensorboard graphs.
Specifying input_shape:
Not specifying input_shape:
where the switch also depends on the learning phase (as before):
To verify the training kwarg let's subclass Dropout:
class Dropout(keras.layers.Dropout):
    def __init__(self, rate, noise_shape=None, seed=None, **kwargs):
        super().__init__(rate, noise_shape=noise_shape, seed=1, **kwargs)

    def __call__(self, inputs, *args, **kwargs):
        training = kwargs.get('training')
        if training is None:
            training = keras.backend.learning_phase()
        print('[__call__] training: {}'.format(training))
        return super().__call__(inputs, *args, **kwargs)

    def call(self, inputs, training=None):
        if training is None:
            training = keras.backend.learning_phase()
        print('[call] training: {}'.format(training))
        return super().call(inputs, training)
I obtain similar outputs for both versions; however, the calls to __call__ are missing when input_shape is not specified:
[__call__] training: Tensor("keras_learning_phase:0", shape=(), dtype=bool)
[call] training: Tensor("keras_learning_phase:0", shape=(), dtype=bool)
[__call__] training: Tensor("keras_learning_phase:0", shape=(), dtype=bool)
[call] training: Tensor("keras_learning_phase:0", shape=(), dtype=bool)
[__call__] training: Tensor("keras_learning_phase:0", shape=(), dtype=bool)
[call] training: Tensor("keras_learning_phase:0", shape=(), dtype=bool)
[__call__] training: Tensor("keras_learning_phase:0", shape=(), dtype=bool)
[call] training: Tensor("keras_learning_phase:0", shape=(), dtype=bool)
So I suspect that the problem lies somewhere within __call__ but right now I can't figure out what it is.
System
I'm using Ubuntu 16.04, Python 3.6.7 and Tensorflow 1.12.0 via conda (no GPU support):
$ uname -a
Linux MyPC 4.4.0-141-generic #167-Ubuntu SMP Wed Dec 5 10:40:15 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
$ python --version
Python 3.6.7 :: Anaconda, Inc.
$ conda list | grep tensorflow
tensorflow 1.12.0 mkl_py36h69b6ba0_0
tensorflow-base 1.12.0 mkl_py36h3c3e929_0
Edit
I also had keras and keras-base installed (keras-applications and keras-preprocessing are required by tensorflow):
$ conda list | grep keras
keras 2.2.4 0
keras-applications 1.0.6 py36_0
keras-base 2.2.4 py36_0
keras-preprocessing 1.0.5 py36_0
After removing all keras* and tensorflow* packages and then reinstalling tensorflow, the discrepancy vanished. Even after reinstalling keras the results remain similar. I also checked with a different virtualenv where tensorflow is installed via pip; there is no discrepancy there either. Right now I can't reproduce this discrepancy anymore. It must have been a broken installation of tensorflow.

Related

tensorflow, compute gradients with respect to weights that come from two models (encoder, decoder)

I have an encoder model and a decoder model (RNN).
I want to compute the gradients and update the weights.
I'm somewhat confused by what I've seen so far on the web.
Which block is the best practice? Is there any difference between the two options? Gradients seem to converge faster in Block 1, and I do not know why.
# BLOCK 1, in two operations
encoder_gradients, decoder_gradients = tape.gradient(
    loss, [encoder_model.trainable_variables, decoder_model.trainable_variables])
myoptimizer.apply_gradients(zip(encoder_gradients, encoder_model.trainable_variables))
myoptimizer.apply_gradients(zip(decoder_gradients, decoder_model.trainable_variables))

# BLOCK 2, in one operation
gradients = tape.gradient(
    loss, encoder_model.trainable_variables + decoder_model.trainable_variables)
myoptimizer.apply_gradients(zip(gradients,
                                encoder_model.trainable_variables +
                                decoder_model.trainable_variables))
You can manually verify this.
First, let's simplify the model. Let the encoder and decoder both be a single dense layer. This is mostly for simplicity, and it lets you print out the weights before applying the gradients, the gradients themselves, and the weights after applying the gradients.
import tensorflow as tf
import numpy as np
from copy import deepcopy

# create a simple model with one encoder and one decoder layer.
class custom_net(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.encoder = tf.keras.layers.Dense(3, activation='relu')
        self.decoder = tf.keras.layers.Dense(3, activation='relu')

    def call(self, inp):
        return self.decoder(self.encoder(inp))

net = custom_net()

# create dummy input/output
inp = np.random.randn(1, 1)
gt = np.random.randn(1, 3)  # target shape matches the decoder output (1, 3)

# set persistent to true since we will be accessing the gradient 2 times
with tf.GradientTape(persistent=True) as tape:
    out = net(inp)
    loss = tf.keras.losses.mean_squared_error(gt, out)

# get the gradients as mentioned in the question
enc_grad, dec_grad = tape.gradient(loss,
                                   [net.encoder.trainable_variables,
                                    net.decoder.trainable_variables])
gradients = tape.gradient(loss,
                          net.encoder.trainable_variables + net.decoder.trainable_variables)
First, let's use a stateless optimizer like SGD, which updates the weights based on the following formula, and compare it to the two approaches mentioned in the question.
new_weights = weights - learning_rate * gradients.
# Block 1
myoptimizer = tf.keras.optimizers.SGD(learning_rate=1)

# store weights before updating the weights based on the gradients
old_enc_weights = deepcopy(net.encoder.get_weights())
old_dec_weights = deepcopy(net.decoder.get_weights())

myoptimizer.apply_gradients(zip(enc_grad, net.encoder.trainable_variables))
myoptimizer.apply_gradients(zip(dec_grad, net.decoder.trainable_variables))

# manually calculate the weights after gradient update
# since the learning rate is 1, new_weights = weights - grad
cal_enc_weights = []
for weights, grad in zip(old_enc_weights, enc_grad):
    cal_enc_weights.append(weights - grad)
cal_dec_weights = []
for weights, grad in zip(old_dec_weights, dec_grad):
    cal_dec_weights.append(weights - grad)

for weights, man_calc_weight in zip(net.encoder.get_weights(), cal_enc_weights):
    print(np.linalg.norm(weights - man_calc_weight))
for weights, man_calc_weight in zip(net.decoder.get_weights(), cal_dec_weights):
    print(np.linalg.norm(weights - man_calc_weight))

# block 2
old_weights = deepcopy(net.encoder.trainable_variables + net.decoder.trainable_variables)
myoptimizer.apply_gradients(zip(gradients, net.encoder.trainable_variables +
                                net.decoder.trainable_variables))
cal_weights = []
for weight, grad in zip(old_weights, gradients):
    cal_weights.append(weight - grad)

for weight, man_calc_weight in zip(net.encoder.trainable_variables + net.decoder.trainable_variables, cal_weights):
    print(np.linalg.norm(weight - man_calc_weight))
You will see that both methods update the weights in exactly the same way.
I think you used an optimizer like Adam/RMSProp, which is stateful. For such optimizers invoking apply_gradients will update the optimizer parameters based on the gradient value and sign. In the first case the optimizer parameters are updated twice, and in the second case only once.
I would stick to the second option if I were you, since you are performing just one step of optimization here.
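To make the statefulness visible, here is a minimal sketch with made-up scalar variables (TF 2.x eager mode): every apply_gradients call advances Adam's internal step counter, which enters the bias correction, so the second group of variables in Block 1 is effectively updated at step 2 rather than step 1.
import tensorflow as tf

opt_split = tf.keras.optimizers.Adam(learning_rate=0.1)
opt_joint = tf.keras.optimizers.Adam(learning_rate=0.1)

a1, a2 = tf.Variable(1.0), tf.Variable(1.0)   # stand-ins for encoder/decoder weights
b1, b2 = tf.Variable(1.0), tf.Variable(1.0)
g = tf.constant(0.5)                          # same dummy gradient everywhere

# Block 1 style: two calls, the step counter ends at 2,
# so a2 is updated with the bias correction of step 2.
opt_split.apply_gradients([(g, a1)])
opt_split.apply_gradients([(g, a2)])

# Block 2 style: one combined call, both variables are updated at step 1.
opt_joint.apply_gradients([(g, b1), (g, b2)])

print(opt_split.iterations.numpy(), opt_joint.iterations.numpy())  # 2 vs 1
print(a2.numpy(), b2.numpy())   # slightly different results for the same gradient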

tf.keras.layers.BatchNormalization with trainable=False appears to not update its internal moving mean and variance

I am trying to find out how exactly the BatchNormalization layer behaves in TensorFlow. I came up with the following piece of code which, to the best of my knowledge, should be a perfectly valid Keras model; however, the mean and variance of BatchNormalization don't appear to be updated.
From docs https://www.tensorflow.org/api_docs/python/tf/keras/layers/BatchNormalization
in the case of the BatchNormalization layer, setting trainable = False on the layer means that the layer will be subsequently run in inference mode (meaning that it will use the moving mean and the moving variance to normalize the current batch, rather than using the mean and variance of the current batch).
I expect the model to return a different value with each subsequent predict call.
What I see, however, are the exact same values returned 10 times.
Can anyone explain to me why the BatchNormalization layer does not update its internal values?
import tensorflow as tf
import numpy as np

if __name__ == '__main__':
    np.random.seed(1)
    x = np.random.randn(3, 5) * 5 + 0.3

    bn = tf.keras.layers.BatchNormalization(trainable=False, epsilon=1e-9)
    z = input = tf.keras.layers.Input([5])
    z = bn(z)
    model = tf.keras.Model(inputs=input, outputs=z)

    for i in range(10):
        print(x)
        print(model.predict(x))
        print()
I use TensorFlow 2.1.0
Okay, I found the mistake in my assumptions. The moving average is updated during training, not during inference as I thought. This makes perfect sense, as updating the moving averages during inference would likely result in an unstable production model (for example, a long sequence of highly pathological input samples [e.g. such that their generating distribution differs drastically from the one on which the network was trained] could potentially bias the network and result in worse performance on valid input samples).
The trainable parameter is useful when you're fine-tuning a pretrained model and want to freeze some of the layers of the network even during training. This is because when you call model.predict(x) (or even model(x) or model(x, training=False)), the layer automatically uses the moving averages instead of the batch averages anyway.
The code below demonstrates this clearly
import tensorflow as tf
import numpy as np

if __name__ == '__main__':
    np.random.seed(1)
    x = np.random.randn(10, 5) * 5 + 0.3

    z = input = tf.keras.layers.Input([5])
    z = tf.keras.layers.BatchNormalization(trainable=True, epsilon=1e-9, momentum=0.99)(z)
    model = tf.keras.Model(inputs=input, outputs=z)
    # a dummy loss function
    model.compile(loss=lambda x, y: (x - y) ** 2)

    # a dummy fit just to update the batchnorm moving averages
    model.fit(x, x, batch_size=3, epochs=10)

    # first predict uses the moving averages from training
    pred = model(x).numpy()
    print(pred.mean(axis=0))
    print(pred.var(axis=0))
    print()

    # outputs the same thing as previous predict
    pred = model(x).numpy()
    print(pred.mean(axis=0))
    print(pred.var(axis=0))
    print()

    # here calling the model with training=True results in update of moving averages
    # furthermore, it uses the batch mean and variance as in training,
    # so the result is very different
    pred = model(x, training=True).numpy()
    print(pred.mean(axis=0))
    print(pred.var(axis=0))
    print()

    # here we see again that the moving averages are used but they differ slightly after
    # the previous call, as expected
    pred = model(x).numpy()
    print(pred.mean(axis=0))
    print(pred.var(axis=0))
    print()
In the end, I found that the documentation (https://www.tensorflow.org/api_docs/python/tf/keras/layers/BatchNormalization) mentions this:
When performing inference using a model containing batch normalization, it is generally (though not always) desirable to use accumulated statistics rather than mini-batch statistics. This is accomplished by passing training=False when calling the model, or using model.predict.
Hopefully this will help someone with a similar misunderstanding in the future.

Shouldn't same neural network weights produce same results?

So I am working with different deep learning frameworks as part of my research and have observed something weird (at least I cannot explain the cause of it).
I trained a fairly simple MLP model (on the mnist dataset) in Tensorflow, extracted the trained weights, created the same model architecture in PyTorch and applied the trained weights to the PyTorch model. My expectation was to get the same test accuracy from both the Tensorflow and PyTorch models, but this isn't the case; I get different results.
So my question is: if a model is trained to some optimal value, shouldn't the trained weights produce the same results every time testing is done on the same dataset (regardless of the framework used)?
PyTorch Model:
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch import Tensor

class Net(nn.Module):
    def __init__(self) -> None:
        super(Net, self).__init__()
        self.fc1 = nn.Linear(784, 24)
        self.fc2 = nn.Linear(24, 10)

    def forward(self, x: Tensor) -> Tensor:
        x = torch.flatten(x, 1)
        x = F.relu(self.fc1(x))
        x = self.fc2(x)
        return x
Tensorflow Model:
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model() -> tf.keras.Model:
    # Build model layers
    model = models.Sequential()
    # Flatten layer
    model.add(layers.Flatten(input_shape=(28, 28)))
    # Fully connected layers
    model.add(layers.Dense(24, activation='relu'))
    model.add(layers.Dense(10))
    # compile the model
    model.compile(
        optimizer='sgd',
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=['accuracy']
    )
    # return newly built model
    return model
To extract weights from the Tensorflow model and apply them to the PyTorch model I use the following functions:
Extract Weights:
def get_weights(model):
    # fetch latest weights
    weights = model.get_weights()
    # transpose weights: Keras Dense kernels are stored as (in, out) while
    # nn.Linear stores (out, in); biases are 1-D, so transposing them is a no-op
    t_weights = []
    for w in weights:
        t_weights.append(np.transpose(w))
    # return
    return t_weights
Apply Weights:
from collections import OrderedDict
import torch

def set_weights(model, weights):
    """Set model weights from a list of NumPy ndarrays."""
    state_dict = OrderedDict(
        {k: torch.Tensor(v) for k, v in zip(model.state_dict().keys(), weights)}
    )
    model.load_state_dict(state_dict, strict=True)
Providing the solution in the answer section for the benefit of the community. From the comments:
If you are using the same weights in the same manner then the results should be the same, though float rounding error should also be accounted for. Also, it doesn't matter whether the model is trained at all. You can think of your model architecture as a chain of matrix multiplications with element-wise nonlinearities in between. How big is the difference? Are you comparing model outputs, or metrics computed over the dataset? As a suggestion, initialize the model with some random values in Keras and do a forward pass for a single batch. (paraphrased from jdehesa and Taras Sereda)
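As a concrete version of that suggestion, here is a minimal sketch (assuming the Net, build_model, get_weights and set_weights definitions from the question) that compares a single forward pass in both frameworks on one dummy batch:
import numpy as np
import torch

tf_model = build_model()
torch_model = Net()
set_weights(torch_model, get_weights(tf_model))   # copy the (transposed) Keras weights

x = np.random.rand(4, 28, 28).astype(np.float32)  # one dummy batch
tf_logits = tf_model.predict(x)
torch_logits = torch_model(torch.from_numpy(x)).detach().numpy()

# Any difference should be on the order of float32 rounding error.
print(np.abs(tf_logits - torch_logits).max())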

How to properly quantize CNN into 4-bit using Tensorflow QAT?

I am trying to perform 4-bit quantization and used this example
First of all I received the following warnings:
WARNING:tensorflow:AutoGraph could not transform <bound method Default8BitQuantizeConfig.set_quantize_activations of <tensorflow_model_optimization.python.core.quantization.keras.default_8bit.default_8bit_quantize_registry.Default8BitQuantizeConfig object at 0x7fb0208015c0>> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: expected an indented block (<unknown>, line 14)
WARNING: AutoGraph could not transform <bound method Default8BitQuantizeConfig.set_quantize_activations of <tensorflow_model_optimization.python.core.quantization.keras.default_8bit.default_8bit_quantize_registry.Default8BitQuantizeConfig object at 0x7fb020806550>> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: expected an indented block (<unknown>, line 14)
Then, after reading this doc, I found that it is possible to quantize my network to 4 bits, but I couldn't understand whether this is possible only for Dense layers or for all layers (like Conv2D).
I also don't understand how to work with the weights, since numpy can only work with float32.
UPD: I finally figured out how to perform quantization-aware training:
LastValueQuantizer = tfmot.quantization.keras.quantizers.LastValueQuantizer
MovingAverageQuantizer = tfmot.quantization.keras.quantizers.MovingAverageQuantizer

class DefaultDenseQuantizeConfig(tfmot.quantization.keras.QuantizeConfig):
    # Configure how to quantize weights.
    def get_weights_and_quantizers(self, layer):
        return [(layer.kernel, LastValueQuantizer(num_bits=4, symmetric=True, narrow_range=False, per_axis=False))]

    # Configure how to quantize activations.
    def get_activations_and_quantizers(self, layer):
        return [(layer.activation, MovingAverageQuantizer(num_bits=4, symmetric=False, narrow_range=False, per_axis=False))]

    def set_quantize_weights(self, layer, quantize_weights):
        # Add this line for each item returned in `get_weights_and_quantizers`,
        # in the same order
        layer.kernel = quantize_weights[0]

    def set_quantize_activations(self, layer, quantize_activations):
        # Add this line for each item returned in `get_activations_and_quantizers`,
        # in the same order.
        layer.activation = quantize_activations[0]

    # Configure how to quantize outputs (may be equivalent to activations).
    def get_output_quantizers(self, layer):
        return []

    def get_config(self):
        return {}

QAT_model = tfmot.quantization.keras.quantize_annotate_model(keras.Sequential([
    tfmot.quantization.keras.quantize_annotate_layer(
        tf.keras.layers.Dense(2, activation='relu', input_shape=x_train.shape[1:]),
        DefaultDenseQuantizeConfig()),
    tfmot.quantization.keras.quantize_annotate_layer(
        tf.keras.layers.Dense(2, activation='relu'),
        DefaultDenseQuantizeConfig()),
    tfmot.quantization.keras.quantize_annotate_layer(
        tf.keras.layers.Dense(10, activation='softmax'),
        DefaultDenseQuantizeConfig())
]))

with tfmot.quantization.keras.quantize_scope(
        {'DefaultDenseQuantizeConfig': DefaultDenseQuantizeConfig}):
    # Use `quantize_apply` to actually make the model quantization aware.
    quantized_model = tfmot.quantization.keras.quantize_apply(QAT_model)

quantized_model.summary()

quantized_model.compile(optimizer='adam',  # Good default optimizer to start with
                        loss='sparse_categorical_crossentropy',  # how will we calculate our "error." Neural network aims to minimize loss.
                        metrics=['accuracy'])  # what to track

quantized_model.fit(x_train, y_train, epochs=3)

val_loss, val_acc = quantized_model.evaluate(x_test, y_test)
But I still can't understand how to access the 4-bit quantized weights.
I used np.array(quantized_model.get_weights()), but of course it gave me float32; moreover, the number of elements in the quantized array is smaller than in the original model. How can this be explained?

Can not save model using model.save following multi_gpu_model in Keras

Following the upgrade to Keras 2.0.9, I have been using the multi_gpu_model utility but I can't save my models or best weights using
model.save('path')
The error I get is
TypeError: can’t pickle module objects
I suspect there is some problem gaining access to the model object. Is there a work around this issue?
To be honest, the easiest approach to this is to actually examine the multi gpu parallel model using
parallel_model.summary()
(The parallel model is simply the model after applying the multi_gpu function). This clearly highlights the actual model (in what I think is the penultimate layer - I am not at my computer right now). Then you can use the name of this layer to save the model.
model = parallel_model.get_layer('sequential_1')
Often it's called sequential_1, but if you are using a published architecture, it may be 'googlenet' or 'alexnet'. You will see the name of the layer in the summary.
Then it's simple to just save
model.save()
Maxim's approach works, but it's overkill I think.
Rem: you will need to compile both the model and the parallel model.
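Putting the pieces together, a sketch of the whole approach (the layer name 'sequential_1' is an assumption, use whatever parallel_model.summary() reports; the loss, optimizer and training data are placeholders):
parallel_model.summary()                         # find the name of the wrapped model
inner_model = parallel_model.get_layer('sequential_1')

# compile both the parallel model (for training) and the inner model
# (so the saved file also carries a training configuration)
parallel_model.compile(loss='mse', optimizer='adam')
inner_model.compile(loss='mse', optimizer='adam')

parallel_model.fit(x_train, y_train, epochs=10)  # train on multiple GPUs
inner_model.save('single_gpu_model.h5')          # save the single-GPU model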
Workaround
Here's a patched version that doesn't fail while saving:
from keras.layers import Lambda, concatenate
from keras import Model
import tensorflow as tf

def multi_gpu_model(model, gpus):
    if isinstance(gpus, (list, tuple)):
        num_gpus = len(gpus)
        target_gpu_ids = gpus
    else:
        num_gpus = gpus
        target_gpu_ids = range(num_gpus)

    def get_slice(data, i, parts):
        shape = tf.shape(data)
        batch_size = shape[:1]
        input_shape = shape[1:]
        step = batch_size // parts
        if i == num_gpus - 1:
            size = batch_size - step * i
        else:
            size = step
        size = tf.concat([size, input_shape], axis=0)
        stride = tf.concat([step, input_shape * 0], axis=0)
        start = stride * i
        return tf.slice(data, start, size)

    all_outputs = []
    for i in range(len(model.outputs)):
        all_outputs.append([])

    # Place a copy of the model on each GPU,
    # each getting a slice of the inputs.
    for i, gpu_id in enumerate(target_gpu_ids):
        with tf.device('/gpu:%d' % gpu_id):
            with tf.name_scope('replica_%d' % gpu_id):
                inputs = []
                # Retrieve a slice of the input.
                for x in model.inputs:
                    input_shape = tuple(x.get_shape().as_list())[1:]
                    slice_i = Lambda(get_slice,
                                     output_shape=input_shape,
                                     arguments={'i': i,
                                                'parts': num_gpus})(x)
                    inputs.append(slice_i)

                # Apply model on slice
                # (creating a model replica on the target device).
                outputs = model(inputs)
                if not isinstance(outputs, list):
                    outputs = [outputs]

                # Save the outputs for merging back together later.
                for o in range(len(outputs)):
                    all_outputs[o].append(outputs[o])

    # Merge outputs on CPU.
    with tf.device('/cpu:0'):
        merged = []
        for name, outputs in zip(model.output_names, all_outputs):
            merged.append(concatenate(outputs,
                                      axis=0, name=name))
        return Model(model.inputs, merged)
You can use this multi_gpu_model function, until the bug is fixed in keras. Also, when loading the model, it's important to provide the tensorflow module object:
model = load_model('multi_gpu_model.h5', {'tf': tf})
How it works
The problem is with the import tensorflow line in the middle of multi_gpu_model:
def multi_gpu_model(model, gpus):
    ...
    import tensorflow as tf
    ...
This creates a closure for the get_slice lambda function, which includes the number of gpus (that's ok) and the tensorflow module (not ok). Saving the model tries to serialize all layers, including the ones that call get_slice, and it fails exactly because tf is in the closure.
The solution is to move the import out of multi_gpu_model, so that tf becomes a global object, though it is still needed for get_slice to work. This fixes the saving problem, but when loading the model one has to provide tf explicitly.
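The mechanism can be reproduced in isolation (a generic illustration, not Keras's exact serialization path): a function defined inside multi_gpu_model captures the locally imported tf module in a closure cell, and a module object cannot be pickled.
import pickle

def outer():
    import tensorflow as tf          # local import ends up in the closure
    def get_slice(data):
        return tf.identity(data)
    return get_slice

fn = outer()
cell = fn.__closure__[0]             # the captured 'tf' module
try:
    pickle.dumps(cell.cell_contents)
except TypeError as e:
    print(e)                         # e.g. "cannot pickle 'module' object"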
It's something that needs a little workaround: loading the multi_gpu_model weights into the regular model's weights.
e.g.
#1, instantiate your base model on a cpu
with tf.device("/cpu:0"):
    model = create_model()

#2, put your model to multiple gpus, say 2
multi_model = multi_gpu_model(model, 2)

#3, compile both models
model.compile(loss=your_loss, optimizer=your_optimizer(lr))
multi_model.compile(loss=your_loss, optimizer=your_optimizer(lr))

#4, train the multi gpu model
# multi_model.fit() or multi_model.fit_generator()

#5, save weights
model.set_weights(multi_model.get_weights())
model.save(filepath=filepath)
Reference: https://github.com/fchollet/keras/issues/8123