When I train a Keras model using QAT (quantization-aware training),
there are compatibility problems, such as unsupported layers like BatchNormalization or UpSampling2D.
How can I avoid this without wrapping every single layer in tfmot.quantization.keras.quantize_annotate_layer, especially when the model is built with the TensorFlow Keras functional API instead of tf.keras.Sequential?
The layers supported by the QAT module can be found here.
Then, to quantize only some layers instead of the whole model, follow the official tutorial and add the layer types you want to quantize:
import tensorflow as tf
import tensorflow_model_optimization as tfmot

# add the layer types you want to quantize here
supported_layers = [tf.keras.layers.Conv2D, tf.keras.layers.Dense, tf.keras.layers.ReLU]

def apply_quantization_to_supported_layers(layer):
    # annotate the layer only if its type is in the supported list
    for supported_layer in supported_layers:
        if isinstance(layer, supported_layer):
            return tfmot.quantization.keras.quantize_annotate_layer(layer)
    return layer
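To actually apply the annotations to a functional-API model, the official tutorial clones the model with this function and then calls quantize_apply. A minimal sketch (base_model stands in for your own model):

# assumes base_model is your existing functional-API Keras model
annotated_model = tf.keras.models.clone_model(
    base_model,
    clone_function=apply_quantization_to_supported_layers,
)

# quantize_apply turns the annotated layers into quantization-aware layers
qat_model = tfmot.quantization.keras.quantize_apply(annotated_model)
qat_model.summary()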
Related
I have a pretrained Keras model (model.h5), and I want to prune it with TensorFlow's magnitude-based weight pruning for Keras. One catch is that my pretrained model was built with the original Keras package, not with tensorflow.keras. The magnitude-based weight pruning example shows how to do it with a tensorflow.keras model, so I want to ask: can I use their tool to prune my original Keras pretrained model?
Inside their weight pruning toolkit there are two ways: one prunes the model layer by layer while training, and the other prunes the whole model. I tried the second way, pruning the whole pretrained model. Below is my code.
import numpy as np
import tensorflow_model_optimization as tfm  # assumed alias for the model-optimization toolkit

model = get_training_model(weight_decay)
model.load_weights('model/keras/model.h5')
model.summary()

epochs = 1
end_step = np.ceil(1.0 * 100 / 2).astype(np.int32) * epochs
print(end_step)

new_pruning_params = {
    'pruning_schedule': tfm.sparsity.keras.PolynomialDecay(initial_sparsity=0.1,
                                                           final_sparsity=0.90,
                                                           begin_step=40,
                                                           end_step=end_step,
                                                           frequency=30)
}

new_pruned_model = tfm.sparsity.keras.prune_low_magnitude(model, **new_pruning_params)
print(new_pruned_model.summary())
For my original pretrained model, I can load the weights from model.h5 and call model.summary(). But after I apply prune_low_magnitude(), none of the model's methods can be called, including model.summary(), and I get an error like:
AttributeError: 'NoneType' object has no attribute 'summary'
I hope this answer still helps: I recently had the same issue where prune_low_magnitude() returned an object of type 'None', and new_pruned_model.compile() would not work either.
The model I had been using was a pretrained model that could be imported from tensorflow.python.keras.applications.
For me this worked:
(0) Import the libraries:
from tensorflow_model_optimization.python.core.api.sparsity import keras as sparsity
from tensorflow.python.keras.applications.<network_type> import <network_type>
(1) Define the pretrained model architecture
# define model architecture
loaded_model = <model_type>()
loaded_model.summary()
(2) Compile the model architecture and load the pretrained weights
# compile model
from tensorflow.python.keras.optimizers import SGD  # assumed import location; adjust to your setup

opt = SGD(lr=learn_rate, momentum=momentum)
loaded_model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])
loaded_model.load_weights('weight_file.h5')
(3) set pruning parameters and assign pruning schedule
# set pruning parameters
pruning_params = {
    'pruning_schedule': sparsity.PolynomialDecay(...)
}

# assign pruning schedule
model_pruned = sparsity.prune_low_magnitude(loaded_model, **pruning_params)
(4) compile model and show summary
# compile model
model_pruned.compile(
    loss=tf.keras.losses.categorical_crossentropy,
    optimizer='SGD',
    metrics=['accuracy'])

model_pruned.summary()
It was important to import the libraries specifically from tensorflow.python.keras and use this keras model from the TensorFlow library.
Also, it was important to use the TensorFlow beta release (pip install tensorflow==2.0.0b1); otherwise prune_low_magnitude would still return an object of type 'None'.
I am using PyCharm 2019.1.3 (x64) as IDE. Here is the link that led me to this solution: https://github.com/tensorflow/model-optimization/issues/12#issuecomment-526338458
Is it possible to define a graph in native TensorFlow and then convert this graph to a Keras model?
My intention is simply combining (for me) the best of the two worlds.
I really like the Keras model API for prototyping and new experiments, i.e. using the awesome multi_gpu_model(model, gpus=4) for training with multiple GPUs, saving/loading weights or whole models with oneliners, all the convenience functions like .fit(), .predict(), and others.
However, I prefer to define my model in native TensorFlow. Context managers in TF are awesome and, in my opinion, it is much easier to implement stuff like GANs with them:
with tf.variable_scope("Generator"):
    # define some layers

with tf.variable_scope("Discriminator"):
    # define some layers

# model losses
G_train_op = ...AdamOptimizer(...) \
    .minimize(gloss,
              var_list=tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES,
                                         scope="Generator"))
D_train_op = ...AdamOptimizer(...) \
    .minimize(dloss,
              var_list=tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES,
                                         scope="Discriminator"))
Another bonus is structuring the graph this way. In TensorBoard, debugging complicated native Keras models is hell since they are not structured at all. With heavy use of variable scopes in native TF, you can "disentangle" the graph and look at a well-structured version of a complicated model while debugging.
By utilizing this I can directly set up custom loss functions and do not have to freeze anything in every training iteration, since TF will only update the weights in the correct scope, which is (at least in my opinion) far easier than the Keras solution of looping over all the existing layers and setting .trainable = False.
TL;DR:
Long story short: I like the direct access to everything in TF, but most of the time a simple Keras model is sufficient for training, inference, ... later on. The model API is much easier and more convenient in Keras.
Hence, I would prefer to set up a graph in native TF and convert it to Keras for training, evaluation, and so on. Is there any way to do this?
I don't think it is possible to create a generic automated converter for an arbitrary TF graph that would come up with a meaningful set of layers, proper naming, etc., simply because graphs are more flexible than a sequence of Keras layers.
However, you can wrap your model with the Lambda layer. Build your model inside a function, wrap it with Lambda, and you have it in Keras:
import tensorflow as tf
from keras.models import Sequential
from keras.layers import Lambda

def model_fn(x):
    layer_1 = tf.layers.dense(x, 100)
    layer_2 = tf.layers.dense(layer_1, 100)
    out_layer = tf.layers.dense(layer_2, num_classes)
    return out_layer

model = Sequential()
model.add(Lambda(model_fn, input_shape=(num_features,)))  # num_classes / num_features defined elsewhere
That is what sometimes happens when you use multi_gpu_model: You come up with three layers: Input, model, and Output.
Keras Apologetics
However, the integration between TensorFlow and Keras can be much tighter and more meaningful. See this tutorial for use cases.
For instance, variable scopes can be used pretty much like in TensorFlow:
x = tf.placeholder(tf.float32, shape=(None, 20, 64))
with tf.name_scope('block1'):
    y = LSTM(32, name='mylstm')(x)
The same for manual device placement:
with tf.device('/gpu:0'):
    x = tf.placeholder(tf.float32, shape=(None, 20, 64))
    y = LSTM(32)(x)  # all ops / variables in the LSTM layer will live on GPU:0
Custom losses are discussed here: Keras: clean implementation for multiple outputs and custom loss functions?
This is how my model defined in Keras looks in Tensorboard:
So, Keras is indeed only a simplified frontend to TensorFlow, and you can mix them quite flexibly. I would recommend inspecting the source code of the Keras model zoo for clever solutions and patterns that allow you to build complex models using the clean Keras API.
You can insert TensorFlow code directly into your Keras model or training pipeline! Since mid-2017, Keras has been fully adopted into and integrated with TensorFlow. This article goes into more detail.
This means that your TensorFlow model is already a Keras model and vice versa. You can develop in Keras and switch to TensorFlow whenever you need to. TensorFlow code will work with Keras APIs, including Keras APIs for training, inference and saving your model.
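As a minimal sketch of that interoperability (the layer sizes and the l2-normalization step are just illustrative assumptions):

import tensorflow as tf

# build a model with the tf.keras functional API
inputs = tf.keras.Input(shape=(20,))
x = tf.keras.layers.Dense(64, activation='relu')(inputs)

# drop down to plain TensorFlow ops inside the model via a Lambda layer
x = tf.keras.layers.Lambda(lambda t: tf.math.l2_normalize(t, axis=-1))(x)

outputs = tf.keras.layers.Dense(10, activation='softmax')(x)
model = tf.keras.Model(inputs, outputs)

# the same object works with the Keras training/saving APIs
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
model.save('mixed_model.h5')  # hypothetical filename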
Is there a way to load a pretrained model in Tensorflow and remove the top layers in the network? I am looking at Tensorflow release r1.10
The only documentation I could find is with tf.keras.Sequential.pop
https://www.tensorflow.org/versions/r1.10/api_docs/python/tf/keras/Sequential#pop
I want to manually prune a pretrained network by removing a bunch of the top convolution layers and adding a custom fully convolutional layer.
EDIT:
The model is ssd_mobilenet_v1_coco downloaded from Tensorflow Model Zoo. I have access to both the frozen_inference_graph.pb model file and checkpoint file.
I do not have access to the Python code that was used to construct the model.
Thanks.
From inspecting the code, SSDMobileNetV1FeatureExtractor.extract_features delegates to research/slim's nets:
from nets import mobilenet_v1  # nets will have to be on your PYTHONPATH

with tf.variable_scope('MobilenetV1',
                       reuse=self._reuse_weights) as scope:
  with slim.arg_scope(
      mobilenet_v1.mobilenet_v1_arg_scope(
          is_training=None, regularize_depthwise=True)):
    with (slim.arg_scope(self._conv_hyperparams_fn())
          if self._override_base_feature_extractor_hyperparams
          else context_manager.IdentityContextManager()):
      _, image_features = mobilenet_v1.mobilenet_v1_base(
          ops.pad_to_multiple(preprocessed_inputs, self._pad_to_multiple),
          final_endpoint='Conv2d_13_pointwise',
          min_depth=self._min_depth,
          depth_multiplier=self._depth_multiplier,
          use_explicit_padding=self._use_explicit_padding,
          scope=scope)
The mobilenet_v1_base function takes a final_endpoint argument. Rather than prune the constructed graph, just construct the graph up until the endpoint you want.
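A minimal sketch of that idea (the placeholder shape, the chosen endpoint 'Conv2d_11_pointwise', and the 1x1 head are assumptions to adapt to your case):

import tensorflow as tf
from nets import mobilenet_v1  # research/slim must be on your PYTHONPATH

images = tf.placeholder(tf.float32, shape=(None, 224, 224, 3))

with tf.variable_scope('MobilenetV1'):
    # stop building the backbone at an earlier endpoint instead of
    # the default 'Conv2d_13_pointwise'
    net, end_points = mobilenet_v1.mobilenet_v1_base(
        images, final_endpoint='Conv2d_11_pointwise')

# add your own fully convolutional head on top of `net`
logits = tf.layers.conv2d(net, filters=num_classes, kernel_size=1)  # num_classes defined elsewhere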
I am using Keras to build a multi-output classification model. My dataset rows look like
[x1,x2,x3,x4,y1,y2,y3]
where x1, x2, x3, x4 are the features and y1, y2, y3 are the labels; each of y1, y2, y3 is multi-class.
I already built a model (I omit some hidden layers):
def baseline_model(input_dim=23, output_dim=3):
    model_in = Input(shape=(input_dim,))
    model = Dense(input_dim*5, kernel_initializer='uniform', input_dim=input_dim)(model_in)
    model = Activation(activation='relu')(model)
    model = Dropout(0.5)(model)
    ...................
    model = Dense(output_dim, kernel_initializer='uniform')(model)
    model = Activation(activation='sigmoid')(model)
    model = Model(model_in, model)
    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
    return model
Then I try to wrap it with KerasClassifier so it supports classification:
estimator = KerasClassifier(build_fn=baseline_model)
estimator.fit()
estimator.predict(df[0:10])
But I found that the result is not multi-output; only one dimension is output:
[0,0,0,0,0,0,0,0,0,0]
So for a multi-output classification problem, can we not use the KerasClassifier wrapper to learn it?
You do not need to wrap the model in KerasClassifier. That wrapper is so that you can use the Keras model with Scikit-Learn. The type of model (classifier, regression, multiclass classifier, etc) is ultimately determined by the shape and activation of the final layer of your model.
You can simply use the model.fit() function that is part of Keras. Make sure that you pass the data into the function. You can see more info on the fit function here: https://keras.io/models/model/#fit
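For example, a minimal sketch of fitting and predicting directly (assuming a pandas DataFrame df laid out as in the question; the column names are assumptions):

# hypothetical feature / label columns, following the question's layout
X = df[['x1', 'x2', 'x3', 'x4']].values
Y = df[['y1', 'y2', 'y3']].values          # one column per label

model = baseline_model(input_dim=X.shape[1], output_dim=Y.shape[1])
model.fit(X, Y, epochs=10, batch_size=32, validation_split=0.1)

preds = model.predict(X[:10])              # shape (10, 3): one score per label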
Also, your loss is set up as binary_crossentropy. For a multi-class problem you will want to use categorical_crossentropy:
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
This model isn't really what Keras refers to as multi-output, as far as I can tell. With multi-output you are trying to get the output from several different layers and possibly apply different loss functions to them, as in the sketch below.
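A minimal sketch of a genuinely multi-output functional model (the layer sizes and the assumption of three classes per head are illustrative):

from keras.models import Model
from keras.layers import Input, Dense

inputs = Input(shape=(4,))
shared = Dense(32, activation='relu')(inputs)

# one output layer (head) per label, each with its own softmax
y1_out = Dense(3, activation='softmax', name='y1')(shared)
y2_out = Dense(3, activation='softmax', name='y2')(shared)
y3_out = Dense(3, activation='softmax', name='y3')(shared)

model = Model(inputs, [y1_out, y2_out, y3_out])

# a separate (possibly different) loss can be assigned to each output
model.compile(optimizer='adam',
              loss={'y1': 'categorical_crossentropy',
                    'y2': 'categorical_crossentropy',
                    'y3': 'categorical_crossentropy'},
              metrics=['accuracy'])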
Based on the setup in your question, you could use the Keras Sequential model instead of the functional model if you wanted. Keras recommends the Sequential model when it suffices because it's simpler. https://keras.io/getting-started/sequential-model-guide/
I'm working on facial expression recognition using CNN. I'm using Keras and Tensorflow as backend. My model is saved to h5 format.
I want to retrain my network, and fine-tune my model with the VGG model.
How can I do that with Keras?
Save your model's architecture and weights:
json_string = model.to_json()
model.save_weights('model_weights.h5')
Load model architecture and weights:
from keras.models import model_from_json
model = model_from_json(json_string)
model.load_weights('model_weights.h5')
Start training again from here for fine-tuning, as sketched below. I hope this helps.
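A minimal fine-tuning sketch (which layers to freeze, the learning rate, and the training data variables are all assumptions to adapt to your own setup):

from keras.optimizers import Adam

# freeze everything except the last few layers
for layer in model.layers[:-4]:
    layer.trainable = False

# recompile with a small learning rate for fine-tuning
model.compile(optimizer=Adam(lr=1e-4),
              loss='categorical_crossentropy',
              metrics=['accuracy'])

model.fit(x_train, y_train, epochs=5, batch_size=32)  # x_train / y_train: your new data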
You can use the Keras model.save(filepath) function.
Details for the various Keras saving and loading techniques are discussed with examples in this YouTube video: Save and load a Keras model
model.save(filepath) saves:
The architecture of the model, allowing you to re-create the model.
The weights of the model.
The training configuration (loss, optimizer).
The state of the optimizer, allowing you to resume training exactly where you left off.
To load this saved model, you would use the following:
from keras.models import load_model
new_model = load_model(filepath)
If you used model.to_json(), you would only be saving the architecture of the model. Additionally, if you used model.save_weights(), you would only be saving the weights of the model. With both of these alternative saving techniques, you would not be saving the training configuration (loss, optimizer), nor would you be saving the state of the optimizer.
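A small sketch of that restriction (the filenames are placeholders): after rebuilding from the JSON architecture and the weights alone, you have to compile again before you can resume training.

from keras.models import model_from_json

# rebuild the architecture and reload the weights only
model = model_from_json(json_string)
model.load_weights('model_weights.h5')

# the compile settings and optimizer state were not saved,
# so they have to be specified again
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])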