TensorFlow: remove layers from a pretrained model

Is there a way to load a pretrained model in TensorFlow and remove the top layers of the network? I am looking at TensorFlow release r1.10.
The only documentation I could find is for tf.keras.Sequential.pop:
https://www.tensorflow.org/versions/r1.10/api_docs/python/tf/keras/Sequential#pop
I want to manually prune a pretrained network by removing a bunch of the top convolution layers and adding a custom fully convolutional layer.
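For reference, the pop approach from the linked docs looks like this on a tf.keras.Sequential model (a toy illustration; it does not apply to a frozen graph, which is my situation as described in the edit below):
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu', input_shape=(32,)),
    tf.keras.layers.Dense(10),
])
model.pop()               # removes the top Dense(10) layer in place
print(len(model.layers))  # -> 1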
EDIT:
The model is ssd_mobilenet_v1_coco, downloaded from the TensorFlow model zoo. I have access to both the frozen_inference_graph.pb model file and the checkpoint files.
I do not have access to the Python code that was used to construct the model.
Thanks.

From inspecting the code, SSDMobileNetV1FeatureExtractor.extract_features delegates to research/slim's nets module:
from nets import mobilenet_v1  # nets must be on your PYTHONPATH

with tf.variable_scope('MobilenetV1',
                       reuse=self._reuse_weights) as scope:
  with slim.arg_scope(
      mobilenet_v1.mobilenet_v1_arg_scope(
          is_training=None, regularize_depthwise=True)):
    with (slim.arg_scope(self._conv_hyperparams_fn())
          if self._override_base_feature_extractor_hyperparams
          else context_manager.IdentityContextManager()):
      _, image_features = mobilenet_v1.mobilenet_v1_base(
          ops.pad_to_multiple(preprocessed_inputs, self._pad_to_multiple),
          final_endpoint='Conv2d_13_pointwise',
          min_depth=self._min_depth,
          depth_multiplier=self._depth_multiplier,
          use_explicit_padding=self._use_explicit_padding,
          scope=scope)
The mobilenet_v1_base function takes a final_endpoint argument. Rather than prune the constructed graph, just construct the graph up until the endpoint you want.
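A minimal sketch of that idea (TF1 / r1.10-era slim API, assuming tensorflow/models research/slim is on your PYTHONPATH): build the base only up to an earlier endpoint and put a custom head on top.
import tensorflow as tf
from nets import mobilenet_v1

images = tf.placeholder(tf.float32, [None, 300, 300, 3])
# Stop construction at an earlier layer instead of 'Conv2d_13_pointwise'.
net, end_points = mobilenet_v1.mobilenet_v1_base(
    images, final_endpoint='Conv2d_11_pointwise')

# Custom fully convolutional head on the truncated base.
logits = tf.layers.conv2d(net, filters=10, kernel_size=1, name='custom_head')

# Restore the surviving variables from the model-zoo checkpoint. Note: in the
# ssd_mobilenet_v1_coco checkpoint the variable names carry a
# 'FeatureExtractor/' prefix, so a name map like the one below may be needed
# (an assumption; inspect the checkpoint to confirm).
var_map = {'FeatureExtractor/' + v.op.name: v
           for v in tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES,
                                      scope='MobilenetV1')}
saver = tf.train.Saver(var_map)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    saver.restore(sess, 'model.ckpt')  # checkpoint shipped with the zoo model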

Related

Training a keras model on pretrained weights using load_weights()

I am using a custom Keras model in a Databricks environment.
For a custom Keras model, model.save('model.h5') does not work, because a custom model is not serializable; the recommended alternative is model.save_weights(path).
model.save_weights(pathDirectory) works. It yields three files in pathDirectory: checkpoint, .data-00000-of-00001, and .index.
For loading the weights, the following mechanism works fine:
model = Model()
model.load_weights(path)
But I want to continue training my model on the pretrained weights I just saved: save the weights, then resume training from them afterwards.
So when I load the model weights and run the training loop, I get this error: TypeError: 'CheckpointLoadStatus' object is not callable
After much research, I found a workaround: we can also save the model using model.save('model.h5') and read the saved model back in Databricks.
This does not work for customized models, but it works for standard models.
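A likely cause of the TypeError above (a hedged guess from the error message, not confirmed in the question): model.load_weights() returns a CheckpointLoadStatus object, so code that reassigns the model to that return value and then calls it in the training loop raises exactly this error. A minimal sketch, where x_train, y_train, and the compile settings are placeholders:
model = Model()
status = model.load_weights(path)  # do NOT write: model = model.load_weights(path)
# status.assert_consumed()         # optional sanity check on the restore
model.compile(optimizer='adam', loss='mse')      # placeholder settings
model.fit(x_train, y_train, epochs=5)            # continue training on the restored weights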

Freezing BERT layers after importing via TF-hub and training them?

I will describe my intention here. I want to import BERT pretrained model via tf-hub function hub.module(bert_url, trainable = True) and utilize it for text classification task. I plan to use a large corpus to fine-tune weights of BERT as well as a few dense layers whose inputs are the BERT outputs. I would then like to freeze layers of BERT and train only the dense layers following BERT. How can I do this efficiently?
You mention Hub's TF1 API hub.Module, so I suppose you are writing TF1 code and using the TF1-compatible Hub assets under google/bert/..., such as https://tfhub.dev/google/bert_cased_L-12_H-768_A-12/1.
Are you going to have separate runs of your program for the two phases of training? If so, maybe you can just drop trainable=True from the hub.Module call in the second run. This doesn't affect variable names, so you can restore the training result from the first run, including BERT's adjusted weights. (To be clear: the pre-trained weights shipped with the hub.Module are only used for initialization at the very start of training; restoring a checkpoint overrides them.)
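A minimal sketch of that two-run setup (an illustration, not tested code; the placeholder inputs and the two-unit dense head stand in for your own pipeline):
import tensorflow as tf
import tensorflow_hub as hub

BERT_URL = "https://tfhub.dev/google/bert_cased_L-12_H-768_A-12/1"

# Run 1: fine-tune BERT together with the dense head.
bert = hub.Module(BERT_URL, trainable=True)
# Run 2 (a separate execution): drop trainable=True so BERT stays frozen;
# variable names are unchanged, so the run-1 checkpoint restores cleanly.
# bert = hub.Module(BERT_URL)

bert_inputs = dict(
    input_ids=tf.placeholder(tf.int32, [None, 128]),
    input_mask=tf.placeholder(tf.int32, [None, 128]),
    segment_ids=tf.placeholder(tf.int32, [None, 128]))
pooled_output = bert(bert_inputs, signature="tokens",
                     as_dict=True)["pooled_output"]
logits = tf.layers.dense(pooled_output, units=2)  # dense head, trained in both runs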

Can I prune a Keras pretrained model with the TensorFlow Keras model optimization toolkit?

I have a Keras pretrained model (model.h5), and I want to prune it with TensorFlow's magnitude-based weight pruning for Keras. One curious thing is that my pretrained model is built with the original Keras package, i.e. not tensorflow.keras. The magnitude-based weight pruning example shows how to do this with a tensorflow.keras model. Can I use their toolkit to prune my original Keras pretrained model?
Inside their weight pruning toolkit there are two ways: one prunes the model layer by layer while training, and the other prunes the whole model. I tried the second way, pruning the whole pretrained model. Below is my code.
For my original pretrained model, I load the weights from model.h5 and can call model.summary(). But after I apply prune_low_magnitude(), none of the model's methods can be called, including model.summary(), and I get an error like AttributeError: 'NoneType' object has no attribute 'summary'
model = get_training_model(weight_decay)
model.load_weights('model/keras/model.h5')
model.summary()

epochs = 1
end_step = np.ceil(1.0 * 100 / 2).astype(np.int32) * epochs
print(end_step)

new_pruning_params = {
    'pruning_schedule': tfm.sparsity.keras.PolynomialDecay(
        initial_sparsity=0.1,
        final_sparsity=0.90,
        begin_step=40,
        end_step=end_step,
        frequency=30)
}

new_pruned_model = tfm.sparsity.keras.prune_low_magnitude(model, **new_pruning_params)
print(new_pruned_model.summary())
I hope this answer still helps. I recently had the same issue where prune_low_magnitude() returned an object of type 'None'; new_pruned_model.compile() would not work either.
The model I had been using was a pretrained model that could be imported from tensorflow.python.keras.applications.
For me this worked:
(0) Import the libraries:
import tensorflow as tf  # used for the loss in step (4)
from tensorflow.keras.optimizers import SGD  # used for compiling in step (2)
from tensorflow_model_optimization.python.core.api.sparsity import keras as sparsity
from tensorflow.python.keras.applications.<network_type> import <network_type>
(1) Define the pretrained model architecture
# define model architecture
loaded_model = <network_type>()
loaded_model.summary()
(2) Compile the model architecture and load the pretrained weights
# compile the model, then load the pretrained weights
opt = SGD(lr=learn_rate, momentum=momentum)  # learn_rate, momentum: your hyperparameters
loaded_model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])
loaded_model.load_weights('weight_file.h5')
(3) Set pruning parameters and assign the pruning schedule
# set pruning parameters
pruning_params = {
    'pruning_schedule': sparsity.PolynomialDecay(...)
}
# assign pruning schedule
model_pruned = sparsity.prune_low_magnitude(loaded_model, **pruning_params)
(4) Compile the model and show the summary
# compile model
model_pruned.compile(
    loss=tf.keras.losses.categorical_crossentropy,
    optimizer='SGD',
    metrics=['accuracy'])
model_pruned.summary()
It was important to import the libraries specifically from tensorflow.python.keras and to use this Keras model from the TensorFlow library.
Also, it was important to use the TensorFlow beta release (pip install tensorflow==2.0.0b1); otherwise prune_low_magnitude would still return an object of type 'None'.
I am using PyCharm 2019.1.3 (x64) as IDE. Here is the link that led me to this solution: https://github.com/tensorflow/model-optimization/issues/12#issuecomment-526338458

Deep-Dream - load Re-trained Inception model obtained with transfer learning

I have repurposed an Inception V3 network using the transfer-learning method, following this article.
For that, I removed the final network layer and fed hundreds of images of my face into the network.
A new model was then successfully generated: inceptionv3-ft.model
Now I would like to load this model and use its fixed weights to apply my face as a 'theme' to an input image, like Google Deep Dream.
For that I am using a keras program, which loads models like so:
from keras.applications import inception_v3

# Build the InceptionV3 network with our placeholder.
# The model will be loaded with pre-trained ImageNet weights.
model = inception_v3.InceptionV3(weights='imagenet',
                                 include_top=False)
dream = model.input
Full code here: https://github.com/keras-team/keras/blob/master/examples/deep_dream.py
So, how do I load and use not the pre-trained ImageNet weights but rather my re-trained model's weights?
Simply:
from keras.models import load_model
model = load_model('inceptionv3-ft.model')
dream = model.input

How to connect the pretrained model's input to the output of tf.train.shuffle_batch?

In classify_image.py, the input image is fed with a loaded image via
predictions = sess.run(softmax_tensor, {'DecodeJpeg/contents:0': image_data})
What if I want to add new layers to the Inception model and train the whole model again? Are the variables loaded from classify_image_graph_def.pb trainable? I saw that freeze_graph.py uses convert_variables_to_constants to produce the frozen graph. So can those loaded weights be trained again, or are they constants? And how can I connect the output of tf.train.shuffle_batch ('shuffle_batch:0') to the Inception model's input?
The model used in classify_image.py has its variables frozen into constants, and doesn't have any gradient ops, so it's not easy to turn it back into something trainable. You can see how we remove one layer and replace it with something trainable here:
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/image_retraining/retrain.py
It's hard to generalize though. You'd be better off looking at some examples of fine-tuning here:
https://github.com/tensorflow/models/tree/master/inception#how-to-fine-tune-a-pre-trained-model-on-a-new-task
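A rough sketch of the retrain.py idea, adapted to the shuffle_batch part of the question (TF1 API; the tensor names 'ResizeBilinear:0' and 'pool_3/_reshape:0' are the ones retrain.py uses for this graph, but treat them as assumptions and verify against your .pb file):
import tensorflow as tf

with tf.gfile.GFile('classify_image_graph_def.pb', 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

# Stand-in for your tf.train.shuffle_batch output; in practice, wire the
# batch tensor in here. Note the frozen graph may assume batch size 1
# (retrain.py computes bottlenecks one image at a time for this reason).
images = tf.placeholder(tf.float32, [None, 299, 299, 3])

# Remap the graph's resized-image input to our tensor and pull out the
# bottleneck; everything imported stays constant, so no gradients flow into it.
bottleneck = tf.import_graph_def(
    graph_def, name='',
    input_map={'ResizeBilinear:0': images},
    return_elements=['pool_3/_reshape:0'])[0]

# Only this new head is trainable.
logits = tf.layers.dense(bottleneck, units=5, name='new_trainable_head')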