Import Weights from Keras Classifier into TF Object Detection API - tensorflow

I have a classifier that I trained using keras that is working really well. It uses keras.applications.MobileNetV2.
This classifier is well trained on around 200 categories, and has a high accuracy.
However, I would like to use the feature extraction layers from this classifier as part of an object detection model.
I have been using the Tensorflow Object Detection API, and looking into the SSDLite+MobileNetV2 model. I can start to run training, but the training is very slow and the bulk of the loss comes from the classification stage.
What I would like to do is assign the weights from my keras .h5 model to the Feature Extraction layer of MobileNetV2 in Tensorflow, but I'm not sure of the best way to do that.
I can load the h5 file easily, and get a list of layer names:
import keras
keras_model = keras.models.load_model("my_classifier.h5")
keras_names = [l.name for l in keras_model.layers]
print(keras_names)
I can also restore the tensorflow checkpoint from the object detection API and export the layers with weights:
tf.reset_default_graph()
with tf.Session() as sess:
    new_saver = tf.train.import_meta_graph('models/model.ckpt.meta')
    new_saver.restore(sess, 'models/model.ckpt')
    tf_names = []
    for op in sess.graph.get_operations():
        if "MobilenetV2" in op.name and "Assign" in op.name:
            tf_names.append(op.name)
    print(tf_names)
I cannot seem to get a good match-up between layer names from keras and from tensorflow. Even if I could I'm not sure of the next steps.
If anyone could give me some advice about the best way to approach this I would be very grateful.
Update:
I followed Sharky's suggestion below, with a slight modification:
new_saver = tf.train.import_meta_graph(os.path.join(keras_checkpoint_dir, 'keras_model.ckpt.meta'))
new_saver.restore(sess, os.path.join(keras_checkpoint_dir, tf.train.latest_checkpoint(keras_checkpoint_dir)))
However unfortunately I now get this error:
NotFoundError (see above for traceback): Restoring from checkpoint failed.
This is most likely due to a Variable name or other graph key that is missing
from the checkpoint. Please ensure that you have not altered the graph
expected based on the checkpoint. Original error:
Key FeatureExtractor/MobilenetV2/expanded_conv_6/project/BatchNorm/gamma not found in checkpoint
  [[node save/RestoreV2_295 (defined at :7) = RestoreV2[dtypes=[DT_FLOAT],
    _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save/Const_0_0,
    save/RestoreV2_295/tensor_names, save/RestoreV2_295/shape_and_slices)]]
  [[{{node save/RestoreV2_196/_393}} = _Recv[client_terminated=false,
    recv_device="/job:localhost/replica:0/task:0/device:GPU:0",
    send_device="/job:localhost/replica:0/task:0/device:CPU:0",
    send_device_incarnation=1, tensor_name="edge_789_save/RestoreV2_196",
    tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:GPU:0"]]
Any ideas on how to get rid of this error?

You can use tf.keras.estimator.model_to_estimator
estimator = tf.keras.estimator.model_to_estimator(keras_model=model, model_dir=path)
saver = tf.train.Saver()
with tf.Session() as sess:
    saver.restore(sess, tf.train.latest_checkpoint(os.path.join(path, 'keras')))
    print(tf.global_variables())
This should do the job. Note that it will create a keras subdirectory inside the originally specified path; that subdirectory is where the converted checkpoint is written.
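If the restore then fails with a NotFoundError like the one in the question, the variable names in the graph and in the checkpoint probably don't line up. A minimal diagnostic sketch (assuming TF 1.x; the directory name is illustrative) that prints what the checkpoint actually contains:
import tensorflow as tf

# List every variable name and shape stored in the checkpoint so you can
# compare them against the names the graph is trying to restore.
ckpt = tf.train.latest_checkpoint('path/to/keras')  # illustrative directory
for name, shape in tf.train.list_variables(ckpt):
    print(name, shape)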

Related

How to create a Keras model using a frozen_inference_graph.pb?

I want to use a pre-trained model and add a segmentation head at the end of that, but the problem is that I just have the 'frozen_inference_graph.pb' and the other files exported alongside it.
I have tried several ways:
1. Loading the pre-trained model into a Keras model:
It seems to be impossible with the files that I have. It just gives me an AutoTrackable object instead of a model.
2. Accessing the Tensor objects of the frozen model and making the model with tensors:
I found out how to access the tensors but couldn't make a Keras model with Tensor objects.
with self.graph.as_default():
    # import the frozen graph_def into this graph
    tf.compat.v1.import_graph_def(graph_def, name='')
self.sess = tf.compat.v1.Session(graph=self.graph)
self.tensors = [tensor for op in self.graph.get_operations()
                for tensor in op.values()]
Here I can get the tensors but I can't use the tensors in the model:
model = tf.keras.models.Model(inputs=self.tensors[0], outputs=self.tensors[-1])
Is there any way to convert this frozen graph to a Keras model?
Or, if there is another approach with which I can train the model, I would be glad to know.
P.S. The pre-trained model is 'ssd_mobilenet_v3_small_coco_2020_01_14', which can be found here.
You can use two methods:
1. The file 'frozen_inference_graph.pb' contains all the necessary information about the weights and the model architecture. Use the following snippet to read the model and add a new layer:
customModel = tf.keras.models.load_model('savedModel')
# savedModel is the folder with .pb data
pretrainedOutput = customModel.layers[-1].output
newOutput = tf.keras.layers.Dense(2)(pretrainedOutput) # change layer as needed
new_model = tf.keras.Model(inputs=customModel.inputs, outputs=[newOutput])
# create a new model with input of old model and new output tensors
where 'savedModel' is the name of the folder containing 'frozen_inference_graph.pb' and the other metadata. See the details about using .pb files and finetuning custom models in the TF guide.
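Once the new output is attached, the extended model can be compiled and trained as usual. A short usage sketch (the optimizer, loss, and x_train/y_train are placeholders to adapt to your task):
new_model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
new_model.fit(x_train, y_train, epochs=5)  # x_train / y_train: your own data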
2. Try using the .meta file with the model architecture and the .ckpt file to restore the weights in TF 1.x:
with tf.Session() as sess:
    new_saver = tf.train.import_meta_graph('my_test_model-1000.meta')
    new_saver.restore(sess, tf.train.latest_checkpoint('./'))
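As a quick sanity check after the restore, you can print a few of the restored variables (a minimal sketch, assuming it runs inside the same session as above):
# Run inside the `with tf.Session() as sess:` block, after restore().
for v in tf.global_variables()[:10]:
    print(v.name, v.shape)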
Refer to the tutorial on how to load and customize restored models in TF 1.x.

tensorflow2: keras: model.fit() callbacks and eager mode

I am running Tensorflow 2.1 with the Keras API, using the following coding style:
model = tf.keras.Sequential()
...
model.fit(..., callbacks=callbacks)
Now, I would like to save some intermediate layer tensor value as image summary (as a sample what is happening at n-th training step). In order to do this, I've implemented my own callback class. I've also learned how keras.callbacks.TensorBoard is implemented, since it can save layer weights as image summaries.
I do the following in my on_epoch_end:
# These helpers come from TF internals:
from tensorflow.python.eager import context
from tensorflow.python.framework import ops

tensor = self.model.get_layer(layer_name).output
with context.eager_mode():
    with ops.init_scope():
        tensor = tf.keras.backend.get_value(tensor)
        tf.summary.image(layer_name, tensor, step=step, max_outputs=1)
Unfortunately, I am still getting an issue related to eager/graph modes:
tensor = tf.keras.backend.get_value(tensor)
File "/home/matwey/lab/venv/lib/python3.6/site-packages/tensorflow_core/python/keras/backend.py", line 3241, in get_value
return x.numpy()
AttributeError: 'Tensor' object has no attribute 'numpy'
Unfortunately, there is little to no documentation on how to correctly combine Keras callbacks and tf.summary.image. How could I overcome this issue?
Update: tf_nightly-2.2.0.dev20200427 has the same behaviour.

Saving the weights of a single neural network in a tensorflow graph

How does one save the weights of a single neural network in a tensorflow graph so that it can be loaded in a different program into a network with the same architecture?
My training code requires 3 other neural networks for the training process alone. If I were to use saver.save(sess, 'my-model'), wouldn't it save all the variables in the tensorflow graph? This doesn't seem correct for my use case.
Maybe this comes from my misunderstanding of how tensorflow should work. Am I approaching this problem correctly?
The best approach would be to use tensorflow variable scopes. Say you have model_1, model_2, and model_3 and you only want to save model_1:
First, define the models in your training code:
with tf.variable_scope('model_1'):
    # model one declaration here
    ...
with tf.variable_scope('model_2'):
    # model two declaration here
    ...
with tf.variable_scope('model_3'):
    # model three declaration here
    ...
Next, define saver over the variables of model_1:
model_1_variables = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope="model_1")
saver = tf.train.Saver(model_1_variables)
While training you can save a checkpoint just like you mentioned:
saver.save(sess, 'my-model')
After your training is done and you want to restore the weights in your evaluation code, make sure you define model_1 and saver the same way:
with tf.variable_scope('model_1'):
    # model one declaration here
    ...
model_1_variables = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope="model_1")
saver = tf.train.Saver(model_1_variables)
sess = tf.Session()
saver.restore(sess, 'my-model')
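To double-check that the checkpoint really contains only model_1's weights, you can list its contents directly (a small sketch using tf.train.list_variables, which reads the checkpoint file without a session):
# Every name printed here should start with the 'model_1/' scope prefix.
for name, shape in tf.train.list_variables('my-model'):
    print(name, shape)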

How can I convert a trained Tensorflow model to Keras?

I have a trained Tensorflow model and weights vector which have been exported to protobuf and weights files respectively.
How can I convert these to JSON or YAML and HDF5 files which can be used by Keras?
I have the code for the Tensorflow model, so it would also be acceptable to convert the tf.Session to a keras model and save that in code.
I think a Keras callback is also a solution.
The ckpt file can be saved by TF with:
saver = tf.train.Saver()
saver.save(sess, checkpoint_name)
and to load the checkpoint in Keras, you need a callback class as follows:
class RestoreCkptCallback(keras.callbacks.Callback):
    def __init__(self, pretrained_file):
        self.pretrained_file = pretrained_file
        self.sess = keras.backend.get_session()
        self.saver = tf.train.Saver()
    def on_train_begin(self, logs=None):
        if self.pretrained_file:
            self.saver.restore(self.sess, self.pretrained_file)
            print('load weights: OK.')
Then in your keras script:
model.compile(loss='categorical_crossentropy', optimizer='rmsprop')
restore_ckpt_callback = RestoreCkptCallback(pretrained_file='./XXXX.ckpt')
model.fit(x_train, y_train, batch_size=128, epochs=20, callbacks=[restore_ckpt_callback])
That will work fine. It is easy to implement, and I hope it helps.
Francois Chollet, the creator of Keras, stated in 04/2017: "you cannot turn an arbitrary TensorFlow checkpoint into a Keras model. What you can do, however, is build an equivalent Keras model then load into this Keras model the weights" (see https://github.com/keras-team/keras/issues/5273). To my knowledge this hasn't changed.
A small example:
First, you can extract the weights of a tensorflow checkpoint like this:
PATH_REL_META = r'checkpoint1.meta'

# start tensorflow session
with tf.Session() as sess:
    # import graph
    saver = tf.train.import_meta_graph(PATH_REL_META)
    # load weights for graph
    saver.restore(sess, PATH_REL_META[:-5])

    # get all global variables (including model variables)
    vars_global = tf.global_variables()

    # get their name and value and put them into dictionary
    sess.as_default()
    model_vars = {}
    for var in vars_global:
        try:
            model_vars[var.name] = var.eval()
        except:
            print("For var={}, an exception occurred".format(var.name))
It might also be of use to export the tensorflow model for use in tensorboard, see https://stackoverflow.com/a/43569991/2135504
Second, you build your Keras model as usual and finalize it with model.compile. Note that you need to define each layer as a named variable and then add it to the model, so you can set its weights later, e.g.
net = keras.models.Sequential()
layer_1 = keras.layers.Conv2D(6, (7,7), activation='relu', input_shape=(48,48,1))
net.add(layer_1)
...
net.compile(...)
Third, you can set the weights with the tensorflow values, e.g.
layer_1.set_weights([model_vars['conv7x7x1_1/kernel:0'], model_vars['conv7x7x1_1/bias:0']])
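If the naming is systematic, the same idea works in a loop. A hedged sketch, where name_map is a hypothetical mapping you would fill in for your own checkpoint:
# Hypothetical map from Keras layer names to the TF variable prefixes
# present in model_vars, e.g. 'conv2d' -> 'conv7x7x1_1'.
name_map = {'conv2d': 'conv7x7x1_1'}

for layer in net.layers:
    if layer.name not in name_map:
        continue  # skip layers with no trainable TF counterpart
    prefix = name_map[layer.name]
    layer.set_weights([model_vars[prefix + '/kernel:0'],
                       model_vars[prefix + '/bias:0']])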
Currently, there is no direct built-in support in Tensorflow or Keras to convert a frozen model or a checkpoint file to hdf5 format.
But since you have mentioned that you have the code of the Tensorflow model, you will have to rewrite that model's code in Keras. Then you will have to read the values of your variables from the checkpoint file and assign them to the Keras model using the layer.set_weights(weights) method.
Rather than this methodology, I would suggest that you do the training directly in Keras, as it is claimed that Keras' optimizers are 5-10% faster than Tensorflow's optimizers. Another way is to write your code in Tensorflow with the tf.contrib.keras module and save the file directly in hdf5 format.
Unsure if this is what you are looking for, but I happened to just do the same with the newly released keras support in TF 1.2. You can find more on the API here: https://www.tensorflow.org/api_docs/python/tf/contrib/keras
To save you a little time, I also found that I had to import the Keras modules as shown below, with the additional python.keras segment appended to what is shown in the API docs.
from tensorflow.contrib.keras.python.keras.models import Sequential
Hope that helps get you where you want to go. Essentially, once it's integrated, you just handle your model/weight export as usual, as sketched below.
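A minimal sketch of that export path (assuming the TF 1.2 contrib namespace mirrors the usual Keras package layout; the model and file name are illustrative):
from tensorflow.contrib.keras.python.keras.models import Sequential
from tensorflow.contrib.keras.python.keras.layers import Dense

# Build a toy model just to show the export step.
model = Sequential()
model.add(Dense(10, activation='relu', input_shape=(4,)))
model.compile(optimizer='rmsprop', loss='mse')

# Standard Keras HDF5 export: architecture + weights in one file.
model.save('model.h5')  # illustrative file name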

tensorflow image retraining with serving

I am trying to serve retrained inception graph using tensorflow serving. For retraining, I am using this example. However I need to make changes to this graph to get it working with serving export code.
Since in tensorflow serving, you will receive serialized images as input, graph input should start with this:
serialized_tf_example = tf.placeholder(tf.string, name='tf_example')
feature_configs = {
    'image/encoded': tf.FixedLenFeature(shape=[], dtype=tf.string),
}
tf_example = tf.parse_example(serialized_tf_example, feature_configs)
jpegs = tf_example['image/encoded']
images = tf.map_fn(preprocess_image, jpegs, dtype=tf.float32)
This images tensor should be the input to the retrained inception graph. However, I don't know if it is possible to prepend one graph to another in tensorflow the way you can easily append using placeholder_with_default (which has been done in the retraining code).
graph, bottleneck_tensor, jpeg_data_tensor, resized_image_tensor = (
    create_inception_graph())
Ideally, in the image retraining code, I receive a placeholder tensor jpeg_data_tensor. I need to connect the images tensor above to this placeholder and export everything as a single graph using the exporter, so that it can be served using tensorflow serving. However, I don't know of any tensorflow instruction that does this. Are there any other alternatives apart from this method?
One way of going about it is:
model_path = 'trained/export.pb'
with tf.Graph().as_default():
    with tf.Session() as sess:
        with gfile.FastGFile(model_path, 'rb') as f:
            graph_def = tf.GraphDef()
            graph_def.ParseFromString(f.read())

        # Your prepending ops here
        images_placeholder = tf.placeholder(tf.string, name='tf_example')
        ...
        images = tf.map_fn(preprocess_image, jpegs, dtype=tf.float32)

        tf.import_graph_def(graph_def, name='inception', input_map={'ResizeBilinear:0': images})
Notice especially the input_map argument. ResizeBilinear:0 is likely not the correct name of the operation you need - you can list the ops by:
[n.name for n in tf.get_default_graph().as_graph_def().node]
I realize this is not a full answer and perhaps not the most efficient but hopefully it can get you started. Just a heads-up, there is also this blogpost.
Since you have already retrained the model, I'm assuming it is a Protobuf; you can load that into a Python object and serve from it using a custom function that processes either a batch or a single example.
As for your graph question: as far as I know, when you load a tf.Graph() object you are only working with that object and can't work with other graphs. That said, if you have another graph that is an extension of the existing Inception-V3 graph, you can add it to that computation graph quite easily.
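A minimal sketch of that idea (assuming a graph_def has already been parsed from the Protobuf, as in the answer above; the tensor name 'inception/pool_3:0' is illustrative and should be replaced with a real op name from your graph):
import tensorflow as tf

with tf.Graph().as_default() as g:
    # Import the frozen graph under a name scope.
    tf.import_graph_def(graph_def, name='inception')

    # Grab an intermediate tensor and append new ops to the same graph.
    features = g.get_tensor_by_name('inception/pool_3:0')  # illustrative name
    new_head = tf.layers.dense(tf.layers.flatten(features), 5)  # e.g. a 5-class head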