I load a saved h5 model and want to save the model as pb.
The model is saved during training with the tf.keras.callbacks.ModelCheckpoint callback function.
TF version: 2.0.0a
Edit: the same issue also occurs with 2.0.0-beta1.
My steps to save a pb:
I first set K.set_learning_phase(0)
then I load the model with tf.keras.models.load_model
Then, I define the freeze_session() function.
(Optionally, I compile the model.)
Then I call freeze_session() with tf.keras.backend.get_session().
The error I get, with and without compiling:
AttributeError: module 'tensorflow.python.keras.api._v2.keras.backend'
has no attribute 'get_session'
My Question:
Does TF2 not have get_session anymore?
(I know that tf.contrib.saved_model.save_keras_model does not exist anymore, and I also tried tf.saved_model.save, which did not really work.)
Or does get_session only work when I actually train the model, and just loading the h5 does not work?
Edit: Even with a freshly trained model, no get_session is available.
If so, how would I go about converting the h5 to pb without training? Is there a good tutorial?
Thank you for your help
Update:
Since the official release of TF 2.x, the graph/session concept has changed; the SavedModel API should be used instead.
You can use tf.compat.v1.disable_eager_execution() with TF 2.x and it will result in a pb file. However, I am not sure what kind of pb file it is, as the SavedModel composition changed from TF1 to TF2. I will keep digging.
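For reference, here is a minimal sketch of the TF2-native SavedModel route (the paths are placeholders, and this assumes a plain Keras model with no custom layers):
import tensorflow as tf
from tensorflow import keras

# Load the HDF5 model (path is a placeholder).
model = keras.models.load_model('/path/to/model.h5')

# Export in the TF2 SavedModel format: this writes a directory
# containing saved_model.pb plus a variables/ subdirectory,
# which is not the same thing as a TF1 frozen-graph pb.
tf.saved_model.save(model, '/path/to/saved_model_dir')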
I did manage to save the model to pb from the h5 model:
import logging
import tensorflow as tf
from tensorflow.compat.v1 import graph_util
from tensorflow.python.keras import backend as K
from tensorflow import keras

# necessary !!!
tf.compat.v1.disable_eager_execution()

h5_path = '/path/to/model.h5'
model = keras.models.load_model(h5_path)
model.summary()

# save pb
with K.get_session() as sess:
    output_names = [out.op.name for out in model.outputs]
    input_graph_def = sess.graph.as_graph_def()
    for node in input_graph_def.node:
        node.device = ""
    graph = graph_util.remove_training_nodes(input_graph_def)
    graph_frozen = graph_util.convert_variables_to_constants(sess, graph, output_names)
    # write_graph takes the output directory and file name as separate arguments
    tf.io.write_graph(graph_frozen, '/path/to/pb', 'model.pb', as_text=False)
logging.info("save pb successfully!")
I use TF2 to convert the model like this:
1. Pass keras.callbacks.ModelCheckpoint(save_weights_only=True) to model.fit and save a checkpoint while training;
2. After training, load the checkpoint with self.model.load_weights(self.checkpoint_path);
3. Save as h5 with self.model.save(h5_path, overwrite=True, include_optimizer=False);
4. Convert the h5 to pb just like above (a sketch of steps 1-3 follows).
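A minimal sketch of steps 1-3, assuming model, x_train, and y_train are already defined and the paths are placeholders:
from tensorflow import keras

checkpoint_path = '/path/to/ckpt/weights.ckpt'

# 1. Save weights-only checkpoints while training.
ckpt_cb = keras.callbacks.ModelCheckpoint(checkpoint_path, save_weights_only=True)
model.fit(x_train, y_train, epochs=10, callbacks=[ckpt_cb])

# 2. After training, load the checkpoint back into the model.
model.load_weights(checkpoint_path)

# 3. Save the full model as h5, without optimizer state.
model.save('/path/to/model.h5', overwrite=True, include_optimizer=False)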
I'm wondering the same thing, as I'm trying to use get_session() and set_session() to free up GPU memory. These functions seem to be missing and aren't in the TF 2.0 Keras documentation. I imagine it has something to do with TensorFlow's switch to eager execution, as direct session access is no longer required.
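For the GPU-memory use case specifically, TF2 has a session-free replacement for the old allow_growth config; a minimal sketch (my suggestion, not something from the original post):
import tensorflow as tf

# Allocate GPU memory on demand instead of claiming all of it up front,
# the eager-mode equivalent of the old allow_growth session option.
for gpu in tf.config.experimental.list_physical_devices('GPU'):
    tf.config.experimental.set_memory_growth(gpu, True)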
Use
from tensorflow.compat.v1.keras.backend import get_session
in Keras 2 and TensorFlow 2.2, then call:
import logging
import tensorflow as tf
from tensorflow.compat.v1 import graph_util
from tensorflow.python.keras import backend as K
from tensorflow import keras
from tensorflow.compat.v1.keras.backend import get_session

# necessary !!!
tf.compat.v1.disable_eager_execution()

h5_path = '/path/to/model.h5'
model = keras.models.load_model(h5_path)
model.summary()

# save pb
with get_session() as sess:
    output_names = [out.op.name for out in model.outputs]
    input_graph_def = sess.graph.as_graph_def()
    for node in input_graph_def.node:
        node.device = ""
    graph = graph_util.remove_training_nodes(input_graph_def)
    graph_frozen = graph_util.convert_variables_to_constants(sess, graph, output_names)
    # write_graph takes the output directory and file name as separate arguments
    tf.io.write_graph(graph_frozen, '/path/to/pb', 'model.pb', as_text=False)
logging.info("save pb successfully!")
I built a custom model in .h5 from Matterport's MaskRCNN implementation. I managed to save the full model and not the weights alone using model.keras_model.save(), and assume it worked correctly.
I need to convert this model to ONNX to inference in Unity Barracuda, and I have been hitting several errors along the way.
I tried:
T1. .h5 to ONNX using this tutorial and the keras2onnx package, and I hit an error at:
model = load_model('model.h5')
Error:
ValueError: Unknown layer: BatchNorm
T2. Defining custom layers using this GitHub code:
model = keras.models.load_model(r'model.h5', custom_objects={
    'BatchNorm': BatchNorm, 'tf': tf,
    'ProposalLayer': ProposalLayer,
    'PyramidROIAlign1': PyramidROIAlign1, 'PyramidROIAlign2': PyramidROIAlign2,
    'DetectionLayer': DetectionLayer}, compile=False)
Error:
ValueError: No model found in config file.
ValueError: Unknown layer: PyramidROIAlign
T3. .h5 to .pb (frozen graph) and .pbtxt, and then from .pb to ONNX using tf2onnx after finding the input and output nodes (there seems to be only one of each?):
assert d in name_to_node, "%s is not in graph" % d
AssertionError: output0 is not in graph
T4. .h5 to SavedModel using tf-serving code from here and then python -m tf2onnx.convert --saved-model exported_models\coco_mrcnn\3 --opset 15 --output "model.onnx" to convert to ONNX:
ValueError: make_sure failure: variable mrcnn_detection/map/while/Enter already exists as state variable.
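For reference, the frozen-graph invocation in T3 has this general shape (the node names below are placeholders; the real ones have to be read out of the graph, which is where my attempt failed):
python -m tf2onnx.convert --graphdef model.pb --inputs input0:0 --outputs output0:0 --opset 15 --output model.onnx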
TLDR: Is there a way to convert my .h5 model to ONNX through any direct/indirect means? I have been stuck on this for days!
Thanks in advance.
Edit 1:
It seems that keras.models.load_model() throws the first two errors. I'm wondering whether there is a way I can work with the .pb/.pbtxt model, a way around using load_model(), or a way to solve the load_model() issue.
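One possible workaround (my suggestion, not something from the original post): if the architecture can be rebuilt in code, load_model() can be bypassed entirely by loading only the weights by name:
# Rebuild the architecture from code (build_mrcnn_model is a hypothetical
# builder for your architecture), then load only the weights from the h5
# file, skipping the config parsing that trips load_model() on custom layers.
model = build_mrcnn_model()
model.load_weights('model.h5', by_name=True)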
Edit 2:
Code for T1:
custom dataset modified from Matterport's MaskRCNN implementation
Code for T4
Try converting it to the SavedModel format and then to ONNX.
import numpy as np
import tensorflow as tf
from tensorflow import keras

def get_model():
    # Create a simple model.
    inputs = keras.Input(shape=(32,))
    outputs = keras.layers.Dense(1)(inputs)
    model = keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="mean_squared_error")
    return model

model = get_model()

# Train the model.
test_input = np.random.random((128, 32))
test_target = np.random.random((128, 1))
model.fit(test_input, test_target)

# Calling `save('my_model.h5')` creates a h5 file `my_model.h5`.
model.save("my_h5_model.h5")

# It can be used to reconstruct the model identically.
model = keras.models.load_model("my_h5_model.h5")
tf.saved_model.save(model, "tmp_model")
Then convert it using tf2onnx.
python3 -m tf2onnx.convert --saved-model tmp_model --output "model.onnx"
This works for me.
Via an Anaconda PowerShell console (run as admin):
pip install tf2onnx
pip install onnxmltools
and in a notebook (for example):
from tensorflow.python.keras.models import load_model
import os
os.environ['TF_KERAS'] = '1'
import onnxmltools
model = load_model('[h5 path]')
onnx_model = onnxmltools.convert_keras(model)
onnxmltools.utils.save_model(onnx_model, '[onnx path]')
I'm trying to load a pretrained retinanet model with keras by running:
# import keras
import keras
# import keras_retinanet
from keras_retinanet import models
from keras_retinanet.utils.image import read_image_bgr, preprocess_image, resize_image
from keras_retinanet.utils.visualization import draw_box, draw_caption
from keras_retinanet.utils.colors import label_color
# set tf backend to allow memory to grow, instead of claiming everything
import os
import tensorflow as tf

def get_session():
    config = tf.ConfigProto()
    config.gpu_options.allow_growth = True
    return tf.Session(config=config)

model_path = os.path.join('sample_data/snapshots', sorted(os.listdir('sample_data/snapshots'), reverse=True)[0])
print(model_path)

# load retinanet model
model = models.load_model(model_path, backbone_name='resnet50')
model = models.convert_model(model)
I am facing the following error with both pieces of code:
OSError: SavedModel file does not exist at: sample_data/snapshots/training_5000(640_480).h5/{saved_model.pbtxt|saved_model.pb}
The cause might be newer versions of Keras or TensorFlow, so I am going to list the versions that I am currently using:
keras.__version__
2.4.3
tf.__version__
2.4.1
Note: I am trying to run this code in my Colab.
I have trained a Keras model based on this repo.
After the training I save the model as checkpoint files like this:
sess=tf.keras.backend.get_session()
saver = tf.train.Saver()
saver.save(sess, current_run_path + '/checkpoint_files/model_{}.ckpt'.format(date))
Then I restore the graph from the checkpoint files and freeze it using the standard tf freeze_graph script. When I want to restore the frozen graph I get the following error:
Input 0 of node Conv_BN_1/cond/ReadVariableOp/Switch was passed float from Conv_BN_1/gamma:0 incompatible with expected resource
How can I fix this issue?
Edit: My problem is related to this question. Unfortunately, I can't use the workaround.
Edit 2:
I have opened an issue on github and created a gist to reproduce the error.
https://github.com/keras-team/keras/issues/11032
Just resolved the same issue. I pieced together a few answers (1, 2, 3) and realized that the issue originated from the batch norm layer's working state: training vs. inference. So, in order to resolve the issue, you just need to place one line before loading your model:
keras.backend.set_learning_phase(0)
Complete example to export a model:
import tensorflow as tf
from tensorflow.python.framework import graph_io
from tensorflow.keras.applications.inception_v3 import InceptionV3

def freeze_graph(graph, session, output):
    with graph.as_default():
        graphdef_inf = tf.graph_util.remove_training_nodes(graph.as_graph_def())
        graphdef_frozen = tf.graph_util.convert_variables_to_constants(session, graphdef_inf, output)
        graph_io.write_graph(graphdef_frozen, ".", "frozen_model.pb", as_text=False)

tf.keras.backend.set_learning_phase(0)  # this line most important

base_model = InceptionV3()
session = tf.keras.backend.get_session()

INPUT_NODE = base_model.inputs[0].op.name
OUTPUT_NODE = base_model.outputs[0].op.name
freeze_graph(session.graph, session, [out.op.name for out in base_model.outputs])
To load the *.pb model:
from PIL import Image
import numpy as np
import tensorflow as tf

# https://i.imgur.com/tvOB18o.jpg
im = Image.open("/home/chichivica/Pictures/eagle.jpg").resize((299, 299), Image.BICUBIC)
im = np.array(im) / 255.0
im = im[None, ...]

graph_def = tf.GraphDef()
with tf.gfile.GFile("frozen_model.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

graph = tf.Graph()
with graph.as_default():
    net_inp, net_out = tf.import_graph_def(
        graph_def, return_elements=["input_1", "predictions/Softmax"]
    )
    with tf.Session(graph=graph) as sess:
        out = sess.run(net_out.outputs[0], feed_dict={net_inp.outputs[0]: im})
        print(np.argmax(out))
This is a bug with TensorFlow 1.1x and, as another answer stated, it is because of the internal batch norm learning vs. inference state. In TF 1.14.0 you actually get a cryptic error when trying to freeze a batch norm layer.
Using set_learning_phase(0) will put the batch norm layer (and probably others like dropout) into inference mode and thus the batch norm layer will not work during training, leading to reduced accuracy.
My solution is this:
Create the model using a function (do not use K.set_learning_phase(0)):
def create_model():
    inputs = Input(...)
    ...
    return model

model = create_model()
Train model
Save weights:
model.save_weights("weights.h5")
Clear session (important so layer names are the same) and set learning phase to 0:
K.clear_session()
K.set_learning_phase(0)
Recreate model and load weights:
model = create_model()
model.load_weights("weights.h5")
Freeze as before
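Putting the steps together, a minimal sketch (assuming create_model(), x_train, and y_train are defined, and reusing the freeze_graph helper from the earlier answer):
from tensorflow.keras import backend as K

# 1. Build and train without touching the learning phase.
model = create_model()
model.fit(x_train, y_train)

# 2. Persist only the weights.
model.save_weights("weights.h5")

# 3. Reset the graph so layer names match, then force inference mode.
K.clear_session()
K.set_learning_phase(0)

# 4. Rebuild the identical architecture and restore the weights.
model = create_model()
model.load_weights("weights.h5")

# 5. Freeze as before, e.g. with the freeze_graph helper shown earlier.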
Thanks for pointing out the main issue! I found that keras.backend.set_learning_phase(0) does not work sometimes, at least in my case.
Another approach might be:
for l in keras_model.layers: l.trainable = False
Is it possible to convert a keras model (h5 file of network architecture and weights) into a tensorflow model? Or is there an equivalent function to model.save of keras in tensorflow?
Yes, it is possible, because Keras, since it uses TensorFlow as its backend, also builds a computational graph. You just need to get this graph from your Keras model.
"Keras only uses one graph and one session. You can access the session
via: K.get_session(). The graph associated with it would then be:
K.get_session().graph."
(from fchollet: https://github.com/keras-team/keras/issues/3223#issuecomment-232745857)
Or you can save this graph in checkpoint format (https://www.tensorflow.org/api_docs/python/tf/train/Saver):
import tensorflow as tf
from keras import backend as K
saver = tf.train.Saver()
sess = K.get_session()
retval = saver.save(sess, ckpt_model_name)
By the way, since TensorFlow 1.3 you can use Keras right from it:
from tensorflow.python.keras import models, layers
I have a trained Tensorflow model and weights vector which have been exported to protobuf and weights files respectively.
How can I convert these to JSON or YAML and HDF5 files which can be used by Keras?
I have the code for the Tensorflow model, so it would also be acceptable to convert the tf.Session to a keras model and save that in code.
I think a callback in Keras is also a solution.
The ckpt file can be saved by TF with:
saver = tf.train.Saver()
saver.save(sess, checkpoint_name)
and to load the checkpoint in Keras, you need a callback class as follows:
class RestoreCkptCallback(keras.callbacks.Callback):
    def __init__(self, pretrained_file):
        self.pretrained_file = pretrained_file
        self.sess = keras.backend.get_session()
        self.saver = tf.train.Saver()
    def on_train_begin(self, logs=None):
        # restore the TF checkpoint before training starts
        if self.pretrained_file:
            self.saver.restore(self.sess, self.pretrained_file)
            print('load weights: OK.')
Then in your keras script:
model.compile(loss='categorical_crossentropy', optimizer='rmsprop')
restore_ckpt_callback = RestoreCkptCallback(pretrained_file='./XXXX.ckpt')
model.fit(x_train, y_train, batch_size=128, epochs=20, callbacks=[restore_ckpt_callback])
That will be fine.
I think it is easy to implement and hope it helps.
Francois Chollet, the creator of keras, stated in 04/2017 "you cannot turn an arbitrary TensorFlow checkpoint into a Keras model. What you can do, however, is build an equivalent Keras model then load into this Keras model the weights"
, see https://github.com/keras-team/keras/issues/5273 . To my knowledge this hasn't changed.
A small example:
First, you can extract the weights of a TensorFlow checkpoint like this:
import tensorflow as tf

PATH_REL_META = r'checkpoint1.meta'

# start tensorflow session
with tf.Session() as sess:
    # import graph
    saver = tf.train.import_meta_graph(PATH_REL_META)
    # load weights for graph
    saver.restore(sess, PATH_REL_META[:-5])
    # get all global variables (including model variables)
    vars_global = tf.global_variables()
    # get their name and value and put them into a dictionary
    sess.as_default()
    model_vars = {}
    for var in vars_global:
        try:
            model_vars[var.name] = var.eval()
        except:
            print("For var={}, an exception occurred".format(var.name))
It might also be of use to export the tensorflow model for use in tensorboard, see https://stackoverflow.com/a/43569991/2135504
Second, you build your Keras model as usual and finalize it with model.compile. Note that you need to define each layer by name and add it to the model afterwards, e.g.:
net = keras.models.Sequential()
layer_1 = keras.layers.Conv2D(6, (7,7), activation='relu', input_shape=(48,48,1))
net.add(layer_1)
...
net.compile(...)
Third, you can set the weights with the tensorflow values, e.g.
layer_1.set_weights([model_vars['conv7x7x1_1/kernel:0'], model_vars['conv7x7x1_1/bias:0']])
Currently, there is no direct in-built support in Tensorflow or Keras to convert the frozen model or the checkpoint file to hdf5 format.
But since you have mentioned that you have the code of the TensorFlow model, you will have to rewrite that model's code in Keras. Then, you will have to read the values of your variables from the checkpoint file and assign them to the Keras model using the layer.set_weights(weights) method.
More than this methodology, I would suggest that you do the training directly in Keras, as it is claimed that Keras' optimizers are 5-10% faster than TensorFlow's. Another way is to write your code in TensorFlow with the tf.contrib.keras module and save the file directly in hdf5 format.
Unsure if this is what you are looking for, but I happened to just do the same with the newly released keras support in TF 1.2. You can find more on the API here: https://www.tensorflow.org/api_docs/python/tf/contrib/keras
To save you a little time, I also found that I had to include Keras modules as shown below, with the additional python.keras appended to what is shown in the API docs.
from tensorflow.contrib.keras.python.keras.models import Sequential
Hope that helps get you where you want to go. Essentially once integrated in, you then just handle your model/weight export as usual.
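A minimal sketch of that flow under TF 1.2's contrib namespace (the layer sizes here are arbitrary, and the layers import path is my assumption, extrapolated from the models import above):
from tensorflow.contrib.keras.python.keras.models import Sequential
from tensorflow.contrib.keras.python.keras.layers import Dense

# Build a tiny model through the TF-bundled Keras, then export as usual.
model = Sequential()
model.add(Dense(10, activation='relu', input_shape=(4,)))
model.add(Dense(1))
model.compile(optimizer='adam', loss='mse')

# The standard Keras export paths still apply.
model.save('model.h5')            # architecture + weights
model.save_weights('weights.h5')  # weights only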