I used Keras to build and train a model, then saved it as an .h5 file with model.save('name.h5'). Now I want to reload the model in TensorFlow so that I have access to a .meta file, for example to import the computational graph via tf.train.import_meta_graph('name_of_the_file.meta').
So, the question is: how do I convert the Keras .h5 file into the following four TensorFlow files:
.meta
checkpoint
.data-00000-of-00001
.index
You can use a third-party package, for example keras_to_tensorflow:
keras_to_tensorflow: General code to convert a trained keras model into an inference tensorflow model
The conversion can be done with:
python3 keras_to_tensorflow.py -input_model_file model.h5
TensorFlow 2.x will do that automatically. The function you are using to save (see also) is:
save(
    filepath,
    overwrite=True,
    include_optimizer=True,
    save_format=None
)
The save_format argument lets you choose either 'h5' or 'tf'. However, for TensorFlow 1.x it is not implemented (and probably never will be):
save_format: Either 'tf' or 'h5', indicating whether to save the model
to Tensorflow SavedModel or HDF5. The default is currently 'h5', but
will switch to 'tf' in TensorFlow 2.0. The 'tf' option is currently
disabled (use tf.keras.experimental.export_saved_model instead).
You can do as it says and use tf.keras.experimental.export_saved_model, but it will still not create the .meta file.
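If you are stuck on TensorFlow 1.x, one workaround (a minimal sketch, not an official conversion path) is to load the .h5 with tf.keras and write a classic checkpoint from the backing session with tf.train.Saver; the output directory tf_ckpt/ is my own choice:

# TF 1.x only: dump the Keras model's session as a classic checkpoint.
# This produces the four files from the question: checkpoint, name.meta,
# name.index and name.data-00000-of-00001 (under ./tf_ckpt/).
import tensorflow as tf
from tensorflow.keras import backend as K

model = tf.keras.models.load_model('name.h5')
saver = tf.train.Saver()
saver.save(K.get_session(), './tf_ckpt/name')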
What I tried so far:
1. pre-train a model using an unsupervised method in PyTorch, and save a checkpoint file (using torch.save(state, filename))
2. convert the checkpoint file to ONNX format (using torch.onnx.export)
3. convert the ONNX model to a TensorFlow saved model (using onnx-tf)
4. try to load the variables in the saved_model folder as a checkpoint in my TensorFlow training code (using tf.train.init_from_checkpoint) for fine-tuning
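For reference, steps 1-3 look roughly like this (a sketch; MyModel and the input shape are hypothetical placeholders):

import torch
import onnx
from onnx_tf.backend import prepare

model = MyModel()  # hypothetical: your pre-trained PyTorch model

# step 1: save a checkpoint
torch.save({'state_dict': model.state_dict()}, 'checkpoint.pth')

# step 2: export to ONNX, tracing the model with a dummy input
dummy = torch.randn(1, 3, 224, 224)  # hypothetical input shape
torch.onnx.export(model, dummy, 'model.onnx')

# step 3: convert the ONNX model to a TensorFlow saved model
tf_rep = prepare(onnx.load('model.onnx'))
tf_rep.export_graph('saved_model')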
But now I am getting stuck at step 4, because I notice that the variables.index and variables.data#1 files are basically empty (probably because of this: https://github.com/onnx/onnx-tensorflow/issues/994).
Also, specifically, if I try to use tf.train.NewCheckpointReader to load the files and call ckpt_reader.get_variable_to_shape_map(), _CHECKPOINTABLE_OBJECT_GRAPH is empty.
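For concreteness, the check looks like this (a sketch; the checkpoint prefix under saved_model/variables/ is my assumption about the onnx-tf output layout):

import tensorflow as tf

# Read the converted checkpoint and list the variables it contains.
ckpt_reader = tf.train.NewCheckpointReader('saved_model/variables/variables')
print(ckpt_reader.get_variable_to_shape_map())  # comes back essentially empty here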
Any suggestions/experience are appreciated :-)
I'm facing a problem deploying my custom sign-language recognition model. I converted my_ssd_mobnet with exporter_main_v2.py to saved_model.pb, and then I tried to use the TensorFlow.js converter with this code:
import tensorflow as tf
import tensorflowjs as tfjs

def importModel(modelPath):
    model = tf.keras.models.load_model(modelPath)
    tfjs.converters.save_tf_model(model, "tfjsmodel")

importModel("saved_model")
#importModel("modelDirectory")
Then I got an error like this:
ValueError: Unable to create a Keras model from this SavedModel. This SavedModel was created with tf.saved_model.save, and lacks the Keras metadata. Please save your Keras model by calling model.save or tf.keras.models.save_model.
Finally I decided to convert my model to h5, but I don't know how.
How can I convert the my_ssd_mobnet model to h5?
Thanks!
If you're creating a custom Keras layer in Python and want to export it to tfjs for in-browser prediction, you'll most likely encounter an "Unknown layer" error and will have to implement the layer yourself in JS.
Instead of exporting the layers, it's best to export a graph since you're only using it for prediction and not training in the browser.
tf.saved_model.save(model, 'saved_model')
This will save the files in the saved_model folder and contains the .pb file.
Use the tensorflowjs_converter tool to convert the model into a graph tfjs model.
tensorflowjs_converter --input_format=tf_saved_model saved_model model
This will convert your saved model into the browser-compatible tfjs model without the custom layer. (The Keras layers will be built in.)
Move this folder to your website's public folder.
In the browser:
const model = await tf.loadGraphModel('/model/model.json')
const img = tf.browser.fromPixels(imageData, 3) // imageElement, videoElement, ImageData
  .toFloat().resizeBilinear([224, 224]) // mobilenet dims
  .div(tf.scalar(255)) // mobilenet [0,1] normalization
  .expandDims()
const { values, indices } = model.predict(img).topk()
const label = indices.dataSync()[0]
const confidence = values.dataSync()[0]
NOTE: The .bin files will end up in the tens of MB, so run this inside a web worker. You can send buffered data from the main thread to the worker thread for processing.
First and foremost, if you have used the exporter_main_v2.py script to export the model, you will only get a TensorFlow SavedModel; this way of exporting is mainly used to run inference on the trained model. So the main problem in your code is that you are trying to import a Keras model with the tf.keras.models.load_model() function. Instead of using exporter_main_v2.py, you have to use the tf.keras.models.save_model() function to export/save your model.
I am also giving you a link to a simple video explanation to clarify a few things for you:
https://www.youtube.com/watch?v=Lx7OCFXPG8o
After watching the video you might want to check out the following Colab notebook:
https://colab.research.google.com/github/tensorflow/examples/blob/master/courses/udacity_intro_to_tensorflow_for_deep_learning/l07c01_saving_and_loading_models.ipynb
This material is provided by Udacity's introduction to TensorFlow course. It should be very helpful in your case for understanding the difference between a TensorFlow model file and a Keras model file.
Have a nice day.
Edit:
HDF5 format
Keras provides a basic save format using the HDF5 standard.
# Create and train a new model instance.
model = create_model()
model.fit(train_images, train_labels, epochs=5)

# Save the entire model to an HDF5 file.
# The '.h5' extension indicates that the model should be saved to HDF5.
model.save('my_model.h5')
You should add the '.h5' extension to the filename when calling the model.save function; this way the model will be saved in HDF5 format.
I have trained a multi_gpu_model using TensorFlow 1.13/1.14 and saved it with keras.model.save('<.hdf5>').
Now, after migrating to TensorFlow 2.4.1, in which Keras is integrated as tensorflow.keras, I cannot load the model with tensorflow.keras.models.load_model as I did before, due to the following error:
AttributeError: module 'tensorflow.python.keras.backend' has no attribute 'slice'
After trying to import keras.models.load_model, and trying different versions of Keras (2.2.4 -> 2.4.1) and TensorFlow (2.2 -> 2.4.1), I still cannot load the model from my .hdf5 file using my TF 2.2+ code.
I do know that in TF 2.X+ we can train on distributed machines by using the "strategy" scope, and it does work, but I have a lot of "old" models that need to run on the same code base, which is now being migrated to TF 2.4.1.
Apparently the problem was not the TF versions, but the way I was saving my models in my TF 1.X code.
I used the keras.multi_gpu_model class for both training and saving, but this practice is wrong, as clearly stated in the Keras documentation:
"To save the multi-gpu model, use .save(fname) or .save_weights(fname)
with the template model (the argument you passed to multi_gpu_model),
rather than the model returned by multi_gpu_model."
So, after figuring this out, the following method for model conversion, using TF 1.X code, was adopted (see the sketch after the list):
1. build your model from scratch, namely new_model
2. load your pre-trained weights from the multi_gpu_model, namely old_model
3. copy old_model's weights, which live in old_model.layers[3] (due to the wrong usage of multi_gpu_model), to new_model
4. save new_model as an .hdf5 file
5. use new_model.hdf5 everywhere, in both TF 1.X and TF 2.X
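A minimal sketch of that conversion, run on the TF 1.X code base; create_model() is a hypothetical function that rebuilds the original single-GPU architecture:

from keras.models import load_model

# 1. rebuild the architecture from scratch
new_model = create_model()  # hypothetical: your original model definition

# 2. load the wrongly saved multi-GPU model
old_model = load_model('old_model.hdf5')

# 3. the template model sits at layers[3] (see above); copy its weights
new_model.set_weights(old_model.layers[3].get_weights())

# 4./5. save the single-GPU model; this file loads in both TF 1.X and TF 2.X
new_model.save('new_model.hdf5')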
I have downloaded a pre-trained PoseNet model for TensorFlow.js (tfjs) from Google, so it's a json file.
However, I want to use it on Android, so I need the .tflite model. Although someone has 'ported' a similar model from tfjs to tflite here, I have no idea what model (there are many variants of PoseNet) they converted. I want to do the steps myself. Also, I don't want to run some arbitrary code someone uploaded into a file on Stack Overflow:
Caution: Be careful with untrusted code—TensorFlow models are code. See Using TensorFlow Securely for details. Tensorflow docs
Does anyone know any convenient ways to do this?
You can find out which tfjs format you have by looking in the json file. It often says "graph-model". The differences between them are described here.
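If you would rather check programmatically, the format field can be read straight from the JSON (a sketch; recent converters write a "format" key, but older files may lack it):

import json

with open('model.json') as f:  # path to your downloaded tfjs model.json
    meta = json.load(f)
print(meta.get('format'))  # e.g. 'graph-model' or 'layers-model'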
From tfjs graph model to SavedModel (more common)
Use tfjs-to-tf by Patrick Levin.
import tfjs_graph_converter.api as tfjs
tfjs.graph_model_to_saved_model(
    "savedmodel/posenet/mobilenet/float/050/model-stride16.json",
    "realsavedmodel"
)
# Code below taken from https://www.tensorflow.org/lite/convert/python_api
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("realsavedmodel")
tflite_model = converter.convert()

# Save the TF Lite model.
with tf.io.gfile.GFile('model.tflite', 'wb') as f:
    f.write(tflite_model)
From tfjs layers model to SavedModel
Note: This will only work for layers model format, not graph model format as in the question. I've written the difference between them here.
Install and use tensorflowjs_converter to convert the .json file into a Keras HDF5 file (from another SO thread).
On Mac, you'll face issues running pyenv (fix), and on Z-shell, pyenv won't load correctly (fix). Also, once pyenv is running, use python -m pip install tensorflowjs instead of pip install tensorflowjs, because pyenv did not change the python used by pip for me.
Once you've followed the tensorflowjs_converter guide, run tensorflowjs_converter to verify it works with no errors; it should just warn you about a missing input_path argument. Then:
tensorflowjs_converter --input_format=tfjs_layers_model --output_format=keras tfjs_model.json hdf5_keras_model.hdf5
Convert the Keras HDF5 file into a SavedModel (standard TensorFlow model file) or directly into a .tflite file using the TFLiteConverter. The following runs in a Python file:
import tensorflow as tf

# Convert the model.
model = tf.keras.models.load_model('hdf5_keras_model.hdf5')
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# Save the TF Lite model.
with tf.io.gfile.GFile('model.tflite', 'wb') as f:
    f.write(tflite_model)
or to save to a SavedModel:
import tensorflow as tf

# Convert the model.
model = tf.keras.models.load_model('hdf5_keras_model.hdf5')
# filepath is your destination directory for the SavedModel.
tf.keras.models.save_model(
    model, filepath, overwrite=True, include_optimizer=True, save_format=None,
    signatures=None, options=None
)
I want to use MMdnn to convert a tensorflow ResNet model to other frameworks. It seems that I can only use mmconvert to read from a .pb frozen graph file.
However, when using tf.estimator.Estimator, the .pb file that it creates is a SavedModelDef. I understand this to be a wrapper around the tf GraphDef. Thus the GraphDef .pb file can be extracted from the SavedModel using freeze_graph.py.
From there, I will need the name of the input node in the tf GraphDef. But I'm unsure how to identify that name from looking at the .pbtxt. The tf.Estimator takes its inputs from a tf.Dataset object, per the framework's design.
I'm guessing there should be a tf.Placeholder somewhere that accepts the input. But I'm not sure how to find what the input node actually is.
Answering my own question here. The freeze_graph utility that comes with TensorFlow is useful for extracting the GraphDef from the tf SavedModel format.
To find the name of the input node, make sure to save the tf SavedModel in pbtxt format. Open it up and look for the first node of your compute graph, e.g. if using tf resnet, the first nodes will be named resnet_model/*. Find the node that feeds this node, and you will have the name of the input node to specify to the MMdnn tools. I expected this to be a tf.Placeholder that the Estimator adds for inputs. This node was just named Placeholder, so that's what I specified as the input node.
First extract the compute graph.
freeze_graph --input_saved_model_dir <path/to/saved_model_dir> --output_node_names softmax --output_graph ./graph_def.pb
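If scanning the .pbtxt by eye is tedious, the Placeholder nodes can also be listed programmatically (a sketch assuming TF 1.x and the graph_def.pb produced by the command above):

import tensorflow as tf

# Parse the frozen graph and print every Placeholder op; one of these
# is the input node name to pass to the MMdnn tools.
graph_def = tf.GraphDef()
with tf.gfile.GFile('./graph_def.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

for node in graph_def.node:
    if node.op == 'Placeholder':
        print(node.name)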
Then use MMdnn to convert it to caffe.
mmconvert -sf tensorflow -iw ./graph_def.pb --inNodeName Placeholder --inputShape 224,224,3 --dstNodeName softmax -df caffe -om tf_resnet