Keras h5 to Tensorflow serving in 2019? - tensorflow

I tried to follow this tutorial on how to convert a Keras H5 model to ProtoBuf and serve it using TensorFlow Serving:
https://towardsdatascience.com/deploying-keras-models-using-tensorflow-serving-and-flask-508ba00f1037
That tutorial, like many other resources on the web, uses "tf.saved_model.simple_save", which is deprecated and has been removed by now (March 2019).
Converting the h5 into a pb using freeze_session, as shown here:
How to export Keras .h5 to tensorflow .pb?
seems to be missing a "serve" tag, as tensorflow_model_server outputs:
Loading servable: {name: ImageClassifier version: 1} failed: Not found: Could not find meta graph def matching supplied tags: { serve }. To inspect available tag-sets in the SavedModel, please use the SavedModel CLI: saved_model_cli
I checked it with saved_model_cli; there are no tags.
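For reference, the tag sets can be inspected like this (the path is a placeholder for wherever the model was exported):
saved_model_cli show --dir ./export --all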
What is the way to make an h5 model servable with tensorflow_model_server nowadays?

NOTE: This applies to TF 2.0+
I'm assuming you have your Keras model in model.h5.
First, just load the model with TensorFlow's implementation of Keras:
from tensorflow import keras
model = keras.models.load_model('model.h5')
Then, simply export a SavedModel:
keras.experimental.export_saved_model(model, 'path_to_saved_model')
Finally, apply any transformation you normally do to go from a SavedModel to the .pb inference file (e.g. freezing, optimizing for inference, etc.).
You can find more details and a full example in TF's official guide for saving and serializing models in TF 2.0.
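As a minimal end-to-end sketch (the model name comes from the error log above; paths and the version number "1" are illustrative), exporting into a numbered version directory lets tensorflow_model_server pick the model up directly:
from tensorflow import keras

# load the trained Keras model and export it as a SavedModel
model = keras.models.load_model('model.h5')
keras.experimental.export_saved_model(model, 'export/ImageClassifier/1')  # "1" is the model version
Then point the server at the base path:
tensorflow_model_server --model_name=ImageClassifier --model_base_path=/abs/path/to/export/ImageClassifier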

Related

Converting tensorflow2.0 model to TensorRT engine (tensorflow2.0)

I have retrained a TensorFlow 2.0 model; it works as a one-class object detector, prepared with Object Detection API v2 (https://tensorflow-object-detection-api-tutorial.readthedocs.io/).
After that I converted it to ONNX (tf2onnx.convert) and tested it - I got the same inference results.
I have tested all of these pretrained models (downloaded from the TF model zoo https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2_detection_zoo.md):
ssd_mobilenet_v2_320x320_coco17_tpu-8
ssd_mobilenet_v1_fpn_640x640_coco17_tpu-8
ssd_mobilenet_v2_fpnlite_640x640_coco17_tpu-8
ssd_resnet50_v1_fpn_640x640_coco17_tpu-8
I retrained them using a small batch of data.
The problem is using it with GStreamer/DeepStream. As far as I have seen, GStreamer consumes either the ONNX model or the model after conversion to TensorRT. (If I provide ONNX, the model is of course also converted to TensorRT, but GStreamer does it right before running.)
I also tried the same pipeline: train -> convert to ONNX -> convert to TRT (or just provide the ONNX model to GStreamer). Same issue.
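For reference, the ONNX conversion step is of this form (a sketch; the paths and opset are placeholders, not taken from the question):
python -m tf2onnx.convert --saved-model ./exported_model/saved_model --output model.onnx --opset 13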
Error:
ERROR: [TRT]: [graph.cpp::computeInputExecutionUses::519] Error Code
9: Internal Error ((Unnamed Layer* 747) [Recurrence]: IRecurrenceLayer
cannot be used to compute a shape tensor)
TensorRT Version: 8.2.1.8
tf2onnx Version: 1.9.3
Is there any chance to get some help?
Or maybe I should skip the ONNX model and just convert it from TensorFlow to a TensorRT engine? Is that possible?
Of course I can upload the model if it would help.
BR!

keras multi_gpu_model saved_model failed to load model in TF2 code

I have trained a multi_gpu_model using TensorFlow 1.13/1.14 and saved it with keras.model.save('<.hdf5>').
Now, after migrating to TensorFlow 2.4.1, in which Keras is integrated as tensorflow.keras, I cannot call tensorflow.keras.models.load_model as I did before, due to the following error:
AttributeError: module 'tensorflow.python.keras.backend' has no attribute 'slice'
After trying to import keras.models.load_model instead, and trying different versions of keras (2.2.4 -> 2.4.1) and tensorflow (2.2 -> 2.4.1), I still cannot load_model from my .hdf5 file using my TF 2.2+ code.
I do know that in TF 2.X+ we can train on distributed machines by using the "strategy" scope, and it does work, but I have a lot of "old" models that need to work on the same code base, which is now being migrated to TF 2.4.1.
Apparently the problem was not the TF versions, but the way I was saving my models in my TF 1.X code.
I used the model returned by keras.multi_gpu_model for both training and saving, which is wrong, as clearly stated in the Keras documentation:
"To save the multi-gpu model, use .save(fname) or .save_weights(fname)
with the template model (the argument you passed to multi_gpu_model),
rather than the model returned by multi_gpu_model."
So, after figuring this out, a model-conversion method using TF 1.X code was adopted (see the sketch after this list):
Build your model from scratch, namely new_model.
Load your pre-trained multi_gpu_model, namely old_model.
Copy old_model's weights, which live in old_model.layers[3] (due to the wrong usage of multi_gpu_model), to new_model.
Save new_model as an .hdf5 file.
Use new_model.hdf5 everywhere - TF 1.X and TF 2.X.
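A minimal sketch of those steps, to be run under TF 1.X with standalone Keras (build_model is a hypothetical function re-creating the original single-GPU architecture; file names are illustrative):
from keras.models import load_model

# steps 1-2: rebuild the architecture and load the wrongly-saved multi-GPU model
new_model = build_model()                     # hypothetical: re-creates the original architecture
old_model = load_model('old_multi_gpu.hdf5')

# step 3: the template model ended up as layers[3] of the multi-GPU wrapper here
new_model.set_weights(old_model.layers[3].get_weights())

# steps 4-5: save a clean single-GPU model usable in both TF 1.X and TF 2.X
new_model.save('new_model.hdf5')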

Incorrect freezing of weights maskrcnn Tensorflow 2 in object_detection_API

I am training the maskrcnn inception v2 model in TensorFlow for further work with OpenVINO. After training the model, I freeze it using a script from the object_detection_API directory:
python exporter_main_v2.py \
--trained_checkpoint_dir training \
--output_directory inference_graph \
--pipeline_config_path training/mask_rcnn_inception_resnet_v2_1024x1024_coco17_gpu-8.config
After this script runs, I get the saved model and pipeline files, which should later be used in OpenVINO.
The following error occurs when loading the resulting files into the Model Optimizer:
Model Optimizer version:
2020-08-20 11:37:05.425293: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_100.dll
[ FRAMEWORK ERROR ] Cannot load input model: TensorFlow cannot read the model file: "C:\Users\Anna\Downloads\inference_graph\inference_graph\saved_model\saved_model.pb" is incorrect TensorFlow model file.
The file should contain one of the following TensorFlow graphs:
frozen graph in text or binary format
inference graph for freezing with checkpoint (--input_checkpoint) in text or binary format
meta graph
Make sure that --input_model_is_text is provided for a model in text format. By default, a model is interpreted in binary format. Framework error details: Error parsing message.
For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #43.
I trained the model following the example from the linked article, using my own dataset: https://gilberttanner.com/blog/train-a-mask-r-cnn-model-with-the-tensorflow-object-detection-api
On GPU, the model starts and works, but I need to get the converted model for OpenVINO.
Run the mo_tf.py script with a path to the SavedModel directory:
python3 mo_tf.py --saved_model_dir <SAVED_MODEL_DIRECTORY>
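For a TF2 Object Detection API model like this one, the Model Optimizer usually also needs the pipeline config and a transformations config. A sketch (the JSON file name and all paths are assumptions that depend on your OpenVINO version):
python3 mo_tf.py \
--saved_model_dir inference_graph/saved_model \
--tensorflow_object_detection_api_pipeline_config inference_graph/pipeline.config \
--transformations_config <OPENVINO_INSTALL_DIR>/deployment_tools/model_optimizer/extensions/front/tf/mask_rcnn_support_api_v2.0.json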

Problem converting Keras Models into Layers API format models to use with tensorflow.js

I have a problem converting Keras models into Layers API format models to use with TensorFlow.js.
I use the command:
$ tensorflowjs_converter --input_format keras kerasModels/vgg16_weights_tf_dim_ordering_tf_kernels.h5 convertedModels/
I get an error "KeyError: Can't open attribute (can't locate attribute 'keras version')"
I assume you are trying to convert the model downloaded from here, which is possibly outdated now.
You can download the VGG16 model fresh from keras-applications using the following Python script:
from keras.applications.vgg16 import VGG16
model = VGG16(include_top=True, weights='imagenet')
model.save("VGG16.h5")

How to export Keras h5 format to TensorFlow .meta?

I used Keras to build a model and trained it. Then I saved the model as an h5 file, i.e. model.save('name.h5'). Now I want to reload the model in TensorFlow such that I have access to the .meta file; for example, I want to import the computational graph from the .meta file, i.e. tf.train.import_meta_graph('name_of_the_file.meta').
So, the question is how to convert .h5 file of Keras to the following four files of TensorFlow:
.meta
checkpoint
.data-00000-of-00001
.index
You can use third-party packages, for example keras_to_tensorflow:
keras_to_tensorflow: General code to convert a trained keras model into an inference tensorflow model
The conversion can be done by
python3 keras_to_tensorflow.py -input_model_file model.h5
TensorFlow 2.x will do that automatically. The function you are using to save is:
save(
    filepath,
    overwrite=True,
    include_optimizer=True,
    save_format=None
)
The save_format argument lets you choose either 'h5' or 'tf'. However, for TensorFlow 1.x it is not implemented yet (and probably never will be).
save_format: Either 'tf' or 'h5', indicating whether to save the model
to Tensorflow SavedModel or HDF5. The default is currently 'h5', but
will switch to 'tf' in TensorFlow 2.0. The 'tf' option is currently
disabled (use tf.keras.experimental.export_saved_model instead).
You can do as it says and use tf.keras.experimental.export_saved_model, but it will still not create the .meta file.
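If you specifically need the four TF 1.x checkpoint files (.meta, checkpoint, .data-00000-of-00001, .index), one sketch is to pull the session tf.keras is using and write a checkpoint with tf.train.Saver; this runs under TF 1.x only ('name.h5' comes from the question, everything else is an assumption):
import tensorflow as tf
from tensorflow import keras

keras.backend.set_learning_phase(0)          # inference mode
model = keras.models.load_model('name.h5')   # the trained Keras model from the question
saver = tf.train.Saver()                     # picks up the model's variables from the default graph
sess = keras.backend.get_session()           # the session tf.keras uses under TF 1.x
saver.save(sess, './name')                   # writes name.meta, name.index, name.data-00000-of-00001, checkpoint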