How to print the layers of a TensorFlow 2 saved_model

I am using TensorFlow 2.6.2, and I downloaded the model from the TensorFlow 2 Model Zoo.
I am able to load the model using this:
import tensorflow as tf

if __name__ == "__main__":
    try:
        model = tf.saved_model.load(
            "/home/user/git/models_zoo/ssd_mobilenet_v2_320x320_coco17_tpu-8/saved_model/")
    except Exception:
        raise  # except clause not shown in the original snippet
Unfortunately, I am not able to see all the layers of the model using the snippet below:
for v in model.trainable_variables:
    print(v.name)
which should ideally print the names of all the trainable variables in the network, but instead I am getting the following error:
print(model.trainable_variables)
AttributeError: '_UserObject' object has no attribute 'trainable_variables'
Can someone please tell me what I am doing wrong here?

I was able to print the variable names using this:
loaded = tf.saved_model.load("/home/user/git/models_zoo/ssd_mobilenet_v2_320x320_coco17_tpu-8/saved_model/")
infer = loaded.signatures["serving_default"]
for v in infer.trainable_variables:
    print(v.name)
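As an aside, if you only need the variable names and shapes, you can also read them directly from the checkpoint files stored inside the SavedModel directory. A minimal sketch, assuming the standard variables/variables prefix (the path is an assumption based on the question):

import tensorflow as tf

# A SavedModel stores its weights as a checkpoint under variables/;
# tf.train.list_variables yields (name, shape) pairs without loading the model.
ckpt_prefix = "/home/user/git/models_zoo/ssd_mobilenet_v2_320x320_coco17_tpu-8/saved_model/variables/variables"
for name, shape in tf.train.list_variables(ckpt_prefix):
    print(name, shape)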

Related

Exception encountered when calling layer "feature_extraction_layer" (type KerasLayer): downloading a model from TensorFlow Hub

I want to create a model from TensorFlow Hub, but when I try to load the model I am getting the following error.
# let's create a model function
import tensorflow_hub as hub
from tensorflow.keras import Sequential, layers

IMAGE_SHAPE = (224, 224)

def create_model(model_url, num_classes=10):
    """Create a Keras Sequential model.

    Args:
        model_url (str): a TensorFlow Hub feature extraction URL
        num_classes (int): number of output neurons in the output layer
    """
    # download a pretrained model
    feature_extraction_layer = hub.KerasLayer(model_url, trainable=False,
                                              name='feature_extraction_layer',
                                              input_shape=IMAGE_SHAPE + (3,))
    # create the model
    model = Sequential([feature_extraction_layer,
                        layers.Dense(num_classes, activation='softmax',
                                     name='output_layer')])
    return model
I tried running the same code on Google Colab, but it didn't work there either; it gave me the same error.
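For reference, a function like the one above would typically be used as follows. A sketch: the Hub feature-vector URL and the compile settings here are illustrative assumptions, not taken from the question.

import tensorflow as tf

# Hypothetical usage; swap in the Hub URL you actually intend to use.
model = create_model(
    "https://tfhub.dev/google/imagenet/mobilenet_v2_100_224/feature_vector/4",
    num_classes=10)
model.compile(loss="categorical_crossentropy",
              optimizer=tf.keras.optimizers.Adam(),
              metrics=["accuracy"])
model.summary()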

Unable to convert .h5 model to ONNX for inferencing through any means

I built a custom model in .h5 from Matterport's MaskRCNN implementation. I managed to save the full model, and not the weights alone, using model.keras_model.save(), and I assume it worked correctly.
I need to convert this model to ONNX to inference in Unity Barracuda, and I have been hitting several errors along the way.
I tried:
T1. .h5 to ONNX using this tutorial and the keras2onnx package, and I hit an error at:
model = load_model('model.h5')
Error:
ValueError: Unknown layer: BatchNorm
T2. Defining custom layers using this GitHub code (a sketch of this kind of custom-object registration follows T4 below):
model = keras.models.load_model(
    r'model.h5',
    custom_objects={'BatchNorm': BatchNorm,
                    'tf': tf,
                    'ProposalLayer': ProposalLayer,
                    'PyramidROIAlign1': PyramidROIAlign1,
                    'PyramidROIAlign2': PyramidROIAlign2,
                    'DetectionLayer': DetectionLayer},
    compile=False)
Error:
ValueError: No model found in config file.
ValueError: Unknown layer: PyramidROIAlign
T3. .h5 to .pb (frozen graph) and .pbtxt, and then from .pb to ONNX using tf2onnx after finding input and output nodes (seems to be only one of each?):
assert d in name_to_node, "%s is not in graph" % d
AssertionError: output0 is not in graph
T4. .h5 to SavedModel using tf-serving code from here and then python -m tf2onnx.convert --saved-model exported_models\coco_mrcnn\3 --opset 15 --output "model.onnx" to convert to ONNX:
ValueError: make_sure failure: variable mrcnn_detection/map/while/Enter already exists as state variable.
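For context, the custom-object registration attempted in T2 hinges on class definitions like the following (a sketch modeled on Matterport's implementation; treat the exact class body as an assumption):

from tensorflow import keras

# Sketch of Matterport's BatchNorm wrapper; load_model needs this class in
# custom_objects to deserialize the matching layers from the .h5 file.
class BatchNorm(keras.layers.BatchNormalization):
    def call(self, inputs, training=None):
        return super(BatchNorm, self).call(inputs, training=training)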
TLDR: Is there a way to convert my .h5 model to ONNX through any direct/indirect means? I have been stuck on this for days!
Thanks in advance.
Edit 1:
It seems that keras.models.load_model() throws the first two errors. Is there a way I can work with the .pb/.pbtxt model, a way around using load_model(), or a way to solve the load_model() issue?
Edit 2:
Code for T1:
custom dataset modified from Matterport's MaskRCNN implementation
Code for T4
Try converting it to the SavedModel format and then to ONNX.
import numpy as np
import tensorflow as tf
from tensorflow import keras

def get_model():
    # Create a simple model.
    inputs = keras.Input(shape=(32,))
    outputs = keras.layers.Dense(1)(inputs)
    model = keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="mean_squared_error")
    return model

model = get_model()

# Train the model.
test_input = np.random.random((128, 32))
test_target = np.random.random((128, 1))
model.fit(test_input, test_target)

# Calling `save('my_model.h5')` creates an h5 file `my_model.h5`.
model.save("my_h5_model.h5")

# It can be used to reconstruct the model identically.
model = keras.models.load_model("my_h5_model.h5")
tf.saved_model.save(model, "tmp_model")
Then convert it using tf2onnx.
python3 -m tf2onnx.convert --saved-model tmp_model --output "model.onnx"
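Alternatively, newer tf2onnx releases expose a Python API, so the conversion can happen in the same script. A sketch, assuming tf2onnx is installed and model is the Keras model loaded above (the opset value is an illustrative choice):

import tf2onnx

# Convert the in-memory Keras model directly to ONNX and write it to disk.
model_proto, _ = tf2onnx.convert.from_keras(model, opset=13, output_path="model.onnx")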
This works for me.
Via an Anaconda PowerShell console (run as admin):
pip install tf2onnx
pip install onnxmltools
and in a notebook (for example):
from tensorflow.python.keras.models import load_model
import os
os.environ['TF_KERAS'] = '1'
import onnxmltools
model = load_model('[h5 path]')
onnx_model = onnxmltools.convert_keras(model)
onnxmltools.utils.save_model(onnx_model, '[onnx path]')
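To sanity-check the exported file, you can run a dummy inference with onnxruntime. A sketch, assuming onnxruntime is installed and a model with a single (1, 32) float input like the toy example above:

import numpy as np
import onnxruntime as ort

# Load the ONNX model and feed one random batch through it.
sess = ort.InferenceSession("model.onnx")
input_name = sess.get_inputs()[0].name
dummy = np.random.random((1, 32)).astype(np.float32)
print(sess.run(None, {input_name: dummy}))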

error trying to convert a tf model into a tflite model

import tensorflow as tf
from tensorflow import lite
converter = tf.lite.TFLiteConverter.from_saved_model("C:/tmp")
model = converter.convert()
open("converted_model.lite", "wb").write(model)
It gives:
ValueError: This converter can only convert a single ConcreteFunction. Converting multiple functions is under development.
Is there a way to fix this, or to convert my .pb model into a Keras (h5) model?
Related questions: How do i convert tensorflow 2.0 estimator model to tensorflow lite?
Try to use a concrete function:
import tensorflow as tf
from tensorflow import lite
saved_model_obj = tf.saved_model.load(export_dir="C:/tmp")
concrete_func = saved_model_obj.signatures['serving_default']
converter = tf.lite.TFLiteConverter.from_concrete_functions([concrete_func])
# print(saved_model_obj.signatures.keys())
# converter.optimizations = [tf.lite.Optimize.DEFAULT]
# converter.experimental_new_converter = True
model = converter.convert()
open("converted_model.lite", "wb").write(model)
serving_default is the default key for signatures in a SavedModel.
If it doesn't work, try uncommenting converter.experimental_new_converter = True and the two lines above it.
Short explanation
Based on the Concrete functions guide:
Eager execution in TensorFlow 2 evaluates operations immediately, without building graphs.
To save the model you need a graph (or graphs) wrapped in Python callables: concrete functions.
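For illustration, here is a minimal sketch of obtaining a concrete function from a tf.function (the toy function is an assumption):

import tensorflow as tf

@tf.function
def double(x):
    return x * 2

# Tracing with an input signature produces a single graph: a concrete function.
concrete_func = double.get_concrete_function(tf.TensorSpec(shape=[None], dtype=tf.float32))
print(concrete_func(tf.constant([1.0, 2.0])))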

how to properly save a loaded h5 model to pb with TF2

I load a saved h5 model and want to save the model as pb.
The model is saved during training with the tf.keras.callbacks.ModelCheckpoint callback function.
TF version: 2.0.0a
edit: same issue also with 2.0.0-beta1
My steps to save a pb:
I first set K.set_learning_phase(0)
then I load the model with tf.keras.models.load_model
Then, I define the freeze_session() function.
(optionally, I compile the model)
Then I use the freeze_session() function with tf.keras.backend.get_session.
The error I get, with and without compiling:
AttributeError: module 'tensorflow.python.keras.api._v2.keras.backend'
has no attribute 'get_session'
My Question:
Does TF2 not have the get_session anymore?
(I know that tf.contrib.saved_model.save_keras_model does not exist anymore, and I also tried tf.saved_model.save, which did not really work.)
Or does get_session only work when I actually train the model, so that just loading the h5 does not work?
Edit: Also with a freshly trained session, no get_session is available.
If so, how would I go about converting the h5 to pb without training? Is there a good tutorial?
Thank you for your help
update:
Since the official release of TF2.x, the graph/session concept has changed; the SavedModel API should be used.
You can use tf.compat.v1.disable_eager_execution() with TF2.x, and it will result in a pb file. However, I am not sure what kind of pb file it is, as the SavedModel composition changed from TF1 to TF2. I will keep digging.
Here is how I save the model to pb from an h5 model:
import logging
import tensorflow as tf
from tensorflow.compat.v1 import graph_util
from tensorflow.python.keras import backend as K
from tensorflow import keras

# necessary !!!
tf.compat.v1.disable_eager_execution()

h5_path = '/path/to/model.h5'
model = keras.models.load_model(h5_path)
model.summary()

# save pb
with K.get_session() as sess:
    output_names = [out.op.name for out in model.outputs]
    input_graph_def = sess.graph.as_graph_def()
    for node in input_graph_def.node:
        node.device = ""
    graph = graph_util.remove_training_nodes(input_graph_def)
    graph_frozen = graph_util.convert_variables_to_constants(sess, graph, output_names)
    # write_graph takes the output directory and file name as separate arguments
    tf.io.write_graph(graph_frozen, '/path/to/pb', 'model.pb', as_text=False)
logging.info("save pb successfully!")
I use TF2 to convert the model like this:
pass keras.callbacks.ModelCheckpoint(save_weights_only=True) to model.fit to save a checkpoint while training;
after training, load the checkpoint with self.model.load_weights(self.checkpoint_path);
save as h5 with self.model.save(h5_path, overwrite=True, include_optimizer=False);
convert the h5 to pb just like above (see the sketch below).
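A minimal end-to-end sketch of those four steps, using a toy model and illustrative paths (both are assumptions):

import numpy as np
from tensorflow import keras

# Toy model standing in for the real one.
model = keras.Sequential([keras.layers.Dense(1, input_shape=(32,))])
model.compile(optimizer="adam", loss="mse")

# 1. Save weights-only checkpoints while training.
ckpt_cb = keras.callbacks.ModelCheckpoint("ckpt/weights", save_weights_only=True)
model.fit(np.random.random((64, 32)), np.random.random((64, 1)),
          epochs=1, callbacks=[ckpt_cb])

# 2. Reload the checkpoint, then 3. save as h5 without the optimizer.
model.load_weights("ckpt/weights")
model.save("model.h5", overwrite=True, include_optimizer=False)
# 4. model.h5 can now be frozen to a pb with the snippet above.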
I'm wondering the same thing, as I'm trying to use get_session() and set_session() to free up GPU memory. These functions seem to be missing and aren't in the TF2.0 Keras documentation. I imagine it has something to do with Tensorflow's switch to eager execution, as direct session access is no longer required.
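For what it's worth, the usual session-free way to keep GPU memory in check in TF 2.x is memory growth. A sketch, not a drop-in replacement for set_session:

import tensorflow as tf

# Allocate GPU memory on demand instead of grabbing it all upfront.
for gpu in tf.config.experimental.list_physical_devices('GPU'):
    tf.config.experimental.set_memory_growth(gpu, True)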
Use
from tensorflow.compat.v1.keras.backend import get_session
in Keras 2 & TensorFlow 2.2, then call:
import logging
import tensorflow as tf
from tensorflow.compat.v1 import graph_util
from tensorflow.python.keras import backend as K
from tensorflow import keras
from tensorflow.compat.v1.keras.backend import get_session

# necessary !!!
tf.compat.v1.disable_eager_execution()

h5_path = '/path/to/model.h5'
model = keras.models.load_model(h5_path)
model.summary()

# save pb
with get_session() as sess:
    output_names = [out.op.name for out in model.outputs]
    input_graph_def = sess.graph.as_graph_def()
    for node in input_graph_def.node:
        node.device = ""
    graph = graph_util.remove_training_nodes(input_graph_def)
    graph_frozen = graph_util.convert_variables_to_constants(sess, graph, output_names)
    # write_graph takes the output directory and file name as separate arguments
    tf.io.write_graph(graph_frozen, '/path/to/pb', 'model.pb', as_text=False)
logging.info("save pb successfully!")

Using hub.text_embedding_column with tf.contrib.estimator.RNNClassifier

I'm trying to use a module off TensorFlow Hub (a word embedding module) with tf.contrib.estimator.RNNClassifier.
My desired model
import tensorflow as tf
import tensorflow_hub as hub

embedded_text_feature_column = hub.text_embedding_column(
    key="description",
    module_spec="https://tfhub.dev/google/nnlm-en-dim128/1")

estimator = tf.contrib.estimator.RNNClassifier(
    sequence_feature_columns=[embedded_text_feature_column],
    num_units=[32, 16])
Running that returns the following error:
ValueError: All feature_columns must be of type _SequenceDenseColumn.
You can wrap a sequence_categorical_column with an embedding_column or indicator_column.
Given (
type <class 'tensorflow_hub.feature_column._TextEmbeddingColumn'>):
_TextEmbeddingColumn(key='title_description', module_spec=<tensorflow_hub.native_module._ModuleSpec object at 0x7fb0102a5a90>, trainable=False
)
A working model
Using the TF Hub module works fine with:
estimator = tf.estimator.DNNClassifier(
    hidden_units=[32, 16],
    feature_columns=[embedded_text_feature_column])
Is it possible to use the nnlm module with RNNClassifier?
The code corresponding to your desired model seems to work without error in Google Colab with TensorFlow version 1.15.
Please find the working code below:
!pip install tensorflow==1.15

import tensorflow as tf
import tensorflow_hub as hub

embedded_text_feature_column = hub.text_embedding_column(
    key="description",
    module_spec="https://tfhub.dev/google/nnlm-en-dim128/1")

estimator = tf.contrib.estimator.RNNClassifier(
    sequence_feature_columns=[embedded_text_feature_column],
    num_units=[32, 16])
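If the hub column is still rejected on your setup, the error message itself points at a workaround: build a true sequence column and wrap it in an embedding_column. A sketch under TF 1.15; the vocabulary and embedding size are illustrative assumptions:

import tensorflow as tf

# Hypothetical vocabulary; replace with the tokens in your data.
seq_col = tf.feature_column.sequence_categorical_column_with_vocabulary_list(
    key="description", vocabulary_list=["good", "bad", "okay"])
embedded_seq_col = tf.feature_column.embedding_column(seq_col, dimension=8)

estimator = tf.contrib.estimator.RNNClassifier(
    sequence_feature_columns=[embedded_seq_col],
    num_units=[32, 16])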