Convert saved_model.pbtxt to saved_model.pb in TensorFlow 2

I have used this script to convert a saved_model.pb to saved_model.pbtxt. I then went and modified some parameters (is_training: b: true). Now I would like to convert back to saved_model.pb so it can be read into tf2onnx.
def convert_saved_model_to_pbtxt(path):
    import os
    from google.protobuf import text_format
    from tensorflow.core.protobuf import saved_model_pb2

    saved_model = saved_model_pb2.SavedModel()
    with open(os.path.join(path, 'saved_model.pb'), 'rb') as f:
        saved_model.ParseFromString(f.read())
    with open(os.path.join(path, 'saved_model.pbtxt'), 'w') as f:
        f.write(text_format.MessageToString(saved_model))
What is the recommended way to convert this .pbtxt to ONNX?
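One approach that should work is simply the reverse of the script above: parse the edited text proto back into a SavedModel message and serialize it as binary, then point tf2onnx at the resulting directory. A minimal sketch, assuming the edited saved_model.pbtxt still parses as a valid SavedModel text proto:
import os
from google.protobuf import text_format
from tensorflow.core.protobuf import saved_model_pb2

def convert_pbtxt_to_saved_model(path):
    saved_model = saved_model_pb2.SavedModel()
    # Parse the (edited) text-format proto back into a SavedModel message.
    with open(os.path.join(path, 'saved_model.pbtxt'), 'r') as f:
        text_format.Parse(f.read(), saved_model)
    # Serialize it back to the binary format that tf2onnx reads.
    with open(os.path.join(path, 'saved_model.pb'), 'wb') as f:
        f.write(saved_model.SerializeToString())
tf2onnx can then be pointed at the directory directly, e.g. python -m tf2onnx.convert --saved-model path --output model.onnx.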

Related

Is it impossible to quantize the .tflite file? (OSError occurred)

I am trying to quantize my model (tflite).
I want to convert float32 to float16 through post-training quantization.
This is my code:
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model('models')
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]
tflite_quant_model = converter.convert()
with open("quant.tflite", "wb") as f:
    f.write(tflite_quant_model)
On my MacBook, there is a folder called 'models' which contains two tflite files.
When I execute the code, the following error occurs:
converter = tf.lite.TFLiteConverter.from_saved_model('models')
OSError: SavedModel file does not exist at: models/{saved_model.pbtxt|saved_model.pb}
I checked most of the related posts on Stack Overflow, but I couldn't find a solution.
Please review my code and give me some advice.
I uploaded my tflite file because I guess it will be necessary to check whether there is a problem with it.
This is my model (download link):
https://drive.google.com/file/d/13gft7bREsv2vZYFvfoCiP5ndxHkfGKIM/view?usp=sharing
Thank you so much.
The tf.lite.TFLiteConverter.from_saved_model function expects the path to a TensorFlow SavedModel directory (one containing saved_model.pb). You are pointing it at a folder of TensorFlow Lite (.tflite) models instead, which necessarily leads to this error. If you want to convert your model to float16, the only way I know of is to start from the original model in SavedModel (.pb) format and convert it from there.
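For illustration, a minimal sketch of the float16 path starting from a SavedModel directory (the path is a placeholder; the directory must contain saved_model.pb, not .tflite files):
import tensorflow as tf

# Point the converter at a SavedModel directory, not at .tflite files.
converter = tf.lite.TFLiteConverter.from_saved_model('path/to/saved_model_dir')
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]
tflite_model = converter.convert()
with open('quant_f16.tflite', 'wb') as f:
    f.write(tflite_model)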

How to load a trained model saved by export_inference_graph.py?

I'm following an example that uses TensorFlow 1.15.0's Object Detection API.
The tutorial is clear on the following aspects:
how to download a model
how to load a custom dataset with .xml files, make .csv files from them, and then .record files
how to configure a training pipeline
how to get TensorBoard graphs
how to train the net, saving checkpoints (using model_main.py)
how to export (save) the model (using export_inference_graph.py)
What I have not been able to accomplish, however, is loading the saved model to use it.
I tried tf.saved_model.loader.load(sess, flags, export_dir), but I get
INFO:tensorflow:Saver not created because there are no variables in the graph to restore.
INFO:tensorflow:The specified SavedModel has no variables; no checkpoints were restored.
The folder given in export_dir has the following structure:
+dir
  +saved_model
    -saved_model.pb
  -model.ckpt.data-00000-of-00001
  -model.ckpt.index
  -checkpoint
  -frozen_inference_graph.pb
  -model.ckpt.meta
  -pipeline.config
My final goal here is to capture images with a camera and feed them to the net for real-time object detection.
As an intermediate step, I just want to be able to feed a single picture and get the output. I was able to train the net, but now I can't use it.
Thank you in advance.
I found an example on how to download a model that let me work through it.
Since the folder format of the file downloaded in the example is the same as the one I get from my code, I just had to adapt it.
The original function that downloads the model is
def load_model(model_name):
    base_url = 'http://download.tensorflow.org/models/object_detection/'
    model_file = model_name + '.tar.gz'
    model_dir = tf.keras.utils.get_file(
        fname=model_name,
        origin=base_url + model_file,
        untar=True)
    model_dir = pathlib.Path(model_dir)/"saved_model"
    model = tf.saved_model.load(str(model_dir))
    model = model.signatures['serving_default']
    return model
Then I used that function to create this new one
def load_local_model(model_path):
    model_dir = pathlib.Path(model_path)/"saved_model"
    model = tf.saved_model.load(str(model_dir))
    model = model.signatures['serving_default']
    return model
At first this didn't work, since tf.saved_model.load expected 3 arguments, but that was solved by adding the two import blocks from the same example. I still don't know which import did the trick and why (I'll edit this answer when I find out), but for the moment this code works and the example lets me do more things.
The import blocks are the following
import numpy as np
import os
import six.moves.urllib as urllib
import sys
import tarfile
import tensorflow as tf
import zipfile
from collections import defaultdict
from io import StringIO
from matplotlib import pyplot as plt
from PIL import Image
from IPython.display import display
and
from object_detection.utils import ops as utils_ops
from object_detection.utils import label_map_util
from object_detection.utils import visualization_utils as vis_util
EDIT
What was really needed for this to work was the following block.
import os
import pathlib

if "models" in pathlib.Path.cwd().parts:
    while "models" in pathlib.Path.cwd().parts:
        os.chdir('..')
elif not pathlib.Path('models').exists():
    !git clone --depth 1 https://github.com/tensorflow/models
%%bash
cd models/research/
protoc object_detection/protos/*.proto --python_out=.
%%bash
cd models/research
pip install .
Otherwise this import block won't work:
from object_detection.utils import ops as utils_ops
from object_detection.utils import label_map_util
from object_detection.utils import visualization_utils as vis_util
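With those imports in place, here is a sketch of feeding a single picture to the loaded model (paths are placeholders, and the output keys follow the Object Detection API's serving signature, so adjust them if your export differs):
import numpy as np
import tensorflow as tf
from PIL import Image

detect_fn = load_local_model('path/to/export_dir')  # hypothetical export folder
image_np = np.array(Image.open('test.jpg'))
# The serving signature expects a batched image tensor.
input_tensor = tf.convert_to_tensor(image_np)[tf.newaxis, ...]
detections = detect_fn(input_tensor)
print(detections['detection_boxes'])
print(detections['detection_scores'])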

How to properly save a loaded h5 model to pb with TF2

I load a saved h5 model and want to save the model as pb.
The model is saved during training with the tf.keras.callbacks.ModelCheckpoint callback function.
TF version: 2.0.0a
Edit: the same issue also occurs with 2.0.0-beta1
My steps to save a pb:
I first set K.set_learning_phase(0)
then I load the model with tf.keras.models.load_model
Then, I define the freeze_session() function.
(Optionally, I compile the model.)
Then I call the freeze_session() function with tf.keras.backend.get_session.
The error I get, with and without compiling:
AttributeError: module 'tensorflow.python.keras.api._v2.keras.backend'
has no attribute 'get_session'
My Question:
Does TF2 not have the get_session anymore?
(I know that tf.contrib.saved_model.save_keras_model does not exist anymore, and I also tried tf.saved_model.save, which did not really work.)
Or does get_session only work when I actually train the model, and just loading the h5 does not?
Edit: Also with a freshly trained session, no get_session is available.
If so, how would I go about converting the h5 to pb without training? Is there a good tutorial?
Thank you for your help
Update:
Since the official release of TF 2.x, the graph/session concept has changed, and the SavedModel API should be used.
You can use tf.compat.v1.disable_eager_execution() with TF 2.x and it will result in a pb file. However, I am not sure what kind of pb file it is, as the SavedModel composition changed from TF1 to TF2. I will keep digging.
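For reference, a minimal sketch of the TF2 SavedModel route mentioned above (paths are placeholders):
import tensorflow as tf
from tensorflow import keras

model = keras.models.load_model('/path/to/model.h5')
# Writes a SavedModel directory containing saved_model.pb plus a variables/ folder.
tf.saved_model.save(model, '/path/to/saved_model_dir')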
Here is how I save the model to pb from the h5 model:
import logging
import tensorflow as tf
from tensorflow.compat.v1 import graph_util
from tensorflow.python.keras import backend as K
from tensorflow import keras

# necessary !!!
tf.compat.v1.disable_eager_execution()

h5_path = '/path/to/model.h5'
model = keras.models.load_model(h5_path)
model.summary()

# save pb
with K.get_session() as sess:
    output_names = [out.op.name for out in model.outputs]
    input_graph_def = sess.graph.as_graph_def()
    for node in input_graph_def.node:
        node.device = ""
    graph = graph_util.remove_training_nodes(input_graph_def)
    graph_frozen = graph_util.convert_variables_to_constants(sess, graph, output_names)
    tf.io.write_graph(graph_frozen, '/path/to/pb', 'model.pb', as_text=False)
    logging.info("save pb successfully!")
I use TF2 to convert the model like this (a runnable sketch follows this list):
1. Pass keras.callbacks.ModelCheckpoint(save_weights_only=True) to model.fit and save a checkpoint while training;
2. After training, self.model.load_weights(self.checkpoint_path) loads the checkpoint;
3. self.model.save(h5_path, overwrite=True, include_optimizer=False) saves it as h5;
4. Convert h5 to pb just like above.
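A self-contained sketch of that workflow on a toy model (the model, data, and paths are placeholders, not the answerer's actual setup):
import numpy as np
from tensorflow import keras

checkpoint_path = 'ckpt/weights'  # hypothetical checkpoint location
h5_path = 'model.h5'

# Toy model and random data, only to make the sketch runnable.
model = keras.Sequential([keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer='adam', loss='mse')
x, y = np.random.rand(32, 4), np.random.rand(32, 1)

# Step 1: save weights-only checkpoints during training.
ckpt_cb = keras.callbacks.ModelCheckpoint(checkpoint_path, save_weights_only=True)
model.fit(x, y, epochs=2, callbacks=[ckpt_cb])

# Steps 2 and 3: reload the checkpoint and save the model as h5.
model.load_weights(checkpoint_path)
model.save(h5_path, overwrite=True, include_optimizer=False)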
I'm wondering the same thing, as I'm trying to use get_session() and set_session() to free up GPU memory. These functions seem to be missing and aren't in the TF 2.0 Keras documentation. I imagine it has something to do with TensorFlow's switch to eager execution, as direct session access is no longer required.
Use
from tensorflow.compat.v1.keras.backend import get_session
in Keras 2 & TensorFlow 2.2, then call:
import logging
import tensorflow as tf
from tensorflow.compat.v1 import graph_util
from tensorflow import keras
from tensorflow.compat.v1.keras.backend import get_session

# necessary !!!
tf.compat.v1.disable_eager_execution()

h5_path = '/path/to/model.h5'
model = keras.models.load_model(h5_path)
model.summary()

# save pb
with get_session() as sess:
    output_names = [out.op.name for out in model.outputs]
    input_graph_def = sess.graph.as_graph_def()
    for node in input_graph_def.node:
        node.device = ""
    graph = graph_util.remove_training_nodes(input_graph_def)
    graph_frozen = graph_util.convert_variables_to_constants(sess, graph, output_names)
    tf.io.write_graph(graph_frozen, '/path/to/pb', 'model.pb', as_text=False)
    logging.info("save pb successfully!")

Error importing keras backend - cannot import name has_arg

I attempt to import the Keras backend to call get_session, but I encounter an error: cannot import name has_arg.
There should be no need to import the tensorflow_backend explicitly.
Look at the first lines of an example from the Keras documentation:
# TensorFlow example
>>> from keras import backend as K
>>> tf_session = K.get_session()
[...]
As long as you are using the tensorflow backend, the get_session() function should be available.

TensorFlow: load local CSV file

I am running TensorFlow in a Docker container and I am a newbie. This is code I just copied from the TensorFlow tutorial to load a CSV dataset.
import tensorflow as tf
import numpy as np
# Data sets
IRIS_TRAINING = "iris_training.csv"
IRIS_TEST = "iris_test.csv"
# Load datasets.
training_set = tf.contrib.learn.datasets.base.load_csv(filename=IRIS_TRAINING, target_dtype=np.int)
test_set = tf.contrib.learn.datasets.base.load_csv(filename=IRIS_TEST, target_dtype=np.int)
Anyways this throws the following error :
AttributeError: 'module' object has no attribute 'datasets'
My question is: how do I load a CSV file which I have downloaded locally to my desktop? Should it be something like this?
IRIS_TRAINING = r"C:\Users\priya\Desktop\iris_training.csv"
IRIS_TEST = r"C:\Users\priya\Desktop\iris_test.csv"
How do I load a CSV? Is any documentation available?
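No answer is included above, but as a sketch, one way that avoids tf.contrib entirely is to read the CSV with numpy and hand the arrays to TensorFlow; the Iris layout (a header row, four float feature columns, and an integer label in the last column) is an assumption here:
import numpy as np

def load_iris_csv(path):
    # Assumed layout: header row, 4 float features, integer label last.
    data = np.genfromtxt(path, delimiter=',', skip_header=1)
    features = data[:, :-1].astype(np.float32)
    labels = data[:, -1].astype(np.int32)
    return features, labels

train_x, train_y = load_iris_csv(r"C:\Users\priya\Desktop\iris_training.csv")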