Cannot parse file error when converting a .pb file to tflite - tensorflow

Hi, I was making a tflite file for a custom ALBERT model from a .pb file in TF 1.15, but it raised the following error:
raise IOError("Cannot parse file %s: %s." % (path_to_pb, str(e)))
OSError: Cannot parse file b'/home/choss/test2/freeze2/saved_model.pb': Error parsing message.
The code below is how I made the .pb file:
meta_path = 'model.ckpt-400.meta'  # Your .meta file
output_node_names = ['loss/Softmax']

with tf.Session() as sess:
    # Restore the graph
    saver = tf.train.import_meta_graph(meta_path)

    # Load weights
    ckpt = '/home/choss/test2/freeze2/model.ckpt-400'
    print(ckpt)
    saver.restore(sess, ckpt)
    output_node_names = [n.name for n in tf.get_default_graph().as_graph_def().node]

    # Freeze the graph
    frozen_graph_def = tf.graph_util.convert_variables_to_constants(
        sess,
        sess.graph_def,
        output_node_names)

    # Save the frozen graph
    with open('saved_model.pb', 'wb') as f:
        f.write(frozen_graph_def.SerializeToString())
And I tried to make the tflite file with the code below:
saved_model_dir = "/home/choss/test2/freeze2"
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)
I used tf.graph_util.convert_variables_to_constants instead of freeze_graph because
freeze_graph.freeze_graph('./graph.pbtxt', saver, False, 'model.ckpt-400', 'loss/ArgMax', "", "", 'frozen.pb', True, "")
gave me this error message:
File "/home/pgb/anaconda3/envs/myenv/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py", line 2154, in __getitem__
return self._inputs[i]
IndexError: list index out of range
Is it because I did not use freeze_graph?
If so, is there any other way aside from freeze_graph?

Instead of freezing the graph yourself, I recommend exporting a TF SavedModel and using the SavedModel converter with a recent TF version. You can decouple the TensorFlow versions for training and converting: for example, training can be done in TF 1.15 and the SavedModel exported from it, and then you can bring that SavedModel to the TFLite converter API in TensorFlow 2.4.1 or beyond.
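A minimal sketch of that workflow, reusing the checkpoint paths and the 'loss/Softmax' output node from the question, might look like this (the input tensor name 'input_ids:0' is an assumption; substitute the actual input of your graph):

# Step 1, in the TF 1.15 environment: export a SavedModel instead of a hand-frozen .pb.
# 'input_ids:0' is an assumed input tensor name; 'loss/Softmax:0' comes from the question.
import tensorflow as tf

with tf.Session() as sess:
    saver = tf.train.import_meta_graph('model.ckpt-400.meta')
    saver.restore(sess, '/home/choss/test2/freeze2/model.ckpt-400')
    graph = tf.get_default_graph()
    inputs = {'input_ids': graph.get_tensor_by_name('input_ids:0')}
    outputs = {'softmax': graph.get_tensor_by_name('loss/Softmax:0')}
    tf.saved_model.simple_save(sess, './albert_saved_model', inputs, outputs)

# Step 2, in a TF >= 2.4.1 environment: convert the SavedModel directory.
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model('./albert_saved_model')
tflite_model = converter.convert()
with open('converted_model.tflite', 'wb') as f:
    f.write(tflite_model)

Note that the converter is pointed at the SavedModel directory, not at the .pb file inside it.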

Related

Cannot convert Tensorflow .pb frozen graph to tensorflow lite due to strange 'utf-8' codec error on Colab

I have an ONNX model that I converted to TensorFlow; that conversion went through without any problems, but now I want to convert this .pb file to TF Lite using the following code:
import tensorflow as tf

TF_PATH = "/content/tf_model/saved_model.pb"  # where the frozen graph is stored
TFLITE_PATH = "./model.tflite"

# Make a converter object from the saved TensorFlow file
converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
    TF_PATH,  # TensorFlow frozen graph .pb model file
    input_arrays=['input_ids'],  # name of input arrays as defined in the torch.onnx.export function before
    output_arrays=['logits'],    # name of output arrays as defined in the torch.onnx.export function before
)
converter.experimental_new_converter = True
converter.target_spec.supported_ops = [tf.compat.v1.lite.OpsSet.TFLITE_BUILTINS,
                                       tf.compat.v1.lite.OpsSet.SELECT_TF_OPS]
tf_lite_model = converter.convert()

# Save the model
with open(TFLITE_PATH, 'wb') as f:
    f.write(tf_lite_model)
But when I run this cell on Colab I get the error:
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa3 in position 3: invalid start byte
And it points to the line: converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph().
I can't seem to figure out what is causing this.
It looks like your frozen_graph is not actually a frozen graph but in the saved_model format. If I guess right, all you need is to change the conversion method: you are looking for converting from a SavedModel.
Assuming you are using TF2, it will be:
import tensorflow as tf

TF_PATH = "/content/tf_model/"  # where the saved_model is stored - the folder name, not the .pb file
TFLITE_PATH = "./model.tflite"

# Make a converter object from the SavedModel directory
converter = tf.lite.TFLiteConverter.from_saved_model(TF_PATH)
converter.experimental_new_converter = True
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS,
                                       tf.lite.OpsSet.SELECT_TF_OPS]
tf_lite_model = converter.convert()

# Save the model
open(TFLITE_PATH, "wb").write(tf_lite_model)
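If you are not sure which format a given .pb file actually contains, one heuristic (a sketch, not guaranteed for every proto) is to try parsing it as each message type and see which one succeeds:

import tensorflow as tf
from tensorflow.core.protobuf import saved_model_pb2

PB_PATH = "/content/tf_model/saved_model.pb"  # path to the .pb in question
data = open(PB_PATH, "rb").read()

# Try parsing as a plain frozen GraphDef
graph_def = tf.compat.v1.GraphDef()
try:
    graph_def.ParseFromString(data)
    print("Parses as a frozen GraphDef")
except Exception:
    print("Not a frozen GraphDef")

# Try parsing as a SavedModel proto
saved_model = saved_model_pb2.SavedModel()
try:
    saved_model.ParseFromString(data)
    print("Parses as a SavedModel proto")
except Exception:
    print("Not a SavedModel proto")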

How to load model back from ckpt, .meta, .index and .pb files for Mobilenet v3?

I have downloaded the checkpoints along with the model for Mobilenet v3. After extracting the rar file, I get two folders and two other files. The directory looks like the following:
Main Folder
    ema (folder)
        checkpoint
        model-x.data-00000-of-00001
        model-x.index
        model-x.meta
    pristine (folder)
        model.ckpt-y.data-00000-of-00001
        model.ckpt-y.index
        model.ckpt-y.meta
    .pb
    .tflite
I have tried many code snippets, a few of which are below.
import tensorflow as tf
from tensorflow.python.platform import gfile

model_path = "./weights/v3-large-minimalistic_224_1.0_uint8/model.ckpt-3868848"
detection_graph = tf.Graph()

with tf.Session(graph=detection_graph) as sess:
    # Load the graph with the trained states
    loader = tf.train.import_meta_graph(model_path + '.meta')
    loader.restore(sess, model_path)
The above code results in the following error:
Node {{node batch_processing/distort_image/switch_case/indexed_case}} of type Case has '_lower_using_switch_merge' attr set but it does not support lowering.
I tried the following code:
import tensorflow as tf
import sys
sys.path.insert(0, 'models/research/slim')
from nets.mobilenet import mobilenet_v3

tf.reset_default_graph()

file_input = tf.placeholder(tf.string, ())
image = tf.image.decode_jpeg(tf.read_file('test.jpg'))
images = tf.expand_dims(image, 0)
images = tf.cast(images, tf.float32) / 128. - 1
images.set_shape((None, None, None, 3))
images = tf.image.resize_images(images, (224, 224))

model = mobilenet_v3.wrapped_partial(mobilenet_v3.mobilenet,
                                     new_defaults={'scope': 'MobilenetEdgeTPU'},
                                     conv_defs=mobilenet_v3.V3_LARGE_MINIMALISTIC,
                                     depth_multiplier=1.0)

with tf.contrib.slim.arg_scope(mobilenet_v3.training_scope(is_training=False)):
    logits, endpoints = model(images)

ema = tf.train.ExponentialMovingAverage(0.999)
vars = ema.variables_to_restore()
print(vars)

with tf.Session() as sess:
    tf.train.Saver(vars).restore(sess, './weights/v3-large-minimalistic_224_1.0_uint8/saved_model.pb')
    tf.train.Saver().save(sess, './weights/v3-large-minimalistic_224_1.0_uint8/pristine/model.ckpt')
The above code generates the following error:
Unable to open table file ./weights/v3-large-minimalistic_224_1.0_uint8/saved_model.pb: Data loss: not an sstable (bad magic number): perhaps your file is in a different file format and you need to use a different restore operator?
[[node save/RestoreV2 (defined at <ipython-input-11-1531bbfd84bb>:29) ]]
How can I load the Mobilenet v3 model along with the checkpoints and use it for my data?
Try this:

with tf.contrib.slim.arg_scope(mobilenet_v3.training_scope(is_training=False)):
    logits, endpoints = mobilenet_v3.large_minimalistic(images)

instead of:

model = mobilenet_v3.wrapped_partial(mobilenet_v3.mobilenet,
                                     new_defaults={'scope': 'MobilenetEdgeTPU'},
                                     conv_defs=mobilenet_v3.V3_LARGE_MINIMALISTIC,
                                     depth_multiplier=1.0)

with tf.contrib.slim.arg_scope(mobilenet_v3.training_scope(is_training=False)):
    logits, endpoints = model(images)
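Separately, the "not an sstable (bad magic number)" error comes from passing a .pb file to Saver.restore, which expects a checkpoint prefix. A minimal sketch of the restore step, assuming the checkpoint prefix from the question's first snippet:

# Saver.restore expects the checkpoint *prefix* (the common stem of the
# .index / .data-00000-of-00001 files), not a .pb file.
ckpt_prefix = './weights/v3-large-minimalistic_224_1.0_uint8/model.ckpt-3868848'

with tf.Session() as sess:
    tf.train.Saver(vars).restore(sess, ckpt_prefix)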

Creating a Slim classifier using pretrained ResNet V2 model

I am trying to create an image classifier that utilizes the pre-trained ResNet V2 model provided in the slim documentation.
Here is the code so far:
import tensorflow as tf
slim = tf.contrib.slim
from PIL import Image
from inception_resnet_v2 import *
import numpy as np

checkpoint_file = 'inception_resnet_v2_2016_08_30.ckpt'
sample_images = ['carrot.jpg']

input_tensor = tf.placeholder(tf.float32, shape=(None, 299, 299, 3), name='input_image')
scaled_input_tensor = tf.scalar_mul((1.0 / 255), input_tensor)
scaled_input_tensor = tf.subtract(scaled_input_tensor, 0.5)
scaled_input_tensor = tf.multiply(scaled_input_tensor, 2.0)

variables_to_restore = slim.get_model_variables()
print(variables_to_restore)

init_fn = slim.assign_from_checkpoint_fn(
    checkpoint_file,
    slim.get_model_variables('InceptionResnetV2'))
sess = tf.Session()
init_fn(sess)

arg_scope = inception_resnet_v2_arg_scope()
with slim.arg_scope(arg_scope):
    logits, end_points = inception_resnet_v2(scaled_input_tensor, is_training=False)

for image in sample_images:
    im = Image.open(image).resize((299, 299))
    im = np.array(im)
    im = im.reshape(-1, 299, 299, 3)
    predict_values, logit_values = sess.run([end_points['Predictions'], logits], feed_dict={input_tensor: im})
    print(np.max(predict_values), np.max(logit_values))
    print(np.argmax(predict_values), np.argmax(logit_values))
The problem is I keep getting this error:
Traceback (most recent call last):
File "./classify.py", line 21, in <module>
slim.get_model_variables('InceptionResnetV2'))
File "/home/ubuntu/tensorflow/local/lib/python2.7/site-packages/tensorflow/contrib/framework/python/ops/variables.py", line 584, in assign_from_checkpoint_fn
saver = tf_saver.Saver(var_list, reshape=reshape_variables)
File "/home/ubuntu/tensorflow/local/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 1040, in __init__
self.build()
File "/home/ubuntu/tensorflow/local/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 1061, in build
raise ValueError("No variables to save")
ValueError: No variables to save
So it seems TF/Slim is unable to find any variables and this is made clear when I call:
variables_to_restore = slim.get_model_variables()
print(variables_to_restore)
It outputs an empty list.
How can I go about using the pre-trained model?
This happens because you haven't yet constructed the model in your graph, so there are no variables starting with the name "InceptionResnetV2" to be captured and restored by the saver.
I believe you should put the model construction before calling slim.get_model_variables().
For instance:
with slim.arg_scope(arg_scope):
    logits, end_points = inception_resnet_v2(scaled_input_tensor, is_training=False)
variables_to_restore = slim.get_model_variables()
This way, the tensor variables will be constructed, and you should see that variables_to_restore is no longer empty.
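Putting the pieces in the corrected order, a minimal sketch of the whole restore flow (same names and checkpoint file as in the question) might look like this:

# Build the model first, then collect its variables, then restore.
arg_scope = inception_resnet_v2_arg_scope()
with slim.arg_scope(arg_scope):
    logits, end_points = inception_resnet_v2(scaled_input_tensor, is_training=False)

# The graph now contains InceptionResnetV2 variables, so this list is non-empty
init_fn = slim.assign_from_checkpoint_fn(
    checkpoint_file,
    slim.get_model_variables('InceptionResnetV2'))

sess = tf.Session()
init_fn(sess)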
You need to manually add the model variables.
Try this:
with slim.arg_scope(arg_scope):
    logits, end_points = inception_resnet_v2(scaled_input_tensor, is_training=False)

# Add model variables
for var in tf.global_variables(scope='inception_resnet_v2'):
    slim.add_model_variable(var)

Where is manifest_pb2 inside session_bundle?

I am trying to export a previously trained model (.pb) to be used for serving, using the following snippet:
from tensorflow_serving.session_bundle import exporter

def create_graph():
    """Creates a graph from saved GraphDef file and returns a saver."""
    # Creates graph from saved graph_def.pb.
    with tf.gfile.FastGFile(modelFullPath, 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
        _ = tf.import_graph_def(graph_def, name='')

def export():
    print 'Exporting trained model to', export_path
    saver = tf.train.Saver(sharded=True)
    model_exporter = exporter.Exporter(saver)
    signature = exporter.classification_signature(input_tensor=x, scores_tensor=y)
    model_exporter.init(sess.graph.as_graph_def(), default_graph_signature=signature)
    model_exporter.export(export_path, tf.constant(FLAGS.export_version), sess)
    print 'Done exporting!'
However, the manifest_pb2 imported in exporter.py is not found.
Am I missing something fundamental in this approach?
manifest_pb2.py is generated from manifest.proto when the session_bundle is built using Bazel. On Mac, I installed TensorFlow using pip3, and the file is located at /usr/local/lib/python3.5/site-packages/tensorflow/contrib/session_bundle/manifest_pb2.py.
Follow the steps at https://www.tensorflow.org/versions/r0.10/get_started/os_setup.html for your platform and it should fix the problem.
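As a quick sanity check that your installation includes the generated file, you can try importing it directly from the path mentioned above and printing its location (a sketch, assuming a pip-installed TF 1.x-era build with tf.contrib):

# Verify that the generated protobuf module exists in your TF install
from tensorflow.contrib.session_bundle import manifest_pb2
print(manifest_pb2.__file__)

If the import fails, the package was built without the session_bundle protos and reinstalling per the setup guide should fix it.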

Incompatible GraphDef versions in Extend

I have some code which creates a graph to process some images and then iterates sess.run() in a loop to fetch batches of image tensors of shape [*, 299, 299, 3]. I'd like to then feed these images into the inception model.
So, I added some code to load the inception model:
def create_graph():
    """Creates a graph from saved GraphDef file and returns a saver."""
    # Creates graph from saved graph_def.pb.
    print 'Loading graph...'
    with tf.Session() as sess:
        with gfile.FastGFile('/web/tensorflow_transfer/resources/classify_image_graph_def.pb', 'rb') as f:
            graph_def = tf.GraphDef()
            graph_def.ParseFromString(f.read())
            _ = tf.import_graph_def(graph_def, name='')
        return sess.graph

g = create_graph()

for i in range(training_steps):
    sess.run(...)
Now I'm getting this error when running run():
File "/usr/local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 368, in run
results = self._do_run(target_list, unique_fetch_targets, feed_dict_string)
File "/usr/local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 420, in _do_run
raise RuntimeError(compat.as_text(tf_session.TF_Message(status)))
RuntimeError: Incompatible GraphDef versions in Extend: 1 != 0
This is most likely caused by using too old a version of TensorFlow to read in and run the graph; the graph was created using a newer GraphDef version. Try upgrading to 0.7 or to HEAD and then run your code again.
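One quick way to confirm the mismatch is to compare the runtime's version with the version recorded in the .pb file; a sketch using the path from the question:

import tensorflow as tf
from tensorflow.python.platform import gfile

print(tf.__version__)  # the runtime's TensorFlow version

with gfile.FastGFile('/web/tensorflow_transfer/resources/classify_image_graph_def.pb', 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

# Depending on the release, the producer version is recorded in
# graph_def.version (older releases) or graph_def.versions.producer (newer ones).
print(graph_def.version)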