OSError: SavedModel file does not exist, with Tensorflow Lite - tensorflow

I am trying to convert a Keras model found here to TFLite using the following snippet:
import tensorflow as tf
import os.path as path
pwd = path.dirname(__file__)
model_path = pwd+"/models/old-models/"
print("PWD: ", model_path)
converter = tf.lite.TFLiteConverter.from_saved_model(model_path)
tflite_model = converter.convert()
with open('model.tflite', 'wb') as f:
    f.write(tflite_model)
But I am getting the following error:
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "/<PyCharm installed directory>/ch-0/211.6693.23/plugins/python/helpers/pydev/_pydev_bundle/pydev_umd.py", line 197, in runfile
pydev_imports.execfile(filename, global_vars, local_vars) # execute the script
File "/<PyCharm installed directory>/ch-0/211.6693.23/plugins/python/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "/<Path to project>/myproject/convert_to_tflite.py", line 7, in <module>
converter = tf.lite.TFLiteConverter.from_saved_model(model_path)
File "/<path to miniconda>/miniconda3/envs/myproject/lib/python3.8/site-packages/tensorflow/lite/python/lite.py", line 1069, in from_saved_model
saved_model = _load(saved_model_dir, tags)
File "/<path to miniconda>/miniconda3/envs/myproject/lib/python3.8/site-packages/tensorflow/python/saved_model/load.py", line 859, in load
return load_internal(export_dir, tags, options)["root"]
File "/<path to miniconda>/miniconda3/envs/myproject/lib/python3.8/site-packages/tensorflow/python/saved_model/load.py", line 871, in load_internal
loader_impl.parse_saved_model_with_debug_info(export_dir))
File "/<path to miniconda>/miniconda3/envs/myproject/lib/python3.8/site-packages/tensorflow/python/saved_model/loader_impl.py", line 56, in parse_saved_model_with_debug_info
saved_model = _parse_saved_model(export_dir)
File "/<path to miniconda>/miniconda3/envs/myproject/lib/python3.8/site-packages/tensorflow/python/saved_model/loader_impl.py", line 111, in parse_saved_model
raise IOError("SavedModel file does not exist at: %s/{%s|%s}" %
OSError: SavedModel file does not exist at: /<path to my project>/models/old-models/{saved_model.pbtxt|saved_model.pb}
I renamed the model file extension from .mlmodel to .pb, since it seems the file extension needs to be .pb or .pbtxt. I am sure that the path to the model directory is correct.
I am using tensorflow~=2.4.1

According to https://apple.github.io/coremltools/coremlspecification, the .mlmodel file extension denotes the Core ML model format. Renaming the file extension does not convert the model from one format to the other.
If you would like to generate TensorFlow Lite models, please take a look at the pre-trained model section of the guide page, https://www.tensorflow.org/lite/guide/get_started#1_choose_a_model.
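For reference, a minimal sketch of the intended flow once a genuine TensorFlow SavedModel is available (the directory name below is hypothetical; a real SavedModel directory contains saved_model.pb plus a variables/ subdirectory, not a renamed .mlmodel):
import tensorflow as tf

# Hypothetical path to a directory produced by tf.saved_model.save or model.save
saved_model_dir = "models/saved_model"
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
tflite_model = converter.convert()
with open("model.tflite", "wb") as f:
    f.write(tflite_model)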

Related

How can I make an inference graph file using models/research/object_detection/export_inference_graph.py?

python : 3.9.13
tensorflow : 2.9.1
I am making a custom dataset with the TensorFlow Object Detection API.
The saved_model.pb file was generated by training with FastRCNN.
I took this file and applied it to Nuclio (a serverless function framework), but it failed.
It seems to require an inference graph file.
I found the export utility Python file export_inference_graph.py in the models/research/object_detection directory,
but this file is not working.
This is the error message:
Traceback (most recent call last):
File "export_inference_graph.py", line 211, in <module>
tf.app.run()
File "/home/namu/.local/lib/python3.8/site-packages/tensorflow/python/platform/app.py", line 36, in run
_run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
File "/home/namu/.local/lib/python3.8/site-packages/absl/app.py", line 308, in run
_run_main(main, args)
File "/home/namu/.local/lib/python3.8/site-packages/absl/app.py", line 254, in _run_main
sys.exit(main(argv))
File "export_inference_graph.py", line 199, in main
exporter.export_inference_graph(
File "/home/namu/myspace/data/models/research/object_detection/exporter.py", line 618, in export_inference_graph
_export_inference_graph(
File "/home/namu/myspace/data/models/research/object_detection/exporter.py", line 521, in _export_inference_graph
profile_inference_graph(tf.get_default_graph())
File "/home/namu/myspace/data/models/research/object_detection/exporter.py", line 649, in profile_inference_graph
contrib_tfprof.model_analyzer.TRAINABLE_VARS_PARAMS_STAT_OPTIONS)
NameError: name 'contrib_tfprof' is not defined
I learned from Google that this does not work on TensorFlow 2.x:
https://medium.com/@sebastingarcaacosta/how-to-export-a-tensorflow-2-x-keras-model-to-a-frozen-and-optimized-graph-39740846d9eb
I am working on it by referring to the above site.
But when I execute the following code:
import tensorflow as tf
from tensorflow import keras
from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2
import numpy as np
# path of the directory where you want to save your model
frozen_out_path = "/home/namu/myspace/data/models/export_graph"
# name of the .pb file
frozen_model = "frozen_graph"
model = tf.keras.models.load_model('/home/namu/myspace/data/models/train_pb/saved_model')  # load the SavedModel as a Keras model
# model = tf.saved_model.load('/home/namu/myspace/data/models/train_pb/saved_model')
full_model = tf.function(lambda x: model(x))
full_model = full_model.get_concrete_function(tf.TensorSpec(model.inputs[0].shape, model.inputs[0].dtype))
this error occurs:
ValueError: Unable to create a Keras model from SavedModel at /home/namu/myspace/data/models/train_pb/saved_model. This SavedModel was exported with `tf.saved_model.save`, and lacks the Keras metadata file. Please save your Keras model by calling `model.save` or `tf.keras.models.save_model`. Note that you can still load this SavedModel with `tf.saved_model.load`.
How can I create an inference graph .pb file?
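Not from the original thread, but a minimal sketch of one route the error message itself suggests: load the SavedModel with tf.saved_model.load, take its serving signature, and freeze that concrete function. Paths are reused from above; object-detection models with control flow may need lower_control_flow=False or may not freeze cleanly.
import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2

# Load the non-Keras SavedModel directly, as the error message recommends
loaded = tf.saved_model.load("/home/namu/myspace/data/models/train_pb/saved_model")
infer = loaded.signatures["serving_default"]
# Inline the variables as constants, producing a frozen graph function
frozen_func = convert_variables_to_constants_v2(infer)
tf.io.write_graph(frozen_func.graph.as_graph_def(),
                  logdir="/home/namu/myspace/data/models/export_graph",
                  name="frozen_graph.pb", as_text=False)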

tensorflow Cannot convert a Tensor of dtype resource to a NumPy array. Tflite

I have trained a network (git link below) and saved it in the SavedModel format, and I would like to convert it to TFLite. I am using the Python API for the TFLite converter (tflite.py in the git link below), but I am not able to do so.
System information:
OS Platform and Distribution: Ubuntu 18.04.3 LTS
TensorFlow version: tensorflow/tensorflow:2.2.0-gpu (docker)
The link to the network and saved-model code:
The output from the converter invocation:
File "tflite.py", line 22, in convert_model
tflite_model = converter.convert()
File "/usr/local/lib/python3.6/dist-packages/tensorflow/lite/python/lite.py", line 459, in convert
self._funcs[0], lower_control_flow=False))
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/convert_to_constants.py", line 706, in convert_variables_to_constants_v2_as_graph
func, lower_control_flow, aggressive_inlining)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/convert_to_constants.py", line 457, in _convert_variables_to_constants_v2_impl
tensor_data = _get_tensor_data(func)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/convert_to_constants.py", line 217, in _get_tensor_data
data = val_tensor.numpy()
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py", line 961, in numpy
maybe_arr = self._numpy() # pylint: disable=protected-access
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py", line 929, in _numpy
six.raise_from(core._status_to_exception(e.code, e.message), None)
File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.InvalidArgumentError: Cannot convert a Tensor of dtype resource to a NumPy array.
The link to the saved model
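One commonly suggested workaround for this error on TF 2.2, offered here as an untested sketch (the path below is hypothetical): enable the newer MLIR-based converter, which bypasses the convert_variables_to_constants_v2 path visible in the traceback.
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("path/to/saved_model")  # hypothetical path
converter.experimental_new_converter = True  # MLIR converter; the default from TF 2.3 onward
tflite_model = converter.convert()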

Tensorflow 2.0 freeze_graph failing on tf.keras saved model

I'm converting a trained tf.keras model into a TF frozen model to use with the C++ API. I'm running into an error while freezing the model on TF 2.0.
model_path = '/home/Desktop/model.hdf5'
model = tf.keras.models.load_model(model_path)
tf.keras.experimental.export_savedmodel(model,newdir)
After this, a variables folder with the files [checkpoint, variables.data-00000-of-00001, variables.index], a saved_model.pb, and an assets folder are created in newdir.
I am trying to use the saved_model.pb and variables.data-00000-of-00001 files to get a single .pb frozen graph:
python /home/tensorflow/python/tools/freeze_graph.py --input_graph=/home/Desktop/tf_models/saved_model.pb --input_checkpoint=/home/Desktop/tf_models/variables/variables.data-00000-of-00001 --output_graph=/home/Desktop/tf_models/frozen_graph.pb --output_node_names=classes --input_binary=true
I expected a single frozen .pb file but instead ran into an error like the one below:
Traceback (most recent call last):
  File "/home/vsrira10/anaconda2/envs/tf2/lib/python3.6/site-packages/tensorflow/python/tools/freeze_graph.py", line 492, in <module>
    run_main()
  File "/home/vsrira10/anaconda2/envs/tf2/lib/python3.6/site-packages/tensorflow/python/tools/freeze_graph.py", line 489, in run_main
    app.run(main=my_main, argv=[sys.argv[0]] + unparsed)
  File "/home/vsrira10/anaconda2/envs/tf2/lib/python3.6/site-packages/tensorflow/python/platform/app.py", line 40, in run
    _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
  File "/home/vsrira10/anaconda2/envs/tf2/lib/python3.6/site-packages/absl/app.py", line 300, in run
    _run_main(main, args)
  File "/home/vsrira10/anaconda2/envs/tf2/lib/python3.6/site-packages/absl/app.py", line 251, in _run_main
    sys.exit(main(argv))
  File "/home/vsrira10/anaconda2/envs/tf2/lib/python3.6/site-packages/tensorflow/python/tools/freeze_graph.py", line 488, in <lambda>
    my_main = lambda unused_args: main(unused_args, flags)
  File "/home/vsrira10/anaconda2/envs/tf2/lib/python3.6/site-packages/tensorflow/python/tools/freeze_graph.py", line 382, in main
    flags.saved_model_tags, checkpoint_version)
  File "/home/vsrira10/anaconda2/envs/tf2/lib/python3.6/site-packages/tensorflow/python/tools/freeze_graph.py", line 341, in freeze_graph
    input_graph_def = _parse_input_graph_proto(input_graph, input_binary)
  File "/home/vsrira10/anaconda2/envs/tf2/lib/python3.6/site-packages/tensorflow/python/tools/freeze_graph.py", line 252, in _parse_input_graph_proto
    input_graph_def.ParseFromString(f.read())
google.protobuf.message.DecodeError: Error parsing message
I'm open to suggestions other than using the freeze_graph script. Thanks.
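One observation: saved_model.pb is a SavedModel protobuf, not the plain GraphDef that --input_graph expects, which would explain the DecodeError. As a hedged alternative to the freeze_graph script, here is a TF 2.0 sketch that freezes the Keras model directly, reusing the model.hdf5 path above:
import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2

model = tf.keras.models.load_model('/home/Desktop/model.hdf5')
# Wrap the model in a concrete function, then inline the variables as constants
full_model = tf.function(lambda x: model(x))
concrete_func = full_model.get_concrete_function(
    tf.TensorSpec(model.inputs[0].shape, model.inputs[0].dtype))
frozen_func = convert_variables_to_constants_v2(concrete_func)
tf.io.write_graph(frozen_func.graph.as_graph_def(),
                  logdir='/home/Desktop/tf_models',
                  name='frozen_graph.pb', as_text=False)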

How to use a string placeholder for the model_dir in tf.contrib.factorization.KMeansClustering?

I'm using TF version 1.12 with conda and Python 3.
My question concerns the model_dir value of tf.contrib.factorization.KMeansClustering: how can I use a string placeholder for the model_dir value?
Here is the context: I have pretrained KMeans in different situations; the checkpoints are in different model_dirs.
I want to use the predictions of these pretrained models inside a graph, depending on the situation, so I need to put into this graph a KMeansClustering that can accept different model_dirs.
In the graph I defined:
ckpt_ph = tf.placeholder(tf.string)
...
kmeans = KMeansClustering(5, model_dir=ckpt_ph, distance_metric='cosine')
def input_fn():
    return tf.train.limit_epochs(tf.convert_to_tensor(x, dtype=tf.float32), num_epochs=1)
centers_idx = list(kmeans.predict(input_fn, predict_keys='cluster_index', checkpoint_path=ckpt_ph, yield_single_examples=False))[0]['cluster_index']
centers_val = kmeans.cluster_centers()
...
And I run it with:
...
for ind in range(nb_cases):
    ...
    sess.run([...], feed_dict={..., ckpt_ph: km_ckpt[ind]})
    ...
Here km_ckpt is the list of pretrained KMeansClustering checkpoint paths that I want to use for each situation.
The error that I get is:
Traceback (most recent call last):
File "main.py", line 28, in <module>
tf.app.run()
File "C:\Users\Denis\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\platform\app.py", line 125, in run
_sys.exit(main(argv))
File "main.py", line 23, in main
launch_training()
File "main.py", line 14, in launch_training
train_mnist.train_model()
File "C:\Users\Denis\ML\ScatteringReconstruction\src\model\train_mnist.py", line 355, in train_model
X_r = SR(X_tensor)
File "C:\Users\Denis\ML\ScatteringReconstruction\src\model\train_mnist.py", line 316, in __call__
kmeans = KMeansClustering(FLAGS.km_k, model_dir=ckpt_ph, distance_metric='cosine')
File "C:\Users\Denis\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\contrib\factorization\python\ops\kmeans.py", line 423, in __init__
config=config)
File "C:\Users\Denis\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\estimator\estimator.py", line 189, in __init__
model_dir)
File "C:\Users\Denis\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\estimator\estimator.py", line 1665, in maybe_overwrite_model_dir_and_session_config
if model_dir:
File "C:\Users\Denis\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\framework\ops.py", line 671, in __bool__
raise TypeError("Using a `tf.Tensor` as a Python `bool` is not allowed. "
TypeError: Using a `tf.Tensor` as a Python `bool` is not allowed. Use `if t is not None:` instead of `if t:` to test if a tensor is defined, and use TensorFlow ops such as tf.cond to execute subgraphs conditioned on the value of a tensor.
I think that the problem is that in KMeansClustering and KMeansClustering.predict, model_dir expects a Python bool or string and I'm giving it a Tensor, but then I don't see how to use a pretrained KMeansClustering inside a graph.
Thanks in advance for the help!
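Not from the thread, but a sketch of the usual workaround: an Estimator's model_dir is consumed as a plain Python string when the estimator is constructed, before any graph runs, so it cannot come from a placeholder. Instead, one can build a KMeansClustering per checkpoint in an ordinary Python loop (reusing km_ckpt and input_fn from above):
from tensorflow.contrib.factorization import KMeansClustering

for ckpt_dir in km_ckpt:
    # model_dir must be an ordinary Python string, chosen in Python code,
    # not fed through a tf.placeholder
    kmeans = KMeansClustering(5, model_dir=ckpt_dir, distance_metric='cosine')
    result = list(kmeans.predict(input_fn, predict_keys='cluster_index',
                                 yield_single_examples=False))
    centers_idx = result[0]['cluster_index']
    centers_val = kmeans.cluster_centers()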

tensorflow slim inception_v3 model error

I am trying to use the TensorFlow slim inception_v3 model for a transfer learning project. I get the following error on building the model:
TypeError: Expected int32, got list containing Tensors of type '_Message' instead.
The same error does not arise when using the same script for the inception_v1 model.
The models are imported from slim.nets.
Running on CPU
Tensorflow version : 0.12.1
Script
import tensorflow as tf
slim = tf.contrib.slim
import models.inception_v3 as inception_v3
print("initializing model")
inputs = tf.placeholder(tf.float32, shape=[32, 299, 299, 3])
with slim.arg_scope(inception_v3.inception_v3_arg_scope()):
    logits, endpoints = inception_v3.inception_v3(inputs, num_classes=1001, is_training=False)
trainable_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES)
for tvars in trainable_vars:
    print tvars.name
Full Error Message
Traceback (most recent call last):
File "test.py", line 8, in <module>
logits,endpoints = inception_v3.inception_v3(inputs, num_classes=1001, is_training=False)
File "/home/ashish/projects/python/fashion-language/models/inception_v3.py", line 576, in inception_v3
depth_multiplier=depth_multiplier)
File "/home/ashish/projects/python/fashion-language/models/inception_v3.py", line 181, in inception_v3_base
net = array_ops.concat([branch_0, branch_1, branch_2, branch_3], 3)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/array_ops.py", line 1075, in concat
dtype=dtypes.int32).get_shape(
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 669, in convert_to_tensor
ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/constant_op.py", line 176, in _constant_tensor_conversion_function
return constant(v, dtype=dtype, name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/constant_op.py", line 165, in constant
tensor_util.make_tensor_proto(value, dtype=dtype, shape=shape, verify_shape=verify_shape))
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/tensor_util.py", line 367, in make_tensor_proto
_AssertCompatible(values, dtype)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/tensor_util.py", line 302, in _AssertCompatible
(dtype.name, repr(mismatch), type(mismatch).__name__))
TypeError: Expected int32, got list containing Tensors of type '_Message' instead.
Found my mistake: I was importing the model from https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/slim/python/slim/nets, whereas the updated models are at https://github.com/tensorflow/models/tree/master/slim/nets.
I still haven't understood why there are two different repositories for the same classes. There must be a valid reason.
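For what it's worth, the traceback itself points at a likely cause: tf.concat swapped its argument order in TF 1.0, and the failing line passes the tensor list first, which TF 0.12 tries to interpret as the int32 concat_dim, hence the "_Message" type error. A hedged illustration of the two signatures:
# TF 0.12.x signature: tf.concat(concat_dim, values)
net = tf.concat(3, [branch_0, branch_1, branch_2, branch_3])
# TF 1.0+ signature: tf.concat(values, axis)
net = tf.concat([branch_0, branch_1, branch_2, branch_3], 3)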