SavedModel file does not exist at saved_model/{saved_model.pbtxt|saved_model.pb} - tensorflow

I'm trying to run the TensorFlow Object Detection API on TensorFlow 2 and I get the error below. Does anyone have a solution?
The code :
Loader
def load_model(model_name):
    base_url = 'http://download.tensorflow.org/models/object_detection/'
    model_file = model_name + '.tar.gz'
    model_dir = tf.keras.utils.get_file(
        fname=model_name,
        origin=base_url + model_file,
        untar=True)

    model_dir = pathlib.Path(model_dir) / "saved_model"

    model = tf.saved_model.load(str(model_dir))
    model = model.signatures['serving_default']

    return model
Loading label map
Label maps map indices to category names, so that when our convolutional network predicts 5, we know that this corresponds to airplane. Here we use internal utility functions, but anything that returns a dictionary mapping integers to appropriate string labels would be fine.
# List of the strings that is used to add correct label for each box.
PATH_TO_LABELS = 'data/mscoco_label_map.pbtxt'
category_index = label_map_util.create_category_index_from_labelmap(PATH_TO_LABELS, use_display_name=True)
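For context, any hand-built mapping with the same shape as the returned category_index would work just as well; a stand-in with hypothetical entries looks like this:
# Hypothetical stand-in for create_category_index_from_labelmap's output:
# a dict mapping integer class ids to label info.
category_index = {
    1: {'id': 1, 'name': 'person'},
    5: {'id': 5, 'name': 'airplane'},
}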
For the sake of simplicity we will test on 2 images:
# If you want to test the code with your images, just add path to the images to the TEST_IMAGE_PATHS.
PATH_TO_TEST_IMAGES_DIR = pathlib.Path('test_images')
TEST_IMAGE_PATHS = sorted(list(PATH_TO_TEST_IMAGES_DIR.glob("*.jpg")))
TEST_IMAGE_PATHS
Detection
Load an object detection model:
model_name = 'ssd_mobilenet_v1_coco_11_06_2017'
detection_model = load_model(model_name)
and I got this error:
OSError Traceback (most recent call last)
<ipython-input-7-e89d9e690495> in <module>
1 model_name = 'ssd_mobilenet_v1_coco_11_06_2017'
----> 2 detection_model = load_model(model_name)
<ipython-input-4-f8a3c92a04a4> in load_model(model_name)
9 model_dir = pathlib.Path(model_dir)/"saved_model"
10
---> 11 model = tf.saved_model.load(str(model_dir))
12 model = model.signatures['serving_default']
13
D:\Anaconda\lib\site-packages\tensorflow_core\python\saved_model\load.py in load(export_dir, tags)
515 ValueError: If `tags` don't match a MetaGraph in the SavedModel.
516 """
--> 517 return load_internal(export_dir, tags)
518
519
D:\Anaconda\lib\site-packages\tensorflow_core\python\saved_model\load.py in load_internal(export_dir, tags, loader_cls)
524 # sequences for nest.flatten, so we put those through as-is.
525 tags = nest.flatten(tags)
--> 526 saved_model_proto = loader_impl.parse_saved_model(export_dir)
527 if (len(saved_model_proto.meta_graphs) == 1
528 and saved_model_proto.meta_graphs[0].HasField("object_graph_def")):
D:\Anaconda\lib\site-packages\tensorflow_core\python\saved_model\loader_impl.py in parse_saved_model(export_dir)
81 (export_dir,
82 constants.SAVED_MODEL_FILENAME_PBTXT,
---> 83 constants.SAVED_MODEL_FILENAME_PB))
84
85
OSError: SavedModel file does not exist at: C:\Users\Asus\.keras\datasets\ssd_mobilenet_v1_coco_11_06_2017\saved_model/{saved_model.pbtxt|saved_model.pb}

I assume that you are running the detection_model_zoo tutorial here. Changing the model name from ssd_mobilenet_v1_coco_11_06_2017 to ssd_mobilenet_v1_coco_2017_11_17 solved the problem in my test: the older archive does not contain a saved_model directory at all, which is exactly why the loader cannot find saved_model.pb.
The contents of the two archives can be seen below:
# ssd_mobilenet_v1_coco_11_06_2017
frozen_inference_graph.pb model.ckpt.data-00000-of-00001 model.ckpt.meta
graph.pbtxt model.ckpt.index
# ssd_mobilenet_v1_coco_2017_11_17
checkpoint model.ckpt.data-00000-of-00001 model.ckpt.meta
frozen_inference_graph.pb model.ckpt.index saved_model
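With the newer archive, the question's load_model helper works unchanged:
model_name = 'ssd_mobilenet_v1_coco_2017_11_17'
detection_model = load_model(model_name)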
Reference:
Where to find tensorflow pretrained models (list or download link)
detection_model_zoo
Using the SavedModel format (official guide)

Do not point the path all the way to the model file. Use the path to the folder containing the model.
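A minimal illustration (the paths here are hypothetical):
# Wrong: the path points at the protobuf file itself
model = tf.saved_model.load('models/my_model/saved_model/saved_model.pb')
# Right: the path points at the folder that contains saved_model.pb
model = tf.saved_model.load('models/my_model/saved_model')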

In my case, this code worked for me. I gave the path of the folder containing my .pb file, which was created by the model checkpoint module:
import tensorflow as tf

if __name__ == '__main__':
    # Update the path for your Keras model. The folder contains:
    # assets/ (folder), variables/ (folder), keras_metadata.pb, saved_model.pb
    input_keras_model = 'my path/weights/my_trained_model'
    model = tf.keras.models.load_model(input_keras_model)

I was getting exactly this error when trying to use the saved_model.pb file.
I had gotten the .pb file along with a pre-trained model by following a tutorial.
It was happening for the following reasons:
first, your existing saved_model.pb file might be corrupt
second, as the user @Mark Silla has mentioned, you may be giving the wrong path to the file; give the path of the folder containing the .pb file, excluding the file name
third, it might be due to TensorFlow versioning issues
I had to follow all of the above steps and upgrade TensorFlow from v2.3 to a newer release, and it finally created a new saved_model.pb which was not corrupt, and I could run it.

Related

dataset from tf.data.Dataset.save(load) for GCS + TPU got NotFoundError Could not find metadata file. [Op:MakeIterator]

I tried this both on TF 2.8 and Nightly (2.10.0-dev20220616) on Colab. In 2.8, I used the experimental version of this API. I am trying this in the context of training a Hugging Face BERT model. The dataset has gone through several transformations to arrive at a proper tf.data dataset.
To repro:
In a session without TPU, source/construct the dataset.
Save it with tf.data.Dataset.save(train_set, 'gs://ai-tests/data/train_set'). I tried to read it back with load and was able to iterate through it (it seems to work here).
In a separate TPU session, I tried to get train_set = tf.data.Dataset.load('gs://ai-tests/data/train_set').
Iterating then raises the error:
for x in train_set:
    break
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/ops.py
in raise_from_not_ok_status(e, name)
7207 def raise_from_not_ok_status(e, name):
7208 e.message += (" name: " + name if name is not None else "")
-> 7209 raise core._status_to_exception(e) from None # pylint: disable=protected-access
7210
7211
NotFoundError: Could not find metadata file. [Op:MakeIterator]

Convert tensor_forest model to tflite model by TFLiteConverter?

How can I use tf.lite.TFLiteConverter to get a tflite model from a tensor_forest model, so that it can be used on an Android smartphone?
I found tensor_forest in the contrib directory (link), and there is an example using this model here (link).
I added the following code to save the model:
saver = tf.train.Saver(save_relative_paths=True, max_to_keep=10)
checkpoint_prefix = 'checkpoints/model'
for i in range(1, num_steps + 1):
    # (skip)
    if i % 50 == 0 or i == 1:
        saver.save(sess, checkpoint_prefix, global_step=i)
    # (skip)
It generated 'checkpoint', 'model-2000.data-00000-of-00001', 'model-2000.index', and 'model-2000.meta' in the checkpoints directory. But I cannot convert it by following the documented use of tf.lite.TFLiteConverter, because it seems to need a .pb file, or I am missing some parameters.
How can I generate the tflite in the correct way?
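(For reference, one possible direction, offered only as a hedged sketch: in TF 1.x, tf.lite.TFLiteConverter.from_session can convert directly from the live session without a .pb file. The input_tensor and output_tensor names below are hypothetical placeholders for the actual tensors of the tensor_forest graph:)
# Sketch only: convert from the live session (TF 1.x API).
# input_tensor / output_tensor are hypothetical; substitute the real
# input placeholder and prediction tensor of the tensor_forest graph.
converter = tf.lite.TFLiteConverter.from_session(
    sess, [input_tensor], [output_tensor])
tflite_model = converter.convert()
with open('model.tflite', 'wb') as f:
    f.write(tflite_model)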

How can I load a saved model from object detection for inference?

I'm pretty new to Tensorflow and have been running experiments with SSDs using the Tensorflow Object Detection API. I can successfully train a model, but by default it only saves the last n checkpoints. I'd like to instead keep the n checkpoints with the lowest loss (I'm assuming that's the best metric to use).
I found tf.estimator.BestExporter, and it exports a saved_model.pb along with variables. However, I have yet to figure out how to load that saved model and run inference on it. After running models/research/object_detection/export_inference_graph.py on the checkpoint, I can easily load a checkpoint and run inference on it using the object detection jupyter notebook: https://github.com/tensorflow/models/blob/master/research/object_detection/object_detection_tutorial.ipynb
I've found documentation on loading saved models, and can load a graph like this:
with tf.Session(graph=tf.Graph()) as sess:
    tags = [tag_constants.SERVING]
    meta_graph = tf.saved_model.loader.load(sess, tags, PATH_TO_SAVED_MODEL)
    detection_graph = tf.get_default_graph()
However, when I use that graph with the above jupyter notebook, I get errors:
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-17-9e48f0d04df2> in <module>
7 image_np_expanded = np.expand_dims(image_np, axis=0)
8 # Actual detection.
----> 9 output_dict = run_inference_for_single_image(image_np, detection_graph)
10 # Visualization of the results of a detection.
11 vis_util.visualize_boxes_and_labels_on_image_array(
<ipython-input-16-0df86999596e> in run_inference_for_single_image(image, graph)
31 detection_masks_reframed, 0)
32
---> 33 image_tensor = tf.get_default_graph().get_tensor_by_name('image_tensor:0')
34 # image_tensor = tf.get_default_graph().get_tensor_by_name('serialized_example')
35
~/anaconda3/envs/sb/lib/python3.6/site-packages/tensorflow/python/framework/ops.py in get_tensor_by_name(self, name)
3664 raise TypeError("Tensor names are strings (or similar), not %s." %
3665 type(name).__name__)
-> 3666 return self.as_graph_element(name, allow_tensor=True, allow_operation=False)
3667
3668 def _get_tensor_by_tf_output(self, tf_output):
~/anaconda3/envs/sb/lib/python3.6/site-packages/tensorflow/python/framework/ops.py in as_graph_element(self, obj, allow_tensor, allow_operation)
3488
3489 with self._lock:
-> 3490 return self._as_graph_element_locked(obj, allow_tensor, allow_operation)
3491
3492 def _as_graph_element_locked(self, obj, allow_tensor, allow_operation):
~/anaconda3/envs/sb/lib/python3.6/site-packages/tensorflow/python/framework/ops.py in _as_graph_element_locked(self, obj, allow_tensor, allow_operation)
3530 raise KeyError("The name %s refers to a Tensor which does not "
3531 "exist. The operation, %s, does not exist in the "
-> 3532 "graph." % (repr(name), repr(op_name)))
3533 try:
3534 return op.outputs[out_n]
KeyError: "The name 'image_tensor:0' refers to a Tensor which does not exist. The operation, 'image_tensor', does not exist in the graph."
Is there a better way to load the saved model or convert it to an inference graph?
Thanks!
The Tensorflow detection API supports different input formats during exporting, as described in the documentation of the file export_inference_graph.py:
image_tensor: Accepts a uint8 4-D tensor of shape [None, None, None, 3]
encoded_image_string_tensor: Accepts a 1-D string tensor of shape [None]
containing encoded PNG or JPEG images. Image resolutions are expected to be
the same if more than 1 image is provided.
tf_example: Accepts a 1-D string tensor of shape [None] containing
serialized TFExample protos. Image resolutions are expected to be the same
if more than 1 image is provided.
So you should check that you used the image_tensor input_type. The chosen input node will be named "inputs" in the exported model, so I suppose that replacing image_tensor:0 with inputs (or maybe inputs:0) will solve your problem.
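In the question's notebook, that would mean changing the tensor lookup like this (assuming the model was exported with input_type image_tensor):
# The exported model names its input node "inputs" rather than "image_tensor"
image_tensor = tf.get_default_graph().get_tensor_by_name('inputs:0')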
Also, I would like to recommend a useful tool to run exported models with a few lines of code: tf.contrib.predictor.from_saved_model. Here is an example of how to use it:
import numpy as np
import tensorflow as tf
import cv2

img = cv2.imread("test.jpg")
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
img_rgb = np.expand_dims(img, 0)
predict_fn = tf.contrib.predictor.from_saved_model("./saved_model")
output_data = predict_fn({"inputs": img_rgb})
print(output_data)  # detector output dictionary

How can I use the Tensorflow .pb file?

I have a Tensorflow file, AlexNet.pb. I am trying to load it and then classify an image that I have, but I can't find a way to do it.
No one seems to have a simple example of loading and running a .pb file.
It depends on how the protobuf file has been created.
If the .pb file is the result of:
# Create a builder to export the model
builder = tf.saved_model.builder.SavedModelBuilder("export")
# Tag the model in order to be capable of restoring it specifying the tag set
builder.add_meta_graph_and_variables(sess, ["tag"])
builder.save()
You have to know how that model has been tagged and use the tf.saved_model.loader.load method to load the saved graph into the current, empty, graph.
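For example, to restore the model exported above (export directory "export", tag set ["tag"]):
with tf.Session(graph=tf.Graph()) as sess:
    # Load the tagged MetaGraph and its variables into the fresh graph
    tf.saved_model.loader.load(sess, ["tag"], "export")
    graph = tf.get_default_graph()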
If the model instead has been frozen, you have to load the binary file in memory manually:
with tf.gfile.GFile(frozen_graph_filename, "rb") as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())
graph = tf.get_default_graph()
tf.import_graph_def(graph_def, name="prefix")
In both cases, you have to know the name of the input tensor and the name of the node you want to execute:
If, for example, your input tensor is a placeholder named batch and the node you want to execute is named dense/BiasAdd, you have to
batch = graph.get_tensor_by_name('batch:0')
prediction = graph.get_tensor_by_name('dense/BiasAdd:0')
values = sess.run(prediction, feed_dict={
    batch: your_input_batch,
})
(If you imported a frozen graph with name="prefix" as above, the tensor names become 'prefix/batch:0' and 'prefix/dense/BiasAdd:0'.)
You can use OpenCV to load .pb models, e.g.:
net = cv2.dnn.readNet("model.pb")
Make sure you are using a suitable version: OpenCV 3.4.2 or OpenCV 4.

Tensorflow checkpoint models getting deleted

I am using TensorFlow checkpointing after every 10 epochs using the following code:
checkpoint_dir = os.path.abspath(os.path.join(out_dir, "checkpoints"))
checkpoint_prefix = os.path.join(checkpoint_dir, "model")
...
if current_step % checkpoint_every == 0:
    path = saver.save(sess, checkpoint_prefix, global_step=current_step)
    print("Saved model checkpoint to {}\n".format(path))
The problem is that, as new checkpoint files are generated, older ones are deleted automatically, so only the 5 most recent remain.
This is the expected behavior; the docs for tf.train.Saver say that by default the 5 most recent checkpoint files are kept. To adjust that, set max_to_keep to the desired value.
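For example, to keep the ten most recent checkpoints:
# Keep the 10 most recent checkpoints instead of the default 5
saver = tf.train.Saver(max_to_keep=10)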