I want to use MMdnn to convert a tensorflow ResNet model to other frameworks. It seems that I can only use mmconvert to read from a .pb frozen graph file.
However, when using tf.estimator.Estimator, the .pb file that it creates is a SavedModel. I understand this to be a wrapper around the tf GraphDef, so the GraphDef .pb file can be extracted from the SavedModel using freeze_graph.py.
From there, I will need the name of the input node in the tf GraphDef, but I'm unsure how to identify it by looking at the .pbtxt. According to the framework, a tf.estimator.Estimator takes its input from a tf.data.Dataset object.
I'm guessing there should be a tf.placeholder somewhere that accepts the input, but I'm not sure how to find which node it actually is.
Answering my own question here. The freeze_graph utility that comes with TensorFlow is useful for extracting the GraphDef from the tf SavedModel format.
To find the name of the input node, make sure to save the tf SavedModel in pbtxt format. Open it up and look for the first node of your compute graph; e.g., if using tf resnet, the first nodes will be named resnet_model/*. Find the node that feeds this node, and you will have the name of the input node to pass to the MMdnn tools. I expected this to be a tf.placeholder that the Estimator adds for inputs; the node was just named Placeholder, so that's what I specified as the input node.
First, extract the compute graph:
freeze_graph --input_saved_model_dir <path/to/saved_model_dir> --output_node_names softmax --output_graph ./graph_def.pb
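To double-check which node is the input before converting, you can also list the nodes of the frozen graph programmatically. A minimal sketch, assuming TF 1.x and the graph_def.pb produced above:

import tensorflow as tf

# Load the frozen GraphDef and print each node's name and op type;
# the input will show up as a node with op "Placeholder".
graph_def = tf.GraphDef()
with tf.gfile.GFile("./graph_def.pb", "rb") as f:
    graph_def.ParseFromString(f.read())
for node in graph_def.node:
    print(node.name, node.op)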
Then use MMdnn to convert it to Caffe:
mmconvert -sf tensorflow -iw ./graph_def.pb --inNodeName Placeholder --inputShape 224,224,3 --dstNodeName softmax -df caffe -om tf_resnet
Related
I'm having trouble trying to list the operations of a TFLite model. I know operations can be listed given a frozen graph, but can they also be listed for a TFLite .tflite model?
You can get a list of all the TensorFlow Lite operations used in your model with the visualization script:
wget -O tflite_visualize.py https://raw.githubusercontent.com/tensorflow/tensorflow/master/tensorflow/lite/tools/visualize.py
Assuming your model is saved as model.tflite, create the HTML file using the downloaded script:
python tflite_visualize.py model.tflite model_visualization.html
The operations are listed right in the section labeled Ops.
As mentioned in the TensorFlow Lite docs, you need to use a tf.lite.Interpreter to parse a .tflite model.
# Load TFLite model and allocate tensors.
interpreter = tf.lite.Interpreter(model_path="converted_model.tflite")
interpreter.allocate_tensors()
Then use the get_tensor_details method to get the list of Tensors.
interpreter.get_tensor_details()
As per the docs,
Gets tensor details for every tensor with valid tensor details.
Tensors where required information about the tensor is not found are not added to the list. This includes temporary tensors without a name.
Returns: A list of dictionaries containing tensor information.
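For example, to dump the name, shape, and dtype of every tensor, continuing from the interpreter created above:

# Each entry of get_tensor_details() is a dict with keys such as
# 'name', 'index', 'shape', 'dtype', and 'quantization'.
for detail in interpreter.get_tensor_details():
    print(detail["name"], detail["shape"], detail["dtype"])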
I'm trying to create a .tflite model from a CycleGAN taken from GitHub (https://github.com/vanhuyz/CycleGAN-TensorFlow).
I am very new to this field and I do not understand how to convert the .pb model (which I have already created from the checkpoints) into a .tflite model.
I tried tflite_convert, but without any result, partly because I don't know what to pass as the --input_arrays and --output_arrays parameters.
Any ideas?
I would recommend using the TFLiteConverter python api here: https://www.tensorflow.org/lite/convert/python_api and using SavedModel as your model input format. Otherwise, you can provide the input and output tensor names of your .pb model as input_arrays and output_arrays.
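A minimal sketch of the Python API route, assuming TF 1.x and that the CycleGAN has been exported as a SavedModel (the paths below are placeholders, not taken from the repo):

import tensorflow as tf

# Convert a SavedModel directory to a .tflite flatbuffer.
converter = tf.lite.TFLiteConverter.from_saved_model("export_dir")
tflite_model = converter.convert()
with open("model.tflite", "wb") as f:
    f.write(tflite_model)

Going through a SavedModel avoids having to name input_arrays and output_arrays by hand, since they are read from the model's signature.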
I am aiming to run inference on a TensorFlow Slim model with the Intel OpenVINO optimizer, using the OpenVINO docs and slides for inference and the TF-Slim docs for training the model.
It's a multi-class classification problem. I have trained a TF-Slim mobilenet_v2 model from scratch (using the script train_image_classifier.py). Evaluation of the trained model on the test set gives relatively good results to begin with (using the script eval_image_classifier.py):
eval/Accuracy[0.8017]
eval/Recall_5[0.9993]
However, a single .ckpt file is not saved (even though at the end of the train_image_classifier.py run there is a message like "model.ckpt is saved to checkpoint_dir"); instead there are 3 files (.ckpt-180000.data-00000-of-00001, .ckpt-180000.index, .ckpt-180000.meta).
OpenVINO model optimizer requires a single checkpoint file.
According to the docs, I call mo_tf.py with the following params:
python mo_tf.py --input_model D:/model/mobilenet_v2_224.pb --input_checkpoint D:/model/model.ckpt-180000 -b 1
It gives the error (same if I pass --input_checkpoint D:/model/model.ckpt):
[ ERROR ] The value for command line parameter "input_checkpoint" must be existing file/directory, but "D:/model/model.ckpt-180000" does not exist.
The error message is clear: there are no such files on disk. But as far as I know, most TF utilities resolve .ckpt-????.meta to .ckpt under the hood.
Trying to call:
python mo_tf.py --input_model D:/model/mobilenet_v2_224.pb --input_meta_graph D:/model/model.ckpt-180000.meta -b 1
Causes:
[ ERROR ] Unknown configuration of input model parameters
It doesn't matter to me how the graph gets transferred to the OpenVINO intermediate representation; I just need to reach that result.
Thanks a lot.
EDIT
I managed to run the OpenVINO model optimizer on the frozen graph of the TF-Slim model. However, I still have no idea why my previous attempts (based on the docs) failed.
You can try converting the model to frozen format (.pb) and then converting it using OpenVINO.
.ckpt-meta contains the metagraph: the computation graph structure without variable values. It is the graph you can observe in TensorBoard.
.ckpt-data contains the variable values, without the graph structure. To restore a model, we need both the meta and data files.
The .pb file saves the whole graph (meta + data).
As per the documentation of OpenVINO:
When a network is defined in Python* code, you have to create an inference graph file. Usually, graphs are built in a form that allows model training. That means that all trainable parameters are represented as variables in the graph. To use the graph with the Model Optimizer, it should be frozen.
https://software.intel.com/en-us/articles/OpenVINO-Using-TensorFlow
OpenVINO optimizes the model by converting the weighted graph passed to it in frozen form.
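A minimal freezing sketch, assuming TF 1.x; the output node name below is a guess for a slim mobilenet_v2 classifier and should be checked against your own graph:

import tensorflow as tf

# Restore the graph structure (.meta) and variable values (.data),
# then fold the variables into constants and write a single .pb.
saver = tf.train.import_meta_graph("model.ckpt-180000.meta")
with tf.Session() as sess:
    saver.restore(sess, "model.ckpt-180000")
    frozen = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph_def, ["MobilenetV2/Predictions/Reshape_1"])
with tf.gfile.GFile("frozen.pb", "wb") as f:
    f.write(frozen.SerializeToString())

The resulting frozen.pb can then be passed to mo_tf.py via --input_model alone, without --input_checkpoint.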
I retrained an InceptionV3 model on my own data using TensorFlow Slim. The following files are generated after training:
graph.pbtxt, model.ckpt, model.meta, model.index, checkpoint,
events.out.tfevents
I want to freeze the graph files and create a .pb file. I don't know what the input and output nodes are in Inception v3, and using TensorBoard is complex for me.
What are the input/output nodes in InceptionV3 (in slim/nets)? Or how can I find them?
OS: Windows 7
(A). If you make it to the bottom of this link, you will find this (specific to InceptionV3):
input_layer=input
output_layer=InceptionV3/Predictions/Reshape_1
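With those names, a freeze_graph call might look like this (paths are placeholders; freeze_graph reads the text-format graph.pbtxt by default):

freeze_graph --input_graph graph.pbtxt --input_checkpoint model.ckpt --output_node_names InceptionV3/Predictions/Reshape_1 --output_graph frozen_inception_v3.pb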
(B). Another way is to print all tensors of the model and identify the input/output tensors:
from tensorflow.python.tools.inspect_checkpoint import print_tensors_in_checkpoint_file

ckpt_path = "model.ckpt"
# Print every tensor name stored in the checkpoint.
print_tensors_in_checkpoint_file(file_name=ckpt_path, tensor_name='', all_tensors=True, all_tensor_names=True)
(C). If you need to print the tensor names of a .pb file, you can use a simple snippet like the one below (a minimal sketch, assuming TF 1.x and a frozen GraphDef; the file name is a placeholder).
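import tensorflow as tf

# Load the frozen GraphDef from disk.
graph_def = tf.GraphDef()
with tf.gfile.GFile("frozen_model.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

# Import the graph and list every operation name.
tf.import_graph_def(graph_def, name="")
for op in tf.get_default_graph().get_operations():
    print(op.name)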
Check what would work for you.
I used Keras to build a model and trained it. Then I saved the model as an h5 file, i.e. model.save('name.h5'). Now I want to reload the model in TensorFlow such that I have access to a .meta file, e.g. to import the computational graph via tf.train.import_meta_graph('name_of_the_file.meta').
So, the question is how to convert .h5 file of Keras to the following four files of TensorFlow:
.meta
checkpoint
.data-00000-of-00001
.index
You can use third-party packages, for example keras_to_tensorflow:
keras_to_tensorflow: General code to convert a trained keras model into an inference tensorflow model
The conversion can be done with:
python3 keras_to_tensorflow.py -input_model_file model.h5
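Note that keras_to_tensorflow produces a frozen .pb rather than the four checkpoint files. If you specifically need .meta/.index/.data/checkpoint, a minimal sketch in TF 1.x is to load the .h5 and save the backing session with a Saver (file paths here are placeholders):

import tensorflow as tf

model = tf.keras.models.load_model("name.h5")
# Grab the TF session that backs tf.keras and write a checkpoint;
# this produces .meta, .index, .data-00000-of-00001 and checkpoint.
sess = tf.keras.backend.get_session()
saver = tf.train.Saver()
saver.save(sess, "./model.ckpt")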
TensorFlow 2.x will do that automatically. The function you are using to save is:
save(
filepath,
overwrite=True,
include_optimizer=True,
save_format=None
)
The save_format argument lets you choose either 'h5' or 'tf'. However, for TensorFlow 1.x it is not implemented yet (and probably never will be).
save_format: Either 'tf' or 'h5', indicating whether to save the model
to Tensorflow SavedModel or HDF5. The default is currently 'h5', but
will switch to 'tf' in TensorFlow 2.0. The 'tf' option is currently
disabled (use tf.keras.experimental.export_saved_model instead).
You can do as it says and use tf.keras.experimental.export_saved_model, but it will still not create the .meta file.
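For completeness, a sketch of that workaround (TF 1.x with tf.keras assumed); it writes a SavedModel directory, which again contains a .pb but no .meta file:

import tensorflow as tf

model = tf.keras.models.load_model("name.h5")
tf.keras.experimental.export_saved_model(model, "./saved_model_dir")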