How to create classify_image_graph_def.pb from my model? - tensorflow

I used this repo in order to convert my Caffe model into TensorFlow. I ended up with 2 files: one is a Python class and the second is an npy file with the model weights.
However, I want to generate a single file with the same format as this one (this file, named classify_image_graph_def.pb, can be used to forward the net over any test image).
I'm interested in this format specifically because this is the one required by the script quantize_graph.py.

The right way to do it is to convert all variables in the graph into constants. This can be done using the freeze_graph tool.
But since this tool is not part of the runtime TensorFlow library, I found it more convenient to do it with the following lines (without having to build freeze_graph):
import tensorflow as tf
from tensorflow.python.framework import graph_util

sess = tf.InteractiveSession()

### create some graph here ###
##############################

graph_def = sess.graph.as_graph_def()
output_node_names = "output0,output1"  # put the names of the output nodes here

# freeze all parameters and save
# output_graph_file is the path where the frozen .pb will be written
output_graph_def = graph_util.convert_variables_to_constants(
    sess, graph_def, output_node_names.split(","))
with tf.gfile.GFile(output_graph_file, "wb") as f:
    f.write(output_graph_def.SerializeToString())
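Once the .pb file has been written, loading it back and forwarding a test image could look roughly like this (a sketch only: the tensor names input:0 and output0:0 and the variable test_image are placeholders for whatever your graph actually uses):
import tensorflow as tf

# read the frozen GraphDef back from disk
with tf.gfile.GFile(output_graph_file, "rb") as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

# import it into a fresh graph and run a forward pass
graph = tf.Graph()
with graph.as_default():
    tf.import_graph_def(graph_def, name="")

with tf.Session(graph=graph) as sess:
    input_tensor = graph.get_tensor_by_name("input:0")      # placeholder name
    output_tensor = graph.get_tensor_by_name("output0:0")   # placeholder name
    predictions = sess.run(output_tensor, feed_dict={input_tensor: test_image})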

Related

How can I use the Tensorflow .pb file?

I have a TensorFlow file, AlexNet.pb. I am trying to load it and then classify an image that I have, but I can't find a way to do so.
No one seems to have a simple example of loading and running the .pb file.
It depends on how the protobuf file has been created.
If the .pb file is the result of:
# Create a builder to export the model
builder = tf.saved_model.builder.SavedModelBuilder("export")
# Tag the model in order to be capable of restoring it specifying the tag set
builder.add_meta_graph_and_variables(sess, ["tag"])
builder.save()
You have to know how that model has been tagged and use the tf.saved_model.loader.load method to load the saved graph into the current, empty, graph.
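For example, assuming the model was exported with the tag "tag" into the "export" directory as above, loading it back might look like this (a sketch):
import tensorflow as tf

with tf.Session(graph=tf.Graph()) as sess:
    # restores both the graph structure and the variable values saved under that tag set
    tf.saved_model.loader.load(sess, ["tag"], "export")
    graph = tf.get_default_graph()
    # tensors can then be looked up by name, e.g. graph.get_tensor_by_name(...)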
If the model instead has been frozen, you have to load the binary file into memory manually:
with tf.gfile.GFile(frozen_graph_filename, "rb") as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())
graph = tf.get_default_graph()
tf.import_graph_def(graph_def, name="prefix")
In both cases, you have to know the name of the input tensor and the name of the node you want to execute:
If, for example, your input tensor is a placeholder named batch and the node you want to execute is the node named dense/BiasAdd:0, you have to:
batch = graph.get_tensor_by_name('batch:0')
prediction = graph.get_tensor_by_name('dense/BiasAdd:0')
# note: if the graph was imported with tf.import_graph_def(..., name="prefix"),
# the names become 'prefix/batch:0' and 'prefix/dense/BiasAdd:0'
values = sess.run(prediction, feed_dict={
    batch: your_input_batch,
})
You can use OpenCV to load .pb models, e.g.:
net = cv2.dnn.readNet("model.pb")
Make sure you are using a suitable version of OpenCV: OpenCV 3.4.2 or OpenCV 4.
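A minimal forward pass with the OpenCV dnn module might then look like this (a sketch; the input size and preprocessing below are placeholders and depend on what the model expects):
import cv2

net = cv2.dnn.readNet("model.pb")
img = cv2.imread("test.jpg")
# convert the image to a 4D blob; size, scale and mean must match the model's training setup
blob = cv2.dnn.blobFromImage(img, scalefactor=1.0, size=(224, 224))
net.setInput(blob)
out = net.forward()
print(out)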

Tensorflow Combining Two Models End to End

In tensorflow it is fairly easy to load trained models back into tensorflow through the use of checkpoints. However, this use case seems oriented towards users that want to either run evaluation or additional training on a checkpointed model.
What is the simplest way in tensorflow to load a pre-trained model and use it (without training) to produce results which will then be used in a new model?
Right now the methods that seem most promising are tf.get_tensor_by_name() and tf.stop_gradient() in order to get the input and output tensors for the trained model loaded from tf.train.import_meta_graph().
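Roughly what I have in mind is something like this sketch (the checkpoint path and tensor names are just placeholders):
import tensorflow as tf

# load the pre-trained graph and its weights from a checkpoint
saver = tf.train.import_meta_graph("pretrained/model.ckpt.meta")
with tf.Session() as sess:
    saver.restore(sess, "pretrained/model.ckpt")
    graph = tf.get_default_graph()

    # grab the pre-trained model's output tensor by name
    pretrained_output = graph.get_tensor_by_name("output:0")

    # block gradients so training the new model never touches the old one
    features = tf.stop_gradient(pretrained_output)

    # ... build the new model on top of `features` ...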
What is the best practices setup for this sort of thing?
The most straightforward solution would be to freeze the pre-trained model variables using this function:
import tensorflow as tf

def freeze_graph(model_dir, output_node_names):
    """Extract the sub graph defined by the output nodes and convert
    all its variables into constants.

    Args:
        model_dir: the root folder containing the checkpoint state file
        output_node_names: a string containing all the output node names,
            comma separated
    """
    if not tf.gfile.Exists(model_dir):
        raise AssertionError("Export directory doesn't exist")
    if not output_node_names:
        print("You need to supply the name of the output node")
        return -1

    # We retrieve our checkpoint fullpath
    checkpoint = tf.train.get_checkpoint_state(model_dir)
    input_checkpoint = checkpoint.model_checkpoint_path

    # The directory containing the checkpoint files (a sensible place to write a frozen graph)
    absolute_model_dir = "/".join(input_checkpoint.split('/')[:-1])

    # We clear devices to allow TensorFlow to control on which device it will load operations
    clear_devices = True

    # We start a session using a temporary fresh Graph
    with tf.Session(graph=tf.Graph()) as sess:
        # We import the meta graph into the current default Graph
        saver = tf.train.import_meta_graph(input_checkpoint + '.meta',
                                           clear_devices=clear_devices)
        # We restore the weights
        saver.restore(sess, input_checkpoint)
        # We use a built-in TF helper to export variables to constants
        frozen_graph_def = tf.graph_util.convert_variables_to_constants(
            sess,  # The session is used to retrieve the weights
            tf.get_default_graph().as_graph_def(),  # The graph_def is used to retrieve the nodes
            output_node_names.split(",")  # The output node names are used to select the useful nodes
        )
    return frozen_graph_def
Then you'd be able to build your new model on top of the pre-trained one. Note that freeze_graph returns a GraphDef, so it has to be imported into a Graph before tensors can be looked up by name:
# Get the frozen graph definition
frozen_graph_def = freeze_graph(YOUR_MODEL_DIR, YOUR_OUTPUT_NODES)

# Import it into a new Graph and make that the default graph
graph = tf.Graph()
with graph.as_default():
    tf.import_graph_def(frozen_graph_def, name="")

    # Get the output tensor from the pre-trained model
    pre_trained_model_result = graph.get_tensor_by_name(OUTPUT_TENSOR_NAME_OF_PRETRAINED_MODEL)

    # Let's say you want to get the square root of the pre-trained model's result
    my_new_operation_results = tf.sqrt(pre_trained_model_result)
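The combined graph can then be run like any other; a sketch, where INPUT_TENSOR_NAME_OF_PRETRAINED_MODEL and your_input_batch stand in for your own input tensor name and data:
with tf.Session(graph=graph) as sess:
    # nothing to initialize here: everything in the frozen part is a constant
    pretrained_input = graph.get_tensor_by_name(INPUT_TENSOR_NAME_OF_PRETRAINED_MODEL)
    result = sess.run(my_new_operation_results,
                      feed_dict={pretrained_input: your_input_batch})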

Embedding feature vectors in Tensorflow

In text processing there are embeddings to show (if I understood it correctly) the words of a database as vectors (after dimension reduction).
Now, I am wondering, is there any method like this to show the features extracted by a CNN?
For example: consider we have a CNN and train and test sets. We want to train the CNN with the train set and, meanwhile, see the extracted features (from the dense layer) together with the corresponding class labels in the embedding section of TensorBoard.
The purpose of this is to see the features of the input data in every batch and understand how close or far they are from each other. Finally, in the trained model, we can find out the accuracy of our classifier (softmax, etc.).
Thank you in advance for your help.
I have taken help from the TensorFlow documentation.
For in depth information on how to run TensorBoard and make sure you are logging all the necessary information, see TensorBoard: Visualizing Learning.
To visualize your embeddings, there are 3 things you need to do:
1) Set up a 2D tensor that holds your embedding(s).
embedding_var = tf.get_variable(....)
2) Periodically save your model variables in a checkpoint in LOG_DIR.
saver = tf.train.Saver()
saver.save(session, os.path.join(LOG_DIR, "model.ckpt"), step)
3) (Optional) Associate metadata with your embedding.
If you have any metadata (labels, images) associated with your embedding, you can tell TensorBoard about it either by directly storing a projector_config.pbtxt in the LOG_DIR, or use our python API.
For instance, the following projector_config.pbtxt associates the word_embedding tensor with metadata stored in $LOG_DIR/metadata.tsv:
embeddings {
  tensor_name: 'word_embedding'
  metadata_path: '$LOG_DIR/metadata.tsv'
}
The same config can be produced programmatically using the following code snippet:
import os
import tensorflow as tf
from tensorflow.contrib.tensorboard.plugins import projector

# Create randomly initialized embedding weights which will be trained.
vocabulary_size = 10000
embedding_size = 200
embedding_var = tf.get_variable('word_embedding', [vocabulary_size, embedding_size])

# Format: tensorflow/tensorboard/plugins/projector/projector_config.proto
config = projector.ProjectorConfig()

# You can add multiple embeddings. Here we add only one.
embedding = config.embeddings.add()
embedding.tensor_name = embedding_var.name
# Link this tensor to its metadata file (e.g. labels).
embedding.metadata_path = os.path.join(LOG_DIR, 'metadata.tsv')

# Use the same LOG_DIR where you stored your checkpoint.
summary_writer = tf.summary.FileWriter(LOG_DIR)

# The next line writes a projector_config.pbtxt in the LOG_DIR. TensorBoard will
# read this file during startup.
projector.visualize_embeddings(summary_writer, config)
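For the CNN case in the question, the same machinery can be pointed at the dense-layer activations instead of word embeddings. A rough sketch (the names features_var, dense_features, labels, the sizes and LOG_DIR are all assumptions for illustration, not part of the documentation above): copy the features into a dedicated variable, write one label per row to metadata.tsv, then save a checkpoint in LOG_DIR:
import os
import tensorflow as tf
from tensorflow.contrib.tensorboard.plugins import projector

LOG_DIR = "logs/embedding"
feature_dim = 128      # size of the dense-layer output (assumed)
num_samples = 1000     # number of samples to visualize (assumed)

# variable that will hold the dense-layer features for the projector
features_var = tf.Variable(tf.zeros([num_samples, feature_dim]), name="cnn_features")

# ... inside your session, once `dense_features` (a [num_samples, feature_dim] array
# obtained by running the dense layer on your data) and `labels` are available:
sess.run(features_var.assign(dense_features))

# one label per row so TensorBoard can color the points by class
with open(os.path.join(LOG_DIR, "metadata.tsv"), "w") as f:
    for label in labels:
        f.write("{}\n".format(label))

config = projector.ProjectorConfig()
embedding = config.embeddings.add()
embedding.tensor_name = features_var.name
embedding.metadata_path = os.path.join(LOG_DIR, "metadata.tsv")
projector.visualize_embeddings(tf.summary.FileWriter(LOG_DIR), config)

saver = tf.train.Saver([features_var])
saver.save(sess, os.path.join(LOG_DIR, "model.ckpt"), 1)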

Saving tf.trainable_variables() using convert_variables_to_constants

I have a Keras model that I would like to convert to a Tensorflow protobuf (e.g. saved_model.pb).
This model comes from transfer learning on the vgg-19 network, in which the head was cut off and replaced with fully-connected+softmax layers that were trained, while the rest of the vgg-19 network was kept frozen.
I can load the model in Keras, and then use keras.backend.get_session() to run the model in tensorflow, generating the correct predictions:
frame = preprocess(cv2.imread("path/to/img.jpg"))
keras_model = keras.models.load_model("path/to/keras/model.h5")
keras_prediction = keras_model.predict(frame)
print(keras_prediction)

with keras.backend.get_session() as sess:
    tvars = tf.trainable_variables()
    output = sess.graph.get_tensor_by_name('Softmax:0')
    input_tensor = sess.graph.get_tensor_by_name('input_1:0')
    tf_prediction = sess.run(output, {input_tensor: frame})
    print(tf_prediction)  # this matches keras_prediction exactly
If I don't include the line tvars = tf.trainable_variables(), then the tf_prediction variable is completely wrong and doesn't match the output from keras_prediction at all. In fact, all the values in the output (a single array with 4 probability values) are exactly the same (~0.25, all adding to 1). This made me suspect that the weights for the head are just initialized to 0 if tf.trainable_variables() is not called first, which was confirmed after inspecting the model variables. In any case, calling tf.trainable_variables() causes the TensorFlow prediction to be correct.
The problem is that when I try to save this model, the variables from tf.trainable_variables() don't actually get saved to the .pb file:
with keras.backend.get_session() as sess:
    tvars = tf.trainable_variables()
    constant_graph = graph_util.convert_variables_to_constants(sess, sess.graph.as_graph_def(), ['Softmax'])
    graph_io.write_graph(constant_graph, './', 'saved_model.pb', as_text=False)
What I am asking is, how can I save a Keras model as a TensorFlow protobuf with the tf.trainable_variables() intact?
Thanks so much!
So your approach of freezing the variables in the graph (converting them to constants) should work, but isn't necessary and is trickier than the other approaches (more on this below). If you want graph freezing for some reason (e.g. exporting to a mobile device), I'd need more details to help debug, as I'm not sure what implicit stuff Keras is doing behind the scenes with your graph. However, if you want to just save and load a graph later, I can explain how to do that (though no guarantees that whatever Keras is doing won't screw it up; happy to help debug that).
So there are actually two formats at play here. One is the GraphDef, which is used for checkpointing; it does not contain metadata about inputs and outputs. The other is a MetaGraphDef, which contains a GraphDef plus metadata; that metadata is useful for prediction and for running a ModelServer (from tensorflow/serving).
In either case you need to do more than just call graph_io.write_graph, because the variables are usually stored outside the GraphDef.
There are wrapper libraries for both of these use cases. tf.train.Saver is primarily used for saving and restoring checkpoints.
However, since you want prediction, I would suggest using a tf.saved_model.builder.SavedModelBuilder to build a SavedModel binary. I've provided some boilerplate for this below:
from tensorflow.python.saved_model.signature_constants import DEFAULT_SERVING_SIGNATURE_DEF_KEY as DEFAULT_SIG_DEF

builder = tf.saved_model.builder.SavedModelBuilder('./mymodel')
with keras.backend.get_session() as sess:
    output = sess.graph.get_tensor_by_name('Softmax:0')
    input_tensor = sess.graph.get_tensor_by_name('input_1:0')
    sig_def = tf.saved_model.signature_def_utils.predict_signature_def(
        {'input': input_tensor},
        {'output': output}
    )
    builder.add_meta_graph_and_variables(
        sess, [tf.saved_model.tag_constants.SERVING],
        signature_def_map={
            DEFAULT_SIG_DEF: sig_def
        }
    )
builder.save()
After running this code you should have a mymodel/saved_model.pb file as well as a directory mymodel/variables/ with protobufs corresponding to the variable values.
Then to load the model again, simply use tf.saved_model.loader:
# Does Keras give you the ability to start with a fresh graph?
# If not you'll need to do this in a separate program to avoid
# conflicts with the old default graph
with tf.Session(graph=tf.Graph()) as sess:
    meta_graph_def = tf.saved_model.loader.load(
        sess,
        [tf.saved_model.tag_constants.SERVING],
        './mymodel'
    )
    # From this point variables and graph structure are restored
    sig_def = meta_graph_def.signature_def[DEFAULT_SIG_DEF]
    # the signature stores tensor names, which sess.run accepts directly
    print(sess.run(sig_def.outputs['output'].name,
                   feed_dict={sig_def.inputs['input'].name: frame}))
Obviously more efficient prediction is available than with this code, through tensorflow/serving or Cloud ML Engine, but this should work.
It's possible that Keras is doing something under the hood which will interfere with this process as well, and if so we'd like to hear about it (and I'd like to make sure that Keras users are able to freeze graphs as well, so if you want to send me a gist with your full code or something maybe I can find someone who knows Keras well to help me debug.)
EDIT: You can find an end to end example of this here: https://github.com/GoogleCloudPlatform/cloudml-samples/blob/master/census/keras/trainer/model.py#L85

How to save a trained tensorflow model for later use for application?

I am a bit of a beginner with tensorflow so please excuse if this is a stupid question and the answer is obvious.
I have created a Tensorflow graph where starting with placeholders for X and y I have optimized some tensors which represent my model. Part of the graph is something where a vector of predictions can be calculated, e.g. for linear regression something like
y_model = tf.add(tf.mul(X,w),d)
y_vals = sess.run(y_model,feed_dict={....})
After training has been completed I have acceptable values for w and d and now I want to save my model for later. Then, in a different python session I want to restore the model so that I can again run
## Starting brand new python session
import tensorflow as tf
## somehow restore the graph and the values here: how????
## so that I can run this:
y_vals = sess.run(y_model,feed_dict={....})
for some different data and get back the y-values.
I want this to work in a way where the graph for calculating the y-values from the placeholders is also stored and restored - as long as the placeholders get fed the correct data, this should work transparently, without the user (the one who applies the model) needing to know what the graph looks like.
As far as I understand tf.train.Saver().save(..) only saves the variables but I also want to save the graph. I think that tf.train.export_meta_graph could be relevant here but I do not understand how to use it correctly, the documentation is a bit cryptic to me and the examples do not even use export_meta_graph anywhere.
From the docs, try this:
# Create some variables.
v1 = tf.Variable(..., name="v1")
v2 = tf.Variable(..., name="v2")
...
# Add an op to initialize the variables.
init_op = tf.global_variables_initializer()

# Add ops to save and restore all the variables.
saver = tf.train.Saver()

# Later, launch the model, initialize the variables, do some work, save the
# variables to disk.
with tf.Session() as sess:
    sess.run(init_op)
    # Do some work with the model.
    ...
    # Save the variables to disk.
    save_path = saver.save(sess, "/tmp/model.ckpt")
    print("Model saved in file: %s" % save_path)
You can specify the path.
And if you want to restore the model, try:
with tf.Session() as sess:
    saver = tf.train.import_meta_graph('/tmp/model.ckpt.meta')
    saver.restore(sess, "/tmp/model.ckpt")
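Once the graph and variables are restored like this, you can look tensors up by name and run them, which covers the y_vals = sess.run(y_model, ...) use case from the question. A sketch, continuing inside the same session, and assuming the relevant ops were given explicit names when the graph was originally built (e.g. name="X" and name="y_model"), with new_data standing in for your new input:
    graph = tf.get_default_graph()
    X = graph.get_tensor_by_name("X:0")              # only exists if the placeholder was named "X"
    y_model = graph.get_tensor_by_name("y_model:0")  # only exists if the op was named "y_model"
    y_vals = sess.run(y_model, feed_dict={X: new_data})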
Saving Graph in Tensorflow:
import tensorflow as tf

# Create some placeholder variables
x_pl = tf.placeholder(..., name="x_pl")
y_pl = tf.placeholder(..., name="y_pl")

# Add some operation to the Graph
add_op = tf.add(x_pl, y_pl)

with tf.Session() as sess:
    # Add variable initializer
    init = tf.global_variables_initializer()
    # Add ops to save variables to checkpoints
    # Unless var_list is specified Saver will save ALL named variables
    # in Graph
    # Optionally set maximum of 3 latest models to be saved
    saver = tf.train.Saver(max_to_keep=3)
    # Run variable initializer
    sess.run(init)
    for i in range(no_steps):
        # Feed placeholders with some data and run operation
        sess.run(add_op, feed_dict={x_pl: i+1, y_pl: i+5})
        saver.save(sess, "path/to/checkpoint/model.ckpt", global_step=i)
This will save the following files:
1) Meta Graph
.meta file:
MetaGraphDef protocol buffer representation of MetaGraph which saves the complete Tf Graph structure i.e. the GraphDef that describes the dataflow and all metadata associated with it e.g. all variables, operations, collections, etc.
importing the graph structure will recreate the Graph and all its variables, then the corresponding values for these variables can be restored from the checkpoint file
if you don't want to restore the Graph, however, you can reconstruct all of the information in the MetaGraphDef by re-executing the Python code that builds the model (n.b. you must recreate the EXACT SAME variables first before restoring their values from the checkpoint)
since the Meta Graph file is not always needed, you can switch off writing it in saver.save using write_meta_graph=False (see the snippet after the file descriptions below)
2) Checkpoint files
.data file:
binary file containing VALUES of all saved variables outlined in tf.train.Saver() (default is all variables)
.index file:
immutable table describing all tensors and their metadata
checkpoint file:
keeps a record of the latest checkpoint files saved
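For example, skipping the meta graph on a given save call could look like this (a sketch using the write_meta_graph flag mentioned above):
saver.save(sess, "path/to/checkpoint/model.ckpt", global_step=i, write_meta_graph=False)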
Restoring Graph in Tensorflow:
import tensorflow as tf

latest_checkpoint = tf.train.latest_checkpoint("path/to/checkpoint")

# Load latest checkpoint Graph via import_meta_graph:
#   - construct protocol buffer from file content
#   - add all nodes to current graph and recreate collections
#   - return Saver
saver = tf.train.import_meta_graph(latest_checkpoint + '.meta')

# Start session
with tf.Session() as sess:
    # Restore previously trained variables from disk
    print("Restoring Model: {}".format("path/to/checkpoint"))
    saver.restore(sess, latest_checkpoint)

    # Retrieve protobuf graph definition
    graph = tf.get_default_graph()
    print("Restored Operations from MetaGraph:")
    for op in graph.get_operations():
        print(op.name)

    # Access restored placeholder variables
    x_pl = graph.get_tensor_by_name("x_pl:0")
    y_pl = graph.get_tensor_by_name("y_pl:0")
    # Access a restored operation to re-run (the name here is just an example)
    accuracy_op = graph.get_tensor_by_name("accuracy_op:0")
This is just a quick example with the basics, for a working implementation see here.
In order to save the graph, you need to freeze the graph.
Here is the python script for freezing the graph : https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/tools/freeze_graph.py
Here is a code snippet for freezing the graph:
from tensorflow.python.tools import freeze_graph

freeze_graph.freeze_graph(input_graph_path, input_saver_def_path,
                          input_binary, checkpoint_path, output_node_names,
                          restore_op_name, filename_tensor_name,
                          output_frozen_graph_name, True, "")
where output_node_names corresponds to the output tensor, e.g. one created as:
output = tf.nn.softmax(outer_layer_name, name="output")
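For completeness, a minimal sketch of how those arguments might be filled in (every path and name below is an assumption for illustration, not something prescribed by freeze_graph):
input_graph_path = "/tmp/model/graph.pbtxt"              # GraphDef written e.g. via tf.train.write_graph
input_saver_def_path = ""                                # no separate SaverDef file
input_binary = False                                     # graph.pbtxt is a text proto
checkpoint_path = "/tmp/model/model.ckpt"                # checkpoint holding the trained variables
output_node_names = "output"                             # the node created with name="output" above
restore_op_name = "save/restore_all"
filename_tensor_name = "save/Const:0"
output_frozen_graph_name = "/tmp/model/frozen_graph.pb"  # where the frozen GraphDef is written

freeze_graph.freeze_graph(input_graph_path, input_saver_def_path,
                          input_binary, checkpoint_path, output_node_names,
                          restore_op_name, filename_tensor_name,
                          output_frozen_graph_name, True, "")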