I have a TensorFlow file, AlexNet.pb. I am trying to load it and then classify an image that I have, but I can't find a way to do it.
No one seems to have a simple example of loading and running a .pb file.
It depends on how the protobuf file has been created.
If the .pb file is the result of:
# Create a builder to export the model
builder = tf.saved_model.builder.SavedModelBuilder("export")
# Tag the model in order to be capable of restoring it specifying the tag set
builder.add_meta_graph_and_variables(sess, ["tag"])
builder.save()
You have to know how that model has been tagged and use the tf.saved_model.loader.load method to load the saved graph into the current (empty) graph.
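For instance, a minimal sketch of loading it back, assuming the export directory "export" and the tag "tag" used in the snippet above:

import tensorflow as tf

with tf.Session(graph=tf.Graph()) as sess:
    # Restores both the graph and the variables tagged with "tag"
    tf.saved_model.loader.load(sess, ["tag"], "export")
    # The default graph of this session now contains the model;
    # look tensors up by name as shown further below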
If the model instead has been frozen, you have to load the binary file into memory manually:

with tf.gfile.GFile(frozen_graph_filename, "rb") as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

graph = tf.get_default_graph()
tf.import_graph_def(graph_def, name="prefix")
In both cases, you have to know the name of the input tensor and the name of the node you want to execute.
If, for example, your input tensor is a placeholder named batch and the tensor you want to evaluate is dense/BiasAdd:0, you have to look them up in the graph (note that, because the graph was imported with name="prefix", every imported name gets that prefix):

batch = graph.get_tensor_by_name('prefix/batch:0')
prediction = graph.get_tensor_by_name('prefix/dense/BiasAdd:0')

values = sess.run(prediction, feed_dict={
    batch: your_input_batch,
})
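Putting it together for the image-classification case in the question, a rough sketch could look like this; the input size, preprocessing, and tensor names are assumptions that depend on how AlexNet.pb was exported:

import numpy as np
from PIL import Image
import tensorflow as tf

# Load and preprocess the image (227x227 is typical for AlexNet; adjust as needed)
image = Image.open("my_image.jpg").resize((227, 227))
input_batch = np.expand_dims(np.asarray(image, dtype=np.float32), axis=0)

with tf.gfile.GFile("AlexNet.pb", "rb") as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name="prefix")
    batch = graph.get_tensor_by_name('prefix/batch:0')               # assumed input name
    prediction = graph.get_tensor_by_name('prefix/dense/BiasAdd:0')  # assumed output name
    with tf.Session(graph=graph) as sess:
        logits = sess.run(prediction, feed_dict={batch: input_batch})
        print("Predicted class:", np.argmax(logits, axis=1))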
You can also use OpenCV to load .pb models, e.g.:

net = cv2.dnn.readNet("model.pb")

Make sure you are using OpenCV 3.4.2 or OpenCV 4.
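As a rough sketch of how classification with the OpenCV DNN module could look; the input size and scale factor are assumptions that depend on how the network was trained:

import cv2
import numpy as np

net = cv2.dnn.readNet("model.pb")
image = cv2.imread("my_image.jpg")
# Build a 4D blob; size, scale and mean values depend on the model's preprocessing
blob = cv2.dnn.blobFromImage(image, scalefactor=1.0, size=(227, 227))
net.setInput(blob)
scores = net.forward()
print("Predicted class:", int(np.argmax(scores)))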
I am a newbie with the TensorFlow Object Detection library. I have a specific dataset that I had to produce myself, and I labeled thousands of jpg images. I have run the training to detect objects in these images. At the end of the process I got a frozen graph, and from it I exported the model.ckpt files to the inference-graph folder. Everything went fine, and I have tested the model.ckpt model in the object_detection.ipynb notebook and it works. Up to this step there is no problem.
However, I am not able to understand how I could convert that model.ckpt file to a model.tflite file to use in an Android Studio app.
I have seen many examples like
input_tensors = [...]
output_tensors = [...]
but I have no idea what the input and output tensors actually are.
Could you show me how I could convert it?
Use TensorBoard to find out your input and output layers. For reference, follow these links:
https://heartbeat.fritz.ai/intro-to-machine-learning-on-android-how-to-convert-a-custom-model-to-tensorflow-lite-e07d2d9d50e3
Tensorflow Convert pb file to TFLITE using python
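If you only have a frozen .pb, a minimal sketch for getting its graph into TensorBoard (the file names here are placeholders):

import tensorflow as tf

with tf.gfile.GFile("frozen_model.pb", "rb") as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name="")
    # Write the graph so that `tensorboard --logdir logs` can display it
    tf.summary.FileWriter("logs", graph=graph).close()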
If you don't know your inputs and outputs, use the summarize_graph tool and feed it your frozen model. The command is documented here:
https://github.com/tensorflow/tensorflow/tree/master/tensorflow/tools/graph_transforms#inspecting-graphs
If you have trained your model from scratch, you must have the .meta file. You also need to specify the output node names, with which you can create a .pb file. Please refer to the link below for the steps to create this file:
Tensorflow: How to convert .meta, .data and .index model files into one graph.pb file
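As a rough sketch of that freezing step (the file and node names here are placeholders, not taken from your model):

import tensorflow as tf
from tensorflow.python.framework import graph_util

with tf.Session() as sess:
    # Restore the graph structure and the trained weights
    saver = tf.train.import_meta_graph("model.ckpt.meta")
    saver.restore(sess, "model.ckpt")
    # Convert every variable reachable from the output node into a constant
    frozen = graph_util.convert_variables_to_constants(
        sess, sess.graph.as_graph_def(), ["model_outputs"])
    with tf.gfile.GFile("model.pb", "wb") as f:
        f.write(frozen.SerializeToString())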
Once this is created you can further convert your .pb to tflite as below:
import tensorflow as tf

graph_def_file = "model.pb"
input_arrays = ["model_inputs"]    # name(s) of the input tensor(s) of the frozen graph
output_arrays = ["model_outputs"]  # name(s) of the output tensor(s)

# Build a converter from the frozen graph and write out the .tflite file
converter = tf.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file, input_arrays, output_arrays)
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)
In TensorFlow it is fairly easy to load trained models back through the use of checkpoints. However, this use case seems oriented towards users who want to either run evaluation or additional training on a checkpointed model.
What is the simplest way in TensorFlow to load a pre-trained model and use it (without training) to produce results which will then be used in a new model?
Right now the methods that seem most promising are graph.get_tensor_by_name() and tf.stop_gradient(), in order to get the input and output tensors of the trained model loaded from tf.train.import_meta_graph().
What is the best-practices setup for this sort of thing?
The most straightforward solution would be to freeze the pre-trained model variables using this function:
def freeze_graph(model_dir, output_node_names):
    """Extract the sub graph defined by the output nodes and convert
    all its variables into constants.

    Args:
        model_dir: the root folder containing the checkpoint state file
        output_node_names: a string containing all the output node names,
            comma separated
    """
    if not tf.gfile.Exists(model_dir):
        raise AssertionError("Export directory doesn't exist")
    if not output_node_names:
        print("You need to supply the name of the output node")
        return -1

    # We retrieve our checkpoint fullpath
    checkpoint = tf.train.get_checkpoint_state(model_dir)
    input_checkpoint = checkpoint.model_checkpoint_path

    # We clear devices to allow TensorFlow to control on which device it will load operations
    clear_devices = True

    # We start a session using a temporary fresh Graph
    with tf.Session(graph=tf.Graph()) as sess:
        # We import the meta graph into the current default Graph
        saver = tf.train.import_meta_graph(input_checkpoint + '.meta',
                                           clear_devices=clear_devices)
        # We restore the weights
        saver.restore(sess, input_checkpoint)

        # We use a built-in TF helper to export variables to constants
        frozen_graph = tf.graph_util.convert_variables_to_constants(
            sess,                                    # the session is used to retrieve the weights
            tf.get_default_graph().as_graph_def(),   # the graph_def is used to retrieve the nodes
            output_node_names.split(",")             # the output node names select the useful nodes
        )

    return frozen_graph
Then you'd be able to build your new model on top of the pre-trained one:

# Get the frozen graph (a GraphDef proto)
frozen_graph = freeze_graph(YOUR_MODEL_DIR, YOUR_OUTPUT_NODES)

# Import the frozen graph into the current default graph and grab the
# output tensor of the pre-trained model (the name must include the ':0' suffix)
pre_trained_model_result, = tf.import_graph_def(
    frozen_graph,
    return_elements=[OUTPUT_TENSOR_NAME_OF_PRETRAINED_MODEL])

# Let's say you want to get the square root of the pre-trained model's result
my_new_operation_results = tf.sqrt(pre_trained_model_result)
In text processing there are embeddings to show (if I understood it correctly) the dataset's words as vectors (after dimensionality reduction).
Now I am wondering, is there any method like this to show the features extracted by a CNN?
For example: consider we have a CNN and train and test sets. We want to train the CNN with the train set and, meanwhile, see the extracted features (from a dense layer) together with their corresponding class labels in the embedding section of TensorBoard.
The purpose of this is to see the features of the input data in every batch and understand how close to or far from each other they are. And finally, in the trained model, we can find out the accuracy of our classifier (like softmax, etc.).
Thank you in advance for your help.
I have taken help from the TensorFlow documentation.
For in depth information on how to run TensorBoard and make sure you are logging all the necessary information, see TensorBoard: Visualizing Learning.
To visualize your embeddings, there are 3 things you need to do:
1) Set up a 2D tensor that holds your embedding(s).
embedding_var = tf.get_variable(....)
2) Periodically save your model variables in a checkpoint in LOG_DIR.
saver = tf.train.Saver()
saver.save(session, os.path.join(LOG_DIR, "model.ckpt"), step)
3) (Optional) Associate metadata with your embedding.
If you have any metadata (labels, images) associated with your embedding, you can tell TensorBoard about it either by directly storing a projector_config.pbtxt in the LOG_DIR, or by using the Python API.
For instance, the following projector_config.pbtxt associates the word_embedding tensor with metadata stored in $LOG_DIR/metadata.tsv:
embeddings {
tensor_name: 'word_embedding'
metadata_path: '$LOG_DIR/metadata.tsv'
}
The same config can be produced programmatically using the following code snippet:
from tensorflow.contrib.tensorboard.plugins import projector
# Create randomly initialized embedding weights which will be trained.
vocabulary_size = 10000
embedding_size = 200
embedding_var = tf.get_variable('word_embedding',
                                [vocabulary_size, embedding_size])
# Format: tensorflow/tensorboard/plugins/projector/projector_config.proto
config = projector.ProjectorConfig()
# You can add multiple embeddings. Here we add only one.
embedding = config.embeddings.add()
embedding.tensor_name = embedding_var.name
# Link this tensor to its metadata file (e.g. labels).
embedding.metadata_path = os.path.join(LOG_DIR, 'metadata.tsv')
# Use the same LOG_DIR where you stored your checkpoint.
summary_writer = tf.summary.FileWriter(LOG_DIR)
# The next line writes a projector_config.pbtxt in the LOG_DIR. TensorBoard will
# read this file during startup.
projector.visualize_embeddings(summary_writer, config)
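To adapt this to the CNN case in the question, one possible sketch (the feature size, sample count, and the way you obtain the dense-layer activations are assumptions) is to copy those activations into a dedicated variable and checkpoint it:

import os
import tensorflow as tf
from tensorflow.contrib.tensorboard.plugins import projector

LOG_DIR = 'logs'
num_samples, feature_dim = 1000, 128  # assumed sizes

# Variable that will hold the dense-layer features of num_samples inputs
features_var = tf.get_variable('dense_features', [num_samples, feature_dim])
# batch_features is assumed to be a [num_samples, feature_dim] NumPy array
# obtained by running your dense layer on those samples
features_ph = tf.placeholder(tf.float32, [num_samples, feature_dim])
assign_features = tf.assign(features_var, features_ph)

config = projector.ProjectorConfig()
embedding = config.embeddings.add()
embedding.tensor_name = features_var.name
embedding.metadata_path = os.path.join(LOG_DIR, 'metadata.tsv')  # one class label per line

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(assign_features, feed_dict={features_ph: batch_features})
    projector.visualize_embeddings(tf.summary.FileWriter(LOG_DIR), config)
    tf.train.Saver([features_var]).save(sess, os.path.join(LOG_DIR, 'model.ckpt'))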
Question:
TensorFlow Saver, Exporter, and SavedModelBuilder can all be used to save models. According to
https://stackoverflow.com/questions/41740101/tensorflow-difference-between-saving-model-via-exporter-and-tf-train-write-graph and TensorFlow Serving, I understand that Saver is used for saving training checkpoints, while Exporter and SavedModelBuilder are used for serving.
However, I don't know the differences between their outputs. Are the variable.data-???-of-??? and variable.index files generated by SavedModelBuilder the same as the ckpt-xxx.index and ckpt-xxx.data-???-of-??? files generated by Saver?
I still feel confused about the meaning of the TensorFlow model files. I've read http://cv-tricks.com/tensorflow-tutorial/save-restore-tensorflow-models-quick-complete-tutorial/ and Tensorflow: how to save/restore a model?, which makes me feel more confused.
There are 4 files in the model directory:
graph.pbtxt
model.ckpt-number.data-00000-of-00001
model.ckpt-number.meta
model.ckpt-number.index
Files 2 and 4 store the weights of the variables. File 3 stores the graph. Then what does file 1 store?
How can I convert the outputs of Saver to a SavedModel? I have the checkpoints directory and want to export the model for serving. According to https://github.com/tensorflow/tensorflow/tree/master/tensorflow/python/saved_model
it should be like this:
export_dir = ...
...
builder = tf.saved_model.builder.SavedModelBuilder(export_dir)

with tf.Session(graph=tf.Graph()) as sess:
    ...
    builder.add_meta_graph_and_variables(sess,
                                         [tf.saved_model.tag_constants.TRAINING],
                                         signature_def_map=foo_signatures,
                                         assets_collection=foo_assets)
...
with tf.Session(graph=tf.Graph()) as sess:
    ...
    builder.add_meta_graph(["bar-tag", "baz-tag"])
...
builder.save()
So, I first need to load the checkpoints with :
saver = tf.train.import_meta_graph('model-number.meta')
saver.restore(sess, tf.train.latest_checkpoint('./'))
And then use this sess for the builder.
Am I right?
SavedModel is the format used for serving, created via SavedModelBuilder. The best practice is to have your training code invoke SavedModelBuilder, and to feed the resulting output files to TF-Serving. If you do that you don't need to understand the details of what files are produced :)
The document at [1] talks about the structure of the files inside a SavedModel directory.
[1] https://www.tensorflow.org/programmers_guide/saved_model
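That said, if you only have the checkpoint files, the approach you describe (restore, then build) does work. A rough sketch, where the export directory and the input/output tensor names are placeholders you have to adjust:

import tensorflow as tf

export_dir = "export/1"
builder = tf.saved_model.builder.SavedModelBuilder(export_dir)

with tf.Session(graph=tf.Graph()) as sess:
    # Restore the graph and weights from the checkpoint
    saver = tf.train.import_meta_graph('model-number.meta')
    saver.restore(sess, tf.train.latest_checkpoint('./'))

    graph = tf.get_default_graph()
    inputs = graph.get_tensor_by_name('input:0')    # placeholder name, adjust
    outputs = graph.get_tensor_by_name('output:0')  # placeholder name, adjust

    # Build a serving signature and attach the graph and variables
    signature = tf.saved_model.signature_def_utils.predict_signature_def(
        inputs={'input': inputs}, outputs={'output': outputs})
    builder.add_meta_graph_and_variables(
        sess, [tf.saved_model.tag_constants.SERVING],
        signature_def_map={
            tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: signature})

builder.save()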
I used this repo to convert my Caffe model into TensorFlow. I ended up with two files: one is a Python class and the second is an npy file with the model weights.
However, I want to generate a single file with the same format as this one (that file, named classify_image_graph_def.pb, can be used to forward the net over any test image).
I'm interested in this format specifically because it is the one required by the quantize_graph.py script.
The right way to do it is to convert all the variables in the graph into constants. This can be done using the freeze_graph tool.
But since this tool is not part of the runtime TensorFlow library, I found it more convenient to do it with the following lines (without having to build freeze_graph):

import tensorflow as tf
from tensorflow.python.framework import graph_util

sess = tf.InteractiveSession()

### create some graph here ###
##############################

graph_def = sess.graph.as_graph_def()
output_node_names = "output0,output1"   # put the names of the output nodes here
output_graph_file = "frozen_model.pb"   # where the frozen graph will be written

# freeze all parameters and save
output_graph_def = graph_util.convert_variables_to_constants(
    sess, graph_def, output_node_names.split(","))
with tf.gfile.GFile(output_graph_file, "wb") as f:
    f.write(output_graph_def.SerializeToString())