How to freeze an im2txt model? - tensorflow

I fine-tuned the im2txt model and obtained the ckpt.data, ckpt.index and ckpt.meta files and a graph.pbtxt file using the procedure from the im2txt GitHub repository.
The model seems to work well, as it produces almost correct captions.
Now I would like to freeze this model to use it on Android.
I used the freeze_graph.py script in https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/tools/freeze_graph.py.
python freeze_graph.py --input_graph=/path/to/graph.pbtxt --input_binary=false --input_checkpoint=/path/to/model.ckpt --output_graph=/path/to/output_graph.pb --output_node_names="softmax,lstm/initial_state,lstm/state"
I get the following error: AssertionError: softmax is not in graph.
The discussion in https://github.com/tensorflow/models/issues/816 is about the same problem but it did not help me very much.
Indeed, when I look in the graph.pbtxt generated after fine-tuning, I cannot find softmax, lstm/initial_state and lstm/state.
But in the show_and_tell_model.py file of im2txt, the names of the tensors seem to be "softmax", "lstm/initial_state" and "lstm/state". So, I don't know what's happening.
I hope I was clear enough about what I've tried so far. Thanks in advance for any help.
Regards,
Stephane

Found and verified the answer: in inference_wrapper_base.py, just add something like saver.save(sess, "model/ckpt4") after saver.restore(sess, checkpoint_path) in def _restore_fn(sess):. Then rebuild and run run_inference, and you'll get a model that can be frozen, transformed, and optionally memmapped, to be loaded by iOS and Android apps.
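A minimal sketch of that change, based on the _restore_fn closure in im2txt's inference_wrapper_base.py (saver and checkpoint_path come from the enclosing _create_restore_fn; the save path "model/ckpt4" is just an example):

def _restore_fn(sess):
    tf.logging.info("Loading model from checkpoint: %s", checkpoint_path)
    saver.restore(sess, checkpoint_path)
    tf.logging.info("Successfully loaded checkpoint: %s",
                    os.path.basename(checkpoint_path))
    # Re-save the restored variables against the inference graph so the
    # resulting checkpoint matches the graph that contains "softmax",
    # "lstm/initial_state" and "lstm/state".
    saver.save(sess, "model/ckpt4")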
For detailed commands to freeze, transform, and convert to memmapped, see my answer at Error using Model after using optimize_for_inference.py on frozen graph.

Ok,
I think I finally got the solution. In case it is useful for others, here it is:
After training, you obtain ckpt.data, ckpt.index and ckpt.meta files and a graph.pbtxt file.
You then have to load this model in 'inference' mode (see InferenceWrapper in im2txt). This builds a graph with the correct tensor names 'softmax', 'lstm/initial_state' and 'lstm/state'. Save this graph (in the same ckpt format) and you can then apply the freeze_graph script to obtain the frozen model.
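A minimal sketch of that workflow, assuming the im2txt package layout from the tensorflow/models repository (all paths are placeholders):

import tensorflow as tf
from im2txt import configuration
from im2txt import inference_wrapper

# Build the graph in inference mode so it contains the "softmax",
# "lstm/initial_state" and "lstm/state" tensors.
g = tf.Graph()
with g.as_default():
    model = inference_wrapper.InferenceWrapper()
    restore_fn = model.build_graph_from_config(
        configuration.ModelConfig(), "/path/to/train_dir")
    saver = tf.train.Saver()

with tf.Session(graph=g) as sess:
    # Restore the fine-tuned weights into the inference graph, then
    # re-save both checkpoint and graph definition for freeze_graph.
    restore_fn(sess)
    saver.save(sess, "/path/to/inference_dir/model.ckpt")
    tf.train.write_graph(g.as_graph_def(), "/path/to/inference_dir",
                         "graph.pbtxt")

The freeze_graph.py command from the question should then find the expected output nodes when pointed at the re-saved graph.pbtxt and model.ckpt.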
Regards,
Stephane

Related

Use Tensorflow2 saved model for object detection

I'm quite new to object detection, but I managed to train my first custom TensorFlow model yesterday. I think it worked fine apart from some warnings; at least I got my exported_model folder with a checkpoint, a saved_model and pipeline.config. I built it with exporter_main_v2.py from TensorFlow. I just loaded some images of deer and want to try to detect some in different pictures.
That's what I would like to test now, but I don't know how. I already did an object detection tutorial with pre-trained models and it worked fine. I tried to just replace config_file_path, saved_model_path and image_path with the paths to my exported model, but it didn't work:
error: OpenCV(4.6.0) D:\a\opencv-python\opencv-python\opencv\modules\dnn\src\tensorflow\tf_io.cpp:42: error: (-2:Unspecified error) FAILED: ReadProtoFromBinaryFile(param_file, param). Failed to parse GraphDef file: D:\VSCode\Machine_Learning_Tests\Tensorflow\workspace\exported_models\first_model\saved_model\saved_model.pb in function 'cv::dnn::ReadTFNetParamsFromBinaryFileOrDie'
There are endless tutorials on how to train a custom detector, but I can't find a good explanation of how to manually test my exported model.
Thanks in advance!
EDIT: I need to know how to build a script where I can load a model I saved with TensorFlow's exporter_main_v2.py together with an image to test it on, and get a result, either as text or as rectangles drawn in the picture. I've seen many tutorials, but none works for me with a model saved via exporter_main_v2.py.
From the error, it looks like you have a model saved as a .pb file. If you want to run inference, you can write something like this:
import tensorflow as tf

# Load the exported model directory as a Keras model.
model = tf.keras.models.load_model(my_model_dir)
# Run batched inference; x is your test dataset of images.
prediction = model.predict(x=x_test)
You'll have to set x, which is the only mandatory argument; it is your test dataset (the images you want to obtain predictions for). predict is useful when you have a large number of images: it handles prediction in batches, avoiding filling up memory. If you have just a few, you can directly use the model's __call__() method, like this:
prediction = model(x_test, training=False)
More about predict can be found in the TensorFlow documentation.
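Note that a model exported with exporter_main_v2.py is a plain SavedModel rather than a Keras model, so if load_model doesn't work, a sketch along these lines (paths are placeholders) uses the generic SavedModel loader, as in the Object Detection API tutorials:

import numpy as np
import tensorflow as tf
from PIL import Image

# Load the exported detector (the folder containing saved_model.pb).
detect_fn = tf.saved_model.load("exported_models/first_model/saved_model")

# The Object Detection API expects a uint8 batch of shape [1, H, W, 3].
image = np.array(Image.open("deer.jpg"))
input_tensor = tf.convert_to_tensor(image)[tf.newaxis, ...]

detections = detect_fn(input_tensor)
# Per-detection scores, class ids and normalized [ymin, xmin, ymax, xmax] boxes.
print(detections["detection_scores"][0][:5])
print(detections["detection_classes"][0][:5])
print(detections["detection_boxes"][0][:5])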

INFO:tensorflow:Waiting for new checkpoint at models/faster_rcnn

I used the transfer learning approach to develop a detection model using the faster_rcnn algorithm.
To evaluate my model, I used the following command:
!python model_main_tf2.py --model_dir=models/faster_rcnn_inception_resnet_v2 --pipeline_config_path=models/faster_rcnn_inception_resnet_v2/pipeline.config --checkpoint_dir=models/faster_rcnn_inception_resnet_v2
However, I keep getting the following error/info message:
INFO:tensorflow:Waiting for new checkpoint at models/faster_rcnn_inception_resnet_v2
I0331 23:23:11.699681 140426971481984 checkpoint_utils.py:139] Waiting for new checkpoint at models/faster_rcnn_inception_resnet_v2
I checked that the path to checkpoint_dir is correct. What could be the problem, and how can I resolve it?
Thanks in advance.
You need to run a separate training process to generate new checkpoints. model_main_tf2.py does not do both at once, i.e., it won't train the model and then evaluate it at the end of each epoch.
One way to get what you want is to modify checkpoint_max_to_keep in https://github.com/tensorflow/models/blob/13ec3c1460b928301d208115aed0c94fb47538b7/research/object_detection/model_lib_v2.py#L445
to keep all checkpoints, then evaluate separately. This does not work exactly as you want, but it generates the curves.
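For example (a sketch using the flags from the question), run training and evaluation as two separate processes; the evaluation process picks up each new checkpoint as training writes it:

# Terminal 1: training (writes checkpoints into --model_dir)
python model_main_tf2.py \
  --model_dir=models/faster_rcnn_inception_resnet_v2 \
  --pipeline_config_path=models/faster_rcnn_inception_resnet_v2/pipeline.config

# Terminal 2: evaluation (watches --checkpoint_dir for new checkpoints)
python model_main_tf2.py \
  --model_dir=models/faster_rcnn_inception_resnet_v2 \
  --pipeline_config_path=models/faster_rcnn_inception_resnet_v2/pipeline.config \
  --checkpoint_dir=models/faster_rcnn_inception_resnet_v2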
A similar situation happened to me. I don't know if this is the solution or just a workaround, but it worked for me: I simply exported my model and provided the path to that checkpoint folder, structured like this:
finetune_checkpoint_model_directory
|
\---checkpoint (folder)
    |
    \---checkpoint (file with no extension)
    \---ckpt-1.data-00000-of-00001
    \---ckpt-1.index
and then simply ran model_main_tf2.py for evaluation.
If you trained your model for only a small number of steps, that can be the problem: with too few checkpoints, TensorFlow may not generate an evaluation, so try increasing the number of steps.

create_training_graph() failed when converted MobileFacenet to quantize-aware model with TF-lite

I am trying to quantize MobileFacenet (code from sirius-ai) according to the suggestion, and I think I ran into the same issue as this one.
When I add tf.contrib.quantize.create_training_graph() to the training graph
(train_nets.py line 187, before train_op = train(...), or in train() in utils/common.py line 38, before the gradients),
it did not add quantization-aware ops to the graph to collect the dynamic-range max/min.
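For reference, the usual pattern (a sketch; placement relative to the repo's train() is an assumption) is to rewrite the default graph after the forward pass is built but before the train op is created:

import tensorflow as tf

# Rewrite the default graph with fake-quantization ops; this must run
# after the model's forward pass is built and before the optimizer/train
# op is created, on the same graph.
tf.contrib.quantize.create_training_graph(
    input_graph=tf.get_default_graph(), quant_delay=0)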
I assume that I should see some additional nodes in TensorBoard, but I did not, so I think I did not successfully add the quantization-aware ops to the training graph.
Tracing through TensorFlow, I found that _FindLayersToQuantize() returned nothing.
However, when I add tf.contrib.quantize.create_eval_graph() to rewrite the graph for evaluation, I can see some quantization-aware ops such as act_quant...
Since I did not successfully add the ops to the training graph, I have no weights to load in the eval graph.
Thus I get error messages such as
Key MobileFaceNet/Logits/LinearConv1x1/act_quant/max not found in checkpoint
or
tensorflow.python.framework.errors_impl.FailedPreconditionError: Attempting to use uninitialized value MobileFaceNet/Logits/LinearConv1x1/act_quant/max
Does anyone know how to fix this error, or how to get a quantized MobileFacenet with good accuracy?
Thanks!
Hi,
Unfortunately, the contrib/quantize tool is now deprecated. It won't be able to support newer models, and we are not working on it anymore.
If you are interested in QAT, I would recommend trying the new TF/Keras QAT API. We are actively developing that and providing support for it.
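A minimal sketch of that API, assuming a Keras model and the tensorflow-model-optimization package (the tiny stand-in network is just an example):

import tensorflow as tf
import tensorflow_model_optimization as tfmot

# Any Keras model would do; a small stand-in network is used here.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(112, 112, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10),
])

# Rewrite the model with fake-quantization ops for quantization-aware training.
qat_model = tfmot.quantization.keras.quantize_model(model)
qat_model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"])
# Fine-tune qat_model as usual, then convert it with the TFLite converter.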

Exporting a frozen graph .pb file in Tensorflow 2

I've been trying out the TensorFlow 2 alpha, and I have been trying to freeze a model and export it to a .pb GraphDef file.
In Tensorflow 1 I could do something like this:
# Freeze the graph.
frozen_graph_def = tf.graph_util.convert_variables_to_constants(
    sess,
    sess.graph_def,
    output_node_names)

# Save the frozen graph to a .pb file.
with open('model.pb', 'wb') as f:
    f.write(frozen_graph_def.SerializeToString())
However, this doesn't seem possible any more, as convert_variables_to_constants has been removed and the use of sessions is discouraged.
I looked around and found the freeze_graph utility,
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/tools/freeze_graph.py, which works with SavedModel exports.
Is there still some way to do it within Python, or am I meant to switch to this tool now?
I also faced this problem while migrating from TensorFlow 1.x to TensorFlow 2.0 beta.
It can be solved in two ways:
1. Go to the TensorFlow 2.0 docs, search for each method you have used, and change the syntax line by line; or
2. Use Google's tf_upgrade_v2 script:
tf_upgrade_v2 --infile your_tf1_script_file --outfile converted_tf2_file
Running the command above converts your TensorFlow 1.x script to TensorFlow 2.0 and should resolve the problem.
Alternatively, you can rename the method manually by referring to the documentation:
rename 'tf.graph_util.convert_variables_to_constants' to 'tf.compat.v1.graph_util.convert_variables_to_constants'
The main issue is that a lot of syntax and functions changed in TensorFlow 2.0, so refer to the TensorFlow 2.0 docs or use the tf_upgrade_v2 script.
Not sure if you've seen this Tensorflow 2.0 issue, but this response seems to be a work-around:
https://github.com/tensorflow/tensorflow/issues/29253#issuecomment-530782763
Note: this hasn't worked for my NLP model, but maybe it will work for you. The suggested work-around is to use model.save_weights('weights.h5') while in the TF 2.0 environment, then create a new environment with TF 1.14 and do all the following steps there. Build your model with model = create_model() and use model.load_weights('weights.h5') to load the weights back into it. Then save the entire model with model.save('final_model.h5'). If you manage to succeed with the above steps, follow the rest of the steps in the link to use freeze_graph.
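For reference, the work-around in that issue comment boils down to wrapping the model in a concrete function and freezing it with the (non-public) convert_variables_to_constants_v2 helper; a minimal sketch assuming a Keras model saved as model.h5:

import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import (
    convert_variables_to_constants_v2)

model = tf.keras.models.load_model('model.h5')

# Wrap the model in a tf.function and trace a concrete function.
full_model = tf.function(lambda x: model(x))
concrete_func = full_model.get_concrete_function(
    tf.TensorSpec(model.inputs[0].shape, model.inputs[0].dtype))

# Convert variables to constants and write the frozen GraphDef.
frozen_func = convert_variables_to_constants_v2(concrete_func)
tf.io.write_graph(frozen_func.graph.as_graph_def(),
                  logdir='.', name='model.pb', as_text=False)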

How to use model.ckpt and inception v-3 to predict images?

I'm now facing a problem with Inception v3 and checkpoint data.
I have been working on updating Inception v3's checkpoint data with my own images, following the GitHub page below, and succeeded in making new checkpoint data.
https://github.com/tensorflow/models/tree/master/inception
I thought at first that, with just a small change to the code, I could use those checkpoint data to recognize new images, as in the URL below.
https://www.tensorflow.org/versions/master/tutorials/image_recognition/index.html
I thought "classify.py" or something similar would read the new checkpoint data, and that just by running "python classify.py -image something.png" the program would recognize the image. But it doesn't...
I really need help.
Thanks.
To have an input .pb file, also add during training: tf.train.write_graph(sess.graph.as_graph_def(), 'path_to_folder', 'input_graph.pb', False)
If you have downloaded the Inception v3 source code, add the line above in inception_train.py, under
saver.save(sess, checkpoint_path, global_step=step) (where the checkpoint is saved).
Hope this helps!
To use your checkpoints and model in something like the label_image example, you'll need to run the tensorflow/python/tools/freeze_graph script to convert your variables into constants stored inside the GraphDef. That's how we created the graph file used in that sample code, for example.
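A sketch of that invocation, assuming the input_graph.pb written during training as above; the checkpoint path and the output node name ("softmax" here) are assumptions that depend on your training run and model:

python tensorflow/python/tools/freeze_graph.py \
  --input_graph=path_to_folder/input_graph.pb \
  --input_binary=true \
  --input_checkpoint=path_to_checkpoints/model.ckpt-<global_step> \
  --output_graph=frozen_inception_v3.pb \
  --output_node_names=softmax

Note that --input_binary=true matches the False (non-text) argument passed to tf.train.write_graph above; use --input_binary=false if you wrote a text graph.pbtxt instead.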