TensorFlow checkpoint models getting deleted

I am checkpointing my TensorFlow model every 10 epochs using the following code:
checkpoint_dir = os.path.abspath(os.path.join(out_dir, "checkpoints"))
checkpoint_prefix = os.path.join(checkpoint_dir, "model")
...
if current_step % checkpoint_every == 0:
    path = saver.save(sess, checkpoint_prefix, global_step=current_step)
    print("Saved model checkpoint to {}\n".format(path))
The problem is that, as new checkpoint files are generated, all but the 5 most recent model files are deleted automatically.

This is the expected behavior: the docs for tf.train.Saver say that by default only the 5 most recent checkpoint files are kept. To adjust that, set max_to_keep to the desired value.
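For example (20 here is an arbitrary choice; per the TF 1.x docs, passing 0 or None means no checkpoint files are deleted):
import tensorflow as tf

# Keep the 20 most recent checkpoints instead of the default 5.
saver = tf.train.Saver(max_to_keep=20)
# Or never delete old checkpoint files:
saver = tf.train.Saver(max_to_keep=0)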

Related

Convert tensor_forest model to tflite model with TFLiteConverter?

How can I use tf.lite.TFLiteConverter to get a tflite model from a tensor_forest model, so that it can be used on an Android smartphone?
I found tensor_forest in the contrib directory (link), and there is an example using this model here (link).
I added the following code to save the model:
saver = tf.train.Saver(save_relative_paths=True, max_to_keep=10)
checkpoint_prefix = 'checkpoints/model'
for i in range(1, num_steps + 1):
    # (skip)
    if i % 50 == 0 or i == 1:
        saver.save(sess, checkpoint_prefix, global_step=i)
    # (skip)
It generated 'checkpoint', 'model-2000.data-00000-of-00001', 'model-2000.index', and 'model-2000.meta' in the checkpoints directory. But I cannot convert it with tf.lite.TFLiteConverter, because the converter seems to need a .pb file, or I am missing some parameters.
How can I generate the tflite in the correct way?
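(For reference, a minimal untested sketch of one common route: first freeze the checkpoint into a .pb with the freeze_graph tool, then convert. The file name frozen_model.pb and the tensor names input/probabilities below are assumptions; replace them with whatever your tensor_forest graph actually uses.)
import tensorflow as tf

# 'frozen_model.pb' is assumed to have been produced from the checkpoint
# with freeze_graph; 'input' and 'probabilities' are hypothetical tensor names.
converter = tf.lite.TFLiteConverter.from_frozen_graph(
    'frozen_model.pb',
    input_arrays=['input'],
    output_arrays=['probabilities'])
tflite_model = converter.convert()
with open('model.tflite', 'wb') as f:
    f.write(tflite_model)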

Loading older checkpoint in TensorFlow

I trained some models using TensorFlow r0.12 and saved them. Later I updated to r1.0.1. Some models load without any problems, yet if the model has RNN cells in it, loading fails with Key layer-5/bidirectional_rnn/bw/multi_rnn_cell/cell_1/basic_rnn_cell/biases not found in checkpoint.
Also, if I check the model.index file, I see similar entries there, for example: 5/BiRNN/BW/MultiRNNCell/Cell0/BasicRNNCell/Linear/Bias.
The package with RNN cells is now tf.contrib.rnn (it was tf.nn.rnn_cell in r0.12), so I think some names have been changed.
The question is:
Is there a way to load my model, re-map its tensors, and save it again so that the tensor names are compatible with r1.0?
P.S. I also have model.meta file if that helps.
Thanks!
If someone gets the same problem, here is the solution I used. It is a modified version of the tensor-printing function from inspect_checkpoint.py in tensorflow.python.tools.
import tensorflow as tf
from tensorflow.python import pywrap_tensorflow

def resave_tensors(file_name, rename_map, dry_run=False):
    """
    Updates checkpoint by renaming tensors in it.
    :param file_name: Filename with checkpoint.
    :param rename_map: Map from old names to new ones.
    :param dry_run: If True, just print new tensors.
    """
    renames_count = 0
    reader = pywrap_tensorflow.NewCheckpointReader(file_name)
    var_to_shape_map = reader.get_variable_to_shape_map()
    for key in sorted(var_to_shape_map):
        print("tensor_name: ", key)
        tensor_val = reader.get_tensor(key)
        print('shape: {}'.format(tensor_val.shape))
        if key in rename_map:
            renames_count += 1
            key = rename_map[key]
        # Recreate each tensor as a variable under its (possibly renamed) name.
        tf.Variable(tensor_val, dtype=tensor_val.dtype, name=key)
    saver = tf.train.Saver()
    if not dry_run:
        with tf.Session() as session:
            session.run(tf.global_variables_initializer())
            saver.save(session, file_name)
    print('Renamed vars: {}'.format(renames_count))
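Usage then looks like this (the checkpoint path is hypothetical; the mapping pairs an old r0.12-style name from the .index entries above with its new r1.0-style name):
rename_map = {
    '5/BiRNN/BW/MultiRNNCell/Cell0/BasicRNNCell/Linear/Bias':
        'layer-5/bidirectional_rnn/bw/multi_rnn_cell/cell_0/basic_rnn_cell/biases',
}
# Inspect the renames first without writing anything, then re-run with dry_run=False.
resave_tensors('logdir/model.ckpt-1234', rename_map, dry_run=True)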

How to restore a model by filename in TensorFlow r0.12?

I have run the distributed mnist example:
https://github.com/tensorflow/tensorflow/blob/r0.12/tensorflow/tools/dist_test/python/mnist_replica.py
Note that I have set
saver = tf.train.Saver(max_to_keep=0)
In previous releases, like r0.11, I was able to run over each checkpoint model and evaluate the precision of the model. This gave me a plot of the precision versus global steps (or iterations).
Prior to r0.12, TensorFlow checkpoint models were saved in two files, model.ckpt-1234 and model.ckpt-1234.meta. One could restore a model by passing the model.ckpt-1234 filename, like so: saver.restore(sess, 'model.ckpt-1234').
However, I've noticed that in r0.12 there are now three output files: model.ckpt-1234.data-00000-of-00001, model.ckpt-1234.index, and model.ckpt-1234.meta.
I see that the restore documentation says that a path such as /train/path/model.ckpt should be given to restore instead of a filename. Is there any way to load one checkpoint at a time to evaluate it? I have tried passing the model.ckpt-1234.data-00000-of-00001, model.ckpt-1234.index, and model.ckpt-1234.meta files, but get errors like those below:
W tensorflow/core/util/tensor_slice_reader.cc:95] Could not open logdir/2016-12-08-13-54/model.ckpt-0.data-00000-of-00001: Data loss: not an sstable (bad magic number): perhaps your file is in a different file format and you need to use a different restore operator?
NotFoundError (see above for traceback): Tensor name "hid_b" not found in checkpoint files logdir/2016-12-08-13-54/model.ckpt-0.index
[[Node: save/RestoreV2_1 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_recv_save/Const_0, save/RestoreV2_1/tensor_names, save/RestoreV2_1/shape_and_slices)]]
W tensorflow/core/util/tensor_slice_reader.cc:95] Could not open logdir/2016-12-08-13-54/model.ckpt-0.meta: Data loss: not an sstable (bad magic number): perhaps your file is in a different file format and you need to use a different restore operator?
I'm running on OSX Sierra with TensorFlow r0.12 installed via pip.
Any guidance would be helpful.
Thank you.
I also used TensorFlow r0.12 and I don't think there is any issue with saving and restoring a model. The following is simple code that you can try:
import tensorflow as tf

# Create some variables.
v1 = tf.Variable(tf.random_normal([784, 200], stddev=0.35), name="v1")
v2 = tf.Variable(tf.random_normal([784, 200], stddev=0.35), name="v2")
# Add an op to initialize the variables.
init_op = tf.global_variables_initializer()
# Add ops to save and restore all the variables.
saver = tf.train.Saver()

# Later, launch the model, initialize the variables, do some work, save the
# variables to disk.
with tf.Session() as sess:
    sess.run(init_op)
    # Do some work with the model.
    # Save the variables to disk.
    save_path = saver.save(sess, "/tmp/model.ckpt")
    print("Model saved in file: %s" % save_path)

# Later, launch the model, use the saver to restore variables from disk, and
# do some work with the model.
with tf.Session() as sess:
    # Restore variables from disk.
    saver.restore(sess, "/tmp/model.ckpt")
    print("Model restored.")
    # Do some work with the model.
Although in r0.12 the checkpoint is stored in multiple files, you can restore it by using the common prefix, which is 'model.ckpt' in your case.
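Applied to the checkpoint from your error messages, that means passing only the prefix, without any .data/.index/.meta suffix:
saver.restore(sess, 'logdir/2016-12-08-13-54/model.ckpt-0')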
r0.12 changed the checkpoint format. If you want the old single-file layout, you can save the model in the old V1 format:
import tensorflow as tf
from tensorflow.core.protobuf import saver_pb2
...
saver = tf.train.Saver(write_version=saver_pb2.SaverDef.V1)
saver.save(sess, './model.ckpt', global_step=step)
According to the TensorFlow v0.12.0 RC0 release notes:
New checkpoint format becomes the default in tf.train.Saver. Old V1 checkpoints continue to be readable; controlled by the write_version argument, tf.train.Saver now by default writes out in the new V2 format. It significantly reduces the peak memory required and latency incurred during restore.
See details in my blog.
You can restore the model like this:
saver = tf.train.import_meta_graph('./src/models/20170512-110547/model-20170512-110547.meta')
saver.restore(sess, './src/models/20170512-110547/model-20170512-110547.ckpt-250000')
Where the directory './src/models/20170512-110547/' contains three files:
model-20170512-110547.meta
model-20170512-110547.ckpt-250000.index
model-20170512-110547.ckpt-250000.data-00000-of-00001
And if one directory contains more than one checkpoint, e.g. the path ./20170807-231648/ contains:
checkpoint
model-20170807-231648-0.data-00000-of-00001
model-20170807-231648-0.index
model-20170807-231648-0.meta
model-20170807-231648-100000.data-00000-of-00001
model-20170807-231648-100000.index
model-20170807-231648-100000.meta
You can see that there are two checkpoints, so you can use this to restore the latest one:
saver = tf.train.import_meta_graph('/home/tools/Tools/raoqiang/facenet/models/facenet/20170807-231648/model-20170807-231648-0.meta')
saver.restore(sess,tf.train.latest_checkpoint('/home/tools/Tools/raoqiang/facenet/models/facenet/20170807-231648/'))
OK, I can answer my own question. What I found was that my Python script was adding an extra '/' to my path, so I was executing:
saver.restore(sess,'/path/to/train//model.ckpt-1234')
Somehow that was causing a problem with TensorFlow.
When I removed it and called:
saver.restore(sess, '/path/to/train/model.ckpt-1234')
it worked as expected.
Use only model.ckpt-1234; at least it works for me.
I'm new to TF and met the same issue. After reading Yuan Ma's comments, I copied the '.index' file into the same 'train\ckpt' folder, together with the '.data-00000-of-00001' file. Then it worked!
So the .index file is needed as well when restoring a model.
I used TF r0.12 on Win7.

Value discrepancy between saved and restored model in TensorFlow

During training I am attempting to save the variables in a checkpoint file. I use the following code:
if step % 1000 == 0 or (step + 1) == FLAGS.max_steps:
    checkpoint_path = os.path.join(FLAGS.train_dir, 'model.ckpt')
    saver4 = tf.train.Saver(tf.all_variables())
    saver.save(sess, checkpoint_path, global_step=step)
    a1 = tf.get_default_graph().get_tensor_by_name("local3/weights:0")
    a1eval = a1.eval()
    saver4.save(sess, checkpoint_path + "_4", global_step=step)
However, the value of local3/weights:0 evaluated as a1eval and the value of the same variable evaluated after restoring the checkpoint are different (I use two savers for testing; both of them restore the wrong value).
What could be going on?

How to save a trained TensorFlow model for later use in an application?

I am a bit of a beginner with TensorFlow, so please excuse me if this is a stupid question and the answer is obvious.
I have created a TensorFlow graph where, starting with placeholders for X and y, I have optimized some tensors which represent my model. Part of the graph computes a vector of predictions, e.g. for linear regression something like
y_model = tf.add(tf.mul(X,w),d)
y_vals = sess.run(y_model,feed_dict={....})
After training has been completed I have acceptable values for w and d and now I want to save my model for later. Then, in a different python session I want to restore the model so that I can again run
## Starting brand new python session
import tensorflow as tf
## somehow restore the graph and the values here: how????
## so that I can run this:
y_vals = sess.run(y_model,feed_dict={....})
for some different data and get back the y-values.
I want this to work in a way where the graph for calculating the y-values from the placeholders is also stored and restored; as long as the placeholders get fed the correct data, this should work transparently, without the user (the one who applies the model) needing to know what the graph looks like.
As far as I understand, tf.train.Saver().save(..) only saves the variables, but I also want to save the graph. I think tf.train.export_meta_graph could be relevant here, but I do not understand how to use it correctly; the documentation is a bit cryptic to me, and the examples do not even use export_meta_graph anywhere.
From the docs, try this:
# Create some variables.
v1 = tf.Variable(..., name="v1")
v2 = tf.Variable(..., name="v2")
...
# Add an op to initialize the variables.
init_op = tf.global_variables_initializer()
# Add ops to save and restore all the variables.
saver = tf.train.Saver()

# Later, launch the model, initialize the variables, do some work, save the
# variables to disk.
with tf.Session() as sess:
    sess.run(init_op)
    # Do some work with the model.
    ..
    # Save the variables to disk.
    save_path = saver.save(sess, "/tmp/model.ckpt")
    print("Model saved in file: %s" % save_path)
You can specify the path.
And if you want to restore the model, try:
with tf.Session() as sess:
    saver = tf.train.import_meta_graph('/tmp/model.ckpt.meta')
    saver.restore(sess, "/tmp/model.ckpt")
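Once the meta graph has been imported, you can look up your prediction op by name and run it on new data inside the with block above. A minimal sketch, assuming the placeholder and the prediction op were given explicit names when the model was built (X, y_model, and new_data here are hypothetical):
    graph = tf.get_default_graph()
    X = graph.get_tensor_by_name("X:0")              # assumed placeholder name
    y_model = graph.get_tensor_by_name("y_model:0")  # assumed prediction op name
    y_vals = sess.run(y_model, feed_dict={X: new_data})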
Saving a Graph in TensorFlow:
import tensorflow as tf

# Create some placeholder variables
x_pl = tf.placeholder(tf.float32, name="x_pl")
y_pl = tf.placeholder(tf.float32, name="y_pl")
# Add some operation to the Graph
add_op = tf.add(x_pl, y_pl, name="add_op")
# A variable, so that the Saver has something to write to the checkpoint
counter = tf.Variable(0.0, name="counter")

no_steps = 10
with tf.Session() as sess:
    # Add variable initializer
    init = tf.global_variables_initializer()
    # Add ops to save variables to checkpoints
    # Unless var_list is specified, Saver will save ALL named variables
    # in the Graph
    # Optionally set maximum of 3 latest models to be saved
    saver = tf.train.Saver(max_to_keep=3)
    # Run variable initializer
    sess.run(init)
    for i in range(no_steps):
        # Feed placeholders with some data and run operation
        sess.run(add_op, feed_dict={x_pl: i + 1, y_pl: i + 5})
        saver.save(sess, "path/to/checkpoint/model.ckpt", global_step=i)
This will save the following files:
1) Meta Graph
.meta file:
MetaGraphDef protocol buffer representation of the MetaGraph, which saves the complete TF Graph structure, i.e. the GraphDef that describes the dataflow and all metadata associated with it, e.g. all variables, operations, collections, etc.
Importing the graph structure will recreate the Graph and all its variables; the corresponding values for these variables can then be restored from the checkpoint file.
If you don't want to restore the Graph, you can instead reconstruct all of the information in the MetaGraphDef by re-executing the Python code that builds the model. N.B. you must recreate the EXACT SAME variables first before restoring their values from the checkpoint (see the sketch after this list).
Since the Meta Graph file is not always needed, you can switch off writing it in saver.save using write_meta_graph=False.
2) Checkpoint files
.data file:
binary file containing the VALUES of all saved variables outlined in tf.train.Saver() (default is all variables)
.index file:
immutable table describing all tensors and their metadata
checkpoint file:
keeps a record of the latest checkpoint files saved
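A minimal sketch of that no-meta-graph route, matching the toy model above (the variable must be re-created with exactly the same name and shape; step 9 is just the last one written by the loop above):
import tensorflow as tf

# Re-create the EXACT SAME variables as in the saved model...
counter = tf.Variable(0.0, name="counter")
# ...then let a fresh Saver fill in their saved values.
saver = tf.train.Saver()
with tf.Session() as sess:
    saver.restore(sess, "path/to/checkpoint/model.ckpt-9")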
Restoring a Graph in TensorFlow:
import tensorflow as tf

latest_checkpoint = tf.train.latest_checkpoint("path/to/checkpoint")

# Load latest checkpoint Graph via import_meta_graph:
# - construct protocol buffer from file content
# - add all nodes to current graph and recreate collections
# - return Saver
saver = tf.train.import_meta_graph(latest_checkpoint + '.meta')

# Start session
with tf.Session() as sess:
    # Restore previously trained variables from disk
    print("Restoring Model: {}".format("path/to/checkpoint"))
    saver.restore(sess, latest_checkpoint)

    # Retrieve protobuf graph definition
    graph = tf.get_default_graph()
    print("Restored Operations from MetaGraph:")
    for op in graph.get_operations():
        print(op.name)

    # Access restored placeholder variables
    x_pl = graph.get_tensor_by_name("x_pl:0")
    y_pl = graph.get_tensor_by_name("y_pl:0")
    # Access restored operation to re-run
    add_op = graph.get_tensor_by_name("add_op:0")
This is just a quick example with the basics, for a working implementation see here.
In order to save the graph, you need to freeze the graph.
Here is the python script for freezing the graph : https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/tools/freeze_graph.py
Here is a code snippet for freezing a graph:
from tensorflow.python.tools import freeze_graph

freeze_graph.freeze_graph(input_graph_path, input_saver_def_path,
                          input_binary, checkpoint_path, output_node_names,
                          restore_op_name, filename_tensor_name,
                          output_frozen_graph_name, True, "")
where output_node_names corresponds to the name of the output tensor, e.g.:
output = tf.nn.softmax(outer_layer_name, name="output")
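Once frozen, the .pb file can be loaded back for inference. A minimal sketch, assuming the frozen file is named frozen_model.pb and that the graph has an input tensor named input:0 (an assumption; output:0 matches name="output" above):
import tensorflow as tf

# Read the frozen GraphDef from disk
with tf.gfile.GFile("frozen_model.pb", "rb") as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

# Import it into a fresh graph and run the output node
with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name="")
    input_t = graph.get_tensor_by_name("input:0")   # assumed input tensor name
    output_t = graph.get_tensor_by_name("output:0")
    with tf.Session(graph=graph) as sess:
        preds = sess.run(output_t, feed_dict={input_t: batch})  # batch: your input data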