I am trying to freeze a TensorFlow model, and I have many files like these:
model.ckp-6517.data-00000-of-00001
model.ckp-6517.index
model.ckp-6517.meta
model.ckp-8145.data-00000-of-00001
model.ckp-8145.index
model.ckp-8145.meta
How can I freeze a model when there are multiple sets of checkpoint files like these?
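For context, each trio of files (.data-*, .index, .meta) sharing a prefix is a single checkpoint, addressed by that common prefix. A minimal freezing sketch for the latest one, assuming a TF1-style graph and a placeholder output node name ('output/Softmax' below is an assumption, not taken from the model above):

import tensorflow as tf

ckpt_prefix = 'model.ckp-8145'   # the common prefix; .data/.index/.meta together form one checkpoint
output_node = 'output/Softmax'   # placeholder: the name of your graph's output op

# Rebuild the graph from the .meta file, restore the weights, then fold them into constants
saver = tf.compat.v1.train.import_meta_graph(ckpt_prefix + '.meta')
with tf.compat.v1.Session() as sess:
    saver.restore(sess, ckpt_prefix)
    frozen = tf.compat.v1.graph_util.convert_variables_to_constants(
        sess, sess.graph.as_graph_def(), [output_node])
    with tf.io.gfile.GFile('frozen_model.pb', 'wb') as f:
        f.write(frozen.SerializeToString())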
What I tried so far:
1. pre-train a model with an unsupervised method in PyTorch and save a checkpoint file (using torch.save(state, filename))
2. convert the checkpoint file to ONNX format (using torch.onnx.export)
3. convert the ONNX model to a TensorFlow SavedModel (using onnx-tf); a rough sketch of steps 2 and 3 follows this list
4. load the variables in the saved_model folder as a checkpoint in my TensorFlow training code (using tf.train.init_from_checkpoint) for fine-tuning
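For concreteness, steps 2 and 3 look roughly like this (the model class, checkpoint path, and input shape below are placeholders for my actual setup):

import torch
import onnx
from onnx_tf.backend import prepare

# Placeholder: rebuild the PyTorch model and load the saved state dict
net = MyModel()
net.load_state_dict(torch.load('pretrain_checkpoint.pth')['state_dict'])
net.eval()

dummy_input = torch.randn(1, 3, 224, 224)          # placeholder input shape
torch.onnx.export(net, dummy_input, 'model.onnx')   # step 2: PyTorch -> ONNX
tf_rep = prepare(onnx.load('model.onnx'))            # step 3: ONNX -> TensorFlow
tf_rep.export_graph('saved_model')                   # writes a SavedModel directory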
But now I am getting stuck at step 4, because I notice that the variables.index and variables.data#1 files are basically empty (probably because of this: https://github.com/onnx/onnx-tensorflow/issues/994)
Also, specifically, if I try to use tf.train.NewCheckpointReader to load the files and call ckpt_reader.get_variable_to_shape_map(), _CHECKPOINTABLE_OBJECT_GRAPH is empty
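Concretely, the inspection I am describing is something like this (the path assumes the standard SavedModel layout, where the variables checkpoint prefix is variables/variables):

import tensorflow as tf

# List every variable recorded in the SavedModel's checkpoint
reader = tf.train.load_checkpoint('saved_model/variables/variables')
for name, shape in sorted(reader.get_variable_to_shape_map().items()):
    print(name, shape)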
Any suggestions or relevant experience would be appreciated :-)
Do we need these files? The TensorFlow docs don't say anything about them.
The model.tflite file is the pretrained model in .tflite format. So if you want to use the model out of the box, you can use this file.
The label_map.txt file maps the output of your network to human-readable results. In other words, both files are needed if you want to use the model out of the box, but neither is needed for re-training.
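If you want a starting point, a minimal sketch of using the two files together, assuming a single-input classification model and the file names above:

import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path='model.tflite')
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

with open('label_map.txt') as f:                    # one label per line, index = class id
    labels = [line.strip() for line in f]

image = np.zeros(inp['shape'], dtype=inp['dtype'])  # placeholder for a real, preprocessed input
interpreter.set_tensor(inp['index'], image)
interpreter.invoke()
scores = interpreter.get_tensor(out['index'])
print(labels[int(np.argmax(scores))])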
I am new to TensorFlow. I have trained a CNN model and obtained checkpoint files.
Then I froze the checkpoint files into a .pb proto, following this guide: https://blog.metaflow.fr/tensorflow-how-to-freeze-a-model-and-serve-it-with-a-python-api-d4f3596b3adc#.nxb8t2ylh
Next, I load the graph from the .pb file and predict with it. But the result is different on every prediction (for example, the same picture may get scores of 1, 1.04, 1.2, or 1.3). Why? Did I forget to save some variables?
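For reference, loading a frozen graph and predicting with it looks roughly like this (a minimal sketch; 'input:0' and 'output:0' are placeholder tensor names, and the input array is a dummy). A correctly frozen graph fed the same image should return the same score every time; one common source of varying scores is a stochastic op such as dropout left enabled at inference time.

import numpy as np
import tensorflow as tf

# Read the frozen GraphDef and import it into a fresh graph
with tf.io.gfile.GFile('frozen_model.pb', 'rb') as f:
    graph_def = tf.compat.v1.GraphDef()
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name='')

image = np.zeros([1, 224, 224, 3], dtype=np.float32)  # placeholder for a real, preprocessed image

with tf.compat.v1.Session(graph=graph) as sess:
    x = graph.get_tensor_by_name('input:0')    # placeholder tensor name
    y = graph.get_tensor_by_name('output:0')   # placeholder tensor name
    score = sess.run(y, feed_dict={x: image})  # same image -> same score on every run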
I have trained a sequence-to-sequence model in TensorFlow for about 100,000 steps without specifying the summary operations required for TensorBoard.
I have checkpoint files saved every 1000 steps. Is there any way to visualize the data without having to retrain the entire model, i.e. extract summaries from the checkpoint files to feed to TensorBoard?
I tried running TensorBoard directly on the checkpoint files, which, unsurprisingly, reported that no scalar summaries were found. I also tried inserting the summary operations into the code, but that would require me to completely retrain the model for the summaries to be created.
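The scalar summaries (loss, accuracy, ...) were never written, so they presumably cannot be recovered; but the weights stored in each checkpoint can still be read back and written out as histograms after the fact. A rough sketch, assuming the checkpoints live in train_dir, the prefixes end in "-<step>", and a TF 2.x-style summary writer:

import tensorflow as tf

writer = tf.summary.create_file_writer('tb_logs')   # point TensorBoard at this directory

with writer.as_default():
    for index_file in sorted(tf.io.gfile.glob('train_dir/*.index')):
        prefix = index_file[:-len('.index')]         # checkpoint prefix
        step = int(prefix.rsplit('-', 1)[-1])        # assumes "<name>-<step>" naming
        reader = tf.train.load_checkpoint(prefix)
        for name, _ in tf.train.list_variables(prefix):
            tf.summary.histogram(name, reader.get_tensor(name), step=step)
writer.flush()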
Is it possible to load only specific layers (the convolutional layers) from one checkpoint file?
I've trained some CNNs fully supervised and saved my progress (I'm doing object localization). To do auto-labelling, I thought of building a weakly-supervised CNN out of my current model, but since the weakly-supervised version has different fully-connected layers, I would like to select only the convolutional filters from my TensorFlow checkpoint file.
Of course I could manually save the weights of the corresponding layers, but since they are already included in TensorFlow's checkpoint file, I would like to extract them from there, so that I keep a single storage file.
TensorFlow 2.1 has many different public facilities for loading checkpoints (model.save, tf.train.Checkpoint, tf.saved_model, etc.), but to the best of my knowledge none of them exposes a filtering API. So let me suggest a snippet for hard cases that uses tooling from TF 2.1's internal development tests.
import tensorflow as tf
from tensorflow.python.training.checkpoint_utils import load_checkpoint, list_variables

checkpoint_filename = '/path/to/our/weird/checkpoint.ckpt'
model = tf.keras.Model( ... )   # TF2.0 Model to initialize with the above checkpoint
variables_to_load = [ ... ]     # List of model weight names to update.

reader = load_checkpoint(checkpoint_filename)
for w in model.weights:
    name = w.name.split(':')[0]  # See (b/29227106)
    if name in variables_to_load:
        print(f"Updating {name}")
        w.assign(reader.get_tensor(
            # (Optional) Handle variable renaming
            {'/var_name1/in/model': '/var_name1/in/checkpoint',
             '/var_name2/in/model': '/var_name2/in/checkpoint',
             # ... and so on
            }.get(name, name)))
Note: model.weights and list_variables may help you inspect the variables in the Model and in the checkpoint, respectively.
Note also that this method will not restore the model's optimizer state.