Number of layers in a resnet model - tensorflow

I have a checkpoint file from a ResNet model:
resnet_v1_101.ckpt
Now I need to find the number of layers in this checkpoint file, along with detailed information about the layers.
How can I do that in TensorFlow?

A checkpoint file doesn't contain graph information, so you cannot get detailed information about the layers from the checkpoint alone. However, if you saved your model with tf.train.Saver().save(...) and didn't specify write_meta_graph=False, you should have a file like "resnet_v1_101.ckpt.meta" next to it. That is the MetaGraphDef file. You can import it like this:
tf.train.import_meta_graph('./resnet_v1_101.ckpt.meta')
You can now visualize the graph in TensorBoard, or walk it programmatically to get information about the layers.
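For example, a minimal TF1-style sketch of walking the imported graph; tf.train.list_variables additionally shows which variables (and shapes) the checkpoint itself stores:

import tensorflow as tf

tf.train.import_meta_graph('./resnet_v1_101.ckpt.meta')
graph = tf.get_default_graph()

# Enumerate every operation in the imported graph
for op in graph.get_operations():
    print(op.name, op.type)

# List the variables (name and shape) stored in the checkpoint itself
for name, shape in tf.train.list_variables('./resnet_v1_101.ckpt'):
    print(name, shape)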

Related

Is it possible to fine-tune a TensorFlow model using a pre-trained model from PyTorch?

What I tried so far (steps 2 and 3 are sketched after this list):
1. Pre-train a model using an unsupervised method in PyTorch, and save off the checkpoint file (using torch.save(state, filename))
2. Convert the checkpoint file to ONNX format (using torch.onnx.export)
3. Convert the ONNX model to a TensorFlow SavedModel (using onnx-tf)
4. Try to load the variables in the saved_model folder as a checkpoint in my TensorFlow training code (using tf.train.init_from_checkpoint) for fine-tuning
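Steps 2 and 3 look roughly like this (a sketch only; the input shape and file names are placeholders, not from the original post):

import torch
import onnx
from onnx_tf.backend import prepare

# Step 2: export the pre-trained PyTorch model ('model', from step 1) to ONNX.
# The dummy input must match the model's expected input shape (hypothetical here).
dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy_input, 'model.onnx')

# Step 3: convert the ONNX model to a TensorFlow SavedModel with onnx-tf.
onnx_model = onnx.load('model.onnx')
tf_rep = prepare(onnx_model)
tf_rep.export_graph('saved_model')  # writes saved_model/ including variables/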
But now I am getting stuck at step 4, because I notice that the variables.index and variables.data#1 files are basically empty (probably because of this: https://github.com/onnx/onnx-tensorflow/issues/994)
Also, if I use tf.train.NewCheckpointReader to load the files and call ckpt_reader.get_variable_to_shape_map(), _CHECKPOINTABLE_OBJECT_GRAPH is empty.
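For reference, the checkpoint inspection described above looks roughly like this (the variables prefix path is an assumption based on the standard SavedModel layout; tf.train.load_checkpoint is the public TF2 equivalent of tf.train.NewCheckpointReader):

import tensorflow as tf

# A SavedModel stores its checkpoint under <dir>/variables/variables.*
reader = tf.train.load_checkpoint('./saved_model/variables/variables')
shape_map = reader.get_variable_to_shape_map()
for name in sorted(shape_map):
    print(name, shape_map[name])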
Any suggestions/experience are appreciated :-)

Tensorflow remove layers from pretrained model

Is there a way to load a pretrained model in Tensorflow and remove the top layers of the network? I am looking at Tensorflow release r1.10.
The only documentation I could find is for tf.keras.Sequential.pop:
https://www.tensorflow.org/versions/r1.10/api_docs/python/tf/keras/Sequential#pop
I want to manually prune a pretrained network by removing a bunch of the top convolutional layers and adding a custom fully convolutional layer.
EDIT:
The model is ssd_mobilenet_v1_coco, downloaded from the Tensorflow Model Zoo. I have access to both the frozen_inference_graph.pb model file and the checkpoint file.
I do not have access to the Python code that was used to construct the model.
Thanks.
From inspecting the code, SSDMobileNetV1FeatureExtractor.extract_features delegates to mobilenet_v1 from research/slim:
from nets import mobilenet_v1  # nets will have to be on your PYTHONPATH

with tf.variable_scope('MobilenetV1',
                       reuse=self._reuse_weights) as scope:
  with slim.arg_scope(
      mobilenet_v1.mobilenet_v1_arg_scope(
          is_training=None, regularize_depthwise=True)):
    with (slim.arg_scope(self._conv_hyperparams_fn())
          if self._override_base_feature_extractor_hyperparams
          else context_manager.IdentityContextManager()):
      _, image_features = mobilenet_v1.mobilenet_v1_base(
          ops.pad_to_multiple(preprocessed_inputs, self._pad_to_multiple),
          final_endpoint='Conv2d_13_pointwise',
          min_depth=self._min_depth,
          depth_multiplier=self._depth_multiplier,
          use_explicit_padding=self._use_explicit_padding,
          scope=scope)
The mobilenet_v1_base function takes a final_endpoint argument. Rather than prune the constructed graph, just construct the graph up until the endpoint you want.
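For instance, a minimal sketch of building a truncated backbone plus a custom head; the endpoint name and head parameters are arbitrary examples, not taken from the answer:

import tensorflow as tf
import tensorflow.contrib.slim as slim
from nets import mobilenet_v1  # research/slim on your PYTHONPATH

images = tf.placeholder(tf.float32, [None, 224, 224, 3])

# Build the backbone only up to the layer you want to keep.
with slim.arg_scope(mobilenet_v1.mobilenet_v1_arg_scope()):
    _, endpoints = mobilenet_v1.mobilenet_v1_base(
        images, final_endpoint='Conv2d_11_pointwise')

# Attach a custom fully convolutional head to the truncated backbone.
net = slim.conv2d(endpoints['Conv2d_11_pointwise'], 256, [3, 3],
                  scope='CustomHead')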

[Tensorflow] Different graph architecture in COCO pre-trained model and re-trained model

I have followed this doc:
https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/running_locally.md
to retrain the model from ssd_mobilenet_v1_coco_2017_11_17.tar.gz.
I only modified the pipeline config (a sketch follows this list) to:
- set fine-tuning to true
- set the model path
- set the training data path to the actual path of my tfrecord
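A hypothetical excerpt of those pipeline.config edits (field names follow the Object Detection API's train.proto of that era; all paths are placeholders):

train_config {
  fine_tune_checkpoint: "/path/to/ssd_mobilenet_v1_coco_2017_11_17/model.ckpt"
  from_detection_checkpoint: true
}
train_input_reader {
  tf_record_input_reader {
    input_path: "/path/to/train.record"
  }
  label_map_path: "/path/to/label_map.pbtxt"
}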
Then I used the export script from:
https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/exporting_models.md
Finally, I got a .pb file for my own data,
but I found that the retrained model does not have the same architecture as the one in ssd_mobilenet_v1_coco_2017_11_17.tar.gz.
Here are the pictures I captured from TensorBoard.
The retrained one, from my own data and pipeline config:
https://i.stack.imgur.com/kegcT.jpg
The original one, from the .pb file within ssd_mobilenet_v1_coco_2017_11_17.tar.gz:
https://i.stack.imgur.com/IAGi3.jpg
According to the TensorBoard pictures, there are two input tensors in BoxPredictor in my retrained model, but three in the original.
Could anyone help me solve this problem?
Thanks...
PS: I am using TensorFlow 1.4.0 (GPU)

Can I create a TensorBoard visualization from pre-trained unsummarized checkpoint files?

I have implemented a sequence-to-sequence model in TensorFlow and trained it for about 100,000 steps without specifying the summary operations required for TensorBoard.
I have checkpoint files for every 1,000 steps. Is there any way to visualize the data without having to retrain the entire model, i.e. to extract summaries from the checkpoint files to feed to TensorBoard?
I tried running TensorBoard directly on the checkpoint files, which unsurprisingly said no scalar summaries were found. I also tried inserting the summary operations in the code, but that requires me to completely retrain the model for the summaries to be created.
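One possible workaround (an assumption on my part, not something from the original post): checkpoints store only variable values, but you can read those back checkpoint by checkpoint and write summaries retroactively. A TF1-style sketch, where the checkpoint prefix and the variable name are hypothetical:

import tensorflow as tf

writer = tf.summary.FileWriter('./retro_logs')
for step in range(1000, 100001, 1000):
    reader = tf.train.NewCheckpointReader('./model.ckpt-%d' % step)
    # 'decoder/output_projection/kernel' is a hypothetical variable name
    weights = reader.get_tensor('decoder/output_projection/kernel')
    summary = tf.Summary(value=[tf.Summary.Value(
        tag='output_projection_mean', simple_value=float(weights.mean()))])
    writer.add_summary(summary, global_step=step)
writer.close()

Note that this recovers statistics of the saved weights only; losses and metrics that were never written out cannot be reconstructed.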

TensorFlow: load checkpoint, but only parts of it (convolutional layers)

Is it possible to load only specific layers (the convolutional layers) out of one checkpoint file?
I've trained some CNNs fully supervised and saved my progress (I'm doing object localization). To do auto-labelling I thought of building a weakly-supervised CNN out of my current model... but since the weakly-supervised version has different fully-connected layers, I would like to select only the convolutional filters from my TensorFlow checkpoint file.
Of course I could manually save the weights of the corresponding layers, but since they're already included in TensorFlow's checkpoint file, I would like to extract them from there, in order to have a single storage file.
TensorFlow 2.1 has many different public facilities for loading checkpoints (model.save, Checkpoint, saved_model, etc.), but to the best of my knowledge none of them has a filtering API. So let me suggest a snippet for hard cases which uses tooling from TF 2.1's internal development tests.
from tensorflow.python.training.checkpoint_utils import load_checkpoint, list_variables
import tensorflow as tf

checkpoint_filename = '/path/to/our/weird/checkpoint.ckpt'
model = tf.keras.Model( ... )  # TF2.0 Model to initialize with the above checkpoint
variables_to_load = [ ... ]    # List of model weight names to update.

reader = load_checkpoint(checkpoint_filename)
for w in model.weights:
    name = w.name.split(':')[0]  # See (b/29227106)
    if name in variables_to_load:
        print(f"Updating {name}")
        w.assign(reader.get_tensor(
            # (Optional) Handle variable renaming
            {'/var_name1/in/model': '/var_name1/in/checkpoint',
             '/var_name2/in/model': '/var_name2/in/checkpoint',
             # ... and so on
             }.get(name, name)))
Note: model.weights and list_variables may help to inspect the variables in the model and in the checkpoint.
Note also that this method will not restore the model's optimizer state.
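For instance, to see what the checkpoint contains before filling in variables_to_load:

# Inspect the checkpoint's variable names and shapes
for name, shape in list_variables(checkpoint_filename):
    print(name, shape)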