model saver meta file is huge, can I configure tensorflow to not write it? - tensorflow

I am doing transfer learning in TensorFlow with VGG16. I am only training one small layer on top of the 500MB of weights that I got from the numpy npz file on that website. When I save my model, I specify just the weights I am training, so the model file is small - however the meta file is 500MB. From reading the Stack Overflow question what-is-the-tensorflow-checkpoint-meta-file, it sounds safe to remove the meta files, but can I configure TensorFlow to not write them?

You could try tf.train.Saver.save with the optional write_meta_graph=False parameter.
https://www.tensorflow.org/versions/r0.11/api_docs/python/state_ops.html#Saver
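A minimal sketch of that, using the TF1-style API through tf.compat.v1 (the tiny variable here is a made-up stand-in for the one small layer being trained):

```python
import os
import tempfile

import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

# Hypothetical stand-in for the small trainable layer on top of VGG16.
w = tf.get_variable("top_layer_w", shape=[2, 2])

# Save only the variables you trained, and skip writing the .meta graph file.
saver = tf.train.Saver(var_list=[w])

ckpt_dir = tempfile.mkdtemp()
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    path = saver.save(sess, os.path.join(ckpt_dir, "model.ckpt"),
                      write_meta_graph=False)

# With write_meta_graph=False there is no 500MB model.ckpt.meta on disk.
meta_written = os.path.exists(path + ".meta")
```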

Related

Saving and loading tensorflow model as .npy vs. Checkpoints

I am working with this github repo. They ask us to download and save a pre-trained VGG16 model in the nets folder. For this I saved the checkpoints of a pretrained VGG16. Then in the code I noticed they are loading the model this way:
if vgg16_npy_path is None:
    vgg16_npy_path = './nets/vgg16.npy'
self.data_dict = np.load(vgg16_npy_path, encoding='latin1').item()
I read a bit about how to save the weights of a TensorFlow model as .npy, but it seemed overly complicated. Is it possible to just save the checkpoints and load those instead of the numpy format? Does it affect anything later?

How to load darknet YOLOv3 model from .cfg file and weights from .weights file, and save the model with the weights to a .h5 file?

I have downloaded the .weights and .cfg files for YOLOv3 from darknet (link: https://pjreddie.com/darknet/yolo/). I want to create a model and assign the weights from these files, and then save the model with the assigned weights to a .h5 file so that I can load it into Keras using keras.models.load_model().
Please help.
You should check the instructions given in this repository. This is basically the keras implementation of YOLOv3 (Tensorflow backend).
Download YOLOv3 weights from YOLO website.
Convert the Darknet YOLO model to a Keras model.
python convert.py yolov3.cfg yolov3.weights model_data/yolo.h5
As you have already downloaded the weights and configuration file, you can skip the first step. Download the convert.py script from the repository and simply run the above command.
Note: the above command assumes that yolov3.cfg, yolov3.weights and the model_data folder are in the same directory as convert.py.
For people getting an error from this: try changing the layers import in convert.py.
Not sure if it was a version problem, but changing the way convert.py loaded keras.layers resolved all errors for me.

Distinguish types of on-disk models

Tensorflow has several types of model formats:
1. TensorFlow SavedModel
2. Frozen Model
3. Session Bundle
4. TensorFlow Hub module
How can you distinguish between them on-disk? (to later use with tensorflowjs-converter)
And how is each model created?
Yup, there are a LOT of different model types, and they all have good reasons. I'm not going to claim that I have perfect clarity of each, but here's what I know (I think I know).
The .pb file: PB stands for protobuf, or Protocol Buffer. This is the model structure, generally without the trained weights, stored in a binary format.
The .pbtxt file: The non-binary, human-readable text version of the .pb file.
Protobuf files that aren't frozen will need a checkpoint (.ckpt) file, too. The checkpoint file is the missing set of weights that the .pb needs.
The .h5 file: The model + weights from a Keras save
The .tflite file would be a TensorflowLite model
Frozen Model: A frozen model combines the .pb with the weights file, so you don't have to manage two files, and there is no .ckpt. Usually this means adding the word frozen to the filename. I'm sure this can be inferred when loading the file, but on disk it's a bit more on the honor system. Freezing also strips out extraneous graph info; it's basically the "production ready" version of the model.
Session Bundle: A directory format. It is no longer used, and rare.
Tensorflow Hub Module: These are pre-existing popular models that are very likely already exported to TFJS and don't require you to manually convert them. I assume they are supported for Google's benefit, more so than ours. But it's nice to know if you use hub, you can always convert it.
A model exported to several formats at once shows up on disk as a grouping of files like the attached image. From there, you can see quite a few that you could turn into TFJS.

where is the graph file of open images pretrained model?

I need the graph file (.pb) of the pretrained Open Images model. After downloading the model there are only 3 files:
labelmap.txt
model.ckpt
model.ckpt.meta
Where can I find the .pb file?
The model.ckpt.meta contains the graph structure; the weights themselves live in model.ckpt. You have to import it using the tf.train.import_meta_graph functionality and then restore from the checkpoint.
Then you should be able to export the graph def in .pb format, if you really need it.
However, I was unable to load the ckpt.meta, due to a Google-specific op defined in the model. Please let me know if you can figure that out.
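The round trip described above can be sketched like this, using the TF1-style API through tf.compat.v1. The toy variable is just a stand-in for the real Open Images graph (which, as noted, may still fail on a Google-specific op):

```python
import os
import tempfile

import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

workdir = tempfile.mkdtemp()
ckpt = os.path.join(workdir, "model.ckpt")

# Stand-in graph; in the question, model.ckpt/.meta already exist on disk.
v = tf.get_variable("v", shape=[1])
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    tf.train.Saver().save(sess, ckpt)   # also writes model.ckpt.meta

# Fresh graph: recover structure from the .meta, weights from the .ckpt,
# then export the graph def as a binary .pb file.
tf.reset_default_graph()
saver = tf.train.import_meta_graph(ckpt + ".meta")
with tf.Session() as sess:
    saver.restore(sess, ckpt)
    tf.train.write_graph(sess.graph.as_graph_def(), workdir,
                         "graph.pb", as_text=False)

pb_written = os.path.exists(os.path.join(workdir, "graph.pb"))
```

Note the exported .pb still references variables; to get a standalone file you would additionally freeze it (convert variables to constants).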

What's the diff between tf.import_graph_def and tf.train.import_meta_graph

When training, auto-saved meta graph files appear in the model folder.
So what's the difference between a graph and a meta graph?
If I want to load a model and do inference without building the graph from scratch,
is using tf.train.import_meta_graph fine?
Well, I found the answer at
https://www.tensorflow.org/versions/r0.9/how_tos/meta_graph/index.html
and
How to load several identical models from save files into one session in Tensorflow
And you can also refer to this:
https://github.com/tensorflow/tensorflow/issues/4658
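For what it's worth, the practical difference can be sketched: tf.import_graph_def takes a bare GraphDef (structure only - no variables, collections, or Saver), while tf.train.import_meta_graph reads a .meta MetaGraphDef that also carries the Saver, so restoring weights for inference is a single restore() call. A toy sketch of the GraphDef side, using the tf.compat.v1 API:

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

# Build a tiny graph and take its bare GraphDef (what a .pb file holds).
g = tf.Graph()
with g.as_default():
    tf.constant(42.0, name="answer")
graph_def = g.as_graph_def()

# tf.import_graph_def restores structure only - no variables, no Saver.
with tf.Graph().as_default() as g2:
    tf.import_graph_def(graph_def, name="")
    t = g2.get_tensor_by_name("answer:0")
    with tf.Session() as sess:
        value = float(sess.run(t))

# tf.train.import_meta_graph("model.ckpt.meta"), by contrast, also brings
# back collections and a Saver, so saver.restore(sess, "model.ckpt")
# recovers the trained weights without rebuilding the graph by hand.
```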