TensorFlow 2.0: Variables in SavedModel - tensorflow

Are the variables files saved by the saved_model API in protocol buffer (pb) format? If not, is there a way to load them without using TensorFlow APIs (restore/load)?

There's a pure Python API which doesn't use TensorFlow operations, if that's helpful: list variables (tf.train.list_variables) and load a single variable (tf.train.load_variable). For a SavedModel you can point those at the variables/ subdirectory.
There's also TensorBundle, which is the C++ implementation.
If neither of those is helpful, the answer is probably "no". Theoretically it could be spun off into a separate package; if you're interested in doing that, feel free to reach out.
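A minimal sketch of that usage follows. The toy model here only exists so the snippet is self-contained; in practice you would point `prefix` at your own SavedModel's variables/ subdirectory. Note that passing the `variables/variables` file prefix (rather than the directory) sidesteps the need for a checkpoint state file:

```python
import os
import tempfile
import tensorflow as tf

# Create a tiny SavedModel to inspect (in practice, use your existing one).
class M(tf.Module):
    def __init__(self):
        self.w = tf.Variable([[1.0, 2.0], [3.0, 4.0]], name="w")

export_dir = os.path.join(tempfile.mkdtemp(), "my_saved_model")
tf.saved_model.save(M(), export_dir)

# The checkpoint files inside a SavedModel share the prefix
# variables/variables (variables.index + variables.data-*).
prefix = os.path.join(export_dir, "variables", "variables")

# Pure-Python enumeration of (name, shape) pairs -- no graph, no session.
names = [name for name, _ in tf.train.list_variables(prefix)]

# Load a single variable's value as a NumPy array.
value = tf.train.load_variable(
    prefix, [n for n in names if "VARIABLE_VALUE" in n][0])
print(value)
```

The `.ATTRIBUTES/VARIABLE_VALUE` suffix in the listed names comes from TF2's object-based checkpoint layout, which is why the snippet filters on it before loading.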

You can use tf.keras.models.load_model to load a model from a SavedModel directory; what you get back is a tf.keras.Model object.

I am not sure if this is verified, but pointing list_variables and load_variable at the variables/ subdirectory of a SavedModel does not seem to work: it fails with an assertion about a missing "checkpoint" file.
A workaround is to create a checkpoint state file in that directory, with one line pointing at the data file prefix:
model_checkpoint_path: "variables"

Related

Tensorflow saved_model reading

I am looking for a way to read and further modify a TensorFlow saved_model.
The current saved_model was converted from an ONNX model and contains a model.pb, a fingerprint.pb, an assets folder and a variables folder.
I am new to TensorFlow, so this might be a basic question. Is there a recommended method or guide? Thank you for helping.
I tried the tf.saved_model.load() method, but the object it returns does not seem to be editable.
I am expecting to read and modify a TensorFlow saved_model.

Using custom StyleGAN2-ada network in GANSpace (.pkl to .pt conversion)

I trained a network using Nvidia's StyleGAN2-ada PyTorch implementation, so I now have a .pkl file. I would like to use the GANSpace code on my network. However, to use GANSpace with a custom model, you need to give it a checkpoint for your model that is uploaded somewhere (they suggest Google Drive) (checkpoint required in code here). I am not entirely sure how or why this works, but either way it seems I need a .pt file of my network, not the .pkl file I currently have.
I tried following this tutorial. The GANSpace code actually provides a file (models/stylegan2/convert_weight.py) that can do this conversion. However, the convert_weight.py file that was supposed to be there has been replaced by a link to a whole other repo. If I try to run the convert_weight.py file as below, it gives me the following error:
python content/stylegan2-pytorch/convert_weight.py --repo="content/stylegan2-pytorch/" "content/fruits2_output/00000-fruits2-auto1/network-snapshot-025000.pkl"
ModuleNotFoundError: No module named 'dnnlib'
This makes sense because there is no such dnnlib module. If I instead change it to look for the dnnlib module somewhere that does have it (here) like this
python content/stylegan2-pytorch/convert_weight.py --repo="content/stylegan2/" "content/fruits2_output/00000-fruits2-auto1/network-snapshot-025000.pkl"
it previously gave me an error saying TensorFlow had not been installed (which in all fairness it hadn't, because I am using PyTorch), much like the error reported here. I then installed TensorFlow, but it now gives me this error:
ModuleNotFoundError: No module named 'torch_utils'
again, the same as in the previous issue reported on GitHub. After installing torch_utils I get the same error as SamTransformer (ModuleNotFoundError: No module named 'torch_utils.persistence'). The response there was "convert_weight.py does not supports stylegan2-ada-pytorch".
There is a lot I am not sure about, like why I need to convert a .pkl file to .pt in the first place. A lot of the stuff seems to talk about converting Tensorflow models to Pytorch ones, but mine was done in Pytorch originally, so why do I need to convert it? I just need a way to upload my own network to use in GANSpace - I don't really mind how, so any suggestions would be much appreciated.
Long story short, the conversion script provided was to convert weights from the official Tensorflow implementation of StyleGAN2 into Pytorch. As you mentioned, you already have a model in Pytorch, so it's reasonable for the conversion script to not work.
Instead of StyleGAN2 you used StyleGAN2-Ada, which isn't mentioned in the GANspace repo; most probably it didn't exist yet when the GANspace repo was created. As far as I know, StyleGAN2-Ada uses the same architecture as StyleGAN2, so as long as you manually convert your pkl file into the required pt format, you should be able to continue the setup.
Looking at the source code for converting to Pytorch, GANspace requires the pt file to be a dict with the keys ['g', 'g_ema', 'd', 'latent_avg'], while StyleGAN2-Ada saves a pkl containing a dict with the keys ['G', 'G_ema', 'D', 'augment_pipe']. You might be able to get things to work by loading the contents of your pkl file and re-saving them as pt under the expected keys.
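A sketch of that key renaming, under the assumption that the pkl really holds a dict with those StyleGAN2-Ada keys (the file name and the `latent_avg` source are hypothetical and should be verified against your checkpoint):

```python
import pickle

# Assumed mapping from the keys StyleGAN2-Ada writes to the keys
# GANspace expects; 'augment_pipe' has no GANspace counterpart.
ADA_TO_GANSPACE = {"G": "g", "G_ema": "g_ema", "D": "d"}

def remap_keys(ada_state):
    """Rename the network entries. 'latent_avg' is not in the Ada pkl and
    must be filled in separately (possibly from G_ema's mapping network --
    an assumption to verify against your model)."""
    return {new: ada_state[old]
            for old, new in ADA_TO_GANSPACE.items() if old in ada_state}

# Usage sketch (needs torch, plus stylegan2-ada-pytorch's dnnlib and
# torch_utils on sys.path so pickle can resolve the network classes):
#
# with open("network-snapshot-025000.pkl", "rb") as f:
#     state = pickle.load(f)
# pt_state = remap_keys(state)
# pt_state["latent_avg"] = state["G_ema"].mapping.w_avg  # assumption
# torch.save(pt_state, "network.pt")
```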

Manually writing a tensorboard input

Is there a low-level API to write custom things into the tensorboard input directory?
For instance, this would enable writing summaries into the tensorboard directory without writing them from a tensorflow session, but from a custom executable.
As far as I can see, all the TensorBoard inputs live in a single append-only file whose structure is not declared ahead of time (i.e. how many items to expect, what their types are, etc.).
Each summary proto is written sequentially to this file through this class: https://github.com/tensorflow/tensorflow/blob/49c20c5814dd80f81ced493d362d374be9ab0b3e/tensorflow/core/lib/io/record_writer.cc
Was it ever attempted to manually create TensorBoard input?
Is the format explicitly documented, or do I have to reverse-engineer it?
thanks!
The library tensorboardX provides this functionality. It was written by a PyTorch user who wanted to use TensorBoard, but it doesn't depend on PyTorch in any way.
You can install it with pip install tensorboardx.
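As for the file format itself: the event files are TFRecord files, and the framing used by RecordWriter is small enough to reproduce without TensorFlow. A dependency-free sketch of that framing (the payload would be a serialized Event proto; TensorBoard picks up files whose names contain "tfevents"):

```python
import struct

# TFRecord framing as written by RecordWriter (and thus by event files):
#   uint64  payload length, little-endian
#   uint32  masked CRC32C of those 8 length bytes
#   bytes   payload (for TensorBoard: a serialized Event proto)
#   uint32  masked CRC32C of the payload

def crc32c(data: bytes) -> int:
    """Bitwise CRC-32C (Castagnoli polynomial) -- slow but dependency-free."""
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

def masked_crc(data: bytes) -> int:
    """TensorFlow's CRC masking: rotate right by 15 bits, add a constant."""
    crc = crc32c(data)
    return (((crc >> 15) | (crc << 17)) + 0xA282EAD8) & 0xFFFFFFFF

def write_record(f, payload: bytes) -> None:
    header = struct.pack("<Q", len(payload))
    f.write(header)
    f.write(struct.pack("<I", masked_crc(header)))
    f.write(payload)
    f.write(struct.pack("<I", masked_crc(payload)))
```

tensorboardX implements essentially this framing (with a fast CRC32C), which is how it writes summaries without a TensorFlow session.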

How to load a TensorFlow checkpoint by myself without the C++ API?

I am using TensorFlow 1.0.
My production environment cannot build the TensorFlow C++ library because of old gcc and glibc versions.
Is there any documentation on how to load a checkpoint or frozen graph in C++ without the API?
1. How are the network parameters (embeddings, ...) saved?
2. How is the graph structure (layers, weights, ...) saved?
There is no documentation on doing this that I know of. Loading a checkpoint without the C++ runtime won't be very useful to you because you won't be able to run it.
The checkpoint by default does not include the graph structure, but if you export a metagraph you will get it in a serialized protocol buffer format. Implementing a parser for this (and the weights checkpoint) yourself sounds difficult to get right and likely to break in the future.

How to serialize both the graph and values in a protobuf file?

The Android example that comes with Tensorflow downloads a protobuf file for InceptionV3 which contains both the graph and the values from the model. In the docs, I could only find how to serialize the graph (tf.Graph.as_graph_def) or save the variable values with a tf.train.Saver. How can you save everything to a single file, as done for that example?
I answered a similar question on this topic: Is there an example on how to generate protobuf files holding trained Tensorflow graphs?
The basic idea is to use tf.import_graph_def() to replace the variables in the original (training) graph with constants, and then write out the resulting GraphDef using tf.Graph.as_graph_def().
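A minimal sketch of that freezing step, using the convenience helper tf.graph_util.convert_variables_to_constants, which implements the variable-to-constant replacement (written in TF1 style via tf.compat.v1; the graph contents and output file name here are arbitrary examples):

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

# A toy training graph: one variable feeding one op.
x = tf.Variable(2.0, name="x")
y = tf.multiply(x, 3.0, name="y")

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Bake the variables' current values into the graph as Const nodes,
    # producing a single self-contained GraphDef (structure + weights).
    frozen = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph_def, output_node_names=["y"])

# One protobuf file now holds both the graph and the values.
with open("frozen.pb", "wb") as f:
    f.write(frozen.SerializeToString())
```

The frozen GraphDef no longer contains any Variable ops, which is what lets the mobile example run it without restoring a checkpoint.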