Visualize TensorFlow Graph from Checkpoint

I am importing a pretrained MobileNet model, mobilenet_v1_0.25_128_frozen.pb, into my TensorFlow environment. Once imported, I want to be able to save a snapshot of the model architecture as a .png. I know that Keras offers this via tf.keras.utils.plot_model(model, to_file="model.png"). Is there a way to do this from within a TensorFlow session without using TensorBoard? If you do recommend TensorBoard, note that I don't want to run it separately; I want a way to save the model architecture from inside the TensorFlow session.
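A minimal sketch of one way to do this, assuming pydot and the Graphviz binaries are installed: walk the frozen GraphDef's nodes and render them to a PNG, with no TensorBoard involved. The label and layout choices below are illustrative.

```python
# Render a frozen GraphDef to PNG with pydot/Graphviz (no TensorBoard).
import pydot
import tensorflow as tf

with open("mobilenet_v1_0.25_128_frozen.pb", "rb") as f:
    graph_def = tf.compat.v1.GraphDef()  # plain tf.GraphDef() in older 1.x builds
    graph_def.ParseFromString(f.read())

dot = pydot.Dot(graph_type="digraph", rankdir="TB")
for node in graph_def.node:
    # Show each op's name and type on its node.
    dot.add_node(pydot.Node(node.name, label="%s\\n%s" % (node.name, node.op)))
    for inp in node.input:
        # Strip control-dependency ("^name") and output-index ("name:0") markers.
        src = inp.lstrip("^").split(":")[0]
        dot.add_edge(pydot.Edge(src, node.name))

dot.write_png("model.png")
```

For a large graph the PNG will be huge; filtering out constant and identity ops before adding nodes keeps the picture readable.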

Related

Saving the Learned Weights of a Network to Train on another Dataset

I would like to train an MLP (Multi-Layer Perceptron) on the MNIST dataset. I use a validation set so I can save the weights of the best model. Then I want to load these weights back into the same architecture, use them as the initialization, and train on another dataset. I would like to know if this is possible with TensorFlow 1.x or 2.x. Right now I am trying to write a custom function to do it, but it is getting complicated. I am using TF 1.x.
I suggest you take a look at TensorFlow's documentation; here is a link to a tutorial on saving your weights and loading them afterwards:
https://www.tensorflow.org/tutorials/keras/save_and_load
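A minimal sketch of the workflow that tutorial describes, using tf.keras (available in both TF 1.x and 2.x); the filenames and layer sizes are placeholders:

```python
import tensorflow as tf

def build_mlp():
    # The same architecture must be rebuilt before the weights can be reloaded.
    return tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])

model = build_mlp()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Keep only the best weights according to validation loss.
checkpoint = tf.keras.callbacks.ModelCheckpoint(
    "best_mnist_weights.h5", save_best_only=True, save_weights_only=True)
# model.fit(x_train, y_train, validation_data=(x_val, y_val), callbacks=[checkpoint])

# Later: rebuild the same architecture, load the saved weights as the
# initialization, and continue training on a different dataset.
new_model = build_mlp()
new_model.load_weights("best_mnist_weights.h5")
# new_model.fit(x_other, y_other, epochs=5)
```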

How to visualize the TensorFlow model of text-to-speech synthesis https://github.com/Rayhane-mamah/Tacotron-2

I have been trying to visualize the layers/parameters/FLOPs of this TensorFlow text-to-speech synthesis model: https://github.com/Rayhane-mamah/Tacotron-2
Is there a way I can visualize or inspect the graph of Tacotron-2 with all its RNN/LSTM layers using TensorFlow?
Do I need to train the model first before being able to print it, or is there a way to simply see what ops are in each layer without training?
I'm having a hard time figuring this out as I'm new to the TF/PyTorch frameworks. It seems to me that one should be able to just print the model, since the GitHub repo has the .py source, but I don't know how these simple/basic things work in Python or how to do it.
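On the last question: no training is needed just to see the ops. A hedged sketch of the general TF 1.x pattern, assuming the repo's graph-construction code can be invoked (the builder call below is a placeholder for whatever constructor the repo actually provides):

```python
# Building the model adds its ops to the default graph; list them without
# ever running a training step.
import tensorflow as tf

# Hypothetical: substitute the repo's actual model-construction call here,
# e.g. creating the Tacotron model object from its hparams.
# build_tacotron_graph()

for op in tf.compat.v1.get_default_graph().get_operations():
    print(op.name, op.type)
```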

How to retrieve original TensorFlow frozen graph from .tflite?

Basically I am trying to use Google's pretrained speaker-ID model for speaker detection. But since this is a TensorFlow Lite model, I can't use it on my Linux PC directly. To get around that, I am trying to find a converter back to the frozen-graph format.
Any help with such a converter, or any direct way to use pretrained TensorFlow Lite models on a desktop, would be appreciated.
You can use the same converter that generates TFLite models to convert one back to a .pb file, if that is what you're looking for:
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/lite/toco/g3doc/cmdline_examples.md
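As for running the model directly on a desktop: the Python interpreter API can execute a .tflite file without any conversion. A minimal sketch, assuming a TF build where tf.lite.Interpreter is available (it lived under tf.contrib.lite in the 1.x contrib era); the filename is a placeholder:

```python
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="speaker_id.tflite")  # illustrative path
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Dummy input with the expected shape/dtype; replace with real audio features.
dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()

print(interpreter.get_tensor(output_details[0]["index"]))
```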

How to load, edit, and then save a TensorFlow checkpoint?

I would like to be able to load in all weights and biases of a TensorFlow checkpoint, apply some mathematical operations to them (such as thresholding, scaling, etc.), and then save a new TensorFlow checkpoint with the same structure as the original but with the edited weights and biases. Using an answer to this question, I am able to successfully load a TensorFlow checkpoint and view the names and contents of the tensors within it, but I am unable to make changes to these tensors. This post asks a similar question and the asker is referred to this documentation on MetaGraphs, but I am new to TensorFlow and don't see how that information helps.
So, what is the most effective way to load, edit, and then save TensorFlow checkpoints?
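A hedged sketch of one common TF 1.x pattern, extending the checkpoint-reading approach mentioned above: read every tensor with NewCheckpointReader, edit it in NumPy, recreate variables under the same names, and save a fresh checkpoint. The paths and the thresholding edit are illustrative.

```python
import numpy as np
import tensorflow as tf

reader = tf.train.NewCheckpointReader("model.ckpt")  # illustrative path

with tf.Graph().as_default():
    new_vars = []
    for name in reader.get_variable_to_shape_map():
        value = reader.get_tensor(name)          # plain NumPy array
        if value.dtype.kind == "f":              # only edit float tensors
            value[np.abs(value) < 1e-3] = 0      # example edit: threshold weights
        # Recreate the variable under its original name so the new checkpoint
        # keeps the same structure.
        new_vars.append(tf.Variable(value, name=name))

    saver = tf.train.Saver(new_vars)
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        saver.save(sess, "model_edited.ckpt")
```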

How to run inference on inception v3 trained models?

I've successfully trained the Inception v3 model from scratch on 200 custom classes. Now I have ckpt files in my output dir. How do I use those models to run inference?
Preferably, I'd load the model onto the GPU once and pass in images whenever I want, with the model persisting on the GPU. Using TensorFlow Serving is not an option for me.
Note: I've tried to freeze these models but failed to set the output_nodes correctly while freezing. I used ImagenetV3/Predictions/Softmax but couldn't use it with feed_dict, as I couldn't get the required tensors from the frozen model.
The documentation on the TF site & repo for this inference part is poor.
It sounds like you're on the right track. You don't really do anything different at inference time than at training time, except that you don't ask the graph to compute the optimizer op; since the optimizer is never run, no weights are ever updated.
The save and restore guide in the TensorFlow documentation explains how to restore a model from a checkpoint:
https://www.tensorflow.org/programmers_guide/saved_model
You have two options when restoring a model: either you build the ops again from code (usually a build_graph() method) and then load the variables in from the checkpoint, which is the method I use most commonly, or you load the graph definition and variables from the checkpoint, provided the graph definition was saved with the checkpoint.
Once you've loaded the graph, you'll create a session and ask the graph to compute just the output. The tensor ImagenetV3/Predictions/Softmax looks right to me (I'm not immediately familiar with the particular model you're working with). You will need to pass in the appropriate inputs, your images, and possibly whatever other parameters the graph requires; sometimes an is_train boolean is needed, and other such details.
Since you aren't asking TensorFlow to compute the optimizer operation, no weights will be updated. There's really no difference between training and inference other than which operations you request the graph to compute.
TensorFlow will use the GPU by default, just as it did during training, so all of that is pretty much handled behind the scenes for you.
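A minimal sketch of the second option (TF 1.x): restore the graph definition and variables from the checkpoint's .meta file, then compute only the softmax output. The checkpoint prefix and the input tensor name are assumptions; the output name is taken from the question.

```python
import numpy as np
import tensorflow as tf

ckpt_prefix = "model.ckpt-100000"  # illustrative; use your checkpoint prefix

with tf.Session() as sess:
    # import_meta_graph rebuilds the graph def and returns a Saver for it.
    saver = tf.train.import_meta_graph(ckpt_prefix + ".meta")
    saver.restore(sess, ckpt_prefix)

    graph = tf.get_default_graph()
    # Tensor names are assumptions; list graph.get_operations() to find yours.
    images = graph.get_tensor_by_name("input:0")
    probs = graph.get_tensor_by_name("ImagenetV3/Predictions/Softmax:0")

    batch = np.zeros((1, 299, 299, 3), np.float32)  # dummy input batch
    predictions = sess.run(probs, feed_dict={images: batch})
    print(predictions.shape)
```

Keeping the session object alive between calls gives you the "model persists on the GPU" behavior: build and restore once, then call sess.run with new images whenever you like.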