Extracting representations from different layers of a network in TensorFlow 2 - tensorflow2.0

I have the weights of a custom pre-trained model. I need to extract the representations for different inputs that I pass through the model, across its different layers. What would be the best way of doing this?
I am using TensorFlow 2.1.0 and currently load the model's weights using either hub.KerasLayer() or tf.saved_model.load().
Any help would be greatly appreciated! I am very new to TensorFlow and have no choice but to use it since the weights were acquired from another source.

tf.saved_model.load() and its wrapper hub.KerasLayer load both the computation graph and the pre-trained weights. I suppose you're dealing with a TF2-style SavedModel that has its computation packaged in TensorFlow functions. If so, there's no easy way to extract intermediate results from within a function. If possible, you could ask the model creator to provide more outputs, or, if you have the model's Python source, build the model from source and initialize its weights with those from the SavedModel (some plumbing required).
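If you can rebuild the model from source, one common pattern is to copy the SavedModel's variables into the rebuilt Keras model and then read intermediate layers from it. A minimal sketch, assuming the SavedModel tracks its variables and that build_model() (hypothetical; you must have the original architecture definition) reproduces the network:

import tensorflow as tf

loaded = tf.saved_model.load("/path/to/saved_model")  # path is a placeholder

# Inspect what was saved; most TF2 SavedModels track their variables.
for v in loaded.variables:
    print(v.name, v.shape)

model = build_model()  # hypothetical: rebuilt from the original source

# Copy weights across, verifying shapes line up (ordering may need care).
for target, source in zip(model.variables, loaded.variables):
    assert target.shape == source.shape, (target.name, source.name)
    target.assign(source)

# With a Keras model, intermediate representations are then easy to get.
extractor = tf.keras.Model(
    inputs=model.input,
    outputs=[layer.output for layer in model.layers])
activations = extractor(tf.zeros([1, 224, 224, 3]))  # example input shape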

Related

Tensorflow: Does a tflite file contain data about the model architecture? (graph?)

Does a tflite file contain data about the model architecture? A graph that shows what operations were there between the weights, features, and biases, what kind of layers there are (linear or convolutional etc.), the size of the layers, and what activation functions sit between them?
For example, a graph like the one you get with graphviz that contains all this information, or does a tflite file only contain the final weights of the model after training?
I am working on a project with image style transfer. I wanted to do some research on an existing project, and see what parameters work better. The project I am looking at is here:
https://tfhub.dev/sayakpaul/lite-model/arbitrary-image-stylization-inceptionv3-dynamic-shapes/int8/transfer/1
I can download a tflite file, but I don't know much about these files. If they have the architecture I need, how do I read it?
TFLite FlatBuffer files contain the model structure as well. For example, there is a subgraph concept in TFLite, which corresponds to the function concept in programming languages, and operator nodes represent graph nodes that take inputs and generate outputs. The model architecture can be visualized with the Netron application.
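If you want to inspect the file programmatically instead of (or in addition to) Netron, the TFLite interpreter bundled with TensorFlow can list the model's tensors and input/output signatures. A small sketch; the file name is a placeholder:

import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

# Input/output signatures: names, shapes, dtypes.
print(interpreter.get_input_details())
print(interpreter.get_output_details())

# Every tensor in the graph, including intermediate ones.
for t in interpreter.get_tensor_details():
    print(t["index"], t["name"], t["shape"], t["dtype"])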

How to extract weights of DQN agent in TF-Agents framework?

I am using TF-Agents for a custom reinforcement learning problem, where I train a DQN (constructed using DqnAgent from the TF-Agents framework) on some features from my custom environment, and separately use a Keras convolutional model to extract these features from images. Now I want to combine these two models into a single model and use transfer learning, where I want to initialize the weights of the first part of the network (images-to-features) as well as of the second part, which would have been the DQN layers in the previous case.
I am trying to build this combined model using keras.layers and compiling it with the TF-Agents tf_agents.networks.sequential class to bring it to the form required when passing it to the DqnAgent() class. (Let's call this statement (a).)
I am able to initialize the image feature extractor network's layers with the saved weights, since I saved that model as a .h5 file and can obtain NumPy arrays from it. So I am able to do the transfer learning for this part.
The problem is with the DQN layers, where I saved the policy from the previous example using the prescribed TensorFlow SavedModel format (pb), which gives me a folder containing model attributes. However, I am unable to view or extract the weights of my DQN in this way, and the recommended tf.saved_model.load('policy_directory') is not very transparent about what policy data I can see. If I want to follow the transfer learning approach from statement (a), I need to extract the weights of my DQN and assign them to the new network. The documentation seems quite sparse for this transfer learning case.
Can anyone help me with this by explaining how I can extract weights from the SavedModel (the pb file)? Or is there a better way to go about this problem?
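Not from the original thread, but one hedged starting point: a policy exported with TF-Agents' PolicySaver can be reloaded with tf.saved_model.load, and the loaded object usually tracks the underlying network variables (the exact attribute, model_variables below, may vary across TF-Agents versions):

import tensorflow as tf

loaded_policy = tf.saved_model.load("policy_directory")

# Read the tracked variables out as NumPy arrays to seed the new network.
for v in loaded_policy.model_variables:
    print(v.name, v.shape)
    weights = v.numpy()  # assign these to the matching new Keras layers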

TensorFlow Extended: Is it possible to use a PyTorch training loop in a TensorFlow Extended flow?

I have trained an image classification model using pytorch.
Now I want to move it from research to a production pipeline.
I am thinking of using TensorFlow Extended. I have a very noob doubt: will I be able to use my PyTorch-trained model in the TensorFlow Extended pipeline? (I can convert the trained model to ONNX and then to a TensorFlow-compatible format.)
I don't want to rewrite and retrain the training part in TensorFlow, as that would be a big overhead.
Is this possible, or is there a better way to productionize PyTorch-trained models?
You should be able to convert your PyTorch image classification model to TensorFlow format using ONNX, as long as you are using standard layers. I would recommend doing the conversion and then looking at both model summaries to make sure they are relatively similar. Also, run some tests to make sure your converted model handles any particular edge cases you have. Once you have confirmed that the converted model works, save it in the TF SavedModel format and you should be able to use it in TensorFlow Extended (TFX).
For more info on the conversion process, see this tutorial: https://learnopencv.com/pytorch-to-tensorflow-model-conversion/
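For concreteness, a minimal sketch of the two conversion steps using torch.onnx and the onnx-tf package; the model class, weights file, input shape, and paths are placeholders:

import torch
import onnx
from onnx_tf.backend import prepare

# Step 1: export the trained PyTorch model to ONNX.
model = MyImageClassifier()                    # hypothetical model class
model.load_state_dict(torch.load("classifier.pt"))
model.eval()
dummy_input = torch.randn(1, 3, 224, 224)      # match your input shape
torch.onnx.export(model, dummy_input, "classifier.onnx")

# Step 2: convert ONNX to a TensorFlow SavedModel for use in TFX.
onnx_model = onnx.load("classifier.onnx")
tf_rep = prepare(onnx_model)
tf_rep.export_graph("classifier_savedmodel")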
You could consider using the TorchX library. I haven't used it yet, but it seems to make it easier to deploy models by creating and running model pipelines. I don't think it has the same data validation functionality that TensorFlow Extended has, but maybe that will be added in the future.

Where can I find the pretrained models of Faster RCNN / R-FCN with a MobileNet feature extractor trained on the COCO dataset?

I want to train a custom dataset on Faster RCNN with MobileNet v1 or v2. I want to use the pre-trained models in the TensorFlow model zoo, but I can't find a Faster RCNN model with MobileNet as the base extractor. Where can I get it?
I have already checked the TensorFlow model zoo on GitHub. I have previously used the SSD+MobileNet config for the same task. Now I want to compare the results with Faster RCNN and R-FCN with MobileNet.
The official repo has not released Faster RCNN with MobileNet models yet. But if you want, you can still use other MobileNet checkpoints trained on COCO; the process is a bit complicated.
There are two important steps.
The first is to have the corresponding feature extractor class. For Faster RCNN, the models directory already contains a faster_rcnn_mobilenet feature extractor implementation, so this step is fine. But for R-FCN, you will have to implement the feature extractor class yourself.
The second is to change the tensor names available in the checkpoint. For example, if you use ssd_mobilenet_v1_xxx as the checkpoint, all tensors within the mobilenet scope are named FeatureExtractor/MobilenetV1/XXX, while in the faster_rcnn_mobilenet_v1 model the tensor names within the mobilenet scope are FirstStageFeatureExtractor/MobilenetV1/XXX (and SecondStageFeatureExtractor/MobilenetV1/XXX). So essentially you need to remove FirstStage (as well as SecondStage) from the names of all feature extractor tensors; these tensors will then have exactly the same names as in the checkpoint and will be restored correctly. The function you need to modify is
def restore_map(self,
                fine_tune_checkpoint_type='detection',
                load_all_detection_checkpoint_vars=False):
in file faster_rcnn_meta_arch.py.
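To see what renaming is needed, you can list the variable names stored in the SSD checkpoint and map Faster RCNN names back to them. A rough sketch, with the checkpoint path as a placeholder and the mapping function as an illustration rather than the exact code the Object Detection API needs:

import tensorflow as tf

# List the variable names stored in the SSD+MobileNet checkpoint.
reader = tf.train.load_checkpoint("ssd_mobilenet_v1_coco/model.ckpt")
for name, shape in reader.get_variable_to_shape_map().items():
    print(name, shape)  # e.g. FeatureExtractor/MobilenetV1/...

# Hypothetical mapping: drop the stage prefix so a Faster RCNN variable
# name matches the corresponding name in the SSD checkpoint.
def to_checkpoint_name(var_name):
    return (var_name
            .replace("FirstStageFeatureExtractor", "FeatureExtractor")
            .replace("SecondStageFeatureExtractor", "FeatureExtractor"))

# Inside restore_map you would build a {checkpoint_name: variable} dict
# with this mapping so the restore finds the right tensors.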

How to save and use a trained neural network developed in PyTorch / TensorFlow / Keras?

Are there ways to save a model after training and share just the model with others? Like a regular script? Since the network is a collection of float matrices, is it possible to just extract these trained weights and run them on new data to make predictions, instead of requiring users to install these frameworks too? I am new to these frameworks and will make any clarifications as needed.
PyTorch: As explained in this post, you can save a model's parameters as a dictionary, or load a dictionary to set your model's parameters.
You can also save/load a PyTorch model as an object.
Both procedures require the user to have at least one tensor computation framework installed, e.g. for efficient matrix multiplication.
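As a concrete illustration of the state_dict approach; the model class and file names are placeholders:

import torch

# Save: only the learned parameters (a dict of name -> tensor).
torch.save(model.state_dict(), "weights.pt")

# Load: rebuild the architecture, then restore the parameters.
model = MyNet()                        # hypothetical: same class definition
model.load_state_dict(torch.load("weights.pt"))
model.eval()                           # switch to inference mode

# Alternatively, save/load the whole model object (pickles the class too).
torch.save(model, "model.pt")
model = torch.load("model.pt")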