How can I use multiple pre-trained models without TF-slim? - tensorflow

I want to use/combine different parts of different pre-trained models into one model. For example, I want to use the first few layers (with the pre-trained weights) of ResNet as an encoder, combine them with a decoder from another model, and then train the combined model further. Is there a way to do this, preferably without using TF-slim? I'm using TensorFlow 1.4.
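Concretely, what I have in mind is something along these lines (build_resnet_encoder and build_decoder are placeholders for the actual layer definitions, and the scope name and checkpoint path are only examples):

import tensorflow as tf

images = tf.placeholder(tf.float32, [None, 224, 224, 3])

# Rebuild the encoder under the same variable scope/names used in the
# pre-trained ResNet checkpoint, so its variables can be restored from it.
with tf.variable_scope('resnet_v1_50'):
    encoder_out = build_resnet_encoder(images)   # placeholder: first few ResNet blocks

# Decoder taken from the other model, freshly initialized or restored separately.
with tf.variable_scope('decoder'):
    logits = build_decoder(encoder_out)          # placeholder: decoder layers

# Restore only the encoder variables from the ResNet checkpoint.
encoder_vars = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope='resnet_v1_50')
encoder_saver = tf.train.Saver(var_list=encoder_vars)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    encoder_saver.restore(sess, '/path/to/resnet_v1_50.ckpt')  # example path
    # ...train the combined encoder + decoder from here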

Related

What is a lite model in Deep Learning?

I wanted to know: what is a lite model?
I know that a model that is easier to train and has fewer neurons is a lite model, but how do you quantify "fewer neurons"?
If I use a pre-trained model and add two Dense layers to it (where I freeze the pre-trained layers and train only the final two layers), can I call this a lite model, given that it is faster to train and inference is also fast?
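For reference, the setup I am describing looks roughly like this in Keras (MobileNetV2 is just an example base, not the model I am actually using):

import tensorflow as tf

# Example pre-trained base; any backbone could be used here.
base = tf.keras.applications.MobileNetV2(include_top=False, pooling='avg',
                                         input_shape=(224, 224, 3))
base.trainable = False  # freeze all pre-trained layers

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax'),  # only these two layers are trained
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
model.summary()  # few trainable parameters, but the total parameter count is still the base's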

How to best do transfer learning using Dopamine for Reinforcement Learning?

I am using Google's Dopamine framework to train a specific reinforcement learning use-case. I am using an autoencoder to pre-train the convolutional layers of the Deep Q Network and then transfer those pre-trained weights into the final network.
To that end, I have created a separate model (in this case an autoencoder) which I train, and I save the resulting model and weights.
The DQN model is created using Keras's model sub-classing method, and the model used to save the trained convolutional layer weights was built using the Sequential API. My issue arises when trying to load the pre-trained weights into my final DQN model. Depending on whether I use the load_model() or load_weights() functionality from TensorFlow's API, I get two different overall behaviors of my network, and I would like to understand why. Specifically, I have the following two scenarios:
1. Load the weights into the final model with the load_weights() method. The weights are those of the encoder plus one additional layer (added just before saving the weights) to fit the architecture of the final network implemented in Dopamine, where they are loaded.
2. First load the saved model with load_model(), and then, when defining the new model in the __init__() method, extract the relevant layers from the loaded model and use them in the final model.
Overall, I would expect the two approaches to yield similar results with regard to the average reward achieved per episode when I use the same pre-trained weights. However, the two approaches differ (1. yields a higher average reward than 2., although using the same pre-trained weights) and I don't understand why.
Furthermore, in order to validate this behavior I have tried loading random weights with the two aforementioned approaches, expecting to see a change in behavior. In both cases, depending on which of the two loading methods I use, I end up with behavior very similar to the respective case when loading the trained weights. It seems as if the pre-trained weights in each respective case have no effect on the overall resulting training behavior. That said, this might be irrelevant to the issue I am trying to investigate here, as it might be the case that the pre-trained weights don't offer any benefit overall, which is also possible.
Any thoughts and ideas on this would be much appreciated.
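For reference, a minimal sketch of the two approaches (make_dqn, the file paths, and the Atari-style input shape are placeholders, not my actual code):

import tensorflow as tf

# Approach 1: build the sub-classed DQN, then load the saved encoder weights into it.
dqn = make_dqn()                                   # placeholder for the sub-classed Keras model
dqn(tf.zeros((1, 84, 84, 4)))                      # call once so the variables are created
dqn.load_weights('encoder_weights')                # placeholder checkpoint path

# Approach 2: load the full Sequential encoder and reuse its layers when
# defining the DQN in __init__().
encoder = tf.keras.models.load_model('encoder.h5')  # placeholder path
conv_layers = [layer for layer in encoder.layers
               if isinstance(layer, tf.keras.layers.Conv2D)]
dqn_2 = make_dqn(pretrained_layers=conv_layers)    # placeholder: DQN built around the loaded layers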

Extracting representations from different layers of a network in TensorFlow 2

I have the weights of a custom pre-trained model. I need to extract the representations for different inputs that I pass through the model, across its different layers. What would be the best way of doing this?
I am using TensorFlow 2.1.0 and currently load the weights of the model using either hub.KerasLayer() or tf.saved_model.load().
Any help would be greatly appreciated! I am very new to TensorFlow and have no choice but to use it since the weights were acquired from another source.
tf.saved_model.load() and its wrapper hub.KerasLayer load both the computation graph and the pre-trained weights. I suppose you're dealing with a TF2-style SavedModel that has its computation packaged in TensorFlow functions. If so, there's no easy way to extract intermediate results from within a function. If possible, you could ask the model creator to provide more outputs, or, if you have the model's Python source, build the model from source and initialize its weights with those from the SavedModel (some plumbing required).
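If you do get the model's Python source, the rebuilt Keras model makes intermediate representations easy to expose; a rough sketch, where build_model_from_source, the weight path, and the layer names are all placeholders:

import tensorflow as tf

model = build_model_from_source()                     # placeholder: the architecture rebuilt in Keras
model.load_weights('/path/to/transplanted_weights')   # placeholder: weights copied over from the SavedModel

# A second model that shares the layers but returns several intermediate outputs.
feature_extractor = tf.keras.Model(
    inputs=model.input,
    outputs=[model.get_layer(name).output
             for name in ('block1_out', 'block2_out', 'block3_out')])  # placeholder layer names

features = feature_extractor(tf.random.uniform((1, 224, 224, 3)))  # example input shape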

How to add a layer which is implemented by TensorFlow into a PyTorch neural model?

I'd like to add a layer to a PyTorch-based neural model. Basically, I am trying to combine two codebases.
But I notice that the layer I want to add is implemented in TensorFlow. I'd like to know if there is an easy way to integrate a TensorFlow layer into a PyTorch neural model.
The error is shown as:
module ‘torch.nn’ has no attribute ‘tensorflow_layer’

Where can I find the pretrained models of FasterRCNN / R-FCN with Mobilenet Feature extractor trained on the COCO dataset?

I want to train a custom dataset on Faster RCNN with MobileNet v1 or v2. I want to use the pre-trained models from the TensorFlow model zoo, but I can't find a Faster RCNN model with MobileNet as the base feature extractor. Where can I get it?
I have already looked through the TensorFlow zoo on GitHub. I have previously used an SSD+MobileNet config for the same dataset. Now I want to compare those results with Faster RCNN and R-FCN with MobileNet.
The official repo has not released Faster RCNN models with MobileNet yet. But if you want, you can still start from other MobileNet models trained on COCO; the process is a bit involved.
There are two important steps.
The first is to have a corresponding feature extractor class. For Faster RCNN, the models repository already contains a faster_rcnn_mobilenet feature extractor implementation, so this step is covered. For R-FCN, however, you will have to implement the feature extractor class yourself.
The second is to reconcile the tensor names with those available in the checkpoint. For example, if you use an ssd_mobilenet_v1_xxx checkpoint, all tensors within the MobileNet scope are named FeatureExtractor/MobilenetV1/XXX, while in the faster_rcnn_mobilenet_v1 model the tensor names within the MobileNet scope are FirstStageFeatureExtractor/MobilenetV1/XXX (and SecondStageFeatureExtractor/MobilenetV1/XXX). So essentially you need to remove FirstStage (as well as SecondStage) from the names of all feature extractor tensors; these tensors will then have exactly the same names as in the checkpoint and will be restored correctly. The function you need to modify is
def restore_map(self,
                fine_tune_checkpoint_type='detection',
                load_all_detection_checkpoint_vars=False):
in file faster_rcnn_meta_arch.py.
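The renaming you end up doing inside restore_map might look roughly like the following; this is only an illustration of the idea, not the actual code in faster_rcnn_meta_arch.py:

import tensorflow as tf

def mobilenet_restore_map():
    # Map SSD-style checkpoint names to the Faster RCNN model's variables by
    # stripping the FirstStage/SecondStage prefixes, so the names match an
    # ssd_mobilenet_v1 checkpoint.
    variables_to_restore = {}
    for variable in tf.global_variables():
        var_name = variable.op.name
        for prefix in ('FirstStageFeatureExtractor/', 'SecondStageFeatureExtractor/'):
            if var_name.startswith(prefix):
                # e.g. FirstStageFeatureExtractor/MobilenetV1/Conv2d_0/weights
                #  ->  FeatureExtractor/MobilenetV1/Conv2d_0/weights
                ckpt_name = var_name.replace(prefix, 'FeatureExtractor/', 1)
                variables_to_restore[ckpt_name] = variable
    return variables_to_restore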