How to extract the weights of a DQN agent in the TF-Agents framework?

I am using TF-Agents for a custom reinforcement learning problem, where I train a DQN (constructed with DqnAgent from the TF-Agents framework) on some features from my custom environment, and separately use a Keras convolutional model to extract these features from images. Now I want to combine the two into a single model and use transfer learning: I want to initialize the weights of the first part of the network (images-to-features) as well as of the second part, which would have been the DQN layers in the previous setup.
I am trying to build this combined model using Keras layers and wrap it with the TF-Agents tf_agents.networks.Sequential class to bring it into the form required for passing it to the DqnAgent() constructor. (Let's call this statement (a).)
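To make statement (a) concrete, this is roughly the structure I have in mind (layer sizes and num_actions are placeholders):

```python
import tensorflow as tf
from tf_agents.networks import sequential

num_actions = 4  # placeholder for the size of my action space

# Stand-ins for the image-to-features part of the network...
feature_layers = [
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.Flatten(),
]
# ...and for the part that was previously the DQN's q_network.
dqn_layers = [
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(num_actions),
]

q_net = sequential.Sequential(feature_layers + dqn_layers)
# q_net would then be passed as q_network=q_net to DqnAgent(...)
```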
I am able to initialize the layers of the image feature extractor with its weights, since I saved that model as a .h5 file and can read its weights out as NumPy arrays. So the transfer learning works for this part.
The problem is with the DQN layers. I saved the policy from the previous run using the prescribed TensorFlow SavedModel format (.pb), which gives me a folder containing the model's attributes. However, I am unable to view or extract the weights of my DQN this way, and the recommended tf.saved_model.load('policy_directory') is not very transparent about what data I can see regarding the policy. If I am to follow the transfer learning plan in statement (a), I need to extract the weights of my DQN and assign them to the new network. The documentation seems quite sparse for this transfer learning use case.
Can anyone explain how I can extract the weights from the SavedModel (from the .pb file)? Or is there a better way to go about this problem?
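To show the kind of inspection I am after, here is a sketch of what I have tried and am considering (model_variables is my guess at the attribute PolicySaver exposes and may differ across TF-Agents versions; the checkpoint path follows the standard SavedModel folder layout):

```python
import os
import tensorflow as tf

policy_dir = "policy_directory"  # placeholder path

# Option 1: the loaded policy object may track its variables directly.
loaded = tf.saved_model.load(policy_dir)
if hasattr(loaded, "model_variables"):
    for v in loaded.model_variables:
        print(v.name, v.shape)  # v.numpy() gives the weight array

# Option 2: read the checkpoint that ships inside the SavedModel folder.
ckpt = os.path.join(policy_dir, "variables", "variables")
names_shapes = tf.train.list_variables(ckpt)
weights = {name: tf.train.load_variable(ckpt, name) for name, _ in names_shapes}
```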

Related

Tensorflow Object Detection API - "fine_tune" vs "detection" vs "classification"

I am following this tutorial: https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/training.html
It contains the following snippet in the file pipeline.config:
fine_tune_checkpoint_type: "detection" # Set this to "detection" since we want to be training the full detection model
Further investigation leads to the following discoveries:
There are at least three options for the field fine_tune_checkpoint_type: fine_tune, detection, and classification.
Not all models from the model zoo allow all options.
My questions are:
What does each of fine_tune, detection, and classification mean in this context, and, more importantly, when is it appropriate to use each one?
How do I tell which options are compatible with which models in the model zoo?
Ultimately I wish to do transfer learning - e.g. take an existing trained model and train it to draw boxes for one or more novel classes.
Those options indicate how checkpoints are restored, and they come from here.
I copy the interesting part here:
This option controls how variables are restored from the (pre-trained) fine_tune_checkpoint. For TF2 models, 3 different types are supported:
"classification": Restores only the
classification backbone part of the feature extractor. This option is typically used when you want to train a detection model starting from a pre-trained image classification model, e.g. a ResNet model pre-trained on ImageNet.
"detection": Restores the entire feature extractor. The only parts of the full detection model that are not restored are the box and class prediction heads. This option is typically used when you want to use a pre-trained detection model and train on a new dataset or task which requires different box and class prediction heads.
"full":Restores the entire detection model, including the feature extractor, its classification backbone, and the prediction heads. This option should only be used when the pre-training and fine-tuning tasks are the same. Otherwise, the model's parameters may have incompatible shapes, which will cause errors when attempting to restore the checkpoint. For more details about this parameter, see the restore_map (TF1) or restore_from_object (TF2) function documentation in the /meta_architectures/*meta_arch.py files.
I guess fine_tune has since been replaced by "full". Based on your needs, the right choice appears to be "detection". To know which models support which options, as indicated above, you have to look at the restore_from_object function definition in the relevant /meta_architectures/*meta_arch.py files.
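As a sketch, the relevant part of pipeline.config would then look like this (the checkpoint path is a placeholder):

```
train_config {
  fine_tune_checkpoint: "pre-trained-model/checkpoint/ckpt-0"
  fine_tune_checkpoint_type: "detection"
}
```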

Extracting representations from different layers of a network in TensorFlow 2

I have the weights of a custom pre-trained model. I need to extract the representations for different inputs that I pass through the model, across its different layers. What would be the best way of doing this?
I am using TensorFlow 2.1.0 and currently load the weights of the model using either hub.KerasLayer() or tf.saved_model.load().
Any help would be greatly appreciated! I am very new to TensorFlow and have no choice but to use it since the weights were acquired from another source.
tf.saved_model.load() and its wrapper hub.KerasLayer load both the computation graph and the pre-trained weights. I suppose you're dealing with a TF2-style SavedModel that has its computation packaged in TensorFlow functions. If so, there's no easy way to extract intermediate results from within a function. If possible, you could ask the model creator to provide more outputs, or, if you have the model's Python source, build the model from source and initialize its weights with those from the SavedModel (some plumbing required).
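If you do get the model's Python source, a minimal sketch of the rebuild-and-tap approach looks like this (ResNet50 is used purely as a stand-in architecture; the layer names are examples):

```python
import tensorflow as tf

# Rebuild the architecture from source, then load the pre-trained weights into it.
base = tf.keras.applications.ResNet50(weights=None)
# base.load_weights("...")  # the plumbing to copy weights from the SavedModel

# Expose intermediate layers as additional outputs.
feature_model = tf.keras.Model(
    inputs=base.input,
    outputs=[base.get_layer(name).output
             for name in ("conv2_block3_out", "conv5_block3_out")])

representations = feature_model(tf.random.uniform((1, 224, 224, 3)))
```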

Custom object detection using TensorFlow

I have trained the Object Detection API using the ssd_mobilenet_v1_coco_2017_11_17 model to detect a custom object. But after training, the API only detects the custom object and not the objects it had already been trained on. The ssd_mobilenet_v1_coco_2017_11_17 model detects 90 classes.
Is there any way to add more classes to an existing model so that it can detect new objects along with the one it has been trained for?
This question has already been asked here, and some answers can be found here.
The very last layer of the network is the softmax layer. When the network is trained, its weights are optimized for the exact number of classes in the training set. So, if you need a new class in addition to the classes it was trained on, the easiest way is to get the original dataset it was trained on along with your images of the new class, and then start the training from the pre-trained model's weights. The training should converge faster, as the network only has to make relatively small adjustments.
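As a sketch, the new class would be appended to the label map and the class count raised in pipeline.config before retraining (the id and name here are hypothetical):

```
# label_map.pbtxt: keep the original 90 COCO entries, then append the new one
item {
  id: 91
  name: "my_new_object"
}

# pipeline.config: raise the class count to match
model {
  ssd {
    num_classes: 91
  }
}
```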

Fine-tuning a TensorFlow seq2seq model

I've trained a seq2seq model for machine translation (DE-EN), and I have saved the trained model checkpoint. Now I'd like to fine-tune this checkpoint on some domain-specific data samples that were not seen in the previous training phase. Is there a way to achieve this in TensorFlow, e.g. by modifying the embedding matrix somehow?
I couldn't find any relevant papers or works addressing this issue.
Also, I'm aware that the vocabulary files need to be updated according to the new sentence pairs. But do we then have to start training from scratch again? Isn't there an easy way to dynamically update the vocabulary files and the embedding matrix according to the new samples and continue training from the latest checkpoint?
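To make the idea concrete, something along these lines is what I am imagining (vocabulary sizes and the assignment target are hypothetical):

```python
import tensorflow as tf

old_vocab, new_vocab, emb_dim = 32000, 32500, 512  # hypothetical sizes

# Embedding matrix restored from the old checkpoint (random placeholder here).
old_embeddings = tf.random.normal((old_vocab, emb_dim))

# Keep the learned rows; initialize rows for new vocabulary entries from scratch.
new_rows = tf.random.normal((new_vocab - old_vocab, emb_dim), stddev=0.02)
new_embeddings = tf.concat([old_embeddings, new_rows], axis=0)

# Then assign into the enlarged model variable and resume training, e.g.
# model.embedding.embeddings.assign(new_embeddings)  # depends on the model class
```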

How can I use a Torch model?

I have a Torch model trained on a large-scale dataset (the Places dataset), and its authors uploaded it to GitHub. I am working on a similar project and want to make use of its trained weights instead of training on the large dataset myself, to save time and effort. Is that possible? How can I get hold of just the trained filter weights? I don't want to copy the code; I only want to reuse the weights.
NOTE: I use TensorFlow in my implementation.
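For example, if the published weights turn out to be a PyTorch state dict, I imagine the conversion would look something like this (file and layer names are made up; for the old Lua Torch .t7 format a reader such as the torchfile package would be needed instead):

```python
import torch  # assuming the published weights are a PyTorch state dict (.pth)

state = torch.load("places_model.pth", map_location="cpu")  # hypothetical file
weights = {name: t.cpu().numpy() for name, t in state.items()}

# Copy into matching TensorFlow/Keras layers. Mind the kernel layout:
# PyTorch conv kernels are (out, in, h, w); Keras expects (h, w, in, out).
# keras_conv.set_weights([weights["conv1.weight"].transpose(2, 3, 1, 0),
#                         weights["conv1.bias"]])
```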