How to add a layer implemented in TensorFlow to a PyTorch neural model?

I'd like to add a layer to a PyTorch-based neural model. Basically, I am trying to combine two codebases.
But I notice that the layer I want to add is implemented in TensorFlow. I'd like to know if there is an easy way to integrate a TensorFlow layer into a PyTorch neural model.
The error I get is:
module 'torch.nn' has no attribute 'tensorflow_layer'
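There is no such attribute in `torch.nn`: PyTorch and TensorFlow build and differentiate their graphs separately, so a TensorFlow layer cannot simply be dropped into a PyTorch model and still receive gradients. The usual route is to reimplement the layer as an `nn.Module` and, if it carries trained weights, copy them over via NumPy. Below is a minimal sketch, assuming the TensorFlow layer is a plain Keras `Dense` layer; the class and method names are made up for illustration.

```python
import torch
import torch.nn as nn

class PortedDenseLayer(nn.Module):
    """Sketch of porting a simple TF/Keras Dense layer to PyTorch.

    A custom TF layer would need its forward math re-implemented by hand
    in forward(); this example only covers the Dense case.
    """
    def __init__(self, in_features, out_features):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)

    def forward(self, x):
        return torch.relu(self.linear(x))

    def load_from_tf(self, tf_dense_layer):
        # Keras Dense stores its kernel as (in, out); nn.Linear expects (out, in).
        kernel, bias = tf_dense_layer.get_weights()
        with torch.no_grad():
            self.linear.weight.copy_(torch.from_numpy(kernel.T))
            self.linear.bias.copy_(torch.from_numpy(bias))

# Usage (tf_dense_layer comes from the TensorFlow codebase):
# ported = PortedDenseLayer(128, 64)
# ported.load_from_tf(tf_dense_layer)
```

After the weights are copied, the ported module trains and backpropagates like any other PyTorch layer.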

Related

Saving the Learned Weights of a Network to Train on another Dataset

I would like to train an MLP (multi-layer perceptron) on the MNIST dataset. I use a validation set so I can save the weights of the best model. Then I want to load these weights back into the same architecture and use them to initialize training on another dataset. I would like to know if this is possible with TensorFlow 1.x or 2.x. Right now I am trying to write a custom function to do it, but it is getting complicated. I am using TF 1.x.
I suggest you take a look at TensorFlow's documentation; here is a link to a tutorial on saving your weights and loading them afterwards:
https://www.tensorflow.org/tutorials/keras/save_and_load
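Yes, this is possible in both TF 1.x and 2.x. Since you are on TF 1.x, here is a minimal sketch with `tf.train.Saver`; the `train_step`/`evaluate` callables and the checkpoint path are placeholders for your own code.

```python
import tensorflow as tf  # TF 1.x, as in the question

def train_with_best_checkpoint(train_step, evaluate, num_epochs=20,
                               ckpt_path="checkpoints/best_model.ckpt"):
    """Train on MNIST, keeping only the weights of the best validation epoch."""
    saver = tf.train.Saver()  # by default tracks all variables in the graph
    best_val_acc = 0.0
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for _ in range(num_epochs):
            train_step(sess)              # one epoch of training
            val_acc = evaluate(sess)      # accuracy on the validation set
            if val_acc > best_val_acc:    # save only when validation improves
                best_val_acc = val_acc
                saver.save(sess, ckpt_path)

def finetune_on_new_dataset(train_step, ckpt_path="checkpoints/best_model.ckpt"):
    """Rebuild the same architecture first, then restore and keep training."""
    saver = tf.train.Saver()
    with tf.Session() as sess:
        saver.restore(sess, ckpt_path)    # initialize from the saved weights
        train_step(sess)                  # continue training on the other dataset
```

The key point is that the second graph must define the same architecture (same variable names and shapes) so `saver.restore` can map the checkpoint onto it.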

Can I generate a heat map using a method such as Grad-CAM in a concatenated CNN?

I am trying to apply GradCAM to my pre-trained CNN model to generate heat maps of layers. My custom CNN design is shown as follows:
- It adopts all the convolution layers and the pre-trained weights from the VGG16 model.
- Lower-level features are extracted from the early convolution layers of VGG16.
- Fully connected layers are trained on both the normal/high-level and the lower-level VGG16 features.
- The outputs of both the normal/high- and lower-level f.c. branches are concatenated, and more f.c. layers are trained before the final prediction.
(model design diagram)
I want to use Grad-CAM to visualize the feature maps of the low-level route and the normal/high-level route, and I have already produced such heat maps on a non-concatenated fine-tuned VGG using the last convolutional layers. My question is: on a concatenated CNN model, does the Grad-CAM method still work using the gradient of the prediction with respect to the low- and high-level feature maps respectively? If not, are there other methods that can produce heat-map visualizations for such a model? Is using the shared fully connected layers an option?
Any ideas and suggestions are much appreciated!
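Grad-CAM still works on a multi-branch model: you take the gradient of the class score with respect to whichever branch's feature map you want to visualize. Here is a minimal sketch with `tf.GradientTape`, assuming a Keras functional model; the layer names in the usage comment are only examples.

```python
import numpy as np
import tensorflow as tf

def grad_cam(model, image, conv_layer_name, class_index=None):
    """Grad-CAM heat map for the feature map produced by `conv_layer_name`."""
    # Model that maps the input to both the chosen feature map and the prediction.
    grad_model = tf.keras.Model(
        inputs=model.inputs,
        outputs=[model.get_layer(conv_layer_name).output, model.output],
    )

    with tf.GradientTape() as tape:
        feature_maps, predictions = grad_model(image[np.newaxis, ...])
        if class_index is None:
            class_index = int(tf.argmax(predictions[0]))
        class_score = predictions[:, class_index]

    # Gradient of the class score w.r.t. that branch's feature maps.
    grads = tape.gradient(class_score, feature_maps)
    # One weight per channel: global-average-pool the gradients.
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))
    # Weighted sum of the channels, then ReLU and normalization.
    cam = tf.nn.relu(tf.reduce_sum(feature_maps[0] * weights, axis=-1))
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()

# Run it once per branch (layer names are illustrative):
# low_cam  = grad_cam(model, img, "block2_conv2")   # low-level VGG16 route
# high_cam = grad_cam(model, img, "block5_conv3")   # normal/high-level route
```

Because the gradient flows back through the shared f.c. layers and the concatenation, each branch's heat map reflects how that branch contributed to the prediction.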

Can we build an object detection model using TensorFlow alone, or is it only possible with the help of tf.keras?

Is there any way to build an object detection model using TensorFlow without any help from the tf.keras module?
In the TensorFlow documentation I'm not able to find any example that creates a model without Keras.
Keras is a high-level API. If you want to use only TensorFlow, you have to implement the architecture with the low-level API. It is certainly possible, but you have to build all the convolutional and dense layers yourself.
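For illustration, here is a minimal sketch of a convolution block written directly against the low-level API (`tf.Variable`, `tf.nn.conv2d`), without tf.keras. It is only one building block of a detector, not a full model.

```python
import tensorflow as tf

class ConvBlock(tf.Module):
    """3x3 convolution + bias + ReLU built from low-level ops only."""

    def __init__(self, in_channels, out_channels, name=None):
        super().__init__(name=name)
        self.kernel = tf.Variable(
            tf.random.truncated_normal([3, 3, in_channels, out_channels], stddev=0.1))
        self.bias = tf.Variable(tf.zeros([out_channels]))

    def __call__(self, x):
        x = tf.nn.conv2d(x, self.kernel, strides=1, padding="SAME")
        x = tf.nn.bias_add(x, self.bias)
        return tf.nn.relu(x)

# Usage: stack such blocks (plus pooling, box/class heads, losses) yourself.
block = ConvBlock(3, 16)
features = block(tf.random.normal([1, 64, 64, 3]))  # -> shape (1, 64, 64, 16)
```

Everything Keras layers normally handle (weight creation, regularization, training loops) has to be written out explicitly in this style, which is why most object-detection codebases use tf.keras or the Object Detection API instead.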

How can I use multiple pre-trained models without TF-slim?

I want to use/combine different parts of different pre-trained models into one model. For example, I want to use the first few layers (with the pre-trained weights) of ResNet as encoder and then combine them with a decoder from another model, and then I want to train further on that. Is there a way, preferably without using TF-slim? I'm using TensorFlow 1.4.
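One way to do this in TF 1.x without TF-slim is to build the combined graph yourself and restore only the encoder's variables from the ResNet checkpoint with `tf.train.Saver(var_list=...)`. A rough sketch follows; the scope names and checkpoint path are illustrative, and the variable names in your graph must match those in the checkpoint (otherwise pass a `{checkpoint_name: variable}` dict as `var_list`).

```python
import tensorflow as tf  # TF 1.x style, matching the question

# Build the combined graph: encoder layers under one variable scope,
# decoder layers under another (scope names here are placeholders).
with tf.variable_scope("resnet_encoder"):
    ...  # the first few ResNet layers, named to match the checkpoint
with tf.variable_scope("decoder"):
    ...  # decoder layers taken from the other model

# Collect only the encoder variables and build a saver for them.
encoder_vars = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES,
                                 scope="resnet_encoder")
encoder_saver = tf.train.Saver(var_list=encoder_vars)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())       # fresh decoder weights
    encoder_saver.restore(sess, "path/to/resnet.ckpt") # pre-trained encoder
    # ...continue training: the decoder starts from scratch,
    #    the encoder starts from the restored ResNet weights.
```

You can repeat the same pattern with a second `Saver` if the decoder also has its own pre-trained checkpoint.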

Are there any references for feature extraction using an LSTM RNN in TensorFlow?

Currently I am trying to use a pre-trained LSTM RNN model for feature extraction.
I stumbled across the following reference for feature extraction using deep neural nets; that one is for images, however.
https://www.kernix.com/blog/image-classification-with-a-pre-trained-deep-neural-network_p11
In a similar fashion I would like to use an LSTM RNN (https://github.com/guillaume-chevalier/LSTM-Human-Activity-Recognition/) for feature extraction. The code is implemented using TensorFlow's BasicLSTMCell.
Is there any way to get a layer like "pool_3:0", as described in the first reference link?
Any links or references would be helpful.
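One option, analogous to fetching "pool_3:0" in the image example, is to restore the trained TF 1.x graph and fetch an intermediate tensor by name, e.g. the LSTM's final hidden state. Here is a sketch; the tensor/placeholder names and checkpoint paths are illustrative, so list the graph's operations to find the real ones in the linked repo.

```python
import numpy as np
import tensorflow as tf  # TF 1.x, matching the linked BasicLSTMCell code

# One dummy window of sensor data: (batch, timesteps, features) is illustrative.
batch = np.random.rand(1, 128, 9).astype(np.float32)

with tf.Session() as sess:
    # Restore the trained graph definition and its weights.
    saver = tf.train.import_meta_graph("lstm_model.ckpt.meta")
    saver.restore(sess, "lstm_model.ckpt")

    # Inspect the graph to find the tensor you want as a feature vector:
    # for op in sess.graph.get_operations(): print(op.name)

    # Fetch an intermediate tensor by name, just like "pool_3:0" for images.
    # Names below are examples; use the ones printed by the loop above.
    features_tensor = sess.graph.get_tensor_by_name("rnn/lstm_output:0")
    features = sess.run(features_tensor, feed_dict={"input_x:0": batch})

print(features.shape)  # the extracted LSTM features for this window
```

The extracted activations can then be fed to any downstream classifier, just as the image reference does with the Inception "pool_3:0" features.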