Convert Inception model with include_top=False from Keras to Pytorch - tensorflow

I'm trying to convert an old project written in Keras to PyTorch.
The Keras create_model() contains the following code. The input is a (129, 500, 1) grayscale image and the output is a (None, 2, 14, 2038) tensor, which is later fed into a BiLSTM.
from tensorflow.python.keras.applications.inception_v3 import InceptionV3
inception_model = InceptionV3(include_top=False, weights=None, input_tensor=input_tensor)
for layer in inception_model.layers:
    layer.trainable = False
x = inception_model.output
How can I convert this code to PyTorch? The main problem is "include_top=False", which does not exist in PyTorch's torchvision.inception_v3 model. This flag lets the Keras model accept non-standard 1-channel inputs and return the 4-D output of the last conv block.

Actually, the InceptionV3 model is available in PyTorch.
You can try the code below.
import torchvision
torchvision.models.inception_v3()
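To reproduce the include_top=False behaviour with a 1-channel input, here is a minimal sketch, assuming a recent torchvision (0.11+ for torchvision.models.feature_extraction); swapping the stem conv and cutting the graph at Mixed_7c are my choices for illustration, not something mandated by the original Keras code:
import torch
import torch.nn as nn
import torchvision
from torchvision.models.feature_extraction import create_feature_extractor

# Build InceptionV3 without pretrained weights, mirroring weights=None in Keras.
backbone = torchvision.models.inception_v3(weights=None, aux_logits=False, init_weights=True)

# Swap the stem conv so the network accepts 1-channel (grayscale) input.
backbone.Conv2d_1a_3x3.conv = nn.Conv2d(1, 32, kernel_size=3, stride=2, bias=False)

# Cut the model at the last Inception block (Mixed_7c); this plays the role of
# include_top=False and returns the 4-D feature map of the final conv block.
feature_extractor = create_feature_extractor(backbone, return_nodes={'Mixed_7c': 'features'})
feature_extractor.eval()

# Freeze everything, mirroring layer.trainable = False in Keras.
for p in feature_extractor.parameters():
    p.requires_grad = False

x = torch.randn(1, 1, 129, 500)               # PyTorch uses (N, C, H, W)
features = feature_extractor(x)['features']   # 4-D conv features, the analogue of inception_model.output
The resulting feature map can then be permuted/reshaped and fed into an nn.LSTM(..., bidirectional=True), mirroring how the Keras output tensor went into the BiLSTM.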

Related

Replicate the TFLite model layers using python function of conv2d etc

If I want to replicate the layers of a TFLite model using Python TensorFlow functions, in order to run a few experiments on the tensor data, how do I do that?
The conv can be done with tf.nn.conv2d, but adding the bias to it and then applying ReLU does not give the correct output.
Which functions would work? The model is a TF ResNet50 converted to TFLite using the TensorFlow Lite converter with optimizations.
I went through this a few months ago. The problem with trying to replicate TFLite layers in regular Tensorflow is that the ordering for weights is different. Example for conv2d:
TFLite - [out_channels, filter_height, filter_width, in_channels]
Regular TF - [filter_height, filter_width, in_channels, out_channels]
Here is an example python implementation that takes in a TFLite tensor W and reorders it such that it can be used in regular TF:
import numpy as np

def reorderWeights(W, in_channels, out_channels, kernel):
    # Flatten the TFLite-ordered weights and rebuild them in regular-TF order.
    flatW = W.flatten()
    newW = []
    for j in range(in_channels * kernel * kernel):
        for i in range(out_channels):
            newW.append(flatW[i * (kernel * kernel * in_channels) + j])
    newW = np.array(newW)
    return newW.reshape(kernel, kernel, in_channels, out_channels)
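A hedged usage sketch for the conv -> bias -> ReLU sequence from the question (W_tflite, b_tflite, x, and the stride/padding values are placeholders for whatever you read out of the .tflite file with the Interpreter; if the model was converted with quantization, dequantize the tensors first):
import numpy as np
import tensorflow as tf

# Reorder the TFLite conv kernel, then apply conv -> bias -> relu with plain TF ops.
W_tf = reorderWeights(W_tflite, in_channels=3, out_channels=64, kernel=7).astype(np.float32)
y = tf.nn.conv2d(x, W_tf, strides=[1, 2, 2, 1], padding='SAME')
y = tf.nn.relu(tf.nn.bias_add(y, b_tflite))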

Concatenate Custom Pretrained Model with new Model Keras

I converted the Sports-1M Caffe model to Keras and I am using it as a pretrained model inside my new Keras model. I also loaded the pretrained weights.
I removed the top layer of the pretrained model and finally concatenated it with the new model. I don't want to train the loaded pretrained model again; I just want to use its embedding to train my new Keras model.
The code looks like this:
from keras.models import model_from_json
from keras import backend as K
K.set_image_dim_ordering('th')
model = model_from_json(open('/content/sports_1M/sports1M_model_new.json', 'r').read())
model.load_weights('/content/sports_1M/sports1M_weights.h5')
My questions are:
Should I compile the pretrained model then concatenate it?
model.compile(loss='mean_squared_error', optimizer='adam')
How do I know that the pretrained model is not training it again (which I don't want)?
How do I train the whole (concatenated) architecture?
model2 = Model(model.get_input_at(0), model.get_layer(layer_name).output)
input_shape = (3, 16, 112, 112)
encoded_l = model2(left_input)
prediction = Dense(1, activation='sigmoid')(encoded_l)
Model([left_input, right_input], prediction)
When we use built-in pretrained models like VGG, we generally use VGG(include_top=False, weights='imagenet').
I am thinking along the same lines for my case.
I found the answer: we can simply set layer.trainable = False:
for layer in model.layers:
    layer.trainable = False
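Putting the pieces together, a minimal sketch of the whole setup (the input shape, layer_name, and the binary cross-entropy loss are illustrative assumptions, and layer_name is assumed to point at a flat embedding layer; because the pretrained layers are frozen, only the new Dense head is trained, and there is no need to compile the pretrained model separately):
from keras.layers import Input, Dense, concatenate
from keras.models import Model

# Freeze the pretrained Sports-1M model so its weights are never updated.
for layer in model.layers:
    layer.trainable = False

# Embedding sub-model: pretrained input -> chosen intermediate layer.
embedder = Model(model.get_input_at(0), model.get_layer(layer_name).output)

left_input = Input(shape=(3, 16, 112, 112))
right_input = Input(shape=(3, 16, 112, 112))

# Reuse the frozen embedder on both inputs, then add the new trainable head.
merged = concatenate([embedder(left_input), embedder(right_input)])
prediction = Dense(1, activation='sigmoid')(merged)

full_model = Model([left_input, right_input], prediction)
# Compile once on the combined model, after freezing.
full_model.compile(loss='binary_crossentropy', optimizer='adam')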

Upgrade Tensorflow model or Retrain for SavedModel

I followed "Tensorflow for poets" in 2017 and retrained my own collection of images and created "retrained_graph.pb" and "retrained_labels.txt"
Today I need to run this model on Tensorflow Serving.
There are two options to accomplish this:
Upgrade the old model to save it under the "saved_model" format and use it on TensorFlow Serving - I found some SO postings to accomplish this (this or that).
Use the latest tensorflow Hub with Keras (https://www.tensorflow.org/tutorials/images/hub_with_keras)
I am looking for the best option among these, or a new one.
In my opinion, either using TensorFlow Hub or using the pre-trained models inside tf.keras.applications is preferable because, in either case, there won't be many code changes required to save the model in a format compatible with TensorFlow Serving.
The code for reusing the pre-trained MobileNet model from tf.keras.applications is shown below:
import tensorflow as tf
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D
from tensorflow.keras import Model

# Import MobileNet V2 with pre-trained weights AND exclude the fully connected layers
IMG_SIZE = 224
IMG_SHAPE = (IMG_SIZE, IMG_SIZE, 3)

# Create the base model from the pre-trained model MobileNet V2
base_model = tf.keras.applications.MobileNetV2(input_shape=IMG_SHAPE,
                                               include_top=False,
                                               weights='imagenet')

# Add a Global Average Pooling layer
x = base_model.output
x = GlobalAveragePooling2D()(x)

# Add an output layer
my_mobilenetv2_output = Dense(5, activation='softmax')(x)

# Combine into the whole neural network
my_mobilenetv2_model = Model(inputs=base_model.input, outputs=my_mobilenetv2_output)
You can save the model using the code given below:
import os

version = 1
MODEL_DIR = 'Image_Classification_Model'
export_path = os.path.join(MODEL_DIR, str(version))
tf.keras.models.save_model(model=my_mobilenetv2_model, filepath=export_path)
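As a quick sanity check (optional, and only a sketch), you can reload the exported SavedModel and confirm the default serving signature exists before pointing TensorFlow Serving at MODEL_DIR:
# Reload the SavedModel and list its serving signatures.
loaded = tf.saved_model.load(export_path)
print(list(loaded.signatures.keys()))  # expect ['serving_default']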

What is the expected behavior and purpose of model.trainable=False in tensorflow keras

It seems that setting model.trainable=False in tensorflow keras does nothing except print a wrong model.summary(). Here is the code to reproduce the issue:
import tensorflow as tf
import numpy as np

IMG_SHAPE = (160, 160, 3)
# Create the base model from the pre-trained model MobileNet V2
base_model = tf.keras.applications.MobileNetV2(input_shape=IMG_SHAPE,
                                               include_top=False,
                                               weights='imagenet')
base_model.trainable = False
# for layer in base_model.layers:
#     layer.trainable = False
bc = []  # before compile
ac = []  # after compile
for layer in base_model.layers:
    bc.append(layer.trainable)
print(np.all(bc))  # True
print(base_model.summary())  # this changes to show no trainable parameters, but that is wrong given the output of the previous np.all(bc)
base_model.compile(optimizer=tf.keras.optimizers.Adam(lr=0.001),
                   loss='categorical_crossentropy',
                   metrics=['accuracy'])
for layer in base_model.layers:
    ac.append(layer.trainable)
print(np.all(ac))  # True
print(base_model.summary())  # this changes to show no trainable parameters, but that is wrong given the output of the previous np.all(ac)
In light of this - What is the expected behavior and purpose of model.trainable=False in tensorflow keras?
I think this issue could help: https://github.com/tensorflow/tensorflow/issues/29535
If you are looking for a way to not update some weights in your model, I would suggest using the var_list parameter of the minimize function of your Optimizer.
For some reason, when creating a model from Keras, TensorFlow marks all tf.Variables as trainable, and since they are all tensors we are not able to flip the value back to False.
What I do in my code is create scope names for all pretrained models and then loop over the trainable variables, collecting every variable that is not from my pretrained model.
trainable_variables = []
variables_collection = tf.get_collection('learnable_variables')
for layer in tf.trainable_variables():
    if 'vgg_model' not in layer.name:
        trainable_variables.append(layer)
        tf.add_to_collection('learnable_variables', layer)
grad = tf.train.GradientDescentOptimizer(lr)
train_step = grad.minimize(tf.reduce_sum([loss]), var_list=trainable_variables)
Watch out for the global initializer as well, since it will overwrite your pretrained weights. You can solve that by using tf.variables_initializer and passing it the list of variables you want to initialize.
sess.run(tf.variables_initializer(variables_collection))
Sources I used when trying to solve this problem:
Is it possible to make a trainable variable not trainable?
TensorFlow: Using tf.global_variables_initializer() after partially loading pre-trained weights

Keras models in tensorflow

I'm building an image processing network in TensorFlow and I want to make use of a texture loss. Texture loss seems simple to implement if you have a pretrained model loaded.
I'm using TF to build the computational graph for my model and I want to incorporate the Keras.applications.VGG19 model to get the output of layer 'block4_conv4'.
The problem is: I have two TF tensors, target and result, from my main model. How do I feed them into the Keras VGG19 in the same session to compute their difference and use it in the main loss for my model?
It seems the following code does the trick:
import tensorflow as tf
from keras.applications.vgg19 import VGG19

with tf.variable_scope("") as scope:
    phi_func = VGG19(include_top=False, weights=None, input_shape=(128, 128, 3))
    text_1 = phi_func(predicted)
    scope.reuse_variables()
    text_2 = phi_func(x)
    text_loss = tf.reduce_mean((text_1 - text_2)**2)
Right after the session is created, I call phi_func.load_weights(path) to initialize the weights.
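If you specifically want the 'block4_conv4' activations mentioned in the question rather than the final VGG19 feature map, a sketch along the same lines (predicted and x being the same two tensors as above) is to wrap that layer in its own Model:
from keras.models import Model

vgg = VGG19(include_top=False, weights=None, input_shape=(128, 128, 3))
phi_block4 = Model(vgg.input, vgg.get_layer('block4_conv4').output)

# Apply the same sub-network to both tensors and take the mean squared difference.
text_1 = phi_block4(predicted)
text_2 = phi_block4(x)
text_loss = tf.reduce_mean(tf.square(text_1 - text_2))
As before, the VGG weights still have to be loaded (vgg.load_weights(path), or weights='imagenet') before the loss is meaningful.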