I have a trained EfficientNetB0-based model with saved weights in H5 format.
I want to add some preprocessing layers before the model, load the weights, and retrain it.
If I create a model like this:
inp = tf.keras.layers.Input(shape=[224,224,3])
noise = tf.keras.layers.GaussianNoise(stddev=10.)(inp)
feature_extractor = tf.keras.applications.EfficientNetB0(include_top=False, pooling="max")
features = feature_extractor(noise)
output1 = tf.keras.layers.Dense(100, activation="sigmoid")(features)
output2 = tf.keras.layers.Dense(10, activation="softmax")(output1)
model = tf.keras.models.Model(inp, [output1, output2])
I get this summary:
Layer (type) Output Shape Param #
=================================================================
input_27 (InputLayer) [(None, 224, 224, 3)] 0
_________________________________________________________________
gaussian_noise_13 (GaussianN (None, 224, 224, 3) 0
_________________________________________________________________
efficientnetb0 (Functional) (None, 1280) 4049571
_________________________________________________________________
dense (Dense) (None, 100) 128100
_________________________________________________________________
dense_1 (Dense) (None, 10) 1010
and I lose access to intermediate layers. I can't use the tf.keras.Sequential approach because my model has two outputs.
I want to retain the layer names inside EfficientNetB0 so that I can reload my weights. How do I do that?
So it looks like for the toy example I created above the answer is:
inp = tf.keras.layers.Input(shape=[224,224,3])
noise = tf.keras.layers.GaussianNoise(stddev=10.)(inp)
feature_extractor = tf.keras.applications.EfficientNetB0(input_tensor=noise, include_top=False, pooling="max")
output1 = tf.keras.layers.Dense(100, activation="sigmoid")(feature_extractor.output)
output2 = tf.keras.layers.Dense(10, activation="softmax")(output1)
model = tf.keras.models.Model(inp, [output1, output2])
However, I'm actually working with a custom model class that doesn't have that argument in the constructor...
Without the input_tensor argument, is there another way to do this?
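One approach that may work without it: the sub-model keeps its internal layer names even after being wrapped, so you can load the H5 weights into the sub-model before composing it. A minimal sketch, where MyCustomModel is a hypothetical stand-in for your custom class and weights.h5 is assumed to hold weights saved under matching layer names:

# MyCustomModel is a hypothetical stand-in for the custom model class.
feature_extractor = MyCustomModel()

# Load the saved weights into the sub-model before wrapping it; by_name
# matching requires the H5 format and identical layer names.
feature_extractor.load_weights("weights.h5", by_name=True, skip_mismatch=True)

# Wrapping afterwards does not rename the sub-model's internal layers.
inp = tf.keras.layers.Input(shape=[224, 224, 3])
noise = tf.keras.layers.GaussianNoise(stddev=10.)(inp)
features = feature_extractor(noise)
output1 = tf.keras.layers.Dense(100, activation="sigmoid")(features)
output2 = tf.keras.layers.Dense(10, activation="softmax")(output1)
model = tf.keras.models.Model(inp, [output1, output2])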
I am a newbie to all this so please be kind to this question :)
What I am trying to do is train a MobileNet classifier using the transfer learning technique and then apply the Grad-CAM technique to understand what my model is looking at.
I created a model:
# base_model and preprocess_input are not defined in the original post;
# these definitions are assumptions that match the summary below.
IMG_SHAPE = (224, 224, 3)
preprocess_input = tf.keras.applications.mobilenet_v2.preprocess_input
base_model = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SHAPE, include_top=False, weights="imagenet")
base_model.trainable = False  # matches the 2,257,984 non-trainable params below

input_layer = tf.keras.layers.Input(shape=IMG_SHAPE)
x = preprocess_input(input_layer)
y = base_model(x)
y = tf.keras.layers.GlobalAveragePooling2D()(y)
y = tf.keras.layers.Dropout(0.2)(y)
outputs = tf.keras.layers.Dense(5)(y)
model = tf.keras.Model(inputs=input_layer, outputs=outputs)
model.summary()
model summary:
Model: "functional_2"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_3 (InputLayer) [(None, 224, 224, 3)] 0
_________________________________________________________________
tf_op_layer_RealDiv_1 (Tenso [(None, 224, 224, 3)] 0
_________________________________________________________________
tf_op_layer_Sub_1 (TensorFlo [(None, 224, 224, 3)] 0
_________________________________________________________________
mobilenetv2_1.00_224 (Functi (None, 7, 7, 1280) 2257984
_________________________________________________________________
global_average_pooling2d_1 ( (None, 1280) 0
_________________________________________________________________
dropout_1 (Dropout) (None, 1280) 0
_________________________________________________________________
dense_1 (Dense) (None, 5) 6405
=================================================================
Total params: 2,264,389
Trainable params: 6,405
Non-trainable params: 2,257,984
_________________________________________________________________
I passed it to the Grad-CAM algorithm, but the algorithm is not able to find the last convolutional layer.
Plausible solution:
If, instead of having an encapsulated 'mobilenetv2_1.00_224' layer, I could have the unwrapped layers of MobileNet added to the model, the Grad-CAM algorithm would be able to find that last layer.
Problem
I am not able to create a model that has the data augmentation and preprocessing layers added on top of MobileNet's unwrapped layers; the sketch below shows what I mean.
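A minimal sketch of the kind of model I am after (untested; it borrows the input_tensor trick from the EfficientNet question above, and assumes preprocess_input is tf.keras.applications.mobilenet_v2.preprocess_input):

inp = tf.keras.layers.Input(shape=IMG_SHAPE)
x = tf.keras.applications.mobilenet_v2.preprocess_input(inp)
# Passing input_tensor inlines MobileNetV2's layers into this graph,
# so every conv layer remains individually visible to Grad-CAM.
base = tf.keras.applications.MobileNetV2(
    input_tensor=x, include_top=False, weights="imagenet")
y = tf.keras.layers.GlobalAveragePooling2D()(base.output)
y = tf.keras.layers.Dropout(0.2)(y)
outputs = tf.keras.layers.Dense(5)(y)
model = tf.keras.Model(inputs=inp, outputs=outputs)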
Thanks in advance
Regards
Ankit
@skruff, see if this helps:
def make_gradcam_heatmap(img_array, model, last_conv_layer_name, pred_index=None):
    # First, we create a model that maps the input image to the activations
    # of the last conv layer as well as the output predictions
    grad_model = tf.keras.models.Model(
        [model.inputs], [model.get_layer(last_conv_layer_name).output, model.output]
    )

    # Then, we compute the gradient of the top predicted class for our input image
    # with respect to the activations of the last conv layer
    with tf.GradientTape() as tape:
        last_conv_layer_output, preds = grad_model(img_array)
        if pred_index is None:
            pred_index = tf.argmax(preds[0])
        class_channel = preds[:, pred_index]

    # This is the gradient of the output neuron (top predicted or chosen)
    # with regard to the output feature map of the last conv layer
    grads = tape.gradient(class_channel, last_conv_layer_output)

    # This is a vector where each entry is the mean intensity of the gradient
    # over a specific feature map channel
    pooled_grads = tf.reduce_mean(grads, axis=(0, 1, 2))

    # We multiply each channel in the feature map array
    # by "how important this channel is" with regard to the top predicted class
    # then sum all the channels to obtain the heatmap class activation
    last_conv_layer_output = last_conv_layer_output[0]
    heatmap = last_conv_layer_output @ pooled_grads[..., tf.newaxis]
    heatmap = tf.squeeze(heatmap)

    # For visualization purposes, we also normalize the heatmap between 0 & 1
    heatmap = tf.maximum(heatmap, 0) / tf.math.reduce_max(heatmap)
    return heatmap.numpy()
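A usage sketch, assuming the model was rebuilt with MobileNetV2's layers unwrapped (as sketched in the question above), so that 'Conv_1', the name of the final convolution in keras.applications MobileNetV2, is directly visible to model.get_layer:

import numpy as np

# Hypothetical input: a batch of one 224x224 RGB image,
# with values as the model expects them.
img_array = np.random.rand(1, 224, 224, 3).astype("float32")

heatmap = make_gradcam_heatmap(img_array, model, "Conv_1")
print(heatmap.shape)  # (7, 7): one importance value per spatial location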
How to show all layers in a tensorflow model with the model base?
base_model = keras.applications.MobileNetV3Small(
    input_shape=model_input_shape,
    include_top=False,
    weights="imagenet",
)

# =================== build model
model = keras.Sequential(
    [
        keras.Input(shape=image_shape),
        preprocessing.Resizing(*model_input_shape[:2]),
        preprocessing.Rescaling(1.0 / 255),
        base_model,
        layers.GlobalAveragePooling2D(),
        # missing dropout
        layers.Dense(1, activation="sigmoid"),
    ]
)
model.summary()
The output is this:
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
resizing (Resizing) (None, 224, 224, 3) 0
_________________________________________________________________
rescaling_1 (Rescaling) (None, 224, 224, 3) 0
_________________________________________________________________
MobilenetV3small (Functional (None, 7, 7, 1024) 1529968 <---------- why can't I see all layers here?
_________________________________________________________________
global_average_pooling2d (Gl (None, 1024) 0
_________________________________________________________________
dense (Dense) (None, 1) 1025
How do I show all layers?
for layer in model.layers:
    print(layer)
The above has the same problem. What am I doing wrong?
In such a setup, the base_model acts as a single layer, i.e. it becomes nested. To inspect it, you can try either:
model.layers[2].summary()

for i, layer in enumerate(model.layers):
    if i == 2:
        for nested_layer in layer.layers:
            print(nested_layer)
or, more intuitively, you can use a recursive helper like this:
def summary_plus(layer, i=0):
    if hasattr(layer, 'layers'):
        if i != 0:
            layer.summary()
        for l in layer.layers:
            i += 1
            summary_plus(l, i=i)

summary_plus(model)
or you can use the plot_model function:
keras.utils.plot_model(
    model,
    expand_nested=True  # <- make it True
)
Update 1: I raised an issue regarding this, Keras #15239. Hopefully, it will be solved soon.
Update 2: model.summary now has an expand_nested parameter. #15251
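So with a release that includes that change, expanding the nested summary is a one-liner:

model.summary(expand_nested=True)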
import keras
import numpy as np

class ActivationLogger(keras.callbacks.Callback):
    def set_model(self, model):
        self.model = model  # inform the callback of what model we will be calling
        layer_outputs = [layer.output for layer in model.layers]
        # returns the activation of every layer
        self.activations_model = keras.models.Model(model.input, layer_outputs)

    def on_epoch_end(self, epoch, logs=None):
        if self.validation_data is None:
            raise RuntimeError("Requires validation_data")
        validation_sample = self.validation_data[0][0:1]
        # computes the activations at the end of every epoch
        activations = self.activations_model.predict(validation_sample)
        f = open('activations_at_epoch_' + str(epoch) + '.npz', 'wb')
        np.savez(f, activations)
        f.close()
While I was reading this code to create custom callbacks, I couldn't understand a few lines of it. I know what callbacks are. What I understood from the above code is that we inherit from the superclass keras.callbacks.Callback, and in the set_model function we inform the callback of what model it will be calling. What I am not able to understand is the line below: why does keras.models.Model take model.input?
self.activations_model = keras.models.Model(model.input, layer_outputs)
and the line activations = self.activations_model.predict(validation_sample)
The remaining lines just save the numpy arrays to disk. Also, once the callback is created, is it called on every epoch?
Let's say I have a simple model:
from keras.models import Sequential
from keras.layers import Dense, Dropout

model = Sequential()
# input_shape=(784,) matches the summary below; (784, 1) would give
# each Dense layer a 3-D output instead.
model.add(Dense(32, input_shape=(784,), activation='relu'))
model.add(Dense(16, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(4, activation='softmax'))

cb = ActivationLogger()
cb.set_model(model)
Now let me go through the set_model function line by line.
first line:
self.model = model
self.model.summary() prints:
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense (Dense) (None, 32) 25120
_________________________________________________________________
dense_1 (Dense) (None, 16) 528
_________________________________________________________________
dropout (Dropout) (None, 16) 0
_________________________________________________________________
dense_2 (Dense) (None, 4) 68
=================================================================
Total params: 25,716
Trainable params: 25,716
Non-trainable params: 0
second line:
layer_outputs = [layer.output for layer in model.layers]
print(layer_outputs) gives:
[<tf.Tensor 'dense/Relu:0' shape=(None, 32) dtype=float32>, <tf.Tensor 'dense_1/Relu:0' shape=(None, 16) dtype=float32>, <tf.Tensor 'dropout/cond/Identity:0' shape=(None, 16) dtype=float32>, <tf.Tensor 'dense_2/Softmax:0' shape=(None, 4) dtype=float32>]
layer_outputs contains the output tensors of all the layers of the model,
and the third line:
self.activations_model = keras.models.Model(model.input,layer_outputs)
This line creates a model whose input corresponds to the original model's input (model.input gives the input tensor of a model; you can likewise check a model's output tensor using model.output),
so self.activations_model is a model with one input (shape (784,) in this case) and an output at every layer,
so when you feed any input through this model, it will give you a list of outputs, one for every layer.
Normally the output would be a single numpy array of shape (None, 4) (taking the main Sequential model),
but self.activations_model will give you a list of numpy arrays. So in the line
activations = self.activations_model.predict(validation_sample)
activations just contains the predictions of self.activations_model, which are nothing but a list of numpy arrays:
[(None, 32) (output of the first layer), (None, 16) (output of the second), (None, 16) (dropout layer), (None, 4) (final)]
I would suggest you read about the Keras functional API, which is used to build models with multiple inputs and outputs.
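For instance, here is a minimal functional-API sketch with one input and two outputs, which is exactly the shape of the activations_model trick above (layer sizes are arbitrary):

import keras
import numpy as np
from keras import layers

inp = keras.Input(shape=(784,))
hidden = layers.Dense(32, activation='relu')(inp)
# Two heads reading from the same hidden tensor make this a multi-output model.
out_a = layers.Dense(4, activation='softmax')(hidden)
out_b = layers.Dense(1)(hidden)
multi_model = keras.Model(inp, [out_a, out_b])

# predict() returns one array per output, just as activations_model
# returns one array per layer of the original model.
preds_a, preds_b = multi_model.predict(np.zeros((1, 784)))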
For example:
BUFFER_SIZE = 10000
BATCH_SIZE = 64

train_dataset = train_dataset.shuffle(BUFFER_SIZE)
train_dataset = train_dataset.padded_batch(BATCH_SIZE, tf.compat.v1.data.get_output_shapes(train_dataset))
test_dataset = test_dataset.padded_batch(BATCH_SIZE, tf.compat.v1.data.get_output_shapes(test_dataset))

def pad_to_size(vec, size):
    zeros = [0] * (size - len(vec))
    vec.extend(zeros)
    return vec

...

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(encoder.vocab_size, 64),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64, return_sequences=False)),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid')
])

print(model.summary())
The print reads as:
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
embedding (Embedding) (None, None, 64) 523840
_________________________________________________________________
bidirectional (Bidirectional (None, 128) 66048
_________________________________________________________________
dense (Dense) (None, 64) 8256
_________________________________________________________________
dense_1 (Dense) (None, 1) 65
=================================================================
Total params: 598,209
Trainable params: 598,209
Non-trainable params: 0
I have the following questions:
1) For the embedding layer, why is the output shape (None, None, 64)? I understand '64' is the vector length. Why are the other two None?
2) How is the output shape of the bidirectional layer (None, 128)? Why is it 128?
For the embedding layer, why is the output shape (None, None, 64)? I understand '64' is the vector length. Why are the other two None?
If you don't define an input_shape for the first layer of the Sequential model, Keras defaults to input_shape=(None,), which produces a (None, None) input tensor including the batch dimension.
If you pass an input tensor of size (None, None) to an embedding layer, it produces a (None, None, 64) tensor, assuming the embedding dimension is 64. The first None is the batch dimension and the second is the time dimension (it refers to the input_length parameter). That's why you get a (None, None, 64) sized output.
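You can check this quickly; the vocab size of 8185 below is inferred from the 523,840 embedding parameters in the summary above (8185 × 64):

import tensorflow as tf

emb = tf.keras.layers.Embedding(input_dim=8185, output_dim=64)

# A concrete eager call: a batch of 2 sequences, each of length 7.
print(emb(tf.zeros((2, 7), dtype=tf.int32)).shape)  # (2, 7, 64)

# Symbolically, with no input_shape given, both the batch and the time
# dimensions are unknown, hence (None, None, 64) in the summary.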
How is the output shape of the bidirectional layer (None, 128)? Why is it 128?
Here, you have a Bidirectional LSTM. Your LSTM layer produces a (None, 64) sized output (when return_sequences=False). When you have a Bidirectional layer, it is like having two LSTM layers (one going forward, the other going backwards). And you have the default merge_mode of 'concat', meaning the two output states from the forward and backward layers will be concatenated. This gives you a (None, 128) sized output.
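A quick shape check; with merge_mode='sum' instead of the default 'concat', the last dimension would stay 64:

import tensorflow as tf

bidi = tf.keras.layers.Bidirectional(
    tf.keras.layers.LSTM(64, return_sequences=False))  # merge_mode='concat'
x = tf.random.uniform((2, 10, 64))  # (batch, time, features)
print(bidi(x).shape)  # (2, 128): forward 64 and backward 64 concatenated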
I am trying to develop a siamese network for simple face verification (and recognition in the second stage). I have a network in place that I managed to train, but I am a bit puzzled when it comes to how to save and restore the model and how to make predictions with the trained model. I am hoping that an experienced person in the domain can help me make progress.
Here is how I create my siamese network, to begin with...
model = ResNet50(weights='imagenet')  # get the original ResNet50 model
model.layers.pop()                    # remove the last layer
for layer in model.layers:
    layer.trainable = False           # do not train any of the original layers

x = model.get_layer('flatten_1').output
model_out = Dense(128, activation='relu', name='model_out')(x)
model_out = Lambda(lambda x: K.l2_normalize(x, axis=-1))(model_out)
new_model = Model(inputs=model.input, outputs=model_out)

# At this point, a new layer (with 128 units) added and normalization applied.
# Now create siamese network on top of this
anchor_in = Input(shape=(224, 224, 3))
positive_in = Input(shape=(224, 224, 3))
negative_in = Input(shape=(224, 224, 3))

anchor_out = new_model(anchor_in)
positive_out = new_model(positive_in)
negative_out = new_model(negative_in)

merged_vector = concatenate([anchor_out, positive_out, negative_out], axis=-1)

# Define the trainable model
siamese_model = Model(inputs=[anchor_in, positive_in, negative_in],
                      outputs=merged_vector)

siamese_model.compile(optimizer=Adam(lr=.0001),
                      loss=triplet_loss,
                      metrics=[dist_between_anchor_positive,
                               dist_between_anchor_negative])
And I train the siamese_model. When I train it, if I interpret the results right, it is not really training the underlying model; it just trains the new siamese network (essentially, just the last layer is trained).
But this model has 3 input streams. After the training, I need to save this model in such a way that it takes just 1 or 2 inputs, so that I can perform predictions by calculating the distance between 2 given images. How do I save this model and reuse it now?
Thank you in advance!
ADDENDUM:
In case you wonder, here is the summary of the siamese model.
siamese_model.summary()
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_2 (InputLayer) (None, 224, 224, 3) 0
__________________________________________________________________________________________________
input_3 (InputLayer) (None, 224, 224, 3) 0
__________________________________________________________________________________________________
input_4 (InputLayer) (None, 224, 224, 3) 0
__________________________________________________________________________________________________
model_1 (Model) (None, 128) 23849984 input_2[0][0]
input_3[0][0]
input_4[0][0]
__________________________________________________________________________________________________
concatenate_1 (Concatenate) (None, 384) 0 model_1[1][0]
model_1[2][0]
model_1[3][0]
==================================================================================================
Total params: 23,849,984
Trainable params: 262,272
Non-trainable params: 23,587,712
__________________________________________________________________________________________________
You can use the code below to save your model's weights:
siamese_model.save_weights(MODEL_WEIGHTS_FILE)
And then to load them you need to use:
siamese_model.load_weights(MODEL_WEIGHTS_FILE)
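For the one- or two-input prediction part: you don't need the three-input wrapper at inference time. new_model (the shared embedding network from the question) is the piece whose weights were trained through the siamese wrapper, so after loading the weights you can embed two images separately and compare them. A sketch, assuming img_a and img_b are preprocessed arrays of shape (1, 224, 224, 3):

import numpy as np

def distance(img_a, img_b):
    # new_model maps one image to its 128-d embedding; the Lambda layer
    # already L2-normalizes it, so Euclidean distance is a sensible
    # verification score.
    emb_a = new_model.predict(img_a)
    emb_b = new_model.predict(img_b)
    return np.linalg.norm(emb_a - emb_b)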
Thanks