I am trying to use Grad-CAM (following this PyImageSearch tutorial: https://www.pyimagesearch.com/2020/03/09/grad-cam-visualize-class-activation-maps-with-keras-tensorflow-and-deep-learning/) on a CNN built with transfer learning.
In particular, I am using a simple CNN for a regression problem. I used MobileNetV2 with a global average pooling layer and a Dense layer with one unit on top, as shown below:
from tensorflow import keras
from tensorflow.keras.applications import MobileNetV2

base_model = MobileNetV2(include_top=False, input_shape=(224, 224, 3), weights='imagenet')
base_model.trainable = False

inputs = keras.Input(shape=(224, 224, 3))
x = base_model(inputs)
x = keras.layers.GlobalAveragePooling2D()(x)
outputs = keras.layers.Dense(1, activation="linear")(x)
model = keras.Model(inputs, outputs)
and the summary is:
Model: "model"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_2 (InputLayer) [(None, 224, 224, 3)] 0
_________________________________________________________________
mobilenetv2_1.00_224 (Model) (None, 7, 7, 1280) 2257984
_________________________________________________________________
global_average_pooling2d (Gl (None, 1280) 0
_________________________________________________________________
dense (Dense) (None, 1) 1281
=================================================================
Total params: 2,259,265
Trainable params: 1,281
Non-trainable params: 2,257,984
_________________________________________________________________
I initialize the CAM object with:
pred = 0.35
cam = GradCAM(model, pred, layerName='input_2')
where pred is the predicted output whose CAM I want to inspect; I also pass the layer name to refer to the input layer. Then I compute the heatmap for a sample image "img":
heatmap = cam.compute_heatmap(img)
Now, let's focus on a part of the implementation of the function compute_heatmap from PyImageSearch:
# record operations for automatic differentiation
with tf.GradientTape() as tape:
    # cast the image tensor to a float-32 data type, pass the
    # image through the gradient model, and grab the loss
    # associated with the specific class index
    inputs = tf.cast(image, tf.float32)
    (convOutputs, predictions) = gradModel(inputs)
    # loss = predictions[:, self.classIdx]  # original from PyImageSearch
    loss = predictions[:]  # modified by me, as I have only 1 output unit

# use automatic differentiation to compute the gradients
grads = tape.gradient(loss, convOutputs)
The problem here is that the gradient grads is None.
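For context, the gradient model in that implementation is built roughly like this (a sketch based on the tutorial; layerName is the name passed to the GradCAM constructor, so here it selects the input layer):

# Sketch of how the tutorial builds its gradient model: the gradients are
# taken with respect to the output of the layer named `layerName`.
gradModel = tf.keras.models.Model(
    inputs=[self.model.inputs],
    outputs=[self.model.get_layer(self.layerName).output,
             self.model.output])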
I thought the problem might lie in the network structure (everything works when I reproduce the classification example from the website), but I can't figure out what is wrong with this network used for regression!
Could you please help me?
Related
I have a trained EfficientNetB0-based model with weights saved in H5 format.
I want to add some preprocessing layers before the model, load the weights, and retrain it.
If I create a model like this:
inp = tf.keras.layers.Input(shape=[224,224,3])
noise = tf.keras.layers.GaussianNoise(stddev=10.)(inp)
feature_extractor = tf.keras.applications.EfficientNetB0(include_top=False, pooling="max")
features = feature_extractor(noise)
output1 = tf.keras.layers.Dense(100, activation="sigmoid")(features)
output2 = tf.keras.layers.Dense(10, activation="softmax")(output1)
model = tf.keras.models.Model(inp, [output1, output2])
I get this summary:
Layer (type) Output Shape Param #
=================================================================
input_27 (InputLayer) [(None, 224, 224, 3)] 0
_________________________________________________________________
gaussian_noise_13 (GaussianN (None, 224, 224, 3) 0
_________________________________________________________________
efficientnetb0 (Functional) (None, 1280) 4049571
_________________________________________________________________
dense (Dense) (None, 100) 128100
_________________________________________________________________
dense_1 (Dense) (None, 10) 1010
and I lose access to intermediate layers. I can't use the tf.keras.Sequential approach because my model has two outputs.
I want to retain the layer names inside EfficientNetB0 so that I can reload my weights. How do I do that?
So it looks like, for the toy example I created above, the answer is:
inp = tf.keras.layers.Input(shape=[224,224,3])
noise = tf.keras.layers.GaussianNoise(stddev=10.)(inp)
feature_extractor = tf.keras.applications.EfficientNetB0(input_tensor=noise, include_top=False, pooling="max")
output1 = tf.keras.layers.Dense(100, activation="sigmoid")(feature_extractor.output)
output2 = tf.keras.layers.Dense(10, activation="softmax")(output1)
model = tf.keras.models.Model(inp, [output1, output2])
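With input_tensor, the EfficientNet layers are built directly on the outer graph, so they stay addressable by name from model itself. A quick check (the layer name is an assumption based on the standard Keras EfficientNetB0):

# The inner layers now belong to `model` rather than to a nested sub-model,
# e.g. EfficientNet's last conv layer (name assumed) is reachable directly:
print(model.get_layer('top_conv'))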
However, I'm actually working with a custom model class that doesn't have that argument in its constructor...
Without the input_tensor argument, is there another way to do this?
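One possible alternative, a sketch only (it assumes the H5 file stores weights under the standard EfficientNetB0 layer names; "weights.h5" is a placeholder path): keep the nested structure and load the weights into the inner sub-model by layer name.

# Hypothetical workaround: restore weights layer-by-layer by name into the
# nested feature extractor; layers with no matching name are skipped.
feature_extractor.load_weights("weights.h5", by_name=True, skip_mismatch=True)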
I am a newbie to all this, so please be kind with this question :)
What I am trying to do is train a MobileNet classifier using the transfer learning technique and then apply the Grad-CAM technique to understand what my model is looking at.
I created the model:
# (assumed setup, consistent with the summary below)
preprocess_input = tf.keras.applications.mobilenet_v2.preprocess_input
base_model = tf.keras.applications.MobileNetV2(input_shape=IMG_SHAPE,
                                               include_top=False,
                                               weights='imagenet')
base_model.trainable = False

input_layer = tf.keras.layers.Input(shape=IMG_SHAPE)
x = preprocess_input(input_layer)
y = base_model(x)
y = tf.keras.layers.GlobalAveragePooling2D()(y)
y = tf.keras.layers.Dropout(0.2)(y)
outputs = tf.keras.layers.Dense(5)(y)
model = tf.keras.Model(inputs=input_layer, outputs=outputs)
model.summary()
model summary:
Model: "functional_2"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_3 (InputLayer) [(None, 224, 224, 3)] 0
_________________________________________________________________
tf_op_layer_RealDiv_1 (Tenso [(None, 224, 224, 3)] 0
_________________________________________________________________
tf_op_layer_Sub_1 (TensorFlo [(None, 224, 224, 3)] 0
_________________________________________________________________
mobilenetv2_1.00_224 (Functi (None, 7, 7, 1280) 2257984
_________________________________________________________________
global_average_pooling2d_1 ( (None, 1280) 0
_________________________________________________________________
dropout_1 (Dropout) (None, 1280) 0
_________________________________________________________________
dense_1 (Dense) (None, 5) 6405
=================================================================
Total params: 2,264,389
Trainable params: 6,405
Non-trainable params: 2,257,984
_________________________________________________________________
I passed it to the Grad-CAM algorithm, but the algorithm is not able to find the last convolutional layer.
Plausible solution:
If, instead of the encapsulated 'mobilenetv2_1.00_224' layer, I could have the unwrapped MobileNet layers added to the model, the Grad-CAM algorithm would be able to find that last layer.
Problem:
I am not able to create a model that has the data augmentation and preprocessing layers added on top of the unwrapped MobileNet layers.
Thanks in advance
Regards
Ankit
#skruff see if this helps
def make_gradcam_heatmap(img_array, model, last_conv_layer_name, pred_index=None):
    # First, we create a model that maps the input image to the activations
    # of the last conv layer as well as the output predictions
    grad_model = tf.keras.models.Model(
        [model.inputs], [model.get_layer(last_conv_layer_name).output, model.output]
    )

    # Then, we compute the gradient of the top predicted class for our input image
    # with respect to the activations of the last conv layer
    with tf.GradientTape() as tape:
        last_conv_layer_output, preds = grad_model(img_array)
        if pred_index is None:
            pred_index = tf.argmax(preds[0])
        class_channel = preds[:, pred_index]

    # This is the gradient of the output neuron (top predicted or chosen)
    # with regard to the output feature map of the last conv layer
    grads = tape.gradient(class_channel, last_conv_layer_output)

    # This is a vector where each entry is the mean intensity of the gradient
    # over a specific feature map channel
    pooled_grads = tf.reduce_mean(grads, axis=(0, 1, 2))

    # We multiply each channel in the feature map array
    # by "how important this channel is" with regard to the top predicted class
    # then sum all the channels to obtain the heatmap class activation
    last_conv_layer_output = last_conv_layer_output[0]
    heatmap = last_conv_layer_output @ pooled_grads[..., tf.newaxis]
    heatmap = tf.squeeze(heatmap)

    # For visualization purposes, we also normalize the heatmap between 0 and 1
    heatmap = tf.maximum(heatmap, 0) / tf.math.reduce_max(heatmap)
    return heatmap.numpy()
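A hypothetical call, assuming the MobileNet layers have been unwrapped into the model (as in the plausible solution above) so that get_layer can see the last convolutional layer, which is named 'Conv_1' in Keras's MobileNetV2:

# Hypothetical usage; check model.summary() for the actual layer name.
# img_array is a preprocessed batch of shape (1, 224, 224, 3).
heatmap = make_gradcam_heatmap(img_array, model, "Conv_1")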
I have the following sample model:
from tensorflow.keras import models
from tensorflow.keras import layers
sample_model = models.Sequential()
sample_model.add(layers.Dense(32, input_shape=(4,)))
sample_model.add(layers.Dense(16, input_shape = (44,)))
sample_model.compile(loss="binary_crossentropy",
                     optimizer="adam", metrics=["accuracy"])
Input for the model:
import numpy as np

sam_x = np.random.rand(10, 4)
sam_y = np.array([0, 1, 1, 0, 1, 0, 0, 1, 0, 1])
sample_model.fit(sam_x, sam_y)
The confusion: fit should have thrown a shape-mismatch error, since the expected input shape for the 2nd Dense layer is given as (None, 44), while the output of the 1st Dense layer (which is the input of the 2nd) has shape (None, 32). But it ran successfully.
I don't understand why there was no error. Any clarification would be helpful.
The input_shape keyword argument has an effect only on the first layer of a Sequential. The shape of the input of the other layers will be derived from their previous layer.
That behaviour is hinted at in the docs of tf.keras.layers.InputLayer:
When using InputLayer with Keras Sequential model, it can be skipped by moving the input_shape parameter to the first layer after the InputLayer.
And in the Sequential Model guide.
The behaviour can be confirmed by looking at the source of the Sequential.add method:
if not self._layers:
    if isinstance(layer, input_layer.InputLayer):
        # Case where the user passes an Input or InputLayer layer via `add`.
        set_inputs = True
    else:
        batch_shape, dtype = training_utils.get_input_shape_and_dtype(layer)
        if batch_shape:
            # Instantiate an input layer.
            x = input_layer.Input(
                batch_shape=batch_shape, dtype=dtype, name=layer.name + '_input')
            # This will build the current layer
            # and create the node connecting the current layer
            # to the input layer we just created.
            layer(x)
            set_inputs = True
If there are no layers yet in the model, an Input is added to the model with the shape derived from the first layer. This is done only when no layer is present yet in the model.
That shape is either fully known (if input_shape has been passed to the first layer of the model) or will be fully known once the model is built (for example, with a call to model.build(input_shape)).
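A quick check confirms this behaviour (a minimal sketch):

from tensorflow.keras import models, layers

m = models.Sequential()
m.add(layers.Dense(32, input_shape=(4,)))
m.add(layers.Dense(16, input_shape=(44,)))  # this input_shape is ignored

# The second layer's actual input shape is derived from the first layer:
print(m.layers[1].input_shape)  # (None, 32), not (None, 44)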
The point is that after taking the input shape from the first layer, Keras won't check or use any other input_shape declared inside the same model. For example, if you write your model the following way:
sample_model.add(layers.Dense(32, input_shape=(4,)))
sample_model.add(layers.Dense(16, input_shape = (44,)))
sample_model.add(layers.Dense(8, input_shape = (32,)))
the program will always use the first declared input shape and discard the rest. So, if you start your first layer with input_shape=(44,), you need to pass inputs with exactly that many features to your model:
sam_x = np.random.rand(10,44)
sam_y = np.array([0,1,1,0,1,0,0,1,0,1,])
sample_model.fit(sam_x,sam_y)
Additionally, with the Functional API, unlike the Sequential model, you must create and define a standalone Input layer that specifies the shape of the input data. It's not learnable but simply a spec layer, a kind of gateway for the input data into the model. That means that even if we define input_shape inside the other layers, it will all be discarded. For example:
inputs = keras.Input(shape=(4,))
dense = layers.Dense(64, input_shape=(8,))  # input_shape is discarded
x = dense(inputs)
x = layers.Dense(64, input_shape=(16,))(x)  # input_shape is discarded
outputs = layers.Dense(10)(x)
model = keras.Model(inputs=inputs, outputs=outputs, name="mnist_model")
Here is a more complex example with Conv2D and MNIST.
encoder_input = keras.Input(shape=(28, 28, 1),)
x = layers.Conv2D(16, 3, activation="relu", input_shape=[32,32,3])(encoder_input)
x = layers.Conv2D(32, 3, activation="relu", input_shape=[64,64,3])(x)
x = layers.MaxPooling2D(3)(x)
x = layers.Conv2D(32, 3, activation="relu", input_shape=[224,321,3])(x)
x = layers.Conv2D(16, 3, activation="relu", input_shape=[420,32,3])(x)
x = layers.GlobalMaxPooling2D()(x)
out = layers.Dense(10, activation='softmax')(x)
encoder = keras.Model(encoder_input, out, name="encoder")
encoder.summary()
Model: "encoder"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_15 (InputLayer) [(None, 28, 28, 1)] 0
_________________________________________________________________
conv2d_8 (Conv2D) (None, 26, 26, 16) 160
_________________________________________________________________
conv2d_9 (Conv2D) (None, 24, 24, 32) 4640
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 8, 8, 32) 0
_________________________________________________________________
conv2d_10 (Conv2D) (None, 6, 6, 32) 9248
_________________________________________________________________
conv2d_11 (Conv2D) (None, 4, 4, 16) 4624
_________________________________________________________________
global_max_pooling2d_2 (Glob (None, 16) 0
_________________________________________________________________
dense_56 (Dense) (None, 10) 170
=================================================================
Total params: 18,842
Trainable params: 18,842
Non-trainable params: 0
def pre_process(image, label):
    # scale to [0, 1), add a channel axis, and one-hot encode the labels
    return ((image / 256)[..., None].astype('float32'),
            tf.keras.utils.to_categorical(label, num_classes=10))

(x, y), (_, _) = tf.keras.datasets.mnist.load_data('mnist')
x, y = pre_process(x, y)  # the loss below expects one-hot labels

encoder.compile(
    loss=tf.keras.losses.CategoricalCrossentropy(),
    metrics=tf.keras.metrics.CategoricalAccuracy(),
    optimizer=tf.keras.optimizers.Adam())

encoder.fit(x, y, batch_size=256)
4s 14ms/step - loss: 1.4303 - categorical_accuracy: 0.5279
I think Keras creates (or prepares to create) an additional input layer, but since the second Dense layer is added using model.add(), it is automatically connected to the layer before it, so the extra input layer stays unconnected and is not part of the model.
(I agree that it would be nice if Keras hinted at unconnected layers. I have sometimes created unconnected layers when using the functional API and then changed the inputs; Keras didn't remind me that I had skipped several layers, and I just wondered why summary() was so short...)
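To illustrate that last point, a contrived sketch: a layer that is created but never wired into the path from inputs to outputs is silently dropped.

from tensorflow import keras
from tensorflow.keras import layers

inputs = keras.Input(shape=(4,))
a = layers.Dense(8)(inputs)
b = layers.Dense(8)(a)        # built, but never used downstream
c = layers.Dense(8)(a)
outputs = layers.Dense(1)(c)

m = keras.Model(inputs, outputs)
m.summary()  # `b` does not appear, and no warning is raised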
I have a U-net network with a VGG16 encoder architecture and pre-trained ImageNet weights. Since my input images are grayscale, I added a convolutional layer with depth 3 before feeding the input to the U-net model.
Now I'm trying to get the output of an intermediate layer within the U-net network. I create an intermediate model whose output is the output of the layer I'm interested in. Here is my code:
# imports assumed: segmentation_models as sm, plus Input/Conv2D/Model from keras
base_model = sm.Unet('vgg16', encoder_weights='imagenet', classes=1, activation='sigmoid')

inp = Input(shape=(448, 224, 1))
l1 = Conv2D(3, (1, 1))(inp)
out = base_model(l1)
model = Model(inp, out)
model.summary()

intermediate_layer_model = Model(inputs=model.layers[0].input,
                                 outputs=model.get_layer('model_1').get_layer('center_block2_relu').output)
Here is the output:
Layer (type) Output Shape Param #
=================================================================
input_2 (InputLayer) (None, 448, 224, 1) 0
_________________________________________________________________
conv2d_1 (Conv2D) (None, 448, 224, 3) 6
_________________________________________________________________
model_1 (Model) multiple 23752273
=================================================================
Total params: 23,752,279
Trainable params: 23,748,247
Non-trainable params: 4,032
_________________________________________________________________
ValueError: Graph disconnected: cannot obtain value for tensor Tensor("input_1:0", shape=(?, ?, ?, 3), dtype=float32) at layer "input_1". The following previous layers were accessed without issue: []
It seems to me that the issue is that the U-net model has its own input layer (input_1), and I'm not supplying that when constructing intermediate_layer_model. However, I expect the intermediate model to take only the grayscale images as input, not to require an additional 3-channel input.
Any help would be appreciated.
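For what it's worth, one pattern that is often suggested for this kind of nesting (a sketch only; the layer names are taken from the summary above and not otherwise verified) is to build the intermediate model inside the nested U-net and then compose it with the grayscale stem:

# Possible workaround (sketch): build a sub-model of the nested U-net up to
# the layer of interest, then chain it after the trained 1x1 Conv2D stem.
unet = model.get_layer('model_1')
sub = Model(unet.input, unet.get_layer('center_block2_relu').output)

inp2 = Input(shape=(448, 224, 1))
x = model.get_layer('conv2d_1')(inp2)  # reuse the trained grayscale-to-RGB conv
intermediate_layer_model = Model(inp2, sub(x))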
I am trying to develop a siamese network for simple face verification (and recognition in a second stage). I have a network in place that I managed to train, but I am a bit puzzled about how to save and restore the model and how to make predictions with it. I hope someone experienced in the domain can help me make progress.
Here is how I create my siamese network, to begin with...
from keras.applications.resnet50 import ResNet50
from keras.models import Model
from keras.layers import Input, Dense, Lambda, concatenate
from keras.optimizers import Adam
from keras import backend as K

model = ResNet50(weights='imagenet')  # get the original ResNet50 model
model.layers.pop()  # remove the last layer
for layer in model.layers:
    layer.trainable = False  # do not train any of the original layers

x = model.get_layer('flatten_1').output
model_out = Dense(128, activation='relu', name='model_out')(x)
model_out = Lambda(lambda x: K.l2_normalize(x, axis=-1))(model_out)
new_model = Model(inputs=model.input, outputs=model_out)
# At this point, a new layer (with 128 units) added and normalization applied.
# Now create siamese network on top of this
anchor_in = Input(shape=(224, 224, 3))
positive_in = Input(shape=(224, 224, 3))
negative_in = Input(shape=(224, 224, 3))
anchor_out = new_model(anchor_in)
positive_out = new_model(positive_in)
negative_out = new_model(negative_in)
merged_vector = concatenate([anchor_out, positive_out, negative_out], axis=-1)
# Define the trainable model
siamese_model = Model(inputs=[anchor_in, positive_in, negative_in],
                      outputs=merged_vector)

siamese_model.compile(optimizer=Adam(lr=.0001),
                      loss=triplet_loss,
                      metrics=[dist_between_anchor_positive,
                               dist_between_anchor_negative])
And I train the siamese_model. If I interpret the results correctly, training it does not really train the underlying model; it just trains the new layers on top (essentially, only the last layer is trained).
But this model has 3 input streams. After training, I need to save the model in a form that takes just 1 or 2 inputs, so that I can make predictions by computing the distance between 2 given images. How do I save this model and reuse it that way?
Thank you in advance!
ADDENDUM:
In case you wonder, here is the summary of siamese model.
siamese_model.summary()
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_2 (InputLayer) (None, 224, 224, 3) 0
__________________________________________________________________________________________________
input_3 (InputLayer) (None, 224, 224, 3) 0
__________________________________________________________________________________________________
input_4 (InputLayer) (None, 224, 224, 3) 0
__________________________________________________________________________________________________
model_1 (Model) (None, 128) 23849984 input_2[0][0]
input_3[0][0]
input_4[0][0]
__________________________________________________________________________________________________
concatenate_1 (Concatenate) (None, 384) 0 model_1[1][0]
model_1[2][0]
model_1[3][0]
==================================================================================================
Total params: 23,849,984
Trainable params: 262,272
Non-trainable params: 23,587,712
__________________________________________________________________________________________________
You can use the code below to save your model's weights:
siamese_model.save_weights(MODEL_WEIGHTS_FILE)
And then, to load your model, you need to use:
siamese_model.load_weights(MODEL_WEIGHTS_FILE)
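Note that load_weights expects the architecture to be rebuilt exactly as during training. After loading, one way to run verification with just two images (a sketch; 'model_1' is the shared embedding sub-model shown in the summary above, and img_a/img_b are hypothetical inputs) is to pull out that sub-model and compare embeddings directly:

import numpy as np

siamese_model.load_weights(MODEL_WEIGHTS_FILE)

# The shared embedding network (named 'model_1' in the summary above)
embedding_model = siamese_model.get_layer('model_1')

emb_a = embedding_model.predict(img_a)  # img_a, img_b: shape (1, 224, 224, 3)
emb_b = embedding_model.predict(img_b)

# The embeddings are L2-normalized, so the Euclidean distance can be
# compared against a verification threshold directly
distance = np.linalg.norm(emb_a - emb_b, axis=-1)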
Thanks