Grad-CAM in Keras: ValueError: Graph disconnected: cannot obtain value for tensor Tensor "input_11_6:0", shape=(None, 150, 150, 3)

How do I perform Grad-CAM on a pretrained custom model?
How do I select last_conv_layer_name and classifier_layer_names?
What is their significance, and how do I pick the layer names?
Should I consider the Densenet121 sublayers, or treat densenet121 as one functional layer?
How do I perform Grad-CAM for this trained network?
These are the steps I tried:
#load model and custom metrics
dependencies = {'recall_m': recall_m, 'precision_m' : precision_m, 'f1_m' : f1_m }
model = keras.models.load_model("model_val_acc-73.33.h5", custom_objects = dependencies)
model.summary()
Model: "sequential_9"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
densenet121 (Functional) (None, 4, 4, 1024) 7037504
_________________________________________________________________
flatten (Flatten) (None, 16384) 0
_________________________________________________________________
dense_encoder (Dense) (None, 1024) 16778240
_________________________________________________________________
dropout_51 (Dropout) (None, 1024) 0
_________________________________________________________________
dense_2 (Dense) (None, 256) 262400
_________________________________________________________________
dropout_52 (Dropout) (None, 256) 0
_________________________________________________________________
dense_3 (Dense) (None, 128) 32896
_________________________________________________________________
dropout_53 (Dropout) (None, 128) 0
_________________________________________________________________
dense_4 (Dense) (None, 64) 8256
_________________________________________________________________
dropout_54 (Dropout) (None, 64) 0
_________________________________________________________________
dense_5 (Dense) (None, 32) 2080
_________________________________________________________________
dropout_55 (Dropout) (None, 32) 0
_________________________________________________________________
Final (Dense) (None, 2) 66
=================================================================
Total params: 24,121,442
Trainable params: 17,083,938
Non-trainable params: 7,037,504
This is the heatmap function:
### defining heat map
def make_gradcam_heatmap(img_array, model, last_conv_layer_name, classifier_layer_names):
    # First, we create a model that maps the input image to the activations
    # of the last conv layer
    last_conv_layer = model.get_layer(last_conv_layer_name)
    last_conv_layer_model = keras.Model(model.inputs, last_conv_layer.output)

    # Second, we create a model that maps the activations of the last conv
    # layer to the final class predictions
    classifier_input = keras.Input(shape=last_conv_layer.output.shape[1:])
    x = classifier_input
    for layer_name in classifier_layer_names:
        x = model.get_layer(layer_name)(x)
    classifier_model = keras.Model(classifier_input, x)

    # Then, we compute the gradient of the top predicted class for our input image
    # with respect to the activations of the last conv layer
    with tf.GradientTape() as tape:
        # Compute activations of the last conv layer and make the tape watch it
        last_conv_layer_output = last_conv_layer_model(img_array)
        tape.watch(last_conv_layer_output)
        # Compute class predictions
        preds = classifier_model(last_conv_layer_output)
        top_pred_index = tf.argmax(preds[0])
        top_class_channel = preds[:, top_pred_index]

    # This is the gradient of the top predicted class with regard to
    # the output feature map of the last conv layer
    grads = tape.gradient(top_class_channel, last_conv_layer_output)

    # This is a vector where each entry is the mean intensity of the gradient
    # over a specific feature map channel
    pooled_grads = tf.reduce_mean(grads, axis=(0, 1, 2))

    # We multiply each channel in the feature map array
    # by "how important this channel is" with regard to the top predicted class
    last_conv_layer_output = last_conv_layer_output.numpy()[0]
    pooled_grads = pooled_grads.numpy()
    for i in range(pooled_grads.shape[-1]):
        last_conv_layer_output[:, :, i] *= pooled_grads[i]

    # The channel-wise mean of the resulting feature map
    # is our heatmap of class activation
    heatmap = np.mean(last_conv_layer_output, axis=-1)

    # For visualization purpose, we will also normalize the heatmap between 0 & 1
    heatmap = np.maximum(heatmap, 0) / np.max(heatmap)
    return heatmap
This is the image input:
img_array = X_test[10] # 10th image sample
X_test[10].shape
#(150, 150, 3)
last_conv_layer_name = "densenet121"
classifier_layer_names = [ "dense_2", "dense_3", "dense_4", "dense_5", "Final" ]
# Generate class activation heatmap
heatmap = make_gradcam_heatmap(
    img_array, model, last_conv_layer_name, classifier_layer_names
)  # ===> I'm getting the error here, on this line
So what is wrong with last_conv_layer_name and classifier_layer_names?
Can anyone please explain this?
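A minimal sketch of one likely fix, assuming the layer names from the summary above. The densenet121 layer is itself a Functional model with its own internal Input tensor, so keras.Model(model.inputs, last_conv_layer.output) spans two disconnected graphs, which is what the "Graph disconnected ... input_11" error points at. The classifier list also skips flatten, dense_encoder and the dropouts, and the image lacks a batch axis:

import numpy as np
from tensorflow import keras

# Sketch of a fix: use the densenet121 sublayer itself as the first model
# (it already maps an image batch to its (4, 4, 1024) feature maps), instead
# of rebuilding keras.Model(model.inputs, last_conv_layer.output).
last_conv_layer_model = model.get_layer("densenet121")

# Every layer after densenet121 must appear here, in order; the original list
# skipped flatten, dense_encoder and the dropouts, so the shapes cannot chain.
classifier_layer_names = [
    "flatten", "dense_encoder", "dropout_51",
    "dense_2", "dropout_52", "dense_3", "dropout_53",
    "dense_4", "dropout_54", "dense_5", "dropout_55",
    "Final",
]
classifier_input = keras.Input(shape=(4, 4, 1024))  # shape from the summary above
x = classifier_input
for layer_name in classifier_layer_names:
    x = model.get_layer(layer_name)(x)
classifier_model = keras.Model(classifier_input, x)

# The network expects a batch axis: (150, 150, 3) -> (1, 150, 150, 3)
img_array = np.expand_dims(X_test[10], axis=0)

Inside make_gradcam_heatmap, the same one-line change applies: use model.get_layer(last_conv_layer_name) directly as last_conv_layer_model instead of building keras.Model(model.inputs, last_conv_layer.output); the rest of the function can stay as it is.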

Related

Tensorflow music generation with LSTM - model.fit not working

I am trying to train an LSTM model for music generation, but something seems to cause an error when calling model.fit().
# Compile the model
lstm.compile(loss='categorical_crossentropy', optimizer='rmsprop')
tf.keras.utils.plot_model(lstm, show_shapes=True)
# Train the model
lstm.fit([trainChords, trainDurations], [targetChords, targetDurations], epochs=500)
Model: "model_6"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_19 (InputLayer) [(None, 32)] 0 []
input_20 (InputLayer) [(None, 32)] 0 []
embedding_18 (Embedding) (None, 32, 64) 2048 ['input_19[0][0]']
embedding_19 (Embedding) (None, 32, 64) 2048 ['input_20[0][0]']
concatenate_9 (Concatenate) (None, 64, 64) 0 ['embedding_18[0][0]',
'embedding_19[0][0]']
lstm_8 (LSTM) (None, 64, 512) 1181696 ['concatenate_9[0][0]']
dense_18 (Dense) (None, 64, 256) 131328 ['lstm_8[0][0]']
dense_19 (Dense) (None, 64, 32) 8224 ['dense_18[0][0]']
dense_20 (Dense) (None, 64, 32) 8224 ['dense_18[0][0]']
==================================================================================================
Total params: 1,333,568
Trainable params: 1,333,568
Non-trainable params: 0
I get the error that some shapes are not compatible, but I don't know how to fix this.
ValueError Traceback (most recent call last)
<ipython-input-63-8785c106bc4b> in <module>
5
6 # Train the model
----> 7 lstm.fit([trainChords, trainDurations], [targetChords, targetDurations], epochs=500)
ValueError: Shapes (None,) and (None, 64, 32) are incompatible
Any help would be appreciated.
Update on how the model is built. Maybe the error is with the output layers?
# Define input layers
chordInput = tf.keras.layers.Input(shape = (nChords))
durationInput = tf.keras.layers.Input(shape = (nDurations))
# Define embedding layers
chordEmbedding = tf.keras.layers.Embedding(nChords, embedDim, input_length = sequenceLength)(chordInput)
durationEmbedding = tf.keras.layers.Embedding(nDurations, embedDim, input_length = sequenceLength)(durationInput)
# Merge embedding layers using a concatenation layer
mergeLayer = tf.keras.layers.Concatenate(axis=1)([chordEmbedding, durationEmbedding])
# Define LSTM layer
lstmLayer = tf.keras.layers.LSTM(512, return_sequences=True)(mergeLayer)
# Define dense layer
denseLayer = tf.keras.layers.Dense(256)(lstmLayer)
# Define output layers
chordOutput = tf.keras.layers.Dense(nChords, activation = 'softmax')(denseLayer)
durationOutput = tf.keras.layers.Dense(nDurations, activation = 'softmax')(denseLayer)
# nChords and nDurations are both 32
# Define model
lstm = tf.keras.Model(inputs = [chordInput, durationInput], outputs = [chordOutput, durationOutput])
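A hedged sketch of one way to make the shapes line up, assuming each training sample has a single target chord and a single target duration (which the (None,) target shape in the error suggests): concatenate the embeddings on the feature axis instead of the time axis, drop return_sequences so the LSTM emits one vector per sample, and use the sparse loss with integer targets:

import tensorflow as tf

# Sketch, assuming one target chord/duration per sample (targets of shape (None,)).
# The Input shape is the sequence length (here it happens to equal nChords = 32).
chordInput = tf.keras.layers.Input(shape=(sequenceLength,))
durationInput = tf.keras.layers.Input(shape=(sequenceLength,))

chordEmbedding = tf.keras.layers.Embedding(nChords, embedDim)(chordInput)
durationEmbedding = tf.keras.layers.Embedding(nDurations, embedDim)(durationInput)

# Concatenate on the feature axis (-1), not axis=1, so the time dimension stays
# at sequenceLength instead of doubling to 64.
merged = tf.keras.layers.Concatenate(axis=-1)([chordEmbedding, durationEmbedding])

# return_sequences=False (the default) collapses the time axis: (None, 512)
lstmLayer = tf.keras.layers.LSTM(512)(merged)
denseLayer = tf.keras.layers.Dense(256)(lstmLayer)

chordOutput = tf.keras.layers.Dense(nChords, activation='softmax')(denseLayer)
durationOutput = tf.keras.layers.Dense(nDurations, activation='softmax')(denseLayer)

lstm = tf.keras.Model(inputs=[chordInput, durationInput],
                      outputs=[chordOutput, durationOutput])
# Integer targets of shape (None,) work directly with the sparse loss.
lstm.compile(loss='sparse_categorical_crossentropy', optimizer='rmsprop')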

Tensorflow: access to see the layer activation (fine-tuning)

I use fine-tuning. How can I see and access the activations of all the layers inside the convolutional base?
conv_base = VGG16(weights='imagenet',
                  include_top=False,
                  input_shape=(inp_img_h, inp_img_w, 3))

def create_functional_model():
    inp = Input(shape=(inp_img_h, inp_img_w, 3))
    model = conv_base(inp)
    model = Flatten()(model)
    model = Dense(256, activation='relu')(model)
    outp = Dense(1, activation='sigmoid')(model)
    return Model(inputs=inp, outputs=outp)

model = create_functional_model()
model.summary()
The model summary is:
Layer (type) Output Shape Param #
=================================================================
vgg16 (Functional) (None, 7, 7, 512) 14714688
_________________________________________________________________
flatten_2 (Flatten) (None, 25088) 0
_________________________________________________________________
dense_4 (Dense) (None, 256) 6422784
_________________________________________________________________
dense_5 (Dense) (None, 1) 257
=================================================================
Total params: 21,137,729
Trainable params: 21,137,729
Non-trainable params: 0
_________________________________________________________________
Thus, the layers inside conv_base are not accessible.
As @Frightera said in the comments, you can access the base model's summary with:
model.layers[0].summary()
And if you want to access the activation functions of its layers, you can try this:
print(model.layers[0].layers[index_of_layer].activation)
#or
print(model.layers[0].get_layer("name_of_layer").activation)
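If you want the activation outputs (feature maps) themselves rather than the activation functions, one common approach is to build an auxiliary model over the nested VGG16's internal graph. A sketch, where some_images is a placeholder for your input batch:

from tensorflow import keras

# The vgg16 layer is a Functional model, so its internal graph is connected
# and can serve as the basis for a multi-output feature extractor.
base = model.layers[0]
activation_model = keras.Model(
    inputs=base.input,
    outputs=[layer.output for layer in base.layers[1:]],  # skip the InputLayer
)
# One activation array per VGG16 layer, for a batch of images
activations = activation_model.predict(some_images)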

ValueError: Error when checking target: expected dense_2 to have 2 dimensions, but got array with shape (3306, 67, 1)

I am trying to train a neural network on a Semantic Role Labeling task (a text classification task). The dataset consists of sentences, and the network has to be trained to predict a class for each word. Apart from using the embedding matrix, I am also using other features (meta_data_features). The number of classes in Y_train is 61. The number 3306 is the number of sentences in my dataset (its size). MAX_LEN = 67. The code for the architecture is:
embedding_layer = Embedding(67,
                            300,
                            embeddings_initializer=Constant(embedding_matrix),
                            input_length=MAX_LEN,
                            trainable=False)

sentence_input = Input(shape=(67,), dtype='int32')
meta_input = Input(shape=(67,), name='meta_input')
embedded_sequences = embedding_layer(sentence_input)

x_1 = (SimpleRNN(256))(embedded_sequences)
x = concatenate([x_1, meta_input], axis=1)
x = Dropout(0.3)(x)
x = Dense(32, activation='relu')(x)
predictions = Dense(61, activation='softmax')(x)

model = Model([sentence_input, meta_input], predictions)
model.compile(loss='sparse_categorical_crossentropy',
              optimizer='adam',
              metrics=['sparse_categorical_accuracy'])
print(model.summary())
A snapshot of the model summary:
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) (None, 67) 0
__________________________________________________________________________________________________
embedding_1 (Embedding) (None, 67, 300) 1176000 input_1[0][0]
__________________________________________________________________________________________________
simple_rnn_1 (SimpleRNN) (None, 256) 142592 embedding_1[0][0]
__________________________________________________________________________________________________
meta_input (InputLayer) (None, 67) 0
__________________________________________________________________________________________________
concatenate_1 (Concatenate) (None, 323) 0 simple_rnn_1[0][0]
meta_input[0][0]
__________________________________________________________________________________________________
dropout_1 (Dropout) (None, 323) 0 concatenate_1[0][0]
__________________________________________________________________________________________________
dense_1 (Dense) (None, 32) 10368 dropout_1[0][0]
__________________________________________________________________________________________________
dense_2 (Dense) (None, 61) 2013 dense_1[0][0]
==================================================================================================
Total params: 1,330,973
Trainable params: 154,973
Non-trainable params: 1,176,000
__________________________________________________________________________________________________
The function call is:
simple_RNN_model_trainable.fit([padded_sentences, meta_data_features], padded_verbs,batch_size=32,epochs=1)
X_train consists of [padded_sentences, meta_data_features] and Y_train is padded_verbs. Their shapes are:
padded_sentences - (3306, 67)
meta_data_features - (3306, 67)
padded_verbs - (3306, 67, 1)
When I try to fit the model, I get the error, "ValueError: Error when checking target: expected dense_2 to have 2 dimensions, but got array with shape (3306, 67, 1)"
It would be great if somebody could help me resolve the error. Thanks!
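A hedged sketch of one way to fix it, assuming the goal is one label per word (which the (3306, 67, 1) target shape suggests) and that meta_data_features holds one scalar per word: keep the time axis through the whole network with return_sequences=True, reshape the meta features so they can be concatenated at every timestep, and apply the dense layers per word. It reuses embedding_layer and MAX_LEN from the code above:

from tensorflow.keras.layers import (Input, SimpleRNN, Dense, Dropout,
                                     Reshape, TimeDistributed, concatenate)
from tensorflow.keras.models import Model

# Sketch assuming one label per word and one scalar meta feature per word.
sentence_input = Input(shape=(MAX_LEN,), dtype='int32')
meta_input = Input(shape=(MAX_LEN,), name='meta_input')

embedded_sequences = embedding_layer(sentence_input)               # (None, 67, 300)
x_1 = SimpleRNN(256, return_sequences=True)(embedded_sequences)    # (None, 67, 256)

meta_per_word = Reshape((MAX_LEN, 1))(meta_input)                  # (None, 67, 1)
x = concatenate([x_1, meta_per_word], axis=-1)                     # (None, 67, 257)
x = Dropout(0.3)(x)
x = TimeDistributed(Dense(32, activation='relu'))(x)
predictions = TimeDistributed(Dense(61, activation='softmax'))(x)  # (None, 67, 61)

model = Model([sentence_input, meta_input], predictions)
model.compile(loss='sparse_categorical_crossentropy',
              optimizer='adam',
              metrics=['sparse_categorical_accuracy'])
# Targets of shape (3306, 67, 1) now match: one sparse label per word.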

How to read Keras's model structure?

For example:
BUFFER_SIZE = 10000
BATCH_SIZE = 64

train_dataset = train_dataset.shuffle(BUFFER_SIZE)
train_dataset = train_dataset.padded_batch(BATCH_SIZE, tf.compat.v1.data.get_output_shapes(train_dataset))
test_dataset = test_dataset.padded_batch(BATCH_SIZE, tf.compat.v1.data.get_output_shapes(test_dataset))

def pad_to_size(vec, size):
    zeros = [0] * (size - len(vec))
    vec.extend(zeros)
    return vec

...

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(encoder.vocab_size, 64),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64, return_sequences=False)),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid')
])
print(model.summary())
The printed summary reads:
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
embedding (Embedding) (None, None, 64) 523840
_________________________________________________________________
bidirectional (Bidirectional (None, 128) 66048
_________________________________________________________________
dense (Dense) (None, 64) 8256
_________________________________________________________________
dense_1 (Dense) (None, 1) 65
=================================================================
Total params: 598,209
Trainable params: 598,209
Non-trainable params: 0
I have the following questions:
1) For the embedding layer, why is the output shape (None, None, 64)? I understand '64' is the vector length. Why are the other two None?
2) How is the output shape of the bidirectional layer (None, 128)? Why is it 128?
For the embedding layer, why is the output shape (None, None, 64)? I understand '64' is the vector length. Why are the other two None?
If you don't define an input_shape for the first layer of a Sequential model, it defaults to input_shape=(None,), so (including the batch dimension) the input tensor has shape (None, None).
If you pass an input tensor of shape (None, None) to an Embedding layer, it produces a (None, None, 64) tensor, assuming the embedding dimension is 64. The first None is the batch dimension and the second is the time dimension (it corresponds to the input_length parameter). That is why you get a (None, None, 64) sized output.
How is the output shape of the bidirectional layer (None, 128)? Why is it 128?
Here you have a bidirectional LSTM. Your LSTM layer produces a (None, 64) sized output (when return_sequences=False). A Bidirectional layer is like having two LSTM layers (one going forward, the other going backward), and the default merge_mode of 'concat' means the output states of the forward and backward layers are concatenated. This gives you a (None, 128) sized output.
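A small demonstration of both points (a sketch; the vocabulary size 8185 is inferred from 523,840 / 64 in the summary above, and the input length 100 is an arbitrary choice): fixing the input length replaces the second None, and the bidirectional concatenation doubles 64 to 128:

import tensorflow as tf

model = tf.keras.Sequential([
    # input_length pins the time dimension, so the output is (None, 100, 64)
    tf.keras.layers.Embedding(input_dim=8185, output_dim=64, input_length=100),
    # 64 forward units + 64 backward units, concatenated -> (None, 128)
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
model.build(input_shape=(None, 100))
model.summary()  # embedding: (None, 100, 64); bidirectional: (None, 128)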

How to feed only half of the RNN output to next RNN output layer in tensorflow?

I want to feed only the RNN outputs at odd positions to the next RNN layer. How can I achieve that in TensorFlow?
I basically want to build the top layer in the following diagram, which halves the sequence length. The bottom layer is just a simple RNN.
Is this what you need?
import tensorflow as tf
from tensorflow.keras import layers, models

inp = layers.Input(shape=(10, 5))
out = layers.LSTM(50, return_sequences=True)(inp)
# Keep every other timestep (indices 0, 2, 4, ...), halving the sequence length
out = layers.Lambda(lambda x: tf.stack(tf.unstack(x, axis=1)[::2], axis=1))(out)
out = layers.LSTM(50)(out)
out = layers.Dense(20)(out)
m = models.Model(inputs=inp, outputs=out)
m.summary()
You get the following model. You can see that the second LSTM only gets 5 timesteps of the total 10 (i.e., every other output of the previous layer):
Model: "model_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_2 (InputLayer) [(None, 10, 5)] 0
_________________________________________________________________
lstm_2 (LSTM) (None, 10, 50) 11200
_________________________________________________________________
lambda_1 (Lambda) (None, 5, 50) 0
_________________________________________________________________
lstm_3 (LSTM) (None, 50) 20200
_________________________________________________________________
dense_1 (Dense) (None, 20) 1020
=================================================================
Total params: 32,420
Trainable params: 32,420
Non-trainable params: 0
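As a design note, the same subsampling can be written with a plain strided slice instead of the unstack/stack pair; a minimal equivalent Lambda that keeps timesteps 0, 2, 4, ...:

out = layers.Lambda(lambda x: x[:, ::2, :])(out)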