Making Class Activation Map (CAM) for EfficientNetB3 architecture - tensorflow

I would like to draw a class activation map for a model built on EfficientNetB3. But when I follow different tutorials and code from various sources, it simply fails...
# load images
img = tf.keras.preprocessing.image.load_img(
    base, target_size=(img_height, img_width))
img_array = tf.keras.preprocessing.image.img_to_array(img)
img_array = tf.expand_dims(img_array, 0)  # create a batch
predictions = model.predict(img_array)
score = tf.nn.softmax(predictions[0])

last_conv = model.layers[2].layers[-3]
grad_model = tf.keras.models.Model(
    [model.inputs], [last_conv.output, model.output])
But I can't build the grad_model; it fails with:
ValueError: Graph disconnected: cannot obtain value for tensor
KerasTensor(type_spec=TensorSpec(shape=(None, 300, 300, 3),
dtype=tf.float32, name='input_1'), name='input_1',
description="created by layer 'input_1'") at layer "stem_conv". The
following previous layers were accessed without issue: []
This is the model:
Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
sequential (Sequential) (None, 300, 300, 3) 0
_________________________________________________________________
rescaling (Rescaling) (None, 300, 300, 3) 0
_________________________________________________________________
efficientnet-b3 (Functional) (None, 10, 10, 1536) 10783528
_________________________________________________________________
global_average_pooling2d (Gl (None, 1536) 0
_________________________________________________________________
dropout (Dropout) (None, 1536) 0
_________________________________________________________________
dense (Dense) (None, 128) 196736
_________________________________________________________________
dense_1 (Dense) (None, 5) 645
=================================================================

To address the graph-disconnected ValueError, you need to build the Grad-CAM model properly. Here is one way to build a model for Grad-CAM.
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

inputs = tf.keras.Input(shape=(300, 300, 3))
x = keras.applications.EfficientNetB3(
    input_tensor=inputs,  # pass the input to input_tensor
    include_top=False,
    weights=None
)
# flatten the base model by taking x.output
x = layers.GlobalAveragePooling2D()(x.output)
# the remaining head layers
x = layers.Dense(128)(x)
x = layers.Dropout(0.5)(x)
x = layers.Dense(5)(x)
model = keras.Model(inputs, x)
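Note that weights=None above is only for demonstrating the graph structure; for meaningful heatmaps the backbone needs trained weights (e.g. weights='imagenet', or your own fine-tuned weights loaded into this flat model).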
for i, layer in enumerate(model.layers[-10:]):
    print(i, layer.name, layer.output_shape, layer.trainable)
0 block7b_project_bn (None, 10, 10, 384) True
1 block7b_drop (None, 10, 10, 384) True
2 block7b_add (None, 10, 10, 384) True
3 top_conv (None, 10, 10, 1536) True
4 top_bn (None, 10, 10, 1536) True
5 top_activation (None, 10, 10, 1536) True # <- we will pick these 2D feature maps
6 global_average_pooling2d_2 (None, 1536) True
7 dense_2 (None, 128) True
8 dropout (None, 128) True
9 dense_3 (None, 5) True
Build Grad-CAM Model
grad_model = keras.models.Model(
    [model.inputs],
    [
        model.get_layer('top_activation').output,
        model.output
    ]
)
Check
With your original setup, the following code would raise the disconnected-graph error; with this model, it doesn't.
import numpy as np

image = np.random.rand(1, 300, 300, 3).astype(np.float32)

with tf.GradientTape() as tape:
    convOutputs, predictions = grad_model(tf.cast(image, tf.float32))
    loss = predictions[:, tf.argmax(predictions[0])]

grads = tape.gradient(loss, convOutputs)
print(grads.shape)
(1, 10, 10, 1536) # NO DISCONNECTED ERROR
To get the heatmaps from your Grad-CAM model, check the following answers and sources as references; a minimal sketch follows the list.
Grad-CAM
Guided-GradCAM
Grad-CAM multi-output Model
GradCAM-Callback
Swin-Transformer : GradCAM
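For completeness, here is a minimal sketch (not from the linked sources) of turning the gradients above into a heatmap, following the standard Grad-CAM recipe: global-average-pool the gradients into per-channel weights, weight the feature maps, then ReLU and normalize. The names convOutputs and grads continue from the check snippet above.

# pool the gradients over the batch and spatial axes -> one weight per channel
pooled_grads = tf.reduce_mean(grads, axis=(0, 1, 2))           # (1536,)
# weight each channel of the conv feature map by its pooled gradient
conv_outputs = convOutputs[0]                                  # (10, 10, 1536)
heatmap = conv_outputs @ pooled_grads[..., tf.newaxis]         # (10, 10, 1)
heatmap = tf.squeeze(heatmap)                                  # (10, 10)
# ReLU, then normalize to [0, 1] for visualization
heatmap = tf.maximum(heatmap, 0) / (tf.reduce_max(heatmap) + 1e-8)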

Related

Sentence classification: Why does my embedding not reduce the shape of the subsequent layer?

I want to embed sentences that each contain 5 words, and my training set has a total vocabulary of 10000 words. I use this code:
import tensorflow as tf
from tensorflow.keras.layers import Conv2D  # import was missing in the original snippet

vocab_size = 10000
inputs = tf.keras.layers.Input(shape=(5, vocab_size), name="input")
embedding = tf.keras.layers.Embedding(10000, 64)(inputs)
conv2d_1 = Conv2D(filters=32, kernel_size=(3, 3),
                  strides=(1), padding='SAME')(embedding)
model = tf.keras.models.Model(inputs=inputs, outputs=conv2d_1)
model.summary()
After running I get:
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input (InputLayer) [(None, 5, 10000)] 0
_________________________________________________________________
embedding_105 (Embedding) (None, 5, 10000, 64) 640000
_________________________________________________________________
conv2d_102 (Conv2D) (None, 5, 10000, 32) 18464
=================================================================
I want to use the embedding to convert the sparse 10000x5 tensor to a dense 64x5 tensor. Apparently that doesn't work as intended, so my question is: why is the shape of the next layer (None, 5, 10000, 32) instead of (None, 5, 64, 32)? How can I achieve the compaction?
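No answer is recorded for this question here, but note that Embedding looks up integer token indices rather than one-hot vectors, so the usual way to obtain a (None, 5, 64) dense output is to feed integer inputs of shape (5,). A minimal sketch (not from the original thread):

import tensorflow as tf

vocab_size = 10000
# integer word indices, one per word of the 5-word sentence
inputs = tf.keras.layers.Input(shape=(5,), dtype='int32', name='input')
# each index is mapped to a dense 64-dim vector -> output shape (None, 5, 64)
embedding = tf.keras.layers.Embedding(vocab_size, 64)(inputs)
model = tf.keras.models.Model(inputs=inputs, outputs=embedding)
model.summary()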

How to use model subclassing in Keras?

I have the following model written in the Sequential API:
config = {
    'learning_rate': 0.001,
    'lstm_neurons': 32,
    'lstm_activation': 'tanh',
    'dropout_rate': 0.08,
    'batch_size': 128,
    'dense_layers': [
        {'neurons': 32, 'activation': 'relu'},
        {'neurons': 32, 'activation': 'relu'},
    ]
}

def get_model(num_features, output_size):
    opt = Adam(learning_rate=0.001)
    model = Sequential()
    model.add(Input(shape=[None, num_features], dtype=tf.float32, ragged=True))
    model.add(LSTM(config['lstm_neurons'], activation=config['lstm_activation']))
    model.add(BatchNormalization())
    if 'dropout_rate' in config:
        model.add(Dropout(config['dropout_rate']))
    for layer in config['dense_layers']:
        model.add(Dense(layer['neurons'], activation=layer['activation']))
        model.add(BatchNormalization())
        if 'dropout_rate' in layer:
            model.add(Dropout(layer['dropout_rate']))
    model.add(Dense(output_size, activation='sigmoid'))
    model.compile(loss='mse', optimizer=opt, metrics=['mse'])
    print(model.summary())
    return model
When using a distributed training framework, I need to convert the syntax to use model subclassing instead.
I've looked at the docs but couldn't figure out how to do it.
Here is one equivalent subclassed implementation, though I haven't tested it.
import tensorflow as tf

# your config
config = {
    'learning_rate': 0.001,
    'lstm_neurons': 32,
    'lstm_activation': 'tanh',
    'dropout_rate': 0.08,
    'batch_size': 128,
    'dense_layers': [
        {'neurons': 32, 'activation': 'relu'},
        {'neurons': 32, 'activation': 'relu'},
    ]
}

# Subclassed API model
class MySubClassed(tf.keras.Model):
    def __init__(self, output_size):
        super(MySubClassed, self).__init__()
        self.lstm = tf.keras.layers.LSTM(config['lstm_neurons'],
                                         activation=config['lstm_activation'])
        self.bn = tf.keras.layers.BatchNormalization()

        if 'dropout_rate' in config:
            self.dp1 = tf.keras.layers.Dropout(config['dropout_rate'])
            self.dp2 = tf.keras.layers.Dropout(config['dropout_rate'])
            self.dp3 = tf.keras.layers.Dropout(config['dropout_rate'])

        # one dense block per entry of config['dense_layers']
        # (a loop that re-assigns the same attributes would keep only
        # the layers of its last iteration, so the blocks are unrolled)
        self.dense1 = tf.keras.layers.Dense(
            config['dense_layers'][0]['neurons'],
            activation=config['dense_layers'][0]['activation'])
        self.bn1 = tf.keras.layers.BatchNormalization()
        self.dense2 = tf.keras.layers.Dense(
            config['dense_layers'][1]['neurons'],
            activation=config['dense_layers'][1]['activation'])
        self.bn2 = tf.keras.layers.BatchNormalization()

        self.out = tf.keras.layers.Dense(output_size, activation='sigmoid')

    def call(self, inputs, training=True, **kwargs):
        x = self.lstm(inputs)
        x = self.bn(x)
        if 'dropout_rate' in config:
            x = self.dp1(x)

        x = self.dense1(x)
        x = self.bn1(x)
        if 'dropout_rate' in config:
            x = self.dp2(x)

        x = self.dense2(x)
        x = self.bn2(x)
        if 'dropout_rate' in config:
            x = self.dp3(x)

        return self.out(x)

    # A convenient way to get a model summary
    # and plot in the subclassed API
    def build_graph(self, raw_shape):
        x = tf.keras.layers.Input(shape=(None, raw_shape), ragged=True)
        return tf.keras.Model(inputs=[x], outputs=self.call(x))
Build and compile the model
s = MySubClassed(output_size=1)
s.compile(
    loss='mse',
    metrics=['mse'],
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001))
Pass some tensor to create the weights (sanity check).
raw_input = (16, 16, 16)
y = s(tf.ones(shape=(raw_input)))
print("weights:", len(s.weights))
print("trainable weights:", len(s.trainable_weights))
weights: 21
trainable weights: 15
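The counts line up: the LSTM contributes 3 weight tensors, each of the three Dense layers 2, and each of the three BatchNormalization layers 4, of which the moving mean and variance are non-trainable, giving 21 weights in total and 15 trainable ones.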
Summary and Plot
Summarize and visualize the model graph.
s.build_graph(16).summary()
Model: "model"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(None, None, 16)] 0
_________________________________________________________________
lstm (LSTM) (None, 32) 6272
_________________________________________________________________
batch_normalization (BatchNo (None, 32) 128
_________________________________________________________________
dropout (Dropout) (None, 32) 0
_________________________________________________________________
dense_2 (Dense) (None, 32) 1056
_________________________________________________________________
batch_normalization_3 (Batch (None, 32) 128
_________________________________________________________________
dropout_1 (Dropout) (None, 32) 0
_________________________________________________________________
dense_3 (Dense) (None, 32) 1056
_________________________________________________________________
batch_normalization_4 (Batch (None, 32) 128
_________________________________________________________________
dropout_2 (Dropout) (None, 32) 0
_________________________________________________________________
dense_4 (Dense) (None, 1) 33
=================================================================
Total params: 8,801
Trainable params: 8,609
Non-trainable params: 192
tf.keras.utils.plot_model(
    s.build_graph(16),
    show_shapes=True,
    show_dtype=True,
    show_layer_names=True,
    rankdir="TB",
)

Grad-CAM in keras, ValueError: Graph disconnected: cannot obtain value for tensor Tensor "input_11_6:0", shape=(None, 150, 150, 3)

How do I perform Grad-CAM on a pretrained custom model?
How do I select last_conv_layer_name and classifier_layer_names?
What is their significance, and how do I choose the layer names?
Should I consider the DenseNet121 sublayers, or treat densenet121 as one functional layer?
How do I perform Grad-CAM for this trained network?
These are the steps I tried:
# load the model and custom metrics
dependencies = {'recall_m': recall_m, 'precision_m': precision_m, 'f1_m': f1_m}
model = keras.models.load_model("model_val_acc-73.33.h5", custom_objects=dependencies)
model.summary()
Model: "sequential_9"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
densenet121 (Functional) (None, 4, 4, 1024) 7037504
_________________________________________________________________
flatten (Flatten) (None, 16384) 0
_________________________________________________________________
dense_encoder (Dense) (None, 1024) 16778240
_________________________________________________________________
dropout_51 (Dropout) (None, 1024) 0
_________________________________________________________________
dense_2 (Dense) (None, 256) 262400
_________________________________________________________________
dropout_52 (Dropout) (None, 256) 0
_________________________________________________________________
dense_3 (Dense) (None, 128) 32896
_________________________________________________________________
dropout_53 (Dropout) (None, 128) 0
_________________________________________________________________
dense_4 (Dense) (None, 64) 8256
_________________________________________________________________
dropout_54 (Dropout) (None, 64) 0
_________________________________________________________________
dense_5 (Dense) (None, 32) 2080
_________________________________________________________________
dropout_55 (Dropout) (None, 32) 0
_________________________________________________________________
Final (Dense) (None, 2) 66
=================================================================
Total params: 24,121,442
Trainable params: 17,083,938
Non-trainable params: 7,037,504
This is the heatmap function:
### defining heat map
def make_gradcam_heatmap(img_array, model, last_conv_layer_name, classifier_layer_names):
    # First, we create a model that maps the input image to the activations
    # of the last conv layer
    last_conv_layer = model.get_layer(last_conv_layer_name)
    last_conv_layer_model = keras.Model(model.inputs, last_conv_layer.output)

    # Second, we create a model that maps the activations of the last conv
    # layer to the final class predictions
    classifier_input = keras.Input(shape=last_conv_layer.output.shape[1:])
    x = classifier_input
    for layer_name in classifier_layer_names:
        x = model.get_layer(layer_name)(x)
    classifier_model = keras.Model(classifier_input, x)

    # Then, we compute the gradient of the top predicted class for our input image
    # with respect to the activations of the last conv layer
    with tf.GradientTape() as tape:
        # Compute activations of the last conv layer and make the tape watch it
        last_conv_layer_output = last_conv_layer_model(img_array)
        tape.watch(last_conv_layer_output)
        # Compute class predictions
        preds = classifier_model(last_conv_layer_output)
        top_pred_index = tf.argmax(preds[0])
        top_class_channel = preds[:, top_pred_index]

    # This is the gradient of the top predicted class with regard to
    # the output feature map of the last conv layer
    grads = tape.gradient(top_class_channel, last_conv_layer_output)

    # This is a vector where each entry is the mean intensity of the gradient
    # over a specific feature map channel
    pooled_grads = tf.reduce_mean(grads, axis=(0, 1, 2))

    # We multiply each channel in the feature map array
    # by "how important this channel is" with regard to the top predicted class
    last_conv_layer_output = last_conv_layer_output.numpy()[0]
    pooled_grads = pooled_grads.numpy()
    for i in range(pooled_grads.shape[-1]):
        last_conv_layer_output[:, :, i] *= pooled_grads[i]

    # The channel-wise mean of the resulting feature map
    # is our heatmap of class activation
    heatmap = np.mean(last_conv_layer_output, axis=-1)

    # For visualization purpose, we will also normalize the heatmap between 0 & 1
    heatmap = np.maximum(heatmap, 0) / np.max(heatmap)
    return heatmap
This is an image input:
img_array = X_test[10]  # 10th image sample
X_test[10].shape
# (150, 150, 3)

last_conv_layer_name = "densenet121"
classifier_layer_names = ["dense_2", "dense_3", "dense_4", "dense_5", "Final"]

# Generate the class activation heatmap
heatmap = make_gradcam_heatmap(
    img_array, model, last_conv_layer_name, classifier_layer_names
)  # ===> (I'm getting the error here, on this line)
So what is wrong with last_conv_layer_name and classifier_layer_names?
Can anyone please explain this?
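No answer is recorded for this question here, but two issues are visible from the posted summary alone: the image lacks a batch dimension, and classifier_layer_names skips the flatten, dense_encoder, and dropout layers that sit between densenet121 and Final, so the classifier model cannot be rebuilt. A hedged sketch of a call that matches the summary (treating densenet121 as one functional layer, whose (None, 4, 4, 1024) output serves as the last conv feature map):

img_array = np.expand_dims(X_test[10], axis=0)  # add the batch dimension -> (1, 150, 150, 3)
last_conv_layer_name = "densenet121"
# every layer after the backbone, in order, taken from model.summary()
classifier_layer_names = [
    "flatten", "dense_encoder", "dropout_51", "dense_2", "dropout_52",
    "dense_3", "dropout_53", "dense_4", "dropout_54", "dense_5",
    "dropout_55", "Final",
]
heatmap = make_gradcam_heatmap(
    img_array, model, last_conv_layer_name, classifier_layer_names
)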

ValueError: Error when checking target: expected dense_2 to have 2 dimensions, but got array with shape (3306, 67, 1)

I am trying to train a neural network on a semantic role labeling task (a text classification task). The dataset consists of sentences on which the network is trained to predict a class for each word. Apart from the embedding matrix, I am also using other features (meta_data_features). The number of classes in Y_train is 61, the number 3306 is the number of sentences in my dataset, and MAX_LEN = 67. The code for the architecture is:
embedding_layer = Embedding(67,
                            300,
                            embeddings_initializer=Constant(embedding_matrix),
                            input_length=MAX_LEN,
                            trainable=False)
sentence_input = Input(shape=(67,), dtype='int32')
meta_input = Input(shape=(67,), name='meta_input')
embedded_sequences = embedding_layer(sentence_input)

x_1 = SimpleRNN(256)(embedded_sequences)
x = concatenate([x_1, meta_input], axis=1)
x = Dropout(0.3)(x)
x = Dense(32, activation='relu')(x)
predictions = Dense(61, activation='softmax')(x)

model = Model([sentence_input, meta_input], predictions)
model.compile(loss='sparse_categorical_crossentropy',
              optimizer='adam',
              metrics=['sparse_categorical_accuracy'])
print(model.summary())
The snapshot of model summary is:
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) (None, 67) 0
__________________________________________________________________________________________________
embedding_1 (Embedding) (None, 67, 300) 1176000 input_1[0][0]
__________________________________________________________________________________________________
simple_rnn_1 (SimpleRNN) (None, 256) 142592 embedding_1[0][0]
__________________________________________________________________________________________________
meta_input (InputLayer) (None, 67) 0
__________________________________________________________________________________________________
concatenate_1 (Concatenate) (None, 323) 0 simple_rnn_1[0][0]
meta_input[0][0]
__________________________________________________________________________________________________
dropout_1 (Dropout) (None, 323) 0 concatenate_1[0][0]
__________________________________________________________________________________________________
dense_1 (Dense) (None, 32) 10368 dropout_1[0][0]
__________________________________________________________________________________________________
dense_2 (Dense) (None, 61) 2013 dense_1[0][0]
==================================================================================================
Total params: 1,330,973
Trainable params: 154,973
Non-trainable params: 1,176,000
__________________________________________________________________________________________________
The function call is:
simple_RNN_model_trainable.fit([padded_sentences, meta_data_features], padded_verbs,
                               batch_size=32, epochs=1)
X_train constitutes [padded_sentences, meta_data_features] and Y_train is padded_verbs. Their shapes are:
padded_sentences - (3306, 67)
meta_data_features - (3306, 67)
padded_verbs - (3306, 67, 1)
When I try to fit the model, I get the error, "ValueError: Error when checking target: expected dense_2 to have 2 dimensions, but got array with shape (3306, 67, 1)"
It would be great if somebody can help me in resolving the error. Thanks!
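No answer is recorded for this question here, but the mismatch is visible in the summary: the model emits a single 61-way prediction per sentence, shape (None, 61), while the targets are per-word, shape (3306, 67, 1). For per-word labeling, the time axis has to survive to the output; a hedged sketch of the middle of the architecture (Reshape and TimeDistributed also come from keras.layers):

x_1 = SimpleRNN(256, return_sequences=True)(embedded_sequences)    # (None, 67, 256)
meta = Reshape((67, 1))(meta_input)                                # give the meta features a time axis
x = concatenate([x_1, meta], axis=-1)                              # (None, 67, 257)
x = Dropout(0.3)(x)
x = TimeDistributed(Dense(32, activation='relu'))(x)
predictions = TimeDistributed(Dense(61, activation='softmax'))(x)  # (None, 67, 61)

With sparse_categorical_crossentropy, targets of shape (3306, 67, 1) then match the (None, 67, 61) predictions.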

"Tap" a specific layer in existing Keras Model and make a branch to a new output?

Environment:
I'm using tf.keras (TensorFlow 1.14) on Google Colab, and my model architecture is MobileNet V2 1.00 224.
Problem:
I am trying (and failing) to attach a new layer to an existing layer that is not the normal output of my model, and make a new output from it, i.e. make a branch earlier in MobileNet V2.
I want this new branch to be a regression output, but I don't want it connected serially off the final embedding layer of MobileNet; I want it at a much earlier stage (which one, I'm not sure; I'm experimenting). Basically, a branch with its own output, alongside the normal, pretrained ImageNet embedding output.
Grab MobileNet V2 as base_model:
base_model = tf.keras.applications.MobileNetV2(input_shape=(IMG_SIZE, IMG_SIZE, 3),
                                               include_top=False,
                                               weights='imagenet')
base_model.trainable = False
Make my layers from base_model and make my new outputs.
# get layers from mobilenet base layer
mobilenet_input = base_model.get_layer('input_1')
mobilenet_output = base_model.get_layer('out_relu')
# add our average pooling layer to our MobileNetV2 output like all of our other classifiers so we split our graph on the same nodes
out_global_pooling = tf.keras.layers.GlobalAveragePooling2D(name='embedding_pooling')(mobilenet_output.output)
out_global_pooling.trainable = False
# Our new branch and outputs for the branch
expanded_conv_depthwise_BN = base_model.get_layer('expanded_conv_depthwise_BN')
regression_dropout = tf.keras.layers.Dropout(0.5) (expanded_conv_depthwise_BN.output)
regression_global_pooling = tf.keras.layers.GlobalAveragePooling2D(name="regression_pooling")(regression_dropout)
new_regression_output = tf.keras.layers.Dense(num_labels, activation = 'sigmoid', name = "cinemanet_output") (regression_global_pooling)
This appears to be fine, and I can even make my model via the functional API:
model = tf.keras.Model(inputs=mobilenet_input.input, outputs=[out_global_pooling, new_regression_output])
My Training Code
My dataset is a set of 30 floats (10 RGB triplets) that I want to predict from an input image. My dataset functions when training a 'sequence' model, but fails when I try to train this model.
ops.reset_default_graph()
tf.keras.backend.set_learning_phase(1)  # 0 testing, 1 training mode

# preview contents of CSV to verify things are sane
import csv
import math

def lenopenreadlines(filename):
    with open(filename) as f:
        return len(f.readlines())

def csvheaderrow(filename):
    with open(filename) as f:
        reader = csv.reader(f)
        return next(reader, None)

# !head {label_file}
NUM_IMAGES = (lenopenreadlines(label_file) - 1)  # remove header
COLUMN_NAMES = csvheaderrow(label_file)
LABEL_NAMES = COLUMN_NAMES[:]
LABEL_NAMES.remove("filepath")
ALL_LABELS.extend(LABEL_NAMES)

# make our data set
BATCH_SIZE = 256
NUM_EPOCHS = 50
FILE_PATH = ["filepath"]
LABELS_TO_PRINT = ' '.join(LABEL_NAMES)
print("Label contains: " + str(NUM_IMAGES) + " images")
print("Labels are: " + LABELS_TO_PRINT)
print("Creating Data Set From " + label_file)

csv_dataset = get_dataset(label_file, BATCH_SIZE, NUM_EPOCHS, COLUMN_NAMES)

# make a new data set from our csv by mapping every value to the above function
split_dataset = csv_dataset.map(split_csv_to_path_and_labels)

# make a new data set that loads our images from the first path
image_and_labels_ds = split_dataset.map(load_and_preprocess_image_batch, num_parallel_calls=AUTOTUNE)

# update our image floating point range to match -1, 1
ds = image_and_labels_ds.map(change_range)
print(image_and_labels_ds)

model = build_model(LABEL_NAMES, use_masked_loss)

# split the final data set into train / validation splits to use for our model
DATASET_SIZE = NUM_IMAGES
ds = ds.repeat()
steps_per_epoch = int(math.floor(DATASET_SIZE / BATCH_SIZE))
history = model.fit(ds, epochs=NUM_EPOCHS, steps_per_epoch=steps_per_epoch,
                    callbacks=[TensorBoardColabCallback(tbc)])
print(history)

# results = model.evaluate(test_dataset)
# print('test loss, test acc:', results)

export_model(model, model_name, LABEL_NAMES, date)
ValueError: Error when checking model target:
the list of Numpy arrays that you are passing to your model is not the size the model expected.
Expected to see 2 array(s), but instead got the following list of 1 arrays:
[<tf.Tensor 'IteratorGetNext:1' shape=(?, 30) dtype=float32>]
If I instead use a Sequence and naively try to train my regression task against the final output of MobileNet (rather than the branch), training works fine (although I get poor results).
My model summary appears to tell me things are wired as I expect: my dropout is connected to expanded_conv_depthwise_BN, my regression pooling is connected to my dropout, and my output layer appears in the summary connected to my regression pooling.
Model: "model"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) [(None, 224, 224, 3) 0
__________________________________________________________________________________________________
Conv1_pad (ZeroPadding2D) (None, 225, 225, 3) 0 input_1[0][0]
__________________________________________________________________________________________________
Conv1 (Conv2D) (None, 112, 112, 32) 864 Conv1_pad[0][0]
__________________________________________________________________________________________________
bn_Conv1 (BatchNormalization) (None, 112, 112, 32) 128 Conv1[0][0]
__________________________________________________________________________________________________
Conv1_relu (ReLU) (None, 112, 112, 32) 0 bn_Conv1[0][0]
__________________________________________________________________________________________________
expanded_conv_depthwise (Depthw (None, 112, 112, 32) 288 Conv1_relu[0][0]
__________________________________________________________________________________________________
expanded_conv_depthwise_BN (Bat (None, 112, 112, 32) 128 expanded_conv_depthwise[0][0]
__________________________________________________________________________________________________
expanded_conv_depthwise_relu (R (None, 112, 112, 32) 0 expanded_conv_depthwise_BN[0][0]
__________________________________________________________________________________________________
expanded_conv_project (Conv2D) (None, 112, 112, 16) 512 expanded_conv_depthwise_relu[0][0
__________________________________________________________________________________________________
< snip for brevity >
__________________________________________________________________________________________________
block_16_project (Conv2D) (None, 7, 7, 320) 307200 block_16_depthwise_relu[0][0]
__________________________________________________________________________________________________
block_16_project_BN (BatchNorma (None, 7, 7, 320) 1280 block_16_project[0][0]
__________________________________________________________________________________________________
Conv_1 (Conv2D) (None, 7, 7, 1280) 409600 block_16_project_BN[0][0]
__________________________________________________________________________________________________
Conv_1_bn (BatchNormalization) (None, 7, 7, 1280) 5120 Conv_1[0][0]
__________________________________________________________________________________________________
dropout (Dropout) (None, 112, 112, 32) 0 expanded_conv_depthwise_BN[0][0]
__________________________________________________________________________________________________
out_relu (ReLU) (None, 7, 7, 1280) 0 Conv_1_bn[0][0]
__________________________________________________________________________________________________
regression_pooling (GlobalAvera (None, 32) 0 dropout[0][0]
__________________________________________________________________________________________________
embedding_pooling (GlobalAverag (None, 1280) 0 out_relu[0][0]
__________________________________________________________________________________________________
cinemanet_output (Dense) (None, 30) 990 regression_pooling[0][0]
==================================================================================================
Total params: 2,258,974
Trainable params: 990
Non-trainable params: 2,257,984
It looks like you are setting things up correctly, but your training dataset doesn't include tensors for both outputs. If you only want to train the new output, you can provide dummy tensors (or even real training data) for the other one while using a loss weight of 0 to prevent the parameters from updating. That should also prevent any parameters that are not directly "upstream" of the new output layer from updating during training.
When compiling your model, use the loss_weights argument to pass the weights either as a list (e.g., loss_weights=[0, 1]) or as a dictionary keyed by output layer name (e.g., loss_weights={'embedding_pooling': 0, 'cinemanet_output': 1}).
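A minimal sketch of that compile call, plus a dataset map that supplies a dummy target for the frozen embedding output (layer names taken from the model summary above; the mapping assumes the dataset yields (image, labels) pairs as in the training code):

model.compile(
    optimizer=tf.keras.optimizers.Adam(),
    loss={'embedding_pooling': 'mse', 'cinemanet_output': 'mse'},
    loss_weights={'embedding_pooling': 0, 'cinemanet_output': 1},  # only the new branch trains
)

# give the model a second target so it sees one tensor per output
def add_dummy_embedding_target(image, labels):
    batch = tf.shape(image)[0]
    dummy = tf.zeros((batch, 1280))  # matches embedding_pooling's (None, 1280) output
    return image, {'embedding_pooling': dummy, 'cinemanet_output': labels}

ds_two_targets = ds.map(add_dummy_embedding_target)
history = model.fit(ds_two_targets, epochs=NUM_EPOCHS, steps_per_epoch=steps_per_epoch)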