I am using a transfer learning model in a way very similar to the one explained in Chollet's Keras transfer learning guide. To avoid problems with the batch normalization layers, as stated in the guide and many other places, I have to call the pretrained base model as a nested functional model with the training=False option, like this:
inputs = layers.Input(shape=(224, 224, 3))
x = img_augmentation(inputs)
baseModel = VGG19(weights="imagenet", include_top=False, input_tensor=x)
x = baseModel(x, training=False)
# construct the head of the model that will be placed on top of
# the base model
x = Conv2D(32, 2)(x)
headModel = AveragePooling2D(pool_size=(4, 4))(x)
headModel = Flatten(name="flatten")(headModel)
headModel = Dense(64, activation="relu")(headModel)
headModel = Dropout(0.5)(headModel)
headModel = Dense(3, activation="softmax")(headModel)
model = Model(inputs, outputs=headModel)
My problem is that I need to use Grad-CAM as in Chollet's Grad-CAM example page. To do this I need access to the base model's last convolutional layer, but when I summarize my model I get:
Model: "model_163"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_3 (InputLayer) [(None, 224, 224, 3)] 0
_________________________________________________________________
img_augmentation (Sequential (None, 224, 224, 3) 0
_________________________________________________________________
vgg19 (Functional) (None, 7, 7, 512) 20024384
_________________________________________________________________
conv2d_2 (Conv2D) (None, 6, 6, 32) 65568
_________________________________________________________________
average_pooling2d_2 (Average (None, 1, 1, 32) 0
_________________________________________________________________
flatten (Flatten) (None, 32) 0
_________________________________________________________________
dense_4 (Dense) (None, 64) 2112
_________________________________________________________________
dropout_2 (Dropout) (None, 64) 0
_________________________________________________________________
dense_5 (Dense) (None, 3) 195
=================================================================
Total params: 20,092,259
Trainable params: 67,875
Non-trainable params: 20,024,384
__________________________________________
Thus, the outputs I need are inside one of the layers of the nested vgg19 functional model. How can I access this layer without having to remove the training=False option?
I generally don't like nesting models in models. Although it encourages modularity and introduces nice structure to complex models, TensorFlow gives you trouble when you want to do unconventional things (like computing Grad-CAM or accessing gradients). I've found it easier to un-nest the model so that you can easily access the layer you want.
I recently wrote a tutorial implementing Grad-CAM on TensorFlow 2 for InceptionNet. It should give you enough context to access the required layer.
As you can see, the VGG model in your case has type Functional. When you iterate through your compound model's layers, you can check the type of each layer, find the nested Functional model, and work with its layers:
for layer in model.layers:
    if layer.__class__.__name__ == "Functional":
        # here you can iterate over and choose the layers of your nested model
        for _layer in layer.layers:
            pass  # your logic with the nested model's layers
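As a concrete illustration for the model in the question, here is a minimal Grad-CAM-style sketch. It assumes block5_conv4 is the last conv layer of the VGG19 base, uses a random placeholder image, and re-applies the layers after the conv output by hand inside the GradientTape, watching the conv activations explicitly since the frozen base's weights are not watched by default:

import numpy as np
import tensorflow as tf

# locate the nested Functional model (the VGG19 base)
nested = next(l for l in model.layers
              if l.__class__.__name__ == "Functional")

# sub-model: image -> activations of VGG19's last conv layer
conv_model = tf.keras.Model(nested.inputs,
                            nested.get_layer("block5_conv4").output)

img = np.random.rand(1, 224, 224, 3).astype("float32")  # placeholder image

with tf.GradientTape() as tape:
    conv_out = conv_model(img)
    tape.watch(conv_out)  # the frozen base's tensors are not watched by default
    # re-apply the remaining layers by hand: VGG19's final pooling,
    # then every outer layer that comes after the nested model
    x = nested.get_layer("block5_pool")(conv_out)
    for layer in model.layers[model.layers.index(nested) + 1:]:
        x = layer(x)
    top_class = x[:, tf.argmax(x[0])]

grads = tape.gradient(top_class, conv_out)  # Grad-CAM gradients w.r.t. conv maps

Splitting the forward pass this way avoids the "graph disconnected" errors you can hit when trying to build a single Model that taps a tensor inside the nested sub-model.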
Related
I'm using TensorFlow 2.6 Keras for transfer learning. Currently I take MobileNetV2. I take the input, apply some preprocessing using Lambda layers, feed this preprocessed input to MobileNetV2, then add a Dense layer and train the whole thing. Training, inference, etc. actually work as expected.
However, the summary of the model looks as follows:
Model: "model"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input (InputLayer) [(None, 201, 189, 1)] 0
_________________________________________________________________
lambda (Lambda) (None, 201, 189, None) 0
_________________________________________________________________
lambda_1 (Lambda) (None, 201, 189, None) 0
_________________________________________________________________
mobilenetv2_1.00_224 (Functi (None, 7, 6, 1280) 2257984
_________________________________________________________________
flatten (Flatten) (None, 53760) 0
_________________________________________________________________
output (Dense) (None, 2) 107522
=================================================================
Total params: 2,365,506
Trainable params: 2,331,394
Non-trainable params: 34,112
So the MobileNetV2 structure is hidden and shown as one layer of type tensorflow.python.keras.engine.functional.Functional. If I print the summary of this layer, I get all the internal layers of the model. I have a script for automatic Grad-CAM visualizations which looks for the last Conv layer of the model. If the model is constructed by hand from Lambda, Conv2D, and Dense layers, everything works fine. If I use a pretrained model, it currently fails, because the Conv layer is hidden inside this Functional layer.
How do I construct my modified MobileNetV2 model with my additional layers so that the full structure of the model is shown?
This is how I approximately construct my final model:
input = Input(shape=params.image_shape, name="input")
flow = input
flow = input_correction(flow, params)  # some Lambda layers

keras_model = MobileNetV2(
    input_shape=image_shape,
    weights='imagenet',
    include_top=False)
keras_model_output = keras_model(flow)
keras_model_input = input

keras_model_output = Flatten()(keras_model_output)
output = Dense(units=len(params.classes),
               activation=tf.nn.softmax,
               name="output")(keras_model_output)
model = Model(inputs=keras_model_input, outputs=output)
model.compile(...)
By default, summary doesn't show nested models. Just pass the expand_nested argument to summary:
model.summary(expand_nested=True)
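If you also need the MobileNetV2 layers to live directly in the top-level model (so that a script scanning model.layers can find the last Conv layer), you can avoid the nesting altogether by grafting the pretrained network onto your input tensor via input_tensor, instead of calling it as a sub-model. A minimal sketch under assumed shapes (the 3-channel input is illustrative, since imagenet weights require 3 channels; apply your Lambda preprocessing to flow as in the question):

import tensorflow as tf
from tensorflow.keras import Input, Model
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.layers import Dense, Flatten

inp = Input(shape=(201, 189, 3), name="input")
flow = inp  # apply your Lambda preprocessing to `flow` here

# passing input_tensor grafts MobileNetV2's layers onto the existing graph,
# so they show up as top-level layers instead of one nested Functional layer
base = MobileNetV2(input_tensor=flow, weights="imagenet", include_top=False)

x = Flatten()(base.output)
out = Dense(2, activation=tf.nn.softmax, name="output")(x)
model = Model(inputs=inp, outputs=out)
model.summary()  # every MobileNetV2 conv layer is now listed individually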
How to show all layers in a tensorflow model with the model base?
base_model = keras.applications.MobileNetV3Small(
    input_shape=model_input_shape,
    include_top=False,
    weights="imagenet",
)

# =================== build model
model = keras.Sequential(
    [
        keras.Input(shape=image_shape),
        preprocessing.Resizing(*model_input_shape[:2]),
        preprocessing.Rescaling(1.0 / 255),
        base_model,
        layers.GlobalAveragePooling2D(),
        # missing dropout
        layers.Dense(1, activation="sigmoid"),
    ]
)
model.summary()
The output is this:
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
resizing (Resizing) (None, 224, 224, 3) 0
_________________________________________________________________
rescaling_1 (Rescaling) (None, 224, 224, 3) 0
_________________________________________________________________
MobilenetV3small (Functional (None, 7, 7, 1024) 1529968 <---------- why can't I see all layers here?
_________________________________________________________________
global_average_pooling2d (Gl (None, 1024) 0
_________________________________________________________________
dense (Dense) (None, 1) 1025
How do I show all layers?
for layer in model.layers:
    print(layer)
The above has the same problem. What am I doing wrong?
In such a setup, the base_model acts as a single layer, i.e., it becomes nested. To inspect it, you can try either
model.layers[2].summary()
for i, layer in enumerate(model.layers):
    if i == 2:
        for nested_layer in layer.layers:
            print(nested_layer)
or, more intuitively, you can use this solution.
# recursively print the summary of every nested sub-model
def summary_plus(layer, i=0):
    if hasattr(layer, 'layers'):
        if i != 0:  # skip the outer model itself
            layer.summary()
        for l in layer.layers:
            i += 1
            summary_plus(l, i=i)

summary_plus(model)
or you can use the plot_model function as well:
keras.utils.plot_model(
    model,
    expand_nested=True  # <- make it True
)
Update 1: Raised an issue about this: Keras #15239. Hopefully, it will be solved soon.
Update 2: model.summary now has an expand_nested parameter. #15251
I'm currently building a CNN that uses transfer learning to classify images.
In my model, there is a tensorflow-hub KerasLayer that uses EfficientNet in order to create a feature vector.
My code is here:
model = models.Sequential([
    hub.KerasLayer("https://tfhub.dev/google/efficientnet/b7/feature-vector/1", trainable=True),  # Trainable
    layers.Dropout(DROPOUT),
    layers.Dense(NEURONS_PER_LAYER, kernel_regularizer=tf.keras.regularizers.l2(REG_LAMBDA), activation=ACTIVATION),
    layers.Dropout(DROPOUT),
    layers.Dense(NEURONS_PER_LAYER, kernel_regularizer=tf.keras.regularizers.l2(REG_LAMBDA), activation=ACTIVATION),
    layers.Dropout(DROPOUT),
    layers.Dense(NEURONS_PER_LAYER, kernel_regularizer=tf.keras.regularizers.l2(REG_LAMBDA), activation=ACTIVATION),
    layers.Dropout(DROPOUT),
    layers.Dense(NEURONS_PER_LAYER, kernel_regularizer=tf.keras.regularizers.l2(REG_LAMBDA), activation=ACTIVATION),
    layers.Dropout(DROPOUT),
    layers.Dense(1, activation="sigmoid")
])
I can freeze or unfreeze the entire KerasLayer, but I can't seem to find a way to only freeze the earlier layers and fine-tune the higher-level parts. Can anyone help?
You can freeze an entire layer by setting layer.trainable = False. In case you load an entire model or create one from scratch, you can use a loop like this to find a specific layer to freeze.
# load a model or create a model
model = Model(...)
# first you print out your model summary
model.summary()
# you will get something like this
'''
Model: "sequential_2"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
inception_resnet_v2 (Model) (None, 2, 2, 1536) 54336736
_________________________________________________________________
flatten_2 (Flatten) (None, 6144) 0
_________________________________________________________________
dropout_2 (Dropout) (None, 6144) 0
_________________________________________________________________
dense_8 (Dense) (None, 2048) 12584960
_________________________________________________________________
dense_9 (Dense) (None, 1024) 2098176
_________________________________________________________________
dense_10 (Dense) (None, 512) 524800
_________________________________________________________________
dense_11 (Dense) (None, 17) 8721
=================================================================
'''
# here is a loop for freezing a particular layer (dense_10 in this example)
for layer in model.layers:
    # selecting the layer by name
    if layer.name == 'dense_10':
        layer.trainable = False
# for that hub layer, you need to create the base network outside your model
# for easy access; here the same idea is shown with a keras.applications base
# my inception layer
inception_layer = keras.applications.InceptionResNetV2(weights='imagenet', include_top=False, input_shape=(128, 128, 3))
# add it to the model
model.add(inception_layer)
# same trick
inception_layer.summary()
# here is the same loop as in the example above
for layer in inception_layer.layers:
    # selecting the layer by name
    if layer.name == 'block8_10_conv':
        layer.trainable = False
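The question asks about freezing the earlier layers while fine-tuning the higher-level parts, not just one named layer. A minimal sketch of that pattern, assuming the InceptionResNetV2 base from above (block8_10_conv is only an illustrative cut point): flip a flag once the loop reaches the first layer you want to keep trainable.

from tensorflow import keras

base = keras.applications.InceptionResNetV2(
    weights='imagenet', include_top=False, input_shape=(128, 128, 3))

# freeze everything before the cut point, fine-tune everything after it
trainable = False
for layer in base.layers:
    if layer.name == 'block8_10_conv':  # first layer to fine-tune (illustrative)
        trainable = True
    layer.trainable = trainable

Remember that trainable flags are only read when the model is compiled, so re-compile after changing them. A hub.KerasLayer does not expose its internal layers this way, which is why swapping in a keras.applications base makes partial freezing possible.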
Keras is giving different results when I define my model via the declarative method instead of the functional method. The two models appear to be equivalent, but using the ".add()" syntax works while the declarative syntax gives errors; it's a different error each time, but usually something like:
A target array with shape (10, 1) was passed for an output of shape (None, 16) while using as loss `mean_squared_error`. This loss expects targets to have the same shape as the output.
There seems to be something going on with auto-conversion of input shapes, but I can't tell what. Does anyone know what I'm doing wrong? Why aren't these two models exactly equivalent?
import tensorflow as tf
import tensorflow.keras
import numpy as np
x = np.arange(10).reshape((-1,1,1))
y = np.arange(10)
#This model works fine
model = tf.keras.Sequential()
model.add(tf.keras.layers.LSTM(32, input_shape=(1, 1), return_sequences = True))
model.add(tf.keras.layers.LSTM(16))
model.add(tf.keras.layers.Dense(1))
model.add(tf.keras.layers.Activation('linear'))
#This model fails. But shouldn't this be equivalent to the above?
model2 = tf.keras.Sequential(
    {
        tf.keras.layers.LSTM(32, input_shape=(1, 1), return_sequences=True),
        tf.keras.layers.LSTM(16),
        tf.keras.layers.Dense(1),
        tf.keras.layers.Activation('linear')
    })
#This works
model.compile(loss='mean_squared_error', optimizer='adagrad')
model.fit(x, y, epochs=1, batch_size=1, verbose=2)
#But this doesn't! Why not? The error is different each time, but usually
#something about the input size being wrong
model2.compile(loss='mean_squared_error', optimizer='adagrad')
model2.fit(x, y, epochs=1, batch_size=1, verbose=2)
Why aren't those two models equivalent? Why does one handle the input size correctly but the other doesn't? The second model fails with a different error each time (once in a while it even works), so I thought maybe there was some interaction with the first model. But I've tried commenting out the first model and that doesn't help. So why doesn't the second one work?
UPDATE: Here is the model.summary() for the first and second model. They do seem different, but I don't understand why.
For model.summary():
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
lstm (LSTM) (None, 1, 32) 4352
_________________________________________________________________
lstm_1 (LSTM) (None, 16) 3136
_________________________________________________________________
dense (Dense) (None, 1) 17
_________________________________________________________________
activation (Activation) (None, 1) 0
=================================================================
Total params: 7,505
Trainable params: 7,505
Non-trainable params: 0
For model2.summary():
Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
lstm_2 (LSTM) (None, 1, 32) 4352
_________________________________________________________________
activation_1 (Activation) (None, 1, 32) 0
_________________________________________________________________
lstm_3 (LSTM) (None, 16) 3136
_________________________________________________________________
dense_1 (Dense) (None, 1) 17
=================================================================
Total params: 7,505
Trainable params: 7,505
Non-trainable params: 0
When you create the model with the inline declaration, you put the layers in curly braces {}, which makes a set, which is inherently unordered. Change the curly braces to square brackets [] to put them in an ordered list. This ensures the layers are in the correct order in your model.
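For reference, here is the corrected model2 from the question with the braces changed to square brackets; nothing else needs to change:

model2 = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(1, 1), return_sequences=True),
    tf.keras.layers.LSTM(16),
    tf.keras.layers.Dense(1),
    tf.keras.layers.Activation('linear')
])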
Failed to optimize Keras model with Intel inference engine (OpenVINO toolkit R.5)
I froze my model just as the tutorial suggests. The Keras model is trained and tested; I need to optimize it for inference.
However, I get an error while running the model optimizer (the mo.py script) on my custom model:
[ ERROR ] shapes (128,9) and (0,) not aligned: 9 (dim 1) != 0 (dim 0)
The last few layers of my model (9 is the number of output classes) are:
conv2d_4 (Conv2D) (None, 4, 4, 128) 204928 batch_normalization_3[0][0]
__________________________________________________________________________________________________
activation_4 (Activation) (None, 4, 4, 128) 0 conv2d_4[0][0]
__________________________________________________________________________________________________
batch_normalization_4 (BatchNor (None, 4, 4, 128) 512 activation_4[0][0]
__________________________________________________________________________________________________
average_pooling2d_2 (AveragePoo (None, 1, 1, 128) 0 batch_normalization_4[0][0]
__________________________________________________________________________________________________
dropout_2 (Dropout) (None, 1, 1, 128) 0 average_pooling2d_2[0][0]
__________________________________________________________________________________________________
flatten (Flatten) (None, 128) 0 dropout_2[0][0]
__________________________________________________________________________________________________
dense (Dense) (None, 128) 16512 flatten[0][0]
__________________________________________________________________________________________________
activation_5 (Activation) (None, 128) 0 dense[0][0]
__________________________________________________________________________________________________
batch_normalization_5 (BatchNor (None, 128) 512 activation_5[0][0]
__________________________________________________________________________________________________
dropout_3 (Dropout) (None, 128) 0 batch_normalization_5[0][0]
__________________________________________________________________________________________________
dense_1 (Dense) (None, 9) 1161 dropout_3[0][0]
__________________________________________________________________________________________________
color_prediction (Activation) (None, 9) 0 dense_1[0][0]
__________________________________________________________________________________________________
The model optimizer fails due to the presence of the BatchNormalization layers. When I remove them, it runs successfully. However, I freeze the graph with
tf.keras.backend.set_learning_phase(0)
So nodes like BatchNormalization and Dropout should be removed in the frozen graph. I can't figure out why they aren't removed.
Thanks a lot!
I managed to run the OpenVINO model optimizer on a Keras model with Batch Normalization layers. The model also seemed to converge a little faster. However, the test classification rate was lower by about 5-7% (and the gap between the classification rates on the testing and training datasets was bigger) than that of the model without BN. I am not sure if BatchNormalization is properly removed from the model in my solution (but the OpenVINO model file doesn't include one, so it is removed).
Remove BN and Dropout layers:
# Clear any previous session.
tf.keras.backend.clear_session()
# This line must be executed before loading the Keras model.
tf.keras.backend.set_learning_phase(0)
model = tf.keras.models.load_model(weights_path)

for layer in model.layers:
    layer.training = False
    if isinstance(layer, tf.keras.layers.BatchNormalization):
        layer._per_input_updates = {}
    elif isinstance(layer, tf.keras.layers.Dropout):
        layer._per_input_updates = {}
And then freeze the session:
def freeze_session(session, keep_var_names=None, output_names=None, clear_devices=True):
    """
    Freezes the state of a session into a pruned computation graph.

    Creates a new computation graph where variable nodes are replaced by
    constants taking their current value in the session. The new graph will be
    pruned so that subgraphs that are not necessary to compute the requested
    outputs are removed.
    @param session The TensorFlow session to be frozen.
    @param keep_var_names A list of variable names that should not be frozen,
                          or None to freeze all the variables in the graph.
    @param output_names Names of the relevant graph outputs.
    @param clear_devices Remove the device directives from the graph for better portability.
    @return The frozen graph definition.
    """
    from tensorflow.python.framework.graph_util import convert_variables_to_constants
    graph = session.graph
    with graph.as_default():
        freeze_var_names = list(set(v.op.name for v in tf.global_variables()).difference(keep_var_names or []))
        output_names = output_names or []
        output_names += [v.op.name for v in tf.global_variables()]
        # Graph -> GraphDef ProtoBuf
        input_graph_def = graph.as_graph_def()
        if clear_devices:
            for node in input_graph_def.node:
                node.device = ""
        frozen_graph = convert_variables_to_constants(session, input_graph_def,
                                                      output_names, freeze_var_names)
        return frozen_graph
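For completeness, here is a sketch of how the function is typically invoked from Keras under the TF 1.x API it targets (the directory and file name are illustrative):

from tensorflow.keras import backend as K

# freeze the current Keras session, keeping only the model's output nodes
frozen_graph = freeze_session(
    K.get_session(),
    output_names=[out.op.name for out in model.outputs])
# write the frozen GraphDef to disk for the OpenVINO model optimizer
tf.train.write_graph(frozen_graph, "model_dir", "frozen_model.pb", as_text=False)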