Out of memory while running on tensorflow-gpu - tensorflow

I'm training a model on an RTX 3060 GPU with 6 GB of memory,
TensorFlow 2.4,
CUDA 11.0 and cuDNN 8.0.4.
I'm facing this problem despite the fact that I'm using only batch_size=2 (even 1 fails):
2021-09-18 11:27:14.053184: I tensorflow/core/common_runtime/bfc_allocator.cc:1040] Sum Total of in-use chunks: 979.49MiB
2021-09-18 11:27:14.053190: I tensorflow/core/common_runtime/bfc_allocator.cc:1042] total_region_allocated_bytes_: 1081081856 memory_limit_: 4963174976 available bytes: 3882093120 curr_region_allocation_bytes_: 4294967296
2021-09-18 11:27:14.053200: I tensorflow/core/common_runtime/bfc_allocator.cc:1048] Stats:
Limit: 4963174976
InUse: 1027070208
MaxInUse: 2221276928
NumAllocs: 73401
MaxAllocSize: 1234173952
Reserved: 0
PeakReserved: 0
LargestFreeBlock: 0
2021-09-18 11:27:14.053218: W tensorflow/core/common_runtime/bfc_allocator.cc:441] **********************************************____**************************************************
2021-09-18 11:27:14.053235: W tensorflow/core/framework/op_kernel.cc:1763] OP_REQUIRES failed at cwise_ops_common.h:128 : Resource exhausted: OOM when allocating tensor with shape[2,128,128,128,16] and type bool on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
I can't solve the problem. I have set
physical_devices = tf.config.list_physical_devices('GPU')
tf.config.experimental.set_memory_growth(physical_devices[0], True)
but it doesn't work; I have the necessary memory, but it fails to run.
Can somebody help me?
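For completeness, this is roughly how the setting is placed (a minimal sketch; memory growth only takes effect if it is configured before any operation touches the GPU):
import tensorflow as tf

gpus = tf.config.list_physical_devices('GPU')
for gpu in gpus:
    try:
        # Must run before the first op initializes the GPU
        tf.config.experimental.set_memory_growth(gpu, True)
    except RuntimeError as e:
        print(e)  # raised if the GPU was already initialized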
Edit:
I'm using a 3D U-Net model; here is a summary of my model:
Model: "model"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) [(None, 128, 128, 12 0
__________________________________________________________________________________________________
conv3d (Conv3D) (None, 128, 128, 128 1312 input_1[0][0]
__________________________________________________________________________________________________
dropout (Dropout) (None, 128, 128, 128 0 conv3d[0][0]
__________________________________________________________________________________________________
conv3d_1 (Conv3D) (None, 128, 128, 128 6928 dropout[0][0]
__________________________________________________________________________________________________
max_pooling3d (MaxPooling3D) (None, 64, 64, 64, 1 0 conv3d_1[0][0]
__________________________________________________________________________________________________
conv3d_2 (Conv3D) (None, 64, 64, 64, 3 13856 max_pooling3d[0][0]
__________________________________________________________________________________________________
dropout_1 (Dropout) (None, 64, 64, 64, 3 0 conv3d_2[0][0]
__________________________________________________________________________________________________
conv3d_3 (Conv3D) (None, 64, 64, 64, 3 27680 dropout_1[0][0]
__________________________________________________________________________________________________
max_pooling3d_1 (MaxPooling3D) (None, 32, 32, 32, 3 0 conv3d_3[0][0]
__________________________________________________________________________________________________
conv3d_4 (Conv3D) (None, 32, 32, 32, 6 55360 max_pooling3d_1[0][0]
__________________________________________________________________________________________________
dropout_2 (Dropout) (None, 32, 32, 32, 6 0 conv3d_4[0][0]
__________________________________________________________________________________________________
conv3d_5 (Conv3D) (None, 32, 32, 32, 6 110656 dropout_2[0][0]
__________________________________________________________________________________________________
max_pooling3d_2 (MaxPooling3D) (None, 16, 16, 16, 6 0 conv3d_5[0][0]
__________________________________________________________________________________________________
conv3d_6 (Conv3D) (None, 16, 16, 16, 1 221312 max_pooling3d_2[0][0]
__________________________________________________________________________________________________
dropout_3 (Dropout) (None, 16, 16, 16, 1 0 conv3d_6[0][0]
__________________________________________________________________________________________________
conv3d_7 (Conv3D) (None, 16, 16, 16, 1 442496 dropout_3[0][0]
__________________________________________________________________________________________________
max_pooling3d_3 (MaxPooling3D) (None, 8, 8, 8, 128) 0 conv3d_7[0][0]
__________________________________________________________________________________________________
conv3d_8 (Conv3D) (None, 8, 8, 8, 256) 884992 max_pooling3d_3[0][0]
__________________________________________________________________________________________________
dropout_4 (Dropout) (None, 8, 8, 8, 256) 0 conv3d_8[0][0]
__________________________________________________________________________________________________
conv3d_9 (Conv3D) (None, 8, 8, 8, 256) 1769728 dropout_4[0][0]
__________________________________________________________________________________________________
conv3d_transpose (Conv3DTranspo (None, 16, 16, 16, 1 262272 conv3d_9[0][0]
__________________________________________________________________________________________________
concatenate (Concatenate) (None, 16, 16, 16, 2 0 conv3d_transpose[0][0]
conv3d_7[0][0]
__________________________________________________________________________________________________
conv3d_10 (Conv3D) (None, 16, 16, 16, 1 884864 concatenate[0][0]
__________________________________________________________________________________________________
dropout_5 (Dropout) (None, 16, 16, 16, 1 0 conv3d_10[0][0]
__________________________________________________________________________________________________
conv3d_11 (Conv3D) (None, 16, 16, 16, 1 442496 dropout_5[0][0]
__________________________________________________________________________________________________
conv3d_transpose_1 (Conv3DTrans (None, 32, 32, 32, 6 65600 conv3d_11[0][0]
__________________________________________________________________________________________________
concatenate_1 (Concatenate) (None, 32, 32, 32, 1 0 conv3d_transpose_1[0][0]
conv3d_5[0][0]
__________________________________________________________________________________________________
conv3d_12 (Conv3D) (None, 32, 32, 32, 6 221248 concatenate_1[0][0]
__________________________________________________________________________________________________
dropout_6 (Dropout) (None, 32, 32, 32, 6 0 conv3d_12[0][0]
__________________________________________________________________________________________________
conv3d_13 (Conv3D) (None, 32, 32, 32, 6 110656 dropout_6[0][0]
__________________________________________________________________________________________________
conv3d_transpose_2 (Conv3DTrans (None, 64, 64, 64, 3 16416 conv3d_13[0][0]
__________________________________________________________________________________________________
concatenate_2 (Concatenate) (None, 64, 64, 64, 6 0 conv3d_transpose_2[0][0]
conv3d_3[0][0]
__________________________________________________________________________________________________
conv3d_14 (Conv3D) (None, 64, 64, 64, 3 55328 concatenate_2[0][0]
__________________________________________________________________________________________________
dropout_7 (Dropout) (None, 64, 64, 64, 3 0 conv3d_14[0][0]
__________________________________________________________________________________________________
conv3d_15 (Conv3D) (None, 64, 64, 64, 3 27680 dropout_7[0][0]
__________________________________________________________________________________________________
conv3d_transpose_3 (Conv3DTrans (None, 128, 128, 128 4112 conv3d_15[0][0]
__________________________________________________________________________________________________
concatenate_3 (Concatenate) (None, 128, 128, 128 0 conv3d_transpose_3[0][0]
conv3d_1[0][0]
__________________________________________________________________________________________________
conv3d_16 (Conv3D) (None, 128, 128, 128 13840 concatenate_3[0][0]
__________________________________________________________________________________________________
dropout_8 (Dropout) (None, 128, 128, 128 0 conv3d_16[0][0]
__________________________________________________________________________________________________
conv3d_17 (Conv3D) (None, 128, 128, 128 6928 dropout_8[0][0]
__________________________________________________________________________________________________
conv3d_18 (Conv3D) (None, 128, 128, 128 68 conv3d_17[0][0]
==================================================================================================
Total params: 5,645,828
Trainable params: 5,645,828
Non-trainable params: 0
__________________________________________________________________________________________________
Image shape: (None, 128, 128, 128, 3)
Mask shape: (None, 128, 128, 128, 4)

Related

Modify Input layer of Keras Model

I have a pretrained network. I want to read that model and change the shape of its input layer. I've tried the following code:
import os
import tensorflow as tf
from tensorflow import keras
print(tf.version.VERSION)
2.4.1
from google.colab import drive
drive.mount("/content/drive", force_remount=True )
new_model = tf.keras.models.load_model("/content/drive/My Drive/NonQuantRelu.h5")
new_model.summary()
Model: "functional_1"
Layer (type) Output Shape Param #
Input (InputLayer) [(None, 108, 1)] 0
ConvL1_Filters (Conv1D) (None, 98, 24) 264
I really don't want the None in the InputLayer, so I've tried to:
new_input_layer = keras.Input(batch_size=1, shape=(108,1),name="Input",dtype="float32",ragged=False,sparse=False)
new_input_layer.shape
TensorShape([1, 108, 1])
new_model.layers[0] = new_input_layer
new_model.summary()
Model: "functional_1"
Layer (type) Output Shape Param #
Input (InputLayer) [(None, 108, 1)] 0
ConvL1_Filters (Conv1D) (None, 98, 24) 264
Why is the input layer not changed?
Thanks to everyone.
I was able to replicate your issue using the VGG16 network.
import tensorflow as tf
print(tf.__version__)
from google.colab import drive
drive.mount('/content/drive/')
model = tf.keras.models.load_model('/content/drive/MyDrive/vgg16.h5')
model.summary()
Output:
2.4.1
Drive already mounted at /content/drive/; to attempt to forcibly remount, call drive.mount("/content/drive/", force_remount=True).
Model: "vgg16"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(None, 224, 224, 3)] 0
_________________________________________________________________
block1_conv1 (Conv2D) (None, 224, 224, 64) 1792
_________________________________________________________________
block1_conv2 (Conv2D) (None, 224, 224, 64) 36928
_________________________________________________________________
block1_pool (MaxPooling2D) (None, 112, 112, 64) 0
_________________________________________________________________
block2_conv1 (Conv2D) (None, 112, 112, 128) 73856
_________________________________________________________________
block2_conv2 (Conv2D) (None, 112, 112, 128) 147584
_________________________________________________________________
block2_pool (MaxPooling2D) (None, 56, 56, 128) 0
_________________________________________________________________
block3_conv1 (Conv2D) (None, 56, 56, 256) 295168
_________________________________________________________________
block3_conv2 (Conv2D) (None, 56, 56, 256) 590080
_________________________________________________________________
block3_conv3 (Conv2D) (None, 56, 56, 256) 590080
_________________________________________________________________
block3_pool (MaxPooling2D) (None, 28, 28, 256) 0
_________________________________________________________________
block4_conv1 (Conv2D) (None, 28, 28, 512) 1180160
_________________________________________________________________
block4_conv2 (Conv2D) (None, 28, 28, 512) 2359808
_________________________________________________________________
block4_conv3 (Conv2D) (None, 28, 28, 512) 2359808
_________________________________________________________________
block4_pool (MaxPooling2D) (None, 14, 14, 512) 0
_________________________________________________________________
block5_conv1 (Conv2D) (None, 14, 14, 512) 2359808
_________________________________________________________________
block5_conv2 (Conv2D) (None, 14, 14, 512) 2359808
_________________________________________________________________
block5_conv3 (Conv2D) (None, 14, 14, 512) 2359808
_________________________________________________________________
block5_pool (MaxPooling2D) (None, 7, 7, 512) 0
=================================================================
Total params: 14,714,688
Trainable params: 14,714,688
Non-trainable params: 0
_________________________________________________________________
To remove the first layer of the network, use pop as shown below:
model._layers.pop(0)
To add a new input layer, you can run the code shown below:
new_input_layer = tf.keras.Input(batch_size= 32, shape=(224,224,3))
new_output_layer = model(new_input_layer)
new_model = tf.keras.Model(new_input_layer, new_output_layer)
new_model.summary()
Output:
Model: "model"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(32, 224, 224, 3)] 0
_________________________________________________________________
vgg16 (Functional) (None, 7, 7, 512) 14714688
=================================================================
Total params: 14,714,688
Trainable params: 14,714,688
Non-trainable params: 0
_________________________________________________________________
You can use get_layer to retrieve a layer. Here, to get the details of the vgg16 (Functional) layer (i.e. indexed at 1 in new_model), you can run the code shown below:
new_model.get_layer(index=1).summary()
Output:
Model: "vgg16"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
block1_conv1 (Conv2D) (None, 224, 224, 64) 1792
_________________________________________________________________
block1_conv2 (Conv2D) (None, 224, 224, 64) 36928
_________________________________________________________________
block1_pool (MaxPooling2D) (None, 112, 112, 64) 0
_________________________________________________________________
block2_conv1 (Conv2D) (None, 112, 112, 128) 73856
_________________________________________________________________
block2_conv2 (Conv2D) (None, 112, 112, 128) 147584
_________________________________________________________________
block2_pool (MaxPooling2D) (None, 56, 56, 128) 0
_________________________________________________________________
block3_conv1 (Conv2D) (None, 56, 56, 256) 295168
_________________________________________________________________
block3_conv2 (Conv2D) (None, 56, 56, 256) 590080
_________________________________________________________________
block3_conv3 (Conv2D) (None, 56, 56, 256) 590080
_________________________________________________________________
block3_pool (MaxPooling2D) (None, 28, 28, 256) 0
_________________________________________________________________
block4_conv1 (Conv2D) (None, 28, 28, 512) 1180160
_________________________________________________________________
block4_conv2 (Conv2D) (None, 28, 28, 512) 2359808
_________________________________________________________________
block4_conv3 (Conv2D) (None, 28, 28, 512) 2359808
_________________________________________________________________
block4_pool (MaxPooling2D) (None, 14, 14, 512) 0
_________________________________________________________________
block5_conv1 (Conv2D) (None, 14, 14, 512) 2359808
_________________________________________________________________
block5_conv2 (Conv2D) (None, 14, 14, 512) 2359808
_________________________________________________________________
block5_conv3 (Conv2D) (None, 14, 14, 512) 2359808
_________________________________________________________________
block5_pool (MaxPooling2D) (None, 7, 7, 512) 0
=================================================================
Total params: 14,714,688
Trainable params: 14,714,688
Non-trainable params: 0
_________________________________________________________________

Keras Model Training - Is it possible to pass a generator with 1 or more inputs?

I'm currently working on a Visual Question Answering project.
I've made a model as follows:
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_3 (InputLayer) [(None, 224, 224, 3) 0
__________________________________________________________________________________________________
block1_conv1 (Conv2D) (None, 224, 224, 64) 1792 input_3[0][0]
__________________________________________________________________________________________________
block1_conv2 (Conv2D) (None, 224, 224, 64) 36928 block1_conv1[0][0]
__________________________________________________________________________________________________
block1_pool (MaxPooling2D) (None, 112, 112, 64) 0 block1_conv2[0][0]
__________________________________________________________________________________________________
block2_conv1 (Conv2D) (None, 112, 112, 128 73856 block1_pool[0][0]
__________________________________________________________________________________________________
block2_conv2 (Conv2D) (None, 112, 112, 128 147584 block2_conv1[0][0]
__________________________________________________________________________________________________
block2_pool (MaxPooling2D) (None, 56, 56, 128) 0 block2_conv2[0][0]
__________________________________________________________________________________________________
block3_conv1 (Conv2D) (None, 56, 56, 256) 295168 block2_pool[0][0]
__________________________________________________________________________________________________
block3_conv2 (Conv2D) (None, 56, 56, 256) 590080 block3_conv1[0][0]
__________________________________________________________________________________________________
block3_conv3 (Conv2D) (None, 56, 56, 256) 590080 block3_conv2[0][0]
__________________________________________________________________________________________________
block3_conv4 (Conv2D) (None, 56, 56, 256) 590080 block3_conv3[0][0]
__________________________________________________________________________________________________
block3_pool (MaxPooling2D) (None, 28, 28, 256) 0 block3_conv4[0][0]
__________________________________________________________________________________________________
block4_conv1 (Conv2D) (None, 28, 28, 512) 1180160 block3_pool[0][0]
__________________________________________________________________________________________________
block4_conv2 (Conv2D) (None, 28, 28, 512) 2359808 block4_conv1[0][0]
__________________________________________________________________________________________________
block4_conv3 (Conv2D) (None, 28, 28, 512) 2359808 block4_conv2[0][0]
__________________________________________________________________________________________________
block4_conv4 (Conv2D) (None, 28, 28, 512) 2359808 block4_conv3[0][0]
__________________________________________________________________________________________________
block4_pool (MaxPooling2D) (None, 14, 14, 512) 0 block4_conv4[0][0]
__________________________________________________________________________________________________
block5_conv1 (Conv2D) (None, 14, 14, 512) 2359808 block4_pool[0][0]
__________________________________________________________________________________________________
block5_conv2 (Conv2D) (None, 14, 14, 512) 2359808 block5_conv1[0][0]
__________________________________________________________________________________________________
block5_conv3 (Conv2D) (None, 14, 14, 512) 2359808 block5_conv2[0][0]
__________________________________________________________________________________________________
block5_conv4 (Conv2D) (None, 14, 14, 512) 2359808 block5_conv3[0][0]
__________________________________________________________________________________________________
block5_pool (MaxPooling2D) (None, 7, 7, 512) 0 block5_conv4[0][0]
__________________________________________________________________________________________________
flatten_1 (Flatten) (None, 25088) 0 block5_pool[0][0]
__________________________________________________________________________________________________
input_4 (InputLayer) [(None, 20)] 0
__________________________________________________________________________________________________
repeat_vector_1 (RepeatVector) (None, 20, 25088) 0 flatten_1[0][0]
__________________________________________________________________________________________________
embedding_1 (Embedding) (None, 20, 50) 901900 input_4[0][0]
__________________________________________________________________________________________________
concatenate_1 (Concatenate) (None, 20, 25138) 0 repeat_vector_1[0][0]
embedding_1[0][0]
__________________________________________________________________________________________________
bidirectional_1 (Bidirectional) (None, 20, 50) 5032800 concatenate_1[0][0]
__________________________________________________________________________________________________
global_max_pooling1d_1 (GlobalM (None, 50) 0 bidirectional_1[0][0]
__________________________________________________________________________________________________
dense_1 (Dense) (None, 18037) 919887 global_max_pooling1d_1[0][0]
==================================================================================================
You can find the original paper on VQA here: http://arxiv.org/pdf/1512.02167.pdf
To sum up, I have a model with 2 inputs:
a pre-trained VGG19 that takes images,
and an embedding layer that takes tokenized questions.
As output we have a Bidirectional LSTM with a final Dense layer that gives the answer to the question.
The training data is as follows:
img_path question answer
103 train2014/COCO_train2014_000000262171.jpg How many people are on the boat? 5
104 train2014/COCO_train2014_000000262171.jpg What color are the leaves? green
105 train2014/COCO_train2014_000000262171.jpg What type of watercraft is that? raft
131 train2014/COCO_train2014_000000262180.jpg What is the fruit? banana
132 train2014/COCO_train2014_000000262180.jpg Is this a good dessert?
My question is this: I can't load all the images into memory, so I would like to know if it is possible to fit the model using a generator that produces the images on the fly, together with the tokenized questions.
I would like to do something like :
h = model_VQA.fit([X_train_img_generator, X_train_question], y_train_answer, epochs = 15, batch_size = 32)
where X_train_question holds the tokenized questions and X_train_img_generator is the image generator.
But it doesn't work. Is there a way to handle this properly?
---------- Edit June 02 2021
OK, I've now updated the answer to my problem and also corrected some issues; the input image size is now 480x640x3.
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_8 (InputLayer) [(None, 480, 640, 3) 0
__________________________________________________________________________________________________
block1_conv1 (Conv2D) (None, 480, 640, 64) 1792 input_8[0][0]
__________________________________________________________________________________________________
block1_conv2 (Conv2D) (None, 480, 640, 64) 36928 block1_conv1[0][0]
__________________________________________________________________________________________________
block1_pool (MaxPooling2D) (None, 240, 320, 64) 0 block1_conv2[0][0]
__________________________________________________________________________________________________
block2_conv1 (Conv2D) (None, 240, 320, 128 73856 block1_pool[0][0]
__________________________________________________________________________________________________
block2_conv2 (Conv2D) (None, 240, 320, 128 147584 block2_conv1[0][0]
__________________________________________________________________________________________________
block2_pool (MaxPooling2D) (None, 120, 160, 128 0 block2_conv2[0][0]
__________________________________________________________________________________________________
block3_conv1 (Conv2D) (None, 120, 160, 256 295168 block2_pool[0][0]
__________________________________________________________________________________________________
block3_conv2 (Conv2D) (None, 120, 160, 256 590080 block3_conv1[0][0]
__________________________________________________________________________________________________
block3_conv3 (Conv2D) (None, 120, 160, 256 590080 block3_conv2[0][0]
__________________________________________________________________________________________________
block3_conv4 (Conv2D) (None, 120, 160, 256 590080 block3_conv3[0][0]
__________________________________________________________________________________________________
block3_pool (MaxPooling2D) (None, 60, 80, 256) 0 block3_conv4[0][0]
__________________________________________________________________________________________________
block4_conv1 (Conv2D) (None, 60, 80, 512) 1180160 block3_pool[0][0]
__________________________________________________________________________________________________
block4_conv2 (Conv2D) (None, 60, 80, 512) 2359808 block4_conv1[0][0]
__________________________________________________________________________________________________
block4_conv3 (Conv2D) (None, 60, 80, 512) 2359808 block4_conv2[0][0]
__________________________________________________________________________________________________
block4_conv4 (Conv2D) (None, 60, 80, 512) 2359808 block4_conv3[0][0]
__________________________________________________________________________________________________
block4_pool (MaxPooling2D) (None, 30, 40, 512) 0 block4_conv4[0][0]
__________________________________________________________________________________________________
block5_conv1 (Conv2D) (None, 30, 40, 512) 2359808 block4_pool[0][0]
__________________________________________________________________________________________________
block5_conv2 (Conv2D) (None, 30, 40, 512) 2359808 block5_conv1[0][0]
__________________________________________________________________________________________________
block5_conv3 (Conv2D) (None, 30, 40, 512) 2359808 block5_conv2[0][0]
__________________________________________________________________________________________________
block5_conv4 (Conv2D) (None, 30, 40, 512) 2359808 block5_conv3[0][0]
__________________________________________________________________________________________________
block5_pool (MaxPooling2D) (None, 15, 20, 512) 0 block5_conv4[0][0]
__________________________________________________________________________________________________
flatten_7 (Flatten) (None, 153600) 0 block5_pool[0][0]
__________________________________________________________________________________________________
input_quest (InputLayer) [(None, None)] 0
__________________________________________________________________________________________________
repeat_vector_7 (RepeatVector) (None, 20, 153600) 0 flatten_7[0][0]
__________________________________________________________________________________________________
embedding_7 (Embedding) (None, 20, 50) 540700 input_quest[0][0]
__________________________________________________________________________________________________
concatenate_7 (Concatenate) (None, 20, 153650) 0 repeat_vector_7[0][0]
embedding_7[0][0]
__________________________________________________________________________________________________
bidirectional_7 (Bidirectional) (None, 20, 22) 13522256 concatenate_7[0][0]
__________________________________________________________________________________________________
global_max_pooling1d_7 (GlobalM (None, 22) 0 bidirectional_7[0][0]
__________________________________________________________________________________________________
dense_7 (Dense) (None, 7465) 171695 global_max_pooling1d_7[0][0]
==================================================================================================
And the dataset as :
def load(file_path):
    img = tf.io.read_file(file_path)
    img = tf.image.decode_png(img, channels=3)
    img = tf.image.convert_image_dtype(img, tf.float32)
    img = preprocess_input(img)
    #img = tf.image.resize(img, size=(224, 224))
    img /= 255.
    img = tf.expand_dims(img, axis=0)
    return img
x1 = tf.data.Dataset.from_tensor_slices(X_img_train).map(lambda xx: load(xx))
x2 = tf.data.Dataset.from_tensor_slices(X_train_rnn_pad)
y = tf.data.Dataset.from_tensor_slices(answer_tr)
dataset = tf.data.Dataset.zip(((x1, x2), y))
h = model_VQA.fit(x = dataset, batch_size = 32, shuffle = True, epochs = 15)
but I get the following error:
ValueError: Dimension 0 in both shapes must be equal, but are 1 and 20. Shapes are [1,20] and [20,1]. for '{{node model_8/concatenate_7/concat}} = ConcatV2[N=2, T=DT_FLOAT, Tidx=DT_INT32](model_8/repeat_vector_7/Tile, model_8/embedding_7/embedding_lookup/Identity_1, model_8/concatenate_7/concat/axis)' with input shapes: [1,20,153600], [20,1,50], [] and with computed input tensors: input[2] = <2>.
I guess it has to do with the input shape for the Embedding part, but I don't know what I'm missing.
My input data shapes are
X_train_rnn_pad = (53607, 20)
and
answer_tr = (53607, 7465)
Yes, you should use something like the tf.data.Dataset API.
If you have a two-input model, you will do something like this:
x = tf.data.Dataset.from_tensor_slices((img_path_array, questions_array))
y = tf.data.Dataset.from_tensor_slices(answer_array)
dataset = tf.data.Dataset.zip((x, y)).shuffle(50)
After that, you can use the .map method to load your images and also apply data augmentation if you want.
Then for fitting you simply do:
h = model.fit(dataset)
Using this API avoids loading all the images into memory at once.
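As a minimal sketch of that .map step (assuming the dataset was built as above, that the images are JPEGs, and a hypothetical 224x224 input size), the loading could look like:
def decode_image(path):
    # Read and decode one image; convert_image_dtype also scales to [0, 1]
    img = tf.io.read_file(path)
    img = tf.image.decode_jpeg(img, channels=3)
    img = tf.image.convert_image_dtype(img, tf.float32)
    return tf.image.resize(img, (224, 224))

# Each element of `dataset` is ((path, question), answer)
dataset = dataset.map(
    lambda inputs, answer: ((decode_image(inputs[0]), inputs[1]), answer),
    num_parallel_calls=tf.data.experimental.AUTOTUNE,
).batch(32).prefetch(tf.data.experimental.AUTOTUNE)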

Keras model gives different prediction on the same input during fit() and predict()

I'm training a simple adversarial image to break a pretrained model. However, the result I obtained during the fit() process is different from calling predict() on the same input (constant input).
model.trainable = False
gan = Sequential()
gan.add(Dense( 256 * 256 * 3, use_bias=False, input_shape=(1,)))
gan.add(Reshape((256, 256, 3)))
gan.add(model)
gan.summary()
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_2 (Dense) (None, 196608) 196608
_________________________________________________________________
reshape_2 (Reshape) (None, 256, 256, 3) 0
_________________________________________________________________
sequential_1 (Sequential) (None, 2) 24952610
=================================================================
Total params: 25,149,218
Trainable params: 196,608
Non-trainable params: 24,952,610
_________________________________________________________________
img = img.reshape(256, 256, 3)
def custom_loss(layer):
    # Create a loss function that adds the MSE loss to the mean of all squared activations of a specific layer
    def loss(y_true, y_pred):
        y_true = K.print_tensor(y_true, message='y_true = ')
        y_pred = K.print_tensor(y_pred, message='y_pred = ')
        label_diff = K.square(y_pred - y_true)
        return K.mean(label_diff)
    # Return a function
    return loss
gan.compile(optimizer='adam',
            loss=custom_loss(gan.layers[1]),  # Call the loss function with the selected layer
            metrics=['accuracy'])
x = np.ones((1,1))
goal = np.array([0, 1])
y = goal.reshape((1,2))
gan.fit(x, y, epochs=300, verbose=1)
During fit(), the loss is decreasing nicely
Epoch 1/300
1/1 [==============================] - 5s 5s/step - loss: 0.9950 - acc: 0.0000e+00
...
Epoch 300/300
1/1 [==============================] - 0s 46ms/step - loss: 0.0045 - acc: 1.0000
In the backend, the y_pred and y_true were also correct
......
y_true = [[0 1]]
y_pred = [[0.100334756 0.899665236]]
y_true = [[0 1]]
y_pred = [[0.116679631 0.883320332]]
y_true = [[0 1]]
y_pred = [[0.0832592845 0.916740656]]
y_true = [[0 1]]
y_pred = [[0.098835744 0.901164234]]
y_true = [[0 1]]
y_pred = [[0.0979194269 0.902080595]]
y_true = [[0 1]]
y_pred = [[0.057831794 0.942168236]]
y_true = [[0 1]]
y_pred = [[0.0760448873 0.923955142]]
y_true = [[0 1]]
y_pred = [[0.041532293 0.958467722]]
y_true = [[0 1]]
y_pred = [[0.0667938739 0.933206141]]
print(gan.predict(x))
Gives
[[0.99923825 0.00076174]]
I tried this with both a pretrained ResNet and InceptionV3, and both experience the same problem. Attached is model.summary().
For Inception:
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
inception_v3 (Model) (None, None, None, 2048) 21802784
_________________________________________________________________
global_average_pooling2d_1 ( (None, 2048) 0
_________________________________________________________________
dense_1 (Dense) (None, 1024) 2098176
_________________________________________________________________
dropout_1 (Dropout) (None, 1024) 0
_________________________________________________________________
dense_2 (Dense) (None, 1024) 1049600
_________________________________________________________________
dropout_2 (Dropout) (None, 1024) 0
_________________________________________________________________
dense_3 (Dense) (None, 2) 2050
=================================================================
Total params: 24,952,610
Trainable params: 14,264,706
Non-trainable params: 10,687,904
_________________________________________________________________
For Resnet:
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) (None, 256, 256, 3) 0
__________________________________________________________________________________________________
conv1_pad (ZeroPadding2D) (None, 262, 262, 3) 0 input_1[0][0]
__________________________________________________________________________________________________
conv1 (Conv2D) (None, 128, 128, 64) 9472 conv1_pad[0][0]
__________________________________________________________________________________________________
bn_conv1 (BatchNormalization) (None, 128, 128, 64) 256 conv1[0][0]
__________________________________________________________________________________________________
activation_1 (Activation) (None, 128, 128, 64) 0 bn_conv1[0][0]
__________________________________________________________________________________________________
pool1_pad (ZeroPadding2D) (None, 130, 130, 64) 0 activation_1[0][0]
__________________________________________________________________________________________________
max_pooling2d_1 (MaxPooling2D) (None, 64, 64, 64) 0 pool1_pad[0][0]
__________________________________________________________________________________________________
res2a_branch2a (Conv2D) (None, 64, 64, 64) 4160 max_pooling2d_1[0][0]
__________________________________________________________________________________________________
bn2a_branch2a (BatchNormalizati (None, 64, 64, 64) 256 res2a_branch2a[0][0]
__________________________________________________________________________________________________
activation_2 (Activation) (None, 64, 64, 64) 0 bn2a_branch2a[0][0]
__________________________________________________________________________________________________
res2a_branch2b (Conv2D) (None, 64, 64, 64) 36928 activation_2[0][0]
__________________________________________________________________________________________________
bn2a_branch2b (BatchNormalizati (None, 64, 64, 64) 256 res2a_branch2b[0][0]
__________________________________________________________________________________________________
activation_3 (Activation) (None, 64, 64, 64) 0 bn2a_branch2b[0][0]
__________________________________________________________________________________________________
res2a_branch2c (Conv2D) (None, 64, 64, 256) 16640 activation_3[0][0]
__________________________________________________________________________________________________
res2a_branch1 (Conv2D) (None, 64, 64, 256) 16640 max_pooling2d_1[0][0]
__________________________________________________________________________________________________
bn2a_branch2c (BatchNormalizati (None, 64, 64, 256) 1024 res2a_branch2c[0][0]
__________________________________________________________________________________________________
bn2a_branch1 (BatchNormalizatio (None, 64, 64, 256) 1024 res2a_branch1[0][0]
__________________________________________________________________________________________________
add_1 (Add) (None, 64, 64, 256) 0 bn2a_branch2c[0][0]
bn2a_branch1[0][0]
__________________________________________________________________________________________________
activation_4 (Activation) (None, 64, 64, 256) 0 add_1[0][0]
__________________________________________________________________________________________________
res2b_branch2a (Conv2D) (None, 64, 64, 64) 16448 activation_4[0][0]
__________________________________________________________________________________________________
bn2b_branch2a (BatchNormalizati (None, 64, 64, 64) 256 res2b_branch2a[0][0]
__________________________________________________________________________________________________
activation_5 (Activation) (None, 64, 64, 64) 0 bn2b_branch2a[0][0]
__________________________________________________________________________________________________
res2b_branch2b (Conv2D) (None, 64, 64, 64) 36928 activation_5[0][0]
__________________________________________________________________________________________________
bn2b_branch2b (BatchNormalizati (None, 64, 64, 64) 256 res2b_branch2b[0][0]
__________________________________________________________________________________________________
activation_6 (Activation) (None, 64, 64, 64) 0 bn2b_branch2b[0][0]
__________________________________________________________________________________________________
res2b_branch2c (Conv2D) (None, 64, 64, 256) 16640 activation_6[0][0]
__________________________________________________________________________________________________
bn2b_branch2c (BatchNormalizati (None, 64, 64, 256) 1024 res2b_branch2c[0][0]
__________________________________________________________________________________________________
add_2 (Add) (None, 64, 64, 256) 0 bn2b_branch2c[0][0]
activation_4[0][0]
__________________________________________________________________________________________________
activation_7 (Activation) (None, 64, 64, 256) 0 add_2[0][0]
__________________________________________________________________________________________________
res2c_branch2a (Conv2D) (None, 64, 64, 64) 16448 activation_7[0][0]
__________________________________________________________________________________________________
bn2c_branch2a (BatchNormalizati (None, 64, 64, 64) 256 res2c_branch2a[0][0]
__________________________________________________________________________________________________
activation_8 (Activation) (None, 64, 64, 64) 0 bn2c_branch2a[0][0]
__________________________________________________________________________________________________
res2c_branch2b (Conv2D) (None, 64, 64, 64) 36928 activation_8[0][0]
__________________________________________________________________________________________________
bn2c_branch2b (BatchNormalizati (None, 64, 64, 64) 256 res2c_branch2b[0][0]
__________________________________________________________________________________________________
activation_9 (Activation) (None, 64, 64, 64) 0 bn2c_branch2b[0][0]
__________________________________________________________________________________________________
res2c_branch2c (Conv2D) (None, 64, 64, 256) 16640 activation_9[0][0]
__________________________________________________________________________________________________
bn2c_branch2c (BatchNormalizati (None, 64, 64, 256) 1024 res2c_branch2c[0][0]
__________________________________________________________________________________________________
add_3 (Add) (None, 64, 64, 256) 0 bn2c_branch2c[0][0]
activation_7[0][0]
__________________________________________________________________________________________________
activation_10 (Activation) (None, 64, 64, 256) 0 add_3[0][0]
__________________________________________________________________________________________________
res3a_branch2a (Conv2D) (None, 32, 32, 128) 32896 activation_10[0][0]
__________________________________________________________________________________________________
bn3a_branch2a (BatchNormalizati (None, 32, 32, 128) 512 res3a_branch2a[0][0]
__________________________________________________________________________________________________
activation_11 (Activation) (None, 32, 32, 128) 0 bn3a_branch2a[0][0]
__________________________________________________________________________________________________
res3a_branch2b (Conv2D) (None, 32, 32, 128) 147584 activation_11[0][0]
__________________________________________________________________________________________________
bn3a_branch2b (BatchNormalizati (None, 32, 32, 128) 512 res3a_branch2b[0][0]
__________________________________________________________________________________________________
activation_12 (Activation) (None, 32, 32, 128) 0 bn3a_branch2b[0][0]
__________________________________________________________________________________________________
res3a_branch2c (Conv2D) (None, 32, 32, 512) 66048 activation_12[0][0]
__________________________________________________________________________________________________
res3a_branch1 (Conv2D) (None, 32, 32, 512) 131584 activation_10[0][0]
__________________________________________________________________________________________________
bn3a_branch2c (BatchNormalizati (None, 32, 32, 512) 2048 res3a_branch2c[0][0]
__________________________________________________________________________________________________
bn3a_branch1 (BatchNormalizatio (None, 32, 32, 512) 2048 res3a_branch1[0][0]
__________________________________________________________________________________________________
add_4 (Add) (None, 32, 32, 512) 0 bn3a_branch2c[0][0]
bn3a_branch1[0][0]
__________________________________________________________________________________________________
activation_13 (Activation) (None, 32, 32, 512) 0 add_4[0][0]
__________________________________________________________________________________________________
res3b_branch2a (Conv2D) (None, 32, 32, 128) 65664 activation_13[0][0]
__________________________________________________________________________________________________
bn3b_branch2a (BatchNormalizati (None, 32, 32, 128) 512 res3b_branch2a[0][0]
__________________________________________________________________________________________________
activation_14 (Activation) (None, 32, 32, 128) 0 bn3b_branch2a[0][0]
__________________________________________________________________________________________________
res3b_branch2b (Conv2D) (None, 32, 32, 128) 147584 activation_14[0][0]
__________________________________________________________________________________________________
bn3b_branch2b (BatchNormalizati (None, 32, 32, 128) 512 res3b_branch2b[0][0]
__________________________________________________________________________________________________
activation_15 (Activation) (None, 32, 32, 128) 0 bn3b_branch2b[0][0]
__________________________________________________________________________________________________
res3b_branch2c (Conv2D) (None, 32, 32, 512) 66048 activation_15[0][0]
__________________________________________________________________________________________________
bn3b_branch2c (BatchNormalizati (None, 32, 32, 512) 2048 res3b_branch2c[0][0]
__________________________________________________________________________________________________
add_5 (Add) (None, 32, 32, 512) 0 bn3b_branch2c[0][0]
activation_13[0][0]
__________________________________________________________________________________________________
activation_16 (Activation) (None, 32, 32, 512) 0 add_5[0][0]
__________________________________________________________________________________________________
res3c_branch2a (Conv2D) (None, 32, 32, 128) 65664 activation_16[0][0]
__________________________________________________________________________________________________
bn3c_branch2a (BatchNormalizati (None, 32, 32, 128) 512 res3c_branch2a[0][0]
__________________________________________________________________________________________________
activation_17 (Activation) (None, 32, 32, 128) 0 bn3c_branch2a[0][0]
__________________________________________________________________________________________________
res3c_branch2b (Conv2D) (None, 32, 32, 128) 147584 activation_17[0][0]
__________________________________________________________________________________________________
bn3c_branch2b (BatchNormalizati (None, 32, 32, 128) 512 res3c_branch2b[0][0]
__________________________________________________________________________________________________
activation_18 (Activation) (None, 32, 32, 128) 0 bn3c_branch2b[0][0]
__________________________________________________________________________________________________
res3c_branch2c (Conv2D) (None, 32, 32, 512) 66048 activation_18[0][0]
__________________________________________________________________________________________________
bn3c_branch2c (BatchNormalizati (None, 32, 32, 512) 2048 res3c_branch2c[0][0]
__________________________________________________________________________________________________
add_6 (Add) (None, 32, 32, 512) 0 bn3c_branch2c[0][0]
activation_16[0][0]
__________________________________________________________________________________________________
activation_19 (Activation) (None, 32, 32, 512) 0 add_6[0][0]
__________________________________________________________________________________________________
res3d_branch2a (Conv2D) (None, 32, 32, 128) 65664 activation_19[0][0]
__________________________________________________________________________________________________
bn3d_branch2a (BatchNormalizati (None, 32, 32, 128) 512 res3d_branch2a[0][0]
__________________________________________________________________________________________________
activation_20 (Activation) (None, 32, 32, 128) 0 bn3d_branch2a[0][0]
__________________________________________________________________________________________________
res3d_branch2b (Conv2D) (None, 32, 32, 128) 147584 activation_20[0][0]
__________________________________________________________________________________________________
bn3d_branch2b (BatchNormalizati (None, 32, 32, 128) 512 res3d_branch2b[0][0]
__________________________________________________________________________________________________
activation_21 (Activation) (None, 32, 32, 128) 0 bn3d_branch2b[0][0]
__________________________________________________________________________________________________
res3d_branch2c (Conv2D) (None, 32, 32, 512) 66048 activation_21[0][0]
__________________________________________________________________________________________________
bn3d_branch2c (BatchNormalizati (None, 32, 32, 512) 2048 res3d_branch2c[0][0]
__________________________________________________________________________________________________
add_7 (Add) (None, 32, 32, 512) 0 bn3d_branch2c[0][0]
activation_19[0][0]
__________________________________________________________________________________________________
activation_22 (Activation) (None, 32, 32, 512) 0 add_7[0][0]
__________________________________________________________________________________________________
res4a_branch2a (Conv2D) (None, 16, 16, 256) 131328 activation_22[0][0]
__________________________________________________________________________________________________
bn4a_branch2a (BatchNormalizati (None, 16, 16, 256) 1024 res4a_branch2a[0][0]
__________________________________________________________________________________________________
activation_23 (Activation) (None, 16, 16, 256) 0 bn4a_branch2a[0][0]
__________________________________________________________________________________________________
res4a_branch2b (Conv2D) (None, 16, 16, 256) 590080 activation_23[0][0]
__________________________________________________________________________________________________
bn4a_branch2b (BatchNormalizati (None, 16, 16, 256) 1024 res4a_branch2b[0][0]
__________________________________________________________________________________________________
activation_24 (Activation) (None, 16, 16, 256) 0 bn4a_branch2b[0][0]
__________________________________________________________________________________________________
res4a_branch2c (Conv2D) (None, 16, 16, 1024) 263168 activation_24[0][0]
__________________________________________________________________________________________________
res4a_branch1 (Conv2D) (None, 16, 16, 1024) 525312 activation_22[0][0]
__________________________________________________________________________________________________
bn4a_branch2c (BatchNormalizati (None, 16, 16, 1024) 4096 res4a_branch2c[0][0]
__________________________________________________________________________________________________
bn4a_branch1 (BatchNormalizatio (None, 16, 16, 1024) 4096 res4a_branch1[0][0]
__________________________________________________________________________________________________
add_8 (Add) (None, 16, 16, 1024) 0 bn4a_branch2c[0][0]
bn4a_branch1[0][0]
__________________________________________________________________________________________________
activation_25 (Activation) (None, 16, 16, 1024) 0 add_8[0][0]
__________________________________________________________________________________________________
res4b_branch2a (Conv2D) (None, 16, 16, 256) 262400 activation_25[0][0]
__________________________________________________________________________________________________
bn4b_branch2a (BatchNormalizati (None, 16, 16, 256) 1024 res4b_branch2a[0][0]
__________________________________________________________________________________________________
activation_26 (Activation) (None, 16, 16, 256) 0 bn4b_branch2a[0][0]
__________________________________________________________________________________________________
res4b_branch2b (Conv2D) (None, 16, 16, 256) 590080 activation_26[0][0]
__________________________________________________________________________________________________
bn4b_branch2b (BatchNormalizati (None, 16, 16, 256) 1024 res4b_branch2b[0][0]
__________________________________________________________________________________________________
activation_27 (Activation) (None, 16, 16, 256) 0 bn4b_branch2b[0][0]
__________________________________________________________________________________________________
res4b_branch2c (Conv2D) (None, 16, 16, 1024) 263168 activation_27[0][0]
__________________________________________________________________________________________________
bn4b_branch2c (BatchNormalizati (None, 16, 16, 1024) 4096 res4b_branch2c[0][0]
__________________________________________________________________________________________________
add_9 (Add) (None, 16, 16, 1024) 0 bn4b_branch2c[0][0]
activation_25[0][0]
__________________________________________________________________________________________________
activation_28 (Activation) (None, 16, 16, 1024) 0 add_9[0][0]
__________________________________________________________________________________________________
res4c_branch2a (Conv2D) (None, 16, 16, 256) 262400 activation_28[0][0]
__________________________________________________________________________________________________
bn4c_branch2a (BatchNormalizati (None, 16, 16, 256) 1024 res4c_branch2a[0][0]
__________________________________________________________________________________________________
activation_29 (Activation) (None, 16, 16, 256) 0 bn4c_branch2a[0][0]
__________________________________________________________________________________________________
res4c_branch2b (Conv2D) (None, 16, 16, 256) 590080 activation_29[0][0]
__________________________________________________________________________________________________
bn4c_branch2b (BatchNormalizati (None, 16, 16, 256) 1024 res4c_branch2b[0][0]
__________________________________________________________________________________________________
activation_30 (Activation) (None, 16, 16, 256) 0 bn4c_branch2b[0][0]
__________________________________________________________________________________________________
res4c_branch2c (Conv2D) (None, 16, 16, 1024) 263168 activation_30[0][0]
__________________________________________________________________________________________________
bn4c_branch2c (BatchNormalizati (None, 16, 16, 1024) 4096 res4c_branch2c[0][0]
__________________________________________________________________________________________________
add_10 (Add) (None, 16, 16, 1024) 0 bn4c_branch2c[0][0]
activation_28[0][0]
__________________________________________________________________________________________________
activation_31 (Activation) (None, 16, 16, 1024) 0 add_10[0][0]
__________________________________________________________________________________________________
res4d_branch2a (Conv2D) (None, 16, 16, 256) 262400 activation_31[0][0]
__________________________________________________________________________________________________
... omitted ...
Total params: 23,593,859
Trainable params: 23,540,739
Non-trainable params: 53,120
__________________________________________________________________________________________________
Those pretrained models contain BatchNormalization layers.
It's expected that they perform differently between train and test (this is also true for Dropout layers, but the differences would not be so drastic).
A BatchNormalization layer during training will use the mean and variance of the current batch to do normalization; it will also apply some statistical compensation for the fact that one batch may not be representative of the full dataset.
But during evaluation, the BatchNormalization layer will use the adjusted values for mean and variance that were gathered during training. (In this case, gathered during "pretraining", not your training.)
For BatchNormalization to work correctly, the inputs to the pretrained model need to be in the same range as the model's original training data. Otherwise you have to leave the BatchNormalization layers trainable so the mean and variance adjust to your data.
But your training needs significant batch sizes as well as real data to train properly.
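A minimal sketch of that option (assuming `model` is the pretrained sub-model from the snippets above; recompile afterwards for the change to take effect):
import tensorflow as tf

for layer in model.layers:
    # Keep only the BatchNormalization layers trainable so their
    # moving mean/variance can adapt to the new input distribution
    layer.trainable = isinstance(layer, tf.keras.layers.BatchNormalization)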
Hints for training images:
In the same module where you import the pretrained model, you can import the preprocess_input function. Give it some images loaded with keras.preprocessing.image.load_img and see what the model's expected input range is.
When using ImageDataGenerator, you can pass this preprocess_input function so the generator gives you data in the expected range.
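A minimal sketch of that wiring (assuming a ResNet50 backbone and a hypothetical data/train directory; swap the import to match the architecture actually used):
from tensorflow.keras.applications.resnet50 import preprocess_input
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# The generator applies preprocess_input to every batch it yields,
# so the images arrive in the range the pretrained backbone expects
datagen = ImageDataGenerator(preprocessing_function=preprocess_input)
train_gen = datagen.flow_from_directory('data/train',
                                        target_size=(256, 256),
                                        batch_size=32)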

My RCNN model is too big when I save weights, how do I make it smaller?

My RCNN model is too big, nearly 1 GB, when I call save_weights(). I want to reduce its size.
I use a loop to imitate a simple RNN, but the inputs are different. I also need all the steps stacked in the output so I can calculate the total loss for every step. I tried to rewrite it with TimeDistributed layers, but I didn't succeed. Do you have any suggestions?
x_input = tf.keras.layers.Input((shape[1], shape[2], const.num_channels), name='x_input')
y_init = tf.keras.layers.Input((const.num_patches, 2), name='y_init')
dxs = []
for i in range(const.num_iters_rnn):
    if i == 0:
        patches = tf.keras.layers.Lambda(extract_patches)([x_input, y_init])
    else:
        patches = tf.keras.layers.Lambda(extract_patches)([x_input, dxs[i-1]])
    conv2d1 = tf.keras.layers.Conv2D(32, (3, 3), padding='same', activation='relu')(patches)
    maxpool1 = tf.keras.layers.MaxPooling2D()(conv2d1)
    conv2d2 = tf.keras.layers.Conv2D(32, (3, 3), padding='same', activation='relu')(maxpool1)
    maxpool2 = tf.keras.layers.MaxPooling2D()(conv2d2)
    crop = tf.keras.layers.Cropping2D(cropping=(const.crop_size, const.crop_size))(conv2d2)
    cnn = tf.keras.layers.concatenate([crop, maxpool2])
    cnn = tf.keras.layers.Lambda(reshape)(cnn)
    if i == 0:
        hidden_state = tf.keras.layers.Dense(const.numNeurons, activation='tanh')(cnn)
    else:
        concat = tf.keras.layers.concatenate([cnn, hidden_state], axis=1)
        hidden_state = tf.keras.layers.Dense(const.numNeurons, activation='tanh')(concat)
    hidden_state = tf.keras.layers.BatchNormalization()(hidden_state)
    prediction = tf.keras.layers.Dense(const.num_patches * 2, activation=None)(hidden_state)
    prediction = tf.keras.layers.Dropout(0.5)(prediction)
    prediction_reshape = tf.keras.layers.Reshape((const.num_patches, 2))(prediction)
    if i == 0:
        prediction = tf.keras.layers.Add()([prediction_reshape, y_init])
    else:
        prediction = tf.keras.layers.Add()([prediction_reshape, dxs[i-1]])
    dxs.append(prediction)
output = tf.keras.layers.Lambda(stack)(dxs)
model = tf.keras.models.Model(inputs=[x_input, y_init], outputs=[output])
def extract_patches(inputs):
    list_patches = []
    for j in range(const.num_patches):
        patch_one = tf.image.extract_glimpse(inputs[0], [const.size_patch[0], const.size_patch[1]],
                                             inputs[1][:, j, :], centered=False, normalized=False, noise='zero')
        list_patches.append(patch_one)
    patches = tf.keras.backend.stack(list_patches, 1)
    return tf.keras.backend.reshape(patches, (-1, patches.shape[2], patches.shape[3], patches.shape[4]))

def reshape(inputs):
    return tf.keras.backend.reshape(inputs, (-1, const.num_patches * inputs.shape[1] * inputs.shape[2] * inputs.shape[3]))

def stack(inputs):
    return tf.keras.backend.stack(inputs)
Model: "model"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
x_input (InputLayer) [(None, 255, 235, 1) 0
__________________________________________________________________________________________________
y_init (InputLayer) [(None, 52, 2)] 0
__________________________________________________________________________________________________
lambda (Lambda) (None, 26, 26, 1) 0 x_input[0][0]
y_init[0][0]
__________________________________________________________________________________________________
conv2d (Conv2D) (None, 26, 26, 32) 320 lambda[0][0]
__________________________________________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 13, 13, 32) 0 conv2d[0][0]
__________________________________________________________________________________________________
conv2d_1 (Conv2D) (None, 13, 13, 32) 9248 max_pooling2d[0][0]
__________________________________________________________________________________________________
cropping2d (Cropping2D) (None, 6, 6, 32) 0 conv2d_1[0][0]
__________________________________________________________________________________________________
max_pooling2d_1 (MaxPooling2D) (None, 6, 6, 32) 0 conv2d_1[0][0]
__________________________________________________________________________________________________
concatenate (Concatenate) (None, 6, 6, 64) 0 cropping2d[0][0]
max_pooling2d_1[0][0]
__________________________________________________________________________________________________
lambda_1 (Lambda) (None, 119808) 0 concatenate[0][0]
__________________________________________________________________________________________________
dense (Dense) (None, 512) 61342208 lambda_1[0][0]
__________________________________________________________________________________________________
batch_normalization (BatchNorma (None, 512) 2048 dense[0][0]
__________________________________________________________________________________________________
dense_1 (Dense) (None, 104) 53352 batch_normalization[0][0]
__________________________________________________________________________________________________
dropout (Dropout) (None, 104) 0 dense_1[0][0]
__________________________________________________________________________________________________
reshape (Reshape) (None, 52, 2) 0 dropout[0][0]
__________________________________________________________________________________________________
add (Add) (None, 52, 2) 0 reshape[0][0]
y_init[0][0]
__________________________________________________________________________________________________
lambda_2 (Lambda) (None, 26, 26, 1) 0 x_input[0][0]
add[0][0]
__________________________________________________________________________________________________
conv2d_2 (Conv2D) (None, 26, 26, 32) 320 lambda_2[0][0]
__________________________________________________________________________________________________
max_pooling2d_2 (MaxPooling2D) (None, 13, 13, 32) 0 conv2d_2[0][0]
__________________________________________________________________________________________________
conv2d_3 (Conv2D) (None, 13, 13, 32) 9248 max_pooling2d_2[0][0]
__________________________________________________________________________________________________
cropping2d_1 (Cropping2D) (None, 6, 6, 32) 0 conv2d_3[0][0]
__________________________________________________________________________________________________
max_pooling2d_3 (MaxPooling2D) (None, 6, 6, 32) 0 conv2d_3[0][0]
__________________________________________________________________________________________________
concatenate_1 (Concatenate) (None, 6, 6, 64) 0 cropping2d_1[0][0]
max_pooling2d_3[0][0]
__________________________________________________________________________________________________
lambda_3 (Lambda) (None, 119808) 0 concatenate_1[0][0]
__________________________________________________________________________________________________
concatenate_2 (Concatenate) (None, 120320) 0 lambda_3[0][0]
batch_normalization[0][0]
__________________________________________________________________________________________________
dense_2 (Dense) (None, 512) 61604352 concatenate_2[0][0]
__________________________________________________________________________________________________
batch_normalization_1 (BatchNor (None, 512) 2048 dense_2[0][0]
__________________________________________________________________________________________________
dense_3 (Dense) (None, 104) 53352 batch_normalization_1[0][0]
__________________________________________________________________________________________________
dropout_1 (Dropout) (None, 104) 0 dense_3[0][0]
__________________________________________________________________________________________________
reshape_1 (Reshape) (None, 52, 2) 0 dropout_1[0][0]
__________________________________________________________________________________________________
add_1 (Add) (None, 52, 2) 0 reshape_1[0][0]
add[0][0]
__________________________________________________________________________________________________
lambda_4 (Lambda) (None, 26, 26, 1) 0 x_input[0][0]
add_1[0][0]
__________________________________________________________________________________________________
conv2d_4 (Conv2D) (None, 26, 26, 32) 320 lambda_4[0][0]
__________________________________________________________________________________________________
max_pooling2d_4 (MaxPooling2D) (None, 13, 13, 32) 0 conv2d_4[0][0]
__________________________________________________________________________________________________
conv2d_5 (Conv2D) (None, 13, 13, 32) 9248 max_pooling2d_4[0][0]
__________________________________________________________________________________________________
cropping2d_2 (Cropping2D) (None, 6, 6, 32) 0 conv2d_5[0][0]
__________________________________________________________________________________________________
max_pooling2d_5 (MaxPooling2D) (None, 6, 6, 32) 0 conv2d_5[0][0]
__________________________________________________________________________________________________
concatenate_3 (Concatenate) (None, 6, 6, 64) 0 cropping2d_2[0][0]
max_pooling2d_5[0][0]
__________________________________________________________________________________________________
lambda_5 (Lambda) (None, 119808) 0 concatenate_3[0][0]
__________________________________________________________________________________________________
concatenate_4 (Concatenate) (None, 120320) 0 lambda_5[0][0]
batch_normalization_1[0][0]
__________________________________________________________________________________________________
dense_4 (Dense) (None, 512) 61604352 concatenate_4[0][0]
__________________________________________________________________________________________________
batch_normalization_2 (BatchNor (None, 512) 2048 dense_4[0][0]
__________________________________________________________________________________________________
dense_5 (Dense) (None, 104) 53352 batch_normalization_2[0][0]
__________________________________________________________________________________________________
dropout_2 (Dropout) (None, 104) 0 dense_5[0][0]
__________________________________________________________________________________________________
reshape_2 (Reshape) (None, 52, 2) 0 dropout_2[0][0]
__________________________________________________________________________________________________
add_2 (Add) (None, 52, 2) 0 reshape_2[0][0]
add_1[0][0]
__________________________________________________________________________________________________
lambda_6 (Lambda) (None, 26, 26, 1) 0 x_input[0][0]
add_2[0][0]
__________________________________________________________________________________________________
conv2d_6 (Conv2D) (None, 26, 26, 32) 320 lambda_6[0][0]
__________________________________________________________________________________________________
max_pooling2d_6 (MaxPooling2D) (None, 13, 13, 32) 0 conv2d_6[0][0]
__________________________________________________________________________________________________
conv2d_7 (Conv2D) (None, 13, 13, 32) 9248 max_pooling2d_6[0][0]
__________________________________________________________________________________________________
cropping2d_3 (Cropping2D) (None, 6, 6, 32) 0 conv2d_7[0][0]
__________________________________________________________________________________________________
max_pooling2d_7 (MaxPooling2D) (None, 6, 6, 32) 0 conv2d_7[0][0]
__________________________________________________________________________________________________
concatenate_5 (Concatenate) (None, 6, 6, 64) 0 cropping2d_3[0][0]
max_pooling2d_7[0][0]
__________________________________________________________________________________________________
lambda_7 (Lambda) (None, 119808) 0 concatenate_5[0][0]
__________________________________________________________________________________________________
concatenate_6 (Concatenate) (None, 120320) 0 lambda_7[0][0]
batch_normalization_2[0][0]
__________________________________________________________________________________________________
dense_6 (Dense) (None, 512) 61604352 concatenate_6[0][0]
__________________________________________________________________________________________________
batch_normalization_3 (BatchNor (None, 512) 2048 dense_6[0][0]
__________________________________________________________________________________________________
dense_7 (Dense) (None, 104) 53352 batch_normalization_3[0][0]
__________________________________________________________________________________________________
dropout_3 (Dropout) (None, 104) 0 dense_7[0][0]
__________________________________________________________________________________________________
reshape_3 (Reshape) (None, 52, 2) 0 dropout_3[0][0]
__________________________________________________________________________________________________
add_3 (Add) (None, 52, 2) 0 reshape_3[0][0]
add_2[0][0]
__________________________________________________________________________________________________
lambda_8 (Lambda) (4, None, 52, 2) 0 add[0][0]
add_1[0][0]
add_2[0][0]
add_3[0][0]
==================================================================================================
Total params: 246,415,136
Trainable params: 246,411,040
Non-trainable params: 4,096
You need to decrease your model size: with about 246 million float32 parameters, roughly 1 GB of weights is expected (246M × 4 bytes ≈ 0.98 GB). There are techniques that shrink a network without reducing final accuracy, and in some cases accuracy even improves; look into neural network pruning.
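A sketch of magnitude pruning with the TensorFlow Model Optimization toolkit (pip install tensorflow-model-optimization); model, x_train, y_train and the schedule constants below are placeholders, and pruning is applied only to the Dense layers, which hold almost all of the parameters here:
import tensorflow as tf
import tensorflow_model_optimization as tfmot

pruning_params = {
    'pruning_schedule': tfmot.sparsity.keras.PolynomialDecay(
        initial_sparsity=0.0, final_sparsity=0.8,
        begin_step=0, end_step=1000),
}

def prune_dense(layer):
    # Wrap only Dense layers; Lambda and other custom layers are left as-is.
    if isinstance(layer, tf.keras.layers.Dense):
        return tfmot.sparsity.keras.prune_low_magnitude(layer, **pruning_params)
    return layer

pruned = tf.keras.models.clone_model(model, clone_function=prune_dense)
pruned.compile(optimizer='adam', loss='mse')
pruned.fit(x_train, y_train, epochs=2,
           callbacks=[tfmot.sparsity.keras.UpdatePruningStep()])

# Strip the pruning wrappers before saving the weights.
final = tfmot.sparsity.keras.strip_pruning(pruned)
final.save_weights('pruned_weights.h5')
Note that the saved file only actually gets smaller after compression (e.g. gzip), since the zeroed weights are still stored densely in HDF5.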

Training and Validation Loss saturates at a value and does not decrease any further

I am trying to build a network for detecting 68 landmarks (x, y) on faces. The training and validation images are 320x320x3, normalized between -0.5 and 0.5. My labels are 136 outputs, each between 0 and 1.0, corresponding to X -> (0, 320) and Y -> (0, 320). The loss function is a Keras "root_mean_square". The training dataset has about 5k images. While training, my training and validation loss starts at about 6.0 and decreases to about 0.0022 within 100 iterations, but then saturates at that level and does not go any lower. I have tried up to 2000 iterations. Looking at the output, it seems the network learns to output 68 points in the shape of a face at the center of the frame, irrespective of where the face really is.
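(Keras ships no built-in loss named "root_mean_square"; a custom RMSE along these lines is presumably what is meant:)
import tensorflow.keras.backend as K

def root_mean_square(y_true, y_pred):
    # Root of the mean squared error over the 136 outputs.
    return K.sqrt(K.mean(K.square(y_pred - y_true), axis=-1))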
I am using a generator to fetch the data and sklearn.utils.shuffle() to make sure my data is shuffled properly.
Some posts suggested that the network could be overfitting because it is too complex for such a simple problem, so I have tried both a very simple network of about 10 layers and a complex one of about 20 layers, and the result is still the same. My current network is shown below; I have used 2 skip connections, 3 dropouts, and an l2 regularizer to make sure that it does not overfit. Underfitting should not be an issue either, because I have trained the network for up to 2000 iterations.
Any suggestions on how to resolve this issue are greatly appreciated. Thanks!
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) (None, 320, 320, 3) 0
__________________________________________________________________________________________________
conv2d_1 (Conv2D) (None, 320, 320, 3) 228 input_1[0][0]
__________________________________________________________________________________________________
batch_normalization_1 (BatchNor (None, 320, 320, 3) 12 conv2d_1[0][0]
__________________________________________________________________________________________________
activation_1 (Activation) (None, 320, 320, 3) 0 batch_normalization_1[0][0]
__________________________________________________________________________________________________
max_pooling2d_1 (MaxPooling2D) (None, 160, 160, 3) 0 activation_1[0][0]
__________________________________________________________________________________________________
conv2d_2 (Conv2D) (None, 160, 160, 8) 608 max_pooling2d_1[0][0]
__________________________________________________________________________________________________
batch_normalization_2 (BatchNor (None, 160, 160, 8) 32 conv2d_2[0][0]
__________________________________________________________________________________________________
activation_2 (Activation) (None, 160, 160, 8) 0 batch_normalization_2[0][0]
__________________________________________________________________________________________________
max_pooling2d_2 (MaxPooling2D) (None, 80, 80, 8) 0 activation_2[0][0]
__________________________________________________________________________________________________
conv2d_3 (Conv2D) (None, 80, 80, 16) 1168 max_pooling2d_2[0][0]
__________________________________________________________________________________________________
conv2d_4 (Conv2D) (None, 80, 80, 16) 2320 conv2d_3[0][0]
__________________________________________________________________________________________________
batch_normalization_3 (BatchNor (None, 80, 80, 16) 64 conv2d_4[0][0]
__________________________________________________________________________________________________
activation_3 (Activation) (None, 80, 80, 16) 0 batch_normalization_3[0][0]
__________________________________________________________________________________________________
max_pooling2d_4 (MaxPooling2D) (None, 40, 40, 16) 0 activation_3[0][0]
__________________________________________________________________________________________________
conv2d_5 (Conv2D) (None, 40, 40, 32) 4640 max_pooling2d_4[0][0]
__________________________________________________________________________________________________
conv2d_6 (Conv2D) (None, 40, 40, 32) 9248 conv2d_5[0][0]
__________________________________________________________________________________________________
conv2d_7 (Conv2D) (None, 40, 40, 32) 9248 conv2d_6[0][0]
__________________________________________________________________________________________________
batch_normalization_4 (BatchNor (None, 40, 40, 32) 128 conv2d_7[0][0]
__________________________________________________________________________________________________
activation_4 (Activation) (None, 40, 40, 32) 0 batch_normalization_4[0][0]
__________________________________________________________________________________________________
max_pooling2d_6 (MaxPooling2D) (None, 20, 20, 32) 0 activation_4[0][0]
__________________________________________________________________________________________________
dropout_1 (Dropout) (None, 20, 20, 32) 0 max_pooling2d_6[0][0]
__________________________________________________________________________________________________
conv2d_8 (Conv2D) (None, 20, 20, 64) 18496 dropout_1[0][0]
__________________________________________________________________________________________________
conv2d_9 (Conv2D) (None, 20, 20, 64) 36928 conv2d_8[0][0]
__________________________________________________________________________________________________
max_pooling2d_3 (MaxPooling2D) (None, 20, 20, 16) 0 conv2d_4[0][0]
__________________________________________________________________________________________________
conv2d_10 (Conv2D) (None, 20, 20, 64) 36928 conv2d_9[0][0]
__________________________________________________________________________________________________
concatenate_1 (Concatenate) (None, 20, 20, 80) 0 max_pooling2d_3[0][0]
conv2d_10[0][0]
__________________________________________________________________________________________________
batch_normalization_5 (BatchNor (None, 20, 20, 80) 320 concatenate_1[0][0]
__________________________________________________________________________________________________
activation_5 (Activation) (None, 20, 20, 80) 0 batch_normalization_5[0][0]
__________________________________________________________________________________________________
max_pooling2d_7 (MaxPooling2D) (None, 10, 10, 80) 0 activation_5[0][0]
__________________________________________________________________________________________________
dropout_2 (Dropout) (None, 10, 10, 80) 0 max_pooling2d_7[0][0]
__________________________________________________________________________________________________
conv2d_11 (Conv2D) (None, 10, 10, 128) 92288 dropout_2[0][0]
__________________________________________________________________________________________________
conv2d_12 (Conv2D) (None, 10, 10, 128) 147584 conv2d_11[0][0]
__________________________________________________________________________________________________
max_pooling2d_5 (MaxPooling2D) (None, 10, 10, 32) 0 conv2d_7[0][0]
__________________________________________________________________________________________________
conv2d_13 (Conv2D) (None, 10, 10, 128) 147584 conv2d_12[0][0]
__________________________________________________________________________________________________
concatenate_2 (Concatenate) (None, 10, 10, 160) 0 max_pooling2d_5[0][0]
conv2d_13[0][0]
__________________________________________________________________________________________________
batch_normalization_6 (BatchNor (None, 10, 10, 160) 640 concatenate_2[0][0]
__________________________________________________________________________________________________
activation_6 (Activation) (None, 10, 10, 160) 0 batch_normalization_6[0][0]
__________________________________________________________________________________________________
max_pooling2d_8 (MaxPooling2D) (None, 5, 5, 160) 0 activation_6[0][0]
__________________________________________________________________________________________________
dropout_3 (Dropout) (None, 5, 5, 160) 0 max_pooling2d_8[0][0]
__________________________________________________________________________________________________
flatten_2 (Flatten) (None, 4000) 0 dropout_3[0][0]
__________________________________________________________________________________________________
dense_3 (Dense) (None, 1024) 4097024 flatten_2[0][0]
__________________________________________________________________________________________________
dense_4 (Dense) (None, 136) 139400 dense_3[0][0]
==================================================================================================
Total params: 4,744,888
Trainable params: 4,744,290
Non-trainable params: 598
__________________________________________________________________________________________________