I have this model:
model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(64, (3, 3), padding='same',
                           input_shape=(96, 96, 1),
                           kernel_regularizer=tf.keras.regularizers.l2(0.01)),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.PReLU(alpha_initializer='zeros'),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Conv2D(128, (5, 5), padding='same'),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.PReLU(alpha_initializer='zeros'),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Conv2D(512, (3, 3), padding='same'),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.PReLU(alpha_initializer='zeros'),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Conv2D(512, (3, 3), padding='same'),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.PReLU(alpha_initializer='zeros'),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1024),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.PReLU(alpha_initializer='zeros'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(1024),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.PReLU(alpha_initializer='zeros'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(7, activation='softmax')
])
# note: the lr argument is deprecated in newer TF releases; use learning_rate
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-7),
              loss="categorical_crossentropy",
              metrics=['accuracy'])
The model works and trained well, but when I try to predict I get a weird shape, as you can see in the image below.
Why do I have a None? My input shape is (96, 96, 1). How do I fix this?
There is nothing to fix. The None in the batch dimension means that dimension is variable-sized, which makes sense since the model can be run on any batch size.
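For example, the same trained model accepts any batch size at prediction time; a quick sketch with random inputs:

import numpy as np

# The leading None just means "any batch size". A single image still has to be
# passed as a batch of one, i.e. with shape (1, 96, 96, 1).
single = np.random.rand(1, 96, 96, 1).astype("float32")
batch = np.random.rand(16, 96, 96, 1).astype("float32")
print(model.predict(single).shape)  # (1, 7)
print(model.predict(batch).shape)   # (16, 7)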
I'm training a U-Net-type model with a minor variation in the architecture: an Atrous Spatial Pyramid Pooling (ASPP) layer at the bottleneck, after the encoder. I profiled the model during one forward pass and used TensorBoard's trace viewer to check which part of the model has the highest latency.
Profiler Tracer View with ASPP layer
This revealed a lot of idle GPU time during the ASPP computation. I double-checked by removing the ASPP layer and connecting the encoder directly to the decoder; in that experiment, the idle time disappeared.
Profiler Tracer View without ASPP layer
I understand that the second model is a bit smaller than the first.
This is how my model looks with the ASPP layer; to profile the model without it, I simply commented those ASPP layers out.
With ASPP
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.layers import Conv2D, MaxPooling2D
from tensorflow.keras.constraints import max_norm

def get_custom_deeplab(image_size: tuple, num_classes: int):
    """
    This model uses a vanilla CNN backbone. It also uses UpSampling2D in place of Conv2DTranspose.
    """
    input_layer = keras.Input(shape=(image_size[0], image_size[1], 3))
    # Encoder
    conv1 = Conv2D(128, (3, 3), activation='relu', kernel_initializer='lecun_uniform', kernel_constraint=max_norm(3), padding='same')(input_layer)
    pool1 = MaxPooling2D(pool_size=(2, 2))(conv1)
    conv2 = Conv2D(128, (3, 3), activation='relu', kernel_initializer='lecun_uniform', kernel_constraint=max_norm(3), padding='same')(pool1)
    conv2 = Conv2D(256, (3, 3), activation='relu', kernel_initializer='lecun_uniform', kernel_constraint=max_norm(3), padding='same')(conv2)
    pool2 = MaxPooling2D(pool_size=(2, 2))(conv2)
    conv3 = Conv2D(128, (1, 1), activation='relu', kernel_initializer='lecun_uniform', kernel_constraint=max_norm(3), padding='same')(pool2)
    conv3 = Conv2D(256, (3, 3), activation='relu', kernel_initializer='lecun_uniform', kernel_constraint=max_norm(3), padding='same')(conv3)
    pool3 = MaxPooling2D(pool_size=(2, 2))(conv3)
    conv4 = Conv2D(128, (1, 1), activation='relu', kernel_initializer='lecun_uniform', kernel_constraint=max_norm(3), padding='same')(pool3)
    conv4 = Conv2D(256, (3, 3), activation='relu', kernel_initializer='lecun_uniform', kernel_constraint=max_norm(3), padding='same')(conv4)
    pool4 = MaxPooling2D(pool_size=(2, 2))(conv4)

    # ASPP layers: four parallel branches with dilation rates 1, 6, 10, 14
    out_1 = Conv2D(256, (1, 1), activation='relu', kernel_initializer='lecun_uniform', kernel_constraint=max_norm(3), dilation_rate=1, padding='same')(pool4)
    out_6 = Conv2D(256, (3, 3), activation='relu', kernel_initializer='lecun_uniform', kernel_constraint=max_norm(3), dilation_rate=6, padding='same')(pool4)
    out_12 = Conv2D(256, (3, 3), activation='relu', kernel_initializer='lecun_uniform', kernel_constraint=max_norm(3), dilation_rate=10, padding='same')(pool4)
    out_14 = Conv2D(256, (3, 3), activation='relu', kernel_initializer='lecun_uniform', kernel_constraint=max_norm(3), dilation_rate=14, padding='same')(pool4)
    x = layers.Concatenate(axis=-1)([out_1, out_6, out_12, out_14])

    # ASPP's output, followed by the decoder with skip connections
    x = Conv2D(256, (1, 1), activation='relu', kernel_initializer='lecun_uniform', kernel_constraint=max_norm(3), dilation_rate=1, padding='same')(x)
    x = layers.UpSampling2D((2, 2), interpolation="bilinear")(x)
    skip_connection_1 = pool3
    x = layers.Concatenate(axis=-1)([x, skip_connection_1])
    x = Conv2D(128, (1, 1), activation='relu', kernel_initializer='lecun_uniform', kernel_constraint=max_norm(3), padding='same')(x)
    x = Conv2D(256, (3, 3), activation='relu', kernel_initializer='lecun_uniform', kernel_constraint=max_norm(3), padding='same')(x)
    x = layers.UpSampling2D((2, 2), interpolation="bilinear")(x)
    skip_connection_2 = pool2
    x = layers.Concatenate(axis=-1)([x, skip_connection_2])
    x = Conv2D(128, (1, 1), activation='relu', kernel_initializer='lecun_uniform', kernel_constraint=max_norm(3), padding='same')(x)
    x = Conv2D(256, (3, 3), activation='relu', kernel_initializer='lecun_uniform', kernel_constraint=max_norm(3), padding='same')(x)
    x = layers.UpSampling2D((2, 2), interpolation="bilinear")(x)
    x = Conv2D(64, (3, 3), activation='relu', kernel_initializer='lecun_uniform', kernel_constraint=max_norm(3), padding='same')(x)
    x = layers.UpSampling2D((2, 2), interpolation="bilinear")(x)
    x = Conv2D(
        num_classes,
        kernel_size=1,
        padding="same",
        use_bias=True,
        kernel_initializer=keras.initializers.HeNormal(),
    )(x)
    return tf.keras.Model(inputs=input_layer, outputs=x)
But I would like to know: is there any workaround to mitigate the GPU idle time when the model has layers like ASPP?
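One workaround worth trying (a sketch under the assumption, which I have not verified for this model, that the idle gaps come from launching many small independent kernels for the parallel ASPP branches): enable XLA compilation, which can fuse the branches into fewer, larger GPU kernels.

import tensorflow as tf

# Sketch only: the image size and loss below are assumptions, not from the question.
model = get_custom_deeplab((256, 256), num_classes=4)
model.compile(
    optimizer="adam",
    # the model's last layer has no activation, so treat outputs as logits
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    jit_compile=True,  # ask recent TF 2.x releases to XLA-compile the train/predict steps
)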
I want to implement ConvLSTM2D with an attention mechanism.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import ConvLSTM2D, BatchNormalization, Conv2D

time_step = 5
row = 15
col = 20

seq = Sequential()
seq.add(ConvLSTM2D(filters=40, kernel_size=(2, 2),
                   input_shape=(time_step, row, col, 1),
                   padding='same', return_sequences=True))
seq.add(BatchNormalization())
seq.add(ConvLSTM2D(filters=40, kernel_size=(2, 2),
                   padding='same', return_sequences=True))
seq.add(BatchNormalization())
seq.add(ConvLSTM2D(filters=40, kernel_size=(2, 2),
                   padding='same', return_sequences=True))
seq.add(BatchNormalization())
seq.add(Attention(return_sequences=True))  # Attention here is a custom layer, not the Keras built-in
seq.add(ConvLSTM2D(filters=40, kernel_size=(2, 2),
                   padding='same'))
seq.add(BatchNormalization())
seq.add(Conv2D(filters=1, kernel_size=(3, 3),
               activation='sigmoid',
               padding='same', data_format='channels_last'))
seq.compile(loss='mse', optimizer='adamax')
but an error occurs every time:
Call arguments received by layer "attention" (type Attention):
• x=tf.Tensor(shape=(None, 5, 15, 20, 40), dtype=float32)
I don't know how to solve it. The little I know about the ConvLSTM2D cell is that the usual attention layers are not directly compatible with it.
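The error shows the layer receives a 5D tensor of shape (batch, time, rows, cols, filters), which standard attention layers do not expect. For reference, here is a minimal sketch of one possible workaround: a layer that attends over the time axis only. `TemporalAttention` is a hypothetical name I'm introducing, not a Keras built-in.

import tensorflow as tf

class TemporalAttention(tf.keras.layers.Layer):
    """Softmax attention over the time axis of a (batch, time, H, W, C) tensor."""
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.score_dense = tf.keras.layers.Dense(1)  # one scalar score per timestep

    def call(self, x):
        pooled = tf.reduce_mean(x, axis=[2, 3])    # (B, T, C): average over rows/cols
        scores = self.score_dense(pooled)          # (B, T, 1)
        weights = tf.nn.softmax(scores, axis=1)    # normalize over the time axis
        # Broadcast (B, T, 1) -> (B, T, 1, 1, 1) and reweight each timestep.
        return x * weights[:, :, tf.newaxis, tf.newaxis, :]

Because it preserves the (batch, time, rows, cols, filters) shape, it could slot in where `Attention(return_sequences=True)` was, e.g. `seq.add(TemporalAttention())`.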
I need some help identifying which architecture this CNN model belongs to, like LeNet, GoogLeNet, etc.
from tensorflow.keras import models, layers

input_shape = (BATCH_SIZE, IMAGE_SIZE, IMAGE_SIZE, CHANNELS)
n_classes = 10

model = models.Sequential([
    resize_and_rescale,  # preprocessing pipeline defined elsewhere
    layers.Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input_shape),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, kernel_size=(3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, kernel_size=(3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation='relu'),
    layers.Dense(n_classes, activation='softmax'),
])
model.build(input_shape=input_shape)
Model Summary image link is attached
I have jigsaw puzzle images and their corresponding pairs. I want to give an image as input to the model and find its corresponding pair. The model below achieves a poor accuracy of 30% during training, and when I pass the test-image array it predicts an array of all NaN values. Should I change my loss function? Please check the code below the image.
import tensorflow as tf
from tensorflow.keras import models, layers

in_shape = (32, 256, 256, 3)
model1 = models.Sequential(
    [
        resize_and_rescale,  # preprocessing pipeline defined elsewhere
        layers.Conv2D(32, (3, 3), activation="relu", input_shape=in_shape, padding='same'),
        layers.Dropout(0.1),
        layers.Conv2D(32, (3, 3), activation="relu", padding='same'),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, kernel_size=(3, 3), activation='relu', padding='same'),
        layers.Dropout(0.1),
        layers.Conv2D(64, (3, 3), activation="relu", padding='same'),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(128, kernel_size=(3, 3), activation='relu', padding='same'),
        layers.Dropout(0.1),
        layers.Conv2D(128, (3, 3), activation="relu", padding='same'),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(256, kernel_size=(3, 3), activation='relu', padding='same'),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2DTranspose(128, (2, 2), strides=(2, 2), padding='same'),
        layers.Conv2D(128, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same'),
        layers.Dropout(0.2),
        layers.Conv2D(128, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same'),
        layers.Conv2DTranspose(64, (2, 2), strides=(2, 2), padding='same'),
        layers.Conv2D(64, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same'),
        layers.Dropout(0.2),
        layers.Conv2D(64, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same'),
        layers.Conv2DTranspose(32, (2, 2), strides=(2, 2), padding='same'),
        layers.Conv2D(32, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same'),
        layers.Dropout(0.2),
        layers.Conv2D(32, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same'),
        layers.Conv2DTranspose(16, (2, 2), strides=(2, 2), padding='same'),
        layers.Conv2D(16, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same'),
        layers.Dropout(0.2),
        layers.Conv2D(16, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same'),
        layers.Conv2D(3, (1, 1), activation='sigmoid'),
    ]
)
model1.build(input_shape=in_shape)
model1.compile(
    optimizer='adam',
    loss=tf.keras.losses.CategoricalCrossentropy(from_logits=False),
    metrics=['accuracy']
)
If you're predicting pixel values [0, 255], then you'll want to change your last layer to:
layers.Conv2D(3, (1, 1), activation='linear')
A sigmoid activation function will try to force your outputs into the range [0, 1], whereas a linear activation will allow for regression to pixel values in the range [0, 255], assuming that's what you want.
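If you go that route, the loss should match the regression setup too; a minimal sketch (switching to MSE is my suggestion here, not part of the original code):

# Regression-style head: linear output plus a pixel-wise regression loss.
model1.compile(
    optimizer="adam",
    loss="mse",        # mean squared error on pixel values
    metrics=["mae"],   # classification accuracy is not meaningful for regression
)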
I was building a simple CNN for a binary image classification task and found something confusing regarding the use of the BatchNormalization layer in TensorFlow. There are 320 images in the training set, evenly divided between negative and positive cases, and 80 images in the validation set, also evenly divided. I set the batch_size for both the training and validation sets to 32.
Here is the architecture of my original model.
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Input, Conv2D, MaxPooling2D, Flatten, Dropout, Dense, BatchNormalization, ReLU

model = Sequential([
    Input(shape=(256, 256, 3)),
    Conv2D(64, (3, 3), padding="same", activation="relu"),
    Conv2D(64, (3, 3), padding="same", activation="relu"),
    MaxPooling2D((2, 2)),
    Conv2D(128, (3, 3), padding="same", activation="relu"),
    Conv2D(128, (3, 3), padding="same", activation="relu"),
    MaxPooling2D((2, 2)),
    Conv2D(256, (3, 3), padding="same", activation="relu"),
    Conv2D(256, (3, 3), padding="same", activation="relu"),
    Conv2D(256, (3, 3), padding="same", activation="relu"),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(128, activation="relu"),
    Dropout(0.5),
    Dense(64, activation="relu"),
    Dense(1, activation="sigmoid")
])
Both training and validation accuracy increased from somewhere between 0.5 and 0.6 and eventually converged at around 0.95, which is good. However, with everything else unchanged, if I introduced BatchNormalization layers between the Conv2D layers like the following:
model = Sequential([
    Input(shape=(256, 256, 3)),
    Conv2D(64, (3, 3), padding="same"),
    BatchNormalization(),
    ReLU(),
    Conv2D(64, (3, 3), padding="same", activation="relu"),
    MaxPooling2D((2, 2)),
    Conv2D(128, (3, 3), padding="same"),
    BatchNormalization(),
    ReLU(),
    Conv2D(128, (3, 3), padding="same", activation="relu"),
    MaxPooling2D((2, 2)),
    Conv2D(256, (3, 3), padding="same"),
    BatchNormalization(),
    ReLU(),
    Conv2D(256, (3, 3), padding="same"),
    BatchNormalization(),
    ReLU(),
    Conv2D(256, (3, 3), padding="same", activation="relu"),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(128, activation="relu"),
    Dropout(0.5),
    Dense(64, activation="relu"),
    Dense(1, activation="sigmoid")
])
The training accuracy still behaved the same way as before, but the validation accuracy remained constant at 0.5. I also tried several other placements of the BatchNormalization layers and made the same observation.
model = Sequential([
    Input(shape=(256, 256, 3)),
    Conv2D(64, (3, 3), padding="same", activation="relu"),
    BatchNormalization(),
    Conv2D(64, (3, 3), padding="same", activation="relu"),
    MaxPooling2D((2, 2)),
    Conv2D(128, (3, 3), padding="same", activation="relu"),
    BatchNormalization(),
    Conv2D(128, (3, 3), padding="same", activation="relu"),
    MaxPooling2D((2, 2)),
    Conv2D(256, (3, 3), padding="same", activation="relu"),
    BatchNormalization(),
    Conv2D(256, (3, 3), padding="same", activation="relu"),
    BatchNormalization(),
    Conv2D(256, (3, 3), padding="same", activation="relu"),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(128, activation="relu"),
    Dropout(0.5),
    Dense(64, activation="relu"),
    Dense(1, activation="sigmoid")
])
I am rather new to deep learning, and from what I have read recently in various tutorials about BatchNormalization, it should usually (though not always) be a performance booster. However, from what I have observed, BatchNormalization was ruining my validation performance. Some people online say there are bugs in Keras's implementation of BatchNormalization. I tried reducing the number of BatchNormalization layers and found that with even a single BatchNormalization layer introduced, the validation accuracy stays constant at 0.5. Do you have any ideas why?
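One way to check the usual suspect (a diagnostic sketch of my own, not from the question; `x_val` and `y_val` are hypothetical NumPy arrays standing in for one of your validation batches): compare predictions with the BatchNormalization layers using batch statistics (training=True) versus their moving averages (training=False). If accuracy is good in the first mode but stuck near 0.5 in the second, the BN moving statistics have not converged to match your data.

import numpy as np

# x_val, y_val: one hypothetical validation batch, e.g. shapes (32, 256, 256, 3) and (32,).
# Note: calling with training=True also nudges the BN moving averages slightly.
p_batch_stats = model(x_val, training=True)    # BN uses the current batch's statistics
p_moving_stats = model(x_val, training=False)  # BN uses its moving averages (inference mode)

y_true = y_val.ravel().astype(bool)
print("acc with batch stats: ", np.mean((p_batch_stats.numpy().ravel() > 0.5) == y_true))
print("acc with moving stats:", np.mean((p_moving_stats.numpy().ravel() > 0.5) == y_true))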