Keras (Tensorflow) Reshape Layer input error - tensorflow

I have a reshape input error and I don't know why.
The requested shape is 1058400, which is (1, 21168) multiplied by the batch size of 50.
What I do not understand is the apparent input size of 677376. I don't know where this value comes from: the layer before the reshape is a Flatten layer, and I use its shape directly when I define the target shape of the Reshape layer.
The model compiles just fine, and I use TensorFlow as the backend, so the graph is defined before runtime. The error appears only when I feed data into it.
Code:
import numpy as np
import tensorflow as tf
import keras.backend as K
from keras import Model
from keras.layers import LSTM, Conv2D, Dense, Flatten, Input, Reshape
from keras.optimizers import Adam
config = tf.ConfigProto(allow_soft_placement=True)
sess = tf.Session(config=config)
K.set_session(sess)
input = Input(batch_shape=(50, 230, 230, 1))
conv1 = Conv2D(
    filters=12, kernel_size=(7, 7), strides=(1, 1), padding="valid", activation="relu"
)(input)
conv2 = Conv2D(
    filters=24, kernel_size=(5, 5), strides=(1, 1), padding="valid", activation="relu"
)(conv1)
conv3 = Conv2D(
    filters=48, kernel_size=(3, 3), strides=(2, 2), padding="valid", activation="relu"
)(conv2)
conv4 = Conv2D(
    filters=48, kernel_size=(5, 5), strides=(5, 5), padding="valid", activation="relu"
)(conv3)
conv_out = Flatten()(conv4)
conv_out = Reshape(target_shape=(1, int(conv_out.shape[1])))(conv_out)
conv_out = Dense(128, activation="relu")(conv_out)
rnn_1 = LSTM(128, stateful=True, return_sequences=True)(conv_out)
rnn_2 = LSTM(128, stateful=True, return_sequences=True)(rnn_1)
rnn_3 = LSTM(128, stateful=True, return_sequences=False)(rnn_2)
value = Dense(1, activation="linear")(rnn_3)
policy = Dense(5, activation="softmax")(rnn_3)
model = Model(inputs=input, outputs=[value, policy])
adam = Adam(lr=0.001)
model.compile(loss="mse", optimizer=adam)
model.summary()
out = model.predict(np.random.randint(1, 5, size=(50, 230, 230, 1)))
print(out)
Summary:
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) (50, 230, 230, 1) 0
__________________________________________________________________________________________________
conv2d (Conv2D) (50, 224, 224, 12) 600 input_1[0][0]
__________________________________________________________________________________________________
conv2d_1 (Conv2D) (50, 220, 220, 24) 7224 conv2d[0][0]
__________________________________________________________________________________________________
conv2d_2 (Conv2D) (50, 109, 109, 48) 10416 conv2d_1[0][0]
__________________________________________________________________________________________________
conv2d_3 (Conv2D) (50, 21, 21, 48) 57648 conv2d_2[0][0]
__________________________________________________________________________________________________
flatten (Flatten) (50, 21168) 0 conv2d_3[0][0]
__________________________________________________________________________________________________
reshape (Reshape) (50, 1, 21168) 0 flatten[0][0]
__________________________________________________________________________________________________
dense (Dense) (50, 1, 128) 2709632 reshape[0][0]
__________________________________________________________________________________________________
lstm (LSTM) (50, 1, 128) 131584 dense[0][0]
__________________________________________________________________________________________________
lstm_1 (LSTM) (50, 1, 128) 131584 lstm[0][0]
__________________________________________________________________________________________________
lstm_2 (LSTM) (50, 128) 131584 lstm_1[0][0]
__________________________________________________________________________________________________
dense_1 (Dense) (50, 1) 129 lstm_2[0][0]
__________________________________________________________________________________________________
dense_2 (Dense) (50, 5) 645 lstm_2[0][0]
==================================================================================================
Total params: 3,181,046
Trainable params: 3,181,046
Non-trainable params: 0
EDIT:
Error for the above code:
Traceback (most recent call last):
File "foo.py", line 45, in <module>
out = model.predict(np.random.randint(1, 5, size=(50, 230, 230, 1)))
File "/home/vyz/.conda/envs/stackoverflow/lib/python3.6/site-packages/keras/engine/training.py", line 1157, in predict
'Batch size: ' + str(batch_size) + '.')
ValueError: In a stateful network, you should only pass inputs with a number of samples that can be divided by the batch size. Found: 50 samples. Batch size: 32.

Editing the question
Important: I have edited your question so that it actually runs and represents your problem. Input should take batch_shape as now provided. Next time please make sure your code runs; it will make helping easier.
Solution
The solution is quite simple: the batch you pass to the network has the wrong dimension.
677376 / 21168 = 32, which is the default batch size expected by predict. You have to specify the batch size explicitly if yours differs (50 in your case), like this:
out = model.predict(np.random.randint(1, 5, size=(50, 230, 230, 1)), batch_size=50)
Everything should work fine now; just remember to specify the batch size whenever you hardcode it in the model.
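As a sanity check, any sample count that is a multiple of the hardcoded batch size also satisfies the stateful constraint; a small sketch (the 100-sample array is just an illustration, not from the question):
# 100 samples = 2 full batches of the hardcoded batch size 50,
# so the stateful divisibility check passes and predict runs twice.
big_batch = np.random.randint(1, 5, size=(100, 230, 230, 1))
out = model.predict(big_batch, batch_size=50)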

Related

How to merge 2 trained model in keras?

Good evening everyone,
I have 5 classes and each one has 2000 images. I built 2 models with different model names, and this is my model code:
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu',
                           input_shape=(150, 150, 3)),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(128, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(5, activation=tf.nn.softmax)
], name="Model1")
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy', metrics=['accuracy'])
history = model.fit(train_images, train_labels,
                    batch_size=128, epochs=30, validation_split=0.2)
model.save('f3_1st_model_seg.h5')

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu',
                           input_shape=(150, 150, 3)),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(128, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(5, activation=tf.nn.softmax)
], name="Model2")
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy', metrics=['accuracy'])
history = model.fit(train_images, train_labels,
                    batch_size=128, epochs=30, validation_split=0.2)
model.save('f3_2nd_model_seg.h5')
Then I used this code to merge the two models:
input_shape = [150, 150, 3]
model = keras.models.load_model('1st_model_seg.h5')
model.summary()
Layer (type)                   Output Shape              Param #
=================================================================
conv2d (Conv2D)                (None, 148, 148, 32)      896
max_pooling2d (MaxPooling2D)   (None, 74, 74, 32)        0
conv2d_1 (Conv2D)              (None, 72, 72, 32)        9248
max_pooling2d_1 (MaxPooling2D) (None, 36, 36, 32)        0
conv2d_2 (Conv2D)              (None, 34, 34, 64)        18496
max_pooling2d_2 (MaxPooling2D) (None, 17, 17, 64)        0
conv2d_3 (Conv2D)              (None, 15, 15, 128)       73856
max_pooling2d_3 (MaxPooling2D) (None, 7, 7, 128)         0
flatten (Flatten)              (None, 6272)               0
dense (Dense)                  (None, 5)                  31365
=================================================================
Total params: 133,861
Trainable params: 133,861
Non-trainable params: 0
model2 = keras.models.load_model('2nd_model_seg.h5')
model2.summary()
Layer (type)                   Output Shape              Param #
=================================================================
conv2d (Conv2D)                (None, 148, 148, 32)      896
max_pooling2d (MaxPooling2D)   (None, 74, 74, 32)        0
conv2d_1 (Conv2D)              (None, 72, 72, 32)        9248
max_pooling2d_1 (MaxPooling2D) (None, 36, 36, 32)        0
conv2d_2 (Conv2D)              (None, 34, 34, 64)        18496
max_pooling2d_2 (MaxPooling2D) (None, 17, 17, 64)        0
conv2d_3 (Conv2D)              (None, 15, 15, 128)       73856
max_pooling2d_3 (MaxPooling2D) (None, 7, 7, 128)         0
flatten (Flatten)              (None, 6272)               0
dense (Dense)                  (None, 5)                  31365
=================================================================
Total params: 133,861
Trainable params: 133,861
Non-trainable params: 0
def concat_horizontal(models, input_shape):
    models_count = len(models)
    hidden = []
    input = tf.keras.layers.Input(shape=input_shape)
    for i in range(models_count):
        hidden.append(models[i](input))
    output = tf.keras.layers.concatenate(hidden)
    model = tf.keras.Model(inputs=input, outputs=output)
    return model

new_model = concat_horizontal(
    [model, model2], (input_shape))
new_model.save('f1_1st_merged_seg.h5')
new_model.summary()
Layer (type)               Output Shape            Param #   Connected to
==================================================================================================
input_1 (InputLayer)       [(None, 150, 150, 3)]   0         []
model1 (Sequential)        (None, 5)               133861    ['input_1[0][0]']
model2 (Sequential)        (None, 5)               133861    ['input_1[0][0]']
concatenate (Concatenate)  (None, 10)              0         ['model1[0][0]', 'model2[0][0]']
==================================================================================================
Total params: 267,722
Trainable params: 267,722
Non-trainable params: 0
After testing the merged model, I found some images being assigned classes 7 and 9, although I only have 5 classes. This is my prediction code:
class_names = ['A', 'B', 'C', 'D', 'E']
for img in os.listdir(path):
    # predicting images
    img2 = tf.keras.preprocessing.image.load_img(
        os.path.join(path, img), target_size=(150, 150))
    x = tf.keras.preprocessing.image.img_to_array(img2)
    x = np.expand_dims(x, axis=0)
    images = np.vstack([x])
    classes = np.argmax(model.predict(images), axis=-1)
    y_out = class_names[classes[0]]
I got this error:
y_out = class_names[classes[0]]
IndexError: list index out of range
In this case it could even have been done with the Sequential method. You are trying to concatenate two output layers with 5 columns each, which raises the number of output columns from 5 to 10; argmax over those 10 columns can return indices up to 9, which is why you see classes 7 and 9 and why class_names[classes[0]] goes out of range. Instead, define these two models only up to the Flatten layer (as the last layer of each branch), then define the final model from the input layer, the two branch models, a concatenate layer, and a single output layer with five units and a softmax activation.
So remove the output layer
tf.keras.layers.Dense(5, activation=tf.nn.softmax)
from those two models, and add it as a single layer after the concatenation, as done here:
def concat_horizontal(models, input_shape):
    models_count = len(models)
    hidden = []
    input = tf.keras.layers.Input(shape=input_shape)
    for i in range(models_count):
        hidden.append(models[i](input))
    output = tf.keras.layers.concatenate(hidden)
    output = tf.keras.layers.Dense(5, activation=tf.nn.softmax)(output)
    model = tf.keras.Model(inputs=input, outputs=output)
    return model
But note that it would be better to define the branch models with the functional API for cases like this; a sketch follows.
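For illustration, a minimal functional-API sketch of that suggestion. The layer stack mirrors the question's models up to Flatten, but the helper name make_branch is my own, and the branches here start from fresh weights; in practice you would truncate your two trained models instead:
import tensorflow as tf

def make_branch(inputs):
    # Same stack as the question's models, but stopping at Flatten:
    # no per-branch softmax head.
    x = tf.keras.layers.Conv2D(32, (3, 3), activation='relu')(inputs)
    x = tf.keras.layers.MaxPooling2D(2, 2)(x)
    x = tf.keras.layers.Conv2D(32, (3, 3), activation='relu')(x)
    x = tf.keras.layers.MaxPooling2D(2, 2)(x)
    x = tf.keras.layers.Conv2D(64, (3, 3), activation='relu')(x)
    x = tf.keras.layers.MaxPooling2D(2, 2)(x)
    x = tf.keras.layers.Conv2D(128, (3, 3), activation='relu')(x)
    x = tf.keras.layers.MaxPooling2D(2, 2)(x)
    return tf.keras.layers.Flatten()(x)

inputs = tf.keras.layers.Input(shape=(150, 150, 3))
merged = tf.keras.layers.concatenate([make_branch(inputs), make_branch(inputs)])
# One shared head, so argmax stays within the 5 real classes.
outputs = tf.keras.layers.Dense(5, activation='softmax')(merged)
model = tf.keras.Model(inputs=inputs, outputs=outputs)
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])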

conv-autoencoder that val_loss doesn't decrease

I built an anomaly detection model using a convolutional autoencoder on the UCSD_ped2 dataset. What puzzles me is that after very few epochs the val_loss doesn't decrease. It seems the model can't learn any further. I have done some research on improving the model, but it isn't getting better. What should I do to fix it?
Here's my model's structure:
from keras.models import Model
from keras.layers import (Input, Conv2D, Conv2DTranspose, MaxPooling2D,
                          UpSampling2D, BatchNormalization)
from keras import regularizers, initializers

x = 144; y = 224
input_img = Input(shape=(x, y, inChannel))
bn1 = BatchNormalization(axis=-1, momentum=0.99, epsilon=0.001)(input_img)
conv1 = Conv2D(256, (11, 11), strides=(4, 4), activation='relu',
               kernel_regularizer=regularizers.l2(0.0005),
               kernel_initializer=initializers.glorot_normal(seed=None),
               padding='same')(bn1)
pool1 = MaxPooling2D(pool_size=(2, 2))(conv1)
bn2 = BatchNormalization(axis=-1, momentum=0.99, epsilon=0.001)(pool1)
conv2 = Conv2D(128, (5, 5), activation='relu',
               kernel_regularizer=regularizers.l2(0.0005),
               kernel_initializer=initializers.glorot_normal(seed=None),
               padding='same')(bn2)
pool2 = MaxPooling2D(pool_size=(2, 2))(conv2)
bn3 = BatchNormalization(axis=-1, momentum=0.99, epsilon=0.001)(pool2)
conv3 = Conv2D(64, (3, 3), activation='relu',
               kernel_regularizer=regularizers.l2(0.0005),
               kernel_initializer=initializers.glorot_normal(seed=None),
               padding='same')(bn3)
ubn3 = BatchNormalization(axis=-1, momentum=0.99, epsilon=0.001)(conv3)
uconv3 = Conv2DTranspose(128, (3, 3), activation='relu',
                         kernel_regularizer=regularizers.l2(0.0005),
                         kernel_initializer=initializers.glorot_normal(seed=None),
                         padding='same')(ubn3)
upool3 = UpSampling2D(size=(2, 2))(uconv3)
ubn2 = BatchNormalization(axis=-1, momentum=0.99, epsilon=0.001)(upool3)
uconv2 = Conv2DTranspose(256, (3, 3), activation='relu',
                         kernel_regularizer=regularizers.l2(0.0005),
                         kernel_initializer=initializers.glorot_normal(seed=None),
                         padding='same')(ubn2)
upool2 = UpSampling2D(size=(2, 2))(uconv2)
ubn1 = BatchNormalization(axis=-1, momentum=0.99, epsilon=0.001)(upool2)
decoded = Conv2DTranspose(1, (11, 11), strides=(4, 4),
                          kernel_regularizer=regularizers.l2(0.0005),
                          kernel_initializer=initializers.glorot_normal(seed=None),
                          activation='sigmoid', padding='same')(ubn1)
autoencoder = Model(input_img, decoded)
autoencoder.compile(loss='mean_squared_error', optimizer='Adadelta', metrics=['accuracy'])
history = autoencoder.fit(X_train, Y_train, validation_split=0.3,
                          batch_size=batch_size, epochs=epochs, verbose=0,
                          shuffle=True,
                          callbacks=[earlystopping, checkpointer, reduce_lr])
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) (None, 144, 224, 1) 0
_________________________________________________________________
batch_normalization_1 (Batch (None, 144, 224, 1) 4
_________________________________________________________________
conv2d_1 (Conv2D) (None, 36, 56, 256) 31232
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 18, 28, 256) 0
_________________________________________________________________
batch_normalization_2 (Batch (None, 18, 28, 256) 1024
_________________________________________________________________
conv2d_2 (Conv2D) (None, 18, 28, 128) 819328
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 9, 14, 128) 0
_________________________________________________________________
batch_normalization_3 (Batch (None, 9, 14, 128) 512
_________________________________________________________________
conv2d_3 (Conv2D) (None, 9, 14, 64) 73792
_________________________________________________________________
batch_normalization_4 (Batch (None, 9, 14, 64) 256
_________________________________________________________________
conv2d_transpose_1 (Conv2DTr (None, 9, 14, 128) 73856
_________________________________________________________________
up_sampling2d_1 (UpSampling2 (None, 18, 28, 128) 0
_________________________________________________________________
batch_normalization_5 (Batch (None, 18, 28, 128) 512
_________________________________________________________________
conv2d_transpose_2 (Conv2DTr (None, 18, 28, 256) 295168
_________________________________________________________________
up_sampling2d_2 (UpSampling2 (None, 36, 56, 256) 0
_________________________________________________________________
batch_normalization_6 (Batch (None, 36, 56, 256) 1024
_________________________________________________________________
conv2d_transpose_3 (Conv2DTr (None, 144, 224, 1) 30977
=================================================================
Total params: 1,327,685
Trainable params: 1,326,019
Non-trainable params: 1,666
The batch size is 30 and the number of epochs is 100; the training data has 1785 pictures and the validation data has 765.
I have tried:
deleting the kernel_regularizer;
adding ReduceLROnPlateau;
but it only brought a small improvement.
Epoch 00043: ReduceLROnPlateau reducing learning rate to 9.99999874573554e-12.
Epoch 00044: val_loss did not improve from 0.00240
Epoch 00045: val_loss did not improve from 0.00240
Once the val_loss reached 0.00240, it didn't decrease any further.
One figure showed the loss over the epochs; another showed the model's reconstruction results, which are truly poor. How can I make my model work better?
Based on your screenshot, it seems that this is not an issue of overfitting or underfitting.
In my understanding:
Underfitting – validation and training error both high
Overfitting – validation error high, training error low
Good fit – validation error low, slightly higher than the training error
Generally speaking, the dataset should be split properly between training and validation. Typically the training set should be 4 times the size of your validation set (an 80/20 split).
My suggestion is to increase the size of your dataset through data augmentation and continue training, as sketched below.
Kindly refer to the documentation for data augmentation.
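A minimal sketch of that suggestion with Keras's ImageDataGenerator, reusing the X_train array and batch size of 30 from the question; the augmentation ranges are illustrative assumptions, not tuned values. Since an autoencoder's target is its own (augmented) input, a small wrapper keeps input and target identical:
from keras.preprocessing.image import ImageDataGenerator

# Illustrative augmentation settings; the ranges are assumptions.
datagen = ImageDataGenerator(rotation_range=5,
                             width_shift_range=0.05,
                             height_shift_range=0.05,
                             horizontal_flip=True)

def autoencoder_batches(x, batch_size):
    # Yield (input, target) pairs where the target is the augmented batch itself.
    for batch in datagen.flow(x, batch_size=batch_size, shuffle=True):
        yield batch, batch

# Validation would need its own generator; it is omitted in this sketch.
history = autoencoder.fit_generator(autoencoder_batches(X_train, 30),
                                    steps_per_epoch=len(X_train) // 30,
                                    epochs=100)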

Error: expected conv3d_1_input to have 5 dimensions, but got array with shape (10, 224, 224, 3)

I'm trying to train a neural network on a dataset for liveness anti-spoofing. I have some videos in two folders named genuine and fake. I have extracted 10 frames from each video and saved them in two folders with the aforementioned names under a new directory training.
--/training/
----/genuine/   # contains 10 frames * 300 videos = 3000 images
----/fake/      # contains 10 frames * 800 videos = 8000 images
I designed the following 3D ConvNet using Keras as my first try, but after running it, it throws the following exception:
from keras.preprocessing.image import ImageDataGenerator
from keras import Model, optimizers, activations, losses, regularizers, backend, Sequential
from keras.layers import Dense, MaxPooling3D, AveragePooling3D, Conv3D, Input, Flatten, BatchNormalization
BATCH_SIZE = 10
TARGET_SIZE = (224, 224)
train_datagen = ImageDataGenerator(rescale=1.0/255,
                                   data_format='channels_last',
                                   validation_split=0.2,
                                   shear_range=0.0,
                                   zoom_range=0,
                                   horizontal_flip=False,
                                   featurewise_center=False,
                                   featurewise_std_normalization=False,
                                   width_shift_range=False,
                                   height_shift_range=False)
train_generator = train_datagen.flow_from_directory("./training/",
                                                    target_size=TARGET_SIZE,
                                                    batch_size=BATCH_SIZE,
                                                    class_mode='binary',
                                                    shuffle=False,
                                                    subset='training')
validation_generator = train_datagen.flow_from_directory("./training/",
                                                         target_size=TARGET_SIZE,
                                                         batch_size=BATCH_SIZE,
                                                         class_mode='binary',
                                                         shuffle=False,
                                                         subset='validation')
SHAPE = (10, 224, 224, 3)
model = Sequential()
model.add(Conv3D(filters=128, kernel_size=(1, 3, 3), data_format='channels_last', activation='relu', input_shape=(10, 224, 224, 3)))
model.add(MaxPooling3D(data_format='channels_last', pool_size=(1, 2, 2)))
model.add(Conv3D(filters=64, kernel_size=(2, 3, 3), activation='relu'))
model.add(MaxPooling3D(pool_size=(1, 2, 2)))
model.add(Conv3D(filters=32, kernel_size=(2, 3, 3), activation='relu'))
model.add(Conv3D(filters=32, kernel_size=(2, 3, 3), activation='relu'))
model.add(MaxPooling3D(pool_size=(1, 2, 2)))
model.add(Conv3D(filters=16, kernel_size=(2, 3, 3), activation='relu'))
model.add(Conv3D(filters=16, kernel_size=(2, 3, 3), activation='relu'))
model.add(AveragePooling3D())
model.add(BatchNormalization())
model.add(Dense(32, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.summary()
model.compile(optimizer=optimizers.adam(), loss=losses.binary_crossentropy, metrics=['accuracy'])
model.fit_generator(train_generator, steps_per_epoch=train_generator.samples/train_generator.batch_size, epochs=5, validation_data=validation_generator, validation_steps=validation_generator.samples/validation_generator.batch_size)
model.save('3d.h5')
Here is the error:
ValueError: Error when checking input: expected conv3d_1_input to have 5 dimensions, but got array with shape (10, 224, 224, 3)
And this is the output of model.summary()
Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv3d_1 (Conv3D) (None, 10, 222, 222, 128) 3584
_________________________________________________________________
max_pooling3d_1 (MaxPooling3 (None, 10, 111, 111, 128) 0
_________________________________________________________________
conv3d_2 (Conv3D) (None, 9, 109, 109, 64) 147520
_________________________________________________________________
max_pooling3d_2 (MaxPooling3 (None, 9, 54, 54, 64) 0
_________________________________________________________________
conv3d_3 (Conv3D) (None, 8, 52, 52, 32) 36896
_________________________________________________________________
conv3d_4 (Conv3D) (None, 7, 50, 50, 32) 18464
_________________________________________________________________
max_pooling3d_3 (MaxPooling3 (None, 7, 25, 25, 32) 0
_________________________________________________________________
conv3d_5 (Conv3D) (None, 6, 23, 23, 16) 9232
_________________________________________________________________
conv3d_6 (Conv3D) (None, 5, 21, 21, 16) 4624
_________________________________________________________________
average_pooling3d_1 (Average (None, 2, 10, 10, 16) 0
_________________________________________________________________
batch_normalization_1 (Batch (None, 2, 10, 10, 16) 64
_________________________________________________________________
dense_1 (Dense) (None, 2, 10, 10, 32) 544
_________________________________________________________________
dense_2 (Dense) (None, 2, 10, 10, 1) 33
=================================================================
Total params: 220,961
Trainable params: 220,929
Non-trainable params: 32
__________________________________________________________
I'd appreciate any help in fixing the exception. By the way, I'm using TensorFlow as the backend, if that helps solve the problem.
As #thushv89 mentioned in the comments, Keras has no built-in video generator, which causes a lot of problems for those working with big video datasets. Therefore, I wrote a simple VideoDataGenerator which works almost as simply as ImageDataGenerator. The script can be found here on my github in case someone needs it in the future.
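Independently of that script, the core fix is to group each video's 10 frames into one 5-D sample so that batches have shape (batch, frames, height, width, channels), which is what Conv3D expects. A rough sketch, assuming the frame filenames sort so that the frames of each video stay adjacent (that naming is an assumption):
import os
import numpy as np
from keras.preprocessing.image import load_img, img_to_array

def load_video_clips(folder, frames_per_video=10, target_size=(224, 224)):
    # Sorted so the 10 frames of one video end up next to each other.
    names = sorted(os.listdir(folder))
    frames = [img_to_array(load_img(os.path.join(folder, name),
                                    target_size=target_size)) / 255.0
              for name in names]
    # (num_videos, 10, 224, 224, 3): one 5-D sample per video.
    return np.stack(frames).reshape(-1, frames_per_video, *target_size, 3)

genuine = load_video_clips('./training/genuine/')  # (300, 10, 224, 224, 3)
fake = load_video_clips('./training/fake/')        # (800, 10, 224, 224, 3)
X = np.concatenate([genuine, fake])
y = np.concatenate([np.zeros(len(genuine)), np.ones(len(fake))])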

keras-tensorflow CAE dimension mismatch

I'm basically following this guide to build a convolutional autoencoder with the tensorflow backend. The main difference from the guide is that my data consists of 257x257 grayscale images. The following code:
import os
import sys
import numpy as np
from scipy import misc
from keras.models import Model
from keras.layers import Input, Conv2D, MaxPooling2D, UpSampling2D

TRAIN_FOLDER = 'data/OIRDS_gray/'
EPOCHS = 10
SHAPE = (257, 257, 1)
FILELIST = os.listdir(TRAIN_FOLDER)

def loadTrainData():
    train_data = []
    for fn in FILELIST:
        img = misc.imread(TRAIN_FOLDER + fn)
        img = np.reshape(img, (len(img[0, :]), len(img[:, 0]), SHAPE[2]))
        if img.shape != SHAPE:
            print "image shape mismatch!"
            print "Expected: "
            print SHAPE
            print "but got:"
            print img.shape
            sys.exit()
        train_data.append(img)
    train_data = np.array(train_data)
    train_data = train_data.astype('float32') / 255
    return np.array(train_data)
def createModel():
    input_img = Input(shape=SHAPE)
    x = Conv2D(16, (3, 3), activation='relu', padding='same')(input_img)
    x = MaxPooling2D((2, 2), padding='same')(x)
    x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)
    x = MaxPooling2D((2, 2), padding='same')(x)
    x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)
    encoded = MaxPooling2D((2, 2), padding='same')(x)
    x = Conv2D(8, (3, 3), activation='relu', padding='same')(encoded)
    x = UpSampling2D((2, 2))(x)
    x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)
    x = UpSampling2D((2, 2))(x)
    x = Conv2D(16, (3, 3), activation='relu', padding='same')(x)
    x = UpSampling2D((2, 2))(x)
    decoded = Conv2D(1, (3, 3), activation='sigmoid', padding='same')(x)
    return Model(input_img, decoded)

x_train = loadTrainData()
autoencoder = createModel()
autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')
print x_train.shape
autoencoder.summary()

# Run the network
autoencoder.fit(x_train, x_train,
                epochs=EPOCHS,
                batch_size=128,
                shuffle=True)
gives me an error:
ValueError: Error when checking target: expected conv2d_7 to have shape (None, 260, 260, 1) but got array with shape (859, 257, 257, 1)
As you can see this is not the standard problem with theano/tensorflow backend dim ordering, but something else. I checked that my data is what it's supposed to be with print x_train.shape:
(859, 257, 257, 1)
And I also ran autoencoder.summary():
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) (None, 257, 257, 1) 0
_________________________________________________________________
conv2d_1 (Conv2D) (None, 257, 257, 16) 160
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 129, 129, 16) 0
_________________________________________________________________
conv2d_2 (Conv2D) (None, 129, 129, 8) 1160
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 65, 65, 8) 0
_________________________________________________________________
conv2d_3 (Conv2D) (None, 65, 65, 8) 584
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 33, 33, 8) 0
_________________________________________________________________
conv2d_4 (Conv2D) (None, 33, 33, 8) 584
_________________________________________________________________
up_sampling2d_1 (UpSampling2 (None, 66, 66, 8) 0
_________________________________________________________________
conv2d_5 (Conv2D) (None, 66, 66, 8) 584
_________________________________________________________________
up_sampling2d_2 (UpSampling2 (None, 132, 132, 8) 0
_________________________________________________________________
conv2d_6 (Conv2D) (None, 132, 132, 16) 1168
_________________________________________________________________
up_sampling2d_3 (UpSampling2 (None, 264, 264, 16) 0
_________________________________________________________________
conv2d_7 (Conv2D) (None, 264, 264, 1) 145
=================================================================
Total params: 4,385
Trainable params: 4,385
Non-trainable params: 0
_________________________________________________________________
Now I'm not exactly sure where the problem is, but it looks like things go wrong around conv2d_6 (the Param # is too high). I know how CAEs work in principle, but I'm not that familiar with the exact technical details yet, and I have tried to solve this mainly by changing the deconvolution padding (using valid instead of same). The closest I got to matching dims was (None, 258, 258, 1), which I reached by blindly trying different combinations of padding on the deconvolution side; not really a smart way to solve a problem...
At this point I'm at a loss, and any help would be appreciated.
Since your input and output data are the same, your final output shape should be the same as the input shape.
The last convolutional layer should have shape (None, 257, 257, 1).
The problem happens because you have an odd number (257) as the size of the images.
When MaxPooling is applied, it has to divide the size by two and round either up or down (here it rounds up: see the 129 coming from 257/2 = 128.5).
Later, when you do UpSampling, the model doesn't know the earlier dimensions were rounded; it simply doubles the value. Happening three times in sequence, this adds 7 pixels to the final result.
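A quick trace of the spatial size makes those 7 pixels visible (this only reproduces the numbers already shown in the summary):
size = 257
for _ in range(3):           # three MaxPooling2D((2, 2), padding='same')
    size = (size + 1) // 2   # 257 -> 129 -> 65 -> 33 (rounds up)
for _ in range(3):           # three UpSampling2D((2, 2))
    size = size * 2          # 33 -> 66 -> 132 -> 264
print(size - 257)            # 7 extra pixels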
You could try either cropping the result or padding the input.
I usually work with images of compatible sizes: if you have 3 MaxPooling layers, your size should be a multiple of 2³. The nearest such size is 264.
Padding the input data directly:
x_train = numpy.lib.pad(x_train,((0,0),(3,4),(3,4),(0,0)),mode='constant')
This will require that SHAPE=(264,264,1)
Padding inside the model:
import keras.backend as K
from keras.layers import Lambda

input_img = Input(shape=SHAPE)
x = Lambda(lambda x: K.spatial_2d_padding(x, padding=((3, 4), (3, 4))),
           output_shape=(264, 264, 1))(input_img)
Cropping the results:
This will be required in any case where you do not change the actual data (numpy array) directly.
decoded = Lambda(lambda x: x[:,3:-4,3:-4,:], output_shape=SHAPE)(x)
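Putting the cropping variant together: if the input stays 257x257 and the network is unchanged, the decoder emits 264x264, and a final crop restores the original size (264 - 3 - 4 = 257), so the targets need no padding. A sketch of the end of createModel, under those assumptions:
    decoded = Conv2D(1, (3, 3), activation='sigmoid', padding='same')(x)
    # Crop the 264x264 decoder output back to 257x257 to match the targets.
    cropped = Lambda(lambda t: t[:, 3:-4, 3:-4, :], output_shape=SHAPE)(decoded)
    return Model(input_img, cropped)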

Upsampling by decimal factor in Keras

I want to use a 2D upsampling layer in Keras so that I can increase the image size by a decimal factor (in this case from [213, 213] to [640, 640]). The model compiles as expected, but when I want to train or predict on real images, they are upsampled only by the closest integer to the input factor. Any idea? Details below:
Network:
mp_size = (3,3)
inputs = Input(input_data.shape[1:])
lay1 = Conv2D(32, (3,3), strides=(1,1), activation='relu', padding='same', kernel_initializer='glorot_normal')(inputs)
lay2 = MaxPooling2D(pool_size=mp_size)(lay1)
lay3 = Conv2D(32, (3,3), strides=(1,1), activation='relu', padding='same', kernel_initializer='glorot_normal')(lay2)
size1=lay3.get_shape()[1:3]
size2=lay1.get_shape()[1:3]
us_size = size2[0].value/size1[0].value, size2[1].value/size1[1].value
lay4 = Concatenate(axis=-1)([UpSampling2D(size=us_size)(lay3),lay1])
lay5 = Conv2D(1, (1, 1), strides=(1,1), activation='sigmoid')(lay4)
model = Model(inputs=inputs, outputs=lay5)
Network summary when I use model.summary() :
____________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
====================================================================================================
input_4 (InputLayer) (None, 640, 640, 2) 0
____________________________________________________________________________________________________
conv2d_58 (Conv2D) (None, 640, 640, 32) 608 input_4[0][0]
____________________________________________________________________________________________________
max_pooling2d_14 (MaxPooling2D) (None, 213, 213, 32) 0 conv2d_58[0][0]
____________________________________________________________________________________________________
conv2d_59 (Conv2D) (None, 213, 213, 32) 9248 max_pooling2d_14[0][0]
____________________________________________________________________________________________________
up_sampling2d_14 (UpSampling2D) (None, 640.0, 640.0, 0 conv2d_59[0][0]
____________________________________________________________________________________________________
concatenate_14 (Concatenate) (None, 640.0, 640.0, 0 up_sampling2d_14[0][0]
conv2d_58[0][0]
____________________________________________________________________________________________________
conv2d_60 (Conv2D) (None, 640.0, 640.0, 65 concatenate_14[0][0]
====================================================================================================
Total params: 9,921
Trainable params: 9,921
Non-trainable params: 0
Error when training the network:
InvalidArgumentError: ConcatOp : Dimensions of inputs should match: shape[0] = [1,639,639,32] vs. shape[1] = [1,640,640,32]
[[Node: concatenate_14/concat = ConcatV2[N=2, T=DT_FLOAT, Tidx=DT_INT32, _device="/job:localhost/replica:0/task:0/cpu:0"](up_sampling2d_14/ResizeNearestNeighbor, conv2d_58/Relu, concatenate_14/concat/axis)]]
The inferred static shape keeps the fractional factor (hence the 640.0 entries in the summary), while the actual resize uses the integer part, giving 213 × 3 = 639 and the mismatch above. It can be resolved by subclassing UpSampling2D to cast the computed output shape to integers, as in the code below:
from keras.layers import UpSampling2D
from keras.utils.generic_utils import transpose_shape
class UpSamplingUnet(UpSampling2D):
    def compute_output_shape(self, input_shape):
        size_all_dims = (1,) + self.size + (1,)
        spatial_axes = list(range(1, 1 + self.rank))
        size_all_dims = transpose_shape(size_all_dims,
                                        self.data_format,
                                        spatial_axes)
        output_shape = list(input_shape)
        for dim in range(len(output_shape)):
            if output_shape[dim] is not None:
                output_shape[dim] *= size_all_dims[dim]
                output_shape[dim] = int(output_shape[dim])
        return tuple(output_shape)
Then alter UpSampling2D(size=us_size) to UpSamplingUnet(size=us_size).
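For completeness, a sketch of the drop-in usage in the network from the question; only the concatenation line changes, and us_size keeps the fractional factor computed earlier:
lay4 = Concatenate(axis=-1)([UpSamplingUnet(size=us_size)(lay3), lay1])

# The inferred shape should now show integer dimensions,
# e.g. (None, 640, 640, 64) instead of (None, 640.0, 640.0, ...).
print(lay4.get_shape())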