Keras error with shape of neural network - tensorflow

Please help with the following piece of code. The error seems to be related to the shape of the output, but I am not sure what I should change. My input is X and the labels for the training data are y (see the code below).
from keras.models import Sequential, load_model
from keras.layers import Convolution2D, MaxPooling2D, Flatten, Dense, Dropout

def model(load, shape, checkpoint=None):
    """Return a model from file or to train on."""
    if load and checkpoint:
        return load_model(checkpoint)
    conv_layers, dense_layers = [32, 32, 64, 128], [1024, 512]
    model = Sequential()
    model.add(Convolution2D(32, 3, 3, activation='elu', input_shape=shape))
    model.add(MaxPooling2D())
    for cl in conv_layers:
        model.add(Convolution2D(cl, 3, 3, activation='elu'))
        model.add(MaxPooling2D())
    model.add(Flatten())
    for dl in dense_layers:
        model.add(Dense(dl, activation='elu'))
        model.add(Dropout(0.5))
    model.add(Dense(1, activation='linear'))
    model.compile(loss='mse', optimizer="adam")
    return model
net = model(load=False, shape=(100, 100, 3))
X = ['/path/to/img/file',...]
y = [[1.2, 4.5], [<num1>, <num2>]]
net.fit_generator(_generator(256, X, y), samples_per_epoch=1000, nb_epoch=2)
This leads to the following error:
net.fit_generator(_generator(256, X, y), samples_per_epoch=1000, nb_epoch=2)
ValueError: Error when checking target: expected dense_3 to have shape (None, 1) but got array with shape (256, 2)

It seems like you want to do binary classification. Your labeled data has the shape (batch_size, 2); I would guess it is always 0,1 or 1,0 depending on which class is correct. But your model has only one output. If you are using mean squared error, your model needs as many output neurons as the last dimension of your labels, i.e. 2.
You can now either transform your label data into something of shape (batch_size, 1) (or just (batch_size), I'm not sure), or increase the number of neurons in your output layer.
Also, if I'm right that you want to do binary classification, use binary crossentropy as the loss function.

If y is of the form [[1.2, 4.5], ...], this will work (instead of the last layer you have defined currently):
model.add(Dense(2, activation='linear'))
If y is either in the form [1.2, 3.4, ...] or [[1.2], [3.4], ...] you can use the layer you have:
model.add(Dense(1, activation='linear'))
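To make the label-shape point concrete, here is a minimal sketch (not from the original post) of how the two label layouts map onto the two output layers:
import numpy as np

# Two targets per sample -> output layer Dense(2):
y = np.array([[1.2, 4.5], [2.3, 5.6]])       # shape (2, 2)

# One target per sample -> output layer Dense(1):
y = np.array([[1.2], [3.4]])                 # shape (2, 1)
# or equivalently:
y = np.array([1.2, 3.4]).reshape(-1, 1)      # (2,) -> (2, 1)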

Related

How can I concatenate Tensorflow Dataset columns?

I have a Keras model that takes an input layer with shape (n, 288, 1), where 288 is the number of features. I am building a TensorFlow dataset with tf.data.experimental.make_batched_features_dataset, and my input ends up with shape (n, 1, 1), which means it feeds one feature to the model at a time. How can I make an input tensor with the shape (n, 288, 1)? That is, how can I use all my features in one tensor?
Here is my code for the model:
def _gzip_reader_fn(filenames):
    """Small utility returning a record reader that can read gzip'ed files."""
    return tf.data.TFRecordDataset(filenames, compression_type='GZIP')

def _input_fn(file_pattern, tf_transform_output, batch_size):
    """Generates features and label for tuning/training.

    Args:
      file_pattern: input tfrecord file pattern.
      tf_transform_output: A TFTransformOutput.
      batch_size: representing the number of consecutive elements of returned
        dataset to combine in a single batch
    Returns:
      A dataset that contains (features, indices) tuple where features is a
      dictionary of Tensors, and indices is a single Tensor of label indices.
    """
    transformed_feature_spec = (
        tf_transform_output.transformed_feature_spec().copy())
    dataset = tf.data.experimental.make_batched_features_dataset(
        file_pattern=file_pattern,
        batch_size=batch_size,
        features=transformed_feature_spec,
        reader=_gzip_reader_fn,
        label_key=features.transformed_name(features.LABEL_KEY))
    return dataset
def _build_keras_model(input_shape, learning_rate, nb_classes=2):
    # Keras needs the feature definitions at compile time.
    input_shape = (288, 1)
    input_layer = keras.layers.Input(input_shape)
    padding = 'valid'
    if input_shape[0] < 60:
        padding = 'same'
    conv1 = keras.layers.Conv1D(filters=6, kernel_size=7, padding=padding, activation='sigmoid')(input_layer)
    conv1 = keras.layers.AveragePooling1D(pool_size=3)(conv1)
    conv2 = keras.layers.Conv1D(filters=12, kernel_size=7, padding=padding, activation='sigmoid')(conv1)
    conv2 = keras.layers.AveragePooling1D(pool_size=3)(conv2)
    flatten_layer = keras.layers.Flatten()(conv2)
    output_layer = keras.layers.Dense(units=nb_classes, activation='sigmoid')(flatten_layer)
    model = keras.models.Model(inputs=input_layer, outputs=output_layer)
    optimizer = keras.optimizers.Adam(lr=learning_rate)
    # Compile Keras model
    model.compile(loss='mean_squared_error', optimizer=optimizer, metrics=['accuracy'])
    model.summary(print_fn=logging.info)
    return model
This is the error:
tensorflow:Model was constructed with shape (None, 288, 1) for input Tensor("input_1:0", shape=(None, 288, 1), dtype=float32), but it was called on an input with incompatible shape (128, 1, 1).
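One way to approach this (an illustrative sketch, not a confirmed answer; the column names f_0 ... f_287 are hypothetical stand-ins for the transformed feature names) is to map over the batched dataset and stack the per-column tensors into a single (batch, 288, 1) tensor:
import tensorflow as tf

def _stack_features(features, labels):
    # features is a dict of per-column tensors; flatten each to (batch,)
    # and stack the 288 columns into one tensor.
    cols = [tf.reshape(features['f_{}'.format(i)], [-1])   # hypothetical names
            for i in range(288)]
    stacked = tf.stack(cols, axis=1)                       # (batch, 288)
    return tf.expand_dims(stacked, -1), labels             # (batch, 288, 1)

dataset = _input_fn(file_pattern, tf_transform_output, batch_size=128)
dataset = dataset.map(_stack_features)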

keras classification problem, error in model.fit command

I want to solve a classification problem with a Keras model, but after running model.fit I get a dimension error. I have run the following code:
print(X_train.shape)
print(y_train.shape)
output:
(2588, 39436)
(2588, 6)
model = keras.Sequential(
    [
        keras.Input(shape=(39436, 1)),
        layers.Conv1D(32, kernel_size=3, strides=5, activation="relu"),
        layers.MaxPooling1D(pool_size=10),
        layers.Conv1D(64, kernel_size=3, strides=5, activation="relu"),
        layers.MaxPooling1D(pool_size=10),
        layers.Flatten(),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ]
)
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
After running the following code,
model.fit(X_train, y_train, batch_size=128, epochs=15, validation_split=0.3)
I get this error:
ValueError: in user code:
ValueError: Input 0 of layer sequential_1 is incompatible with the layer: : expected min_ndim=3, found ndim=2. Full shape received: [None, 39436]
I would appreciate any guidance on what the issue might be.
Your input array, as per the error message, has shape [None, 39436]. However, in your Input layer you pass a shape of [39436, 1], which corresponds to [None, 39436, 1], where None represents the samples dimension. This mismatch is the error being thrown.
You need to match the shapes, either by:
1. Reshaping your input data to have a shape of [samples, 39436, 1], leaving the model architecture unchanged.
This can be done as follows (supposing train_X holds your input features):
train_X = np.expand_dims(train_X, axis=2)
np.expand_dims adds a new dimension to the array at index 2 of the shape of the array. So here it reshapes [samples, 39436] to [samples, 39436, 1].
Refer: NumPy docs for expand_dims
OR
2. Change the input_shape parameter in the Input layer to accept a shape of [39436,], so as to match your data.
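Since the Conv1D layers require a 3-D input anyway, option 1 is the natural fix here. A minimal sketch using the arrays from the question:
import numpy as np

X_train = np.expand_dims(X_train, axis=2)   # (2588, 39436) -> (2588, 39436, 1)
print(X_train.shape)                        # now matches keras.Input(shape=(39436, 1))
model.fit(X_train, y_train, batch_size=128, epochs=15, validation_split=0.3)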

UnimplementedError: Fused conv implementation does not support grouped convolutions for now

I am trying to build a CNN model to recognise human sketches using the TU-Berlin dataset. I downloaded the png zip file, imported the data into Google Colab, and then split the data into train and test folders. Here is the model:
model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(filters=64, kernel_size=(5, 5), padding='Same',
                           activation='relu', input_shape=target_dims),
    tf.keras.layers.Conv2D(filters=64, kernel_size=(5, 5), padding='Same',
                           activation='relu'),
    tf.keras.layers.MaxPool2D(pool_size=(2, 2)),
    tf.keras.layers.Dropout(0.25),
    tf.keras.layers.Conv2D(filters=128, kernel_size=(3, 3), padding='Same',
                           activation='relu'),
    tf.keras.layers.Conv2D(filters=128, kernel_size=(3, 3), padding='Same',
                           activation='relu'),
    tf.keras.layers.MaxPool2D(pool_size=(2, 2), strides=(2, 2)),
    tf.keras.layers.Dropout(0.25),
    tf.keras.layers.Conv2D(256, kernel_size=4, strides=1, activation='relu', padding='same'),
    tf.keras.layers.Conv2D(256, kernel_size=4, strides=2, activation='relu', padding='same'),
    tf.keras.layers.Dropout(0.25),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(512, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(n_classes, activation="softmax")
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=["accuracy"])
model.fit_generator(train_generator, epochs=10, validation_data=val_generator)
And I am getting the following error:
UnimplementedError: Fused conv implementation does not support grouped convolutions for now.
[[node sequential/conv2d/Relu (defined at <ipython-input-9-36d4624b896d>:1) ]] [Op:__inference_train_function_1358]
Function call stack:
train_function
I would be grateful to any kind of help that will solve this issue. Thank you.
(PS - I am running Tensorflow 2.2.0 and no GPU)
I had a similar error; the problem was a mismatch between the number of channels in my images and the number of channels I specified in the model. So check the number of dimensions of your images against the value specified in the input shape and make sure they are the same.
I had this same error using the facial expression recognition dataset. Here is how I solved it.
From what I understand, the dataset is grayscale. When you use TensorFlow's ImageDataGenerator with flow_from_directory to generate the train and validation sets, you need to specify the color_mode as 'grayscale' or 'rgb' based on the dataset/images; here it should be 'grayscale'. Accordingly, in the model's first Conv2D layer the input_shape should be (height, width, 1), the 1 because it is grayscale.
In short: pass color_mode="grayscale" to flow_from_directory and check that your model input is (height, width, 1).
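Putting that answer into code, a minimal sketch (the directory path and image size are placeholders, not from the question):
from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_generator = ImageDataGenerator(rescale=1./255).flow_from_directory(
    'data/train',            # placeholder path
    target_size=(48, 48),    # placeholder height/width
    color_mode='grayscale',  # one channel, matching the model input below
    class_mode='categorical')

# ...and the model's first layer must then expect a single channel:
# tf.keras.layers.Conv2D(64, (5, 5), activation='relu', input_shape=(48, 48, 1))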
Just as @grande_cifer said, the issue comes from a mismatch between the number of image channels specified and the actual number of channels in the real images.
If you are not sure of the exact number of channels, I advise you to specify 1 in your target_dims parameter and force-convert all images to grayscale when loading them into your net, using the parameter color_mode = "grayscale".
For more info, check the Keras online docs.
You will find this error in 2 cases:
1. When the number of channels in your images and the number of channels you specified in the model are not the same. Here the solution is to make them equal.
2. When you use the groups param of Conv2D from tensorflow.keras. Grouped convolution is not implemented there yet; in effect it is a depthwise convolution, so use tf.keras.layers.DepthwiseConv2D instead (see the sketch below). For me the workaround was pip install tf-nightly==2.10.0.dev20220406, as this package also has some otherwise unimplemented Keras APIs; this was not mentioned anywhere when I encountered the error.
I hope this is useful
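For the second case, a minimal sketch of swapping a grouped convolution for a depthwise one (the layer arguments are illustrative only):
import tensorflow as tf

# Grouped convolution, unsupported by the fused conv implementation in some builds:
# conv = tf.keras.layers.Conv2D(64, (3, 3), groups=64, padding='same')

# Depthwise alternative that is implemented on plain CPU builds:
conv = tf.keras.layers.DepthwiseConv2D((3, 3), padding='same', activation='relu')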

ValueError: Error when checking target: expected max_pooling2d_1 to have 4 dimensions, but got array with shape (61, 1)

I am working with Keras (TensorFlow backend) on Windows 10.
I am not able to interpret the meaning of the error.
Here is a snippet of my code:
model = Sequential([
    # Dense(32, input_shape=(1080, 1920, 2)),
    Dense(32, input_shape=(250, 250, 3)),
    # Dense(32, input_shape=(3, 1080, 1920, 2)),
    Activation('relu'),
    Dense(10),
    Activation('softmax'),
    Dropout(0.02),
])
layer = Dropout(0.02)

# further layers:
model.add(Dense(units=3))  # hidden layer 1
model.add(Dense(units=1))  # output layer
model.add(Conv2D(3, (3, 3)))
model.add(MaxPooling2D(pool_size=(2, 2), strides=None, padding='valid', data_format=None))
model.compile(loss=losses.mean_squared_error, optimizer='sgd')
sgd = optimizers.SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)

test_generator = ImageDataGenerator()
validation_generator = test_generator.flow_from_directory(
    'human_faces/validation',
    target_size=(250, 250),
    batch_size=3,
    class_mode=None, classes=0)

model.fit_generator(
    train_generator,
    steps_per_epoch=1,  # batch_size,
    # steps_per_epoch=3,
    epochs=5,
    validation_data=validation_generator,
    # validation_steps=61,  # batch_size
    validation_steps=1)
My error:
File "C:/Users/Owner/PycharmProjects/untitled1/work.py", line 89, in
validation_steps=1) ValueError: Error when checking target: expected max_pooling2d_1 to have 4 dimensions, but got array with
shape (61, 1)
There is a mismatch between the shape of the output of your network (which is the output of the MaxPooling2D layer) and the output you seem to expect (based on the "true" target example you feed, together with each input, to model.fit_generator()).
To investigate the mismatch, examine your (unshown) train_generator code to see what output shape you are expecting, and use model.summary() to see the conflicting output shape produced by the MaxPooling2D layer.
Try adding the following argument to Conv2D:
padding='SAME'
Like:
model.add(Conv2D(3, (3, 3), padding='SAME'))

Keras (+tensorflow) cannot predict with only part of the sequential

I am now working on building a stereo matching network using Keras with TensorFlow as the backend. The network has the following structure:
After training the whole network, I need to test it. However, the training and testing phases are quite different, so I have to split the model into two parts. The first part is CNN + Concatenate, which only needs to be run once, while the fully-connected part (which I actually modify into fully-convolutional form when testing) needs to be run d times with slightly different input, where d varies from 100 to 228.
The code for the first part of the network:
# input image dimensions
img_rows, img_cols = X1.shape[0], X1.shape[1]
input_shape = (img_rows, img_cols, 1)
X1 = X1.reshape(1, img_rows, img_cols, 1)
X2 = X2.reshape(1, img_rows, img_cols, 1)
# number of conv filters to use
nb_filters = 112
# CNN kernel size
kernel_size = (3,3)
left_branch = Sequential()
left_branch.add(Convolution2D(nb_filters, kernel_size[0], kernel_size[1], border_mode='same', input_shape=input_shape))
left_branch.add(Activation('relu'))
left_branch.add(Convolution2D(nb_filters, kernel_size[0], kernel_size[1], border_mode='same'))
left_branch.add(Activation('relu'))
left_branch.add(Convolution2D(nb_filters, kernel_size[0], kernel_size[1], border_mode='same'))
left_branch.add(Activation('relu'))
left_branch.add(Convolution2D(nb_filters, kernel_size[0], kernel_size[1], border_mode='same'))
left_branch.add(Activation('relu'))
right_branch = Sequential()
right_branch.add(Convolution2D(nb_filters, kernel_size[0], kernel_size[1], border_mode='same', input_shape=input_shape))
right_branch.add(Activation('relu'))
right_branch.add(Convolution2D(nb_filters, kernel_size[0], kernel_size[1], border_mode='same'))
right_branch.add(Activation('relu'))
right_branch.add(Convolution2D(nb_filters, kernel_size[0], kernel_size[1], border_mode='same'))
right_branch.add(Activation('relu'))
right_branch.add(Convolution2D(nb_filters, kernel_size[0], kernel_size[1], border_mode='same'))
right_branch.add(Activation('relu'))
merged = Merge([left_branch, right_branch], mode='concat')
cnn = Sequential()
cnn.add(merged)
I load the weights obtained from the training phase into the first part of the network and try to get predictions from it.
def load_cnn_weights(filepath):
    f = h5py.File(filepath, mode='r')
    weights = []
    for i in range(1, 9):
        weights.append(f['model_weights/conv2d_{}/conv2d_{}/kernel:0'.format(i, i)][()])
        weights.append(f['model_weights/conv2d_{}/conv2d_{}/bias:0'.format(i, i)][()])
    f.close()
    return weights

weights = load_cnn_weights("/home/users/shixin.li/segment/Lecun_stereo_rebuild/weights.hdf5")
cnn.set_weights(weights)
output_cnn = cnn.predict([X1, X2])
I have already checked that the weights are read successfully and fit into the network, by calling get_weights(). X1 and X2 are not zero; they are normalized grayscale image matrices. I even tried compiling the network before predicting. But the result, output_cnn, is all zeros.
I haven't seen anyone else with this problem, and I have been stuck for two days. The part that really confuses me is that the inputs and weights are all non-zero, so why is the result zero? If you could help, I would really appreciate it!
You might want to try using tfdbg to find out exactly what the inputs to the op with all-zero outputs are, to try to understand what is going on.
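For reference, a sketch (under the TF1-era Keras setup this question uses; not from the original answer) of wiring tfdbg into the Keras session so each predict() run drops into the debugger CLI:
import tensorflow as tf
from tensorflow.python import debug as tf_debug
import keras.backend as K

# Wrap the session powering Keras; every run now opens the tfdbg CLI,
# where intermediate tensors (e.g. each conv output) can be inspected
# to find the first op that produces all zeros.
K.set_session(tf_debug.LocalCLIDebugWrapperSession(tf.Session()))
output_cnn = cnn.predict([X1, X2])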