Logits and labels must have the same shape: TensorFlow

I'm trying to classify Cats vs Dogs using a CNN. However, despite checking twice, I am not able to find where the error is coming from. As far as I can tell, the loss function and the shapes are in order, yet I still cannot find the source of the error.
!unzip cats_and_dogs.zip
PATH = 'cats_and_dogs'
train_dir = os.path.join(PATH, 'train')
train_image_generator = ImageDataGenerator(rescale=1./255)
train_data_gen = train_image_generator.flow_from_directory(batch_size=batch_size,
                                                            directory=train_dir,
                                                            target_size=(IMG_HEIGHT, IMG_WIDTH),
                                                            class_mode='binary')
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
plotImages(augmented_images)
model = Sequential()
model.add(Conv2D(25,kernel_size=3,input_shape=(IMG_HEIGHT, IMG_WIDTH, 3),activation="relu"))
model.add(MaxPooling2D())
model.add(Conv2D(25,kernel_size=3,activation="relu"))
model.add(MaxPooling2D())
model.add(Conv2D(25,kernel_size=3,activation="relu"))
model.add(MaxPooling2D())
model.add(Conv2D(25,kernel_size=3,activation="relu"))
model.add(Dense(64,activation="relu"))
model.add(Dense(1,activation="sigmoid"))
model.summary()
model.compile(optimizer="adam",metrics=['accuracy'],loss='binary_crossentropy')
history = model.fit_generator(train_data_gen)
The error that I'm struggling with is
ValueError: logits and labels must have the same shape ((None, 15, 15, 1) vs (None, 1))

I forgot to Flatten the tensor before passing it to the Dense layers.
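For reference, a minimal sketch of the corrected model, assuming the same IMG_HEIGHT, IMG_WIDTH, and layer sizes as above; the only change is the added Flatten() before the Dense layers:
model = Sequential()
model.add(Conv2D(25, kernel_size=3, input_shape=(IMG_HEIGHT, IMG_WIDTH, 3), activation="relu"))
model.add(MaxPooling2D())
model.add(Conv2D(25, kernel_size=3, activation="relu"))
model.add(MaxPooling2D())
model.add(Conv2D(25, kernel_size=3, activation="relu"))
model.add(MaxPooling2D())
model.add(Conv2D(25, kernel_size=3, activation="relu"))
model.add(Flatten())  # collapses the (height, width, channels) feature map into a single vector
model.add(Dense(64, activation="relu"))
model.add(Dense(1, activation="sigmoid"))  # output now has shape (None, 1), matching the binary labels
model.compile(optimizer="adam", metrics=['accuracy'], loss='binary_crossentropy')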

Related

ValueError: Input 0 of layer sequential is incompatible with the layer:

model = keras.models.Sequential([
    keras.layers.Dense(30, activation="relu", input_shape=[8]),
    keras.layers.Dense(100, activation="relu"),
    keras.layers.Dense(1)])
model.compile(loss="mse", optimizer=keras.optimizers.SGD(lr=1e-3))
checkpoint_cb = keras.callbacks.ModelCheckpoint("Model-{epoch:02d}.h5")
history = model.fit(X_train, y_train, epochs=10,
                    validation_data=(X_valid, y_valid),
                    callbacks=[checkpoint_cb])
I am trying to fit a model using the callbacks but I am getting the following error:
ValueError: Input 0 of layer sequential is incompatible with the layer: expected axis -1 of input shape to have value 8 but received input with shape [None, 28, 28]
What can be the possible error?
The shape of your X_train is (None, 28, 28), but your first dense layer expects input of shape (None, 8).
Reshape your X_train:
X_train = X_train.reshape(-1, 28*28)
The model should be:
model = keras.models.Sequential([
    keras.layers.Dense(30, activation="relu", input_shape=(784,)),
    keras.layers.Dense(100, activation="relu"),
    keras.layers.Dense(1)])
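Alternatively, a sketch (not part of the original answer, assuming the same keras import): leave X_train in its (None, 28, 28) shape and let the model do the flattening with a Flatten layer:
model = keras.models.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),  # flattens each 28x28 sample to a 784-element vector
    keras.layers.Dense(30, activation="relu"),
    keras.layers.Dense(100, activation="relu"),
    keras.layers.Dense(1)])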

Error when checking target: expected dense_18 to have shape (1,) but got array with shape (10,)

Y_train = to_categorical(Y_train, num_classes = 10)
random_seed = 2
X_train,X_val,Y_train,Y_val = train_test_split(X_train, Y_train, test_size = 0.1, random_state=random_seed)
Y_train.shape
model = Sequential()
model.add(Flatten())
model.add(Dense(64, activation='relu'))
model.add(Dense(128, activation='relu'))
model.add(Dense(10, activation='softmax'))
model.compile(optimizer = 'adam', loss = 'sparse_categorical_crossentropy',metrics=['accuracy'])
model.fit(X_train, Y_train, batch_size = 86, epochs = 3,validation_data = (X_val, Y_val), verbose =2)
I have to classify the MNIST data into 10 classes. I am converting Y_train into a one-hot encoded array. I have gone through a number of answers but none have helped. Kindly guide me in this regard as I am a novice in ML and neural networks.
It seems there is no need to use model.add(Flatten()) as your first layer. Instead, you can use a dense layer with an explicit input size, like: model.add(Dense(64, input_shape=your_input_shape, activation="relu")).
To confirm whether this issue really comes from the layers, you can check whether the to_categorical() function works on its own in a Jupyter notebook.
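For example, a quick sanity check (a sketch assuming the tf.keras import path and integer labels 0-9):
import numpy as np
from tensorflow.keras.utils import to_categorical

Y_raw = np.array([3, 0, 7])                  # hypothetical integer labels
Y_onehot = to_categorical(Y_raw, num_classes=10)
print(Y_onehot.shape)                        # (3, 10): each label becomes a 10-element one-hot vector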
Updated Answer
Before building the model, you should reshape your data: in this case from 28*28 to 784.
train_images = train_images.reshape((-1, 784))
test_images = test_images.reshape((-1, 784))
I also suggest normalizing the data, which can be done by simply dividing the images by 255.
After that step you should create your model.
model = Sequential([
    Dense(64, activation='relu', input_shape=(784,)),
    Dense(64, activation='relu'),
    Dense(10, activation='softmax'),
])
Have you noticed input_shape=(784,)? That is the shape of your flattened input.
Last step, compiling and fitting.
model.compile(
    optimizer='adam',
    loss='categorical_crossentropy',
    metrics=['accuracy'],
)
model.fit(
    train_images,
    train_labels,
    epochs=10,
    batch_size=16,
)
What you did is flatten the input without ever telling the network what the input shape is, and that is why you run into this issue. The point is that you should manually reshape your inputs and feed them into the Dense() layers with the input_shape parameter.
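As a side note: the message "expected ... to have shape (1,) but got array with shape (10,)" is also exactly what sparse_categorical_crossentropy produces when it is given one-hot labels. If you prefer to keep the integer labels instead, you can skip to_categorical() and keep the original loss, roughly like this:
# Y_train / Y_val still hold integer class labels 0-9 (no to_categorical call)
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',  # expects integer labels
              metrics=['accuracy'])
model.fit(X_train, Y_train, batch_size=86, epochs=3,
          validation_data=(X_val, Y_val), verbose=2)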

Keras: "ValueError: Error when checking target"

I am trying to build a model that will classify a video into a certain category.
For this I used a pretrained model, InceptionV3, and trained it on my own data. Training completed successfully, but when I tried to classify a video I got the error:
ValueError: Error when checking : expected input_1 to have shape (None, None, None, 3) but got array with shape (1, 1, 104, 2048)
However, for prediction I used the same video as during training.
Defined model:
train_datagen = ImageDataGenerator(
    rescale=1./255,
    shear_range=0.2,
    horizontal_flip=True,
    rotation_range=10.,
    width_shift_range=0.2,
    height_shift_range=0.2)
test_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
    os.path.join('data', 'train'),
    target_size=(299, 299),
    batch_size=32,
    classes=data.classes,
    class_mode='categorical')
validation_generator = test_datagen.flow_from_directory(
    os.path.join('data', 'test'),
    target_size=(299, 299),
    batch_size=32,
    classes=data.classes,
    class_mode='categorical')
base_model = InceptionV3(weights=weights, include_top=False)
# add a global spatial average pooling layer
x = base_model.output
x = GlobalAveragePooling2D()(x)
# let's add a fully-connected layer
x = Dense(1024, activation='relu')(x)
# and a logistic layer
predictions = Dense(len(data.classes), activation='softmax')(x)
# this is the model we will train
model = Model(inputs=base_model.input, outputs=predictions)
model.fit_generator(
    train_generator,
    steps_per_epoch=100,
    validation_data=validation_generator,
    validation_steps=10,
    epochs=nb_epoch,
    callbacks=callbacks)
Predictions:
#extract features from frames of video
files = [f for f in os.listdir('.') if os.path.isfile(f)]
for f in files:
    features = extractor_model.extract(f)
    sequence.append(features)
np.save(sequence_path, sequence)
sequences = np.load("data_final.npy")
# convert numpy array to 4 dimensions
sequences = np.expand_dims(sequences, axis=0)
sequences = np.expand_dims(sequences, axis=0)
prediction = model.predict(sequences)
Feature extractor:
def extract(self, image_path):
    # print(image_path)
    img = image.load_img(image_path, target_size=(299, 299))
    x = image.img_to_array(img)
    x = np.expand_dims(x, axis=0)
    x = preprocess_input(x)
    # Get the prediction.
    features = self.model.predict(x)
    if self.weights is None:
        # For imagenet/default network:
        features = features[0]
    else:
        # For loaded network:
        features = features[0]
    return features
Keras complains that the input shape does not match the expected (None, None, None, 3). I expected to receive predictions from the model, but got this error instead. Please help. Thanks.
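For what it's worth, a sketch of the likely mismatch: the trained model starts at base_model.input, so it expects raw 299x299 RGB frames, not the (1, 1, 104, 2048) InceptionV3 features saved by the extractor. Prediction on a single frame would then go through the same preprocessing already used in extract() (here 'frame.png' is a hypothetical file name):
img = image.load_img('frame.png', target_size=(299, 299))  # hypothetical frame file
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)    # shape (1, 299, 299, 3), matching (None, None, None, 3)
x = preprocess_input(x)
prediction = model.predict(x)    # one softmax vector over data.classes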

keras.model.predict raise ValueError: Error when checking input

I trained a basic neural network model on the MNIST dataset. Here's the training code (imports omitted):
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data(path='mnist.npz')
x_train, x_test = x_train/255.0, x_test/255.0
#1st Define the model
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),       # input layer
    tf.keras.layers.Dense(512, activation=tf.nn.relu),   # main computation layer
    tf.keras.layers.Dropout(0.2),                        # dropout layer to avoid overfitting
    tf.keras.layers.Dense(10, activation=tf.nn.softmax)  # output layer / softmax classifier
])
#2nd Compile the model
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
#3rd Fit the model
model.fit(x_train, y_train, epochs=5)
#4th Save the model
model.save('models/mnistCNN.h5')
#5th Evaluate the model
model.evaluate(x_test, y_test)
I wanted to see how this model works with my own inputs, so I wrote a prediction script with help from this post. My prediction code is (imports omitted):
model = load_model('models/mnistCNN.h5')
for i in range(3):
    img = Image.open(str(i+1) + '.png').convert("L")
    img = img.resize((28, 28))
    im2arr = np.array(img)
    im2arr = im2arr/255
    im2arr = im2arr.reshape(1, 28, 28, 1)
    y_pred = model.predict(im2arr)
    print('For Image', i+1, 'Prediction = ', y_pred)
First, I don't understand the purpose of this line:
im2arr = im2arr.reshape(1, 28, 28, 1)
If someone could shed some light on why this line is necessary, that would be of great help.
Second, this very line throws the following error:
ValueError: Error when checking input: expected flatten_input to have 3 dimensions, but got array with shape (1, 28, 28, 1)
What am I missing here?
The first dimension is used for the batch size; it is added by keras internally, since the model expects inputs in batches. So this line just adds a batch dimension to the image array.
im2arr = im2arr.reshape(1, 28, 28, 1)
The error you get is because a single example from the MNIST dataset that you used for training has shape (28, 28), and so does your input layer. To get rid of this error you need to change the line to
im2arr = im2arr.reshape(1, 28, 28)
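Putting it together, the prediction loop would look roughly like this (a sketch; np.argmax is added here to turn the 10-element softmax output into a digit):
model = load_model('models/mnistCNN.h5')
for i in range(3):
    img = Image.open(str(i+1) + '.png').convert("L")
    img = img.resize((28, 28))
    im2arr = np.array(img) / 255.0       # scale to [0, 1] like the training data
    im2arr = im2arr.reshape(1, 28, 28)   # (batch, height, width) matches Flatten(input_shape=(28, 28))
    y_pred = model.predict(im2arr)       # shape (1, 10): one probability per digit
    print('For Image', i+1, 'Prediction =', np.argmax(y_pred))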

input_shape parameter mismatch error in Convolution1D in keras

I want to classify a dataset using Convolution1D in Keras.
Dataset description:
train dataset size = [340, 30]; number of samples = 340, sample dimension = 30
test dataset size = [230, 30]; number of samples = 230, sample dimension = 30
label size = 2
First I tried the following code, using the information from the Keras site https://keras.io/layers/convolutional/
batch_size=1
nb_epoch = 10
sizeX=340
sizeY=30
model = Sequential()
model.add(Convolution1D(64, 3, border_mode='same', input_shape=(sizeX,sizeY)))
model.add(Convolution1D(32, 3, border_mode='same'))
model.add(Convolution1D(16, 3, border_mode='same'))
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.compile(loss='binary_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
print('Train...')
model.fit(X_train_transformed, y_train, batch_size=batch_size, nb_epoch=nb_epoch,
          validation_data=(X_test, y_test))
score, acc = model.evaluate(X_test_transformed, y_test, batch_size=batch_size)
print('Test score:', score)
print('Test accuracy:', acc)
It gives the following error:
ValueError: Error when checking model input: expected convolution1d_input_1 to have 3 dimensions, but got array with shape (340, 30)
Then I transformed the train and test data from 2 dimensions to 3 dimensions using the following code:
X_train = np.reshape(X_train_transformed, (X_train_transformed.shape[0], X_train_transformed.shape[1], 1))
X_test = np.reshape(X_test_transformed, (X_test_transformed.shape[0], X_test_transformed.shape[1], 1))
Then I ran the following modified code:
batch_size=1
nb_epoch = 10
sizeX=340
sizeY=30
model = Sequential()
model.add(Convolution1D(64, 3, border_mode='same', input_shape=(sizeX,sizeY)))
model.add(Convolution1D(32, 3, border_mode='same'))
model.add(Convolution1D(16, 3, border_mode='same'))
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.compile(loss='binary_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
print('Train...')
model.fit(X_train, y_train, batch_size=batch_size, nb_epoch=nb_epoch,
          validation_data=(X_test, y_test))
score, acc = model.evaluate(X_test, y_test, batch_size=batch_size)
print('Test score:', score)
print('Test accuracy:', acc)
But it shows the error:
ValueError: Error when checking model input: expected convolution1d_input_1 to have shape (None, 340, 30) but got array with shape (340, 30, 1)
I am unable to find the dimension mismatch error here.
With the release of TF 2.0 and tf.keras, you can fairly easily update your model to work with these new versions. This can be done with the following code:
# import tensorflow 2.0
# keras doesn't need to be imported because it is built into tensorflow
from __future__ import absolute_import, division, print_function, unicode_literals
try:
    %tensorflow_version 2.x
except Exception:
    pass
import tensorflow as tf

batch_size = 1
nb_epoch = 10
# the model only needs the size of each sample as input, explained further below
size = 30

# reshape as you had before
X_train = np.reshape(X_train_transformed, (X_train_transformed.shape[0],
                                           X_train_transformed.shape[1], 1))
X_test = np.reshape(X_test_transformed, (X_test_transformed.shape[0],
                                         X_test_transformed.shape[1], 1))

# define the sequential model using tf.keras
model = tf.keras.Sequential([
    # the 1d convolution layers can be defined as shown with the same
    # number of filters and kernel size
    # instead of border_mode, the parameter is padding
    # the input_shape is (the size of each sample, 1), explained below
    tf.keras.layers.Conv1D(64, 3, padding='same', input_shape=(size, 1)),
    tf.keras.layers.Conv1D(32, 3, padding='same'),
    tf.keras.layers.Conv1D(16, 3, padding='same'),
    # Dense and Activation can be combined into one layer
    # where the dense layer has 1 neuron and a sigmoid activation
    tf.keras.layers.Dense(1, activation='sigmoid')
])
# the model can be compiled, fit, and evaluated in the same way
# (note: tf.keras expects epochs rather than the old nb_epoch argument)
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
print('Train...')
model.fit(X_train, y_train, batch_size=batch_size, epochs=nb_epoch,
          validation_data=(X_test, y_test))
score, acc = model.evaluate(X_test, y_test, batch_size=batch_size)
print('Test score:', score)
print('Test accuracy:', acc)
The problem you are having comes from the input shape of your model. According to the Keras documentation, the input shape has to be (batch, steps, channels). The first dimension is the number of instances you have, the second dimension is the size of each sample, and the third dimension is the number of channels, which in your case is one. Overall, your input shape would be (340, 30, 1). When you actually define the input shape in the model, you only need to specify the second and third dimensions, which means your input shape would be (size, 1). The model already expects the first dimension, the number of instances, as the batch dimension, so you do not need to specify it.
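As a quick sanity check, a sketch under the same assumption of 340 training and 230 test samples of dimension 30; the reshaped arrays and the model input should line up like this:
print(X_train.shape)       # (340, 30, 1): (instances, steps, channels)
print(X_test.shape)        # (230, 30, 1)
print(model.input_shape)   # (None, 30, 1): None stands for the batch dimension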
Can you try this?
X_train = np.reshape(X_train_transformed, (1, X_train_transformed.shape[0], X_train_transformed.shape[1]))
X_test = np.reshape(X_test_transformed, (1, X_test_transformed.shape[0], X_test_transformed.shape[1]))