How to solve ValueError in model.predict()? - tensorflow

I am new to neural networks. I have searched for a couple of hours but could not figure out how to fix this issue. I'm working with the NSL-KDD dataset for an intrusion detection system, using a convolutional neural net.
I am stuck with this problem: ValueError: Input 0 of layer dense_14 is incompatible with the layer: expected axis -1 of input shape to have value 3904 but received input with shape [None, 3712]
Shapes:
x_train (125973, 122)
y_train (125973, 5)
x_test (22544, 116)
y_test (22544,)
After reshaping:
x_train = np.reshape(x_train, (x_train.shape[0], x_train.shape[1], 1)) #(125973, 122, 1)
x_test = np.reshape(x_test, (x_test.shape[0], x_test.shape[1], 1)) #(22544, 116, 1)
Model:
model = Sequential()
model.add(Convolution1D(64, 3, padding="same",activation="relu",input_shape = (x_train.shape[1], 1)))
model.add(MaxPooling1D(pool_size=(2)))
model.add(Flatten())
model.add(Dense(128, activation="relu"))
model.add(Dropout(0.5))
model.add(Dense(5, activation="softmax"))
Compile:
model.compile(optimizer = 'adam', loss = 'categorical_crossentropy', metrics = ['accuracy'])
model.fit(x_train, y_train, epochs = 5, batch_size = 32)
pred = model.predict(x_test) # the problem occurs on this line
y_pred= np.argmax(pred, axis = 1)
[model summary image]

Your x_test should have the same dimensions as x_train.
x_train = (125973, 122, 1)
x_test = (22544, 116, 1) # the second dimension must match the training set
Code sample:
import tensorflow as tf
import pandas as pd
import numpy as np
from tensorflow.keras.layers import *
from tensorflow.keras import *
x1 = np.random.uniform(100, size =(125973, 122,1))
x2 = np.random.uniform(100, size =(22544, 122, 1))
y1 = np.random.randint(100, size =(125973,5), dtype = np.int32)
y2 = np.random.randint(2, size =(22544, ), dtype = np.int32)
def create_model2():
    model = Sequential()
    model.add(Convolution1D(64, 3, padding="same", activation="relu", input_shape=(x1.shape[1], 1)))
    model.add(MaxPooling1D(pool_size=(2)))
    model.add(Flatten())
    model.add(Dense(128, activation="relu"))
    model.add(Dropout(0.5))
    model.add(Dense(5, activation="softmax"))
    model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
    return model
model = create_model2()
tf.keras.utils.plot_model(model, 'my_first_model.png', show_shapes=True)
Your model looks like this:
Now, if you instead use your test set's dimensions, (22544, 116, 1), to create the model,
you get a model that looks like this.
As the dimensions are different, the expected inputs and outputs of each layer are different.
When the test dimensions are appropriate, the output works as expected:
pred = model.predict(x2)
pred
Output:
array([[1., 0., 0., 0., 0.],
[1., 0., 0., 0., 0.],
[1., 0., 0., 0., 0.],
...,
[1., 0., 0., 0., 0.],
[1., 0., 0., 0., 0.],
[1., 0., 0., 0., 0.]], dtype=float32)

Problem: The problem is that your test set does not have the same dimensions as your training set. The test set should look as if you took a sample from your training set. So if your training set has the dimensions x_train.shape = (125973, 122) and y_train.shape = (125973, 5), then your test set should have the dimensions x_test.shape = (sample_num, 122) and y_test.shape = (sample_num, 5).
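If the mismatch comes from one-hot encoding the categorical features separately on the train and test splits (a common cause with NSL-KDD, whose test split contains categories absent from the training split), one fix is to align the test columns to the train columns before reshaping. A minimal sketch, where df_train and df_test are assumed to be the encoded feature DataFrames:
import pandas as pd
# keep exactly the training columns, in the same order;
# columns missing from the test set are filled with zeros, extras are dropped
df_test = df_test.reindex(columns=df_train.columns, fill_value=0)
x_test = df_test.to_numpy()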
Possible Solution: An easy way to do testing, if you don't want to use your test set, would be to use a validation split in .fit().
So this: model.fit(x_train, y_train, epochs = 5, batch_size = 32)
would turn into this: model.fit(x_train, y_train, epochs = 5, batch_size = 32, validation_split=0.2)
This would chop off 20% of your training data and use that for testing. Then after every epoch, TensorFlow will print how the network performed on that validation data so that you can see how your model performs on data it has never seen before.

Related

tf.keras model.predict each time provides different values

Each time I run:
y_true = np.argmax(tf.concat([y for x, y in train_ds], axis=0), axis=1)
y_pred = np.argmax(model.predict(train_ds), axis=1)
confusion_matrix(y_true, y_pred)
The result is different each time. To my understanding, the line
y_pred = np.argmax(model.predict(train_ds), axis=1) gives a different result each time.
Clarification: I run cell 1 (training) once, and cell 2 (inference) a few times.
Why?
THE CODE:
Cell 1 (jupyter)
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Input, Conv2D, experimental
from tensorflow.keras.layers import MaxPool2D, Flatten, Dense
from tensorflow.keras import Model
from tensorflow.keras.losses import categorical_crossentropy
from sklearn.metrics import accuracy_score
image_size = (100, 100)
batch_size = 32
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    directory,  # path to the image folder
    label_mode='categorical',
    validation_split=0.2,
    subset="training",
    seed=1337,
    color_mode="grayscale",
    image_size=image_size,
    batch_size=batch_size,
)
inputs = Input(shape =(100,100,1))
x = experimental.preprocessing.Rescaling(1./255)(inputs)
x = Conv2D (filters =4, kernel_size =3, padding ='same', activation='relu')(x)
x = Conv2D (filters =4, kernel_size =3, padding ='same', activation='relu')(x)
x = MaxPool2D(pool_size =2, strides =2, padding ='same')(x)
x = Conv2D (filters =8, kernel_size =3, padding ='same', activation='relu')(x)
x = Conv2D (filters =8, kernel_size =3, padding ='same', activation='relu')(x)
x = MaxPool2D(pool_size =2, strides =2, padding ='same')(x)
x = Flatten()(x)
x = Dense(units = 4, activation ='relu')(x)
x = Dense(units = 4, activation ='relu')(x)
output = Dense(units = 5, activation ='softmax')(x)
model = Model (inputs=inputs, outputs =output)
model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-3),
    loss=categorical_crossentropy,
    metrics=["accuracy"])
model.fit(train_ds, epochs=5)
Cell 2:
print('Accuracy:')
y_pred = np.argmax(model.predict(train_ds), axis=1)
print (accuracy_score(y_true, y_pred))
y_pred = np.argmax(model.predict(train_ds), axis=1)
print (accuracy_score(y_true, y_pred))
OUTPUT:
118/118 [==============================] - 7s 57ms/step - loss: 0.1888 - accuracy: 0.9398
Accuracy:
0.593
0.586
Are you sure you do not train the model again every time you run the code? If the parameters of the model are the same, the predicted result for the same input should be the same every time.
To my current understanding, the reason for the above is:
tf.keras.preprocessing.image_dataset_from_directory
An instance of it is of type:
type(train_ds)
tensorflow.python.data.ops.dataset_ops.BatchDataset
Reproduction:
First run:
[x for x, y in train_ds]
Output:
[<tf.Tensor: shape=(32, 100, 100, 1), dtype=float32, numpy= array([[[[157.],
[155.],
[159.],
Second run:
[x for x, y in train_ds]
Output:
[<tf.Tensor: shape=(32, 100, 100, 1), dtype=float32, numpy= array([[[[ 34.],
[ 36.],
[ 39.],
...,
A possible solution:
imgs, y_true = [], []
for img, label in train_ds:
    imgs.append(img)
    y_true.append(label)
imgs = tf.concat(imgs, axis=0)
y_true = np.argmax(tf.concat(y_true, axis=0), axis=1)
y_pred = np.argmax(model.predict(imgs), axis=1)
print (accuracy_score(y_true, y_pred))
y_pred = np.argmax(model.predict(imgs), axis=1)
print (accuracy_score(y_true, y_pred))
OUTPUT
0.944044764
0.944044764
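As a side note, if only the aggregate metric is needed, model.evaluate avoids the pairing problem entirely, because it matches each batch's images with that same batch's labels within a single pass over the dataset:
loss, acc = model.evaluate(train_ds)
print(acc)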
Is there any better solution?
UPDATE 2:
Maybe a more appropriate approach, in the case of a validation dataset (train_ds here is just an example), is to add the argument shuffle=False:
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    directory,
    label_mode='categorical',
    validation_split=0.2,
    subset="training",
    seed=1337,
    color_mode="grayscale",
    image_size=image_size,
    batch_size=batch_size,
    shuffle=False
)
UPDATE 3:
This is probably the best option if your test images are in a separate folder.
from tensorflow.keras.preprocessing.image import ImageDataGenerator
path = 'your path to test folder'
test_generator = ImageDataGenerator().flow_from_directory(
    directory=path,
    class_mode='categorical',
    shuffle=False,
    batch_size=32,
    target_size=(512, 512)
)
test_generator.reset()
This is better than the first solution above, since it also works on a dataset that doesn't fit into memory (RAM).
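The same idea also works with tf.keras.preprocessing.image_dataset_from_directory used above; a sketch, where test_dir is an assumed path to a test folder laid out in class subdirectories:
test_ds = tf.keras.preprocessing.image_dataset_from_directory(
    test_dir,
    label_mode='categorical',
    color_mode='grayscale',
    image_size=image_size,
    batch_size=batch_size,
    shuffle=False  # deterministic order, so predictions align with labels
)
y_true = np.argmax(tf.concat([y for x, y in test_ds], axis=0), axis=1)
y_pred = np.argmax(model.predict(test_ds), axis=1)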

Tensorflow ValueError: logits and labels must have the same shape ((None, 42) vs (None, 1))

I get a TensorFlow error when I run model.fit(). This is my code.
import numpy as np
import pandas as pd
from tqdm import tqdm
from sklearn.utils import shuffle
from sklearn.model_selection import train_test_split
from tensorflow.keras.preprocessing import image

train_data = pd.read_csv('train.csv')
train_data = shuffle(train_data).reset_index(drop=True)
split_data = np.array_split(train_data, 50)
train_image = []
for i in tqdm(range(split_data[0].shape[0])):
    path = 'train/train/'+str(train_data['category'][i]).zfill(2)+'/'+train_data['filename'][i]
    img = image.load_img(path, target_size=(400,400,3))
    img = image.img_to_array(img)
    img = img/255
    train_image.append(img)
X = np.array(train_image) # X.shape (2108, 400, 400, 3)
y = np.array(split_data[0]['category']) # y.shape (2108,)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42, test_size=0.1)
and this is my CNN model.
model = Sequential()
model.add(Conv2D(filters=16, kernel_size=(5, 5), activation="relu", input_shape=(400,400,3)))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
...
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(42, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(X_train, y_train, epochs=10, validation_data=(X_test, y_test), batch_size=64)
This error appears when running model.fit():
ValueError: logits and labels must have the same shape ((None, 42) vs (None, 1))
value of X_train
array([[[[0.99607843, 0.99607843, 0.99607843],
[0.99607843, 0.99607843, 0.99607843],
[0.99607843, 0.99607843, 0.99607843],
...,
[1. , 1. , 1. ],
[1. , 1. , 1. ],
[1. , 1. , 1. ]],
...,
]]], dtype=float32)
and value of y_train
array([ 5, 41, 24, ..., 41, 19, 40], dtype=int64)
You are working on a multiclass classification problem, and your labels are integer encoded.
Use softmax as the activation of the last layer: Dense(42, activation='softmax')
and sparse_categorical_crossentropy as the loss function.
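Concretely, a minimal sketch of the corrected last layer and compile call:
model.add(Dense(42, activation='softmax'))
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
Alternatively, one-hot encode the labels (e.g. tf.keras.utils.to_categorical(y, num_classes=42)) and keep categorical_crossentropy with the softmax output.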

keras model prediction only one and zero instead of probability

This is my model and data generator. When I predicted, I got array([0., 0., 0., 1., 0., 0., 0., 0.]) instead of probabilities. I think the output should be probabilities.
model = keras.models.Sequential()
model.add ...
model.add(keras.layers.Dense(num_class))
model.add(keras.layers.Softmax())
sgd_opt = keras.optimizers.SGD(lr=0.001, momentum=0.9)
cce_loss = keras.losses.categorical_crossentropy
model.compile(optimizer=sgd_opt, loss='sparse_categorical_crossentropy', metrics=['accuracy'])
train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=IMAGE_SIZE,
    batch_size=batch_size,
    class_mode='sparse')
validation_generator = train_datagen.flow_from_directory(
    test_data_dir,  # same directory as training data
    target_size=IMAGE_SIZE,
    batch_size=batch_size,
    class_mode='sparse')
It looks like your model has literally memorized the labels for the samples.
Validating on training data is bad practice. Try splitting the dataset:
from tensorflow.keras.preprocessing.image import ImageDataGenerator
train_datagen = ImageDataGenerator(
    validation_split=0.2
)
train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=IMAGE_SIZE,
    batch_size=batch_size,
    subset='training',
    class_mode='sparse'
)
validation_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=IMAGE_SIZE,
    batch_size=batch_size,
    subset='validation',
    class_mode='sparse')

Tensorflow-keras : CNN predictions are very close to 1 or 0

I trained a CNN on 96x96 bacteria images. I have 3 classes: "bacterias", "flocs" and "nothing".
Then, to detect bacteria on a 1920x1080 image, I scan the image with 96x96 windows and run my CNN on every scanned window.
But my predictions are always of the form [0,1,0], and I never get e.g. [0.1, 0.8, 0.1].
Here is my model:
batch_size = 32
nb_epochs = 10
taille_image = (96,96)
model = Sequential()
model.add(Conv2D(16, (3, 3), input_shape=(96, 96, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(64, (3, 3)))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(3))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
def Entrainer():
    train_datagen = ImageDataGenerator(
        rescale=1./255,
        horizontal_flip=True,
        vertical_flip=True,
        validation_split=0.2)
    train_generator = train_datagen.flow_from_directory(
        "Images_traitees/Vignettes_squared",
        target_size=taille_image,
        batch_size=batch_size,
        class_mode="categorical",
        shuffle=True,
        save_to_dir="augmented_data",
        save_prefix="augmented_",
        save_format="jpeg",
        subset="training"
    )
    validation_generator = train_datagen.flow_from_directory(
        "Images_traitees/Vignettes_squared",
        target_size=taille_image,
        batch_size=batch_size,
        class_mode="categorical",
        subset="validation"
    )
    tbCallback = keras.callbacks.TensorBoard(log_dir='./Graph', histogram_freq=0, write_graph=True, write_images=True)
    history = model.fit_generator(
        train_generator,
        steps_per_epoch=train_generator.samples // batch_size,
        validation_data=validation_generator,
        validation_steps=validation_generator.samples // batch_size,
        epochs=nb_epochs,
        callbacks=[tbCallback]
    )
    plt.plot(history.history['acc'])
    plt.title('model accuracy')
    plt.ylabel('accuracy')
    plt.xlabel('epoch')
    plt.show()
    plt.plot(history.history['loss'])
    plt.title('model loss')
    plt.ylabel('loss')
    plt.xlabel('epoch')
    plt.show()
    model.save("my_model.h5")
if __name__ == "__main__":
    model.summary()
    Entrainer()
[TensorBoard results image]
And this is the code that calls my model and colors the detected class.
model = load_model("my_model.h5")
model.compile(loss='categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
def Predire(img, vignettes, pos):
    image = Image.open(img)
    draw = ImageDraw.Draw(image, mode='RGBA')
    for vignette in vignettes:
        x = img_to_array(vignette)
        x = np.expand_dims(x, axis=0)
        y = model.predict(x)
        if y[0][1] > max(y[0][0], y[0][2]):
            draw.rectangle(pos[nb], outline='red', fill=(255,0,0,125))
            proto += 1
        if y[0][0] > max(y[0][1], y[0][2]):
            draw.rectangle(pos[nb], outline='blue', fill=(0,0,255,125))
            floc += 1
        if y[0][2] > max(y[0][1], y[0][0]):
            draw.rectangle(pos[nb], outline='black')
        print(y)
    return image
When I print y, it returns:
[[0. 0. 1.]]
[[0. 0. 1.]]
[[0. 0. 1.]]
[[0. 0. 1.]]
[[1. 0. 0.]]
[[1. 0. 0.]]
[[1. 0. 0.]]
[[1. 0. 0.]]
[[1. 0. 0.]]
[[1. 0. 0.]]
[[1. 0. 0.]]
[[1.0000000e+00 1.7819082e-33 0.0000000e+00]]
[[1. 0. 0.]]
...
Each line represents the CNN prediction for one 96x96 sliding window of the entire image.
I thought it was overfitting, but I tried with only 735 trainable parameters and I still have the same problem.
It might be the case that your model has heavily overfitted the data. Check your model's accuracy on your training vs. your test set and see if they are reasonable.
Update: It turns out preprocessing of the image data was the problem. Always make sure you apply the same preprocessing to the training, validation, and test sets.
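In this case, the training generator rescales pixels with rescale=1./255, while Predire feeds raw pixel values to model.predict. A minimal sketch of the likely fix, applying the same scaling before prediction:
x = img_to_array(vignette)
x = np.expand_dims(x, axis=0) / 255.0  # same rescaling as ImageDataGenerator(rescale=1./255)
y = model.predict(x)
Without the rescaling, the inputs are about 255 times larger than anything seen during training, which saturates the softmax and produces the hard 0/1 outputs.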

Calculate gradients of intermediate nodes in tensorflow eager execution

I use TensorFlow eager execution to do the following calculation:
y = x^2
z = y + 2
My goal is to calculate dz/dx and dz/dy (the gradients of z with respect to x and y):
dx, dy = tape.gradient(z, [x, y])
However, only dy is calculated and dx is None. Namely, only the gradient with respect to the tensor that z directly depends on is calculated.
[None, <tf.Tensor: id=11, shape=(), dtype=float32, numpy=1.0>]
[None, <tf.Tensor: id=11, shape=(), dtype=float32, numpy=1.0>]
[None, <tf.Tensor: id=11, shape=(), dtype=float32, numpy=1.0>]
[None, <tf.Tensor: id=11, shape=(), dtype=float32, numpy=1.0>]
[None, <tf.Tensor: id=11, shape=(), dtype=float32, numpy=1.0>]
The following is the full code.
from __future__ import absolute_import, division, print_function
import tensorflow as tf
tf.enable_eager_execution()
tfe = tf.contrib.eager
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "6"
import warnings
warnings.filterwarnings('ignore')
train_steps = 5
for i in range(train_steps):
    x = tf.contrib.eager.Variable(0.)
    with tf.GradientTape() as tape:
        y = tf.square(x)
        z = y + 2
    print(tape.gradient(z, [x,y]))
Any solution?
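One possible approach is to use a persistent tape and query each gradient separately; a minimal sketch using the TF 2.x API (the old contrib/eager API shown above may behave differently):
import tensorflow as tf
x = tf.Variable(0.)
with tf.GradientTape(persistent=True) as tape:
    y = tf.square(x)
    z = y + 2
print(tape.gradient(z, x))  # dz/dx = 2*x -> 0.0 at x = 0
print(tape.gradient(z, y))  # dz/dy -> 1.0
del tape  # free the tape's resources once the gradients are computed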