Keras: "ValueError: Error when checking target" - tensorflow

I am trying to build a model which will classify videos into certain categories.
For this I used a pretrained model, InceptionV3, and trained it on my own data. The training process completed successfully, but when I tried to classify a video I got this error:
ValueError: Error when checking : expected input_1 to have shape (None, None, None, 3) but got array with shape (1, 1, 104, 2048)
However, for prediction I used the same video as for training.
The model is defined as follows:
train_datagen = ImageDataGenerator(
    rescale=1./255,
    shear_range=0.2,
    horizontal_flip=True,
    rotation_range=10.,
    width_shift_range=0.2,
    height_shift_range=0.2)
test_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
    os.path.join('data', 'train'),
    target_size=(299, 299),
    batch_size=32,
    classes=data.classes,
    class_mode='categorical')
validation_generator = test_datagen.flow_from_directory(
    os.path.join('data', 'test'),
    target_size=(299, 299),
    batch_size=32,
    classes=data.classes,
    class_mode='categorical')
base_model = InceptionV3(weights=weights, include_top=False)
# add a global spatial average pooling layer
x = base_model.output
x = GlobalAveragePooling2D()(x)
# let's add a fully-connected layer
x = Dense(1024, activation='relu')(x)
# and a logistic layer
predictions = Dense(len(data.classes), activation='softmax')(x)
# this is the model we will train
model = Model(inputs=base_model.input, outputs=predictions)
model.fit_generator(
    train_generator,
    steps_per_epoch=100,
    validation_data=validation_generator,
    validation_steps=10,
    epochs=nb_epoch,
    callbacks=callbacks)
Predictions:
# extract features from frames of the video
files = [f for f in os.listdir('.') if os.path.isfile(f)]
for f in files:
    features = extractor_model.extract(f)
    sequence.append(features)
np.save(sequence_path, sequence)
sequences = np.load("data_final.npy")
# convert numpy array to 4 dimensions
sequences = np.expand_dims(sequences, axis=0)
sequences = np.expand_dims(sequences, axis=0)
prediction = model.predict(sequences)
Features extractor:
def extract(self, image_path):
    # print(image_path)
    img = image.load_img(image_path, target_size=(299, 299))
    x = image.img_to_array(img)
    x = np.expand_dims(x, axis=0)
    x = preprocess_input(x)
    # Get the prediction.
    features = self.model.predict(x)
    if self.weights is None:
        # For imagenet/default network:
        features = features[0]
    else:
        # For loaded network:
        features = features[0]
    return features
Keras complains that the input shape does not match the expected (None, None, None, 3)...
I expected to receive predictions from the model, but got this error instead. Please help. Thanks.
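For reference, a minimal diagnostic sketch of the mismatch (assuming model is the trained classifier above and sequences is the array built for prediction; 2048 is the channel depth of InceptionV3's final feature layer, so the array holds extracted features rather than images):
print(model.input_shape)  # (None, None, None, 3): the model expects batches of RGB images
print(sequences.shape)    # (1, 1, 104, 2048): 104 InceptionV3 feature vectors
# Since the model defined above already contains InceptionV3, it has to be fed
# raw preprocessed frames, e.g. an array of shape (num_frames, 299, 299, 3),
# not the features returned by extractor_model.extract().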

Related

Ways to decrease validation loss % and increase validation accuracy %?

I'm trying to work with an image classification model for gravity wave detection.
I want to check if there is something I could do to lower the validation loss, or more importantly, increase the validation accuracy.
The dataset is about 460 images in total, split into:
300 training images belonging to 2 classes
60 validation images belonging to 2 classes
100 test images belonging to 2 classes
For context, this is the code for preprocessing:
batch_size = 32
Data augmentation:
train_datagen = ImageDataGenerator(
    rescale=1./255,
    shear_range=0.2,
    horizontal_flip=True,
)
test_datagen = ImageDataGenerator(
    rescale=1./255)
The generator that reads images to generate batches of augmented image data:
train_generator = train_datagen.flow_from_directory(
    './train',  # target directory
    target_size=(256, 256),  # images resized to 256x256
    batch_size=batch_size,
    # batch_size=40,
    class_mode='binary')
validation_generator = train_datagen.flow_from_directory(
    './validation',
    target_size=(256, 256),
    batch_size=batch_size,
    # batch_size=20,
    class_mode='binary')
test_generator = test_datagen.flow_from_directory(
    './test',
    target_size=(256, 256),
    batch_size=batch_size,
    class_mode=None,
    shuffle=False)
And this is the model used:
from tensorflow.keras.applications.inception_v3 import InceptionV3
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras.models import Model
from keras import regularizers
base_model = InceptionV3(input_shape=(256, 256, 3), include_top=False, weights='imagenet')
x = layers.Flatten()(base_model.output)
x = layers.Dense(1024, activation='relu')(x)
x = layers.Dropout(0.2)(x)
x = layers.Dense(1, activation='sigmoid')(x)
model = Model(base_model.input, x)
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.001), loss='binary_crossentropy', metrics=['accuracy'])
fitness = model.fit(
    train_generator,
    steps_per_epoch=120,
    epochs=100,
    validation_data=validation_generator,
    validation_steps=64)
So far the accuracy and loss have been around:
Average training accuracy: 0.9237500047683715
Average training loss: 0.17309745135484264
Average validation accuracy: 0.6489999979734421
Average validation loss: 0.9121886402368545
The predictions have been around:
validation predictions:
(24, 36)
test predictions:
(45, 55)
And the confusion matrix:
Confusion Matrix:
array([[12, 18],
       [12, 18]])
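Note that this confusion matrix implies chance-level behavior on those 60 samples: (12 + 18) / 60 = 0.5, with the same prediction split regardless of the true class, which together with the ~0.92 training accuracy points to overfitting. Also, with ~300 training images and batch_size=32 an epoch is only about 10 batches, so steps_per_epoch=120 and validation_steps=64 recycle the generators; omitting both lets Keras infer the counts. A hedged sketch of two common remedies, not the asker's code, assuming the model and generators defined above:
# Freeze the pretrained base so only the new head is fitted to the ~300 images.
for layer in base_model.layers:
    layer.trainable = False
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
              loss='binary_crossentropy', metrics=['accuracy'])  # recompile after freezing

# Stop when validation loss stops improving and keep the best weights.
early_stopping = tf.keras.callbacks.EarlyStopping(
    monitor='val_loss', patience=10, restore_best_weights=True)

fitness = model.fit(
    train_generator,
    epochs=100,
    validation_data=validation_generator,
    callbacks=[early_stopping])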

Reshape the input for BatchDataset trained model

I trained my TensorFlow model on images after converting them to a BatchDataset:
IMG_size = 224
INPUT_SHAPE = [None, IMG_size, IMG_size, 3]  # 4D input
model.fit(x=train_data,
          epochs=EPOCHES,
          validation_data=test_data,
          validation_freq=1,  # check validation metrics every epoch
          callbacks=[tensorboard, early_stopping])
model.compile(
    loss=tf.keras.losses.CategoricalCrossentropy(),
    optimizer=tf.keras.optimizers.Adam(),
    metrics=["accuracy"]
)
model.build(INPUT_SHAPE)
The type of train_data is:
tensorflow.python.data.ops.dataset_ops.BatchDataset
I want to run my model on a single numpy array or tensor constant, but that is a 3D input matrix, TensorShape([224, 224, 3]), not the 4D input the model expects; how can I reshape it?
You can expand the dimensions of your image matrix with:
newImage = tf.expand_dims(Original_Image, axis=0)
Then pass it to the predict function; it will work fine.
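A minimal end-to-end sketch of that (the file name is hypothetical, model refers to the trained model above, and the resize and scaling assume the 224x224 training setup):
import tensorflow as tf

img = tf.io.read_file('example.jpg')            # hypothetical image path
img = tf.image.decode_jpeg(img, channels=3)
img = tf.image.resize(img, [224, 224]) / 255.0  # match training size; scaling assumed

batch = tf.expand_dims(img, axis=0)             # (224, 224, 3) -> (1, 224, 224, 3)
prediction = model.predict(batch)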
Target sizes make all inputs the same shape, which helps with the input shape, or you can expand the dimensions to create a batch: img_array = tf.expand_dims(image, 0). As for your input INPUT_SHAPE = [None, IMG_size, IMG_size, 3] (4D), you can arrange the input images into an image training dataset and feed that into the model.
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
Variables
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
BATCH_SIZE = 16
IMG_SIZE = (160, 160)
PATH = 'F:\\datasets\\downloads\\sample\\cats_dogs\\training'
training_directory = os.path.join(PATH, 'train')
validation_directory = os.path.join(PATH, 'validation')
train_dataset = tf.keras.utils.image_dataset_from_directory(training_directory,
shuffle=True,
batch_size=BATCH_SIZE,
image_size=IMG_SIZE,
seed=42)
validation_dataset = tf.keras.utils.image_dataset_from_directory(validation_directory,
shuffle=True,
batch_size=BATCH_SIZE,
image_size=IMG_SIZE,
seed=42)
class_names = train_dataset.class_names
print( "class_names: " + str( class_names ) )
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
DataSet
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
AUTOTUNE = tf.data.experimental.AUTOTUNE
train_dataset = train_dataset.prefetch(buffer_size=AUTOTUNE)
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
Model ( examine input layer )
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
inputs = tf.keras.Input(shape=(160, 160, 3))
model = tf.keras.Model(inputs, outputs)
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
Training
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
history = model.fit(train_dataset, epochs=initial_epochs, validation_data=validation_dataset)
...

How to fit data for augmentation to avoid out of memory error?

I am doing augmentation for training a segmentation model, but the total number of images is about 26,000+. That is why I am facing a problem building an array of all the images.
What I tried:
from glob import glob
from PIL import Image
import numpy as np

def get_data():
    train_images = sorted(glob('../input/fg_image/images/*.jpg', recursive=True))
    size = 128, 128
    X_data = np.empty((len(train_images), 128, 128, 3), dtype=np.float32)
    for i, image in enumerate(train_images):
        img = Image.open(image)
        img.thumbnail(size)  # thumbnail() resizes in place and returns None
        X_data[i] = np.asarray(img)
    return X_data

X_train = get_data()
By following the above method I am collecting X_train and Y_train. Up to this step it works fine.
But when I then apply the method below for augmentation, the whole notebook crashes.
def augmentation(X_data, Y_data, validation_split=0.2, batch_size=32, seed=42):
    X_train, X_test, Y_train, Y_test = train_test_split(X_data,
                                                        Y_data,
                                                        train_size=1-validation_split,
                                                        test_size=validation_split,
                                                        random_state=seed)
    data_gen_args = dict(rotation_range=45.,
                         width_shift_range=0.1,
                         height_shift_range=0.1)
    X_datagen = ImageDataGenerator(**data_gen_args)
    Y_datagen = ImageDataGenerator(**data_gen_args)
    X_datagen.fit(X_train, augment=True, seed=seed)
    Y_datagen.fit(Y_train, augment=True, seed=seed)
    X_train_augmented = X_datagen.flow(X_train, batch_size=batch_size, shuffle=True, seed=seed)
    Y_train_augmented = Y_datagen.flow(Y_train, batch_size=batch_size, shuffle=True, seed=seed)
    train_generator = zip(X_train_augmented, Y_train_augmented)
    return train_generator
train_generator = augmentation(X_train, Y_train)
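One way to relieve the memory pressure is to stream batches from disk instead of materializing all 26,000 images in one array. A hedged sketch using flow_from_directory (the 'images/' and 'masks/' directory layout is hypothetical; the matching seed keeps image/mask pairs aligned):
from keras.preprocessing.image import ImageDataGenerator

data_gen_args = dict(rotation_range=45.,
                     width_shift_range=0.1,
                     height_shift_range=0.1,
                     validation_split=0.2)
image_datagen = ImageDataGenerator(**data_gen_args)
mask_datagen = ImageDataGenerator(**data_gen_args)

image_generator = image_datagen.flow_from_directory(
    'images/', class_mode=None, target_size=(128, 128),
    batch_size=32, seed=42, subset='training')
mask_generator = mask_datagen.flow_from_directory(
    'masks/', class_mode=None, target_size=(128, 128),
    batch_size=32, seed=42, subset='training')

train_generator = zip(image_generator, mask_generator)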

Logits and labels must have the same shape: TensorFlow

I'm trying to classify Cats vs Dogs using a CNN. However, despite checking twice, I am not able to find where the error comes from. As far as I can tell the loss function and shapes are in order, yet I cannot find the source of the error.
!unzip cats_and_dogs.zip
PATH = 'cats_and_dogs'
train_dir = os.path.join(PATH, 'train')
train_image_generator = ImageDataGenerator(rescale=1./255)
train_data_gen = train_image_generator.flow_from_directory(batch_size=batch_size,
                                                           directory=train_dir,
                                                           target_size=(IMG_HEIGHT, IMG_WIDTH),
                                                           class_mode='binary')
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
plotImages(augmented_images)
model = Sequential()
model.add(Conv2D(25, kernel_size=3, input_shape=(IMG_HEIGHT, IMG_WIDTH, 3), activation="relu"))
model.add(MaxPooling2D())
model.add(Conv2D(25, kernel_size=3, activation="relu"))
model.add(MaxPooling2D())
model.add(Conv2D(25, kernel_size=3, activation="relu"))
model.add(MaxPooling2D())
model.add(Conv2D(25, kernel_size=3, activation="relu"))
model.add(Dense(64, activation="relu"))
model.add(Dense(1, activation="sigmoid"))
model.summary()
model.compile(optimizer="adam", metrics=['accuracy'], loss='binary_crossentropy')
history = model.fit_generator(train_data_gen)
The error that I'm struggling with is
ValueError: logits and labels must have the same shape ((None, 15, 15, 1) vs (None, 1))
I forgot to Flatten my tensor before passing it to the Dense layers.
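A minimal sketch of the fix, inserting Flatten (imported from the same layers module as Conv2D and Dense) between the last convolution and the Dense head; the shape numbers follow from the error message:
model.add(Conv2D(25, kernel_size=3, activation="relu"))
model.add(Flatten())  # (None, 15, 15, 25) -> (None, 5625)
model.add(Dense(64, activation="relu"))
model.add(Dense(1, activation="sigmoid"))  # logits now (None, 1), matching the labels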

CoreMLtools and Keras ValueError: need more than 1 value to unpack

I'm fine-tuning the Inception V3 model with Keras in order to convert it with coremltools into a .mlmodel file.
However, when converting the model, coremltools throws the following error when the converter reaches the last layer of the model:
coremltools/models/neural_network.py", line 2501, in set_pre_processing_parameters
    channels, height, width = array_shape
ValueError: need more than 1 value to unpack
I used the code from the Keras documentation on applications found here: https://keras.io/applications/#fine-tune-inceptionv3-on-a-new-set-of-classes
And added a piece of code loading my dataset from the VGG example found here: https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html
My final script looks like this, using TensorFlow as the backend:
# LOAD THE DATA
from keras.preprocessing.image import ImageDataGenerator
img_width, img_height = 299, 299
train_data_dir = 'data/train'
validation_data_dir = 'data/validation'
nb_train_samples = 358
nb_validation_samples = 21
epochs = 1
batch_size = 15
train_datagen = ImageDataGenerator(
    rescale=1. / 255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)
test_datagen = ImageDataGenerator(rescale=1. / 255)
train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='categorical')
validation_generator = test_datagen.flow_from_directory(
    validation_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='categorical')
# TRAIN THE MODEL
base_model = InceptionV3(weights='imagenet', include_top=False)
x = base_model.output
x = GlobalAveragePooling2D()(x)
x = Dense(1024, activation='relu')(x)
predictions = Dense(7, activation='softmax')(x)
model = Model(inputs=base_model.input, outputs=predictions)
for layer in base_model.layers:
    layer.trainable = False
model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit_generator(
    train_generator,
    steps_per_epoch=nb_train_samples // batch_size,
    epochs=epochs,
    validation_data=validation_generator,
    validation_steps=nb_validation_samples // batch_size)
for i, layer in enumerate(base_model.layers):
    print(i, layer.name)
for layer in model.layers[:249]:
    layer.trainable = False
for layer in model.layers[249:]:
    layer.trainable = True
from keras.optimizers import SGD
model.compile(optimizer=SGD(lr=0.0001, momentum=0.9), loss='categorical_crossentropy', metrics=['accuracy'])
model.fit_generator(
    train_generator,
    steps_per_epoch=nb_train_samples // batch_size,
    epochs=epochs,
    validation_data=validation_generator,
    validation_steps=nb_validation_samples // batch_size)
model.save('finetuned_inception.h5')
I'm writing here in response to #SwimBikeRun's request (as I need a bit more space).
I was converting YOLO to Keras and then Keras to CoreML. For the conversion I was using this script: https://github.com/qqwweee/keras-yolo3/blob/master/convert.py
In the conversion process the model was eventually created like this:
input_layer = Input(shape=(None, None, 3))
...
model = Model(inputs=input_layer, outputs=[all_layers[i] for i in out_index])
And those None inputs were what made the CoreML conversion fail. For CoreML, the input size of your model must be known. So I changed it to this:
input_layer = Input(shape=(416, 416, 3))
Your input size will probably vary.
For your original question:
Maybe check your base_model.input size for the same problem.
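Applied to the fine-tuning script above, a hedged sketch of the same fix would be to give InceptionV3 an explicit input shape, so coremltools can unpack channels, height, and width (299x299 matches the generators' target_size):
base_model = InceptionV3(weights='imagenet',
                         include_top=False,
                         input_shape=(299, 299, 3))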