I'm trying to train a model using transfer learning from a pretrained model, with 30 classes and 7,200 images (80% train, 10% validation, 10% test). The model always overfits, no matter which parameters I change. After reading https://www.tensorflow.org/tutorials/images/transfer_learning#create_the_base_model_from_the_pre-trained_convnets, I learned that batch normalization layers keep updating their moving statistics even when the convolutional base is frozen.
So I set training=False when calling base_model. But I'm still confused: is my code correct? My images are augmented with ImageDataGenerator, unlike the tutorial, where augmentation and preprocessing are applied as inputs to the base model.
This is my code:
# Create the model
inputs = keras.Input(shape=(224, 224, 3))
x = base_model(inputs, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(len(CLASS_NAMES), activation='softmax')(x)
model = tf.keras.Model(inputs, outputs)

history = model.fit_generator(train_data_gen,
                              epochs=epochs,
                              steps_per_epoch=int(np.ceil(total_train / float(BATCH_SIZE))),
                              validation_data=val_data_gen,
                              validation_steps=int(np.ceil(total_val / float(BATCH_SIZE))),
                              callbacks=[cm_callback, tensorboard_callback])
Output
576/576 [==============================] - 157s 273ms/step - loss: 0.0075 - accuracy: 0.9996
144/144 [==============================] - 26s 181ms/step - loss: 0.0092 - accuracy: 1.0000
[0.007482105916197825, 0.99956596]
[0.009182391463279297, 1.0]
If my code is correct, is it good that the validation accuracy is 1.0 (it seems too accurate)?
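For reference, here is a minimal sketch of how the pretrained network's preprocessing could be attached to the ImageDataGenerator instead (this assumes a MobileNetV2 base as in the linked tutorial; my actual base_model and augmentation settings are not shown above):
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.applications.mobilenet_v2 import preprocess_input

# Hypothetical generator: augmentation plus the base model's own pixel scaling
train_data_gen_alt = ImageDataGenerator(
    preprocessing_function=preprocess_input,  # instead of rescale=1./255
    rotation_range=20,
    horizontal_flip=True)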
Related
How is accuracy defined when the loss function is mean squared error? Is it mean absolute percentage error?
The model I use has a linear output activation and is compiled with loss='mean_squared_error':
model.add(Dense(1))
model.add(Activation('linear')) # number
model.compile(loss='mean_squared_error', optimizer='adam', metrics=['accuracy'])
and the output looks like this:
Epoch 99/100
1000/1000 [==============================] - 687s 687ms/step - loss: 0.0463 - acc: 0.9689 - val_loss: 3.7303 - val_acc: 0.3250
Epoch 100/100
1000/1000 [==============================] - 688s 688ms/step - loss: 0.0424 - acc: 0.9740 - val_loss: 3.4221 - val_acc: 0.3701
So what does e.g. val_acc: 0.3250 mean? Mean squared error should be a scalar, not a percentage, shouldn't it? So is val_acc the mean squared error, the mean percentage error, or some other function?
From the definition of MSE on Wikipedia: https://en.wikipedia.org/wiki/Mean_squared_error
The MSE is a measure of the quality of an estimator—it is always
non-negative, and values closer to zero are better.
Does that mean a value of val_acc: 0.0 is better than val_acc: 0.325?
Edit: here are more examples of the output of the accuracy metric as I train, where the accuracy increases with more training while the loss (MSE) should decrease. Is accuracy well defined for MSE, and how is it defined in Keras?
lAllocator: After 14014 get requests, put_count=14032 evicted_count=1000 eviction_rate=0.0712657 and unsatisfied allocation rate=0.071714
1000/1000 [==============================] - 453s 453ms/step - loss: 17.4875 - acc: 0.1443 - val_loss: 98.0973 - val_acc: 0.0333
Epoch 2/100
1000/1000 [==============================] - 443s 443ms/step - loss: 6.6793 - acc: 0.1973 - val_loss: 11.9101 - val_acc: 0.1500
Epoch 3/100
1000/1000 [==============================] - 444s 444ms/step - loss: 6.3867 - acc: 0.1980 - val_loss: 6.8647 - val_acc: 0.1667
Epoch 4/100
1000/1000 [==============================] - 445s 445ms/step - loss: 5.4062 - acc: 0.2255 - val_loss: 5.6029 - val_acc: 0.1600
Epoch 5/100
783/1000 [======================>.......] - ETA: 1:36 - loss: 5.0148 - acc: 0.2306
There are at least two separate issues with your question.
The first one should be clear by now from the comments by Dr. Snoopy and the other answer: accuracy is meaningless in a regression problem such as yours; see also the comment by patyork in this Keras thread. For good or bad, the fact is that Keras will not "protect" you or any other user from putting meaningless requests in your code, i.e. you will not get any error, or even a warning, that you are attempting something that does not make sense, such as requesting accuracy in a regression setting.
Having clarified that, the other issue is:
Since Keras does indeed return an "accuracy", even in a regression setting, what exactly is it and how is it calculated?
To shed some light here, let's turn to a public dataset (since you do not provide any details about your data), namely the Boston house price dataset (saved locally as housing.csv), and run a simple experiment as follows:
import numpy as np
import pandas
import keras
from keras.models import Sequential
from keras.layers import Dense

# load dataset
dataframe = pandas.read_csv("housing.csv", delim_whitespace=True, header=None)
dataset = dataframe.values

# split into input (X) and output (Y) variables
X = dataset[:, 0:13]
Y = dataset[:, 13]

model = Sequential()
model.add(Dense(13, input_dim=13, kernel_initializer='normal', activation='relu'))
model.add(Dense(1, kernel_initializer='normal'))

# Compile model asking for accuracy, too:
model.compile(loss='mean_squared_error', optimizer='adam', metrics=['accuracy'])

model.fit(X, Y,
          batch_size=5,
          epochs=100,
          verbose=1)
As in your case, the model fitting history (not shown here) shows a decreasing loss and a roughly increasing accuracy. Let's now evaluate the model's performance on the same training set, using the appropriate Keras built-in function:
score = model.evaluate(X, Y, verbose=0)
score
# [16.863721372581754, 0.013833992168483997]
The exact contents of the score array depend on what exactly we have requested during model compilation; in our case here, the first element is the loss (MSE), and the second one is the "accuracy".
At this point, let us have a look at the definition of Keras binary_accuracy in the metrics.py file:
def binary_accuracy(y_true, y_pred):
    return K.mean(K.equal(y_true, K.round(y_pred)), axis=-1)
So, after Keras has generated the predictions y_pred, it first rounds them, and then checks to see how many of them are equal to the true labels y_true, before getting the mean.
Let's replicate this operation using plain Python & Numpy code in our case, where the true labels are Y:
y_pred = model.predict(X)
l = len(Y)
acc = sum([np.round(y_pred[i])==Y[i] for i in range(l)])/l
acc
# array([0.01383399])
Well, bingo! This is actually the same value returned by score[1] above...
To make a long story short: since you (erroneously) request metrics=['accuracy'] in your model compilation, Keras will do its best to satisfy you, and will return some "accuracy" indeed, calculated as shown above, despite this being completely meaningless in your setting.
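For completeness, here is a minimal sketch of requesting a metric that is actually meaningful for regression instead (same Boston housing model as above; MAE is just one reasonable choice):
# Ask for mean absolute error, which is meaningful in a regression setting
model.compile(loss='mean_squared_error', optimizer='adam', metrics=['mae'])
model.fit(X, Y, batch_size=5, epochs=100, verbose=0)
score = model.evaluate(X, Y, verbose=0)  # [MSE, MAE]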
There are quite a few settings where Keras, under the hood, performs rather meaningless operations without giving any hint or warning to the user; two of them I have happened to encounter are:
Giving meaningless results when, in a multi-class setting, one happens to request loss='binary_crossentropy' (instead of categorical_crossentropy) with metrics=['accuracy'] - see my answers in Keras binary_crossentropy vs categorical_crossentropy performance? and Why is binary_crossentropy more accurate than categorical_crossentropy for multiclass classification in Keras?
Completely disabling Dropout, in the extreme case when one requests a dropout rate of 1.0 - see my answer in Dropout behavior in Keras with rate=1 (dropping all input units) not as expected
The loss function (mean squared error in this case) indicates how far your predictions deviate from the target values. During training, the weights are updated based on this quantity. If you are dealing with a classification problem, it is quite common to define an additional metric called accuracy, which tracks in how many cases the correct class was predicted. It is expressed as a fraction: a value of 0.0 means no correct predictions and 1.0 means only correct predictions.
While your network is training, the loss decreases and the accuracy usually increases.
Note that, in contrast to the loss, the accuracy is usually not used to update the parameters of your network. It helps to monitor the learning progress and the current performance of the network.
@desertnaut has said it very clearly.
Consider the following two pieces of code: your compile code (requesting metrics=['accuracy']) and Keras' binary_accuracy code:
def binary_accuracy(y_true, y_pred):
    return K.mean(K.equal(y_true, K.round(y_pred)), axis=-1)
Your labels should be integers, because Keras does not round y_true, and that is how you end up with a high "accuracy".
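A tiny sketch (plain NumPy, just mimicking the binary_accuracy formula above) of why integer-valued labels matter here:
import numpy as np

y_pred = np.array([0.7, 0.3, 0.6])        # raw predictions
y_true_int = np.array([1.0, 0.0, 1.0])    # integer-valued labels
print(np.mean(np.round(y_pred) == y_true_int))    # 1.0 -- rounded predictions match the labels

y_true_float = np.array([0.9, 0.1, 0.8])  # continuous regression targets
print(np.mean(np.round(y_pred) == y_true_float))  # 0.0 -- exact equality almost never holds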
Take the following example:
import tensorflow as tf

data = tf.keras.datasets.mnist
(training_images, training_labels), (val_images, val_labels) = data.load_data()

training_images = training_images / 255.0
val_images = val_images / 255.0

model = tf.keras.models.Sequential([tf.keras.layers.Flatten(input_shape=(28, 28)),
                                    tf.keras.layers.Dense(20, activation=tf.nn.relu),
                                    tf.keras.layers.Dense(10, activation=tf.nn.softmax)])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(training_images, training_labels, epochs=20, validation_data=(val_images, val_labels))
The result is something like this:
Epoch 1/20
1875/1875 [==============================] - 4s 2ms/step - loss: 0.4104 - accuracy: 0.8838 -
val_loss: 0.2347 - val_accuracy: 0.9304
Where does 1875 come from? What does that number represent? I am unable to see where it is coming from; training_images has a shape of 60000x28x28 when I look at it.
1875 is the number of iterations the training needs to cover the entire dataset with a batch size of 32.
1875 * 32 = 60k
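A quick sketch of that arithmetic (assuming the default batch size of 32):
import math

steps_per_epoch = math.ceil(60000 / 32)  # 60,000 MNIST training images, batch size 32
print(steps_per_epoch)  # 1875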
Epoch An epoch describes the number of times the algorithm sees the
entire data set. So, each time the algorithm has seen all samples in
the dataset, an epoch has completed.
Iteration An iteration describes the number of times a batch of data
passed through the algorithm. In the case of neural networks, that
means the forward pass and backward pass. So, every time you pass a
batch of data through the NN, you completed an iteration.
For more, you can refer to link-1 and link-2.
1875 is the number of steps/batches trained on. For example, with the default batch size of 32, this tells us that you have 60 000 images (plus or minus 31, as the last batch may or may not be full).
I am predicting house prices using Deep Learning with Tensorflow 2 libraries.
I have data for 3 attributes (baths, bedrooms, area) and an image of each house as my dataset (nearly 2,000 samples).
I am building 3 Deep Learning models:
Regression model (model_reg): using 3 attributes -- worked fine
CNN model (img_model) using images -- worked fine
Combining above two models (model_combined) -- erroring out
Regression model:
I built a deep neural network (DNN) model using TensorFlow 2.0 with the 3 attributes as features and prices as labels.
I could fit the DNN and predict the house prices.
Note: I used tf.data.Dataset combining X_train, y_train while building this model.
model_reg.fit(X_reg_train_dataset, batch_size=BATCH_SIZE, epochs=EPOCHS,
              callbacks=[stop_callback], verbose=1)
63/63 [==============================] - 0s 2ms/step - loss: 0.3993 - mae: 0.4256 - mse: 0.3993
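For reference, the dataset was built roughly along these lines (a sketch; X_reg_train and y_reg_train are placeholders for my preprocessed arrays):
# Sketch: combine regression features and price labels into one tf.data.Dataset
X_reg_train_dataset = tf.data.Dataset.from_tensor_slices(
    (X_reg_train, y_reg_train)).batch(BATCH_SIZE)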
CNN Model:
Next, I built another CNN using the house images as features and prices as labels.
This too worked fine and I could predict the house prices.
Note: I used ImageDataGenerator from tensorflow.keras.preprocessing to build the generator.
hist = img_model.fit(
    train_images_generator,
    steps_per_epoch=train_images_generator.samples // BATCH_SIZE,
    validation_data=test_images_generator,
    validation_steps=test_images_generator.samples // BATCH_SIZE,
    epochs=EPOCHS,
    callbacks=[stop_callback], verbose=1)
49/49 [==============================] - 59s 1s/step - loss: 1741.6321 - mae: 20.8221 - mse: 1741.6321 - val_loss: 755833241600.0000 - val_mae: 768718.3125 - val_mse: 755833241600.0000
Lastly, I merged these 2 models appropriately using Concatenate() layer.
Now I have 2 inputs that need to be passed to the model.
Hence I defined a model using the Functional API with 2 inputs.
model_combined = Model(inputs=[input_layer_reg, img_input_layer], outputs=[output_layer_combined])
optimizer = tf.keras.optimizers.Adam(0.01)
model_combined.compile(loss='mae', optimizer=optimizer, metrics=['mae', 'mse'])
Up to this point everything worked fine and the generated model looks fine.
It errors out while trying to train it using fit:
model_combined.fit([X_reg_train_dataset, train_images_generator], epochs=EPOCHS,
                   callbacks=[stop_callback], verbose=1,
                   batch_size=BATCH_SIZE)
ValueError: Failed to find data adapter that can handle input:
(<class 'list'> containing values of types {"<class 'tensorflow.python.keras.preprocessing.image.DataFrameIterator'>",
"<class 'tensorflow.python.data.ops.dataset_ops.BatchDataset'>"}),
<class 'NoneType'>
Question: How to pass 2 inputs: tf.data.Dataset and ImageDataGenerator to Tensorflow model?
model_combined.fit(ta,
                   steps_per_epoch=train_images_generator.samples // train_images_generator.batch_size,
                   validation_data=test_images_generator,
                   validation_steps=test_images_generator.samples // test_images_generator.batch_size,
                   epochs=5,
                   callbacks=[stop_callback], verbose=1)
Epoch 1/5
36/49 [=====================>........] - ETA: 20s - loss: 0.4512 - mae: 0.4512 - mse: 0.5005
I had to write a custom generator, and a dataset on top of it, to make this work:
ta = tf.data.Dataset.from_generator(train_image_dataset_generator,
                                    output_signature=(
                                        (tf.TensorSpec(shape=(None, 5,), dtype=tf.float64),
                                         tf.TensorSpec(shape=(None, None, None, None), dtype=tf.float64)),
                                        (tf.TensorSpec(shape=(None,), dtype=tf.float64),
                                         tf.TensorSpec(shape=(None,), dtype=tf.float64)),
                                    )).repeat()

def train_image_dataset_generator():
    # Assumed completion: pair batches from the tf.data.Dataset and the ImageDataGenerator,
    # yielding the ((features, images), (labels, labels)) structure declared above.
    data_X_reg = iter(X_reg_train_dataset)
    while True:
        a_tuple = next(data_X_reg)               # (regression features, prices) batch
        b_tuple = train_images_generator.next()  # (images, prices) batch
        yield (a_tuple[0], b_tuple[0]), (a_tuple[1], b_tuple[1])
This is TF 2.3.0. During training, the reported values for SparseCategoricalCrossentropy loss and sparse_categorical_accuracy seemed way off. I looked through my code but couldn't spot any errors yet. Here's the code to reproduce:
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
x = np.random.randint(0, 255, size=(64, 224, 224, 3)).astype('float32')
y = np.random.randint(0, 3, (64, 1)).astype('int32')
ds = tf.data.Dataset.from_tensor_slices((x, y)).batch(32)
def create_model():
    input_layer = tf.keras.layers.Input(shape=(224, 224, 3), name='img_input')
    x = tf.keras.layers.experimental.preprocessing.Rescaling(1./255, name='rescale_1_over_255')(input_layer)
    base_model = tf.keras.applications.ResNet50(input_tensor=x, weights='imagenet', include_top=False)
    x = tf.keras.layers.GlobalAveragePooling2D(name='global_avg_pool_2d')(base_model.output)
    output = Dense(3, activation='softmax', name='predictions')(x)
    return tf.keras.models.Model(inputs=input_layer, outputs=output)
model = create_model()
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(),
    metrics=['sparse_categorical_accuracy']
)
model.fit(ds, steps_per_epoch=2, epochs=5)
This is what printed:
Epoch 1/5
2/2 [==============================] - 0s 91ms/step - loss: 1.5160 - sparse_categorical_accuracy: 0.2969
Epoch 2/5
2/2 [==============================] - 0s 85ms/step - loss: 0.0892 - sparse_categorical_accuracy: 1.0000
Epoch 3/5
2/2 [==============================] - 0s 84ms/step - loss: 0.0230 - sparse_categorical_accuracy: 1.0000
Epoch 4/5
2/2 [==============================] - 0s 82ms/step - loss: 0.0109 - sparse_categorical_accuracy: 1.0000
Epoch 5/5
2/2 [==============================] - 0s 82ms/step - loss: 0.0065 - sparse_categorical_accuracy: 1.0000
But if I double-check with model.evaluate and by "manually" checking the accuracy:
model.evaluate(ds)
2/2 [==============================] - 0s 25ms/step - loss: 1.2681 - sparse_categorical_accuracy: 0.2188
[1.268101453781128, 0.21875]
y_pred = model.predict(ds)
y_pred = np.argmax(y_pred, axis=-1)
y_pred = y_pred.reshape(-1, 1)
np.sum(y == y_pred)/len(y)
0.21875
The result from model.evaluate(...) agrees with the "manual" check on the metrics. But if you stare at the loss/metrics from training, they look way off. It is rather hard to see what's wrong since no error or exception is ever thrown.
Additionally, I created a very simple case to try to reproduce this, but it is actually not reproducible there. Note that batch_size equals the length of the data, so this isn't mini-batch GD but full-batch GD (to eliminate confusion with mini-batch loss/metrics):
x = np.random.randn(1024, 1).astype('float32')
y = np.random.randint(0, 3, (1024, 1)).astype('int32')
ds = tf.data.Dataset.from_tensor_slices((x, y)).batch(1024)
model = Sequential()
model.add(Dense(3, activation='softmax'))
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(),
    metrics=['sparse_categorical_accuracy']
)
model.fit(ds, epochs=5)
model.evaluate(ds)
As mentioned in my comment, one suspect is the batch norm layer, which I don't have in the case that can't be reproduced.
You get different results because fit() displays the training loss as the average of the losses over each batch of training data, over the current epoch. This can bring the epoch-wise average down, and the computed loss is used further to update the model. In contrast, evaluate() is computed using the model as it is at the end of training, resulting in a different loss. You can check the official Keras FAQ and the related StackOverflow post.
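A quick way to see this in practice (a small sketch reusing ds and model from the question): re-evaluate on the training data right after fitting, so both numbers come from the same final weights:
history = model.fit(ds, epochs=5)
print(history.history['loss'][-1])  # averaged over the batches seen *during* the last epoch
print(model.evaluate(ds)[0])        # computed with the final weights only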
Also, try to increase the learning rate.
The big discrepancy seen in the metrics can be explained (or at least partially so) by the presence of batch norm in the model. I will present two cases: one that is not reproducible and another that is reproduced once batch norm is introduced. In both cases, batch_size equals the full length of the data (i.e. full gradient descent, nothing 'stochastic') to minimize confusion over mini-batch statistics.
Not reproducible:
x = np.random.randn(1024, 1).astype('float32')
y = np.random.randint(0, 3, (1024, 1)).astype('int32')
ds = tf.data.Dataset.from_tensor_slices((x, y)).batch(1024)
model = Sequential()
model.add(Dense(10, activation='relu'))
model.add(Dense(10, activation='relu'))
model.add(Dense(10, activation='relu'))
model.add(Dense(3, activation='softmax'))
Reproducible:
from tensorflow.keras.layers import Activation, BatchNormalization

model = Sequential()
model.add(Dense(10))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dense(10))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dense(10))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dense(3, activation='softmax'))
In fact, you can try model.predict(x) and model(x, training=True), and you will see a large difference in y_pred. Also, per the Keras docs, this result also depends on what is in the batch. So the prediction for x[0] from model(x[0:1], training=True) will differ from model(x[0:2], training=True) because of the extra sample.
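A small sketch of that check (reusing x and the batch-norm model above, after fitting):
import numpy as np

p_infer = model(x[:8], training=False).numpy()  # uses the moving statistics learned during training
p_train = model(x[:8], training=True).numpy()   # normalizes with this batch's own statistics
print(np.abs(p_infer - p_train).max())          # typically a clearly visible difference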
It is probably best to go to the Keras docs and the original paper for the details, but I do think you will have to live with this and interpret what you see in the progress bar accordingly. It looks rather fishy if you try to use the training loss/accuracy to check whether you have a bias (not variance) problem. When in doubt, I think we can just run evaluate on the training set once the model has "converged" to a good minimum. I sort of overlooked this detail altogether in my prior work, because underfitting (bias) is rare for deep nets, so I went by the validation loss/metrics to decide when to stop training. But I would probably go back to the same model and evaluate on the training set, just to check that the model has enough capacity (i.e. is not biased).
I'm fine-tuning Keras' ResNet, pretrained on ImageNet, to work on a specific classification task with another dataset of images. My model is structured as follows: ResNet takes the inputs, and on top of ResNet I added my own classifier. During all the experiments I tried, the model either underfitted or overfitted.
I mainly tried two approaches:
Freeze a certain number n of layers toward the input, so they are not updated during training. In particular, ResNet has 175 layers, and I tried n = 0, 10, 30, 50, 80, 175. In all these cases the model underfits, reaching an accuracy of at most 0.75 on the training set and at most 0.51 on validation.
Freeze all the batch normalization layers, plus some n layers at the beginning (as before), with n = 0, 10, 30, 50. In these cases the model overfits, reaching more than 0.95 accuracy on the training set but around 0.5 on validation.
Please note that by switching from ResNet to InceptionV3 and freezing 50 layers, I obtain more than 0.95 accuracy on both the validation and test sets.
Here is the main part of my code:
inc_model = ResNet50(weights='imagenet',
                     include_top=False,
                     input_shape=(IMG_HEIGHT, IMG_WIDTH, 3))
print("number of layers:", len(inc_model.layers))  # 175

# Adding custom layers
x = inc_model.output
x = GlobalAveragePooling2D()(x)
x = Dense(1024, activation="relu")(x)
x = Dropout(0.5)(x)
x = Dense(512, activation="relu")(x)
predictions = Dense(2, activation="softmax")(x)
model_ = Model(inputs=inc_model.input, outputs=predictions)

# fine tuning 1
for layer in inc_model.layers[:30]:
    layer.trainable = False

# fine tuning 2
for layer in inc_model.layers:
    if 'bn' in layer.name:
        layer.trainable = False

# compile the model
model_.compile(optimizer=SGD(lr=0.0001, momentum=0.9),
               loss='categorical_crossentropy',
               metrics=['accuracy'])

checkpointer = ModelCheckpoint(filepath='weights.best.inc.male.resnet.hdf5',
                               verbose=1, save_best_only=True)

hist = model_.fit_generator(train_generator,
                            validation_data=(x_valid, y_valid),
                            steps_per_epoch=TRAINING_SAMPLES / BATCH_SIZE,
                            epochs=NUM_EPOCHS,
                            callbacks=[checkpointer],
                            verbose=1)
Can anyone suggest how to find a stable solution that learns something but doesn't overfit?
EDIT:
The output of the training phase is something like this:
Epoch 1/20
625/625 [==============================] - 2473s 4s/step - loss: 0.6048 - acc: 0.6691 - val_loss: 8.0590 - val_acc: 0.5000
Epoch 00001: val_loss improved from inf to 8.05905, saving model to weights.best.inc.male.resnet.hdf5
Epoch 2/20
625/625 [==============================] - 2432s 4s/step - loss: 0.4445 - acc: 0.7923 - val_loss: 8.0590 - val_acc: 0.5000
Epoch 00002: val_loss did not improve from 8.05905
Epoch 3/20
625/625 [==============================] - 2443s 4s/step - loss: 0.3730 - acc: 0.8407 - val_loss: 8.0590 - val_acc: 0.5000
Epoch 00003: val_loss did not improve from 8.05905
and so on. There is never any improvement on the validation set.
You have many choices, but did you try early stopping? You could also try some data augmentation, or test with a simpler model.
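For example, a minimal sketch of early stopping with the standard Keras callback (the monitored quantity and patience are just illustrative choices):
from tensorflow.keras.callbacks import EarlyStopping

early_stop = EarlyStopping(monitor='val_loss', patience=5, restore_best_weights=True)
# then pass it alongside your checkpointer, e.g. callbacks=[checkpointer, early_stop]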
With ResNet, if you are not using ResNet's preprocessing_function, try using it as shown below:
train_datagen = ImageDataGenerator(dtype='float32', preprocessing_function=tf.keras.applications.resnet.preprocess_input)
test_datagen = ImageDataGenerator(dtype='float32', preprocessing_function=tf.keras.applications.resnet.preprocess_input)
Keep everything else the same.