I am comparing two models: one uses binary_crossentropy as its loss function (Model A), the other uses mean_squared_error (Model B).
Model A)
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense
from tensorflow.keras.optimizers import Adam

self.seq_len = 2
in_out_neurons = 50
n_hidden = 500
model = Sequential()
model.add(LSTM(n_hidden, batch_input_shape=(None, self.seq_len, in_out_neurons), return_sequences=True))
model.add(Dense(in_out_neurons, activation="relu"))
optimizer = Adam(learning_rate=0.001)
# model.compile(loss="mean_squared_error", optimizer=optimizer)
model.compile(loss='binary_crossentropy', optimizer=optimizer)
Epoch 1/10
718/718 [==============================] - 32s 42ms/step - loss: -0.0633 - val_loss: -0.0649
Epoch 2/10
718/718 [==============================] - 33s 46ms/step - loss: -0.0632 - val_loss: -0.0572
Epoch 3/10
718/718 [==============================] - 43s 60ms/step - loss: -0.0592 - val_loss: -0.0570
Epoch 4/10
718/718 [==============================] - 51s 71ms/step - loss: -0.0522 - val_loss: -0.0431
Epoch 5/10
718/718 [==============================] - 50s 69ms/step - loss: -0.0566 - val_loss: -0.0535
Epoch 6/10
718/718 [==============================] - 49s 68ms/step - loss: -0.0567 - val_loss: -0.0537
Epoch 7/10
718/718 [==============================] - 48s 67ms/step - loss: -0.0627 - val_loss: -0.0499
Epoch 8/10
718/718 [==============================] - 51s 71ms/step - loss: -0.0621 - val_loss: -0.0614
Epoch 9/10
718/718 [==============================] - 47s 65ms/step - loss: -0.0645 - val_loss: -0.0653
Epoch 10/10
718/718 [==============================] - 43s 60ms/step - loss: -0.0661 - val_loss: -0.0622
Model B)
self.seq_len = 2
in_out_neurons = 50
n_hidden = 500
model = Sequential()
model.add(LSTM(n_hidden, batch_input_shape=(None, self.seq_len, in_out_neurons), return_sequences=True))
model.add(Dense(in_out_neurons, activation="relu"))
optimizer = Adam(learning_rate=0.001)
model.compile(loss="mean_squared_error", optimizer=optimizer)
#model.compile(loss='binary_crossentropy', optimizer=optimizer)
Epoch 1/10
718/718 [==============================] - 36s 48ms/step - loss: 0.0189 - val_loss: 0.0190
Epoch 2/10
718/718 [==============================] - 46s 64ms/step - loss: 0.0188 - val_loss: 0.0189
Epoch 3/10
718/718 [==============================] - 48s 67ms/step - loss: 0.0187 - val_loss: 0.0189
Epoch 4/10
718/718 [==============================] - 58s 81ms/step - loss: 0.0187 - val_loss: 0.0188
Epoch 5/10
718/718 [==============================] - 62s 87ms/step - loss: 0.0186 - val_loss: 0.0188
Epoch 6/10
718/718 [==============================] - 72s 100ms/step - loss: 0.0186 - val_loss: 0.0188
Epoch 7/10
718/718 [==============================] - 73s 102ms/step - loss: 0.0185 - val_loss: 0.0187
Epoch 8/10
718/718 [==============================] - 60s 84ms/step - loss: 0.0185 - val_loss: 0.0187
Epoch 9/10
718/718 [==============================] - 64s 89ms/step - loss: 0.0185 - val_loss: 0.0187
Epoch 10/10
718/718 [==============================] - 64s 89ms/step - loss: 0.0185 - val_loss: 0.0187
Model B's loss is greater than 0, so it can be understood.
However, Model A's loss is less than 0. What does that mean?
Cross-entropy is calculated as the negative expected value of the logarithm of the result. It is usually used after a sigmoid or softmax activation, where all values are <= 1, so their logarithms are <= 0 and the result is >= 0. But you use it after a relu activation, which can produce values > 1; that is why you obtain a result < 0. The moral is that the output-layer activation and the loss should correspond to each other and must make sense from the point of view of the task you are trying to solve. Otherwise you may obtain senseless results.
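To see why, note that for a target of 1 binary cross-entropy reduces to -log(p), which is non-negative only while p <= 1. A minimal sketch of just that term (mine, for illustration):
import numpy as np

# A sigmoid keeps p in (0, 1), so -log(p) >= 0; a relu output can
# exceed 1, in which case log(p) > 0 and the "loss" turns negative.
print(-np.log(0.9))  # ~0.105: probability-like output, non-negative loss
print(-np.log(3.0))  # ~-1.099: relu output above 1, negative loss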
Please advise how to confirm whether ReduceLROnPlateau has actually been applied, and what learning rate was used at each epoch.
The patience of ReduceLROnPlateau is set to 2 and it monitors val_loss.
self._history = self.model.fit(
    self.X.shuffle(1000).batch(self.batch_size).prefetch(1),
    epochs=self.num_epochs,
    batch_size=self.batch_size,
    validation_data=self.V.shuffle(1000).batch(self.batch_size).prefetch(1),
    callbacks=[
        EarlyStoppingCallback(patience=self.early_stop_patience),
        ReduceLRCallback(patience=self.reduce_lr_patience),  # <---- set to 2
        TensorBoardCallback(self.log_directory)
    ]
)
class ReduceLRCallback(tf.keras.callbacks.ReduceLROnPlateau):
    """Reduce learning rate when a metric has stopped improving.
    See https://keras.io/api/callbacks/reduce_lr_on_plateau/
    """
    def __init__(self, patience=3):
        assert patience > 0
        super().__init__(
            monitor="val_loss",
            factor=0.3,
            patience=patience,
        )
The training val_loss has increased more than twice, but I have not seen any indication that ReduceLROnPlateau was applied.
_________________________________________________________________
Epoch 1/20
3990/3990 [==============================] - 860s 214ms/step - loss: 0.1705 - accuracy: 0.9386 - val_loss: 0.1626 - val_accuracy: 0.9456
Epoch 2/20
3990/3990 [==============================] - 847s 212ms/step - loss: 0.1618 - accuracy: 0.9412 - val_loss: 0.1433 - val_accuracy: 0.9456
Epoch 3/20
3990/3990 [==============================] - 846s 212ms/step - loss: 0.1593 - accuracy: 0.9425 - val_loss: 0.1478 - val_accuracy: 0.9438
Epoch 4/20
3990/3990 [==============================] - 846s 212ms/step - loss: 0.1567 - accuracy: 0.9427 - val_loss: 0.1428 - val_accuracy: 0.9468
Epoch 5/20
3990/3990 [==============================] - 846s 212ms/step - loss: 0.1558 - accuracy: 0.9425 - val_loss: 0.1502 - val_accuracy: 0.9425
Epoch 6/20
3990/3990 [==============================] - 843s 211ms/step - loss: 0.1554 - accuracy: 0.9433 - val_loss: 0.1453 - val_accuracy: 0.9456
Epoch 7/20
3990/3990 [==============================] - 843s 211ms/step - loss: 0.1482 - accuracy: 0.9454 - val_loss: 0.1362 - val_accuracy: 0.9477
Epoch 8/20
3990/3990 [==============================] - 843s 211ms/step - loss: 0.1475 - accuracy: 0.9449 - val_loss: 0.1373 - val_accuracy: 0.9471
Epoch 9/20
3990/3990 [==============================] - 845s 212ms/step - loss: 0.1468 - accuracy: 0.9460 - val_loss: 0.1362 - val_accuracy: 0.9485
Epoch 10/20
3990/3990 [==============================] - 843s 211ms/step - loss: 0.1448 - accuracy: 0.9462 - val_loss: 0.1344 - val_accuracy: 0.9489
Epoch 11/20
3990/3990 [==============================] - 846s 212ms/step - loss: 0.1447 - accuracy: 0.9458 - val_loss: 0.1346 - val_accuracy: 0.9483
Epoch 12/20
3990/3990 [==============================] - 843s 211ms/step - loss: 0.1444 - accuracy: 0.9460 - val_loss: 0.1342 - val_accuracy: 0.9483
Marco Cerliani's answer will probably solve it for you. That gives console output so you can verify the LR was actually reduced.
If you need to check the model's learning rate in code, you can use
tf.keras.backend.get_value(model.optimizer.lr)
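For completeness, here are two ways to make the schedule visible (a sketch; LRLogger is an illustrative name, not a built-in Keras class). ReduceLROnPlateau's verbose=1 flag prints a message whenever it reduces the rate, and a small custom callback can print the current rate at the end of every epoch:
import tensorflow as tf

# Option 1: let the callback announce each reduction on the console.
reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(
    monitor="val_loss", factor=0.3, patience=2, verbose=1)

# Option 2: log the optimizer's current learning rate every epoch.
class LRLogger(tf.keras.callbacks.Callback):  # illustrative helper, not built in
    def on_epoch_end(self, epoch, logs=None):
        lr = tf.keras.backend.get_value(self.model.optimizer.lr)
        print(f"Epoch {epoch + 1}: learning rate = {lr:.6g}")

# model.fit(..., callbacks=[reduce_lr, LRLogger()])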
I am using tf.distribute.MirroredStrategy() to train a TextCNN model, but when I set vocab_size=0 or another wrong value, no error is reported in this mode. When tf.distribute.MirroredStrategy() is not used, the wrong vocab_size immediately raises an error.
Using a wrong value for vocab_size:
model = TextCNN(padding_size, vocab_size - 10, embed_size, filter_num, num_classes)
model.compile(loss='sparse_categorical_crossentropy', optimizer=tf.keras.optimizers.Adam(), metrics=['accuracy'])
model.fit(train_dataset, epochs=epoch, validation_data=valid_dataset, callbacks=callbacks)
Error:
2 root error(s) found.
(0) Invalid argument: indices[63,10] = 4726 is not in [0, 4726)
[[node text_cnn_1/embedding/embedding_lookup (defined at <ipython-input-7-6ef8a4397184>:37) ]]
[[Adam/Adam/update/AssignSubVariableOp/_45]]
(1) Invalid argument: indices[63,10] = 4726 is not in [0, 4726)
[[node text_cnn_1/embedding/embedding_lookup (defined at <ipython-input-7-6ef8a4397184>:37) ]]
0 successful operations.
0 derived errors ignored. [Op:__inference_train_function_234431]
But with strategy.scope() there is no error and it appears to work well:
strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    print(vocab_size)
    model = TextCNN(padding_size, vocab_size - 1000, embed_size, filter_num, num_classes)
    model.compile(loss='sparse_categorical_crossentropy', optimizer=tf.keras.optimizers.Adam(), metrics=['accuracy'])
    model.fit(train_dataset, epochs=epoch, validation_data=valid_dataset, callbacks=callbacks)
The log looks like this (it looks very good):
Learning rate for epoch 1 is 0.0010000000474974513
2813/2813 [==============================] - 16s 6ms/step - loss: 0.8097 - accuracy: 0.7418 - val_loss: 0.4567 - val_accuracy: 0.8586 - lr: 0.0010
Epoch 2/15
2813/2813 [==============================] - ETA: 0s - loss: 0.4583 - accuracy: 0.8560
Learning rate for epoch 2 is 0.0010000000474974513
2813/2813 [==============================] - 14s 5ms/step - loss: 0.4583 - accuracy: 0.8560 - val_loss: 0.4051 - val_accuracy: 0.8756 - lr: 0.0010
Epoch 3/15
2810/2813 [============================>.] - ETA: 0s - loss: 0.3909 - accuracy: 0.8768
Learning rate for epoch 3 is 0.0010000000474974513
2813/2813 [==============================] - 14s 5ms/step - loss: 0.3909 - accuracy: 0.8767 - val_loss: 0.3853 - val_accuracy: 0.8844 - lr: 0.0010
Epoch 4/15
2811/2813 [============================>.] - ETA: 0s - loss: 0.2999 - accuracy: 0.9047
Learning rate for epoch 4 is 9.999999747378752e-05
2813/2813 [==============================] - 14s 5ms/step - loss: 0.2998 - accuracy: 0.9047 - val_loss: 0.3700 - val_accuracy: 0.8865 - lr: 1.0000e-04
Epoch 5/15
2807/2813 [============================>.] - ETA: 0s - loss: 0.2803 - accuracy: 0.9114
Learning rate for epoch 5 is 9.999999747378752e-05
2813/2813 [==============================] - 15s 5ms/step - loss: 0.2803 - accuracy: 0.9114 - val_loss: 0.3644 - val_accuracy: 0.8888 - lr: 1.0000e-04
Epoch 6/15
2803/2813 [============================>.] - ETA: 0s - loss: 0.2639 - accuracy: 0.9162
Learning rate for epoch 6 is 9.999999747378752e-05
2813/2813 [==============================] - 14s 5ms/step - loss: 0.2636 - accuracy: 0.9163 - val_loss: 0.3615 - val_accuracy: 0.8896 - lr: 1.0000e-04
Epoch 7/15
2805/2813 [============================>.] - ETA: 0s - loss: 0.2528 - accuracy: 0.9188
Learning rate for epoch 7 is 9.999999747378752e-05
2813/2813 [==============================] - 14s 5ms/step - loss: 0.2526 - accuracy: 0.9189 - val_loss: 0.3607 - val_accuracy: 0.8909 - lr: 1.0000e-04
More simply, like this; it runs with no error:
strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    model = Sequential()
    model.add(Embedding(1000, 64, input_length=20))  # vocabulary size is 1000
test_array = np.random.randint(10000, size=(32, 20))  # indices up to 9999, far beyond the vocabulary
model.predict(test_array)
Why?
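One way to surface the bad vocab_size up front, regardless of the distribution strategy, is to validate the token indices eagerly before training. A minimal sketch, assuming train_dataset yields (tokens, labels) batches:
import tensorflow as tf

# Scan the dataset for the largest token id and compare it against
# the embedding's vocabulary size before calling fit().
max_id = max(int(tf.reduce_max(tokens)) for tokens, _ in train_dataset)
assert max_id < vocab_size, f"token id {max_id} is out of range for vocab_size={vocab_size}"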
Ubuntu 20.04
TensorFlow 2.2
Dataset: MNIST
I am testing TensorFlow and I notice that the validation sparse_categorical_accuracy (accuracy) and the validation SparseCategoricalCrossentropy (loss) are both increasing together, which does not make sense to me. I think the validation loss should go down and the validation accuracy should go up as training progresses, or, in case of overfitting, the validation loss should go up and the validation accuracy should go down. But validation loss and validation accuracy are both increasing as training progresses. The training side, however, is progressing as expected, i.e. training loss going down and training accuracy going up.
Here is the code and the output:
#testing without preprocess monsoon
import tensorflow as tf
from tensorflow import keras as k
from tensorflow.keras import layers as l
import tensorflow_addons as tfa
mnist = tf.keras.datasets.mnist
(x_t,y_t),(x_te,y_te) = mnist.load_data()
x_t = x_t.reshape(60000,-1)
x_te = x_te.reshape(10000,-1)
d_x_t = tf.data.Dataset.from_tensor_slices(x_t)
d_y_t = tf.data.Dataset.from_tensor_slices(y_t)
dataset = tf.data.Dataset.zip((d_x_t,d_y_t)).shuffle(1000).batch(32)
d_x_te = tf.data.Dataset.from_tensor_slices(x_te)
d_y_te = tf.data.Dataset.from_tensor_slices(y_te)
dataset_test = tf.data.Dataset.zip((d_x_te,d_y_te)).shuffle(1000,seed=42).batch(32)
inp = k.Input((784,))
x = l.BatchNormalization()(inp)
x1 = l.Dense(1024, activation='relu', name='dense_1')(x)
x1 = l.Dropout(0.5)(x1)
x1 = l.BatchNormalization()(x1)
x2 = l.Dense(512, activation='relu', name='dense_2')(x1)
x3 = l.Dense(512, activation='relu', name='dense_3')(x)
x = x3 + x2
x = l.Dropout(0.5)(x)
x = l.BatchNormalization()(x)
x = l.Dense(10, activation='relu', name='dense_4')(x)
predictions = l.Dense(10, activation=None, name='preds')(x)
model = k.Model(inputs=inp,outputs=predictions)
opt=tfa.optimizers.MovingAverage(
k.optimizers.Adam(),
True,
0.99,
None,
'MovingAverage',
clipnorm=5
)
model.compile(optimizer=opt,
loss=k.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['sparse_categorical_accuracy'])
print('# Fit model on training data')
history = model.fit(dataset,
epochs=30,
steps_per_epoch=1875,
validation_data = dataset_test,
validation_steps = 313)
print('\nhistory dict:', history.history)
model.evaluate(dataset_test,batch_size=32,steps=331)
The learning evolution I am getting is:
# Fit model on training data
Epoch 1/30
WARNING:tensorflow:From /home/nitin/anaconda3/envs/tensorflow/lib/python3.7/site-packages/tensorflow/python/ops/resource_variable_ops.py:1817: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.
Instructions for updating:
If using Keras pass *_constraint arguments to layers.
1875/1875 [==============================] - 49s 26ms/step - loss: 0.3614 - sparse_categorical_accuracy: 0.8913 - val_loss: 0.3355 - val_sparse_categorical_accuracy: 0.9548
Epoch 2/30
1875/1875 [==============================] - 49s 26ms/step - loss: 0.1899 - sparse_categorical_accuracy: 0.9427 - val_loss: 1.2028 - val_sparse_categorical_accuracy: 0.9641
Epoch 3/30
1875/1875 [==============================] - 51s 27ms/step - loss: 0.1546 - sparse_categorical_accuracy: 0.9521 - val_loss: 1.6385 - val_sparse_categorical_accuracy: 0.9673
Epoch 4/30
1875/1875 [==============================] - 38s 20ms/step - loss: 0.1357 - sparse_categorical_accuracy: 0.9585 - val_loss: 2.8285 - val_sparse_categorical_accuracy: 0.9697
Epoch 5/30
1875/1875 [==============================] - 38s 20ms/step - loss: 0.1253 - sparse_categorical_accuracy: 0.9608 - val_loss: 3.8489 - val_sparse_categorical_accuracy: 0.9697
Epoch 6/30
1875/1875 [==============================] - 29s 16ms/step - loss: 0.1149 - sparse_categorical_accuracy: 0.9646 - val_loss: 2.1872 - val_sparse_categorical_accuracy: 0.9699
Epoch 7/30
1875/1875 [==============================] - 29s 16ms/step - loss: 0.1094 - sparse_categorical_accuracy: 0.9646 - val_loss: 2.9429 - val_sparse_categorical_accuracy: 0.9695
Epoch 8/30
1875/1875 [==============================] - 29s 16ms/step - loss: 0.1066 - sparse_categorical_accuracy: 0.9667 - val_loss: 5.6166 - val_sparse_categorical_accuracy: 0.9710
Epoch 9/30
1875/1875 [==============================] - 30s 16ms/step - loss: 0.0991 - sparse_categorical_accuracy: 0.9688 - val_loss: 3.9547 - val_sparse_categorical_accuracy: 0.9710
Epoch 10/30
1875/1875 [==============================] - 29s 16ms/step - loss: 0.0948 - sparse_categorical_accuracy: 0.9701 - val_loss: 4.8149 - val_sparse_categorical_accuracy: 0.9713
Epoch 11/30
1875/1875 [==============================] - 29s 16ms/step - loss: 0.0850 - sparse_categorical_accuracy: 0.9727 - val_loss: 7.4974 - val_sparse_categorical_accuracy: 0.9712
Epoch 12/30
1875/1875 [==============================] - 29s 16ms/step - loss: 0.0879 - sparse_categorical_accuracy: 0.9719 - val_loss: 4.3669 - val_sparse_categorical_accuracy: 0.9714
Epoch 13/30
1875/1875 [==============================] - 30s 16ms/step - loss: 0.0817 - sparse_categorical_accuracy: 0.9743 - val_loss: 9.2499 - val_sparse_categorical_accuracy: 0.9725
Epoch 14/30
1875/1875 [==============================] - 30s 16ms/step - loss: 0.0805 - sparse_categorical_accuracy: 0.9737 - val_loss: 7.5436 - val_sparse_categorical_accuracy: 0.9716
Epoch 15/30
1875/1875 [==============================] - 30s 16ms/step - loss: 0.0798 - sparse_categorical_accuracy: 0.9751 - val_loss: 14.2331 - val_sparse_categorical_accuracy: 0.9712
Epoch 16/30
1875/1875 [==============================] - 29s 16ms/step - loss: 0.0745 - sparse_categorical_accuracy: 0.9757 - val_loss: 7.9517 - val_sparse_categorical_accuracy: 0.9715
Epoch 17/30
1875/1875 [==============================] - 30s 16ms/step - loss: 0.0745 - sparse_categorical_accuracy: 0.9761 - val_loss: 7.9719 - val_sparse_categorical_accuracy: 0.9702
Epoch 18/30
1875/1875 [==============================] - 30s 16ms/step - loss: 0.0741 - sparse_categorical_accuracy: 0.9763 - val_loss: 13.8696 - val_sparse_categorical_accuracy: 0.9665
Epoch 19/30
1875/1875 [==============================] - 30s 16ms/step - loss: 0.0728 - sparse_categorical_accuracy: 0.9760 - val_loss: 20.2949 - val_sparse_categorical_accuracy: 0.9688
Epoch 20/30
1875/1875 [==============================] - 45s 24ms/step - loss: 0.0699 - sparse_categorical_accuracy: 0.9775 - val_loss: 8.8696 - val_sparse_categorical_accuracy: 0.9713
Epoch 21/30
1875/1875 [==============================] - 29s 16ms/step - loss: 0.0699 - sparse_categorical_accuracy: 0.9777 - val_loss: 12.9682 - val_sparse_categorical_accuracy: 0.9723
Epoch 22/30
1875/1875 [==============================] - 30s 16ms/step - loss: 0.0674 - sparse_categorical_accuracy: 0.9781 - val_loss: 61.1677 - val_sparse_categorical_accuracy: 0.9692
Epoch 23/30
1875/1875 [==============================] - 30s 16ms/step - loss: 0.0651 - sparse_categorical_accuracy: 0.9798 - val_loss: 21.3270 - val_sparse_categorical_accuracy: 0.9697
Epoch 24/30
1875/1875 [==============================] - 31s 16ms/step - loss: 0.0624 - sparse_categorical_accuracy: 0.9800 - val_loss: 62.2778 - val_sparse_categorical_accuracy: 0.9685
Epoch 25/30
1875/1875 [==============================] - 30s 16ms/step - loss: 0.0665 - sparse_categorical_accuracy: 0.9792 - val_loss: 24.9327 - val_sparse_categorical_accuracy: 0.9687
Epoch 26/30
1875/1875 [==============================] - 46s 24ms/step - loss: 0.0605 - sparse_categorical_accuracy: 0.9805 - val_loss: 42.0141 - val_sparse_categorical_accuracy: 0.9700
Epoch 27/30
1875/1875 [==============================] - 29s 16ms/step - loss: 0.0601 - sparse_categorical_accuracy: 0.9806 - val_loss: 54.8586 - val_sparse_categorical_accuracy: 0.9695
Epoch 28/30
1875/1875 [==============================] - 30s 16ms/step - loss: 0.0583 - sparse_categorical_accuracy: 0.9811 - val_loss: 25.3613 - val_sparse_categorical_accuracy: 0.9680
Epoch 29/30
1875/1875 [==============================] - 29s 16ms/step - loss: 0.0576 - sparse_categorical_accuracy: 0.9811 - val_loss: 23.2299 - val_sparse_categorical_accuracy: 0.9710
Epoch 30/30
1875/1875 [==============================] - 30s 16ms/step - loss: 0.0566 - sparse_categorical_accuracy: 0.9817 - val_loss: 16.5671 - val_sparse_categorical_accuracy: 0.9728
history dict: {'loss': [0.36135926842689514, 0.1898646354675293, 0.15456895530223846, 0.13569727540016174, 0.12525275349617004, 0.1148592159152031, 0.10943067818880081, 0.1066298857331276, 0.09912335127592087, 0.09476170688867569, 0.08501157909631729, 0.0879492461681366, 0.08170024305582047, 0.08047273010015488, 0.07976552098989487, 0.07453753799200058, 0.07450901716947556, 0.07413797080516815, 0.07278618961572647, 0.0698995441198349, 0.06988336145877838, 0.06740442663431168, 0.06507138162851334, 0.06242847815155983, 0.0665266141295433, 0.06050613150000572, 0.06005210056900978, 0.05830719694495201, 0.05763527378439903, 0.05664650723338127], 'sparse_categorical_accuracy': [0.8913000226020813, 0.9427499771118164, 0.9521499872207642, 0.9585333466529846, 0.9607999920845032, 0.9645500183105469, 0.9645666480064392, 0.9666833281517029, 0.9687666893005371, 0.9701166749000549, 0.9726999998092651, 0.9719499945640564, 0.9742666482925415, 0.9736999869346619, 0.9750999808311462, 0.9757000207901001, 0.9760833382606506, 0.9763166904449463, 0.9759833216667175, 0.977483332157135, 0.9777166843414307, 0.9780833125114441, 0.9798333048820496, 0.9800000190734863, 0.9792333245277405, 0.9805499911308289, 0.9805999994277954, 0.9810666441917419, 0.9810666441917419, 0.9816833138465881], 'val_loss': [0.33551061153411865, 1.2028071880340576, 1.6384732723236084, 2.828489065170288, 3.8488738536834717, 2.187160015106201, 2.9428975582122803, 5.6166462898254395, 3.954725503921509, 4.814915657043457, 7.4974141120910645, 4.366909503936768, 9.24986457824707, 7.543578147888184, 14.233136177062988, 7.951717853546143, 7.971870422363281, 13.869564056396484, 20.29490089416504, 8.869643211364746, 12.968180656433105, 61.167701721191406, 21.327049255371094, 62.27778625488281, 24.932708740234375, 42.01411437988281, 54.85857009887695, 25.361297607421875, 23.229896545410156, 16.56712532043457], 'val_sparse_categorical_accuracy': [0.954800009727478, 0.9641000032424927, 0.9672999978065491, 0.9696999788284302, 0.9696999788284302, 0.9699000120162964, 0.9695000052452087, 0.9710000157356262, 0.9710000157356262, 0.9713000059127808, 0.9711999893188477, 0.9714000225067139, 0.9725000262260437, 0.9715999960899353, 0.9711999893188477, 0.9714999794960022, 0.9702000021934509, 0.9664999842643738, 0.9688000082969666, 0.9713000059127808, 0.9722999930381775, 0.9692000150680542, 0.9696999788284302, 0.968500018119812, 0.9686999917030334, 0.9700000286102295, 0.9695000052452087, 0.9679999947547913, 0.9710000157356262, 0.9728000164031982]}
302/331 [==========================>...] - ETA: 0s - loss: 17.1192 - sparse_categorical_accuracy: 0.9725WARNING:tensorflow:Your input ran out of data; interrupting training. Make sure that your dataset or generator can generate at least `steps_per_epoch * epochs` batches (in this case, 331 batches). You may need to use the repeat() function when building your dataset.
313/331 [===========================>..] - 1s 3ms/step - loss: 16.5671 - sparse_categorical_accuracy: 0.9728
[16.567113876342773, 0.9728000164031982]
If the training loss is decreasing while the validation loss is increasing, it is likely you have overfitted the model.
I also have some doubts about this line:
x = x3 + x2
As I understand it, you want to create a shortcut connection. In Keras, the idiomatic way to do this is the Add layer.
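For example (a one-line sketch using the l alias from the question's imports):
x = l.Add()([x3, x2])  # explicit Keras Add layer for the shortcut sum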
I've trained some networks with ResNetV2 50 ( https://tfhub.dev/google/imagenet/resnet_v2_50/feature_vector/4 ) and it works very well on my datasets.
Then I tried tf.keras.applications.ResNet50V2 and the accuracy is much lower than with the other.
Here are the two models:
The first (with hub)
base_model = hub.KerasLayer('https://tfhub.dev/google/imagenet/resnet_v2_50/feature_vector/4', input_shape=(IMAGE_H, IMAGE_W, 3))
base_model.trainable = False
model = tf.keras.Sequential([
    base_model,
    Dense(num_classes, activation='softmax')
])
The second (with keras.applications)
base_model = tf.keras.applications.ResNet50V2(input_shape=(IMAGE_H, IMAGE_W, 3), include_top=False, weights='imagenet', pooling='avg')
base_model.trainable = False
model = tf.keras.Sequential([
    base_model,
    Dense(num_classes, activation='softmax')
])
The optimizer is the same (Adam), and the epochs, steps, dataset (train and validation), and learning rate are the same as well. But the first starts with a val_accuracy near 80% and ends with an accuracy near 99%, while the second stays around 85% val_accuracy from the first epoch to the last, as if it were overfitting. I get the same behavior when changing the dataset and the parameters for each model.
What am I doing wrong?
Both tf.keras.applications.ResNet50V2 and hub.KerasLayer for ResNet50V2 are trained on the same ImageNet dataset and share the same weights; there is no difference between them.
Coming to the accuracy difference: I tried both APIs to load the base model and ran each for 10 epochs.
I saw a minor difference in both training and validation accuracy.
Then, after running the TensorFlow Hub base model a second time, I again found some minor change in accuracy. This may be due to the order of computation, and could also be floating-point precision error.
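If you want to check the shared-weights claim directly, you can compare the feature vectors the two backbones produce for an identical input. A minimal sketch (note that each API documents its own input scaling: the hub module expects pixels in [0, 1], while keras.applications has its own preprocess_input, and a mismatch here is an easy way for two otherwise identical setups to diverge):
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub

x = np.random.rand(1, 96, 96, 3).astype("float32")  # hub module expects values in [0, 1]

hub_backbone = hub.KerasLayer(
    'https://tfhub.dev/google/imagenet/resnet_v2_50/feature_vector/4',
    input_shape=(96, 96, 3))
app_backbone = tf.keras.applications.ResNet50V2(
    input_shape=(96, 96, 3), include_top=False, weights='imagenet', pooling='avg')

# keras.applications expects its own scaling ([-1, 1] for ResNetV2).
x_app = tf.keras.applications.resnet_v2.preprocess_input(x * 255.0)

diff = np.abs(hub_backbone(x).numpy() - app_backbone(x_app, training=False).numpy()).max()
print(diff)  # should be small if the two backbones really share weights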
Below are the outputs:
tf.keras.applications.ResNet50V2 as the base model:
base_model = tf.keras.applications.ResNet50V2(input_shape=(96, 96, 3), include_top=False, weights='imagenet', pooling='avg')
base_model.trainable = False
model = tf.keras.Sequential([
    base_model,
    Dense(10, activation='softmax')
])
model.compile(loss="categorical_crossentropy",optimizer="adam",metrics=["accuracy"])
H = model.fit_generator(aug.flow(trainX, trainY, batch_size=16),
validation_data=(testX, testY), steps_per_epoch=len(trainX) // 16,
epochs=10)
Output 1:
Epoch 1/10
103/103 [==============================] - 55s 535ms/step - loss: 1.5820 - accuracy: 0.4789 - val_loss: 0.9162 - val_accuracy: 0.6949
Epoch 2/10
103/103 [==============================] - 57s 554ms/step - loss: 0.9539 - accuracy: 0.6534 - val_loss: 0.8376 - val_accuracy: 0.6852
Epoch 3/10
103/103 [==============================] - 55s 532ms/step - loss: 0.8610 - accuracy: 0.6944 - val_loss: 0.7104 - val_accuracy: 0.7240
Epoch 4/10
103/103 [==============================] - 55s 533ms/step - loss: 0.7671 - accuracy: 0.7214 - val_loss: 0.5988 - val_accuracy: 0.7918
Epoch 5/10
103/103 [==============================] - 55s 536ms/step - loss: 0.6994 - accuracy: 0.7526 - val_loss: 0.6029 - val_accuracy: 0.7676
Epoch 6/10
103/103 [==============================] - 55s 537ms/step - loss: 0.6880 - accuracy: 0.7508 - val_loss: 0.6121 - val_accuracy: 0.7724
Epoch 7/10
103/103 [==============================] - 55s 533ms/step - loss: 0.6588 - accuracy: 0.7593 - val_loss: 0.5486 - val_accuracy: 0.8015
Epoch 8/10
103/103 [==============================] - 55s 534ms/step - loss: 0.6640 - accuracy: 0.7630 - val_loss: 0.5287 - val_accuracy: 0.8232
Epoch 9/10
103/103 [==============================] - 54s 528ms/step - loss: 0.6004 - accuracy: 0.7881 - val_loss: 0.4598 - val_accuracy: 0.8426
Epoch 10/10
103/103 [==============================] - 55s 530ms/step - loss: 0.5583 - accuracy: 0.8016 - val_loss: 0.4605 - val_accuracy: 0.8426
TensorFlow Hub as the base model:
tf.keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)
base_model = hub.KerasLayer('https://tfhub.dev/google/imagenet/resnet_v2_50/feature_vector/4', input_shape=(96, 96, 3))
base_model.trainable = False
model = tf.keras.Sequential([base_model, Dense(10, activation='softmax')])
model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=["accuracy"])
H = model.fit_generator(aug.flow(trainX, trainY, batch_size=16),
                        validation_data=(testX, testY), steps_per_epoch=len(trainX) // 16,
                        epochs=10)
Output 2:
Epoch 1/10
103/103 [==============================] - 54s 526ms/step - loss: 1.7543 - accuracy: 0.4464 - val_loss: 1.0185 - val_accuracy: 0.5981
Epoch 2/10
103/103 [==============================] - 53s 519ms/step - loss: 0.9827 - accuracy: 0.6283 - val_loss: 1.0067 - val_accuracy: 0.6416
Epoch 3/10
103/103 [==============================] - 56s 548ms/step - loss: 0.8719 - accuracy: 0.6944 - val_loss: 0.7195 - val_accuracy: 0.7240
Epoch 4/10
103/103 [==============================] - 54s 521ms/step - loss: 0.8177 - accuracy: 0.7208 - val_loss: 0.7490 - val_accuracy: 0.7385
Epoch 5/10
103/103 [==============================] - 54s 522ms/step - loss: 0.7641 - accuracy: 0.7379 - val_loss: 0.6325 - val_accuracy: 0.7797
Epoch 6/10
103/103 [==============================] - 54s 523ms/step - loss: 0.6551 - accuracy: 0.7655 - val_loss: 0.6431 - val_accuracy: 0.7579
Epoch 7/10
103/103 [==============================] - 55s 530ms/step - loss: 0.6538 - accuracy: 0.7734 - val_loss: 0.5824 - val_accuracy: 0.7797
Epoch 8/10
103/103 [==============================] - 55s 536ms/step - loss: 0.6387 - accuracy: 0.7691 - val_loss: 0.6254 - val_accuracy: 0.7772
Epoch 9/10
103/103 [==============================] - 56s 540ms/step - loss: 0.6394 - accuracy: 0.7685 - val_loss: 0.6539 - val_accuracy: 0.7554
Epoch 10/10
103/103 [==============================] - 55s 536ms/step - loss: 0.5816 - accuracy: 0.7955 - val_loss: 0.5703 - val_accuracy: 0.7990
Now, if I run the TensorFlow Hub model again:
Output 3:
Epoch 1/10
103/103 [==============================] - 55s 534ms/step - loss: 1.6412 - accuracy: 0.4764 - val_loss: 1.0697 - val_accuracy: 0.5738
Epoch 2/10
103/103 [==============================] - 54s 528ms/step - loss: 1.0312 - accuracy: 0.6412 - val_loss: 1.0196 - val_accuracy: 0.6077
Epoch 3/10
103/103 [==============================] - 59s 570ms/step - loss: 0.8710 - accuracy: 0.6975 - val_loss: 0.7088 - val_accuracy: 0.7240
Epoch 4/10
103/103 [==============================] - 54s 529ms/step - loss: 0.8108 - accuracy: 0.7128 - val_loss: 0.6539 - val_accuracy: 0.7458
Epoch 5/10
103/103 [==============================] - 54s 522ms/step - loss: 0.7311 - accuracy: 0.7440 - val_loss: 0.6029 - val_accuracy: 0.7676
Epoch 6/10
103/103 [==============================] - 54s 523ms/step - loss: 0.6683 - accuracy: 0.7612 - val_loss: 0.6621 - val_accuracy: 0.7506
Epoch 7/10
103/103 [==============================] - 54s 527ms/step - loss: 0.6518 - accuracy: 0.7753 - val_loss: 0.6166 - val_accuracy: 0.7700
Epoch 8/10
103/103 [==============================] - 54s 524ms/step - loss: 0.6147 - accuracy: 0.7795 - val_loss: 0.5611 - val_accuracy: 0.7797
Epoch 9/10
103/103 [==============================] - 54s 522ms/step - loss: 0.6671 - accuracy: 0.7667 - val_loss: 0.5126 - val_accuracy: 0.8087
Epoch 10/10
103/103 [==============================] - 54s 525ms/step - loss: 0.6090 - accuracy: 0.7832 - val_loss: 0.6355 - val_accuracy: 0.7627
Hope this answers your question.