I am using TensorFlow's DNNClassifier for multi-label classification, which uses accuracy as its metric. I am evaluating the model with sklearn's f1 metric, which gives quite a low score. The sklearn accuracy score is also low. Is my implementation wrong somewhere?
DNN classifier
import tensorflow as tf
import tensorflow_hub as hub

embedding_feats = hub.text_embedding_column(key='text',
                                            module_spec='https://tfhub.dev/google/universal-sentence-encoder/2',
                                            trainable=False)

dnn = tf.estimator.DNNClassifier(
    hidden_units=[512, 128],
    feature_columns=[embedding_feats],
    n_classes=11,
    activation_fn=tf.nn.relu,
    dropout=0.1,
    optimizer=tf.train.AdagradOptimizer(learning_rate=0.005))
DNN classifier Train Output. Val acc is 0.40
Training for step = 8000
Train Time (s): 52.573952436447144
Eval Metrics (Train): {'accuracy': 0.44695774, 'average_loss': 1.516403, 'loss': 193.58235, 'global_step': 8200}
Eval Metrics (Validation): {'accuracy': 0.40303582, 'average_loss': 1.6520736, 'loss': 209.30502, 'global_step': 8200}
Sklearn F1 score
f1_score(y_test,predictions_test,labels=le.classes_,average='weighted')
0.1066998393248964
Sklearn Accuracy score
accuracy_score(y_test, predictions_test)
0.11804138735062664
One possible reason for this could be that you are not converting your predictions to whole numbers, i.e. 0 or 1. Your neural network outputs the probability of a record belonging to class 1. If you compare these probabilities directly with y_test, they will not match, because 0.98 is not equal to 1.
Round predictions_test to the nearest whole number, i.e. values below 0.5 become 0 and values of 0.5 or above become 1, and check the accuracy again.
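For example, here is a minimal sketch of turning probabilistic outputs into hard labels before calling the sklearn metrics. Since the question has 11 classes, argmax over the class probabilities (or the class_ids field returned by DNNClassifier.predict) is the multi-class counterpart of rounding; the variable names probs and the encoding of y_test are assumptions about the asker's setup:

import numpy as np
from sklearn.metrics import accuracy_score, f1_score

# probs is assumed to be an (n_samples, n_classes) array of class probabilities,
# e.g. collected from p['probabilities'] for each p in dnn.predict(input_fn)
pred_labels = np.argmax(probs, axis=1)   # hard class indices instead of probabilities
# for a binary model with a single sigmoid output, np.round(probs) plays the same role

# y_test is assumed to hold integer class indices (encode with the same LabelEncoder if not)
print(accuracy_score(y_test, pred_labels))
print(f1_score(y_test, pred_labels, average='weighted'))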
Related
I have trained/fine-tuned a few Keras models, and during that I used 'accuracy' as the only metric. Now, after all the training (which took a long time), I realize I need precision and recall. Is there a way to extract/compute that information after training is finished, with the saved model and the accuracies at hand?
You can apply the trained model on the test set, generate a list of the true labels and predicted labels, and then use the score metrics from scikit-learn. See documentation here:
https://scikit-learn.org/stable/modules/generated/sklearn.metrics.precision_score.html#sklearn.metrics.precision_score
https://scikit-learn.org/stable/modules/generated/sklearn.metrics.recall_score.html#sklearn.metrics.recall_score
https://scikit-learn.org/stable/auto_examples/model_selection/plot_precision_recall.html
For example, for a binary classification problem:
from sklearn.metrics import precision_score, recall_score

# y_true is the list of true labels of the test set
# y_pred is the list of predictions of the model
precision = precision_score(y_true, y_pred)
recall = recall_score(y_true, y_pred)
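If the model is a saved multi-class Keras model (as in the question), a rough sketch might look like the following; the file name, the one-hot encoding of the test labels, and the average='macro' choice are assumptions:

import numpy as np
from tensorflow import keras
from sklearn.metrics import precision_score, recall_score

model = keras.models.load_model('my_model.h5')   # hypothetical path to the saved model
probs = model.predict(x_test)                    # (n_samples, n_classes) probabilities
y_pred = np.argmax(probs, axis=1)                # predicted class indices
y_true = np.argmax(y_test_one_hot, axis=1)       # true class indices from one-hot labels

precision = precision_score(y_true, y_pred, average='macro')
recall = recall_score(y_true, y_pred, average='macro')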
I am training and optimizing my multi-class classification CNN with the following Keras compile call.
model.compile(loss=keras.losses.categorical_crossentropy,
              optimizer=optimiser,
              metrics=['accuracy', 'categorical_crossentropy'])
I used categorical_crossentropy both as the loss and as a metric to watch. After training the model for 10 epochs, the two reported values are different.
Even though I have chosen categorical_crossentropy as both the loss and a metric, what could be the possible reasons for their values to be different?
I use a TensorFlow canned estimator (LinearClassifier) to predict game actions from situations, favouring the best scores. The scores are included in train_data, used as weights, and passed as the weight column to the estimator.
I know the weight values are multiplied with the loss (MSE in this case), but I want to know whether loss minimization is done automatically, or whether I have to define the optimizer explicitly as:
optimizer=tf.train.AdamOptimizer(learning_rate=0.001, beta1=0.9, beta2=0.99, epsilon=1e-08, use_locking=False).minimize(loss),
model = tf.estimator.LinearClassifier(
    feature_columns=feature_columns,
    optimizer=tf.train.AdamOptimizer(learning_rate=0.001, beta1=0.9, beta2=0.99, epsilon=1e-08, use_locking=False),
    weight_column=weights,
    # dropout=0.1,
    # activation_fn=tf.nn.softmax,
    n_classes=10,
    label_vocabulary=Action_vocab,
    model_dir='./Models/ActionPlayerModel20/',
    loss_reduction=tf.losses.Reduction.SUM_OVER_BATCH_SIZE,
    config=tf.estimator.RunConfig().replace(save_summary_steps=10))
I'm not entirely sure what you mean by:
"I know the weight values are multiplied with the loss"
but the classifier line is correct as you have it. You should pass the Optimizer object into the classifier, not the .minimize() operation. The estimator will generate and handle the minimize operation internally.
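To make that concrete, here is a minimal sketch of the intended usage; feature_columns is assumed to be defined elsewhere, and the hyperparameters are simply copied from the question:

import tensorflow as tf

# build the optimizer object only; do not call .minimize() yourself
optimizer = tf.train.AdamOptimizer(learning_rate=0.001, beta1=0.9, beta2=0.99,
                                   epsilon=1e-08, use_locking=False)

model = tf.estimator.LinearClassifier(
    feature_columns=feature_columns,  # assumed to be defined elsewhere, as in the question
    optimizer=optimizer,              # the estimator builds and runs the minimize op itself
    n_classes=10)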
Has anybody trained MobileNet V1 from scratch on CIFAR-10? What was the maximum accuracy you got? I am getting stuck at 70% after 110 epochs, even though my training accuracy is above 99%. Here is how I am creating the model.
import tensorflow as tf
from tensorflow.keras.layers import Input, Flatten, Dense
from tensorflow.keras.models import Model

# create the MobileNet base (no top, random weights)
MobileNet_model = tf.keras.applications.MobileNet(include_top=False, weights=None)

# Must define the input shape in the first layer of the neural network
x = Input(shape=(32, 32, 3), name='input')

# Create custom model
model = MobileNet_model(x)
model = Flatten(name='flatten')(model)
model = Dense(1024, activation='relu', name='dense_1')(model)
output = Dense(10, activation=tf.nn.softmax, name='output')(model)
model_regular = Model(x, output, name='model_regular')
I used the Adam optimizer with LR = 0.001, amsgrad=True, and a batch size of 64. I also normalized the pixel data by dividing by 255.0. I am not using any data augmentation.
optimizer1 = tf.keras.optimizers.Adam(lr=0.001, amsgrad=True)
model_regular.compile(optimizer=optimizer1, loss='categorical_crossentropy', metrics=['accuracy'])
history = model_regular.fit(x_train, y_train_one_hot,validation_data=(x_test,y_test_one_hot),batch_size=64, epochs=100) # train the model
I think I am supposed to get at least 75% according to https://arxiv.org/abs/1712.04698
Am I doing anything wrong, or is this the expected accuracy after 100 epochs? Here is a plot of my validation accuracy.
MobileNet was designed for ImageNet, which is much larger, so training it on CIFAR-10 will inevitably result in overfitting. I would suggest you plot the loss (not accuracy) for both training and validation/evaluation, and try to train hard enough to reach 99% training accuracy, then observe the validation loss. If the model is overfitting, you will see the validation loss start to increase again after reaching its minimum.
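For instance, with the history object returned by model_regular.fit(...) above, the loss curves can be plotted like this (a minimal sketch; matplotlib is assumed to be available):

import matplotlib.pyplot as plt

# history is the object returned by model_regular.fit(...) in the question
plt.plot(history.history['loss'], label='training loss')
plt.plot(history.history['val_loss'], label='validation loss')
plt.xlabel('epoch')
plt.ylabel('categorical cross-entropy loss')
plt.legend()
plt.show()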
A few things to try to reduce overfitting:
add dropout before the fully connected layer
data augmentation - random shifts, crops and rotations should be enough
use a smaller width multiplier (read the original paper; it basically reduces the number of filters per layer), e.g. 0.75 or 0.5, to make the layers thinner
use L2 weight regularization and weight decay
Then there are some usual training tricks:
use learning rate decay, e.g. reduce the learning rate from 1e-2 to 1e-4, stepwise or exponentially (a combined Keras sketch of these ideas follows below)
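Here is a rough Keras sketch of how several of these suggestions could be wired together. The hyperparameter values, the SGD-with-momentum optimizer, and the step schedule from 1e-2 to 1e-4 are illustrative assumptions rather than tuned settings (the question itself used Adam), weight decay is represented only by L2 kernel regularization, and x_train / y_train_one_hot / x_test / y_test_one_hot are the arrays from the question:

import tensorflow as tf
from tensorflow.keras import layers, regularizers, Model

# thinner MobileNet via the width multiplier (alpha) and random weights
base = tf.keras.applications.MobileNet(include_top=False, weights=None,
                                        input_shape=(32, 32, 3), alpha=0.5)

inputs = layers.Input(shape=(32, 32, 3))
x = base(inputs)
x = layers.Flatten()(x)
x = layers.Dropout(0.5)(x)                        # dropout before the fully connected layer
x = layers.Dense(1024, activation='relu',
                 kernel_regularizer=regularizers.l2(1e-4))(x)   # L2 weight regularization
outputs = layers.Dense(10, activation='softmax')(x)
model = Model(inputs, outputs)

# simple data augmentation: random shifts, rotations and flips
datagen = tf.keras.preprocessing.image.ImageDataGenerator(
    width_shift_range=0.1, height_shift_range=0.1,
    rotation_range=15, horizontal_flip=True)

def lr_schedule(epoch):
    # stepwise learning rate decay from 1e-2 towards 1e-4
    if epoch < 30:
        return 1e-2
    elif epoch < 60:
        return 1e-3
    return 1e-4

model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-2, momentum=0.9),
              loss='categorical_crossentropy', metrics=['accuracy'])

model.fit(datagen.flow(x_train, y_train_one_hot, batch_size=64),
          validation_data=(x_test, y_test_one_hot),
          epochs=100,
          callbacks=[tf.keras.callbacks.LearningRateScheduler(lr_schedule)])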
With some hyperparameter search, I got an evaluation loss of 0.85. I didn't use Keras; I wrote the MobileNet myself in TensorFlow.
The OP asked about MobileNetV1. Since MobileNetV2 has been published, here is an update on training MobileNetV2 on CIFAR-10:
1) MobileNetv2 is tuned primarily to work on ImageNet with an initial image resolution of 224x224. It has 5 convolution operations with stride 2. Thus the GlobalAvgPool2D (penultimate layer) gets a feature map of Cx7x7, where C is the number of filters (1280 for MobileNetV2).
2) For CIFAR-10, I changed the stride in the first three of these layers to 1. Thus the GlobalAvgPool2D gets a feature map of Cx8x8. Secondly, I trained with a width multiplier of 0.25 (which scales the number of channels in each layer). I trained with mixup in mxnet (https://gluon-cv.mxnet.io/model_zoo/classification.html). This gets me a validation accuracy of 93.27.
3) Another MobileNetV2 implementation that seems to work well for CIFAR-10 is available here - PyTorch-CIFAR
The reported accuracy is 94.43. This implementation changes the stride of the first two downsampling layers from 2 to 1, and it uses the full channel width, as used for ImageNet.
4) Further, I trained a MobileNetV2 on CIFAR-10 with mixup, this time only altering the stride of the first conv layer from 2 to 1 and using the complete width (width parameter == 1.0). Thus the GlobalAvgPool2D (penultimate layer) gets a feature map of Cx2x2. This gets me an accuracy of 92.31.
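In Keras (rather than the mxnet and PyTorch code referenced above), the input-size and width-multiplier parts of points 2-4 could be sketched as follows. Note that keras.applications does not expose the convolution strides, so the stride changes described above would require editing the architecture definition itself; this sketch only covers the alpha and input-shape settings:

import tensorflow as tf

# width multiplier 0.25, random weights, CIFAR-10-sized input
base = tf.keras.applications.MobileNetV2(input_shape=(32, 32, 3),
                                         alpha=0.25,
                                         include_top=False,
                                         weights=None,
                                         pooling='avg')   # global average pooling on top
outputs = tf.keras.layers.Dense(10, activation='softmax')(base.output)
model = tf.keras.Model(base.input, outputs)
model.summary()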
When I was training a CNN to classify images of distorted digits (0 to 9), the accuracy on both the training set and the test set clearly improved at first.
Epoch[0] Batch [100] Train-multi-accuracy_0=0.296000
...
Epoch[0] Batch [500] Train-multi-accuracy_0=0.881900
In Epoch[1] and Epoch[2] the accuracy oscillated slightly between 0.85 and 0.95; however,
Epoch[3] Batch [300] Train-multi-accuracy_0=0.926400
Epoch[3] Batch [400] Train-multi-accuracy_0=0.105300
Epoch[3] Batch [500] Train-multi-accuracy_0=0.098200
From then on, the accuracy stayed around 0.1, which means the network was only making random predictions.
I repeated the training several times, and this happened every time. What is going wrong?
Is the adaptive learning rate strategy the reason?
model = mx.model.FeedForward(...,
                             optimizer='adam',
                             num_epoch=50,
                             wd=0.00001,
                             ...,
                             )
What exactly is the model you're training? If you're using the MNIST dataset, usually a simple 2-layer MLP trained with SGD will give you pretty high accuracy.
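For reference, here is a minimal MXNet sketch of such an MLP using the same (old, now deprecated) FeedForward API as in the question; train_iter and val_iter are assumed to be mx.io data iterators over the digit images, and the layer sizes and learning rate are illustrative:

import mxnet as mx

# a simple 2-layer MLP with the symbolic API
data = mx.sym.Variable('data')
fc1 = mx.sym.FullyConnected(data=data, num_hidden=128, name='fc1')
act1 = mx.sym.Activation(data=fc1, act_type='relu', name='relu1')
fc2 = mx.sym.FullyConnected(data=act1, num_hidden=10, name='fc2')
mlp = mx.sym.SoftmaxOutput(data=fc2, name='softmax')

# train with plain SGD; extra keyword arguments are forwarded to the optimizer
model = mx.model.FeedForward(symbol=mlp,
                             optimizer='sgd',
                             learning_rate=0.1,
                             num_epoch=20)
model.fit(X=train_iter, eval_data=val_iter)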