Same train and eval dataset but getting a different result - TensorFlow

Versions:
TensorFlow: 1.8.0
TensorBoard: 1.8.0
What I did:
I'm training a model on an imbalanced dataset with tf.estimator.DNNClassifier. I ran the training process twice, both times starting from a totally fresh state (i.e., no checkpoint reused between runs) and with the same data. The two runs produced results that differ greatly from each other, as shown in the following pictures.
[Picture: 1st training run]
[Picture: 2nd training run]
A few points to note:
There is no difference between the two training processes (no code or data changes); both start from scratch.
The training dataset size is about 100M.
Both training results are from 6 epochs. (And each run cost $25 on Google ML Engine.)
From the two pictures we can tell:
The 1st training learns nothing for 6 epochs.
The 2nd training learns (it got an AUC over 0.6).
Although the difference in AUC between the two runs is only 0.1 (0.6 vs. 0.5), the difference in meaning is large (a random guess versus a non-random guess).
Problems:
Why does this happen: the same training data, but a totally different result?
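For reference, a minimal sketch of how run-to-run randomness can be pinned down with TF 1.x Estimators (the seed value, feature column, and layer sizes here are hypothetical; weight initialization, dropout, and shuffle order are the usual sources of divergence):

import tensorflow as tf

# Fixing the graph-level seed makes weight initialization and
# dropout repeatable across runs.
feature_columns = [tf.feature_column.numeric_column('x')]  # hypothetical feature
config = tf.estimator.RunConfig(tf_random_seed=42)         # arbitrary fixed seed

classifier = tf.estimator.DNNClassifier(
    hidden_units=[128, 64],  # hypothetical layer sizes
    feature_columns=feature_columns,
    config=config)

# The input pipeline must be seeded as well, otherwise batch order
# still differs between runs, e.g. inside your input_fn:
#   dataset = dataset.shuffle(buffer_size=10000, seed=42)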

Related

Training dataset repeatedly - Keras

I am doing an image classification task using Keras.
I used the VGG16 architecture, which I thought would be easier to work with; the task is to classify MRI images as having a tumor or not.
As usual, I read all the images, reshape them to the same size (224×224×3), and normalise them by dividing by 255. Then I do a train/test split: 25% for testing and 75% for training.
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)
Then I trained and got a val_loss of 0.64 and a val_accuracy of 0.7261.
I saved the trained model to my Google Drive.
The next day, I followed the same procedure, loading the saved model in order to improve its performance.
I didn't change the model architecture; I simply loaded the saved model that had scored 0.7261 accuracy.
This time I got better performance: the val_loss is 0.58 and the val_accuracy is 0.7976.
I wondered how it reached this higher accuracy. Then I found that train_test_split splits the images at random, so some of the test data from the 1st training session became training data in the 2nd session. The model had already learned those images, so it predicted them well the 2nd time around.
I need to clarify: is the model truly learning the tumor patterns, or are we effectively training and testing the model on the same samples?
Thanks
When using train_test_split and validating in different sessions, always set your random seed. Otherwise, you will be using different splits and leaking data, as you stated. The model is not "learning" more; rather, it is being validated on data it has already trained on. You will likely get worse real-world performance.
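For instance, a fixed random_state makes the split identical across sessions (the seed value is arbitrary; stratify is optional but useful when the classes are imbalanced):

from sklearn.model_selection import train_test_split

# The same seed produces the same split in every session, so no
# test samples leak into a later training run.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42, stratify=y)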

YOLOv4 loss too high

I am using YOLOv4-tiny on a custom dataset of 26 classes that I collected from Open Images Dataset. The dataset is almost balanced (850 images per class, but a varying number of bounding boxes). When I used YOLOv4-tiny to train on just 3 classes, the loss got near 0.5 and the model was fairly accurate. But with 26 classes, as soon as the loss goes below 2 the model starts to overfit, and the predictions are also very inaccurate.
I have tried changing parameters like the learning rate, the momentum, and the input size, but whatever I do the model becomes worse than before. Using the regular YOLOv4 model rather than YOLOv4-tiny does not help either. How can I bring the loss further down?
Have you tried training with mAP? You can take a subset of your training set and make it the validation set, in the same way you made your training and test sets. Then run darknet.exe detector train data/obj.data yolo-obj.cfg yolov4.conv.137 -map. This will track mAP on your validation set; when the validation error starts going up, that is the time to stop training and prevent overfitting (this is called early stopping).
You need to run the training for classes*2000 iterations (this value is known as max_batches). However, for the best scores, max_batches should be at least 6000 iterations even when classes*2000 is smaller. Also remember that if you are using black-and-white images, you should change channels=3 to channels=1. You can stop your training once the avg loss settles down to something like 0.XXXX.
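As a rough sketch, the usual .cfg arithmetic from the darknet documentation works out like this for your 26-class model (edit the corresponding lines in your own yolo-obj.cfg):

# yolo-obj.cfg — sketch for 26 classes, per the darknet README rules
max_batches = 52000          # classes * 2000 = 26 * 2000
steps = 41600,46800          # 80% and 90% of max_batches
# in every [yolo] section:
classes = 26
# in the [convolutional] section immediately before each [yolo]:
filters = 93                 # (classes + 5) * 3 = (26 + 5) * 3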
Here's my mAP graph for 6000 iterations, which ran for 6.2 hours:
[Chart: avg loss with 6000 max_batches]
Moreover, you can follow the FAQ documentation by Stéphane Charette here.

TensorFlow image classification colab sheet from training material: newbie questions

Apologies if my questions are relatively simple, but I have recently been approaching TensorFlow with the aim of learning new skills.
In the example, there are several things I can't get:
In the explore-data section, the sizes of the datasets are reported as 60k/10k respectively for train and test.
Where is the size of the train/test split declared?
Packages like scikit-learn allow this to be specified as a percentage when invoking the split methods.
In the model-training part, when the 5 epochs are trained, the number 1875 appears below.
- What is that?
- I was expecting the training to run over the 60k items, but even multiplying 1875 by 5 doesn't reach 10k.
The dataset is loaded using the TensorFlow Datasets API.
The source itself defines the split: 60K (train) and 10K (test).
https://www.tensorflow.org/datasets/catalog/fashion_mnist
An epoch is a complete pass over all the training samples. Training is done in batches; in the example you refer to, a batch size of 32 is used. So to complete one epoch, 1875 batches (60000 / 32) are run.
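As a quick check, here is a minimal sketch (assuming the tf.keras.datasets loader and the default fit() batch size used in that tutorial) that reproduces the 1875 figure:

import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()
x_train = x_train / 255.0  # normalise pixel values to [0, 1]

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10),
])
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

# fit() defaults to batch_size=32, so each epoch shows
# 60000 / 32 = 1875 steps on the progress bar.
model.fit(x_train, y_train, epochs=5)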
Hope this helps.

when to stop training object detection tensorflow

I am training a Faster R-CNN model on a fruit dataset using a pretrained model provided in the Google API (faster_rcnn_inception_resnet_v2_atrous_coco).
I made a few changes to the default configuration (number of classes: 12, fine_tune_checkpoint: path to the pretrained checkpoint, and from_detection_checkpoint: true). The total number of annotated images I have is around 12000.
After training for 9000 steps, the accuracy I got is below 1 percent, though I was expecting it to be at least 50% (in evaluation nothing is detected, as accuracy is almost 0). The loss fluctuates between 0 and 4.
How many steps should I train for? I read an article that says to run around 800k steps, but isn't that the number of steps when training from scratch?
The FC layers of the model are changed because of the different number of classes, but that should not affect the classes already present in the pre-trained model, like 'apple', should it?
Any help would be much appreciated!
You shouldn't look at your training loss to determine when to stop. Instead, you should run your model through the evaluator periodically, and stop training when the evaluation mAP stops improving.
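With the TF1 Object Detection API, one common way to do this is to run the eval script in parallel with training and watch the results in TensorBoard; a rough sketch (directory names here are placeholders for your own setup):

# in one process: training
python object_detection/train.py --logtostderr \
    --pipeline_config_path=pipeline.config --train_dir=train_dir

# in a second process: evaluate each new checkpoint as it is saved
python object_detection/eval.py --logtostderr \
    --pipeline_config_path=pipeline.config \
    --checkpoint_dir=train_dir --eval_dir=eval_dir

# watch the eval mAP curves and stop once they plateau
tensorboard --logdir=.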

Keras training/testing results vary greatly after multiple runs

I am using Keras with the TensorFlow backend. The dataset I am working with is sequence data with a continuous Y value between 0 and 1. The dataset is split into a training set of size 1900 and a test set of size 400. I am using a VGG19 architecture that I created from scratch in Keras, and I train for 30 epochs.
My question is: when I run this architecture multiple times, I get very different results, with RMSE anywhere between 0.15 and 0.5. Is this normal for this type of data? Is it because I am not running enough epochs? The loss seems to stabilize around 0.024 at the end of each run. Any ideas?
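One way to tell genuine data sensitivity apart from run-to-run randomness is to pin every random seed before building the model; a minimal sketch (the seed value is arbitrary, and on TF 1.x the last call is tf.set_random_seed instead):

import random

import numpy as np
import tensorflow as tf

# Pin all three RNGs before any layers are built, so weight
# initialization, dropout, and shuffling repeat across runs.
random.seed(42)
np.random.seed(42)
tf.random.set_seed(42)  # tf.set_random_seed(42) on TF 1.x

# Note: GPU reductions can still introduce small nondeterminism,
# so expect near-identical rather than bit-identical results.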