Once a CNN is trained, should its outputs be deterministic? - tensorflow

I just trained a CNN with TensorFlow/Keras and saved it as a model. I ran about 1000 inputs through it multiple times, and each time got a slightly different prediction accuracy. The accuracy was good, and I am not concerned with the performance; however, I thought that CNN models, once trained, should be deterministic. That is, any input will always be classified the same way. Is this not the case? Is there variability in the way a model can predict once trained? If not, hopefully I can assume that I have unknowingly programmed some variability into my code. Any help would be appreciated.

Once a CNN is trained, should its outputs be deterministic?
Well, in theory, yes. In practice, as Peter Duniho points out in his excellent explanatory comment, we can see very small deviations because of the way values are calculated, aggregated, etc.
That said, the probability of such small deviations changing the predicted category (and therefore the accuracy) of a classification model is so small that I'd be almost certain something else is at play in your example, even over a sample size of 1000.
Have you left on some training-time behaviour, such as dropout or batch normalisation running in training mode? Are you certain you are evaluating precisely the same 1000 inputs each time? I'd suspect the issue is in the code rather than rounding errors.
Can you determine which specific classification changes?
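If you want to rule out the model itself, a minimal sanity check (the model path and input file below are hypothetical stand-ins for your own saved model and fixed batch of ~1000 inputs) is to push the same batch through predict twice and compare both the raw outputs and the argmax labels:

    import numpy as np
    import tensorflow as tf

    # "model.h5" and "inputs.npy" are placeholders; substitute your own
    # saved model and the exact same inputs each run.
    model = tf.keras.models.load_model("model.h5")
    x = np.load("inputs.npy")

    # predict() runs in inference mode, so dropout is disabled and batch
    # normalization uses its frozen moving statistics.
    p1 = model.predict(x)
    p2 = model.predict(x)

    print(np.allclose(p1, p2))                             # ideally True
    print(np.abs(p1 - p2).max())                           # should be ~0
    print((p1.argmax(axis=1) != p2.argmax(axis=1)).sum())  # labels that changed

If the last number is 0 on repeated runs, the variability is coming from your evaluation code (e.g. a reshuffled dataset), not from the model.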

Related

Validation loss and accuracy have a lot of 'jumps'

Hello everyone, so I made this CNN model.
My data:
Train folder -> 30 classes -> 800 images each -> 24,000 in total
Validation folder -> 30 classes -> 100 images each -> 3,000 in total
Test folder -> 30 classes -> 100 images each -> 3,000 in total
- I've applied data augmentation (on the train data).
- I have 5 conv layers with filters 32 -> 64 -> 128 -> 128 -> 128, each with max pooling and batch normalization.
- I added dropout of 0.5 after the flatten layer.
Train part looks good. Validation part has a lot of 'jumps' though. Does it overfit?
Is there any way to fix this and make validation part more stable?
Note: I plan to increase the epochs on my final model; I'm just experimenting to see what works best, since the model takes a lot of time to train. So for now I train with 20 epochs.
I've applied data augmentation (on the train data).
What does this mean? What kind of data did you add and how much? You might think I'm nitpicking, but if the distribution of the augmented data is different enough from the original data, then this will indeed cause your model to generalize poorly to the validation set.
Increasing your epochs isn't going to help here; your training loss is already decreasing reasonably. Training your model for longer is a good step if the validation loss is also decreasing nicely, but that's obviously not the case here.
Some things I would personally try:
Try decreasing the learning rate (see the sketch after this list).
Try training the model without the augmented data and see how the validation loss behaves.
Try splitting the augmented data so that it's also contained in the validation set and see how the model behaves.
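A minimal sketch of the first item, setting the learning rate explicitly in Keras. The stand-in model, the input size, and the 1e-4 value are only illustrative, not recommendations:

    import tensorflow as tf

    # Stand-in model; substitute your own 5-conv-block CNN.
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(128, 128, 3)),       # assumed image size
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(30, activation="softmax"),  # 30 classes
    ])

    # Adam defaults to 1e-3; dropping to 1e-4 is a common first step
    # when the validation loss jumps around.
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
        loss="categorical_crossentropy",
        metrics=["accuracy"],
    )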
Train part looks good. Validation part has a lot of 'jumps' though. Does it overfit?
The answer is yes: the so-called 'jumps' in the validation part indicate that the model is not generalizing well to the validation data, so your model is likely overfitting.
Is there any way to fix this and make validation part more stable?
To fix this you can try the following:
Increase the size of your training set.
Apply regularization techniques.
Use early stopping (see the sketch after this list).
Reduce the complexity of your model.
Tune hyperparameters such as the learning rate.
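A minimal early-stopping sketch. The random data, shapes, and tiny model are placeholders just to make it runnable; substitute your own images and CNN:

    import numpy as np
    import tensorflow as tf

    # Placeholder data: 30 classes, 64x64 RGB images.
    x_train = np.random.rand(200, 64, 64, 3).astype("float32")
    y_train = tf.keras.utils.to_categorical(np.random.randint(0, 30, 200), 30)
    x_val = np.random.rand(50, 64, 64, 3).astype("float32")
    y_val = tf.keras.utils.to_categorical(np.random.randint(0, 30, 50), 30)

    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(64, 64, 3)),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(30, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])

    # Stop once val_loss hasn't improved for 5 epochs and roll back to
    # the best weights seen during training.
    early_stop = tf.keras.callbacks.EarlyStopping(
        monitor="val_loss", patience=5, restore_best_weights=True)

    model.fit(x_train, y_train,
              validation_data=(x_val, y_val),
              epochs=100,                  # the callback usually stops earlier
              callbacks=[early_stop])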

TensorFlow: Fit gives great val_acc, but evaluate gives low acc

I have some data (reuters).
I combine the training and test data into one large data set.
I shuffle the entire data set.
I then split the data set 50/50: 50% for training and 50% for testing.
I then setup a sequential model (Embedding, GRU, BatchNormalization, Dense).
I compile with the adam optimizer and loss = sparse_categorical_crossentropy.
I run fit on it, for 5 epochs, with validation_split = 0.2
I get a val_acc of 95%. I am very happy with this.
I then call evaluate with the test data, and get acc = 71%
When I repeat evaluate with the train data (instead of test data), just to see what happens, I get 95% acc (as I should).
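A minimal sketch of the pipeline described above. The vocabulary size, sequence length, and layer widths are assumptions; the rest follows the listed steps:

    import numpy as np
    import tensorflow as tf

    # Combine the reuters train and test splits into one pool.
    (x1, y1), (x2, y2) = tf.keras.datasets.reuters.load_data(num_words=10000)
    x = np.concatenate([x1, x2])
    y = np.concatenate([y1, y2])

    # Shuffle the whole pool, then split 50/50.
    idx = np.random.permutation(len(x))
    x, y = x[idx], y[idx]
    half = len(x) // 2
    x_train, y_train, x_test, y_test = x[:half], y[:half], x[half:], y[half:]

    # Pad to a fixed length (200 is an arbitrary choice).
    x_train = tf.keras.preprocessing.sequence.pad_sequences(x_train, maxlen=200)
    x_test = tf.keras.preprocessing.sequence.pad_sequences(x_test, maxlen=200)

    # Embedding -> GRU -> BatchNormalization -> Dense; widths are assumptions.
    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(10000, 64),
        tf.keras.layers.GRU(64),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.Dense(46, activation="softmax"),  # 46 reuters topics
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    model.fit(x_train, y_train, epochs=5, validation_split=0.2)
    print(model.evaluate(x_test, y_test))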
I am trying to understand what is wrong with my acc score with the test data.
The data is shuffled between train and test each time, so it cannot be related to the data.
I tried the checkpoint save/restore trick, but that did not seem to help.
I have reduced epochs in case of overfitting, but this has not helped.
I am curious what is going on. The only thing I can think of is that there is some sort of overfitting going on that carries over to the test data but does NOT impact val_acc.
(Side note: the 20% data saved for validation during fit should not cause an overfitting problem, correct?)
Any ideas what could be wrong with my approach?
Thank you.
Edit 1:
Okay, I might be onto something. I noticed something odd about the test data that is included with the reuters dataset, compared to the training data.
In previous experiments, results were always poor when evaluating against the test set, versus an unused portion of the training data.
This time, I combined the training and test data sets into one set, and shuffled.
Then shuffled some more, and then generated pseudo data around it, and shuffled some more.
Then I peeled off 20% to use as test data.
This time it worked. I got 95% for training, validation and evaluation accuracy.
I tried searching for information on the test set to see if anyone came up with anything, but found no results. So for now I'm going to chalk it up to test data that is significantly different from the training set.
Edit 2:
Never mind edit 1. I think I was contaminating my test data with a pseudo-generated version of the training data that was close enough to inflate the score.
The only conclusion I can draw is that there is a lot of overfitting going on during my training, AND that using validation data during training is misleading, as it is also being overfit.
(Why the validation data also ends up being overfit, I do not know.)
Overfitting can occur for many reasons; to overcome it you can try methods like the following (a sketch of the first two follows the list):
L1/L2 regularization.
Dropout layers: By using dropout layers in the network, we ignore a subset of units of our network with a set probability. Using dropout, we can reduce interdependent learning among units, which may have led to overfitting.
Early stopping: Monitors the performance of the model for every epoch on a held-out validation set during the training, and terminates the training conditional on the validation performance.
Feature selection: Improves the machine learning process and increases the predictive power of machine learning algorithms by selecting the most important variables and eliminating unnecessary and irrelevant features.
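As an illustration of the first two items, a minimal Keras sketch combining L2 weight penalties and dropout. The input width, layer sizes, and the 0.01/0.5 rates are placeholders:

    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(200,)),        # assumed input width
        tf.keras.layers.Dense(
            64, activation="relu",
            kernel_regularizer=tf.keras.regularizers.l2(0.01)),  # L2 penalty
        tf.keras.layers.Dropout(0.5),               # randomly drop half the units
        tf.keras.layers.Dense(
            64, activation="relu",
            kernel_regularizer=tf.keras.regularizers.l2(0.01)),
        tf.keras.layers.Dropout(0.5),
        tf.keras.layers.Dense(46, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])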

How do I know when to stop training my CNN?

I've been training my CNN and got the following as results:
[figure: training/validation accuracy and loss per epoch]
I just know that the training and validation accuracy needs to both be high, but are these numbers good enough? How do I know when to stop? Should I concern myself with the losses, or only accuracy? Which epoch shows the best result so far?
The loss value indicates how poorly or how well a model behaves after each iteration of optimization, whereas the accuracy of a model is usually determined after the model parameters are learned and is expressed as a percentage.
Yes, training and validation accuracy should both be high, but what counts as high enough always depends on the subject area you are dealing with. In the medical domain, for example, these numbers would not be good.
If you have serious class imbalance, your model will maximize accuracy by simply always picking the most common class, but this would not be a useful model. In this case cross entropy or log-loss would be a better loss function to optimize.
Generally, the lower the loss, the better the model, unless the model has overfitted to the training data.
The 10th epoch is the best so far: that is where you got the highest validation accuracy and the lowest validation loss.
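If you want to keep whichever epoch turns out best automatically, a minimal sketch with Keras's ModelCheckpoint callback. Here model, x_train, y_train, x_val, y_val stand for your existing compiled model and data, and the file name is a placeholder:

    import tensorflow as tf

    # Overwrite the file only when val_loss improves, so the saved weights
    # always belong to the best epoch seen so far.
    checkpoint = tf.keras.callbacks.ModelCheckpoint(
        "best_model.h5",        # placeholder path
        monitor="val_loss",
        save_best_only=True,
    )

    model.fit(
        x_train, y_train,                 # your training data
        validation_data=(x_val, y_val),   # your validation data
        epochs=30,
        callbacks=[checkpoint],
    )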

Tensorflow: Increasing number of duplicate predictions while training

I have a multilayer perceptron with 5 hidden layers of 256 neurons each. When I start training, I get different prediction probabilities for each training sample until epoch 50, but then the number of duplicate predictions increases; by epoch 300, 30% of the predictions are duplicates. This does not make sense, since the input data is different for all training samples. Any idea what causes this behavior?
Clarifications:
with "duplicate predictions", I mean items with the exactly same predicted probability to belong to class A (it's a binary classification problem)
I have 4000 training samples with 200 features each, and all samples are different; it does not make sense that the number of duplicate predictions increases to 30% while training. So I wonder what can cause this behavior.
One point: you say you are doing binary prediction, and even with your clarification, "duplicate predictions" is hard to interpret. I am guessing that you have two outputs for your binary classifier, one for class A and one for class B, and you are getting roughly the same value for a given sample. If that's the case, then the first thing to do is to use 1 output. A binary classification problem is better modeled with 1 output that ranges between 0 and 1 (sigmoid the output neuron), as sketched below. This way there will be no ambiguity: the network will have to choose one or the other, or when it's confused you'll get ~0.5 and it will be clear.
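A minimal sketch of the single-output formulation, assuming the 200 features per sample described in the question (the hidden layer size is a placeholder):

    import tensorflow as tf

    # One sigmoid output in [0, 1] instead of two competing class outputs.
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(200,)),       # 200 features per sample
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",      # matches the single output
                  metrics=["accuracy"])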
Second, it is very common for a network to start learning well and then perform more poorly after overtraining, especially with small datasets such as yours. In fact, even with the little knowledge I have of your dataset, I would put a small bet on you getting better performance out of an algorithm like XGBoost than out of a neural network (I assume you're using a neural net and not literally a perceptron).
But regarding the performance degrading over time: when this happens, you want to look into something called "early stopping". At some point the network will start memorizing the input, and that may be part of what's happening. Essentially, you train until the performance on your held-out test data starts to worsen.
To address this you can apply various forms of regularization (L2 regularization, dropout, and batch normalization all come to mind). You can also reduce the size of your network: 5 layers of 256 neurons sounds too big for the problem. Try trimming this down and I bet your results will improve. There is a sweet spot for architecture size in neural networks; when your network is too large it can, and often will, overfit, and when it's too small it won't be expressive enough for the data. Andrew Ng's Coursera class has some helpful practical advice on dealing with this.

Using unlabeled dataset in Keras

Usually, when using Keras, the datasets used to train the neural network are labeled.
For example, if I have 100,000 rows of patients with 12 fields per row, then the last field will indicate whether this patient is diabetic or not (0 or 1).
Then, after training is finished, I can insert a new record and predict whether this person is diabetic or not.
But in the case of unlabeled datasets, where I cannot label the data for some reason, how can I train the neural network to learn that these are the normal records, so that any new record that does not match them is flagged as malicious or not accepted?
This is called one-class learning and is usually done using autoencoders. You train an autoencoder on the training data to reconstruct the data itself, so the labels in this case are the inputs themselves. This gives you a reconstruction error. https://en.wikipedia.org/wiki/Autoencoder
Now you can define a threshold on the reconstruction error for deciding whether the data is benign or not. The hope is that the reconstruction of the good data is better than the reconstruction of the bad data.
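A minimal sketch of this approach, assuming 12 numeric fields per record as in the question. The layer sizes, the random training data, and the 99th-percentile threshold rule are all placeholders:

    import numpy as np
    import tensorflow as tf

    # Hypothetical "normal" records: n rows of 12 fields each.
    x_train = np.random.rand(1000, 12).astype("float32")

    # Small autoencoder: compress 12 fields to 4 and reconstruct them.
    autoencoder = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(12,)),
        tf.keras.layers.Dense(8, activation="relu"),
        tf.keras.layers.Dense(4, activation="relu"),   # bottleneck
        tf.keras.layers.Dense(8, activation="relu"),
        tf.keras.layers.Dense(12),                     # reconstruction
    ])
    autoencoder.compile(optimizer="adam", loss="mse")

    # The input is its own target.
    autoencoder.fit(x_train, x_train, epochs=20, verbose=0)

    # Pick a threshold from the training reconstruction errors;
    # the 99th percentile is an arbitrary illustrative choice.
    recon = autoencoder.predict(x_train, verbose=0)
    errors = np.mean((recon - x_train) ** 2, axis=1)
    threshold = np.percentile(errors, 99)

    def is_suspicious(record):
        """Flag a record whose reconstruction error exceeds the threshold."""
        rec = autoencoder.predict(record[np.newaxis, :], verbose=0)[0]
        return float(np.mean((rec - record) ** 2)) > threshold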
Edit to answer the question about the difference in performance between supervised and unsupervised learning.
This cannot be said with any certainty, because I have not tried it and I do not know what the final accuracy would be. As a rough estimate, supervised learning will perform better on data like the training data, because more information is supplied to the algorithm. However, if the actual data is quite different from the training data, the network will underperform in practice, while the autoencoder tends to deal better with unfamiliar data. Additionally, as a rule of thumb you should have about 5000 examples per class to train a neural network reliably, so labeling could take some time. But you will need some labeled data for testing anyway.
It sounds like you need to fit two different models:
a model for bad record detection
a model for prediction of a patient's likelihood to be diabetic
For both of these models you will need labels. For the first model, your labels would indicate whether the record is good or bad (malicious); for the second, whether the patient is diabetic or not.
In order to detect bad records, you may find that simple logistic regression or SVM performs adequately.
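For the bad-record detector, a minimal scikit-learn sketch. The data here is randomly generated just to make the example runnable; in practice X and y would be your labeled records:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Placeholder data: 12 fields per record; label 1 = bad record, 0 = good.
    X = np.random.rand(500, 12)
    y = np.random.randint(0, 2, size=500)

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print(clf.score(X_test, y_test))       # accuracy on held-out records
    print(clf.predict_proba(X_test[:1]))   # [P(good), P(bad)] for one record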