I am new to machine learning and I built a neural network with 2 dense layers. When I was experimenting, I had the following observations:
When I decreased the number of nodes in each dense layer, I seemed to get better training and prediction accuracy. This was surprising to me because I would assume that the more nodes a dense layer has, the more the model can learn from the data. Why does decreasing the number of nodes improve accuracy?
The model also yielded better results when the number of nodes in each dense layer was not the same. For example, I got a better result when one dense layer had 5 nodes and the other had 10 than when both layers had 5 nodes or both had 10. Why is that? Is it common for inconsistent node counts in the dense layers to improve accuracy?
To answer your questions sequentially:
a) When you decreased the number of neurons in each dense layer and got better training and prediction accuracy, you reduced the overfitting in your problem. Removing some neurons from your layers acted like a regularizer and thus mitigated the overfitting effect. This is not an uncommon situation; depending on your dataset and the overall architecture of the neural network, decreasing the number of neurons in some layers may very well lead to better generalization of your model.
b) Note that answer a) does not apply if only the training accuracy improved when you decreased the number of nodes, since overfitting increases training accuracy but reduces the test/holdout accuracy.
The second question is case-dependent. When building neural networks from scratch, there is no guarantee that your problem will work better with approach A or approach B; this is why we do hyperparameter search and optimization, seeking the set of hyperparameters that minimizes the loss on the validation set.
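For illustration, here is a minimal random-search sketch over the widths of the two dense layers. Everything in it (the placeholder data, the binary-classification head, the candidate widths) is an assumption for the sketch, not part of your setup:

import random
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# Placeholder data, only so the sketch runs end to end.
x_train = np.random.rand(1000, 20)
y_train = (x_train.sum(axis=1) > 10).astype(int)

def build_model(n1, n2):
    # Two dense hidden layers of widths n1 and n2; binary-classification head assumed.
    model = Sequential([
        Dense(n1, activation='relu', input_shape=(20,)),
        Dense(n2, activation='relu'),
        Dense(1, activation='sigmoid'),
    ])
    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
    return model

best = None
for _ in range(10):  # 10 random trials over candidate widths
    n1, n2 = random.choice([5, 10, 20]), random.choice([5, 10, 20])
    history = build_model(n1, n2).fit(x_train, y_train, validation_split=0.2,
                                      epochs=20, verbose=0)
    val_loss = min(history.history['val_loss'])
    if best is None or val_loss < best[0]:
        best = (val_loss, n1, n2)

print('best (val_loss, n1, n2):', best)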
For common heuristics applied when building a model from scratch, particularly with Dense layers, see the following link: https://towardsdatascience.com/17-rules-of-thumb-for-building-a-neural-network-93356f9930af. Some of the heuristics apply to Dense layers in general; it does not matter whether their input, as in your problem, comes from an LSTM.
I am training an autoencoder DNN for a regression problem and need suggestions on how to improve the training process.
The total number of training samples is about 100,000. I use Keras to fit the model, setting validation_split = 0.1. After training, I plotted the loss curves and got the picture below. As can be seen, the validation loss is unstable and its mean is very close to the training loss.
My question is: based on this, what is the next step I should try to improve the training process?
[Edit on 1/26/2019]
The details of network architecture are as follows:
It has one latent layer of 50 nodes. The input and output layers each have 1000 nodes. The activation of the hidden layer is ReLU, and the loss function is MSE. For the optimizer, I use Adadelta with default parameter settings; I also tried setting lr=0.5 but got very similar results. The features of the data have been scaled to between -10 and 10, with a mean of 0.
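For reference, a minimal Keras sketch of the setup described above (layer sizes, loss, and optimizer follow the description; everything else is an assumption):

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

model = Sequential([
    Dense(50, activation='relu', input_shape=(1000,)),  # latent layer of 50 nodes
    Dense(1000, activation='linear'),                   # reconstruct the 1000 features
])
model.compile(optimizer='adadelta', loss='mse')
# model.fit(X, X, validation_split=0.1)  # autoencoder: the input is also the target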
From the graph provided, it appears that the network could not approximate the function that relates the input to the output.
If your features are on very different scales, i.e., one of them takes large values while the others take very small values, then you should normalize the feature vector.
For better training and testing results, you can follow these tips (a minimal sketch applying them follows the list):
Use a small network. A network with one hidden layer is enough.
Apply activations in the input as well as the hidden layers; use the ReLU activation function. The output layer must have a linear activation.
Prefer a small learning rate such as 0.001. Use the RMSProp optimizer; it works fine on most regression problems.
If you are not using the mean squared error loss, use it.
Try slow and steady learning and not fast learning.
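A minimal Keras sketch of the autoencoder above with these tips applied, i.e., a single small hidden layer, ReLU activation, a linear output, MSE loss, and RMSProp with a learning rate of 0.001 (layer sizes follow the question; nothing here is tuned):

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import RMSprop

model = Sequential([
    Dense(50, activation='relu', input_shape=(1000,)),  # one small hidden layer
    Dense(1000, activation='linear'),                   # linear output for regression
])
model.compile(optimizer=RMSprop(learning_rate=0.001), loss='mse')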
I have an FFNN with 2 hidden layers for a regression task that overfits almost immediately (epoch 2-5, depending on the number of hidden units). (ReLU, Adam, MSE, same number of hidden units per layer, tf.keras)
[Training/validation loss curves for 32 neurons and for 128 neurons not shown.]
I will be tuning the number of hidden units, but to limit the search space I would like to know what the upper and lower bounds should be.
As far as I know, it is better to have an overly large network and try to regularize it via L2 regularization or dropout than to lower the network's capacity -- because a larger network will have more local minima, but the actual loss values at those minima will be better.
Is there any point in trying to regularize (via e.g. dropout) a network that overfits from the get-go?
If so I suppose I could increase both bounds. If not I would lower them.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

model = Sequential()
model.add(Dense(n_neurons, activation='relu'))  # first hidden layer
model.add(Dense(n_neurons, activation='relu'))  # second hidden layer
model.add(Dense(1, activation='linear'))        # linear output for regression
model.compile(optimizer='adam', loss='mse')
Hyperparameter tuning is generally the hardest step in ML. In general, we try different values randomly, evaluate the model, and choose the set of values that gives the best performance.
Getting back to your question, you have a high variance problem (good in training, bad in testing).
There are eight things you can do, in order:
Make sure your test and training distributions are the same.
Make sure you shuffle the data and then split it into two sets (train and test).
A good train:test split would be 105:15K.
Use a deeper network with Dropout/L2 regularization (see the sketch below).
Increase your training set size.
Try early stopping.
Change your loss function.
Change the network architecture (switch to ConvNets, LSTMs, etc.).
Depending on your computation power and time you can set a bound to the number of hidden units and hidden layers you can have.
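A hedged sketch of the Dropout/L2 point for the two-hidden-layer regression network above; the width, dropout rate, and L2 factor are assumed values to illustrate the idea, not tuned recommendations:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras.regularizers import l2

n_neurons = 128  # keep the network wide and regularize it instead of shrinking it
model = Sequential([
    Dense(n_neurons, activation='relu', kernel_regularizer=l2(1e-4)),
    Dropout(0.3),
    Dense(n_neurons, activation='relu', kernel_regularizer=l2(1e-4)),
    Dropout(0.3),
    Dense(1, activation='linear'),
])
model.compile(optimizer='adam', loss='mse')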
Regarding your statement that "a larger network will have more local minima":
Nope, this is not quite true. In reality, as the number of dimensions increases, the chance of getting stuck in a local minimum decreases, so we usually ignore the problem of local minima; it is very rare. For a local/global minimum, the derivatives across all dimensions of the parameter space must be zero, which is highly unlikely in a typical model.
One more thing: I noticed you are using a linear unit for the last layer. I suggest you go for ReLU instead. In general, we do not need negative values in regression, and it will reduce the test/train error.
Consider the MSE term:
MSE = 1/2 * (y_true - y_prediction)^2
Because y_prediction can take negative values, the MSE term may blow up to large values as y_prediction becomes highly negative or highly positive. Using a ReLU for the last layer makes sure that y_prediction is non-negative, so a lower error can be expected.
Let me try to substantiate some of the ideas here, with references to the Deep Learning book by Ian Goodfellow et al., which is available for free online:
Chapter 7: Regularization. The most important point is data: one can and should avoid regularization when large amounts of data that approximate the true distribution are available. In your case, it looks like there might be a significant discrepancy between training and test data. You need to ensure the data is consistent.
Section 7.4: Data augmentation. With regard to data, Goodfellow talks about data augmentation and inducing regularization by injecting noise (most likely Gaussian), which mathematically has a similar effect. This works well for regression tasks because it keeps the model from latching onto a single feature and overfitting.
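One way to inject Gaussian noise at the input in Keras is the GaussianNoise layer, which is only active during training; a minimal sketch (the standard deviation, input size, and layer widths are assumed values):

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import GaussianNoise, Dense

n_features = 20  # hypothetical input dimensionality
model = Sequential([
    GaussianNoise(0.1, input_shape=(n_features,)),  # noise applied only during training
    Dense(32, activation='relu'),
    Dense(1, activation='linear'),
])
model.compile(optimizer='adam', loss='mse')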
Section 7.8: Early stopping. Early stopping is useful if you just want a model with the best test error. But again, this only works if your data allows the training to infer the test data; if there is an immediate increase in test error, training would stop immediately.
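In Keras this is typically done with the EarlyStopping callback; a minimal sketch (the patience value and the model/data names are assumptions):

from tensorflow.keras.callbacks import EarlyStopping

early_stop = EarlyStopping(monitor='val_loss', patience=10, restore_best_weights=True)
# model.fit(x_train, y_train, validation_split=0.1, epochs=500, callbacks=[early_stop])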
Section 7.12: Dropout. Just applying dropout to a regression model doesn't necessarily help. In fact, "when extremely few labeled training examples are available, dropout is less effective". For classification, dropout forces the model not to rely on single features, but in regression all inputs might be required to compute a value rather than a class.
Chapter 11: Practical methodology. This chapter emphasises the use of baseline models to ensure that the training task is not trivial. If a simple linear regression can achieve similar behaviour, then you don't even have a training problem to begin with.
The bottom line is that you can't just play with the model and hope for the best. Check the data, understand what is required, and then apply the corresponding techniques. For more details read the book; it's very good. Your starting point should be a simple regression model with one layer and very few neurons; see what happens, then experiment incrementally.
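A minimal sketch of such a baseline check using an ordinary linear regression (x_train, y_train, x_val, y_val are placeholder names for your own data split):

from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

baseline = LinearRegression().fit(x_train, y_train)        # one-line baseline model
print(mean_squared_error(y_val, baseline.predict(x_val)))  # compare with the network's validation MSE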
1.) Batchnorm is always used in deep convolutional neural networks. But is it also used in non-CNNs, i.e., in plain networks with just fully-connected layers?
2.) Is batchnorm used in shallow CNNs?
3.) Suppose I have a CNN with an input image and an input array IN_array, and the output after the last fully-connected layer is an array I call FC_array. I want to concat that FC_array with the IN_array:
CONCAT_array = tf.concat(values=[FC_array, IN_array], axis=-1)  # concatenate along the feature axis
Is it useful to have a batchnorm after the concat layer? Or should that batchnorm be applied just to the FC_array, before the concat layer?
For information, the IN_array is a tf.one_hot() vector.
Thank you
TL;DR: 1. Yes 2. Yes 3. No
TS;WM:
Batch normalization was a great invention by Sergey Ioffe and Christian Szegedy in early 2015. Back in those days, battling vanishing or exploding gradients was an everyday problem. Read that paper if you want to gain a deep understanding, but basically this quote from the abstract should give you some idea:
Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs.
They did in fact first use batch normalization for DCNNs, which allowed them to beat human performance in top-5 ImageNet classification, but any network with nonlinearities can benefit from batch normalization, including a network consisting of fully-connected layers.
Yes, it is used for shallow CNNs too. Any network with more than one layer can benefit from it, although it is true that deeper networks benefit more.
First of all, one-hot vectors should never be normalized. Normalization means you subtract the mean and divide by the standard deviation, thus creating data with zero mean and unit variance. If you do this to a one-hot vector, then the cross-entropy loss calculation will be completely off. Second, there is no point in normalizing a concat layer separately, since it does not change the values, it just concatenates them. Batch normalization is done on the input of a layer, so the layer after the concat, which will receive the concatenated values, can do it if necessary.
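To make the placement concrete, a minimal Keras functional-API sketch consistent with the reasoning above (the shapes and the final Dense layer are hypothetical): batch normalization is applied to the dense branch only, and the one-hot input is concatenated untouched.

from tensorflow.keras import layers, Input, Model

fc_branch = Input(shape=(64,))   # stands in for FC_array
onehot_in = Input(shape=(10,))   # stands in for the one-hot IN_array

x = layers.BatchNormalization()(fc_branch)      # normalize only the dense features
merged = layers.Concatenate()([x, onehot_in])   # concat just stacks values, no extra batchnorm here
output = layers.Dense(1)(merged)

model = Model(inputs=[fc_branch, onehot_in], outputs=output)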
Context:
Suppose we have a simple 3-layer feed-forward network. The hidden size of the first linear layer is 100000 -- W1[input_size, 100000], in which input_size is a number much smaller than 100000. Some of the neurons won't be learning anything. I want to select these neurons and shut them down using pruning.
Expected outcomes
After pruning the selected neurons, we will have a smaller network with fewer neurons in the first layer, say reduced to 500, and this smaller network should turn out to have the same predictive capacity as the large one.
My implementation:
According to some criterion (metrics applied to check weight similarities after each backpropagation update), I have cherry-picked the indices of the neurons I want to shut down, e.g., [1, 7, 8, ...].
Zero out the weights at those indices in W1, i.e., W1[:, [1, 7, 8, ...]] = 0, so that no information is passed forward through these neurons to the next layer.
Will that be enough? Should I also be manually intervening in the backpropagation? Zeroing out neurons only stops the forward computation, but for learning/updating the weights, backpropagation matters more. Since I am using PyTorch, it would be great if illustrations are provided in PyTorch; other frameworks like TensorFlow and Keras are also fine.
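For reference, a minimal PyTorch sketch of the masking idea described above (layer sizes and indices are hypothetical; note that nn.Linear stores its weight as [out_features, in_features], i.e., transposed relative to the W1 notation above):

import torch
import torch.nn as nn

layer1 = nn.Linear(in_features=64, out_features=100000)
dead = torch.tensor([1, 7, 8])               # indices of the neurons to shut down

mask = torch.ones_like(layer1.weight)
mask[dead, :] = 0                            # rows correspond to output neurons here

with torch.no_grad():
    layer1.weight.mul_(mask)                 # zero the forward contribution of the pruned neurons
    layer1.bias[dead] = 0

# Keep the pruned weights at zero during training by masking their gradients as well.
layer1.weight.register_hook(lambda grad: grad * mask)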
I have a 3D convolutional neural network [Keras, TensorFlow] and 3D brain images of people with advanced Alzheimer's, early Alzheimer's, and healthy people (3 classes). I have a training set of 324 images and a test set of 74 images. When I trained my CNN, I had about 65-70% accuracy, but for the test set I had only 30-40%. When I used the test data as validation data, I had no more than 37% accuracy for the training set as well, and the loss stayed at the same level the whole time. No matter which parameters I change, the result is the same. I load my prepared and normalized data from an .h5 file into Python, and the input has shape (None, 90, 120, 80, 1). I have no idea what may be wrong; I checked the code many times and everything seems to be correct.
My CNN has 4 Conv3D layers, 3 max-pooling layers, ReLU activations with batch normalization, 3 dense layers with dropout, and a softmax output.
I appreciate any help or ideas.
If you only have 65-70% accuracy on your training data, that is really poor and indicates your neural network is not converging properly. If the structure is complex enough, your network should be capable of at least overfitting the training data, by effectively learning to hardcode the outputs for the small input sample. By the sound of it, your structure is complex enough.
The first thing to try is to reduce the learning rate by a factor of 10, and turn off validation/early stopping/normalisation/regularisation and any other ways to prevent overfitting. Then rinse and repeat - more iterations, reducing the LR by a factor of 10 each time - until you can overfit the training data to the point where training accuracy gets close to 100%.
You can then work on putting in the proper early stopping, dropout, normalisation, regularisation etc to prevent overfitting with a learning rate you know works.
If dropping the LR doesn't lead to overfitting, however small the LR gets, then you have some issue with your NN structure.
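As a hedged illustration of that first step, assuming a Keras model like the one described (model, x_train, and y_train are placeholder names for your existing model and data; the loss is an assumption for a 3-class softmax):

from tensorflow.keras.optimizers import Adam

# Recompile with a learning rate 10x smaller than the Adam default of 1e-3,
# with no validation split or regularization, to check whether the network can overfit.
model.compile(optimizer=Adam(learning_rate=1e-4),
              loss='categorical_crossentropy',
              metrics=['accuracy'])
# model.fit(x_train, y_train, epochs=100, batch_size=4)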