The order of pooling and normalization layers in a convnet - TensorFlow

I'm looking at the TensorFlow implementation of ORC on CIFAR-10, and I noticed that after the first conv layer they do pooling, then normalization, but after the second layer they do normalization, then pooling.
I'm just wondering what would be the rationale behind this, and any tips on when/why we should choose to do norm before pool would be greatly appreciated. Thanks!

It should be pooling first, normalization second.
The original code link in the question no longer works, but I'm assuming the normalization being referred to is batch normalization, though the main idea will probably apply to other normalization schemes as well. As the authors note in the paper introducing batch normalization, one of its main purposes is "normalizing layer inputs". The simplified version of the idea: if the inputs to each layer have a nice, reliable distribution of values, the network can train more easily. Putting the normalization second allows this to happen.
As a concrete example, consider the activations [0, 99, 99, 100]. To keep things simple, we use a 0-1 normalization and a max pooling with kernel 2. If the values are first normalized, we get [0, 0.99, 0.99, 1]; pooling then gives [0.99, 1], which does not provide a nice distribution of inputs to the next layer. If we instead pool first, we get [99, 100]; normalizing then gives [0, 1]. This means we can control the distribution of the inputs to the next layer to be what we want it to be to best promote training.
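Here is a minimal NumPy sketch of that arithmetic (the helper functions and the reshape-based pooling are just for illustration, not taken from any particular codebase):

import numpy as np

acts = np.array([0., 99., 99., 100.])

def minmax(x):
    # 0-1 normalization: map the smallest value to 0 and the largest to 1
    return (x - x.min()) / (x.max() - x.min())

def maxpool2(x):
    # max pooling with kernel size 2 and stride 2
    return x.reshape(-1, 2).max(axis=1)

print(maxpool2(minmax(acts)))  # norm -> pool: [0.99 1.  ]  (skewed inputs for the next layer)
print(minmax(maxpool2(acts)))  # pool -> norm: [0. 1.]      (the distribution we actually want)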

Normalization is just normalization: after normalization, the max value is still the max value among all of them.
So Normalization->Pooling and Pooling->Normalization give the same result.

Related

Keras: Shuffling dataset while using LSTM

Correct me if I am wrong, but according to the official Keras documentation, the fit function has the argument shuffle=True by default, hence it shuffles the whole training dataset on each epoch.
However, the point of using recurrent neural networks such as LSTM or GRU is to use the precise order of the data, so that the state from the previous data influences the current one.
If we shuffle all the data, all the logical sequences are broken. Thus I don't understand why there are so many examples of LSTMs where the argument is not set to False. What is the point of using an RNN without sequences?
Also, when I set the shuffle option to False, my LSTM model is less performant even though there are dependencies between the data: I use the KDD99 dataset, where the connections are linked.
"If we shuffle all the data, all the logical sequences are broken."
No, the shuffling happens on the batch axis, not on the time axis.
Usually, your data for an RNN has a shape like this: (batch_size, timesteps, features)
Usually, you give your network not just one sequence to learn from, but many sequences. Only the order in which these many sequences are trained on gets shuffled; the sequences themselves stay intact.
Shuffling is almost always a good idea, because your network should learn the training examples themselves, not their order.
That being said, there are cases where you indeed have only one huge sequence to learn from and still divide it into several batches. In that case, you are absolutely right that shuffling would have a huge negative impact, so don't do it there!
Note: RNNs have a stateful parameter that you can set to True. In that case the last state of the previous batch will be passed to the following one, which effectively makes your RNN see all batches as one huge sequence. So absolutely do this if you have one huge sequence spread over multiple batches.
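A hedged tf.keras sketch of what gets shuffled (the shapes, layer sizes, and random data below are made up purely for illustration):

import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

n_sequences, timesteps, features = 1000, 20, 8
X = np.random.rand(n_sequences, timesteps, features)  # many independent sequences
y = np.random.rand(n_sequences, 1)

model = Sequential([LSTM(32, input_shape=(timesteps, features)), Dense(1)])
model.compile(optimizer='adam', loss='mse')

# shuffle=True only permutes the n_sequences axis between epochs;
# the order of the timesteps inside each sequence stays intact.
model.fit(X, y, batch_size=32, shuffle=True, epochs=2)

# For one huge sequence split across batches, use LSTM(..., stateful=True)
# with a fixed batch size and shuffle=False so state carries across batches.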

Neural network immediately overfitting

I have an FFNN with 2 hidden layers for a regression task that overfits almost immediately (epoch 2-5, depending on the number of hidden units). (ReLU, Adam, MSE, same number of hidden units per layer, tf.keras)
(Loss curves for 32 neurons and for 128 neurons are not shown here.)
I will be tuning the number of hidden units, but to limit the search space I would like to know what the upper and lower bounds should be.
As far as I know, it is better to have a network that is too large and try to regularize it via L2 regularization or dropout than to lower the network's capacity, because a larger network will have more local minima, but the actual loss value will be better.
Is there any point in trying to regularize (via e.g. dropout) a network that overfits from the get-go?
If so, I suppose I could increase both bounds. If not, I would lower them.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

model = Sequential()
model.add(Dense(n_neurons, activation='relu'))  # hidden layer 1
model.add(Dense(n_neurons, activation='relu'))  # hidden layer 2
model.add(Dense(1, activation='linear'))        # single-value regression output
model.compile(optimizer='adam', loss='mse')
Hyperparameter tuning is generally the hardest step in ML. In general we try different values randomly, evaluate the model, and choose the set of values that gives the best performance.
Getting back to your question: you have a high variance problem (good in training, bad in testing).
There are eight things you can do, in order:
1. Make sure your test and training distributions are the same.
2. Make sure you shuffle and then split the data into two sets (test and train).
3. Use a good train:test split, e.g. 105K:15K.
4. Use a deeper network with Dropout/L2 regularization (see the sketch after this list).
5. Increase your training set size.
6. Try early stopping (also shown in the sketch below).
7. Change your loss function.
8. Change the network architecture (switch to ConvNets, LSTMs, etc.).
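A hedged tf.keras sketch of items 4 and 6 (the layer widths, dropout rate, patience, and the X_train/y_train names are placeholder assumptions, not taken from the question):

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras.callbacks import EarlyStopping

model = Sequential([
    Dense(128, activation='relu'),
    Dropout(0.3),                      # regularization between hidden layers
    Dense(128, activation='relu'),
    Dropout(0.3),
    Dense(1, activation='linear'),
])
model.compile(optimizer='adam', loss='mse')

early_stop = EarlyStopping(monitor='val_loss', patience=10, restore_best_weights=True)
# model.fit(X_train, y_train, validation_split=0.15, epochs=200, callbacks=[early_stop])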
Depending on your computation power and time, you can set bounds on the number of hidden units and hidden layers you can have.
"because a larger network will have more local minima."
Nope, this is not quite true. In reality, as the number of input dimensions increases, the chance of getting stuck in a local minimum decreases, so we usually ignore the problem of local minima; it is very rare. For a local/global minimum, the derivatives across all dimensions of the working space must be zero, which is highly unlikely in a typical model.
One more thing: I noticed you are using a linear unit for the last layer. I suggest you use ReLU instead. In general we do not need negative values in regression, and it will reduce test/train error.
Consider the MSE term, 1/2 * (y_true - y_prediction)^2: because y_prediction can be a negative value, the whole MSE term may blow up to large values as y_prediction gets highly negative or highly positive.
Using a ReLU for the last layer makes sure that y_prediction is non-negative, so a lower error can be expected.
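As a minimal sketch of that suggestion, assuming the targets really are non-negative:

from tensorflow.keras.layers import Dense

output_linear = Dense(1, activation='linear')  # current output layer, can predict negative values
output_relu = Dense(1, activation='relu')      # clamps predictions at >= 0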
Let me try to substantiate some of the ideas here, referenced from Ian Goodfellow et al.'s Deep Learning book, which is available for free online:
Chapter 7 (Regularization): The most important point is data; one can and should avoid regularization when there are large amounts of data that approximate the distribution well. In your case, it looks like there might be a significant discrepancy between training and test data. You need to ensure the data is consistent.
Section 7.4 (Dataset Augmentation): With regard to data, Goodfellow talks about data augmentation and inducing regularization by injecting noise (most likely Gaussian), which mathematically has the same effect. This noise works well with regression tasks, as you limit the model from latching onto a single feature to overfit.
Section 7.8 (Early Stopping): This is useful if you just want a model with the best test error. But again this only works if your data allows the training to infer the test data. If there is an immediate increase in test error, the training would stop immediately.
Section 7.12 (Dropout): Just applying dropout to a regression model doesn't necessarily help. In fact, "when extremely few labeled training examples are available, dropout is less effective". For classification, dropout forces the model not to rely on single features, but in regression all inputs might be required to compute a value rather than classify.
Chapter 11 (Practical Methodology): This chapter emphasises the use of baseline models to ensure that the training task is not trivial. If a simple linear regression can achieve similar behaviour, then you don't even have a training problem to begin with.
Bottom line: you can't just play with the model and hope for the best. Check the data, understand what is required, and then apply the corresponding techniques. For more details read the book; it's very good. Your starting point should be a simple regression model, 1 layer, very few neurons; see what happens, then incrementally experiment.
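A hedged sketch of that starting point in tf.keras (a single linear unit, i.e. plain linear regression; X_train/y_train are placeholders):

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

baseline = Sequential([Dense(1, activation='linear')])  # linear regression as a baseline
baseline.compile(optimizer='adam', loss='mse')
# baseline.fit(X_train, y_train, validation_split=0.15, epochs=50)
# If the bigger network barely beats this baseline, the problem is the data
# or the task setup, not the model capacity.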

Is batchnorm used in neural networks that are not CNN?

1.) Batchnorm is always used in deep convolutional neural networks. But is it also used in networks that are not CNNs, i.e. plain NNs with just fully-connected layers?
2.) Is batchnorm used in shallow CNNs?
3.) Say I have a CNN with an input image and an input array IN_array, and the output is an array after the last fully-connected layer; I call this array FC_array. Now I want to concat that FC_array with the IN_array:
CONCAT_array = tf.concat(values=[FC_array, IN_array], axis=-1)
Is it useful to have a batchnorm after the concat layer? Or should that batchnorm just be applied to the FC_array before the concat layer?
For information, the IN_array is a tf.one_hot() vector.
Thank you
TL;DR: 1. Yes 2. Yes 3. No
TS;WM:
Batch normalization was a great invention by Sergey Ioffe and Christian Szegedy in early 2015. Back in those days, battling vanishing or exploding gradients was an everyday problem. Read that paper if you want to gain a deep understanding, but basically this quote from the abstract should give you some idea:
Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs.
They did in fact first use batch normalization for DCNNs, which allowed them to beat human performance in top-5 ImageNet classification, but any network where there are nonlinearities can benefit from batch normalization, including a network consisting of fully-connected layers.
Yes, it is used for shallow CNNs too. Any network with more than one layer can benefit from it, although deeper networks benefit more.
First of all, one-hot vectors should never be normalized. Normalization means you subtract the mean and divide by the standard deviation, creating a dataset with 0 mean and 1 variance. If you do this to a one-hot vector, the cross-entropy loss calculation will be completely off. Second, there is no point in normalizing a concat layer separately, since it does not change the values, it just concatenates them. Batch normalization is done on the input of a layer, so the layer after the concat, which receives the concatenated values, can do it if necessary.
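A hedged TF 1.x sketch of point 3, matching the question's tf.contrib style (the placeholder shapes and the 128-unit layer after the concat are arbitrary assumptions):

import tensorflow as tf  # TF 1.x, to match the question's tf.contrib usage

FC_array = tf.placeholder(tf.float32, [None, 64])  # output of the last fully-connected layer
IN_array = tf.placeholder(tf.float32, [None, 10])  # the one-hot input vector
is_training = tf.placeholder(tf.bool)

# Don't batch-normalize the one-hot IN_array or the concat itself.
CONCAT_array = tf.concat(values=[FC_array, IN_array], axis=-1)

# If normalization is wanted at all, it belongs to the layer after the concat,
# which is the one that consumes the concatenated values:
next_layer = tf.contrib.layers.fully_connected(
    CONCAT_array, 128,
    normalizer_fn=tf.contrib.layers.batch_norm,
    normalizer_params={'is_training': is_training})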

Tensorflow for image segmentation: Batch normalization has worst performance

I'm using TensorFlow for a multi-target regression problem. Specifically, a fully convolutional residual network for pixel-wise labeling, with the input being an image and the label a mask. In my case I am using brain MR scans as images and the labels are masks of the tumors.
I have accomplished a fairly decent result using my net:
Although I am sure there is still room for improvement. Therefore, I wanted to add batch normalization. I implemented it as follows:
# Convolutional Layer 1
Z10 = tf.nn.conv2d(X, W_conv10, strides = [1, 1, 1, 1], padding='SAME')
Z10 = tf.contrib.layers.batch_norm(Z10, center=True, scale=True, is_training = train_flag)
A10 = tf.nn.relu(Z10)
Z1 = tf.nn.conv2d(Z10, W_conv1, strides = [1, 2, 2, 1], padding='SAME')
Z1 = tf.contrib.layers.batch_norm(Z1, center=True, scale=True, is_training = train_flag)
A1 = tf.nn.relu(Z1)
for each of the conv and transposed-conv layers of my net. But the results are not what I expected: the net with batch normalization has terrible performance. In orange is the loss of the net without batch normalization, while the blue curve is with it:
Not only is the net learning more slowly, the predicted labels are also very bad for the net using batch normalization.
Does anyone know why this might be the case?
Could it be my cost function? I am currently using
loss = tf.nn.sigmoid_cross_entropy_with_logits(logits = dA1, labels = Y)
cost = tf.reduce_mean(loss)
Batch normalization is a terrible normalization choice for tasks where semantic information is being passed through the network. Look into conditional normalization methods, e.g. Adaptive Instance Normalization, to understand my point. Also see this paper: https://arxiv.org/abs/1903.07291. Batch normalization washes away the semantic information of the network.
It might be a naive guess, but maybe your batch size is too small. Normalizing might be good if the batch is large enough to represent the distribution of input values for the layer. If the batch is too small, information might be lost by the normalization.
I also had problems with batch normalization on a semantic segmentation task because the batch size had to be small (<10) due to the input image size (1600x1200x3).
I tried batch normalization on the FCN-8 architecture with the PASCAL VOC2012 dataset.
It gave terrible results, as others have mentioned above, but the model performed well without the batch normalization layers. One of my hypotheses for why the network performs badly is that in the decoder we are mainly concerned with upsampling the feature space in a learnable fashion, using convolutions as the medium, because the feature map for the problem is already set by the 1x1 conv performed at the end of the base net, which extracts the features.
We even add the output of earlier encoder layers to the decoder (inspired by the ResNet architecture), and the reason to do so is to reduce the effect of the vanishing gradient problem in deeper architectures.
Batch normalization works really well when we want to predict classes from a picture or a sub-region of the picture, because there we don't have a decoder to upsample the predicted feature space.
Please correct me if I am wrong.

TensorFlow - Batch normalization failing on regression?

I'm using TensorFlow for a multi-target regression problem. Specifically, in a convolutional network with pixel-wise labeling with the input being an image and the label being a "heat-map" where each pixel has a float value. More specifically, the ground truth labeling for each pixel is lower bounded by zero, and, while technically having no upper bound, usually gets no larger than 1e-2.
Without batch normalization, the network is able to give a reasonable heat-map prediction. With batch normalization, the network takes much longer to get to a reasonable loss value, and the best it does is make every pixel the average value. This is using the tf.contrib.layers conv2d and batch_norm methods, with the batch_norm being passed to conv2d's normalizer_fn (or not, in the case of no batch normalization). I had briefly tried batch normalization on another (single-value) regression network and had trouble then as well (though I hadn't tested that as extensively). Is there a problem using batch normalization on regression problems in general? Is there a common solution?
If not, what could be some causes of batch normalization failing on such an application? I've attempted a variety of initializations, learning rates, etc. I would expect the final layer (which of course does not use batch normalization) could use its weights to scale the output of the penultimate layer to the appropriate regression values. Failing that, I removed batch norm from that layer, but with no improvement. I've attempted a small classification problem using batch normalization and saw no problem there, so it seems reasonable that it could be due somehow to the nature of the regression problem, but I don't know how that could cause such a drastic difference. Is batch normalization known to have trouble on regression problems?
I believe your issue is in the labels. Batch norm rescales the values flowing through the network to a standardized range (roughly zero mean and unit variance). If the labels are not scaled to a similar range, the task will be more difficult, because it requires the NN to learn values on a different scale.
By removing the batch norm from the penultimate layer, the task may be improved slightly, but you are still requiring an NN layer to learn to scale its inputs down to very small target values while the normalization keeps pushing them back to the standardized range (the opposite of your objective).
To solve this problem, apply a 0-1 scaler to the labels so that their upper bound is no longer around 1e-2. During inference, transform the predictions back with the inverse of the same function to get the actual prediction.
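A hedged sketch of that label scaling (plain NumPy min-max scaling; the y_train array and the prediction variable are placeholders, not taken from the question):

import numpy as np

y_train = np.random.rand(1000, 256, 256) * 1e-2  # placeholder labels in the question's range
y_min, y_max = y_train.min(), y_train.max()

def scale(y):
    # map the labels into [0, 1], as suggested above
    return (y - y_min) / (y_max - y_min)

def inverse_scale(y_scaled):
    # undo the scaling at inference time to recover actual heat-map values
    return y_scaled * (y_max - y_min) + y_min

y_train_scaled = scale(y_train)  # train the network against these
# predictions_in_original_units = inverse_scale(predictions)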