I have searched many articles about convolutional neural networks and found some good architectures I can refer to, for example AlexNet, VGG, and GoogLeNet.
However, if I want to design a CNN architecture myself, how should I arrange/order the different layers (e.g., convolution, dropout, max pooling)? Is there a standard, or do I just keep trying different combinations until I get a good result?
In my view there isn't a standard per se, but there are some common combinations and rules of thumb:
1. If you want to create a deeper network, you can use residual blocks to avoid the vanishing gradient problem.
2. The convention of using 3x3 convolutions comes from their lower computational cost: for example, three stacked 3x3 convolutions cover the same receptive field as a single 7x7 convolution for a smaller cost (see the sketch after this list).
3. The main reason for dropout is to introduce regularization, which, as the batch normalization authors claim, can also be achieved by batch normalization.
4. Before deciding what to enhance and how to enhance it, one must understand the problem one is trying to solve.
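To illustrate point 2, here is a minimal Keras sketch (the 56x56x64 feature-map size and channel count are assumptions for the example) that builds both options and compares their parameter counts:

```python
import tensorflow as tf

inputs = tf.keras.Input(shape=(56, 56, 64))  # assumed feature-map size and channels

# Option A: a single 7x7 convolution.
single_7x7 = tf.keras.Model(
    inputs, tf.keras.layers.Conv2D(64, 7, padding="same")(inputs))

# Option B: three stacked 3x3 convolutions (same 7x7 receptive field).
x = inputs
for _ in range(3):
    x = tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu")(x)
stacked_3x3 = tf.keras.Model(inputs, x)

print("single 7x7 params:       ", single_7x7.count_params())   # ~200k
print("three stacked 3x3 params:", stacked_3x3.count_params())  # ~111k
```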
You can go through the case study that was taught at Stanford:
Stanford case study
The video can help you understand many of these combinations, how they lead to model improvements, and how to build your own network.
You generally want to put a pooling layer after a convolutional layer. Also, you can think of dropout as a parameter that is applied to a layer, and not a separate layer altogether -- whichever is easier for you to envision.
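In code, that ordering might look like the following minimal Keras sketch (the layer sizes, input shape, and 10-class output are placeholders, not recommendations); the Dropout line can be read as "dropout applied to the dense layer above it":

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(64, 64, 3)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),           # pooling follows the convolution
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.5),             # dropout "applied to" the dense layer
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.summary()
```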
I am trying to understand the effect of adding more layers versus the effect of increasing the number of filters in the existing layers of a CNN. Consider a CNN with 2 hidden layers and 64 filters each. Consider a second CNN with 4 hidden layers and 32 filters each. The kernel sizes are the same in both CNNs. The number of parameters is also the same in both cases.
Which one should I expect to perform better? I am thinking in terms of hyperparameter tuning, batch sizes, training time etc.
In CNNs the deeper layers correspond to higher-level features in images. In the first layer, you are going to get low-level features such as edges, lines, etc. So the network uses these layers to create more complex features; using edges to find circles, using circles to find wheels, using wheels to find a car (I skipped a couple of steps but you get the gist).
So to answer your question, you need to consider how complex your problem is. If you are working on something like ImageNet, you can expect the model with more layers to have the edge. On the other hand, for problems like MNIST, you don't need high-level features.
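If you want to sanity-check the two configurations before training, a rough sketch like this (the input shape, kernel size, and classification head are assumptions; the exact counts depend on them) builds both and prints their parameter counts:

```python
import tensorflow as tf

def build_cnn(num_layers, num_filters, kernel_size=3,
              input_shape=(28, 28, 1), num_classes=10):
    layers = [tf.keras.Input(shape=input_shape)]
    for _ in range(num_layers):
        layers.append(tf.keras.layers.Conv2D(num_filters, kernel_size,
                                             padding="same", activation="relu"))
    layers += [tf.keras.layers.GlobalAveragePooling2D(),
               tf.keras.layers.Dense(num_classes, activation="softmax")]
    return tf.keras.Sequential(layers)

wide = build_cnn(num_layers=2, num_filters=64)   # 2 hidden layers, 64 filters each
deep = build_cnn(num_layers=4, num_filters=32)   # 4 hidden layers, 32 filters each
print("wide:", wide.count_params())
print("deep:", deep.count_params())
```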
I use Faster R-CNN to classify 33 items, but most of them are misclassified among each other. All items are snack packets and sweet packets like the ones in the links below.
https://redmart.com/product/lays-salt-and-vinegar-potato-chips
https://www.google.com/search?q=ice+breakers&source=lnms&tbm=isch&sa=X&ved=0ahUKEwj5qqXMofHfAhUQY48KHbIgCO8Q_AUIDigB&biw=1855&bih=953#imgrc=TVDtryRBYCPlnM:
https://www.google.com/search?biw=1855&bih=953&tbm=isch&sa=1&ei=S5g-XPatEMTVvATZgLiwDw&q=disney+frozen+egg&oq=disney+frozen+egg&gs_l=img.3..0.6353.6886..7047...0.0..0.43.116.3......1....1..gws-wiz-img.OSreIYZziXU#imgrc=TQVYPtSi--E7eM:
So color and shape are similar.
What could be the best way to solve this misclassification problem?
Fine-tuning is a way to use features learned on some big dataset for our own problem. Instead of training the complete network again, we freeze the weights of the lower layers of the network and add a few layers at the end, as required, and then train it on our dataset. The advantage here is that we don't need to train all of the millions of parameters, only a few. Another advantage is that we don't need a large dataset to fine-tune.
You can find more here. This is another useful resource, where the author explains this in more detail (with code).
Note: This is also known as transfer learning.
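As a rough illustration of the freezing idea (a classification-style Keras sketch; the ResNet50 backbone, 224x224 input, and 33-class head are assumptions, not the original Faster R-CNN setup):

```python
import tensorflow as tf

# Pretrained backbone with its top (classifier) removed.
base = tf.keras.applications.ResNet50(include_top=False, weights="imagenet",
                                      input_shape=(224, 224, 3))
base.trainable = False  # freeze the pretrained lower layers

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(33, activation="softmax"),  # 33 product classes, as in the question
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # train only the new head on your data
```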
I know CNNs have a lot of good properties, like weight sharing, saving memory, and feature extraction. However, this question makes me very confused: is there any possible situation where a fully connected network is better than a CNN? Why?
Thanks a lot guys!
Is there any possible situation where a fully connected network is better than a CNN?
Well, I think we should first define what we mean by "better". Accuracy and precision are not the only things to consider: computational time, degrees of freedom and difficulty of the optimization should also be taken into account.
First, consider an input of size h*w*c. Feeding this input to a convolutional layer with F feature maps and kernel size s results in about F*s*s*c learnable parameters (assuming there are no constraints on the ranks of the convolutions; otherwise we have even fewer parameters). Feeding the same input into a fully connected layer with the same number of feature maps results in F*d_1*d_2*w*h*c parameters (where d_1, d_2 are the dimensions of each feature map), which is clearly in the order of billions of learnable parameters for any input image with decent resolution.
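To make this concrete, here is a back-of-the-envelope calculation with assumed numbers: a 224x224 RGB input, F = 64 feature maps, 3x3 kernels, and output feature maps the same spatial size as the input:

```python
h, w, c = 224, 224, 3     # input height, width, channels (assumed)
F, s = 64, 3              # number of feature maps, kernel size (assumed)
d1, d2 = h, w             # output feature-map size ("same" padding)

conv_params = F * s * s * c            # weight sharing: one s x s x c kernel per feature map
fc_params   = F * d1 * d2 * h * w * c  # every output unit connected to every input value

print(f"convolutional:   {conv_params:,}")   # 1,728
print(f"fully connected: {fc_params:,}")     # ~483 billion
```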
While it can be tempting to think that we can get away with shallower networks (we already have lots of parameters, right?), fully connected layers are just linear layers, so we still need to insert many non-linearities for the network to gain reasonable representational power. This means you would still need a deep network, but with so many parameters that it would be intractable. In addition, a larger network has more degrees of freedom and will therefore model much more than what we want: it will model noise unless we feed it more data or constrain it.
So yes, there might be a fully connected network that in theory could give us better performance, but we don't know how to train it yet. Finally, and this is purely based on intuition so it might be wrong, it seems unlikely to me that such a fully connected network would converge to a dense solution. Since many convolutional networks achieve very high levels of accuracy (99% and up) on many tasks, I think the optimal solution the fully connected network would converge to would be close to the convolutional network. So we don't really need to train the fully connected one, but just a subset of its architecture.
I plotted all the weights of my neural network on TensorBoard and found that the weights of some layers are normally distributed, but the weights of other layers are not. What does this imply? Should I increase or decrease the capacity of those layers?
Update:
My network is an LSTM-based network. The non-normally distributed weights are the ones that multiply the input features; the normally distributed weights are the ones that multiply the states.
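For reference, a small sketch (Keras; the layer sizes are placeholders and the layer below is freshly initialized rather than trained) showing which weight matrices these two groups correspond to in an LSTM layer:

```python
import tensorflow as tf
import matplotlib.pyplot as plt

# For a trained model you would look the LSTM layer up by name instead.
lstm = tf.keras.layers.LSTM(128)
lstm.build((None, None, 32))  # (batch, time, features); the feature size is assumed

input_kernel, recurrent_kernel, bias = lstm.get_weights()
plt.hist(input_kernel.ravel(), bins=100, alpha=0.5,
         label="input kernel (multiplies the input features)")
plt.hist(recurrent_kernel.ravel(), bins=100, alpha=0.5,
         label="recurrent kernel (multiplies the states)")
plt.legend()
plt.show()
```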
One explanation, based on convolutional networks, might be this (I don't know whether it is true for other kinds of neural models): since the first layer tries to find distinct small features, its weights are distributed very widely as the network tries to find any useful feature it can. In the next layers, combinations of these distinct features are used, so a normal distribution of weights makes sense there, because each of the previous features becomes part of a single bigger, more representative feature in the next layers.
But this is only my intuition; I am not sure this is the reason, and I have no proof.
I'm using TensorFlow to train a Convolutional Neural Network (CNN) for a sign language application. The CNN has to classify 27 different labels, so unsurprisingly, a major problem has been addressing overfitting. I've taken several steps to accomplish this:
I've collected a large amount of high-quality training data (over 5000 samples per label).
I've built a reasonably sophisticated pre-processing stage to help maximize invariance to things like lighting conditions.
I'm using dropout on the fully-connected layers.
I'm applying L2 regularization to the fully-connected parameters.
I've done extensive hyper-parameter optimization (to the extent possible given HW and time limitations) to identify the simplest model that can achieve close to 0% loss on training data.
Unfortunately, even after all these steps, I'm finding that I can't achieve much better than about 3% test error. (It's not terrible, but for the application to be viable, I'll need to improve that substantially.)
I suspect that the source of the overfitting lies in the convolutional layers since I'm not taking any explicit steps there to regularize (besides keeping the layers as small as possible). But based on examples provided with TensorFlow, it doesn't appear that regularization or dropout is typically applied to convolutional layers.
The only approach I've found online that explicitly deals with prevention of overfitting in convolutional layers is a fairly new approach called Stochastic Pooling. Unfortunately, it appears that there is no implementation for this in TensorFlow, at least not yet.
So in short, is there a recommended approach to prevent overfitting in convolutional layers that can be achieved in TensorFlow? Or will it be necessary to create a custom pooling operator to support the Stochastic Pooling approach?
Thanks for any guidance!
How can I fight overfitting?
Get more data (or data augmentation)
Dropout (see paper, explanation, dropout for CNNs; a sketch applying it to convolutional layers follows this list)
DropConnect
Regularization (see my master's thesis, page 85, for examples)
Feature scale clipping
Global average pooling
Make network smaller
Early stopping
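Since the question is specifically about regularizing the convolutional layers, here is a minimal Keras sketch (the rates, sizes, and 27-class head are assumptions) applying L2 weight decay and spatial dropout directly to the conv blocks:

```python
import tensorflow as tf

l2 = tf.keras.regularizers.l2(1e-4)  # assumed weight-decay strength

model = tf.keras.Sequential([
    tf.keras.Input(shape=(64, 64, 3)),                # assumed input size
    tf.keras.layers.Conv2D(32, 3, activation="relu", kernel_regularizer=l2),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.SpatialDropout2D(0.2),            # drops whole feature maps
    tf.keras.layers.Conv2D(64, 3, activation="relu", kernel_regularizer=l2),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(27, activation="softmax"),  # 27 sign-language labels, as in the question
])
```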
How can I improve my CNN?
Thoma, Martin. "Analysis and Optimization of Convolutional Neural Network Architectures." arXiv preprint arXiv:1707.09725 (2017).
See chapter 2.5 for analysis techniques. As written in the beginning of that chapter, you can usually do the following:
(I1) Change the problem definition (e.g., the classes which are to be distinguished)
(I2) Get more training data
(I3) Clean the training data
(I4) Change the preprocessing (see Appendix B.1)
(I5) Augment the training data set (see Appendix B.2)
(I6) Change the training setup (see Appendices B.3 to B.5)
(I7) Change the model (see Appendices B.6 and B.7)
Misc
The CNN has to classify 27 different labels, so unsurprisingly, a major problem has been addressing overfitting.
I don't understand how this is connected. You can have hundreds of labels without a problem of overfitting.