I'm working on a TensorFlow project in which I have a neural network in a reinforcement learning system, used to predict the Q values. I have 50 inputs and 10 outputs. Some of the inputs are in the range 30-70 and the rest are between 0 and 1, so I normalize only the first group, using this formula:
x_new = (x - x_min)/(x_max - x_min)
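For example, a minimal sketch of applying this only to the first group of inputs (assuming the inputs are stored as a NumPy array and the indices of the 30-70 features are known; the names here are illustrative):
import numpy as np

def min_max_normalize(X, columns):
    # X: array of shape [n_samples, 50]; columns: indices of the features in the 30-70 range
    X = X.astype(np.float32).copy()
    col_min = X[:, columns].min(axis=0)
    col_max = X[:, columns].max(axis=0)
    X[:, columns] = (X[:, columns] - col_min) / (col_max - col_min)
    return X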
Although I know the mathematical basis of neural networks, I do not have experience applying them to real cases, so I do not really know whether the hyperparameters I am using are correctly chosen. The ones I currently have are:
2 hidden layers with 10 and 20 neurons each
Learning rate of 0.5
Batch size of 10 (I have tried different values up to 256, obtaining the same result)
The problem I'm not able to solve is that the weights of this neural network only change in the first two or three iterations and then stay fixed.
What I have read in other posts is that the algorithm is finding a local optimum, and that normalizing the inputs is a good way to address it. However, after normalizing the inputs, I am still in the same state. So my question is whether anyone knows where the problem may be, and whether there is any other technique (like normalization) that I should add to my pipeline.
I haven't added any code to the question because I think my problem is conceptual. However, if more details are needed, I can add them.
Some pointers you can check:
50 input data points with 10 classes? If that is the case, the data is far too small for the network to learn anything at all.
Which activation function are you using? Try ReLU instead of sigmoid or tanh (see the sketch after this list):
activation functions
How deep is your network? Maybe your gradient is either vanishing or exploding:
vanishing or exploding gradients
Check whether your model can overfit the training data; if it cannot, your network is not learning anything.
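As a minimal sketch of the ReLU suggestion (assuming Keras/TensorFlow, the 50 input features and 10 Q-value outputs from the question, and a much smaller learning rate than 0.5, which is often large enough to make the weights stall):
from tensorflow.keras import layers, models, optimizers

model = models.Sequential([
    layers.Dense(10, activation='relu', input_shape=(50,)),
    layers.Dense(20, activation='relu'),
    layers.Dense(10, activation='linear'),   # Q-values are unbounded, so keep the output linear
])
model.compile(optimizer=optimizers.Adam(learning_rate=1e-3), loss='mse')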
I'm currently training a CNN to detect whether a person wears a mask. Unfortunately, I do not understand why my validation loss is so high. As I noticed, the data I am validating on is sorted by class (the classes being the outputs of the net). Does that have any impact on my validation accuracy and loss?
I tested the model with computer vision and it works excellently, but the validation loss and accuracy still look very wrong.
What could be the reasons for that?
This phenomenon, at an intuitive level, can take place due to several factors:
It may be the case that you are using very big batch sizes (>=128), which can cause those fluctuations, since convergence can be negatively impacted if the batch size is too high. Several papers have studied this phenomenon. This may or may not be the case for you.
It is probable that your validation set is too small. I experienced such fluctuations when the validation set was too small (in absolute number, not necessarily in percentage of the training-validation split). In such circumstances, a change in weights after an epoch has a more visible impact on the validation loss (and, to a great extent, automatically on the validation accuracy).
In my opinion, and according to my experience, if you have checked that your model works well in real life, you can decide to train for only 50 epochs, since you can see from the graph that it is an optimal cut-off point: the fluctuations intensify after that point and a small overfitting phenomenon may also be observed.
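As a sketch of that cut-off idea (assuming a compiled Keras model; the variable names here are placeholders), an EarlyStopping callback can stop training automatically instead of hard-coding 50 epochs:
from tensorflow.keras.callbacks import EarlyStopping

# stop when validation loss stops improving and keep the best weights seen so far
early_stop = EarlyStopping(monitor='val_loss', patience=5, restore_best_weights=True)
model.fit(train_images, train_labels,
          validation_data=(val_images, val_labels),
          epochs=100, batch_size=32,
          callbacks=[early_stop])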
I read that ResNet solves the vanishing gradient problem by using skip connections. But is that not already solved by using ReLU? Is there some other important thing I'm missing about ResNet, or does the vanishing gradient problem occur even after using ReLU?
The ReLU activation solves the problem of vanishing gradient that is due to sigmoid-like non-linearities (the gradient vanishes because of the flat regions of the sigmoid).
The other kind of "vanishing" gradient seems to be related to the depth of the network (see this for example). Basically, when backpropagating the gradient from layer N to N-k, the gradient vanishes as a function of depth (in vanilla architectures). The idea of resnets is to help with gradient backpropagation (see for example Identity Mappings in Deep Residual Networks, where they present resnet v2 and argue that identity skip connections are better at this).
A very interesting and relatively recent paper that sheds light on how resnets work is Residual Networks Behave Like Ensembles of Relatively Small Networks. The tl;dr of this paper could be (very roughly) summarized as follows: residual networks behave as an ensemble. Removing a single layer (i.e. a single residual branch, not its skip connection) doesn't really affect performance, but performance decreases smoothly as a function of the number of layers removed, which is the way ensembles behave. Most of the gradient during training comes from short paths. They show that training only these short paths doesn't affect performance in a statistically significant way compared to training all paths. This means that the effect of residual networks doesn't really come from depth, as the effect of long paths is almost non-existent.
The main purpose of ResNet is to enable much deeper models. In theory, deeper models (speaking about VGG-style models) should show better accuracy, but in real life they usually do not. However, if we add shortcut connections to the model, we can increase the number of layers and the accuracy as well.
While the ReLU activation function does solve the problem of vanishing gradients, it does not provide the deeper layers with extra information as in the case of ResNets. The idea of propagating the original input as deep as possible through the network, thereby helping it learn much more complex features, is why the ResNet architecture was introduced and why it achieves such high accuracy on a variety of tasks.
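As a minimal sketch of such a skip connection (Keras functional API; the layer sizes are illustrative, and the input is assumed to already have `filters` channels):
from tensorflow.keras import layers

def residual_block(x, filters):
    shortcut = x                                   # identity skip connection
    y = layers.Conv2D(filters, 3, padding='same', activation='relu')(x)
    y = layers.Conv2D(filters, 3, padding='same')(y)
    y = layers.Add()([shortcut, y])                # gradient can flow straight through the shortcut
    return layers.Activation('relu')(y)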
I am playing with the code listed here by Daniel Persson on YouTube. His code is on GitHub.
I am playing with his code for my classification project and I got an accuracy of about 88% (I am using a GPU). But I got about 93% with InceptionV3 and ResNet50 transfer learning. I am new to ML and I managed to set up basic training models using Keras. I am using 3 classes (120x120 px RGB images). In the above code, I could not find how to change cross-entropy to categorical cross-entropy.
What are other methods to improve the accuracy? I feel the output should be better, since the differences between the images are trivial for humans to spot.
Will increasing the number of hidden layers improve this?
Or the number of nodes in the existing layers?
Also, I would like to know how I could use scikit-learn to plot a confusion matrix here.
Thank you in advance.
I think your question is the one most of us doing ML are trying to answer every day, that is, how to improve the performance of our models. To keep it short and try to answer your questions:
In the above code, I could not find how to change cross-entropy to categorical cross-entropy
Try this link to another answer, as I think the code you provided already computes categorical cross-entropy.
Will increasing the number of hidden layers improve this? Or the number of nodes in the existing layers?
Yes and no. Read about overfitting CNNs. More layers/nodes might end up overfitting your data, which will skyrocket your training accuracy but kill your validation accuracy. Another method you can try is adding Dropout layers, which I tend to use. You can also read about L1 and L2 regularization (see the sketch below).
Another thing that comes to mind when using deep learning is that your training and validation loss curves should look as similar as possible. If they do not, this is most surely a reflection of over- or underfitting.
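A minimal sketch of the Dropout and L2 ideas (Keras; the layer sizes are illustrative, the input shape and 3 classes are taken from your question):
from tensorflow.keras import layers, models, regularizers

model = models.Sequential([
    layers.Conv2D(32, 3, activation='relu', input_shape=(120, 120, 3)),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation='relu', kernel_regularizer=regularizers.l2(1e-4)),  # L2 weight decay
    layers.Dropout(0.5),                                                             # drop half the units during training
    layers.Dense(3, activation='softmax'),
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])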
Also, I would like to know how I could use scikit-learn to plot a confusion matrix here.
try:
from sklearn.metrics import confusion_matrix
confusion_matrix(ground_truth_labels, predicted_labels)
and to visualize:
import seaborn as sns
import matplotlib.pyplot as plt

# rows of sklearn's confusion matrix are the true labels, columns are the predictions
sns.heatmap(data=confusion_matrix(ground_truth_labels, predicted_labels),
            annot=True, fmt="d", linewidths=1, cmap='Blues')
plt.xlabel('predicted')
plt.ylabel('true')
plt.show()
Hope this helps.
I trained a doc2vec model in TensorFlow, so now I have embedded vectors for the words in the dictionary and vectors for the documents.
In the paper "Distributed Representations of Sentences and Documents", Quoc Le and Tomas Mikolov write that they use "the inference stage" to get paragraph vectors D for new paragraphs (never seen before) by adding more columns in D and gradient descending on D while holding W, U, b fixed.
I have a pretrained model, so we have W, U and b as graph variables. The question is: how to implement the inference of D (for a new document) efficiently in TensorFlow?
For most neural networks, the output of the network (a class for classification problems, a number for regression, ...) is the value you are interested in. In those cases, inference means running the frozen network on some new data (forward propagation) to compute the desired output.
For those cases, several strategies can be used to quickly deliver the desired output for multiple new data points: scaling horizontally, reducing the complexity of the computation through quantisation of the weights, optimising the frozen graph computation (see https://devblogs.nvidia.com/tensorrt-3-faster-tensorflow-inference/), and so on.
The doc2vec (and word2vec) use case is different, however: the neural net is used to compute an output (a prediction of the next word), but the meaningful and useful data are the weights of the neural network after training. The inference stage is therefore different: to get a vector representation of a new document, you do not want the output of the neural net; you need to train the part of the neural net that provides the vector representation of your document, while the rest of the neural net (W, U, b) is frozen.
How can you efficiently compute D (the document vector) in TensorFlow?
Experiment to find the optimal learning rate (a smaller value might be a better fit for shorter documents), as it determines how quickly your network converges to a representation of a document.
As the other parts of the neural net are frozen, you can scale the inference across multiple processes / machines.
Identify the bottlenecks: what is currently slow? Model computation? Text retrieval from disk or from an external data source? Storage of the results?
Knowing more about your current issues and the context might help.
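A minimal sketch of that idea (TF 2.x style; the pretrained tensors, the training pairs for the new document, and the averaging-based PV-DM formulation are all assumptions and placeholders, not your exact graph):
import tensorflow as tf

# W, U, b come from the pretrained model and stay frozen; only d_new is trainable.
W = tf.constant(pretrained_word_embeddings)   # placeholder, shape [vocab_size, embedding_dim]
U = tf.constant(pretrained_softmax_weights)   # placeholder, shape [embedding_dim, vocab_size]
b = tf.constant(pretrained_softmax_bias)      # placeholder, shape [vocab_size]

embedding_dim = W.shape[1]
d_new = tf.Variable(tf.random.uniform([1, embedding_dim], -0.5, 0.5))
optimizer = tf.keras.optimizers.Adam(learning_rate=0.01)

for context_ids, target_id in new_doc_training_pairs:   # placeholder: (context word ids, centre word id)
    with tf.GradientTape() as tape:
        context_vecs = tf.nn.embedding_lookup(W, context_ids)      # [n_ctx, embedding_dim]
        h = tf.reduce_mean(tf.concat([context_vecs, d_new], axis=0),
                           axis=0, keepdims=True)                  # average of doc + context vectors
        logits = tf.matmul(h, U) + b
        loss = tf.reduce_mean(
            tf.nn.sparse_softmax_cross_entropy_with_logits(labels=[target_id],
                                                           logits=logits))
    grads = tape.gradient(loss, [d_new])                 # gradient only w.r.t. the document vector
    optimizer.apply_gradients(zip(grads, [d_new]))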
I want to design a CNN for a binary image classification task: detecting whether a small object is present or absent in the images. The images are greyscale (unsigned short) with size 512x512 (already downsampled from 2048x2048), and I have thousands of those images for training and testing.
It's my first time using a CNN for this kind of task, and I hope to achieve ~80% accuracy to start, so I'd like to know, in general, how to design the CNN so that I have the best chance of achieving my goal.
My specific questions are:
How many convolution layers and fully-connected layers should I use?
How many feature maps are in each convolution layer and how many nodes in each fully-connected layer?
What's the filter size in each convolution layer?
I'm trying to implement the CNN using Keras with the TensorFlow backend, and my computer's specs are: 8 Intel Xeon CPUs @ 3.5 GHz; 32 GB memory; 2 Nvidia GPUs: a GeForce GTX 980 and a Quadro K4200.
With that hardware and software, I'd also like to know the computational time of the training. Specifically:
How long will it take to train the CNN (with the above structure) on the 1000 images mentioned above for one epoch, and (in general) how many epochs are needed to achieve ~80% accuracy?
The reason I want to know the typical computational time is to make sure I set up everything properly.
Hope I didn't ask too many questions in my first post.
You'd probably do very well if you take one of the existing models that Keras makes available for this kind of task, such as VGG16, VGG19, InceptionV3 and others: https://keras.io/applications/.
You may experiment with them, try different parameters, little tweaks here and there, and so on. Since you've got only one object class to detect, you can probably try smaller versions of them.
All the code can be found at https://github.com/fchollet/keras/tree/master/keras/applications
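A minimal sketch of that approach (assumptions: TensorFlow/Keras with ImageNet weights available, and the greyscale images replicated to 3 channels and rescaled before being fed in):
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

base = VGG16(weights='imagenet', include_top=False, input_shape=(512, 512, 3))
base.trainable = False                      # freeze the pretrained convolutional base

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(64, activation='relu'),
    layers.Dropout(0.5),
    layers.Dense(1, activation='sigmoid'),  # binary output: object present / absent
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])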
Speed is very relative. It's impossible to predict, because each installation method, each driver, each version and each operating system may or may not use your hardware capabilities properly or fully.
But with your specifications, it should be pretty fast, if everything is set up well.