TensorFlow: cross-validation and test error graph [closed]

With boosting and random forests, I used early stopping and plotted the train AND test error while training, to get a feel for when the model is overfitting.
a) In TensorFlow (multilayer perceptron), I can plot the training error while training (cost/epoch graph), but how can I get the test error graph while training?
b) Is there a built-in cross-validation function in TensorFlow (e.g. 5-fold CV)? If not, what is the most efficient way to do CV with TensorFlow?
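A minimal sketch of one common approach to both points, assuming the Keras API bundled with TensorFlow and scikit-learn for the fold splits (the question names neither); the model, data, and hyperparameters below are placeholders:

    import numpy as np
    import tensorflow as tf
    from sklearn.model_selection import KFold

    def build_mlp(input_dim):
        # Hypothetical small multilayer perceptron, for illustration only.
        model = tf.keras.Sequential([
            tf.keras.layers.Dense(64, activation="relu", input_shape=(input_dim,)),
            tf.keras.layers.Dense(1, activation="sigmoid"),
        ])
        model.compile(optimizer="adam", loss="binary_crossentropy",
                      metrics=["accuracy"])
        return model

    X = np.random.rand(500, 20).astype("float32")   # placeholder data
    y = np.random.randint(0, 2, 500)

    # a) track the test error per epoch by passing a held-out set as validation data
    model = build_mlp(X.shape[1])
    history = model.fit(X[:400], y[:400], epochs=50,
                        validation_data=(X[400:], y[400:]), verbose=0)
    # history.history["loss"] and history.history["val_loss"] are the two curves to plot

    # b) there is no built-in k-fold CV in TensorFlow; looping over sklearn's
    # KFold splits and rebuilding the model per fold is the usual route
    scores = []
    for train_idx, test_idx in KFold(n_splits=5, shuffle=True).split(X):
        fold_model = build_mlp(X.shape[1])
        fold_model.fit(X[train_idx], y[train_idx], epochs=50, verbose=0)
        scores.append(fold_model.evaluate(X[test_idx], y[test_idx], verbose=0)[1])
    print("5-fold mean accuracy:", np.mean(scores))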

Related

Tracing the region of an Image that contributes to a location in the CNN feature map [closed]

I(x, y, no. of channels) is the image, and Fi(x, y, no. of filters) is the feature map at some layer i.
Given the architecture of a convolutional neural network like VGGNet and a feature map Fi after a certain layer, is there an efficient way to find which pixels of the input image I contribute to a given location in the feature map?
I want to implement this in Python.
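One common way to approach this (a sketch, not from the question) is gradient-based: differentiate a single feature-map location with respect to the input; pixels with a non-zero gradient lie inside that location's receptive field. The layer name and location below are arbitrary choices for illustration:

    import numpy as np
    import tensorflow as tf

    # Expose an intermediate VGG16 feature map F_i (layer choice is arbitrary).
    vgg = tf.keras.applications.VGG16(weights=None, input_shape=(224, 224, 3))
    feature_model = tf.keras.Model(inputs=vgg.input,
                                   outputs=vgg.get_layer("block3_conv3").output)

    image = tf.random.uniform((1, 224, 224, 3))      # placeholder input
    with tf.GradientTape() as tape:
        tape.watch(image)
        fmap = feature_model(image)
        target = tf.reduce_sum(fmap[0, 10, 10, :])   # one (y, x) location, all filters
    grads = tape.gradient(target, image)

    # Non-zero gradient marks pixels that influenced the chosen location. Note this
    # is input-dependent (ReLU and max-pooling can zero out paths); the analytic
    # receptive field can instead be derived from kernel sizes and strides.
    mask = np.any(np.abs(grads.numpy()[0]) > 0, axis=-1)
    ys, xs = np.where(mask)
    print("receptive-field bounds:", ys.min(), ys.max(), xs.min(), xs.max())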

Why can't I reimplement my TensorFlow model with PyTorch? [closed]

I am developing a model in TensorFlow and find that it performs well on my specific evaluation method. But when I port it to PyTorch, I can't achieve the same results. I have checked the model architecture, the weight-initialization method, the learning-rate schedule, the weight decay, the momentum and epsilon used in the BN layers, the optimizer, and the data preprocessing. All of them are the same, but I can't get the same results as in TensorFlow. Has anybody met the same problem?
I did a similar conversion recently.
First you need to make sure that the forward pass produces the same results: disable all randomness, initialize both models with the same values, give them a very small input, and compare the outputs. If there is a discrepancy, disable parts of the network and compare again, enabling layers one by one.
When the forward pass is confirmed, check the loss, the gradients, and the weight updates after one forward-backward cycle.
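A minimal, runnable sketch of that forward-pass comparison, with a single Dense/Linear layer standing in for the real networks (the real conversion repeats this layer by layer):

    import numpy as np
    import tensorflow as tf
    import torch

    # Tiny stand-ins for the two real models.
    tf_model = tf.keras.Sequential([tf.keras.layers.Dense(4, input_shape=(8,))])
    pt_model = torch.nn.Linear(8, 4)

    # Copy the TF weights into PyTorch; Dense stores W as (in, out),
    # Linear as (out, in), hence the transpose.
    w, b = tf_model.layers[0].get_weights()
    with torch.no_grad():
        pt_model.weight.copy_(torch.from_numpy(w.T))
        pt_model.bias.copy_(torch.from_numpy(b))

    x = np.random.rand(2, 8).astype(np.float32)      # one small shared input
    out_tf = tf_model(x, training=False).numpy()     # training=False: no dropout/BN updates
    with torch.no_grad():
        out_pt = pt_model(torch.from_numpy(x)).numpy()

    print(np.abs(out_tf - out_pt).max())             # should be float noise, ~1e-7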

Keras reports highly inconsistent loss between 2 optimization runs [closed]

I am trying to classify whether images contain an object: yes or no.
I first run 50 epochs with the RMSprop optimizer and continue with a second run of 50 more epochs using the SGD optimizer.
My first run ends with a loss of ~0.4 and the model is saved. The second run starts from the saved model.
The problem is that at the start of the second run, Keras shows a loss of ~0.8 in epoch 1.
Why could this happen with the same loss function?
Your new SGD optimizer starts from scratch for that model: the moments and the adaptive learning rate accumulated by RMSprop are forgotten when you recompile.
Thus, there is indeed a high chance that this new compilation starts badly.
You can try restarting with a lower learning rate, and also try adding momentum to the SGD optimizer.
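A sketch of that suggestion, assuming the Keras API (the file name, loss, and the exact optimizer settings are illustrative, not from the question):

    import tensorflow as tf

    model = tf.keras.models.load_model("model_after_rmsprop.h5")  # hypothetical path

    # Recompiling discards the RMSprop accumulators, so give the cold-started SGD
    # a lower learning rate and some momentum of its own.
    model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-4, momentum=0.9),
                  loss="binary_crossentropy", metrics=["accuracy"])
    # model.fit(x_train, y_train, epochs=50)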

CNN image classification: accuracy values shake greatly [closed]

I am trying 2-class (dog/cat) classification with a CNN, but I find its training graph strange.
Why do the accuracy values shake so much? And is the training correct?
optimizer: adam
learning rate: 1e-4
network: https://gist.github.com/elect000/130acbdb0a3779910082593db4296254
optimizer: adam
learning rate: 1e-6
Likely your learning rate is too high.
When the learning rate is too high, the network takes large leaps when changing the weights, and this can cause it to overshoot the local minimum it's approaching.
Have a read of this article for a better description, and a nice diagram:
https://www.quora.com/In-an-artificial-neural-network-algorithm-what-happens-if-my-learning-rate-is-wrong-too-high-or-too-low
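For example, in Keras you can try a lower fixed rate or a decaying schedule (values here are illustrative, sitting between the two rates mentioned in the question):

    import tensorflow as tf

    # Lower fixed step size:
    optimizer = tf.keras.optimizers.Adam(learning_rate=1e-5)

    # Or decay the rate during training instead of fixing one value:
    schedule = tf.keras.optimizers.schedules.ExponentialDecay(
        initial_learning_rate=1e-4, decay_steps=1000, decay_rate=0.9)
    optimizer = tf.keras.optimizers.Adam(learning_rate=schedule)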

validation accuracy of convolutional neural network [closed]

Hi, I'm new to deep learning and convolutional neural networks. Could someone please explain the problem in the figure below? Someone told me that the fluctuation of validation accuracy is the problem here, but I don't quite understand its negative effect. Why don't we just look at the last point of the figure?
[figure: validation accuracy during training]
When training a deep learning model you have to validate it, which means showing unseen data to the algorithm.
Validation accuracy can therefore be lower than training accuracy, because of a scenario called overfitting, where the model fits the training data too closely and does not generalize well to unseen data.
As for the fluctuation, it can be normal, because training and testing the algorithm is a stochastic process (mini-batch sampling, random initialization, and so on).
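One practical way to act on this (a sketch; history is a hypothetical Keras History object, since the question shows no code) is to plot the training curve against a smoothed validation curve, so you judge the trend rather than the last noisy point:

    import numpy as np
    import matplotlib.pyplot as plt

    def smooth(values, k=5):
        # simple moving average to damp run-to-run noise
        return np.convolve(values, np.ones(k) / k, mode="valid")

    acc = history.history["accuracy"]          # "acc" in older Keras versions
    val_acc = history.history["val_accuracy"]
    plt.plot(acc, label="train")
    plt.plot(val_acc, label="validation", alpha=0.4)
    plt.plot(smooth(val_acc), label="validation (smoothed)")
    plt.xlabel("epoch")
    plt.ylabel("accuracy")
    plt.legend()
    plt.show()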