validation accuracy of convolutional neural network [closed] - tensorflow

Hi, I'm new to deep learning and convolutional neural networks. Could someone please explain the problem in the figure below? Someone told me that the fluctuation of the validation accuracy is the problem here, but I don't quite understand its negative effect. Why don't we just look at the last point of the figure?
[Figure: training and validation accuracy over epochs]

When training a deep learning model you have to validate it, which means showing unseen data to the algorithm.
So validation accuracy can be lower than training accuracy, because of a scenario called over-fitting, where the training algorithm attaches too closely to the training data and does not generalize well to other unseen data.
As for the fluctuation, it can be normal, because we train and test the algorithm in a stochastic manner: mini-batches are drawn randomly, so the measured accuracy varies from epoch to epoch. That is also why looking only at the last point is risky; a single noisy reading can be unusually high or low, so it is safer to look at the trend, or to keep the weights from the epoch with the best validation accuracy.
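To make this concrete, here is a minimal Keras sketch (the tiny model and the x_train/y_train arrays are placeholders, not from the question) that monitors validation accuracy and keeps the best weights via early stopping instead of trusting the last epoch:

```python
import tensorflow as tf

# Placeholder model: a tiny CNN, for illustration only.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(32, 32, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Stop when validation accuracy stops improving and restore the best
# weights, rather than trusting whatever the last epoch happened to be.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_accuracy", patience=5, restore_best_weights=True)

# x_train and y_train are assumed to exist; 20% is held out for validation.
history = model.fit(x_train, y_train,
                    validation_split=0.2,
                    epochs=100,
                    callbacks=[early_stop])
```

With restore_best_weights=True, the final model comes from the epoch with the best validation accuracy, which side-steps the noise in any single point of the curve.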

Related

predict the position of an image in another image [closed]

If one image is part of another image, how can I compute its exact location in a deep learning way?
Currently I can do this by extracting and matching key points using OpenCV, but I hope to solve it with neural networks.
Any ideas on how to design the network and the loss function?
Thanks very much.
This is a detection problem. The simplest approach is to create a network with two heads, one for classification and the other for the bounding box (regression).
You feed the network the image and the corresponding labels, sum the losses, and do a backward pass. Train for some epochs and you'll get yourself a detection model that you can use to detect what you need (a minimal sketch is given below). But this is just a simple approach, and it can get much more complex.
You may as well skip this and use an existing detection architecture, or better, a framework that simplifies your life considerably.
For TensorFlow I believe you can use the Object Detection API, and for PyTorch you can use Detectron, Detectron2, or mmdetection, among others.
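As a rough illustration of the two-head idea (not a production detector: the tiny backbone, the shapes, and the dummy tensors are all made up for the sketch), in PyTorch:

```python
import torch
import torch.nn as nn

class TwoHeadDetector(nn.Module):
    """Toy network with a shared backbone and two heads: one classifies
    the object, the other regresses its bounding box (x, y, w, h)."""
    def __init__(self, num_classes):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.cls_head = nn.Linear(32, num_classes)  # classification head
        self.box_head = nn.Linear(32, 4)            # bounding-box head

    def forward(self, x):
        feats = self.backbone(x)
        return self.cls_head(feats), self.box_head(feats)

model = TwoHeadDetector(num_classes=2)
cls_loss_fn = nn.CrossEntropyLoss()
box_loss_fn = nn.SmoothL1Loss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One training step with dummy data (real images, labels and boxes
# would come from your dataset).
images = torch.randn(8, 3, 64, 64)
labels = torch.randint(0, 2, (8,))
boxes = torch.rand(8, 4)

cls_out, box_out = model(images)
loss = cls_loss_fn(cls_out, labels) + box_loss_fn(box_out, boxes)  # sum the losses
optimizer.zero_grad()
loss.backward()
optimizer.step()
```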

Semantic Segmentation with a dominant class [closed]

I am training a semantic segmentation model with 3 classes (counting the background).
The background is the dominant class, and the problem is that the model predicts every pixel as background.
I am currently using the cross-entropy loss function.
What are the solutions for this situation?
This is a typical case of strong class imbalance in image segmentation; below are a few solutions to tackle the problem.
1. Use Jaccard (IoU) loss or Dice loss: rather than optimizing for pixel accuracy, you optimize for the overlap between prediction and ground truth, and these losses have been shown to work much better than cross-entropy on imbalanced problems (a minimal Dice-loss sketch follows this list).
2. Try class weights (sample weights in Keras/TF) to assign greater importance to classes 2 and 3, which are not background.
3. Focal loss has shown improvements in object detection and other deep learning tasks where the dataset is strongly imbalanced; it can be combined with the loss from (1) or the weighting from (2), and it has the potential to improve your results.
You should expect the biggest performance improvement from (1) alone.
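For reference, a minimal soft Dice loss in Keras/TF; it assumes one-hot masks and softmax predictions of shape (batch, height, width, classes), and is an illustrative sketch rather than the only way to write it:

```python
import tensorflow as tf

def dice_loss(y_true, y_pred, smooth=1e-6):
    """Soft Dice loss for one-hot masks and softmax outputs of shape
    (batch, H, W, classes). Optimizing overlap per class keeps the
    dominant background from swamping the small foreground classes."""
    axes = (1, 2)  # sum over the spatial dimensions, per class
    intersection = tf.reduce_sum(y_true * y_pred, axis=axes)
    union = tf.reduce_sum(y_true, axis=axes) + tf.reduce_sum(y_pred, axis=axes)
    dice = (2.0 * intersection + smooth) / (union + smooth)
    return 1.0 - tf.reduce_mean(dice)

# model.compile(optimizer="adam", loss=dice_loss)
```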

Machine learning - Train medical image [closed]

I am trying to create a deep-neural-network classifier for chest X-rays to check whether TB is present or not. I read that the transfer learning technique can be used for this with the Inception v3 model. My question: the Inception model was created by training on ImageNet (physical objects), right? How can it be used for medical image training?
One intuition is that physical objects and medical images do share some similarities, especially in low-level features such as edges, curves and small object regions.
Experiments indicate that pretraining a network on ImageNet benefits most computer vision tasks, even if the images in the target domain look very different from what is in ImageNet.
To achieve the best performance, you can take a network pretrained on ImageNet and fine-tune the last layer, or all layers, with a small learning rate on your dataset, as sketched below.
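A minimal Keras sketch of that recipe, assuming Inception v3 from tf.keras.applications and a binary TB / no-TB head (the train_ds and val_ds datasets are placeholders):

```python
import tensorflow as tf

# Inception v3 pretrained on ImageNet, without its 1000-class head.
base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, pooling="avg",
    input_shape=(299, 299, 3))
base.trainable = False  # stage 1: train only the new head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(1, activation="sigmoid"),  # TB vs. no TB
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=5)

# Stage 2 (optional): unfreeze everything and fine-tune with a small
# learning rate so the pretrained features are not destroyed.
base.trainable = True
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=5)
```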

why can't I reimplement my tensorflow model with pytorch? [closed]

I am developing a model in TensorFlow and find that it performs well under my specific evaluation method. But when I port it to PyTorch, I can't achieve the same results. I have checked the model architecture, the weight initialization method, the learning rate schedule, the weight decay, the momentum and epsilon used in the BN layers, the optimizer, and the data preprocessing. Everything is the same, but I can't get the same results as in TensorFlow. Has anybody met the same problem?
I did a similar conversion recently.
First make sure that the forward pass produces the same results: disable all randomness, initialize both models with the same values, feed them a very small input, and compare the outputs. If there is a discrepancy, disable parts of the network and compare again, enabling layers one by one until you find the layer that diverges.
Once the forward pass is confirmed, check the loss, the gradients, and the weight updates after one forward-backward cycle.
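A sketch of the forward-pass comparison, assuming tf_model and pt_model are your two implementations already loaded with identical weights (both names are placeholders):

```python
import numpy as np
import torch

np.random.seed(0)
x = np.random.randn(1, 8, 8, 3).astype(np.float32)  # tiny fixed input, NHWC

tf_out = tf_model(x, training=False).numpy()  # Keras call in inference mode

pt_model.eval()  # disable dropout / BN updates
with torch.no_grad():
    # PyTorch convolutions expect NCHW, so transpose the same input.
    x_pt = torch.from_numpy(np.ascontiguousarray(x.transpose(0, 3, 1, 2)))
    pt_out = pt_model(x_pt).numpy()

print("max abs diff:", np.abs(tf_out - pt_out).max())
# Expect something on the order of 1e-6. A large gap means some layer
# differs; bisect by comparing intermediate activations layer by layer.
```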

CNN image classification: accuracy values shakes greatly [closed]

I am trying 2-class (dog/cat) classification with a CNN, but I find the training graph strange.
Why do the accuracy values shake so much, and is the training correct?
First run: optimizer: Adam, learning rate: 1e-4
network: https://gist.github.com/elect000/130acbdb0a3779910082593db4296254
Second run: optimizer: Adam, learning rate: 1e-6
Likely your learning rate is too high.
When the learning rate is too high, the network takes large leaps when changing the weights, and this can cause it to overshoot the local minimum it's approaching.
Have a read of this article for a better description, and a nice diagram:
https://www.quora.com/In-an-artificial-neural-network-algorithm-what-happens-if-my-learning-rate-is-wrong-too-high-or-too-low
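A minimal Keras sketch of acting on that advice, assuming model is the CNN from the question's gist: recompile with the smaller learning rate (1e-6, as in the second run), and optionally shrink it further whenever the validation metric plateaus:

```python
import tensorflow as tf

# Recompile with a much smaller learning rate.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-6),
              loss="binary_crossentropy", metrics=["accuracy"])

# Halve the learning rate when validation accuracy stops improving.
reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(
    monitor="val_accuracy", factor=0.5, patience=3, verbose=1)

# history = model.fit(x_train, y_train, validation_split=0.2,
#                     epochs=50, callbacks=[reduce_lr])  # data assumed
```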