I trained a Keras model inspired by the Inception architecture, on the TensorFlow backend.
The problem is that the output is always the same for the different images I tested.
However, model.evaluate gives me a high accuracy, so the model seems to work.
Any ideas? Thanks!
Finally found the answer.
I just forgot to preprocess my input before prediction.
All is clear now!
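For reference, here is a minimal sketch of the fix, assuming the model was trained on images rescaled to [0, 1] at 299x299 (both the rescaling and the size are assumptions here; use whatever preprocessing your training pipeline actually applied):

    import numpy as np
    from keras.preprocessing import image

    # Load the test image at the same size the model was trained on
    # (299x299 is an assumption, matching Inception-style inputs).
    img = image.load_img('test.jpg', target_size=(299, 299))  # placeholder path
    x = image.img_to_array(img)
    x = np.expand_dims(x, axis=0)  # add the batch dimension
    x = x / 255.0                  # same rescaling as during training (assumed)

    pred = model.predict(x)        # 'model' is the trained Keras model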
Related
What is the difference between rescaling and not rescaling images when predicting with a tf.keras ResNet50 pretrained on ImageNet?
Is it necessary? How much of an impact does it have on the predictions?
It can be the difference between the model working as expected and not working at all. If you do not apply the same normalization that was applied to the training set, the model usually behaves strangely, for example always producing the same output, which is undesirable.
So always use the exact same scaling and normalization that were used to train the model.
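As a concrete illustration, each tf.keras pretrained application ships its matching preprocessing function; for ResNet50 it is preprocess_input (the image path below is just a placeholder):

    import numpy as np
    from tensorflow.keras.applications.resnet50 import (
        ResNet50, preprocess_input, decode_predictions)
    from tensorflow.keras.preprocessing import image

    model = ResNet50(weights='imagenet')

    # ResNet50 expects 224x224 inputs; 'elephant.jpg' is a placeholder path.
    img = image.load_img('elephant.jpg', target_size=(224, 224))
    x = image.img_to_array(img)
    x = np.expand_dims(x, axis=0)
    x = preprocess_input(x)  # the exact normalization used for ImageNet training

    preds = model.predict(x)
    print(decode_predictions(preds, top=3)[0])

Skip the preprocess_input call and the predicted classes will typically be wrong, which is exactly the rescaling difference the question asks about.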
I followed the adanet tutorial:
https://github.com/tensorflow/adanet/tree/master/adanet/examples/tutorials
and was able to apply adanet to my own binary classification problem.
But how can I make predictions with the trained model? I have very little knowledge of TensorFlow. Any help would be really appreciated.
You can immediately call estimator.predict on a new unlabeled example and get a prediction.
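A minimal sketch of what that looks like, assuming a TF 1.x setup like the tutorial's; the feature key "x" and the feature shape are assumptions and must match whatever your training input_fn used:

    import numpy as np
    import tensorflow as tf

    # A few new, unlabeled examples with the same feature shape as in training.
    new_samples = np.array([[0.1, 0.5, 0.3],
                            [0.9, 0.2, 0.7]], dtype=np.float32)

    predict_input_fn = tf.estimator.inputs.numpy_input_fn(
        x={"x": new_samples},  # key must match your model's feature column
        num_epochs=1,
        shuffle=False)

    # 'estimator' is the trained adanet.Estimator; predict returns a generator.
    for pred in estimator.predict(input_fn=predict_input_fn):
        print(pred)  # a dict, e.g. 'classes'/'probabilities' for a binary head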
Here is the result from TensorBoard. [TensorBoard screenshots]
As we can see, the loss has decreased and the accuracy (both train and test) has increased, so the model has been trained successfully.
However, the distribution of the weights barely changes (or changes only a little), while the biases change a lot.
Could someone help me explain what causes this?
I know this is a normal result, but why?
Thank you!
Edit: Could someone share their experience of training deep neural networks?
How much should the parameters of a successfully trained model be expected to change?
Thank you.
I'm using the pretrained TensorFlow Inception v3 model and transfer learning to do some image classification on a new image training set I have. I'm following the instructions laid out here:
https://www.tensorflow.org/versions/r0.8/how_tos/image_retraining/index.html
However, I'm getting some severe overfitting (training accuracy is in the high 90s but CV/test accuracy is in the 50s).
Besides doing some image augmentation to increase my training sample size, I was wondering whether adding dropout in the retraining phase might help.
I am using this file (which came with TensorFlow) as the base/template for my retraining/transfer learning:
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/image_retraining/retrain.py
Looking at the Inception v3 model, dropout is in there. However, I don't see any dropout added in the retrain.py file.
Does it make sense that I could try to add dropout to the retraining to solve my overfitting? If so, where would I add that? If not, why?
Thanks
From Max's comment above, which was a good answer:
Max got a good improvement by adding dropout to the retrain.py source. If you want to try it, you can reference his forked script. It has some additional updates, but the main part to look at starts on line 784.
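If you want to experiment with it yourself, here is a minimal sketch of the idea in TF 1.x graph code (all names are illustrative, not the actual variable names in retrain.py): apply tf.nn.dropout to the cached bottleneck features before the new softmax layer, and leave keep_prob at 1.0 during evaluation.

    import tensorflow as tf

    num_classes = 5          # assumption: number of labels in the new dataset
    BOTTLENECK_SIZE = 2048   # Inception v3 bottleneck dimensionality

    # Placeholder for the cached bottleneck features.
    bottleneck_input = tf.placeholder(
        tf.float32, [None, BOTTLENECK_SIZE], name='bottleneck_input')

    # Dropout between the bottlenecks and the new final layer.
    # Defaults to 1.0 (no dropout), so evaluation is unaffected.
    keep_prob = tf.placeholder_with_default(1.0, shape=[], name='keep_prob')
    dropped = tf.nn.dropout(bottleneck_input, keep_prob=keep_prob)

    # The new softmax layer is trained on top of the dropped-out bottlenecks.
    weights = tf.Variable(
        tf.truncated_normal([BOTTLENECK_SIZE, num_classes], stddev=0.001))
    biases = tf.Variable(tf.zeros([num_classes]))
    logits = tf.matmul(dropped, weights) + biases

Feed something like keep_prob: 0.5 in the training step's feed_dict and leave the default when computing validation/test accuracy.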
I started working with TensorFlow not long ago. I'm working on the seq2seq model, using the seq2seq example code.
I want to modify the seq2seq model code to get the top-k outputs (k being 5 or 10) for a reinforcement learning model, rather than only the top-1 output.
I think I need to modify the decoder part of the seq2seq model somehow, but I don't know which part to change.
Are there any references or code for this problem?
Check out https://github.com/tensorflow/tensorflow/issues/654. There is some discussion of this there, but no worked example yet.
tf.contrib.seq2seq.BeamSearchDecoder would do the magic for you.
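For reference, a minimal sketch of how it is wired up in TF 1.x for a plain (non-attention) decoder; every name here (decoder_cell, embedding_matrix, encoder_state, and so on) is an assumption standing in for your own model's tensors:

    import tensorflow as tf

    beam_width = 5  # k = 5 (or 10)

    # BeamSearchDecoder expects the encoder state tiled across the beam.
    tiled_state = tf.contrib.seq2seq.tile_batch(
        encoder_state, multiplier=beam_width)

    decoder = tf.contrib.seq2seq.BeamSearchDecoder(
        cell=decoder_cell,
        embedding=embedding_matrix,
        start_tokens=tf.fill([batch_size], start_token_id),
        end_token=end_token_id,
        initial_state=tiled_state,
        beam_width=beam_width,
        output_layer=projection_layer)

    outputs, _, _ = tf.contrib.seq2seq.dynamic_decode(
        decoder, maximum_iterations=max_decode_len)

    # outputs.predicted_ids has shape [batch, time, beam_width]:
    # the k best decoded sequences, best beam first.
    top_k_ids = outputs.predicted_ids

(If the decoder uses attention, the attention memory and its state also have to be tiled with tile_batch before building the attention wrapper.)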