How can I compute model metrics during training with canned estimators? - tensorflow

Using Keras, one typically gets metrics (e.g. accuracy) as part of the progress bar for free. Using the example here:
https://github.com/fchollet/keras/blob/master/examples/mnist_mlp.py
After running e.g.
history = model.fit(x_train, y_train,
                    batch_size=batch_size,
                    epochs=epochs,
                    verbose=1,
                    validation_data=(x_test, y_test))
Keras will start fitting the model, and will show progress output with something like:
3584/60000 [>.............................] - ETA: 10s - loss: 0.0308 - acc: 0.9905
Suppose I wanted to accomplish the same thing using a TensorFlow canned estimator -- extract the current accuracy for a classifier, and display that as part of a progress bar (done by e.g. a SessionRunHook).
It seems like accuracy metrics aren't provided as part of the default set of operations on a graph. Is there a way I can manually add it myself with a session run hook?
(It looks like it's possible to add operations to the graph as part of the begin() hook, but I'm not sure how I can e.g. request the computation of the model accuracy there.)

Accuracy is one of the default metrics in canned classifiers, but it is calculated by the Estimator.evaluate call, not by Estimator.train. You can write a loop to do what you want:
for _ in range(num_rounds):
    estimator.train(input_fn=train_input_fn)
    metrics = estimator.evaluate(input_fn=eval_input_fn)
    print(metrics['accuracy'])  # 'accuracy' is in the returned metrics dict
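For interleaved training and evaluation without writing the loop yourself, here is a minimal sketch using tf.estimator.train_and_evaluate (the input functions and step counts below are assumptions, not from the question):
import tensorflow as tf

# train_input_fn / eval_input_fn are hypothetical; substitute your own
train_spec = tf.estimator.TrainSpec(input_fn=train_input_fn, max_steps=10000)
eval_spec = tf.estimator.EvalSpec(input_fn=eval_input_fn, steps=100,
                                  throttle_secs=60)  # evaluate at most once a minute
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
The evaluation metrics, including accuracy for canned classifiers, then show up in the logs and on TensorBoard as training proceeds.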

Related

Evaluate trained and loaded CNN keras model problem

I am new to the deep neural network world. I tried to train my own model using the TensorFlow Keras toolkit.
I managed to train a model using the fit function. The accuracy over 50 epochs was good, around 96%, and the model predicts well on new data. The problem is that when I try to evaluate the loaded model, the results look as if the model wasn't trained at all (accuracy around 50%).
I prepared a small test: I evaluate the model right after a fit, then I save the model, load it, and evaluate it once again. The results are very different. I thought that maybe the weights weren't loaded properly, but the documentation says that the save and load functions operate on the whole model. Here is my code:
CNNmodelHistory = model.fit(train_data, batch_size=batch_size,
                            validation_data=test_data,
                            steps_per_epoch=train_data.samples // batch_size,
                            epochs=epochs)
scores = model.evaluate(test_data, verbose=0)
print(f'Test loss: {scores[0]} / Test accuracy: {scores[1]}')
# save the model to disk
model.save('gender_detection.modelTest')
modelLoaded = keras.models.load_model('gender_detection.modelTest')
scores = modelLoaded.evaluate(test_data, verbose=0)
print(f'Test loss: {scores[0]} / Test accuracy: {scores[1]}')
And here are the results: the evaluation right after fit reports the expected accuracy (around 96%), while the evaluation of the reloaded model reports around 50%.
Do you have any tips on what I am doing wrong?

Training and testing in convolutional neural networks

I would like to know at what stage the testing dataset is used in CNNs. Is it used after the completion of each batch, after each epoch during training, or after the completion of all the epochs? I am a bit confused as to how these two processes run together. Similarly, is the gradient update done after each batch or after each epoch?
model.fit_generator(
    aug.flow(x_train, y_train, batch_size=BATCH_SIZE),
    validation_data=(x_test, y_test),
    steps_per_epoch=len(x_train) // BATCH_SIZE,
    epochs=EPOCHS, verbose=1, callbacks=callbacks)
From fit_generator it is only clear that images are loaded batch by batch into memory.
Keras uses the validation dataset at the end of every epoch (unless you change validation_freq in the fit call). In each epoch, your model trains on the whole training dataset and then evaluates itself on the validation dataset. Gradient updates happen after every batch, not after every epoch.
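For example, assuming a Keras version recent enough to have validation_freq (the array and constant names are taken from the question's snippet), validation can even be run less often than every epoch:
model.fit(x_train, y_train,
          batch_size=BATCH_SIZE,
          epochs=EPOCHS,
          validation_data=(x_test, y_test),
          validation_freq=2)  # validate only after every 2nd epoch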

How to calculate TF object detection API accuracy over custom dataset?

I am using the TF Object Detection API to detect objects in a custom dataset, but when it comes to accuracy I have no idea how to calculate it.
How do I calculate the accuracy of the object detection model over a custom dataset, and find the confidence score of the model over the test dataset?
I tried to use eval.py but it is not helpful.
Are you talking about training accuracy, validation accuracy, or test accuracy? As the names suggest, there are three different values for accuracy:
Training accuracy: accuracy of the model on the training set
Validation accuracy: accuracy of the model on the validation set
Test accuracy: accuracy of the model on the test set
Training and validation accuracy are outputs of the training; for the test accuracy, you need to run the model on the test set.
Did you retrain the model (from a checkpoint, fine-tuning, ...) or did you use the model as you got it? If you retrained the model, you already have training and validation accuracy; in fact, you have those values for each epoch.
If you haven't retrained the model, you can only check the test accuracy, provided that the test dataset is labelled.
This link helped me run eval.py and get the mAP value for the training data.
You just need to run it like this (CUDA_VISIBLE_DEVICES="" hides the GPUs, so the evaluation runs on the CPU):
CUDA_VISIBLE_DEVICES="" python3 eval.py --logtostderr --pipeline_config_path=pre-trained-model/ssd_inception_v2_coco.config --checkpoint_dir=training/ --eval_dir=eval/

TensorBoard Callback in Keras does not respect initial_epoch of fit?

I'm trying to train multiple models in parallel on a single graphics card. To achieve that, I need to resume training of models from saved weights, which is not a problem. The model.fit() method even has a parameter, initial_epoch, that lets me tell the model which epoch the loaded model is on. However, when I pass a TensorBoard callback to the fit() method in order to monitor the training of the models, all data on TensorBoard is shown at x=0.
Is there a way to overcome this and adjust the epoch on TensorBoard?
By the way: I'm running Keras 2.0.6 and TensorFlow 1.3.0.
self.callbacks = [TensorBoardCallback(log_dir='./../logs/'+self.model_name, histogram_freq=0, write_graph=True, write_images=False, start_epoch=self.step_num)]
self.model.fit(x=self.data['X_train'], y=self.data['y_train'], batch_size=self.input_params[-1]['batch_size'], epochs=1, validation_data=(self.data['X_test'], self.data['y_test']), verbose=verbose, callbacks=self.callbacks, shuffle=self.hyperparameters['shuffle_data'], initial_epoch=self.step_num)
self.model.save_weights('./weights/%s.hdf5'%(self.model_name))
self.model.load_weights('./weights/%s.hdf5'%(self.model_name))
self.model.fit(x=self.data['X_train'], y=self.data['y_train'], batch_size=self.input_params[-1]['batch_size'], epochs=1, validation_data=(self.data['X_test'], self.data['y_test']), verbose=verbose, callbacks=self.callbacks, shuffle=self.hyperparameters['shuffle_data'], initial_epoch=self.step_num)
self.model.save_weights('./weights/%s.hdf5'%(self.model_name))
The resulting graph on TensorBoard shows the resumed run starting again at epoch 0, which is not what I was hoping for.
Update:
When passing epochs=10 to the first model.fit(), the results for all 10 epochs are displayed in TensorBoard.
However, when reloading the model and running it (with the same callback attached), the on_epoch_end method of the callback never gets called.
It turns out that the epochs argument passed to model.fit() is not the number of epochs to train for, but the index of the final epoch, counted from zero rather than from initial_epoch. So if initial_epoch=self.step_num, I need epochs=self.step_num + 10 to train for 10 more epochs.
Say we just started fitting our model and our first run is for 30 epochs (please ignore the other parameters; just look at epochs and initial_epoch):
model.fit(train_dataloader, validation_data=test_dataloader, epochs=30,
          steps_per_epoch=len(train_dataloader), callbacks=callback_list)
Now say, after 30 epochs, we want to resume from the 31st epoch (you can see this in TensorBoard) after changing our Adam optimizer's (or any optimizer's) learning rate. So we can do:
model.optimizer.learning_rate = 0.0005
model.fit(train_dataloader, validation_data=test_dataloader, initial_epoch=30,
          epochs=55, steps_per_epoch=len(train_dataloader),
          callbacks=callback_list)
So here initial_epoch is where we left off training last time, and epochs is initial_epoch plus the number of epochs we want to run in this second fit.
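A self-contained toy sketch of the epoch arithmetic (the data, model, and log directory here are placeholders, not from the question):
import numpy as np
import tensorflow as tf

x = np.random.rand(128, 4).astype('float32')  # toy data, just for illustration
y = np.random.rand(128, 1).astype('float32')
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer='adam', loss='mse')
tb = tf.keras.callbacks.TensorBoard(log_dir='./logs/run1')

model.fit(x, y, epochs=30, callbacks=[tb], verbose=0)  # epochs 0..29
# epochs is the final epoch index, not a count: this trains 10 more
# epochs (30..39), and TensorBoard continues from epoch 30.
model.fit(x, y, initial_epoch=30, epochs=40, callbacks=[tb], verbose=0)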

What is the difference between model.fit() and model.evaluate() in Keras?

I am using Keras with TensorFlow backend to train CNN models.
What is the difference between model.fit() and model.evaluate()? Which one should I ideally use? (I am using model.fit() as of now.)
I know the utility of model.fit() and model.predict(), but I am unable to understand the utility of model.evaluate(). The Keras documentation just says:
It is used to evaluate the model.
I feel this is a very vague definition.
fit() is for training the model with the given inputs (and corresponding training labels).
evaluate() is for evaluating the already trained model using the validation (or test) data and the corresponding labels. Returns the loss value and metrics values for the model.
predict() is for the actual prediction. It generates output predictions for the input samples.
Let us consider a simple regression example:
import numpy as np

# input and output
x = np.random.uniform(0.0, 1.0, 200)
y = 0.3 + 0.6 * x + np.random.normal(0.0, 0.05, len(x))
Now let's apply a regression model in Keras:
from keras.models import Sequential
from keras.layers import Dense

# A simple regression model
model = Sequential()
model.add(Dense(1, input_shape=(1,)))
model.compile(loss='mse', optimizer='rmsprop')

# The fit() method trains the model
model.fit(x, y, epochs=1000, batch_size=100)
Epoch 1000/1000
200/200 [==============================] - 0s - loss: 0.0023
# The evaluate() method - gets the loss statistics
model.evaluate(x, y, batch_size=200)
# returns: loss: 0.0022612824104726315
# The predict() method - predict the outputs for the given inputs
model.predict(np.expand_dims(x[:3],1))
# returns: [ 0.65680361],[ 0.70067143],[ 0.70482892]
In deep learning you first want to train your model. You take your data and split it into two sets: the training set and the test set. It is pretty common for 80% of your data to go into the training set and 20% into the test set.
Your training set gets passed into your call to fit(), and your test set gets passed into your call to evaluate(). During the fit operation, a number of rows of your training data are fed into your neural net at a time (based on your batch size). After every batch is sent, the fit algorithm does backpropagation to adjust the weights in your neural net.
After this is done, your neural net is trained. The problem is that sometimes your neural net becomes overfit, a condition where it performs well on the training set but poorly on other data. To guard against this, you run the evaluate() function to send new data (your test set) through your neural net to see how it performs on data it has never seen. No training occurs here; this is purely a test. If all goes well, the score from training is similar to the score from testing.
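A minimal sketch of that workflow, assuming scikit-learn's train_test_split and reusing the toy regression data and model from the example above:
from sklearn.model_selection import train_test_split

# 80/20 split of the data from the regression example above
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2)
model.fit(x_train, y_train, epochs=1000, batch_size=100)  # training data only
loss = model.evaluate(x_test, y_test, verbose=0)          # unseen data only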
fit(): Trains the model for a given number of epochs (this is for training time, with the training dataset).
predict(): Generates output predictions for the input samples (this is typically used at inference time, once the model is trained).
evaluate(): Returns the loss value & metrics values for the model in test mode (this is for testing time, with the testing dataset).
While all the above answers explain what fit(), evaluate(), and predict() do, a more important point to keep in mind, in my opinion, is what data you should use for fit() and for evaluate().
The clearest guideline I came across is from Machine Learning Mastery, in particular this quote:
Training set: A set of examples used for learning, that is to fit the parameters of the classifier.
Validation set: A set of examples used to tune the parameters of a classifier, for example to choose the number of hidden units in a neural network.
Test set: A set of examples used only to assess the performance of a fully-specified classifier.
(Brian Ripley, Pattern Recognition and Neural Networks, 1996, page 354)
You should not use the same data that you used to train or tune the model (the training and validation data) to evaluate the performance (generalization) of your fully trained model with evaluate().
The test data used for evaluate() should be unseen, i.e. not used during training (fit()), in order to be a reliable indicator of generalization.
For predict(), you can use just one or a few examples of your choosing (from anywhere) to get a quick answer from your model. I don't believe it can be used as the sole measure of generalization.
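A sketch of the three-way split from the Ripley quote (plain slicing on the toy data from the earlier regression example; the split fractions are assumptions); note that evaluate() only ever sees the test slice:
# 60/20/20 split by slicing (data assumed shuffled already)
n = len(x)
x_train, y_train = x[:int(0.6 * n)], y[:int(0.6 * n)]
x_val, y_val = x[int(0.6 * n):int(0.8 * n)], y[int(0.6 * n):int(0.8 * n)]
x_test, y_test = x[int(0.8 * n):], y[int(0.8 * n):]

model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=100)
loss = model.evaluate(x_test, y_test, verbose=0)  # test set appears only here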
One thing that was not mentioned here, I believe, needs to be specified: model.evaluate() returns a list which contains a loss figure and an accuracy figure. What has not been said in the answers above is that the "loss" figure is the sum of ALL the losses calculated for each item in the x_test array, where x_test contains your test data and y_test contains your labels. It should be clear that the loss figure is the sum of ALL the losses, not just one loss from one item in the x_test array.
I would say it is the mean of the losses over all the batches, not the sum. But sure, that's the most important information here; otherwise the modeler would be slightly confused.
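A quick self-contained check of that point (the toy data and model here are assumptions, used purely to probe what evaluate() reports):
import numpy as np
import tensorflow as tf

x = np.random.rand(100, 1).astype('float32')
y = (2.0 * x + 1.0).astype('float32')
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])
model.compile(optimizer='rmsprop', loss='mse')

reported = model.evaluate(x, y, batch_size=10, verbose=0)
mean_mse = float(np.mean((model.predict(x, verbose=0) - y) ** 2))
print(reported, mean_mse)  # the two agree: evaluate() averages, not sums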