How to visualize my training history in PyTorch? - matplotlib

How do you visualize the training history of your PyTorch model, like Keras does here?
I have a trained PyTorch model and I want to see a graph of its training.
Can I do this using only matplotlib? If yes, can someone point me to resources to follow?

You have to save the loss while training; a trained model doesn't keep a history of its loss, so you will need to train again.
Record the loss while training, then plot it against the epochs using matplotlib. In your training function, wherever the loss is calculated, save it to a file (or a list) and visualize it later.
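A minimal sketch of this idea in PyTorch (the model, data, and epoch count are toy placeholders for your own training setup):

    import matplotlib.pyplot as plt
    import torch
    import torch.nn as nn

    # Toy setup purely for illustration; substitute your own model and data
    model = nn.Linear(10, 1)
    criterion = nn.MSELoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    inputs, targets = torch.randn(64, 10), torch.randn(64, 1)

    epoch_losses = []  # the 'history' that Keras keeps for you
    for epoch in range(20):
        optimizer.zero_grad()
        loss = criterion(model(inputs), targets)
        loss.backward()
        optimizer.step()
        epoch_losses.append(loss.item())  # save the loss every epoch

    # Plot the saved history with matplotlib
    plt.plot(range(1, len(epoch_losses) + 1), epoch_losses)
    plt.xlabel('Epoch')
    plt.ylabel('Loss')
    plt.show()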
Also, you can use tensorboardX if you want to visualize in real time.
This is a tutorial for tensorboardX: http://www.erogol.com/use-tensorboard-pytorch/

Related

Is it possible to bias the training of an object detection model towards classification in TensorFlow Model Maker?

I'm using TensorFlow 2 Model Maker to perform transfer learning on EfficientDet-Lite (ultimately to run on a Coral EdgeTPU), and I care much more about the classification output and much less about the precision of the bounding boxes. Is there a way to modify some training parameters to improve the accuracy of the classes at the expense of the accuracy of the bounding boxes? Or does this not make sense?
Unfortunately, TensorFlow 2 Model Maker doesn't support such customization at the moment.
If you want to do so, you can bypass Model Maker and use the AutoML repo directly. The technical detail is to adjust the weighting of the different losses by adding loss_weights in the compile() function.
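As an illustration of what loss_weights does, here is a sketch with a toy two-headed Keras model (the head names, layer sizes, and weight values are made up for the example and are not the AutoML EfficientDet ones):

    import tensorflow as tf

    # Toy two-headed model; the dict keys must match the output layer names
    inputs = tf.keras.Input(shape=(64,))
    trunk = tf.keras.layers.Dense(32, activation='relu')(inputs)
    cls_out = tf.keras.layers.Dense(10, activation='softmax', name='classification')(trunk)
    box_out = tf.keras.layers.Dense(4, name='box_regression')(trunk)
    model = tf.keras.Model(inputs, [cls_out, box_out])

    model.compile(
        optimizer='adam',
        loss={'classification': 'categorical_crossentropy', 'box_regression': 'mse'},
        # Upweight classification relative to box regression
        loss_weights={'classification': 1.0, 'box_regression': 0.25},
    )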

Does Keras prioritize metrics or loss?

I'm struggling to understand how a Keras model works.
When we train a model, we pass metrics (like ['accuracy']) and a loss function (like cross-entropy) as arguments.
What I want to know is which one is the goal the model optimizes.
After fitting, does the learnt model maximize accuracy or minimize loss?
The model optimizes the loss; metrics are only there for your information and for reporting results.
https://en.wikipedia.org/wiki/Loss_function
Note that metrics are optional, but you must provide a loss function to do training.
You can also evaluate a model on metrics that were not added during training.
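A small sketch of the distinction (the model and data are toy placeholders): the loss drives the gradient updates, while the metric is merely computed and reported alongside it, and can also be computed after training via evaluate().

    import numpy as np
    import tensorflow as tf

    model = tf.keras.Sequential([tf.keras.layers.Dense(2, activation='softmax')])
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',  # what training minimizes
                  metrics=['accuracy'])                    # reported, not optimized
    x = np.random.rand(100, 4)
    y = np.random.randint(0, 2, size=100)
    model.fit(x, y, epochs=1)

    # Metrics can also be evaluated after training
    loss_value, acc = model.evaluate(x, y)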
Keras models work by minimizing the loss, adjusting the trainable model parameters via back-propagation. Metrics such as training accuracy and validation accuracy are provided as information, but they can also be used to improve your model's performance through Keras callbacks. Documentation for that is located here. For example, the ReduceLROnPlateau callback (documentation is here) can monitor a metric like validation loss and reduce the model's learning rate if the loss fails to decrease after a certain number (the patience parameter) of consecutive epochs.
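A minimal sketch of wiring up that callback (the factor, patience, and min_lr values are arbitrary choices for the example):

    import tensorflow as tf

    # Halve the learning rate if validation loss has not improved for 5 epochs
    reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(
        monitor='val_loss',  # metric to watch
        factor=0.5,          # multiply the learning rate by this factor
        patience=5,          # epochs without improvement before reducing
        min_lr=1e-6)

    # model.fit(x_train, y_train, validation_data=(x_val, y_val),
    #           callbacks=[reduce_lr], epochs=50)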

Adversarially Robust GoogLeNet model

How do I train a GoogLeNet model adversarially on my own image classification dataset?
For example, in the CleverHans library, the datasets that come with batches to run the attacks on are MNIST and CIFAR.
I trained an image classifier (GoogLeNet) on my own data using TensorFlow, and now I want to train the model on adversarial examples. Any ideas for how I can do this with the CleverHans library? Thanks.
The easiest approach is probably to start from your own code for training GoogLeNet and modify its loss. You can find an example loss modification that adds a penalty for training on adversarial examples in the CleverHans tutorial. It uses the loss implementation found here to define a weighted average between the cross-entropy on clean images and the cross-entropy on adversarial images.
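In spirit, the modified loss looks something like this TF1-style sketch (model_fn, make_adversarial, and alpha are placeholders, not the actual CleverHans API):

    import tensorflow as tf  # TF1-style graph code

    def adversarial_loss(model_fn, x, y, make_adversarial, alpha=0.5):
        # model_fn maps images to logits; make_adversarial stands in for
        # an attack such as FGSM from CleverHans
        logits_clean = model_fn(x)
        x_adv = make_adversarial(model_fn, x)  # adversarially perturbed batch
        logits_adv = model_fn(x_adv)
        ce_clean = tf.nn.softmax_cross_entropy_with_logits_v2(labels=y, logits=logits_clean)
        ce_adv = tf.nn.softmax_cross_entropy_with_logits_v2(labels=y, logits=logits_adv)
        # Weighted average of clean and adversarial cross-entropy
        return alpha * tf.reduce_mean(ce_clean) + (1 - alpha) * tf.reduce_mean(ce_adv)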

Access accuracy and cross-entropy information in TensorBoard

I am using object_detection from the models/research TensorFlow repository. I managed to train a model successfully, but I am missing the accuracy and cross-entropy information when monitoring the progress of my training with TensorBoard.
Do I need to calculate the accuracy and add it to TensorBoard myself, or is it already there and I am doing something wrong? In case I have to add it, would trainer.py be the right place to do so?
TensorBoard already provides these metrics for you; there is no need to go down that road. When you launch TensorBoard via tensorboard --logdir tf_files/training_summaries & (in a terminal), where tf_files/training_summaries is the path to your trained model's summaries, it shows so-called summaries for scalar variables (accuracy and cross-entropy), histograms, and images. You are free to recalculate these yourself if you wish, but aside from testing there would be little point.
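If you ever do want to log an extra scalar yourself, a TF1-style sketch looks roughly like this (the constant stands in for a real accuracy tensor computed in your graph):

    import tensorflow as tf  # TF1-style summary API

    accuracy = tf.constant(0.9)  # stand-in for a real accuracy tensor
    tf.summary.scalar('accuracy', accuracy)
    merged = tf.summary.merge_all()

    with tf.Session() as sess:
        writer = tf.summary.FileWriter('tf_files/training_summaries', sess.graph)
        summary = sess.run(merged)
        writer.add_summary(summary, global_step=0)
        writer.close()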

How to minimize two losses using TensorFlow?

I am working on a project to localize objects in an image. The method I am going to adopt is based on the localization algorithm in CS231n-8.
The network structure has two optimization heads: a classification head and a regression head. How can I minimize both of them when training the network?
One idea is to sum both of them into one loss. But the problem is that the classification loss is a softmax loss and the regression loss is an L2 loss, which means they have different ranges. I don't think this is the best way.
It depends on your network's status.
If your network is only there to extract features [you're using weights kept from some other net], you can set these weights to be constants and then train the two heads separately, since the gradient will not flow through the constants.
If you're not using weights from a pre-trained model, you have to:
1. Train the network to extract features: train it using the classification head and let the gradient flow from the classification head down to the first convolutional filter. This way your network can classify objects by combining the extracted features.
2. Convert the learned weights of the convolutional filters and the classification head to constant tensors, then train the regression head.
The regression head will learn to combine the features extracted by the convolutional layers, adapting its parameters in order to minimize the L2 loss.
Tl;dr:
1. Train the network for classification first.
2. Convert every learned parameter to a constant tensor, using graph_util.convert_variables_to_constants as shown in the freeze_graph script.
3. Train the regression head.
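As a sketch of what the final step can look like in TF1-style code (the placeholder shapes, scope name, and learning rate are assumptions; restricting var_list is an alternative to literally freezing the graph with convert_variables_to_constants):

    import tensorflow as tf  # TF1-style graph code

    features = tf.placeholder(tf.float32, [None, 256])  # stand-in for the trunk output
    y_box = tf.placeholder(tf.float32, [None, 4])       # ground-truth box coordinates

    with tf.variable_scope('regression_head'):
        reg_pred = tf.layers.dense(features, 4)  # predicted x, y, w, h

    reg_loss = tf.reduce_mean(tf.square(reg_pred - y_box))  # L2 loss

    # Only update the regression head; the trunk and classification head
    # stay fixed, mimicking the effect of converting them to constants
    reg_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES,
                                 scope='regression_head')
    train_reg = tf.train.AdamOptimizer(1e-4).minimize(reg_loss, var_list=reg_vars)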