GAN - loss and evaluation of model - tensorflow

I'm struggling with understanding how to "objectively" evaluate a GAN (that is, not simply look at what it generates saying "this looks good/bad").
My understanding is that the discriminator should get a head start and, in theory, discriminator loss and generator loss both ought to converge to 0.5 - at which point both are equally "good".
I'm currently training a model, and I get discriminator loss beginning at 0.7 but quickly converging toward 0.25, and generator loss beginning at 50 and converging toward 0.35 (possibly less with further training).
This doesn't entirely make sense. How can both be better than 0.5?
Are my loss functions incorrect, or what else am I missing? How should performance be measured?

In a GAN setting it is normal for the losses to look "better" than the theoretical equilibrium, because you only train one of the networks at a time, so each update temporarily beats the other network.
You can evaluate the generated output with metrics such as PSNR, SSIM, FID, L2, LPIPS, or a VGG-based perceptual distance, depending on your particular task. How to objectively evaluate an image is still an open research question, and several of these measures are also used as loss objectives in certain tasks.
I recommend looking at something like Analysis and Evaluation of Image Quality Metrics
I would also recommend tracking the generator metrics over time to see if it is improving, and confirming that visually as well. Plain logging or visualization tools such as TensorBoard or Weights & Biases (wandb) work well for this.
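As a rough sketch (assuming generated and reference images are float tensors scaled to [0, 1]), per-image PSNR and SSIM and an L2 distance can be computed directly in TensorFlow; FID and LPIPS require additional models or libraries:

import tensorflow as tf

# Dummy stand-ins for a batch of reference and generated images in [0, 1]
# (names and shapes are placeholders, not from the question).
real_images = tf.random.uniform([8, 64, 64, 3])
fake_images = tf.random.uniform([8, 64, 64, 3])

psnr = tf.image.psnr(real_images, fake_images, max_val=1.0)   # per image, higher is better
ssim = tf.image.ssim(real_images, fake_images, max_val=1.0)   # per image, higher is better
l2 = tf.reduce_mean(tf.square(real_images - fake_images))     # scalar, lower is better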

Related

RNN/GRU Increasing validation loss but decreasing mean absolute error

I am new to deep learning and I am trying to implement an RNN (with 2 GRU layers).
At first, the network seems to do its job quite well. However, I am currently trying to understand the loss and accuracy curves. I attached the pictures below. The dark-blue line is the training set and the cyan line is the validation set.
After 50 epochs the validation loss increases. My assumption is that this indicates overfitting. However, I am unsure why the validation mean absolute error still decreases. Do you maybe have an idea?
One idea I had in mind was that this could be caused by some big outliers in my dataset, so I already tried to clean it up and to scale it properly. I also added a few dropout layers for further regularization (rate=0.2). However, these are plain dropout layers, because cuDNN does not seem to support recurrent_dropout in TensorFlow.
Remark: I am using the negative log-likelihood as the loss function and a TensorFlow Probability distribution as the output dense layer.
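For reference, a minimal sketch of this kind of setup (the layer sizes, the input feature dimension, and the Normal output distribution are assumptions here, not taken from the question); the NLL loss operates on the predicted distribution, while MAE is computed on its mean via convert_to_tensor_fn:

import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

# negative log-likelihood of the observed targets under the predicted distribution
negloglik = lambda y, rv_y: -rv_y.log_prob(y)

model = tf.keras.Sequential([
    tf.keras.layers.GRU(32, return_sequences=True, input_shape=(None, 8)),  # assumed feature dim
    tf.keras.layers.GRU(32),
    tf.keras.layers.Dense(2),  # raw parameters: mean and unconstrained scale
    tfp.layers.DistributionLambda(
        lambda t: tfd.Normal(loc=t[..., :1],
                             scale=1e-3 + tf.math.softplus(t[..., 1:])),
        convert_to_tensor_fn=tfd.Distribution.mean),
])

model.compile(optimizer='adam', loss=negloglik, metrics=['mae'])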
Any hints what I should investigate?
Thanks in advance
Edit: I also attached the non-probabilistic plot as recommended in the comment. Seems like here the mean-absolute-error behaves normal (does not improve all the time).
What are the outputs of your model? It sounds pretty strange that you're using the negative log-likelihood (which operates on distributions) as the loss function but MAE as a metric, which is suited to deterministic continuous values.
I don't know what your task is, and perhaps this combination is meaningful in your specific case, but the strange behavior may well come from there.

Does the training loss diagram show over-fitting? Deep Q-learning

The diagram below shows the training loss values against epochs. Based on the diagram, does it mean the model is over-fitting? If not, what is causing the spikes in the loss values across epochs? Overall, it can be observed that the loss value follows a decreasing trend. How should I tune my settings in deep Q-learning?
Such a messy loss trajectory would usually mean that the learning rate is too high for the given smoothness of the loss function.
An alternative interpretation is that the loss function is not at all predictive of the success at the given task.

Why would I choose a loss-function differing from my metrics?

When I look through tutorials on the internet or at models posted here on SO, I often see that the loss function differs from the metrics used to evaluate the model. This might look like:
model.compile(loss='mse', optimizer='adadelta', metrics=['mae', 'mape'])
Anyhow, following this example, why wouldn't I optimize 'mae' or 'mape' as the loss instead of 'mse', when I don't even care about 'mse' in my metrics (hypothetically speaking, if this were my model)?
In many cases the metric you are interested in might not be differentiable, so you cannot use it as a loss. This is the case for accuracy, for example, where the cross-entropy loss is used instead because it is differentiable.
For metrics that are differentiable, you just want to get additional information from the learning process, as each metric measures something different. For example, the MSE is on a squared scale relative to the data/predictions, so to stay on the original scale you have to use RMSE or MAE. MAPE gives you relative (not absolute) error. All of these metrics measure something different that might be of interest.
In the case of accuracy, this metric is used because it is easily interpretable by a human, while cross-entropy loss values are less intuitive to interpret.
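As a small illustrative sketch (hypothetical model, not from the question): the differentiable cross-entropy is what gets optimized, while accuracy, which is not differentiable, is only tracked as a metric:

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu', input_shape=(20,)),
    tf.keras.layers.Dense(10, activation='softmax'),
])

# cross-entropy is the (differentiable) loss that gets minimized;
# accuracy is only reported, never differentiated
model.compile(loss='sparse_categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])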
That is a very good question.
Given your modeling problem, you choose a convenient loss function to minimize in order to achieve your goals.
But to evaluate your model, you use metrics to report the quality of its generalization.
For many reasons, the evaluation criteria might differ from the optimization criterion.
To give an example, in Generative Adversarial Networks many papers suggest that minimizing an MSE loss leads to blurrier images, while MAE helps to get a clearer output. You might want to track both of them in your evaluation to see how they really change things.
Another possible case is when you have a custom loss, but you still want to report the evaluation based on accuracy.
I can also think of cases where you choose the loss function so that training converges faster or more stably, but you still measure the quality of the model with other metrics.
Hope this can help.
I just asked myself that question when I came across a GAN implementation that uses MAE as the loss. I already knew that some metrics are not differentiable and thought that MAE is an example, albeit only at x=0. So is that point simply handled as an exception, e.g. by just assuming a slope of 0 there? That would make sense to me.
I also wanted to add that I learned to use MSE instead of MAE because a small error stays small when squared, while bigger errors grow in relative magnitude, so bigger errors are penalized more with MSE.

tf-slim batch norm: different behaviour between training/inference mode

I'm attempting to train a tensorflow model based on the popular slim implementation of mobilenet_v2 and am observing behaviour I cannot explain related (I think) to batch normalization.
Problem Summary
Model performance in inference mode improves initially but starts producing trivial inferences (all near-zeros) after a long period. Good performance continues when run in training mode, even on the evaluation dataset. Evaluation performance is impacted by batch normalization decay/momentum rate... somehow.
More extensive implementation details below, but I'll probably lose most of you with the wall of text, so here are some pictures to get you interested.
The curves below are from a model which I tweaked the bn_decay parameter of while training.
0-370k: bn_decay=0.997 (default)
370k-670k: bn_decay=0.9
670k+: bn_decay=0.5
Loss for (orange) training (in training mode) and (blue) evaluation (in inference mode). Low is good.
Evaluation metric of model on evaluation dataset in inference mode. High is good.
I have attempted to produce a minimal example which demonstrates the issue - classification on MNIST - but have failed (i.e. classification works well and the problem I experience is not exhibited). My apologies for not being able to reduce things further.
Implementation Details
My problem is 2D pose estimation, targeting Gaussians centered at the joint locations. It is essentially the same as semantic segmentation, except rather than using a softmax_cross_entropy_with_logits(labels, logits) I use tf.losses.l2_loss(sigmoid(logits) - gaussian(label_2d_points)) (I use the term "logits" to describe unactivated output of my learned model, though this probably isn't the best term).
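For concreteness, a minimal sketch of that loss with dummy tensors (shapes are hypothetical; tf.nn.l2_loss is used here for the L2 term):

import tensorflow as tf  # TF 1.x, as in the rest of the post

# Dummy shapes for illustration: logits is the unactivated network output and
# target_heatmaps are the Gaussians rendered at the labelled joint locations,
# i.e. gaussian(label_2d_points) above.
logits = tf.zeros([8, 64, 64, 14])           # [batch, H, W, n_joints], hypothetical sizes
target_heatmaps = tf.zeros([8, 64, 64, 14])  # same shape as logits

loss = tf.nn.l2_loss(tf.nn.sigmoid(logits) - target_heatmaps)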
Inference Model
After preprocessing my inputs, my logits function is a scoped call to the base mobilenet_v2 followed by a single unactivated convolutional layer to make the number of filters appropriate.
import tensorflow as tf
from slim.nets.mobilenet import mobilenet_v2

def get_logits(image, n_joints, is_training, bn_decay):
    # MobileNet V2 base, followed by a single unactivated 1x1 conv that
    # maps the base features to one channel per joint.
    with mobilenet_v2.training_scope(is_training=is_training, bn_decay=bn_decay):
        base, _ = mobilenet_v2.mobilenet(image, base_only=True)
    logits = tf.layers.conv2d(base, n_joints, 1, 1)
    return logits
Training Op
I have experimented with tf.contrib.slim.learning.create_train_op as well as a custom training op:
def get_train_op(optimizer, loss):
    global_step = tf.train.get_or_create_global_step()
    opt_op = optimizer.minimize(loss, global_step)
    # Group the minimize op with the UPDATE_OPS collection so the batch-norm
    # moving-average updates run alongside the optimizer step.
    update_ops = set(tf.get_collection(tf.GraphKeys.UPDATE_OPS))
    update_ops.add(opt_op)
    return tf.group(*update_ops)
I'm using tf.train.AdamOptimizer with learning rate=1e-3.
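For reference, a usage sketch tying these pieces together (variable names are illustrative):

optimizer = tf.train.AdamOptimizer(learning_rate=1e-3)
train_op = get_train_op(optimizer, loss)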
Training Loop
I'm using the tf.estimator.Estimator API for training/evaluation.
Behaviour
Training initially goes well, with an expected sharp increase in performance. This is consistent with my expectations, as the final layer is rapidly trained to interpret the high-level features output by the pretrained base model.
However, after a long period (60k steps with batch_size 8, ~8 hours on a GTX-1070) my model begins to output near-zero values (~1e-11) when run in inference mode, i.e. is_training=False. The exact same model continues to improve when run in training mode, i.e. is_training=True, even on the evaluation set. I have verified this visually.
After some experimentation I changed the bn_decay (batch normalization decay/momentum rate) from the default 0.997 to 0.9 at ~370k steps (I also tried 0.99, but that didn't make much of a difference) and observed an immediate improvement in accuracy. Visual inspection of the output in inference mode showed clear peaks in the inferred values, of order ~1e-1, in the expected places, consistent with the location of the peaks from training mode (though the values were much lower). This is why the accuracy increases significantly, while the loss, although more volatile, does not improve much.
These effects dropped off after more training and reverted to all zero inference.
I further dropped the bn_decay to 0.5 at step ~670k. This resulted in improvements to both loss and accuracy. I'll likely have to wait until tomorrow to see the long-term effect.
Plots of the loss and an evaluation metric are given below. Note the evaluation metric is based on the argmax of the logits, and high is good. The loss is based on the actual values, and low is good. Orange uses is_training=True on the training set, while blue uses is_training=False on the evaluation set. A loss of around 8 is consistent with all-zero outputs.
Other notes
I have also experimented with turning off dropout (i.e. always running the dropout layers with is_training=False), and observed no difference.
I have experimented with all versions of tensorflow from 1.7 to 1.10. No difference.
I have trained models from the pretrained checkpoint using bn_decay=0.99 from the start. Same behaviour as using default bn_decay.
Other experiments with a batch size of 16 result in qualitatively identical behaviour (though I can't evaluate and train simultaneously due to memory constraints, hence quantitatively analysing on batch size of 8).
I have trained different models using the same loss and using tf.layers API and trained from scratch. They have worked fine.
Training from scratch (rather than using pretrained checkpoints) results in similar behaviour, though takes longer.
Summary/my thoughts:
I am confident this is not an overfitting/dataset problem. The model makes sensible inferences on the evaluation set when run with is_training=True, both in terms of location of peaks and magnitude.
I am confident this is not a problem with not running update ops. I haven't used slim before, but apart from the use of arg_scope it doesn't look too much different to the tf.layers API which I've used extensively. I can also inspect the moving average values and observe that they are changing as training progresses.
Changing bn_decay values significantly affected the results, but only temporarily. I accept that a value of 0.5 is absurdly low, but I'm running out of ideas.
I have tried swapping out slim.layers.conv2d layers for tf.layers.conv2d with momentum=0.997 (i.e. momentum consistent with default decay value) and behaviour was the same.
Minimal example using pretrained weights and Estimator framework worked for classification of MNIST without modification to bn_decay parameter.
I've looked through issues on both the tensorflow and models github repositories but haven't found much apart from this. I'm currently experimenting with a lower learning rate and a simpler optimizer (MomentumOptimizer), but that's more because I'm running out of ideas rather than because I think that's where the problem lies.
Possible Explanations
The best explanation I have is that my model parameters are rapidly cycling in a manner such that the moving statistics are unable to keep up with the batch statistics. I've never heard of such behaviour, and it doesn't explain why the model reverts to poor behaviour after more time, but it's the best explanation I have.
There may be a bug in the moving average code, but it has worked perfectly for me in every other case, including a simple classification task. I don't want to file an issue until I can produce a simpler example.
Anyway, I'm running out of ideas, the debug cycle is long, and I've already spent too much time on this. Happy to provide more details or run experiments on demand. Also happy to post more code, though I'm worried that'll scare more people off.
Thanks in advance.
Both lowering the learning rate to 1e-4 with Adam and using Momentum optimizer (with learning_rate=1e-3 and momentum=0.9) resolved this issue. I also found this post which suggests the problem spans multiple frameworks and is an undocumented pathology of some networks due to the interaction between optimizer and batch-normalization. I do not believe it is a simple case of the optimizer failing to find a suitable minimum due to the learning rate being too high (otherwise performance in training mode would be poor).
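In code, the two optimizer settings that each resolved the issue look like this (sketch only):

# Option 1: Adam with a lower learning rate
optimizer = tf.train.AdamOptimizer(learning_rate=1e-4)

# Option 2: plain momentum instead of Adam
optimizer = tf.train.MomentumOptimizer(learning_rate=1e-3, momentum=0.9)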
I hope that helps others experiencing the same issue, but I'm a long way from satisfied. I'm definitely happy to hear other explanations.

How to remove "glitches" in the loss graph of training phase?

When training a deep learning model, I noticed that the training loss was a little weird. There were some "glitches" at certain epochs, as seen in the figure below.
Could you let me know the reasons for them and how to get rid of them?
Thank you
This may be completely normal and due to how the learning process works.
In practice, since with stochastic gradient descent (SGD) you optimize the loss function by approximating the whole-dataset loss landscape with the current minibatch loss landscape, the optimization process becomes noisy and spiky.
In fact, in each iteration you evaluate the loss obtained by the model on the current minibatch and then update the model parameters based on this loss. However, this loss value is not necessarily the value you would have obtained by making a prediction on the whole dataset. For example, in a binary classification problem, imagine what happens if, due to randomness, your current minibatch contains only samples of class A instead of both classes A and B: the current loss does not take class B into account, and you will update the model parameters based only on the results for one class (A). As a consequence, if the next minibatch happens to contain an equal number of samples of classes A and B, your results will be worse than usual.
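As a toy numerical illustration of this (synthetic data, not from the question), the loss computed on a single-class minibatch can differ substantially from the loss over the whole dataset, which is exactly what shows up as spikes in the curve:

import numpy as np

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)                              # balanced binary labels
y_prob = np.clip(rng.normal(0.7, 0.2, size=1000), 1e-6, 1 - 1e-6)   # model biased towards class 1

def bce(y, p):
    # binary cross-entropy averaged over the given samples
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

full_loss = bce(y_true, y_prob)                        # loss over the whole dataset
mask = y_true == 0                                     # a "minibatch" containing only class-0 samples
batch_loss = bce(y_true[mask][:32], y_prob[mask][:32])
print(full_loss, batch_loss)                           # the single-class batch loss is much higher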
Even though class imbalance is usually addressed by using balanced minibatches or weighted loss functions, more generally you should keep in mind that what I described may also happen within a single class. Suppose you have large heterogeneity inside class A: your model could then update the parameters based more on certain features than on others.
For more theoretical aspects, which I really encourage you to read, you can have a look at this:
http://ruder.io/optimizing-gradient-descent/