I am building a U-Net image segmentation model with a single foreground class and the background (binary segmentation).
For the loss function, I sum the dice loss and the binary focal loss.
I am wondering whether it is important to ensure that the dice loss and the focal loss have a similar order of magnitude.
As you can see in the extract below, the binary focal loss is ~0.0x while the dice loss is ~0.x. Will the loss optimization focus more on the dice loss than on the focal loss in this case? Should I be adding a multiplier to the binary focal loss?
I am a newbie to deep learning as well. However, according to this paper: https://ieeexplore.ieee.org/abstract/document/9180275/, a multiplier usually has to be added for a combo loss. In the paper, the combo loss of focal loss and dice loss is calculated using the following equation:
combo_loss = β * focal_loss − log(dice_loss)
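A minimal Keras sketch of that combination (assumptions: TensorFlow >= 2.9, where tf.keras.losses.BinaryFocalCrossentropy is available, the log taken on the Dice coefficient so that minimizing the term pushes the overlap towards 1, and β left as a tunable weight):

import tensorflow as tf

beta = 1.0  # tunable multiplier for the focal term
focal = tf.keras.losses.BinaryFocalCrossentropy(gamma=2.0)

def dice_coefficient(y_true, y_pred, smooth=1e-6):
    # Soft Dice over the flattened batch; smooth avoids division by zero.
    y_true = tf.reshape(tf.cast(y_true, y_pred.dtype), [-1])
    y_pred = tf.reshape(y_pred, [-1])
    intersection = tf.reduce_sum(y_true * y_pred)
    return (2.0 * intersection + smooth) / (
        tf.reduce_sum(y_true) + tf.reduce_sum(y_pred) + smooth)

def combo_loss(y_true, y_pred):
    # beta * focal - log(dice): the -log term grows quickly as the
    # overlap worsens, which also tends to bring the two terms onto
    # a comparable scale.
    return beta * focal(y_true, y_pred) - tf.math.log(
        dice_coefficient(y_true, y_pred))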
Kindly report your results if you wish to use any other combination of these losses.
My model architecture is: [architecture diagram]
I have two outputs, and I want to train the model with a different loss for each output, e.g. MSE and cross-entropy. At first, I used two Keras losses:
model1.compile(loss=['mse','sparse_categorical_crossentropy'], metrics = ['mse','accuracy'], optimizer='adam')
It's working fine, but the cross-entropy loss is very unstable: it sometimes gives 74% accuracy and in the next epoch shows 32%. I'm confused as to why.
Now I define a custom loss:
from tensorflow.keras.losses import mean_squared_error, binary_crossentropy

def my_custom_loss(y_true, y_pred):
    # Intended: MSE on the first output and cross-entropy on the second.
    mse = mean_squared_error(y_true[0], y_pred[0])
    crossentropy = binary_crossentropy(y_true[1], y_pred[1])
    return mse + crossentropy
But it's not working; it shows a negative total loss.
It is hard to judge the issue based on the information given. One reason might be a batch size that is too small or a learning rate that is too high, making the training unstable. I also notice that you use sparse_categorical_crossentropy in the top example and binary_crossentropy in the lower one. How many classes do you actually have?
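If you do want one loss per output, it is usually easier to let Keras pair each loss with its output rather than slicing inside a single custom loss; with a multi-output model, y_true[0] in the custom loss indexes the first sample in the batch, not the first output. A sketch, assuming the two outputs were named 'regression' and 'class' (placeholder names):

model1.compile(
    optimizer='adam',
    loss={'regression': 'mse',
          'class': 'sparse_categorical_crossentropy'},
    loss_weights={'regression': 1.0, 'class': 1.0},  # tune the balance here
    metrics={'regression': ['mse'], 'class': ['accuracy']})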
I am unable to find an explanation of the loss function of YOLOv4.
First, to understand the YOLOv4 loss, I think you should read about the original YOLO loss that was released in the first YOLO paper (https://arxiv.org/abs/1506.02640); it is reproduced below.
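From the YOLOv1 paper, that loss is a sum-squared error over box coordinates, confidences and class probabilities (reproduced here from memory; worth double-checking against the paper):

$$\begin{aligned} \mathcal{L} ={}& \lambda_{coord} \sum_{i=0}^{S^2} \sum_{j=0}^{B} \mathbb{1}_{ij}^{obj} \left[(x_i - \hat{x}_i)^2 + (y_i - \hat{y}_i)^2\right] \\ &+ \lambda_{coord} \sum_{i=0}^{S^2} \sum_{j=0}^{B} \mathbb{1}_{ij}^{obj} \left[(\sqrt{w_i} - \sqrt{\hat{w}_i})^2 + (\sqrt{h_i} - \sqrt{\hat{h}_i})^2\right] \\ &+ \sum_{i=0}^{S^2} \sum_{j=0}^{B} \mathbb{1}_{ij}^{obj} (C_i - \hat{C}_i)^2 + \lambda_{noobj} \sum_{i=0}^{S^2} \sum_{j=0}^{B} \mathbb{1}_{ij}^{noobj} (C_i - \hat{C}_i)^2 \\ &+ \sum_{i=0}^{S^2} \mathbb{1}_{i}^{obj} \sum_{c \in classes} (p_i(c) - \hat{p}_i(c))^2 \end{aligned}$$

with $S^2$ grid cells, $B$ boxes per cell, $\mathbb{1}_{ij}^{obj}$ indicating that box $j$ of cell $i$ is responsible for an object, and $\lambda_{coord} = 5$, $\lambda_{noobj} = 0.5$ in the paper.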
In YOLOv4, you will have the exact same ideas, but with:
- Binary cross entropy for the objectness and classification scores,
- Box-per-cell-level prediction instead of cell-level prediction for the class probabilities, so a slightly different penalization for the classification terms,
- CIoU loss instead of MSE for the regression terms (x, y, w, h). CIoU stands for Complete Intersection over Union and is not so far from the MSE loss. It compares width and height a bit more interestingly (consistency between aspect ratios), but keeps an MSE-like comparison between the bounding-box centers. You can find more details in this paper.
Finally, the YOLOv4 loss can be written as the sum of these components; with the complete CIoU term spelled out, it looks like the sketch below.
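The CIoU regression term, as defined in the CIoU paper (Zheng et al.), is

$$\mathcal{L}_{CIoU} = 1 - IoU + \frac{\rho^2(b, b^{gt})}{c^2} + \alpha v, \qquad v = \frac{4}{\pi^2}\left(\arctan\frac{w^{gt}}{h^{gt}} - \arctan\frac{w}{h}\right)^2, \qquad \alpha = \frac{v}{(1 - IoU) + v},$$

where $\rho$ is the distance between the predicted and ground-truth box centers and $c$ is the diagonal of the smallest box enclosing both. This term replaces the coordinate terms of the original loss, while the objectness and classification terms become binary cross entropies.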
I've trained a multi-label, multi-class image classifier using sigmoid as the output activation function and binary_crossentropy as the loss function; a rough sketch of the setup follows below.
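A sketch (the 5-label head matches the training details further down; the pooled backbone here is only a stand-in for the fine-tuned VGG19):

import tensorflow as tf
from tensorflow.keras import layers, models

inputs = layers.Input(shape=(224, 224, 3))
x = layers.GlobalAveragePooling2D()(inputs)  # stand-in for the VGG19 features
# One independent sigmoid per label, so several labels can be active at once.
outputs = layers.Dense(5, activation='sigmoid')(x)
model = models.Model(inputs, outputs)
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.001, momentum=0.99),
              loss='binary_crossentropy', metrics=['accuracy'])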
The validation accuracy curve fluctuates up and down, while the loss curve shows weird (very high) values at a few epochs.
Following are the accuracy and loss curves for the fine-tuned (last block) VGG19 model with Dropout and BatchNormalization:
[accuracy curve]
[loss curve]
And the accuracy and loss curves for the fine-tuned (last block) VGG19 model with Dropout, BatchNormalization, and data augmentation:
[accuracy curve with data augmentation]
[loss curve with data augmentation]
I've trained the classifier with 1800 training images (5 labels) and 100 validation images. The optimizer I used is SGD(lr=0.001, momentum=0.99).
Can anyone explain why the loss curve takes such weird, high values at some epochs?
Should I use a different loss function? If yes, which one?
Don't worry - all is well. Your loss curve doesn't say much, and 'spikes in the loss curve' in particular are nothing to worry about; they're totally allowed, and your model is still training. You should look at your accuracy curve instead, and that one goes up pretty normally, I think.
I'm currently using the cross-entropy loss function, but with an imbalanced dataset the performance is not great.
Is there a better loss function?
It's a very broad subject, but IMHO, you should try focal loss: it was introduced by Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He and Piotr Dollár to handle imbalanced predictions in object detection. Since it was introduced, it has also been used in the context of segmentation.
The idea of the focal loss is to reduce both the loss and the gradient for correct (or almost correct) predictions while emphasizing the gradient of errors.
As you can see in the graph:
The blue curve is the regular cross-entropy loss: on the one hand, it has a non-negligible loss and gradient even for well-classified examples; on the other hand, it has a weaker gradient for erroneously classified examples.
In contrast, focal loss (all the other curves) has a smaller loss and a weaker gradient for well-classified examples, and stronger gradients for erroneously classified examples.
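For reference, the focal loss from the Lin et al. paper is

$$FL(p_t) = -\alpha_t (1 - p_t)^{\gamma} \log(p_t),$$

where $p_t$ is the predicted probability of the true class; $\gamma = 0$ recovers the (α-balanced) cross entropy, and larger $\gamma$ down-weights the well-classified examples more aggressively.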
Can anyone kindly explain what classification loss and localization loss actually mean in TensorFlow?
I am getting these losses during the SSD training procedure using the TensorFlow API, but I don't understand either of them.
Here I read that the localization loss is the loss of the bounding-box regressor, which raises a new question: what is a bounding-box regressor?
Can anyone explain briefly, please?
I'll try to give a brief explanation as I understand it; I hope it helps.
what basically classification loss and localization loss mean in tensorflow?
The classification and localization loss values are the outputs of the corresponding loss functions and represent the "price paid for inaccuracy of predictions" in the classification and localization problems, respectively.
The overall loss value given is the sum of the classification loss and the localization loss.
The optimization algorithm tries to reduce these loss values until the loss sum reaches a point where you are happy with the results and consider your network 'trained'.
You can generally think of loss as a score where a lower score means a better model.
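For example, in the SSD paper (Liu et al.), the total training objective is exactly such a weighted sum:

$$L(x, c, l, g) = \frac{1}{N}\left(L_{conf}(x, c) + \alpha L_{loc}(x, l, g)\right),$$

where $N$ is the number of matched default boxes, $L_{conf}$ is the classification (confidence) loss, $L_{loc}$ is the localization loss, and $\alpha$ balances the two.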
what is bounding box regressor?
The bounding-box regressor is a trained model (or model head) used to obtain a more accurate bounding box in relation to the region of interest (ROI) in detection problems, I believe.
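As a concrete example, in Faster R-CNN / SSD-style detectors the regressor does not predict the box directly; it predicts offsets relative to an anchor or default box $(x_a, y_a, w_a, h_a)$:

$$t_x = \frac{x - x_a}{w_a}, \quad t_y = \frac{y - y_a}{h_a}, \quad t_w = \log\frac{w}{w_a}, \quad t_h = \log\frac{h}{h_a},$$

so "regression" here means learning these continuous offsets with a loss such as smooth L1.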