Regression accuracy with neural network in low density regions - tensorflow

I am developing a neural net which needs to predict values between -1 and 1. However, I am only really concerned about the values at the ends of scale, say between -1 and -0.7 and between 0.7 and 1.
I do not mind if 0.6, for example, gets predicted to be 0.1. However, I do want to know if it's 0.8 or 0.9.
The distribution of my data is roughly normal, so there are many more samples in the range where I'm not concerned about the accuracy. It seems therefore that the training process is likely to lead to greater accuracy in the centre.
How can I configure the training or engineer my expected result to overcome this?
Thanks very much.

You could assign the observations to deciles, turn it into a classification problem, and either assign a greater weight to the ranges you care about in the loss or simply oversample them during training. By default, I'd go with weighting the classes in the loss function, as it is straightforward to match with a weighted metric. Oversampling can be useful if you know that the distribution of your training data is different from the real data distribution.
To assign certain classes a greater weight in the loss function with Keras, you can pass a class_weight parameter to Model.fit. If label 0 is the first decile and label 9 is the last decile, you could double the weight of the first two and last two deciles as follows:
class_weight = {
    0: 2,
    1: 2,
    2: 1,
    3: 1,
    4: 1,
    5: 1,
    6: 1,
    7: 1,
    8: 2,
    9: 2
}
model.fit(..., class_weight=class_weight)
To oversample certain classes, you'd include them more often in the batches than the class distribution would suggest. The simplest way to implement this is to sample observation indices with numpy.random.choice that has an optional parameter to specify probabilities for each entry. (Note that Keras Model.fit also has a sample_weight parameter where you can assign weights to each observation in the training data that will be applied when computing the loss function, but the intended use case is to weigh samples by the confidence in their labels, so I don't think it's applicable here.)
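For instance, a minimal oversampling sketch along those lines (the array names x_train and y_deciles, the factor-of-2 boost for the outer deciles, and the fit arguments are illustrative assumptions, not from the question):

import numpy as np

# x_train: (num_samples, num_features), y_deciles: integer decile labels 0-9
# (hypothetical names). Give the outer deciles twice the sampling probability.
weights = np.where(np.isin(y_deciles, [0, 1, 8, 9]), 2.0, 1.0)
probs = weights / weights.sum()

# Draw indices with replacement according to these probabilities,
# then train on the resampled data.
idx = np.random.choice(len(x_train), size=len(x_train), replace=True, p=probs)
model.fit(x_train[idx], y_deciles[idx], epochs=10, batch_size=32)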

Related

Keras class weight for multi-label binary classification on temporal data

I'm training a network on temporal data to determine which of ~60 outputs are "active" at any given timestep (classified as 1 or 0 in the label data), so I have an output of 60x1 floats that should represent probabilities.
My input data is shaped as (X, 1, frames, dataPoints) - where X is the number of recorded sequences I have (I'm new to ML, I think this is 'batches'), frames is how long the longest sequence is (the rest are -1 padded and masked), and dataPoints is the actual input data for any given frame.
This is mostly an LSTM layer with return_sequences, but my input data is unbalanced.
For any given timestep, the odds are ~85% that at least one output is active, but any individual output is likely active at most ~5% of the time.
When I attempted to apply a class weight of {0: 0.01, 1: 0.99} (pending tuning), I got an error stating "class_weight not supported for 3+ dimensional targets". I've done some googling, and people suggest compiling with sample_weight_mode="temporal" and modifying the sample weights, but (A) that doesn't seem right for my data (no individual sample is more important, but each 1 classification within all the samples is important), and (B) I don't understand the dimensionality of what that's doing.
How can I apply the class weighting to help balance each 1 classification with this data structure?
Side note: I'm rescaling the output of the LSTM to 0->1 since it uses tanh activation (and must use tanh activation for CUDA acceleration), and I use from_logits=False in my binary cross-entropy loss.
Extra points if I can just use built-in tf/keras stuff and not have to write a custom loss function.
EDIT to include some code:
I have a data generator that outputs x and y in the shape of:
x.shape == (1, frameCount, inputFeatureLength) where frameCount is the number of frames in the temporal sequence, and inputFeatureLength is the size of the input data (around 100).
y.shape == (1, frameCount, outputSize) where outputSize is about 60 features.
I can successfully compile the model, but when I try model.fit with class_weight={0: 0.01, 1: 0.99} as an argument, I get the error ValueError: class_weight not supported for 3+ dimensional targets.
I've looked into sample weights, but as far as I can tell, even using sample_weight_mode="temporal" on model.fit only lets me give sample weights per frame of output, not per each of the ~60 outputs within a frame.
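For reference, the kind of custom loss I'm hoping to avoid would presumably weight each 0/1 entry individually, roughly like this (an untested sketch; w_pos/w_neg stand in for the tuned class weights):

import tensorflow as tf

def weighted_bce(y_true, y_pred, w_pos=0.99, w_neg=0.01):
    y_true = tf.cast(y_true, y_pred.dtype)
    # Element-wise binary cross-entropy over the full (batch, frames, ~60) target
    bce = tf.keras.backend.binary_crossentropy(y_true, y_pred)
    # Weight every individual 0/1 entry by its own class weight,
    # rather than per sample or per frame
    weights = y_true * w_pos + (1.0 - y_true) * w_neg
    return tf.reduce_mean(weights * bce)

# model.compile(optimizer="adam", loss=weighted_bce)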

Validation Loss Doesn't Change

I am using a custom loss to try to decrease the peak-to-average power ratio (PAPR) of OFDM symbols. To break it down: the input is of length N and can take only 4 values. The output can take any floating-point value in [-1, 1] (because I can't go over the power threshold). I generate the training and validation sets randomly, since the data can take any random combination of the 4 values.
The problem is that changing and tweaking the model and its parameters only improves the training loss; the validation loss is constant from the first epoch.
I am using a custom loss function that concatenates the output of the model with the input, spreading the output values into reserved positions in the middle of the input, then applies an IFFT and computes the max/mean ratio over all elements.
In short, it reserves some of the array elements (tones) for the model to pick, sacrificing those elements in order to remove the peaks of the input and get fewer peaks in the final signal.
I am sending the input data one-hot encoded for each of the 4 values, and sending it once more as labels in its complex form so I can do operations on it in the custom loss function below.
import tensorflow as tf

def PAPR_Loss(y_true, y_pred):
    Reserved_phases = [0, 32, 62, 93, 124, 155, 186, 217, 248]
    L = len(Reserved_phases) - 1  # L was undefined in the original snippet; presumably the number of reserved tone slots
    # Interleave the data carriers (y_true, in complex form) with the reserved tones built from the model output (y_pred)
    data = tf.concat([tf.concat([y_true[:, Reserved_phases[i]:Reserved_phases[i+1]], tf.complex(y_pred[:, 4*(i+1)-4] - y_pred[:, 4*(i+1)-2], y_pred[:, 4*(i+1)-3] - y_pred[:, 4*(i+1)-1])[:, tf.newaxis]], 1) for i in range(L)], 1)
    x = tf.signal.ifft(data)      # back to the time domain
    temp = tf.square(tf.abs(x))   # instantaneous power
    loss = tf.reduce_max(temp, axis=-1) / tf.reduce_mean(temp, axis=-1)  # peak / average power
    return 10 * tf.experimental.numpy.log10(loss)  # PAPR in dB
[Figure: loss and validation loss vs. epochs]
I am using 80k unique data combinations for training and 20k different combinations for validation.
I am also using dropout after each layer, so I don't think it's an overfitting problem.
When I remove the tanh activation at the output (so the output can take any values), I start getting improvements on the validation loss and a better training loss as well. However, I suspect this happens because the model simply increases the mean power term, which is inversely proportional to the loss; it doesn't learn where the peaks are or how to cancel them. It just increases the mean as much as possible so that the max isn't that big relative to it anymore.
Also, could the model be failing to train because of the concatenation and because I use the input in a different form as the label? I thought I could get away with this since the input isn't trainable, so it shouldn't matter.
Note: the model doesn't even beat the classical method (no deep learning), which just searches a limited candidate set for the best combinations that reduce these peaks. The problem with the classical method is that it is computationally expensive; if I could even match its performance, this approach would be very rewarding.
What could be going wrong here? What can I try changing next?
Thanks in advance.

Tensorflow & Keras prediction threshold

What is the threshold value that is used by TF by default to classify an input image as being a certain class?
For example, say I have 3 classes 0, 1, 2, and the labels for images are one-hot encoded like so: [1, 0, 0], meaning this image has label of class 0.
Now when a model outputs a prediction after softmax like this one: [0.39, 0.56, 0.05] does TF use 0.5 as the threshold so the class it predicts is class 1?
What if all the predictions were below 0.5 like [0.33, 0.33, 0.33] what would TF say the result is?
And is there any way to specify a new threshold for example 0.7 and ensure TF says that a prediction is wrong if no class prediction is above that threshold?
Also, would this logic carry over to the inference stage, so that if the network is uncertain of the class it will refuse to give a classification for the image?
when a model outputs a prediction after softmax like this one: [0.39, 0.56, 0.05] does TF use 0.5 as the threshold so the class it predicts is class 1?
No. There is no threshold involved here. Tensorflow (and any other framework, for that matter) will just pick the class with the maximum probability (argmax); the result here (class 1) would be the same even if the probabilistic output was [0.33, 0.34, 0.33].
You seem to erroneously believe that a probability value of 0.5 has some special significance in a 3-class classification problem; it has not: a probability value of 0.5 is "special" only in a binary classification setting (and a balanced one, for that matter). In an n-class setting, the respective "special" value is 1/n (here 0.33), and by definition, there will always be some entry in the probability vector greater than or equal to this value.
What if all the predictions were below 0.5 like [0.33, 0.33, 0.33] what would TF say the result is?
As already implied, there is nothing strange or unexpected with all probabilities being below 0.5 in an n-class problem with n>2.
Now, if all the probabilities happen to be equal, as in the example you show (although highly improbable in practice, the question is valid, at least in theory), ideally such ties should be resolved randomly (i.e. pick a class at random); in practice, since this stage is usually handled by NumPy's argmax method, the prediction will be the first class (i.e. class 0), which is not difficult to demonstrate:
import numpy as np
x = np.array([0.33, 0.33, 0.33])
np.argmax(x)
# 0
due to how such cases are handled by Numpy - from the argmax docs:
In case of multiple occurrences of the maximum values, the indices corresponding to the first occurrence are returned.
To your next question:
is there any way to specify a new threshold for example 0.7 and ensure TF says that a prediction is wrong if no class prediction is above that threshold?
Not in Tensorflow (or any other framework) itself, but this is always something that can be done in a post-processing stage during inference: irrespective of what is actually returned by your classifier, it is always possible to add some extra logic such that whenever the max probability value is less than a threshold, your system (i.e. your model plus the post-processing logic) returns something like "I don't know / I am not sure / I can't answer". But again, this is external to Tensorflow (or any other framework used) and the model itself, and it can be used only during inference and not during training (in any case, it doesn't make sense during training, because during training only predicted class probabilities are used, and not hard classes).
In fact, we had implemented such a post-processing module in a toy project some years ago, which was an online service to classify dog breeds from images: when the max probability returned by the model was less than a threshold (which was the case, say, when the model was presented with an image of a cat instead of a dog), the system was programmed to respond with the question "Are you sure this is a dog?", instead of being forced to make a prediction among the predefined dog breeds...
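To illustrate that post-processing idea, here is a minimal sketch (the 0.7 threshold, the function name, and returning None for "I don't know" are illustrative choices, not part of any framework API):

import numpy as np

def predict_with_threshold(model, x, threshold=0.7):
    # model.predict returns class probabilities, shape (num_samples, num_classes)
    probs = model.predict(x)
    classes = np.argmax(probs, axis=-1)
    confident = np.max(probs, axis=-1) >= threshold
    # Replace low-confidence predictions with None ("I don't know")
    return [c if ok else None for c, ok in zip(classes, confident)]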
The threshold is used in the case of binary classification or multi-label classification. In the case of multi-class classification you use argmax: the class with the highest activation is your output class. All classes rarely equal each other; if the model is trained well, there should be one dominant class.
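A small sketch of that distinction (the probability vectors are just illustrative examples):

import numpy as np

softmax_out = np.array([0.39, 0.56, 0.05])   # multi-class: exactly one class per sample
print(np.argmax(softmax_out))                # -> 1, no threshold involved

sigmoid_out = np.array([0.91, 0.12, 0.64])   # multi-label: each class decided independently
print((sigmoid_out >= 0.5).astype(int))      # -> [1 0 1], here 0.5 really is a threshold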

Correct way to calculate MSE for autoencoders with batch-training

Suppose you have a network representing an autoencoder (AE). Let's assume it has 90 inputs/outputs. I want to batch-train it with batches of size 100. I will denote my input with x and my output with y.
Now, I want to use the MSE to evaluate the performance of my training process. To my understanding, the input/output dimensions for my network are of size (100, 90).
The first part of the MSE calculation is performed element-wise, which is
(x - y)²
so I end up with a matrix of size (100, 90) again. For a better understanding of my problem, here is a sketch of how this matrix looks now:
[[x1 x2 x3 ... x90], # sample 1 of batch
[x1 x2 x3 ... x90], # sample 2 of batch
.
.
[x1 x2 x3 ... x90]] # sample 100 of batch
I have stumbled across various versions of calculating the error from here on. The goal of all versions is to reduce the matrix to a scalar, which can then be optimized.
Version 1:
Sum over the quadratic errors in the respective sample first, then calculate the mean of all samples, e.g.:
v1 =
[ SUM_of_qerrors_1, # equals sum(x1 to x90)
SUM_of_qerrors_2,
...
SUM_of_qerrors_100 ]
result = mean(v1)
Version 2:
Calculate mean of quadratic errors per sample, then calculate the mean over all samples, e.g.:
v2 =
[ MEAN_of_qerrors_1, # equals mean(x1 to x90)
MEAN_of_qerrors_2,
...
MEAN_of_qerrors_100 ]
result = mean(v2)
Personally, I think that version 1 is the correct way to do it, because the commonly used crossentropy is calculated in the same manner. But if I use version 1, it isn't really the MSE.
I've found a keras example here (https://keras.io/examples/variational_autoencoder/), but unfortunately I wasn't able to figure out how keras does this under the hood with batch training.
I would be grateful either for a hint how this is handled under the hood by keras (and therefore tensorflow) or what the correct version is.
Thank you!
Version 2, i.e. computing the mean of quadratic errors per sample and then computing the mean of the resulting numbers, is the one done in Keras:
from keras import backend as K

def mean_squared_error(y_true, y_pred):
    return K.mean(K.square(y_pred - y_true), axis=-1)
However, note that taking the average over samples is done in another part of the code which I have explained extensively here and here.
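To see how the two versions relate numerically, here is a small NumPy sketch (random data, purely illustrative): version 1 is just version 2 scaled by the number of features, so the two lead to the same optimization up to a constant factor.

import numpy as np

x = np.random.rand(100, 90)   # batch of 100 samples, 90 features, as in the question
y = np.random.rand(100, 90)
sq = (x - y) ** 2

v1 = np.mean(np.sum(sq, axis=1))    # version 1: sum per sample, then mean over samples
v2 = np.mean(np.mean(sq, axis=1))   # version 2: mean per sample, then mean over samples

print(v1, v2, v1 / v2)              # v1 == 90 * v2: they differ only by a constant factor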

Tensorflow Loss for Non-Independent Classes

I am using a Tensorflow network for classification between classes that are similar to their neighboring classes, i.e. not independent. For example, let's say we want to predict among 10 classes, but the predictions are not merely "correct" or "incorrect." Instead, if the correct class is 7 and the network predicts 6, the loss should be less than if the network predicted 5, because 6 is closer to the correct answer than 5. My understanding is that cross entropy and 1-hot vectors provide an "all or nothing" loss rather than a "continuous" loss that reflects the magnitude of the error. If that is correct, how does one implement such a continuous loss in Tensorflow?
-- Update June 13 2016 ----
An example application might be color recognition. If the network predicts "green" but the true color is yellow-green, then the loss should be less than if the network predicted blue because green is a better prediction than blue.
You can choose to implement a continuous function (e.g. hue from HSV) as a single output, and construct your own loss calculation that reflects what you want to optimize. In that case you'd just have a single output value that ranged between 0.0 and 1.0, and the loss would be evaluated based on the distance from the labeled value.
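A minimal sketch of that idea in Keras (the layer sizes, the 3-feature RGB-style input, and the MAE loss are illustrative assumptions, not part of the original answer):

import tensorflow as tf

# Single continuous output in [0.0, 1.0], so the penalty grows with the distance
# between prediction and label instead of being all-or-nothing.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(3,)),                      # e.g. an RGB triple
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # single value in [0, 1], e.g. hue
])
model.compile(optimizer="adam", loss="mae")          # loss = distance from the labeled value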