How can I get the confidence interval of an LSTM prediction? - tensorflow

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

model = Sequential()
model.add(LSTM(120, return_sequences=True, input_shape=(last_n, 1)))
model.add(LSTM(80, return_sequences=True))
model.add(LSTM(40))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
model.summary()
I use an LSTM model to predict a time series in TensorFlow. Now I would like to get the confidence interval of the prediction. How can I get that?
Use dropout at prediction time (Monte Carlo dropout, i.e. keep the dropout layers active when predicting), or change the loss function to a quantile (pinball) loss such as L_tau(y, y_hat) = max(tau * (y - y_hat), (tau - 1) * (y - y_hat)) and train one output per quantile.
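A minimal sketch of the dropout route (Monte Carlo dropout), assuming the model above gains Dropout layers and that x_test is a hypothetical test array shaped like the training windows: keeping dropout active at prediction time via training=True yields a spread of predictions from which an empirical interval can be read off.

import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dropout, Dense

# Same idea as the model above, with dropout between the recurrent layers.
mc_model = Sequential([
    LSTM(120, return_sequences=True, input_shape=(last_n, 1)),
    Dropout(0.2),
    LSTM(40),
    Dropout(0.2),
    Dense(1),
])
mc_model.compile(loss='mean_squared_error', optimizer='adam')
# ... fit on the training data as usual ...

# training=True keeps the dropout layers active at prediction time.
samples = np.stack([mc_model(x_test, training=True).numpy()
                    for _ in range(100)])
lower, upper = np.percentile(samples, [2.5, 97.5], axis=0)  # ~95% interval

The pinball-loss route instead trains one output per quantile; a two-quantile variant appears under the interval-prediction question further down.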

Related

weighted loss function for multilabel classification

I am working on a multilabel classification problem for images. I have 5 classes and I am using a sigmoid for the last layer of the classifier. I have imbalanced data caused by the multilabel problem, and I thought I could use:
tf.nn.weighted_cross_entropy_with_logits(labels, logits, pos_weight, name=None)
However, I don't know how to get the logits from my model. I also think I shouldn't use a sigmoid in the last layer, since this loss function applies a sigmoid to the logits.
First of all, I suggest you have a look at the TensorFlow tutorial on classification with an imbalanced dataset. Keep in mind, however, that that tutorial covers binary classification and uses a sigmoid as the last dense layer's activation function. If your five classes are mutually exclusive (a multi-class problem), you should use a softmax activation instead; if several labels can be active at once (true multi-label), the per-class sigmoid you already have is the standard choice. The rest of this answer assumes the multi-class case.
The softmax function normalizes a set of N real numbers into a probability distribution such that they sum up to 1.
For K = 2, softmax reduces to the sigmoid (applied to the difference of the two logits).
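As a quick illustration with made-up logits:

import tensorflow as tf

logits = tf.constant([2.0, 1.0, 0.1])
print(tf.nn.softmax(logits).numpy())  # -> [0.659 0.242 0.099], sums to 1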
I don't know your model, but you could create something like this (following the tutorial):
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation=None)
])
To obtain the predictions you could do:
predictions = model(x_train[:1]).numpy() # obtains the prediction logits
tf.nn.softmax(predictions).numpy() # converts the logits to probabilities
In order to train you can define the following loss, compile the model, and train:
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
model.compile(optimizer='adam',
              loss=loss_fn,
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5)
Now, since you have an imbalanced dataset, you can add weights. If you look at the documentation of SparseCategoricalCrossentropy, you can see that its __call__ method has an optional sample_weight parameter:
Optional sample_weight acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If sample_weight is a tensor of size [batch_size], then the total loss for each sample of the batch is rescaled by the corresponding element in the sample_weight vector.
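A minimal sketch of the per-example route, assuming integer labels in y_train (consistent with SparseCategoricalCrossentropy) and using inverse class frequency as a made-up weighting scheme:

import numpy as np

# One weight per training example, from the inverse frequency of its class.
class_counts = np.bincount(y_train)
class_weights = class_counts.sum() / (len(class_counts) * class_counts)
sample_weight = class_weights[y_train]

model.fit(x_train, y_train, sample_weight=sample_weight, epochs=5)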
I suggest you have a look at this answer if you have doubts about how to proceed; I think it addresses exactly what you want to achieve.
I also find that this tutorial explains the multi-label classification problem quite well.
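If the problem really is multi-label (several labels active at once), here is a minimal sketch of wiring the tf.nn.weighted_cross_entropy_with_logits from the question into Keras; the model below is a stand-in for the asker's image model, and the pos_weight values are placeholders:

import tensorflow as tf

def weighted_bce_from_logits(pos_weight):
    # pos_weight: per-class weights for positive labels, shape (num_classes,).
    def loss(y_true, logits):
        per_element = tf.nn.weighted_cross_entropy_with_logits(
            labels=tf.cast(y_true, tf.float32),
            logits=logits,
            pos_weight=pos_weight)
        return tf.reduce_mean(per_element)
    return loss

# The point is the final Dense layer with activation=None: the model emits
# logits, and the sigmoid lives inside the loss function.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(64, 64, 3)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(5, activation=None),
])
model.compile(optimizer='adam',
              loss=weighted_bce_from_logits(tf.constant([1.0, 2.0, 5.0, 1.5, 3.0])))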

Keras two-unit output: how to modify the loss function to combine two prediction values

I'm a beginner in machine learning. Recently I have wanted to do photovoltaic interval prediction, and I learned that one method is to modify the output layer so that it outputs 2 prediction values for each point directly.
After building an LSTM with two output units in Keras, I found that the two predicted values are too close together to tell apart in the plot, and that they fail to bracket the real value. I think this might be caused by the loss function; I used MAE before.
I want to know how to combine ypred1 and ypred2 with yreal in my own loss function.
Here is my code:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

model = Sequential()
model.add(LSTM(50, input_shape=(train_X.shape[1], train_X.shape[2])))
model.add(Dense(50, activation='relu'))  # note: 'ReLU' is not a valid activation string
model.add(Dense(2, activation='linear'))
model.compile(loss='mse', optimizer='adam', loss_weights=None)
Can I use syntax such as y1 = ypred[0], y2 = ypred[1]?
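One common way to make the two output units produce a genuine interval is to train each unit toward a different quantile with a pinball loss. A minimal sketch, with the 5th and 95th percentiles as placeholder choices and assuming one real-valued target per sample; note that inside a Keras loss, y_pred has shape (batch, 2), so the two predictions are picked out per column as y_pred[:, 0] and y_pred[:, 1] rather than ypred[0] and ypred[1]:

import tensorflow as tf

def interval_pinball_loss(tau_low=0.05, tau_high=0.95):
    def loss(y_true, y_pred):
        y = tf.reshape(tf.cast(y_true, tf.float32), [-1])  # one target per sample
        e_low = y - y_pred[:, 0]    # error of the lower-quantile unit
        e_high = y - y_pred[:, 1]   # error of the upper-quantile unit
        pin_low = tf.maximum(tau_low * e_low, (tau_low - 1.0) * e_low)
        pin_high = tf.maximum(tau_high * e_high, (tau_high - 1.0) * e_high)
        return tf.reduce_mean(pin_low + pin_high)
    return loss

model.compile(loss=interval_pinball_loss(), optimizer='adam')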

Different results for categorical crossentropy as loss and as a metric in Keras model training

I am training and optimizing my multi-class classification CNN with the following compile call in Keras.
model.compile(loss=keras.losses.categorical_crossentropy,
              optimizer=optimiser,
              metrics=['accuracy', 'categorical_crossentropy'])
I used categorical_crossentropy both as the loss and as a metric to watch. After training the model for 10 epochs, I get the following values.
Even though I have chosen categorical_crossentropy as both the loss and a metric, what could be the possible reasons for their values being different?
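One common reason, worth checking with a quick diagnostic (a sketch under assumptions, with x_val and y_val standing for a hypothetical held-out set): the loss value reported by Keras includes any regularization penalties attached to the model (e.g. a kernel_regularizer), while the categorical_crossentropy metric reports only the bare crossentropy, so the two drift apart as soon as such terms exist.

# Return order follows compile(): [loss, accuracy, categorical_crossentropy].
loss_value, acc, cce_metric = model.evaluate(x_val, y_val, verbose=0)
print(loss_value, cce_metric)
# Without regularization terms or sample weighting, these two numbers
# should agree; a persistent gap points at such extra loss terms.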

How to build a Neural Network in Keras using a custom loss function with datapoint-specific weight?

I want to train a Neural Network for a classification task in Keras using a TensorFlow backend with a custom loss function. In my loss, I want to give different weights to different training examples. I have some datapoints I consider important and some I do not consider as important. I want my loss function to take this into account and punish errors in important examples more than in less important ones.
I have already built my model:
import tensorflow as tf

input = tf.keras.Input(shape=(16,))
hidden_layer_1 = tf.keras.layers.Dense(5, kernel_initializer='glorot_uniform', activation='relu')(input)
# A softmax over a single unit always outputs 1, so use a sigmoid here.
output = tf.keras.layers.Dense(1, kernel_initializer='normal', activation='sigmoid')(hidden_layer_1)
model = tf.keras.Model(input, output)
model.compile(loss=custom_loss(input), optimizer='adam', run_eagerly=True,
              metrics=[tf.keras.metrics.Accuracy(), 'acc'])
and the current state of my loss function is:
def custom_loss(input):
    def loss(y_true, y_pred):
        return ...
    return loss
I'm struggling to implement the loss function the way I explained above, mainly because I don't know exactly what input, y_pred, and y_true are (KerasTensors, I know, but what is their content? And is it for one training example only or for the whole batch?). I'd appreciate help with the following (see the sketch after this list):
printing out the values of input, y_true, and y_pred
converting the input value to a NumPy ndarray ([1,3,7] for example) so I can use the array to look up my weight for this specific training data point
once I have my weight as a number (0.5 for example), implementing the computation of the loss function in Keras; my loss for one training example should be 0 if the classification was correct and weight if it was incorrect.
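A few notes and a minimal sketch (x_train, y_train, and the importance flags below are assumptions, not from the question): inside a Keras loss, y_true and y_pred are batch tensors of shape (batch_size, 1), not single examples, and with run_eagerly=True you can call y_pred.numpy() or tf.print inside the loss to inspect them. A literal 0-or-weight loss has no useful gradient, so the usual substitute is to scale a differentiable loss by the per-example weight, which is exactly what the built-in sample_weight argument of fit() does:

import numpy as np

# Hypothetical importance flags, one per training example; in practice
# they would come from looking up each datapoint's weight.
important = np.zeros(len(x_train), dtype=bool)
important[:100] = True                         # placeholder choice
sample_weight = np.where(important, 2.0, 1.0)  # placeholder weights

model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['acc'])
model.fit(x_train, y_train, sample_weight=sample_weight, epochs=5)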

custom loss function gives different results than the default

I am trying to understand how to build a custom loss function, and the first thing I've tried is to reimplement the binary_crossentropy function in Keras.
In my code if I do:
model.compile(Adam(lr=learning_rate), loss=losses.binary_crossentropy, metrics=['accuracy'])
the model compiles OK and trains quickly, reaching an accuracy of over 95% in the first epoch and a loss of 0.2.
When I create a custom loss function that basically replicates losses.binary_crossentropy:
from keras import backend as K

def custom_loss(y_true, y_pred):
    return K.mean(K.binary_crossentropy(y_pred, y_true), axis=-1)
and then:
model.compile(Adam(lr=learning_rate), loss=custom_loss, metrics=['accuracy'])
when I fit, the loss is quite high (0.65) and the accuracy low (0.47). The fitting procedure and the data are the same in both cases, so it seems that I am not declaring my loss function correctly.
I am using the latest versions of Keras with the TensorFlow backend, and my model is a simple VGG16 fully convolutional model (FCN-32).
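One likely culprit, with a corrected sketch under the assumption that the rest of the setup stays unchanged: the Keras backend signature is K.binary_crossentropy(target, output), i.e. it expects y_true first and y_pred second, so the custom loss above passes its arguments in swapped order. A version matching the built-in would be:

from keras import backend as K

def custom_loss(y_true, y_pred):
    # K.binary_crossentropy(target, output): ground truth comes first.
    return K.mean(K.binary_crossentropy(y_true, y_pred), axis=-1)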