What is the expected output range from Keras custom loss function? - tensorflow

Context
I would like to implement a custom loss function. Given the input and a predicted output, there is a real-life loss that can be calculated from the predicted output and some known real-life facts belonging to the input. I would prefer to use this real-life loss value as the loss function instead of any distance measure between the predicted output and the expected output.
This real-life loss for every given predicted output is between -10.0 and 50.0, where higher is better; in other words, this is the optimization goal of the learning.
Question
What would Keras expect (or utilize in an optimal way) as the loss function output? Should the loss function output be normalized, say to [0.0, 1.0]? Or should I just multiply [-10.0, 50.0] by -1 -> [-50.0, 10.0] and subtract 10.0 -> [-60.0, 0.0]?
Edit: I meant here: or just multiply [-10.0, 50.0] by -1 -> [-50.0, 10.0] and add 50.0 -> [0.0, 60.0]?
Note
I am a complete beginner in NN, so if I am completely missing something here, please just point me in the right direction in the fewest words.

Reading your question, I interpret the "real-life loss value belonging to the input" as the "ground truth"/"expected output" for that input.
The question is too vague to differentiate between your requirement and this.
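For illustration only, a minimal sketch of what such a custom Keras loss could look like. Here real_life_score is a made-up placeholder, not the asker's actual computation; it is assumed to return a higher-is-better score in [-10.0, 50.0] built from differentiable TensorFlow ops. Keras minimizes whatever the loss function returns, so negating the score is enough; the +50 shift proposed in the question only moves the minimum to 0 and leaves the gradients unchanged.
import tensorflow as tf

def real_life_score(y_true, y_pred):
    # Placeholder: the real version would use the known real-life facts
    # (carried e.g. in y_true) and must consist of differentiable TF ops.
    return 50.0 - 60.0 * tf.reduce_mean(tf.abs(y_true - y_pred), axis=-1)

def real_life_loss(y_true, y_pred):
    # Negate the score so that "higher is better" becomes "lower is better";
    # the constant only shifts the range to [0.0, 60.0].
    return 50.0 - real_life_score(y_true, y_pred)

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer="adam", loss=real_life_loss)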

Related

Is max operation differentiable in Pytorch?

I am using PyTorch to train some neural networks. The part I am confused about is:
prediction = myNetwork(img_batch)
max_act = prediction.max(1)[0].sum()
loss = softcrossentropy_loss - alpha * max_act
In the above codes, "prediction" is the output tensor of "myNetwork".
I hope to maximize the largest output of "prediction" over a batch.
For example:
[[-1.2, 2.0, 5.0, 0.1, -1.5], [9.6, -1.1, 0.7, 4.3, 3.3]]
For the first prediction vector, the 3rd element is the largest, while for the second vector the 1st element is the largest. And I want to maximize "5.0 + 9.6", although we cannot know in advance which index holds the largest output for a new input.
In fact, my training seems to be successful, because the "max_act" part really did increase, which is the desired behavior for me. However, I have heard some discussion about whether the max() operation is differentiable or not:
Some say that, mathematically, max() is not differentiable.
Some say that max() is just an identity function selecting the largest element, and this largest element is differentiable.
So I am confused now, and I am worried that my idea of maximizing "max_act" was wrong from the beginning.
Could someone provide some guidance on whether the max() operation is differentiable in PyTorch?
max is differentiable with respect to the values, not the indices. It is perfectly valid in your application.
From the gradient point of view, d(max_value)/d(v) is 1 if max_value==v and 0 otherwise. You can consider it as a selector.
d(max_index)/d(v) is not really meaningful as it is a discontinuous function, with only 0 and undefined as possible gradients.
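As a quick sanity check, here is a small PyTorch snippet (using the example batch from the question) showing that the gradient of max flows only to the selected elements:
import torch

v = torch.tensor([[-1.2, 2.0, 5.0, 0.1, -1.5],
                  [9.6, -1.1, 0.7, 4.3, 3.3]], requires_grad=True)
max_act = v.max(1)[0].sum()  # 5.0 + 9.6
max_act.backward()
print(v.grad)
# tensor([[0., 0., 1., 0., 0.],
#         [1., 0., 0., 0., 0.]])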

How do I compare effectiveness of different linear regression models

I have a dataframe which contains three more or less significant correlations between the target column and other columns (LinarRegressionModel.coef_ from sklearn shows 57, 97 and 79). And I don't know which model to choose: should I use only the most correlated column for the regression, or a regression with all three predictors? Is there any way to compare the models' effectiveness? Sorry, I'm very new to data analysis and couldn't google any tools for this task.
First of all, you must know that when we choose the best model to apply to new data, we are choosing the model that best fits out-of-sample data, i.e. samples that were not present during training; after all, you want to predict new cases (in your case, a new number).
So how can we do this? The best approach is to use metrics that help us decide which model is better for our dataset.
There are several kinds of metrics for regression:
MAE: Mean absolute error is the mean of the absolute value of the errors. This is the easiest metric to understand, since it is just the average error.
MSE: Mean squared error is the mean of the squared errors. It is more popular than the mean absolute error because it puts more weight on large errors.
RMSE: Root mean squared error is the square root of the mean squared error. This is one of the most popular evaluation metrics because it is interpretable in the same units as the response vector y, making it easy to relate to the data.
RAE: Relative absolute error takes the total absolute error and normalizes it by dividing by the total absolute error of a simple predictor (one that always predicts the mean value of y, y bar).
You can work with any of these, but I highly recommend using MSE and RMSE.
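A minimal sketch of how such a comparison might look with scikit-learn; the dataframe, column names and coefficients below are synthetic stand-ins, not the asker's data. Fit each candidate model on a training split and compare the metrics on the held-out test split:
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, mean_squared_error

# Synthetic stand-in for the real dataframe: three predictors and one target.
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(200, 3)), columns=["a", "b", "c"])
df["target"] = 57 * df["a"] + 97 * df["b"] + 79 * df["c"] + rng.normal(size=200)

X_train, X_test, y_train, y_test = train_test_split(
    df[["a", "b", "c"]], df["target"], test_size=0.2, random_state=0)

# One model with only the "strongest" predictor vs. one with all three.
for cols in (["b"], ["a", "b", "c"]):
    model = LinearRegression().fit(X_train[cols], y_train)
    pred = model.predict(X_test[cols])
    mae = mean_absolute_error(y_test, pred)
    rmse = np.sqrt(mean_squared_error(y_test, pred))
    print(cols, "MAE:", round(mae, 3), "RMSE:", round(rmse, 3))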

pymc python change point detection for small probabilities. ZeroProbability Error

I am trying to use pymc to find a change point in a time series. The value I am looking at over time is the probability to "convert", which is very small: 0.009 on average, with a range of 0.001-0.016.
I give the two probabilities a uniform prior between zero and the maximum observation.
alpha = df.cnvrs.max()  # set the upper bound of the uniform priors
center_1_c = pm.Uniform("center_1_c", 0, alpha)
center_2_c = pm.Uniform("center_2_c", 0, alpha)
day_c = pm.DiscreteUniform("day_c", lower=1, upper=n_days)

@pm.deterministic
def lambda_(day_c=day_c, center_1_c=center_1_c, center_2_c=center_2_c):
    out = np.zeros(n_days)
    out[:day_c] = center_1_c
    out[day_c:] = center_2_c
    return out

observation = pm.Uniform("obs", lambda_, value=df.cnvrs.values, observed=True)
When I run this code I get:
ZeroProbability: Stochastic obs's value is outside its support,
or it forbids its parents' current values.
I'm pretty new to pymc, so I'm not sure if I'm missing something obvious. My guess is that I may not have appropriate distributions for modelling small probabilities.
It's impossible to tell where you've introduced this bug—and programming is off-topic here, in any case—without more of your output. But there is a statistical issue here: You've somehow constructed a model that cannot produce either the observed variables or the current sample of latent ones.
To give a simple example, say you have a dataset with negative values, and you've assumed it to be gamma distributed; this will produce an error, because the data has zero probability under a gamma. Similarly, an error will be thrown if an impossible value is sampled during an MCMC chain.
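To make the gamma example concrete, here is a small sketch (assuming the classic PyMC2 API) that reproduces the same kind of ZeroProbability error by giving a Gamma stochastic an observed value outside its support:
import numpy as np
import pymc as pm  # PyMC2

# One negative observation, which a gamma distribution cannot generate.
data = np.array([0.5, 1.2, -0.3])
try:
    obs = pm.Gamma("obs", alpha=1.0, beta=1.0, value=data, observed=True)
    obs.logp  # evaluating the log-probability triggers the support check
except Exception as err:  # pymc raises ZeroProbability here
    print(type(err).__name__, ":", err)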

In Tensorflow, how to convert scores from neural net into discrete values as a part of learning process

Hello fellow tensorflowians!
I have the following schema:
I input some continuous variables (actually, word embeddings I took from Google's word2vec), and I am trying to predict an output that can be considered continuous as well as discrete (sorry, mathematicians! but it really depends on one's training goal).
The output takes values from 0 to 1000 with an interval of 0.25 (or a precision hyperparameter), so: 0, 0.25, 0.50, ..., 100.0.
I know that it is not possible to include something like tf.to_int (I can omit the fractions if necessary) or tf.round, because these are not differentiable, so we can't backpropagate. However, I feel that there is some solution that lets the network "know" that it is searching for a rounded solution, i.e. small multiples like 0.25 or 5.75, but I actually don't even know where to look. I looked up quantization, but that seems to be a bit of an overkill.
So my question is:
How do I inform the graph that we don't accept values below 0.0? Would taking the abs of the network output "logits" (the regression predictions) be worth considering? If not, can I modify the loss term to severely punish scores below 0 and use the absolute error instead of the squared error? I may not be aware of the full consequences of doing that.
I don't care whether a prediction for 4.5 comes out as 4.49999 or 4.4, because I round predictions to the nearest 0.25 to compute accuracy, and that's my final model evaluation metric. If so, can I use:
precision = 0.01  # so that sqrt(precision) == 0.1
loss = tf.reduce_mean(tf.maximum(0.0, tf.square(tf.subtract(logits, targets)) - precision))
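For what it's worth, a runnable sketch (TF 2.x names, made-up numbers) of the tolerance loss proposed above, just to check its behaviour: squared errors smaller than the precision contribute nothing to the loss.
import tensorflow as tf

logits = tf.constant([4.45, 3.0, -0.5])
targets = tf.constant([4.50, 3.5, 0.0])

precision = 0.01  # so that sqrt(precision) == 0.1
loss = tf.reduce_mean(tf.maximum(0.0, tf.square(logits - targets) - precision))
print(loss.numpy())  # the first prediction is within +/- 0.1 of its target, so it adds 0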

Simply switch output usage?

I have a game with only 10x2 pixels as input, and after one hour of training it learns to play by itself. Now I want to use one float output value from the model instead of three classifier outputs. The three classifier outputs were stop, 1-step-right and 1-step-left. Now I want to produce a single output value which tells me e.g. -4 => 4 steps left, +2 => 2 steps right, and so on.
But after training for 1-2 hours, it only produces numbers around 0.001, although it should produce numbers between -10.0 and +10.0.
Do I need to do it in a completely different way, or can I use a classifier model to output a real value without changing much code?
Thanks for the help.
game code link
Training a classifier is much simpler than coming up with a good loss function that will give you scalar values that make sense. Much (!) simpler.
Make it a classifier with 21 classes (0 = 10 left, 1 = 9 left, 2 = 8 left, ..., 10 = stay, 11 = 1 right, ..., 20 = 10 right).
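A minimal sketch of that 21-class setup, assuming a Keras model and the 10x2-pixel input from the question; the mapping between step counts and class indices is just an offset of 10:
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(10, 2)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(21, activation="softmax"),  # classes 0..20
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

def steps_to_class(steps):
    return steps + 10   # -10 (10 steps left) -> class 0, +10 (10 steps right) -> class 20

def class_to_steps(class_index):
    return class_index - 10   # e.g. class 12 -> 2 steps right

pred = model.predict(np.zeros((1, 10, 2)))  # dummy input just to show the mapping
print(class_to_steps(int(np.argmax(pred, axis=1)[0])))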