Is it possible to extract confidence values for regression predictions in TensorFlow?

Can I extract the confidence values or the variance of the prediction error from a TensorFlow regressor? E.g. if the model gives a prediction x, can I know the confidence band, i.e. whether x is within ±25% of the actual value?

I'm afraid it's not as easy as when using softmax in the output layer. As said here, you can use the MSE of the NN on the validation set as an estimate of the variance, then combine it with your desired confidence level. Be aware that this approach assumes a lot of things (e.g. that the distribution of errors is always the same, which may not be true), so if you really need those confidence intervals, a regression NN is not the best fit for you.
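As a rough sketch of that idea (the names model, x_val, y_val and x_new are placeholders for your own trained regressor and data, and the ~95% band assumes roughly Gaussian, constant-variance errors):

import numpy as np

# Hypothetical names: `model` is your trained regressor, (x_val, y_val) a held-out set.
val_pred = model.predict(x_val).ravel()
residuals = y_val - val_pred

# Estimate the error standard deviation from the validation MSE.
sigma = np.sqrt(np.mean(residuals ** 2))

# Assuming roughly Gaussian, homoscedastic errors, an approximate 95% band is:
new_pred = model.predict(x_new).ravel()
lower, upper = new_pred - 1.96 * sigma, new_pred + 1.96 * sigma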

Related

Can Tensorflow take gradient on matrix 2-norm?

Normally the matrix norm we take in TensorFlow is the Frobenius norm, which is easy to compute and easy to understand, e.g. from a Bayesian view. But in many cases it is the largest singular value that matters. Is it possible to optimize that in TensorFlow? It depends on whether TensorFlow can take the gradient with respect to the matrix 2-norm.
Actually, the spectral norm is equal to the largest singular value. To get this value you can use TensorFlow's tf.linalg.svd.
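As a minimal sketch of that (TF 2.x eager style; the matrix A is just an illustrative variable, and it assumes TensorFlow's registered SVD gradient covers this case):

import tensorflow as tf

A = tf.Variable(tf.random.normal([4, 3]))  # illustrative matrix variable

with tf.GradientTape() as tape:
    # compute_uv=False returns only the singular values, sorted in descending order
    s = tf.linalg.svd(A, compute_uv=False)
    spectral_norm = s[0]  # matrix 2-norm = largest singular value

grad = tape.gradient(spectral_norm, A)  # gradient of the 2-norm w.r.t. A, usable in an optimizer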

Tensorflow bounded regression vs classification

As part of my master's thesis I have been tasked with predicting a label integer (0-255), which is a binned representation of an angle. The feature columns are also integers, in the range 0-255.
So far I have used the custom TensorFlow layers estimator, implementing a 256-output classifier, which performs well. However, my issue with the classification approach I am using is the following:
My classification model thinks that predicting a 3 instead of a 28 is as good/bad as predicting a 27 instead of a 28.
The numerical interval/ordinal nature of my data (not sure which) leads me to believe that if I used regression I would achieve results with fewer drastically incorrect predictions or outliers.
My goal:
to reduce the number of drastically incorrect predicted outliers
My questions:
Is regression the better approach, or can I improve my classification to include an ordinal/interval relationship between my labels?
If I choose regression, is there a way to bound my predicted output to the range 0-255? (I know I will have to round the predicted float values.)
Thanks in advance. Any other comments, suggestions or ideas to help me best tackle the problem are also very welcome.
If I made any incorrect assumptions or mistakes in my interpretation of the problem, feel free to correct me.
Question 1: Regression is the simpler approach; however, you can also use classification and manipulate the loss function so that misclassifications that are "close" to the true class incur a lower loss.
Question 2: The TensorFlow op for bounding your prediction is tf.clip_by_value. Are you mapping all 360 degrees to [0, 255]? In that case you will want to consider the boundary cases, i.e. if your estimator yields -4 and the true value is 251, they actually represent the same angle, so the loss should be 0.
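A minimal sketch of both points, assuming the 0-255 labels bin a full 360-degree circle and therefore wrap around (the function names and the squared-error choice are illustrative, not part of the original answer):

import tensorflow as tf

NUM_BINS = 256  # labels 0-255, binning a full 360-degree circle

def circular_squared_error(pred, label):
    # Wrap-aware loss: predicting 251 when the truth is 4 counts as 9 bins off, not 247.
    label = tf.cast(label, pred.dtype)
    diff = tf.math.floormod(pred - label, NUM_BINS)
    circ_diff = tf.minimum(diff, NUM_BINS - diff)
    return tf.reduce_mean(tf.square(circ_diff))

def bounded_prediction(raw_output):
    # Bound the final regression output to the valid label range before rounding.
    return tf.clip_by_value(raw_output, 0.0, NUM_BINS - 1.0)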

Keras NN producing results with good variability prediction but poor magnitude prediction

I'm currently using Keras (TensorFlow) to create a feed-forward neural network to predict a sales value. When I look at the test set and compare predicted vs. real sales values, a fit line ends up having a good R-squared value but a low slope. So it's predicting the variability in the data correctly, but not the magnitude of the data. When I look at the data in smaller subsets, the underprediction is consistent. Has anybody had experience with this, or an idea of what could cause it? I have a feeling it may be a data bias/normalization issue.

sampled_softmax_loss vs negative sampling

I am working on a text autoencoder, so I want to use negative sampling to train the model. I want to know the difference between negative sampling and sampled softmax.
Thanks in advance
https://www.tensorflow.org/extras/candidate_sampling.pdf
According to TensorFlow, negative sampling relates to a logistic (sigmoid) loss, while sampled softmax relates to a softmax loss.
Both of them, at their core, pick a sample of negative examples to compute the loss on and update the gradients.
For your model, use it if your output is very large (many classes) AND the regular loss is too slow to compute. If the output has few classes there's not much gain. If the training is fast anyway, why bother with approximations?
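For reference, both approximations are exposed directly as training-time losses in TensorFlow; here is a minimal sketch with illustrative shapes and variable names (at evaluation time you would switch back to the full softmax):

import tensorflow as tf

num_classes, dim, batch, num_sampled = 50000, 128, 32, 64  # illustrative sizes

weights = tf.Variable(tf.random.normal([num_classes, dim]))  # output embedding matrix
biases = tf.Variable(tf.zeros([num_classes]))
inputs = tf.random.normal([batch, dim])  # e.g. decoder hidden states
labels = tf.random.uniform([batch, 1], maxval=num_classes, dtype=tf.int64)  # true class ids

# Sampled softmax: approximates the full softmax cross-entropy.
softmax_loss = tf.nn.sampled_softmax_loss(
    weights=weights, biases=biases, labels=labels, inputs=inputs,
    num_sampled=num_sampled, num_classes=num_classes)

# NCE / negative sampling: a sampled logistic (sigmoid) loss.
nce_loss = tf.nn.nce_loss(
    weights=weights, biases=biases, labels=labels, inputs=inputs,
    num_sampled=num_sampled, num_classes=num_classes)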

Approximating multidimensional functions with neural networks

Is it possible to fit or approximate multidimensional functions with neural networks?
Let's say I want to model the function f(x,y) = sin(x) + y from some given measurement data (f(x,y) is considered the ground truth and is not known). Also, if possible, some code examples written in TensorFlow or Keras would be great.
As said by @AndreHolzner, theoretically you can approximate any continuous function with a neural network as well as you want, on any compact subset of R^n, even with only one hidden layer.
However, in practice, the neural net may have to be very large for some functions, and can sometimes be untrainable (the optimal weights may be hard to find without getting stuck in a local minimum). So here are a few practical suggestions (unfortunately vague, because the details depend too much on your data and are hard to predict without multiple tries):
Keep the network not too big (it's hard to define exactly, unfortunately); otherwise you'll just overfit. You'll probably need a LOT of training samples.
A big number of reasonably-sized layers is usually better than a reasonable number of big layers.
If you have some priors about the function, use them: for instance, if you believe there is some kind of periodicity in f (like in your example, but it could be more complicated), you could apply the sin() function to some of the outputs of the first layer (not all, that would give you a truly periodic output). If you suspect a polynomial of degree n, just augment your input x with x², ..., x^n and use a linear regression on that input, etc. It will be much easier than learning the weights (see the sketch after this list).
The universal approximation theorem is true on any compact subset of R^n, not on the entire multidimensional space. In particular, you'll never be able to predict the value for an input that's way bigger than any of the training samples (say you trained on numbers from 0 to 100; don't test on 200, it will fail).
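As a small illustration of the feature-augmentation idea above (purely a sketch; the sin() term and the polynomial degree are assumptions about what you know of the target function):

import numpy as np

def augment_features(x, poly_degree=3):
    # Append sin(x) and low-order powers of x as extra input columns.
    cols = [x, np.sin(x)]
    cols += [x ** d for d in range(2, poly_degree + 1)]
    return np.concatenate(cols, axis=1)

# x has shape [n_samples, n_dimensions]; feed augment_features(x) to the network
# (or to a plain linear regression) instead of x.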
For an example of regression you can look here for instance. To regress a more complicated function you'd need to plug more complicated functions in between x and pred, for instance like this:
import tensorflow as tf

n_layers = 3
n_dimensions = 2  # e.g. (x, y) for f(x, y) = sin(x) + y
x = tf.placeholder(shape=[None, n_dimensions], dtype=tf.float32)
last_layer = x
# Add n_layers dense hidden layers
for i in range(n_layers):
    last_layer = tf.layers.dense(inputs=last_layer, units=128, activation=tf.nn.relu)
# Get the output prediction
pred = tf.layers.dense(inputs=last_layer, units=1, activation=None)
# Get the cost, training op, etc., just like in the linear regression example
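For completeness, a minimal sketch of that missing last step, assuming an MSE objective and a y placeholder for the targets (these names are not from the original answer):

y = tf.placeholder(shape=[None, 1], dtype=tf.float32)  # ground-truth f(x, y) values
cost = tf.reduce_mean(tf.square(pred - y))  # mean squared error
train_op = tf.train.AdamOptimizer(learning_rate=1e-3).minimize(cost)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Then feed batches of measurements:
    # sess.run(train_op, feed_dict={x: batch_x, y: batch_y})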