I am using an ANN in TensorFlow to fit a simple known function, Y = sin(X) (or Y = cos(X)). My loss function converges properly.
[Loss convergence graph] If the loss converges, that should mean the model has fit my training dataset well.
However, when I predict on the training set itself, the model fails to reproduce even the training data, which is strange.
[Prediction plot] Here it can be seen that after roughly the 200th value the model shows no fit at all.
If the loss has converged, the model should fit the training dataset well, but that is not happening here. What is wrong with my code?
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

X = np.linspace(0,10*np.pi,1000)
Y = np.sin(X)
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Dense(500,input_shape=(1,),activation='relu'))
model.add(tf.keras.layers.Dense(1))
opt = tf.keras.optimizers.Adam(0.01)
model.compile(optimizer=opt,loss='mse')
r= model.fit(X.reshape(-1,1),Y,epochs=100)
plt.plot(r.history['loss'])
Yhat = model.predict(X.reshape(-1,1)).flatten()
plt.plot(Y)
plt.plot(Yhat)
It is the nature of your data.
It reminded me of the old result showing that a single-layer perceptron cannot even compute XOR.
Anyway, the reason here is that your model is shallow, and shallow networks are far less efficient than deep ones. To put it in perspective, a model like the one below
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Dense(20,input_shape=(1,),activation='relu'))
model.add(tf.keras.layers.Dense(20,activation='relu'))
model.add(tf.keras.layers.Dense(1))
will likely perform better, even though it has only about 1/3 of the parameters of the original model. That is because the deeper you go, the more complex the representations the model can create. The core thing to remember is:
Deep learning models do not build non-linear decision boundaries directly; each and every unit is fundamentally designed to create a linear decision boundary. What the network does is stack those linear boundaries to build a representation of the data that is linearly separable.
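To see the effect, here is a minimal, self-contained sketch that trains the deeper model on the same data as in the question and compares the prediction against the target curve; the epoch count and learning rate are illustrative choices, not tuned values:

import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

X = np.linspace(0, 10 * np.pi, 1000)
Y = np.sin(X)

# Same architecture as above, written with an explicit Input layer
model = tf.keras.models.Sequential([
    tf.keras.Input(shape=(1,)),
    tf.keras.layers.Dense(20, activation='relu'),
    tf.keras.layers.Dense(20, activation='relu'),
    tf.keras.layers.Dense(1)
])
model.compile(optimizer=tf.keras.optimizers.Adam(0.01), loss='mse')
model.fit(X.reshape(-1, 1), Y, epochs=100, verbose=0)

# Compare the prediction against the target curve
Yhat = model.predict(X.reshape(-1, 1)).flatten()
plt.plot(Y, label='sin(X)')
plt.plot(Yhat, label='prediction')
plt.legend()
plt.show()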
Also, the most important thing is to know your data. In this case, probabilistic models would give almost perfect results; you can implement those easily using TensorFlow Probability.
In TensorFlow, there is a pairwise mean squared error function that takes in "predictions", but it is not documented whether this should be a sigmoid/softmax output or logits. https://www.tensorflow.org/api_docs/python/tf/losses/mean_pairwise_squared_error
I would like to know whether the predictions must be in a particular form, or whether there is a better pairwise loss function available.
The logits layer, in the deep learning context, is the layer to which the softmax function is applied. The softmax function is used when we want to perform multi-class classification, and the most common error measure for classification is cross-entropy. The mean pairwise squared error, on the other hand, is used in the context of regression: we want to predict a real value rather than a class. With that said, the layer that generates the outputs won't be a logits layer but an ordinary linear layer, and the most common error measure for regression is the mean squared error.
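For illustration, here is a minimal sketch of calling the pairwise loss on raw, un-activated predictions from a linear layer. The values are made up, and the tf.compat.v1.losses namespace is assumed for TF 2.x (in TF 1.x the same function lives under tf.losses):

import tensorflow as tf

labels = tf.constant([[1.0, 2.0, 3.0]])        # real-valued regression targets
predictions = tf.constant([[1.1, 1.9, 3.2]])   # raw outputs of a linear layer (no sigmoid/softmax)

loss = tf.compat.v1.losses.mean_pairwise_squared_error(
    labels=labels, predictions=predictions)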
I am still relatively new to the world of deep learning. I want to create a deep learning model (preferably using TensorFlow/Keras) for image anomaly detection. By anomaly detection I mean essentially what a OneClassSVM does.
I have already tried sklearn's OneClassSVM using HOG features from the images. I was wondering whether there is an example of how to do this with deep learning; I looked around but couldn't find a single code example that handles this case.
The way to do this in Keras is with the KerasRegressor wrapper module (which wraps scikit-learn's regressor interface). Useful information can also be found in the source code of that module. Basically, you first have to define your network model, for example:
from keras.layers import Input, Dense
from keras.models import Model

def simple_model():
    #Input layer
    data_in = Input(shape=(13,))
    #First layer, fully connected, ReLU activation
    layer_1 = Dense(13, activation='relu', kernel_initializer='normal')(data_in)
    #Second layer, same idea
    layer_2 = Dense(6, activation='relu', kernel_initializer='normal')(layer_1)
    #Output, single node without activation
    data_out = Dense(1, kernel_initializer='normal')(layer_2)
    #Build and compile the model
    model = Model(inputs=data_in, outputs=data_out)
    #you may choose any loss or optimizer function, be careful which you choose
    model.compile(loss='mean_squared_error', optimizer='adam')
    return model
Then, pass it to the KerasRegressor builder and fit with your data:
from keras.wrappers.scikit_learn import KerasRegressor
#choose your epochs and batch size
regressor = KerasRegressor(build_fn=simple_model, epochs=100, batch_size=64)
#fit with your data
regressor.fit(data, labels)
For which you can now do predictions or obtain its score:
p = regressor.predict(data_test) #obtain predicted value
score = regressor.score(data_test, labels_test) #obtain test score
In your case, since you need to separate anomalous images from the ones that are OK, one approach you can take is to train your regressor by passing in anomalous images labeled 1 and normal images labeled 0.
This will make your model return a value closer to 1 when the input is an anomalous image, so you can threshold the output to get the results you want. You can think of this output loosely as an R² coefficient against the "anomalous" model you trained, with 1 meaning a perfect match.
Also, as you mentioned, autoencoders are another way to do anomaly detection. For this I suggest you take a look at the Keras blog post Building Autoencoders in Keras, which explains their implementation with the Keras library in detail.
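If it helps, here is a minimal sketch of the autoencoder route: train on normal images only, then flag inputs whose reconstruction error is unusually high. The input dimension, layer sizes and threshold percentile are illustrative assumptions, not values from the blog post:

import numpy as np
from keras.layers import Input, Dense
from keras.models import Model

input_dim = 28 * 28                        # e.g. flattened grayscale images
inputs = Input(shape=(input_dim,))
encoded = Dense(64, activation='relu')(inputs)
decoded = Dense(input_dim, activation='sigmoid')(encoded)
autoencoder = Model(inputs, decoded)
autoencoder.compile(optimizer='adam', loss='mse')

# x_normal: array of normal (non-anomalous) images, scaled to [0, 1]
# autoencoder.fit(x_normal, x_normal, epochs=50, batch_size=64)

def anomaly_scores(model, x):
    # Per-sample reconstruction error; higher means more anomalous
    reconstructed = model.predict(x)
    return np.mean(np.square(x - reconstructed), axis=1)

# threshold = np.percentile(anomaly_scores(autoencoder, x_normal), 95)
# is_anomaly = anomaly_scores(autoencoder, x_test) > threshold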
It is worth noting that single-class classification is, in a sense, another way of saying regression.
Classification tries to find a probability distribution over the N possible classes, and you usually pick the most probable class as the output (that is why most classification networks use a sigmoid activation on their outputs, as it has range [0, 1]). Its output is discrete/categorical.
Regression, in contrast, tries to find the best model that represents your data by minimizing the error or some other metric (like the well-known R² metric, or coefficient of determination). Its output is a real, continuous number (which is also why most regression networks use no activation on their outputs). I hope this helps; good luck with your coding.
In the original Auto-Encoding Variational Bayes paper, the authors describe the "reparameterization trick" in section 2.4. The trick is to break up your latent state z into a learnable mean and sigma (learned by the encoder) and to add Gaussian noise. You then sample a datapoint from z (basically you generate an encoded image) and let the decoder map the encoded datapoint back to the original image.
I have a hard time getting over how strange this is. Could someone explain the latent variable model a bit more, specifically:
Why are we assuming the latent state is Gaussian?
How is it possible that a Gaussian can generate an image?
And how does backprop push the encoder toward learning a Gaussian, as opposed to some unknown non-linear function?
Here is an example implementation of the latent variable model (from here) in TensorFlow:
# ...neural net code maps input to hidden layers z_mean and z_log_sigma...
self.z_mean, self.z_log_sigma_sq = \
    self._recognition_network(network_weights["weights_recog"],
                              network_weights["biases_recog"])

# Draw one sample z from Gaussian distribution
n_z = self.network_architecture["n_z"]
eps = tf.random_normal((self.batch_size, n_z), 0, 1, dtype=tf.float32)

# z = mu + sigma*epsilon
self.z = tf.add(self.z_mean,
                tf.mul(tf.sqrt(tf.exp(self.z_log_sigma_sq)), eps))

# ...neural net code maps z to output...
They are not assuming that the encoder's activations naturally follow a Gaussian distribution; they are enforcing that, out of the possible solutions, the encoder chooses one resembling a Gaussian.
The image is generated by decoding an activation/feature, and the activations are distributed so as to resemble a Gaussian.
This is done by minimizing the KL divergence between the distribution of the activations and a Gaussian.
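As a minimal sketch of those two steps in TF 2.x style (the function names here are mine, not from the paper or the linked implementation):

import tensorflow as tf

def sample_z(z_mean, z_log_sigma_sq):
    # z = mu + sigma * epsilon  (the reparameterization trick)
    eps = tf.random.normal(tf.shape(z_mean))
    return z_mean + tf.sqrt(tf.exp(z_log_sigma_sq)) * eps

def kl_to_standard_normal(z_mean, z_log_sigma_sq):
    # KL( N(mu, sigma^2) || N(0, 1) ), summed over the latent dimensions;
    # adding this term to the reconstruction loss is what pushes the
    # encoder's activations toward a standard Gaussian
    return -0.5 * tf.reduce_sum(
        1 + z_log_sigma_sq - tf.square(z_mean) - tf.exp(z_log_sigma_sq),
        axis=1)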
In the following TensorFlow function, we must feed in the activations of the artificial neurons in the final layer. That I understand. But I don't understand why it is called logits. Isn't logit a mathematical function?
loss_function = tf.nn.softmax_cross_entropy_with_logits(
    logits=last_layer,
    labels=target_output
)
Logits is an overloaded term which can mean many different things:
In math, logit is a function that maps probabilities ([0, 1]) to the real line ((-inf, inf)).
A probability of 0.5 corresponds to a logit of 0; negative logits correspond to probabilities less than 0.5, positive logits to probabilities greater than 0.5.
In ML, it can be
the vector of raw (non-normalized) predictions that a classification model generates, which is ordinarily then passed to a normalization function. If the model is solving a multi-class classification problem, logits typically become an input to the softmax function. The softmax function then generates a vector of (normalized) probabilities with one value for each possible class.
Logits also sometimes refer to the element-wise inverse of the sigmoid function.
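A small numeric illustration of the two meanings above, the statistical logit (inverse of the sigmoid) and the raw scores that softmax normalizes; the numbers are arbitrary:

import numpy as np

def logit(p):
    return np.log(p / (1 - p))

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

print(logit(0.5))            # 0.0
print(logit(0.9))            # ~2.197 (positive, since p > 0.5)
print(sigmoid(logit(0.25)))  # 0.25 (sigmoid inverts the logit)

raw_scores = np.array([2.0, 1.0, 0.1])              # "logits" in the ML sense
probs = np.exp(raw_scores) / np.exp(raw_scores).sum()
print(probs, probs.sum())                            # normalized, sums to 1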
Just adding this clarification so that anyone who scrolls down this far can at least get it right, since there are so many upvoted answers that get it wrong.
Diansheng's answer and JakeJ's answer get it right.
A new answer posted by Shital Shah is an even better and more complete answer.
Yes, logit is a mathematical function in statistics, but the logit used in the context of neural networks is different. The statistical logit doesn't even make sense here.
I couldn't find a formal definition anywhere, but logit basically means:
The raw predictions which come out of the last layer of the neural network.
1. This is the very tensor on which you apply the argmax function to get the predicted class.
2. This is the very tensor which you feed into the softmax function to get the probabilities for the predicted classes.
Also, from a tutorial on the official TensorFlow website:
Logits Layer
The final layer in our neural network is the logits layer, which will return the raw values for our predictions. We create a dense layer with 10 neurons (one for each target class 0–9), with linear activation (the default):
logits = tf.layers.dense(inputs=dropout, units=10)
If you are still confused, the situation is like this:
raw_predictions = neural_net(input_layer)
predicted_class_index_by_raw = argmax(raw_predictions)
probabilities = softmax(raw_predictions)
predicted_class_index_by_prob = argmax(probabilities)
where, predicted_class_index_by_raw and predicted_class_index_by_prob will be equal.
Another name for raw_predictions in the above code is logit.
As for why it is called logit... I have no idea. Sorry.
[Edit: See this answer for the historical motivations behind the term.]
Trivia
Although, if you want to, you can apply statistical logit to probabilities that come out of the softmax function.
If the probability of a certain class is p,
Then the log-odds of that class is L = logit(p).
Also, the probability of that class can be recovered as p = sigmoid(L), using the sigmoid function.
Calculating the log-odds this way is rarely useful, though.
Summary
In the context of deep learning, the logits layer is the layer that feeds into softmax (or another such normalization). The output of the softmax gives the probabilities for the classification task, and its input is the logits layer. The logits layer typically produces values from -infinity to +infinity, and the softmax layer transforms them to values from 0 to 1.
Historical Context
Where does this term come from? In the 1930s and 40s, several people were trying to adapt linear regression to the problem of predicting probabilities. However, linear regression produces output from -infinity to +infinity, while for probabilities the desired output is 0 to 1. One way to do this is to somehow map the probabilities in [0, 1] to (-infinity, +infinity) and then use linear regression as usual. One such mapping is the cumulative normal distribution, used by Chester Ittner Bliss in 1934, who called this the "probit" model, short for "probability unit". However, this function is computationally expensive and lacks some of the desirable properties for multi-class classification. In 1944, Joseph Berkson used the function log(p/(1-p)) to do this mapping and called it logit, short for "logistic unit". The term logistic regression derives from this as well.
The Confusion
Unfortunately, the term logits is abused in deep learning. From a purely mathematical perspective, logit is a function that performs the above mapping. In deep learning, people started calling the layer that feeds into the softmax the "logits layer". Then people started calling the output values of this layer "logits", creating confusion with logit the function.
TensorFlow Code
Unfortunately, TensorFlow code further adds to the confusion with names like tf.nn.softmax_cross_entropy_with_logits. What does logits mean here? It just means that the input of the function is supposed to be the output of the last neuron layer, as described above. The _with_logits suffix is redundant, confusing and pointless. Functions should be named without regard to such very specific contexts, because they are simply mathematical operations that can be performed on values derived from many other domains. In fact, TensorFlow has another similar function, sparse_softmax_cross_entropy, where they fortunately forgot to add the _with_logits suffix, creating inconsistency and adding to the confusion. PyTorch, on the other hand, simply names its function without these kinds of suffixes.
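A quick numeric check of what "with logits" means in practice: the fused op applied to raw scores matches softmax followed by cross-entropy computed by hand (the values are made up):

import tensorflow as tf

logits = tf.constant([[2.0, 1.0, 0.1]])   # raw last-layer outputs
labels = tf.constant([[0.0, 1.0, 0.0]])   # one-hot target

fused = tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=logits)
manual = -tf.reduce_sum(labels * tf.math.log(tf.nn.softmax(logits)), axis=-1)
print(fused.numpy(), manual.numpy())      # the two values agree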
Reference
The Logit/Probit lecture slides are one of the best resources for understanding logit. I have also updated the Wikipedia article with some of the above information.
Logit is a function that maps probabilities [0, 1] to [-inf, +inf].
Softmax is a function that maps [-inf, +inf] to [0, 1], similar to the sigmoid. But softmax also normalizes the sum of the values (the output vector) to 1.
TensorFlow "with logits": it means that you are applying a softmax function to logit numbers to normalize them. The input_vector/logits are not normalized and can range over [-inf, +inf].
This normalization is used for multi-class classification problems. For multi-label classification problems, sigmoid normalization is used instead, i.e. tf.nn.sigmoid_cross_entropy_with_logits.
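A small multi-label sketch, assuming three independent binary classes; each logit is squashed by its own sigmoid rather than jointly normalized by softmax (the values are illustrative):

import tensorflow as tf

logits = tf.constant([[2.0, -1.0, 0.3]])   # raw last-layer outputs
labels = tf.constant([[1.0, 0.0, 1.0]])    # classes 0 and 2 are both "on"

loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=labels, logits=logits)
probs = tf.sigmoid(logits)                 # independent per-class probabilities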
My personal understanding: in the TensorFlow domain, logits are the values to be used as input to softmax. I came to this understanding based on this TensorFlow tutorial:
https://www.tensorflow.org/tutorials/layers
Although it is true that logit is a function in maths (especially in statistics), I don't think that's the same "logit" you are looking at. In the book Deep Learning, Ian Goodfellow mentions:
The function σ⁻¹(x) is called the logit in statistics, but this term is more rarely used in machine learning. σ⁻¹(x) stands for the inverse function of the logistic sigmoid function.
In TensorFlow, it is frequently seen as the name of the last layer. In Chapter 10 of the book Hands-On Machine Learning with Scikit-Learn and TensorFlow by Aurélien Géron, I came across this paragraph, which describes the logits layer clearly:
note that logits is the output of the neural network before going through the softmax activation function: for optimization reasons, we will handle the softmax computation later.
That is to say, although we conceptually use softmax as the activation function of the last layer in our design, for ease of computation we take the logits out separately, because it is more efficient to calculate the softmax and cross-entropy loss together. Remember that cross-entropy is a cost function and is not used in forward propagation.
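A minimal Keras sketch of "taking the logits out separately": a linear last layer combined with a loss that applies the softmax internally. The layer sizes are illustrative:

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(784,)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10)              # logits: no softmax here
])

# from_logits=True makes the loss apply softmax + cross-entropy together,
# which is more numerically stable than a separate softmax layer
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])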
If you check the mathematical logit function, it converts the [0, 1] interval to the whole real line [-inf, +inf].
Sigmoid and softmax do exactly the opposite: they convert the [-inf, +inf] real line to the [0, 1] interval.
This is why, in machine learning, a logit may be placed before the sigmoid or softmax function (since they match).
And this is why "we may call" anything in machine learning that goes in front of the sigmoid or softmax function a logit.
Here is a G. Hinton video using this term.
Here is a concise answer for future readers. TensorFlow's logit is defined as the output of a neuron without applying an activation function:
logit = w*x + b,
x: input, w: weight, b: bias. That's it.
The following is irrelevant to this question.
For the historical background, read the other answers. Hats off to TensorFlow's "creatively" confusing naming convention. In PyTorch, there is only one CrossEntropyLoss, and it accepts un-activated outputs. Convolutions, matrix multiplications and activations are same-level operations. The design is much more modular and less confusing. This is one of the reasons I switched from TensorFlow to PyTorch.
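As a small sketch of the PyTorch behaviour described above, CrossEntropyLoss takes raw, un-activated outputs directly (the values are made up):

import torch
import torch.nn as nn

logits = torch.tensor([[2.0, 1.0, 0.1]])   # raw linear-layer outputs
target = torch.tensor([1])                 # class index

criterion = nn.CrossEntropyLoss()          # applies log-softmax + NLL internally
loss = criterion(logits, target)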
logits
The vector of raw (non-normalized) predictions that a classification model generates, which is ordinarily then passed to a normalization function. If the model is solving a multi-class classification problem, logits typically become an input to the softmax function. The softmax function then generates a vector of (normalized) probabilities with one value for each possible class.
In addition, logits sometimes refer to the element-wise inverse of the sigmoid function. For more information, see tf.nn.sigmoid_cross_entropy_with_logits.
(official TensorFlow documentation)
They are basically the fullest learned model you can get from the network, before it's been squashed down to apply to only the number of classes we are interested in. Check out how some researchers use them to train a shallow neural net based on what a deep network has learned: https://arxiv.org/pdf/1312.6184.pdf
It's kind of like how, when learning a subject in detail, you learn a great many minor points, but when teaching a student you try to compress it to the simplest case. If the student then tried to teach in turn, it would be quite difficult, but they would be able to describe it just well enough to use the language.
The logit (/ˈloʊdʒɪt/ LOH-jit) function is the inverse of the sigmoidal "logistic" function or logistic transform used in mathematics, especially in statistics. When the function's variable represents a probability p, the logit function gives the log-odds, or the logarithm of the odds p/(1 − p).
See here: https://en.wikipedia.org/wiki/Logit