How to calculate the probability of hidden markov models? - hidden-markov-models

I've seen the great article Hidden Markov Model Simplified.
I got stuck at one point in this article.
I understand the mathematical formulation of the joint probability,
but I cannot see why the result of the calculation is 0.75.
Could anyone help me understand?
Thanks

There is a typo in the article's calculation.
STEP 1:
P(A,B,A,Red,Green,Red) = [P(y_0=A) P(x_0=Red|y_0=A)] [P(y_1=B|y_0=A) P(x_1=Green|y_1=B)] [P(y_2=A|y_1=B) P(x_2=Red|y_2=A)]
STEP 2:
= (1 * 1) * (1 * 0.75) * (1 * 1)
STEP 3:
= 0.75 * (1) * (1) = 0.75
instead of the article's
= 0.75 * (2) * (2) = 0.75
The typo is in step 3: the two remaining bracketed factors each multiply out to 1, not 2, which is why the joint probability is 0.75. Hope this makes sense.
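As a sanity check of the arithmetic, here is a minimal Python sketch that multiplies out the same chain of probabilities (the probability values are taken from the calculation above; the variable names are mine):
p_y0_A = 1.0            # P(y_0 = A)
p_red_given_A = 1.0     # P(x = Red | y = A)
p_B_given_A = 1.0       # P(y_{t+1} = B | y_t = A)
p_green_given_B = 0.75  # P(x = Green | y = B)
p_A_given_B = 1.0       # P(y_{t+1} = A | y_t = B)

# joint probability of states (A, B, A) and observations (Red, Green, Red)
joint = (p_y0_A * p_red_given_A) \
        * (p_B_given_A * p_green_given_B) \
        * (p_A_given_B * p_red_given_A)
print(joint)  # 0.75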


GAN generator loss goes to zero

I am rather new to deep learning, so please bear with me. I have a GAN, with the model structure copy-pasted from: https://machinelearningmastery.com/how-to-develop-a-generative-adversarial-network-for-an-mnist-handwritten-digits-from-scratch-in-keras/
It trains for, say, 100-200 epochs with pretty OK results, then suddenly the generator loss drops to zero... here is an excerpt from the log:
epoch,step,gen_loss,discr_loss
...
189,25,0.208,0.712
189,26,3.925,1.501
189,27,0.269,1.400
189,28,7.814,2.536
189,29,0.000,3.387 // here?!?
189,30,0.000,7.903
189,31,16.118,7.745
189,32,16.118,8.059
189,33,16.118,8.059
189,34,16.118,8.059
... etc, it never recovers
Is this a problem of vanishing gradients? Anything else I’m missing?
In the blog post comments people discuss the GAN collapse problem; here is one of the comments:
There were problems with the discriminator collapsing to zero on occasion. This seems to be a known feature of GANs. Do any established GAN hacks help with this?
Looking at the discriminator after 100 epochs, it was in a confused state where everything passed into it was given circa 50% probability of being real/fake. I colour coded some generated examples based on discriminator probability (red = fake, green = real, blue = unsure, based on an arbitrary banding) and, as you mentioned, the subjective impression versus the discriminator output does not always tie up. (Example posted on LinkedIn.) There was not enough spread in the discriminator probability output to make this meaningful.
GANs are very hard to train, and it is very common for the generator or the discriminator to become so strong that the other can't improve any more. If you are, for instance, trying to generate pictures, I would recommend using progressive GANs, which improve stability a lot and allow you to go to high-resolution images.
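One of the established hacks the comment above asks about is one-sided label smoothing, which keeps the discriminator from collapsing into over-confident predictions. This is a generic illustration, not code from the linked tutorial; the tiny discriminator and the random batches below are stand-ins:
import numpy as np
from tensorflow import keras

# hypothetical stand-in for the tutorial's discriminator
discriminator = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),
    keras.layers.Dense(1, activation="sigmoid"),
])
discriminator.compile(optimizer="adam", loss="binary_crossentropy")

# stand-in batches; the real code would pull these from MNIST / the generator
real_images = np.random.rand(64, 28, 28)
fake_images = np.random.rand(64, 28, 28)

# one-sided label smoothing: target 0.9 instead of 1.0 for real samples,
# so the discriminator never receives perfectly confident targets
real_labels = np.full((64, 1), 0.9)
fake_labels = np.zeros((64, 1))

d_loss_real = discriminator.train_on_batch(real_images, real_labels)
d_loss_fake = discriminator.train_on_batch(fake_images, fake_labels)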

Predict probability of predicted class

ML beginner here.
I have a dataset containing GPA, GRE, TOEFL, SOP & LOR rankings (out of 5), etc. (all numerical), and a final column that states whether or not the applicant was admitted to a university (0 or 1), which is what we'll use as y_train.
I'm supposed to not just predict the class labels, but also calculate the probability of each person getting admitted.
Edit: following the first comment, I built a logistic regression model, and with some googling I found predict_proba from sklearn and tried implementing it. There weren't any syntactical errors, but the values given by predict_proba were horribly wrong.
Link: https://github.com/tarunn2799/gre-pred/blob/master/GRE%20Admission%20Probability-%20Extraaedge.ipynb
Please help me find where I've gone wrong, and share any tips to reduce the loss.
Thank you!
I read your notebook, but I'm confused about why you think the predict_proba results are horribly wrong.
Is the prediction accuracy not good, or is the format of predict_proba not what you expected?
You could use sklearn.metrics.accuracy_score() and sklearn.metrics.confusion_matrix() to check your predicted labels, or use sklearn.metrics.roc_auc_score() to check the result of predict_proba. It is best to check both the train and test sets.
I think the format of predict_proba is correct; alternatively, you could try predict_log_proba() to calculate the log probabilities.
Hope this helps.
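For reference, here is a minimal sketch of that checking workflow in scikit-learn. The synthetic X and y below are stand-ins for the notebook's data, not its actual columns:
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split

# stand-in data: 500 applicants, 5 numerical features (e.g. GPA, GRE, ...)
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = (X[:, 0] + X[:, 1] + rng.normal(size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# hard labels vs. admission probabilities
y_pred = model.predict(X_test)
proba = model.predict_proba(X_test)[:, 1]  # column 1 = P(admitted)

print(accuracy_score(y_test, y_pred))
print(confusion_matrix(y_test, y_pred))
print(roc_auc_score(y_test, proba))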

Learning rate doesn't change for AdamOptimizer in TensorFlow

I would like to see how the learning rate changes during training (print it out or create a summary and visualize it in TensorBoard).
Here is a code snippet from what I have so far:
optimizer = tf.train.AdamOptimizer(1e-3)
grads_and_vars = optimizer.compute_gradients(loss)
train_op = optimizer.apply_gradients(grads_and_vars, global_step=global_step)
sess.run(tf.initialize_all_variables())
for i in range(0, 10000):
    sess.run(train_op)
    print(sess.run(optimizer._lr_t))
If I run the code, I constantly get the initial learning rate (1e-3), i.e. I see no change.
What is a correct way for getting the learning rate at every step?
I would like to add that this question is really similar to mine. However, I cannot post my findings in the comment section there since I do not have enough rep.
I was asking myself the exact same question, and wondering why it wouldn't change. Looking at the original paper (page 2), one sees that the self._lr stepsize (denoted alpha in the paper) is required by the algorithm but never updated. We also see that there is an alpha_t that is updated for every step t, and should correspond to the self._lr_t attribute. But in fact, as you observe, evaluating the self._lr_t tensor at any point during training always returns the initial value, that is, _lr.
So your question, as I understood it, is how to get the alpha_t for TensorFlow's AdamOptimizer as described in section 2 of the paper and in the corresponding TF v1.2 API page:
alpha_t = alpha * sqrt(1-beta_2_t) / (1-beta_1_t)
BACKGROUND
As you observed, the _lr_t tensor doesn't change throughout the training, which may lead to the false conclusion that the optimizer doesn't adapt (this can easily be tested by switching to the vanilla GradientDescentOptimizer with the same alpha). And, in fact, other values do change: a quick look at the optimizer's __dict__ shows the following keys: ['_epsilon_t', '_lr', '_beta1_t', '_lr_t', '_beta1', '_beta1_power', '_beta2', '_updated_lr', '_name', '_use_locking', '_beta2_t', '_beta2_power', '_epsilon', '_slots'].
By inspecting them through training, I noticed that only _beta1_power, _beta2_power and the _slots get updated.
Further inspecting the optimizer's code, in line 211, we see the following update:
update_beta1 = self._beta1_power.assign(
    self._beta1_power * self._beta1_t,
    use_locking=self._use_locking)
This basically means that _beta1_power, which is initialized with _beta1, gets multiplied by _beta1_t after every iteration.
But here comes the confusing part: _beta1_t and _beta2_t never get updated, so effectively they hold the initial values (_beta1 and _beta2) through the whole training, contradicting the notation of the paper in a similar fashion as _lr and _lr_t do. I guess this is for a reason, but I personally don't know why. In any case, these are protected/private attributes of the implementation (they start with an underscore) and don't belong to the public interface (they may even change between TF versions).
So after this small background we can see that _beta1_power and _beta2_power are the original beta values exponentiated to the current training step, that is, the equivalent of the variables referred to as beta_t in the paper. Going back to the definition of alpha_t in section 2 of the paper, we see that, with this information, it should be pretty straightforward to implement:
SOLUTION
optimizer = tf.train.AdamOptimizer()
# rest of the graph...

# ... somewhere in your session
# note that a0 comes from a scalar, whereas bb1 and bb2 come from tensors
# and thus have to be evaluated
a0 = optimizer._lr
bb1 = optimizer._beta1_power.eval()
bb2 = optimizer._beta2_power.eval()
at = a0 * (1 - bb2)**0.5 / (1 - bb1)
print(at)
The variable at holds the alpha_t for the current training step.
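Since alpha_t depends only on the step count and the beta hyperparameters, you can also compute it without TensorFlow at all. A quick standalone check, assuming Adam's default hyperparameters:
# alpha_t = alpha * sqrt(1 - beta2^t) / (1 - beta1^t), per section 2 of the paper
# Adam defaults assumed: alpha=1e-3, beta1=0.9, beta2=0.999
alpha, beta1, beta2 = 1e-3, 0.9, 0.999
for t in (1, 10, 100, 1000):
    # after t steps, _beta1_power == beta1**t and _beta2_power == beta2**t
    alpha_t = alpha * (1 - beta2**t) ** 0.5 / (1 - beta1**t)
    print(t, alpha_t)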
DISCLAIMER
I couldn't find a cleaner way of getting this value by just using the optimizer's interface, but please let me know if one exists! I guess there is none, which actually puts into question the usefulness of plotting alpha_t, since it does not depend on the data.
Also, to complete this information, section 2 of the paper also gives the formula for the weight updates, which is much more telling but also more involved to plot. For a very nice and good-looking implementation of that, you may want to take a look at this nice answer from the post that you linked.
Hope it helps! Cheers,
Andres

LSTM implementation with peephole

I have been reading papers about LSTMs and checking implementations. There is one point that is not clear to me.
In most of the papers it is mentioned that the weight matrices from the cell to the gate vectors should be diagonal (e.g., Alex Graves 2013, page 5), but I haven't seen this in any implementation.
For example, these implementations: [1], [2].
Another example is from the MILA lab: [3].
Are these people implementing it incorrectly, or am I missing something?
The TensorFlow implementation does use a diagonal matrix, see here. Note that what this means in practice is that the peepholes only go from the cell to itself, and so you're doing elementwise vector multiplies.
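To see concretely why a diagonal peephole matrix amounts to an elementwise multiply, here is a minimal NumPy sketch of a peephole forget gate (my own illustration with made-up dimensions, not code from any of the linked implementations):
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

hidden, x_dim = 4, 3
rng = np.random.default_rng(0)
W_f = rng.normal(size=(hidden, x_dim))   # input-to-gate weights
U_f = rng.normal(size=(hidden, hidden))  # hidden-to-gate weights
w_cf = rng.normal(size=hidden)           # diagonal of the cell-to-gate (peephole) matrix
b_f = np.zeros(hidden)
x_t = rng.normal(size=x_dim)
h_prev = rng.normal(size=hidden)
c_prev = rng.normal(size=hidden)

# the peephole term written as a full diagonal matrix...
f_dense = sigmoid(W_f @ x_t + U_f @ h_prev + np.diag(w_cf) @ c_prev + b_f)
# ...equals the elementwise product with the diagonal's entries
f_elem = sigmoid(W_f @ x_t + U_f @ h_prev + w_cf * c_prev + b_f)
assert np.allclose(f_dense, f_elem)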

TSP and Lin-Kernighan algorithm from a Prim MST

I'm trying to code a solution to the TSP. I already have the minimum spanning tree thanks to Prim's algorithm, and I also read that a Lin-Kernighan tour could be constructed from this graph, but I can't see how to do it.
Could anyone explain to me how to perform that?
Thanks
You need to construct an Eulerian circuit from your minimum spanning tree: duplicate every tree edge so that every vertex has even degree, walk the resulting Eulerian circuit, and shortcut repeated vertices to get an initial tour. Then you can remove overlapping paths (x-cross connections between two edges) with Lin-Kernighan moves; see the sketch below.
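A minimal sketch of that pipeline in Python, using a 2-opt pass as the simplest special case of the Lin-Kernighan moves; the example points and the Euclidean metric are assumptions for illustration:
import itertools
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def prim_mst(points):
    # naive Prim: returns the MST as an adjacency list over point indices
    n = len(points)
    in_tree = {0}
    adj = {i: [] for i in range(n)}
    while len(in_tree) < n:
        u, v = min(((u, v) for u in in_tree for v in range(n) if v not in in_tree),
                   key=lambda e: dist(points[e[0]], points[e[1]]))
        adj[u].append(v)
        adj[v].append(u)
        in_tree.add(v)
    return adj

def tour_from_mst(adj):
    # walking the doubled tree in DFS order and skipping already-visited
    # vertices is exactly the Eulerian-circuit-plus-shortcut step
    tour, seen, stack = [], set(), [0]
    while stack:
        u = stack.pop()
        if u not in seen:
            seen.add(u)
            tour.append(u)
            stack.extend(adj[u])
    return tour

def two_opt(tour, points):
    # repeatedly uncross pairs of edges while an improving swap exists
    improved = True
    while improved:
        improved = False
        for i, j in itertools.combinations(range(len(tour)), 2):
            a, b = tour[i], tour[(i + 1) % len(tour)]
            c, d = tour[j], tour[(j + 1) % len(tour)]
            if len({a, b, c, d}) < 4:
                continue
            if (dist(points[a], points[c]) + dist(points[b], points[d])
                    < dist(points[a], points[b]) + dist(points[c], points[d])):
                tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                improved = True
    return tour

points = [(0, 0), (0, 2), (3, 1), (2, 2), (1, 0), (3, 3)]
print(two_opt(tour_from_mst(prim_mst(points)), points))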