In a mixture of Gaussians, why is the mixing coefficient greater than 0? - mixture-model

Also, the requirement that p(x) > 0, together with N(x|µk,Σk) > 0, implies πk > 0 for all k.
I can't understand the meaning of this sentence. Why is the mixing coefficient greater than 0? Can I think of it as each component of the mixture always making a positive contribution to the mixture density?
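For reference, here is the mixture density being discussed together with the constraints usually imposed on the mixing coefficients (standard notation, written out by me rather than quoted from the book):

p(x) = \sum_{k=1}^{K} \pi_k \, \mathcal{N}(x \mid \mu_k, \Sigma_k),
\qquad \pi_k \ge 0, \qquad \sum_{k=1}^{K} \pi_k = 1

Taken together, the two constraints give 0 \le \pi_k \le 1, which is what allows each \pi_k to be read as the prior probability that a data point was generated by component k.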

Using fixed point to show square root

In going through the exercises of SICP, it defines a fixed point of a function F as a value x that satisfies the equation F(x)=x, found by iterating until the value stops changing, for example F(F(F(x))).
The thing I don't understand is how a square root of, say, 9 has anything to do with that.
For example, if I have F(x) = sqrt(9), obviously x=3. Yet, how does that relate to doing:
F(F(F(x))) --> sqrt(sqrt(sqrt(9)))
Which I believe just converges to one:
>>> math.sqrt(math.sqrt(math.sqrt(math.sqrt(math.sqrt(math.sqrt(9))))))
1.0349277670798647
Since sqrt(x) = x when x=1. In other words, how does finding the square root of a constant have anything to do with finding fixed points of functions?
When calculating the square root of a number, say a, you essentially have an equation of the form x^2 - a = 0. That is, to find the square root of a, you have to find an x such that x^2 = a, or equivalently x^2 - a = 0 -- call the latter equation (1). Equation (1) is of the form g(x) = 0, where g(x) := x^2 - a.
To use the fixed-point method for calculating the roots of this equation, you have to make some subtle modifications to the existing equation and bring it to the form f(x) = x. One way to do this is to rewrite (1) as x = a/x -- call it (2). Now in (2), you have obtained the form required for solving an equation by the fixed-point method: f(x) is a/x.
Observe that this method requires both sides of the equation to have an 'x' term; an equation of the form sqrt(a) = x doesn't meet the specification and hence can't be solved (iteratively) using the fixed-point method.
The thing I don't understand is how a square root of, say, 9 has anything to do with that.
For example, if I have F(x) = sqrt(9), obviously x=3. Yet, how does that relate to doing: F(F(F(x))) --> sqrt(sqrt(sqrt(9)))
These are standard methods for numerical calculation of roots of non-linear equations, quite a complex topic on its own and one which is usually covered in Engineering courses. So don't worry if you don't get the "hang of it", the authors probably felt it was a good example of iterative problem solving.
You need to convert the problem f(x) = 0 to a fixed point problem g(x) = x that is likely to converge to the root of f(x). In general, the choice of g(x) is tricky.
If f(x) = x² - a = 0, then you should choose g(x) as follows:
g(x) = 1/2*(x + a/x)
(This choice is based on Newton's method, which is a special case of fixed-point iterations).
To find the square root, sqrt(a):
Guess an initial value x_0.
Given a tolerance ε, compute x_(n+1) = 1/2*(x_n + a/x_n) for n = 0, 1, ... until |x_(n+1) - x_n| < ε.
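As a rough sketch of this iteration in plain Python (the function name, starting guess, and tolerance are my own choices, not part of the answer):

import math

def fixed_point_sqrt(a, x0=1.0, eps=1e-10):
    # Approximate sqrt(a) by iterating g(x) = 1/2*(x + a/x) until the value stops changing.
    x = x0
    while True:
        x_next = 0.5 * (x + a / x)   # the Newton / average-damped form of the iteration
        if abs(x_next - x) < eps:    # x_next is (numerically) a fixed point of g
            return x_next
        x = x_next

print(fixed_point_sqrt(9))           # 3.0
print(math.sqrt(9))                  # 3.0, for comparison

Note that iterating the undamped form x -> a/x from the first answer simply oscillates between x0 and a/x0, which is why a damped (averaged) form like the one above is used in practice.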

Pseudo-inverse via singular value decomposition in numpy.linalg.lstsq

At the risk of using the wrong SE...
My question concerns the rcond parameter in numpy.linalg.lstsq(a, b, rcond). I know it's used to define the cutoff for small singular values in the singular value decomposition when numpy computes the pseudo-inverse of a.
But why is it advantageous to set "small" singular values (values below the cutoff) to zero rather than just keeping them as small numbers?
PS: Admittedly, I don't know exactly how the pseudo-inverse is computed and exactly what role SVD plays in that.
Thanks!
To determine the rank of a system you need to compare against zero, but as always with floating point arithmetic you should compare against some finite tolerance, because hitting exactly 0 never happens when adding and subtracting messy numbers.
The (reciprocal) condition number allows you to specify such a limit in a way that is independent of units and scale in your matrix.
You may also make the cutoff as permissive as possible in lstsq by passing rcond=-1 (the old behaviour, which only discards singular values below roughly machine precision relative to the largest one) if you know for sure your problem can't run the risk of being under-determined.
The future default sounds like a wise choice from numpy
FutureWarning: rcond parameter will change to the default of machine precision times max(M, N) where M and N are the input matrix dimensions
because if you hit this limit, you are just fitting against rounding errors of floating point arithmetic instead of the "true" mathematical system that you hope to model.
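A small made-up illustration of that point (the matrix, right-hand side, and rcond values below are invented purely for this sketch):

import numpy as np

# The two columns of `a` are almost linearly dependent, so the second singular
# value is tiny and carries essentially no information about the true system.
a = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [3.0, 6.0 + 1e-12]])
b = np.array([1.0, 2.0 + 1e-6, 3.0])            # right-hand side with a little noise

print(np.linalg.svd(a, compute_uv=False))       # roughly [8.4e+00, 4.5e-13]

# Keep every singular value above machine precision (old behaviour, rcond=-1):
# the tiny singular value gets inverted and the coefficients blow up, fitting noise.
x_keep, *_ = np.linalg.lstsq(a, b, rcond=-1)
print(x_keep)                                   # huge values, order 1e6 in this toy case

# Treat singular values below 1e-8 * s_max as zero: we get the small,
# minimum-norm solution of the well-determined rank-1 part instead.
x_cut, *_ = np.linalg.lstsq(a, b, rcond=1e-8)
print(x_cut)                                    # approximately [0.2, 0.4]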

Does anyone know how to solve the following equation?

While reading this paper http://articles.adsabs.harvard.edu/cgi-bin/nph-iarticle_query?1976ApJ...209..214B&data_type=PDF_HIGH&whole_paper=YES&type=PRINTER&filetype=.pdf
I tried to solve eq. (49) numerically. It looks like a Fokker-Planck equation, but the finite difference method I tried is unstable.
Does anyone know how to solve it?
Computational Science Stack Exchange is where you can ask and hope for an answer, or you could try its physics cousin. The equation you quote is an integro-differential equation, fairly non-linear... a Fokker-Planck-looking equation, but definitely not the typical Fokker-Planck.
What you can try is to discretize the spatial part of the function g(x,t) using finite differences or finite elements. After all, 0 < x < x_max and you have boundary conditions. You also have to discretize the corresponding integration, so finite elements might be more appropriate. Finite elements means you can write g(x,t) as a series over a well-chosen basis of compactly supported, simple enough functions B_j(x), j = 1...N, on the interval [0, x_max]:
g(x,t) = sum_{j=1..N} g_j(t)*B_j(x)
That turns your unknown function into a (large) vector of coefficients g_j(t) (for an interpolatory basis, g_j(t) = g(x_j, t)), j = 1, 2, ..., N. As a result, you will obtain a non-linear system of ODEs
dg_j(t)/dt = Q_j(g_1(t), g_2(t), ..., g_N(t)),    j = 1 ... N
After that, use something like Runge-Kutta to integrate the ODE system numerically.
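As a rough method-of-lines sketch of this recipe (the drift and diffusion terms, grid, boundary handling, and initial profile below are placeholders I made up; they are not eq. (49) from the paper, whose integral term would likewise have to be replaced by a quadrature over the same grid):

import numpy as np
from scipy.integrate import solve_ivp

# Stand-in drift-diffusion equation  dg/dt = d/dx[A(x) g] + d2/dx2[D(x) g],
# discretized on a uniform grid in x (method of lines).
x_max, N = 10.0, 200
x = np.linspace(0.0, x_max, N)
dx = x[1] - x[0]
A = 0.5 * np.ones_like(x)          # placeholder drift coefficient
D = 0.1 * np.ones_like(x)          # placeholder diffusion coefficient

def rhs(t, g):
    # centred differences in the interior; the endpoints are held fixed for simplicity
    Ag, Dg = A * g, D * g
    dgdt = np.zeros_like(g)
    dgdt[1:-1] = (Ag[2:] - Ag[:-2]) / (2 * dx) \
               + (Dg[2:] - 2 * Dg[1:-1] + Dg[:-2]) / dx**2
    return dgdt

g0 = np.exp(-(x - 5.0) ** 2)       # arbitrary initial profile

# An implicit (stiff) integrator such as BDF is often what cures the kind of
# instability an explicit finite-difference time-stepper shows; method="RK45"
# would be the explicit Runge-Kutta option mentioned above.
sol = solve_ivp(rhs, (0.0, 1.0), g0, method="BDF", t_eval=[0.0, 0.5, 1.0])
print(sol.y.shape)                 # (N, 3): grid values at the requested times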

Should RNN attention weights over variable length sequences be re-normalized to "mask" the effects of zero-padding?

To be clear, I am referring to "self-attention" of the type described in Hierarchical Attention Networks for Document Classification and implemented many places, for example: here. I am not referring to the seq2seq type of attention used in encoder-decoder models (i.e. Bahdanau), although my question might apply to that as well... I am just not as familiar with it.
Self-attention basically just computes a weighted average of RNN hidden states (a generalization of mean-pooling, i.e. un-weighted average). When there are variable length sequences in the same batch, they will typically be zero-padded to the length of the longest sequence in the batch (if using dynamic RNN). When the attention weights are computed for each sequence, the final step is a softmax, so the attention weights sum to 1.
However, in every attention implementation I have seen, there is no care taken to mask out, or otherwise cancel, the effects of the zero-padding on the attention weights. This seems wrong to me, but I fear maybe I am missing something since nobody else seems bothered by this.
For example, consider a sequence of length 2, zero-padded to length 5. Ultimately this leads to the attention weights being computed as the softmax of a similarly 0-padded vector, e.g.:
weights = softmax([0.1, 0.2, 0, 0, 0]) = [0.20, 0.23, 0.19, 0.19, 0.19]
and because exp(0)=1, the zero-padding in effect "waters down" the attention weights. This can be easily fixed, after the softmax operation, by multiplying the weights with a binary mask, i.e.
mask = [1, 1, 0, 0, 0]
and then re-normalizing the weights to sum to 1. Which would result in:
weights = [0.48, 0.52, 0, 0, 0]
When I do this, I almost always see a performance boost (in the accuracy of my models - I am doing document classification/regression). So why does nobody do this?
For a while I considered that maybe all that matters is the relative values of the attention weights (i.e., ratios), since the gradient doesn't pass through the zero-padding anyway. But then why would we use softmax at all, as opposed to just exp(.), if normalization doesn't matter? (plus, that wouldn't explain the performance boost...)
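For concreteness, here is the computation described above in numpy (a sketch; the toy scores come from the example, the renormalization code is mine):

import numpy as np
from scipy.special import softmax

scores = np.array([0.1, 0.2, 0.0, 0.0, 0.0])   # last three positions are zero padding
mask = np.array([1.0, 1.0, 0.0, 0.0, 0.0])

weights = softmax(scores)          # ~[0.21, 0.23, 0.19, 0.19, 0.19], diluted by the padding
weights = weights * mask           # zero out the padded positions
weights = weights / weights.sum()  # re-normalize to sum to 1
print(weights)                     # ~[0.48, 0.52, 0.  , 0.  , 0.  ]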
Great question! I believe your concern is valid and zero attention scores for the padded encoder outputs do affect the attention. However, there are a few aspects that you have to keep in mind:
There are different score functions; the one in tf-rnn-attention uses a simple linear + tanh + linear transformation. But even this score function can learn to output negative scores: if you look at the code and imagine the inputs consist of zeros, the vector v is not necessarily zero because of the bias, and the dot product with u_omega can push it further down to low negative numbers (in other words, a plain simple NN with a non-linearity can make both positive and negative predictions). Low negative scores don't water down the high scores in softmax.
Due to the bucketing technique, the sequences within a bucket usually have roughly the same length, so it's unlikely that half of an input sequence is padded with zeros. Of course, this doesn't fix anything; it just means that in real applications the negative effect of the padding is naturally limited.
You mentioned it in the end, but I'd like to stress it too: the final attended output is the weighted sum of encoder outputs, i.e. relative values actually matter. Take your own example and compute the weighted sum in this case:
the first one is 0.2 * o1 + 0.23 * o2 (the rest is zero)
the second one is 0.48 * o1 + 0.52 * o2 (the rest is zero too)
Yes, the magnitude of the second vector is roughly twice as big, which isn't a critical issue because it then goes through a linear layer. And the relative attention on o2 versus o1 is exp(0.1) ≈ 1.105 in either case: re-normalizing after the softmax rescales all the kept weights by the same factor, so it doesn't change their ratios at all (the small discrepancy in the rounded numbers above is just a rounding artifact).
What this means is that even if the attention weights won't do a good job in learning to ignore zero outputs, the end effect on the output vector is still good enough for the decoder to take the right outputs into account, in this case to concentrate on o2.
Hope this convinces you that re-normalization isn't that critical, though it will probably speed up learning if actually applied.
The BERT implementation applies a padding mask when calculating the attention scores. It adds 0 to the non-padding attention scores and -10000 to the padding attention scores; the resulting factor of e^(-10000) makes the padded positions' softmax weights vanishingly small compared with the others.
import numpy as np
from scipy.special import softmax

attention_score = np.array([0.1, 0.2, 0.0, 0.0, 0.0])
mask = np.array([0, 0, -10000, -10000, -10000])  # large negative values at the padded positions
attention_score = attention_score + mask
weights = softmax(attention_score)               # the padded positions get essentially zero weight

Time complexity for finding largest eigenvalue

I'm trying to figure out the time complexity of calculating the eigenvector of the largest eigenvalue for a whole bunch of small matrices.
Each matrix is the adjacency matrix of the 1-step neighborhood of a node in a weighted, undirected graph. So all values are positive and the matrix is symmetric.
E.g.
0 2 1 1
2 0 1 0
1 1 0 0
1 0 0 0
I've found the power iteration method, which is supposed to be O(n^2) per iteration.
So does that mean the complexity of finding the largest eigenvector for the 1-step neighborhood of every node in a graph is O(n * p^2), where n is the number of nodes and p is the average degree of the graph (i.e. number of edges / number of nodes)?
Off the top of my head I would say your best bet is an iterative randomised algorithm known as the power iteration. It converges to the true largest eigenvalue geometrically, at a rate governed by the ratio of the second-largest eigenvalue to the largest. So if your two largest eigenvalues are equal in magnitude, do not use this method; otherwise it works quite nicely. You actually get both the largest eigenvalue and the corresponding eigenvector.
However, if your matrices are very small you might be just as well off performing a full eigendecomposition (as you would for PCA), because it will not be so expensive. I am not aware of any hard threshold for when you should switch between the two. It also depends on whether you are willing to accept small inaccuracies or need the exact value.
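A minimal numpy sketch of the power iteration on the example matrix from the question (the tolerance, iteration cap, and random start are my own choices):

import numpy as np

def power_iteration(A, tol=1e-10, max_iter=1000):
    # Largest eigenvalue and corresponding eigenvector of a symmetric matrix A,
    # assuming the largest-magnitude eigenvalue is unique.
    v = np.random.default_rng(0).standard_normal(A.shape[0])
    v /= np.linalg.norm(v)
    lam = 0.0
    for _ in range(max_iter):
        w = A @ v                    # one O(n^2) matrix-vector product per iteration
        lam_new = v @ w              # Rayleigh-quotient estimate of the eigenvalue
        v = w / np.linalg.norm(w)
        if abs(lam_new - lam) < tol:
            break
        lam = lam_new
    return lam_new, v

A = np.array([[0, 2, 1, 1],
              [2, 0, 1, 0],
              [1, 1, 0, 0],
              [1, 0, 0, 0]], dtype=float)

lam, v = power_iteration(A)
print(lam)                           # ~2.88, matching np.linalg.eigvalsh(A).max()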