Self-Attention Explainability of the Output Score Matrix - tensorflow

I am learning about attention models and following along with Jay Alammar's amazing blog tutorial, The Illustrated Transformer. He gives a great walkthrough of how the attention scores are calculated, but I get a bit lost at a certain point: I don't see how the attention score matrix Z he explains is used to interpret the strength of associations between different words within an input sequence.
He mentions that given some input matrix X of shape N x D, where N is the number of elements in the input sequence and D is the input dimensionality, we multiply X by three separate weight matrices of shape D x d, where d is a lower dimensionality that represents the projected space of the query, key, and value matrices.
The query and key matrices are multiplied together (QK^T), divided by a scaling factor (usually the square root of the projected dimensionality d), and then run through a softmax function. This produces a weight matrix of size N x N, which is multiplied by the value matrix to get an output Z of shape N x d, of which Jay says:
That concludes the self-attention calculation. The resulting vector is
one we can send along to the feed-forward neural network.
[Screenshot from his blog showing this calculation: Z = softmax(QK^T / sqrt(d)) V]
However, this is where I'm confused. Z is N x d, and I don't particularly understand what I'm supposed to do with this matrix from an interpretability sense. As far as I understand, for a particular sequence element (i.e. the word cats in the sequence I love pets, especially cats), self-attention is supposed to score other parts of the sequence highly when they are relevant or strongly associated with that word's embedding. I'd therefore expect Z to be N x N, so that I could select Z[i,j] and say that for the i-th word in the sequence, the j-th word relates or associates with it this or that much.
In fact, wouldn't it make much more sense to use only the softmax output of the weights (without multiplying them by the value matrix), since it is already N x N? In essence, how is Jay determining the strength of these associations with the word it in his example sequence?
This is an N by 1 relationship he is showing: there are N values that correspond to the strength of association with the word it. Those values are exactly one row of the N x N softmax weight matrix you describe; Z itself is the representation passed on to the next layer, not the thing you inspect for interpretability.
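A minimal NumPy sketch of the computation may make the shapes concrete (the dimensions, random weights, and the position index below are all made up for illustration):

import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

N, D, d = 5, 16, 8                       # sequence length, input dim, projected dim
rng = np.random.default_rng(0)

X = rng.normal(size=(N, D))              # input embeddings (one row per word)
Wq, Wk, Wv = (rng.normal(size=(D, d)) for _ in range(3))

Q, K, V = X @ Wq, X @ Wk, X @ Wv         # query, key, value: each N x d
weights = softmax(Q @ K.T / np.sqrt(d))  # N x N attention weights
Z = weights @ V                          # N x d output sent to the next layer

# Interpretability lives in `weights`, not in Z: row i holds, for word i,
# how strongly it attends to every word j; each row sums to 1.
i = 4                                    # e.g. the position of "cats"
print(weights[i])                        # N association scores for word i

So yes: for reading off associations you look at the N x N softmax matrix; Z is just the N x d mixture of value vectors built from those weights.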

Related

Getting 2 values of focal length when finding Intrinsic camera matrix (F not Fx,Fy)?

The following image is the example that was given in my computer vision class. I can't understand why we are getting 2 unique values of f. I can understand mx*f and my*f being different, but shouldn't the focal length f itself be the same?
I believe you have an Fx and an Fy. This is so that the matrix transform can scale f independently in the two directions x and y. IIRC this is why you get 2 values of f.
If a single f is really wanted, it should be modeled that way in the camera model used for calibration:
e.g. give mx and my as constants to the camera model, and estimate only f.
However, the calibration process that obtained that K was probably not done that way, but treated the two elements (K(0,0) and K(1,1)) as independent parameters.
In other words, mx and my were effectively estimated as well, in the sense of dealing with the aspect ratio.
The estimation result is not the same as the values of mx and my calculated from the sensor specifications.
This is why you got 2 values.
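To make the relationship concrete, here is a small NumPy sketch of how a single physical f becomes two entries of K (all numbers are made up for illustration):

import numpy as np

f = 4.0e-3                       # focal length in meters (hypothetical)
mx, my = 250_000.0, 240_000.0    # pixels per meter along x and y, often unequal
cx, cy = 320.0, 240.0            # principal point in pixels

fx = f * mx                      # what calibration reports as K(0,0)
fy = f * my                      # what calibration reports as K(1,1)

K = np.array([
    [fx, 0.0, cx],
    [0.0, fy, cy],
    [0.0, 0.0, 1.0],
])
# fx != fy even though there is a single physical f, because mx != my;
# a calibration that estimates K(0,0) and K(1,1) independently bakes the
# pixel-density difference (aspect ratio) into the two "focal lengths".
print(K)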

Does variational autoencoder make distribution based on only latent representation?

If the latent representation of my variational autoencoder (VAE) has dimensionality r, and my dataset is x, is the VAE's latent representation normally distributed based on r or on x?
If r = 10, does that mean it has 10 means and variances (a multivariate Gaussian), with the distribution coming from the whole dataset x?
Or does r = 10 construct one distribution, which every sample tries to follow?
I'm confused about which one is correct.
A VAE constructs a mapping e(x) -> z (encoder) and d(z) -> x (decoder). This means that every element of your input space x will be mapped through the encoder e(x) onto a single, r-dimensional Gaussian. It is not a "mixture"; it is just a single Gaussian with a diagonal covariance matrix.
I'll add my 2 cents to lejlot's answer.
Your encoder in a VAE maps each sample to a distribution, which in your case has 10 dimensions. That distribution is used to say: "OK, my best estimate of this property of this sample is mu, but I'm not too sure, so consider that it might vary with variance sigma."
Therefore, you have a distribution for each sample.
However, in order to make sampling easier in a VAE, we ask the VAE to keep these distributions as close as possible to a known one, namely the standard normal distribution. That way we know "where the distributions are located"; if you check the latent space of a plain AE instead, you will see groups far from each other.
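A minimal TensorFlow sketch of the "one Gaussian per sample" idea (the input size, batch size and layer width are made up):

import tensorflow as tf

latent_dim = 10                                 # r in the question

encoder_hidden = tf.keras.layers.Dense(256, activation="relu")
to_mu = tf.keras.layers.Dense(latent_dim)       # per-sample means
to_log_var = tf.keras.layers.Dense(latent_dim)  # per-sample log-variances

x = tf.random.normal([32, 784])                 # a batch of 32 fake samples

h = encoder_hidden(x)
mu = to_mu(h)                                   # shape (32, 10): one mean vector per sample
log_var = to_log_var(h)                         # shape (32, 10): one variance vector per sample

# Reparameterization trick: draw z from each sample's own diagonal Gaussian.
eps = tf.random.normal(tf.shape(mu))
z = mu + tf.exp(0.5 * log_var) * eps

# KL term of the loss: pulls every per-sample Gaussian toward N(0, I).
kl = -0.5 * tf.reduce_sum(1.0 + log_var - tf.square(mu) - tf.exp(log_var), axis=-1)

So there is one 10-dimensional Gaussian per sample, and the KL term is what keeps all of those per-sample Gaussians huddled around the standard normal.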

How to find the input that maximize the Neural Network output in Tensorflow

I'm using Tensorflow (2.4) and Keras to build my neural network model. It takes two tensors as inputs and gives a scalar output. The network is already trained and, from now on, has fixed weights. Is it possible, given one of the two inputs, to find the value of the other input that maximises the output value?
Thank you in advance
In theory, yes.
Let's call your network model f. It takes two inputs x and y and outputs f(x, y). Then, assuming x and f are fixed, you can find the value y* that maximizes f(x, y) as follows:
Calculate the gradient of f with respect to y. Then there are two possibilities (a numerical gradient-ascent sketch for the practical case follows below).
There exist stationary points. Just set df/dy = 0 and solve for y. This gives the candidates y* at which there is either a maximum or a minimum. Compare the values f(x, y*) to check whether a given y* gives a maximum or a minimum.
There are no stationary points (or there is no maximum). Here you need to study where f increases or decreases as y varies. To do this, look for df/dy > 0 (increasing) and df/dy < 0 (decreasing). You may find that the function keeps increasing toward some asymptote; in that case simply take y* = a, where a is the closest value to that asymptote you can represent (given your data type's precision).
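In practice, for a trained Keras model you would search for y* numerically with gradient ascent on the free input. A sketch with a stand-in network (you would load your own frozen model instead; all shapes here are hypothetical):

import tensorflow as tf

# Stand-in for the trained two-input -> scalar network.
in1 = tf.keras.Input(shape=(3,))
in2 = tf.keras.Input(shape=(4,))
h = tf.keras.layers.Concatenate()([in1, in2])
h = tf.keras.layers.Dense(16, activation="tanh")(h)
out = tf.keras.layers.Dense(1)(h)
model = tf.keras.Model([in1, in2], out)
model.trainable = False                         # weights are fixed

x_fixed = tf.constant([[0.5, -1.0, 2.0]])       # the known input
y = tf.Variable(tf.random.normal([1, 4]))       # the input to optimize

opt = tf.keras.optimizers.Adam(learning_rate=0.05)
for step in range(500):
    with tf.GradientTape() as tape:
        loss = -model([x_fixed, y])             # minimize -f == maximize f
    grads = tape.gradient(loss, [y])
    opt.apply_gradients(zip(grads, [y]))

print("candidate y*:", y.numpy())
print("f(x, y*):", model([x_fixed, y]).numpy())

This finds a local maximum at best, and if f is unbounded in y (the second case above) the iterates will simply diverge, so you may need to constrain or clip y.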

Understanding multidimensional full covariance of normal multivariate distribution in TensorFlow

Suppose I have, say, 3 normally distributed random vectors w, v and x, generally of different lengths: w has length 2, v has length 3 and x has length 4.
How should I define the full covariance matrix sigma of these vectors for tf.contrib.distributions.MultivariateNormalFullCovariance(mean, sigma)?
I think of the full covariance in this case as a [(2 + 3 + 4) x (2 + 3 + 4)] square matrix (a rank-2 tensor), where the diagonal elements are the variances and the off-diagonal elements are the covariances between components, both within and across the vectors. How should I adjust my thinking for multidimensional covariance? What is it?
Or should I build the full covariance matrix from pieces (e.g. from the individual covariance matrices; for instance, assuming independence of these vectors, I would build a partitioned block-diagonal matrix) and then cut (split) the sampling results into the particular vectors I want to get? (I did that in R.) Or is there an easier way?
What I want is full control over all random vectors including their covariances and cross-covariances.
There is no special consideration about dimensionality just because your random variables are distributed across multiple vectors. From a probabilistic point of view, three normally distributed vectors of sizes 2, 3 and 4, a normally distributed vector of size 9, and a normally distributed matrix of size 3x3 are all the same thing: a 9-dimensional normal distribution. Of course, you could instead have three separate distributions of 2, 3 and 4 dimensions, but that's a different thing: it doesn't allow you to model correlations between variables of different vectors (just like having a one-dimensional normal distribution per number does not allow you to model any correlation at all). This may or may not be enough for your use case.
If you want to use a single distribution, you just need to establish a bijection between the domain of your problem (e.g. tuples of three vectors of sizes 2, 3 and 4) and the domain of the distribution (e.g. 9-dimensional vectors). In this case it is pretty obvious: flatten (if necessary) and concatenate the vectors to obtain a distribution sample, and split a sample into three parts of sizes 2, 3 and 4 to recover the vectors.
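A sketch of that recipe using tensorflow_probability (the successor to tf.contrib.distributions, which was removed in TF 2.x); the block values are made up, and the zero off-diagonal blocks are where you would put cross-covariances between w, v and x:

import numpy as np
import tensorflow as tf
import tensorflow_probability as tfp

mean = tf.zeros(9)                               # 2 + 3 + 4 concatenated

# Block-diagonal example: w, v, x mutually independent. Fill in the
# off-diagonal blocks to model cross-covariances between the vectors.
sigma = np.zeros((9, 9))
sigma[0:2, 0:2] = 1.0 * np.eye(2)                # cov(w)
sigma[2:5, 2:5] = 2.0 * np.eye(3)                # cov(v)
sigma[5:9, 5:9] = 0.5 * np.eye(4)                # cov(x)

dist = tfp.distributions.MultivariateNormalFullCovariance(
    loc=mean, covariance_matrix=tf.constant(sigma, dtype=tf.float32))

sample = dist.sample(10)                         # shape (10, 9)
w, v, x = tf.split(sample, [2, 3, 4], axis=-1)   # back to the three vectors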

Faster way to perform point-wise interpolation of numpy array?

I have a 3D datacube, with two spatial dimensions and the third being a multi-band spectrum at each point of the 2D image.
H[x, y, bands]
Given a wavelength (or band number), I would like to extract the 2D image corresponding to that wavelength. This would be simply an array slice like H[:,:,bnd]. Similarly, given a spatial location (i,j) the spectrum at that location is H[i,j].
I would also like to 'smooth' the image spectrally, to counter low-light noise in the spectra. That is, for band bnd, I choose a window of size wind and fit an n-degree polynomial to the spectrum in that window. With polyfit and polyval I can find the fitted spectral value at that point for band bnd.
Now, if I want the whole image of bnd from the fitted value, then I have to perform this windowed-fitting at each (i,j) of the image. I also want the 2nd-derivative image of bnd, that is, the value of the 2nd-derivative of the fitted spectrum at each point.
Running over the points, I could polyfit-polyval-polyder each of the x*y spectra. While this works, it is a point-wise operation. Is there some Pythonic, NumPy-vectorized way to do this faster?
If you do least-squares polynomial fitting to points (x+dx[i],y[i]) for a fixed set of dx and then evaluate the resulting polynomial at x, the result is a (fixed) linear combination of the y[i]. The same is true for the derivatives of the polynomial. So you just need a linear combination of the slices. Look up "Savitzky-Golay filters".
EDITED to add a brief example of how S-G filters work. I haven't checked any of the details and you should therefore not rely on it to be correct.
So, suppose you take a filter of width 5 and degree 2. That is, for each band (ignoring, for the moment, the ones at the start and end) we'll take that band and the two on either side, fit a quadratic curve, and look at its value in the middle.
So, if f(x) ~= ax^2+bx+c and f(-2),f(-1),f(0),f(1),f(2) = p,q,r,s,t then we want 4a-2b+c ~= p, a-b+c ~= q, etc. Least-squares fitting means minimizing (4a-2b+c-p)^2 + (a-b+c-q)^2 + (c-r)^2 + (a+b+c-s)^2 + (4a+2b+c-t)^2, which means (taking partial derivatives w.r.t. a,b,c):
4(4a-2b+c-p)+(a-b+c-q)+(a+b+c-s)+4(4a+2b+c-t)=0
-2(4a-2b+c-p)-(a-b+c-q)+(a+b+c-s)+2(4a+2b+c-t)=0
(4a-2b+c-p)+(a-b+c-q)+(c-r)+(a+b+c-s)+(4a+2b+c-t)=0
or, simplifying,
34a+10c = 4p+q+s+4t
10b = -2p-q+s+2t
10a+5c = p+q+r+s+t
so a = (2p-q-2r-s+2t)/14, b = (2(t-p)+(s-q))/10, and c = (p+q+r+s+t)/5 - 2a = (-3p+12q+17r+12s-3t)/35.
And of course c is the value of the fitted polynomial at 0, and is therefore the smoothed value we want. So for each spatial position, we have a vector of input spectral data, from which we compute the smoothed spectral data by multiplying by a matrix whose rows (apart from the first and last couple) look like [0 ... 0 -3/35 12/35 17/35 12/35 -3/35 0 ... 0], with the central 17/35 on the main diagonal of the matrix.
So you could do a matrix multiplication for each spatial position; but since it's the same matrix everywhere, you can do it with a single call to tensordot. So if S contains the transpose of the matrix I just described and A is your 3-dimensional data cube, your spectrally-smoothed data cube would be numpy.tensordot(A, S, axes=(2, 0)), which contracts the band axis of A against the first axis of S.
This would be a good point at which to repeat my warning: I haven't checked any of the details in the few paragraphs above, which are just meant to give an indication of how it all works and why you can do the whole thing in a single linear-algebra operation.
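For what it's worth, SciPy ships this exact construction as a Savitzky-Golay filter, so you can avoid building the matrix by hand. A sketch (the cube's shape is made up); savgol_filter applies the fixed linear combination along the band axis, and deriv=2 returns the second derivative of the fitted polynomial directly:

import numpy as np
from scipy.signal import savgol_coeffs, savgol_filter

H = np.random.rand(50, 60, 100)   # fake cube: 50 x 60 image, 100-band spectra

# Smoothed cube: window 5, quadratic fit, applied along the band axis.
smooth = savgol_filter(H, window_length=5, polyorder=2, axis=2)

# Second-derivative cube of the fitted spectrum at each point.
second = savgol_filter(H, window_length=5, polyorder=2, deriv=2, axis=2)

# The filter really is the fixed linear combination derived above:
print(savgol_coeffs(5, 2))        # [-3/35, 12/35, 17/35, 12/35, -3/35]

bnd = 42
smoothed_image = smooth[:, :, bnd]  # the smoothed 2D image at one band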