Elementwise multiplication with numpy.multiply - numpy

Let's say I have an N × 1 × 1 array a and an N × M × M array b, both as NumPy arrays. I want to do the elementwise multiplication:
c[i,:,:] = a[i]*b[i,:,:]
without iterating over i. The function np.multiply(a, b) seems to do the job. However, I do not quite understand the inner workings of this function when a and b do not have the same size. I know that when they have the same size it just multiplies elementwise. I assume that when they are not the same size it does some broadcasting to change the dimensions of one of the arrays, but how?
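What happens here is NumPy's broadcasting: axes of length 1 are stretched to match the other operand, so the (N, 1, 1) array is virtually repeated across both trailing axes of the (N, M, M) array. A minimal sketch (shapes are illustrative) comparing the broadcast product with the explicit loop:
import numpy as np

N, M = 4, 3
a = np.random.rand(N, 1, 1)   # shape (N, 1, 1)
b = np.random.rand(N, M, M)   # shape (N, M, M)

# a * b (equivalently np.multiply(a, b)) broadcasts the two size-1 axes of a,
# scaling each (M, M) slice b[i] by the scalar a[i, 0, 0]
c = a * b

c_loop = np.empty_like(b)
for i in range(N):
    c_loop[i, :, :] = a[i, 0, 0] * b[i, :, :]

print(np.allclose(c, c_loop))  # True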

Related

Count non-zero vectors in a numpy ndarray

I have an ndarray with shape (x, y, d).
How can I get the number of d-dimensional non-zero vectors (out of the x*y total d-dimensional vectors)?
I tried np.count_nonzero, but I don't think it has an option to do what I described.
Actually, I think this works:
sum(np.any(x) for x in p)
(p is the ndarray)
You need to apply np.count_nonzero to the last axis (d). It will return an x * y array with the count of non-zero elements along the d dimension. If the count is 0 then that vector is a zero vector, so you just need to count the number of elements in the x * y array which are not equal to 0.
np.sum(np.apply_along_axis(np.count_nonzero, 2, your_arr) != 0)
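As a side note (not part of the original answer), newer versions of NumPy let np.count_nonzero take an axis argument, which avoids apply_along_axis entirely; a minimal sketch, assuming your_arr has shape (x, y, d):
import numpy as np

# per (x, y) position, count how many of the d components are non-zero,
# then count the positions where that count is greater than zero
nonzero_vectors = np.count_nonzero(np.count_nonzero(your_arr, axis=2))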

How to find most similar numerical arrays to one array, using Numpy/Scipy?

Let's say I have a list of 5 words:
[this, is, a, short, list]
Furthermore, I can classify some text by counting the occurrences of the words from the list above and representing these counts as a vector:
N = [1,0,2,5,10] # 1x this, 0x is, 2x a, 5x short, 10x list found in the given text
In the same way, I classify many other texts (count the 5 words per text, and represent them as counts - each row represents a different text which we will be comparing to N):
M = [[1,0,2,0,5],
     [0,0,0,0,0],
     [2,0,0,0,20],
     [4,0,8,20,40],
     ...]
Now, I want to find the top 1 (2, 3, etc.) rows from M that are most similar to N. Or, in simple words, the most similar texts to my initial text.
The challenge is that just checking the distances between N and each row of M is not enough, since for example row M4 [4,0,8,20,40] is very different from N by distance, but still proportional (by a factor of 4) and therefore very similar. For example, the text in row M4 could just be 4x as long as the text represented by N, so naturally all counts would be 4x as high.
What is the best approach to solve this problem (of finding the 1, 2, 3, etc. most similar texts from M to the text in N)?
Generally speaking, the standard technique for comparing bags of words (i.e., your arrays) is the cosine similarity measure. This maps your bag of n (here 5) words to an n-dimensional space, where each array is a point (which is essentially also a vector). The most similar vectors (/points) are the ones with the smallest angle to your text N in that space (this automatically takes care of proportional ones, as they are close in angle). Here is code for it (assuming M and N are NumPy arrays with the shapes introduced in the question):
import numpy as np
# cosine similarity of each row of M against N (note the per-row norms)
cos_sim = np.dot(M, N) / (np.linalg.norm(M, axis=1) * np.linalg.norm(N))
most_similar = M[np.nanargmax(cos_sim)]  # nanargmax skips the all-zero row (0/0 -> nan)
which gives output [ 4 0 8 20 40] for your inputs.
You can normalise your row counts to remove the length effect you describe. Row normalisation of M can be done as M / M.sum(axis=1)[:, np.newaxis]. The residual can then be calculated as the sum of the squared differences between N and each row of M. The row with the minimum difference (ignoring NaN or inf values obtained when a row sum is 0) is then the most similar.
Here is an example:
import numpy as np
N = np.array([1,0,2,5,10])
M = np.array([[1,0,2,0,5],
              [0,0,0,0,0],
              [2,0,0,0,20],
              [4,0,8,20,40]])
# sqrt of sum of normalised square differences
similarity = np.sqrt(np.sum((M / M.sum(axis=1)[:, np.newaxis] - N / np.sum(N))**2, axis=1))
# replace any NaN values (obtained by dividing by 0) with a value larger than an existing element
similarity[np.isnan(similarity)] = similarity[0]+1
result = M[similarity.argmin()]
result
>>> array([ 4, 0, 8, 20, 40])
You could then use np.argsort(similarity)[:n] to get the n most similar rows.
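For completeness (not from the original answer), a small sketch of that last step, reusing the similarity array computed above with an illustrative n = 2:
n = 2
top_n = M[np.argsort(similarity)[:n]]
# with the example arrays above this picks [4, 0, 8, 20, 40] and then [1, 0, 2, 0, 5]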

Fast algorithm for computing cofactor matrix

I wonder if there is a fast algorithm, say O(n^3), for computing the cofactor matrix (or adjugate matrix) of an N*N square matrix. And yes, one could first compute its determinant and inverse separately and then multiply them together. But what if the square matrix is non-invertible?
I am curious about the accepted answer here: Speed up python code for computing matrix cofactors
What is meant by "This probably means that also for non-invertible matrixes, there is some clever way to calculate the cofactor (i.e., not use the mathematical formula that you use above, but some other equivalent definition)."?
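For reference, the determinant-times-inverse route mentioned above is a one-liner in NumPy; this is only a minimal sketch, and it breaks down exactly in the singular case being asked about:
import numpy as np

def cofactor_via_inverse(M):
    # Cof(M) = det(M) * inv(M).T  -- O(n^3), but fails when M is singular
    return np.linalg.det(M) * np.linalg.inv(M).T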
Factorize M = L x D x U, where L is lower triangular with ones on the main diagonal, U is upper triangular with ones on the main diagonal, and D is diagonal.
You can use back-substitution as with Cholesky factorization, which is similar. Then,
M^{ -1 } = U^{ -1 } x D^{ -1 } x L^{ -1 }
and the transpose of the cofactor matrix is:
Cof( M )^T = Det( U ) x Det( D ) x Det( L ) x M^{ -1 }.
If M is singular or nearly so, one or more elements of D will be zero or nearly zero. Replace those elements with zero in the matrix product and with 1 in the determinant, and use the above equation for the transposed cofactor matrix.
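As an aside (not from the original answer), a slow, definition-based reference implementation, with each cofactor computed as a signed minor (roughly O(n^5) using NumPy's determinant), is handy for validating any faster scheme, including on singular matrices:
import numpy as np

def cofactor_reference(M):
    # Cof(M)[i, j] = (-1)**(i+j) * det(M with row i and column j removed)
    n = M.shape[0]
    C = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(M, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C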

Stable sampling of large Gaussian distributions

I'm trying to sample a Gaussian distribution with covariance matrix P that is N by N, with N very large (around 4000).
Usually one would proceed like so:
Compute the Cholesky decomposition of P: L, such that L * L.T = P
Sample a standard Gaussian distribution: X ~ N(0, I_N), where I_N is the identity and N = 4000
Obtain the desired sample Y from Y = L * X
The snag here is in the computation of L. The algorithm does not seem to be stable for such a large matrix, as the computed factor does not satisfy L * L.T = P.
I've tried to normalize P before computing its Cholesky decomposition (dividing it by its largest value), to no avail. I'm using the C++ library Eigen, and I've noticed this problem with numpy as well.
Any advice?
Cholesky decomposition should be quite stable if the input matrix is actually positive definite. It can have issues if the matrix is (nearly) semi-definite or indefinite.
In that case you can use the LDLT decomposition instead. For an input A it computes a permutation P, a unit-diagonal triangular L and a diagonal D, such that
A = P.T*L*D*L.T*P
Then instead of multiplying Y = L * X you of course need Y = L * sqrt(D) * X, where sqrt(D) is an element-wise sqrt (I don't know the python syntax for that).
Note that you can ignore the permutation, since a permutation of a vector of independent, identically distributed random numbers is still a vector of i.i.d. numbers.
If that still does not work, try using the SelfAdjointEigenSolver-decomposition.
This computes a diagonal matrix of eigenvalues D and a unitary matrix V of eigenvectors, such that
A = V * D * V^{-1}
And you can do essentially the same as above. (Note that for unitary matrices, V^{-1} is just the adjoint of V, i.e., V^{-1} = V^T in the real-valued case.)
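Since the question mentions seeing the same problem with numpy, here is a minimal sketch of the eigendecomposition route in NumPy (the function name is illustrative; tiny negative eigenvalues from round-off are clipped to zero, which is one common way of treating a nearly semi-definite P):
import numpy as np

def sample_gaussian(P, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    # symmetric eigendecomposition: P = V * diag(w) * V.T
    w, V = np.linalg.eigh(P)
    w = np.clip(w, 0.0, None)      # drop small negative eigenvalues caused by round-off
    X = rng.standard_normal(P.shape[0])
    return V @ (np.sqrt(w) * X)    # Y = V * sqrt(D) * X has covariance (approximately) P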

Fast way to set diagonals of an (M x N x N) matrix? Einsum / n-dimensional fill_diagonal?

I'm trying to write fast, optimized code based on matrices, and have recently discovered einsum as a tool for achieving significant speed-up.
Is it possible to use this to set the diagonals of a multidimensional array efficiently, or can it only return data?
In my problem, I'm trying to set the diagonals for an array of square matrices (shape: M x N x N) by summing the columns in each square (N x N) matrix.
My current (slow, loop-based) solution is:
# Build dummy array
dimx = 2 # Dimension x (likely to be < 100)
dimy = 3 # Dimension y (likely to be between 2 and 10)
M = np.random.randint(low=1, high=9, size=[dimx, dimy, dimy])
# Blank the diagonals so we can see the intended effect
np.fill_diagonal(M[0], 0)
np.fill_diagonal(M[1], 0)
# Compute diagonals based on summing columns
diags = np.einsum('ijk->ik', M)
# Set the diagonal for each matrix
# THIS IS SLOW. CAN IT BE IMPROVED?
for i in range(len(M)):
    np.fill_diagonal(M[i], diags[i])
# Print result
M
Can this be improved at all, please? It seems np.fill_diagonal doesn't accept non-square matrices (hence forcing my loop-based solution). Perhaps einsum can help here too?
One approach would be to reshape to 2D and set the columns at steps of ncols+1 to the diagonal values. Reshaping creates a view and as such allows us to directly access those diagonal positions. Thus, the implementation would be -
s0,s1,s2 = M.shape
M.reshape(s0,-1)[:,::s2+1] = diags
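A quick check (not from the original answer) that this reproduces the loop from the question, using the same M and diags:
before = M.copy()
for i in range(len(before)):
    np.fill_diagonal(before[i], diags[i])   # the loop-based result from the question

s0, s1, s2 = M.shape
M.reshape(s0, -1)[:, ::s2 + 1] = diags      # the reshape/stride trick, in place
print(np.array_equal(M, before))            # True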
If you do np.source(np.fill_diagonal) you'll see that in the 2D case it uses a 'strided' approach:
if a.ndim == 2:
    step = a.shape[1] + 1
    end = a.shape[1] * a.shape[1]
    a.flat[:end:step] = val
@Divakar's solution applies this to your 3D case by 'flattening' on 2 dimensions.
You could sum the columns with M.sum(axis=1). Though I vaguely recall some timings that found that einsum was actually a bit faster. sum is a little more conventional.
Someone has asked for the ability to expand dimensions in einsum, but I don't think that will happen.
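For example (not from the original answers), the two ways of computing the column sums agree, so either can feed the diagonal-setting trick above:
diags_einsum = np.einsum('ijk->ik', M)
diags_sum = M.sum(axis=1)   # sums over axis 1, i.e. down the columns of each (N x N) block
print(np.array_equal(diags_einsum, diags_sum))  # True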