If I have a 2-D matrix A, I can extract the indices of the elements above the diagonal via np.triu_indices_from(A, k=1) (np.triu_indices takes the matrix size n, not the matrix itself).
How can I do that for a 3-dimensional array?
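One approach, assuming the 3-D array is a stack of square matrices of shape (k, n, n): compute the 2-D indices once and apply them to every slice at the same time via fancy indexing on the last two axes. A sketch:

```python
import numpy as np

A = np.arange(3 * 4 * 4).reshape(3, 4, 4)   # a stack of three 4x4 matrices

# Indices above the diagonal of a single 4x4 slice:
i, j = np.triu_indices(4, k=1)

# Fancy indexing on the last two axes pulls those elements out of every slice:
upper = A[:, i, j]          # shape (3, 6)
```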
for instance, assume
a=[[1,2],[3,4]]
this is read by default as a 2x2 matrix.
but is there anything I can do to make Python read it as a row whose elements are the vectors [1,2] and [3,4]?
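One way to get that behaviour, sketched below, is a 1-D object-dtype array whose elements are the vectors themselves. Note that np.array([[1,2],[3,4]], dtype=object) would still produce a 2x2 array, so the slots have to be filled explicitly:

```python
import numpy as np

row = np.empty(2, dtype=object)   # a 1-D array with two generic "slots"
row[0] = np.array([1, 2])
row[1] = np.array([3, 4])

print(row.shape)   # (2,) -- one row of two vector elements
```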
I have a matrix with dimension (22,2) and I want to decompose it using SVD. SVD in numpy doesn't seem to return the correct dimensions, though. I'd expect dimensions like (22,22), (22), (22,2)?
The returned dimensions are correct. With numpy's default full_matrices=True, the uu and vvh matrices are always square, while, depending on the software, s can be an array with just the singular values (as in numpy) or a diagonal matrix with the dimensions of the original matrix (as in MATLAB, for instance).
The dimension of the uu matrix is the number of rows of the original matrix, while the dimension of the vvh matrix is the number of columns of the original matrix. This can never change, or you would be computing something else instead of the SVD.
To reconstruct the original matrix from the decomposition in numpy we need to make s into a matrix with the proper dimension. For square matrices it's easy, just np.diag(s) is enough. Since your original matrix is not square and it has more rows than columns, then we can use something like
S = np.vstack([np.diag(s), np.zeros((20, 2))])
Then we get an S matrix which is the diagonal matrix of singular values concatenated with a zero block. In the end, uu is 22x22, S is 22x2 and vvh is 2x2. Multiplying uu @ S @ vvh will give the original matrix back.
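Putting the steps above together (using a stand-in matrix of the same (22, 2) shape):

```python
import numpy as np

A = np.arange(44.0).reshape(22, 2)     # stand-in for the original (22, 2) matrix
u, s, vh = np.linalg.svd(A)            # u: (22, 22), s: (2,), vh: (2, 2)

# Promote s to a (22, 2) matrix so the shapes line up for multiplication:
S = np.vstack([np.diag(s), np.zeros((20, 2))])

A_rec = u @ S @ vh                     # reconstructs A
print(np.allclose(A, A_rec))           # True
```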
I have a 3D numpy array which I am using to represent a tuple of (square) matrices, and I'd like to perform a matrix operation on each of those matrices, corresponding to the first two dimensions of the array. For instance, if my list of matrices is [A,B,C] I would like to compute [A'A,B'B,C'C] where ' denotes the conjugate transpose.
The following code kinda sorta does what I'm looking for:
foo=np.array([[[1,1],[0,1]],[[0,1],[0,0]],[[3,0],[0,-2]]])
[np.matrix(j).H*np.matrix(j) for j in foo]
But I'd like to do this using vectorized operations instead of list comprehension.
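A vectorized sketch using numpy's batched matrix multiplication: the @ operator treats the leading axis as a batch dimension and multiplies the last two axes, so the conjugate transpose of each slice is just a per-slice axis swap plus np.conj:

```python
import numpy as np

foo = np.array([[[1, 1], [0, 1]],
                [[0, 1], [0, 0]],
                [[3, 0], [0, -2]]])

# Swap the last two axes of every slice, conjugate, then batch-multiply:
result = np.conj(foo.transpose(0, 2, 1)) @ foo    # shape (3, 2, 2)
```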
I have a numpy array,say A1, of shape (1,1), and another, say A2, of shape (1,).
When I do A1-A2, I get another array of shape (1,1).
Shouldn't the arrays be of the same dimensions for subtraction/sum operations?
If you take a look at the documentation, you can see that numpy uses broadcasting (conceptually, repeating the smaller array along the missing dimensions until it matches the shape of the other) to give the two arrays compatible shapes, so that an elementwise operation is possible.
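Applied to the shapes from the question: (1,) is first padded with a leading 1 to (1, 1), so the two arrays broadcast to a common (1, 1) result:

```python
import numpy as np

A1 = np.zeros((1, 1))
A2 = np.zeros((1,))

# (1,) is treated as (1, 1) under broadcasting, so the result is (1, 1):
print((A1 - A2).shape)   # (1, 1)
```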
I have to operate on matrices using an equivalent of scipy's sparse.coo_matrix and sparse.csr_matrix. However, I cannot use scipy (it is incompatible with the image-analysis software I want to use this in). I can, however, use numpy.
Is there an easy way to accomplish what scipy.sparse.coo_matrix and scipy.sparse.csr_matrix do, with numpy only?
Thanks!
The attributes of a sparse.coo_matrix are:
dtype : dtype
    Data type of the matrix
shape : 2-tuple
    Shape of the matrix
ndim : int
    Number of dimensions (this is always 2)
nnz
    Number of nonzero elements
data
    COO format data array of the matrix
row
    COO format row index array of the matrix
col
    COO format column index array of the matrix
The data, row, col arrays are essentially the data, i, j parameters when the matrix is defined with coo_matrix((data, (i, j)), [shape=(M, N)]). shape also comes from the definition, and dtype from the data array. nnz, as a first approximation, is the length of data (not accounting for zeros and duplicate coordinates).
So it is easy to construct a coo-like object. Similarly, a lil matrix has 2 lists of lists, and a dok matrix is a dictionary (see its .__class__.__mro__).
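A minimal sketch of such a coo-like container (the class name and toarray method are illustrative, not scipy's API; duplicates are summed on densification, as scipy does):

```python
import numpy as np

class CooLike:
    """Minimal stand-in for scipy.sparse.coo_matrix (illustrative only)."""
    def __init__(self, data, row, col, shape):
        self.data = np.asarray(data)
        self.row = np.asarray(row)
        self.col = np.asarray(col)
        self.shape = shape
        self.dtype = self.data.dtype
        self.ndim = 2
        self.nnz = len(self.data)   # first approximation: zeros/duplicates kept

    def toarray(self):
        out = np.zeros(self.shape, dtype=self.dtype)
        # np.add.at sums duplicate (row, col) pairs, like scipy does
        np.add.at(out, (self.row, self.col), self.data)
        return out

m = CooLike([4, 5, 7], [0, 3, 1], [0, 3, 1], shape=(4, 4))
print(m.toarray())
```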
The data structure of a csr matrix is a bit more obscure:
data
    CSR format data array of the matrix
indices
    CSR format index array of the matrix
indptr
    CSR format index pointer array of the matrix
It still has 3 arrays. And they can be derived from the coo arrays. But doing so with pure Python code won't be nearly as fast as the compiled scipy functions.
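For example, a pure-numpy sketch of that derivation (coo_to_csr is a hypothetical helper; unlike scipy's compiled conversion, it leaves duplicate coordinates in place and is much slower):

```python
import numpy as np

def coo_to_csr(data, row, col, n_rows):
    """Derive CSR's (data, indices, indptr) from COO's (data, row, col)."""
    data, row, col = np.asarray(data), np.asarray(row), np.asarray(col)
    order = np.argsort(row, kind="stable")      # CSR stores entries row by row
    data, col, row = data[order], col[order], row[order]
    # indptr[i]..indptr[i+1] delimits the entries of row i; indptr[-1] == nnz
    counts = np.bincount(row, minlength=n_rows)
    indptr = np.concatenate([[0], np.cumsum(counts)])
    return data, col, indptr
```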
But these classes have a lot of functionality that would require a lot of work to duplicate. Some is pure Python, but critical pieces are compiled for speed. Particularly important are the mathematical operations that the csr_matrix implements, such as matrix multiplication.
Replicating the data structures for temporary storage is one thing; replicating the functionality is quite another.