For instance, assume
a=[[1,2],[3,4]]
This is read by default as a 2x2 matrix. But is there anything I can do to make Python read it as a row whose elements are the vectors [1,2] and [3,4]?
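For reference, a minimal sketch of the default behaviour next to one possible workaround (an object-dtype array whose elements are the row vectors); this is only an illustration, not the only option:

import numpy as np

a = [[1,2],[3,4]]
np.array(a).shape          # (2, 2) -- the default 2x2 interpretation

# Possible workaround: an object array holding the two row vectors
row = np.empty(2, dtype=object)
row[0] = np.array([1, 2])
row[1] = np.array([3, 4])
row.shape                  # (2,)   -- a 1-D "row" whose elements are vectors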
I have a matrix with dimension (22,2) and I want to decompose it using SVD. SVD in numpy doesn't return the correct dimensions though. I'd expect dimensions like (22,22), (22), (22,2)?
The returned dimensions are correct. The uu and vvh matrices are always square, while, depending on the software, s can be either an array holding just the singular values (as in numpy) or a diagonal matrix with the dimensions of the original matrix (as in MATLAB, for instance).
The dimension of the uu matrix is the number of rows of the original matrix, while the dimension of the vvh matrix is the number of columns of the original matrix. This can never change, or you would be computing something other than the SVD.
To reconstruct the original matrix from the decomposition in numpy, we need to turn s into a matrix with the proper dimensions. For square matrices it's easy: np.diag(s) is enough. Since your original matrix is not square and has more rows than columns, we can use something like
S = np.vstack([np.diag(s), np.zeros((20, 2))])
Then we get an S matrix consisting of the diagonal matrix of singular values stacked on top of a zero block. In the end, uu is 22x22, S is 22x2 and vvh is 2x2. Multiplying uu @ S @ vvh gives the original matrix back.
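As a concrete illustration (using a random stand-in for the original (22,2) matrix):

import numpy as np

A = np.random.rand(22, 2)            # stand-in for the original (22, 2) matrix
uu, s, vvh = np.linalg.svd(A)        # uu: (22, 22), s: (2,), vvh: (2, 2)

# Pad the singular values into a (22, 2) "diagonal" matrix
S = np.vstack([np.diag(s), np.zeros((20, 2))])

np.allclose(uu @ S @ vvh, A)         # True: the decomposition reconstructs A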
I have a 3D numpy array which I am using to represent a tuple of (square) matrices, and I'd like to perform a matrix operation on each of those matrices, corresponding to the first two dimensions of the array. For instance, if my list of matrices is [A,B,C] I would like to compute [A'A,B'B,C'C] where ' denotes the conjugate transpose.
The following code kinda sorta does what I'm looking for:
foo=np.array([[[1,1],[0,1]],[[0,1],[0,0]],[[3,0],[0,-2]]])
[np.matrix(j).H*np.matrix(j) for j in foo]
But I'd like to do this using vectorized operations instead of list comprehension.
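For reference, one way this could be vectorized (a sketch, not necessarily the fastest option) is to conjugate, swap the last two axes, and let matmul broadcast over the stack axis:

import numpy as np

foo = np.array([[[1,1],[0,1]],[[0,1],[0,0]],[[3,0],[0,-2]]])

# Batched A^H @ A: matmul broadcasts over the leading (stack) dimension
result = np.matmul(foo.conj().transpose(0, 2, 1), foo)

# Equivalent einsum formulation:
# result = np.einsum('nji,njk->nik', foo.conj(), foo)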
I want to construct a weight matrix in which certain elements are zero and never change, while the other elements are variables. For example:
[[0,0,a,0],[0,0,b,0],[0,0,0,c],[0,0,0,d]]
This is a tf variable, and all zeros stay unchanged. Only a, b, c, d are tuned using gradient descent.
Does anyone know how to define such a matrix?
You should look into SparseTensor. It is highly optimised for operations on tensors that consist mostly of zeros.
So, in your case, to initialise SparseTensor:
a,b,c,d = 10,20,30,40
sparse = tf.SparseTensor([[0,2], [1,2], [2,3], [3,3]], [a,b,c,d], [4,4])
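A minimal sketch of how the four entries could be kept trainable while the zeros stay fixed (assuming TF 2.x; the input and loss here are just placeholders):

import tensorflow as tf

indices = [[0, 2], [1, 2], [2, 3], [3, 3]]      # positions of a, b, c, d
values = tf.Variable([10.0, 20.0, 30.0, 40.0])  # the only trainable parameters
x = tf.random.normal([4, 3])                    # placeholder input the weight multiplies

with tf.GradientTape() as tape:
    w = tf.SparseTensor(indices=indices, values=values, dense_shape=[4, 4])
    y = tf.sparse.sparse_dense_matmul(w, x)     # the zeros elsewhere never change
    loss = tf.reduce_sum(y ** 2)                # placeholder loss

grads = tape.gradient(loss, values)             # gradients flow only to a, b, c, d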
Is there a convenient way to use masked arrays with sparse matrices?
It seems that the mask does not work when creating a masked array from a scipy sparse matrix...
A typical application would be an adjacency matrix whose values could be {0,1,?}: {0,1} representing links in a network, and {?} an unknown/unseen value to predict.
I'm not surprised that trying to give a sparse matrix to the masked array machinery does not work. The few numpy functions that work with sparse matrices are the ones that delegate the task to the sparse code.
It might be possible to construct a coo format matrix whose data attribute is a masked array, but I doubt that carries far. Code that isn't masked-aware will generally ignore the mask.
A masked array is an ndarray subclass that maintains two attributes, the data and mask, both of which are arrays. Many masked methods work by filling the masked values with a suitable value (0 for sums, 1 for products), and performing regular array calculations.
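A small illustration of that fill-and-compute behaviour:

import numpy as np

m = np.ma.masked_array([1.0, 2.0, 3.0], mask=[False, True, False])
m.sum()        # 4.0 -- the masked entry is treated as 0 for the sum
m.prod()       # 3.0 -- and as 1 for the product
m.filled(0)    # array([1., 0., 3.])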
A sparse matrix is not an ndarray subclass. One format is actually a dictionary subclass. Most store their data in 3 arrays, the 2 coordinates and the data. Interactions with non-sparse arrays often involve todense() to turn the action into a regular numpy one.
There's no interoperability by design. If something does work it's probably because of some coincidental delegation of method.
For example
In [85]: A=sparse.coo_matrix(np.eye(3))
In [86]: M=np.ma.masked_array(np.eye(3))
In [87]: A+M
Out[87]:
masked_array(data =
[[ 2. 0. 0.]
[ 0. 2. 0.]
[ 0. 0. 2.]],
mask =
False,
fill_value = 1e+20)
In [88]: M+A
NotImplementedError: adding a nonzero scalar to a sparse matrix is not supported
I would have expected M+A to work, since I read it as adding a sparse matrix to a masked array. But sometimes x+y is actually implemented as y.__add__(x). A+np.eye(3) works in both orders.
I have to operate on matrices using an equivalent of scipy's sparse.coo_matrix and sparse.csr_matrix. However, I cannot use scipy (it is incompatible with the image analysis software I want to use this in). I can, however, use numpy.
Is there an easy way to accomplish what scipy.sparse.coo_matrix and scipy.sparse.csr_matrix do, with numpy only?
Thanks!
The attributes of a sparse.coo_matrix are:
dtype : dtype
Data type of the matrix
shape : 2-tuple
Shape of the matrix
ndim : int
Number of dimensions (this is always 2)
nnz
Number of nonzero elements
data
COO format data array of the matrix
row
COO format row index array of the matrix
col
COO format column index array of the matrix
The data, row, col arrays are essentially the data, i, j parameters when the matrix is defined with coo_matrix((data, (i, j)), [shape=(M, N)]). shape also comes from the definition, and dtype from the data array. nnz is, to a first approximation, the length of data (not accounting for explicit zeros and duplicate coordinates).
So it is easy to construct a coo-like object. Similarly, a lil matrix has 2 lists of lists, and a dok matrix is a dictionary (see its .__class__.__mro__).
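A minimal pure-numpy sketch of such a coo-like container (illustrative only, not API-compatible with scipy):

import numpy as np

class SimpleCOO:
    """A bare-bones COO-style holder built only on numpy."""
    def __init__(self, data, row, col, shape):
        self.data = np.asarray(data)
        self.row = np.asarray(row)
        self.col = np.asarray(col)
        self.shape = shape
        self.ndim = 2
        self.dtype = self.data.dtype
        self.nnz = len(self.data)   # first approximation: ignores explicit zeros/duplicates

    def todense(self):
        out = np.zeros(self.shape, dtype=self.dtype)
        np.add.at(out, (self.row, self.col), self.data)   # duplicates accumulate, as in COO
        return out

A = SimpleCOO([1, 2, 3], [0, 1, 2], [2, 0, 1], (3, 3))
A.todense()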
The data structure of a csr matrix is a bit more obscure:
data
CSR format data array of the matrix
indices
CSR format index array of the matrix
indptr
CSR format index pointer array of the matrix
It still has 3 arrays, and they can be derived from the coo arrays, but doing so in pure Python won't be nearly as fast as the compiled scipy functions.
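For illustration, a plain-numpy sketch of deriving the csr arrays from the coo ones (unlike scipy's compiled conversion, it does not merge duplicate entries):

import numpy as np

def coo_to_csr(data, row, col, n_rows):
    # Derive CSR (data, indices, indptr) from COO (data, row, col) arrays
    data, row, col = map(np.asarray, (data, row, col))
    order = np.argsort(row, kind='stable')    # group the entries by row
    data, col = data[order], col[order]
    # indptr[i]:indptr[i+1] delimits the entries belonging to row i
    counts = np.bincount(row, minlength=n_rows)
    indptr = np.concatenate([[0], np.cumsum(counts)])
    return data, col, indptr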
But these classes have a lot of functionality that would take considerable work to duplicate. Some of it is pure Python, but the critical pieces are compiled for speed. Particularly important are the mathematical operations that csr_matrix implements, such as matrix multiplication.
Replicating the data structures for temporary storage is one thing; replicating the functionality is quite another.