Calculate the inverse of a non-square matrix using numpy

Is there a way to calculate the inverse of an m×n non-square matrix using numpy? Using la.inv(S) gives me the error ValueError: expected square matrix

You are probably looking for np.linalg.pinv.

To compute the pseudo-inverse of a non-square m×n matrix, use np.linalg.pinv(S), where S is the matrix you want to invert.
For a square matrix, use np.linalg.inv(S). The inverse of a matrix is defined so that multiplying it by the original matrix yields the identity matrix.
Note: np is numpy.
You can also apply np.linalg.inv(S) to a non-square matrix, but to avoid the error you would first have to slice S down to a square submatrix, which discards data.
For more details on np.linalg.pinv: https://numpy.org/doc/stable/reference/generated/numpy.linalg.pinv.html
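A minimal sketch of the pseudo-inverse in action (the matrix below is made up for illustration):
import numpy as np
S = np.random.rand(4, 3)      # a 4x3 (non-square) matrix
S_pinv = np.linalg.pinv(S)    # Moore-Penrose pseudo-inverse, shape 3x4
# When S has full column rank, pinv(S) acts as a left inverse: pinv(S) @ S ≈ I
print(np.allclose(S_pinv @ S, np.eye(3)))  # True, up to floating-point error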

Related

svd doesn't return correct dimension

I have a matrix with dimension (22,2) and I want to decompose it using SVD. SVD in numpy doesn't return the correct dimensions though. I'd expect dimensions like (22,22), (22), (22,2)?
The returned dimensions are correct. The u and vh matrices are always square, while, depending on the software, s can be an array holding just the singular values (as in numpy) or a diagonal matrix with the dimensions of the original matrix (as in MATLAB, for instance).
The dimension of the u matrix is the number of rows of the original matrix, while the dimension of the vh matrix is the number of columns of the original matrix. This can never change, or you would be computing something other than the SVD.
To reconstruct the original matrix from the decomposition in numpy, we need to turn s into a matrix with the proper dimensions. For square matrices it's easy: np.diag(s) is enough. Since your original matrix is not square and has more rows than columns, we can use something like
S = np.vstack([np.diag(s), np.zeros((20, 2))])
Then we get an S matrix that stacks the diagonal matrix of singular values on top of a zero block. In the end, u is 22x22, S is 22x2 and vh is 2x2. Multiplying u @ S @ vh gives the original matrix back.
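Putting it all together, a runnable sketch (using random data in place of the asker's matrix):
import numpy as np
A = np.random.rand(22, 2)                       # same shape as in the question
u, s, vh = np.linalg.svd(A)                     # u: (22, 22), s: (2,), vh: (2, 2)
S = np.vstack([np.diag(s), np.zeros((20, 2))])  # promote s to a 22x2 matrix
print(np.allclose(u @ S @ vh, A))               # True: the factors reconstruct A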

TensorFlow: Contracting a dimension of two tensors via dot product

I have two tensors, a of rank 4 and b of rank 1. I'd like to produce aprime, of rank 3, by "contracting" the last axis of a away, by replacing it with its dot product against b. In numpy, this is as easy as np.tensordot(a, b, 1). However, I can't figure out a way to do this in Tensorflow.
How can I replace the last axis of a tensor with a value equal to that axis's dot product against another tensor (of course, of the same shape)?
UPDATE:
I see in Wikipedia that this is called the "tensor inner product" (https://en.wikipedia.org/wiki/Dot_product#Tensors), aka tensor contraction. It seems like this is a common operation; I'm surprised that there's no explicit support for it in Tensorflow.
I believe that this may be possible via tf.einsum; however, I have not been able to find a generalized way to do this that works for tensors of any rank (this is probably because I do not understand einsum and have been reduced to trial and error)
Aren't you just using tensor in the sense of a multidimensional array? Or in some disciplines a tensor is 3d (vector 1d, matrix 2d, etc). I haven't used tensorflow, but I don't think it has much to do with tensors in that linear algebra sense. They talk about data flow graphs. I'm not sure where the tensor part of the name comes from.
I assume you are talking about an expression like:
In [293]: A=np.tensordot(np.ones((5,4,3,2)),np.arange(2),1)
resulting in a (5,4,3) shape array. The einsum equivalent is
In [294]: B=np.einsum('ijkl,l->ijk',np.ones((5,4,3,2)),np.arange(2))
np.einsum implements Einstein notation, as discussed here: https://en.wikipedia.org/wiki/Einstein_notation. I got this link from https://en.wikipedia.org/wiki/Tensor_contraction
You seem to be talking about straightforward numpy operations, not something special in tensorflow.
I would first add 3 dimensions of size 1 to b so that it can be broadcast along the 4th dimension of a.
b = tf.reshape(b, (1, 1, 1, -1))
Then you can multiply b and a and it will broadcast b along all of the other dimensions.
a_prime = a * b
Finally, sum along the 4th dimension to get rid of it, replacing it with the dot product.
a_prime = tf.reduce_sum(a_prime, [3])
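A consolidated, runnable version of the above (my sketch; the shapes are made up to match the earlier numpy example):
import numpy as np
import tensorflow as tf
a = tf.constant(np.ones((5, 4, 3, 2)))           # rank-4 tensor
b = tf.constant(np.arange(2, dtype=np.float64))  # rank-1 tensor
b = tf.reshape(b, (1, 1, 1, -1))                 # shape (1, 1, 1, 2) for broadcasting
a_prime = tf.reduce_sum(a * b, [3])              # shape (5, 4, 3)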
This seems like it would work (for the first tensor being of any rank):
tf.einsum('...i,i->...', x, y)
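For example (assuming TensorFlow 2.x, where tf.einsum supports the ellipsis notation):
import numpy as np
import tensorflow as tf
x = tf.constant(np.ones((5, 4, 3, 2)))
y = tf.constant(np.arange(2, dtype=np.float64))
print(tf.einsum('...i,i->...', x, y).shape)  # (5, 4, 3), for x of any rank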

A Pure Pythonic Pairwise Euclidean distance of rows of a numpy ndarray

I have a matrix of size (n_classes, n_features) and I want to compute the pairwise Euclidean distance of each pair of classes, so the output would be a (n_classes, n_classes) matrix where each cell has the value of euclidean_distance(class_i, class_j).
I know that there are the scipy spatial distances (http://docs.scipy.org/doc/scipy-0.14.0/reference/spatial.distance.html) and the sklearn.metrics euclidean distances (http://scikit-learn.org/stable/modules/generated/sklearn.metrics.pairwise.euclidean_distances.html), but I want to use this in Theano, so I need a pure mathematical formula rather than functions that compute the results.
For example, I need a series of transformations like A = X * B, D = X.T - X, results = D.T; something that contains just matrix mathematical operations, not functions.
You can do this using numpy broadcasting as shown in this gist. It should be straightforward to convert this to Theano code, or just reference @eickenberg's comment above, since he's the one who showed me how to do this!
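One common pure-matrix formulation (my own sketch, not necessarily the gist's exact code) uses the identity ||x_i - x_j||^2 = ||x_i||^2 + ||x_j||^2 - 2 x_i·x_j, which needs only products, sums and a square root, so it translates directly to Theano:
import numpy as np
X = np.random.rand(5, 3)                   # (n_classes, n_features), made-up data
sq = (X ** 2).sum(axis=1, keepdims=True)   # squared row norms, shape (n, 1)
D2 = sq + sq.T - 2 * X.dot(X.T)            # squared pairwise distances, (n, n)
D = np.sqrt(np.maximum(D2, 0))             # clip round-off negatives before sqrt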

Numpy sum over planes of 3d array, return a scalar

I'm making the transition from MATLAB to Numpy and feeling some growing pains.
I have a 3D array, let's say it's 3x3x3, and I want the scalar sum of each plane.
In matlab, I would use:
sum_vec = sum(3dArray,3);
TIA
wbg
EDIT: I was wrong about my matlab code. Matlab only vectorizes in one dim, so a loop would be required. So numpy turns out to be more elegant...cool.
MATLAB
for i = 1:3
sum_vec(i) = sum(sum(3dArray(:,:,i)));
end
You can do
sum_vec = np.array([plane.sum() for plane in cube])
or simply
sum_vec = cube.sum(-1).sum(-1)
where cube is your 3d array. You can specify 0 or 1 instead of -1 (or 2) depending on the orientation of the planes. The latter version is also better because it doesn't use a Python loop, which usually helps to improve performance when using numpy.
You should use the axis keyword in np.sum. Like in many other numpy functions, axis lets you perform the operation along a specific axis. For example, if you want to sum along the last dimension of the array, you would do:
import numpy as np
sum_vec = np.sum(arr3d, axis=-1)  # arr3d is your 3D array ("3dArray" is not a valid Python name)
And you'll get a resulting 2D array which corresponds to the sum along the last dimension over all the slices arr3d[i, k, :].
UPDATE
I didn't understand exactly what you wanted. You want to sum over two dimensions (a plane). In this case you can do two sums. For example, summing over the first two dimensions:
sum_vec = np.sum(np.sum(arr3d, axis=0), axis=0)
Instead of applying the same sum function twice, you may perform the sum on the reshaped array:
a = np.random.rand(10, 10, 10) # 3D array
b = a.view()
b.shape = (a.shape[0], -1)
c = np.sum(b, axis=1)
The above should be faster because you only sum once.
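As a quick check (my addition), the single-pass result matches summing over the last two axes directly:
print(np.allclose(c, a.sum(axis=(1, 2))))  # True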
sum_vec = np.sum(arr3d, axis=2)
or this works as well
sum_vec = arr3d.sum(2)
Remember Python starts counting at 0, so axis=2 refers to the 3rd dimension.
https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.sum.html
If you're trying to sum over a plane (and avoid loops, which is always a good idea) you can use np.sum and pass a tuple of axes as the axis argument.
For example, if you have an (n, 3, 3) array then
np.sum(a, axis=(1, 2))
will give an array of shape (n,), summing over each plane rather than a single axis (pass keepdims=True if you need shape (n, 1, 1)).
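A quick illustration (my own example data):
import numpy as np
a = np.random.rand(5, 3, 3)                      # an (n, 3, 3) stack of planes
s = np.sum(a, axis=(1, 2))                       # shape (5,): one scalar per plane
s_keep = np.sum(a, axis=(1, 2), keepdims=True)   # shape (5, 1, 1)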

Vectorizing multiplication of matrices with different shapes in numpy/tensorflow

I have a 4x4 input matrix and I want to multiply every 2x2 slice with a weight stored in a 3x3 weight matrix. Please see the attached image for an example:
In the image, the colored section of the 4x4 input matrix is multiplied by the same colored section of the 3x3 weight matrix and stored in the 4x4 output matrix. When the slices overlap, the output takes the sum of the overlaps (e.g. the blue+red).
I am trying to perform this operation in Tensorflow 2.0 using eager tensors (which can be treated as numpy arrays). This is what I've written to perform this operation and it produces the expected output.
inputm = np.ones([4,4]) # initialize 4x4 input matrix
weightm = np.ones([3,3]) # initialize 3x3 weight matrix
outputm = np.zeros([4,4]) # initialize blank 4x4 output matrix
# iterate through each weight
for i in range(weightm.shape[0]):
    for j in range(weightm.shape[1]):
        outputm[i:i+2, j:j+2] += weightm[i,j] * inputm[i:i+2, j:j+2]
However, I don't think this is efficient since I am iterating through the weight matrix one-by-one, and this will be extremely slow when I need to perform this on large matrices of 500x500. I am having a hard time identifying a way to vectorize this operation, maybe tiling the weight matrix to be the same shape as the input matrix and performing a single matrix multiplication. I have also thought about flattening the matrix but I'm still not able to see a way to do this more efficiently.
Any advice will be much appreciated. Thanks in advance!
Alright, I think I have a solution, but this involves using both numpy operations (e.g. np.repeat) and TensorFlow 2.0 operations (i.e. tf.math.segment_sum). And, to warn you, this is not the clearest solution in the world, but it was the most elegant I could come up with. So here goes.
The main culprit in your problem is the weight matrix. If you manipulate this weight matrix into a 4x4 matrix (with the correct sum of weights at each position), you get a weight matrix you can element-wise multiply with the input. And that's my solution. Note that this is designed for the 4x4 problem and you should be able to extend it relatively easily to the 500x500 matrix.
import numpy as np
import tensorflow as tf
a = np.array([[1,2,3,4],[4,3,2,1],[1,2,3,4],[4,3,2,1]])
w = np.array([[5,4,3],[3,4,5],[5,4,3]])
# Expand the weights to a 6x6 matrix by repeating 2 times along both axes
w_rep = np.repeat(w,2,axis=0)
w_rep = np.repeat(w_rep,2,axis=1)
# Let's now jump in to tensorflow
tf_a = tf.constant(a)
tf_w = tf.constant(w_rep)
tf_segments = tf.constant([0,1,1,2,2,3])
# This is the trickiest bit: here we use segment_sum to achieve what we need.
# segment_sum computes the sum of segments along the very first dimension of a matrix,
# so we apply it to the weight matrix twice: once to the original and once to the transpose.
tf_w2 = tf.math.segment_sum(tf_w, tf_segments)
tf_w2 = tf.transpose(tf_w2)
tf_w2 = tf.math.segment_sum(tf_w2, tf_segments)
tf_w2 = tf.transpose(tf_w2)
print(tf_w2*a)
PS: I will try to include an illustration of what's going on here in a future edit. But I reckon that will take some time.
After realising @thushv89's trick, I realised you can get the same result by convolving the weight matrix with a matrix of ones:
import numpy as np
from scipy.signal import convolve2d
a = np.ones([4,4]) # initialize 4x4 input matrix
w = np.ones([3,3]) # initialize 3x3 weight matrix
b = np.multiply(a, convolve2d(w, np.ones((2,2))))
print(b)
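As a sanity check (my addition), this matches the explicit loop from the question:
outputm = np.zeros([4, 4])
for i in range(w.shape[0]):
    for j in range(w.shape[1]):
        outputm[i:i+2, j:j+2] += w[i, j] * a[i:i+2, j:j+2]
print(np.allclose(b, outputm))  # True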