How can I replace the summing in numpy matrix multiplication with concatenation in a new dimension?

For each location in the result matrix, instead of storing the dot product of the corresponding row and column in the argument matrices, I would like to store the element-wise product, which will be a vector extending into a third dimension.
One idea would be to convert the argument matrices to vectors with vector entries, and then take their outer product, but I'm not sure how to do this either.
EDIT:
I figured it out before I saw there was a reply. Here is my solution:
def newdot(A, B):
    A = A.reshape((1,) + A.shape)
    B = B.reshape((1,) + B.shape)
    A = A.transpose(2, 1, 0)
    B = B.transpose(1, 0, 2)
    return A * B
What I am doing is lining up each column of A with the corresponding row of B, so that the broadcasted elementwise product computes all of their outer products in parallel, stacked along a new leading axis.
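As a quick sanity check (a sketch with made-up shapes), summing newdot's result over its first axis should recover the ordinary matrix product:

import numpy as np

A = np.arange(6.0).reshape(2, 3)
B = np.arange(12.0).reshape(3, 4)
C = newdot(A, B)   # shape (3, 2, 4): C[k, i, j] == A[i, k] * B[k, j]
assert np.allclose(C.sum(axis=0), A.dot(B))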

It's a little convoluted (and difficult to explain) but this function should get you what you're looking for:
def f(m1, m2):
    return m2.A.T * m1.A.reshape(m1.shape[0], 1, m1.shape[1])

m3 = m1 * m2
m3_el = f(m1, m2)
m3[i,j] == sum(m3_el[i,j,:])
m3 == m3_el.sum(2)
The basic idea is to turn the matrices into arrays and do element-by-element multiplication. One of the arrays gets reshaped to have a size of one in its middle dimension, and array broadcasting rules expand this dimension out to match the height of the other array.
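For example, a small check (with assumed shapes) that the elementwise products summed over the last axis reproduce ordinary matrix multiplication:

import numpy as np

m1 = np.matrix(np.random.rand(3, 4))
m2 = np.matrix(np.random.rand(4, 5))
m3_el = f(m1, m2)   # shape (3, 5, 4)
assert np.allclose(m3_el.sum(axis=2), m1 * m2)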

Pseudoinverse calculation in Python

Problem
I was working on the problem described here. I have two goals.
For any given system of linear equations, figure out which variables have unique solutions.
For those variables with unique solutions, return the minimal list of equations such that knowing those equations determines the value of that variable.
For example, in the following set of equations
X = a + b
Y = a + b + c
Z = a + b + c + d
The appropriate output should be c and d, where X and Y determine c and Y and Z determine d.
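To make this concrete, here is a small sketch (my illustration, not part of the original problem statement) of the toy system as a coefficient matrix, showing how its null space singles out c and d:

import numpy as np

# Coefficient matrix for X, Y, Z over the variables (a, b, c, d).
M = np.array([[1., 1., 0., 0.],   # X = a + b
              [1., 1., 1., 0.],   # Y = a + b + c
              [1., 1., 1., 1.]])  # Z = a + b + c + d
_, s, vt = np.linalg.svd(M)
ns = vt[(s >= 1e-10).sum():]      # null-space basis, one vector per row
print(np.round(ns, 2))            # approx. [[0.71, -0.71, 0., 0.]] (up to sign)
# The columns for c and d are zero in every null-space vector, so those
# two variables are uniquely determined; a and b are not.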
Parameters
I'm provided a two-column pandas DataFrame named InputDataSet, where the two columns are Equation and Variable. Each row represents a variable's membership in a given equation. For example, the above set of equations would be represented as
InputDataSet = pd.DataFrame([['X','a'],['X','b'],['Y','a'],['Y','b'],['Y','c'],
                             ['Z','a'],['Z','b'],['Z','c'],['Z','d']],
                            columns=['Equation','Variable'])
The output will be stored in a two-column DataFrame named OutputDataSet as well, where the first column contains the variables that have a unique solution, and the second is a comma-delimited string of the minimal set of equations needed to solve that variable. For example, the correct OutputDataSet would look like
OutputDataSet = pd.DataFrame([['c','X,Y'],['d','Y,Z']],columns=['Variable','EquationList'])
Current Solution
My current solution takes the InputDataSet and converts it into a NetworkX graph. After splitting the graph into connected subgraphs, it converts each subgraph into a biadjacency matrix (the graph is bipartite by construction). From that matrix the SVD is computed, and the null space and pseudoinverse are both derived from it (to see how they are calculated, see here and here: look at the source code for numpy.linalg.pinv and the cookbook function for a null space; I fused the two functions since they both use the SVD).
After calculating the null space and pseudo-inverse and rounding to a given tolerance, I find all rows in the null space whose coefficients are all 0 and return those variables as the ones with a unique solution; for each such variable, I return the equations that have non-zero coefficients in the corresponding row of the pseudo-inverse.
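Continuing the toy sketch above, the rows of the pseudo-inverse are what this extraction step reads off (again my illustration, not the code below):

import numpy as np

M = np.array([[1., 1., 0., 0.],   # X = a + b
              [1., 1., 1., 0.],   # Y = a + b + c
              [1., 1., 1., 1.]])  # Z = a + b + c + d
print(np.round(np.linalg.pinv(M), 2))
# The row for c is (-1, 1, 0) -> c = Y - X, so c needs equations X and Y.
# The row for d is (0, -1, 1) -> d = Z - Y, so d needs equations Y and Z.
# The non-zero entries name exactly the minimal equation set per variable.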
Here is the code:
import networkx as nx
import pandas as pd
import numpy as np
import numpy.core as cr

def svd_lite(a, tol=1e-2):
    # Fused pinv/nullspace computation: both come from the same SVD.
    wrap = getattr(a, "__array_prepare__", a.__array_wrap__)
    rcond = cr.asarray(tol)
    a = a.conjugate()
    u, s, vt = np.linalg.svd(a)
    # Null space: right singular vectors whose singular values fall below tol.
    nnz = (s >= tol).sum()
    ns = vt[nnz:].conj().T
    shape = a.shape
    if shape[0] > shape[1]:
        u = u[:, :shape[1]]
    elif shape[1] > shape[0]:
        vt = vt[:shape[0]]
    # Pseudo-inverse: invert the singular values above the cutoff, zero the rest.
    cutoff = rcond[..., cr.newaxis] * cr.amax(s, axis=-1, keepdims=True)
    large = s > cutoff
    s = cr.divide(1, s, where=large, out=s)
    s[~large] = 0
    res = cr.matmul(cr.swapaxes(vt, -1, -2),
                    cr.multiply(s[..., cr.newaxis], cr.swapaxes(u, -1, -2)))
    return (wrap(res), ns)

cols = InputDataSet.columns
tolexp = 2
graphs = nx.connected_component_subgraphs(
    nx.from_pandas_dataframe(InputDataSet, cols[0], cols[1]))
OutputDataSet = []
Eqs = InputDataSet[cols[0]].unique()
Vars = InputDataSet[cols[1]].unique()
for g in graphs:
    EqList = np.array([val for val in np.array(g.nodes) if val in Eqs])
    VarList = [val for val in np.array(g.nodes) if val in Vars]
    pinv, nulls = svd_lite(nx.bipartite.biadjacency_matrix(g, EqList, VarList, format='csc')
                           .astype(float).todense(), tol=10**-tolexp)
    # Variables whose null-space row is all zeros have a unique solution.
    df2 = np.where(~np.round(nulls, tolexp).any(axis=1))[0]
    df3 = np.round(np.array(pinv), tolexp)
    # For each such variable, report the equations with non-zero pinv coefficients.
    OutputDataSet.extend([[VarList[i], ",".join(EqList[np.nonzero(df3[i])])]
                          for i in df2])
OutputDataSet = pd.DataFrame(OutputDataSet)
Issues
On the data that I've tested this algorithm on, it performs pretty well with decent execution time. However, the main issue is that it suggests far too many equations as required to determine a given variable.
Often, with datasets of 10,000 equations, the algorithm will claim that 8,000 of those 10,000 are required to determine a given variable, which most definitely is not the case.
I tried raising the tolerance (the threshold to which I round the coefficients in the pseudo-inverse) to 0.1, but even then nearly 5,000 equations had non-zero coefficients.
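For intuition on why raising the tolerance doesn't help, here is a hedged toy illustration (mine, not drawn from the dataset): the Moore-Penrose pseudoinverse gives the minimum-norm combination, which spreads small non-zero weights over many equations instead of picking a sparse subset.

import numpy as np

# 50 equations that all contain the same single variable: the minimum-norm
# reconstruction weights every equation equally at 1/50 = 0.02, which
# survives rounding at tol = 0.01 even though one equation would suffice.
M = np.ones((50, 1))
print(np.linalg.pinv(M))   # every entry is 0.02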
I had conjectured that perhaps the pseudo-inverse is collapsing upon a non-optimal set of coefficients, but the Moore-Penrose pseudoinverse is unique, so that isn't a possibility.
Am I doing something wrong here? Or is the approach I'm taking not going to give me what I desire?
Further Notes
All of the coefficients of all of the variables are 1
The results the current algorithm produces are reliable: when I multiply any vector of equation totals by the pseudoinverse the algorithm generates, I recover the correct values for the variables claimed to have a unique solution, which is promising.
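That check can be reproduced on the toy system sketched earlier (my illustration):

import numpy as np

# With a=1, b=2, c=3, d=4 the equation totals are X=3, Y=6, Z=10.
M = np.array([[1., 1., 0., 0.],
              [1., 1., 1., 0.],
              [1., 1., 1., 1.]])
est = np.linalg.pinv(M) @ np.array([3., 6., 10.])
print(np.round(est, 2))   # [1.5, 1.5, 3.0, 4.0]
# c and d (the unique variables) are recovered exactly;
# a and b only get the minimum-norm guess of 1.5 each.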
What I want to know here is either whether I'm doing something wrong in how I'm extrapolating information from the pseudo-inverse, or whether my approach is completely wrong.
I apologize for not posting any actual results, but not only are they quite large, they are also somewhat unintuitive, since they are reformatted into an XML that would probably take another question to explain anyway.
Thank you for your time!

How to create a 3-D array by multiplying vectors from two 2-D matrices

I have two 2-D matrices which have a shared axis.
I want to get a 3-D array that holds the results of every pairwise multiplication made between all the combinations of vectors from each matrix along that shared axis.
What is the best way to achieve this? (assuming that the matrices are big)
As an illustration, let's say I have 100 technicians and 1000 customers.
For each of these individuals I have a 1-D array of ones and zeros representing their availability on each day of the week.
That's a 7x100 matrix for the technicians and a 7x1000 matrix for the customers.
import numpy as np
technicians = np.random.randint(low=0, high=2, size=(7, 100))
customers = np.random.randint(low=0, high=2, size=(7, 1000))
result = solution(technicians, customers)
result.shape   # (7, 100, 1000)
I want to find for each technician-customer couple the days they are both available.
If I perform a pairwise multiplication between each combination of technician availability and customer availability, I get a 1-D array that shows, for each couple, on which days they are both available. Together these arrays form the 3-D array I'm aiming for, shaped 7x100x1000.
Thanks!
Try
ans = technicians.reshape((7, 100, 1)) * customers.reshape((7, 1, 1000))
We make use of NumPy broadcasting:

General Broadcasting Rules: When operating on two arrays, NumPy compares their shapes element-wise. It starts with the trailing dimensions and works its way forward. Two dimensions are compatible when (1) they are equal, or (2) one of them is 1.

Using reshape, we match the shapes of technicians and customers as
technicians: 7 x 100 x 1
customers:   7 x   1 x 1000
Result (3-D array): 7 x 100 x 1000
Then we can apply elementwise multiplication with *.
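A quick shape check (a sketch using the arrays from the question):

import numpy as np

technicians = np.random.randint(low=0, high=2, size=(7, 100))
customers = np.random.randint(low=0, high=2, size=(7, 1000))
ans = technicians.reshape((7, 100, 1)) * customers.reshape((7, 1, 1000))
assert ans.shape == (7, 100, 1000)
# ans[d, t, c] == 1 exactly when technician t and customer c
# are both available on day d.
assert ans[3, 5, 42] == technicians[3, 5] * customers[3, 42]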

How to perform matching between two sequences?

I have two mini-batch of sequences :
a = C.sequence.input_variable((10))
b = C.sequence.input_variable((10))
Both a and b have variable-length sequences.
I want to do matching between them, where matching is defined as: match (e.g. via dot product) the token at each time step of a with the token at every time step of b.
How can I do this?
I have mostly answered this on GitHub, but to be consistent with SO rules I am including a response here. In the case of something simple like a dot product, you can take advantage of the fact that it factorizes nicely, so the following code works:
axisa = C.Axis.new_unique_dynamic_axis('a')
axisb = C.Axis.new_unique_dynamic_axis('b')
a = C.sequence.input_variable(1, sequence_axis=axisa)
b = C.sequence.input_variable(1, sequence_axis=axisb)
c = C.sequence.broadcast_as(C.sequence.reduce_sum(a), b) * b
c.eval({a: [[1, 2, 3],[4, 5]], b: [[6, 7], [8]]})
[array([[ 36.],
        [ 42.]], dtype=float32), array([[ 72.]], dtype=float32)]
Here reduce_sum collapses each sequence of a to a single value (6 and 9), and broadcast_as multiplies that value into every step of the corresponding sequence of b: 6 * [6, 7] = [36, 42] and 9 * [8] = [72]. In the general case you need the following steps:
static_b, mask = C.sequence.unpack(b, neutral_value).outputs
scores = your_score(a, static_b)
The first line converts the b sequence into a static tensor with one more axis than b. Because of the packing, some elements of this tensor are invalid, and those are indicated by the mask. The neutral_value is placed as a dummy value in the static_b tensor wherever data was missing. Depending on your score, you may be able to arrange for the neutral_value not to affect the final result (e.g. if your score is a dot product, 0 is a good choice; if it involves a softmax, -infinity or something close to it is a good choice). The second line can now access each element of a together with all the elements of b along the first axis of static_b. For a dot product, static_b is a matrix and one element of a is a vector, so a matrix-vector multiplication yields a sequence whose elements are the inner products between the corresponding element of a and all elements of b.
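To illustrate the mechanics of the general recipe in plain NumPy (an analogy only, not CNTK code; the shapes and padding length are made up):

import numpy as np

# b is padded to a fixed length with neutral_value = 0; a boolean mask
# marks which slots hold real data.
a_seq = np.array([[1., 2.], [3., 4.], [5., 6.]])   # 3 steps, dim 2
b_seq = np.array([[7., 8.], [9., 10.]])            # 2 steps, dim 2
max_len = 4
static_b = np.zeros((max_len, 2))
static_b[:len(b_seq)] = b_seq
mask = np.arange(max_len) < len(b_seq)

# Matrix-vector products: row i holds the dot products of a's step i
# with every (padded) step of b; masked slots are then discarded.
scores = a_seq @ static_b.T                        # shape (3, max_len)
scores = np.where(mask, scores, -np.inf)           # e.g. before a softmax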

Extract different rows from two numpy 2D arrays

I generated a new matrix B (50, 40) of random rows from a matrix A (100, 40):
B = A[np.random.randint(0,100,size=50)] # it works fine.
Now, I want to take the rows from A that aren't in matrix B.
C = A not in B # pseudocode.
This should do the job:
import numpy as np
A = np.random.randint(5, size=[100, 40])
l = np.random.choice(100, size=50, replace=False)
B = A[l]
C = A[np.setdiff1d(np.arange(0, 100), l)]
l stores the selected rows, and for C you take the complement of l. Then C is the required matrix.
Note that I set l = np.random.choice(100, size=50, replace=False) to avoid replacement. If you use np.random.randint(0, 100, size=50) you may get repeated rows, because the same index can be drawn more than once.
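You can see the difference directly (a small check):

import numpy as np

idx = np.random.randint(0, 100, size=50)
print(len(np.unique(idx)))   # typically < 50: some indices repeat
idx = np.random.choice(100, size=50, replace=False)
print(len(np.unique(idx)))   # always 50: no repeats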
Inspired by this question: Check whether each row of a matrix is in another matrix [Python]. First get the indices of the rows that exist in B, then take the difference from the full set of A's indices, and finally select the rows using that difference.
index = np.argwhere((B[:,None,:] == A[:,:]).all(-1))[:, 1]
C = A[np.setdiff1d(np.arange(100), index)]
The numpy_indexed package (disclaimer: I am its author) has efficient vectorized functionality for all these kinds of operations.
import numpy_indexed as npi
C = npi.difference(A, B)

Numpy matrix multiplication with custom dot product

The default matrix multiplication is computed as
c[i,j] = sum(a[i,k] * b[k,j])
I am trying to use a custom formula instead of the dot product to get
c[i,j] = sum(a[i,k] == b[k,j])
Is there an efficient way to do this in numpy?
You could use broadcasting:
c = np.sum(a[..., np.newaxis] * b[np.newaxis, ...], axis=1)   # == np.dot(a, b)
c = np.sum(a[..., np.newaxis] == b[np.newaxis, ...], axis=1)
I included the newaxis in b just to make it clear how that array is expanded. There are other ways of adding dimensions to arrays (reshape, repeat, etc.), but the effect is the same: expand a and b to a common shape, do the element-by-element multiplication (or ==), and then sum over the correct axis.
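A short sanity check of both formulas (a sketch with assumed shapes):

import numpy as np

a = np.random.randint(0, 3, size=(4, 5))
b = np.random.randint(0, 3, size=(5, 6))

c_dot = np.sum(a[..., np.newaxis] * b[np.newaxis, ...], axis=1)
assert np.array_equal(c_dot, np.dot(a, b))

# c_match[i, j] counts the positions k where a[i, k] == b[k, j].
c_match = np.sum(a[..., np.newaxis] == b[np.newaxis, ...], axis=1)
assert c_match.max() <= a.shape[1]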