Using numpy einsum to perform high dimensional subtraction broadcasting - numpy

I'm having trouble using a broadcasting subtraction. My problem is the following: I have an array x of shape [L,N], where L is an integer and N is the number of variables of my problem.
I need to compute a [L,N,N] array where at each element l,i,j it contains x[l,i]-x[l,j].
If L=1 this is equivalent to running a broadcast subtraction: x-x.T
For example here with L=1 and N=3:
import numpy as np
x = np.array([[0,2,4]])
x-x.T
However, if one increases the dimension L things become more complicated and enter the realm of the np.einsum function.
So I tried to recreate my example for the case L=2, where I've replicated the row twice. What I'd expect is a 2x3x3 array containing two identical 3x3 matrices.
x = np.array([[0,2,4],[0,2,4]])
n = 3
k = 2
X = np.zeros([k,n,n])
for l in range(k):
    for i in range(n):
        for j in range(n):
            X[l,i,j] = x[l,i]-x[l,j]
print(X)
which returns
[[[ 0. -2. -4.]
  [ 2.  0. -2.]
  [ 4.  2.  0.]]

 [[ 0. -2. -4.]
  [ 2.  0. -2.]
  [ 4.  2.  0.]]]
But how can I do this with numpy einsum? I can only obtain the product:
np.einsum('ki,kj->kij',x,-x)
Are there specific examples of numpy batched subtractions or additions with increased dimension?
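For reference, this batched difference doesn't actually need einsum: inserting length-1 axes and letting broadcasting do the work reproduces the triple loop above. A minimal sketch (my own addition, not part of the original question):

import numpy as np

x = np.array([[0, 2, 4], [0, 2, 4]])
# X[l, i, j] = x[l, i] - x[l, j], shape (L, N, N)
X = x[:, :, None] - x[:, None, :]
print(X)

Since np.einsum only expresses sums of products, a pure subtraction like this is exactly the case where broadcasting (or equivalently np.subtract with the same reshaped operands) is the natural tool.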

Related

Built-in index dependent weight for tensordot in numpy?

I would like to obtain a tensordot of two arrays with the same shape, with an index-dependent weight applied, without using an explicit loop. For example,
import numpy as np
A=np.array([1,2,3])
B=np.array([-2,6,9])
C=np.zeros((3,3))
for i in range(3):
    for j in range(3):
        C[i,j] = A[i]*B[j]*(np.exp(i-j) if i>j else 0)
Can an array similar to C be obtained with a built-in tool (e.g., with some options for tensordot)?
Here's a vectorized solution:
N = 3
C = np.tril(A[:, None] * B * np.exp(np.arange(N)[:, None] - np.arange(N)), k=-1)
Output:
>>> C
array([[ -2. , 0. , 0. ],
[-10.87312731, 12. , 0. ],
[-44.33433659, 48.92907291, 27. ]])
Here is an alternative with np.einsum; in my timings it is inconsistently slightly faster than broadcasting for some larger inputs and slower for others.
import numpy as np
A=np.array([1,2,3])
B=np.array([-2,6,9])
idx = np.arange(len(A))
# the weight exp(i-j) depends on the indices, not on the values of A
np.einsum('ij,i,j->ij', np.tril(np.exp(np.subtract.outer(idx, idx)), -1), A, B)
Output
array([[  0.        ,   0.        ,   0.        ],
       [-10.87312731,   0.        ,   0.        ],
       [-44.33433659,  48.92907291,   0.        ]])
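A quick sanity check (my own addition) that the vectorized form really matches the loop from the question:

import numpy as np

A = np.array([1, 2, 3])
B = np.array([-2, 6, 9])
N = len(A)

# reference: the explicit double loop from the question
C_loop = np.zeros((N, N))
for i in range(N):
    for j in range(N):
        C_loop[i, j] = A[i] * B[j] * (np.exp(i - j) if i > j else 0)

# vectorized version from the first answer
C_vec = np.tril(A[:, None] * B * np.exp(np.arange(N)[:, None] - np.arange(N)), k=-1)
print(np.allclose(C_loop, C_vec))  # True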

After projecting 3D points to 2D, how to get back to 3D?

Simple question: I used a translation and rotation matrix and a camera intrinsics matrix to get a 3x4 matrix that transforms 3D points to 2D points (denoted Tform).
I transformed the point [10,-5,1] with the matrix by appending a 1 to it, and the new point is denoted newpoint.
Now I want to use the newpoint data to transform back to 3D space, where old_est should be equal to old.
I'm looking for what to plug in for the XXX matrix in my code below:
import numpy as np
Tform=np.array([[4000,0,-1600,-8000],[500,5000,868,-8000],[.5,0,.8,-8]])
old=np.array([10,-5,1,1])
newpoint=np.dot(Tform,old)
print(newpoint)
old_est=np.dot(XXX,np.append(newpoint,1))
print(old_est)
Add a 4th row to Tform with the values 0 0 0 1, i.e. the last row of an identity matrix:
>>> m = np.vstack((Tform, np.array([0,0,0,1])))
>>> m
array([[ 4.00e+03,  0.00e+00, -1.60e+03, -8.00e+03],
       [ 5.00e+02,  5.00e+03,  8.68e+02, -8.00e+03],
       [ 5.00e-01,  0.00e+00,  8.00e-01, -8.00e+00],
       [ 0.00e+00,  0.00e+00,  0.00e+00,  1.00e+00]])
Note that you cannot simply use np.append here because, without an axis argument, it flattens the input arrays.
Observe that, when multiplied with old, the 4th component of the result is 1, i.e. the result is equal to np.append(newpoint, 1):
>>> np.dot(m, old)
array([ 3.0400e+04, -2.7132e+04, -2.2000e+00, 1.0000e+00])
It follows that XXX is the inverse of this new matrix:
>>> XXX = np.linalg.inv(m)
>>> np.dot(XXX, np.append(newpoint, 1))
array([10., -5., 1., 1.])
And we get the components of old back.
Alternatively you can subtract the 4th column of Tform from newpoint and multiply the result with the inverse of the left 3x3 sub-matrix of Tform, but this is slightly fiddly so we might as well let numpy do more of the work :)
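For completeness, here is a minimal sketch of that alternative (my own addition, assuming the left 3x3 block of Tform is invertible, which it is for the numbers above):

import numpy as np

Tform = np.array([[4000, 0, -1600, -8000],
                  [500, 5000, 868, -8000],
                  [.5, 0, .8, -8]])
old = np.array([10, -5, 1, 1])
newpoint = np.dot(Tform, old)

M = Tform[:, :3]   # 3x3 linear part (rotation/intrinsics)
t = Tform[:, 3]    # translation column
old_est = np.linalg.solve(M, newpoint - t)  # undo the affine map
print(old_est)     # [10. -5.  1.]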

How to create index combinations (k out of n) as sparse bitmasks for numpy

For numpy, how can I efficiently create
1. an array/matrix representing a list of all combinations (k out of n) as lists of k indices. The shape would be (binomial(n, k), k).
2. a sparse array/matrix representing these combinations as bitmasks of length n (i.e. expanding the above indices into bitmasks). The shape would be (binomial(n, k), n).
I need to do this with large n (and maybe small k), so the algorithm should be
- time efficient (e.g. maybe allocate the complete result space at once before filling it?)
- space efficient (e.g. sparse bitmasks)
Many thanks for your help.
Assuming the blowup is not that bad (as mentioned in the comment above), you might try this. It's pretty vectorized and should be fast (for the cases it can handle).
Edit: I somewhat assumed you are interested in an output based on scipy.sparse. Maybe you are not.
Code
import itertools
import numpy as np
import scipy.sparse as sp
def combs(a, r):
    """
    Return successive r-length combinations of elements in the array a.
    Should produce the same output as array(list(combinations(a, r))),
    but faster.
    """
    a = np.asarray(a)
    dt = np.dtype([('', a.dtype)] * r)
    b = np.fromiter(itertools.combinations(a, r), dt)
    b_ = b.view(a.dtype).reshape(-1, r)
    return b_

def sparse_combs(k, n):
    # rows of k column-indices, one row per combination
    combs_ = combs(np.arange(n), k)
    n_bin = combs_.shape[0]
    # one COO entry per set bit: the row index repeated k times, columns from combs_
    spmat = sp.coo_matrix((np.ones(n_bin * k),
                           (np.repeat(np.arange(n_bin), k),
                            combs_.ravel())),
                          shape=(n_bin, n))
    return spmat
print('dense')
print(combs(range(4), 3))
print('sparse (dense for print)')
print(sparse_combs(3, 4).todense())
Output
dense
[[0 1 2]
 [0 1 3]
 [0 2 3]
 [1 2 3]]
sparse (dense for print)
[[ 1.  1.  1.  0.]
 [ 1.  1.  0.  1.]
 [ 1.  0.  1.  1.]
 [ 0.  1.  1.  1.]]
The helper function combs I took (probably) from this question (sometime in the past).
Small (unscientific) timing:
from time import perf_counter as pc

start = pc()
spmat = sparse_combs(5, 50)
time_used = pc() - start
print('secs: ', time_used)
print('nnzs: ', spmat.nnz)
# secs:  0.5770790778094155
# nnzs:  10593800

# and for sparse_combs(3, 500):
# secs:  3.4843752405405497
# nnzs:  62125500
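If you later need to map a bitmask row back to its indices, converting the COO matrix to CSR makes row access cheap. A small usage sketch (my own addition):

spmat = sparse_combs(3, 4).tocsr()
print(spmat[2].indices)    # [0 2 3]  -- columns of the set bits in the third combination
print(spmat[2].toarray())  # [[1. 0. 1. 1.]]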

How does tf.multinomial work?

How does tf.multinomial work? Here it is stated that it "Draws samples from a multinomial distribution". What does that mean?
If you perform an experiment n times that can have only two outcomes (either success or failure, head or tail, etc.), then the number of times you obtain one of the two outcomes (success) is a binomial random variable.
Similarly, if you perform such an experiment only once, then a random variable that takes value 1 in case of success and value 0 in case of failure is a Bernoulli random variable; the binomial variable is the sum of n independent Bernoulli variables.
If you perform an experiment n times that can have K outcomes (where K can be any natural number) and you denote by X_i the number of times that you obtain the i-th outcome, then the random vector X defined as
X = [X_1, X_2, X_3, ..., X_K]
is a multinomial random vector.
Similarly, if you perform a single experiment that can have K outcomes and you denote by X_i a random variable that takes value 1 if you obtain the i-th outcome and 0 otherwise, then the random vector X defined as
X = [X_1, X_2, X_3, ..., X_K]
is a Multinoulli random vector. In other words, when the i-th outcome is obtained, the i-th entry of the Multinoulli random vector X takes value 1, while all other entries take value 0.
So a multinomial random vector can be seen as a sum of n mutually independent Multinoulli random vectors.
And the probabilities of the K possible outcomes will be denoted by
p_1, p_2, p_3, ..., p_K
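One quick way to see this sum-of-Multinoullis view numerically (a numpy sketch of my own, not part of the original answer): np.random.multinomial(1, p) returns a one-hot Multinoulli draw, and summing n such draws has the same distribution as a single np.random.multinomial(n, p) draw.

import numpy as np

p = [0.1, 0.2, 0.7]
n = 4

one_hot = np.random.multinomial(1, p, size=n)  # n Multinoulli (one-hot) vectors, shape (n, K)
counts = one_hot.sum(axis=0)                   # a multinomial sample, e.g. [0 1 3]
direct = np.random.multinomial(n, p)           # drawn directly from the multinomial
print(counts, direct)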
An example in Tensorflow,
In [171]: isess = tf.InteractiveSession()

In [172]: prob = [[.1, .2, .7], [.3, .3, .4]]  # Shape [2, 3]
     ...: dist = tf.distributions.Multinomial(total_count=[4., 5], probs=prob)
     ...:
     ...: counts = [[2., 1, 1], [3, 1, 1]]
     ...: isess.run(dist.prob(counts))  # Shape [2]
     ...:
Out[172]: array([ 0.0168    ,  0.06479999], dtype=float32)
Note: the Multinomial distribution is identical to the Binomial distribution when K = 2. For more detailed information, please refer to either tf.compat.v1.distributions.Multinomial or the latest docs of tensorflow_probability.distributions.Multinomial.
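As for tf.multinomial itself: despite its name, it draws category indices (one categorical sample per draw) rather than count vectors. A small sketch, assuming TensorFlow 2, where the op is called tf.random.categorical (tf.multinomial is the deprecated TF1 name for the same thing):

import tensorflow as tf

logits = tf.math.log([[0.1, 0.2, 0.7]])  # shape [batch, K], unnormalized log-probabilities
samples = tf.random.categorical(logits, num_samples=5)
print(samples)  # e.g. [[2 2 0 2 2]] -- five indices drawn with probabilities [.1, .2, .7]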

Why does matrix multiplication give different results depending on how they are grouped?

We know that (A*B)*C = A*(B*C), but why does this matrix multiplication give different results?
import numpy as np
A = np.array([[1,2,3],[4,5,6]])
B = np.array([[1,2,3],[4,5,6],[7,8,9]])
print( A.dot( np.linalg.inv(B) ).dot(A.T) )
print( A.dot( np.linalg.inv(B).dot(A.T) ) )
The result is
[[ 0.5  2. ]
 [ 1.   4. ]]
and
[[ 2.  4.]
 [ 8. 16.]]
B is of insufficient rank to take an inverse. To get at least consistent results, use np.linalg.pinv for the pseudo inverse.
np.linalg.matrix_rank(B)
# we want 3
# we got 2
2
A = np.array([[1,2,3],[4,5,6]])
B = np.array([[1,2,3],[4,5,6],[7,8,9]])
print( A.dot( np.linalg.pinv(B) ).dot(A.T) )
print( A.dot( np.linalg.pinv(B).dot(A.T) ) )
[[ 1. 4.]
[ 2. 5.]]
[[ 1. 4.]
[ 2. 5.]]
Floating-point arithmetic operations are not associative. Usually we don't notice this because the numerical differences between the matrices A*(B*C) and (A*B)*C are tiny. But in this case, you are trying to invert a non-invertible matrix B, which Numpy actually attempts to do, producing an absurd result:
[[ 3.15251974e+15 -6.30503948e+15 3.15251974e+15]
[ -6.30503948e+15 1.26100790e+16 -6.30503948e+15]
[ 3.15251974e+15 -6.30503948e+15 3.15251974e+15]]
The magnitude of these numbers is such that errors of size ~1 are to be expected at double precision (you get about 16 accurate digits). The multiplication by A and A.T brings the matrix entries back to something small, due to a lot of cancellation. But when very large numbers cancel each other, the relative error grows, and the result ends up being fairly meaningless.
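A tiny demonstration of the underlying non-associativity (my own addition, independent of the matrices above):

import numpy as np

# Plain floating-point addition already fails to associate exactly:
print((0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3))  # False
print((0.1 + 0.2) + 0.3, 0.1 + (0.2 + 0.3))    # 0.6000000000000001 0.6

# With well-conditioned matrices the same effect exists but stays near machine precision:
rng = np.random.default_rng(0)
A, B, C = (rng.standard_normal((3, 3)) for _ in range(3))
print(np.max(np.abs((A @ B) @ C - A @ (B @ C))))  # tiny, on the order of 1e-15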