I'm preprocessing a square matrix A of shape (n, n) with SciPy's LU decomposition and then solving over and over again for multiple right-hand sides B of shape (..., n). But scipy.linalg.lu_solve seems to accept only a vector for b, not a matrix like (m, n) or (k, m, n).
How can I wrap lu_solve to work for arguments of shape (..., n)? NumPy's linalg.solve would accept multiple b, but it does not allow separating the LU factorization from the solve step.
It is not mentioned in the documentation of lu_solve, but in fact b can contain multiple vectors. If A has shape (n, n), then b can have shape (n, m). For example,
In [44]: A
Out[44]:
array([[ 1.01,  0.02, -0.01],
       [ 0.02,  1.04, -0.02],
       [-0.01, -0.02,  1.01]])
In [45]: b
Out[45]:
array([[ 0,  1,  2,  3],
       [ 4,  5,  6,  7],
       [ 8,  9, 10, 11]])
In [46]: lu = lu_factor(A)
In [47]: x = lu_solve(lu, b)
In [48]: x
Out[48]:
array([[ 0.        ,  0.98113208,  1.96226415,  2.94339623],
       [ 4.        ,  4.96226415,  5.9245283 ,  6.88679245],
       [ 8.        ,  9.01886792, 10.03773585, 11.05660377]])
In [49]: A.dot(x)
Out[49]:
array([[ 0.,  1.,  2.,  3.],
       [ 4.,  5.,  6.,  7.],
       [ 8.,  9., 10., 11.]])
Higher-dimensional b must have shape (n, ...). Note that for shapes with more than two dimensions, testing the result with A.dot(x) will not work, because the shape of x will not be compatible with NumPy's matrix multiplication. For example, here B has shape (3, 2, 5):
In [40]: A
Out[40]:
array([[ 1.01,  0.02, -0.01],
       [ 0.02,  1.04, -0.02],
       [-0.01, -0.02,  1.01]])
In [41]: B = np.random.rand(3, 2, 5)
In [42]: lu = lu_factor(A)
In [43]: x = lu_solve(lu, B)
In [44]: x.shape
Out[44]: (3, 2, 5)
In [45]: xx = np.moveaxis(x, 0, 1)
In [46]: np.allclose(A.dot(xx), B)
Out[46]: True
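To get the (..., n) interface asked for in the question, here is a minimal sketch of a wrapper (the helper name lu_solve_any is made up here) that moves the last axis to the front before the solve and back afterwards:
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def lu_solve_any(lu_piv, B):
    """ Solve A x = b for every vector b along the last axis of B.
    lu_piv is the (lu, piv) pair returned by lu_factor; B has shape (..., n).
    """
    # lu_solve wants the n axis first, so move the last axis to the front,
    # solve, then move it back so the output shape matches the input shape.
    x = lu_solve(lu_piv, np.moveaxis(B, -1, 0))
    return np.moveaxis(x, 0, -1)

A = np.array([[1.01, 0.02, -0.01],
              [0.02, 1.04, -0.02],
              [-0.01, -0.02, 1.01]])
lu_piv = lu_factor(A)
B = np.random.rand(7, 4, 3)                              # (..., n) with n = 3
x = lu_solve_any(lu_piv, B)
print(np.allclose(np.einsum('ij,...j->...i', A, x), B))  # True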
Given the following ndarray t -
In [26]: t.shape
Out[26]: (3, 3, 2)
In [27]: t
Out[27]:
array([[[ 0,  1],
        [ 2,  3],
        [ 4,  5]],

       [[ 6,  7],
        [ 8,  9],
        [10, 11]],

       [[12, 13],
        [14, 15],
        [16, 17]]])
the piecewise linear interpolant through the points t[:, 0, 0] can be evaluated at [0., 0.66666667, 1.33333333, 2.] using numpy.interp as follows -
In [38]: x = np.linspace(0, t.shape[0]-1, 4)
In [39]: x
Out[39]: array([0.        , 0.66666667, 1.33333333, 2.        ])
In [30]: xp = np.arange(t.shape[0])
In [31]: xp
Out[31]: array([0, 1, 2])
In [32]: fp = t[:,0,0]
In [33]: fp
Out[33]: array([ 0, 6, 12])
In [40]: np.interp(x, xp, fp)
Out[40]: array([ 0.,  4.,  8., 12.])
How can all the interpolants be efficiently calculated and returned together for all values of fp -
array([[[ 0,  1],
        [ 2,  3],
        [ 4,  5]],

       [[ 4,  5],
        [ 6,  7],
        [ 8,  9]],

       [[ 8,  9],
        [10, 11],
        [12, 13]],

       [[12, 13],
        [14, 15],
        [16, 17]]])
As the interpolation is 1d with changing y values, it must be run for each 1d slice of t. It's probably faster to loop explicitly, but neater to loop using np.apply_along_axis:
import numpy as np

t = np.arange(18).reshape(3, 3, 2)
x = np.linspace(0, t.shape[0] - 1, 4)
xp = np.arange(t.shape[0])

def interfunc(arr):
    """ Function interpolates a 1d array. """
    return np.interp(x, xp, arr)

np.apply_along_axis(interfunc, 0, t)  # apply function along axis 0
""" Result
array([[[ 0., 1.],
[ 2., 3.],
[ 4., 5.]],
[[ 4., 5.],
[ 6., 7.],
[ 8., 9.]],
[[ 8., 9.],
[10., 11.],
[12., 13.]],
[[12., 13.],
[14., 15.],
[16., 17.]]]) """
With explicit loops:
result = np.zeros((4, 3, 2))
for c in range(t.shape[1]):
    for p in range(t.shape[2]):
        result[:, c, p] = np.interp(x, xp, t[:, c, p])
On my machine the second option runs in half the time.
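For reference, one way to reproduce that timing comparison is with timeit; the exact numbers will of course vary with the machine and the array sizes:
import timeit
import numpy as np

t = np.arange(18).reshape(3, 3, 2)
x = np.linspace(0, t.shape[0] - 1, 4)
xp = np.arange(t.shape[0])

def with_apply():
    return np.apply_along_axis(lambda arr: np.interp(x, xp, arr), 0, t)

def with_loops():
    result = np.zeros((4, 3, 2))
    for c in range(t.shape[1]):
        for p in range(t.shape[2]):
            result[:, c, p] = np.interp(x, xp, t[:, c, p])
    return result

# time both; on the machine above the loops came out about twice as fast
print(timeit.timeit(with_apply, number=10000))
print(timeit.timeit(with_loops, number=10000))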
Edit to use np.nditer
As the result and the parameter have different shapes, I seem to have to create two np.nditer objects, one for the parameter and one for the result. This is my first attempt to use nditer for anything, so it could be overcomplicated.
def test(t):
    ts = t.shape
    result = np.zeros((ts[0] + 1, ts[1], ts[2]))
    param = np.nditer([t], ['external_loop'], ['readonly'], order='F')
    with np.nditer([result], ['external_loop'], ['writeonly'], order='F') as res:
        for p, r in zip(param, res):
            r[:] = interfunc(p)
    return result
It's slightly slower than the explicit loops and less easy to follow than either of the other solutions.
As requested by @Tis Chris, here is a solution using np.nditer with the multi_index flag, but I prefer the explicit nested for loops method above because it is 10% faster.
In [29]: t = np.arange( 18 ).reshape(3,3,2)
In [30]: ax0old = np.arange(t.shape[0])
In [31]: ax0new = np.linspace(0, t.shape[0]-1, 4)
In [32]: tnew = np.zeros((len(ax0new), t.shape[1], t.shape[2]))
In [33]: it = np.nditer(t[0], flags=['multi_index'])
In [34]: for _ in it:
    ...:     tnew[:, it.multi_index[0], it.multi_index[1]] = np.interp(
    ...:         ax0new, ax0old, t[:, it.multi_index[0], it.multi_index[1]])
    ...:
In [35]: tnew
Out[35]:
array([[[ 0.,  1.],
        [ 2.,  3.],
        [ 4.,  5.]],

       [[ 4.,  5.],
        [ 6.,  7.],
        [ 8.,  9.]],

       [[ 8.,  9.],
        [10., 11.],
        [12., 13.]],

       [[12., 13.],
        [14., 15.],
        [16., 17.]]])
You could try scipy.interpolate.interp1d:
from scipy.interpolate import interp1d
import numpy as np
t = np.array([[[ 0,  1],
               [ 2,  3],
               [ 4,  5]],

              [[ 6,  7],
               [ 8,  9],
               [10, 11]],

              [[12, 13],
               [14, 15],
               [16, 17]]])
# for the first slice
f = interp1d(np.arange(t.shape[0]), t[..., 0], axis=0)
# returns a function which you call with values within range np.arange(t.shape[0])
# data used for interpolation
t[..., 0]
>>> array([[ 0,  2,  4],
           [ 6,  8, 10],
           [12, 14, 16]])
f(1)
>>> array([ 6.,  8., 10.])
f(1.5)
>>> array([ 9., 11., 13.])
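Since interp1d accepts an n-dimensional y array together with an axis argument, one call over the full array should also give the complete (4, 3, 2) result asked for above; a minimal sketch:
import numpy as np
from scipy.interpolate import interp1d

t = np.arange(18).reshape(3, 3, 2)
x = np.linspace(0, t.shape[0] - 1, 4)

f = interp1d(np.arange(t.shape[0]), t, axis=0)  # interpolate along axis 0
tnew = f(x)                                     # shape (4, 3, 2)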
This is a quick one. I am wondering if there is a better way to express the following lines (besides using a short loop):
energy = np.zeros((4, signal.shape[1]))
energy[0::4, 0:] = np.sum(signal[0::4, :], axis=0)
energy[1::4, 0:] = np.sum(signal[1::4, :], axis=0)
energy[2::4, 0:] = np.sum(signal[2::4, :], axis=0)
energy[3::4, 0:] = np.sum(signal[3::4, :], axis=0)
Reshape to split the first axis into two and then sum along the first of those two, like so -
energy = signal.reshape(-1,4,signal.shape[1]).sum(0)
Sample run -
In [327]: np.random.seed(0)
In [328]: signal = np.random.randint(0,9,(8,5))
In [329]: energy = np.zeros((4, signal.shape[1]))
...: energy[0::4, 0:] = np.sum(signal[0::4, :], axis=0)
...: energy[1::4, 0:] = np.sum(signal[1::4, :], axis=0)
...: energy[2::4, 0:] = np.sum(signal[2::4, :], axis=0)
...: energy[3::4, 0:] = np.sum(signal[3::4, :], axis=0)
In [330]: energy
Out[330]:
array([[ 13.,   4.,   6.,   3.,  10.],
       [  8.,   5.,   4.,   7.,  15.],
       [  7.,  11.,  11.,   4.,  13.],
       [  7.,   8.,   8.,   5.,  12.]])
In [331]: signal.reshape(-1,4,signal.shape[1]).sum(0)
Out[331]:
array([[13,  4,  6,  3, 10],
       [ 8,  5,  4,  7, 15],
       [ 7, 11, 11,  4, 13],
       [ 7,  8,  8,  5, 12]])
For arrays whose number of rows is not necessarily a multiple of 4, here's the generic version -
m = signal.shape[0]
n = m // 4
energy = signal[:n*4].reshape(n, 4, -1).sum(0)  # sum the complete groups of 4 rows
energy[:m % 4] += signal[n*4:]                  # fold in the leftover rows
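A quick sanity check of the generic version, using a row count that is not a multiple of 4 (the shapes here are arbitrary):
np.random.seed(0)
signal = np.random.randint(0, 9, (10, 5))   # 10 rows, not a multiple of 4

m = signal.shape[0]
n = m // 4
energy = signal[:n*4].reshape(n, 4, -1).sum(0)
energy[:m % 4] += signal[n*4:]

# reference: sum every 4th row starting at each offset
ref = np.array([signal[i::4].sum(axis=0) for i in range(4)])
print(np.array_equal(energy, ref))   # True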
I have two tensors A & B having shapes (1, 100) & (784, 100) respectively. I thought A would be broadcast along the rows to the same shape as B, but I got an error that "Dimensions must be equal". Can you please explain why?
Broadcasting of matrices with the same rank (i.e. 2) seems to work as it says on the tin:
import tensorflow as tf
tf.__version__
# 1.3.0
A = tf.constant([[1, 2], [3, 4], [5, 6]], dtype=tf.float32)
B = tf.constant([[1, -1]], dtype=tf.float32)
sess = tf.Session()
sess.run(A * B)
# array([[ 1., -2.],
#        [ 3., -4.],
#        [ 5., -6.]], dtype=float32)
sess.run(tf.multiply(A, B))
# array([[ 1., -2.],
#        [ 3., -4.],
#        [ 5., -6.]], dtype=float32)
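The shapes from the question broadcast the same way in elementwise ops, so the "Dimensions must be equal" error most likely comes from a non-broadcasting op such as tf.matmul; a minimal sketch with those shapes (the values are made up):
A = tf.ones([1, 100])
B = tf.ones([784, 100])
sess.run(A * B).shape
# (784, 100): elementwise multiply broadcasts A along the rows
# tf.matmul(A, B) would instead raise a "Dimensions must be equal" error,
# since the inner dimensions (100 and 784) do not match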
I am new to TensorFlow. I am trying to implement the global_context extraction in this paper https://arxiv.org/abs/1506.04579, which is actually average pooling over the whole feature map followed by duplicating the 1x1 feature map back to the original size.
Specifically, the expected operation is the following.
input: [N, 1, 1, C] tensor, where N is the batch size and C is the number of channels
output: [N, H, W, C] tensor, where H and W are the height and width of the original feature map, and all H * W values of the output are the same as the 1x1 input.
For example,
1 -> [[1, 1, 1],
      [1, 1, 1],
      [1, 1, 1]]
I have no idea how to do this using TensorFlow. tf.image.resize_images requires 3 channels, and tf.pad cannot pad with a constant value other than zero.
tf.tile may help you
x = tf.constant([[1, 2, 3]]) # shape (1, 3)
y = tf.tile(x, [3, 1]) # shape (3, 3)
y_ = tf.tile(x, [3, 2]) # shape (3, 6)
with tf.Session() as sess:
    a, b, c = sess.run([x, y, y_])
>>> a
array([[1, 2, 3]], dtype=int32)
>>> b
array([[1, 2, 3],
       [1, 2, 3],
       [1, 2, 3]], dtype=int32)
>>> c
array([[1, 2, 3, 1, 2, 3],
       [1, 2, 3, 1, 2, 3],
       [1, 2, 3, 1, 2, 3]], dtype=int32)
tf.tile(input, multiples, name=None)
multiples specifies how many times you want to repeat along each axis:
in y, axis 0 is repeated 3 times;
in y_, axis 0 is repeated 3 times and axis 1 is repeated 2 times.
You may need to use tf.expand_dims first, for instance:
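For example, if your pooled tensor had shape [N, C], a minimal sketch of bringing it to rank 4 before tiling (the channel count here is made up):
pooled = tf.placeholder(tf.float32, shape=[None, 64])    # [N, C]
pooled4d = tf.expand_dims(tf.expand_dims(pooled, 1), 1)  # [N, 1, 1, C]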
Yes, it accepts dynamic shapes:
import numpy as np
x = tf.placeholder(dtype=tf.float32, shape=[None, 4])
x_shape = tf.shape(x)
y = tf.tile(x, [3 * x_shape[0], 1])
with tf.Session() as sess:
    x_ = np.array([[1, 2, 3, 4]])
    a = sess.run(y, feed_dict={x: x_})
>>> a
array([[ 1.,  2.,  3.,  4.],
       [ 1.,  2.,  3.,  4.],
       [ 1.,  2.,  3.,  4.]], dtype=float32)
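Putting this together for the [N, 1, 1, C] -> [N, H, W, C] case from the question, a minimal sketch (the sizes N, H, W, C are made up for illustration):
import numpy as np
import tensorflow as tf

N, H, W, C = 2, 3, 3, 4
feat = tf.placeholder(tf.float32, shape=[N, H, W, C])

# global average pooling over the feature map -> [N, 1, 1, C]
pooled = tf.reduce_mean(feat, axis=[1, 2], keep_dims=True)

# duplicate the 1x1 map back to the original spatial size -> [N, H, W, C]
context = tf.tile(pooled, [1, H, W, 1])

with tf.Session() as sess:
    out = sess.run(context, feed_dict={feat: np.random.rand(N, H, W, C)})
print(out.shape)   # (2, 3, 3, 4)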
I came across numpy and am trying to understand the proper syntax for building multidimensional arrays. For instance:
numpy.asarray([[1.,2], [3,4], [5, 6]])
prints:
[[ 1.  2.]
 [ 3.  4.]
 [ 5.  6.]]
while:
numpy.asarray([[1 ,2], [3, 4], [5, 6]])
prints:
[[1 2]
 [3 4]
 [5 6]]
That . is an odd syntax element. What is it doing exactly?
np.array deduces the array shape from the nesting of the [], and dtype from the nature of the elements. If at least one element is a Python float, the whole array is float:
In [178]: x=np.array([1, 2, 3.0]) # 1d float
In [179]: x.shape
Out[179]: (3,)
In [180]: x.dtype
Out[180]: dtype('float64')
If all elements are integers, the array is also int:
In [182]: x=np.array([[1, 2],[3, 4]]) # 2d int
In [183]: x.shape
Out[183]: (2, 2)
In [184]: x.dtype
Out[184]: dtype('int32')
You can also set the dtype explicitly, e.g.
In [185]: x=np.array([[1, 2],[3, 4]], dtype=np.float32)
In [186]: x
Out[186]:
array([[ 1.,  2.],
       [ 3.,  4.]], dtype=float32)
In [187]: x.dtype
Out[187]: dtype('float32')
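To tie this back to the question: the . is just Python's float-literal syntax (1. is the same as 1.0), so a single 1. anywhere in the nested list is enough to make the whole array float (the int result below is int32 on this platform, as above):
In [188]: type(1.)
Out[188]: float
In [189]: np.asarray([[1., 2], [3, 4], [5, 6]]).dtype
Out[189]: dtype('float64')
In [190]: np.asarray([[1, 2], [3, 4], [5, 6]]).dtype
Out[190]: dtype('int32')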