How to conveniently use operations on numpy fortran contiguous arrays? - numpy

Some numpy functions like np.matmul(a, b) have convenient behavior for stacks of matrices.
The manual states:
If either argument is N-D, N > 2, it is treated as a stack of matrices residing in the last two indexes and broadcast accordingly.
Thus, for a.shape = (10, 2, 4) and b.shape = (10, 4, 2) the statement a @ b is meaningful and will have shape (10, 2, 2).
However, I'm coming from the linear algebra world, where I'm used to a Fortran contiguous array layout.
The same a represented as a Fortran contiguous array would have shape (4, 2, 10) and similarly b.shape = (2, 4, 10).
To do a @ b as before I would have to invoke
(a.T @ b.T).T.
Even worse, assume you naively created the same Fortran-contiguous array a with the behavior of matmul in mind, such that it has shape (10, 4, 2).
Then a.strides = (8, 80, 320) with the smallest stride in the 'stack' index, which actually should have highest stride.
Is this really the way to go or am I missing something?

While numpy can handle all sorts of layouts, many details are designed with the "C" layout in mind. Good examples are how nested lists translate into arrays, and the way numpy operations batch excess dimensions as in the matmul case.
It is correct that, as a rule of thumb, results in numpy do not depend on array layout (Fortran, C, non-contiguous); speed, however, certainly does, and heavily so:
import numpy as np
from timeit import timeit
rng = np.random.default_rng()
a = rng.random((100, 111, 200))
b = rng.random((111, 77, 200))
af = np.array(a, order="F")
bf = np.array(b, order="F")
np.allclose((b.T @ a.T).T, (bf.T @ af.T).T)
# True
timeit(lambda: (b.T @ a.T).T, number=10)
# 5.972857117187232
timeit(lambda: (bf.T @ af.T).T, number=10)
# 0.1994628761895001
In fact, sometimes it is totally worth it to non-lazily transpose, i.e. copy your data into the best layout:
timeit(lambda: (np.array(b.T, order="C") @ np.array(a.T, order="C")).T, number=10)
# 0.3931349152699113
My advice: if you want speed and convenience, it is probably best to go with the "C" layout; it doesn't take all that long to get used to, and it saves you a lot of potential headaches.

numpy's matrix multiplication works regardless of the internal layout of the array. For example, here are two C-ordered arrays:
>>> import numpy as np
>>> a = np.random.rand(10, 2, 4)
>>> b = np.random.rand(10, 4, 2)
>>> print('a', a.shape, a.strides)
>>> print('b', b.shape, b.strides)
a (10, 2, 4) (64, 32, 8)
b (10, 4, 2) (64, 16, 8)
Here are the equivalent arrays in Fortran order:
>>> af = np.asfortranarray(a)
>>> bf = np.asfortranarray(b)
>>> print('af', af.shape, af.strides)
>>> print('bf', bf.shape, bf.strides)
af (10, 2, 4) (8, 80, 160)
bf (10, 4, 2) (8, 80, 320)
Numpy treats equivalent arrays as equivalent, regardless of their internal layout:
>>> np.allclose(a, af) and np.allclose(b, bf)
True
The results of a matrix multiplication do not depend on the internal layout:
>>> np.allclose(a @ b, af @ bf)
True
and you can even mix layouts if you wish:
>>> np.allclose(a @ bf, af @ b)
True
In short, the most convenient way to use Fortran-ordered arrays in numpy is to not worry about internal array layout: the shape is all that matters.
If your array shapes differ from what is expected by the numpy matmul API, your best bet is to reshape the arrays, for example using a.transpose(2, 0, 1) @ b.transpose(2, 0, 1) or similar, depending on what is appropriate for your use-case. But don't worry: for C- or Fortran-contiguous arrays, this operation only adjusts the metadata around the array view; it does not cause the underlying data buffer to be copied or re-ordered.
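For instance, a quick check (reusing the a from above) shows that such a transpose is just a view: the shape and strides change, but the data buffer is shared.
>>> at = a.transpose(2, 0, 1)
>>> at.shape, at.strides
((4, 10, 2), (8, 64, 32))
>>> np.shares_memory(a, at)
True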

Related

TF broadcast along first axis

Say I have 2 tensors, one with shape (10, 1) and another one with shape (10, 11, 1)... what I want is to multiply them, broadcasting along the first axis rather than the last one as broadcasting usually does:
tf.zeros([10,1]) * tf.ones([10,12,1])
However, this is not working... is there a way to do it without transposing using perm?
You cannot change the broadcasting rules, but you can prevent broadcasting by doing it yourself. Broadcasting takes effect here because the ranks are different.
So instead of permuting the axes, you can also repeat along a new axis:
import tensorflow as tf
import einops as ops
a = tf.zeros([10, 1])
b = tf.ones([10, 12, 1])
c = ops.repeat(a, 'x z -> x y z', y=b.shape[1]) * b
c.shape
# TensorShape([10, 12, 1])
For the above example, you need to do tf.zeros([10,1])[...,None] * tf.ones([10,12,1]) to satisfy broadcasting rules: https://numpy.org/doc/stable/user/basics.broadcasting.html#general-broadcasting-rules
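As a minimal sketch of that (shapes taken from the code above), the added trailing axis lines the tensors up so the usual trailing-dimension matching applies:
import tensorflow as tf
a = tf.zeros([10, 1])
b = tf.ones([10, 12, 1])
# a[..., None] has shape (10, 1, 1), which broadcasts cleanly against (10, 12, 1)
c = a[..., None] * b
print(c.shape)  # (10, 12, 1)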
If you want to do this for arbitrary shapes, you can do the multiplication with the transposed tensors, so that the last dimensions of both operands match (obeying the broadcasting rules), and then transpose again to get back to the required output:
tf.transpose(a*tf.transpose(b))
Example,
a = tf.ones([10,])
b = tf.ones([10,11,12,13,1])
tf.transpose(b)
#[1, 13, 12, 11, 10]
(a*tf.transpose(b))
#[1, 13, 12, 11, 10]
tf.transpose(a*tf.transpose(b)) #Note a is [10,] not [10,1], otherwise you need to add transpose to a as well.
#[10, 11, 12, 13, 1]
Another approach is to expand the axes:
a = tf.ones([10])[(...,) + (tf.rank(b)-1) * (tf.newaxis,)]
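# with b as above (rank 5), this appends 4 trailing axes, so a has shape (10, 1, 1, 1, 1)
# and broadcasts against b along the first dimension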

Numpy Random Choice with Non-regular Array Size

I'm making an array of sums of random choices from a negative binomial distribution (nbd), with each sum being of non-regular length. Right now I implement it as follows:
import numpy as np
from numpy.random import default_rng
rng = default_rng()
nbd = rng.negative_binomial(1, 0.5, int(1e6))
gmc = [12, 35, 4, 67, 2]
n_pp = np.empty(len(gmc))
for i in range(len(gmc)):
    n_pp[i] = np.sum(rng.choice(nbd, gmc[i]))
This works, but when I perform it over my actual data it's very slow (gmc is of dimension 1e6), and I would like to vary this for multiple values of n and p in the nbd (in this example they're set to 1 and 0.5, respectively).
I'd like to work out a pythonic way to do this which eliminates the loop, but I'm not sure it's possible. I want to keep default_rng for the better random generation than the older way of doing it (np.random.choice), if possible.
The distribution of the sum of m samples from the negative binomial distribution with parameters (n, p) is the negative binomial distribution with parameters (m*n, p). So instead of summing random selections from a large, precomputed sample of negative_binomial(1, 0.5), you can generate your result directly with negative_binomial(gmc, 0.5):
In [68]: gmc = [12, 35, 4, 67, 2]
In [69]: npp = rng.negative_binomial(gmc, 0.5)
In [70]: npp
Out[70]: array([ 9, 34, 1, 72, 7])
(The negative_binomial method will broadcast its inputs, so we can pass gmc as an argument to generate all the samples with one call.)
More generally, if you want to vary the n that is used to generate nbd, you would multiply that n by the corresponding element in gmc and pass the product to rng.negative_binomial.
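A minimal sketch of that generalization (n and p below are placeholder parameters, not values from the question):
import numpy as np
from numpy.random import default_rng

rng = default_rng()
gmc = np.array([12, 35, 4, 67, 2])
n, p = 3, 0.4  # placeholder parameters; use your own
# summing gmc[i] draws from NB(n, p) is equivalent to one draw from NB(gmc[i] * n, p)
n_pp = rng.negative_binomial(gmc * n, p)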

Two 4D mat mul numpy and should expect output 5D

I want to apply attention weights (5 labels) to my convolution with 3 filters. Could anyone help me with how to apply matmul? A tensorflow version would be appreciated as well.
import numpy as np
conv = np.random.randint(10,size=[1,3,2,2], dtype=int) # [batches,filter,row,col]
attention = np.random.randint(5,size=[1,5,2,1], dtype=int) # [batches,label,row,col]
np.matmul(conv,attention).shape # expected output size [1,3,5,2,1] [batches,filter,label,row,col]
ValueError: operands could not be broadcast together with remapped shapes [original->remapped]: (1,3,2,2)->(1,3,2,newaxis,2) (1,5,2,1)->(1,5,newaxis,1,2)
According to the docs of matmul:
If either argument is N-D, N > 2, it is treated as a stack of matrices residing in the last two indexes and broadcast accordingly.
and
Stacks of matrices are broadcast together as if the matrices were elements.
This means that in your case, all but the last two dimensions need to match up. If you want the output shape to be 1, 3, 5, 2, 1, you will need to explicitly insert an empty axis into each array. You can do that at creation time:
import numpy as np
conv = np.random.randint(10, size=[1, 3, 1, 2, 2], dtype=int)
attention = np.random.randint(5, size=[1, 1, 5, 2, 1], dtype=int)
np.matmul(conv, attention).shape
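# (1, 3, 5, 2, 1)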
Alternatively, you can leave the arrays as they are and multiply views that have the axes inserted in the right places:
np.matmul(conv[:, :, np.newaxis, ...], attention[:, np.newaxis, ...]).shape
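# likewise (1, 3, 5, 2, 1)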

TensorFlow: implicit broadcasting in element-wise addition/multiplication

How does the implicit broadcasting in tensorflow using + and * work?
If i Have two tensors, such that
a.get_shape() = [64, 10, 1, 100]
b.get_shape() = [64, 100]
(a+b).get_shape = [64, 10, 64, 100]
(a*b).get_shape = [64, 10, 64, 100]
How does that become [64, 10, 64, 100]??
According to the documentation, operations like add are broadcasting operation.
Quoting the glossary:
Broadcasting operation
An operation that uses numpy-style broadcasting to make the shapes of its tensor arguments compatible.
The numpy-style broadcasting is well documented in the documentation:
In brief:
[...] the smaller array is “broadcast” across the larger array so that they have compatible shapes.
Broadcasting provides a means of vectorizing array operations so that looping occurs in C instead of Python.
I think that the broadcasting isn't doing what you intended. It's actually broadcasting in both directions. Let me show you what I mean by modifying your example:
a = tf.ones([64, 10, 1, 100])
b = tf.ones([128, 100])
print((a+b).shape) # prints "(64, 10, 128, 100)"
From this we see that it broadcasts by matching the last dimensions first. It's implicitly tiling a across its third dimension to match the size of b's first dimension, then implicitly adding singletons and tiling b across a's first two dimensions.
What I think you expected to do was to implicitly tile b across a's second dimension. To do that, you need b to be a different shape:
a = tf.ones([64, 10, 1, 100])
b = tf.ones([64, 1, 1, 100])
print((a+b).shape) # prints "(64, 10, 1, 100)"
You can use tf.expand_dims() twice on your b to add the two singleton dimensions to match this shape.
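For example, a small sketch with the shapes from the question:
import tensorflow as tf
a = tf.ones([64, 10, 1, 100])
b = tf.ones([64, 100])
b_expanded = tf.expand_dims(tf.expand_dims(b, axis=1), axis=1)  # shape (64, 1, 1, 100)
print((a + b_expanded).shape)  # prints "(64, 10, 1, 100)"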
numpy-style broadcasting is well documented, but to give a short explanation: the two tensors' shapes are compared starting from the last dimension and working backward; any dimension that is missing or has size 1 in either tensor is replicated to match the other.
For example, with
a.get_shape() = [64, 10, 1, 100]
b.get_shape() = [64, 100]
(a*b).get_shape = [64, 10, 64, 100]
a and b have the same last dimension (100); the next-to-last dimension of a, which is 1, is replicated to match b's first dimension (64); and b lacks the first two dimensions of a, so they are created.
Note that any dimension to be matched this way must be 1 or absent, because the whole of the lower-level dimensions is replicated.
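A small numpy sketch of the same rule, using the shapes from the question (numpy and TensorFlow follow the same broadcasting semantics here):
import numpy as np
a = np.ones((64, 10, 1, 100))
b = np.ones((64, 100))
# aligned from the right:  (64, 10,  1, 100)
#                          (         64, 100)
# b is padded to (1, 1, 64, 100) and the size-1 axes are replicated
print((a * b).shape)  # (64, 10, 64, 100)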

Numpy rebinning a 2D array

I am looking for a fast formulation to do a numerical binning of a 2D numpy array. By binning I mean calculating submatrix averages or cumulative values. For example, x = numpy.arange(16).reshape(4, 4) would be split into 4 submatrices of 2x2 each and give numpy.array([[2.5, 4.5], [10.5, 12.5]]), where 2.5 = numpy.average([0, 1, 4, 5]), etc.
How do I perform such an operation in an efficient way... I don't really have any idea how to do this...
Many thanks...
You can use a higher dimensional view of your array and take the average along the extra dimensions:
In [12]: a = np.arange(36).reshape(6, 6)
In [13]: a
Out[13]:
array([[ 0,  1,  2,  3,  4,  5],
       [ 6,  7,  8,  9, 10, 11],
       [12, 13, 14, 15, 16, 17],
       [18, 19, 20, 21, 22, 23],
       [24, 25, 26, 27, 28, 29],
       [30, 31, 32, 33, 34, 35]])
In [14]: a_view = a.reshape(3, 2, 3, 2)
In [15]: a_view.mean(axis=3).mean(axis=1)
Out[15]:
array([[ 3.5,  5.5,  7.5],
       [15.5, 17.5, 19.5],
       [27.5, 29.5, 31.5]])
In general, if you want bins of shape (a, b) for an array of (rows, cols), your reshaping of it should be .reshape(rows // a, a, cols // b, b). Note also that the order of the .mean calls matters: a_view.mean(axis=1).mean(axis=3) will raise an error, because a_view.mean(axis=1) only has three dimensions; a_view.mean(axis=1).mean(axis=2) works fine, but it makes it harder to understand what is going on.
As is, the above code only works if you can fit an integer number of bins inside your array, i.e. if a divides rows and b divides cols. There are ways to deal with other cases, but you will have to define the behavior you want then.
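As a sketch of that general rule, one could wrap it in a small helper (bin_average is a made-up name here), assuming a divides rows and b divides cols:
import numpy as np

def bin_average(x, a, b):
    # average non-overlapping (a, b) blocks of a 2D array;
    # assumes a divides x.shape[0] and b divides x.shape[1]
    rows, cols = x.shape
    return x.reshape(rows // a, a, cols // b, b).mean(axis=3).mean(axis=1)

x = np.arange(16).reshape(4, 4)
print(bin_average(x, 2, 2))
# [[ 2.5  4.5]
#  [10.5 12.5]]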
See the SciPy Cookbook on rebinning, which provides this snippet (shown here in Python 3 form):
import numpy as np

def rebin(a, *args):
    '''rebin ndarray data into a smaller ndarray of the same rank whose dimensions
    are factors of the original dimensions. e.g. an array with 6 columns and 4 rows
    can be reduced to have 6, 3, 2 or 1 columns and 4, 2 or 1 rows.
    example usages:
    >>> a = rand(6, 4); b = rebin(a, 3, 2)
    >>> a = rand(6); b = rebin(a, 2)
    '''
    shape = a.shape
    lenShape = len(shape)
    factor = np.asarray(shape) // np.asarray(args)  # bin size along each axis
    # builds an expression like "a.reshape(args[0],factor[0],args[1],factor[1],).sum(1).sum(2)/factor[0]/factor[1]"
    evList = ['a.reshape('] + \
             ['args[%d],factor[%d],' % (i, i) for i in range(lenShape)] + \
             [')'] + \
             ['.sum(%d)' % (i + 1) for i in range(lenShape)] + \
             ['/factor[%d]' % i for i in range(lenShape)]
    print(''.join(evList))
    return eval(''.join(evList))
I assume that you only want to know how to generally build a function that performs well and does something with arrays, just like numpy.reshape in your example. So if performance really matters and you're already using numpy, you can write your own C code for that, like numpy does. For example, the implementation of arange is completely in C. Almost everything with numpy which matters in terms of performance is implemented in C.
However, before doing so you should try to implement the code in python and see if the performance is good enough. Try to make the python code as efficient as possible. If it still doesn't suit your performance needs, go the C way.
You may read about that in the docs.