Why is broadcasting done by aligning axes backwards - numpy

Numpy's broadcasting rules have bitten me once again and I'm starting to feel there may be a way of thinking about this
topic that I'm missing.
I'm often in situations like the following: the first axis of my arrays is reserved for something fixed, like the number of samples. The second axis could represent different independent variables of each sample for some arrays, or it might not exist at all when it feels natural for there to be only one quantity attached to each sample. For example, if the array is called price, I'd probably use only one axis, representing the price of each sample. On the other hand, a second axis is sometimes much more natural. For example, I could use a neural network to compute a quantity for each sample, and since neural networks can in general compute arbitrary multi-valued functions, the library I use would in general return a 2d array, making the second axis a singleton when I use it to compute a single dependent variable. I've found that this approach of using 2d arrays is also more amenable to future extensions of my code.
Long story short, I need to decide in various places of my codebase whether to store an array as (1000,) or (1000,1), and changes in requirements occasionally make it necessary to switch from one format to the other.
Usually, these arrays live alongside arrays with up to 4 axes, which further increases the pressure to sometimes introduce a singleton second axis, and then have the third axis carry a consistent semantic meaning for all arrays that use it.
The problem occurs when I add a (1000,) array and a (1000,1) array, expecting to get (1000,1), but get (1000,1000) because of implicit broadcasting.
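A minimal reproduction, with small shapes standing in for the 1000s:

import numpy as np

a = np.zeros(4)        # shape (4,)   e.g. one price per sample
b = np.zeros((4, 1))   # shape (4, 1) e.g. one network output per sample

# (4,) is right-aligned to (1, 4), which then broadcasts against (4, 1):
print((a + b).shape)   # (4, 4), not the (4, 1) I expected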
I feel like this prevents giving semantic meaning to axes. Of course I could always use at least two axes, but that raises the question of where to stop: to be fail-safe, continuing this logic, I'd have to use arrays of at least 6 axes to represent everything.
I'm aware this is maybe not the most technically well-defined question, but does anyone have a modus operandi that helps them avoid these kinds of bugs?
Does anyone know the numpy developers' motivation for aligning axes in reverse order for broadcasting? Was computational efficiency or some other technical reason behind this, or a model of thinking that I don't understand?

In MATLAB, broadcasting (a Johnny-come-lately to this game) expands trailing dimensions. But there the trailing dimensions are the outermost ones, that is, order='F'. And since everything starts as 2d, this expansion only occurs when one array is 3d (or larger).
https://blogs.mathworks.com/loren/2016/10/24/matlab-arithmetic-expands-in-r2016b/
explains this and gives a bit of history. My own history with the language is old enough that the ma_expanded = ma(ones(3,1),:) style of expansion is familiar. Octave added broadcasting before MATLAB did.
To avoid ambiguity, broadcasting expansion can only occur in one direction. Expanding in the direction of the outermost dimension seems logical.
Compare (3,) expanded to (1,3) versus (3,1) - viewed as nested lists:
In [198]: np.array([1,2,3])
Out[198]: array([1, 2, 3])
In [199]: np.array([[1,2,3]])
Out[199]: array([[1, 2, 3]])
In [200]: (np.array([[1,2,3]]).T).tolist()
Out[200]: [[1], [2], [3]]
I don't know if there are significant implementation advantages. With the striding mechanism, adding a new dimension anywhere is easy: just change the shape and strides, adding a 0 stride for the dimension that needs to be 'replicated'.
In [203]: np.broadcast_arrays(np.array([1,2,3]), np.array([[1],[2],[3]]), np.ones((3,3)))
Out[203]: 
[array([[1, 2, 3],
        [1, 2, 3],
        [1, 2, 3]]),
 array([[1, 1, 1],
        [2, 2, 2],
        [3, 3, 3]]),
 array([[1., 1., 1.],
        [1., 1., 1.],
        [1., 1., 1.]])]
In [204]: [x.strides for x in _]
Out[204]: [(0, 8), (8, 0), (24, 8)]
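For what it's worth, the same zero-stride trick can be written by hand with np.lib.stride_tricks.as_strided (shown only to illustrate the mechanism; as_strided skips the safety checks that broadcast_to performs, so prefer broadcast_to in real code):

import numpy as np
from numpy.lib.stride_tricks import as_strided

a = np.array([1, 2, 3])
# A new leading axis of length 4 with stride 0: every "row" re-reads
# the same three elements instead of copying them.
replicated = as_strided(a, shape=(4, 3), strides=(0, a.strides[0]))
# replicated is 4 identical rows [1, 2, 3] backed by the original 3 elements.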

Related

Numpy bivariate normal distribution with correlation = 1

Consider X and Y to be marginally standard normal with correlation 1.0.
When the correlation is 1.0, the bivariate normal distribution is undefined (it's technically the y = x line), but numpy still prints out values. Why does it do this?
Oh, but the distribution is defined! It just doesn't have a well-defined density function (at least, not with respect to the Lebesgue measure on the 2D space). (See Mathematics Stack Exchange's discussion on broader classes of such distributions.) So numpy is doing nothing wrong.
What you're describing is the degenerate case of the bivariate (or more generally, multivariate) normal distribution. This occurs when the covariance matrix is not positive definite. However, the distribution is defined for any positive semi-definite covariance matrix.
As an example, the matrix [[1, 1], [1, 1]] is not positive definite, but it is positive semidefinite.
The distribution still has a host of other properties that distributions should have: a support (the line y = x, as you note, which is μ + span(Σ)), moments, and more.
import numpy as np
np.random.multivariate_normal(mean=[0, 0], cov=[[1, 1], [1, 1]])
# array([0.61156886, 0.61156887])
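To see the degeneracy concretely, one can sample many points and check that they all land on the line y = x (a quick sketch; the seed and sample size are arbitrary):

import numpy as np

rng = np.random.default_rng(0)
samples = rng.multivariate_normal(mean=[0, 0], cov=[[1, 1], [1, 1]], size=10000)
print(np.allclose(samples[:, 0], samples[:, 1]))  # True: every draw has x == y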
In summary, numpy's behavior isn't broken. It's well-behaved by returning samples from a properly specified distribution.

Higher (4+) dimension/axis in numpy...are they ever actually used in computation?

What I mean by the title is that sometimes I come across code that requires numpy operations (for example sum or average) along a specified axis. For example:
np.sum([[0, 1], [0, 5]], axis=1)
I can grasp this concept, but do we ever actually do these operations along higher dimensions as well? Or is that not a thing? And if so, how do you gain intuition for high-dimensional datasets, and how do you make sure you are working along the right dimension/axis?
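As one illustration (my own example, not from the question): a batch of color images is naturally a 4d array, and reducing along specific axes is routine there.

import numpy as np

# A hypothetical batch of 32 RGB images, 64x64 pixels each:
# axes are (sample, row, column, channel).
images = np.random.rand(32, 64, 64, 3)

per_image_brightness = images.mean(axis=(1, 2, 3))  # shape (32,)
per_channel_mean = images.mean(axis=(0, 1, 2))      # shape (3,)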

How to change each element in an array to the mean of the array using NumPy?

I am new to Python. In one of my assignment questions, part of the question requires us to compute the mean of each sub-matrix and replace each of its elements with that mean, using operations available in NumPy.
An example of the matrix could be
M = [[[1,2,3],[2,3,4]],[[3,4,5],[4,5,6]]]
Through some operations, it is expected to get a matrix like the following:
M = [[[2,2,2],[3,3,3]],[[4,4,4],[5,5,5]]]
I have looked at some numpy documentation and still haven't figured it out; I'd really appreciate it if someone could help.
You have a few different options here. All of them follow the same general idea. You have an MxNxL array and you want to apply a reduction operation along the last axis that will leave you with an MxN result by default. However, you want to broadcast that result across the same MxNxL shape you started with.
Most numpy reduction operations have a parameter that keeps the reduced dimension present in the output array, which lets you easily broadcast the result back to the original shape. The parameter is called keepdims; you can read more in the documentation for numpy.mean.
Here are a few approaches that all take advantage of this.
Setup
import numpy as np

M = np.array([[[1, 2, 3], [2, 3, 4]],
              [[3, 4, 5], [4, 5, 6]]])

avg = M.mean(-1, keepdims=True)
# array([[[2.],
#         [3.]],
#
#        [[4.],
#         [5.]]])
Option 1
Assign to a view of the array. Note that if M has an integer dtype, the float averages will be coerced to int, so cast your array to float first if you want to keep the precision.
M[:] = avg
Option 2
An efficient read-only view, using np.broadcast_to:
np.broadcast_to(avg, M.shape)
Option 3
Broadcasted multiplication, more for demonstration than anything:
avg * np.ones(M.shape)
All will produce (the same, except possibly for the dtype):
array([[[2., 2., 2.],
        [3., 3., 3.]],

       [[4., 4., 4.],
        [5., 5., 5.]]])
In one line of code:
M.mean(-1, keepdims=True) * np.ones(M.shape)

Why is the second dimension of a Numpy array empty?

Why is the output of
array = np.arange(3)
array.shape
equal to
(3,)
and not
(1,3)?
What does the missing dimension mean or equal?
In case there's confusion: (3,) doesn't mean there's a missing dimension. The comma is part of the standard Python notation for a single-element tuple. The shapes (1,3), (3,), and (3,1) are distinct.
While they can contain the same 3 elements, their use in calculations (broadcasting) is different, their print format is different, and their list equivalents are different:
In [21]: np.array([1,2,3])
Out[21]: array([1, 2, 3])
In [22]: np.array([1,2,3]).tolist()
Out[22]: [1, 2, 3]
In [23]: np.array([1,2,3]).reshape(1,3).tolist()
Out[23]: [[1, 2, 3]]
In [24]: np.array([1,2,3]).reshape(3,1).tolist()
Out[24]: [[1], [2], [3]]
And we don't have to stop at adding just one singleton dimension:
In [25]: np.array([1,2,3]).reshape(1,3,1).tolist()
Out[25]: [[[1], [2], [3]]]
In [26]: np.array([1,2,3]).reshape(1,3,1,1).tolist()
Out[26]: [[[[1]], [[2]], [[3]]]]
In numpy an array can have 0, 1, 2 or more dimensions. 1 dimension is just as logical as 2.
In MATLAB a matrix always has 2 dimensions (or more), but it doesn't have to be that way. Strictly speaking, MATLAB doesn't even have scalars. An array with shape (3,) is missing a dimension only if MATLAB is taken as the standard.
numpy is built on Python, which has scalars and lists (which can nest). How many dimensions does a Python list have?
If you want to get into history, MATLAB was developed as a front end to a set of Fortran linear algebra routines. Given the problems those routines solved the concept of matrix with 2 dimensions, and row vs column vectors made sense. It wasn't until version 3.something that MATLAB was generalized to allow more than 2 dimensions (in the late 1990s).
numpy is based on several earlier attempts to provide arrays in Python (e.g. Numeric). Those developers took a more general approach to arrays, one where 2d was an artificial constraint. That has precedent in computer languages and mathematics (and physics). APL was developed in the 1960s, first as a mathematical notation, and then as a computer language. Like numpy, its arrays can be 0d or higher. (Since I used APL before I used MATLAB, the numpy approach feels quite natural.)
In APL there aren't separate lists or tuples. So the shape of an array, rho A, is itself an array, and rho rho A is the number of dimensions of A, also called the rank.
http://docs.dyalog.com/14.0/Dyalog%20APL%20Idioms.pdf
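A rough numpy analogue of those APL idioms (my own translation, not from the APL document):

import numpy as np

A = np.ones((2, 3, 4))
shape = np.array(A.shape)   # analogue of rho A: array([2, 3, 4])
rank = shape.shape[0]       # analogue of rho rho A: 3, the rank (same as A.ndim)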

numpy: Broadcasting a vector horizontally

I have a 1-D numpy array v. I'd like to copy it to make a matrix with each row being a copy of v. That's easy: np.broadcast_to(v, desired_shape).
However, if I'd like to treat v as a column vector, and copy it to make a matrix with each column being a copy of v, I can't find a simple way to do it. Through trial and error, I'm able to do this:
np.broadcast_to(v.reshape(v.shape[0], 1), desired_shape)
While that works, I can't claim to understand it (even though I wrote it!).
Part of the problem is that numpy doesn't seem to have a concept of a column vector (hence the reshape hack instead of what in math would just be .T).
But, a deeper part of the problem seems to be that broadcasting only works vertically, not horizontally. Or perhaps a more correct way to say it would be: broadcasting only works on the higher dimensions, not the lower dimensions. I'm not even sure if that's correct.
In short, while I understand the concept of broadcasting in general, when I try to use it for particular applications, like copying the col vector to make a matrix, I get lost.
Can you help me understand or improve the readability of this code?
https://en.wikipedia.org/wiki/Transpose - this article on Transpose talks only of transposing a matrix.
https://en.wikipedia.org/wiki/Row_and_column_vectors -
a column vector or column matrix is an m × 1 matrix
a row vector or row matrix is a 1 × m matrix
You can easily create row or column vectors (matrices):
In [464]: np.array([[1],[2],[3]])   # column vector
Out[464]: 
array([[1],
       [2],
       [3]])
In [465]: _.shape
Out[465]: (3, 1)
In [466]: np.array([[1,2,3]])   # row vector
Out[466]: array([[1, 2, 3]])
In [467]: _.shape
Out[467]: (1, 3)
But in numpy the basic structure is an array, not a vector or matrix.
[Array in Computer Science] - Generally, a collection of data items that can be selected by indices computed at run-time
A numpy array can have 0 or more dimensions. In contrast, a MATLAB matrix has 2 or more dimensions. Originally a 2d matrix was all that MATLAB had.
To talk meaningfully about a transpose you have to have at least 2 dimensions. One of them may have size one, and map onto a 1d vector, but it is still a matrix, a 2d object.
So adding a dimension to a 1d array, whether done with reshape or [:,None], is NOT a hack. It is a perfectly valid and normal numpy operation.
The basic broadcasting rules are:
a dimension of size 1 can be changed to match the corresponding dimension of the other array
a dimension of size 1 can be added automatically on the left (front) to match the number of dimensions.
In this example, both steps apply: (5,) => (1,5) => (3,5)
In [458]: np.broadcast_to(np.arange(5), (3,5))
Out[458]: 
array([[0, 1, 2, 3, 4],
       [0, 1, 2, 3, 4],
       [0, 1, 2, 3, 4]])
In this one, we have to explicitly add the size-one dimension on the right (at the end):
In [459]: np.broadcast_to(np.arange(5)[:,None], (5,3))
Out[459]: 
array([[0, 0, 0],
       [1, 1, 1],
       [2, 2, 2],
       [3, 3, 3],
       [4, 4, 4]])
np.broadcast_arrays(np.arange(5)[:,None], np.arange(3)) produces two (5,3) arrays.
np.broadcast_arrays(np.arange(5), np.arange(3)[:,None]) makes (3,5).
np.broadcast_arrays(np.arange(5), np.arange(3)) produces an error because it has no way of determining whether you want (5,3) or (3,5) or something else.
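A quick check of that last claim:

import numpy as np

try:
    np.broadcast_arrays(np.arange(5), np.arange(3))
except ValueError as err:
    # Aligned from the right, 5 and 3 don't match and neither is 1,
    # so numpy raises rather than guess a result shape.
    print(err)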
Broadcasting always adds new dimensions to the left because it'd be ambiguous and bug-prone to try to guess when you want new dimensions on the right. You can make a function to broadcast to the right by reversing the axes, broadcasting, and reversing back:
def broadcast_rightward(arr, shape):
    # Reverse the axes, broadcast on the left as usual, then reverse back,
    # so the new dimensions effectively appear on the right.
    return np.broadcast_to(arr.T, shape[::-1]).T
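For example, a quick check of the helper above:

v = np.arange(5)
broadcast_rightward(v, (5, 3))
# array([[0, 0, 0],
#        [1, 1, 1],
#        [2, 2, 2],
#        [3, 3, 3],
#        [4, 4, 4]])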