Elementwise multiplication of NumPy arrays of different shapes

When I use numpy.multiply(a, b) to multiply NumPy arrays with shapes (2, 1) and (2,), I get a 2 by 2 matrix. But what I want is element-wise multiplication.
I'm not familiar with NumPy's rules. Can anyone explain what's happening here?

When performing an element-wise operation between two arrays that do not have the same dimensionality, NumPy applies broadcasting. In your case, NumPy broadcasts b along the rows of a:
import numpy as np

a = np.array([[1],
              [2]])
b = np.array([3, 4])
print(a * b)
Gives:
[[3 4]
 [6 8]]
To prevent this, you need to make a and b of the same dimensionality. You can add dimensions to an array by using np.newaxis or None in your indexing, like this:
print(a * b[:, np.newaxis])
Gives:
[[3]
 [8]]
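The function form from your question behaves identically, since * on arrays performs element-wise multiplication (a one-line sketch):
print(np.multiply(a, b[:, np.newaxis]))   # same (2, 1) result: [[3], [8]]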

Let's say you have two arrays, a and b, with shapes (2, 3) and (2,) respectively:
a = np.random.randint(10, size=(2,3))
b = np.random.randint(10, size=(2,))
The two arrays, for example, contain:
a = np.array([[8, 0, 3],
              [2, 6, 7]])
b = np.array([7, 5])
Now, to compute the element-wise product a*b, you have to tell NumPy what to do about the absent axis=1 of array b. You can do so by adding None (an alias for np.newaxis) in the indexing:
result = a*b[:,None]
With result being:
array([[56,  0, 21],
       [10, 30, 35]])
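Equivalently (a sketch), np.expand_dims makes the added axis explicit:
result = a * np.expand_dims(b, axis=1)   # b becomes shape (2, 1)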

Here are input arrays a and b with the shapes you mentioned:
In [136]: a
Out[136]:
array([[0],
       [1]])
In [137]: b
Out[137]: array([0, 1])
Now, when we do multiplication using either * or numpy.multiply(a, b), we get:
In [138]: a * b
Out[138]:
array([[0, 0],
       [0, 1]])
The result is a (2,2) array because numpy uses broadcasting.
#          b
# a |   0    1
# --+----------
# 0 | 0*0  0*1
# 1 | 1*0  1*1
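You can also check the broadcast result directly (a sketch, assuming NumPy >= 1.20, which added this helper):
print(np.broadcast_shapes((2, 1), (2,)))   # (2, 2)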

I just explained the broadcasting rules in broadcasting arrays in numpy.
In your case:
(2,1) * (2,) => (2,1) * (1,2) => (2,2)
Broadcasting has to add a dimension to the 2nd argument, and can only add it at the beginning (to avoid ambiguity).
So if you want a (2,1) result, you have to expand the 2nd argument yourself, with reshape or [:, np.newaxis], as sketched below.
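A minimal sketch of that expansion, using the arrays from the question:
import numpy as np

a = np.array([[1],
              [2]])          # shape (2, 1)
b = np.array([3, 4])         # shape (2,)
print(a * b.reshape(2, 1))   # b expanded to (2, 1); result stays (2, 1)
# [[3]
#  [8]]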

Related

How to properly select an area inside a numpy ndarray

How can I properly select a specific area inside a NumPy ndarray? For example, in the sample code below I want to select the 3x2 submatrix at the intersections of the columns and rows of matrix M stored in a and b respectively:
M = np.random.rand(4,5)
print(M)
a = [0, 2]
b = [0, 2, 3]
Selection = M[a,b]
but I am getting:
IndexError: shape mismatch: indexing arrays could not be broadcast
together with shapes (2,) (3,)
when I want from matrix M:
[[0.36899449 0.02531732 0.04966994 0.66058884 0.26193009]
 [0.92893864 0.10193024 0.74850916 0.72822403 0.09112129]
 [0.28863096 0.45470087 0.01032583 0.30931807 0.42765045]
 [0.59819051 0.94057773 0.95352287 0.81818564 0.24220261]]
To get:
[[0.36899449 0.04966994]
 [0.28863096 0.01032583]
 [0.59819051 0.95352287]]
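One common fix is np.ix_, which builds open-mesh index arrays that broadcast against each other (a sketch; note that b holds the rows and a the columns here):
Selection = M[np.ix_(b, a)]
Chained indexing such as M[b][:, a] selects the same 3x2 block.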

Numpy Advanced Indexing confusion

If a is a NumPy array of shape (5,3), and b and c are both of shape (2,2), what is the shape of a[b, c]?
Can anyone explain this to me with an example? I've read the docs but I am still not able to understand how it works.
Just for the purpose of expounding the concept of advanced indexing, here is a contrived example:
# input arrays
In [22]: a
Out[22]:
array([[ 0,  1,  2],
       [ 3,  4,  5],
       [ 6,  7,  8],
       [ 9, 10, 11],
       [12, 13, 14]])
In [23]: b
Out[23]:
array([[0, 1],
       [2, 3]])
In [24]: c
Out[24]:
array([[0, 1],
       [2, 2]])
# advanced indexing
In [25]: a[b, c]
Out[25]:
array([[ 0,  4],
       [ 8, 11]])
By the expression a[b, c], we are using the arrays b and c to selectively pull out elements from the array a.
To interpret the output of a[b, c]:
#    b           c               2D indices
# [[0, 1],    [[0, 1],   --->   (0,0) (1,1)
#  [2, 3]]     [2, 2]]   --->   (2,2) (3,2)
The 2D indices are simply applied to the array a, and the corresponding elements are returned as an array in the result of a[b, c]:
a[(0,0)] --> 0
a[(1,1)] --> 4
a[(2,2)] --> 8
a[(3,2)] --> 11
The above elements are returned as a 2D array since the arrays b and c are 2D arrays themselves.
Also, please note that advanced indexing always returns a copy.
In [27]: (a[b, c]).flags.owndata
Out[27]: True
However, an assignment operation using advanced indexing will alter the original array in place. But this behaviour also depends on two factors (a short sketch follows this list):
whether your indexing operation is pure (only advanced indexing) or mixed (a combination of advanced and basic indexing)
in the case of mixed indexing, the order in which the operations are applied.
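For instance, a minimal sketch of a pure advanced-indexing assignment, reusing a, b, and c from above (the fill value -1 is just illustrative):
a[b, c] = -1   # writes -1 into a[0,0], a[1,1], a[2,2] and a[3,2], modifying a in place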
See: Views and copies confusion with NumPy arrays when combining index operations

Slicing a tensor by an index tensor in Tensorflow

I have the two following tensors (note that both are TensorFlow tensors, which means they are still symbolic at the time I construct the slicing op below, before I launch a tf.Session()):
params: has shape (64, 784, 256)
indices: has shape (64, 784)
and I want to construct an op that returns the following tensor:
output: has shape (64,784) where
output[i,j] = params_tensor[i,j, indices[i,j] ]
What is the most efficient way in Tensorflow to do so?
ps: I tried with tf.gather but couldn't make use of it to perform the operation I described above.
Many thanks.
You can get exactly what you want using tf.gather_nd. The final expression is:
tf.gather_nd(params,
             tf.stack([tf.tile(tf.expand_dims(tf.range(tf.shape(indices)[0]), 1),
                               [1, tf.shape(indices)[1]]),
                       tf.transpose(tf.tile(tf.expand_dims(tf.range(tf.shape(indices)[1]), 1),
                                            [1, tf.shape(indices)[0]])),
                       indices], 2))
This expression has the following explanation:
tf.gather_nd does what you expected and uses the indices to gather the output from the params
tf.stack combines three separate tensors, the last of which is the indices. The first two tensors specify the ordering of the first two dimensions (axis 0 and axis 1 of params/indices)
For the example provided, this ordering is simply 0, 1, 2, ..., 63 for axis 0, and 0, 1, 2, ... 783 for axis 1. These sequences are obtained with tf.range(tf.shape(indices)[0]) and tf.range(tf.shape(indices)[1]), respectively.
For the example provided, indices has shape (64, 784). The other two tensors from the last point above need to have this same shape in order to be combined with tf.stack
First, an additional dimension/axis is added to each of the two sequences using tf.expand_dims.
The use of tf.tile and tf.transpose can be shown by example. Assume the first two axes of params and indices have shape (5, 3). We want the first tensor to be:
[[0, 0, 0], [1, 1, 1], [2, 2, 2], [3, 3, 3], [4, 4, 4]]
We want the second tensor to be:
[[0, 1, 2], [0, 1, 2], [0, 1, 2], [0, 1, 2], [0, 1, 2]]
These two tensors almost function like specifying the coordinates in a grid for the associated indices.
The final tf.stack combines the three tensors along a new third axis, so the result has the same first two axes as params/indices, plus a final axis of length 3 holding the coordinates.
Keep in mind that if you have more or fewer axes than in the question, you need to modify the number of coordinate-specifying tensors in tf.stack accordingly.
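As a small concrete sketch (a hypothetical 2x2 case, not from the original question): if indices = [[5, 2], [0, 7]], the stacked index tensor is
[[[0, 0, 5], [0, 1, 2]],
 [[1, 0, 0], [1, 1, 7]]]
so tf.gather_nd pulls out params[0, 0, 5], params[0, 1, 2], params[1, 0, 0], and params[1, 1, 7], giving a (2, 2) output.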
What you want is like a custom reduction function. If indices holds something like the index of the maximum value along the last axis, then I would suggest using tf.reduce_max instead:
max_params = tf.reduce_max(params_tensor, reduction_indices=[2])
Otherwise, here is one way to get what you want (Tensor objects are not assignable, so we create a 2D list of tensors and pack it using tf.pack, which was renamed tf.stack in TensorFlow 1.0):
import tensorflow as tf
import numpy as np

with tf.Graph().as_default():
    params_tensor = tf.pack(np.random.randint(1, 256, [5, 5, 10]).astype(np.int32))
    indices = tf.pack(np.random.randint(1, 10, [5, 5]).astype(np.int32))

    output = [[None for j in range(params_tensor.get_shape()[1])]
              for i in range(params_tensor.get_shape()[0])]
    for i in range(params_tensor.get_shape()[0]):
        for j in range(params_tensor.get_shape()[1]):
            output[i][j] = params_tensor[i, j, indices[i, j]]
    output = tf.pack(output)

    with tf.Session() as sess:
        params_tensor, indices, output = sess.run([params_tensor, indices, output])
        print(params_tensor)
        print(indices)
        print(output)
I know I'm late, but I recently had to do something similar, and was able to do it using ragged tensors:
output = tf.gather(params, tf.RaggedTensor.from_tensor(indices), batch_dims=-1, axis=-1)
Hope it helps.
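For what it's worth, in newer TensorFlow versions where tf.gather supports the batch_dims argument, the same lookup can be sketched without ragged tensors:
output = tf.gather(params, indices, batch_dims=2)  # output[i, j] = params[i, j, indices[i, j]]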

How to get a dense representation of one-hot vectors

Suppose a tensor containing:
[[0 0 1]
[0 1 0]
[1 0 0]]
How can I get the dense representation in a native way (without using NumPy or iterations)?
[2,1,0]
There is tf.one_hot() to do the inverse, and there is also tf.sparse_to_dense(), which seems to do it, but I was not able to figure out how to use it.
tf.argmax(x, axis=1) should do the job.
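For the sample input, a minimal sketch of that (TF 1.x session style, matching the rest of this thread):
one_hot = tf.constant([[0, 0, 1], [0, 1, 0], [1, 0, 0]])
dense = tf.argmax(one_hot, axis=1)
with tf.Session() as sess:
    print(sess.run(dense))  # => [2 1 0]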
vec = tf.constant([[0, 0, 1], [0, 1, 0], [1, 0, 0]])
locations = tf.where(tf.equal(vec, 1))
# This gives array of locations of "1" indices below
# => [[0, 2], [1, 1], [2, 0]]
# strip first column
indices = locations[:,1]
sess = tf.Session()
print(sess.run(indices))
# => [2 1 0]
TensorFlow does not have a native dense to sparse conversion function/helper. Given that the input array is a dense tensor, such as the one you provided, you can define a function to convert a dense tensor to a sparse tensor.
def dense_to_sparse(dense_tensor):
    # locations of all non-zero entries, shape (N, ndims)
    where_dense_non_zero = tf.where(tf.not_equal(dense_tensor, 0))
    indices = where_dense_non_zero
    values = tf.gather_nd(dense_tensor, where_dense_non_zero)
    shape = dense_tensor.get_shape()
    return tf.SparseTensor(
        indices=indices,
        values=values,
        dense_shape=shape  # this keyword was `shape` before TF 1.0
    )
This helper function finds the indices and values where the Tensor is non-zero and outputs a Sparse tensor with those indices and values. Additionally, the shape is effectively copied over.
You do not want to use tf.sparse_to_dense as that gives you the opposite representation. If you want your output to be [2, 1, 0] instead, you'll need to index the indices. First, you'll need the indices where the array isn't 0:
indices = tf.where(tf.not_equal(dense_tensor, 0))
Then, you'll need to access the tensor using slicing/indicing:
output = indices[:, 1]
You might notice that the 1 in the slice above is the rank of the tensor minus 1. Therefore, to make the value generic, you could do something like:
output = indices[:, len(dense_tensor.get_shape()) - 1]
Although I'm not exactly sure what you'd do with these values (the column index of each non-zero entry). Hope this helped!
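Putting the pieces above together for the sample input (a sketch, TF 1.x style):
dense_tensor = tf.constant([[0, 0, 1], [0, 1, 0], [1, 0, 0]])
indices = tf.where(tf.not_equal(dense_tensor, 0))
output = indices[:, 1]
with tf.Session() as sess:
    print(sess.run(output))  # => [2 1 0]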
EDIT: Yaroslav's answer is better if you're looking for the indices/locations of where the input tensor is 1; however, it won't extend to tensors with values other than 1/0 if that is required.

Dot product of ith row with ith column

In NumPy:
A = np.array([[1, 3, 5], [2, 4, 6]])
array([[1, 3, 5],
       [2, 4, 6]])
B = np.array([[1, 2], [3, 4], [5, 6]])
array([[1, 2],
       [3, 4],
       [5, 6]])
A.dot(B)
array([[35, 44],
       [44, 56]])
I only care about getting A.dot(B).diagonal() = array([35, 56])
Is there a way I can get array([35, 56]) without having to compute the inner products of all the rows and columns? I.e. the inner product of the ith row with ith column?
I ask because the performance difference becomes more significant for larger matrices.
This is just matrix multiplication for 2D arrays:
C[i, j] = sum(A[i, :] * B[:, j])
So since you just want the diagonal elements, it looks like you're after
sum(A[i, :] * B[:, i])  # for each i
So you could just use list comprehension:
[np.dot(A[i, :], B[:, i]) for i in range(A.shape[0])]
# [35, 56]
Or (and this only works because you want the diagonal, so it assumes that if A's dimensions are n x m, B's dimensions are m x n):
np.sum(A * B.T, axis=1)
# array([35, 56])
(no fancy numpy tricks going on here, just playing around with the maths).
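For larger matrices, a loop-free equivalent is np.einsum, which computes only the row-with-matching-column products and never forms the full matrix product (a sketch using the same A and B):
np.einsum('ij,ji->i', A, B)
# array([35, 56])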
Can you simply leave out the rows you don't care about? A 2x3 times 3x2 product gives you a 2x2 result, while a 1x3 times 3x2 product gives you only the top row of [A][B], a 1x2 matrix.
EDIT: I misread the question. Still, each value in the product matrix is the dot product of one row of A with one column of B.