Using lexsort on higher dimensional arrays - numpy

I could not for the life of me get array indexing to work properly with higher dimensional lexsort.
I have an ndarray lines of shape (N, 2, 3). You can think of it as N pairs (start and end of a line) of three-dimensional coordinates. These pairs of vectors can contain duplicates, which should be removed.
points = np.array([[1,1,0],[-1,1,0],[-1,-1,0],[1,-1,0]])
lines = np.dstack([points, np.roll(points, shift=1, axis=0)]) # create point pairs / lines
lines = np.vstack([lines, lines[..., ::-1]]) # add duplicates w/reversed direction
lines = lines.transpose(0,2,1) # change shape from N,3,2 to N,2,3
Since the pair (v1, v2) is not equal to (v2, v1), I am sorting the vectors with lexsort as follows
idx = np.lexsort((lines[..., 0], lines[..., 1], lines[..., 2]))
which gives me an array idx of shape (N, 2) indicating the order along axis 1:
array([[0, 1],
       [0, 1],
       [1, 0],
       [1, 0],
       [1, 0],
       [1, 0],
       [0, 1],
       [0, 1]])
However, lines[idx] results in something with shape (N, 2, 2, 3). I tried all manner of newaxis padding, axis reordering, etc. to get broadcasting to work, but everything resulted in the output having even more dimensions, not fewer. I also tried lines[:, idx], but this gives (N, N, 2, 3).
Based on https://numpy.org/doc/stable/user/basics.indexing.html#integer-array-indexing, I eventually figured out that for my concrete problem I need to add an additional index array:
idx_n = np.arange(len(lines))[:, np.newaxis]
lines[idx_n, idx]
Because of the mix of "advanced" and "simple" indexing, lines[:, idx] did not work as I expected.
But is this really the most succinct it can be?

Eventually I found out I wanted
np.take_along_axis(lines, idx[..., np.newaxis], axis=1)
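For completeness, a minimal sketch (re-using the arrays constructed above) checking that the advanced-indexing form and np.take_along_axis give the same result:

import numpy as np

points = np.array([[1, 1, 0], [-1, 1, 0], [-1, -1, 0], [1, -1, 0]])
lines = np.dstack([points, np.roll(points, shift=1, axis=0)])  # (4, 3, 2)
lines = np.vstack([lines, lines[..., ::-1]])                   # (8, 3, 2)
lines = lines.transpose(0, 2, 1)                               # (8, 2, 3)

idx = np.lexsort((lines[..., 0], lines[..., 1], lines[..., 2]))  # (8, 2)

a = lines[np.arange(len(lines))[:, np.newaxis], idx]             # advanced indexing
b = np.take_along_axis(lines, idx[..., np.newaxis], axis=1)      # same result
assert (a == b).all() and a.shape == (8, 2, 3)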

Related

What is the difference between tf.scatter_add and tf.scatter_nd when indices is a matrix?

Both tf.scatter_add and tf.scatter_nd allow indices to be a matrix. It is clear from the documentation of tf.scatter_nd that the last dimension of indices contains values that are used to index a tensor of shape shape. The other dimensions of indices define the number of elements/slices to be scattered. Suppose updates has rank N. The first k dimensions of indices (all except the last) should match the first k dimensions of updates, and the last (N-k) dimensions of updates should match the last (N-k) dimensions of shape.
This implies that tf.scatter_nd can be used to perform an N-dimensional scatter. However, tf.scatter_add also takes matrices as indices, but it is not clear which dimensions of indices correspond to the number of scatters to be performed, nor how those dimensions align with updates. Can someone provide a clear explanation, possibly with examples?
#shaunshd, I finally fully understand the relationship between the three tensors in the tf.scatter_nd_*() arguments, especially when indices has multiple dimensions, e.g.:
indices = tf.constant([[0, 0, 0], [1, 1, 1], [2, 2, 2], [3, 3, 3], [3,3,2]], dtype=tf.int32)
Please don't expect tf.rank(indices) > 2; tf.rank(indices) == 2 is always true.
The following is my test code, showing a more complex test case than the examples provided on TensorFlow's official website:
def testScatterNDUpdate(self):
    ref = tf.Variable(np.zeros(shape=[4, 4, 4], dtype=np.float32))
    indices = tf.constant([[0, 0, 0], [1, 1, 1], [2, 2, 2], [3, 3, 3], [3, 3, 2]], dtype=tf.int32)
    updates = tf.constant([1, 2, 3, 4, 5], dtype=tf.float32)
    # shape = (4, 4, 4)
    print(tf.tensor_scatter_nd_update(ref, indices, updates))
    print(ref.scatter_nd_update(indices, updates))
    # print(updates.shape[-1] == shape[-1], updates.shape[0] <= shape[0])
    # The conditions are:
    #   updates.shape[0] == indices.shape[0]
    #   indices.shape[1] <= len(shape)
    #   tf.rank(indices) == 2
You can also understand the indices with the following pseudocode:
def scatter_nd_update(ref, indices, updates):
    for i in range(tf.shape(indices)[0]):
        ref[indices[i]] = updates[i]
    return ref
Compared with NumPy's fancy indexing, TensorFlow's indexing features are still quite difficult to use, and their style is not yet unified in the way NumPy's is. I hope the situation improves in TF 3.x.
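For comparison, here is a rough NumPy sketch of the same scatter using fancy indexing (my own illustration, not from the TF docs), matching the shapes in the test case above:

import numpy as np

ref = np.zeros((4, 4, 4), dtype=np.float32)
indices = np.array([[0, 0, 0], [1, 1, 1], [2, 2, 2], [3, 3, 3], [3, 3, 2]])
updates = np.array([1, 2, 3, 4, 5], dtype=np.float32)

# Each row of indices addresses one element of ref; fancy indexing scatters in one line.
ref[indices[:, 0], indices[:, 1], indices[:, 2]] = updates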

Vectorise numpy code on demand

Suppose I have a very basic function in Python:
def f(x, y):
    return x + y
Then I can call this with scalars, f(1, 5.4) == 6.4, or with numpy arrays of arbitrary (but matching) shape. E.g. this works:
x = np.arange(3)
y = np.array([1,4,2.3])
f(x, y)
which gives an array with entries 1, 5, 4.3.
But what if f is more complicated? For example, xx and yy are 1D numpy arrays here.
def g(x, y):
    return np.sum((xx - x)**2 + (yy - y)**2)
(I hasten to add that I'm not interested in this specific g, but in general strategies...) Then g(5, 6) works fine, but if I want to pass numpy arrays, I seem to have to write a very different function with explicit broadcasting etc. For example:
def gg(x, y):
    xfull = np.stack([x]*len(xx), axis=-1)
    yfull = np.stack([y]*len(xx), axis=-1)
    return np.sum((xfull - xx)**2 + (yfull - yy)**2, axis=-1)
This does now work with scalars and arrays. But it seems like a mess, and is hard to read.
Is there a better way?
Given:
def g(x, y):
    return np.sum((xx - x)**2 + (yy - y)**2)
my first questions are:
Is this written with scalar x and y in mind?
What are xx and yy? You say 1d arrays; are they the same length?
Why aren't they parameters? Because in this context they are fixed?
In words, this offsets xx and yy by constant amounts and takes the sum of their squares, returning a single value?
My next step is to explore the 'broadcasting' limits of this expression. For example, it runs for any x that can be used in xx - x. That could be a 0d array, a one-element 1d array, an array with the same shape as xx, or anything else that can 'broadcast' with xx. That's where a thorough understanding of 'broadcasting' is essential.
g(1,2)
g(xx,xx)
g(xx[:,None],yy[None,:])
xx - xx[:,None], though, produces a 2d array. np.sum as written takes the sum over all values, i.e. over the flattened array. Your gg suggests you want to sum on the last axis; if so, go ahead and put that in g:
def g(x, y):
    return np.sum((xx - x)**2 + (yy - y)**2, axis=-1)
Your use of stack in gg produces:
In [101]: xx
Out[101]: array([0, 1, 2, 3, 4])
In [103]: np.stack([np.arange(3)]*len(xx), axis=-1)
Out[103]:
array([[0, 0, 0, 0, 0],
       [1, 1, 1, 1, 1],
       [2, 2, 2, 2, 2]])
I would have written that as x[:,None]
In [104]: xx-_
Out[104]:
array([[ 0,  1,  2,  3,  4],
       [-1,  0,  1,  2,  3],
       [-2, -1,  0,  1,  2]])
In [105]: xx-np.arange(3)[:,None]
Out[105]:
array([[ 0,  1,  2,  3,  4],
       [-1,  0,  1,  2,  3],
       [-2, -1,  0,  1,  2]])
That does not work with a scalar x, but this does:
xx-np.asarray(x)[...,None]
np.array or np.asarray is commonly used at the start of numpy functions to accommodate scalar or list inputs. The ... (Ellipsis) is handy when dealing with a variable number of dimensions. reshape(...,-1) and [..., None] are widely used to expand or generalize dimensions.
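Putting those pieces together, here is a sketch of a generalized g along these lines (assuming, as in the question, that xx and yy are fixed 1d arrays of the same length):

import numpy as np

xx = np.arange(5)
yy = np.arange(5) * 2.0

def g(x, y):
    # asarray handles scalars and lists; the trailing None adds an axis so the
    # inputs broadcast against xx and yy, and we sum over that last axis.
    x = np.asarray(x)[..., None]
    y = np.asarray(y)[..., None]
    return np.sum((xx - x)**2 + (yy - y)**2, axis=-1)

g(1, 2)                         # scalar inputs -> 0d result
g(np.arange(3), np.arange(3))   # (3,) inputs -> (3,) result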
I've learned a lot by looking at the Python code of numpy functions. I've also learned from years of work with MATLAB to be pedantic about dimensions: keep track of intended and actual array shapes. It helps to use test shapes that will highlight errors, e.g. test with a (2,3) array instead of an ambiguous (3,3) one.

Slicing a tensor by an index tensor in Tensorflow

I have the following two tensors (note that they are both TensorFlow tensors, which means they are still symbolic at the point where I construct the slicing op below, before I launch a tf.Session()):
params: has shape (64,784, 256)
indices: has shape (64, 784)
and I want to construct an op that returns the following tensor:
output: has shape (64,784) where
output[i, j] = params_tensor[i, j, indices[i, j]]
What is the most efficient way in Tensorflow to do so?
P.S.: I tried tf.gather but couldn't make use of it to perform the operation described above.
You can get exactly what you want using tf.gather_nd. The final expression is:
tf.gather_nd(params, tf.stack([tf.tile(tf.expand_dims(tf.range(tf.shape(indices)[0]), 1), [1, tf.shape(indices)[1]]), tf.transpose(tf.tile(tf.expand_dims(tf.range(tf.shape(indices)[1]), 1), [1, tf.shape(indices)[0]])), indices], 2))
This expression has the following explanation:
tf.gather_nd does what you expected and uses the indices to gather the output from the params
tf.stack combines three separate tensors, the last of which is the indices. The first two tensors specify the ordering of the first two dimensions (axis 0 and axis 1 of params/indices)
For the example provided, this ordering is simply 0, 1, 2, ..., 63 for axis 0, and 0, 1, 2, ... 783 for axis 1. These sequences are obtained with tf.range(tf.shape(indices)[0]) and tf.range(tf.shape(indices)[1]), respectively.
For the example provided, indices has shape (64, 784). The other two tensors from the last point above need to have this same shape in order to be combined with tf.stack
First, an additional dimension/axis is added to each of the two sequences using tf.expand_dims.
The use of tf.tile and tf.transpose can be shown by example: assume the first two axes of params and indices have shape (5, 3). We want the first tensor to be:
[[0, 0, 0], [1, 1, 1], [2, 2, 2], [3, 3, 3], [4, 4, 4]]
We want the second tensor to be:
[[0, 1, 2], [0, 1, 2], [0, 1, 2], [0, 1, 2], [0, 1, 2]]
These two tensors almost function like specifying the coordinates in a grid for the associated indices.
Finally, tf.stack combines the three tensors along a new third axis, so that the result has the same 3 axes as params.
Keep in mind that if you have more or fewer axes than in the question, you need to modify the number of coordinate-specifying tensors in tf.stack accordingly (see the sketch below).
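To make this concrete, here is a small sketch of the index construction for the (5, 3) example above, using made-up values for indices; the final gather is commented out because params is not defined here:

import tensorflow as tf

indices = tf.constant([[2, 0, 1],
                       [1, 1, 0],
                       [0, 2, 2],
                       [2, 1, 0],
                       [0, 0, 1]])                        # shape (5, 3), made-up values

rows = tf.tile(tf.expand_dims(tf.range(tf.shape(indices)[0]), 1),
               [1, tf.shape(indices)[1]])                 # [[0,0,0],[1,1,1],...,[4,4,4]]
cols = tf.transpose(tf.tile(tf.expand_dims(tf.range(tf.shape(indices)[1]), 1),
                            [1, tf.shape(indices)[0]]))   # [[0,1,2],[0,1,2],...]
full_indices = tf.stack([rows, cols, indices], axis=2)    # shape (5, 3, 3)
# output = tf.gather_nd(params, full_indices)   # output[i, j] = params[i, j, indices[i, j]]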
What you want is like a custom reduction function. If you are keeping something like the index of the maximum value in indices, then I would suggest using tf.reduce_max:
max_params = tf.reduce_max(params_tensor, reduction_indices=[2])
Otherwise, here is one way to get what you want (Tensor objects are not assignable so we create a 2d list of tensors and pack it using tf.pack):
import tensorflow as tf
import numpy as np

with tf.Graph().as_default():
    params_tensor = tf.pack(np.random.randint(1, 256, [5, 5, 10]).astype(np.int32))
    indices = tf.pack(np.random.randint(1, 10, [5, 5]).astype(np.int32))
    output = [[None for j in range(params_tensor.get_shape()[1])] for i in range(params_tensor.get_shape()[0])]
    for i in range(params_tensor.get_shape()[0]):
        for j in range(params_tensor.get_shape()[1]):
            output[i][j] = params_tensor[i, j, indices[i, j]]
    output = tf.pack(output)

    with tf.Session() as sess:
        params_tensor, indices, output = sess.run([params_tensor, indices, output])
        print(params_tensor)
        print(indices)
        print(output)
I know I'm late, but I recently had to do something similar, and was able to do it using Ragged Tensors:
output = tf.gather(params, tf.RaggedTensor.from_tensor(indices), batch_dims=-1, axis=-1)
Hope it helps

How to get a dense representation of one-hot vectors

Suppose a Tensor containing:
[[0 0 1]
 [0 1 0]
 [1 0 0]]
How do I get the dense representation in a native way (without using numpy or iterations)?
[2,1,0]
There is tf.one_hot() to do the inverse, and there is also tf.sparse_to_dense(), which seems to do it, but I was not able to figure out how to use it.
tf.argmax(x, axis=1) should do the job.
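For example, with the tensor from the question (a quick sketch assuming eager mode; wrap in a session under TF 1.x):

import tensorflow as tf

x = tf.constant([[0, 0, 1],
                 [0, 1, 0],
                 [1, 0, 0]])
tf.argmax(x, axis=1)   # => [2, 1, 0]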
vec = tf.constant([[0, 0, 1], [0, 1, 0], [1, 0, 0]])
locations = tf.where(tf.equal(vec, 1))
# This gives the locations of the "1" entries, shown below
# => [[0, 2], [1, 1], [2, 0]]
# strip first column
indices = locations[:,1]
sess = tf.Session()
print(sess.run(indices))
# => [2 1 0]
TensorFlow does not have a native dense to sparse conversion function/helper. Given that the input array is a dense tensor, such as the one you provided, you can define a function to convert a dense tensor to a sparse tensor.
def dense_to_sparse(dense_tensor):
    where_dense_non_zero = tf.where(tf.not_equal(dense_tensor, 0))
    indices = where_dense_non_zero
    values = tf.gather_nd(dense_tensor, where_dense_non_zero)
    shape = dense_tensor.get_shape()
    return tf.SparseTensor(
        indices=indices,
        values=values,
        shape=shape
    )
This helper function finds the indices and values where the Tensor is non-zero and outputs a Sparse tensor with those indices and values. Additionally, the shape is effectively copied over.
You do not want to use tf.sparse_to_dense as that gives you the opposite representation. If you want your output to be [2, 1, 0] instead, you'll need to index the indices. First, you'll need the indices where the array isn't 0:
indices = tf.where(tf.not_equal(dense_tensor, 0))
Then, you'll need to access the tensor using slicing/indexing:
output = indices[:, 1]
You might notice that the 1 in the slice above is equal to the number of dimensions of the tensor minus 1. Therefore, to make this value generic, you could do something like:
output = indices[:, len(dense_tensor.get_shape()) - 1]
Although I'm not exactly sure what you'd do with these values (the index of the column where the value sits). Hope this helped!
EDIT: Yaroslav's answer is better if you're looking for the indices/locations where the input tensor is 1; however, it won't extend to tensors with values other than 0 and 1, if that is required.

How do I swap tensor's axes in TensorFlow?

I have a tensor of shape (30, 116, 10), and I want to swap the first two dimensions so that I have a tensor of shape (116, 30, 10).
I saw that numpy has such a function implemented (np.swapaxes) and I searched for something similar in tensorflow, but I found nothing.
Do you have any idea?
tf.transpose provides the same functionality as np.swapaxes, although in a more generalized form. In your case, you can do tf.transpose(orig_tensor, [1, 0, 2]) which would be equivalent to np.swapaxes(orig_np_array, 0, 1).
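As a quick sanity check of the equivalence (a sketch assuming TF 2.x eager mode, with the shape from the question):

import numpy as np
import tensorflow as tf

a = np.random.rand(30, 116, 10)
t = tf.transpose(a, [1, 0, 2])
print(t.shape)                                        # (116, 30, 10)
print(np.allclose(t.numpy(), np.swapaxes(a, 0, 1)))   # True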
It is possible to use tf.einsum to swap axes if the number of input dimensions is unknown. For example:
tf.einsum("ij...->ji...", input) will swap the first two dimensions of input;
tf.einsum("...ij->...ji", input) will swap the last two dimensions;
tf.einsum("aij...->aji...", input) will swap the second and the third
dimension;
tf.einsum("ijk...->kij...", input) will permute the first three dimensions;
and so on.
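For instance, a minimal check of the first pattern on a rank-3 tensor (shape borrowed from the earlier question; eager mode assumed):

import tensorflow as tf

x = tf.ones([30, 116, 10])
y = tf.einsum("ij...->ji...", x)   # swap the first two dimensions
print(y.shape)                     # (116, 30, 10)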
You can transpose just the last two axes with tf.linalg.matrix_transpose. More generally, you can swap any number of trailing axes by working out the leading indices dynamically and using relative (negative) indices for the axes you want to transpose:
x = tf.ones([5, 3, 7, 11])
trailing_axes = [-1, -2]
leading = tf.range(tf.rank(x) - len(trailing_axes)) # [0, 1]
trailing = trailing_axes + tf.rank(x) # [3, 2]
new_order = tf.concat([leading, trailing], axis=0) # [0, 1, 3, 2]
res = tf.transpose(x, new_order)
res.shape # [5, 3, 11, 7]