Suppose I have the following tensors:
N = 2
k = 3
d = 2
L = torch.arange(N * k * d * d).view(N, k, d, d)
L
tensor([[[[ 0,  1],
          [ 2,  3]],
         [[ 4,  5],
          [ 6,  7]],
         [[ 8,  9],
          [10, 11]]],
        [[[12, 13],
          [14, 15]],
         [[16, 17],
          [18, 19]],
         [[20, 21],
          [22, 23]]]])
index = torch.Tensor([0,1,0,0]).view(N,-1)
index
tensor([[0., 1.],
[0., 0.]])
I now would like to use the index tensor to pick out the corresponding matrices on the second dimension, i.e. I would like to get something like:
tensor([[[[ 0,  1],
          [ 2,  3]],
         [[ 4,  5],
          [ 6,  7]]],
        [[[12, 13],
          [14, 15]],
         [[12, 13],
          [14, 15]]]])
Any idea how I could achieve this?
Thank you so much!
Tensors can be indexed with multiple index tensors specified across different dimensions (tuples of tensors), where the i-th elements of the index tensors are combined into one tuple of indices. That is, data[indices_dim0, indices_dim1] results in indexing data[indices_dim0[0], indices_dim1[0]], data[indices_dim0[1], indices_dim1[1]], and so on. The index tensors must have the same length: len(indices_dim0) == len(indices_dim1).
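As a minimal sketch of this rule (the tensor t is illustrative, not from the question):
t = torch.arange(9).view(3, 3)
t[torch.tensor([0, 2]), torch.tensor([1, 0])]
# => tensor([1, 6]), i.e. t[0, 1] and t[2, 0]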
Let's use the flat version of index (before you applied the view). Each element needs to be matched with the appropriate batch index, which would be [0, 0, 1, 1]. Also, index needs to have type torch.long, because floats cannot be used as indices. torch.tensor should be preferred over torch.Tensor for creating tensors from existing data: torch.Tensor is an alias for the default tensor type (torch.FloatTensor), whereas torch.tensor automatically infers the data type that represents the given values, supports a dtype argument to set the type manually, and is generally more versatile.
# Type torch.long is inferred
index = torch.tensor([0, 1, 0, 0])
# Same, but explicitly setting the type
index = torch.tensor([0, 1, 0, 0], dtype=torch.long)
batch_index = torch.tensor([0, 0, 1, 1])
L[batch_index, index]
# => tensor([[[ 0,  1],
#             [ 2,  3]],
#
#            [[ 4,  5],
#             [ 6,  7]],
#
#            [[12, 13],
#             [14, 15]],
#
#            [[12, 13],
#             [14, 15]]])
The indices are not limited to 1D tensors; they all need to have the same size, and each element is used as one index. For example, with 2D index tensors the indexing happens as data[indices_dim0[i][j], indices_dim1[i][j]].
With 2D index tensors it is much simpler to create the batch indices, without having to write them out manually.
index = torch.tensor([0, 1, 0, 0]).view(N, -1)
# => tensor([[0, 1],
# [0, 0]])
# Every batch gets its index and is repeated across dim=1
batch_index = torch.arange(N).view(N, 1).expand_as(index)
# => tensor([[0, 0],
# [1, 1]])
L[batch_index, index]
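For completeness, this produces the (2, 2, 2, 2) result requested in the question:
# => tensor([[[[ 0,  1],
#               [ 2,  3]],
#              [[ 4,  5],
#               [ 6,  7]]],
#             [[[12, 13],
#               [14, 15]],
#              [[12, 13],
#               [14, 15]]]])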
Related
Suppose that you have a 3-tensor
data = np.reshape(np.arange(12), [2, 2, 3])
x = tf.constant(data)
Thinking of this as 2x2 matrices indexed by the last index, I would like to get the first column from the first matrix, the second column from the second matrix and the second column from the third matrix.
How can I use tf.gather_nd to do this?
You first need to generate the indices you want.
import tensorflow as tf
import numpy as np
indices = [[i,min(j,1),j] for j in range(3) for i in range(2)] # According to your description
# [[0, 0, 0], [1, 0, 0], [0, 1, 1], [1, 1, 1], [0, 1, 2], [1, 1, 2]]
a = tf.constant(np.arange(12).reshape(2,2,3))
res = tf.gather_nd(a, indices)
sess = tf.InteractiveSession()
a.eval()
# array([[[ 0,  1,  2],
#         [ 3,  4,  5]],
#        [[ 6,  7,  8],
#         [ 9, 10, 11]]])
res.eval()
# array([ 0, 6, 4, 10, 5, 11])
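If you would rather see each gathered column as a pair, you can reshape the flat result (a small usage note on top of the answer above):
tf.reshape(res, [3, 2]).eval()
# array([[ 0,  6],
#        [ 4, 10],
#        [ 5, 11]])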
I found the following tutorial online explaining how to deal with this kind of problem: https://geekyisawesome.blogspot.com/2018/05/fancy-indexing-in-tensorflow-getting.html
Suppose we have a 4x3 matrix
M = tf.constant(np.arange(12).reshape(4,3))
Now let's say that you wanted the third element of the first row, the second element of the second row, the first element of the third row, and the second element of the fourth row. As explained in the tutorial, this could be accomplished like:
idx = tf.constant([2,1,0,1], tf.int32)
x = tf.gather_nd(M, tf.stack([tf.range(M.shape[0]), idx], axis=1))
But what if M has an unknown number of rows (and idx is a tensor of integers of the appropriate size)? Then tf.range(M.shape[0]) will raise an error. How can I get around that?
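One way around it (a sketch, not covered by the tutorial) is to use the dynamic shape tf.shape(M)[0], which is computed at run time, instead of the static M.shape[0], which may be unknown when the graph is built:
idx = tf.constant([2, 1, 0, 1], tf.int32)
# tf.shape(M)[0] is a scalar tensor evaluated at run time, so this
# works even when the static number of rows is None
rows = tf.range(tf.shape(M)[0])
x = tf.gather_nd(M, tf.stack([rows, idx], axis=1))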
b = np.array([[[0, 2, 3], [10, 12, 13]], [[20, 22, 23], [110, 112, 113]]])
print(b[..., -1])
# => [[  3  13]
#     [ 23 113]]
Why does this output show the first axis but not the second axis (to show the second axis, it would have to show each number in its own list)? Is Numpy trying to minimize unnecessary display of dimensions when there is only one number per each second dimension list being shown? Why doesn’t numpy replicate the dimensions of the original array exactly?
Why does this output show the first axis but not the second axis (to show the second axis, it would have to show each number in its own list)?
It does show the first and the second axis. Note that you have a 2d array here, and the first and second axis are retained. Only the third axis has "collapsed".
Your indexing is, for a 3d array, equivalent to:
b[:, :, -1]
It thus means that you create a 2d array c where c[i][j] = b[i][j][-1]. Since -1 means the last element, c[i][j] = b[i][j][2].
b has as values:
>>> b
array([[[  0,   2,   3],
        [ 10,  12,  13]],
       [[ 20,  22,  23],
        [110, 112, 113]]])
So that means that our result c has c[0][0] = b[0][0][2], which is 3; c[0][1] = b[0][1][2], which is 13; c[1][0] = b[1][0][2], which is 23; and c[1][1] = b[1][1][2], which is 113.
So the end product is:
>>> b[:,:,-1]
array([[ 3, 13],
[ 23, 113]])
>>> b[...,-1]
array([[ 3, 13],
[ 23, 113]])
By specifying an index for a given dimension, that dimension "collapses". Another sensible alternative would have been a dimension of size 1, but such subscripting is frequently done precisely to retrieve arrays with a lower number of dimensions.
In [7]: b = np.array([[[0, 2, 3], [10, 12, 13]], [[20, 22, 23], [110, 112, 113]]])
In [8]: b # (2,2,3) shape array
Out[8]:
array([[[  0,   2,   3],
        [ 10,  12,  13]],
       [[ 20,  22,  23],
        [110, 112, 113]]])
In [9]: b[..., -1]
Out[9]:
array([[ 3, 13],
[ 23, 113]])
This slice of b is a (2,2) array. It's not just a matter of display. Axes 0 and 1 are present; it's axis 2 that's been dropped.
Indexing with a list, or a slice:
In [10]: b[..., [-1]] # (2,2,1)
Out[10]:
array([[[  3],
        [ 13]],
       [[ 23],
        [113]]])
In [11]: b[..., -1:]
Out[11]:
array([[[  3],
        [ 13]],
       [[ 23],
        [113]]])
https://docs.scipy.org/doc/numpy/reference/arrays.indexing.html
This indexing page is long, but it covers these cases (and more).
I have a problem with using TensorFlow. I have four images with their corresponding indices. I want to make an image from them. I tried for loops, tf.gather, tf.assign, and so on, but all show errors. If somebody could help me, it would be really appreciated. I explain my question with one small example:
We have 4 tensors and their indices from the tf.nn.top_k function (I write in MATLAB style just for simplicity):
a = [1, 2; 5, 6] a_idx = [0, 1; 2, 3]
b = [3, 4; 7, 8] b_idx = [0, 1; 2, 3]
c = [9, 10; 13, 14] c_idx = [0, 1; 2, 3]
d = [11, 12; 15, 16] d_idx = [0, 1; 2, 3]
I am looking for a big image from a, b, c, and d and their indices like:
image = [a b; c d]
image = [1, 2, 3, 4; 5, 6, 7, 8; 9, 10, 11, 12; 13, 14, 15, 16]
In python I have something like:
a, a_idx, b, b_idx, c, c_idx, d, d_idx
n_x = tf.Variable(tf.zeros([1, 4, 4, 1]))
n_patches = tf.extract_image_patches(
    n_x,
    [1, 2, 2, 1],
    [1, 2, 2, 1],
    [1, 1, 1, 1],
    "SAME"
)
So, n_patches is 4 tensors, and I need to put the values of a to d into each patch corresponding to a_idx to d_idx. It's really easy for me to do that in MATLAB or NumPy using a for loop, but in TensorFlow I cannot.
In your comments, I suspect you made a tiny error in your desired output, image.
What I interpret that you want is: given
values = np.array([[2, 5],
                   [4, 6]])
indices = np.array([[0, 3],
                    [2, 1]])
your result would be
[[2. 0. 0. 0.]
 [0. 0. 0. 5.]
 [0. 0. 4. 0.]
 [0. 6. 0. 0.]]
So you want to obtain a sort of one hot encoded matrix, but with values corresponding to given indices. This can be obtained like so:
import numpy as np
values = np.array([[2, 5],
                   [4, 6]])
indices = np.array([[0, 3],
                    [2, 1]])
# Make a 4x4 matrix of zeros (note: np.zeros_like(indices) would
# give the wrong shape (2, 2) and an integer dtype)
n_hots = np.zeros((4, 4))
# Now rows 0, 1, 2 and 3 should get values at the corresponding
# indices. That is, we should first "unpack" the values and indices:
indices=indices.ravel()
values=values.ravel()
# values are now: [2,5,4,6]
# indices are now: [0,3,2,1]
# We want:
# n_hots[row, indices[row]] = values[row]
# e.g.
# n_hots[0, 0] = 2
# n_hots[1, 3] = 5
# n_hots[2, 2] = 4
# n_hots[3, 1] = 6
# Notice how the first index is an ascending range of values,
# [0, 1, 2, 3], the second index is the raveled indices, and the
# right-hand side of the equals sign is the raveled values!
# That means we can just do the following:
n_hots[np.arange(4),indices]=values
print(n_hots)
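With the shape fix above, this prints the desired matrix:
# => [[2. 0. 0. 0.]
#     [0. 0. 0. 5.]
#     [0. 0. 4. 0.]
#     [0. 6. 0. 0.]]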
In TensorFlow it would be a bit different: first generate a one-hot tensor that has ones along the second axis at the given indices, and then multiply each row by its corresponding value:
import numpy as np
import tensorflow as tf
indices = tf.placeholder(shape=(None), dtype=tf.int32)
values = tf.placeholder(shape=(None), dtype=tf.float32)
one_hots = tf.one_hot(indices, tf.shape(indices)[0])
# Multiply each one-hot row by its own value (row-wise broadcast)
n_hots = one_hots * tf.expand_dims(values, axis=1)
with tf.Session() as sess:
    _values = np.array([[2, 5],
                        [4, 6]])
    _indices = np.array([[0, 3],
                         [2, 1]])
    n_h = sess.run(n_hots, {indices: _indices.ravel(), values: _values.ravel()})
    print(n_h)
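This prints the same matrix as the NumPy version:
# => [[2. 0. 0. 0.]
#     [0. 0. 0. 5.]
#     [0. 0. 4. 0.]
#     [0. 6. 0. 0.]]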
For example, I have a tensor of shape [30, 6, 6, 3]: 30 is the batch size, 6x6 is height x width, and 3 is the number of channels.
How could I rearrange its elements from every 3x3 block to 1x9, like pixels in MATLAB? (The original post illustrated this with a picture.)
tf.reshape() seems unworkable.
You can do these kinds of transformations by using a combination of transpose and reshape. NumPy and TensorFlow logic is the same, so here's a simpler example using NumPy. Suppose you have a 4x4 array and want to split it into 4 sub-arrays by skipping rows/columns, as in your example.
I.e., starting with
a = array([[ 0,  1,  2,  3],
           [ 4,  5,  6,  7],
           [ 8,  9, 10, 11],
           [12, 13, 14, 15]])
You want to obtain 4 sub-images like
[0, 2]
[8, 10]
and
[1, 3]
[9, 11]
etc
First you can generate subarrays by stepping over columns
b = a.reshape((4,2,2)).transpose([2,0,1])
This generates the following array
array([[[ 0,  2],
        [ 4,  6],
        [ 8, 10],
        [12, 14]],
       [[ 1,  3],
        [ 5,  7],
        [ 9, 11],
        [13, 15]]])
Now you skip the rows
c = b.reshape([2,2,2,2]).transpose(2,0,1,3)
This generates following array
array([[[[ 0,  2],
         [ 8, 10]],
        [[ 1,  3],
         [ 9, 11]]],
       [[[ 4,  6],
         [12, 14]],
        [[ 5,  7],
         [13, 15]]]])
Now notice that you have the desired subarrays, but the two leftmost dimensions are 2x2, whereas you want a single dimension of 4, so you reshape:
c.reshape([4,2,2])
which gives you
array([[[ 0,  2],
        [ 8, 10]],
       [[ 1,  3],
        [ 9, 11]],
       [[ 4,  6],
        [12, 14]],
       [[ 5,  7],
        [13, 15]]])
Note that the general technique for combining dimensions n and m into a single dimension of size n*m is to do reshape(n*m, ...). Because of row-major order, the dimensions to flatten must be adjacent for reshape to work as a flattening operation. So if in your example the channels are the last dimension, you will need to transpose them to the left, flatten (using reshape), and then transpose them back, as in the sketch below.
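Here is a minimal sketch of that channels-last case in NumPy (the shapes are illustrative, not taken from the question), flattening each 2x2 spatial block of a (4, 4, 3) array:
import numpy as np
a = np.arange(4 * 4 * 3).reshape(4, 4, 3)  # (H, W, C), channels last
# Move channels to the front so the spatial dims are adjacent
b = a.transpose(2, 0, 1)                   # (3, 4, 4)
# Split the spatial dims into 2x2 blocks: (C, block_row, row, block_col, col)
c = b.reshape(3, 2, 2, 2, 2)
# Bring the two in-block dims next to each other
d = c.transpose(0, 1, 3, 2, 4)             # (3, block_row, block_col, row, col)
# Flatten each 2x2 block into a single axis of length 4
e = d.reshape(3, 2, 2, 4)
# Move channels back to the last position
out = e.transpose(1, 2, 3, 0)              # (2, 2, 4, 3)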
Let's say I have a sparse tensor with duplicate indices, and where they are duplicated I want to merge the values (sum them up).
What is the best way to do this?
example:
indices = [[1, 1], [1, 2], [1, 2], [1, 3]]
values = [1, 2, 3, 4]
object = tf.SparseTensor(indices, values, dense_shape=[10, 10])
result = tf.MAGIC(object)
result should be a sparse tensor with the following values (or concrete!):
indices = [[1, 1], [1, 2], [1, 3]]
values = [1, 5, 4]
The only thing I have thought of is to string-concat the indices together to create an index hash, apply it to a third dimension, and then reduce-sum on that third dimension.
indices = [[1, 1, 11], [1, 2, 12], [1, 2, 12], [1, 3, 13]]
sparse_result = tf.sparse_reduce_sum(sparseTensor, reduction_axes=2, keep_dims=True)
But that feels very, very ugly.
Here is a solution using tf.segment_sum. The idea is to linearize the indices into a 1-D space, get the unique indices with tf.unique, run tf.segment_sum, and convert the indices back to N-D space.
indices = tf.constant([[1, 1], [1, 2], [1, 2], [1, 3]])
values = tf.constant([1, 2, 3, 4])
# Linearize the indices. If the dimensions of original array are
# [N_{k}, N_{k-1}, ... N_0], then simply matrix multiply the indices
# by [..., N_1 * N_0, N_0, 1]^T. For example, if the sparse tensor
# has dimensions [10, 6, 4, 5], then multiply by [120, 20, 5, 1]^T
# In your case, the dimensions are [10, 10], so multiply by [10, 1]^T
linearized = tf.matmul(indices, [[10], [1]])
# Get the unique indices, and their positions in the array
y, idx = tf.unique(tf.squeeze(linearized))
# Use the positions of the unique values as the segment ids to
# get the unique values
values = tf.segment_sum(values, idx)
# Go back to N-D indices
y = tf.expand_dims(y, 1)
indices = tf.concat([y//10, y%10], axis=1)
tf.InteractiveSession()
print(indices.eval())
print(values.eval())
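This recovers the merged representation asked for in the question:
# => [[1 1]
#     [1 2]
#     [1 3]]
# => [1 5 4]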
Maybe you can try:
indices = [[1, 1], [1, 2], [1, 2], [1, 3]]
values = [1, 2, 3, 4]
object = tf.SparseTensor(indices, values, dense_shape=[10, 10])
tf.sparse.to_dense(object, validate_indices=False)
Using unsorted_segment_sum could be simpler:
def deduplicate(tensor):
    if not isinstance(tensor, tf.IndexedSlices):
        return tensor
    unique_indices, new_index_positions = tf.unique(tensor.indices)
    summed_values = tf.unsorted_segment_sum(tensor.values, new_index_positions, tf.shape(unique_indices)[0])
    return tf.IndexedSlices(indices=unique_indices, values=summed_values, dense_shape=tensor.dense_shape)
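Note that this helper operates on tf.IndexedSlices (e.g., the gradient of a tf.gather), whose indices are 1-D row indices, rather than on the 2-D indices of a tf.SparseTensor. A minimal hypothetical usage:
slices = tf.IndexedSlices(values=tf.constant([[1.], [2.], [3.]]),
                          indices=tf.constant([0, 1, 1]),
                          dense_shape=tf.constant([4, 1]))
merged = deduplicate(slices)
# merged.indices => [0, 1]
# merged.values  => [[1.], [5.]]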
Another solution is to use tf.scatter_nd which will create a dense tensor and accumulate values on duplicate indices. This behavior is clearly stated in the documentation:
If indices contains duplicates, the duplicate values are accumulated (summed).
Then eventually we can convert it back to a sparse representation.
Here is a code sample for TensorFlow 2.x in eager mode:
import tensorflow as tf
indices = [[1, 1], [1, 2], [1, 2], [1, 3]]
values = [1, 2, 3, 4]
merged_dense = tf.scatter_nd(indices, values, shape=(10, 10))
merged_sparse = tf.sparse.from_dense(merged_dense)
print(merged_sparse)
Output
SparseTensor(indices=tf.Tensor(
[[1 1]
 [1 2]
 [1 3]], shape=(3, 2), dtype=int64), values=tf.Tensor([1 5 4], shape=(3,), dtype=int32), dense_shape=tf.Tensor([10 10], shape=(2,), dtype=int64))
Following the tf.segment_sum solution above, here is another example. For a sparse tensor of shape [12, 5], the lines that change in the code are:
linearized = tf.matmul(indices, [[5], [1]])
indices = tf.concat([y//5, y%5], axis=1)