I have a tensor of shape (2, 3), say input = [[1 2 3] [4 5 6]], and an index tensor of shape (2, 2), index = [[1 0] [2 0]], which I hope to use to retrieve values from input. My expected result is result = [[2 1] [6 4]]. However, simply using tf.gather(input, index) does not seem to work.
If you want to extract elements from the array, you can use tf.gather_nd, and the index should be of the form (i, j) for each element. In your example, the index should be:
import tensorflow as tf

inputs = tf.Variable([[1, 2, 3], [4, 5, 6]])
new_index = tf.Variable([[[0, 1], [0, 0]], [[1, 2], [1, 0]]])
result = tf.gather_nd(inputs, new_index)
sess = tf.InteractiveSession()
tf.global_variables_initializer().run()
print(result.eval())
# output
# [[2 1]
#  [6 4]]
If you want to generate this index from the form you mentioned, you can do:
index = tf.Variable([[1, 0], [2, 0]])
r = tf.tile(tf.expand_dims(tf.range(tf.shape(index)[0]), 1), [1, 2])
new_index = tf.concat([tf.expand_dims(r,-1), tf.expand_dims(index, -1)], axis=2)
sess = tf.InteractiveSession()
tf.global_variables_initializer().run()
print(new_index.eval())
# output
# [[[0 1]
#   [0 0]]
#  [[1 2]
#   [1 0]]]
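As a side note, the 2 in the tf.tile call is hard-coded to the number of columns of index; here is a small sketch of a more general variant that derives both sizes from the tensor itself:
index = tf.Variable([[1, 0], [2, 0]])
n_rows = tf.shape(index)[0]
n_cols = tf.shape(index)[1]
r = tf.tile(tf.expand_dims(tf.range(n_rows), 1), [1, n_cols])
new_index = tf.concat([tf.expand_dims(r, -1), tf.expand_dims(index, -1)], axis=2)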
The problem is in index: tf.gather indexes along the first axis, so with an input of shape (2, 3) only the values 0 and 1 are valid indices. If you add an additional row to the input array, everything works fine:
import tensorflow as tf
input = tf.Variable([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
index = tf.Variable([[1, 0], [2, 0]])
result = tf.gather(input, index)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(result))
# results [[[4 5 6] [1 2 3]] [[7 8 9] [1 2 3]]]
Anyway, the index describes which slices to gather from the input array, not which elements.
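For completeness: in TensorFlow 2.x (as shown in a later answer below), tf.gather also accepts a batch_dims argument, which gives exactly the element-wise behaviour the question asks for. A minimal sketch:
import tensorflow as tf

inputs = tf.constant([[1, 2, 3], [4, 5, 6]])
index = tf.constant([[1, 0], [2, 0]])
# result[i, j] = inputs[i, index[i, j]]
result = tf.gather(inputs, index, batch_dims=1, axis=1)
print(result)  # [[2 1], [6 4]]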
Related
If there are two tensors
a = [[1 2 3 4][5 6 7 8]]
b = [[0 1][1 2]],
how can we get this:
c = [[1 2][6 7]]
i.e. from first row extracting column 0 and 1, from second row extracting column 1 and 2.
Here is a way to do that:
import tensorflow as tf
a = tf.constant([[1, 2, 3, 4],
                 [5, 6, 7, 8]])
b = tf.constant([[0, 1],
                 [1, 2]])
row = tf.range(tf.shape(a)[0])
row = tf.tile(row[:, tf.newaxis], (1, tf.shape(b)[1]))
idx = tf.stack([row, b], axis=-1)
c = tf.gather_nd(a, idx)
with tf.Session() as sess:
    print(sess.run(c))
Output:
[[1 2]
 [6 7]]
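As with the question above, recent TensorFlow (2.x) can also do this in one line via tf.gather's batch_dims argument; a sketch reusing a and b from the code above:
c = tf.gather(a, b, batch_dims=1, axis=1)  # c[i, j] = a[i, b[i, j]] -> [[1 2], [6 7]]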
I have a problem using TensorFlow. I have four images with their corresponding indices, and I want to assemble an image from them. I tried for loops, tf.gather, tf.assign, and so on, but they all show errors. If somebody could help me, it would be really appreciated. I explain my question with one small example:
We have 4 tensors and their indices from the tf.nn.top_k function (I write in MATLAB notation just for simplicity):
a = [1, 2; 5, 6] a_idx = [0, 1; 2, 3]
b = [3, 4; 7, 8] b_idx = [0, 1; 2, 3]
c = [9, 10; 13, 14] c_idx = [0, 1; 2, 3]
d = [11, 12; 15, 16] d_idx = [0, 1; 2, 3]
I am looking for a big image from a, b, c, and d and their indices like:
image = [a b; c d]
image = [1, 2, 3, 4; 5, 6, 7, 8; 9, 10, 11, 12; 13, 14, 15, 16]
In python I have something like:
a, a_idx, b, b_idx, c, c_idx, d, d_idx
n_x = tf.Variable(tf.zeros([1, 4, 4, 1]))
n_patches = tf.extract_image_patches(
    n_x,
    ksizes=[1, 2, 2, 1],
    strides=[1, 2, 2, 1],
    rates=[1, 1, 1, 1],
    padding="SAME"
)
So, n_patches is 4 tensors, and I need to put the values of a to d into the patches corresponding to a_idx to d_idx. It is really easy for me to do that in MATLAB or NumPy using a for loop, but in TensorFlow I cannot.
Judging from your comments, I suspect you made a tiny error in your desired output, image. I interpret that what you want is: given
values = np.array([[2, 5],
                   [4, 6]])
indices = np.array([[0, 3],
                    [2, 1]])
your result would be
[[2. 0. 0. 0.]
 [0. 0. 0. 5.]
 [0. 0. 4. 0.]
 [0. 6. 0. 0.]]
So you want to obtain a sort of one-hot encoded matrix, but with the given values in place of ones. This can be obtained like so:
import numpy as np
values = np.array([[2, 5],
                   [4, 6]])
indices = np.array([[0, 3],
                    [2, 1]])
# Make a 4x4 matrix of zeros (one row and one column per value)
n_hots = np.zeros((4, 4))
# Now rows 0, 1, 2 and 3 should have values at the corresponding
# indices. That is, we should first "unpack" the values and indices:
indices=indices.ravel()
values=values.ravel()
# values are now: [2,5,4,6]
# indices are now: [0,3,2,1]
# values:
# n_hots[row, indices[row]] = values[row]
# e.g.
# n_hots[0,0]=2
# n_hots[1,3]=5
# n_hots[2,2]=4
# n_hots[3,1]=6
# Notice how the first index is an ascending range of values:
# [0,1,2,3], the second index is the raveled indices, and the
# right hand side of the equals sign is the raveled values!
# That means we can just do the following:
n_hots[np.arange(4),indices]=values
print(n_hots)
In TensorFlow it would be a bit different. First generate a one_hot tensor that has ones along the 2nd axis at the given indices, then multiply each row by its corresponding value:
import numpy as np
import tensorflow as tf

indices = tf.placeholder(shape=(None,), dtype=tf.int32)
values = tf.placeholder(shape=(None,), dtype=tf.float32)
one_hots = tf.one_hot(indices, tf.shape(indices)[0])
# Scale each one-hot row by its corresponding value
n_hots = one_hots * tf.expand_dims(values, -1)
with tf.Session() as sess:
    _values = np.array([[2, 5],
                        [4, 6]])
    _indices = np.array([[0, 3],
                         [2, 1]])
    n_h = sess.run(n_hots, {indices: _indices.ravel(), values: _values.ravel()})
    print(n_h)
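In eager TF 2.x, here is a sketch of an alternative that skips the one-hot matrix entirely: build (row, column) coordinate pairs and let tf.scatter_nd place each value directly.
import tensorflow as tf

values = tf.constant([2., 5., 4., 6.])  # raveled values
indices = tf.constant([0, 3, 2, 1])     # raveled indices
rows = tf.range(tf.shape(indices)[0])
# Coordinate pairs [[0,0], [1,3], [2,2], [3,1]]
scatter_idx = tf.stack([rows, indices], axis=1)
n_hots = tf.scatter_nd(scatter_idx, values, shape=(4, 4))
print(n_hots)
# [[2. 0. 0. 0.]
#  [0. 0. 0. 5.]
#  [0. 0. 4. 0.]
#  [0. 6. 0. 0.]]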
Sorry for the inaccurate title. Here is a detailed description of the problem: assume a tensor T of shape (?, 2), e.g., T = [[0,1], [0,2], [0,0], [1, 4], [1, 3], [2,0], [2,0], [2,0], [2,0]]. How do I count, across the distinct values of T[:, 0], how many have a zero in T[:, 1]? For the example above, because there is [0,0] and [2,0], the answer is 2.
More examples:
[[0,1], [0,2], [0,1], [1, 4], [1, 3], [2,0], [2,0], [2,0],[2,0]] (Answer: 1, because of [2,0])
[[0,1], [0,2], [0,1], [1, 4], [1, 3], [2,0], [2,0], [2,0],[2,0],[3,0]] (Answer: 2, because of [2, 0] and [3,0])
If I got what you are looking for, the question is how many unique "[X, 0]" pairs you have in the data. If so, this should do it:
import tensorflow as tf
x = tf.placeholder(shape=(None, 2), dtype=tf.int32)
indices = tf.where(tf.equal(x[:,1], tf.constant(0, dtype=tf.int32)))
unique_values, _ = tf.unique(tf.squeeze(tf.gather(x[:, 0], indices)))
no_unique_values = tf.shape(unique_values)[0]
data = [ .... ]
with tf.Session() as sess:
    no_unique = sess.run(fetches=[no_unique_values], feed_dict={x: data})
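A quick check (a sketch, assuming the graph above) with the first example from the question:
data = [[0, 1], [0, 2], [0, 0], [1, 4], [1, 3],
        [2, 0], [2, 0], [2, 0], [2, 0]]
with tf.Session() as sess:
    print(sess.run(no_unique_values, feed_dict={x: data}))  # 2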
Here is the solution I came up with myself:
def get_unique(ts):
    ts_part = ts[:, 1]
    # Rows whose second element is 0
    where = tf.where(tf.equal(0, ts_part))
    gather_nd = tf.gather_nd(ts, where)
    # The second column is all zeros here, so the sum is just the first column
    gather_plus = gather_nd[:, 0] + gather_nd[:, 1]
    unique_values, _ = tf.unique(gather_plus)
    return tf.shape(unique_values)[0]
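A usage sketch with the first example from the question:
ts = tf.constant([[0, 1], [0, 2], [0, 0], [1, 4], [1, 3],
                  [2, 0], [2, 0], [2, 0], [2, 0]])
with tf.Session() as sess:
    print(sess.run(get_unique(ts)))  # 2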
I want to select values from a 2-D tensor using a tensor of per-row column indices. For example:
array = [[1, 2, 3], [4, 5, 6]]
slice = [[0, 0, 1], [0, 1, 2]]
output = [[1, 1, 2], [4, 5, 6]]
I've tried array[slice], but that didn't work. I also couldn't get tf.gather or tf.gather_nd to work, although these initially seemed like the correct functions to use. Note that these are all tensors in-graph.
How can I select these values in my array according to slice?
You need to add a dimension to your slice tensor, which you can do with tf.stack, and then we can use tf.gather_nd no problem.
import tensorflow as tf

tensor = tf.constant([[1, 2, 3], [4, 5, 6]])
old_slice = tf.constant([[0, 0, 1], [0, 1, 2]])
# We need to add a dimension - we need a tensor of shape (2, 3, 2) instead of (2, 3)
dims = tf.constant([[0, 0, 0], [1, 1, 1]])
new_slice = tf.stack([dims, old_slice], 2)
out = tf.gather_nd(tensor, new_slice)
If we run the following code:
with tf.Session() as sess:
    run_tensor, run_slice, run_out = sess.run([tensor, new_slice, out])
    print('Input tensor:')
    print(run_tensor)
    print('Correct param for gather_nd:')
    print(run_slice)
    print('Output:')
    print(run_out)
This should give the correct output:
Input tensor:
[[1 2 3]
 [4 5 6]]
Correct param for gather_nd:
[[[0 0]
  [0 0]
  [0 1]]

 [[1 0]
  [1 1]
  [1 2]]]
Output:
[[1 1 2]
 [4 5 6]]
An even easier and more general way to calculate the result is to directly leverage the batch_dims argument of tf.gather:
>>> array = tf.constant([[1,2,3], [4,5,6]])
>>> slice = tf.constant([[0,0,1], [0,1,2]])
>>> output = tf.constant([[1,1,2], [4,5,6]])
>>> tf.gather(array, slice, batch_dims=1, axis=1)
<tf.Tensor: shape=(2, 3), dtype=int32, numpy=
array([[1, 1, 2],
[4, 5, 6]], dtype=int32)>
Let's say I have a sparse tensor with duplicate indices, and where they are duplicated I want to merge the values (sum them up). What is the best way to do this?
example:
indices = [[1, 1], [1, 2], [1, 2], [1, 3]]
values = [1, 2, 3, 4]
object = tf.SparseTensor(indices, values, dense_shape=[10, 10])
result = tf.MAGIC(object)
result should be a sparse tensor with the following values (or the dense equivalent!):
indices = [[1, 1], [1, 2], [1, 3]]
values = [1, 5, 4]
The only thing I have thought of is to string-concat the indices together to create an index hash, apply it as a third dimension, and then reduce-sum over that third dimension.
indices = [[1, 1, 11], [1, 2, 12], [1, 2, 12], [1, 3, 13]]
sparse_result = tf.sparse_reduce_sum(sparseTensor, reduction_axes=2, keep_dims=True)
But that feels very, very ugly.
Here is a solution using tf.segment_sum. The idea is to linearize the indices into a 1-D space, get the unique indices with tf.unique, run tf.segment_sum, and convert the indices back to N-D space.
import tensorflow as tf

indices = tf.constant([[1, 1], [1, 2], [1, 2], [1, 3]])
values = tf.constant([1, 2, 3, 4])
# Linearize the indices. If the dimensions of the original array are
# [N_{k}, N_{k-1}, ..., N_0], then simply matrix-multiply the indices
# by [..., N_1 * N_0, N_0, 1]^T. For example, if the sparse tensor
# has dimensions [10, 6, 4, 5], then multiply by [120, 20, 5, 1]^T.
# In your case, the dimensions are [10, 10], so multiply by [10, 1]^T.
linearized = tf.matmul(indices, [[10], [1]])
# Get the unique indices, and their positions in the array
y, idx = tf.unique(tf.squeeze(linearized))
# Use the positions of the unique values as the segment ids to
# sum the values at duplicate indices
values = tf.segment_sum(values, idx)
# Go back to N-D indices
y = tf.expand_dims(y, 1)
indices = tf.concat([y//10, y%10], axis=1)
tf.InteractiveSession()
print(indices.eval())
print(values.eval())
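This should print the merged indices and summed values, matching the expected result from the question:
[[1 1]
 [1 2]
 [1 3]]
[1 5 4]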
Maybe you can try:
indices = [[1, 1], [1, 2], [1, 2], [1, 3]]
values = [1, 2, 3, 4]
object = tf.SparseTensor(indices, values, dense_shape=[10, 10])
tf.sparse.to_dense(object, validate_indices=False)
Using unsorted_segment_sum could be simpler:
def deduplicate(tensor):
    if not isinstance(tensor, tf.IndexedSlices):
        return tensor
    unique_indices, new_index_positions = tf.unique(tensor.indices)
    summed_values = tf.unsorted_segment_sum(
        tensor.values, new_index_positions, tf.shape(unique_indices)[0])
    return tf.IndexedSlices(indices=unique_indices, values=summed_values,
                            dense_shape=tensor.dense_shape)
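A usage sketch with hypothetical data (an IndexedSlices whose row index 2 appears twice):
slices = tf.IndexedSlices(values=tf.constant([[1.], [2.], [3.], [4.]]),
                          indices=tf.constant([1, 2, 2, 3]),
                          dense_shape=tf.constant([10, 1]))
deduped = deduplicate(slices)
with tf.Session() as sess:
    idx, vals = sess.run([deduped.indices, deduped.values])
    print(idx)   # [1 2 3]
    print(vals)  # [[1.] [5.] [4.]]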
Another solution is to use tf.scatter_nd, which will create a dense tensor and accumulate values on duplicate indices. This behavior is clearly stated in the documentation:
If indices contains duplicates, the duplicate values are accumulated (summed).
Then eventually we can convert it back to a sparse representation.
Here is a code sample for TensorFlow 2.x in eager mode:
import tensorflow as tf
indices = [[1, 1], [1, 2], [1, 2], [1, 3]]
values = [1, 2, 3, 4]
merged_dense = tf.scatter_nd(indices, values, shape=(10, 10))
merged_sparse = tf.sparse.from_dense(merged_dense)
print(merged_sparse)
Output
SparseTensor(
indices=tf.Tensor(
[[1 1]
[1 2]
[1 3]],
shape=(3, 2),
dtype=int64),
values=tf.Tensor([1 5 4], shape=(3,), dtype=int32),
dense_shape=tf.Tensor([10 10], shape=(2,), dtype=int64))
As per the segment_sum solution above, here is another example. For a sparse tensor of dense shape [12, 5], the lines to change in the code are:
linearized = tf.matmul(indices, [[5], [1]])
indices = tf.concat([y//5, y%5], axis=1)
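To see why these multipliers work, here is a small sanity check with hypothetical (row, col) pairs: for a column index j < 5, the linearized value 5*i + j decodes back uniquely.
import tensorflow as tf

ij = tf.constant([[3, 4], [11, 0]])  # sample (row, col) pairs with col < 5
lin = tf.matmul(ij, [[5], [1]])      # 5*i + j  ->  [[19], [55]]
back = tf.concat([lin // 5, lin % 5], axis=1)  # recovers [[3, 4], [11, 0]]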