Let's assume I have a 3-dimensional tensor of shape [a, b, c].
I want to extract one slice at model run time, e.g. [4, 2, c], so I would end up with a 1-dimensional tensor [c].
The parameters a and b are stored in different tensors of shape [a, 1] and [b, 1], so using tf.slice is not an option, as tf.slice only accepts a 1-dimensional scalar tensor.
Any ideas?
Thanks!
You can use the tf.reshape function, although you need to pass the number of entries in the new tensor as an argument. For instance:
import tensorflow as tf
# Define a 3D tensor
tensor3d = tf.constant([[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]])
# Convert the 3D tensor to 1D
tensor1d = tf.reshape(tensor3d, [12])
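Note that tf.reshape can also infer the number of entries if you pass -1, so you don't have to hard-code it:
tensor1d = tf.reshape(tensor3d, [-1])  # shape inferred as [12]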
Probably you need to split the tensor with tf.split.
https://www.tensorflow.org/api_docs/python/tf/split
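If neither reshaping nor splitting quite fits, here is a minimal sketch of the runtime extraction itself using tf.gather_nd instead; it assumes the two indices can be read out of your [a, 1]- and [b, 1]-shaped tensors as scalar int32 tensors i and j (the placeholder shape and constant indices below are illustrative only):
import tensorflow as tf

tensor3d = tf.placeholder(tf.float32, shape=[None, None, None])  # [a, b, c]
i = tf.constant(4)  # hypothetical scalar index along the first axis
j = tf.constant(2)  # hypothetical scalar index along the second axis

# Gather the single [c]-shaped row at position (i, j).
row = tf.gather_nd(tensor3d, tf.stack([i, j]))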
Summarize the problem
I am working with high-dimensional tensors in PyTorch and I need to index one tensor with the argmax values from another tensor. So I need to index tensor y of dim [3, 4] with the results from the argmax of tensor x with dim [3, 4]. The tensors are:
import torch as T
# Tensor to get argmax from
# expected argmax: [2, 0, 1]
x = T.tensor([[1, 2, 8, 3],
[6, 3, 3, 5],
[2, 8, 1, 7]])
# Tensor to index with the argmax from the previous one
# expected tensor to retrieve [2, 4, 9]
y = T.tensor([[0, 1, 2, 3],
[4, 5, 6, 7],
[8, 9, 10, 11]])
# argmax
x_max, x_argmax = T.max(x, dim=1)
I would like an operation that, given the argmax indices of x (i.e. x_argmax), retrieves the values of tensor y at those same indices.
Describe what you’ve tried
This is what I have tried:
# What I have tried
print(y[x_argmax])
print(y[:, x_argmax])
print(y[..., x_argmax])
print(y[x_argmax.unsqueeze(1)])
I have been reading a lot about NumPy indexing: basic indexing, advanced indexing, and combined indexing. I have been trying to use combined indexing (since I want a slice along the first dimension of the tensor and the index values along the second), but I have not been able to come up with a solution for this use case.
You are looking for torch.gather:
idx = torch.argmax(x, dim=1, keepdim=True)  # get argmax directly, without going through max
out = torch.gather(y, 1, idx)
Resulting in:
tensor([[2],
[4],
[9]])
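If you want the flat tensor([2, 4, 9]) from the question rather than a column, drop the singleton dimension afterwards:
out = out.squeeze(1)  # tensor([2, 4, 9])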
How about y[T.arange(3), x_argmax]?
That does the job for me...
Explanation: You take dimensional information away when you invoke T.max(x, dim=1), so this information needs to be restored explicitly.
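Putting that together with the tensors from the question (the 3 is just y's first dimension, so it can be written generically):
import torch as T

x = T.tensor([[1, 2, 8, 3],
              [6, 3, 3, 5],
              [2, 8, 1, 7]])
y = T.tensor([[0, 1, 2, 3],
              [4, 5, 6, 7],
              [8, 9, 10, 11]])

x_max, x_argmax = T.max(x, dim=1)
print(y[T.arange(y.shape[0]), x_argmax])  # tensor([2, 4, 9])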
Here is what I would like to accomplish in Tensorflow.
I have a 2x2 matrix (trainable):
x_1 x_2
x_3 x_4
and an input vector:
a
b
I would like to multiply each column of the matrix by the corresponding element of the vector and get back the following matrix:
ax_1 bx_2
ax_3 bx_4
I can get this result by declaring each column of matrix as separate variable, but I wonder if there is more elegant solution.
Thanks to broadcasting, you should be fine using the regular multiplication operator:
import tensorflow as tf
x = tf.constant([[3, 5], [7, 11]], dtype=tf.int32)
a = tf.constant([4, 8], dtype=tf.int32)
y = x * a
with tf.Session() as sess:
print(sess.run(y)) # Result: [[12, 40], [28, 88]]
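If you ever need the row-wise counterpart (each row scaled by one vector element), reshaping the vector into a column makes broadcasting do that instead:
y_rows = x * tf.reshape(a, [-1, 1])  # Result: [[12, 20], [56, 88]]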
Is there a way, using tf functions, to multiply certain columns of a 2D tensor by a scalar?
e.g. multiply the second and third column of a matrix by 2:
[[2,3,4,5],[4,3,4,3]] -> [[2,6,8,5],[4,6,8,3]]
Thanks for any help.
EDIT:
Thank you Psidom for the reply. Unfortunately I am not using a tf.Variable, so it seems I have to use tf.slice.
What I am trying to do is multiply all components of a single-sided PSD by 2, except for the DC component and the Nyquist-frequency component, in order to conserve the total power when going from a double-sided spectrum to a single-sided spectrum.
This would correspond to 2*PSD[:, 1:-1] if it were a NumPy array.
Here is my attempt with tf.assign and tf.slice:
x['PSD'] = tf.assign(tf.slice(x['PSD'], [0, 1], [tf.shape(x['PSD'])[0], tf.shape(x['PSD'])[1] - 2]),
tf.scalar_mul(2, tf.slice(x['PSD'], [0, 1], [tf.shape(x['PSD'])[0], tf.shape(x['PSD'])[1] - 2]))) # single-sided power spectral density.
However:
AttributeError: 'Tensor' object has no attribute 'assign'
If the tensor is a variable, you can do this by slicing the columns you want to update and then use tf.assign:
x = tf.Variable([[2,3,4,5],[4,3,4,3]])
x = tf.assign(x[:,1:3], x[:,1:3]*2) # update the second and third columns and assign
# the new tensor to x
with tf.Session() as sess:
tf.global_variables_initializer().run()
print(sess.run(x))
#[[2 6 8 5]
# [4 6 8 3]]
Ended up taking 3 different slices and concatenating them together, with the middle slice multiplied by 2. Probably not the most efficient way, but it works:
x['PSD'] = tf.concat([tf.slice(x['PSD'], [0, 0], [tf.shape(x['PSD'])[0], 1]),
tf.scalar_mul(2, tf.slice(x['PSD'], [0, 1], [tf.shape(x['PSD'])[0], tf.shape(x['PSD'])[1] - 2])),
tf.slice(x['PSD'], [0, tf.shape(x['PSD'])[1] - 1], [tf.shape(x['PSD'])[0], 1])], 1) # single-sided power spectral density.
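For what it's worth, a broadcasting-based sketch can avoid the three slices altogether: build a per-column scale vector [1, 2, ..., 2, 1] and multiply, assuming x['PSD'] is a 2D float tensor as above:
n = tf.shape(x['PSD'])[1]
scale = tf.concat([[1.0], 2.0 * tf.ones([n - 2]), [1.0]], axis=0)  # [1, 2, ..., 2, 1]
x['PSD'] = x['PSD'] * scale  # broadcasts the scale across all rows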
Suppose I have a 2D tensor with shape (size, size), and I want to get two new tensors containing the original tensor's row indices and column indices.
So if size is 2, I want to get
[[0, 0], [1, 1]] and [[0, 1], [0, 1]]
What's tricky is that size is another tensor whose value can only be known when running the graph in a TensorFlow Session.
How can I do this in tensorflow?
Seems like you are looking for tf.meshgrid.
Here's an example:
shape = tf.shape(matrix)
R, C = tf.meshgrid(tf.range(shape[0]), tf.range(shape[1]), indexing='ij')
matrix is your 2D tensor, R and C contain your row and column indices, respectively. Note that this can be slightly simplified if your matrix is square (only one tf.range).
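A small runnable sketch, assuming the matrix is only fed at run time so size is unknown while building the graph:
import tensorflow as tf

matrix = tf.placeholder(tf.float32, shape=[None, None])
shape = tf.shape(matrix)
R, C = tf.meshgrid(tf.range(shape[0]), tf.range(shape[1]), indexing='ij')

with tf.Session() as sess:
    r, c = sess.run([R, C], feed_dict={matrix: [[0., 0.], [0., 0.]]})
    print(r)  # [[0 0], [1 1]]
    print(c)  # [[0 1], [0 1]]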
I have the two following tensors (note that they are both TensorFlow tensors, which means they are still symbolic at the time I construct the slicing op below, before I launch a tf.Session()):
params: has shape (64, 784, 256)
indices: has shape (64, 784)
and I want to construct an op that returns the following tensor:
output: has shape (64, 784), where
output[i,j] = params[i, j, indices[i,j]]
What is the most efficient way in Tensorflow to do so?
PS: I tried tf.gather but couldn't make use of it to perform the operation I described above.
Many thanks.
You can get exactly what you want using tf.gather_nd. The final expression is:
tf.gather_nd(
    params,
    tf.stack([
        tf.tile(tf.expand_dims(tf.range(tf.shape(indices)[0]), 1),
                [1, tf.shape(indices)[1]]),
        tf.transpose(tf.tile(tf.expand_dims(tf.range(tf.shape(indices)[1]), 1),
                             [1, tf.shape(indices)[0]])),
        indices
    ], 2))
This expression has the following explanation:
tf.gather_nd does what you expected and uses the indices to gather the output from the params
tf.stack combines three separate tensors, the last of which is the indices. The first two tensors specify the ordering of the first two dimensions (axis 0 and axis 1 of params/indices)
For the example provided, this ordering is simply 0, 1, 2, ..., 63 for axis 0, and 0, 1, 2, ... 783 for axis 1. These sequences are obtained with tf.range(tf.shape(indices)[0]) and tf.range(tf.shape(indices)[1]), respectively.
For the example provided, indices has shape (64, 784). The other two tensors from the last point above need to have this same shape in order to be combined with tf.stack
First, an additional dimension/axis is added to each of the two sequences using tf.expand_dims.
The use of tf.tile and tf.transpose can be shown by example. Assume the first two axes of params and indices have shape (5, 3). We want the first tensor to be:
[[0, 0, 0], [1, 1, 1], [2, 2, 2], [3, 3, 3], [4, 4, 4]]
We want the second tensor to be:
[[0, 1, 2], [0, 1, 2], [0, 1, 2], [0, 1, 2], [0, 1, 2]]
These two tensors almost function like specifying the coordinates in a grid for the associated indices.
Finally, tf.stack combines the three tensors along a new third axis, so that the result has the same 3 axes as params.
Keep in mind if you have more or less axes than in the question, you need to modify the number of coordinate-specifying tensors in tf.stack accordingly.
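To make the two coordinate grids concrete, here is the (5, 3) example above on its own:
import tensorflow as tf

rows = tf.tile(tf.expand_dims(tf.range(5), 1), [1, 3])                # axis-0 grid
cols = tf.transpose(tf.tile(tf.expand_dims(tf.range(3), 1), [1, 5]))  # axis-1 grid

with tf.Session() as sess:
    print(sess.run(rows))  # [[0 0 0], [1 1 1], [2 2 2], [3 3 3], [4 4 4]]
    print(sess.run(cols))  # [[0 1 2], [0 1 2], [0 1 2], [0 1 2], [0 1 2]]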
What you want is like a custom reduction function. If indices holds something like the index of the maximum value along the last axis, then I would suggest using tf.reduce_max:
max_params = tf.reduce_max(params_tensor, reduction_indices=[2])
Otherwise, here is one way to get what you want (Tensor objects are not assignable, so we create a 2D list of tensors and pack it using tf.pack):
import tensorflow as tf
import numpy as np

with tf.Graph().as_default():
    params_tensor = tf.pack(np.random.randint(1, 256, [5, 5, 10]).astype(np.int32))
    indices = tf.pack(np.random.randint(1, 10, [5, 5]).astype(np.int32))

    # Build a 2D list of tensors (Tensor objects are not assignable),
    # then pack it back into a single tensor.
    output = [[None for j in range(params_tensor.get_shape()[1])]
              for i in range(params_tensor.get_shape()[0])]
    for i in range(params_tensor.get_shape()[0]):
        for j in range(params_tensor.get_shape()[1]):
            output[i][j] = params_tensor[i, j, indices[i, j]]
    output = tf.pack(output)

    with tf.Session() as sess:
        params_tensor, indices, output = sess.run([params_tensor, indices, output])
        print(params_tensor)
        print(indices)
        print(output)
I know I'm late, but I recently had to do something similar, and was able to do it using Ragged Tensors:
output = tf.gather(params, tf.RaggedTensor.from_tensor(indices), batch_dims=-1, axis=-1)
Hope it helps
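For reference, on recent TensorFlow versions plain tf.gather with batch_dims covers the same case; a sketch, assuming TF 2.x and the shapes from the question:
import tensorflow as tf

params = tf.random.normal([64, 784, 256])
indices = tf.random.uniform([64, 784], maxval=256, dtype=tf.int32)

# output[i, j] = params[i, j, indices[i, j]]
output = tf.gather(params, indices, axis=2, batch_dims=2)
print(output.shape)  # (64, 784)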