I have two tensors of shape (100, 4) and (100, 2). I would like to perform a concatenation in TensorFlow, similar to np.hstack in NumPy, so that the output has shape (100, 6). Is there a TensorFlow function to do that?
You can use tf.concat for this purpose as follows (this uses the pre-1.0 API, in which the axis argument was still called concat_dim):
sess = tf.Session()
t1 = [[1, 2], [4, 5]]
t2 = [[7, 8, 9], [10, 11, 12]]
# concatenate along axis 1 (columns)
res = tf.concat(concat_dim=1, values=[t1, t2])
print(res.eval(session=sess))
This prints
[[ 1  2  7  8  9]
 [ 4  5 10 11 12]]
I tried the above code and got an error (concat_dim was renamed to axis in TF 1.0). The following code runs fine with TF 1.15:
x = tf.constant([[1, 2, 4], [7, 8, 12]])
y = tf.constant([[88], [99]])
res = tf.concat([x, y], axis=1)
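For completeness, the modern (TF 2.x) equivalent for the shapes in the original question is a one-liner. A minimal sketch with dummy tensors:
import tensorflow as tf
t1 = tf.zeros([100, 4])
t2 = tf.zeros([100, 2])
res = tf.concat([t1, t2], axis=1)  # shape (100, 6), same layout as np.hstack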
I have a similar problem: I am trying to concatenate two image tensors with Keras. Both have identical type and shape, but I get: Layer concatenate_X (the number X changes) was called with an input that isn't a symbolic tensor. Received type: <class 'numpy.ndarray'>. I try to concatenate them like this:
X = concatenate([X1, X2], axis=-1)
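That error usually means X1 and X2 are plain NumPy arrays rather than outputs of Keras layers; keras.layers.concatenate only accepts symbolic tensors. A sketch of the two usual ways around it (the shapes below are made-up assumptions):
import numpy as np
from tensorflow import keras

X1 = np.zeros((10, 32, 32, 3))  # assumed stand-ins for the two image arrays
X2 = np.zeros((10, 32, 32, 3))

# if the arrays are plain data, concatenate them with NumPy instead:
X = np.concatenate([X1, X2], axis=-1)

# if they are meant to feed a model, wrap them in Input layers and
# concatenate the resulting symbolic tensors:
in1 = keras.Input(shape=X1.shape[1:])
in2 = keras.Input(shape=X2.shape[1:])
merged = keras.layers.concatenate([in1, in2], axis=-1)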
There is an easier way of doing it in PyTorch. Suppose t1 is your 100x4 tensor and t2 is your 100x2 tensor. You can do something like:
result = torch.cat((t1, t2), dim=1)
The result is a 100x6 tensor.
Related
I have a ResNet network that I am using for a camera pose network. I have replaced the final classifier layer with a 1024 dense layer and then a 7 dense layer (first 3 for xyz, final 4 for quaternion).
My problem is that I want to record the xyz error and the quaternion error as two separate metrics, instead of a single mean absolute error over all 7 outputs. The inputs to the custom metric template custom_error(y_true, y_pred) are tensors. I don't know how to separate the inputs into two different xyz and q arrays; the function runs at graph-construction time, when the tensors are still symbolic and have no NumPy values.
Ultimately I want to get the median xyz and q error using
median = tensorflow_probability.stats.percentile(input, q=50, interpolation='linear')
Any help would be really appreciated.
You could use tf.slice() to extract just the first three elements of your model output.
import tensorflow as tf
# enabling eager mode to demo the slice fn
tf.compat.v1.enable_eager_execution()
import numpy as np
# create an example array of shape (2, 7),
# where 2 is just an arbitrary value chosen for the batch dimension
out = np.arange(0, 14).reshape(2, 7)
print(out)
# array([[ 0,  1,  2,  3,  4,  5,  6],
#        [ 7,  8,  9, 10, 11, 12, 13]])
# put it in a tf variable
out_tf = tf.Variable(out)
# now using the slice operator
xyz = tf.slice(out_tf, begin=[0, 0], size=[-1,3])
# let's see what it looks like
print(xyz)
# <tf.Tensor: id=11, shape=(2, 3), dtype=int64, numpy=
# array([[0, 1, 2],
#        [7, 8, 9]])>
You could wrap something like this into your custom metric to get what you need:
def xyz_median(y_true, y_pred):
    """get the median of just the X, Y, Z coords
    UNTESTED though :)
    """
    # slice out just the xyz columns of the predictions
    xyz = tf.slice(y_pred, begin=[0, 0], size=[-1, 3])
    median = tfp.stats.percentile(xyz, q=50, interpolation='linear')
    return median
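The quaternion metric would be the mirror image, slicing out the last four columns instead; an equally untested sketch:
def q_median(y_true, y_pred):
    """get the median of just the quaternion components"""
    # slice columns 3..6 (the 4 quaternion values)
    quat = tf.slice(y_pred, begin=[0, 3], size=[-1, 4])
    return tfp.stats.percentile(quat, q=50, interpolation='linear')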
If I have a tensor [1, 2, 3, 4, 5], is there something I can use in the backend to make a tensor [3, 3, 3, 3, 3]?
I am trying to compute a baseline loss based on the average value of the output:
avg_true = K.mean(y_true)
baseline = sigmoid_loss(avg_true, y_pred)
I'm not sure this works, because avg_true is just a single value while y_pred is a tensor.
Do you need something like this?
X = np.array([1, 2, 3, 4, 5])
avg_true = tf.fill(tf.shape(X), tf.reduce_mean(X))
# [3, 3, 3, 3, 3]
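Applied to the loss in the question, a sketch using the Keras backend (assuming sigmoid_loss expects two tensors of the same shape):
# fill a tensor of y_true's shape with the mean of y_true
avg_true = K.ones_like(y_true) * K.mean(y_true)
baseline = sigmoid_loss(avg_true, y_pred)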
I have a tensor with probabilities. This is a dynamic tensor of shape (?, 30), and I select the index with the best probability among the 30 values as:
best_probability = tf.argmax(probability, axis=1)
The tensor best_probability now has shape (?,). I want to use these indices to select values from another tensor called data, of shape (?, 30, 1024, 3): essentially, for each batch element, pick the one of the 30 entries with the best probability.
The final output should have shape (?, 1024, 3).
PS: I tried tf.gather_nd, but it needs the indices of best_probability paired up, something like [[0, 9], [1, 10], [2, 15], [3, 25]]. To build those I wrote the following snippet:
selected_data = tf.stack(tf.range(probability.shape[0]),
                         tf.argmax(probability, axis=1))
This doesn't work, as I am dealing with a dynamic tensor. Is there any alternative to solve this problem?
I was able to solve this using tf.batch_gather and tf.reshape:
# add a trailing axis so batch_gather picks one of the 30 entries per batch element
selected_data = tf.reshape(tf.batch_gather(data, tf.expand_dims(best_probability, 1)),
                           (-1, data.shape[2], data.shape[3]))
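On newer TF versions (1.14 and later, if I recall correctly), the same selection can be written in one call with tf.gather and its batch_dims argument; a sketch:
# pick one of the 30 entries per batch element: (?, 30, 1024, 3) -> (?, 1024, 3)
selected_data = tf.gather(data, best_probability, axis=1, batch_dims=1)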
Let's assume I have a 3-dimensional tensor of shape [a, b, c].
I want to extract one entry along each of the first two dimensions at model run time, e.g. [4, 2, c], so I would end up with a 1-dimensional tensor [c].
The values of a and b are stored in separate tensors of shape [a, 1] and [b, 1], so using tf.slice is not an option, as tf.slice only accepts the indices as a 1-dimensional tensor of scalars.
Any ideas?
Thanks!
You can use the tf.reshape function, although you need to pass the number of entries in the new tensor as an argument (or -1 to let it be inferred). For instance:
import tensorflow as tf
# Define a 3D tensor
tensor3d = tf.constant([[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]])
# Convert the 3D tensor to 1D
tensor1d = tf.reshape(tensor3d, [12])
Probably you need to split the tensor with tf.split.
https://www.tensorflow.org/api_docs/python/tf/split
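If the goal is really to pick out a single [c] vector using two index tensors, plain tensor indexing with squeezed scalars may be the simplest route; a sketch reusing tensor3d from above, with hypothetical idx_a and idx_b standing in for the stored index tensors:
idx_a = tf.constant([1])  # hypothetical shape-[1] tensor holding the first index
idx_b = tf.constant([0])  # hypothetical shape-[1] tensor holding the second index
vec = tensor3d[tf.squeeze(idx_a), tf.squeeze(idx_b)]  # shape [3], i.e. [c]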
I have the two following tensors (note that they are both TensorFlow tensors, which means they are still symbolic at the time I construct the slicing op below, before I launch a tf.Session()):
params: has shape (64, 784, 256)
indices: has shape (64, 784)
and I want to construct an op that returns the following tensor:
output: has shape (64, 784), where
output[i, j] = params[i, j, indices[i, j]]
What is the most efficient way in Tensorflow to do so?
PS: I tried tf.gather but couldn't make use of it to perform the operation described above.
Many thanks.
You can get exactly what you want using tf.gather_nd. The final expression is:
tf.gather_nd(params,
             tf.stack([tf.tile(tf.expand_dims(tf.range(tf.shape(indices)[0]), 1),
                               [1, tf.shape(indices)[1]]),
                       tf.transpose(tf.tile(tf.expand_dims(tf.range(tf.shape(indices)[1]), 1),
                                            [1, tf.shape(indices)[0]])),
                       indices], 2))
This expression has the following explanation:
tf.gather_nd does what you expected and uses the indices to gather the output from the params
tf.stack combines three separate tensors, the last of which is the indices. The first two tensors specify the ordering of the first two dimensions (axis 0 and axis 1 of params/indices)
For the example provided, this ordering is simply 0, 1, 2, ..., 63 for axis 0, and 0, 1, 2, ... 783 for axis 1. These sequences are obtained with tf.range(tf.shape(indices)[0]) and tf.range(tf.shape(indices)[1]), respectively.
For the example provided, indices has shape (64, 784). The other two tensors from the last point above need to have this same shape in order to be combined with tf.stack
First, an additional dimension/axis is added to each of the two sequences using tf.expand_dims.
The use of tf.tile and tf.transpose can be shown by example. Assume the first two axes of params and indices have shape (5, 3). We want the first tensor to be:
[[0, 0, 0], [1, 1, 1], [2, 2, 2], [3, 3, 3], [4, 4, 4]]
We want the second tensor to be:
[[0, 1, 2], [0, 1, 2], [0, 1, 2], [0, 1, 2], [0, 1, 2]]
These two tensors almost function like specifying the coordinates in a grid for the associated indices.
The final part of tf.stack combines the three tensors on a new third axis, so that the result has the same 3 axes as params.
Keep in mind if you have more or less axes than in the question, you need to modify the number of coordinate-specifying tensors in tf.stack accordingly.
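For a quick sanity check of this construction, here is a small eager-mode sketch with arbitrary shapes (2 batch rows, 3 positions, 4 channels):
import tensorflow as tf
params = tf.reshape(tf.range(24), (2, 3, 4))
indices = tf.constant([[0, 1, 3], [2, 2, 0]])
# coordinate grids for axis 0 and axis 1, built exactly as described above
rows = tf.tile(tf.expand_dims(tf.range(2), 1), [1, 3])                # [[0,0,0],[1,1,1]]
cols = tf.transpose(tf.tile(tf.expand_dims(tf.range(3), 1), [1, 2]))  # [[0,1,2],[0,1,2]]
output = tf.gather_nd(params, tf.stack([rows, cols, indices], 2))
# output[i, j] == params[i, j, indices[i, j]], here [[0, 5, 11], [14, 18, 20]]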
What you want is like a custom reduction function. If you are keeping something like the index of the maximum value in indices, then I would suggest using tf.reduce_max:
max_params = tf.reduce_max(params_tensor, axis=2)
Otherwise, here is one way to get what you want (Tensor objects are not assignable, so we build a 2-D list of tensors and stack it with tf.stack, which was called tf.pack when this was written):
import tensorflow as tf
import numpy as np

with tf.Graph().as_default():
    # random integer test data, converted to tensors
    params_tensor = tf.constant(np.random.randint(1, 256, [5, 5, 10]).astype(np.int32))
    indices = tf.constant(np.random.randint(1, 10, [5, 5]).astype(np.int32))

    # build a 2-D Python list of scalar tensors, then stack it into one tensor
    n_rows, n_cols = params_tensor.get_shape().as_list()[:2]
    output = [[None for j in range(n_cols)] for i in range(n_rows)]
    for i in range(n_rows):
        for j in range(n_cols):
            output[i][j] = params_tensor[i, j, indices[i, j]]
    output = tf.stack(output)

    with tf.Session() as sess:
        params_tensor, indices, output = sess.run([params_tensor, indices, output])
        print(params_tensor)
        print(indices)
        print(output)
I know I'm late, but I recently had to do something similar and was able to do it using ragged tensors:
output = tf.gather(params, tf.RaggedTensor.from_tensor(indices), batch_dims=-1, axis=-1)
Hope it helps
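For what it's worth, on recent TF versions plain tf.gather with batch_dims appears to cover this case too; an untested sketch:
# params: (64, 784, 256), indices: (64, 784) -> output: (64, 784)
output = tf.gather(params, indices, axis=2, batch_dims=2)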