Indexing tensor with another tensor - tensorflow

I have a tensor xi of shape (?, 20, 10) and another tensor y_data of shape (?, 20, 1). I want to use the y_data tensor to "index" the xi tensor in order to do something like tf.exp(xi[y_data] - tf.log(tf.reduce_sum(xi, axis=2))).
E.g. tf.exp(xi[:, :, 4] - tf.log(tf.reduce_sum(xi, axis=2))) results in a tensor of shape (?, 20). I just want to get the index, here 4, out of another tensor.
Thanks in advance!

In this case, I would use a loop on the possible values for y_data which I will assume go from 0 to 9.
result = tf.zeros(tf.shape(y_data), tf.float32)
for i in range(10):
    result = tf.where(tf.equal(y_data, i), tf.exp(xi[:, :, i:i+1]), result)
result = tf.reshape(result, [-1, 20])
result -= tf.log(tf.reduce_sum(xi, axis=2))
Probably not the most efficient but that's the only way I could think of.
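An alternative sketch, assuming y_data holds integer class indices in [0, 10) (the question doesn't say), is to select the values via a one-hot mask instead of a Python loop:
idx = tf.cast(tf.squeeze(y_data, axis=2), tf.int32)               # (?, 20) integer indices
selected = tf.reduce_sum(xi * tf.one_hot(idx, depth=10), axis=2)  # (?, 20), picks xi[..., idx]
result = tf.exp(selected - tf.log(tf.reduce_sum(xi, axis=2)))     # (?, 20), as in the question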


Tensorflow landmark heatmap

I am trying to draw landmark heatmaps with TensorFlow.
My current approach is using tf.scatter_nd like this:
def draw_lmarks(x):
    def draw_lmarks_inner(x2):
        return tf.scatter_nd(x2[0], x2[1], shape=(IMGSIZE, IMGSIZE))
    ret = tf.map_fn(draw_lmarks_inner, x, dtype="float32")
    return tf.reshape(tf.reduce_max(ret, axis=0), [IMGSIZE, IMGSIZE, 1])

return tf.map_fn(draw_lmarks, [locations, vals], dtype="float32")
But this is quite slow, as I have to create an IMGSIZE*IMGSIZE image for each batch entry times the number of landmarks.
So I poked around and found tf.tensor_scatter_nd_update, which I could use like:
img = tf.zeros((IMGSIZE, IMGSIZE), dtype="float32")

def draw_lmarks(x):
    return tf.tensor_scatter_nd_update(img, x[0], x[1])

imgs = tf.map_fn(draw_lmarks, [locations, vals], dtype="float32")
This allows me to only generate batch_size images, which runs considerably faster.
... BUT this doesn't keep the highest value at a point; it simply overwrites.
There is the tf.scatter_max function, which sounds like what I need, but it seems to expect differently shaped inputs.
Is there a way to use the second approach but, instead of overwriting values, take the maximum value at each point?
Shapes:
locations = (-1, 68, 16, 16, 2)
vals = (-1, 68, 16, 16)
To visualize: the second (faster) function returns a heatmap in which overlapping landmarks overwrite each other, while I need one that keeps the maximum at each point (example images omitted).
I think you will be much better off by first setting the seeds of your landmarks and then convolving the result with your heatmap template. Something like:
import tensorflow as tf

num_loc = 10
im_dim = 32
locations = tf.random.uniform((num_loc, 2), maxval=im_dim, dtype=tf.int32)
# float updates so the scattered image can be fed to conv2d;
# heatmap_template is assumed to be a float32 kernel defined elsewhere
centers = tf.scatter_nd(locations, [1.0] * num_loc, (im_dim, im_dim))
heatmap = tf.nn.conv2d(centers[None, :, :, None],
                       heatmap_template[:, :, None, None],
                       (1, 1, 1, 1), 'SAME')[0, :, :, 0]
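For completeness, heatmap_template is not defined in the snippet above; one possible sketch of such a template (a small separable Gaussian, with illustrative size and sigma) is:
# hypothetical Gaussian template; size and sigma are arbitrary example values
size, sigma = 7, 1.5
coords = tf.cast(tf.range(size) - size // 2, tf.float32)
gauss = tf.exp(-(coords ** 2) / (2.0 * sigma ** 2))
heatmap_template = gauss[:, None] * gauss[None, :]   # shape (size, size)
Separately, if your TensorFlow version provides tf.tensor_scatter_nd_max, it behaves like tf.tensor_scatter_nd_update but keeps the maximum at duplicate indices, which would address the overwriting issue in the faster approach directly.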

How to vectorize the following Python code

I'm trying to obtain a matrix, where each element is calculated as follows:
X = torch.ones(batch_size, dim)
X_ = torch.ones(batch_size, dim)
Y = torch.ones(batch_size, dim)
M = torch.zeros(batch_size, batch_size)
for i in range(batch_size):
    for j in range(batch_size):
        M[i, j] = ((X[i] - X_[i] * Y[j])**2).sum()
It's very slow to calculate M element-wise. Is there any suggestion about how to use matrix multiplication to replace the for loops?
Thanks.
If you want to sum() over dim, you can "lift" your 2D problem to 3D and sum there:
M = ((X[:, None, :] - X_[:, None, :] * Y[None, ...])**2).sum(dim=2)
How it works:
X[:, None, :] and X_[:, None, :] are 3D tensors of size (batch_size, 1, dim), and Y[None, ...] is of size (1, batch_size, dim).
When multiplying X_[:, None, :] * Y[None, ...], PyTorch broadcasts the size-1 dimensions so the result has size (batch_size, batch_size, dim).
Finally, you sum() only over the last dimension (dim=2) to get an output M of size (batch_size, batch_size).
The trick here is done by taking advantage of broadcasting.
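As a quick sanity check, a minimal sketch with arbitrary small sizes and random data shows that the loop and the broadcasted version agree:
import torch

batch_size, dim = 4, 3  # illustrative sizes
X = torch.randn(batch_size, dim)
X_ = torch.randn(batch_size, dim)
Y = torch.randn(batch_size, dim)

# loop version
M_loop = torch.zeros(batch_size, batch_size)
for i in range(batch_size):
    for j in range(batch_size):
        M_loop[i, j] = ((X[i] - X_[i] * Y[j])**2).sum()

# broadcasted version
M_vec = ((X[:, None, :] - X_[:, None, :] * Y[None, ...])**2).sum(dim=2)

print(torch.allclose(M_loop, M_vec))  # expected: True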

Add Placeholder to layer

I have a TensorFlow layer with 2 nodes. These are the output nodes of another 2 larger hidden layers. Now I want to add 2 new nodes to this layer, so I end up with 4 nodes in total, and do some last computation. The added nodes are implemented as Placeholders so far, and have a dynamic shape depending on the batch size. Here is a sketch of the net (image not included):
Now I want to concatenate Nodes 3 and 4 to the nodes 1 and 2 of the previously computed layer. I know there is tf.concat for this, but I don't understand how to do this correctly.
How do I add Placeholders of the same batch size as the original net input to a specific layer?
EDIT:
When I use tf.concat over axis=1, I end up with the following problem:
z = tf.placeholder(tf.float32, shape=[None, 2])
Weight_matrix = weight_variable([4, 2])
bias = bias_variable([4, 2])
concat = tf.concat((dnn_out, z), 1)
h_fc3 = tf.nn.relu(tf.matmul(concat, Weight_matrix) + bias)
Adding the bias to the tf.matmul result throws an InvalidArgumentError: Incompatible shapes: [20,2] vs. [4,2].
Since your data is batched, probably over the first dimension, you need to concatenate over the second (axis=1):
import tensorflow as tf
import numpy as np

dnn_output = tf.placeholder(tf.float32, (None, 2))  # replace with your DNN(input) result
additional_nodes = tf.placeholder(tf.float32, (None, 2))

concat = tf.concat((dnn_output, additional_nodes), axis=1)
print(concat)
# > Tensor("concat:0", shape=(?, 4), dtype=float32)

dense_output = tf.layers.dense(concat, units=2)
print(dense_output)
# > Tensor("dense/BiasAdd:0", shape=(?, 2), dtype=float32)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(dense_output, feed_dict={dnn_output: np.ones((5, 2)),
                                            additional_nodes: np.zeros((5, 2))}))
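As a side note on the original error: tf.matmul(concat, Weight_matrix) produces shape [batch_size, 2], so the bias has to be a length-2 vector rather than [4, 2]. A possible fix of the hand-rolled version, keeping your weight_variable/bias_variable helpers, would be:
Weight_matrix = weight_variable([4, 2])  # 4 inputs (2 from dnn_out + 2 from z) -> 2 outputs
bias = bias_variable([2])                # one bias per output unit, broadcast over the batch
concat = tf.concat((dnn_out, z), axis=1)
h_fc3 = tf.nn.relu(tf.matmul(concat, Weight_matrix) + bias)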

Tensorflow, Cannot feed value of shape ..... for Tensor

I have a problem with linear regression and 3d matrices.
They are all floating point numbers, with labels.
I got started from this code but I changed the matrix:
https://aqibsaeed.github.io/2016-07-07-TensorflowLR/
With 2 dimensions it works well, but with 3 I cannot get it running.
These are the shapes:
train_x.shape: (387, 7, 10)
train_y.shape: (387, 1)
test_x.shape: (43, 7, 10)
test_y.shape: (43, 1)
n_dim = f.shape[1]
train_x, test_x, train_y, test_y = train_test_split(f, l, test_size=0.1, shuffle=False)
print(train_x.shape)
print(train_y.shape)
print(test_x.shape)
print(test_y.shape)

learning_rate = 0.01
training_epochs = 1000
cost_history = np.empty(shape=[1], dtype=float)

X = tf.placeholder(tf.float32, [None, n_dim])
Y = tf.placeholder(tf.float32, [None, 1])
W = tf.Variable(tf.ones([n_dim, 1]))

#init = tf.initialize_all_variables()
init = tf.global_variables_initializer()

y_ = tf.matmul(X, W)
cost = tf.reduce_mean(tf.square(y_ - Y))
training_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)

sess = tf.Session()
sess.run(init)
for epoch in range(training_epochs):
    sess.run(training_step, feed_dict={X: train_x, Y: train_y})
    cost_history = np.append(cost_history, sess.run(cost, feed_dict={X: train_x, Y: train_y}))

plt.plot(range(len(cost_history)), cost_history)
plt.axis([0, training_epochs, 0, np.max(cost_history)])
plt.show()

pred_y = sess.run(y_, feed_dict={X: test_x})
mse = tf.reduce_mean(tf.square(pred_y - test_y))
print("MSE: %.4f" % sess.run(mse))

fig, ax = plt.subplots()
ax.scatter(test_y, pred_y)
ax.plot([test_y.min(), test_y.max()], [test_y.min(), test_y.max()], 'k--', lw=3)
ax.set_xlabel('Measured')
ax.set_ylabel('Predicted')
plt.show()
This is the error:
\session.py", line 1100, in _run
% (np_val.shape, subfeed_t.name, str(subfeed_t.get_shape())))
ValueError: Cannot feed value of shape (387, 7, 10) for Tensor 'Placeholder_12:0', which has shape '(?, 7)'
Your error message shows the exact reason it is raised:
the dimensions of the placeholder and of train_x don't match.
train_x has shape (387, 7, 10). By the usual convention, that is 387 data points, each of dimension (7, 10).
But X (the placeholder, the bucket you will feed train_x into) has shape [None, n_dim] (I guess n_dim is 7).
The None in the first position only stands for the number of data points, not for the dimension of your data.
So you need to change [None, n_dim] to [None, 7, 10] in this case.
Edit:
In this case, X is not exactly 3D data, just a bunch of 2D data points. Direct weight multiplication of 2D data would need a convolution step, i.e. a CNN. But since your data matrices are very small, you can simply reshape each (7, 10) matrix into a (7*10) vector.
Use tf.reshape for that: tf.reshape(X, shape=[-1, 7*10]) will work (the -1 lets it handle both the 387 training and the 43 test samples), and also change W to the matching dimension, e.g. tf.Variable(tf.ones([7*10, 1])).
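Putting the two suggestions together, a minimal sketch of the corrected graph (keeping the original variable names) could look like:
import tensorflow as tf

X = tf.placeholder(tf.float32, [None, 7, 10])   # one (7, 10) matrix per sample
Y = tf.placeholder(tf.float32, [None, 1])

X_flat = tf.reshape(X, [-1, 7 * 10])            # flatten each sample to a 70-vector
W = tf.Variable(tf.ones([7 * 10, 1]))           # weights match the flattened dimension
y_ = tf.matmul(X_flat, W)                       # shape (batch, 1)
cost = tf.reduce_mean(tf.square(y_ - Y))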

K-means example (tf.expand_dims)

In the example K-means code for TensorFlow, the function tf.expand_dims (which inserts a dimension of 1 into a tensor's shape) is applied to points and centroids to build points_expanded and centroids_expanded before tf.reduce_sum is calculated.
Why do the two calls use different indexes (0 and 1) as the second parameter?
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt  # needed for the plots below

points_n = 200
clusters_n = 3
iteration_n = 100

points = tf.constant(np.random.uniform(0, 10, (points_n, 2)))
centroids = tf.Variable(tf.slice(tf.random_shuffle(points), [0, 0], [clusters_n, -1]))

points_expanded = tf.expand_dims(points, 0)
centroids_expanded = tf.expand_dims(centroids, 1)

distances = tf.reduce_sum(tf.square(tf.subtract(points_expanded, centroids_expanded)), 2)
assignments = tf.argmin(distances, 0)

means = []
for c in range(clusters_n):
    means.append(tf.reduce_mean(
        tf.gather(points, tf.reshape(tf.where(tf.equal(assignments, c)), [1, -1])),
        reduction_indices=[1]))
new_centroids = tf.concat(means, 0)
update_centroids = tf.assign(centroids, new_centroids)

init = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)
    for step in range(iteration_n):
        [_, centroid_values, points_values, assignment_values] = sess.run(
            [update_centroids, centroids, points, assignments])

    print("centroids" + "\n", centroid_values)
    plt.scatter(points_values[:, 0], points_values[:, 1], c=assignment_values, s=50, alpha=0.5)
    plt.plot(centroid_values[:, 0], centroid_values[:, 1], 'kx', markersize=15)
    plt.show()
This is done to subtract each centroid from each point. First, make sure you understand the notion of broadcasting (https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
that is linked from tf.subtract (https://www.tensorflow.org/api_docs/python/tf/subtract). Then, you just need to draw the shapes of points, points_expanded, centroids, and centroids_expanded and understand which values get "broadcast" where. Once you do that, you will see that broadcasting allows you to compute exactly what you want: the difference between each point and each centroid.
As a sanity check, since there are 200 points, 3 centroids, and each is 2D, we should have 200*3*2 differences. This is exactly what we get:
In [53]: points
Out[53]: <tf.Tensor 'Const:0' shape=(200, 2) dtype=float64>
In [54]: points_expanded
Out[54]: <tf.Tensor 'ExpandDims_4:0' shape=(1, 200, 2) dtype=float64>
In [55]: centroids
Out[55]: <tf.Variable 'Variable:0' shape=(3, 2) dtype=float64_ref>
In [56]: centroids_expanded
Out[56]: <tf.Tensor 'ExpandDims_5:0' shape=(3, 1, 2) dtype=float64>
In [57]: tf.subtract(points_expanded, centroids_expanded)
Out[57]: <tf.Tensor 'Sub_5:0' shape=(3, 200, 2) dtype=float64>
If you are having trouble drawing the shapes, you can think of broadcasting points_expanded, of shape (1, 200, 2), to shape (3, 200, 2) as copying the 200x2 matrix 3 times along the first dimension. Likewise, the 3x2 matrix in centroids_expanded (of shape (3, 1, 2)) gets copied 200 times along the second dimension.
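For intuition, the same broadcast can be reproduced with plain NumPy on a tiny example (sizes chosen arbitrarily for illustration):
import numpy as np

points = np.random.uniform(0, 10, (4, 2))            # 4 points in 2D
centroids = points[:3].copy()                        # 3 centroids in 2D

diff = points[None, :, :] - centroids[:, None, :]    # (3, 4, 2): each point minus each centroid
distances = (diff ** 2).sum(axis=2)                  # (3, 4) squared distances
assignments = distances.argmin(axis=0)               # nearest centroid per point, shape (4,)
print(assignments)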