I have to swap a tensor's axes using tf.transpose to do a batch matrix multiplication (as shown in the code below).
tensor input_a: shape [10000, 10000]
tensor input_b: shape [batch_size, 10000, 10]
tensor output: shape [batch_size, 10000, 10]
# transpose_input_b: shape [10000, batch_size, 10]
transpose_input_b = tf.transpose(input_b, [1, 0, 2])
# reshape_input_b: shape [10000, batch_size * 10]
reshape_input_b = tf.reshape(transpose_input_b, [10000, -1])
# ret: shape [10000, batch_size * 10]
ret = tf.matmul(input_a, reshape_input_b, a_is_sparse=True)
# reshape_ret: [10000, batch_size, 10]
reshape_ret = tf.reshape(ret, [10000, -1, 10])
# output : [batch_size, 10000, 10]
output = tf.transpose(reshape_ret, [1, 0, 2])
However, it seems very slow. I noticed this on the documentation page for tf.transpose:
In numpy transposes are memory-efficient constant time operations as they simply return a new view of the same data with adjusted strides.
TensorFlow does not support strides, so transpose returns a new tensor with the items permuted.
So, I think this might be the reason why my code runs slowly? Is there any way to swap a tensor's axes, or do the batch matrix multiplication, more efficiently?
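For reference, one alternative worth benchmarking (my sketch, not part of the original question) is tf.einsum, which expresses the batched product in a single op with no explicit transposes or reshapes; note that it gives up the a_is_sparse hint:
import tensorflow as tf

input_a = tf.placeholder(tf.float32, [10000, 10000])
input_b = tf.placeholder(tf.float32, [None, 10000, 10])
# output[b, i, k] = sum_j input_a[i, j] * input_b[b, j, k]
output = tf.einsum('ij,bjk->bik', input_a, input_b)  # shape [batch_size, 10000, 10]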
I have a tensor X of shape (N, ...) and a boolean index mask mask of shape (N,). I want to shuffle the subarray of X given by mask along the first axis.
How can this be done non-eagerly and, if possible, in place?
Note: I do not need gradients.
You can do that like this:
import tensorflow as tf
def shuffle_mask(x, mask, seed=None):
    n = tf.size(mask)
    # Get masked indices
    idx_masked = tf.cast(tf.where(mask), n.dtype)
    # Shuffle masked indices
    idx_masked_shuffled = tf.random.shuffle(tf.squeeze(idx_masked, 1), seed=seed)
    # Scatter shuffled indices into place
    idx_masked_shuffled_scat = tf.scatter_nd(idx_masked, idx_masked_shuffled, [n])
    # Combine shuffled and non-shuffled indices
    idx_shuffled = tf.where(mask, idx_masked_shuffled_scat, tf.range(n))
    # Gather using resulting indices
    return tf.gather(x, idx_shuffled)
# Test
with tf.Graph().as_default(), tf.Session() as sess:
    tf.random.set_random_seed(0)
    x = tf.constant([[0, 1], [2, 3], [4, 5], [6, 7], [8, 9]])
    mask = tf.constant([True, False, True, True, False])
    y = shuffle_mask(x, mask)
    print(sess.run(y))
# [[6 7]
# [2 3]
# [0 1]
# [4 5]
# [8 9]]
You cannot do the operation "in place", as there are no in-place operations at all in TensorFlow. Tensors are immutable, so you will always be replacing one tensor with another.
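If you are on TensorFlow 2.x (an assumption; the snippet above targets the 1.x graph API), the same shuffle_mask function runs eagerly without a session:
x = tf.constant([[0, 1], [2, 3], [4, 5], [6, 7], [8, 9]])
mask = tf.constant([True, False, True, True, False])
print(shuffle_mask(x, mask, seed=0).numpy())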
I am doing image semantic segmentation with a U-Net. I am confused by the last layers for pixel classification. The U-Net code is like this:
...
reshape = Reshape((n_classes,self.img_rows * self.img_cols))(conv9)
permute = Permute((2,1))(reshape)
activation = Activation('softmax')(permute)
model = Model(input = inputs, output = activation)
return model
...
Can I just reshape without using Permute like this?
reshape = Reshape((self.img_rows * self.img_cols, n_classes))(conv9)
Updated:
I found the training result is not right when using the direct reshape:
reshape = Reshape((self.img_rows * self.img_cols, n_classes))(conv9)  # the loss does not converge
My ground truth is generated like this:
X = []
Y = []
im = cv2.imread(impath)
X.append(im)
seg_labels = np.zeros((height, width, n_classes))
for c, spath in enumerate(segpaths):
    mask = cv2.imread(spath, 0)
    seg_labels[:, :, c] += mask
Y.append(seg_labels.reshape(width*height, n_classes))
Why reshape directly does not work?
You clearly misunderstand the meaning of each operation and the final goal:
final goal: classification for each pixel, i.e. softmax along the semantic class axis
how to achieve this goal in the original code? Let's see the code line by line:
reshape = Reshape((n_classes,self.img_rows * self.img_cols))(conv9) # L1
permute = Permute((2,1))(reshape) # L2
activation = Activation('softmax')(permute) # L3
L1's output dim = n_class-by-n_pixs (n_pixs = img_rows x img_cols)
L2's output dim = n_pixs-by-n_class
L3's output dim = n_pixs-by-n_class
Note the default softmax activation is applied to the last axis, i.e. the axis that n_class stands for, which is the semantic class axis.
Therefore, this original code fulfills the final goal of semantic segmentation.
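To make the last-axis softmax concrete, here is a small NumPy sketch (the logit values are made up):
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

logits = np.array([[0., 1.],   # one row of class scores per pixel
                   [2., 0.],
                   [1., 1.]])  # shape: n_pixs-by-n_class
probs = softmax(logits)        # softmax along the class axis
print(probs.sum(axis=1))       # [1. 1. 1.] -- one distribution per pixel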
Let's revisit the code that you want to change, which is
reshape = Reshape((self.img_rows * self.img_cols, n_classes))(conv9) # L4
L4's output dim = n_pixs-by-n_class
My guess is that you think L4's output dim matches L2's, and thus L4 is a short-cut that is equivalent to executing L1 and L2.
However, matching the shape does not necessarily mean matching the physical meaning of axes. Why? A simple example will explain.
Say you have 2 semantic classes and 3 pixels. To see the difference assume all three pixels belong to the same class.
In other words, a ground truth tensor will look like this
# cls#1 cls#2
[ [0, 1], # pixel #1
[0, 1], # pixel #2
[0, 1], # pixel #3
]
Assume you have a perfect network that generates the exact response for each pixel; your solution will still create a tensor like the one below
# cls#1 cls#2
[ [0, 0], # pixel #1
[0, 1], # pixel #2
[1, 1], # pixel #3
]
whose shape is the same as the ground truth's, but fails to match the physical meaning of axes.
This further makes the softmax operation meaningless, because it is supposed to apply to the class dimension, but this dimension does not physically exist. As a result, it leads to the following erroneous output after applying softmax,
# cls#1 cls#2
[ [0.5, 0.5],   # pixel #1
  [0.27, 0.73], # pixel #2 (softmax([0, 1]) ≈ [0.27, 0.73])
  [0.5, 0.5],   # pixel #3
]
which completely messes up the training even under this ideal assumption.
Therefore, it is a good habit to write down the physical meaning of each axis of a tensor. When you do any tensor reshape operation, ask yourself whether the physical meaning of an axis is changed in your expected way.
For example, if you have a tensor T of shape batch_dim x img_rows x img_cols x feat_dim, you can do many things, and not all of them make sense (due to the problematic physical meaning of axes):
(Wrong) reshape it to whatever x feat_dim, because the whatever dimension is meaningless at test time, where the batch_size might be different.
(Wrong) reshape it to batch_dim x feat_dim x img_rows x img_cols, because the 2nd dimension is NOT the feature dimension, and neither are the 3rd and 4th dimensions.
(Correct) permute axes (3,1,2), and this will give you a tensor of shape batch_dim x feat_dim x img_rows x img_cols, while keeping the physical meaning of each axis.
(Correct) reshape it to batch_dim x whatever x feat_dim. This is also valid, because whatever = img_rows x img_cols is equivalent to the pixel location dimension, and both the meanings of batch_dim and feat_dim are unchanged (the sketch below verifies this).
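A minimal NumPy sketch of the last point (sizes are made up), checking that merging only the spatial axes keeps each pixel's feature vector intact:
import numpy as np

batch_dim, img_rows, img_cols, feat_dim = 2, 3, 3, 4
T = np.random.rand(batch_dim, img_rows, img_cols, feat_dim)
# Merge only the spatial axes: batch_dim x (img_rows * img_cols) x feat_dim
T_flat = T.reshape(batch_dim, img_rows * img_cols, feat_dim)
# The feature vector of pixel (r, c) is unchanged
r, c = 1, 2
assert np.array_equal(T_flat[0, r * img_cols + c], T[0, r, c])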
Your code will still run since the shapes are the same, but the result (and its backprop) will be different since the values of the tensors will be different. For example:
arr = np.array([[[1,1,1],[1,1,1]],[[2,2,2],[2,2,2]],[[3,3,3],[3,3,3]],[[4,4,4],[4,4,4]]])
arr.shape
>>>(4, 2, 3)
#do reshape, then permute
reshape_1 = arr.reshape((4, 2*3))
np.swapaxes(reshape_1, 1, 0)
>>>array([[1, 2, 3, 4],
[1, 2, 3, 4],
[1, 2, 3, 4],
[1, 2, 3, 4],
[1, 2, 3, 4],
[1, 2, 3, 4]])
#do reshape directly
reshape_2 = arr.reshape(2*3, 4)
reshape_2
>>>array([[1, 1, 1, 1],
[1, 1, 2, 2],
[2, 2, 2, 2],
[3, 3, 3, 3],
[3, 3, 4, 4],
[4, 4, 4, 4]])
The Reshape and Permute are done to take the softmax at each pixel location. Adding to @meowongac's answer, Reshape preserves the order of the elements. In this case, since the channel dimension has to be moved, Reshape followed by Permute is appropriate.
Consider the case of a (2,2) image with 3 values at each location:
arr = np.array([[[1,1],[1,1]],[[2,2],[2,2]],[[3,3],[3,3]]])
>>> arr.shape
(3, 2, 2)
>>> arr
array([[[1, 1],
[1, 1]],
[[2, 2],
[2, 2]],
[[3, 3],
[3, 3]]])
>>> arr[:,0,0]
array([1, 2, 3])
The channel values at each location are [1,2,3]. The goal is to move the channel axis (length 3) to the end.
>>> arr.reshape((2,2,3))[0,0]
array([1, 1, 1]) # incorrect
>>> arr.transpose((1,2,0))[0,0] # similar to what permute does.
array([1, 2, 3]) # correct
More examples at this link: https://discuss.pytorch.org/t/how-to-change-shape-of-a-matrix-without-dispositioning-the-elements/30708
I'm trying to build a neural network where the labels and the number of labels change with the input. For example, I could have a final layer of 10 units that represent the logits of their classes, but sometimes I will only need units [1,3,4] to calculate the cross entropy, sometimes units [3,4,5,7], etc.
I tried using different combinations of map_fn, gather, py_func and while_loop, but none seems to fit my case. Another way might be to list all possible label combinations (I call them network heads) and find some conditional construct that allows me to choose one based on the value of a placeholder. But I'm not sure how to implement it.
For example:
x = tf.placeholder(dtype=tf.float32, shape=[None,3])
y = tf.placeholder(dtype=tf.int32, shape=[None, 3])
... to_do ...
with tf.Session() as sess:
    sess.run(to_do, feed_dict={x: [[1, 3, 4], [3, 7, 8]], y: [[1, 0, 0], [0, 1, 1]]})
Here I need something that returns [[1], [7, 8]].
Never mind. There was a very easy way to get the probabilities I needed for the cross-entropy.
x = tf.placeholder(dtype=tf.float32, shape=[None,3])
y = tf.placeholder(dtype=tf.int32, shape=[None, 3])
probabilities = tf.where(tf.equal(y, 1), tf.exp(x), tf.zeros_like(x))
normalizing_sum = tf.reduce_sum(probabilities, 1, keep_dims=True)
probabilities /= normalizing_sum
with tf.Session() as sess:
    res = sess.run(probabilities, feed_dict={x: [[1, 3, 4], [3, 7, 8]], y: [[1, 0, 0], [0, 1, 1]]})
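Working this out by hand for the feed values above: in the first row only unit 0 is selected, so it receives all the mass; in the second row the masked softmax over units 1 and 2 gives e^7/(e^7+e^8) ≈ 0.269 and e^8/(e^7+e^8) ≈ 0.731. So res ≈ [[1., 0., 0.], [0., 0.269, 0.731]].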
I have an input to TensorFlow of shape [None, 9, 2] (where the None is the batch dimension).
To perform further operations on it (e.g. matmul) I need to transform it to shape [None, 18]. How can I do that?
You can do it easily with tf.reshape() without knowing the batch size.
import numpy
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 9, 2])
shape = x.get_shape().as_list()  # a list: [None, 9, 2]
dim = numpy.prod(shape[1:])      # dim = prod(9, 2) = 18
x2 = tf.reshape(x, [-1, dim])    # -1 means "all"
The -1 in the last line means the remaining dimension, whatever the batch size is at runtime. See the docs for tf.reshape().
Update: shape = [None, 3, None]
Thanks @kbrose. For cases where more than one dimension is undefined, we can use tf.shape() with tf.reduce_prod() instead.
x = tf.placeholder(tf.float32, shape=[None, 3, None])
dim = tf.reduce_prod(tf.shape(x)[1:])
x2 = tf.reshape(x, [-1, dim])
tf.shape() returns a shape Tensor which can be evaluated at runtime. The difference between x.get_shape() and tf.shape() can be seen in the docs.
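A minimal illustration of the difference, reusing the placeholder above:
import numpy as np
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 3, None])
print(x.get_shape().as_list())   # [None, 3, None] -- static shape, partly unknown
dyn_shape = tf.shape(x)          # a Tensor; yields the actual sizes when run
with tf.Session() as sess:
    print(sess.run(dyn_shape, feed_dict={x: np.zeros((5, 3, 7))}))  # [5 3 7]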
I also tried tf.layers.flatten() in another answer. It is the simplest for the first case, but it can't handle the second.
flat_inputs = tf.layers.flatten(inputs)
You can use dynamic reshaping to get the value of the batch dimension through tf.shape() at runtime, and calculate the whole set of new dimensions to pass to tf.reshape(). Here's an example of reshaping a flat list into a square matrix without knowing the list length.
import numpy as np
import tensorflow as tf

tf.reset_default_graph()
sess = tf.InteractiveSession()
a = tf.placeholder(dtype=tf.int32)
# get the dynamic shape, e.g. [9]
ashape = tf.shape(a)
# slice the shape vector from the 0th to the 1st position
ashape0 = tf.slice(ashape, [0], [1])
# reshape from [9] to the scalar 9
ashape0_flat = tf.reshape(ashape0, ())
# tf.sqrt doesn't support int, so cast to float
ashape0_flat_float = tf.to_float(ashape0_flat)
newshape0 = tf.sqrt(ashape0_flat_float)
# stack the two scalars into the 1-D shape tensor [3., 3.]
newshape = tf.stack([newshape0, newshape0])
# tf.reshape doesn't accept float, so convert back to int
newshape_int = tf.to_int32(newshape)
a_reshaped = tf.reshape(a, newshape_int)
sess.run(a_reshaped, feed_dict={a: np.ones((9))})
You should see
array([[1, 1, 1],
[1, 1, 1],
[1, 1, 1]], dtype=int32)
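The same idea can be written more compactly (my condensation of the code above, not a different technique):
n = tf.shape(a)[0]                            # dynamic length, e.g. 9
side = tf.to_int32(tf.sqrt(tf.to_float(n)))   # 3
a_square = tf.reshape(a, tf.stack([side, side]))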