I would like to whiten each image in a batch. The code I have to do so is this:
def whiten(self, x):
    shape = x.shape
    x = K.batch_flatten(x)
    mn = K.mean(x, 0)
    std = K.std(x, 0) + K.epsilon()
    r = (x - mn) / std
    # reshape r (the whitened tensor), not x, back to the original image shape
    r = K.reshape(r, (-1, shape[1], shape[2], shape[3]))
    return r
where x is (?, 320,320,1). I am not keen on the reshape function with a -1 arg. Is there a cleaner way to do this?
Let's see what the -1 does. From the TensorFlow documentation (because the Keras documentation is scarce compared to TensorFlow's):
If one component of shape is the special value -1, the size of that dimension is computed so that the total size remains constant.
So what this means:
import tensorflow as tf
from keras import backend as K

X = tf.constant([1, 2, 3, 4, 5])
K.reshape(X, [-1, 5])
# Add one more dimension; the number of columns should be 5, and the total number of elements stays constant
# [[1 2 3 4 5]]
X = tf.constant([1,2,3,4,5,6])
K.reshape(X, [-1, 3])
# Add one more dimension, the number of columns should be 3
# For the number of elements to be constant the number of rows should be 2
# [[1 2 3]
# [4 5 6]]
I think it is simple enough. So here is what happens in your code:
# Let's assume we have 5 images, 320x320 with 3 channels
X = tf.ones((5, 320, 320, 3))
shape = X.shape
# Let's flat the tensor so we can perform the rest of the computation
flatten = K.batch_flatten(X)
# What this does: turn an nD tensor into a 2D tensor with the same 0th dimension
# (taken directly from the documentation; let's see that below)
flatten.shape
# (5, 307200)
# So all the other elements were squeezed in 1 dimension while keeping the batch_size the same
# ...The rest of the stuff in your code is executed here...
# So we did all we wanted and now we want to revert the tensor in the shape it had previously
r = K.reshape(flatten, (-1, shape[1], shape[2], shape[3]))
r.shape
# (5, 320, 320, 3)
Other than that, I can't think of a cleaner way to do what you want. If you ask me, your code is already clear enough.
Related
I have an idea for a tensor operation that would not be difficult to implement via iteration, with batch size one. However, I would like to parallelize it as much as possible.
I have two tensors with shape (n, 5) called X and Y. X is actually supposed to represent 5 one-dimensional tensors with shape (n, 1): (x_1, ..., x_n). Ditto for Y.
I would like to compute a tensor with shape (n, 25) where each column represents the output of the tensor operation f(x_i, y_j), where f is fixed for all 1 <= i, j <= 5. The operation f has output shape (n, 1), just like x_i and y_i.
I feel it is important to clarify that f is essentially a fully-connected layer from the concatenated [...x_i, ...y_i] tensor with shape (1, 10), to an output layer with shape (1,5).
Again, it is easy to see how to do this manually with iteration and slicing. However, this is probably very slow. Performing this operation in batches, where the tensors X and Y now have shape (n, 5, batch_size), is also desirable, particularly for mini-batch gradient descent.
It is difficult to really articulate here why I desire to create this network; I feel it is suited for my domain of 'itemized tabular data' and cuts down significantly on the number of weights per operation, compared to a fully connected network.
Is this possible using TensorFlow? Certainly not using just Keras.
Below is an example in NumPy, per AloneTogether's request:
import numpy as np

features = 16
batch_size = 256
X_batch = np.random.random((features, 5, batch_size))
Y_batch = np.random.random((features, 5, batch_size))

# one tensor operation to reduce weights in this custom 'layer'
f = np.random.random((features, 2 * features))

for b in range(batch_size):
    X = X_batch[:, :, b]
    Y = Y_batch[:, :, b]
    for i in range(5):
        x_i = X[:, i:i+1]
        for j in range(5):
            y_j = Y[:, j:j+1]
            x_i_y_j = np.concatenate([x_i, y_j], axis=0)
            # f(x_i, y_j), implemented by a fully-connected layer
            f_i_j = np.matmul(f, x_i_y_j)
All the operations you need (concatenation and matrix multiplication) can be batched.
The difficult part is that you want to concatenate the features of all items in X with the features of all items in Y (all combinations).
My recommended solution is to expand the dimensions of X to [batch, features, 5, 1] and the dimensions of Y to [batch, features, 1, 5].
Then tf.repeat() both tensors so their shapes become [batch, features, 5, 5].
Now you can concatenate X and Y, giving a tensor of shape [batch, 2*features, 5, 5]. Observe that this way all combinations are built.
The next step is matrix multiplication. tf.matmul() can also do batched matrix multiplication, but I use tf.einsum() here because I want more control over which dimensions are treated as batch dimensions.
Full code:
import tensorflow as tf
import numpy as np

batch_size = 3
features = 6
items = 5

x = np.random.uniform(size=[batch_size, features, items])
y = np.random.uniform(size=[batch_size, features, items])
f = np.random.uniform(size=[2 * features, features])

# expand and repeat so every x_i is paired with every y_j
x_reps = tf.repeat(x[:, :, :, tf.newaxis], items, axis=3)  # [batch, features, items, items]
y_reps = tf.repeat(y[:, :, tf.newaxis, :], items, axis=2)  # [batch, features, items, items]

# all combinations: [batch, 2*features, items, items]
xy_conc = tf.concat([x_reps, y_reps], axis=1)

# contract the feature dimension against f, batching over b, i, j
f_i_j = tf.einsum("bfij,fg->bgij", xy_conc, f)  # [batch, features, items, items]
f_i_j = tf.reshape(f_i_j, [batch_size, features, items * items])
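As a quick sanity check (my addition, assuming TF2 eager execution so tensors can be compared directly with NumPy), one (i, j) entry can be verified against an explicit computation; with the reshape above, combination (i, j) lands in column i * items + j:
# verify f_i_j[0, :, i*items + j] equals f applied to [x_i; y_j], here for i=2, j=4
xy = np.concatenate([x[0, :, 2], y[0, :, 4]])  # [2*features]
np.testing.assert_allclose(f_i_j[0, :, 2 * items + 4], xy @ f)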
Suppose I have a pytorch tensor x of shape [N, N_g, 2]. It can be viewed as N * N_g 2d vectors. Specifically, x[i, j, :] is the 2d vector of the jth group in the ith batch.
Now I am trying to get the coordinates of the vectors with the top-5 lengths in each group. So I tried the following:
(i) First I used x_len = (x**2).sum(dim=2).sqrt() to compute their lengths, resulting in x_len.shape==[N, N_g].
(ii) Then I used tk = x_len.topk(5) to get the top 5 lengths in each group.
(iii) The desired output would be a tensor x_top5 of shape [N, 5, 2]. Naturally I thought of using tk.indices to index x so as to obtain x_top5. But I failed as it seems such indexing is not supported.
How can I do this?
A minimal example:
x = torch.randn(10, 10, 2)  # N=10 is the batch size, N_g=10 is the group size
x_len = (x**2).sum(dim=2).sqrt()
tk = x_len.topk(5)
x_top5 = x[tk.indices]
print(x_top5.shape)
# torch.Size([10, 5, 10, 2])
However, this gives x_top5 as a tensor of shape [10, 5, 10, 2], instead of [10, 5, 2] as desired.
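One way to get the desired result (a minimal sketch using torch.gather, assuming the setup above): expand tk.indices so it matches x's last dimension, then gather along the group dimension:
idx = tk.indices.unsqueeze(-1).expand(-1, -1, 2)  # [N, 5, 2]
x_top5 = torch.gather(x, 1, idx)  # out[i, j, :] = x[i, tk.indices[i, j], :]
print(x_top5.shape)
# torch.Size([10, 5, 2])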
I'm having some trouble porting some code over from TensorFlow to PyTorch.
So I have a matrix with dimensions 10x30 representing 10 examples, each with 30 features. Then I have another matrix with dimensions 10x5 containing the indices of the 5 closest examples for each example in the first matrix. I want to 'gather', using the indices contained in the second matrix, the 5 closest examples for each example in the first matrix, leaving me with a 3D tensor of shape 10x5x30.
In TensorFlow this is done with tf.gather(matrix1, matrix2). Does anyone know how I could do this in PyTorch?
How about this?
import torch

matrix1 = torch.randn(10, 30)                   # 10 examples, 30 features
matrix2 = torch.randint(high=10, size=(10, 5))  # 5 neighbour indices per example
gathered = matrix1[matrix2]                     # shape: (10, 5, 30)
It uses the trick of indexing with an array of integers.
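For comparison (my aside, not part of the original answer), the same result can be produced with torch.gather by expanding both tensors so the index matches the output shape:
# out[i, j, k] = matrix1[matrix2[i, j], k]
gathered2 = torch.gather(matrix1.unsqueeze(0).expand(10, -1, -1), 1,
                         matrix2.unsqueeze(-1).expand(-1, -1, 30))
assert torch.equal(gathered, gathered2)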
I had a scenario where I had to apply gather() to a tensor of integers.
Example 1: calling gather() as a tensor method
# usage: my_tensor.gather(dim, index_tensor)
my_list = [0, 1, 2, 3, 4]
my_tensor = torch.IntTensor(my_list)
index_tensor = torch.tensor([1])  # the index must have the same number of dimensions as my_tensor
output = my_tensor.gather(0, index_tensor)  # 0 is the dimension to gather along
# output: tensor([1], dtype=torch.int32)
Example 2: calling torch.gather() as a function
# usage: torch.gather(param_tensor, dim, index_tensor)
my_list = [0, 1, 2, 3, 4]
my_tensor = torch.IntTensor(my_list)
index_tensor = torch.tensor([1])
output = torch.gather(my_tensor, 0, index_tensor)  # same result as above
I am trying to feed data into an LSTM. I am reviewing the code from TensorFlow's RNN tutorial here.
The segment of code of interest is from the reader.py file of the tutorial, in particular the ptb_producer function, which outputs the X and Y that are used by the LSTM.
raw_data is a list of indexes of words from the ptb.train.txt file. The length of raw_data is 929,589. batch_size is 20 and num_steps is 35. Both batch_size and num_steps are based on the LargeConfig, which feeds the data to an LSTM.
I have walked through the code (and added comments for what I've printed) and I understand it up until tf.strided_slice. After the reshape, we have a matrix of indexes of shape (20, 46497).
In the iteration where i is 1, the strided slice for y tries to take data from [0, i * num_steps + 1], which is [0, 1*35+1], up to [batch_size, (i + 1) * num_steps + 1], which is [20, (1+1)*35+1].
Two questions:
Where in the matrix are [0, 1*35+1] and [20, (1+1)*35+1]? Which spots in the (20, 46497) matrix do the begin and end of strided_slice access?
It seems like EVERY iteration of i will take in data from 0, the very start of the data matrix (20, 46497)?
I guess what I am not understanding is how you would feed data into an LSTM, given the batch size and the num_steps (sequence length).
I have read Colah's blog on LSTMs and Karpathy's blog on RNNs, which help greatly in understanding LSTMs, but they don't seem to address the exact mechanics of getting data into an LSTM. (Maybe I missed something?)
def ptb_producer(raw_data, batch_size, num_steps, name=None):
  """Iterate on the raw PTB data.

  This chunks up raw_data into batches of examples and returns Tensors that
  are drawn from these batches.

  Args:
    raw_data: one of the raw data outputs from ptb_raw_data.
    batch_size: int, the batch size.
    num_steps: int, the number of unrolls.
    name: the name of this operation (optional).

  Returns:
    A pair of Tensors, each shaped [batch_size, num_steps]. The second
    element of the tuple is the same data time-shifted to the right by one.

  Raises:
    tf.errors.InvalidArgumentError: if batch_size or num_steps are too high.
  """
  with tf.name_scope(name, "PTBProducer", [raw_data, batch_size, num_steps]):
    raw_data = tf.convert_to_tensor(raw_data, name="raw_data", dtype=tf.int32)

    data_len = tf.size(raw_data)        # prints 929,589
    batch_len = data_len // batch_size  # prints 46,497
    data = tf.reshape(raw_data[0:batch_size * batch_len],
                      [batch_size, batch_len])
    # this truncates raw_data to a multiple of batch_size=20,
    # then reshapes to [20, 46497]; prints (20, ?)

    epoch_size = (batch_len - 1) // num_steps  # prints 1327 (iterations per epoch)
    assertion = tf.assert_positive(
        epoch_size,
        message="epoch_size == 0, decrease batch_size or num_steps")
    with tf.control_dependencies([assertion]):
      epoch_size = tf.identity(epoch_size, name="epoch_size")

    i = tf.train.range_input_producer(epoch_size, shuffle=False).dequeue()
    # for each of the 1327 iterations
    x = tf.strided_slice(data, [0, i * num_steps],
                         [batch_size, (i + 1) * num_steps])  # prints (?, ?)
    x.set_shape([batch_size, num_steps])  # prints (20, 35)
    y = tf.strided_slice(data, [0, i * num_steps + 1],
                         [batch_size, (i + 1) * num_steps + 1])
    y.set_shape([batch_size, num_steps])
    return x, y
Where in the matrix are [0, 1*35+1] and [20, (1+1)*35+1]? Which spots in the (20, 46497) matrix do the begin and end of strided_slice access?
So the data matrix is arranged as 20 x 46497. Think of it as 20 sentences with 46497 characters each. The strided slice in your specific example will get characters 36 to 70 (71 is not included, typical range semantics) from each line. This is equivalent to
strided = [data[i][36:71] for i in range(20)]
It seems like EVERY iteration of i will take in data from 0, the very start of the data matrix (20, 46497)?
This is correct. For each iteration we want batch_size elements in the first dimension, i.e. we want a matrix of size batch_size x num_steps. Hence the first dimension always goes from 0 to 19, essentially looping over each sentence, while the second dimension increments in steps of num_steps, getting segments of words from each sentence. If we look at the indices it extracts from data at each step for x, we get something like:
- data[0:20,0:35]
- data[0:20,35:70]
- data[0:20,70:105]
- ...
For y we add 1 because we want the next character to be predicted.
So your (train, label) tuples will look like this (a short sketch of the pattern follows the list):
- (data[0:20,0:35], data[0:20, 1:36])
- (data[0:20,35:70], data[0:20,36:71])
- (data[0:20,70:105], data[0:20,71:106])
- ...
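A minimal NumPy sketch of this iteration pattern (my illustration, not code from the tutorial):
import numpy as np

batch_size, num_steps = 20, 35
data = np.arange(20 * 46497).reshape(20, 46497)  # stand-in for the reshaped PTB data

for i in range(3):  # first three iterations
    x = data[0:batch_size, i * num_steps:(i + 1) * num_steps]
    y = data[0:batch_size, i * num_steps + 1:(i + 1) * num_steps + 1]
    print(x.shape, y.shape)  # (20, 35) (20, 35) each time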
I have a matrix L of shape (2,5,2). The values along the last axis form a probability distribution. I want to sample another matrix S of shape (2, 5) where each entry is one of the following integers: 0, 1.
For example,
L = [[[0.1, 0.9],[0.2, 0.8],[0.3, 0.7],[0.5, 0.5],[0.6, 0.4]],
[[0.5, 0.5],[0.9, 0.1],[0.7, 0.3],[0.9, 0.1],[0.1, 0.9]]]
One of the samples could be,
S = [[1, 1, 1, 0, 1],
[1, 1, 1, 0, 1]]
The distributions are binomial in the above example. However, in general, the last dimension of L can be any positive integer, so the distributions can be multinomial.
The samples need to be generated efficiently within a TensorFlow computation graph. I know how to do this with NumPy, using the functions apply_along_axis and numpy.random.multinomial.
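For reference, a minimal NumPy sketch of that approach (my reconstruction of what the question alludes to, not the asker's code):
import numpy as np

L = np.array([[[0.1, 0.9], [0.2, 0.8], [0.3, 0.7], [0.5, 0.5], [0.6, 0.4]],
              [[0.5, 0.5], [0.9, 0.1], [0.7, 0.3], [0.9, 0.1], [0.1, 0.9]]])

# draw one sample per distribution along the last axis;
# argmax turns the one-hot draw from np.random.multinomial into an integer index
S = np.apply_along_axis(lambda p: np.random.multinomial(1, p).argmax(), -1, L)
print(S.shape)  # (2, 5)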
You can use tf.multinomial() here.
You will first need to reshape your input tensor to shape [-1, N] (where N is the last dimension of L):
# L has shape [2, 5, 2]
L = tf.constant([[[0.1, 0.9],[0.2, 0.8],[0.3, 0.7],[0.5, 0.5],[0.6, 0.4]],
[[0.5, 0.5],[0.9, 0.1],[0.7, 0.3],[0.9, 0.1],[0.1, 0.9]]])
dims = L.get_shape().as_list()
N = dims[-1] # here N = 2
logits = tf.reshape(L, [-1, N]) # shape [10, 2]
Now we can apply the function tf.multinomial() to logits:
samples = tf.multinomial(logits, 1)
# We reshape to match the initial shape minus the last dimension
res = tf.reshape(samples, dims[:-1])
Be cautious when using tf.multinomial(): the inputs to the function should be logits (unnormalized log-probabilities), not probability distributions. In your example, however, the last axis holds probabilities, so you should take the log first.
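Concretely (my addition, following the warning above), that means converting the probabilities with tf.log() before sampling:
logits = tf.reshape(tf.log(L), [-1, N])  # probabilities -> logits
samples = tf.multinomial(logits, 1)
res = tf.reshape(samples, dims[:-1])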