take samples from a tuple in a tensorflow dataset

I have time series features (a NumPy array) and labels (the same length in time steps, also a NumPy array). I used tf.keras.preprocessing.timeseries_dataset_from_array() to get tuples of (features (batched), labels (batched)).
My issue is that I want to split this into training and validation sets (the test set is already separate). I want the data shuffled and windowed, so I cannot split before calling timeseries_dataset_from_array(), and afterwards I have a tuple. Can I do:
(train_features, train_labels) = (output_tuple[0].take(num_training_sample), output_tuple[1].take(num_training_sample))
or does this change the order of features and labels?
Update:
What I have is the following
# features of size (2757698, 21, 4, 1), a time series array
# labels of size (2757698, 3)
w = timeseries_dataset_from_array(
    features[:train_size, :, :],
    self.target[:train_size, :], self.WINDOW_SIZE, sequence_stride=1,
    sampling_rate=1, batch_size=batch_size, shuffle=True,
    start_index=start, end_index=end)
# each batch is a tuple of shapes ((256, 60, 21, 4, 1), (256, 3)); 60 is the window size
The output is a generator that you can feed to model.fit()!
What I need is to split this train_dataset into train and validation sets, after shuffling. As you can see, I already picked the test data from the end, which is the most recent.
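Since timeseries_dataset_from_array() actually returns a tf.data.Dataset yielding (features, labels) tuples rather than a pair of datasets you can index, a minimal sketch (assuming the dataset w built above and TF 2.x) would split with take() and skip() on the dataset itself, which keeps every window paired with its label:

# take()/skip() operate on whole batches, so features and labels stay
# aligned inside each element; an 80/20 split is assumed here.
num_batches = tf.data.experimental.cardinality(w).numpy()
num_train = int(0.8 * num_batches)
train_ds = w.take(num_train)
val_ds = w.skip(num_train)
# If the shuffle reshuffles between epochs, pass a fixed `seed` to
# timeseries_dataset_from_array so the two splits never overlap.

Dataset-level take()/skip() does not change the pairing of features and labels, unlike indexing the two sides separately.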

Related

Tabular data: Implementing a custom tensor layer without resorting to iteration

I have an idea for a tensor operation that would not be difficult to implement via iteration, with batch size one. However I would like to parallelize it as much as possible.
I have two tensors with shape (n, 5) called X and Y. X is actually supposed to represent 5 one-dimensional tensors with shape (n, 1): (x_1, ..., x_5). Ditto for Y.
I would like to compute a tensor with shape (n, 25) where each column represents the output of the tensor operation f(x_i, y_j), where f is fixed for all 1 <= i, j <= 5. The operation f has output shape (n, 1), just like x_i and y_j.
I feel it is important to clarify that f is essentially a fully-connected layer from the concatenated [...x_i, ...y_j] tensor with shape (1, 10) to an output layer with shape (1, 5).
Again, it is easy to see how to do this manually with iteration and slicing, but that is probably very slow. Performing this operation in batches, where the tensors X, Y now have shape (n, 5, batch_size), is also desirable, particularly for mini-batch gradient descent.
It is difficult to really articulate here why I desire to create this network; I feel it is suited for my domain of 'itemized tabular data' and cuts down significantly on the number of weights per operation, compared to a fully connected network.
Is this possible using tensorflow? Certainly not using just keras.
Below is an example in numpy, per AloneTogether's request:
import numpy as np

features = 16
batch_size = 256
X_batch = np.random.random((features, 5, batch_size))
Y_batch = np.random.random((features, 5, batch_size))
# one tensor operation to reduce weights in this custom 'layer'
f = np.random.random((features, 2 * features))
for b in range(batch_size):
    X = X_batch[:, :, b]
    Y = Y_batch[:, :, b]
    for i in range(5):
        x_i = X[:, i:i+1]
        for j in range(5):
            y_j = Y[:, j:j+1]
            x_i_y_j = np.concatenate([x_i, y_j], axis=0)
            # f(x_i, y_j), implemented by a fully-connected layer
            f_i_j = np.matmul(f, x_i_y_j)
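For completeness, a sketch (with a hypothetical result array, extending the loop above) of how the 25 outputs per batch element would assemble into the desired result:

# collect the 25 combination outputs per batch element
result = np.zeros((features, 25, batch_size))
for b in range(batch_size):
    for i in range(5):
        for j in range(5):
            pair = np.concatenate([X_batch[:, i:i+1, b], Y_batch[:, j:j+1, b]], axis=0)
            result[:, i * 5 + j, b] = np.matmul(f, pair)[:, 0]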
All operations you need (concatenation and matrix multiplication) can be batched.
The difficult part here is that you want to concatenate the features of all items in X with the features of all items in Y (all combinations).
My recommended solution is to expand the dimensions of X to [batch, features, 5, 1] and the dimensions of Y to [batch, features, 1, 5].
Then tf.repeat() both tensors so their shapes become [batch, features, 5, 5].
Now you can concatenate X and Y. You will have a tensor of shape [batch, 2*features, 5, 5]. Observe that this way all combinations are built.
The next step is the matrix multiplication. tf.matmul() can also do batched matrix multiplication, but I use tf.einsum() here because I want more control over which dimensions are treated as batch dimensions.
Full code:
import tensorflow as tf
import numpy as np

batch_size = 3
features = 6
items = 5
x = np.random.uniform(size=[batch_size, features, items])
y = np.random.uniform(size=[batch_size, features, items])
f = np.random.uniform(size=[2 * features, features])
x_reps = tf.repeat(x[:, :, :, tf.newaxis], items, axis=3)  # [batch, features, items, items]
y_reps = tf.repeat(y[:, :, tf.newaxis, :], items, axis=2)  # [batch, features, items, items]
xy_conc = tf.concat([x_reps, y_reps], axis=1)              # [batch, 2*features, items, items]
# contract the 2*features axis against f; b, i and j act as batch dimensions
f_i_j = tf.einsum("bfij,fg->bgij", xy_conc, f)             # [batch, features, items, items]
f_i_j = tf.reshape(f_i_j, [batch_size, features, items * items])
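As a quick sanity check, a minimal sketch (assuming eager execution in TF 2.x and the tensors defined above) comparing one (i, j) slot of the einsum result against a plain matrix product on the corresponding concatenated pair:

# Recompute one combination (i=1, j=2) of the first batch element directly;
# the reshape above flattens (i, j) into index i*items + j.
i, j = 1, 2
pair = np.concatenate([x[0, :, i], y[0, :, j]])      # shape [2*features]
ref = pair @ f                                       # shape [features]
print(np.allclose(f_i_j[0, :, i * items + j], ref))  # expected: True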

Variable batch size in tensorflow and CNN

I want to feed a 1-D CNN a sequence of fixed length and have it make a prediction (regression), but I want to have a variable batch size during training. The tutorials are not really helpful.
In my input layer I have something like this:
input = tf.placeholder(tf.float32, [None, sequence_length], name="input")
y = tf.placeholder(tf.float32, [None, 1], name="y")
so I assume the None dimension can be a variable batch size of any number, the current input dimension is batch_size * sequence_length, and I am supposed to feed the network a 2-D np array with dimensions any * sequence_length.
tf.nn.conv1d expects 3-D input. Since my input is a single channel, that is, 1 np array of sequence_length observations, the input I need to feed to the CNN should be batch_size * sequence_length * 1. If, on the other hand, I had 2 different sequences that I combine to predict a single value in the end, it would have been batch_size * sequence_length * 2, and I would also need to concatenate the 2 different channels. So in my case I need
input = tf.expand_dims(input, -1)
and then the filters follow the same pattern:
filter_size = 5
channel_size = 1
num_filters = 10
filter_shape = [filter_size, channel_size, num_filters]
filters = tf.Variable(tf.truncated_normal(filter_shape, stddev=0.1), name="filters")
tf.nn.conv1d(value=input, filters=filters, stride=1, padding="SAME")
After that I add an FC layer, but the network isn't able to learn anything, not even a basic function such as sin(x). Does the code above look correct?
Also, how can I do max pooling?
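Regarding max pooling, a minimal sketch in the same TF 1.x style (assuming conv is the output of the tf.nn.conv1d call above, with shape batch_size * sequence_length * num_filters):

conv = tf.nn.conv1d(value=input, filters=filters, stride=1, padding="SAME")
# 1-D max pooling over the width dimension, halving the sequence length
pooled = tf.nn.pool(conv, window_shape=[2], pooling_type="MAX",
                    strides=[2], padding="SAME")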

tensorflow dataset shuffle then batch or batch then shuffle

I recently began learning tensorflow.
I am unsure about whether there is a difference between
x = np.array([[1], [2], [3], [4], [5]])
dataset = tf.data.Dataset.from_tensor_slices(x)
dataset = dataset.shuffle(buffer_size=4)
dataset = dataset.batch(4)
and
x = np.array([[1], [2], [3], [4], [5]])
dataset = tf.data.Dataset.from_tensor_slices(x)
dataset = dataset.batch(4)
dataset = dataset.shuffle(buffer_size=4)
Also, I am not sure why I cannot use
dataset = dataset.shuffle_batch(buffer_size=2,batch_size=BATCH_SIZE)
as it gives the error
dataset = dataset.shuffle_batch(buffer_size=2,batch_size=BATCH_SIZE)
AttributeError: 'TensorSliceDataset' object has no attribute 'shuffle_batch'
Thank you!
TL;DR: Yes, there is a difference. Almost always, you will want to call Dataset.shuffle() before Dataset.batch(). There is no shuffle_batch() method on the tf.data.Dataset class, and you must call the two methods separately to shuffle and batch a dataset.
The transformations of a tf.data.Dataset are applied in the same sequence that they are called. Dataset.batch() combines consecutive elements of its input into a single, batched element in the output.
We can see the effect of the order of operations by considering the following two datasets:
tf.enable_eager_execution()  # To simplify the example code.

# Batch before shuffle.
dataset = tf.data.Dataset.from_tensor_slices([0, 0, 0, 1, 1, 1, 2, 2, 2])
dataset = dataset.batch(3)
dataset = dataset.shuffle(9)
for elem in dataset:
    print(elem)
# Prints:
# tf.Tensor([1 1 1], shape=(3,), dtype=int32)
# tf.Tensor([2 2 2], shape=(3,), dtype=int32)
# tf.Tensor([0 0 0], shape=(3,), dtype=int32)

# Shuffle before batch.
dataset = tf.data.Dataset.from_tensor_slices([0, 0, 0, 1, 1, 1, 2, 2, 2])
dataset = dataset.shuffle(9)
dataset = dataset.batch(3)
for elem in dataset:
    print(elem)
# Prints:
# tf.Tensor([2 0 2], shape=(3,), dtype=int32)
# tf.Tensor([2 1 0], shape=(3,), dtype=int32)
# tf.Tensor([0 1 1], shape=(3,), dtype=int32)
In the first version (batch before shuffle), the elements of each batch are 3 consecutive elements from the input; whereas in the second version (shuffle before batch), they are randomly sampled from the input. Typically, when training by (some variant of) mini-batch stochastic gradient descent, the elements of each batch should be sampled as uniformly as possible from the total input. Otherwise, it is possible that the network will overfit to whatever structure was in the input data, and the resulting network will not achieve as high an accuracy.
Fully agree with @mrry, but there exists one case where you might want to do batching before shuffling. Suppose you are processing some text data that will be fed into an RNN. Here each sentence is treated as one sequence, and one batch contains multiple sequences. Since the length of sentences is variable, we need to pad the sentences in a batch to a uniform length. An efficient way to do this is to group sentences of similar length together through batching, and then shuffle; see the sketch below. Otherwise, we may end up with batches that are full of <pad> tokens.
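A minimal sketch of that bucketing idea (assuming a dataset sentences of variable-length int32 token sequences, and a TF version that provides tf.data.experimental.bucket_by_sequence_length):

# Group sequences into length buckets, pad and batch within each bucket,
# then shuffle whole batches so similar-length batches are not adjacent.
batched = sentences.apply(
    tf.data.experimental.bucket_by_sequence_length(
        element_length_func=lambda seq: tf.size(seq),
        bucket_boundaries=[10, 20, 40],        # hypothetical length cut-offs
        bucket_batch_sizes=[32, 32, 32, 32]))  # one batch size per bucket (+1 overflow)
batched = batched.shuffle(buffer_size=100)     # shuffles batches, not elements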

feeding data into an LSTM - Tensorflow RNN PTB tutorial

I am trying to feed data into an LSTM. I am reviewing the code from Tensorflow's RNN tutorial here.
The segment of code of interest is from the reader.py file of the tutorial, in particular the ptb_producer function, which outputs the X and Y that are used by the LSTM.
raw_data is a list of indexes of words from the ptb.train.txt file. The length of raw_data is 929,589. batch_size is 20 and num_steps is 35. Both batch_size and num_steps are based on the LARGE config, which feeds the data to an LSTM.
I have walked through the code (and added comments for what I've printed) and I understand it up until tf.strided_slice. From the reshape, we have a matrix of indexes of shape (20, 46479).
In the first iterations of i, the strided slice tries to take data from [0, i * num_steps + 1], which is [0, 1*35+1] for i=1, up to [batch_size, (i + 1) * num_steps + 1], which is [20, (1+1)*35+1].
Two questions:
where in the matrix are [0, 1*35+1] and [20, (1+1)*35+1]? What spots in the (20, 46479) matrix are the begin and end of strided_slice trying to access?
It seems like EVERY iteration of i will take in data from 0, the very start of the data matrix (20, 46479)?
I guess what I am not understanding is how you would feed data into an LSTM, given the batch size and the num_steps (sequence length).
I have read Colah's blog on LSTMs and Karpathy's blog on RNNs, which help greatly in understanding LSTMs, but they don't seem to address the exact mechanics of getting data into an LSTM. (Maybe I missed something?)
def ptb_producer(raw_data, batch_size, num_steps, name=None):
  """Iterate on the raw PTB data.

  This chunks up raw_data into batches of examples and returns Tensors that
  are drawn from these batches.

  Args:
    raw_data: one of the raw data outputs from ptb_raw_data.
    batch_size: int, the batch size.
    num_steps: int, the number of unrolls.
    name: the name of this operation (optional).

  Returns:
    A pair of Tensors, each shaped [batch_size, num_steps]. The second
    element of the tuple is the same data time-shifted to the right by one.

  Raises:
    tf.errors.InvalidArgumentError: if batch_size or num_steps are too high.
  """
  with tf.name_scope(name, "PTBProducer", [raw_data, batch_size, num_steps]):
    raw_data = tf.convert_to_tensor(raw_data, name="raw_data", dtype=tf.int32)
    data_len = tf.size(raw_data)        # prints 929,589
    batch_len = data_len // batch_size  # prints 46,479
    data = tf.reshape(raw_data[0 : batch_size * batch_len],
                      [batch_size, batch_len])
    # this truncates raw_data to a multiple of batch_size=20,
    # then reshapes to [20, 46479]. prints (20, ?)
    epoch_size = (batch_len - 1) // num_steps  # prints 1327 (steps per epoch)
    assertion = tf.assert_positive(
        epoch_size,
        message="epoch_size == 0, decrease batch_size or num_steps")
    with tf.control_dependencies([assertion]):
      epoch_size = tf.identity(epoch_size, name="epoch_size")
    i = tf.train.range_input_producer(epoch_size, shuffle=False).dequeue()
    # i enumerates the 1327 steps of one epoch
    x = tf.strided_slice(data, [0, i * num_steps],
                         [batch_size, (i + 1) * num_steps])  # prints (?, ?)
    x.set_shape([batch_size, num_steps])  # prints (20, 35)
    y = tf.strided_slice(data, [0, i * num_steps + 1],
                         [batch_size, (i + 1) * num_steps + 1])
    y.set_shape([batch_size, num_steps])
    return x, y
where in the matrix are [0, 1*35+1] and [20, (1+1)*35+1]? What spots in the (20, 46479) matrix are the begin and end of strided_slice trying to access?
So the data matrix is arranged as 20 x 46479. Think of it as 20 sentences with 46479 characters each. The strided slice in your specific example will get characters 36 to 70 (71 is not included, typical range semantics) from each line. This is equivalent to
strided = [data[i][36:71] for i in range(20)]
It seems like EVERY iteration of i will take in data from 0, the very start of the data matrix (20, 46479)?
This is correct. For each iteration we want batch_size elements in the first dimension, i.e. we want a matrix of size batch_size x num_steps. Hence the first dimension always goes from 0 to 19, essentially looping over each sentence, and the second dimension increments in steps of num_steps, getting segments of words from each sentence. If we look at the indices extracted from data at each step for x, we get something like:
- data[0:20,0:35]
- data[0:20,35:70]
- data[0:20,70:105]
- ...
For y we add 1 because we want the next character to be predicted.
So your (train, label) tuples will look like:
- (data[0:20,0:35], data[0:20, 1:36])
- (data[0:20,35:70], data[0:20,36:71])
- (data[0:20,70:105], data[0:20,71:106])
- ...
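To make the indexing concrete, here is a toy sketch of the same scheme in plain numpy (with a hypothetical 24-word corpus, batch_size=2 and num_steps=3):

import numpy as np

raw = np.arange(24)                       # 24 toy "word ids"
batch_size, num_steps = 2, 3
batch_len = len(raw) // batch_size        # 12
data = raw[:batch_size * batch_len].reshape(batch_size, batch_len)
for i in range((batch_len - 1) // num_steps):  # 3 steps
    x = data[:, i * num_steps:(i + 1) * num_steps]
    y = data[:, i * num_steps + 1:(i + 1) * num_steps + 1]
    print(x, y)                           # y is x time-shifted by one word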

No broadcasting for tf.matmul in TensorFlow

I have a problem with which I've been struggling. It is related to tf.matmul() and its absence of broadcasting.
I am aware of a similar issue on https://github.com/tensorflow/tensorflow/issues/216, but tf.batch_matmul() doesn't look like a solution for my case.
I need to encode my input data as a 4D tensor:
X = tf.placeholder(tf.float32, shape=(None, None, None, 100))
The first dimension is the size of a batch, the second the number of entries in the batch.
You can imagine each entry as a composition of a number of objects (third dimension). Finally, each object is described by a vector of 100 float values.
Note that I used None for the second and third dimensions because the actual sizes may change in each batch. However, for simplicity, let's shape the tensor with actual numbers:
X = tf.placeholder(tf.float32, shape=(5, 10, 4, 100))
These are the steps of my computation:
compute a function of each vector of 100 float values (e.g., linear function)
W = tf.Variable(tf.truncated_normal([100, 50], stddev=0.1))
Y = tf.matmul(X, W)
problem: no broadcasting for tf.matmul() and no success using tf.batch_matmul()
expected shape of Y: (5, 10, 4, 50)
applying average pooling for each entry of the batch (over the objects of each entry):
Y_avg = tf.reduce_mean(Y, 2)
expected shape of Y_avg: (5, 10, 50)
I expected that tf.matmul() would support broadcasting. Then I found tf.batch_matmul(), but it still looks like it doesn't apply to my case (e.g., W needs to have at least 3 dimensions, and it is not clear why).
BTW, above I used a simple linear function (the weights of which are stored in W), but in my model I have a deep network instead. So, the more general problem I have is automatically computing a function for each slice of a tensor. This is why I expected tf.matmul() to have broadcasting behavior (and if it did, maybe tf.batch_matmul() wouldn't even be necessary).
Look forward to learning from you!
Alessio
You could achieve that by reshaping X to shape [n, d], where d is the dimensionality of a single "instance" of the computation (100 in your example) and n is the number of those instances in your multi-dimensional object (5*10*4=200 in your example). After reshaping, you can use tf.matmul and then reshape back to the desired shape. The fact that the first three dimensions can vary makes that a little tricky, but you can use tf.shape to determine the actual shapes at run time. Finally, you can perform the second step of your computation, which should be a simple tf.reduce_mean over the respective dimension. All in all, it would look like this:
X = tf.placeholder(tf.float32, shape=(None, None, None, 100))
W = tf.Variable(tf.truncated_normal([100, 50], stddev=0.1))
X_ = tf.reshape(X, [-1, 100])
Y_ = tf.matmul(X_, W)
X_shape = tf.gather(tf.shape(X), [0,1,2]) # Extract the first three dimensions
target_shape = tf.concat([X_shape, [50]], 0)
Y = tf.reshape(Y_, target_shape)
Y_avg = tf.reduce_mean(Y, 2)
As the renamed title of the GitHub issue you linked suggests, you should use tf.tensordot(). It enables contraction of axis pairs between two tensors, in line with NumPy's tensordot(). For your case:
X = tf.placeholder(tf.float32, shape=(5, 10, 4, 100))
W = tf.Variable(tf.truncated_normal([100, 50], stddev=0.1))
Y = tf.tensordot(X, W, [[3], [0]]) # gives shape=[5, 10, 4, 50]
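The average-pooling step from the question then applies unchanged to the tensordot output:

Y_avg = tf.reduce_mean(Y, 2)  # shape=[5, 10, 50], mean over the objects axis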