Does `tf.data.Dataset.repeat()` buffer the entire dataset in memory? - tensorflow

Looking at this code example from the TF documentation:
filenames = ["/var/data/file1.tfrecord", "/var/data/file2.tfrecord"]
dataset = tf.data.TFRecordDataset(filenames)
dataset = dataset.map(...)
dataset = dataset.shuffle(buffer_size=10000)
dataset = dataset.batch(32)
dataset = dataset.repeat(num_epochs)
iterator = dataset.make_one_shot_iterator()
Does the dataset.repeat(num_epochs) require that the entire dataset be loaded into memory? Or is it re-initializing the dataset(s) that came before it when it receives an end-of-dataset exception?
The documentation is ambiguous about this point.

Based on this simple test it appears that repeat does not buffer the dataset, it must be re-initializing the upstream datasets.
n = tf.data.Dataset.range(5).shuffle(buffer_size=5).repeat(2).make_one_shot_iterator().get_next()
[sess.run(n) for _ in range(10)]
Out[83]: [2, 0, 3, 1, 4, 3, 1, 0, 2, 4]
Logic suggests that if repeat were buffering its input, the same random shuffle pattern would have been repeated in this simple experiment.
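For completeness, here is a small TF 2.x sketch of the same check (assuming eager execution). The transformation that actually materializes elements is cache(), not repeat():
import tensorflow as tf

# repeat() streams: each pass re-runs the upstream shuffle, so the order differs between epochs.
streamed = tf.data.Dataset.range(5).shuffle(5).repeat(2)
print(list(streamed.as_numpy_iterator()))  # e.g. [2, 0, 3, 1, 4, 3, 1, 0, 2, 4]

# cache() stores the elements it sees on the first pass, so repeat() replays the same shuffled order.
cached = tf.data.Dataset.range(5).shuffle(5).cache().repeat(2)
print(list(cached.as_numpy_iterator()))  # e.g. [4, 1, 0, 3, 2, 4, 1, 0, 3, 2]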

Related

Writing and Reading lists to TFRecord example

I want to write a list of integers (or any multidimensional numpy matrix) to one TFRecords example. For both a single value and a list of multiple values I can create the TFRecord file without error. I also know how to read the single value back from the TFRecord file, as shown in the code sample below, which I compiled from various sources.
# Making an example TFRecord
my_example = tf.train.Example(features=tf.train.Features(feature={
    'my_ints': tf.train.Feature(int64_list=tf.train.Int64List(value=[5]))
}))
my_example_str = my_example.SerializeToString()
with tf.python_io.TFRecordWriter('my_example.tfrecords') as writer:
    writer.write(my_example_str)

# Reading it back via a Dataset
featuresDict = {'my_ints': tf.FixedLenFeature([], dtype=tf.int64)}

def parse_tfrecord(example):
    features = tf.parse_single_example(example, featuresDict)
    return features

Dataset = tf.data.TFRecordDataset('my_example.tfrecords')
Dataset = Dataset.map(parse_tfrecord)
iterator = Dataset.make_one_shot_iterator()

with tf.Session() as sess:
    print(sess.run(iterator.get_next()))
But how can I read back a list of values (e.g. [5, 6]) from one example? The featuresDict defines the feature to be of type int64, but parsing fails when the feature holds multiple values, and I get the error below:
tensorflow.python.framework.errors_impl.InvalidArgumentError: Key: my_ints. Can't parse serialized Example.
You can achieve this by using tf.train.SequenceExample. I've edited your code to return both 1D and 2D data. First, you create a list of features, which you place in a tf.train.FeatureList. The 2D data is converted to bytes.
vals = [5, 5]
vals_2d = [np.zeros((5,5), dtype=np.uint8), np.ones((5,5), dtype=np.uint8)]
features = [tf.train.Feature(int64_list=tf.train.Int64List(value=[val])) for val in vals]
features_2d = [tf.train.Feature(bytes_list=tf.train.BytesList(value=[val.tostring()])) for val in vals_2d]
featureList = tf.train.FeatureList(feature=features)
featureList_2d = tf.train.FeatureList(feature=features_2d)
In order to get the correct shape of our 2D feature we need to provide context (non-sequential data), this is done with a context dictionary.
context_dict = {'height': tf.train.Feature(int64_list=tf.train.Int64List(value=[vals_2d[0].shape[0]])),
'width': tf.train.Feature(int64_list=tf.train.Int64List(value=[vals_2d[0].shape[1]])),
'length': tf.train.Feature(int64_list=tf.train.Int64List(value=[len(vals_2d)]))}
Then you place each FeatureList in a tf.train.FeatureLists dictionary. Finally, this is placed in a tf.train.SequenceExample along with the context dictionary:
my_example = tf.train.SequenceExample(feature_lists=tf.train.FeatureLists(feature_list={'1D':featureList,
'2D': featureList_2d}),
context = tf.train.Features(feature=context_dict))
my_example_str = my_example.SerializeToString()
with tf.python_io.TFRecordWriter('my_example.tfrecords') as writer:
writer.write(my_example_str)
To read it back into TensorFlow you need to use tf.FixedLenSequenceFeature for the sequential data and tf.FixedLenFeature for the context data. We convert the bytes back to integers and parse the context data in order to restore the correct shape.
# Reading it back via a Dataset
featuresDict = {'1D': tf.FixedLenSequenceFeature([], dtype=tf.int64),
'2D': tf.FixedLenSequenceFeature([], dtype=tf.string)}
contextDict = {'height': tf.FixedLenFeature([], dtype=tf.int64),
'width': tf.FixedLenFeature([], dtype=tf.int64),
'length':tf.FixedLenFeature([], dtype=tf.int64)}
def parse_tfrecord(example):
    context, features = tf.parse_single_sequence_example(
        example,
        sequence_features=featuresDict,
        context_features=contextDict
    )
    height = context['height']
    width = context['width']
    seq_length = context['length']
    vals = features['1D']
    vals_2d = tf.decode_raw(features['2D'], tf.uint8)
    vals_2d = tf.reshape(vals_2d, [seq_length, height, width])
    return vals, vals_2d
Dataset = tf.data.TFRecordDataset('my_example.tfrecords')
Dataset = Dataset.map(parse_tfrecord)
iterator = Dataset.make_one_shot_iterator()
with tf.Session() as sess:
    print(sess.run(iterator.get_next()))
This will output the sequence [5, 5] and the 2D numpy arrays. This blog post has a more in-depth look at defining sequences with TFRecords: https://dmolony3.github.io/Working%20with%20image%20sequences.html
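As a side note for the original fixed-length case ([5, 6]): a plain tf.train.Example can also be read back if the feature spec carries the length, or with a variable-length feature. A minimal sketch of the two alternative feature specs:
# Fixed-length list of exactly 2 ints per example
featuresDict = {'my_ints': tf.FixedLenFeature([2], dtype=tf.int64)}
# Or, if the number of values varies from example to example
featuresDict = {'my_ints': tf.VarLenFeature(dtype=tf.int64)}  # parses to a SparseTensor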

Given a dataframe with N elements, how can I make m smaller dataframes such that the size of each is some fraction of N?

I have a dataset (call it Data) with ~25000 instances that I want to split into a train set, development set, and test set. I want it to be such that,
train set = 0.7*Data
development set = 0.1*Data
test set = 0.2*Data
When making the split, I want the instances to be randomly sampled and NOT REPEATED between the 3 sets. This is why I can't use something like,
train_set = Data.sample(frac=0.7)
dev_set = Data.sample(frac=0.1)
test_set = Data.sample(frac=0.2)
where instances from Data may be repeated across the sets. Is there a built-in function that I am missing, or could you help me write a function for doing this?
I will use an array to demonstrate an example of what I am looking for.
A = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
splits = [0.7, 0.1, 0.2]
def splitFunction(data, array_of_splits):
    # I need your help here
splits = splitFunction(A, splits)
#output
[[1, 3, 8, 9, 6, 7, 2], [4], [5, 0]]
Thank you in advance!
from random import shuffle

def splitFunction(data, array_of_splits):
    data_copy = data[:]  # copy data if you don't want to change the original array
    shuffle(data_copy)   # randomizes data
    splits = []
    startIndex = 0
    for val in array_of_splits:
        endIndex = startIndex + int(val * len(data))  # slice indices must be integers
        splits.append(data_copy[startIndex:endIndex])
        startIndex = endIndex
    return splits
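A quick usage sketch with the array from the question (the exact output varies with the shuffle):
A = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
print(splitFunction(A, [0.7, 0.1, 0.2]))
# e.g. [[1, 3, 8, 9, 6, 7, 2], [4], [5, 0]]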

Tensorflow dataset batching for complex data

I tried to follow the example in this link:
https://www.tensorflow.org/programmers_guide/datasets
but I am totally lost about how to run the session. I understand that the first argument is the operations to run, and that feed_dict supplies the placeholders (as I understand it, with the batches of the training or test dataset).
So, here is my code:
batch_size = 100
handle_mix = tf.placeholder(tf.float64, shape=[])
handle_src0 = tf.placeholder(tf.float64, shape=[])
handle_src1 = tf.placeholder(tf.float64, shape=[])
handle_src2 = tf.placeholder(tf.float64, shape=[])
handle_src3 = tf.placeholder(tf.float64, shape=[])
I create the dataset from mp4 tracks and stems, reading the mixture and source magnitudes, and pad them so they are suitable for batching:
dataset = tf.data.Dataset.from_tensor_slices(
{"x_mixed":padded_lbl, "y_src0": padded_src[0], "y_src1":
padded_src[1],"y_src2": padded_src[1], "y_src3": padded_src[1]})
dataset = dataset.shuffle(1000).repeat().batch(batch_size)
iterator = tf.data.Iterator.from_structure(dataset.output_types, dataset.output_shapes)
from the example I should do:
next_element = iterator.get_next()
training_init_op = iterator.make_initializer(dataset)
for _ in range(20):
    # Initialize an iterator over the training dataset.
    sess.run(training_init_op)
    for _ in range(100):
        sess.run(next_element)
However, I have loss, summary, and optimiser operations and need to feed the data in batches, following another example like:
l, _, summary = sess.run([loss_fn, optimizer, summary_op], feed_dict= {handle_mix: batch_mix, handle_src0: batch_src0, handle_src1: batch_src1, handle_src2: batch_src2, handle_src3: batch_src3})
So I thought something like:
batch_mix, batch_src0, batch_src1, batch_src2, batch_src3 = data.train.next_batch(batch_size)
or maybe a separate run to fetch the batches first, then run the optimisation as above, such as:
batch_mix, batch_src0, batch_src1, batch_src2, batch_src3 = sess.run(next_element)
l, _, summary = sess.run([loss_fn, optimizer, summary_op], feed_dict={handle_mix: batch_mix, handle_src0: batch_src0, handle_src1: batch_src1, handle_src2: batch_src2, handle_src3: batch_src3})
This last attempt returned the string keys of the batches as created in tf.data.Dataset.from_tensor_slices ("x_mixed", "y_src0", etc.), which could not be cast to the tf.float64 placeholders in the session.
Can you please let me know how to create this dataset (there might be an error in the from_tensor_slices structure in the first place) and then how to batch it?
Thank you very much.
The issue is that you packed your data into a dict when creating the dataset from tensor slices. This will result in iterator.get_next() returning each batch as a dict as well. If we do something like
d = {"a": 1, "b": 2}
k1, k2 = d
we get k1 == "a" and k2 == "b" (or the other way around due to unordered dict keys). That is, your attempt at unpacking the result of sess.run(next_element) just gives you the dict keys whereas you are interested in the dict values (tensors). This should work instead:
next_element = iterator.get_next()
x_mixed = next_element["x_mixed"]
y_src0 = next_element["y_src0"]
...
If you then build your model based on the variables x_mixed etc., it should work fine. Note that with the tf.data API you don't need placeholders! TensorFlow will see that your model output requires e.g. x_mixed, which comes from iterator.get_next(), so it will simply execute this op whenever you try to sess.run() your loss function/optimizer etc. If you're more comfortable with placeholders you can of course keep using them, just remember to unpack the dict properly. This should be about right:
batch_dict = sess.run(next_element)
l, _, summary = sess.run([loss_fn, optimizer, summary_op], feed_dict={handle_mix: batch_dict["x_mixed"], ... })
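For illustration, here is a minimal placeholder-free sketch of the training loop; build_model and compute_loss are hypothetical stand-ins for your own graph-building code, and training_init_op is the initializer from your snippet:
next_element = iterator.get_next()
x_mixed = next_element["x_mixed"]
y_src0 = next_element["y_src0"]  # likewise for y_src1, y_src2, y_src3

logits = build_model(x_mixed)           # hypothetical model built directly on the iterator output
loss_fn = compute_loss(logits, y_src0)  # hypothetical loss
train_op = tf.train.AdamOptimizer().minimize(loss_fn)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(training_init_op)
    for _ in range(100):
        l, _ = sess.run([loss_fn, train_op])  # no feed_dict needed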

Split a dataset created by the TensorFlow Dataset API into train and test?

Does anyone know how to split a dataset created by the Dataset API (tf.data.Dataset) in TensorFlow into test and train?
Assuming you have all_dataset variable of tf.data.Dataset type:
test_dataset = all_dataset.take(1000)
train_dataset = all_dataset.skip(1000)
The test dataset now has the first 1000 elements, and the rest goes to training.
You may use Dataset.take() and Dataset.skip():
train_size = int(0.7 * DATASET_SIZE)
val_size = int(0.15 * DATASET_SIZE)
test_size = int(0.15 * DATASET_SIZE)
full_dataset = tf.data.TFRecordDataset(FLAGS.input_file)
full_dataset = full_dataset.shuffle(BUFFER_SIZE)
train_dataset = full_dataset.take(train_size)
test_dataset = full_dataset.skip(train_size)
val_dataset = test_dataset.skip(val_size)
test_dataset = test_dataset.take(test_size)
For more generality, I gave an example using a 70/15/15 train/val/test split but if you don't need a test or a val set, just ignore the last 2 lines.
Take:
Creates a Dataset with at most count elements from this dataset.
Skip:
Creates a Dataset that skips count elements from this dataset.
You may also want to look into Dataset.shard():
Creates a Dataset that includes only 1/num_shards of this dataset.
Disclaimer: I stumbled upon this question after answering this one, so I thought I'd spread the love.
Most of the answers here use take() and skip(), which requires knowing the size of your dataset beforehand. This isn't always possible, or the size may be difficult/expensive to ascertain.
Instead, what you can do is essentially slice the dataset up so that 1 of every N records becomes a validation record.
To accomplish this, lets start with a simple dataset of 0-9:
dataset = tf.data.Dataset.range(10)
# [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
Now for our example, we're going to slice it so that we have a 3/1 train/validation split. Meaning 3 records will go to training, then 1 record to validation, then repeat.
split = 3
dataset_train = dataset.window(split, split + 1).flat_map(lambda ds: ds)
# [0, 1, 2, 4, 5, 6, 8, 9]
dataset_validation = dataset.skip(split).window(1, split + 1).flat_map(lambda ds: ds)
# [3, 7]
So the first dataset.window(split, split + 1) says to grab split (3) elements, then advance split + 1 elements, and repeat. That + 1 effectively skips the 1 element we're going to use in our validation dataset.
The flat_map(lambda ds: ds) is because window() returns the results in batches, which we don't want. So we flatten it back out.
Then for the validation data we first skip(split), which skips over the first split number (3) of elements that were grabbed in the first training window, so we start our iteration on the 4th element. The window(1, split + 1) then grabs 1 element, advances split + 1 (4), and repeats.
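To sanity-check the split in eager mode (TF 2.x), printing the two element lists should reproduce the values in the comments above:
print(list(dataset_train.as_numpy_iterator()))       # [0, 1, 2, 4, 5, 6, 8, 9]
print(list(dataset_validation.as_numpy_iterator()))  # [3, 7]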
 
Note on nested datasets:
The above example works well for simple datasets, but flat_map() will generate an error if the dataset is nested. To address this, you can swap out the flat_map() with a more complicated version that can handle both simple and nested datasets:
.flat_map(lambda *ds: ds[0] if len(ds) == 1 else tf.data.Dataset.zip(ds))
@ted's answer will cause some overlap. Try this instead.
train_ds_size = int(0.64 * full_ds_size)
valid_ds_size = int(0.16 * full_ds_size)
train_ds = full_ds.take(train_ds_size)
remaining = full_ds.skip(train_ds_size)
valid_ds = remaining.take(valid_ds_size)
test_ds = remaining.skip(valid_ds_size)
Use the code below to test.
tf.enable_eager_execution()
dataset = tf.data.Dataset.range(100)
train_size = 20
valid_size = 30
test_size = 50
train = dataset.take(train_size)
remaining = dataset.skip(train_size)
valid = remaining.take(valid_size)
test = remaining.skip(valid_size)
for i in train:
    print(i)
for i in valid:
    print(i)
for i in test:
    print(i)
Currently, TensorFlow doesn't contain any tools for that.
You could use sklearn.model_selection.train_test_split to generate train/eval/test datasets, then create a tf.data.Dataset from each.
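A minimal sketch of that approach; the arrays x and y are just stand-ins for your own data, and the second split takes 12.5% of the remaining 80% to get roughly a 70/10/20 split overall:
import numpy as np
import tensorflow as tf
from sklearn.model_selection import train_test_split

x = np.arange(1000, dtype=np.float32).reshape(-1, 1)  # stand-in features
y = (np.arange(1000) % 2).astype(np.int32)            # stand-in labels

x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=42)
x_train, x_eval, y_train, y_eval = train_test_split(x_train, y_train, test_size=0.125, random_state=42)

train_ds = tf.data.Dataset.from_tensor_slices((x_train, y_train))
eval_ds = tf.data.Dataset.from_tensor_slices((x_eval, y_eval))
test_ds = tf.data.Dataset.from_tensor_slices((x_test, y_test))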
You can use shard:
dataset = dataset.shuffle(buffer_size=1000)  # optional; shuffle() requires a buffer_size, pick one that suits your data
trainset = dataset.shard(2, 0)
testset = dataset.shard(2, 1)
See:
https://www.tensorflow.org/api_docs/python/tf/data/Dataset#shard
The upcoming TensorFlow 2.10.0 will have a tf.keras.utils.split_dataset function; see the rc3 release notes:
Added tf.keras.utils.split_dataset utility to split a Dataset object or a list/tuple of arrays into two Dataset objects (e.g. train/test).
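A small sketch of how that utility is used (assuming TF >= 2.10):
import tensorflow as tf

dataset = tf.data.Dataset.range(100)
# left_size can be a fraction or an absolute element count
train_ds, test_ds = tf.keras.utils.split_dataset(dataset, left_size=0.8, shuffle=True, seed=42)
print(len(list(train_ds)), len(list(test_ds)))  # 80 20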
In case the size of the dataset is known:
from typing import Tuple

import tensorflow as tf

def split_dataset(dataset: tf.data.Dataset,
                  dataset_size: int,
                  train_ratio: float,
                  validation_ratio: float) -> Tuple[tf.data.Dataset, tf.data.Dataset, tf.data.Dataset]:
    assert (train_ratio + validation_ratio) < 1

    train_count = int(dataset_size * train_ratio)
    validation_count = int(dataset_size * validation_ratio)
    test_count = dataset_size - (train_count + validation_count)

    dataset = dataset.shuffle(dataset_size)

    train_dataset = dataset.take(train_count)
    validation_dataset = dataset.skip(train_count).take(validation_count)
    test_dataset = dataset.skip(validation_count + train_count).take(test_count)

    return train_dataset, validation_dataset, test_dataset
Example:
size_of_ds = 1001
train_ratio = 0.6
val_ratio = 0.2
ds = tf.data.Dataset.from_tensor_slices(list(range(size_of_ds)))
train_ds, val_ds, test_ds = split_dataset(ds, size_of_ds, train_ratio, val_ratio)
A robust way to split a dataset into two parts is to first deterministically map every item in the dataset into a bucket, for example with tf.strings.to_hash_bucket_fast. Then you can split the dataset in two by filtering on the bucket. If you split your data into five buckets, you get an 80-20 split, assuming that the split is even.
As an example, assume that your dataset contains dictionaries with the key filename. We split the data into five buckets based on this key. With this add_fold function, we add the key "fold" to the dictionaries:
def add_fold(buckets: int):
    def add_(sample, label):
        fold = tf.strings.to_hash_bucket(sample["filename"], num_buckets=buckets)
        return {**sample, "fold": fold}, label
    return add_

dataset = dataset.map(add_fold(buckets=5))
Now we can split the dataset into two disjoint datasets with Dataset.filter:
def pick_fold(fold: int):
    def filter_fn(sample, _):
        return tf.math.equal(sample["fold"], fold)
    return filter_fn

def skip_fold(fold: int):
    def filter_fn(sample, _):
        return tf.math.not_equal(sample["fold"], fold)
    return filter_fn

train_dataset = dataset.filter(skip_fold(0))
val_dataset = dataset.filter(pick_fold(0))
The key that you use for hashing should be one that captures the correlations in the dataset. For example, if samples collected by the same person are correlated and you want all samples from the same collector to end up in the same bucket (and the same split), you should use the collector name or ID as the hashing column.
Of course, you can skip the part with dataset.map and do the hashing and filtering in one filter function. Here's a full example:
dataset = tf.data.Dataset.from_tensor_slices([f"value-{i}" for i in range(10000)])

def to_bucket(sample):
    return tf.strings.to_hash_bucket_fast(sample, 5)

def filter_train_fn(sample):
    return tf.math.not_equal(to_bucket(sample), 0)

def filter_val_fn(sample):
    return tf.math.logical_not(filter_train_fn(sample))

train_ds = dataset.filter(filter_train_fn)
val_ds = dataset.filter(filter_val_fn)

print(f"Length of training set: {len(list(train_ds.as_numpy_iterator()))}")
print(f"Length of validation set: {len(list(val_ds.as_numpy_iterator()))}")
This prints:
Length of training set: 7995
Length of validation set: 2005
Can't comment, but the above answer has overlap and is incorrect. Set BUFFER_SIZE to DATASET_SIZE for a perfect shuffle. Try different val/test sizes to verify. The answer should be:
DATASET_SIZE = tf.data.experimental.cardinality(full_dataset).numpy()
train_size = int(0.7 * DATASET_SIZE)
val_size = int(0.15 * DATASET_SIZE)
test_size = int(0.15 * DATASET_SIZE)
full_dataset = full_dataset.shuffle(BUFFER_SIZE)
train_dataset = full_dataset.take(train_size)
test_dataset = full_dataset.skip(train_size)
val_dataset = test_dataset.take(val_size)
test_dataset = test_dataset.skip(val_size)

Saving and running wide_deep.py model

I've been playing around with the Tensorflow Wide and Deep tutorial using the census dataset.
The linear/wide tutorial states:
We will train a logistic regression model, and given an individual's information our model will output a number between 0 and 1
At the moment, I can't work out how to predict the output of an individual input (copied from the unit test):
TEST_INPUT_VALUES = {
    'age': 18,
    'education_num': 12,
    'capital_gain': 34,
    'capital_loss': 56,
    'hours_per_week': 78,
    'education': 'Bachelors',
    'marital_status': 'Married-civ-spouse',
    'relationship': 'Husband',
    'workclass': 'Self-emp-not-inc',
    'occupation': 'abc',
}
How can we predict and output whether this person is likely to earn <50k (0) or >=50k (1)?
The function to use is predict, but I didn't figure out how to feed one example of data directly (I tried numpy_input_fn and a dict of tensors).
Instead, by using the input function in wide_deep.py to write the data to a temporary CSV file and then read it back, the predict function can be used:
TEST_INPUT = ('18,Self-emp-not-inc,987,Bachelors,12,Married-civ-spouse,abc,'
              'Husband,zyx,wvu,34,56,78,tsr,<=50K')

# Create temporary CSV file
input_csv = '/tmp/census_model/test.csv'
with tf.gfile.Open(input_csv, 'w') as temp_csv:
    temp_csv.write(TEST_INPUT)

# Restore model trained by wide_deep.py with same model_dir and model_type
model = wide_deep.build_estimator(FLAGS.model_dir, FLAGS.model_type)
pred_iter = model.predict(input_fn=lambda: wide_deep.input_fn(input_csv, 1, False, 1))
for pred in pred_iter:
    # print(pred)
    print(pred['classes'])
There are other attributes in pred, such as probabilities and logits.
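For instance, the loop above could also print the score alongside the class (key names as exposed by the canned estimators):
for pred in pred_iter:
    print(pred['classes'], pred['probabilities'])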
Okay, I can answer this now. If you want to evaluate the test set accuracy, you can follow the accepted answer, but if you want to make your own predictions, here are the steps.
First, construct a new input_fn; notice that you need to alter the columns and the default column values, since the label column won't be there.
def predict_input_fn(data_file):
    assert tf.gfile.Exists(data_file), (
        '%s not found. Please make sure the path is correct.' % data_file)

    def parse_csv(value):
        # parse_csv is nested here so that data_file is in scope for the print below
        print('Parsing', data_file)
        columns = tf.decode_csv(value, record_defaults=_PREDICT_COLUMNS_DEFAULTS)
        features = dict(zip(_PREDICT_COLUMNS, columns))
        return features

    dataset = tf.data.TextLineDataset(data_file)
    dataset = dataset.map(parse_csv, num_parallel_calls=5)
    dataset = dataset.batch(1)  # => This is very important to get the rank correct
    iterator = dataset.make_one_shot_iterator()
    features = iterator.get_next()
    return features
Then you can just call it simply with:
results = model.predict(
    input_fn=lambda: predict_input_fn(data_file='test.csv')
)
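predict returns a generator, so you would then iterate over the results, e.g. (the keys mirror those in the first answer):
for pred in results:
    print(pred['classes'])  # or pred['probabilities']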