How does the tensorflow.data.Dataset repeat(count=None) method work? - tensorflow

TensorFlow's data.Dataset has a method repeat(count=None) (https://www.tensorflow.org/api_docs/python/tf/data/Dataset?version=stable#repeat), which repeats the dataset so that each original value is seen count times. If count is None (the default), the dataset is repeated indefinitely. My question is: in that extreme case, how is a dataset of infinite size handled
and stored in memory? When I try to check its contents using as_numpy_iterator(), the process hangs.

The Dataset returned by repeat(count=X) is not a simple in-memory repetition of your data. It is a Python iterable, from which you obtain an iterator object.
An iterator is an object that implements next, which is expected to return the next element of the iterable object that returned it, and raise a StopIteration exception when no more elements are available.
Having a Dataset with "infinite repetitions" will therefore only load a number of samples equal to your batch size at any given time. So, generally speaking, what you store in memory is one batch of samples, not the whole (infinite) dataset. Moreover, shuffling with a buffer size greater than your dataset size helps you get batches that are representative of the distribution of the elements in your actual dataset.
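A minimal sketch of this lazy behavior: even with count=None, you can safely pull a finite number of elements with take(), and nothing beyond what you request is ever materialized.

import tensorflow as tf

# Three elements, repeated forever; nothing is materialized yet.
ds = tf.data.Dataset.from_tensor_slices([1, 2, 3]).repeat()  # count=None

# Only the elements you actually request are produced.
print(list(ds.take(7).as_numpy_iterator()))  # [1, 2, 3, 1, 2, 3, 1]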

Related

Tensorflow data.Dataset.map and memory storage

I have a dataset of images that is too large to store in memory. What I plan to do is load pairs of image paths and corresponding labels as my dataset, then use a generator function during training to convert only the paths in my batch to images before feeding them to the network.
Is data.Dataset.map() a good way to do this? Does it apply the mapping function only to the current batch during training, or does it perform the mapping operation on the whole dataset at once, occupying lots of memory? In the latter case, what is an alternative?
A few tutorials I went through made me believe the mapping takes place per batch, but this quote from the documentation suggests a whole new dataset is returned: "This transformation applies map_func to each element of this dataset, and returns a new dataset containing the transformed elements, in the same order as they appeared in the input."
The key thing to understand here is that tf.data.Dataset objects are generally "lazy" in that elements are only processed as needed (in a batched Dataset, elements == batches). When iterating over a dataset, this usually means that only the next requested element is prepared and then returned. So to answer your question: When using map to load data from disk, and applying this to a dataset of file names, only one batch of the loaded data should be stored in memory at the same time, and you should be able to process the dataset just fine. However, this can significantly slow down training if loading the files is a bottleneck in terms of speed.
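As a rough sketch of that pattern (the file names, labels, and decoding below are placeholders, not your actual pipeline), the mapping function only runs as elements are requested during iteration:

import tensorflow as tf

def load_img(path, label):
    # Runs lazily, per element, as batches are requested during training.
    img = tf.io.decode_png(tf.io.read_file(path), channels=3)
    return img, label

paths = ["a.png", "b.png"]   # placeholder file names
labels = [0, 1]              # placeholder labels
ds = tf.data.Dataset.from_tensor_slices((paths, labels)).map(load_img).batch(2)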
There are some exceptions though, for example:
When you use the shuffle method, you need to provide a buffer size, and AFAIK the entire buffer is preprocessed at once. This can lead to issues since you want a large buffer for good shuffling, but this requires more memory. Thus you probably want to use shuffle before applying map.
The prefetch method results in multiple elements being prepared in order to avoid the model having to wait for the next batch to be processed.
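Putting those two points together, a common ordering (a sketch reusing the placeholder load_img, paths, and labels from above) shuffles the lightweight elements first, maps afterwards, and prefetches at the end:

ds = (tf.data.Dataset.from_tensor_slices((paths, labels))
        .shuffle(buffer_size=10000)   # buffer holds (path, label) pairs, which are cheap
        .map(load_img, num_parallel_calls=tf.data.AUTOTUNE)
        .batch(32)
        .prefetch(tf.data.AUTOTUNE))  # prepare the next batch while the model trains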
Note that this lazy behavior also has some disadvantages, e.g.
You can only iterate over datasets sequentially; there is no random access.
A dataset doesn't even know how many elements it contains (this would require iterating over the entire set).
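The second point can be seen directly; in TF 2.3+ the cardinality() method makes it explicit (a small illustration):

ds = tf.data.Dataset.range(10).filter(lambda x: x % 2 == 0)
print(ds.cardinality())  # tf.data.UNKNOWN_CARDINALITY (-2): after a filter, the
                         # length cannot be known without iterating the whole set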

How exactly does tf.data.Dataset.interleave() differ from map() and flat_map()?

My current understanding is:
Different map_func: Both interleave and flat_map expect "A function mapping a dataset element to a dataset". In contrast, map expects "A function mapping a dataset element to another dataset element".
Arguments: Both interleave and map offer the argument num_parallel_calls, whereas flat_map does not. Moreover, interleave offers these magical arguments block_length and cycle_length. For cycle_length=1, the documentation states that the outputs of interleave and flat_map are equal.
Lastly, I have seen data-loading pipelines both with and without interleave. Any advice on when to use interleave vs. map or flat_map would be greatly appreciated.
//EDIT: I do see the value of interleave, if we start out with different datasets, such as in the code below
files = tf.data.Dataset.list_files("/path/to/dataset/train-*.tfrecord")
dataset = files.interleave(tf.data.TFRecordDataset)
However, is there any benefit of using interleave over map in a scenario such as the one below?
files = tf.data.Dataset.list_files("/path/to/dataset/train-*.png")
dataset = files.map(load_img, num_parallel_calls=tf.data.AUTOTUNE)
Edit:
Can map not also be used to parallelize I/O?
Indeed, you can read images and labels from a directory with the map function. Consider this case:
list_ds = tf.data.Dataset.list_files(my_path)
def process_path(path):
    # Get the label here, e.g. from the parent directory name (a placeholder
    # choice); images still need to be decoded after reading.
    label = tf.strings.split(path, "/")[-2]
    return tf.io.read_file(path), label

new_ds = list_ds.map(process_path, num_parallel_calls=tf.data.experimental.AUTOTUNE)
Note that it now runs multi-threaded, since num_parallel_calls has been set.
The advantage of the interleave() function:
Suppose you have a dataset whose elements can each be turned into a dataset (for example, a dataset of file names). With cycle_length, that many elements (say 5) are taken from the outer dataset at a time, and map_func is applied to each of them to produce inner datasets.
interleave then fetches items from these newly generated inner datasets, block_length pieces at a time from each, before moving on to the next one.
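A small, self-contained illustration of cycle_length and block_length (adapted from the pattern in the docs):

import tensorflow as tf

ds = tf.data.Dataset.range(1, 5)  # 1, 2, 3, 4
out = ds.interleave(
    lambda x: tf.data.Dataset.from_tensors(x).repeat(4),  # each element becomes an inner dataset
    cycle_length=2,    # pull from 2 inner datasets at a time
    block_length=2)    # take 2 consecutive items from each before switching
print(list(out.as_numpy_iterator()))
# [1, 1, 2, 2, 1, 1, 2, 2, 3, 3, 4, 4, 3, 3, 4, 4]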
In other words, the interleave() function can iterate through your dataset while applying a map_func(). Also, it can work with many datasets or data files at the same time. For example, from the docs:
dataset = dataset.interleave(
    lambda x: tf.data.TextLineDataset(x).map(parse_fn, num_parallel_calls=1),
    cycle_length=4, block_length=16)
However, is there any benefit of using interleave over map in a scenario such as the one below?
interleave() and map() seem a bit similar, but their use cases are not the same. If you want to read a dataset while applying a mapping, interleave() is your super-hero: your images may need to be decoded while being read, and reading everything first and only then decoding can be inefficient when working with large datasets. In the code snippets you gave, AFAIK, the one with tf.data.TFRecordDataset should be faster.
TL;DR: interleave() parallelizes the data-loading step by interleaving the I/O operations that read the files.
map() applies the data pre-processing to the contents of the datasets.
So you can do something like:
ds = train_file.interleave(
    lambda x: tf.data.Dataset.list_files(directory_here).map(
        func, num_parallel_calls=tf.data.experimental.AUTOTUNE))
tf.data.experimental.AUTOTUNE leaves the level of parallelism to the runtime, which decides based on buffer sizes, available CPU, and I/O. In other words, AUTOTUNE adjusts the level dynamically at runtime.
The num_parallel_calls argument spawns multiple threads to utilize multiple cores and parallelize the tasks. With it you can load multiple datasets in parallel, reducing the time spent waiting for files to be opened; interleave can also take num_parallel_calls as an argument. (Image taken from the docs.)
In the image, there are 4 overlapping datasets; that number is determined by the cycle_length argument, so in this case cycle_length = 4.
FLAT_MAP: maps a function across the dataset and flattens the result. If you want to make sure the order stays the same, you can use this. It does not take num_parallel_calls as an argument. Please refer to the docs for more.
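For instance, a tiny illustration of that flattening:

ds = tf.data.Dataset.from_tensor_slices([[1, 2, 3], [4, 5, 6]])
flat = ds.flat_map(tf.data.Dataset.from_tensor_slices)  # each row becomes a dataset, then flattened
print(list(flat.as_numpy_iterator()))  # [1, 2, 3, 4, 5, 6] -- order is preserved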
MAP:
The map function executes the selected function on every element of the Dataset separately. Data transformations on large datasets become expensive as you apply more and more operations, and they become even more time-consuming if the CPU is not fully utilized. To address this, we can use the parallelism APIs:
import multiprocessing

num_of_cores = multiprocessing.cpu_count()  # number of available CPU cores
mapped_data = data.map(function, num_parallel_calls=num_of_cores)
For cycle_length=1, the documentation states that the outputs of interleave and flat_map are equal
cycle_length --> the number of input elements that will be processed concurrently. When it is set to 1, elements are processed one by one.
INTERLEAVE: Transformation operations like map can be parallelized.
With parallelism of map, the CPU tries to achieve parallelization in the transformation, but the extraction of data from disk can still cause overhead.
Besides, once the raw bytes are read into memory, it may also be necessary to map a function over the data, which of course requires additional computation (decrypting data, etc.). To mitigate the impact of these various data-extraction overheads, extraction itself needs to be parallelized by interleaving the contents of the different datasets.
So while reading the datasets, you want to maximize the overlap between extraction and transformation.
(Figure illustrating overlapped extraction and transformation omitted; source of image: deeplearning.ai)
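In code, that overlap looks roughly like this (a sketch; the file pattern and parse_fn are placeholders): extraction is parallelized by interleave, and the transformation is parallelized separately by map.

files = tf.data.Dataset.list_files("/path/to/dataset/train-*.tfrecord")  # placeholder pattern
ds = files.interleave(
    tf.data.TFRecordDataset,          # extraction: read several files concurrently
    cycle_length=4,
    num_parallel_calls=tf.data.experimental.AUTOTUNE)
ds = ds.map(parse_fn,                 # transformation: parse records in parallel
            num_parallel_calls=tf.data.experimental.AUTOTUNE)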

Tensorflow Shuffle Batch Non Deterministic

I am trying to get deterministic behaviour from tf.train.shuffle_batch(). I could instead use tf.train.batch(), which works fine (always the same order of elements), but I need to get examples from multiple tf-records, so I am stuck with shuffle_batch().
I am using:
random.seed(0)
np.random.seed(0)
tf.set_random_seed(0)
data_entries = tf.train.shuffle_batch(
    [data], batch_size=batch_size, num_threads=1, capacity=512,
    seed=57, min_after_dequeue=32)
But every time I restart my script I get slightly different results (not completely different, but about 20% of the elements are in the wrong order).
Is there anything I am missing?
Edit: Solved it! See my answer below!
Maybe I misunderstood something, but you can collect multiple tf-records in a queue with tf.train.string_input_producer(), then read the examples into tensors, and finally use tf.train.batch().
Take a look at CIFAR-10 input.
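A rough sketch of that suggestion, using the TF1 queue-based API (the file names and parsing are placeholders):

import tensorflow as tf

filenames = ["a.tfrecord", "b.tfrecord"]  # placeholder file names
filename_queue = tf.train.string_input_producer(filenames, shuffle=True, seed=0)

reader = tf.TFRecordReader()
_, serialized_example = reader.read(filename_queue)
# parse serialized_example with tf.parse_single_example(...) here
batch = tf.train.batch([serialized_example], batch_size=32)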
Answering my own question:
First, the reason shuffle_batch is non-deterministic:
The time until I request a batch is inherently random.
In that time, a random number of tensors become available.
TensorFlow's shuffle operation is seeded, but for a different number of available items it returns a different order.
So no matter the seeding, the order is always different unless the number of elements is constant. So the solution is to keep the number of elements constant, but how do we do that?
By setting capacity=min_after_dequeue+batch_size. This will force Tensorflow to fill up the queue until it reaches full capacity before dequeuing an item. Therefore, at the time of the shuffle operation, we have capacity many items which is a constant number.
So why are we doing this? Because one tf.record file contains many examples, but we want examples from multiple tf.record files. With a normal batch we would first get all the examples of one file and then of the next one. This also means we should set min_after_dequeue to something larger than the number of items in one tf.record file. In my example, I have 50 examples in one file, so I set min_after_dequeue=2048.
Alternatively, we can also shuffle the examples before creating the tf.records, but this was not possible for me because I read tf.records from multiple directories (each with their own dataset).
Last note: you should also use a batch size of 1 to be extra safe.
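Putting the answer together, a sketch of the fixed call (building on the snippet from the question; the key change is pinning capacity to min_after_dequeue + batch_size so the shuffle always sees the same number of elements):

batch_size = 1            # as noted above, safest
min_after_dequeue = 2048

data_entries = tf.train.shuffle_batch(
    [data], batch_size=batch_size, num_threads=1, seed=57,
    min_after_dequeue=min_after_dequeue,
    capacity=min_after_dequeue + batch_size)  # queue must fill completely before dequeuing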

How to reuse one tensor when creating a tensorflow dataset iterator of a pair of tensors?

Imagine that I want to pair samples from one pool of data with samples from another pool to feed into the network, where many samples from the first pool should be paired with the same sample from the second pool (let's assume all samples have the same shape).
For example, if we denote the samples from the first pool as f_i, samples from the second pool as g_j, I might want to have a mini-batch of samples as below (each line is one sample in the mini-batch):
(f_0, g_0)
(f_1, g_0)
(f_2, g_0)
(f_3, g_0)
...
(f_10, g_0)
(f_11, g_1)
(f_12, g_1)
(f_13, g_1)
...
(f_19, g_1)
...
If the data from the second pool were small (like labels), then I could simply store them together with the samples from the first pool in tfrecords. But in my case the data from the second pool are of the same size as the data from the first pool (for example, both are movie segments), so saving them as pairs in one tfrecords file would almost double the disk-space usage.
I wonder if there is any way to save the samples from the second pool on disk only once, but still feed the data to my network in the way I want. (Assume I have already specified the mapping between samples in the first pool and those in the second pool based on their file names.)
Thanks a lot!
You can use an iterator for each of the tfrecords (or pools of samples), so you get two iterators that can each advance at their own pace. When you run get_next() on an iterator, the next sample is returned, so you have to keep it in a tensor and manually feed it. Quoting from the documentation:
(Note that, like other stateful objects in TensorFlow, calling Iterator.get_next() does not immediately advance the iterator. Instead you must use the returned tf.Tensor objects in a TensorFlow expression, and pass the result of that expression to tf.Session.run() to get the next elements and advance the iterator.)
So all you need is a couple of loops that iterate and combine samples from each iterator as a pair, and then you can feed this when you run your desired operation. For example:
g_iterator = g_dataset.make_one_shot_iterator()
get_next_g = g_iterator.get_next()
f_iterator = f_dataset.make_one_shot_iterator()
get_next_f = f_iterator.get_next()

while True:                  # loop over g; raises OutOfRangeError when g is exhausted
    temp_g = session.run(get_next_g)
    for _ in range(10):      # loop over the f samples paired with this g (10 here)
        temp_f = session.run(get_next_f)
        # f and g are placeholder tensors feeding the network
        session.run(train, feed_dict={f: temp_f, g: temp_g})
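If you can stay inside the tf.data API end to end, an alternative sketch (assuming each g_j is paired with a fixed number of consecutive f_i, here 10) repeats each g element in place with flat_map and then zips the two streams, so the g samples are never duplicated on disk:

PAIRS_PER_G = 10  # assumed grouping: f_0..f_9 with g_0, f_10..f_19 with g_1, ...

# Repeat each g element PAIRS_PER_G times without storing any duplicates.
g_repeated = g_dataset.flat_map(
    lambda g: tf.data.Dataset.from_tensors(g).repeat(PAIRS_PER_G))

# Yields (f_0, g_0), (f_1, g_0), ..., (f_10, g_1), ... as in the question.
pairs = tf.data.Dataset.zip((f_dataset, g_repeated)).batch(32)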
Does this answer your question?

Tensorflow--how to limit epochs with evaluation only?

Given that I train a model, save it off with a metagraph via tf.train.Saver, and then load that graph into a new script/process to test against test data, what is the best way to make sure I iterate over the test data only once?
With my training data, I want to be able to iterate over the entire data set for an arbitrary number of iterations. I use
tf.train.string_input_producer()
to drive a queue of loading files for training, so I can safely leave num_epochs as default (=None) and let other controls drive training termination.
However, when I run the graph for evaluation, I just want to evaluate the test set once (and gather the appropriate statistics).
Initial attempted solution:
Make a tensor for the number of epochs, pass that into tf.train.string_input_producer, and then tf.assign it to the appropriate value based on test/train.
But:
tf.train.string_input_producer only takes integers as num_epochs, so this isn't possible...unless I'm missing something.
Further notes: I use
tf.train.batch()
to read in test/train data that has been serialized into protocol buffers (https://www.tensorflow.org/versions/r0.11/how_tos/reading_data/index.html#file-formats), so I have minimal visibility into how the data is loaded and how far along it is.
tf.train.batch apparently will throw tf.errors.OutOfRangeError, but I'm not clear how to catch that successfully, or if that is even what I really want to do. I tried a very naive
try...except...finally
(like in https://www.tensorflow.org/versions/r0.11/how_tos/reading_data/index.html#creating-threads-to-prefetch-using-queuerunner-objects), which didn't catch the error from tf.train.batch.
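For reference, if the evaluation script builds its own input pipeline, you can pass num_epochs=1 to tf.train.string_input_producer there; the standard pattern from the reading-data guide then catches tf.errors.OutOfRangeError around the loop and shuts the queue threads down with a coordinator (a sketch; eval_op stands for your evaluation step):

# num_epochs creates local variables, so initialize them first:
sess.run(tf.local_variables_initializer())
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(sess=sess, coord=coord)
try:
    while not coord.should_stop():
        sess.run(eval_op)          # runs until the input queue is exhausted
except tf.errors.OutOfRangeError:
    print("Done: reached the end of the test set.")
finally:
    coord.request_stop()           # ask the queue threads to stop...
coord.join(threads)                # ...and wait for them to finish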