TensorFlow TFRecord files of unequal size

I currently split my data into several TFrecord files and then read the data by shuffling & interleaving them. My code is below:
from glob import glob
import tensorflow as tf

path_to_files = glob('train_*.tfrecord')
n_files = len(path_to_files)
tf_dataset = tf.data.Dataset.list_files(path_to_files).shuffle(n_files)
tf_dataset = tf_dataset.interleave(lambda filename: tf.data.TFRecordDataset(filename, num_parallel_reads=4).map(parseFunc), cycle_length=n_files)
tf_dataset = tf_dataset.shuffle(buffer_size=n_files*3)
tf_dataset = tf_dataset.batch(batchsize)
tf_dataset = tf_dataset.prefetch(buffer_size=batchsize)
I have 2 questions:
1) Is my code indeed doing what I intend it to do? Namely, does it sample randomly and equally from each of the TFRecord files?
2) What happens if the TFRecord files contain very different numbers of samples (e.g. one has 50 samples and another 500)? Does this affect the randomness at all?
Thanks!

So I ran a simulation to test this as follows: I saved 3 files with:
file 1: ~1000 samples of the number 1
file 2: ~2000 samples of the number 2
file 3: ~3000 samples of the number 3
Then I built the iterator with the above code and sampled batches until the iterator ran out. Below are my results.
The results show that TF does NOT weigh the TFRecord files by their size when it randomly samples from them. Rather, it samples from each of the unequally sized files with equal probability until one of them runs out of samples, and then continues sampling from each of the remaining files with equal probability.
Take-home: to get truly random sampling, make sure that your TFRecord files are either equally sized or that the labels are homogeneously distributed between them.
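Alternatively, if resizing the files is impractical, you can weight the sampling by file size yourself by building one dataset per file and mixing them explicitly, which gives every example an (approximately) equal chance regardless of which file it lives in. A minimal sketch, assuming tf.data.experimental.sample_from_datasets is available in your TF version; the file names and record counts below are placeholders:
import tensorflow as tf

filenames = ['train_0.tfrecord', 'train_1.tfrecord', 'train_2.tfrecord']  # hypothetical shards
sizes = [1000.0, 2000.0, 3000.0]  # hypothetical record counts per shard

per_file = [tf.data.TFRecordDataset(f) for f in filenames]
weights = [s / sum(sizes) for s in sizes]  # sample proportionally to file size

# Draws from each file with probability proportional to its size,
# so examples in large files are not under-represented.
mixed = tf.data.experimental.sample_from_datasets(per_file, weights=weights)
mixed = mixed.shuffle(buffer_size=1000).batch(32)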

Q1: Not exactly. First of all, this line doesn't need an explicit shuffle; list_files already has that parameter, and it can be controlled by a seed value.
tf_dataset = tf.data.Dataset.list_files(path_to_files, shuffle=True, seed=1)
Without a repeat call, you'll get an end-of-sequence error once the iterator has exhausted all your files, so it should look like this. Called with no argument (or None), it will iterate indefinitely; alternatively you can set an exact number of epochs.
tf_dataset = tf.data.Dataset.list_files(path_to_files, shuffle=True, seed=1).repeat()
Q2: It's fine if the sizes of the files differ. The only consequence is that the contents of a large file will have a higher chance of being chosen by the iterator, but it won't affect randomness. The line below will do its job of shuffling the interleaved dataset. The only thing to remember is that the shuffle buffer controls the amount of data loaded into memory. It's generally recommended to set it to the number of examples in the dataset (the total across all files), but in some cases that may become a substantial overhead and even cause OOM.
tf_dataset = tf_dataset.shuffle(buffer_size=n_files*3)
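Putting the two suggestions together, the full pipeline could look roughly like the sketch below (parseFunc and batchsize are the ones from the question; the buffer size of 10 * n_files is just a placeholder to tune):
import tensorflow as tf
from glob import glob

path_to_files = glob('train_*.tfrecord')
n_files = len(path_to_files)
# Shuffle the file names via list_files itself and repeat so the iterator never runs dry.
tf_dataset = tf.data.Dataset.list_files(path_to_files, shuffle=True, seed=1).repeat()
tf_dataset = tf_dataset.interleave(
    lambda filename: tf.data.TFRecordDataset(filename).map(parseFunc),
    cycle_length=n_files)
# The shuffle buffer bounds how many parsed examples are held in memory at once.
tf_dataset = tf_dataset.shuffle(buffer_size=10 * n_files)
tf_dataset = tf_dataset.batch(batchsize)
tf_dataset = tf_dataset.prefetch(1)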

Related

Why is TensorFlow's tf.data.Dataset.shuffle so slow?

The shuffle step in the following code runs very slowly for a moderate buffer_size (say 1000):
filenames = tf.constant(filenames)
dataset = tf.data.Dataset.from_tensor_slices((filenames, labels))
dataset = dataset.map(_parse_function)
dataset = dataset.batch(batch_size)
dataset = dataset.shuffle(buffer_size)
If we use numpy to shuffle the data, the code looks as follows:
import numpy as np
idx = np.arange(len(filenames))
np.random.shuffle(idx)
new_filenames = [filenames[i] for i in idx]
next_batch_filenames = new_filenames[:batch_size]
# get the corresponding files in batch
This is much faster. I wonder if TF does something beyond simply shuffling the data.
As Anton Codes wrote, your first snippet shuffles batches of whatever _parse_function parses from your files (probably feature data), while your second snippet only shuffles filenames.
If shuffling on file level is sufficient, you can actually achieve (roughly) the same performance via the tf.data.Dataset API:
dataset = tf.data.Dataset.from_tensor_slices((filenames, labels))
dataset = dataset.shuffle(len(filenames)) # shuffle file names
dataset = dataset.map(_parse_function)
dataset = dataset.batch(batch_size)
This practice of shuffling "pointers" to your training samples instead of the samples themselves can often improve performance.
NumPy might still be a little more efficient, though, due to the overhead of shuffling inside the computational graph (which tf.data.Dataset.shuffle does; there is actually a C++ kernel specifically for this operation).
The advantage of the tf.data.Dataset approach is that it can automatically reshuffle the Dataset after each epoch.
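As a rough sketch of that reshuffling behaviour (using the same filenames, labels and _parse_function as in the question; reshuffle_each_iteration is the relevant knob and defaults to True):
dataset = tf.data.Dataset.from_tensor_slices((filenames, labels))
# A new file order is drawn every time the dataset is iterated (i.e. every epoch).
dataset = dataset.shuffle(len(filenames), reshuffle_each_iteration=True)
dataset = dataset.map(_parse_function)
dataset = dataset.batch(batch_size)
dataset = dataset.repeat(num_epochs)  # each repetition sees a freshly shuffled file order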
The comparison is of two quite different operations.
Your dataset = tf.data.Dataset.from_tensor_slices((filenames, labels)) reads from disk, i.e. physical long-term storage, possibly a magnetic spinning hard drive. This is slow. If you have the ability to store all of this in RAM instead, or on an ultra-fast RAID-style flash drive, then you'll address your largest bottleneck.
You also have a _parse_function that is fired off for each data point, every time there is a data read. The computation of that parse will take time and, depending on what is in there, it could be significant.
The comparison to numpy isn't really fair, in that your numpy example doesn't involve reading from disk or parsing data.
That should be the bulk of the difference. If you've addressed the above, the next place to look for more speedup is with these lines
3) dataset = dataset.map(_parse_function)
4) dataset = dataset.batch(batch_size)
5) dataset = dataset.shuffle(buffer_size)
These are your code lines. Line 4 makes batches of data, possibly 32 at a time (batch_size, to be exact). Then line 5 kicks in and tries to shuffle your batches of 32 in a buffer of length 1000. That happens every time the training loop requests a new training batch. The shuffle step shuffles all those big batches, randomly picks one out and adds a new one ... every ... single ... time.
We can reverse the order of batch and shuffle like so
3) dataset = dataset.map(_parse_function)
4) dataset = dataset.shuffle(buffer_size)
5) dataset = dataset.batch(batch_size)
This is better anyway, because before, the contents of the batches were always the same and only their order was mixed; this way the contents of the batches are randomized as well. Next, the shuffle only has to shuffle 1000 items, not 32x1000 items. Last, we can question whether we really need a buffer size of 1000. Let's say our dataset is 2000 items. A buffer size of 320 and a batch size of 32 will certainly randomize our data well, effectively giving any item in the buffer a 10% chance of going into the next batch and a 90% chance of being pushed back to mix with other data. That's pretty good. A buffer size of 64 and a batch size of 64 seems almost useless, other than that items are pulled out of the buffer randomly one at a time, and so actually have a chance of not being drawn and of mixing with later data. Just not much of one.
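A quick back-of-the-envelope check of those numbers (plain Python; the formula assumes a full buffer that is topped up after every draw, which is roughly how shuffle behaves):
# A buffered item ends up in the next batch of size B, drawn one element at a
# time from a buffer of size N, with probability ~ 1 - ((N - 1) / N) ** B.
def chance_in_next_batch(buffer_size, batch_size):
    return 1 - ((buffer_size - 1) / buffer_size) ** batch_size

print(chance_in_next_batch(320, 32))  # ~0.095, i.e. roughly the "10%" above
print(chance_in_next_batch(64, 64))   # ~0.63, so most buffered items are drawn almost immediately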

Shuffling the training dataset with Tensorflow object detection api

I'm working on a logo detection algorithm using the Faster-RCNN model with the Tensorflow object detection api.
My dataset is alphabetically ordered (so there are a hundred adidas logos, then a hundred apple logos, etc.), and I would like it to be shuffled while training.
I've put some values in the config file:
train_input_reader: {
  shuffle: true
  queue_capacity: some value
  min_after_dequeue: some other value
}
However, whatever values I put in, the algorithm at first trains on all of the 'a' logos (adidas, apple and so on) and only after a while starts to see the 'b' logos (bmw etc.), then the 'c' ones, and so forth.
Of course I could just shuffle my input dataset directly, but I would like to understand the logic behind it.
PS: I've seen this post about shuffling and min_after_dequeue, but I still don't quite get it. My batch size is 1, so it shouldn't be using tf.train.shuffle_batch() but only tf.RandomShuffleQueue.
My training dataset size is 5000, and even if I set min_after_dequeue: 4000 or 5000, it is still not shuffled properly. Why is that?
Update:
@AllenLavoie It's a bit hard for me, as there are a lot of dependencies and I'm new to TensorFlow.
But in the end the queue is constructed by
_, string_tensor = tf.contrib.slim.parallel_reader.parallel_read(
    config.input_path,
    reader_class=tf.TFRecordReader,
    num_epochs=(input_reader_config.num_epochs
                if input_reader_config.num_epochs else None),
    num_readers=input_reader_config.num_readers,
    shuffle=input_reader_config.shuffle,
    dtypes=[tf.string, tf.string],
    capacity=input_reader_config.queue_capacity,
    min_after_dequeue=input_reader_config.min_after_dequeue)
It seems that when I put num_readers = 1 in the config file, the dataset is finally shuffled the way I want (at least at the beginning), but with more readers the logos somehow come out in alphabetical order at the start.
I recommend shuffling the dataset prior to training. The way shuffling currently happens is imperfect and my guess at what is happening is that at the beginning the queue starts off empty and only gets examples that start with 'A' --- after a while it may be more shuffled, but there is no getting around the beginning part when the queue hasn't been filled yet.
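A minimal sketch of that offline pre-shuffle, assuming TF1-style APIs and hypothetical file names (it simply reads every record into memory, shuffles once, and rewrites a new file):
import random
import tensorflow as tf

records = list(tf.python_io.tf_record_iterator('train_ordered.tfrecord'))
random.shuffle(records)  # shuffle once, on disk, before training ever starts

with tf.python_io.TFRecordWriter('train_shuffled.tfrecord') as writer:
    for rec in records:
        writer.write(rec)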

Mixing properties of multiple records in one .tfrecords file

I have a dataset of around 1M examples. I wrote each example to a separate .tfrecord file, which resulted in around 500GB sitting in some network location.
Reading multiple small files from this network location is extremely slow, so I'm thinking about grouping around 100 examples into one .tfrecord file.
I am worried though, that examples from the same .tfrecords file will always appear in the same minibatch (or one minibatch after each other), which is bad for the proper mixing of training data I want to have.
My input pipeline is the following:
I have a tf.train.string_input_producer(files, capacity=100000) for the filenames queue, using TFRecordReader.read to read from the filenames queue, and use tf.train.batch that creates an examples queue and returns a batch from it using dequeue_many.
I fear that once the filenames queue dequeues a filename, all examples from it will be read and enqueued into the examples FIFO queue created by tf.train.batch, which will result in the same examples being in the same minibatches over and over.
Is it really going to have the same examples in the same minibatch over and over? If so, should I create a Shuffle queue for examples, instead of using tf.train.batch?
One of the points of TFRecord is to store many examples in the same file, to overcome the problem of opening/closing many files. So your approach of one tfrecord per example does not make sense. You could even put all examples in one file, or have 10k per file. Regarding shuffling: there are two types of shuffling which serve different purposes and shuffle different things:
tf.train.string_input_producer shuffle: Boolean. If true, the strings are randomly shuffled within each epoch. So if you have a few files ['file1', 'file2', ..., 'filen'], this randomly selects a file from this list. If false, the files follow one after another.
tf.train.shuffle_batch creates batches by randomly shuffling tensors. So it takes batch_size tensors from your queue (you will need to start the queue runners with tf.train.start_queue_runners) and shuffles them.
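Putting the two together, a rough TF1-style sketch of the pipeline described above (the shard names, the feature spec and the queue sizes are placeholders):
import tensorflow as tf

files = ['part-0.tfrecord', 'part-1.tfrecord']  # hypothetical shards with ~100 examples each

# Shuffles the *file* order within each epoch.
filename_queue = tf.train.string_input_producer(files, shuffle=True)

reader = tf.TFRecordReader()
_, serialized = reader.read(filename_queue)
features = tf.parse_single_example(
    serialized, features={'x': tf.FixedLenFeature([10], tf.float32)})

# Shuffles *examples* across files in a RandomShuffleQueue before batching,
# so records from the same shard get mixed with records from other shards.
batch = tf.train.shuffle_batch(
    [features['x']], batch_size=32, capacity=2000, min_after_dequeue=1000)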

Incorporating very large constants in Tensorflow

For example, the comments for the Tensorflow image captioning example model state:
NOTE: This script will consume around 100GB of disk space because each image
in the MSCOCO dataset is replicated ~5 times (once per caption) in the output.
This is done for two reasons:
1. In order to better shuffle the training data.
2. It makes it easier to perform asynchronous preprocessing of each image in
TensorFlow.
The primary goal of this question is to see if there is an alternative to this type of duplication. In my use case, storing the data in this way would require each image to be duplicated in the TFRecord files many more times, on the order of 20 - 50 times.
I should note first that I have already fed the images through VGGnet to extract 4096 dim features, and I have these stored as a mapping between filename and the vectors.
Before switching over to Tensorflow, I had been feeding batches containing filename strings and then looking up the corresponding vector on a per-batch basis. This allows me to store all of the image data in ~15GB without needing to duplicate the data on disk.
My first attempt to do this in TensorFlow involved storing indices in the TFExample buffers and then doing a "preprocessing" step to slice into the corresponding matrix:
import numpy as np
import pandas as pd
import tensorflow as tf
img_feat = pd.read_pickle("img_feats.pkl")
img_matrix = np.stack(img_feat)
preloaded_images = tf.Variable(img_matrix)
first_image = tf.slice(preloaded_images, [0, 0], [1, 4096])
However, in this case, Tensorflow disallows a variable larger than 2GB. So my next thought was to partition this across several variables:
img_tensors = []
for i in range(NUM_SPLITS):
    with tf.Graph().as_default():
        img_tensors.append(tf.Variable(img_matrices[i], name="preloaded_images_%i" % i))
first_image = tf.concat(1, [tf.slice(t, [0, 0], [1, 4096 // NUM_SPLITS]) for t in img_tensors])
In this case, I'm forced to store each partition on a separate graph, because it seems any one graph cannot be this large either. However, now the concat fails because each tensor I am concatenating is on a separate graph.
Any advice on incorporating a large amount (~15GB) of preloaded data into the TensorFlow graph?
Potentially related is this question; however in this case I'd like to override the decoding of the actual JPEG file with the preprocessed value in a tensor op.
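For reference, a commonly suggested workaround for the 2GB GraphDef limit (not from this thread, so treat it as a sketch) is to initialize a variable from a placeholder, so the matrix is fed once at initialization time instead of being baked into the graph as a constant:
import numpy as np
import tensorflow as tf

# Stand-in for the (num_images, 4096) feature matrix from the question.
img_matrix = np.zeros((1000, 4096), dtype=np.float32)

init_ph = tf.placeholder(tf.float32, shape=img_matrix.shape)
# collections=[] keeps this large variable out of the global-variables collection,
# so it is not touched by the usual global initializer or the Saver.
preloaded_images = tf.Variable(init_ph, trainable=False, collections=[])

first_image = tf.slice(preloaded_images, [0, 0], [1, 4096])

with tf.Session() as sess:
    sess.run(preloaded_images.initializer, feed_dict={init_ph: img_matrix})
    print(sess.run(first_image).shape)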

Getting each example exactly once

For monitoring my model's performance on my evaluation dataset, I'm using tf.train.string_input_producer for the filenames queue on .tfr files, then I feed the parsed examples to the tf.train.batch function, which produces batches of a fixed size.
Assume my evaluation dataset contains exactly 761 examples (a prime number). To read all the examples exactly once, I would have to use a batch size that divides 761, but there is no such size except 1, which would be too slow, and 761, which will not fit on my GPU. Is there a standard way to read each example exactly once?
Actually, my dataset size is not 761, but there is no number in the reasonable range of 50-300 that divides it exactly. Also I'm working with many different datasets, and finding a number that approximately divides the number of examples in each dataset can be a hassle.
Note that using the num_epochs parameter to tf.train.string_input_producer does not solve the issue.
Thanks!
You can use reader.read_up_to as in this example. Your last batch will be smaller, so you need to make sure your network doesn't hard-wire the batch size anywhere.
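A rough sketch of that pattern, assuming TF1-style queue runners; the file name, feature spec and read size of 64 are placeholders (num_epochs=1 plus allow_smaller_final_batch lets the final, smaller batch through instead of dropping the leftover examples):
import tensorflow as tf

filename_queue = tf.train.string_input_producer(['eval.tfr'], num_epochs=1, shuffle=False)

reader = tf.TFRecordReader()
# read_up_to returns at most 64 serialized records per call; the last call may return fewer.
_, serialized = reader.read_up_to(filename_queue, num_records=64)

examples = tf.parse_example(serialized, features={'x': tf.FixedLenFeature([10], tf.float32)})

batch = tf.train.batch(
    [examples['x']],
    batch_size=64,
    enqueue_many=True,               # each read already yields a vector of examples
    allow_smaller_final_batch=True)  # so the trailing examples are not dropped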