This is a follow-up to these SO questions:
What is the need to do sharding of TFRecords files?
optimal size of a tfrecord file
and to this passage from this tutorial:
For this small dataset we will just create one TFRecords file for the
training-set and another for the test-set. But if your dataset is very
large then you can split it into several TFRecords files called
shards. This will also improve the random shuffling, because the
Dataset API only shuffles from a smaller buffer of e.g. 1024 elements
loaded into RAM. So if you have e.g. 100 TFRecords files, then the
randomization will be much better than for a single TFRecords file.
https://github.com/Hvass-Labs/TensorFlow-Tutorials/blob/master/18_TFRecords_Dataset_API.ipynb
So there is an optimal file size, but I am wondering whether there's also an optimal number of elements per file, since it's the elements themselves that get distributed to the GPU's cores?
Are you trying to optimize:
1. initial data randomization?
2. data randomization across training batches and/or epochs?
3. training/validation throughput (i.e., gpu utilization)?
Initial data randomization should be handled when data are initially saved into sharded files. This can be challenging, assuming you can't read the data into memory. One approach is to read all the unique data ids into memory, shuffle those, do your train/validate/test split, and then write your actual data to file shards in that randomized order. Now your data are initially shuffled/split/sharded.
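A minimal sketch of that write path (the shard count, file naming, and the load_all_ids / load_item / serialize_example helpers below are hypothetical placeholders, and only the training split is shown):
import random
import tensorflow as tf

ids = list(load_all_ids())   # hypothetical: the unique data ids, small enough for RAM
random.shuffle(ids)          # the initial randomization happens here, on ids only

num_shards = 100             # pick based on your target file size
for shard in range(num_shards):
    path = 'train/d{:04d}.tfr'.format(shard)
    with tf.io.TFRecordWriter(path) as writer:
        for item_id in ids[shard::num_shards]:   # striding keeps each shard shuffled
            # load_item / serialize_example are placeholders: load one data item by id
            # and turn it into a serialized tf.train.Example string.
            writer.write(serialize_example(load_item(item_id)))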
Initial data randomization will make it easier to maintain randomization during training. However, I'd still say it is 'best practice' to re-shuffle file names and re-shuffle a data memory buffer as part of the train/validate data streams. Typically, you'll set up an input stream using multiple threads/processes. The first step is to randomize the file input streams by re-shuffling the filenames. This can be done like:
train_files = tf.data.Dataset.list_files('{}/d*.tfr'.format(train_dir),
                                         shuffle=True)
Now, if your initial data write was already randomized, you 'could' read the entire data from one file, before going to the next, but that would still impact re-randomization throughout the training process, so typically you interleave file reads, reading a certain number of records from each file. This also improves throughput, assuming you are using multiple file read processes (which you should do, to maximize gpu throughput).
blocksize = 1000  # samples read from one file before switching files
train_data = train_files.interleave(interleaveFiles,
                                    block_length=blocksize,
                                    num_parallel_calls=tf.data.experimental.AUTOTUNE)
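(interleaveFiles isn't defined in this snippet; a minimal sketch of what it could be, assuming the shards are plain TFRecord files, is just a function that opens one shard as a dataset:)
def interleaveFiles(filename):
    # Open each shard as a TFRecordDataset so interleave can cycle
    # through the shards, block_length records at a time.
    return tf.data.TFRecordDataset(filename)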
Here, we're reading 1000 samples from each file, before going on to the next. Again, to re-shuffle the training data each epoch (which may or may not be critical), we re-shuffle the data in memory, setting a memory buffer based on what's available on the machine and how large our data items are (note - before formatting the data for gpu).
buffersize = 1000000  # samples read before shuffling in memory
train_data = train_data.shuffle(buffersize,
                                reshuffle_each_iteration=True)
train_data = train_data.repeat()
The repeat() call is just to allow the data set to 'wrap around' during training. This may or may not be important, depending on how you set up your training process.
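For instance, because repeat() makes the stream endless, a Keras-style training loop (sketched here as one assumed setup; model, num_train_samples, and num_epochs are placeholders) would bound each epoch by a step count instead:
steps_per_epoch = num_train_samples // batchsize
model.fit(train_data, epochs=num_epochs, steps_per_epoch=steps_per_epoch)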
To optimize throughput, you can do 2 things:
1. Alter the order of operations in the data input stream. Typically, if you put your randomization operations early, they can operate on 'low weight' entities, like file names, rather than on tensors.
2. Use pre-fetching to let your cpu processes stream data during gpu calculations:
train_data = train_data.map(mapData,
                            num_parallel_calls=tf.data.experimental.AUTOTUNE)
train_data = train_data.padded_batch(batchsize)
train_data = train_data.prefetch(10)
So, mapping and batching happen last (this is usually preferred for maximizing gpu throughput, but it can depend on other factors, like data size (pre- and post-tensorizing) and how computationally expensive your map function is).
Finally, you can tune the prefetch size to maximize gpu throughput, constrained by system memory and memory speed.
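If you'd rather not hand-tune it, one option (as an alternative to the fixed value above) is to let tf.data size the prefetch buffer for you:
train_data = train_data.prefetch(tf.data.experimental.AUTOTUNE)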
So, how does this all impact the 'optimal' number of data items in each sharded file?
Obviously, if the number of data items per file is less than your blocksize, blocksize becomes irrelevant, and you might as well read each file completely. Typically, if you are going to use this paradigm, you want blocksize << data items/file. I use 10x; so if my blocksize is 1000, I have ~10,000 data items in the file. This may not be optimal, but so far I can maintain >90% gpu usage using this approach on my specific hardware. If you want to tune for your hardware, you could start somewhere around 10x and adjust, based on whatever you are specifically trying to optimize.
If you have very large numbers of files, you may run into problems maintaining good file read streams, but on a modern system you should be able to get to 100,000 files or more and still be fine. Moving large numbers of files around can be difficult, but usually easier than having very small numbers of very big files, so there are some (broad) constraints on file sizes that can impact how many data items/file you end up with. Generally speaking, I'd say having on the order of 100s of files would be ideal for a large dataset. That way you can easily stream files across a network efficiently (again, that will depend on your network). If the data set is small, you'll have 10s to 50s of files, which is fine for streaming, depending on file size (I typically try to hit 100-300MB/file, which works well for moving things around a LAN or WAN).
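As a rough back-of-the-envelope check (the numbers here are purely illustrative), you can back out a shard count and items-per-shard from a target file size:
import math

total_bytes  = 50 * 1024**3    # e.g. a 50 GB dataset (illustrative)
target_bytes = 250 * 1024**2   # aim for ~250 MB per shard (within the 100-300 MB range)
num_items    = 1_000_000       # illustrative total number of data items

num_shards      = math.ceil(total_bytes / target_bytes)   # 205 shards
items_per_shard = num_items // num_shards                 # ~4878 items per shard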
So, I think file-size and number-of-files places much stronger constraints on your process than number of data items/file, so long as you have an appropriate number of data items/file, given your file read blocksize. Again, you could hyper-shard your files (1 data item/file?), and read entire files into memory, without using file blocking. That might work, and it would certainly be lightweight to shuffle file names, rather than data items. But you might also end up with millions of files!
To really optimize, you'll need to set up an end-to-end training system on a particular machine, and then tweak it to see what works best for your particular data, network, and hardware. So long as your data are effectively randomized and your data files are easy to store/use/share, you just want to optimize gpu throughput. I would be surprised if reordering the data input stream and pre-fetching doesn't get you there.
My main issue is: I have 204 GB of training TFRecords for 2 million images, and 28 GB of validation TFRecord files for 302,900 images. It takes 8 hours to train one epoch, and at this rate training will take 33 days. I want to speed that up by using multiple threads and shards, but I am a little bit confused about a couple of things.
In the tf.data.Dataset API there is a shard function. The documentation says the following about it:
Creates a Dataset that includes only 1/num_shards of this dataset.
This dataset operator is very useful when running distributed training, as it allows each worker to read a unique subset.
When reading a single input file, you can skip elements as follows:
d = tf.data.TFRecordDataset(FLAGS.input_file)
d = d.shard(FLAGS.num_workers, FLAGS.worker_index)
d = d.repeat(FLAGS.num_epochs)
d = d.shuffle(FLAGS.shuffle_buffer_size)
d = d.map(parser_fn, num_parallel_calls=FLAGS.num_map_threads)
Important caveats:
Be sure to shard before you use any randomizing operator (such as shuffle).
Generally it is best if the shard operator is used early in the dataset pipeline. For example, when reading from a set of TFRecord files, shard before converting the dataset to input samples. This avoids reading every file on every worker. The following is an example of an efficient sharding strategy within a complete pipeline:
d = Dataset.list_files(FLAGS.pattern)
d = d.shard(FLAGS.num_workers, FLAGS.worker_index)
d = d.repeat(FLAGS.num_epochs)
d = d.shuffle(FLAGS.shuffle_buffer_size)
d = d.interleave(tf.data.TFRecordDataset,
                 cycle_length=FLAGS.num_readers, block_length=1)
d = d.map(parser_fn, num_parallel_calls=FLAGS.num_map_threads)
So my question regarding the code above is: when I make shards of my data using the shard function and set the number of shards (num_workers) to 10, I will have 10 splits of my data. Should I then set num_readers in the d.interleave function to 10 to guarantee that each reader takes one split out of the 10?
And how can I control which split the interleave function will take? Because if I set the shard index (worker_index) in the shard function to 1, it will give me only that one split. Can anyone give me an idea of how to perform this distributed training using the above functions?
And what about num_parallel_calls? Should I set it to 10 as well?
Note that I have a single TFRecords file for training and another one for validation; I don't split the TFRecords files into multiple files.
First of all, how come the dataset is 204 GB for only 2 million images? I think your images are way too large. Try resizing them; after all, you will probably need to resize them to 224 x 224 in the end anyway.
Second, try to reduce the size of your model. Your model could be either too deep or not efficient enough.
Third, try to parallelize your input reading process. It could be the bottleneck.
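A sketch of what that could look like with tf.data (parse_fn, the batch size, and the file name are placeholders; with a single TFRecord file the biggest wins usually come from parallelizing the map step and prefetching, since parallel reads mostly help across multiple files):
import tensorflow as tf

dataset = tf.data.TFRecordDataset('train.tfrecords')   # hypothetical path
dataset = dataset.shuffle(10000)                        # shuffle raw serialized records
dataset = dataset.map(parse_fn,                         # parse_fn: your decode/augment function
                      num_parallel_calls=tf.data.experimental.AUTOTUNE)
dataset = dataset.batch(32)
dataset = dataset.prefetch(tf.data.experimental.AUTOTUNE)   # overlap input with training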
Assume I have a minibatch as a result of this code:
test_minibatch = reader_test.next_minibatch(10)
How can I get the indices of this minibatch as a reference into the original data? Assume my test dataset has 100 rows. How can I know which 10 of the 100 original rows are in the minibatch?
Can you create a column with unique IDs (usually called a GUID / UUID) and read that in the reader? This is one way to map your samples back to the master set. It scales well to very large datasets spanning multiple disks and distributed computing frameworks.
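A minimal sketch of adding such a column up front (assuming the data start out in a pandas DataFrame; the file and column names are hypothetical):
import uuid
import pandas as pd

df = pd.read_csv('test_data.csv')                           # hypothetical source file
df['row_id'] = [str(uuid.uuid4()) for _ in range(len(df))]  # one stable id per row
df.to_csv('test_data_with_ids.csv', index=False)
# The ids that come back with each minibatch can then be joined against this
# file to recover which of the original rows were sampled.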
My dataset is comprised of audio segments of between 5-180 seconds. The number of examples is small enough to allow caching it in memory, instead of reading from the disk over and over. Storing the data in a constant tensor / variable and using tf.train.slice_input_producer will allow me to cache the dataset in memory, but it requires storing all the data in one matrix. Since some examples are much longer than others, this matrix might be unnecessarily large and perhaps too large for the RAM.
I can simply keep a list of numpy arrays for my data and do the whole input reading-randomizing-preprocessing in a non-TensorFlow way with a feed_dict, but I wonder if there is a way to do it without completely giving up on TensorFlow for the input reading-randomizing-preprocessing part.
Thanks!
The more recent tf.data library provides a tf.data.Dataset.cache method to cache an entire dataset into memory or into a file.
For instance:
dataset = ...
dataset = dataset.map(preprocessing_fn) # apply preprocessing
dataset = dataset.cache() # cache entire dataset in memory after preprocessing
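If the preprocessed data turn out not to fit in RAM after all, cache can also spill to a file on disk instead (the path here is just an example):
dataset = dataset.cache('/tmp/audio_dataset_cache')  # cache to disk instead of memory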
I've provided more details on how to use cache() in this answer.
For example, the comments for the Tensorflow image captioning example model state:
NOTE: This script will consume around 100GB of disk space because each image
in the MSCOCO dataset is replicated ~5 times (once per caption) in the output.
This is done for two reasons:
1. In order to better shuffle the training data.
2. It makes it easier to perform asynchronous preprocessing of each image in
TensorFlow.
The primary goal of this question is to see if there is an alternative to this type of duplication. In my use case, storing the data in this way would require each image to be duplicated in the TFRecord files many more times, on the order of 20 - 50 times.
I should note first that I have already fed the images through VGGnet to extract 4096 dim features, and I have these stored as a mapping between filename and the vectors.
Before switching over to Tensorflow, I had been feeding batches containing filename strings and then looking up the corresponding vector on a per-batch basis. This allows me to store all of the image data in ~15GB without needing to duplicate the data on disk.
My first attempt to do this in TensorFlow involved storing indices in the TFExample buffers and then doing a "preprocessing" step to slice into the corresponding matrix:
img_feat = pd.read_pickle("img_feats.pkl")
img_matrix = np.stack(img_feat)
preloaded_images = tf.Variable(img_matrix)
first_image = tf.slice(preloaded_images, [0,0], [1,4096])
However, in this case, Tensorflow disallows a variable larger than 2GB. So my next thought was to partition this across several variables:
img_tensors = []
for i in range(NUM_SPLITS):
    with tf.Graph().as_default():
        img_tensors.append(tf.Variable(img_matrices[i], name="preloaded_images_%i" % i))

first_image = tf.concat(1, [tf.slice(t, [0, 0], [1, 4096 // NUM_SPLITS]) for t in img_tensors])
In this case, I'm forced to store each partition on a separate graph, because it seems any one graph cannot be this large either. However, now the concat fails because each tensor I am concatenating is on a separate graph.
Any advice on incorporating a large amount (~15 GB) of preloaded data into the TensorFlow graph would be appreciated.
Potentially related is this question; however, in this case I'd like to override the decoding of the actual JPEG file with the preprocessed value in a tensor op.