I have a quick question about the randomizationWindow parameter of the reader. The documentation says it controls how much of the data is held in memory, but I'm a little unclear about what effect it has on the randomness of the data. If the training data file starts with one distribution of data and ends with a completely different one, will setting a randomization window smaller than the data size cause the data fed to the trainer not to come from a homogeneous distribution? I just wanted to double check.
To give a bit more detail on randomization/IO:
All corpus/data is always split into chunks. Chunks help make IO efficient, because all sequences of a chunk are read in one go (usually a chunk is 32/64 MB).
When it comes to randomization, there are two steps:
1. All chunks are randomized.
2. Given a randomization window of N samples, the randomizer creates a rolling window of M chunks that together contain approximately N samples. All sequences inside this rolling window are randomized. When all sequences of a chunk have been processed, the randomizer can release it and start loading the next chunk asynchronously.
When randomizationWindow is set to a window smaller than the entire data size, the data is divided into randomizationWindow-sized chunks and the order of chunks is randomized. Then, within each chunk, the samples are randomized.
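As a rough, framework-agnostic illustration of this two-level scheme (this is not CNTK code; the chunk size, the window size in chunks, and the use of fixed rather than truly rolling windows are simplifications made up for this sketch):

import random

def chunked_shuffle(samples, chunk_size, window_chunks, seed=0):
    # Illustrative two-level shuffle: randomize the chunk order, then shuffle
    # samples inside windows of `window_chunks` consecutive chunks.
    rng = random.Random(seed)

    # Step 1: split the corpus into fixed-size chunks and randomize their order.
    chunks = [samples[i:i + chunk_size] for i in range(0, len(samples), chunk_size)]
    rng.shuffle(chunks)

    # Step 2: shuffle the samples within each window of chunks and emit them.
    for start in range(0, len(chunks), window_chunks):
        window = [s for chunk in chunks[start:start + window_chunks] for s in chunk]
        rng.shuffle(window)
        yield from window

# Chunk order is randomized globally, but sample-level mixing only happens
# inside each window of `window_chunks` chunks.
print(list(chunked_shuffle(list(range(20)), chunk_size=4, window_chunks=2)))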
Related
I have a dataset of images that is too large to store in memory. What I plan to do is load pairs of image paths and corresponding labels as my dataset, then use a generator function during training to convert only the paths in my batch to images before feeding them to the network.
Is data.Dataset.map() a good way to do this? Does it return a mapping function that can be applied only to the current batch during training, or does it perform the mapping operation on the whole dataset at once, occupying lots of memory? In the second case, what is an alternative?
A few tutorials I went through made me believe the mapping takes place per batch, but this quote from the documentation suggests a whole new dataset is returned: "This transformation applies map_func to each element of this dataset, and returns a new dataset containing the transformed elements, in the same order as they appeared in the input."
The key thing to understand here is that tf.data.Dataset objects are generally "lazy" in that elements are only processed as needed (in a batched Dataset, elements == batches). When iterating over a dataset, this usually means that only the next requested element is prepared and then returned. So to answer your question: When using map to load data from disk, and applying this to a dataset of file names, only one batch of the loaded data should be stored in memory at the same time, and you should be able to process the dataset just fine. However, this can significantly slow down training if loading the files is a bottleneck in terms of speed.
There are some exceptions though, for example:
When you use the shuffle method, you need to provide a buffer size, and AFAIK the entire buffer is preprocessed at once. This can lead to issues since you want a large buffer for good shuffling, but this requires more memory. Thus you probably want to use shuffle before applying map.
The prefetch method results in multiple elements being prepared in order to avoid the model having to wait for the next batch to be processed.
Note that this lazy behavior also has some disadvantages, e.g.
You can only iterate over datasets sequentially; there is no random access.
A dataset doesn't even know how many elements it contains (this would require iterating over the entire set).
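To make the lazy-loading pattern concrete, here is a minimal sketch of what the question describes: a dataset of (path, label) pairs that decodes images lazily, with shuffle applied to the cheap path strings before map, and prefetch to overlap loading with training. The file names, image size, and decode function are placeholders, not anything from the original post:

import tensorflow as tf

# Hypothetical inputs: parallel lists of image file paths and integer labels.
image_paths = ["images/img_0001.png", "images/img_0002.png"]
labels = [0, 1]

def load_image(path, label):
    # Runs lazily, element by element, only when the dataset is iterated.
    image = tf.io.read_file(path)
    image = tf.io.decode_png(image, channels=3)
    image = tf.image.resize(image, [224, 224]) / 255.0  # assumed target size
    return image, label

dataset = (
    tf.data.Dataset.from_tensor_slices((image_paths, labels))
    .shuffle(buffer_size=len(image_paths))  # shuffle path strings, not decoded images
    .map(load_image, num_parallel_calls=tf.data.experimental.AUTOTUNE)
    .batch(32)
    .prefetch(tf.data.experimental.AUTOTUNE)  # keep a few batches ready during training
)

# Only the batches actually requested are decoded and held in memory.
for images, batch_labels in dataset.take(1):
    print(images.shape, batch_labels.shape)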
I'm building a Deep Neural Network in Kotlin (I know Python would be better, but I have to do that in Kotlin).
For training the net I need a huge amount of data from the MNIST database, this means I need to read about 60,000 images from a single file in IDX format and store them for simultaneous use.
Every image consists of 784 Bytes. So the total size is:
784*60,000 = 47,040,000 = ~47 MB of training data.
Which ain't that much, since I'm running the JVM in an 8GB RAM env.
After reading an image I need to convert it to a KMatrix, a custom data structure for matrix math operations. Under the hood of a KMatrix there's an Array<Array<Double>>.
I need a structure to store all the images at once, so I'm currently using a List<KMatrix>, which basically translates to a List<Array<Array<Double>>>.
The problem is that while building the List<KMatrix> the Garbage Collector runs out of memory, throwing an OutOfMemoryError: GC overhead limit exceeded.
I wonder if the problem is the data structures I'm using (i.e. should I use an ArrayList instead of an Array?) or maybe how I'm building the whole thing up (i.e. maybe I need to do some optimization work).
I'll put the code, if needed, as soon as I can.
Thanks for your help.
Self-answer with the summarized solution (Thanks to answers by #Tenfour04 and #gidds)
As #Tenfour04 stated, you have basically three alternatives to the Array<Array<Double>> for the KMatrix:
an Array<DoubleArray>, which maintains the same logic as the original but saves a lot of memory and improves performance;
a 1-dimensional DoubleArray, which saves a bit more memory and gains a bit more performance, but adds complexity through the index mapping of the array (the [i; j] element of the matrix becomes the [i * w + j] element of the array); as #gidds pointed out, this probably isn't worth it;
a 1-D DoubleBuffer created with ByteBuffer.allocateDirect(8 * size).asDoubleBuffer(), which improves performance even further but only offers get and put methods, so it is useless if you need simple and direct set operations.
Conclusion
I chose option 2, since in my case I'm performing very intensive operations, but in common cases option 1 is probably the best, as it balances complexity and performance.
If you need the highest-performance structure and get/put methods are enough, I'd say option 3 is what you're looking for.
Hope this helps someone
This is a follow-up to these SO questions
What is the need to do sharding of TFRecords files?
optimal size of a tfrecord file
and this passage from this tutorial:
"For this small dataset we will just create one TFRecords file for the training-set and another for the test-set. But if your dataset is very large then you can split it into several TFRecords files called shards. This will also improve the random shuffling, because the Dataset API only shuffles from a smaller buffer of e.g. 1024 elements loaded into RAM. So if you have e.g. 100 TFRecords files, then the randomization will be much better than for a single TFRecords file."
https://github.com/Hvass-Labs/TensorFlow-Tutorials/blob/master/18_TFRecords_Dataset_API.ipynb
So there is an optimal file size, but I am wondering whether there is also an optimal number of elements per file, since it's the elements themselves that are being distributed to the GPU cores?
Are you trying to optimize:
1. initial data randomization?
2. data randomization across training batches and/or epochs?
3. training/validation throughput (i.e., gpu utilization)?
Initial data randomization should be handled when data are initially saved into sharded files. This can be challenging, assuming you can't read the data into memory. One approach is to read all the unique data ids into memory, shuffle those, do your train/validate/test split, and then write your actual data to file shards in that randomized order. Now your data are initially shuffled/split/sharded.
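A minimal sketch of that write-time shuffling, assuming each data item can be looked up by id and serialized one at a time; the load_example helper, the shard count, and the 'd*.tfr' file naming (chosen to match the glob used below) are assumptions for illustration:

import random
import tensorflow as tf

def write_shards(example_ids, load_example, out_pattern, num_shards, seed=0):
    # Shuffle only the (small) ids in memory, then stream the real data to shards.
    ids = list(example_ids)
    random.Random(seed).shuffle(ids)

    writers = [tf.io.TFRecordWriter(out_pattern.format(i)) for i in range(num_shards)]
    for position, example_id in enumerate(ids):
        # load_example is hypothetical: it returns one serialized tf.train.Example.
        writers[position % num_shards].write(load_example(example_id))
    for writer in writers:
        writer.close()

# Usage sketch (after doing the train/validate/test split on the shuffled ids):
# write_shards(train_ids, load_example, "train_dir/d{:03d}.tfr", num_shards=100)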
Initial data randomization will make it easier to maintain randomization during training. However, I'd still say it is 'best practice' to re-shuffle file names and re-shuffle a data memory buffer as part of the train/validate data streams. Typically, you'll set up an input stream using multiple threads/processes. The first step is to randomize the file input streams by re-shuffling the filenames. This can be done like:
train_files = tf.data.Dataset.list_files('{}/d*.tfr'.format(train_dir),
                                         shuffle=True)
Now, if your initial data write was already randomized, you 'could' read the entire data from one file, before going to the next, but that would still impact re-randomization throughout the training process, so typically you interleave file reads, reading a certain number of records from each file. This also improves throughput, assuming you are using multiple file read processes (which you should do, to maximize gpu throughput).
blocksize = 1000  # samples read from one file before switching files
# interleaveFiles (not shown) should take a filename and return a dataset over
# that file, e.g. a tf.data.TFRecordDataset.
train_data = train_files.interleave(interleaveFiles,
                                    block_length=blocksize,
                                    num_parallel_calls=tf.data.experimental.AUTOTUNE)
Here, we're reading 1000 samples from each file, before going on to the next. Again, to re-shuffle the training data each epoch (which may or may not be critical), we re-shuffle the data in memory, setting a memory buffer based on what's available on the machine and how large our data items are (note - before formatting the data for gpu).
buffersize = 1000000  # samples read before shuffling in memory
train_data = train_data.shuffle(buffersize,
                                reshuffle_each_iteration=True)
train_data = train_data.repeat()
The repeat() call is just to allow the data set to 'wrap around' during training. This may or may not be important, depending on how you set up your training process.
To optimize throughput, you can do 2 things:
1. alter the order of operations in the data input stream. Typically, if you put your randomization operations early, they can operate on 'low weight' entities, like file names, rather than on tensors.
2. use pre-fetching to let your cpu processes stream data during gpu calculations.
# mapData (not shown) is the per-record parsing/feature-extraction function.
train_data = train_data.map(mapData,
                            num_parallel_calls=tf.data.experimental.AUTOTUNE)
train_data = train_data.padded_batch(batchsize)
train_data = train_data.prefetch(10)
So, mapping and batching happen last (this is usually preferred for maximizing gpu throughput, but it can depend on other factors, like data size (pre- and post-tensorizing) and how computationally expensive your map function is).
Finally, you can tune the prefetch size to maximize gpu throughput, constrained by system memory and memory speed.
So, how does this all impact the 'optimal' number of data items in each sharded file?
Obviously, if your data/file size is > your blocksize, blocksize becomes irrelevant, and you might as well read each file completely. Typically, if you are going to use this paradigm, you want blocksize << data/file. I use 10x; so if my blocksize is 1000, I have ~10,000 data items in the file. This may not be optimal, but so far I can maintain >90% gpu usage using this approach on my specific hardware. If you want to tune for your hardware, you could start somewhere around 10x and adjust, based on whatever you are specifically trying to optimize.
If you have very large numbers of files, you may run into problems maintaining good file read streams, but on a modern system you should be able to get to 100,000 files or more and still be fine. Moving large numbers of files around can be difficult, but usually easier than having very small numbers of very big files, so there are some (broad) constraints on file sizes that can impact how many data items/file you end up with. Generally speaking, I'd say having on the order of 100s of files would be ideal for a large dataset. That way you can easily stream files across a network efficiently (again, that will depend on your network). If the data set is small, you'll have 10s to 50s of files, which is fine for streaming, depending on file size (I typically try to hit 100-300MB/file, which works well for moving things around a LAN or WAN).
So, I think file-size and number-of-files places much stronger constraints on your process than number of data items/file, so long as you have an appropriate number of data items/file, given your file read blocksize. Again, you could hyper-shard your files (1 data item/file?), and read entire files into memory, without using file blocking. That might work, and it would certainly be lightweight to shuffle file names, rather than data items. But you might also end up with millions of files!
To really optimize, you'll need to set up an end-to-end training system on a particular machine, and then tweak it to see what works best for your particular data, network, and hardware. So long as your data are effectively randomized and your data files are easy to store/use/share, you just want to optimize gpu throughput. I would be surprised if reordering the data input stream and pre-fetching doesn't get you there.
So I set the randomization window to 100,000. In my log I can see that it's oscillating between 0 errors and a lot of errors, which makes me wonder if the data is truly random. The training data is made up of sequences where the input is typically about 50 tokens and the output is 6 tokens for about 99% of the sequences, and maybe about 400 tokens in the other 1% (and these sequences are the most important to learn how to output, of course). It seems like more than one of the longer sequences may be getting clumped together, and that's why the error rate might go up all of a sudden. Is that possible?
Please try to specify a larger randomization window if your samples are small, e.g. randomizationWindow=100000000. It can be that your window currently spans only a single chunk; then the data will only be randomized inside that chunk, not between chunks.
(You can see how the data is split if you specify verbosity=4 in the reader section; the log then includes the randomized window [) interval information.)
The more data you can put in memory, the better. It also helps from a performance perspective, because (after the initial load) the readers can prefetch new chunks while the current data is being processed, so your GPU won't be IO bound.
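For reference, a reader section with these settings might look roughly like the following; only randomize, randomizationWindow, and verbosity are the settings discussed in this thread, while the readerType, file name, and input stream definitions are placeholders:

reader = {
    readerType = "CNTKTextFormatReader"   # assumed text-format reader
    file = "train.ctf"                    # placeholder data file
    randomize = true
    randomizationWindow = 100000000       # large window, as suggested above
    verbosity = 4                         # logs how the data is split into chunks/windows
    input = {
        # placeholder stream definitions
        features = { dim = 50 ; format = "dense" }
        labels   = { dim = 6  ; format = "dense" }
    }
}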
For monitoring my model's performance on my evaluation dataset, I'm using tf.train.string_input_producer for the filenames queue on .tfr files, then I feed the parsed examples to the tf.train.batch function, that produces batches of a fixed size.
Assume my evaluation dataset contains exactly 761 examples (a prime number). To read all the examples exactly once, I would need a batch size that divides 761, but there is none except 1, which would be too slow, and 761, which would not fit on my GPU. Is there a standard way to read each example exactly once?
Actually, my dataset size is not 761, but there is no number in the reasonable range of 50-300 that divides it exactly. Also I'm working with many different datasets, and finding a number that approximately divides the number of examples in each dataset can be a hassle.
Note that using the num_epochs parameter to tf.train.string_input_producer does not solve the issue.
Thanks!
You can use reader.read_up_to as in this example. Your last batch will be smaller, so you need to make sure your network doesn't hard-wire the batch size anywhere.
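A rough sketch of that read_up_to pattern with the old queue-based input pipeline (TF 1.x style; eval_filenames, the capacity value, and the parsing step are placeholders):

import tensorflow as tf  # TF 1.x queue-based API assumed

batch_size = 64  # does not need to divide the dataset size

filename_queue = tf.train.string_input_producer(eval_filenames, num_epochs=1, shuffle=False)
reader = tf.TFRecordReader()
# read_up_to returns *up to* num_records serialized examples; the last call of
# the epoch may return fewer, which is exactly what we want here.
_, serialized_batch = reader.read_up_to(filename_queue, num_records=batch_size)

examples = tf.train.batch(
    [serialized_batch],
    batch_size=batch_size,
    enqueue_many=True,               # serialized_batch is already a batch of records
    capacity=4 * batch_size,
    allow_smaller_final_batch=True,  # lets the final, smaller batch through
)
# Parse `examples` with tf.parse_example(...) and your feature spec, and make
# sure the model graph does not hard-code the batch dimension.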