How to select batch size automatically to fit GPU? - tensorflow

I am training deep neural networks with a GPU. If I make samples too large, batches too large, or networks too deep, I get an out of memory error. In this case, it is sometimes possible to make smaller batches and still train.
Is it possible to calculate GPU size required for training and determine what batch size to choose beforehand?
UPDATE
If I print the network summary, it displays the number of "trainable parameters". Can't I estimate from this value? For example, take it, multiply by the batch size, double it for gradients, etc.?

PyTorch Lightning recently added a feature called "auto batch size" for exactly this! It computes the max batch size that can fit into your GPU's memory :)
More info can be found in the PyTorch Lightning docs.
Original PR: https://github.com/PyTorchLightning/pytorch-lightning/pull/1638
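A minimal sketch of how this is typically wired up (hedged: the exact API has changed between Lightning versions; in the 1.x line it was the auto_scale_batch_size Trainer flag combined with trainer.tune(), and the model and dataset below are toy placeholders):

    import torch
    from torch import nn
    from torch.utils.data import DataLoader, TensorDataset
    import pytorch_lightning as pl

    class LitModel(pl.LightningModule):
        def __init__(self, batch_size=32):
            super().__init__()
            self.batch_size = batch_size      # the tuner searches over this attribute
            self.net = nn.Linear(128, 10)

        def training_step(self, batch, batch_idx):
            x, y = batch
            return nn.functional.cross_entropy(self.net(x), y)

        def configure_optimizers(self):
            return torch.optim.Adam(self.parameters())

        def train_dataloader(self):
            # the dataloader must read self.batch_size so the tuner can vary it
            ds = TensorDataset(torch.randn(10000, 128), torch.randint(0, 10, (10000,)))
            return DataLoader(ds, batch_size=self.batch_size)

    model = LitModel()
    # "power" mode keeps doubling the batch size until an out-of-memory error is hit
    trainer = pl.Trainer(auto_scale_batch_size="power", max_epochs=1)
    trainer.tune(model)   # writes the largest working value back to model.batch_size
    trainer.fit(model)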

No, it is not possible to do this automatically. You will need a fair amount of trial and error to find an appropriate size if you want your batch to be as large as possible.
Stanford's CNN class provides some guidance on how to estimate memory requirements, but the suggestions are specific to CNNs (I am not sure what you are training).

I think Salvador here means that it is not possible to analytically compute the best-suited batch size; however, as with everything in ML, it is just another hyperparameter that can be added to your grid search and tuned automatically. Simply evaluate your model's loss or accuracy (however you measure performance) for several batch sizes, say powers of 2 such as 64, 256, 1024, etc., and keep the one that gives the best and most stable (least variable) result. Note that the best batch size can depend on your model's architecture, your hardware, and so on. For example, if you move your modeling from a local PC to a cloud compute engine (GCP, AWS, Azure, ...), a batch size that was too large for your PC's RAM may easily become suitable on practically limitless RAM/CPU/GPU (mind the costs). A minimal sketch of such a search is shown below.
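For illustration, a hedged Keras sketch of that grid search (build_model, x_train/y_train and x_val/y_val are hypothetical placeholders for your own model builder and data splits):

    results = {}
    for batch_size in [64, 256, 1024]:
        # build_model() and the data arrays are placeholders for your own code;
        # it is assumed to compile the model with metrics=["accuracy"]
        model = build_model()                 # rebuild so every run starts from scratch
        history = model.fit(
            x_train, y_train,
            validation_data=(x_val, y_val),
            batch_size=batch_size,
            epochs=5,
            verbose=0,
        )
        # keep the best validation accuracy reached with this batch size
        results[batch_size] = max(history.history["val_accuracy"])

    best = max(results, key=results.get)
    print(results, "-> best batch size:", best)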

Related

Why does training time not reduce when training a Keras model after increasing the batch size beyond a certain amount

I am currently training an NLP model in Keras with TF 2.8, where I am experimenting with adding GRU and LSTM layers. When I train the model, I use different batch sizes to see the impact they have on accuracy and overall training time.
What I noticed was that after increasing the batch size beyond a certain amount, the training time does not reduce any further; it simply stays the same.
I started with a batch size of 2 and then slowly increased it up to 4096 in multiples of two, yet after 512 the training time remained the same.
It's often wrongly claimed that batch learning is as fast as or faster than on-line training. In fact, batch learning changes the weights once, after the complete set of data (the batch) has been presented to the network. Therefore, the weight update frequency is rather low. This explains why the processing speed in your measurements behaves the way you observed.
Even though it is a matrix operation, each row-column multiplication might happen on one GPU core. So the full matrix multiplication is divided across as many cores as possible. For one matrix multiplication, each GPU core takes some time, and when you add more images, that time increases because there are more rows. If at a batch size of 4 your GPU is already at full capacity, i.e. all cores are busy, then increasing the batch size gives no further advantage. The added data just sits in GPU memory and is processed once a core becomes free from the previous operation.
To get a deeper understanding of these training techniques, have a look at the 2003 paper The general inefficiency of batch training for gradient descent learning, which compares batch and on-line learning.
Also generally, RNN kernels can have O(timesteps) complexity, with batch size having a smaller effect than you might anticipate.
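If you want to measure where the plateau sets in on your own hardware, here is a small hedged sketch (model, x_train and y_train stand in for your own compiled Keras model and data):

    import time

    # model, x_train, y_train are placeholders for your own compiled model and data
    for batch_size in [2, 8, 32, 128, 512, 2048, 4096]:
        start = time.perf_counter()
        model.fit(x_train, y_train, batch_size=batch_size, epochs=1, verbose=0)
        elapsed = time.perf_counter() - start
        print(f"batch_size={batch_size:>5}  seconds/epoch={elapsed:.1f}")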

What is the maximum and minimum batch size we can use in .fit() method?

I want to see the effect of batch size on generalization, for which I want to run my .fit() method with all the possible batch sizes.
But I was wondering what the constraints on choosing a batch size could be.
What does it depend on: the machine? The dataset?
Any help is highly appreciated.
It depends on the size of each sample and on your GPU memory if you're using one, otherwise on your RAM. Keep in mind that various other things are loaded into memory as well, like the model's parameters, the graph, etc. But strictly for the size of one batch: NUM_SAMPLES * SIZE_OF_SAMPLE.
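As a hedged back-of-the-envelope example of that formula (the batch size and sample shape below are made-up numbers, and only the raw input batch is counted):

    import numpy as np

    batch_size = 64
    sample_shape = (224, 224, 3)          # made-up input shape
    bytes_per_value = 4                   # float32

    batch_bytes = batch_size * np.prod(sample_shape) * bytes_per_value
    print(f"~{batch_bytes / 1024**2:.1f} MiB just for the raw input batch")
    # parameters, gradients, optimizer state and activations come on top of this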
The batch size you choose is affected by several parameters:
Resources - You need to choose a batch size small enough to fit inside your CPU / GPU RAM.
Normalization - If you use BatchNorm you should probably use a large batch size, since the BatchNorm layers learn the mean and variance of your batch; the smaller the batches are, the larger the deviation between them will be.
Personally, I usually use the largest batch size my resources allow. If the feasible batch size is small (<16) I swap BatchNorm for other normalization methods such as LayerNorm / InstanceNorm; a sketch of that swap is shown below.
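A minimal hedged Keras sketch of that swap (a toy convolutional block of my own for illustration, not a full architecture):

    import tensorflow as tf

    def conv_block(x, filters, batch_size):
        x = tf.keras.layers.Conv2D(filters, 3, padding="same")(x)
        if batch_size >= 16:
            x = tf.keras.layers.BatchNormalization()(x)   # batch statistics are reliable
        else:
            x = tf.keras.layers.LayerNormalization()(x)   # independent of batch size
        return tf.keras.layers.ReLU()(x)

    inputs = tf.keras.Input(shape=(32, 32, 3))
    outputs = conv_block(inputs, filters=64, batch_size=8)
    model = tf.keras.Model(inputs, outputs)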
The machine's memory.
The training batch size has a huge impact on the required GPU memory for training a neural network.
The GPU memory holds the parameters, the optimizer's variables, intermediate calculations, and workspace variables. So, the larger the batch size, the more samples are propagated through the neural network in the forward pass. This results in larger intermediate calculations (e.g. layer activation outputs) that need to be stored in GPU memory. Technically speaking, the size of the activations is linearly dependent on the batch size.
You can use some workarounds to push past this limitation:
Data parallelism - use multiple GPUs to train all mini-batches in parallel, each on its own GPU.
Gradient accumulation - run the mini-batches sequentially while accumulating the gradients; the accumulated results are used to update the model variables at the end of the last mini-batch (a minimal sketch follows this list).
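A hedged TF 2 sketch of gradient accumulation (model, optimizer, loss_fn and dataset are placeholders for the pieces of your own custom training loop):

    import tensorflow as tf

    # model, optimizer, loss_fn and dataset are placeholders for your own objects
    accum_steps = 4   # effective batch size = accum_steps * mini-batch size
    accum_grads = [tf.zeros_like(v) for v in model.trainable_variables]

    for step, (x, y) in enumerate(dataset):
        with tf.GradientTape() as tape:
            # divide by accum_steps so the accumulated sum averages the loss
            loss = loss_fn(y, model(x, training=True)) / accum_steps
        grads = tape.gradient(loss, model.trainable_variables)
        accum_grads = [a + g for a, g in zip(accum_grads, grads)]

        if (step + 1) % accum_steps == 0:
            # one weight update for every accum_steps mini-batches, then reset
            optimizer.apply_gradients(zip(accum_grads, model.trainable_variables))
            accum_grads = [tf.zeros_like(v) for v in model.trainable_variables]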

Is there any difference between tensor2tensor and pytorch in terms of memory usage?

I'm trying to train a seq2seq model (Transformer) with PyTorch and tensor2tensor.
When using tensor2tensor, the batch size can be something like 1024, while the PyTorch model shows a CUDA out-of-memory error with a batch size of 8.
Is there any technique used in tensor2tensor to make the best use of memory?
If anyone knows, please tell me.
Thanks in advance.
In Tensor2Tensor, by default, the batch size is specified as a number of tokens (subwords) per single GPU. This allows a higher number of short sequences (sentences) in one batch, or a smaller number of long sequences. Most other toolkits use a fixed batch size specified as a number of sequences. Either way, it may be a good idea to limit the maximum sentence length in training to a reasonable number to prevent out-of-memory errors and excessive padding; a token-based bucketing sketch is shown below.
Some toolkits also prefer to specify the total batch size across all GPU cards.
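A hedged tf.data sketch of token-style batching in the same spirit (dataset is a placeholder that yields variable-length 1-D integer sequences; the token budget and bucket boundaries are made-up numbers; TF 2.6+ exposes bucket_by_sequence_length as a Dataset method, older versions have the same transformation under tf.data.experimental):

    import tensorflow as tf

    tokens_per_batch = 4096
    boundaries = [32, 64, 128, 256]                    # sequence-length buckets
    batch_sizes = [tokens_per_batch // b for b in boundaries + [512]]
    # -> [128, 64, 32, 16, 8]: shorter sequences get larger batches

    # dataset is a placeholder tf.data.Dataset of variable-length token sequences
    bucketed = dataset.bucket_by_sequence_length(
        element_length_func=lambda seq: tf.shape(seq)[0],
        bucket_boundaries=boundaries,
        bucket_batch_sizes=batch_sizes,
        padded_shapes=[None],                          # pad only up to the batch's longest sequence
    )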

How to estimate how much GPU memory required for deep learning?

We are trying to train our model for object recognition using TensorFlow. Since there are too many images (100 GB), I guess our current GPU server (1x 2080 Ti) will not work. We may need to purchase a more powerful one, but I am not sure how to estimate how much GPU memory we need. Is there some approach to estimating the requirements? Thanks!
Your 2080 Ti will do just fine for your task. The GPU memory required for DL tasks depends on many factors, such as the number of trainable parameters in the network, the size of the images you are feeding, the batch size, the floating-point type (FP16 or FP32), the number of activations, and so on. I think you are confusing this with loading all of the images into GPU memory at once. We do not do that; instead, we use mini-batches of different sizes to fit the images and parameters into memory. Throw any kind of network at your 2080 Ti and adjust the batch size, and your training will run smoothly. You could stay with your 2080 Ti, or get another one or two to increase training speed. This blog post provides beautiful insights about creating optimal DL environments. A rough way to put numbers on the memory budget is sketched below.
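A hedged back-of-the-envelope sketch (the toy model, the factor of four for weights + gradients + Adam's two moment buffers, and the single activation term are simplifying assumptions, not an exact accounting):

    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(224, 224, 3)),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(10),
    ])

    bytes_per_value = 4                                 # float32
    # weights + gradients + Adam's two moment buffers ~ four copies of the parameters
    param_mem = 4 * model.count_params() * bytes_per_value

    batch_size = 32
    act_per_sample = 222 * 222 * 64 * bytes_per_value   # output of the conv layer above
    act_mem = batch_size * act_per_sample               # activations grow linearly with batch size

    print(f"params+grads+optimizer: ~{param_mem / 1024**2:.2f} MiB, "
          f"activations: ~{act_mem / 1024**2:.1f} MiB at batch size {batch_size}")
    # for small conv nets the activations, not the parameters, dominate the budget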

Improving cross-validation throughput in TensorFlow, Keras

I am working on CNN models which are intended to predict a protein's structure from its amino acid sequence. I am implementing my CNN's in Keras. The Keras API is the one that comes bundled with TensorFlow 1.4.0, so obviously TensorFlow is my backend. I have installed the GPU version of TensorFlow, and I have verified that the GPU is being used. My GPU is somewhat older, an NVidia GTX 760.
When I perform 3X cross-validation to help select architectures and hyperparameters, I have 50K examples in my training folds and 25K samples in my validation folds. These are decently large data sets, however they're small in comparison to the RAM available in my computer (16 GB) or on my GPU (2 GB). Fully unpacked and expressed as float32 values, with redundancy introduced because of sliding windows, all the folds taken together, input plus target values, occupies 316 MB. I have pre-calculated my folds, and saved files of each fold to disk. When I experiment with architectures and hyperparameters, the same folds are being used in every trial.
I started with networks containing a single hidden layer to see what I could achieve, and then switched to two hidden layers. I used a fixed batch size of 64 for all of my early experiments. Training proceeded quickly enough that I didn't concern myself with speed. Performing a 3X cross-validation for a given architecture typically took about 12 minutes.
But in the last experiment that I did with two-layer networks, I decided to start investigating the effect of batch size. I learned that smaller batch sizes gave me better results, up to a point. Batch sizes of 8 were the smallest ones that I could count on not to crash. My loss values will occasionally flip to NaN with batch sizes of 4, and they will frequently flip to NaN with batch sizes of 1 or 2. After that occurs, the network becomes untrainable. I am aware of the possibility of gradient instability. I think I was getting some.
So why not just use batch sizes of 8 and keep going? The problem is speed. Using two hidden layers, batches of eight took me approximately 35 minutes to cross-validate. Batches of 64, as I mentioned above, took one third that much time. My first experiments with three hidden layers have taken 45 to 65 minutes per trial. And I want to investigate potentially hundreds of architectures and hyperparameters, using still deeper networks. With small batches, I can see that the batch-by-batch progress bar in Keras progresses more slowly. I can see much longer pauses when an epoch ends.
Yes, I can upgrade my GPU to a 10 series. I think that will only double my throughput at most? Yes, I can rent GPU time in the cloud. Eventually I might do that. But if my software is inefficient, I definitely don't want to set it loose in the cloud to burn my money.
It is my understanding (please correct me if I am wrong) that when the GPU is used in a normal TF / Keras workflow, each individual batch is sent separately from the CPU to the GPU. If I am training 50 networks in a 3X cross-validation scheme, this would mean that I'm sending the same data to my GPU 150 times. As I mentioned earlier, all my data occupies at most 316 MB, about 15% of the RAM available on the GPU. Can I devise a workflow which sends this 316 MB to the GPU once, and if so, will that have a useful impact on my throughput? Intuitively, it feels like it should.
Are there other bottlenecks I should be thinking about? Is there a way to profile TF or Keras operations?
Thanks for any advice you may have!
Okay. I know that you're more concerned about throughput from Keras and your hardware, but there are a few things I'd like to mention here:
smaller batch sizes gave me better results
Given your case, where the data is not huge, and assuming you're running the training for a fixed number of epochs (say 5), training with a smaller batch size is naturally expected to give you a slightly better result, as it means a higher number of back-prop steps overall compared to a larger batch size. If you're training for a fixed number of training steps instead, I don't know why this is happening.
loss values will occasionally flip to NaN with batch sizes of 4
Again, I'm assuming you're using batch normalization here with your CNNs. With BN, it's never really recommended to use a small batch size like 2 or 4 (or even 8). One reason you may be seeing NaNs with small batch sizes is that if the current batch has low variance and the epsilon value is too small, you can end up with very small values that lead to numerical instability further on. But more generally, this might be a case of gradient instability, as you mentioned. Consider using gradient clipping to see if it helps; a minimal sketch follows.
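A hedged Keras sketch of that suggestion (the toy model and the clipnorm value of 1.0 are arbitrary choices for illustration):

    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(128,)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])

    # clipnorm rescales each gradient tensor whose L2 norm exceeds 1.0,
    # which can tame the occasional exploding step that flips the loss to NaN
    optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3, clipnorm=1.0)
    model.compile(optimizer=optimizer, loss="categorical_crossentropy")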
GPU Workflow
Here I assume that you have only one GPU; unfortunately, you can't parallelise across devices with a single GPU. To clarify, you shouldn't be concerned about the size of your data relative to GPU RAM. In most single-GPU cases the current batch stays on the CPU and the GPU only takes up the operations. Rather, you should be concerned about the size of the parameters the GPU has to compute. Since your 1-layer and 3-layer experiments differ a lot in their operations, running them at the same time isn't possible, as you can't place multiple ops on the same device simultaneously. The best option for you here is to use a larger batch size (not too large, as that would reduce the number of back-prop steps when training for a fixed number of epochs), so that you cover more data in a single pass.
Just a tip for hyper-parameter tuning: you can consider using Highway CNNs. These are inspired by the gating mechanism of LSTMs: you specify a large number of hidden layers and the network figures out for itself how to control the information flow among them. In short, this practically eliminates the effort of tuning the network depth and lets you tune other hyper-parameters such as the learning rate or filter sizes instead (a minimal sketch follows).
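An illustrative, hedged sketch of a single highway convolutional block (my own toy construction, not code from any particular paper): a transform gate T decides, per position, how much of the transformed signal H(x) versus the raw input x to pass on.

    import tensorflow as tf

    def highway_conv_block(x, kernel_size=3):
        channels = x.shape[-1]                              # output must match input channels
        h = tf.keras.layers.Conv2D(channels, kernel_size, padding="same",
                                   activation="relu")(x)    # H(x): the transformed signal
        t = tf.keras.layers.Conv2D(channels, kernel_size, padding="same",
                                   activation="sigmoid",
                                   bias_initializer=tf.keras.initializers.Constant(-2.0))(x)  # T(x): the gate
        # the gate starts mostly closed (negative bias), so each block begins near identity
        return t * h + (1.0 - t) * x

    inputs = tf.keras.Input(shape=(32, 32, 16))
    x = inputs
    for _ in range(8):                                      # stack many blocks; the gates learn what to skip
        x = highway_conv_block(x)
    model = tf.keras.Model(inputs, x)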
I hope at least some of this is relevant and helpful to you ;)