How to compute the complexity of machine learning models - tensorflow

I am working on a comparison of deep learning models with an application in vehicular network communication security. I want to know how I can compute the complexity of these models in order to assess the performance of my proposed ones. I am using TensorFlow.

You can compare the complexity of two deep networks with respect to space and time.
Regarding space complexity:
The number of parameters in your model: this is directly proportional to the amount of memory consumed by your model.
Regarding time complexity:
Amount of time it takes to train a single batch for a given batch size.
Amount of time it takes for training to converge
Amount of time it takes to perform inference on a single sample
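For example, in TensorFlow/Keras you can read off the parameter count (the space side) and take a rough per-sample inference timing (the time side) as in the sketch below; the model architecture and input shape are placeholders you would swap for your own:
import time
import numpy as np
import tensorflow as tf

# placeholder model; substitute your own architecture
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu', input_shape=(32,)),
    tf.keras.layers.Dense(1),
])

print("Parameters:", model.count_params())  # proxy for space complexity

# rough single-sample inference time, averaged over repeated calls
x = np.random.rand(1, 32).astype('float32')
model(x)  # warm-up call
start = time.perf_counter()
for _ in range(100):
    model(x)
print("Average inference time (s):", (time.perf_counter() - start) / 100)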
Some papers also discuss the architecture complexity. For example, if GoogLeNet accuracy is only marginally higher than VGG-net, some people might prefer VGG-net as it is a lot easier to implement.
You can also include some analysis of your network's tolerance to hyperparameter tuning, i.e. how performance varies when you change the hyperparameters.
If your model is trained in a distributed setting, there are other things to mention, such as the communication interval, since it is sometimes the bottleneck.
In summary, you can discuss pretty much anything that is implemented differently in another network and that contributes additional complexity without much improvement in accuracy relative to your network.
You may not need it, but there is also an open-source project called DeepBench for benchmarking different deep network models.


How to automatically judge whether the training process of the deep learning model is converged?

When training a deep learning model, I have to look at the loss curve and performance curve to judge whether the training process of the deep learning model is converged.
This has cost me a lot of time, and sometimes the point of convergence judged by eye is not accurate.
Therefore, I'd like to know whether there exists an algorithm or a package that can automatically judge whether the training process of the deep learning model is converged.
Can anyone help me?
Thanks a lot.
At the risk of disappointing you, I believe there is no such universal algorithm. In my experience, it depends on what you want to achieve, which metrics are important to you, and how much time you are willing to let the training go on for.
I have already seen validation losses dramatically go up (a sign of overfitting) while other metrics (mIoU in this case) were still improving on the validation set. In these cases, you need to know what your target is.
It is possible (although very rare) that your loss goes up for a substantial amount of time before going down again and reaching better levels than before. There is no way to anticipate this.
Finally, and this is arguably a common case if you have tons of training data, your validation loss may keep going down, but more and more slowly. In this case, the best strategy if you had an infinite amount of time would be to keep the training going indefinitely. In practice, this is impossible, and you need to find the right balance between performance and training time.
If you really need an algorithm, I would suggest this quite simple one:
Compute a validation metric M(i) after each i-th epoch, on a fixed subset of your validation set or on the whole validation set. Let's suppose that the higher M(i) is, the better. Fix an integer k depending on the duration of one training epoch (k ~ 3 should do the trick).
If for some n you have M(n) > max(M(n+1), ..., M(n+k)), stop and keep the network you had at epoch n.
It's far from perfect, but should be enough for simple tasks.
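This rule is essentially early stopping with a patience of k epochs. A minimal, framework-agnostic sketch of the check (the list of metric values and the choice of k are up to you):
def should_stop(metrics, k=3):
    # `metrics` holds M(1), ..., M(i); higher is better
    best_epoch = max(range(len(metrics)), key=lambda i: metrics[i])
    # stop once the best value has not been beaten for k consecutive epochs
    return len(metrics) - 1 - best_epoch >= k
If you train with Keras, tf.keras.callbacks.EarlyStopping(monitor=..., patience=k, mode='max', restore_best_weights=True) implements the same idea out of the box.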
[Edit] If you're not using it yet, I invite you to use TensorBoard to visualize the evolution of your metrics throughout the training. Once set up, it saves a huge amount of time.

Parallelization strategies for deep learning

What strategies and forms of parallelization are feasible and available for training and serving a neural network?
inside a machine across cores (e.g. GPU / TPU / CPU)
across machines on a network or a rack
I'm also looking for evidence for how they may also be used in e.g. TensorFlow, PyTorch or MXNet.
Training
To my knowledge, when training large neural networks on large datasets, one could at least have:
Different cores or machines operate on different parts of the graph ("graph splitting"). E.g. backpropagation through the graph itself can be parallelized e.g. by having different layers hosted on different machines since (I think?) the autodiff graph is always a DAG.
Different cores or machines operate on different samples of data ("data splitting"). In SGD, the computation of gradients across batches or samples can also be parallelized (e.g. the gradients can be combined after computing them independently on different batches). I believe this is also called gradient accumulation (?).
When is each strategy better for what type of problem or neural network? Which modes are supported by modern libraries? and can one combine all four (2x2) strategies?
On top of that, I have read about:
Asynchronous training
Synchronous training
but I don't know what exactly that refers to, e.g. is it the computation of gradients on different data batches or the computation of gradients on different subgraphs? Or perhaps it refers to something else altogether?
Serving
If the network is huge, prediction / inference may also be slow, and the model may not fit on a single machine in memory at serving time. Are there any known multi-core and multi-node prediction solutions that can handle such models?
Training
In general, there are two strategies of parallelizing model training: data parallelism and model parallelism.
1. Data parallelism
This strategy splits training data into N partitions, each of which will be trained on different “devices” (different CPU cores, GPUs, or even machines). In contrast to training without data parallelism which produces one gradient per minibatch, we now have N gradients for each minibatch step. The next question is how we should combine these N gradients.
One way to do it is by averaging all N gradients and then updating the model parameters once based on the average. This technique is called synchronous distributed SGD. By averaging, we get a more accurate gradient, but at the cost of waiting for all devices to finish computing their own local gradients.
Another way is not to combine the gradients at all: each gradient is instead used to update the model parameters independently. So there will be N parameter updates for each minibatch step, in contrast to only one for the previous technique. This technique is called asynchronous distributed SGD. Because it doesn't have to wait for other devices to finish, the async approach takes less time to complete a minibatch step than the sync approach does. However, the async approach produces noisier gradients, so it might need more minibatch steps to catch up with the performance (in terms of loss) of the sync approach.
There are many papers proposing some improvements and optimizations on either approach, but the main idea is generally the same as described above.
In the literature there's been some disagreement on which technique is better in practice. In the end, most people now settle on the synchronous approach.
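To make the difference concrete, here is a framework-agnostic sketch of the two update rules; compute_gradient, params, lr and the per-device batches are hypothetical placeholders rather than any real API:
import numpy as np

# synchronous distributed SGD: average the N local gradients, then apply one update
grads = [compute_gradient(params, batch) for batch in device_batches]
params = params - lr * np.mean(grads, axis=0)

# asynchronous distributed SGD: each device applies its own update as soon as it is ready
for batch in device_batches:  # in practice these run concurrently on different devices
    params = params - lr * compute_gradient(params, batch)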
Data Parallelism in PyTorch
To do synchronous SGD, we can wrap our model with torch.nn.parallel.DistributedDataParallel:
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# the default process group must be initialized once per process;
# `rank` is this process's device number starting from 0 and
# `world_size` is the total number of processes (both set by the launcher)
dist.init_process_group("nccl", rank=rank, world_size=world_size)
# `model` is the model we previously initialized
model = ...
model = model.to(rank)
ddp_model = DDP(model, device_ids=[rank])
Then we can train it similarly. For more details, you can refer to the official tutorial.
For doing asynchronous SGD in PyTorch, we need to implement it more manually since there is no wrapper similar to DistributedDataParallel for it.
Data Parallelism in TensorFlow/Keras
For synchronous SGD, we can use tf.distribute.MirroredStrategy to wrap the model initialization:
import tensorflow as tf
strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    model = Model(...)
    model.compile(...)
Then we can train it as usual. For more details, you can refer to the official guides on Keras website and TensorFlow website.
For asynchronous SGD, we can use tf.distribute.experimental.ParameterServerStrategy similarly.
2. Model Parallelism
This strategy splits the model into N parts, each of which will be computed on different devices. A common way to split the model is based on layers: different sets of layers are placed on different devices. But we can also split it more intricately depending on the model architecture.
Model Parallelism in TensorFlow and PyTorch
To implement model parallelism in either TensorFlow or PyTorch, the idea is the same: to move some model parameters into a different device.
In PyTorch we can use the torch.nn.Module.to method to move a module to a different device. For example, suppose we want to create two linear layers, each of which is placed on a different GPU:
import torch.nn as nn
linear1 = nn.Linear(16, 8).to('cuda:0')
linear2 = nn.Linear(8, 4).to('cuda:1')
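The forward pass then only needs to move the intermediate activations between the devices. A minimal sketch, assuming a hypothetical input tensor x on the first GPU:
import torch

x = torch.randn(2, 16, device='cuda:0')  # hypothetical input batch
h = linear1(x)                           # computed on cuda:0
y = linear2(h.to('cuda:1'))              # move the activations, then compute on cuda:1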
In TensorFlow we can use tf.device to place an operation on a specific device. To implement the PyTorch example above in TensorFlow:
import tensorflow as tf
from tensorflow.keras import layers
with tf.device('/GPU:0'):
    linear1 = layers.Dense(8, input_dim=16)
with tf.device('/GPU:1'):
    linear2 = layers.Dense(4, input_dim=8)
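As in the PyTorch example, the forward pass routes the activations through the placed layers; a minimal sketch, with x as a hypothetical input tensor (TensorFlow copies tensors between devices as needed):
x = tf.random.normal((2, 16))  # hypothetical input batch
with tf.device('/GPU:0'):
    h = linear1(x)
with tf.device('/GPU:1'):
    y = linear2(h)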
For more details you can refer to the official PyTorch tutorial; or if you use TensorFlow, you can even use a higher-level library like Mesh TensorFlow.
3. Hybrid: Data and Model Parallelism
Recall that data parallelism only splits the training data, whereas model parallelism only splits the model structures. If we have a model so large that even after using either parallelism strategy it still doesn't fit in the memory, we can always do both.
In practice most people prefer data parallelism to model parallelism since the former is more decoupled (in fact, independent) from the model architecture than the latter. That is, by using data parallelism they can change the model architecture as they like, without worrying which part of the model should be parallelized.
Model Inference / Serving
Parallelizing model serving is easier than parallelizing model training since the model parameters are already fixed and each request can be processed independently. Similar to scaling a regular Python web service, we can scale model serving by spawning more processes (to work around Python's GIL) on a single machine, or even by spawning more machine instances.
When we use a GPU to serve the model, though, we need to do more work to scale it. Because a GPU handles concurrency differently from a CPU, in order to maximize performance we need to batch inference requests. The idea is that when a request comes in, instead of processing it immediately, we wait for some timeout duration for other requests to arrive. When the timeout is up, even if only one request has arrived, we batch them all to be processed on the GPU.
In order to minimize the average request latency, we need to find the optimal timeout duration. To find it, we need to observe that there is a trade-off between minimizing the timeout duration and maximizing the batch size. If the timeout is too low, the batch size will be small and the GPU will be underutilized. But if the timeout is too high, the requests that arrive early will wait too long before they get processed. So, the optimal timeout duration depends on the model complexity (hence, the inference duration) and the average number of requests received per second.
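A toy sketch of such a scheduler loop; the request queue, predict_batch and reply calls below are hypothetical placeholders, not any particular serving API:
import queue
import time

requests = queue.Queue()  # incoming inference requests
TIMEOUT = 0.01            # the tuning knob: latency vs. batch size
MAX_BATCH = 32

def serve_forever(model):
    while True:
        batch = [requests.get()]  # block until the first request arrives
        deadline = time.monotonic() + TIMEOUT
        while len(batch) < MAX_BATCH and time.monotonic() < deadline:
            try:
                batch.append(requests.get(timeout=max(0.0, deadline - time.monotonic())))
            except queue.Empty:
                break
        results = model.predict_batch(batch)  # hypothetical batched inference call
        for req, result in zip(batch, results):
            req.reply(result)                 # hypothetical reply hook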
Implementing a scheduler to do request batching is not a trivial task, so instead of doing it manually, we'd better use TensorFlow Serving or TorchServe, which already support it.
To learn more about parallel and distributed learning, you can read this review paper.
As the question is quite broad, I'll try to shed a little different light and touch on different topics than what was shown in @Daniel's in-depth answer.
Training
Data parallelization vs model parallelization
As mentioned by @Daniel, data parallelism is used far more often and is easier to do correctly. The major caveat of model parallelism is the need to wait for parts of the neural network and the synchronization between them.
Say you have a simple feedforward 5-layer neural network spread across 5 different GPUs, one layer per device. In this case, during each forward pass each device has to wait for the computations from the previous layers. In this simplistic case, copying data between devices and synchronizing would take a lot longer and wouldn't bring any benefit.
On the other hand, there are models better suited for model parallelization, such as Inception networks. In an Inception module there are 4 independent paths from the previous layer which can run in parallel, with only 2 synchronization points (the filter concatenation and the previous layer).
Questions
E.g. backpropagation through the graph itself can be parallelized e.g. by having different layers hosted on different machines since (I think?) the autodiff graph is always a DAG.
It's not that easy. Gradients are calculated based on the loss value (usually) and you need to know the gradients of the deeper layers to calculate the gradients of the shallower ones. As above, if you have independent paths it's easier and may help, but it's much easier on a single device.
I believe this is also called gradient accumulation (?)
No, it's actually reduction across multiple devices. You can see some of that in the PyTorch tutorial. Gradient accumulation is when you run your forward pass (either on a single or on multiple devices) N times and backpropagate (the gradients are kept in the graph and the values are added up during each pass), and the optimizer only makes a single step to change the neural network's weights (and then clears the gradients). In this case, the loss is usually divided by the number of steps taken without an optimizer update. This is used for more reliable gradient estimation, usually when you are unable to use large batches.
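A minimal PyTorch sketch of gradient accumulation; model, optimizer, criterion and dataloader are assumed to exist already:
N = 4                                     # number of accumulation steps
optimizer.zero_grad()
for step, (x, y) in enumerate(dataloader):
    loss = criterion(model(x), y) / N     # scale so the summed gradient matches one big batch
    loss.backward()                       # gradients accumulate in .grad across iterations
    if (step + 1) % N == 0:
        optimizer.step()                  # one weight update per N forward/backward passes
        optimizer.zero_grad()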
Reduction across devices, on the other hand, is the all-reduce used in data parallelization: each device calculates values which are sent to all other devices and backpropagated there.
When is each strategy better for what type of problem or neural network?
As described above, data parallelism is almost always fine if you have enough data and the samples are big (up to 8k samples or more can be processed at once without much of a struggle).
Which modes are supported by modern libraries?
TensorFlow and PyTorch both support either; most modern, maintained libraries have these functionalities implemented one way or another.
can one combine all four (2x2) strategies
Yes, you can parallelize both model and data across and within machines.
synchronous vs asynchronous
asynchronous
Described briefly by @Daniel, but it's worth mentioning that the updates are not totally separate. That would make little sense, as we would essentially train N different models on their own batches.
Instead, there is a global parameter space, where each replica is supposed to share its calculated updates asynchronously (so: forward pass, backward pass, calculate the update with the optimizer, and apply this update to the global parameters).
This approach has one problem though: there is no guarantee that, while one worker computes its forward and backward pass, other workers haven't already updated the parameters. The update is then calculated with respect to an old set of parameters, and these are called stale gradients. Because of this, convergence might be hurt.
Another approach is to calculate N steps and updates on each worker and synchronize them afterwards, though it's not used as often.
This part was based on a great blog post, which you should definitely read if you're interested (there is more about staleness and some solutions).
synchronous
Mostly described previously; there are different approaches, but in PyTorch the outputs from the network replicas are gathered and backpropagated on by torch.nn.parallel.DistributedDataParallel (https://pytorch.org/docs/stable/nn.html#torch.nn.parallel.DistributedDataParallel). By the way, you should use solely this (not torch.nn.DataParallel) as it overcomes Python's GIL problem.
Takeaways
Data parallelization is almost always used when going for a speedup, as you "only" have to replicate the neural network on each device (either over the network or within a single machine), run part of the batch on each during the forward pass, concatenate the results into a single batch (synchronization) on one device, and backpropagate on it.
There are multiple ways to do data parallelization, already introduced by @Daniel.
Model parallelization is done when the model is too large to fit on single machine (OpenAI's GPT-3 would be an extreme case) or when the architecture is suited for this task, but both are rarely the case AFAIK.
The more parallel paths the model has, and the longer they are (i.e. the fewer synchronization points), the better suited it might be for model parallelization.
It's important to start workers at similar times with similar loads in order not to wait on synchronization in the synchronous approach, and not to get stale gradients in the asynchronous one (though in the latter case this is not enough).
Serving
Small models
As you are after large models I won't delve into options for smaller ones, just a brief mention.
If you want to serve multiple users over the network, you need some way to scale your architecture (usually a cloud like GCP or AWS). You could do that using Kubernetes and its Pods, or pre-allocate some servers to handle requests, but that approach would be inefficient (with a small number of users the running servers would generate pointless costs, while large numbers of users may halt the infrastructure and take too long to process requests).
Another way is to use autoscaling based on a serverless approach. Resources are provided per request, so it scales well and you don't pay when the traffic is low. You can look at Azure Functions, as they are on a path to improving them for ML/DL tasks, or torchlambda for PyTorch (disclaimer: I'm the author) for smaller models.
Large models
As mentioned previously, you could use Kubernetes with your custom code or ready to use tools.
In the first case, you can spread the model just the same as for training, but only do the forward pass. In this way even giant models can be served over the network (once again, GPT-3 with 175B parameters), but it requires a lot of work.
In the second case, @Daniel provided two possibilities. Others worth mentioning could be (read the respective docs, as they have a lot of functionality):
Kubeflow - multiple frameworks, based on Kubernetes (so auto-scaling, multi-node), training, serving and whatnot; connects with other things like MLflow below
AWS SageMaker - training and serving with Python API, supported by Amazon
MLflow - multiple frameworks, for experiment handling and serving
BentoML - multiple frameworks, training and serving
For PyTorch, you could read more here, while TensorFlow has a lot of serving functionality out of the box via TensorFlow Extended (TFX).
Questions from OP's comment
Are there any forms of parallelism that are better within a machine vs across machines?
The best form of parallelism would probably be within one giant computer, so as to minimize transfer between devices.
Additionally, there are different backends (at least in PyTorch) one can choose from (mpi, gloo, nccl), and not all of them support directly sending, receiving, reducing, etc., data between devices (some may support CPU to CPU, others GPU to GPU). If there is no direct link between devices, the data has to be copied to another device first and then copied again to the target device (e.g. GPU on another machine -> CPU on the host -> GPU on the host). See the PyTorch info.
The more data and the bigger the network, the more profitable it should be to parallelize computations. If the whole dataset fits on a single device there is no need for parallelization. Additionally, one should take into account things like internet transfer speed and network reliability; those costs may outweigh the benefits.
In general, go for data parallelization if you have lots of data (say ImageNet with 1,000,000 images) or big samples (say 2000x2000 images). If possible, stay within a single machine so as to minimize between-machine transfer. Distribute the model only if there is no way around it (e.g. it doesn't fit on the GPU). Don't otherwise (there is little to no point in parallelizing when training on MNIST, as the whole dataset will easily fit in RAM and reads from it will be fastest).
why bother building custom ML-specific hardware such as TPUs?
CPUs are not the best suited for highly parallel computations (e.g. matrix multiplication), and the CPU may be occupied with many other tasks (like data loading), hence it makes sense to use a GPU.
As the GPU was created with graphics in mind (so algebraic transformations), it can take over some CPU duties and can be specialized (many more cores compared to a CPU, but simpler ones; see the V100 for example).
Now, TPUs are tailored specifically for tensor computations (so deep learning mainly) and originated at Google; they are still a work in progress compared to GPUs. They are suited for certain types of models (mainly convolutional neural networks) and can bring speedups in those cases. Additionally, one should use the largest batches with this device (see here), ideally divisible by 128. You can compare that to NVIDIA's Tensor Cores technology (GPU), where you are fine with batches (or layer sizes) divisible by 16 or 8 (float16 and int8 precision respectively) for good utilization (although the more the better, and it depends on the number of cores, the exact graphics card and many other things; see some guidelines here).
On the other hand, TPU support still isn't the best, although the two major frameworks support it (TensorFlow officially, PyTorch via the torch_xla package).
In general, the GPU is a good default choice in deep learning right now, with TPUs for convolution-heavy architectures, though they might give you some headaches, to be honest. Also (once again, thanks @Daniel), TPUs are more power-efficient, hence should be cheaper when comparing the cost of a single floating-point operation.

Can reducing the number of back propagation steps improve training performance?

I want to know how beneficial it would be if we could reduce the number of back propagation steps by 50%.
For example, let's say one neural network performed backpropagation 1000 times during training, and another neural network performed backpropagation 500 times (let's assume that both of them gave the same accuracy after training). Will the second one be significantly faster? Or does it not matter much? It would increase the speed of training.
If you can train two networks, to the same accuracy, but one of them only needs to process half as much data, then yes that is a good thing.
The resulting network will not be any faster to execute during inference time, but there are still several important benefits to the training process.
Training will take half as long. This is valuable by itself. It is extra valuable when you consider that you can now try twice as many ideas in the same amount of time. That will improve results quality for the entire process.
Faster convergence can reduce generalization error and overfitting. The optimization does not have as many chances to "fidget" and find opportunities to overfit.
Extremely fast convergence, called super-convergence, can improve the final training error while still keeping generalization error low, leading to better validation scores too.
Speaking more generally, there is a lot of research and other activity on the topic of how to make networks train as quickly and cheaply as possible. One such benchmark is DAWNBench, which sets a target accuracy to achieve and then ranks approaches based on how fast they reach that target, and how much the GPUs or other infrastructure cost to do it.
This general idea of "cost reduction" is also one of the drivers behind the general idea of Transfer Learning.

Why machine learning algorithms focus on speed and not accuracy?

I study ML and I see that most of the time the focus of the algorithms is run time and not accuracy: reducing features, taking a sample from the data set, using approximations and so on.
I'm not sure why that is the focus, since once I have trained my model I don't need to train it anymore if my accuracy is high enough, and so whether it takes me 1 hour or 10 days to train my model does not really matter, because I do it only once and my goal is to predict my outcomes as well as I can (minimum loss).
If I train a model to distinguish between cats and dogs, I want it to be as accurate as it can be, not the fastest, since once I have trained this model I don't need to train any more models.
I can understand why models that depend on fast-changing data need this focus on speed, but for general model training I don't understand why the focus is on speed.
Speed is a relative term. Accuracy is also relative, depending on the difficulty of the task. Currently the goal is to achieve human-like performance in applications at reasonable cost, because this would replace human labor and cut costs.
From what I have seen in reading papers, people usually focus on accuracy first to produce something that works. Then they do ablation studies - studies where pieces of the model are removed or modified - to achieve the same performance in less time or with lower memory requirements.
The field is largely validated experimentally. There really isn't much theory that states why CNNs work so well, other than that they can model any function given non-linear activation functions (https://en.wikipedia.org/wiki/Universal_approximation_theorem). There have been some recent efforts to explain why they work well. One I recall is MobileNetV2: Inverted Residuals and Linear Bottlenecks. The explanation of embedding data into a low-dimensional space without losing information might be worth reading.

Is it possible to train Neural Network with low amount of instances?

I have faced a problem where I needed to solve a regression task using as few instances as possible. When I tried to use Xgboost, I had to feed it 4 instances to get a reasonable result. But a multilayer perceptron tuned for regression problems needs 20 instances; I tried changing the number of neurons and layers, but the answer is still 20. Is it possible to do something to make a neural network solve regression tasks with 2 to 4 instances? If yes, please explain what I should do to succeed. Maybe there is some correlation between how many instances are needed to train a perceptron and get reasonable results, and how valuable the features in the dataset are?
Thanks in advance for any help
With small numbers of samples, there are likely better methods to apply; Xgboost definitely comes to mind as a method that does quite well at avoiding overfitting.
Neural networks tend to work well with larger numbers of samples. They often overfit small datasets and underperform other algorithms.
There is, however, an active area of research in semi-supervised techniques using neural networks with large datasets of unlabeled data and small datasets of labeled samples.
Here's a paper to start you down that path, search on 'semi supervised learning'.
http://vdel.me.cmu.edu/publications/2011cgev/paper.pdf
Another area of interest to reduce overfitting in smaller datasets is in multi-task learning.
http://ruder.io/multi-task/
Multi-task learning requires the network to achieve multiple target goals for a given input. Adding more requirements tends to reduce the space of solutions that the network can converge on, and it often achieves better results because of it. To say that differently: when multiple objectives are defined, the parameters necessary to do well at one task are often beneficial for the other task, and vice versa.
Lastly, another area of open research is GANs and how they might be used in semi-supervised learning. No papers pop to the forefront of my mind on the subject just now, so I'll leave this mention as a footnote.