I am trying to understand the basic differences between TensorFlow's MirroredStrategy and Horovod's distribution strategy.
From the documentation and from investigating the source code, I found that Horovod (https://github.com/horovod/horovod) uses the Message Passing Interface (MPI) to communicate between multiple nodes. Specifically, it uses MPI's allreduce and allgather operations.
From my observation (I may be wrong), MirroredStrategy also uses an all-reduce algorithm (https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/distribute).
Both use a data-parallel, synchronous training approach.
So I am a bit confused about how they differ. Is the difference only in the implementation, or are there other (theoretical) differences?
And how does the performance of MirroredStrategy compare to Horovod?
MirroredStrategy has its own all-reduce algorithm, which uses remote procedure calls (gRPC) under the hood.
As you mentioned, Horovod uses MPI or Gloo to communicate between multiple processes.
Regarding performance, one of my colleagues performed experiments using 4 Tesla V100 GPUs and the code from here. The results suggested that three settings work best: replicated with all_reduce_spec=nccl, collective_all_reduce with a properly tuned allreduce_merge_scope (e.g. 32), and Horovod. I did not see significant differences among these three.
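For concreteness, here is a minimal sketch (my own illustration, not from the benchmark above) of how the two approaches are typically wired up; the layer sizes and learning rate are arbitrary placeholders:

```python
import tensorflow as tf
import horovod.tensorflow.keras as hvd

# Option A -- MirroredStrategy: a single process mirrors variables across the
# local GPUs and combines gradients with the strategy's built-in all-reduce.
strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    model_a = tf.keras.Sequential([tf.keras.layers.Dense(10)])
    model_a.compile(optimizer="adam", loss="mse")

# Option B -- Horovod: one process per GPU, launched with e.g.
# `horovodrun -np 4 python train.py`; the wrapped optimizer averages
# gradients across processes with an MPI/NCCL all-reduce.
hvd.init()
model_b = tf.keras.Sequential([tf.keras.layers.Dense(10)])
model_b.compile(optimizer=hvd.DistributedOptimizer(tf.keras.optimizers.Adam(0.001)),
                loss="mse")
```

The main user-facing difference is the launch model: one process driving all devices with MirroredStrategy versus one process per device with Horovod; the synchronous all-reduce math is the same.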
I realize that this might not be the best platform to ask this, but I think it would be the most unbiased one for my question.
How would you compare OpenMDAO vs. modeFrontier with regard to their optimization capabilities, application scaling, and overall software development? Which one would you pick, and why?
If you know of any resources or links, please do share them.
The most fundamental technical difference is that OpenMDAO can pass data + derivative information between components. This means that if you want to use gradient-based optimization and have access to at least some tools that provide derivative information, OpenMDAO will have far more effective overall capabilities. This is especially important when doing optimization with high-cost analysis tools (e.g. partial differential equation solvers --- CFD, FEA). In those situations, making use of derivatives offers between a 100x and 10000x speedup.
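To make that concrete, here is a minimal sketch (my own illustration, not from the original post) of an OpenMDAO component that hands the framework an analytic derivative along with its output, so a gradient-based driver can consume it directly:

```python
import openmdao.api as om

class Paraboloid(om.ExplicitComponent):
    """Toy component: f = (x - 3)^2, with an analytic derivative."""

    def setup(self):
        self.add_input("x", val=0.0)
        self.add_output("f", val=0.0)
        self.declare_partials("f", "x")

    def compute(self, inputs, outputs):
        outputs["f"] = (inputs["x"] - 3.0) ** 2

    def compute_partials(self, inputs, partials):
        # The derivative travels through the framework alongside the data
        # connections, which is what enables the large speedups above.
        partials["f", "x"] = 2.0 * (inputs["x"] - 3.0)
```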
One other difference is that OpenMDAO is designed to run natively on a distributed memory compute cluster. Industrial frameworks can submit jobs to remote clusters and query for the results, but OpenMDAO itself can run on the cluster and has a direct, internal MPI-based distributed memory capability. This is critical to it being able to efficiently handle derivatives of those expensive PDE solvers. To the best of my knowledge, OpenMDAO is unique in this regard. This is a low-level technical detail that most users never need to directly understand, but the consequence is that if you want to do any kind of high-fidelity coupled optimizations (aero-structural, aero-propulsive, aero-thermal) with more than one PDE solver in the loop, then OpenMDAO's architecture is going to be by far the most effective.
However, OpenMDAO does not offer a GUI. It does not have the same level of data tracking and visualization tools. Also, I know that modeFrontier offers the ability to split a single model up across multiple computers distributed across an organization. modeFrontier, along with other tools like ModelCenter and Isight, offers this kind of smooth user experience and code-free interaction that many find valuable.
Honestly, I'm not sure a direct comparison is really warranted. I think if you have an organization that invests in a commercial integration tool like modeFrontier, then you can still use OpenMDAO to create tightly coupled integrated optimizations, which you can then include as boxes inside your overall integration framework.
You certainly can use OpenMDAO as a complete integration framework, and it has some advantages in that area related to derivatives and execution in distributed memory environments. But you don't have to, and it certainly does not have to be an exclusive decision.
I have hundreds of models, organized by category, project, etc. Some of the models are heavily used, while others are used infrequently.
How can I trigger a scale-up operation only when it is needed (for the models that are not frequently used), instead of running hundreds of pods serving hundreds of models while most of them sit idle, which is a huge waste of computing resources?
What you are trying to do is scale deployments to zero when they are not in use.
K8s does not provide such functionality out of the box.
You can achieve this using the Knative Pod Autoscaler.
Knative is probably the most mature solution available at the moment of writing this answer.
There are also some more experimental solutions, like osiris or zero-pod-autoscaler, that you may find interesting and that may be a good fit for your use case.
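For illustration, a minimal Knative Service manifest with scale-to-zero enabled might look like the sketch below; the name, image, and scale bounds are placeholders, and the exact annotation keys depend on your Knative version:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: model-a                      # placeholder model name
spec:
  template:
    metadata:
      annotations:
        # Let this revision scale all the way down to zero pods when no
        # requests arrive (zero is the KPA's default lower bound).
        autoscaling.knative.dev/minScale: "0"
        autoscaling.knative.dev/maxScale: "5"
    spec:
      containers:
        - image: registry.example.com/model-a:latest   # placeholder image
```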
Recently I looked into reinforcement learning, and there was one question bugging me that I could not find an answer to: how is training done effectively on GPUs? To my understanding, constant interaction with an environment is required, which seems like a huge bottleneck, since this task is often non-mathematical / non-parallelizable. Yet, for example, AlphaGo uses multiple TPUs/GPUs. So how are they doing it?
Indeed, you will often have interactions with the environment in between learning steps, which will often be better off running on CPU than GPU. So, if your code for taking actions and your code for running an update / learning step are very fast (as in, for example, tabular RL algorithms), it won't be worth the effort of trying to get those on the GPU.
However, when you have a big neural network that you need to go through whenever you select an action or run a learning step (as is the case in most of the Deep Reinforcement Learning approaches that are popular these days), the speedup of running these on GPU instead of CPU is often enough for it to be worth the effort (even if it means you're quite regularly "switching" between CPU and GPU, and may need to copy some things from RAM to VRAM or the other way around).
When doing off-policy reinforcement learning (which means you can use transition samples generated by a "behavioral" policy different from the one you are currently learning), an experience replay buffer is generally used. You can then grab a batch of transitions from this large buffer and use a GPU to optimize the learning objective with SGD (cf. DQN, DDPG).
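A minimal sketch of that pattern in TensorFlow (the network size, observation shape, and hyperparameters are arbitrary placeholders, not taken from DQN/DDPG themselves):

```python
import random
from collections import deque

import numpy as np
import tensorflow as tf

# Replay buffer of (state, action, reward, next_state, done) tuples,
# filled by the environment-interaction loop running on the CPU.
buffer = deque(maxlen=100_000)

# Placeholder Q-network: 4-dim observations, 2 discrete actions.
q_net = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(2),
])
optimizer = tf.keras.optimizers.Adam(1e-3)

def train_step(batch_size=32, gamma=0.99):
    # Sample a decorrelated minibatch; the forward and backward passes
    # below run on the GPU when one is available.
    batch = random.sample(buffer, batch_size)
    states, actions, rewards, next_states, dones = zip(*batch)
    states = np.asarray(states, dtype=np.float32)
    next_states = np.asarray(next_states, dtype=np.float32)
    rewards = np.asarray(rewards, dtype=np.float32)
    dones = np.asarray(dones, dtype=np.float32)
    actions = np.asarray(actions, dtype=np.int32)
    # One-step TD targets; computed outside the tape, so no gradient flows.
    targets = rewards + gamma * (1.0 - dones) * tf.reduce_max(q_net(next_states), axis=1)
    with tf.GradientTape() as tape:
        q_taken = tf.gather(q_net(states), actions, axis=1, batch_dims=1)
        loss = tf.reduce_mean(tf.square(targets - q_taken))
    grads = tape.gradient(loss, q_net.trainable_variables)
    optimizer.apply_gradients(zip(grads, q_net.trainable_variables))
    return loss
```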
One instance of a CPU-GPU hybrid approach for RL is GA3C: https://github.com/NVlabs/GA3C.
Here, multiple CPUs are used to interact with different instances of the environment. "Trainer" and "Predictor" processes then collect the interactions using multi-process queues, and pass them to a GPU for back-propagation.
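Here is a toy sketch of that producer/consumer structure (not GA3C's actual code): CPU worker processes push transitions into a shared queue, and the trainer side drains it into batches for the GPU:

```python
import multiprocessing as mp
import random

def actor(worker_id, queue, n_steps=100):
    # Stand-in for a CPU worker stepping its own environment instance;
    # in GA3C these would be (state, action, reward) experiences.
    for t in range(n_steps):
        queue.put((worker_id, t, random.random()))

if __name__ == "__main__":
    queue = mp.Queue()
    workers = [mp.Process(target=actor, args=(i, queue)) for i in range(4)]
    for w in workers:
        w.start()
    # The "trainer" drains the queue into a minibatch that would be
    # handed to the GPU for back-propagation.
    batch = [queue.get() for _ in range(32)]
    for w in workers:
        w.join()
    print(f"collected {len(batch)} transitions")
```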
We have trained a model using CNTK. We are building a service that is going to load this model and respond to requests to classify sentences. What is the best API to use regarding performance? We would prefer to build a C# service as in https://github.com/Microsoft/CNTK/tree/master/Examples/Evaluation/CSEvalClient but alternatively we are considering building a Python service that is going to load the model in python.
Do you have any recommendations for one or the other approach (regarding which API is faster, more actively maintained, or other parameters you can think of)? The next step would be to set up an experiment measuring the performance of both API calls, but I was wondering if there is some prior knowledge here that could help us decide.
Thank you
Both APIs are well developed/maintained. For text data, I would go with the C# API.
In C#, the main focus is fast and easy evaluation, and loading text data is straightforward.
The Python API is geared toward development/training of models, and at this time not much attention has been paid to evaluation. Furthermore, because of the wealth of packages available, loading data in exotic formats is easier in Python than in C#.
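For comparison, evaluating a saved model from the Python API takes only a few lines; a sketch, with the model path and input shape as placeholders:

```python
import numpy as np
import cntk as C

# Load the trained model from disk (the path is a placeholder).
model = C.load_model("sentence_classifier.dnn")

# Evaluate one batch; the shape must match the model's input variable.
features = np.zeros((1, 100), dtype=np.float32)  # placeholder input
result = model.eval({model.arguments[0]: features})
print(result)
```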
The new C# Eval API based on CNTKLibrary will be available very soon (the first beta is probably next week). This API has functional parity with the C++ and Python APIs regarding evaluation.
This API supports using multiple threads to serve multiple evaluation requests in parallel, and even better, the parameters of the same loaded model are shared between these threads, which significantly reduces memory usage in a service environment.
We also have a tutorial about how to use the Eval API in an ASP.NET environment. It still refers to EvalDLL evaluation, but it applies to the new C# API too. The document will be updated after the new C# API is released.
I have been playing around with building some deep learning models in Python and now I have a couple of outcomes I would like to be able to show friends and family.
Unfortunately(?), most of my friends and family aren't really up to the task of installing any of the advanced frameworks that are more or less necessary when creating these networks, so I can't just send them my scripts in their present state and hope to have them run.
But then again, I have already created the nets, and just using the finished product is considerably less demanding than making it. We don't need advanced graph compilers or GPU compute powers for the show and tell. We just need the ability to make a few matrix multiplications.
"Just" being a weasel word, regrettably. What I would like to do is convert the the whole model (connectivity,functions and parameters) to a model expressed in e.g. regular Numpy (which, though not part of standard library, is both much easier to install and easier to bundle reliably with a script)
I have failed to find any ready-made solutions for this (I find it difficult to pick specific keywords for a search engine), but it seems to me that I can't be the first person who wants to use a ready-made deep learning model on a lower-spec machine operated by people who aren't necessarily inclined to spend months learning how to set the parameters of an artificial neural network.
Are there established ways of transferring a model from e.g. Theano to Numpy?
I'm not necessarily requesting those specific libraries. The main point is I want to go from a GPU-capable framework in the creation phase to one that is trivial to install or bundle in the usage phase, to alleviate or eliminate the threshold the dependencies create for users without extensive technical experience.
An interesting option for you would be to deploy your project to Heroku, as explained on this page:
https://github.com/sugyan/tensorflow-mnist
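Alternatively, for a small feed-forward net the forward pass really is just a few matrix multiplications, so you can export the trained parameters and reimplement inference in plain Numpy. A minimal sketch, assuming the weights were saved with numpy.savez and the net is a two-layer classifier (all names here are placeholders):

```python
import numpy as np

# Hypothetical archive created during training, e.g. with
# np.savez("weights.npz", W1=..., b1=..., W2=..., b2=...).
weights = np.load("weights.npz")

def relu(x):
    return np.maximum(x, 0.0)

def forward(x):
    # Two-layer feed-forward net; the layer names are placeholders.
    h = relu(x @ weights["W1"] + weights["b1"])
    logits = h @ weights["W2"] + weights["b2"]
    # Softmax over the last axis gives class probabilities.
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)
```

With that, your friends only need Python and Numpy installed to run the script.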