What are paid alternative services to Kaggle?

Kaggle provides 30 hours of GPU usage for free per week. Sometimes I max out this quota and need more, or I need a more powerful GPU.

I tried a Google Colab Pro subscription, but it was slower than Kaggle. Are there any cloud computing services for deep learning similar to Kaggle or Google Colab Pro? I would prefer a fixed type of GPU for a fixed number of hours at a fixed monthly fee. That is why I do not like AWS: it is pay-as-you-go, and in general AWS looks too complex for my needs, which are learning deep learning by implementing and experimenting with different architectures.

Related

Training on Google Colab is slower than on local machine despite having better GPU - why?

I've got a DL model to train, and since the data is quite large, I store it on my Google Drive, which I mount to my Google Colab instance at the beginning of each session. However, I have noticed that training the exact same model with the exact same script is 1.5-2 times slower on Google Colab than on my personal laptop. The thing is that I checked the Google Colab GPU and it has 12GB RAM (I'm not sure how I can check the exact model), while my laptop GPU is an RTX 2060, which has only 6GB RAM. Therefore, as I'm a new user of Google Colab, I've been wondering what the reason might be. Is it because loading data from the mounted Google Drive with a torch DataLoader slows down the process? Or maybe it is because my personal hard drive is an SSD, and Google Colab might not have an SSD attached to my instance? How can I validate that I'm not doing anything with my Google Colab setup that slows down the training?
The resources for Google Colaboratory are dynamically assigned to user instances. Short, interactive processes are preferred over long-running data loading and computation; further info can be found in the documentation:
https://research.google.com/colaboratory/faq.html#resource-limits
Specifically, quoted from the above link:
"GPUs and TPUs are sometimes prioritized for users who use Colab interactively rather than for long-running computations, or for users who have recently used less resources in Colab...As a result, users who use Colab for long-running computations, or users who have recently used more resources in Colab, are more likely to run into usage limits"

Equivalents of Google Colab with more RAM

I'm currently working on a project creating simulations from defined models. Most of the testing on down-sized models has taken place in Google Colab to make use of the GPU accelerator option. However, when up-scaling to the full-sized models, I now exceed the maximum RAM for Google Colab. Is there an equivalent service which allows for 25GB of RAM rather than 12GB? Access to GPU acceleration is still essential.
Note: I am outside the US, so Google Colab Pro is not an option.
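
As an aside, whatever service is chosen, it is worth confirming how much RAM the runtime actually has before committing to a full-sized run. A minimal sketch, assuming psutil is available (it ships preinstalled in Colab):

```python
# Report the total and currently available RAM of the runtime.
import psutil

mem = psutil.virtual_memory()
print(f"Total RAM:     {mem.total / 1024**3:.1f} GB")
print(f"Available RAM: {mem.available / 1024**3:.1f} GB")
```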

Number of Active Sessions Google Colab Pro

The free version of Google Colab is limited in the number of active sessions that can be running at the same time. For example, I can only train two ML models at the same time.
I wanted to know if Google Colab Pro extends the number of available active sessions such that I can train multiple models at the same time.
Thanks!
Yup, the limit in Colab Pro is higher. Presently, you can use 4 standard GPU backends and 4 high-memory GPU backends concurrently.

What is Google's business model for TensorFlow? Also, why did Baidu develop PaddlePaddle?

I heard Google has over 100 engineers working on TensorFlow. So, what does Google gain from distributing this expensive engine for free? Will Google later charge corporate TensorFlow users?
What is Google's plan for TensorFlow? Will it be free forever, like a deep learning version of Python?
Some people tell me it's to reduce the training time for new employees. If so, why did Baidu develop its own deep learning library, when that means they have to spend extra time teaching employees who are used to TensorFlow how to use Baidu's library?
Well, if everyone is using TensorFlow, there are more contributions made to it by the public. That helps Google for free.
Even if it doesn't become a product that they sell, they can still make money from it.
They might be able to sell services around it, such as training courses. They can also sell hardware specifically optimized for TensorFlow, etc.
Almost every company had its own machine learning framework. The reason they open-sourced them was that each framework would fall far behind if it weren't open source. A prime example is how well Torch was doing up until TensorFlow became open source as well. (Google's backing, and the fact that it's in Python instead of Lua, are a few of the reasons TF became more popular.)

TPU custom chip available with Google Cloud ML

Which type of hardware is used as part of Google Cloud ML when using TensorFlow? Is only CPU available, or are Tensor Processing Units (custom cards) also available?
cf. this article
Cloud ML currently focuses on CPUs. GPUs and TPUs will be available in the future.
Cloud TPUs are available to the public as of 2018-06-27: https://cloud.google.com/tpu/docs/release-notes
This was announced at Google Next '18:
https://www.blog.google/products/google-cloud/empowering-businesses-and-developers-to-do-more-with-ai/
At the time of writing (December 2017), GPUs are available, see https://cloud.google.com/ml-engine/docs/training-overview
If you use the gcloud command-line utility, you can, for example, add the --scale-tier BASIC_GPU option when submitting jobs to ML Engine. This currently runs your TensorFlow code on a Tesla K80.
There is also a CUSTOM scale tier which allows more complex configurations and gives access to P100 GPUs (see https://cloud.google.com/ml-engine/docs/using-gpus).
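
To make that concrete, here is a hypothetical sketch of such a submission, invoked from Python purely for illustration; the job name, package path, and bucket are placeholders, not values from the original answer:

```python
# Hypothetical sketch: submit a Cloud ML Engine training job on a GPU scale tier.
# All names below (job, module, paths, bucket) are placeholders.
import subprocess

subprocess.run([
    "gcloud", "ml-engine", "jobs", "submit", "training", "my_job_001",
    "--module-name", "trainer.task",                # entry point of the training package
    "--package-path", "trainer/",                   # local path to the code to upload
    "--job-dir", "gs://my-bucket/jobs/my_job_001",  # staging/output location
    "--region", "us-central1",
    "--scale-tier", "BASIC_GPU",                    # a single Tesla K80, per the above
], check=True)
```

The same flags can of course be passed to gcloud directly from a shell.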
The TPU service is in 'alpha' status according to https://cloud.google.com/tpu/ and one needs to sign up to learn more.