How can I get a V100 GPU in Colab? - gpu

I have bought Colab Pro, but most of the time I can only get a P100. A V100 can run almost 2 times faster than a P100. How can I get a V100 manually?

I have also been using Colab Pro for a long time, and as far as I know these resources are allocated according to Google's availability. I have been given a Tesla P100-PCIE-16GB most of the time, but at random times I get assigned a Tesla V100-SXM2-16GB.
By the way, to print your device name in PyTorch, you can use this:
import torch
# Prints the name of the GPU assigned to the runtime, e.g. "Tesla P100-PCIE-16GB"
print(torch.cuda.get_device_name(device=None))

It's been about three months since I started using Colab Pro, and in that time I have never once been given a V100; most of the time I get a P100, and sometimes a T4.
To see which GPU you have been assigned in Colab, the easiest way is to run the command below:
!nvidia-smi
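If you want to check the assignment from Python instead of reading the nvidia-smi table, here is a minimal sketch (assuming a PyTorch runtime; the reconnect advice is only the usual workaround, since Colab offers no way to request a specific GPU model):

import torch

# Inspect the assigned accelerator and flag it if it is not a V100.
# Colab cannot be forced to hand out a specific model, so the only option is to
# disconnect the runtime and reconnect, hoping the next allocation is a V100.
if torch.cuda.is_available():
    name = torch.cuda.get_device_name(0)
    print("Assigned GPU:", name)
    if "V100" not in name:
        print("Not a V100 - disconnect the runtime and try again.")
else:
    print("No GPU attached - check Runtime > Change runtime type.")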

Related

Training on Google Colab is slower than on local machine despite having better GPU - why?

I've got a DL model to train, and since the data is quite large I store it on my Google Drive, which I mount to my Google Colab instance at the beginning of each session. However, I have noticed that training the exact same model with the exact same script is 1.5-2 times slower on Google Colab than on my personal laptop. The thing is that I checked the Google Colab GPU and it has 12GB RAM (I'm not sure how I can check the exact model), while my laptop GPU is an RTX 2060, which has only 6GB RAM. As I'm a new user of Google Colab, I've been wondering what the reason might be. Is it because loading data from the mounted Google Drive with a torch DataLoader slows down the process? Or maybe because my personal hard drive is an SSD and the Colab instance might not have an SSD attached? How can I check whether something in my Colab setup is slowing down the training?
Resources in Google Colaboratory are dynamically assigned to user instances. Short, interactive workloads are preferred over long-running data loading and training processes; further info can be found in the documentation:
https://research.google.com/colaboratory/faq.html#resource-limits
Quoting specifically from the above link:
"GPUs and TPUs are sometimes prioritized for users who use Colab interactively rather than for long-running computations, or for users who have recently used less resources in Colab...As a result, users who use Colab for long-running computations, or users who have recently used more resources in Colab, are more likely to run into usage limits"

I have a question about using the GPU in Colab

Hi, I'm using Colab for my project and I have a problem with using the GPU.
Although I changed my runtime type to 'GPU', I keep getting a pop-up saying I am connected to a GPU runtime but am not using the GPU.
For reference, I got this message less than 10 minutes after I started training. Do I need extra code to actually use the GPU?
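Yes, selecting a GPU runtime only makes the device available; your code still has to place the model and data on it, otherwise everything runs on the CPU and Colab shows that warning. A minimal sketch, assuming PyTorch (the question does not say which framework is in use):

import torch

# Move the model and every batch onto the GPU explicitly; a GPU runtime alone
# does not make existing CPU code use the accelerator.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

model = torch.nn.Linear(10, 2).to(device)   # toy model, just for illustration
x = torch.randn(32, 10, device=device)      # batch created directly on the GPU
y = model(x)                                # this forward pass now runs on the GPU
print(y.device)                             # prints cuda:0 when the GPU is in use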

Google Colab GPU speed-up works with 2.x, but not with 1.x

In https://colab.research.google.com/notebooks/gpu.ipynb, which I assume is an official demonstration of GPU speed-up by Google, the GPU speed-up (around 60 times faster than with the CPU) works if I follow the steps using TensorFlow 2.x. However, if I want to use version 1.15 as in https://colab.research.google.com/drive/12dduH7y0GPztxSM0AFlfpjj8FU5x8YSv (the only change compared to the notebook from the first link is removing "%tensorflow_version 2.x" both times), tf.test.gpu_device_name() returns the string /device:GPU:0 but there is no speed-up. I would really love to use a TensorFlow version between 1.5 and 1.15, though, as the code I want to run uses functions removed in TensorFlow 2.x. Does anyone know how to use TensorFlow 1.x while still getting the GPU speed-up?
In your notebook the code is never actually executed, since you neither called session.run() nor tf.enable_eager_execution().
Add tf.enable_eager_execution() at the top of your code and you'll see the real difference between CPU and GPU times.
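For context, here is a rough sketch of what that looks like under TF 1.15 in Colab (the %tensorflow_version magic is the same one used in the linked notebooks); with eager execution enabled the ops actually run when called, so the CPU/GPU timing comparison becomes meaningful:

%tensorflow_version 1.x
import time
import tensorflow as tf

tf.enable_eager_execution()   # without this, TF 1.x only builds a graph and nothing executes

def benchmark(device_name):
    # Time a moderately large matmul on the given device.
    with tf.device(device_name):
        a = tf.random.normal((4000, 4000))
        b = tf.random.normal((4000, 4000))
        start = time.time()
        c = tf.matmul(a, b)
        _ = c.numpy()          # pull the result back so the op has definitely finished
        return time.time() - start

print("CPU:", benchmark('/cpu:0'))
print("GPU:", benchmark('/device:GPU:0'))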

tensorflow does not recognise 2nd GPU (/gpu:1)

I am trying to use 2 GPUs, but TensorFlow does not recognise the 2nd one, even though the 2nd GPU is working fine (in a Windows environment).
When I set CUDA_VISIBLE_DEVICES=0 and run the program I see RTX2070 as GPU0
When I set CUDA_VISIBLE_DEVICES=1 and run the program I see GTX 1050 as GPU0
When I set CUDA_VISIBLE_DEVICES=0,1 and run the program I see RTX2070 as GPU0
So basically, TF does not recognise GPU1; it only sees one GPU at a time (GPU 0).
Is there any command to manually define GPU1?
I uninstalled and re-installed cuDNN, Python 3.7, TensorFlow and Keras (GPU versions). I am using Anaconda on Windows 10. I tried changing CUDA_VISIBLE_DEVICES to 0, 1. I don't see any error, but the 2nd GPU does not appear anywhere in Python.
The main GPU is an RTX 2070 (8GB) and the 2nd GPU is a GTX 1050 (2GB). Before submitting I spent some time searching for a solution and tried whatever I could find on the internet. The drivers are up to date, and the 64-bit, latest versions of the software are installed. I don't see any issue, apart from the 2nd GPU not appearing.
The code works fine on the first GPU, and both GPUs have > 3.5 compute capability.
Providing the solution here in the answer section, even though it is present in the comments (thanks to M Student for sharing it), for the benefit of the community.
Adding this at the beginning of the code resolved the issue:
import os
# TensorFlow skips GPUs with fewer streaming multiprocessors than this threshold
# (the default is 8, which filters out small cards such as the GTX 1050).
os.environ["TF_MIN_GPU_MULTIPROCESSOR_COUNT"] = "2"
# Make both GPUs visible to the process.
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"

Gaming GPUs and TensorFlow

I went through the MNIST tutorial with conv nets and, during the training, for the first time felt the need to use a GPU. I have a GeForce GTX 830M on my laptop and was wondering if I could use it with TensorFlow?
Should I invest the time to try to get it working or start searching for a low cost GPU with the right requirements?
[I've been reading about very expensive and highly specialized equipment like the NVIDIA DIGITS, equipment with half precision, etc.]
Looking at this chart, the 830M has compute capability 5.0, so in theory you'll be able to run TensorFlow (which requires 3.5). In practice you'll often hit problems with low memory on laptop GPUs, so you'll likely want to graduate to a desktop to do serious work, but it's a good way to get started.
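If you do try the 830M, a quick sanity check that the GPU build of TensorFlow plus a matching CUDA/cuDNN actually picks up the card looks something like this (tf.config.list_physical_devices is the TF 2.x spelling; older versions only have the first call):

import tensorflow as tf

# Reports the device string of the detected GPU; an empty string means TensorFlow
# fell back to CPU only (wrong build, missing CUDA/cuDNN, or unsupported card).
print(tf.test.gpu_device_name() or "No GPU detected - running on CPU only")

# On TF 2.x you can also list the physical GPUs explicitly:
print(tf.config.list_physical_devices('GPU'))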