How to specify which GPU to use when running tensorflow? - tensorflow

We have a DGX-1 in the lab, and I can see many tasks running on different GPUs.
For the MLPerf Docker application, I can use NV_GPU=x to assign which GPU to use.
However, when I try the same approach with my Python Keras/TensorFlow code, the load does not go to the specified GPU.

You can use CUDA_VISIBLE_DEVICES to specify which GPUs your model may use. Note that the value must be a string, not a tuple:
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"  # makes only GPUs 0 and 1 visible to TensorFlow
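One gotcha worth noting: the variable must be set before TensorFlow initializes CUDA, i.e. before the first import tensorflow, or it has no effect. A minimal sketch (the TensorFlow import is commented out so the snippet stands alone):

```python
import os

# Must be a string, and must be set BEFORE importing tensorflow,
# because TF reads it only once, when it initializes CUDA.
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"

# import tensorflow as tf  # TF would now see only GPUs 0 and 1
print(os.environ["CUDA_VISIBLE_DEVICES"])
```

If TensorFlow has already been imported elsewhere in the process, setting the variable afterwards will not hide any GPUs.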

Related

Specify GPU to run training on (TF object detection API, model zoo)

I am training object detection on a device with multiple GPUs and want to run training on GPU 1 (keeping 0 and 2 free), but I cannot see an option to do so when starting training. I have looked through train.py and model_main.py and could not find a line to change there either. Any suggestions?
Use the CUDA_VISIBLE_DEVICES environment variable.
You can set it on the command line:
CUDA_VISIBLE_DEVICES=1 python <your-python-script>
This exposes only GPU 1 to TensorFlow. Note that the visible device is renumbered, so inside TensorFlow it appears as /GPU:0.

TensorFlow Keras Sequential API GPU usage

When using TensorFlow's Keras Sequential API, is there any way to force my model to be trained on a certain piece of hardware? My understanding is that if there is a GPU available (and I have tensorflow-gpu installed), training will run on the GPU by default.
Do I have to switch to a different API to gain more control over where my model runs?
I am a Keras user working on Ubuntu. I specify a certain GPU as follows:
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
where 0 is the index of the GPU. By default, TensorFlow uses the first GPU (index 0) if there are several on your machine. You can list your GPUs and their indices by running the following command in your terminal:
nvidia-smi
or, to refresh the output every second:
watch -n 1 -d nvidia-smi
(The original answer included a screenshot of nvidia-smi output with the GPU index highlighted.)

Is it necessary to install GPU libraries on Google Colaboratory before using GPU?

I've been trying to use the GPU with TensorFlow on Colaboratory, but when I run
import numpy as np
import tensorflow as tf

a = tf.constant(np.random.rand(1000, 20000))
b = tf.constant(np.random.rand(20000, 1000))
with tf.device('/device:GPU:0'):
    c_gpu = tf.matmul(a, b)
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
print(sess.run(c_gpu))
the device placements of the operations are not printed, although the result of the operation is. I suspect it is not using the GPU, because I measured the matrix-multiplication times on both GPU and CPU and compared them.
No, it is not necessary.
In Colaboratory, check under Runtime -> Change runtime type that the Hardware accelerator parameter is set to GPU.
Then, to test whether TensorFlow uses it, you can try this sample, which works for me:
https://stackoverflow.com/a/43703735/9250875

Does tensorflow automatically detect GPU or do I have to specify it manually?

I have code written in TensorFlow that runs fine on CPUs.
I am moving to a new machine that has GPUs, but when I run the code there the training speed does not improve as expected (it takes almost the same time).
I understood that TensorFlow automatically detects GPUs and runs operations on them (https://www.quora.com/How-do-I-automatically-put-all-my-computation-in-a-GPU-in-TensorFlow) & (https://www.tensorflow.org/tutorials/using_gpu).
Do I have to change the code to run the operations on GPUs manually (for now I have a single GPU)? And what would be gained by doing so?
Thanks
If the GPU version of TensorFlow is installed and you don't explicitly assign all your tensors to the CPU, some of them should be placed on the GPU.
To find out which devices (CPU, GPU) are available to TensorFlow, you can use this:
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())
As for performance, that is a broad subject, and it depends heavily on your model, your data, and so on. Here are a few general remarks on TensorFlow performance.

Configuring Tensorflow to use all CPU's

Reading https://www.tensorflow.org/versions/r0.10/resources/faq.html, it states:
Does TensorFlow make use of all the devices (GPUs and CPUs) available
on my machine?
TensorFlow supports multiple GPUs and CPUs. See the how-to
documentation on using GPUs with TensorFlow for details of how
TensorFlow assigns operations to devices, and the CIFAR-10 tutorial
for an example model that uses multiple GPUs.
Note that TensorFlow only uses GPU devices with a compute capability
greater than 3.5.
Does this mean TensorFlow can automatically make use of all CPUs on a given machine, or does it need to be explicitly configured?
CPUs are used via a "device" that is just a thread pool. You can control the number of threads if you feel you need more:
sess = tf.Session(config=tf.ConfigProto(
    intra_op_parallelism_threads=NUM_THREADS))
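NUM_THREADS above is a placeholder you choose yourself. One common, if approximate, starting point is to match the number of logical cores; a minimal sketch that does not require TensorFlow:

```python
import os

# os.cpu_count() returns the number of logical CPUs visible to the
# process, or None if it cannot be determined; fall back to 1.
NUM_THREADS = os.cpu_count() or 1

print(NUM_THREADS)
```

You would then pass NUM_THREADS into tf.ConfigProto as in the snippet above. More threads is not always faster (oversubscription can hurt), so it is worth benchmarking a few values on your workload.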