TensorFlow does not recognise the 2nd GPU (/gpu:1)

I am trying to use 2 GPUs, but TensorFlow does not recognise the 2nd one. The 2nd GPU itself works fine (in the Windows environment).
When I set CUDA_VISIBLE_DEVICES=0 and run the program I see the RTX 2070 as GPU0.
When I set CUDA_VISIBLE_DEVICES=1 and run the program I see the GTX 1050 as GPU0.
When I set CUDA_VISIBLE_DEVICES=0,1 and run the program I see the RTX 2070 as GPU0.
So basically, TF does not recognise GPU1; it only sees one GPU at a time (GPU0).
Is there any command to manually define GPU1?
I have uninstalled and re-installed cuDNN, Python 3.7, TensorFlow and Keras (the GPU versions). I am using Anaconda on Windows 10. I tried changing CUDA_VISIBLE_DEVICES to 0,1. I don't see any error, but the 2nd GPU does not appear anywhere in Python.
The main GPU is an RTX 2070 (8 GB) and the 2nd GPU is a GTX 1050 (2 GB). Before submitting this I spent some time searching for a solution and tried whatever I could find on the internet. Drivers are up to date, and the 64-bit, latest versions of the software are installed. I don't see any issue other than the 2nd GPU not appearing.
The code runs fine on the first GPU, and both GPUs have compute capability > 3.5.

Providing the solution here (Answer Section), even though it is present in the Comment Section (thanks to M Student for sharing the solution), for the benefit of the community.
Adding this at the beginning of the code resolved the issue:
import os
os.environ["TF_MIN_GPU_MULTIPROCESSOR_COUNT"]="2"
os.environ["CUDA_VISIBLE_DEVICES"]="0,1"

Related

Can I clear up GPU VRAM in Colab

I'm trying to use aitextgen to fine-tune the 774M GPT-2 on a dataset. Unfortunately, no matter what I do, training fails because there are only 80 MB of VRAM available. How can I clear the VRAM without restarting the runtime, and maybe prevent the VRAM from filling up?
Another solution is to use these code snippets.
1. First install numba:
!pip install numba
Then:
from numba import cuda
# ... all of your code and execution ...
cuda.select_device(0)  # select the GPU whose context you want to release
cuda.close()           # close the CUDA context and free the VRAM it held
Your problem is discussed in the official TensorFlow GitHub repository: https://github.com/tensorflow/tensorflow/issues/36465
Update: @alchemy reported that after cuda.close() the device cannot be turned back on within the same runtime.
You can try the code below instead:
from numba import cuda
device = cuda.get_current_device()  # handle to the currently active GPU
device.reset()                      # reset the device and release its memory
Run the command !nvidia-smi inside a notebook cell.
Look for the process ID of the process that is unnecessarily holding the GPU memory you want to free, then run the command !kill process_id.
It should help you.
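If you prefer to do the same from Python rather than shell escapes, here is a minimal sketch using only the standard library (the PID below is a placeholder; use the one nvidia-smi actually reports):
import subprocess
# Show current GPU processes and memory usage (equivalent to running !nvidia-smi)
print(subprocess.run(["nvidia-smi"], capture_output=True, text=True).stdout)
# Kill the offending process by PID to free its VRAM (12345 is only a placeholder)
subprocess.run(["kill", "-9", "12345"])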

StyleGAN 2 images completely black after Tick 0

I am training StyleGAN 2 on my own dataset - https://github.com/NVlabs/stylegan2
It works fine on a single P100 in Google Colab, but when I move the model to Vast.ai and try it on multiple GPUs, an odd issue happens.
Everything works up to Tick 0, and after Tick 1, the fake images all come out completely black.
My environment:
Tensorflow 1.15
CUDA 10.0
My training command:
python3 run_training.py --num-gpus=4 --data-dir="/root/data/" --config=config-f --dataset=images1_tf --mirror-augment=true --metrics=none
In rare instances it works and generates proper fakes, but if I interrupt the training with ^C and resume again, it starts generating the all-black images.
I have tried changing datasets, tried it with different machine instances, but the problem persists.
I had the exact same problem with 2 GPUs (GTX 1080 8GB cards in my case) running Tensorflow 1.15 and CUDA 10.2... It would train for exactly 1 tick as you mentioned, and then all subsequent fakes would be a pure black image. On a whim, I upgraded my Nvidia driver from 440 to 450 which also bumped CUDA up to 11. It then began working and generating proper images after tick 1.

How do I find out if TensorFlow uses the GPU under Windows and Python 3.6?

How do I find out if TensorFlow uses the GPU?
When I check the GPU in the task manager it says that it is 1% utilized. That seems a little low to me, but I do not know whether that display is even accurate for this kind of work.
I find the computation too fast for CPU only, but actually too slow for GPU ...
Installed:
tensorflow and tensorflow-gpu, version 1.15
This code will confirm whether TensorFlow is using the GPU or the CPU:
import tensorflow as tf
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))  # logs the device each op is placed on
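As an additional quick check for TF 1.15, something like the following should also work (a minimal sketch):
import tensorflow as tf
from tensorflow.python.client import device_lib
# Lists every device TensorFlow can see; a working GPU appears with device_type "GPU"
print([d.name for d in device_lib.list_local_devices()])
# Simple boolean check
print(tf.test.is_gpu_available())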

Google Colab GPU speed-up works with 2.x, but not with 1.x

In https://colab.research.google.com/notebooks/gpu.ipynb, which I assume is an official demonstration of GPU speed-up by Google, if I follow the steps, the GPU speed-up (around 60 times faster than with the CPU) using TensorFlow 2.x works. However, if I want to use version 1.15 as in https://colab.research.google.com/drive/12dduH7y0GPztxSM0AFlfpjj8FU5x8YSv (the only change compared to the notebook from the first link is removing "%tensorflow_version 2.x" in both places), tf.test.gpu_device_name() returns the string /device:GPU:0 but there is no speed-up. I would really love to use a TensorFlow version between 1.5 and 1.15, though, as the code I want to run uses functions removed in TensorFlow 2.x. Does anyone know how to use TensorFlow 1.x while still getting the GPU speed-up?
In your notebook the code is not actually executed, since you neither called session.run() nor enabled eager execution.
Add tf.enable_eager_execution() at the top of your code and you'll see the real difference between CPU and GPU times.
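A minimal sketch of the kind of timing comparison the linked notebook performs, assuming TF 1.15 (the shapes and repetition counts here are arbitrary):
import timeit
import tensorflow as tf
tf.enable_eager_execution()  # without this, TF 1.x ops only build a graph and never actually run
def matmul_on(device):
    with tf.device(device):
        x = tf.random.normal((2000, 2000))
        return tf.matmul(x, x).numpy()  # .numpy() forces the computation to complete
# With eager execution enabled these calls really execute, so the timings are meaningful
print("CPU:", timeit.timeit(lambda: matmul_on("/cpu:0"), number=10))
print("GPU:", timeit.timeit(lambda: matmul_on("/gpu:0"), number=10))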

Low NVIDIA GPU Usage with Keras and Tensorflow

I'm running a CNN with keras-gpu and tensorflow-gpu with an NVIDIA GeForce RTX 2080 Ti on Windows 10. My computer has an Intel Xeon E5-2683 v4 CPU (2.1 GHz). I'm running my code through Jupyter (most recent Anaconda distribution). The output in the command terminal shows that the GPU is being utilized, but the script I'm running takes longer than I expect to train/test on the data, and when I open the task manager the GPU utilization looks very low. Here's an image:
Note that the CPU isn't being utilized and nothing else in the task manager suggests anything is being fully utilized. I don't have an ethernet connection and am connected to Wi-Fi (I don't think this affects anything, but I'm not sure with Jupyter since it runs through the web browser). I'm training on a lot of data (~128 GB), which is all loaded into RAM (512 GB). The model I'm running is a fully convolutional neural network (basically a U-Net architecture) with 566,290 trainable parameters. Things I have tried so far:
1. Increasing batch size from 20 to 10,000 (increases GPU usage from ~3-4% to ~6-7%, greatly decreases training time as expected).
2. Setting use_multiprocessing to True and increasing number of workers in model.fit (no effect).
I followed the installation steps on this website: https://www.pugetsystems.com/labs/hpc/The-Best-Way-to-Install-TensorFlow-with-GPU-Support-on-Windows-10-Without-Installing-CUDA-1187/#look-at-the-job-run-with-tensorboard
Note that this installation specifically DOESN'T install CuDNN or CUDA. I've had trouble in the past with getting tensorflow-gpu running with CUDA (although I haven't tried in over 2 years so maybe it's easier with the latest versions) which is why I used this installation method.
Is this most likely the reason why the GPU isn't being fully utilized (no CuDNN/CUDA)? Does it have something to do with the dedicated GPU memory usage being a bottleneck? Or maybe something to do with the network architecture I'm using (number of parameters, etc.)?
Please let me know if you need any more information about my system or the code/data I'm running on to help diagnose. Thanks in advance!
EDIT: I noticed something interesting in the task manager. An epoch with batch size of 10,000 takes around 200s. For the last ~5s of each epoch, the GPU usage increases to ~15-17% (up from ~6-7% for the first 195s of each epoch). Not sure if this helps or indicates there's a bottleneck somewhere besides the GPU.
You definitely need to install CUDA/cuDNN to fully utilize the GPU with TensorFlow. You can double-check that the packages are installed correctly and that the GPU is available to tensorflow/keras by using
import tensorflow as tf
tf.config.list_physical_devices("GPU")
and the output should look something like [PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
if the device is available.
If you've installed CUDA/cuDNN correctly, then all you need to do is change "Copy" to "Cuda" in the drop-down menu of one of the Task Manager GPU graphs, which will show the CUDA utilization. The other indicators for the GPU will not be active when running tf/keras because there is no video encoding/decoding etc. to be done; it is simply using the CUDA cores on the GPU, so the only way to track GPU usage is to look at the CUDA utilization (when monitoring from the Task Manager).
I would first start by running one of the short "tests" to ensure TensorFlow is utilizing the GPU. For example, I prefer @Salvador Dali's answer in that linked question:
import tensorflow as tf
with tf.device('/gpu:0'):
    a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
    b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
    c = tf.matmul(a, b)
with tf.Session() as sess:
    print(sess.run(c))
If TensorFlow is indeed using your GPU, you should see the result of the matrix multiplication printed. Otherwise you will get a fairly long stack trace stating that "gpu:0" cannot be found.
If this all works well, then I would recommend utilizing Nvidia's smi.exe utility. It is available on both Windows and Linux and, AFAIK, installs with the Nvidia driver. On a Windows system it is located at
C:\Program Files\NVIDIA Corporation\NVSMI\nvidia-smi.exe
Open a windows command prompt and navigate to that directory. Then run
nvidia-smi.exe -l 3
This will show you a status screen that updates every three seconds.
Here we can see various information about the state of the GPUs and what they are doing. Of specific interest in this case is the "Pwr: Usage/Cap" and "Volatile GPU-Util" columns. If your model is indeed using the/a GPU these columns should increase "instantaneously" once you start training the model.
You most likely will see an increase in fan speed and temperature unless you have a very nice cooling solution. At the bottom of the printout you should also see a process with a name akin to "python" or "Jupyter" running.
If this fails to provide an answer as to the slow training times, then I would surmise the issue lies with the model and code itself. And I think that is actually the case here, specifically since the Windows Task Manager's listing for "Dedicated GPU Memory Usage" is pegged at basically maximum.
If you have tried @KDecker's and @OverLordGoldDragon's solutions and the low GPU usage is still there, I would suggest first investigating your data pipeline. The following two figures are from the official TensorFlow data performance guide; they illustrate well how the data pipeline affects GPU efficiency.
As you can see, preparing data in parallel with the training increases GPU usage. In this situation, CPU processing becomes the bottleneck. You need to find a mechanism to hide the latency of preprocessing, such as changing the number of processes, the buffer size, etc. The throughput of the CPU side should match the throughput of the GPU; that way the GPU is maximally utilized.
Take a look at Tensorpack; it has detailed tutorials on how to speed up your input data pipeline.
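As a rough sketch of the overlap those guides describe, a tf.data pipeline along these lines lets the CPU prepare the next batch while the GPU trains (preprocess and images are placeholders for your own preprocessing function and in-memory array):
import tensorflow as tf
AUTOTUNE = tf.data.experimental.AUTOTUNE
def preprocess(x):
    # placeholder for whatever CPU-side decoding/augmentation you do
    return tf.cast(x, tf.float32) / 255.0
dataset = (
    tf.data.Dataset.from_tensor_slices(images)      # `images` is your in-memory data
    .map(preprocess, num_parallel_calls=AUTOTUNE)   # run preprocessing on multiple CPU cores
    .batch(10000)
    .prefetch(AUTOTUNE)                             # prepare the next batch while the GPU is busy
)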
Everything works as expected; your dedicated memory usage is nearly maxed, and neither TensorFlow nor CUDA can use shared memory -- see this answer.
If your GPU runs OOM, the only remedy is to get a GPU with more dedicated memory, decrease the model size, or use the script below to prevent TensorFlow from assigning redundant resources to the GPU (which it does tend to do):
## LIMIT GPU USAGE
import tensorflow as tf
from keras import backend as K  # assumes Keras with the TensorFlow backend
config = tf.ConfigProto()
config.gpu_options.allow_growth = True  # don't pre-allocate memory; allocate as needed
config.gpu_options.per_process_gpu_memory_fraction = 0.95  # limit the fraction of GPU memory that may be allocated
K.tensorflow_backend.set_session(tf.Session(config=config))  # create a session with the above settings
The unusual increased usage you observe may be shared memory resources being temporarily accessed due to exhausting other available resources, especially with use_multiprocessing=True, but I am not sure; it could have other causes.
There seems to have been a change to the installation method you referenced: https://www.pugetsystems.com/labs/hpc/The-Best-Way-to-Install-TensorFlow-with-GPU-Support-on-Windows-10-Without-Installing-CUDA-1187
It is now much easier and should eliminate the problems you are experiencing.
Important edit: you don't seem to be looking at the actual compute usage of the GPU.
Read the following two pages and you will get an idea of how to properly set things up with the GPU:
https://medium.com/@kegui/how-do-i-know-i-am-running-keras-model-on-gpu-a9cdcc24f986
https://datascience.stackexchange.com/questions/41956/how-to-make-my-neural-netwok-run-on-gpu-instead-of-cpu