I have installed CUDA and cuDNN, but the latter was not working; it produced a lot of error messages in Theano. Now I am training moderately sized deep conv nets in Keras/TensorFlow without getting any cuDNN error messages. How can I check whether cuDNN is actually being used?
tl;dr: If tensorflow-gpu works, then CuDNN is used.
The prebuilt binaries of TensorFlow (at least since version 1.3) are linked against the cuDNN library. If cuDNN is missing, an error message will tell you: ImportError: Could not find 'cudnn64_7.dll'. TensorFlow requires that this DLL be installed....
According to the TensorFlow install documentation for version 1.5, cuDNN must be installed for GPU support, even if you build TensorFlow from source. There are still a lot of fallbacks in the TensorFlow code for the case where cuDNN is not available; as far as I can tell, it used to be optional in earlier versions.
There are lines in the TensorFlow source that explicitly check for and enforce cuDNN as a requirement for GPU acceleration.
There is a special GPU version of TensorFlow that needs to be installed in order to use the GPU (and CuDNN). Make sure the installed python package is tensorflow-gpu and not just tensorflow.
You can list the installed packages whose names contain "tensorflow" with conda list tensorflow (or just pip list if you do not use Anaconda), but make sure you have the right environment activated.
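For example, a small sketch of that check (use conda list tensorflow instead if you manage your environment with conda):

```shell
# List installed packages whose names contain "tensorflow"; prints a note
# instead of failing when none is installed.
pip list 2>/dev/null | grep -i tensorflow || echo "no tensorflow package found"
```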
When you run your scripts with GPU support, they will start like this:
Using TensorFlow backend.
2018- ... C:\tf_jenkins\...\gpu\gpu_device.cc:1105] Found device 0 with properties:
name: GeForce GTX 1080 major: 6 minor: 1 memoryClockRate(GHz): 1.7845
To test it, just type into the console:
import tensorflow as tf
tf.Session()
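Note that tf.Session() is the TensorFlow 1.x way to force device initialization. On TensorFlow 2.3 or newer, a hedged sketch of an equivalent check uses tf.sysconfig.get_build_info() to see which CUDA/cuDNN versions the installed binary was built against (the import guard is only there so the snippet degrades gracefully when TensorFlow is absent):

```python
# Sketch, assuming TensorFlow 2.3+ where get_build_info() is available.
try:
    import tensorflow as tf
except ImportError:
    print("TensorFlow is not installed in this environment")
else:
    info = tf.sysconfig.get_build_info()  # build-time facts about the wheel
    print("built with CUDA:", info.get("is_cuda_build"))
    print("cuDNN version:", info.get("cudnn_version"))  # absent on CPU-only builds
    print("GPUs visible at runtime:", tf.config.list_physical_devices("GPU"))
```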
To check whether your Python environment can "see" cuDNN (and thereby validate that the PATH variable is correct), you can try this:
import ctypes
ctypes.WinDLL("cudnn64_7.dll") # use the file name of your cudnn version here.
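ctypes.WinDLL only exists on Windows. If you want a check that also works elsewhere (or does not hard-code one DLL name), here is a small cross-platform sketch using only the standard library; the library names tried below are assumptions, so adjust them to the cuDNN version you installed:

```python
import ctypes.util

# Ask the system loader whether it can locate a cuDNN library at all.
# find_library searches PATH on Windows and the usual loader paths on Linux.
for name in ("cudnn64_8", "cudnn64_7", "cudnn"):
    found = ctypes.util.find_library(name)
    if found:
        print(f"loader can see {name}: {found}")
        break
else:
    print("no cuDNN library visible on the loader path")
```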
You might also want to look into the GPU optimized Keras Layers.
CuDNNLSTM
CuDNNGRU
They are significantly faster:
https://keras.io/layers/recurrent/#cudnnlstm
We saw a 10x improvement going from the LSTM to CuDNNLSTM Keras layers.
Note:
We also saw a 10x increase in VMS (virtual memory size) usage on the machine, so there are tradeoffs to consider.
Related
I'm trying to use my laptop's RTX 3070 GPU for CNN model training because I have to run an exhaustive grid search to tune the hyperparameters. I have tried many different methods, but I could not get it to work. Can anyone kindly point me in the right direction?
The procedure I followed:
Installed the NVIDIA CUDA Toolkit 11.2
Installed NVIDIA cuDNN 8.1 by downloading it and copying the files (bin, include, lib) into NVIDIA GPU Computing Toolkit/CUDA/v11.2
Set up the environment variables by adding the paths for both bin and libnvvm to the system PATH.
Installed TensorFlow 2.11 and Python 3.8 in a new conda environment.
However, I was unable to get the system to use the available GPU. The code seems to be using only the CPU, and when I run the following query I get the output below.
query:
import tensorflow as tf
print("TensorFlow version:", tf.__version__)
print("Num GPUs Available: ", len(tf.config.list_physical_devices('GPU')))
Output:
TensorFlow version: 2.11.0
Num GPUs Available: 0
Am I missing something here, or does anyone else have the same issue?
You should use the DirectML plugin. As of TensorFlow 2.11, GPU support has been dropped for native Windows, so the DirectML plugin is the way to use your GPU.
You can follow the tutorial here to install it.
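A minimal sketch of that route (package names taken from the tensorflow-directml-plugin project; check its documentation for the exact TensorFlow version it currently pins):

```shell
# On native Windows, install the CPU wheel plus the DirectML pluggable device.
pip install tensorflow-cpu
pip install tensorflow-directml-plugin
# afterwards, tf.config.list_physical_devices('GPU') should report a DML device
```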
According to this link: https://pypi.org/project/tensorflow-gpu/ , the "tensorflow-gpu" package is no longer supported, and users should instead use the "tensorflow" package, which supposedly supports the GPU.
However, after installing the tensorflow 2.11 package, it will not even detect my GPU device; it only runs on the CPU. How does one use the GPU with TensorFlow 2.11?
It appears that TensorFlow 2.10 is the last version to support the GPU on native Windows: https://discuss.tensorflow.org/t/2-10-last-version-to-support-native-windows-gpu/12404
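If you stay on native Windows, a sketch of that last supported route, following the old tensorflow.org pip instructions (the version pins are from that guide, so double-check them against the current page):

```shell
# Conda provides the CUDA/cuDNN runtime libraries for TF 2.10 on Windows.
conda install -c conda-forge cudatoolkit=11.2 cudnn=8.1.0
# Anything below 2.11 still ships native Windows GPU support.
pip install "tensorflow<2.11"
```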
I'm using Anaconda prompt to install:
Tensorflow 2.10.0
cudatoolkit 11.3.1
cudnn 8.2.1
I'm using Windows 11 and an RTX 3070 NVIDIA graphics card, and all the drivers have been updated. I also tried downloading another version of CUDA and cuDNN as .exe files directly from the CUDA website and added the directories to the system PATH.
But whenever I type in:
import tensorflow as tf
print("Num GPUs Available: ", len(tf.config.experimental.list_physical_devices('GPU')))
It gives me Num GPUs Available: 0
And sure enough, everything is computed on the CPU.
It worked once when I used Colab with the GPU accelerator and created a session on my GPU; that time the GPU load was maxed out. But later, I don't know why, I could no longer use my own GPU for training in Colab, or even their default free GPU.
Please help. ChatGPT doesn't give me correct information, since it only has knowledge from before 2020; it keeps telling me to install 'tensorflow-gpu', which has already been removed.
I installed the tensorflow-gpu version and tried to test the GPU setup as suggested:
import tensorflow as tf
print("Num GPUs Available: ", len(tf.config.experimental.list_physical_devices('GPU')))
However, I got the following output:
Num GPUs Available: 0
My machine does have a GPU card, so why is it not picked up by TensorFlow?
It is likely that you do not have a compatible combination of the following:
CUDA
CuDNN
TensorFlow
Please check my answer here for a correct combination of the above: Tensorflow 2.0 can't use GPU, something wrong in cuDNN?: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize
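To make the "right combination" concrete, here is a small sketch encoding a few rows of the tested-build table from https://www.tensorflow.org/install/source#gpu (only an excerpt, and the helper name check_combo is mine, not part of TensorFlow):

```python
# A few tested TF / CUDA / cuDNN combinations from the official build tables.
TESTED_COMBOS = {
    "1.15": {"cuda": "10.0", "cudnn": "7.4"},
    "2.4":  {"cuda": "11.0", "cudnn": "8.0"},
    "2.10": {"cuda": "11.2", "cudnn": "8.1"},
    "2.11": {"cuda": "11.2", "cudnn": "8.1"},
}

def check_combo(tf_version, cuda, cudnn):
    """Return True if the (tf, cuda, cudnn) triple matches a tested build."""
    combo = TESTED_COMBOS.get(tf_version)
    return combo is not None and combo["cuda"] == cuda and combo["cudnn"] == cudnn

print(check_combo("2.10", "11.2", "8.1"))  # True: a known-good combination
print(check_combo("1.15", "11.2", "8.1"))  # False: TF 1.15 predates CUDA 11
```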
I attached an RTX 3080 to my computer, but when training with Keras 2.3.1 and TensorFlow 1.15 I get errors like "failed to run cuBLAS_STATUS_EXECUTION_FAILED, did not mem zero GPU location ... check failed: start_event != nullptr && stop_event != nullptr". I think the problem is that the recently released RTX 3080 and CUDA 11 are not yet supported by Keras 2.x and TensorFlow 1.x. Is this right? And what causes the problem?
At the time of writing, NVIDIA's 30xx series only fully supports CUDA version 11.x; see https://forums.developer.nvidia.com/t/can-rtx-3080-support-cuda-10-1/155849/2
TensorFlow 1.15 was never fully supported on CUDA 10.1 or newer, probably for reasons similar to those described in the link above. Unfortunately, TensorFlow 1.x is no longer supported or maintained; see https://github.com/tensorflow/tensorflow/issues/43629#issuecomment-700709796
TensorFlow 2.4 is your best bet with an Ampere GPU. It now has a stable release and official support for CUDA 11.0; see https://www.tensorflow.org/install/source#gpu
As TensorFlow 1.x is never going to be updated or maintained by the TensorFlow team, I would strongly suggest moving to TensorFlow 2.x. Personal preferences aside, it is better in almost every way, and it has the tf.compat module for backwards compatibility with TensorFlow 1.x code if rewriting your code base is not an option. However, even that module is no longer maintained, which really shows that version 1.x is dead; see https://www.tensorflow.org/guide/versions#what_is_covered
However, if you're dead set on using TensorFlow 1.15, you might have a chance with NVIDIA's TensorFlow fork, which apparently supports version 1.15 on Ampere GPUs; see https://developer.nvidia.com/blog/accelerating-tensorflow-on-a100-gpus/
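A sketch of that route (commands from the linked NVIDIA blog post; nvidia-tensorflow ships Linux wheels only, so this does not help on native Windows):

```shell
# NVIDIA's pip index, then their maintained TF 1.15 build for Ampere GPUs.
pip install --user nvidia-pyindex
pip install --user nvidia-tensorflow[horovod]
```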