I have noticed that some newer TensorFlow versions are incompatible with older CUDA and cuDNN versions. Does an overview of the compatible versions or even a list of officially tested combinations exist? I can't find it in the TensorFlow documentation.
TL;DR: See this table: https://www.tensorflow.org/install/source#gpu
Generally:
Check the CUDA version:
cat /usr/local/cuda/version.txt
and cuDNN version:
grep CUDNN_MAJOR -A 2 /usr/local/cuda/include/cudnn.h
and install a combination as given below in the images or here.
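Note that on newer setups these files may no longer exist, so the checks above can come up empty. A hedged set of alternative checks, assuming a CUDA 11+ / cuDNN 8+ layout (adjust the header path, e.g. /usr/include/cudnn_version.h on Debian/Ubuntu package installs):
# CUDA toolkit version (version.txt was dropped in newer CUDA releases)
nvcc --version
# driver version and the maximum CUDA version it supports
nvidia-smi
# cuDNN 8+ moved the version macros to cudnn_version.h
grep CUDNN_MAJOR -A 2 /usr/local/cuda/include/cudnn_version.h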
The following images and the link provide an overview of the officially supported/tested combinations of CUDA and TensorFlow on Linux, macOS and Windows:
Minor configurations:
Since the given specifications below in some cases might be too broad, here is one specific configuration that works:
tensorflow-gpu==1.12.0
cuda==9.0
cuDNN==7.1.4
The corresponding cudnn can be downloaded here.
Tested build configurations
Please refer to https://www.tensorflow.org/install/source#gpu for an up-to-date compatibility chart (for official TF wheels).
(figures updated May 20, 2020)
(Tables: Linux GPU, Linux CPU, macOS GPU, macOS CPU, Windows GPU, Windows CPU — see the link above.)
Updated as of Dec 5, 2020: for up-to-date information, please refer to the Link for Linux and the Link for Windows.
The compatibility table on the TensorFlow site does not list specific minor versions for CUDA and cuDNN. However, if the specific versions are not met, there will be an error when you try to use TensorFlow.
For tensorflow-gpu==1.12.0 and cuda==9.0, the compatible cuDNN version is 7.1.4, which can be downloaded from here after registration.
You can check your CUDA version using
nvcc --version
your cuDNN version using
cat /usr/include/cudnn.h | grep CUDNN_MAJOR -A 2
and your tensorflow-gpu version using
pip freeze | grep tensorflow-gpu
UPDATE:
Since TensorFlow 2.0 has been released, I will share the compatible CUDA and cuDNN versions for it as well (for Ubuntu 18.04); see the install sketch after the version list below.
tensorflow-gpu = 2.0.0
cuda = 10.0
cuDNN = 7.6.0
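A minimal install sketch for that combination, assuming a fresh conda environment (the exact package builds available may differ by channel, so adjust as needed):
# create an isolated environment with a Python version TF 2.0 supports
conda create -n tf2 python=3.7
conda activate tf2
# CUDA 10.0 runtime and cuDNN 7.6 inside the environment
conda install cudatoolkit=10.0 cudnn=7.6
# official TF 2.0 GPU wheel
pip install tensorflow-gpu==2.0.0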
If you are coding in a Jupyter notebook and want to check which CUDA version TF is using, run the following commands directly in a Jupyter cell:
!conda list cudatoolkit
!conda list cudnn
and to check if the GPU is visible to TF:
tf.test.is_gpu_available(
cuda_only=False, min_cuda_compute_capability=None
)
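Note that tf.test.is_gpu_available is deprecated in TF 2.x; a roughly equivalent check for TF 2.1 and later (a sketch, not tied to any specific answer above) is:
import tensorflow as tf
# a non-empty list means at least one GPU is visible to TensorFlow
print(tf.config.list_physical_devices('GPU'))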
You can use this configuration for CUDA 10.0 (10.1 does not work as of 3/18); this runs for me:
tensorflow>=1.12.0
tensorflow_gpu>=1.4
Install the TensorFlow GPU version:
pip install tensorflow-gpu==1.4.0
Thanks for the first answer.
A note about backward compatibility:
I can successfully install tensorflow-2.4.0 with cuda-11.1 and cudnn 8.0.5.
Source: https://www.tensorflow.org/install/source#gpu
I had installed CUDA 10.1 and cuDNN 7.6 by mistake. You can use the following configuration (this worked for me as of 9/10):
Tensorflow-gpu == 1.14.0
CUDA 10.1
CUDNN 7.6
Ubuntu 18.04
But I had to create symlinks for it to work, as TensorFlow 1.14 expects CUDA 10.0:
sudo ln -s /opt/cuda/targets/x86_64-linux/lib/libcublas.so /opt/cuda/targets/x86_64-linux/lib/libcublas.so.10.0
sudo cp /usr/lib/x86_64-linux-gnu/libcublas.so.10 /usr/local/cuda-10.1/lib64/
sudo ln -s /usr/local/cuda-10.1/lib64/libcublas.so.10 /usr/local/cuda-10.1/lib64/libcublas.so.10.0
sudo ln -s /usr/local/cuda/targets/x86_64-linux/lib/libcusolver.so.10 /usr/local/cuda/lib64/libcusolver.so.10.0
sudo ln -s /usr/local/cuda/targets/x86_64-linux/lib/libcurand.so.10 /usr/local/cuda/lib64/libcurand.so.10.0
sudo ln -s /usr/local/cuda/targets/x86_64-linux/lib/libcufft.so.10 /usr/local/cuda/lib64/libcufft.so.10.0
sudo ln -s /usr/local/cuda/targets/x86_64-linux/lib/libcudart.so /usr/local/cuda/lib64/libcudart.so.10.0
sudo ln -s /usr/local/cuda/targets/x86_64-linux/lib/libcusparse.so.10 /usr/local/cuda/lib64/libcusparse.so.10.0
And add the following to my ~/.bashrc:
export PATH=/usr/local/cuda/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH
export PATH=/usr/local/cuda-10.1/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-10.1/lib64:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/cuda/targets/x86_64-linux/lib/
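After creating the symlinks and sourcing ~/.bashrc, a quick hedged sanity check (paths as used above; library names may differ on your system):
# confirm the symlinked names actually resolve
ls -l /usr/local/cuda/lib64/libcudart.so.10.0 /usr/local/cuda-10.1/lib64/libcublas.so.10.0
# confirm TensorFlow 1.14 can see the GPU
python -c "import tensorflow as tf; print(tf.test.is_gpu_available())"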
I had a similar problem after upgrading to TF 2.0. The cuDNN version that TF was reporting did not match what Ubuntu 18.04 thought I had installed. It said I was using cuDNN 7.5.0, but apt thought I had the right version installed.
What I eventually had to do was grep recursively in /usr/local for CUDNN_MAJOR, and I found that /usr/local/cuda-10.0/targets/x86_64-linux/include/cudnn.h did indeed specify the version as 7.5.0.
/usr/local/cuda-10.1 got it right, and /usr/local/cuda pointed to /usr/local/cuda-10.1, so it was (and remains) a mystery to me why TF was looking at /usr/local/cuda-10.0.
Anyway, I just moved /usr/local/cuda-10.0 to /usr/local/old-cuda-10.0 so TF couldn't find it any more and everything then worked like a charm.
It was all very frustrating, and I still feel like I just did a random hack. But it worked :) and perhaps this will help someone with a similar issue.
I am trying to use Tensorflow 2.7.0 with GPU, but I am constantly running into the same issue:
2022-02-03 08:32:31.822484: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /home/username/.cache/pypoetry/virtualenvs/poetry_env/lib/python3.7/site-packages/cv2/../../lib64:/home/username/miniconda3/envs/project/lib/
2022-02-03 08:32:31.822528: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
This issue has already appeared multiple times here and on GitHub. However, the solutions usually proposed are to a) download the missing CUDA files, b) downgrade/upgrade to the correct CUDA version, or c) set the correct LD_LIBRARY_PATH.
I have already been using my PC with CUDA-enabled PyTorch, and I did not have a single issue there. My nvidia-smi reports version 11.0, which is exactly the one I want to have. Also, if I try to run:
import os
LD_LIBRARY_PATH = '/home/username/miniconda3/envs/project/lib/'
print(os.path.exists(os.path.join(LD_LIBRARY_PATH, "libcudart.so.11.0")))
it returns True. This is exactly the part of LD_LIBRARY_PATH from the error message, where TensorFlow apparently cannot see libcudart.so.11.0 (which IS there).
Is there something really obvious that I am missing?
nvidia-smi output:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.156.00 Driver Version: 450.156.00 CUDA Version: 11.0 |
|-------------------------------+----------------------+----------------------+
nvcc:
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2019 NVIDIA Corporation
Built on Sun_Jul_28_19:07:16_PDT_2019
Cuda compilation tools, release 10.1, V10.1.243
First, find out where libcudart.so.11.0 is.
If your error stack names a different library, replace "libcudart.so.11.0" with that name in the command below:
sudo find / -name 'libcudart.so.11.0'
Output on my system, showing where libcudart.so.11.0 lives:
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudart.so.11.0
If the search returns nothing, make sure you have actually installed CUDA and the other components your system needs.
Second, add the path to the environment file:
# edit /etc/profile
sudo vim /etc/profile
# append path to "LD_LIBRARY_PATH" in profile file
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda-11.1/targets/x86_64-linux/lib
# make environment file work
source /etc/profile
You may also refer to this link
A third thing you may try is:
conda install cudatoolkit
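If you go the conda route, it is usually safer to pin the versions the TF table lists for your release instead of taking whatever is latest. For example, for TF 2.7 the table lists CUDA 11.2 and cuDNN 8.1; a hedged sketch:
# pinned CUDA/cuDNN runtime from conda-forge, matching the TF 2.7 row of the table
conda install -c conda-forge cudatoolkit=11.2 cudnn=8.1
# make the environment's libraries visible to TensorFlow
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$CONDA_PREFIX/lib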
I installed CUDA 11.3 and cuDNN 8.2.1 for TF 2.8, based on https://www.tensorflow.org/install/source#gpu, using the following commands:
conda uninstall cudatoolkit
conda install cudnn
Then I exported the LD path (the dynamic link loader path) after finding the library location with sudo find / -name 'libcudnn*'. The system was then able to find the required libraries and use the GPU for training:
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/usr/miniconda3/envs/tf2/lib/
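To avoid re-exporting this in every shell, the variable can be stored in the conda environment itself so it is set on activation (assuming conda 4.8+; note this sets the variable rather than appending to it, and the env name tf2 is taken from the path above):
conda env config vars set LD_LIBRARY_PATH=/home/usr/miniconda3/envs/tf2/lib/
# re-activate so the variable takes effect
conda deactivate && conda activate tf2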
Hope it helped.
I faced the same issue with TensorFlow 2.9 and CUDA 11.7 on Arch Linux x86_64 with 2 NVIDIA GPUs (1080 Ti / Titan RTX) and solved it:
It is not absolutely necessary to match the compatibility matrix exactly (CUDA 11.7 vs. 11.2, i.e. a higher minor version). But the Python 3 version was downgraded according to the TensorFlow compatibility matrix (3.10 to 3.7).
Note that you can have multiple CUDA versions installed and manage them via symlinks on Linux (Windows should be a bit different).
Setup with conda and Python 3.7:
sudo pacman -S base-devel cudnn
conda activate tf-2.9
conda uninstall cudatoolkit && conda install cudnn
I also had to update gcc for another lib (off topic):
conda install -c conda-forge gcc=12.1.0
I added this snippet for debugging, per the TF GPU docs:
import tensorflow as tf
tf.config.list_physical_devices('GPU')
print("Num GPUs Available: ", len(tf.config.list_physical_devices('GPU')))
I now see 2 GPUs detected instead of 0, and training time is divided by 10.
nvidia-smi reports RAM usage maxed out and the power level raised from 9 W to 150 W, validating that one GPU is being used (the other was left idle).
Root cause: cuDNN was not installed system-wide.
I am trying to install tensorflow-gpu 1.15 using conda for an easy install of CUDA and cuDNN. The problem is that, according to the compatibility chart on the official website, I need Python 3.6, CUDA 10.0 and cuDNN 7.4.
Searching the conda repo via conda search cudnn shows that there isn't a cuDNN 7.4. Is there any other way to install the required packages? Or does tensorflow 1.15 perhaps also work with other combinations of versions?
As a side note, python 3.6, tensorflow-gpu 1.15 and CUDA 10 install correctly, but it seems I can't use the GPU correctly without cuDNN.
I just recently started using conda, so maybe there is a straightforward way to do this that I don't realize. My conda version is 4.9.1 (Miniconda).
---update---
Just in case I add the error while trying conda create -n myenv -c conda-forge tensorflow-gpu=1.15:
Collecting package metadata (current_repodata.json): done
Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.
Collecting package metadata (repodata.json): done
Solving environment: -
Found conflicts! Looking for incompatible packages.
This can take several minutes. Press CTRL-C to abort.
failed
UnsatisfiableError: The following specifications were found to be incompatible with each other:
Output in format: Requested package -> Available versions
Package _tflow_select conflicts for:
_tflow_select==2.1.0=gpu
tensorflow==1.15.0 -> _tflow_select[version='2.1.0|2.3.0|2.2.0',build='gpu|mkl|eigen']
Note that strict channel priority may have removed packages required for satisfiability.
I am not sure if that is the problem, but I installed it the following way:
conda create -n tensorflow1.15 python=3.5
conda activate tensorflow1.15
conda install cudatoolkit=10.0
conda install cudnn=7.3.1
pip3 install tensorflow-gpu==1.15
And it seems to work perfectly with the GPU. I didn't know that cuDNN 7.3.1 worked like 7.4. The best way would be to install TensorFlow itself with conda, but that gives me an error by trying to install tensorflow-gpu=2.X.
Also, it may be worth noting that you can search for CUDA and similar official installers with conda search -c nvidia <packageName>.
I would let conda handle all the dependencies itself by installing tensorflow via conda, not pip. The GPU version of tensorflow is available e.g. in the popular conda-forge channel:
conda create -n myenv -c conda-forge tensorflow-gpu=1.15
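A quick hedged sanity check after the environment solves (env name as in the command above):
conda activate myenv
python -c "import tensorflow as tf; print(tf.__version__, tf.test.is_gpu_available())"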
The best setup for TensorFlow 1.15 is to follow this guide: https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/tensorflow-1.14/install.html#tf-install. The recommended CUDA version is 10.0 and the cuDNN version is 7.6.5.
Pay attention to the protobuf version that gets installed: if you install the GPU version it pulls in 4.21.1, but you have to pin it with the command pip install --upgrade tensorflow-gpu==1.15 "protobuf<4.0". If you use the CPU version, it's recommended to use this release (https://github.com/protocolbuffers/protobuf/releases/tag/v3.4.0) to avoid errors. Just download protoc-3.4.0-win32.zip (Windows).
Hope that helps.
I am building a Deep Learning rig with a GeForce RTX 2060.
I want to use baselines-stable, which isn't TensorFlow 2.0 compatible yet.
According to here and here, tensorflow-gpu-1.15 is only listed as compatible with CUDA 10.0, not CUDA 10.1.
When attempting to download CUDA from NVIDIA, there is no Ubuntu 20.04 option for CUDA 10.0.
Searching the apt cache does not turn up CUDA 10.0 either:
$ sudo apt-cache policy nvidia-cuda-toolkit
[sudo] password for lansford:
nvidia-cuda-toolkit:
Installed: (none)
Candidate: 10.1.243-3
Version table:
10.1.243-3 500
500 http://us.archive.ubuntu.com/ubuntu focal/multiverse amd64 Packages
I would highly prefer not to have to reinstall the OS with an older version of Ubuntu. However, experimenting with reinforcement learning was the motive for purchasing this PC.
I see some possible clues that it might be possible to build tensorflow-gpu-1.15 from source with CUDA 10.1 support. I also saw a random comment that CUDA 10.1 will just work with TF 1.15, but I don't want to make a misstep installing things until I have a signal that that is the direction to go. Uninstalling things isn't always straightforward.
Should I install CUDA 10.1 and cross my fingers that TF 1.15 will like it?
Should I download the CUDA 10.0 installer for an older Ubuntu version and see if it will install anyway?
Should I attempt to compile TensorFlow from source against CUDA 10.1 (heh heh heh)?
Should I install an older version of Ubuntu and hope I don't go obsolete too quickly?
Given the situation is there a way to run tensorflow 1.15 with gpu support on Ubuntu 20.04.1?
As this also bothered me, I found a working solution that I think is more versatile than using Docker containers.
The main idea is from here (not to claim credit from others).
To make a working solution for Ubuntu 20.04 and TensorFlow 1.15 one needs:
CUDA 10.0 (to work with TF 1.15).
I had some trouble finding this version because it's not officially available for Ubuntu 20.04. I resorted to the Ubuntu 18.04 version though, which works fine.
Archive toolkits here.
Final toolkit for Ubuntu here (as expected, no 20.04 version is available).
I chose the runfile method, which resulted in 1 main runfile and 1 patch runfile being available:
cuda_10.0.130_410.48_linux.run
cuda_10.0.130.1_linux.run
The toolkit can be safely installed using the instructions provided, since each version goes into its own folder in the system (typically /usr/local/cuda-10.0/).
The corresponding cuDNN for CUDA 10.0.
I had this one from a previous installation, but it shouldn't be hard to download it either. The version I used is cudnn-10.0-linux-x64-v7.6.5.32.tgz.
cuDNN basically just copies files into the right places (it does not actually install anything, that is). So extracting the compressed file and copying the contents to the CUDA folder suffices:
$ sudo cp cuda/include/cudnn.h /usr/local/cuda-10.0/include
$ sudo cp cuda/lib64/libcudnn* /usr/local/cuda-10.0/lib64
$ sudo chmod a+r /usr/local/cuda-10.0/include/cudnn.h /usr/local/cuda-10.0/lib64/libcudnn*
Up to this point, although installed, the system is unaware of the presence of CUDA 10.0, so all calls to it will fail as if it were non-existent. We should update the relevant system environment for CUDA 10.0. One system-wide way (there are others) is to create (if it doesn't exist) a /etc/profile.d/cuda.sh file which will contain the update to the LD_LIBRARY_PATH variable. It should contain something like:
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:/usr/local/cuda-11.3/lib64:/usr/local/cuda-10.0/lib64:$LD_LIBRARY_PATH
This command would normally do the work:
$ sudo sh -c 'echo export LD_LIBRARY_PATH=/usr/local/cuda/lib64:/usr/local/cuda-11.3/lib64:/usr/local/cuda-10.0/lib64:\$LD_LIBRARY_PATH > /etc/profile.d/cuda.sh'
This requires a restart (or re-login) to take effect, I think. Anyway, this way the system will search for the relevant .so files in:
a) /usr/local/cuda/lib64 (the default symbolic link), where it will fail,
b) /usr/local/cuda-11.3/lib64 (virtually the same as the former), where it will also fail, BUT it will also search
c) /usr/local/cuda-10.0/lib64, where it will succeed.
The supported Python versions for CUDA 10.0 (with TF 1.15) end at 3.7, so an older version should be installed. This makes a virtual environment obligatory (since messing with the system Python is never a good idea).
One can install Python 3.7, for example, using this repository, which contains old (and new) versions of Python:
sudo add-apt-repository ppa:deadsnakes/ppa
sudo apt-get install python3.7
This just installs python3.7 on the system; it does not make it the default. The default remains the previous one.
Create a virtual environment with the desired Python as its interpreter. For me this works:
virtualenv -p python3.7 ~/tensorflow_1-15
which creates a new venv with Python 3.7 in it.
Now populate with all required modules and you are set to go.
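For example, a sketch assuming the venv path above and the standard TF 1.15 GPU wheel:
source ~/tensorflow_1-15/bin/activate
pip install tensorflow-gpu==1.15
# quick GPU visibility check for TF 1.x
python -c "import tensorflow as tf; print(tf.test.is_gpu_available())"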
I went ahead with the Docker approach. The TensorFlow documentation seems to be pushing in that direction anyway. With Docker, only the NVIDIA driver needs to be installed on the host. You do need to have NVIDIA support installed in Docker for it to work.
The container includes the CUDA environment matching the TensorFlow version, so I can work with 1.15 and with the latest 2.x versions of TensorFlow on the same computer, even though they require different CUDA versions.
It doesn't install anything besides the Docker images, so nothing gets messy on the computer or difficult to pull back out.
I can still install TensorFlow natively on the computer at some point in the future, when the libraries become available without compiling from source.
Here is the command, which launches Jupyter and mounts the current directory from my computer to /tf/bob, which shows up in Jupyter:
docker run -it --mount type=bind,source="$(pwd)",target=/tf/bob -u $(id -u):$(id -g) -p 8888:8888 tensorflow/tensorflow:1.15.2-gpu-py3-jupyter
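Depending on how GPU support is wired into Docker, you may also need to request the GPUs explicitly; with Docker 19.03+ and the NVIDIA Container Toolkit that is typically done with the --gpus flag (a hedged variant of the same command):
docker run --gpus all -it --mount type=bind,source="$(pwd)",target=/tf/bob -u $(id -u):$(id -g) -p 8888:8888 tensorflow/tensorflow:1.15.2-gpu-py3-jupyter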
Can I install CUDA 10.2 to use TensorFlow 2.1, or does it have to be CUDA 10.1?
I am using Ubuntu 18.04 and I have an NVIDIA Quadro P5000.
Providing the solution here (Answer Section), even though it is present in the Comment Section, for the benefit of the community.
No; as per the TensorFlow documentation, TensorFlow >= 2.1.0 supports CUDA 10.1. Please refer to the compatible version details.
PyTorch needs CUDA 10.2 but TensorFlow needs CUDA 10.1. Is it a joke?
No, it's not a joke: you can use CUDA 10.2 with TensorFlow 2.1.
It is quite simple.
WHY:
When you run "import tensorflow", TensorFlow searches LD_LIBRARY_PATH for a library named libcudart.so.<major>.<minor>. For TensorFlow 2.1.0-2.3.0, which are built against CUDA 10.1, that is libcudart.so.10.1. With CUDA 10.2 there is no libcudart.so.10.1, so there will be an error.
In practice the differences between CUDA 10.1 and CUDA 10.2 are small enough that we can work around this with soft links.
HOW
cd /usr/local/cuda-10.2/targets/x86_64-linux/lib/
ln -s libcudart.so.10.2.89 libcudart.so.10.1
cd /usr/local/cuda-10.2/extras/CUPTI/lib64
ln -s libcupti.so.10.2.75 libcupti.so.10.1
cd /usr/local/cuda-10.2/lib64
ln -s libcudnn.so.8 libcudnn.so.7
vim /etc/profile
export CUDA_HOME=/usr/local/cuda-10.2
export LD_LIBRARY_PATH=${CUDA_HOME}/lib64:${CUDA_HOME}/extras/CUPTI/lib64
export PATH=${CUDA_HOME}/bin:${PATH}
source /etc/profile
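A quick hedged check that the symlink workaround took effect (uses the TF 2.1+ API):
python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"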
Done!