I am trying to use Tensorflow 2.7.0 with GPU, but I am constantly running into the same issue:
2022-02-03 08:32:31.822484: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /home/username/.cache/pypoetry/virtualenvs/poetry_env/lib/python3.7/site-packages/cv2/../../lib64:/home/username/miniconda3/envs/project/lib/
2022-02-03 08:32:31.822528: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
This issue has already appeared multiple times here and on GitHub. However, the solutions usually proposed are to a) download the missing CUDA files, b) downgrade/upgrade to the correct CUDA version, or c) set the correct LD_LIBRARY_PATH.
I have already been using this PC with CUDA-enabled PyTorch, and I did not have a single issue there. My nvidia-smi reports CUDA version 11.0, which is exactly the one I want. Also, if I try to run:
import os
LD_LIBRARY_PATH = '/home/username/miniconda3/envs/project/lib/'
print(os.path.exists(os.path.join(LD_LIBRARY_PATH, "libcudart.so.11.0")))
it returns True. This is exactly the path from the LD_LIBRARY_PATH in the error message, where TensorFlow apparently cannot see libcudart.so.11.0 (which IS there).
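For reference, a minimal sketch of one more check: printing the LD_LIBRARY_PATH that the Python process itself sees, since the dynamic loader only honors a value that was set before the interpreter started (a common pitfall with Jupyter or IDE launchers):
import os

# The loader only respects LD_LIBRARY_PATH if it was set before this
# Python process (and hence TensorFlow) was started.
print(os.environ.get("LD_LIBRARY_PATH", "<not set in this process>"))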
Is there something really obvious that I am missing?
nvidia-smi output:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.156.00 Driver Version: 450.156.00 CUDA Version: 11.0 |
|-------------------------------+----------------------+----------------------+
nvcc:
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2019 NVIDIA Corporation
Built on Sun_Jul_28_19:07:16_PDT_2019
Cuda compilation tools, release 10.1, V10.1.243
Firstly:
Find out where "libcudart.so.11.0" is.
If your error mentions a different library, replace "libcudart.so.11.0" in the command below with that file name:
sudo find / -name 'libcudart.so.11.0'
Output on my system, showing where "libcudart.so.11.0" lives:
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudart.so.11.0
If the search returns nothing, make sure you have actually installed CUDA (and any other required components) on your system.
Second, add the path to your environment file.
# edit /etc/profile
sudo vim /etc/profile
# append path to "LD_LIBRARY_PATH" in profile file
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda-11.1/targets/x86_64-linux/lib
# make environment file work
source /etc/profile
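Once the environment file has been reloaded, a minimal sketch (assuming TensorFlow is already installed in the active environment) to verify that the library is now found and a GPU is visible:
import tensorflow as tf

# If libcudart is now on the loader path, the dlerror warning disappears
# and at least one GPU device should be listed here.
print("Built with CUDA:", tf.test.is_built_with_cuda())
print("GPUs:", tf.config.list_physical_devices("GPU"))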
You may also refer to this link
Third thing you may try is:
conda install cudatoolkit
Install the correct versions of CUDA 11.3 and cuDNN 8.2.1 for TF 2.8, based on the table at https://www.tensorflow.org/install/source#gpu, using the following commands.
conda uninstall cudatoolkit
conda install cudnn
Then export the LD path (the dynamic link loader path) after finding the location with sudo find / -name 'libcudnn*'. After that, the system was able to find the required libraries and use the GPU for training.
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/usr/miniconda3/envs/tf2/lib/
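Before relying on the export, a quick sketch to confirm that the environment's lib directory actually contains the CUDA/cuDNN libraries (the path below mirrors the hypothetical export above; adjust it to your own env):
import glob
import os

# Hypothetical env path taken from the export above; change to your env.
env_lib = os.path.expanduser("~/miniconda3/envs/tf2/lib")
print(glob.glob(os.path.join(env_lib, "libcudart*")))
print(glob.glob(os.path.join(env_lib, "libcudnn*")))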
Hope it helped.
I faced the same issue with TensorFlow 2.9 and CUDA 11.7 on Arch Linux x86_64 with 2 NVIDIA GPUs (1080 Ti / Titan RTX) and solved it as follows:
It is not absolutely necessary to match the compatibility matrix exactly (CUDA 11.7 vs 11.2, i.e. a slightly higher minor version). However, the Python 3 version was downgraded according to the TensorFlow compatibility matrix (3.10 to 3.7).
Note that you can have multiple CUDA versions installed and manage them via symlinks on Linux (Windows should be a bit different).
Setup with conda and Python 3.7:
sudo pacman -S base-devel cudnn
conda activate tf-2.9
conda uninstall cudatoolkit && conda install cudnn
I also had to update gcc for another library (off topic):
conda install -c conda-forge gcc=12.1.0
I added the following snippet for debugging, following the TF GPU docs:
import tensorflow as tf
tf.config.list_physical_devices('GPU')
print("Num GPUs Available: ", len(tf.config.list_physical_devices('GPU')))
I now see 2 GPUs detected instead of 0, and training time is divided by 10.
nvidia-smi reports maxed-out memory usage and the power draw raised from 9 W to 150 W, confirming that the GPU is in use (the other one was left idle).
Root cause: cuDNN was not installed system-wide.
Related
I've been able to successfully set up an Ubuntu 18.04 server with nvidia-smi 418.39, Driver version 418.39, and CUDA 10.1
I now have a user who wants to run TensorFlow but insists that it is not compatible with CUDA 10.1, only CUDA 10. There is no statement confirming this online anywhere that I can find, nor is it in any release patch notes from TF. Because setting this system up was kind of a pain to do, I'm a little hesitant to try downgrading just one version.
Does anyone have verification whether TensorFlow 1.12 does or does not work with CUDA 10.1?
I can confirm that even tf 1.13.1 only works with CUDA 10.0 for me, not 10.1.
I don't know whether a symlink would work, though.
If you try to run tf 1.13.1 on CUDA 10.1, it will give you "ImportError: libcublas.so.10.0: cannot open shared object file: No such file or directory"
TensorFlow 1.12 (and even later versions 1.13.1 and 2.0.0-alpha0) could not be built against CUDA 10.1, thus can be considered incompatible.
I have tried building TensorFlow from source with GPU support. The TensorFlow versions I considered were 1.13.1 and 2.0.0-alpha0. The machine I used runs CentOS 7.6 with GCC 4.8.5. I have the NVIDIA Driver version 418.67 installed (which has the release date 2019.5.7 and supports CUDA Toolkit 10.1).
I succeeded in building both TensorFlow versions with CUDA 10.0 and cuDNN 7.6.0 + NCCL 2.4.7 (for CUDA 10.0). Note that you don't need to have the GPU attached to the machine (especially if you're using a VM in the cloud) while you're building TensorFlow with GPU support.
However, when I switched to CUDA 10.1 and cuDNN 7.6.0 + NCCL 2.4.7 (for CUDA 10.1), neither of these TensorFlow versions could be built. Besides the change in the location of libcublas, another source of errors is that no libcudart.so* files are found in cuda-10.1/lib64/ (while they do exist in cuda-10.0/lib64/).
I can also confirm that tf 1.13.1 does not work with CUDA 10.1. While importing tensorflow you will get the following error
ImportError: libcublas.so.10.0: cannot open shared object file: No such file or directory
running ldconfig -v shows the difference
libcublas.so.10.0 vs libcublas.so.10.1.0.105
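A small sketch that reproduces the check TF performs at import time, by dlopen-ing the exact soname it was built against:
import ctypes

# TF 1.13.1 looks for this exact soname; CUDA 10.1 ships libcublas under a
# different version suffix, hence the ImportError above.
try:
    ctypes.CDLL("libcublas.so.10.0")
    print("libcublas.so.10.0 found")
except OSError as err:
    print("not found:", err)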
I have noticed that some newer TensorFlow versions are incompatible with older CUDA and cuDNN versions. Does an overview of the compatible versions or even a list of officially tested combinations exist? I can't find it in the TensorFlow documentation.
TL;DR: See this table: https://www.tensorflow.org/install/source#gpu
Generally:
Check the CUDA version:
cat /usr/local/cuda/version.txt
and cuDNN version:
grep CUDNN_MAJOR -A 2 /usr/local/cuda/include/cudnn.h
and install a combination as given below in the images or here.
The following images and the link provide an overview of the officially supported/tested combinations of CUDA and TensorFlow on Linux, macOS and Windows:
Minor configurations:
Since the given specifications below in some cases might be too broad, here is one specific configuration that works:
tensorflow-gpu==1.12.0
cuda==9.0
cuDNN==7.1.4
The corresponding cudnn can be downloaded here.
Tested build configurations
Please refer to https://www.tensorflow.org/install/source#gpu for a up-to-date compatibility chart (for official TF wheels).
(Figures updated May 20, 2020. The tables, not reproduced here, cover Linux GPU, Linux CPU, macOS GPU, macOS CPU, Windows GPU, and Windows CPU.)
Updated as of Dec 5, 2020: for the latest information, please refer to the links for Linux and for Windows.
The compatibility table on the TensorFlow site does not list specific minor versions for CUDA and cuDNN. However, if the specific versions are not met, you will get an error when you try to use TensorFlow.
For tensorflow-gpu==1.12.0 and cuda==9.0, the compatible cuDNN version is 7.1.4, which can be downloaded from here after registration.
You can check your cuda version using
nvcc --version
cuDNN version using
cat /usr/include/cudnn.h | grep CUDNN_MAJOR -A 2
tensorflow-gpu version using
pip freeze | grep tensorflow-gpu
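In newer releases (TF 2.3 and later) you can also query the CUDA and cuDNN versions a GPU wheel was built against directly from Python; a minimal sketch:
import tensorflow as tf

print(tf.__version__)
# Returns a dict with keys such as 'cuda_version' and 'cudnn_version'
# for GPU builds (available since TF 2.3).
print(tf.sysconfig.get_build_info())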
UPDATE:
Since TensorFlow 2.0 has been released, I will share the compatible CUDA and cuDNN versions for it as well (for Ubuntu 18.04).
tensorflow-gpu = 2.0.0
cuda = 10.0
cuDNN = 7.6.0
If you are coding in a Jupyter notebook and want to check which CUDA version TF is using, run the following commands directly in a Jupyter cell:
!conda list cudatoolkit
!conda list cudnn
and to check whether the GPU is visible to TF:
tf.test.is_gpu_available(
cuda_only=False, min_cuda_compute_capability=None
)
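Note that tf.test.is_gpu_available is deprecated in recent TF 2.x releases; an equivalent check (a sketch) is:
import tensorflow as tf

# Preferred replacement for tf.test.is_gpu_available in TF 2.x.
gpus = tf.config.list_physical_devices("GPU")
print("Num GPUs Available:", len(gpus))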
You can use this configuration for CUDA 10.0 (10.1 does not work as of 3/18); this runs for me:
tensorflow>=1.12.0
tensorflow_gpu>=1.4
Install the TensorFlow GPU version:
pip install tensorflow-gpu==1.4.0
Thanks for the first answer.
A note about backward compatibility:
I can successfully install tensorflow-2.4.0 with cuda-11.1 and cudnn 8.0.5.
Source: https://www.tensorflow.org/install/source#gpu
I had installed CUDA 10.1 and cuDNN 7.6 by mistake. You can use the following configuration (this worked for me as of 9/10):
Tensorflow-gpu == 1.14.0
CUDA 10.1
CUDNN 7.6
Ubuntu 18.04
But I had to create symlinks for it to work, since this TensorFlow version was built against CUDA 10.0.
sudo ln -s /opt/cuda/targets/x86_64-linux/lib/libcublas.so /opt/cuda/targets/x86_64-linux/lib/libcublas.so.10.0
sudo cp /usr/lib/x86_64-linux-gnu/libcublas.so.10 /usr/local/cuda-10.1/lib64/
sudo ln -s /usr/local/cuda-10.1/lib64/libcublas.so.10 /usr/local/cuda-10.1/lib64/libcublas.so.10.0
sudo ln -s /usr/local/cuda/targets/x86_64-linux/lib/libcusolver.so.10 /usr/local/cuda/lib64/libcusolver.so.10.0
sudo ln -s /usr/local/cuda/targets/x86_64-linux/lib/libcurand.so.10 /usr/local/cuda/lib64/libcurand.so.10.0
sudo ln -s /usr/local/cuda/targets/x86_64-linux/lib/libcufft.so.10 /usr/local/cuda/lib64/libcufft.so.10.0
sudo ln -s /usr/local/cuda/targets/x86_64-linux/lib/libcudart.so /usr/local/cuda/lib64/libcudart.so.10.0
sudo ln -s /usr/local/cuda/targets/x86_64-linux/lib/libcusparse.so.10 /usr/local/cuda/lib64/libcusparse.so.10.0
And add the following to my ~/.bashrc:
export PATH=/usr/local/cuda/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH
export PATH=/usr/local/cuda-10.1/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-10.1/lib64:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/cuda/targets/x86_64-linux/lib/
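After creating the symlinks and reloading ~/.bashrc, a minimal sketch to confirm that each CUDA 10.0-era soname expected by TF 1.14 now resolves:
import ctypes

# TF 1.14 wheels dlopen these exact sonames; each symlink above maps a
# CUDA 10.1 library onto one of them.
for name in ("libcublas.so.10.0", "libcudart.so.10.0", "libcurand.so.10.0",
             "libcufft.so.10.0", "libcusolver.so.10.0", "libcusparse.so.10.0"):
    try:
        ctypes.CDLL(name)
        print("OK     ", name)
    except OSError:
        print("MISSING", name)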
I had a similar problem after upgrading to TF 2.0. The cuDNN version that TF was reporting did not match what Ubuntu 18.04 thought I had installed. It said I was using cuDNN 7.5.0, but apt thought I had the right version installed.
What I eventually had to do was grep recursively in /usr/local for CUDNN_MAJOR, and I found that /usr/local/cuda-10.0/targets/x86_64-linux/include/cudnn.h did indeed specify the version as 7.5.0.
/usr/local/cuda-10.1 got it right, and /usr/local/cuda pointed to /usr/local/cuda-10.1, so it was (and remains) a mystery to me why TF was looking at /usr/local/cuda-10.0.
Anyway, I just moved /usr/local/cuda-10.0 to /usr/local/old-cuda-10.0 so TF couldn't find it any more and everything then worked like a charm.
It was all very frustrating, and I still feel like I just did a random hack. But it worked :) and perhaps this will help someone with a similar issue.
I am installing the latest TensorFlow library on my Ubuntu 16.04 machine.
For this I downloaded and installed the latest CUDA toolkit and cuDNN libraries.
After installation I checked it using the following commands.
(/home/naseer/anaconda2/) naseer@naseer-Virtual-Machine:~/anaconda2$ python
Python 2.7.13 |Anaconda 4.3.1 (64-bit)| (default, Dec 20 2016, 23:09:15)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Anaconda is brought to you by Continuum Analytics.
Please check out: http://continuum.io/thanks and https://anaconda.org
>>> import tensorflow as tf
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcublas.so locally
I tensorflow/stream_executor/dso_loader.cc:102] Couldn't open CUDA library libcudnn.so. LD_LIBRARY_PATH: /usr/local/cuda-8.0.61/lib64
I tensorflow/stream_executor/cuda/cuda_dnn.cc:2259] Unable to load cuDNN DSO
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcufft.so locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcurand.so locally
What does the above output mean? Does it mean that TensorFlow will run correctly on my NVIDIA GPU-enabled system, or do I need to do something else?
My local directory structure:
I have added the following screenshot, which shows the various library paths in my local directories.
My Understanding
I have a feeling that it is trying to open the CUDA library at the path /usr/local/cuda-8.0.61/lib64 when in fact the existing paths are /usr/local/cuda-8.0/lib64 and /usr/local/cuda/lib64. I tried to rename that path, but it still did not work.
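To see which CUDA directories actually exist and where the /usr/local/cuda symlink points (which usually explains this kind of mismatch), a minimal sketch:
import glob
import os

# List the CUDA installs present and resolve the default symlink.
print(glob.glob("/usr/local/cuda*"))
print(os.path.realpath("/usr/local/cuda"))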
Updates (Conflicting Directory Structure)
To run TensorFlow, you have to install cuDNN. There are two possible ways:
1. Installing cuDNN for all Users:
This is the way that the official TensorFlow documentation describes.
Here, cuDNN is installed into the folder /usr/local/cuda. That way, cuDNN can be used by all users on that machine. The instructions are taken from the TensorFlow documentation:
Download the correct cuDNN version. For TensorFlow r1.1, that would be cuDNN v5.1 for CUDA 8.0.
Unpack the .tgz file. Open a terminal, navigate to the folder where you downloaded cuDNN, and call
tar xvzf cudnn-8.0-linux-x64-v5.1-ga.tgz
Note: this is just an example, check the file name before calling this.
This will create a new folder called cuda, which contains two subfolders include and lib64, containing all cuDNN files.
Move the downloaded files to /usr/local/cuda. You will need sudo rights for this!
sudo cp cuda/include/cudnn.h /usr/local/cuda/include
sudo cp cuda/lib64/libcudnn* /usr/local/cuda/lib64
sudo chmod a+r /usr/local/cuda/include/cudnn.h /usr/local/cuda/lib64/libcudnn*
And that's already it. TensorFlow should now work as expected.
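To double-check the system-wide install, here is a small sketch that reads the installed cuDNN version straight from the copied header (the same information the grep commands elsewhere in this thread extract):
import re

# Parse the version defines from the installed header (cuDNN 7.x and older).
with open("/usr/local/cuda/include/cudnn.h") as header:
    text = header.read()
major = re.search(r"#define CUDNN_MAJOR\s+(\d+)", text).group(1)
minor = re.search(r"#define CUDNN_MINOR\s+(\d+)", text).group(1)
patch = re.search(r"#define CUDNN_PATCHLEVEL\s+(\d+)", text).group(1)
print("cuDNN", ".".join((major, minor, patch)))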
2. Installing cuDNN locally:
If you do not have admin rights, or you want to have different cuDNN versions on your machine, you can install cuDNN to any folder of your choice, and then set the paths correctly. This method is proposed in this answer on StackOverflow and is explained in the official NVIDIA installation instructions.
Step 1 and 2 are the same as above.
Move the extracted cuda folder to the place you choose.
Add this directory to the $LD_LIBRARY_PATH environment variable. In a terminal, you can do this by calling
export LD_LIBRARY_PATH=/path/to/cudnn/lib64:$LD_LIBRARY_PATH
where /path/to/cudnn is the place where you moved cuDNN in the previous step. Note the lib64 at the end!
Usually, you'll have to call this every time before starting TensorFlow. To avoid this, you can edit the file ~/.bashrc and add this line at the bottom of the file. This will automatically add cuDNN to the path every time you start a terminal window.
With that, TensorFlow will be able to find cuDNN and work as expected.
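To confirm that the locally installed copy is usable, a minimal sketch that loads it by full path (the directory below is hypothetical; use the folder you chose in step 3):
import ctypes
import os

# Hypothetical location of the extracted "cuda" folder; use your own path.
cudnn_lib = os.path.expanduser("~/cudnn/cuda/lib64/libcudnn.so")
try:
    ctypes.CDLL(cudnn_lib)  # loading by full path works even without LD_LIBRARY_PATH
    print("cuDNN loads:", cudnn_lib)
except OSError as err:
    print("cuDNN failed to load:", err)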
To run a GPU-enabled TensorFlow 1.4 you should first install CUDA 8 (+ patch 2) and cuDNN v6.0; you may find this step-by-step installation guide useful.
After installing the CUDA 8 drivers you will need to install cuDNN v6.0:
Download the cuDNN v6.0 driver. The driver can be downloaded from here; please note that you will need to register first.
Copy the driver to the remote machine (scp -r -i ...)
Extract the files from the .tgz archive and copy them to the target directory:
tar xvzf cudnn-8.0-linux-x64-v6.0.tgz
sudo cp -P cuda/include/cudnn.h /usr/local/cuda/include
sudo cp -P cuda/lib64/libcudnn* /usr/local/cuda/lib64
sudo chmod a+r /usr/local/cuda/include/cudnn.h /usr/local/cuda/lib64/libcudnn*
Update your bash file
nano ~/.bashrc
Add the following lines to the end of the bash file:
export CUDA_HOME=/usr/local/cuda
export LD_LIBRARY_PATH=${CUDA_HOME}/lib64:$LD_LIBRARY_PATH
export PATH=${CUDA_HOME}/bin:${PATH}
Install the libcupti-dev library
sudo apt-get install libcupti-dev
Install pip
sudo apt-get install python-pip
sudo pip install --upgrade pip
Install TensorFlow
sudo pip install tensorflow-gpu
Test the installation by running the following within the Python command line:
from tensorflow.python.client import device_lib

def get_available_gpus():
    local_device_protos = device_lib.list_local_devices()
    return [x.name for x in local_device_protos if x.device_type == 'GPU']

get_available_gpus()
For a single GPU the output should be similar to:
2017-11-22 03:18:15.187419: I
tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports
instructions that this TensorFlow binary was not compiled to use:
SSE4.1 SSE4.2 AVX AVX2 FMA
2017-11-22 03:18:17.986516: I
tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:892] successful
NUMA node read from SysFS had negative value (-1), but there must be
at least one NUMA node, so returning NUMA node zero
2017-11-22 03:18:17.986867: I
tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Found device 0
with properties:
name: Tesla K80 major: 3 minor: 7 memoryClockRate(GHz): 0.8235
pciBusID: 0000:00:1e.0
totalMemory: 11.17GiB freeMemory: 11.10GiB
2017-11-22 03:18:17.986896: I
tensorflow/core/common_runtime/gpu/gpu_device.cc:1120] Creating
TensorFlow device (/device:GPU:0) -> (device: 0, name: Tesla K80, pci
bus id: 0000:00:1e.0, compute capability: 3.7)
[u'/device:GPU:0']
I installed cuda 8 and the new tensorflow 1.0.
When I run "import tensorflow as tf" I get the following:
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcublas.8.0.dylib locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcudnn.5.dylib locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcufft.8.0.dylib locally
Segmentation fault: 11
Knowing that nvcc -V gives the following:
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2016 NVIDIA Corporation
Built on Sun_Oct_30_22:18:43_CDT_2016
Cuda compilation tools, release 8.0, V8.0.54
Any idea how to fix this segmentation fault?
You might be missing a library in your local cuda installation. E.g., /usr/local/cuda/lib/libcuda.dylib was missing for me after trying to install CUDA Toolkit 8.0 locally (possibly because I installed the drivers first before the toolkit, as this ancient thread suggests: https://render.otoy.com/forum/viewtopic.php?f=25&t=1859). Re-running the installer for just the driver installed it properly, and also symlinked it to another name (https://github.com/tensorflow/tensorflow/issues/3263#issuecomment-232184358).
Lastly, double check your environment variable paths, see if echo $DYLD_LIBRARY_PATH looks right.
As an aside, I still saw some warnings when testing the install, e.g. The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations. These just suggest building from source (https://github.com/tensorflow/tensorflow/issues/8037), rather than using pip install --upgrade tensorflow-gpu. 🍻