Chainer: No module named 'cupy.util'

I am getting desperate with Chainer because I haven't been able to use it with the GPU for about a week now. The error I am getting:
RuntimeError: CUDA environment is not correctly set up (see https://github.com/chainer/chainer#installation). No module named 'cupy.util'
Code to reproduce:
import chainer
chainer.cuda.to_gpu([0, 0])
Output of chainer.backends.cuda.available is False.
I am working on Ubuntu 20.04 (I know, it is not one of the versions recommended in Chainer's docs) inside WSL2, with CUDA drivers 11.0. Output of nvcc -V:
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2020 NVIDIA Corporation
Built on Wed_Jul_22_19:09:09_PDT_2020
Cuda compilation tools, release 11.0, V11.0.221
Build cuda_11.0_bu.TC445_37.28845127_0
CUDA samples compile and work properly inside WSL2.
According to pip freeze, cupy-cuda110 is installed in the (activated) virtual environment, but it does not seem to be detected. Chainer version 7.7.0 is installed.
Any ideas how to fix it?
The solution from https://github.com/chainer/chainer/issues/8582 did not do the trick for me.

The error message is very clear. Just change line 69 of chainer/backends/cuda.py from:
from cupy.util import PerformanceWarning as _PerformanceWarning
to
from cupy._util import PerformanceWarning as _PerformanceWarning
Along with the solution from #8582, everything will work just fine.
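If you prefer not to edit the installed Chainer source, a small compatibility shim along the same lines may also work. This is a minimal sketch, assuming CuPy >= 8 (where cupy.util was renamed to cupy._util); the fix from #8582 may still be needed on top of it:

import sys
import cupy
import cupy._util

# Alias the renamed module so Chainer's old "from cupy.util import ..." resolves.
sys.modules['cupy.util'] = cupy._util

import chainer
print(chainer.backends.cuda.available)  # should report True once the CUDA setup is detected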

Related

Loaded runtime CuDNN library: 8.0.5 but source was compiled with: 8.1.0

I get this error when I run model.fit_generator to train images with a CNN model. I don't understand the error. What should I do? Can anyone help me?
This is the full error description:
Loaded runtime CuDNN library: 8.0.5, but the source was compiled with: 8.1.0. CuDNN library needs to have a matching major version and equal or higher minor version. If using a binary install, upgrade your CuDNN library. If building from sources, ensure the library loaded at runtime is compatible with the version specified during compile configuration.
I had the same error: "tensorflow/stream_executor/cuda/cuda_dnn.cc:362] Loaded runtime CuDNN library: 8.0.5 but source was compiled with: 8.1.0."
I solved it by downgrading the TensorFlow version: the error says you are using a version of TensorFlow that is not compatible with the cuDNN version available in Google Colab. I used TensorFlow 2.4.0 plus all the dependencies required by version 2.4.0.
This page lists which cuDNN version each TensorFlow release needs: https://www.tensorflow.org/install/source
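To see which CUDA/cuDNN versions your installed TensorFlow was actually compiled against (available in recent TensorFlow 2.x releases), you can print the build info; a quick sketch:

import tensorflow as tf
print(tf.__version__)
# Dict with keys such as 'cuda_version' and 'cudnn_version' describing the build configuration.
print(tf.sysconfig.get_build_info())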
You should always have installed library versions that match what the package you want to use was compiled against.
You can download the version you need from the NVIDIA website, or use conda for package management; it will handle all the dependencies for you.
You can install Miniconda and run conda install -c anaconda tensorflow-gpu to get it sorted. If you need a specific version of Python, you can create an environment with it.
My solution:
After confirming that my CUDA and cuDNN versions were compatible with TensorFlow, I first thought the system simply had not picked up the new installation; several restarts showed that this was not, and could not be, the problem. So I started checking every piece of software on the system that depends on CUDA/cuDNN. I uninstalled MATLAB along the way, but that did not help. Then I remembered that PyTorch also depends on CUDA and cuDNN: I was using torch 1.8, which was built against CUDA 11.1 and bundles cuDNN 8.0.5, and that bundled copy was evidently the one being loaded. Case solved: upgrading PyTorch fixed it.
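To check which CUDA and cuDNN versions your PyTorch build bundles, a quick diagnostic sketch (the exact numbers will depend on your install):

import torch
print(torch.__version__)
print(torch.version.cuda)              # CUDA version PyTorch was built against, e.g. '11.1'
print(torch.backends.cudnn.version())  # bundled cuDNN version as an integer, e.g. 8005 for 8.0.5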
I have faced the same issue. It seems that each TensorFlow version requires a specific cuDNN version.
Check this link for the required versions:
https://www.tensorflow.org/install/source#gpu
Thanks for this answer:
"My solution:
After confirming that my CUDA and cuDNN versions were compatible with
TensorFlow, I first thought the system ..."
It helped me a lot, but I used a different way to solve this problem.
I found that PyTorch 1.8 is also compatible with cuDNN 8.1.0. So, instead of upgrading the PyTorch version, I overwrote the cuDNN 8.0.5 DLL library with cuDNN 8.1.0 in the directory D:\Program Files\Python37\Lib\site-packages\torch\lib. You can find this location with Everything (the file-search tool), which is always helpful.

Can CUDA 10.0 and 10.1 be on the same system?

I want support for both Visual Studio 2019 (which needs CUDA 10.1) and TensorFlow-GPU 1.14 (which needs CUDA 10.0) on a Windows PC. Is there any way to do this?
I simply installed both CUDA 10.0 and CUDA 10.1 and added both directories to the CUDA_PATH environment variable. cuDNN is already installed.
The result is that Visual Studio can detect CUDA but TensorFlow cannot.
Yes, more than one version of the CUDA toolkit can exist on a system and be used by different applications.
How are you installing TensorFlow-GPU? If you're compiling it yourself, during configuration you can specify the path to whichever version of CUDA that you want to use. If you're installing a pre-built set of binaries (e.g. using something like Anaconda) then that's already been built against a specific version of the CUDA toolkit; you'll need to fetch a different version of the binaries compiled for whichever CUDA toolkit you want, or build it yourself.
If you use Anaconda to install TensorFlow-GPU, you should also receive the correct version of the CUDA toolkit that's needed to run whichever version of TensorFlow-GPU that you've installed; it takes care of those dependencies for you.
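For the pre-built TensorFlow-GPU 1.14 wheel specifically, what matters at runtime is that the CUDA 10.0 bin directory is on PATH, since that binary looks for cudart64_100.dll. A quick sketch to verify the expected runtime DLL can actually be loaded (the DLL name assumes TF 1.14 / CUDA 10.0; adjust for other versions):

import ctypes
# Raises OSError if the CUDA 10.0 bin directory is not on PATH.
ctypes.WinDLL("cudart64_100.dll")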

Tensorflow on Anaconda error cannot find cudnn64_6.dll

I am having a problem with TensorFlow running in Spyder. When I installed it in cmd, it had the same problem that it couldn't find the path to cudnn64_6.dll, so I added the path to it and it seemed to import. Then I installed the Theano and Keras libraries and everything seemed OK, but when I tried to import the Keras library in Spyder, I got this message:
I have CUDA v8.0, and it should work with that, at least I am told. I have installed all the drivers, downloaded cuDNN v6.0 for CUDA 8.0, and added the required paths, but still no luck. Where have I gone wrong?
It's OK, I just had to get rid of some environment paths and restart. My bad.

Getting errors installing Tensorflow GPU

I was earlier working with the CPU-only version of TensorFlow. I tried installing the GPU version now using this link.
But I think I messed up.
When I try to do import tensorflow it gives the following message:
ImportError: libcudart.so.7.5: cannot open shared object file: No such file or directory
What should I do?
It could be because the wrong version of CUDA is installed: check /usr/local/ for the versions of CUDA that are installed and see whether one matches the version in the TensorFlow error (libcudart.so.7.5 means TensorFlow wants CUDA 7.5). If the versions don't match, you'll have to install either another version of CUDA or another version of TensorFlow.
Another reason could be missing environment variables (as explained here).
Try this:
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/local/cuda/lib64"
export CUDA_HOME=/usr/local/cuda
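After adding those exports (ideally to your shell profile so they persist), a quick way to check that the specific runtime library from the error can now be loaded; a small sketch, assuming the soname from the error message:

import ctypes
# Raises OSError if LD_LIBRARY_PATH still does not cover the CUDA 7.5 runtime.
ctypes.CDLL("libcudart.so.7.5")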

Keras (Theano backend + GPU + CUDA) not working with PyDev

I am using Keras (Theano backend) with GPU and Cuda 8.0. Everything works fine when I run my code in Jupyter or Ubuntu terminal. However, inside Eclipse (PyDev) I receive the following error importing Keras:
ERROR (theano.sandbox.cuda): Failed to compile cuda_ndarray.cu: libcublas.so.8.0: cannot open shared object file: No such file or directory
WARNING (theano.sandbox.cuda): CUDA is installed, but device gpu0 is not available (error: cuda unavailable)
I have double-checked the interpreter, and it is the same Python as in the terminal and Jupyter. I have also added /usr/local/cuda/lib64/ to the PYTHONPATH of the interpreter, but I still get the same error!
Does anybody know how to fix this issue with PyDev?
Thank you,
I found a solution, but not the reason.
I started Eclipse from the Ubuntu terminal and it worked fine. I don't know why it can't find the CUDA path when I start it by double-clicking its icon.
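A likely explanation (an assumption, not confirmed in the thread) is that LD_LIBRARY_PATH set in ~/.bashrc is only sourced by interactive shells, so Eclipse launched from the desktop icon never inherits it. You can compare the two environments with a quick check run both from PyDev and from a terminal session:

import os
# Likely None/missing when Eclipse is launched from the icon, but set when launched from a terminal.
print(os.environ.get("LD_LIBRARY_PATH"))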