CUDA Installation Error - TensorFlow

I installed CUDA on my Ubuntu 18.04 (dual boot with Windows 10) using the following commands:
sudo add-apt-repository ppa:graphics-drivers/ppa
sudo apt update
sudo ubuntu-drivers autoinstall
Then I rebooted my computer and ran:
sudo apt install nvidia-cuda-toolkit gcc-6
Then I verified the installation using:
nvcc --version
which nvcc
Both worked well without any errors. A few days later I wanted to verify it completely, so I entered these two commands:
sudo modprobe nvidia
nvidia-smi
which gave me these errors, respectively:
modprobe: ERROR: could not insert 'nvidia': Required key not available
NVIDIA-SMI has failed because it couldn't communicate with the NVIDIA driver. Make sure that the latest NVIDIA driver is installed and running.
Now I am unable to understand whether CUDA is properly installed or not. I am also unable to find cuda-9.0 in the /usr directory in Ubuntu. I need this so that I can work with tensorflow-gpu (Python 3).
Thank you in advance.

Apparently, the "required key not available" message is a typical (side) effect of the Secure Boot feature of newer Linux kernels (EFI_SECURE_BOOT_SIG_ENFORCE); you may be able to get around it by disabling Secure Boot in your UEFI firmware settings.
See this AskUbuntu question for details:
Why do I get “Required key not available” when install 3rd party kernel modules or after a kernel upgrade?
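If you want to confirm that Secure Boot is the culprit before rebooting into the firmware settings, you can query its state from Linux. A minimal sketch; it assumes the mokutil utility, installable with sudo apt install mokutil:

```shell
# Query the Secure Boot state; "SecureBoot enabled" means unsigned
# kernel modules such as `nvidia` will be rejected with
# "Required key not available".
mokutil --sb-state 2>/dev/null || echo "mokutil not installed"
```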

Related

Libcamera command not found after installing it

I'm having a terrible time with Raspberry Pi related problems, one of them concerning libcamera. I have Ubuntu 22.04 64-bit on my Raspberry Pi and I have installed the libcamera package with the command sudo apt install libcamera_*. The problem is that whenever I run a command with libcamera, it tells me command not found. Any solutions? The camera is detected and supported. Thanks in advance for your help.

The kernel appears to have died. It will restart automatically. Jupyter notebook [duplicate]

I am using a MacBook Pro with an M1 processor, macOS version 11.0.1, and Python 3.8 in PyCharm, with TensorFlow version 2.4.0rc4 (also tried 2.3.0, 2.3.1, and 2.4.0rc0). I am trying to run the following code:
import tensorflow
This causes the error message:
Process finished with exit code 132 (interrupted by signal 4: SIGILL)
The code runs fine on my Windows and Linux machines.
What does the error message mean and how can I fix it?
This problem seems to happen when you have multiple Python interpreters installed and some of them are built for different architectures (x86_64 vs. arm64). You need to make sure that the correct Python interpreter is being used; if you installed Apple's version of TensorFlow, it probably requires an arm64 interpreter.
If you use Rosetta (Apple's x86_64 emulator), then you need to use an x86_64 Python interpreter; if you somehow load the arm64 interpreter instead, you will get the illegal instruction error (which totally makes sense).
If you use any script that installs new Python interpreters, then you need to make sure the correct interpreter for the architecture is installed (most likely arm64).
Overall, I think this problem happens because the Python environment setup is not made for systems that can run multiple instruction sets/architectures. pip does check the architecture of packages against the host system, but it seems you can still run an x86_64 interpreter that loads a package meant for arm64, and this produces the problem.
For reference, there is an issue in the tensorflow_macos repository that people can check.
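The architecture mismatch described above is easy to check from inside the interpreter itself; this small sketch uses only the standard library:

```python
import platform
import struct

# Report the architecture this interpreter was built for. Under
# Rosetta, an x86_64 Python reports "x86_64" even on an M1 Mac;
# loading a wheel built for the other architecture from it is
# exactly the mismatch that ends in an illegal-instruction crash.
arch = platform.machine()
bits = struct.calcsize("P") * 8  # pointer size in bits
print(f"interpreter architecture: {arch} ({bits}-bit)")
```

On an M1 Mac this should print arm64 for a native interpreter and x86_64 when running under Rosetta.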
For M1 Macs, the following from the Apple developer page worked:
First, download the Miniforge installer from here and then follow these instructions (assuming the script is downloaded to the ~/Downloads folder):
chmod +x ~/Downloads/Miniforge3-MacOSX-arm64.sh
sh ~/Downloads/Miniforge3-MacOSX-arm64.sh
source ~/miniforge3/bin/activate
Reload the shell and run:
python -m pip uninstall tensorflow-macos
python -m pip uninstall tensorflow-metal
conda install -c apple tensorflow-deps
python -m pip install tensorflow-macos
python -m pip install tensorflow-metal
If the above doesn't work for some reason, there are some edge cases and additional information at the Apple developer page.
Installing Tensorflow version 1.15 fixed this for me.
$ conda install tensorflow==1.15
I have been able to resolve this issue by using Miniforge instead of Anaconda as the Python environment. Anaconda doesn't support the arm64 architecture yet.
I had the same issue.
This is because of the M1 chip. There is now a pre-release that delivers hardware-accelerated TensorFlow and TensorFlow Addons for macOS 11.0+. Native hardware acceleration is supported on M1 Macs and Intel-based Macs through Apple's ML Compute framework.
You need to install the TensorFlow build that supports the M1 chip. Simply pull the tensorflow_macos repository and run ./scripts/download_and_install.sh.

Install Tensorflow-GPU on WSL2

Has anyone successfully installed Tensorflow-GPU on WSL2 with NVIDIA GPUs? I have Ubuntu 18.04 on WSL2, but am struggling to get NVIDIA drivers installed. Any help would be appreciated as I'm lost.
So I have just got this running.
The steps you need to follow are here. To summarise them:
Sign up for the Windows Insider program and get the development builds of Windows so that you have the latest version
Install WSL 2
Install Ubuntu from the Windows Store
Install the WSL 2 CUDA driver on Windows
Install the CUDA toolkit
Install cuDNN (you can download the Linux version from Windows and then copy the file to Linux)
If you are getting memory errors like 'cannot allocate memory', then you might need to increase the amount of memory WSL can use
Then install tensorflow-gpu
Pray it works
Bugs I hit along the way:
If you get an error when you open Ubuntu for the first time, you need to enable virtualisation in the BIOS
If you cannot run the ./BlackScholes example in the installation instructions, you might not have the right build of Windows! You must have the right version
If you are getting 'cannot allocate memory' errors when running TF, you need to give WSL more RAM; it only accesses half your RAM by default
Create a .wslconfig file under your user directory in Windows with the amount of memory you want. Mine looks like:
[wsl2]
memory=16GB
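After editing .wslconfig you need a `wsl --shutdown` from Windows for the new limit to take effect; inside the distro you can then verify what the VM actually received. A minimal check using the standard free utility:

```shell
# Print the total RAM visible inside the VM; after a restart it
# should match the memory= limit set in .wslconfig.
total=$(free -h 2>/dev/null | awk 'NR==2 {print $2}')
echo "total memory visible to WSL: ${total:-unknown}"
```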
Edit after running some code: this is much slower than when I was running on Windows directly. I went from 1 minute per epoch to 5 minutes. I'm just going to dual-boot.
These are the steps I had to follow for Ubuntu 20.04. I am no longer on the dev channel; the beta channel works fine for this use case and is much more stable.
Install WSL2
Install Ubuntu 20.04 from Windows Store
Install Nvidia Drivers for Windows from: https://developer.nvidia.com/cuda/wsl/download
Install nvcc inside of WSL with:
sudo apt install nvidia-cuda-toolkit
Check that it is there with:
nvcc --version
For my use case, I do data science and already had Anaconda installed. I created an environment with:
conda create --name tensorflow
conda activate tensorflow
conda install tensorflow-gpu
Then just test it with this little python program with the environment activated:
import tensorflow as tf
tf.config.list_physical_devices('GPU')
sys_details = tf.sysconfig.get_build_info()
cuda = sys_details["cuda_version"]
cudnn = sys_details["cudnn_version"]
print(cuda, cudnn)
For reasons I do not understand, my machine was unable to find the GPU without installing nvcc, and it actually gave an error message saying it could not find nvcc.
Online tutorials I had found had you downloading CUDA and cuDNN separately, but I think the nvidia-cuda-toolkit package includes cuDNN since it is . . . there somehow.
I can confirm I am able to get this working without the need for Docker on WSL2 thanks to the following article:
https://qiita.com/Navier/items/cf551908bae707db4258
Be sure to update to driver version 460.15, not 455.41 as listed in the CUDA documentation.
Note, this does not work with the card in TCC mode (only WDDM). Also, be sure to place your files on the Linux file system (i.e. not on a mount drive, like /mnt/c/). Performance is significantly faster on the Linux file system (this has to do with the difference in implementation of WSL 1 vs. WSL 2; see 1, 2, and 3).
NOTE: See also Is the class generator (inheriting Sequence) thread safe in Keras/Tensorflow?
I just want to point out that using Anaconda to install cudatoolkit and cudnn does not seem to work in WSL.
Maybe there is some problem with paths that makes TF look for the needed files only in the system paths instead of the conda environments.
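One way to check that hypothesis is to ask the dynamic linker what it can actually see; conda keeps its copies of the CUDA libraries under the environment's lib/ directory, which is normally not on the system search path. A minimal sketch:

```shell
# List CUDA runtime libraries known to the system linker. If the
# only copies live inside a conda env, nothing shows up here, which
# matches TF failing to find them.
ldconfig -p 2>/dev/null | grep -i libcudart || echo "no libcudart on the system linker path"
```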

Can I install Tensorflow 1.15 with GPU support on Ubuntu 20.04.1 LTS?

I am building a Deep Learning rig with a GeForce RTX 2060.
I want to use baselines-stable, which isn't TensorFlow 2.0 compatible yet.
According to here and here, tensorflow-gpu-1.15 is only listed as compatible with CUDA 10.0, not CUDA 10.1.
When attempting to download CUDA from NVIDIA, the Ubuntu 20.04 option is not available for CUDA 10.0.
Searching the apt cache does not turn up CUDA 10.0 either:
$ sudo apt-cache policy nvidia-cuda-toolkit
[sudo] password for lansford:
nvidia-cuda-toolkit:
Installed: (none)
Candidate: 10.1.243-3
Version table:
10.1.243-3 500
500 http://us.archive.ubuntu.com/ubuntu focal/multiverse amd64 Packages
I would highly prefer not to reinstall the OS with an older version of Ubuntu. However, experimenting with reinforcement learning was the motive for purchasing this PC.
I see some possible clues that it might be possible to build tensorflow-gpu-1.15 from source with CUDA 10.1 support. I also saw a random comment that tensorflow-gpu-1.15 will just work with CUDA 10.1, but I don't want to make a misstep installing things until I have a signal that that is the direction to go. Uninstalling things isn't always straightforward.
Should I install CUDA 10.1 and cross my fingers that 1.15 will like it?
Should I download the CUDA 10.0 installer for the older Ubuntu version and see if it will install anyway?
Should I attempt to compile TensorFlow from source against CUDA 10.1 (heh heh heh)?
Should I install an older version of Ubuntu and hope I don't go obsolete too quickly?
Given the situation is there a way to run tensorflow 1.15 with gpu support on Ubuntu 20.04.1?
As this also bothered me, I found a working solution that I think is more versatile than using docker containers.
The main idea is from here (not to claim credit from others).
To make a working solution for Ubuntu 20.04 and TensorFlow 1.15 one needs:
CUDA 10.0 (to work with TF 1.15).
I had some trouble finding this version because it's not officially available for Ubuntu 20.04. I resorted to the Ubuntu 18.04 version, though, which works fine.
Archive toolkits here.
Final toolkit for Ubuntu here (as is obvious, no 20.04 version is available).
I chose the runfile method, which resulted in 1 main runfile and 1 patch runfile being available:
cuda_10.0.130_410.48_linux.run
cuda_10.0.130.1_linux.run
The toolkit can be safely installed using the instructions provided, with no risk, since each version is allocated a different folder in the system (typically this would be /usr/local/cuda-10.0/).
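Because each release keeps to its own versioned prefix, you can list what is already installed before (and after) running the runfile; /usr/local/cuda itself is just a symlink pointing at one of them:

```shell
# Several CUDA toolkits can coexist, one per versioned directory.
ls -d /usr/local/cuda* 2>/dev/null || echo "no CUDA toolkits installed"
```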
The corresponding cuDNN for CUDA 10.0.
I had this one from a previous installation, but it shouldn't be hard to download either. The version I used is cudnn-10.0-linux-x64-v7.6.5.32.tgz.
cuDNN basically just copies files into the right places (it does not actually install anything). So, extracting the compressed file and copying it to the folder will suffice:
$ sudo cp cuda/include/cudnn.h /usr/local/cuda-10.0/include
$ sudo cp cuda/lib64/libcudnn* /usr/local/cuda-10.0/lib64
$ sudo chmod a+r /usr/local/cuda-10.0/include/cudnn.h /usr/local/cuda-10.0/lib64/libcudnn*
Up to this point, although installed, the system is unaware of the presence of CUDA 10.0, so all calls to it will fail as if it were nonexistent. We should update the relevant system environment for CUDA 10.0. One system-wide way (there are others) is to create (if not existent) a /etc/profile.d/cuda.sh file which will contain the update to the LD_LIBRARY_PATH variable. It should contain something like:
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:/usr/local/cuda-11.3/lib64:/usr/local/cuda-10.0/lib64:$LD_LIBRARY_PATH
This command would normally do the work:
$ sudo sh -c 'echo export LD_LIBRARY_PATH=/usr/local/cuda/lib64:/usr/local/cuda-11.3/lib64:/usr/local/cuda-10.0/lib64:\$LD_LIBRARY_PATH > /etc/profile.d/cuda.sh'
This requires a restart to be evaluated, I think. Anyway, this way the system will search for the relevant .so files in:
a) /usr/local/cuda/lib64 (the default symbolic link), where it will fail,
b) /usr/local/cuda-11.3/lib64, virtually the same as the former, where it will also fail, BUT it will then also search
c) /usr/local/cuda-10.0/lib64, where it will succeed.
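The a/b/c search order can be checked directly with a small loop over the same three directories from the example export line:

```shell
# Walk the search path in order and report which entry (if any)
# actually provides the CUDA 10.0 runtime.
for d in /usr/local/cuda/lib64 /usr/local/cuda-11.3/lib64 /usr/local/cuda-10.0/lib64; do
  if [ -e "$d/libcudart.so.10.0" ]; then
    echo "found in $d"
  else
    echo "not in $d"
  fi
done
```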
The supported Python versions for CUDA 10.0 end at 3.7, so an older version should be installed. This makes a virtual environment obligatory (since messing with the system Python is never a good idea).
One can install Python 3.7, for example, using this repository, which contains old (and new) versions of Python:
sudo add-apt-repository ppa:deadsnakes/ppa
sudo apt-get install python3.7
This just installs Python 3.7 on the system; it does not make it the default. The default remains the previous one.
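You can confirm that the default interpreter is untouched; the newly installed 3.7 is a separate binary that has to be invoked by name:

```shell
# The distro default stays whatever it was; python3.7 is separate.
python3 --version 2>/dev/null || echo "python3 not found"
python3.7 --version 2>/dev/null || echo "python3.7 not installed here"
```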
Create a virtual environment and set the desired Python as its default interpreter. For me this works:
virtualenv -p python3.7 ~/tensorflow_1-15
which creates a new venv with Python 3.7 in it.
Now populate it with all required modules and you are set to go.
I went ahead with the docker approach. The TensorFlow documentation seems to be pushing in that direction anyway. Using docker, only the NVIDIA driver needs to be installed. You do need to have NVIDIA container support installed in docker for it to work.
This gives a container with the CUDA environment matching the TensorFlow version, so I can work with 1.15 and with the latest 2.x versions of TensorFlow on the same computer, even though they require different CUDA versions.
It doesn't install anything besides docker stuff on the computer to get messy and be difficult to pull back out.
I can still install TensorFlow natively on the computer at some point in the future, when the libraries become available without compiling from source.
Here is the command which launches jupyter and mounts the current directory from my computer to /tf/bob, which shows up in jupyter:
docker run -it --mount type=bind,source="$(pwd)",target=/tf/bob -u $(id -u):$(id -g) -p 8888:8888 tensorflow/tensorflow:1.15.2-gpu-py3-jupyter

E tensorflow/stream_executor/cuda/cuda_driver.cc:351] failed call to cuInit: UNKNOWN ERROR (303)

I am using Rasa 1.9.6 on Ubuntu in VMware. I have been getting this error both in training and in running the model. It allows training the model, but I am unable to run it. I need to run my bot; can someone please help?
According to the Rasa forum, the origin of this issue lies in the TensorFlow and graphics card configuration. GPUs do not typically provide an advantage for Rasa models, so this can be safely ignored.
Installing nvidia-modprobe can solve this issue.
sudo apt install nvidia-modprobe
Other solutions you can try are :
Uninstall and reinstall CUDA and cuDNN.
Install tensorflow-gpu.
Uninstall and install different NVIDIA driver versions.
The problem could also be that only some /dev/nvidia* files are present before running Python with sudo; check using $ ls /dev/nvidia*. After running the Device Node Verification script, the /dev/nvidia-uvm file gets added.
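A minimal check for the device-node case described in that last point:

```shell
# List NVIDIA device nodes; a missing /dev/nvidia-uvm is consistent
# with cuInit failing even though the driver itself is loaded.
ls /dev/nvidia* 2>/dev/null || echo "no /dev/nvidia* device nodes present"
```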