I have a question about gem5: which version of Ubuntu is better for gem5?
I installed Ubuntu 18.04; is that good for running full-system mode on gem5 or not?
In the official Memgraph documentation it says "Install MemgraphDB using the latest Memgraph Ubuntu package and by running the following command in the Ubuntu terminal." On the download page there are three Ubuntu versions: 18.04, 20.04, and 22.04. I've downloaded the right .deb file. On the Memgraph side, everything works OK.
I get stuck when trying to install Ubuntu 22.04 in WSL: I don't see it in the list. I have Windows 11 Professional. Does this mean that Ubuntu 22.04 is perhaps not available in my region?
PS C:\Users\Gai> wsl --list --online
The following is a list of valid distributions that can be installed.
Install using 'wsl.exe --install <Distro>'.
NAME               FRIENDLY NAME
Ubuntu             Ubuntu
Debian             Debian GNU/Linux
kali-linux         Kali Linux Rolling
SLES-12            SUSE Linux Enterprise Server v12
SLES-15            SUSE Linux Enterprise Server v15
Ubuntu-18.04       Ubuntu 18.04 LTS
Ubuntu-20.04       Ubuntu 20.04 LTS
OracleLinux_8_5    Oracle Linux 8.5
OracleLinux_7_9    Oracle Linux 7.9
You can install Ubuntu 22.04 from the Microsoft Store.
Ubuntu 22.04.5 LTS on Microsoft Store
You can read more on the general availability of WSL in the Microsoft Store here.
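Once it has been installed from the Store, you should be able to confirm and launch it from PowerShell with the standard wsl.exe options, for example:
wsl --list --verbose
wsl -d Ubuntu-22.04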
I am building a Deep Learning rig with a GeForce RTX 2060.
I want to use baselines-stable, which isn't TensorFlow 2.0 compatible yet.
According to here and here, tensorflow-gpu-1.15 is only listed as compatible with CUDA 10.0, not CUDA 10.1.
When attempting to download CUDA from Nvidia, there is no Ubuntu 20.04 option for CUDA 10.0.
Searching the apt cache does not turn up CUDA 10.0 either:
$ sudo apt-cache policy nvidia-cuda-toolkit
[sudo] password for lansford:
nvidia-cuda-toolkit:
  Installed: (none)
  Candidate: 10.1.243-3
  Version table:
     10.1.243-3 500
        500 http://us.archive.ubuntu.com/ubuntu focal/multiverse amd64 Packages
I would highly prefer not to have to reinstall the OS with an older version of Ubuntu. However, experimenting with reinforcement learning was the motive for purchasing this PC.
I see some possible clues that it might be possible to build tensorflow-gpu 1.15 from source with CUDA 10.1 support. I also saw a stray comment suggesting tensorflow-gpu 1.15 will just work with CUDA 10.1, but I don't want to make a misstep installing things until I have a signal that that is the direction to go. Uninstalling things isn't always straightforward.
Should I install CUDA 10.1 and cross my fingers that 1.15 will like it?
Should I download the CUDA 10.0 installer for the older Ubuntu version and see if it installs anyway?
Should I attempt to compile TensorFlow from source against CUDA 10.1 (heh heh heh)?
Should I install an older version of Ubuntu and hope I don't go obsolete too quickly?
Given the situation is there a way to run tensorflow 1.15 with gpu support on Ubuntu 20.04.1?
As this also bothered me, I found a working solution that I think is more versatile than using Docker containers.
The main idea is from here (not to claim credit for others' work).
To make a working solution for Ubuntu 20.04 and TensorFlow 1.15 one needs:
CUDA 10.0 (to work with TF 1.15).
I had some trouble finding this version because it's not officially available for Ubuntu 20.04. I settled on the Ubuntu 18.04 version, which works fine.
Archive toolkits here.
Final toolkit for Ubuntu here (as is obvious, no 20.04 version is available).
I chose the runfile method, which provides one main runfile and one patch runfile:
cuda_10.0.130_410.48_linux.run
cuda_10.0.130.1_linux.run
The toolkit can be safely installed using the instructions provided, with no risk, since each version is placed in its own folder in the system (typically /usr/local/cuda-10.0/).
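Concretely, installation is just a matter of executing the two runfiles in order, declining the bundled 410.48 driver if you already have a newer one installed (a sketch based on the standard runfile workflow; the exact prompts may vary):
$ sudo sh cuda_10.0.130_410.48_linux.run
$ sudo sh cuda_10.0.130.1_linux.run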
The corresponding cuDNN for CUDA 10.0.
I had this one from a previous installation, but it shouldn't be hard to download either. The version I used is cudnn-10.0-linux-x64-v7.6.5.32.tgz.
Installing cuDNN basically just means copying files into the right places (nothing actually gets installed). So, extracting the archive and copying the files into the CUDA folder suffices:
$ tar -xzvf cudnn-10.0-linux-x64-v7.6.5.32.tgz
$ sudo cp cuda/include/cudnn.h /usr/local/cuda-10.0/include
$ sudo cp cuda/lib64/libcudnn* /usr/local/cuda-10.0/lib64
$ sudo chmod a+r /usr/local/cuda-10.0/include/cudnn.h /usr/local/cuda-10.0/lib64/libcudnn*
Up to this point, although installed, CUDA 10.0 is unknown to the system, so all calls to it will fail as if it did not exist. We should update the relevant system environment for CUDA 10.0. One way (there are others) to do it system-wide is to create (if it does not exist) /etc/profile.d/cuda.sh, which will contain the update to the LD_LIBRARY_PATH variable. It should contain something like:
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:/usr/local/cuda-11.3/lib64:/usr/local/cuda-10.0/lib64:$LD_LIBRARY_PATH
This command would normally do the work:
$ sudo sh -c 'echo export LD_LIBRARY_PATH=/usr/local/cuda/lib64:/usr/local/cuda-11.3/lib64:/usr/local/cuda-10.0/lib64:\$LD_LIBRARY_PATH > /etc/profile.d/cuda.sh'
This requires a logout/restart (or sourcing the file) to take effect, I think. Anyway, this way the system will search for the relevant .so files in:
a) /usr/local/cuda/lib64 (the default symbolic link), where it will fail,
b) /usr/local/cuda-11.3/lib64 (effectively the same as the previous), where it will also fail, BUT it will also search
c) /usr/local/cuda-10.0/lib64, where it will succeed.
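To sanity-check the library resolution without a full restart, something like the following should work (assuming the paths above; libcudart.so.10.0 is the runtime library shipped by the 10.0 toolkit, and ctypes.CDLL uses the same dlopen search that TensorFlow relies on):
$ source /etc/profile.d/cuda.sh
$ python3 -c "import ctypes; ctypes.CDLL('libcudart.so.10.0'); print('CUDA 10.0 runtime found')"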
The supported Python versions for this setup (TensorFlow 1.15 with CUDA 10.0) end at 3.7, so an older Python should be installed. This practically mandates a virtual environment (since messing with the system Python is never a good idea).
One can install Python 3.7, for example, using this repository, which contains old (and new) versions of Python:
sudo add-apt-repository ppa:deadsnakes/ppa
sudo apt-get install python3.7
This just installs Python 3.7 on the system; it does not make it the default. The default remains the previous one.
Create a virtual environment and set the desired Python as its default interpreter. For me this works:
virtualenv -p python3.7 ~/tensorflow_1-15
which creates a new venv with Python 3.7 in it.
Now populate it with all required modules and you are set to go.
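For example (a minimal sketch; tensorflow-gpu 1.15 is the PyPI package matching CUDA 10.0, and tf.test.is_gpu_available() is the TF 1.x call to confirm the GPU is visible):
$ source ~/tensorflow_1-15/bin/activate
$ pip install tensorflow-gpu==1.15
$ python -c "import tensorflow as tf; print(tf.test.is_gpu_available())"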
I went ahead with the Docker approach. The TensorFlow documentation seems to be pushing in that direction anyway. With Docker, only the Nvidia driver needs to be installed on the host. You do need to have Nvidia support installed in Docker for it to work.
The image contains the CUDA environment together with the TensorFlow version, so I can work with 1.15 and with the latest 2.x versions of TensorFlow, which require different CUDA versions, on the same computer.
It doesn't install anything besides the Docker pieces, so there is nothing to clutter the computer that would be difficult to pull back out.
I can still install TensorFlow natively on the computer at some point in the future, when the libraries become available without compiling from source.
Here is the command which launches Jupyter and mounts the current directory from my computer to /tf/bob, which shows up in Jupyter:
docker run -it --mount type=bind,source="$(pwd)",target=/tf/bob -u $(id -u):$(id -g) -p 8888:8888 tensorflow/tensorflow:1.15.2-gpu-py3-jupyter
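As a quick check that the container actually sees the GPU, something like this should work (a sketch using the non-Jupyter variant of the same image tag):
docker run -it --rm --runtime=nvidia tensorflow/tensorflow:1.15.2-gpu-py3 python -c "import tensorflow as tf; print(tf.test.is_gpu_available())"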
I just saw that my Scrutinizer builds run on Ubuntu 14.04:
scrutinizer#container-0:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 14.04.1 LTS
Release: 14.04
Codename: trusty
Is there a way to use 16.04 or another Linux release?
I'll answer my own question: that's not possible. Scrutinizer's support team confirmed this to me.
If you need packages that are only available on another Linux version, create a Docker container, as sketched below.
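For example (a minimal sketch; the image and package name are placeholders, not anything Scrutinizer-specific):
# <your-package> is a placeholder for whatever 16.04-only package you need
docker run -it --rm ubuntu:16.04 bash -c "apt-get update && apt-get install -y <your-package>"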
I installed CUDA on my Ubuntu 18.04 (dual boot with Windows 10) using the following commands:
sudo add-apt-repository ppa:graphics-drivers/ppa
sudo apt update
sudo ubuntu-drivers autoinstall
Then I rebooted my computer and ran:
sudo apt install nvidia-cuda-toolkit gcc-6
Then I verified the installation using:
nvcc --version
which nvcc
Both worked well without any errors. A few days later I wanted to verify it completely, so I entered these two commands:
sudo modprobe nvidia
nvidia-smi
which gave me these errors, respectively:
modprobe: ERROR: could not insert 'nvidia': Required key not available
NVIDIA-SMI has failed because it couldn't communicate with the NVIDIA driver. Make sure that the latest NVIDIA driver is installed and running.
Now I can't tell whether CUDA is properly installed or not. I am also unable to find cuda-9.0 under /usr in Ubuntu. I need this so that I can work with tensorflow-gpu (Python 3).
Thank you in advance.
Apparently, the "required key not available" message is a typical (side-)effect of the "secure boot" feature of newer Linux kernels (EFI_SECURE_BOOT_SIG_ENFORCE); and you may be able to get around it by Disabling Secure Boot in your UEFI BIOS.
See this AskUbuntu question for details:
Why do I get “Required key not available” when install 3rd party kernel modules or after a kernel upgrade?
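If you want to confirm that Secure Boot is actually enabled before changing any UEFI settings, the mokutil utility (usually preinstalled on Ubuntu, otherwise available in the mokutil package) can report its state:
mokutil --sb-state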
Is it possible to install TensorFlow GPU on Debian? I am using an Nvidia GTX 1070 Ti and Debian 9.3.0. I have tried several tutorials for Ubuntu but failed, as Debian doesn't support the same PPA repositories Ubuntu does; I have also seen many people say that adding Ubuntu's repositories to Debian is not recommended.
It is possible, but it's a hassle :)
I got Debian 9.3 with Openbox to work nicely with Tensorflow 1.6 and Cuda 9.0 + cuDNN 7.0.5.15 (eventually also Wavenet).
https://github.com/ella1011/debian_gpu_jungle
You may want to consider using docker/nvidia-docker. TensorFlow binary releases include docker images, so you could use those to avoid having to mess with your local environment.
Once you have docker/nvidia-docker installed, it would be something like this:
docker run -it --runtime=nvidia --rm tensorflow/tensorflow:1.6.0-gpu
And of course, you can use the -v flag to make directories on your host machine visible to the Docker container.
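For example (a sketch; /workspace is just an arbitrary mount point inside the container):
docker run -it --runtime=nvidia --rm -v "$(pwd)":/workspace tensorflow/tensorflow:1.6.0-gpu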