What does a Python virtual environment install locally and what system-wide? - python-venv

When a Python virtual environment (venv) is activated, which installations are added to the local venv and which go to the OS system-wide?
I ask after noticing that my LAMP installation, done while the venv was activated, had a system-wide effect, i.e. it was NOT confined to the environment as I thought.
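As an aside (not part of the original question): anything installed through the OS package manager, such as a LAMP stack via apt, is always system-wide; only packages installed with the venv's own pip land inside the environment. A minimal check, run inside the activated venv:
python -c "import sys, sysconfig; print(sys.prefix, sysconfig.get_paths()['purelib'])"
# Both paths above should point inside the venv directory; that is where pip installs go.
which python
which pip
# apt/yum/etc. ignore the venv entirely, so "sudo apt install ..." stays system-wide.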

Related

Problems with virtualenv and pandas

I created a virtual env, and that was where I installed the pandas and numpy libraries. At the end of the installation I deactivated the virtual env and realized that they were also installed globally on the computer, and I don't understand why this happens; I want them to be installed only in the virtual environment, not globally.
Is there a way to install them only in the environment, or will they always be installed globally?
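This is not from the original thread, but a quick sanity check before installing is to confirm that the pip being used actually belongs to the environment; if it reports a system path, the packages go to the global site-packages:
pip -V                               # path shown should be inside the virtualenv directory
which pip                            # same check; should resolve to <env>/bin/pip
python -m pip install pandas numpy   # ties the install to whichever interpreter is active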

Making a virtual environment with venv in Python

I am building a data analysis pipeline in which I need to create a virtual environment using this command:
python3 -m venv venv
And then I will activate it using:
source venv/bin/activate
I have 2 questions:
1- Would it be fine to also call the venv “venv”? Would it cause a problem to have “venv” twice in the same command?
2- The manual page mentions that after activating the virtual environment, “you can confirm you’re in the virtual environment by checking the location of your Python interpreter” with the command which python. Is there any advantage to adding it to my pipeline?
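For what it's worth, a small illustration (the directory name ".venv" below is just an example, not from the question): the first "venv" in the command is the standard-library module, the second is only the target directory name, so it can be anything.
python3 -m venv .venv          # module name first, target directory second
source .venv/bin/activate
which python                   # should print /path/to/project/.venv/bin/python
Keeping which python (or python -c "import sys; print(sys.prefix)") in a pipeline is a cheap guard that later steps run against the environment's interpreter rather than the system one.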

Install Tensorflow-GPU on WSL2

Has anyone successfully installed Tensorflow-GPU on WSL2 with NVIDIA GPUs? I have Ubuntu 18.04 on WSL2, but am struggling to get NVIDIA drivers installed. Any help would be appreciated as I'm lost.
So I have just got this running.
The steps you need to follow are here. To summarise them:
Sign up for the Windows Insider Program and get the development builds of Windows so that you have the latest version
Install WSL 2
Install Ubuntu from the Windows Store
Install the WSL 2 CUDA driver on Windows
Install the CUDA toolkit
Install cuDNN (you can download the Linux version from Windows and then copy the file to Linux)
If you are getting memory errors like 'cannot allocate memory' then you might need to increase the amount of memory WSL can get
Then install tensorflow-gpu
Pray it works
Bugs I hit along the way:
If, when you open Ubuntu for the first time, you get an error, you need to enable virtualisation in the BIOS
If you cannot run the ./Blackscholes example in the installation instructions you might not have the right build of Windows! You must have the right version
If you are getting 'cannot allocate memory' errors when running TF, you need to give WSL more RAM; it only gets access to half your RAM by default
Create a .wslconfig file under your user directory in Windows with the amount of memory you want. Mine looks like:
[wsl2]
memory=16GB
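Not part of the original steps, but one way to confirm the new limit took effect (assuming a default WSL setup):
wsl --shutdown   # from PowerShell on Windows; restarts WSL so .wslconfig is re-read
free -h          # from inside Ubuntu afterwards; "total" should reflect the configured 16GB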
Edit after running some code
This is much slower than when I was running on Windows directly. I went from 1 minute per epoch to 5 minutes. I'm just going to dual-boot.
These are the steps I had to follow for Ubuntu 20.04. I am no longer on dev channel, beta channel works fine for this use case and is much more stable.
Install WSL2
Install Ubuntu 20.04 from Windows Store
Install Nvidia Drivers for Windows from: https://developer.nvidia.com/cuda/wsl/download
Install nvcc inside of WSL with:
sudo apt install nvidia-cuda-toolkit
Check that it is there with:
nvcc --version
For my use case, I do data science and already had anaconda installed. I created an environment with:
conda create --name tensorflow
conda activate tensorflow
conda install tensorflow-gpu
Then just test it with this little Python program with the environment activated:
import tensorflow as tf
# Should list at least one GPU device if the setup worked.
print(tf.config.list_physical_devices('GPU'))
# Report the CUDA and cuDNN versions TensorFlow was built against.
sys_details = tf.sysconfig.get_build_info()
cuda = sys_details["cuda_version"]
cudnn = sys_details["cudnn_version"]
print(cuda, cudnn)
For reasons I do not understand, my machine was unable to find the GPU without installing nvcc, and it actually gave an error message saying it could not find nvcc.
Online tutorials I had found had you download CUDA and cuDNN separately, but I think NVCC includes cuDNN since it is . . . there somehow.
I can confirm I am able to get this working without the need for Docker on WSL2 thanks to the following article:
https://qiita.com/Navier/items/cf551908bae707db4258
Be sure to update to driver version 460.15, not 455.41 as listed in the CUDA documentation.
Note, this does not work with the card in TCC mode (only WDDM). Also, be sure to place your files on the Linux file system (i.e. not on a mount drive, like /mnt/c/). Performance is significantly faster on the Linux file system (this has to do with the difference in implementation of WSL 1 vs. WSL 2; see 1, 2, and 3).
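As a purely illustrative example (the paths are hypothetical), moving a project off the Windows mount looks like:
cp -r /mnt/c/Users/<you>/my_project ~/my_project   # copy onto the Linux (ext4) file system
cd ~/my_project                                    # train from here, not from /mnt/c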
NOTE: See also Is the class generator (inheriting Sequence) thread safe in Keras/Tensorflow?
I just want to point out that using Anaconda to install cudatoolkit and cudnn does not seem to work in WSL.
Maybe there is some problem with paths that makes TF look for the needed files only in the system paths instead of the conda environments.
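If that path issue is the cause, one common workaround (untested here) is to point the dynamic loader at the conda environment's own lib directory before launching Python:
# CONDA_PREFIX is set by "conda activate"; this only lasts for the current shell.
export LD_LIBRARY_PATH=$CONDA_PREFIX/lib:$LD_LIBRARY_PATH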

import tensorflow working in terminal but not in jupyter notebook

I used the following guide to install tensorflow-gpu - https://towardsdatascience.com/tensorflow-gpu-installation-made-easy-use-conda-instead-of-pip-52e5249374bc
I created a new environment and installed tensorflow-gpu using the command -
conda create --name tf_gpu tensorflow-gpu
If I activate the environment, start python in terminal, and import tensorflow from the terminal, it works.
BUT
When I activate the environment, run a jupyter notebook and type -
import tensorflow
I get a 'module not found' error. How do I resolve this?
Start Command Prompt (CMD) as administrator (right-click). Do not enter any environment yet.
Install Jupyter (and nb_conda as well as ipykernel) to get your environments listed: conda install jupyter nb_conda ipykernel
Activate the environment you want to add to jupyter kernel: conda activate myenv
Install ipykernel in the environment (do this for all environments you would like to add): conda install ipykernel
To start Jupyter, cd to the root (cd .. until you are at C:) then type (this does not need to be inside an env): jupyter notebook
You might need to confirm that it should open in a web browser (I use Chrome)
Once open in a browser, navigate to the folder of your choice, then make a new Python 3 file.
Once inside, click Kernel -> Change kernel and select the conda env you would like
You should now be able to change the kernel (env) to any conda environment that has ipykernel installed (step 4)
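If the environment still does not show up, an alternative to nb_conda is to register the kernel explicitly from inside the environment (the name and display name below are just examples):
conda activate tf_gpu
python -m ipykernel install --user --name tf_gpu --display-name "Python (tf_gpu)"
After restarting Jupyter, "Python (tf_gpu)" should then appear under Kernel -> Change kernel.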

Install Tensorflow-GPU on a remote PC without sudo

I don't have sudo access to the remote PC where CUDA is already installed. Now I have to install tensorflow-gpu on that system. Please give me a step-by-step guide to install it without sudo.
Operating System : Ubuntu 18.04
I had to do this before. Basically, I installed miniconda (you can also use anaconda, same thing and installation works without sudo), and installed everything using conda.
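For reference, a minimal sketch of the no-sudo Miniconda install (the installer URL and target path are the usual defaults; adjust as needed):
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
bash Miniconda3-latest-Linux-x86_64.sh -b -p $HOME/miniconda3     # installs into the home directory, no root needed
source $HOME/miniconda3/etc/profile.d/conda.sh                    # make conda available in this shell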
Create the environment and activate it:
conda create --name myenv python=3.6.8
conda activate myenv
Install the CUDA things and Tensorflow
conda install cudatoolkit=9.0 cudnn=7.1.2 tensorflow-gpu
Depending on your system, you may need to change version numbers.
Not sure how familiar you are with conda - it is basically a package-manager/repository and environment manager like pip/venv, with the addition that it can handle non-Python things as well (such as cudnn, for example). As a note - if a package is not available through conda, you can still use pip as a fallback.
Untested with pip
I previously tried to do it without conda and using pip (I ended up failing due to some version conflicts, got frustrated with the process and moved to conda). It gets a little more complicated since you need to manually install it. So first, download cudnn from nvidia and unpack it anywhere you want. Then, you need to add it to the LD_LIBRARY_PATH:
export LD_LIBRARY_PATH=/path/to/cuda/lib64:/path/to/cudnn/lib64/:${LD_LIBRARY_PATH}
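The export only lasts for the current shell; to make it persistent you would typically append the same line to ~/.bashrc, for example:
echo 'export LD_LIBRARY_PATH=/path/to/cuda/lib64:/path/to/cudnn/lib64:${LD_LIBRARY_PATH}' >> ~/.bashrc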