Should I trust conda or pip when checking libraries installed? - tensorflow

While I am in a conda environment, 'conda list' and 'pip freeze' show different numbers of libraries. For example, 'tensorflow-gpu' is listed in 'pip freeze' but not in 'conda list'. If I want to use tensorflow-gpu in this environment, should I run pip install tensorflow-gpu to install it again, or is that unnecessary?

I think that when you are using a conda environment, conda list shows the packages that conda manages for that environment. The reason 'tensorflow-gpu' is listed in 'pip freeze' but not in 'conda list' is that 'tensorflow-gpu' was installed with pip (either by you or by the IDE). In that case, I believe 'tensorflow-gpu' only exists for this Python project. Actually, there is an official document about this topic.
Issues may arise when using pip and conda together. When combining
conda and pip, it is best to use an isolated conda environment. Only
after conda has been used to install as many packages as possible
should pip be used to install any remaining software. If modifications
are needed to the environment, it is best to create a new environment
rather than running conda after pip. When appropriate, conda and pip
requirements should be stored in text files.
Use pip only after conda: install as many requirements as possible with conda, then use pip.
Pip should be run with --upgrade-strategy only-if-needed (the default).
Do not use pip with the --user argument; avoid "all users" installs.
And here is the link.
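As a quick sanity check from inside the activated environment, you can see which pip you are calling and where the package actually lives (a small sketch, assuming a Unix-like shell):
which pip                            # should point inside the active conda environment
python -m pip show tensorflow-gpu    # the Location: line shows which site-packages it was installed into
conda list | grep -i tensorflow      # what conda reports for this environment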

Related

Conda: Best approach for installing packages from Github and pip

My main Python environment on OSX Big Sur is the base env from Anaconda.
Now, I regularly need to install packages that are not available via conda or conda-forge or not in the latest version.
I understand that there are several different approaches:
Using the pip package installed by conda and following the preferred way of using pip within conda, i.e. adding those packages at the end of an environment.yaml file and deleting and re-generating the env whenever a new package needs to be added, in order to avoid conflicts from the interplay between conda and pip (sketched below).
Forking the latest available recipe as described here, updating the version number and details, and using my local version of the recipe.
Simply installing the packages via pip, npm (Node JS package manager), apm (Atom package manager) from within the activated conda environment.
Which approach is best? Is it safe to use pip with conda in the way described in #1?
#2 is more effort than the other two. But I am looking for a generic approach that will not mess up my installation.
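For concreteness, approach #1 usually looks something like the following; the file name, environment name, and package names here are placeholders, not taken from the question:
name: scratch-env
channels:
  - conda-forge
dependencies:
  - python=3.9
  - numpy
  - pip
  - pip:
    - some-pip-only-package   # placeholder for something not available via conda/conda-forge
and whenever the pip section changes, the environment is deleted and re-created in one go:
conda env remove -n scratch-env
conda env create -f environment.yaml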

Why does using pip in a conda virtual environment take effect globally?

Previously, I installed tensorflow 1.13 on my machine.
There are some projects depending on different versions of tensorflow, and I do not want to mix up the different versions.
So I created an env called tf2.0 and used pip to install tensorflow 2.0.0b1 in that specific virtual environment.
However, after I ran 'pip install tensorflow-gpu==2.0.0b1' in that "tf2.0" conda environment, I found that it took effect globally, which means I have to use tensorflow-gpu 2.0.0b1 even when the virtual env "tf2.0" is deactivated.
I wish I could use tensorflow 1.13 when the virtual env is deactivated.
It's hard to troubleshoot the described conditions without more details (exact commands run, PATH before and after activation, etc.). Nevertheless, you can try switching to the most recent recommendations for mixing Conda and Pip. Namely, avoid installing things ad hoc, which is prone to using the wrong pip and clobbering packages; instead, define a YAML file and always create the whole env in one go.
As a minimal example:
my_env.yaml
name: my_env
channels:
  - defaults
dependencies:
  - python
  - pip
  - pip:
    - tensorflow-gpu==2.0.0b1
which can be created with conda env create -f my_env.yaml. Typically, it is best to include everything possible in the "non-pip" section of dependencies.
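For completeness, creating and smoke-testing the environment defined above could look like this (the printed version string is just what I would expect for 2.0.0b1):
conda env create -f my_env.yaml
conda activate my_env
python -c "import tensorflow as tf; print(tf.__version__)"   # expect something like 2.0.0-beta1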
Most likely you used the wrong pip. To make sure you are using the correct pip, it is usually good practice to do
python -m pip install --user PACKAGE_NAME
Given that you have conda, pip should be the last resort.
Conda channel conda-forge most likely has the latest package version you are looking for.
conda install -c conda-forge PACKAGE_NAME
If you have to use pip, make sure you are in an environment and that environment has its own pip.
conda create -n test python=3.7
conda activate test
python -m pip install PACKAGE_NAME
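To verify that this really is the environment's own pip and not the base one, you can check where the interpreter and pip resolve from (on a Unix-like shell; use where instead of which on Windows):
conda activate test
which python              # should point into .../envs/test/bin/python
python -m pip --version   # the printed path should also sit inside envs/test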
From the problem you describe, my guess is that the environment in which you are trying to install tensorflow 2.0 is not activated.
Please make sure to activate the environment after creating it.
So after creating the environment, do this:
conda activate tf2.0
Make sure you see this:
(tf2.0) C:\Users\XYZ>
And then install your tensorflow.
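If you are not sure whether activation worked, you can also list the environments; conda marks the currently active one with an asterisk:
conda env list    # the active environment is marked with a *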

Does Miniconda also benefit from MKL optimization for numpy?

For scientific computing use cases, it is great that the Anaconda distribution ships MKL-optimized builds of libraries like numpy, scipy, etc.
I am wondering: if I use Miniconda instead and install numpy via the conda command, does it ship the same MKL optimization benefits as Anaconda does?
Yes, it ships the same MKL optimization benefits as Anaconda does.
These Miniconda installers contain the conda package manager and Python. Once Miniconda is installed, you can use the conda command to install any other packages and create environments, etc.
For example:
$ conda install numpy
...
$ conda create -n py3k anaconda python=3
...
Note: If you already have Miniconda or Anaconda installed, and you just want to upgrade, you should not use the installer. Just use conda update.
For instance:
$ conda update conda
will update conda.
Yes, it is exactly the same package as would come with the Anaconda installer.
Note that you can also install NumPy from the conda-forge channel, if you wanted to use OpenBLAS instead of MKL.
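If you want to check which BLAS backend a given NumPy build is actually linked against, one way (output details vary between NumPy versions) is:
python -c "import numpy; numpy.show_config()"   # look for mkl_info / blas_mkl_info vs. openblas_info sections
conda list mkl                                  # shows whether the MKL packages are present in the environment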

Install Tensorflow gpu on a remote pc without sudo

I don't have sudo access to the remote PC where CUDA is already installed. Now I have to install tensorflow-gpu on that system. Please give me a step-by-step guide to install it without sudo.
Operating System : Ubuntu 18.04
I had to do this before. Basically, I installed miniconda (you can also use anaconda, it is the same thing and the installation works without sudo) and installed everything using conda.
Create my environment and activate it:
conda create --name myenv python=3.6.8
conda activate myenv
Install the CUDA things and Tensorflow
conda install cudatoolkit=9.0 cudnn=7.1.2 tensorflow-gpu
Depending on your system, you may need to change version numbers.
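Once that finishes, a quick way to confirm that this TF 1.x build (the versions that pair with cudatoolkit 9.0) actually sees the GPU is:
python -c "import tensorflow as tf; print(tf.test.is_gpu_available())"   # should print True if the GPU and CUDA libraries are found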
Not sure how familiar you are with conda - it is basically a package manager/repository and environment manager like pip/venv, with the addition that it can handle non-Python things as well (such as cudnn, for example). As a note - if a package is not available through conda, you can still use pip as a fallback.
Untested with pip
I previously tried to do it without conda, using pip (I ended up failing due to some version conflicts, got frustrated with the process and moved to conda). It gets a little more complicated since you need to install cudnn manually. So first, download cudnn from nvidia and unpack it anywhere you want. Then, you need to add it to the LD_LIBRARY_PATH:
export LD_LIBRARY_PATH=/path/to/cuda/lib64:/path/to/cudnn/lib64/:${LD_LIBRARY_PATH}
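That export only lasts for the current shell session; to make it persist across logins you can append the same line to your shell start-up file (assuming bash here, and the same placeholder paths as above):
echo 'export LD_LIBRARY_PATH=/path/to/cuda/lib64:/path/to/cudnn/lib64:${LD_LIBRARY_PATH}' >> ~/.bashrc
source ~/.bashrc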

What is the difference between installing tensorflow with the pip command, with conda, or by directly cloning?

For the first time I've installed tensorflow with the conda installation. Then I actually worked with a seq2seq model. After that I installed tensorflow again with the pip installation. But now the libraries are very different. All the old scripts are misplaced, etc. Why is that? Why didn't I face this when I was working with the conda installation?
It has been claimed that Tensorflow installed with Conda performs a lot faster than a Pip installation, for example:
https://towardsdatascience.com/stop-installing-tensorflow-using-pip-for-performance-sake-5854f9d9eb0c
Conda also installs all of the package dependencies automatically, which Pip does not, as far as I'm aware.
https://www.anaconda.com/blog/developer-blog/tensorflow-in-anaconda/
Pip and conda install to two different locations. You should try to stick to one or the other. I would recommend uninstalling the conda version and sticking to pip but it's up to you how to proceed.
Update 01-02-2019: It seems that conda is now the faster and preferred way to install tensorflow. Note this may change again in the future.
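If you want to see which of the two copies Python is actually importing before uninstalling anything, a quick check (assuming the plain tensorflow package name) is:
python -c "import tensorflow as tf; print(tf.__file__)"   # shows which site-packages copy is being imported
conda list tensorflow                                     # the conda-managed copy, if any
python -m pip show tensorflow                             # the pip-managed copy, if any
conda remove tensorflow                                   # removes the conda copy, if you decide to keep the pip one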