For scientific computing use cases, it is great that the Anaconda distribution ships MKL-optimized builds of libraries like NumPy and SciPy.
I am wondering: if I use Miniconda instead and install NumPy via the conda command, do I get the same MKL optimizations that Anaconda provides?
Yes, you get the same MKL-optimized builds that ship with Anaconda.
The Miniconda installers contain only the conda package manager and Python. Once Miniconda is installed, you can use the conda command to install any other packages, create environments, and so on.
For example:
$ conda install numpy
...
$ conda create -n py3k anaconda python=3
...
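To confirm that such a conda-installed NumPy really is the MKL build, here is a quick sanity check (a sketch; the exact output varies by version):
$ conda list mkl
$ python -c "import numpy; numpy.show_config()"
The first command should list the mkl package, and the second should mention MKL libraries in the BLAS/LAPACK sections of the output.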
Note: If you already have Miniconda or Anaconda installed, and you just want to upgrade, you should not use the installer. Just use conda update.
For instance:
$ conda update conda
will update conda.
Yes, it is exactly the same package as would come with the Anaconda installer.
Note that you can also install NumPy from the conda-forge channel, if you wanted to use OpenBLAS instead of MKL.
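For example, a minimal sketch (the "blas=*=openblas" selector follows conda-forge's BLAS metapackage convention, and the environment name is arbitrary):
$ conda create -n np-openblas -c conda-forge numpy "blas=*=openblas"
$ conda activate np-openblas
numpy.show_config() should then report OpenBLAS rather than MKL.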
Related
While I am in a conda environment, 'conda list' and 'pip freeze' show different numbers of libraries. For example, 'tensorflow-gpu' is listed by 'pip freeze' but not by 'conda list'. If I want to use tensorflow-gpu in this environment, should I run pip install tensorflow-gpu to install it again, or is that not necessary?
I think that when you are in a conda environment, 'conda list' shows the packages that conda installed into that environment. The reason 'tensorflow-gpu' is listed by 'pip freeze' but not by 'conda list' is that it was installed with pip (either by you or by the IDE). In that case, I believe 'tensorflow-gpu' exists only for this particular Python project. There is actually an official document about this topic.
Issues may arise when using pip and conda together. When combining conda and pip, it is best to use an isolated conda environment. Only after conda has been used to install as many packages as possible should pip be used to install any remaining software. If modifications are needed to the environment, it is best to create a new environment rather than running conda after pip. When appropriate, conda and pip requirements should be stored in text files.
Use pip only after conda: install as many requirements as possible with conda, then use pip.
Pip should be run with --upgrade-strategy only-if-needed (the default).
Do not use pip with the --user argument; avoid "all users" installs.
And here is the link.
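As a rough sketch of that workflow (the package and file names here are only placeholders):
conda create -n myenv python=3.9
conda activate myenv
conda install numpy pandas            # install as much as possible with conda first
pip install some-pypi-only-package    # hypothetical package that is only on PyPI
conda env export > environment.yml    # store the conda requirements in a text file
pip freeze > requirements.txt         # store the pip requirements in a text file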
I'm using Miniconda 3, and I'm trying to build a minimal Conda environment containing pandas. However, when I try to load the pandas module, Jupyter gives me the following error:
The kernel appears to have died. It will restart automatically.
When doing the same thing via Python in the Terminal, Python crashes.
I have created a minimal Conda environment, which can be reproduced via the code below.
conda create -n testenv
conda activate testenv
conda install python
conda install pandas
conda install jupyter
The problem doesn't occur anymore when I follow up with a full Anaconda install, via conda install anaconda.
Any ideas as to how this problem can be resolved without installing Anaconda?
I figured out what the problem was. What I should have mentioned is that I had built a global channel list according to the Bioconda recommendation, using
conda config --add channels defaults
conda config --add channels bioconda
conda config --add channels conda-forge
This makes conda-forge the highest-priority channel. For reasons I don't understand, this creates a dependency hell with the conda-forge builds. (I see that the conda-forge maintainers are aware of this: https://github.com/conda-forge/pandas-feedstock/issues/63.)
For now, install using conda install -c defaults pandas.
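A minimal sketch of rebuilding the environment pinned to the defaults channel (the environment name is arbitrary):
conda create -n testenv2 --override-channels -c defaults python pandas jupyter
conda activate testenv2
python -c "import pandas; print(pandas.__version__)"
The --override-channels flag keeps the higher-priority conda-forge channel out of the solve.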
I installed Tensorflow for GPU using: pip install tensorflow-gpu
But when I tried the same for Keras, pip install keras-gpu, it gave me an error: could not find a version that satisfies the requirement.
Adding to the answer below, which is correct in recommending the Anaconda package manager, but is out of date in that there is now a keras-gpu package on Anaconda Cloud.
So once you have Anaconda installed, you simply need to create a new environment where you want to install keras-gpu and execute the command:
conda install -c anaconda keras-gpu
This will install Keras along with both the tensorflow and tensorflow-gpu libraries as the backend. (There is also no need to install the CUDA runtime and cuDNN libraries separately, as they are included in the package; tested on Windows 10 and working.)
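For instance, a minimal sketch (the environment name and Python version are only examples):
conda create -n keras-gpu-env python=3.6
conda activate keras-gpu-env
conda install -c anaconda keras-gpu
python -c "import keras; print(keras.__version__)"
The last line is just a quick check that the install worked.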
There is no keras-gpu package [UPDATE: now there is; see the other answer above]; Keras is a wrapper around several backends, including TensorFlow, and these backends may come in different versions, such as tensorflow and tensorflow-gpu. But this does not hold for Keras itself, which should be installed simply with
pip install keras
independently of whatever backend is used (see the PyPI docs).
Additionally, since you have also tagged the question with anaconda, be aware that it is generally not advisable to mix package managers (i.e. pip with conda), and you may be better off installing Keras from Anaconda Cloud with
conda install -c conda-forge keras
Finally, you may be also interested to know that recent versions of Tensorflow include Keras as a subpackage, so you can use it without any additional installation; see https://www.tensorflow.org/guide/keras
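A quick way to check this (a sketch, assuming TensorFlow is already installed in the active environment):
python -c "import tensorflow as tf; print(tf.keras.__version__)"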
For installing tensorflow-gpu from Anaconda cloud, you should use
conda install -c anaconda tensorflow-gpu
before installing Keras. Be sure you do it in a different virtual environment, or after having uninstalled other versions (i.e. pip-installed ones), as there have been reported problems otherwise.
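A minimal sketch of doing this in a fresh environment (the environment name is arbitrary):
conda create -n tf-gpu-env python=3.6
conda activate tf-gpu-env
conda install -c anaconda tensorflow-gpu
conda install -c conda-forge keras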
Adding to the above two answers, ensure your TensorFlow/Keras environment is using Python 3.6. Keras/TensorFlow doesn't work very well with Python 3.7, as of May 10, 2019.
I tried to use Keras/TensorFlow with Python 3.7 and I ended up having to reinstall Anaconda, since it sort of broke my Anaconda Prompt.
To install tensorflow-gpu with a particular CUDA version (9.0 in this case), use:
conda install tensorflow-gpu cudatoolkit==9.0 -c anaconda
Similarly for keras-gpu.
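By analogy, that would presumably be (whether the anaconda channel has a matching build for this exact combination may vary):
conda install keras-gpu cudatoolkit==9.0 -c anaconda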
I've set up a conda environment and used conda install numpy. This includes the mkl package, which is quite large (hundreds of MBs). On Windows, conda installs these DLLs into Library/bin in the environment. In another conda environment I installed numpy+mkl from Christoph Gohlke's binaries. What's the difference between these two NumPy builds, both of which are linked against MKL? Is there a preferred version to use?
The install from Christoph's binaries doesn't put all the mkl_* DLLs in Library/bin. Christoph's website says all the MKL libraries are included in the numpy.core folder after the pip install. I checked, and there are a few DLL files in there, but none with the mkl_* prefix.
The conda version ends up using hundreds of MBs more than the other install, so I'm curious what benefits there are to using the conda version.
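A sketch of how to compare what each build is actually linked against (the environment names here are hypothetical):
conda activate conda-numpy-env
python -c "import numpy; numpy.show_config()"
conda activate gohlke-numpy-env
python -c "import numpy; numpy.show_config()"
Both should report MKL; the remaining difference is mainly where the MKL DLLs are placed on disk.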
I installed TensorFlow with conda for the first time. Then I worked on a seq2seq model. After that, I installed TensorFlow again, this time with pip. But now the libraries are very different, and all the old scripts are misplaced. Why is that? Why didn't I face this when I was working with the conda installation?
It has been claimed that TensorFlow installed with conda performs a lot faster than a pip installation; for example:
https://towardsdatascience.com/stop-installing-tensorflow-using-pip-for-performance-sake-5854f9d9eb0c
Conda also installs all of the package's dependencies automatically, including non-Python libraries, which pip does not, as far as I'm aware.
https://www.anaconda.com/blog/developer-blog/tensorflow-in-anaconda/
Pip and conda install to two different locations. You should try to stick to one or the other. I would recommend uninstalling the conda version and sticking to pip, but it's up to you how to proceed.
Update 01-02-2019: It seems that conda is now the faster and preferred way to install tensorflow. Note this may change again in the future.
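If you go with the conda route, a rough sketch of cleaning up first (assuming the pip-installed copies live in the same environment):
pip uninstall tensorflow tensorflow-gpu
conda install tensorflow-gpu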