Is it safe to install Tensorflow in an existing Conda environment? - tensorflow

I am looking into using Tensorflow for my research soon, and looked at the online documentation for installing with Conda https://www.tensorflow.org/versions/r0.11/get_started/os_setup.html#anaconda-installation.
It suggested creating a new environment, installing TensorFlow in it, and then installing other Python packages afterwards.
But I already have an existing environment with lots of packages I need, and I'm wondering if it's safe to add TensorFlow to that environment?
Also, I have a question about how this installation with conda works. I know that Conda will create a distinct set of folders containing the libraries needed for each environment, but if I install TensorFlow, what happens to all the base low-level C++ and CUDA libraries that get compiled? Do they reside in my Conda environment's folder, or in some system-wide location closer to my root?
PS: I'm using Ubuntu 16.04, and have a GPU that I want to run Tensorflow on.
Thank you.

But I already have an existing environment with lots of packages I need, and I'm wondering if it's safe to add TensorFlow to that environment?
conda has this awesome feature called "revisions". You can show the revision history of your current environment with
conda list --revisions
which lets you revert changes to your conda environment. That means you can install new packages with confidence that if something breaks you can always roll it back later. See this page for more info: https://www.continuum.io/blog/developer/advanced-features-conda-part-2. tl;dr: conda install --revision <revision_number>
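As a quick sketch of that workflow (it operates on whichever environment is currently active; tensorflow-gpu here assumes you want the conda GPU build):

    # show the revision history of the active environment
    conda list --revisions

    # install TensorFlow into the existing environment
    conda install tensorflow-gpu

    # if something breaks, roll back to an earlier revision, e.g. revision 2
    conda install --revision 2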
what happens to all the base low level C++ and CUDA libraries that get compiled
Are you talking about the libraries that get compiled when you build your own code, or the C++/CUDA libraries that TensorFlow depends on? If you mean the C++/CUDA libs, conda is not compiling them; it merely installs pre-compiled binaries into the environment's own folder, where they get picked up at runtime. If you're talking about your own code, then where exactly those files live depends on where you put them.
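As an illustration (a sketch; 'tf-env' is just a made-up environment name, and which libraries show up depends on what conda pulled in), you can see that the native libraries sit inside the environment's prefix rather than in a system path:

    # activate the environment and show where it lives
    source activate tf-env        # 'conda activate tf-env' on newer conda
    echo $CONDA_PREFIX            # e.g. ~/anaconda3/envs/tf-env

    # the pre-built native libraries live under the environment, not under /usr
    ls $CONDA_PREFIX/lib | grep -i -E 'cuda|cudnn|tensorflow'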

Related

which one is better for installing tensorflow

I followed the instructions on the official website to download TensorFlow. I chose to create a virtual environment, as the instructions show for macOS. My question is whether I need to activate the virtual environment each time before I use TensorFlow?
For example, I want to use TensorFlow in a Jupyter notebook, and that means I need to install Jupyter and other required packages like Seaborn/pandas in the virtual environment as well. However, I already downloaded Anaconda, and it basically has all the packages I need.
Besides, will it make a difference if I download it with conda?
Well, if you installed the packages (like you said, TensorFlow and Seaborn) in the base Conda environment, which is the default environment that Anaconda provides on installation, then to use what it has you need to launch whatever program/IDE, such as Jupyter Lab, from it. So you would open the Anaconda Prompt, type in jupyter lab, and it would start a local server where you can work with the Python libraries installed through Conda.
Otherwise, in IDEs like VS Code you can simply set the Python interpreter to the one from Conda.
However, if you install the libraries and packages you need using pip on your actual Python installation rather than Conda, then there is no need for any activation. Everything will run right out of the box, and you don't need to select the interpreter in IDEs like VS Code.
Bottom line: if you know what libraries you need and don't mind running pip install package-name every time you need a package, stick with pip.
If you don't like that sort of 'low level' stuff, then use Anaconda or Miniconda.
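A minimal sketch of the conda-based workflow, assuming a dedicated environment named 'tf' (an illustrative name):

    # create the environment once, with TensorFlow and the other packages you need
    conda create -n tf python=3.6 tensorflow jupyter pandas seaborn

    # each time you want to work, activate it and launch Jupyter from inside it
    conda activate tf             # 'source activate tf' on older conda
    jupyter notebook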

What is the difference between Tensorflow installation via source vs using pip?

I want to run TensorFlow on a very standard machine setup (Windows, 64-bit) and have read that TensorFlow has greater performance if built from source, as it is optimised for your system. When installing TensorFlow via pip, why does pip not select the optimal build for your system?
Also, if you did install via pip, is there a way of telling whether the optimal build has been installed, or is the only way of knowing simply remembering how you installed it?
Google has taken the position that it is not reasonable to build TF for every possible instruction set out there. They only release generic builds for Linux, Mac and Windows. You must build from source if you want all the optimizations for your particular machine's instruction set.
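As for telling the builds apart: a rough check (just observing the startup warnings quoted in the question further down, not an official flag) is to import TensorFlow and see whether it complains about unused instruction sets:

    # a generic pip build typically warns on import, e.g.
    #   "The TensorFlow library wasn't compiled to use SSE3 instructions, ..."
    # a build made with -march=native on this machine should stay quiet
    python -c "import tensorflow as tf; print(tf.__version__)"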

What if we don't install tensorflow under a new environment?

How come we need to install tensorflow in a separate environment?
If we do it this way, many common libraries are not available when the tensorflow environment is activated.
Most of the common libraries, such as matplotlib, pandas, etc., are not in the tensorflow environment, so we have to install them again to use them.
So why not just install under root, so we don't have to re-install all those libraries in the new environment?
Thanks.
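One way around the re-installation, as a hedged sketch (the environment name is whatever you called your TensorFlow environment), is to create the environment with the common libraries included from the start, or add them to it afterwards:

    # create the environment with TensorFlow and the common libraries together
    conda create -n tensorflow python=3.5 tensorflow matplotlib pandas

    # or add them to an existing 'tensorflow' environment later
    conda install -n tensorflow matplotlib pandas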

Managing CUDA with Conda environments

I work with TensorFlow and PyTorch and manage each project that I work on inside a Conda environment. This allows me to share projects and update my stack without breaking old projects. I am trying to get Conda to manage CUDA within an environment as well. Is this possible? I want old projects to keep working whenever I update to newer versions of CUDA and cuDNN. Is there a way for Anaconda to do this?
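Conda does ship cudatoolkit and cudnn as ordinary packages that are installed per environment, so each environment can pin its own CUDA runtime independently; a minimal sketch (names and versions are illustrative, and the NVIDIA driver itself still has to be installed system-wide):

    # an old project pinned to the CUDA/cuDNN it was built against
    conda create -n old-project python=3.6 tensorflow-gpu=1.12 cudatoolkit=9.0 cudnn

    # a newer project with a newer CUDA runtime, without touching the old one
    conda create -n new-project python=3.7 pytorch cudatoolkit=10.1 -c pytorch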

build tensorflow from source to use SSE3 and SSE4

Whenever I use tensorflow, it displays the message "The TensorFlow library wasn't compiled to use SSE3 instructions, but these are available on your machine and could speed up CPU computations" and 2 more similar messages.
So I decided to build tensorflow from source to get rid of these messages. I'm using python 3.5 on Debian, and followed the instructions at https://www.tensorflow.org/install/install_sources (CPU only, no GPU).
During the configure step it asked whether the build should be optimised for the machine it's building on; I selected that, and it included -march=native in a compiler option.
Everything seemed to work, but when I ran python3 to test the build, it still gives the messages about "The TensorFlow library wasn't compiled to use SSE3 instructions, but these are available ..." etc. How do I make the build use the hardware that it's running on?
There have been similar questions, and most of the answers to them are wrong. They say it's necessary to specify options like "--copt=-msse4.1 --copt=-msse4.2" in the build; it isn't. With the default option "-march=native", the GNU compiler will use SSE4.1 and SSE4.2 instructions if they are available.
The real problem is that if you previously installed the default build with pip and then build tensorflow from source, pip won't replace the old build with the new one. Everything will seem to work, but your old build remains in place in a directory under ~/.local.
The solution is simply to uninstall the old tensorflow with pip ('pip uninstall tensorflow' or 'pip3 uninstall tensorflow'), and then rebuild from source. If you have already done a build and wondered why nothing seemed to change, you needn't repeat the build but can just execute the last couple of steps (https://www.tensorflow.org/install/install_sources), namely bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg, followed by the pip install.
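Putting those steps together, a sketch of the fix (paths follow the install_sources guide linked above; the wheel filename varies by version and platform):

    # remove the generic pip build so it can't shadow the new one
    pip3 uninstall tensorflow

    # from the tensorflow source tree: package the build already made with
    # -march=native (no need to re-run bazel build)
    bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg

    # install the freshly built wheel
    pip3 install /tmp/tensorflow_pkg/tensorflow-*.whl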