Tensorflow Serving Developer environment - tensorflow-serving

I can't seem to find any documentation that describes what parts of TF and TFS need to be installed/built to create a servable. Can anyone shed light on the subject?

I'm not sure if this documentation exists. The approach I would take is to create a new blank environment, with conda or whatever you prefer. Then install TensorFlow and TensorFlow Serving into the environment, which will pull the dependencies into the environment as well.
Then just run pip list or conda list (or equivalent) to see which libraries got installed. That should give you a list of the base libraries needed to use TF and TF Serving.
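A minimal sketch of that probe, assuming conda and the tensorflow-serving-api client package (the model server binary itself is usually installed separately, e.g. via apt or the tensorflow/serving Docker image; the environment name below is just a placeholder):
# create and activate a clean environment
conda create --name tfs-probe python=3.6
conda activate tfs-probe
# install TensorFlow plus the TF Serving client API
pip install tensorflow tensorflow-serving-api
# dump everything that was pulled in as a dependency
pip list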

Related

Problem with importing tensorflow and testing NN

I'm currently working on a program to play a game similar to Atari games, using Keras (Python 3). I finished writing the code and want to test it, and I have a few questions about the process:
First of all, I have trouble importing tensorflow for some reason. I've installed it using pip and made sure to create a new env before the installation (which finished successfully), but when I try to run my program it says:
ModuleNotFoundError: No module named 'tensorflow'
I also tried to install the package from within PyCharm, but then I get this error:
Could not find a version that satisfies the requirement tensorflow (from versions: )
No matching distribution found for tensorflow
I've checked the program requirements (such as pip, python, virtualenv and setuptools versions) and everything seems up to date. Perhaps someone could point out what else might be the problem?
Is there any other way I can test the performance of my program?
Thank you very much for your time and attention.
Anaconda is a complete time-saver. I suggest creating an environment using Anaconda and installing TensorFlow with conda install tensorflow. If you would like to use the GPU version, conda automatically installs CUDA and cuDNN for you too.
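A rough sketch of that suggestion, assuming conda is already on your PATH (the environment name atari-env is just a placeholder):
conda create --name atari-env python=3.6
conda activate atari-env
# CPU build; for the GPU build use: conda install tensorflow-gpu
conda install tensorflow
# quick smoke test that the import works inside the environment
python -c "import tensorflow as tf; print(tf.__version__)"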

"No module named tensorflow" when tensorflow-gpu is required

I built a Python TensorFlow package and uploaded it to run on ML Engine.
"tensorflow-gpu==1.8.0" (no plain tensorflow) is set as a requirement in my setup.py.
The ML Engine run fails at "import tensorflow as tf", saying "No module named tensorflow".
The ML Engine run works fine when I only require "tensorflow==1.8.0", but I believe tensorflow-gpu is needed to use the GPU.
Any ideas how to solve this issue?
Thanks
You need to set --runtime-version=1.8 when submitting the job. Consequently, you don't need to manually specify TF in setup.py. In fact, if that's the only package you are requiring, you can omit setup.py altogether.
Update 2018/06/29:
Explanation: different versions of TensorFlow require different versions of NVIDIA's drivers and software stack. The --runtime-version is guaranteed to have the right version of the drivers for that particular version of TensorFlow. You can technically set the version of tensorflow-gpu in your setup.py, but that version must be compatible with the NVIDIA stack present in the --runtime-version you've selected (defaults to the very old TF 1.0).
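For illustration, a hedged sketch of a training-job submission with the runtime version pinned (the job name, bucket, package path, and region are placeholders, not taken from the original post):
gcloud ml-engine jobs submit training my_job \
  --runtime-version=1.8 \
  --module-name=trainer.task \
  --package-path=trainer/ \
  --job-dir=gs://my-bucket/my_job \
  --region=us-central1 \
  --scale-tier=BASIC_GPU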
This also happens when you have multiple Python versions. In that case you have to specify the relevant Python version for the TF installation, for example "python3 setup.py" instead of "python setup.py".
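A quick way to see which interpreter each command resolves to, as a sketch (assuming both python and python3 are on your PATH):
python --version     # the interpreter that plain "python" points to
python3 --version    # the interpreter you probably want for TF
# run the setup with the interpreter that should get the package
python3 setup.py install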

Needed help?? installing Tensorflow-GPU for Win 10 Pro 9-2-18

1. Create a new environment through conda create --name tftest. (You can replace tftest with e.g. the name of your current project.)
2. Activate that new environment through activate tftest.
3. Install TF into this environment through conda install tensorflow.
4. Ensure that you're in the right environment through where python (which should produce a path containing "tftest").
5. Run Python through python.
6. import tensorflow as tf in a shell in that environment.
Thanks to the great community, as I found this via another post!!!
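The six steps above as one continuous sketch (the environment name tftest is just the example from those steps; on newer conda use conda activate instead of activate):
conda create --name tftest
activate tftest
conda install tensorflow
# confirm the environment's interpreter is the one on PATH
where python
# smoke-test the import
python -c "import tensorflow as tf; print(tf.__version__)"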
Starting with version 1.6.0, prebuilt binaries need AVX instructions.
There are some bug reports from people who tried to use the precompiled binaries on a CPU that doesn't support AVX instructions and got the same error you posted here:
https://github.com/tensorflow/tensorflow/issues/17761
https://github.com/tensorflow/tensorflow/issues/17386
Maybe you have this problem? If so, you may have to build TensorFlow from source or downgrade to TensorFlow 1.5.1.
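A quick check for AVX support (the grep below works on Linux; on Windows you would check the CPU's spec sheet or a tool like Sysinternals Coreinfo instead) plus the downgrade command, as a sketch:
# prints avx/avx2 flags only if the CPU supports them
grep -o 'avx[^ ]*' /proc/cpuinfo | sort -u
# last release whose prebuilt binaries do not require AVX
pip install "tensorflow==1.5.1"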

What if we don't install tensorflow under a new environment?

Why do we need to install tensorflow in a separate environment?
If we do it this way, many common libraries are not available when the tensorflow environment is activated.
Most of the common libraries, such as matplotlib, pandas, etc., are not in the tensorflow environment, so we have to install them again to use them.
So why not just install under root, so we don't have to re-install all those libraries in the new environment?
Thanks.
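For reference, "install them again" just means repeating the install inside the activated environment; a sketch, with the environment name and package list purely illustrative:
conda activate tensorflow
conda install matplotlib pandas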

Is it safe to install Tensorflow in an existing Conda environment?

I am looking into using Tensorflow for my research soon, and looked at the online documentation for installing with Conda https://www.tensorflow.org/versions/r0.11/get_started/os_setup.html#anaconda-installation.
It suggested creating a new environment, installing Tensorflow in it, and then installing other python packages afterwards.
But I already have an existing environment with lots of packages I need, and I'm wondering if it's safe to add Tensorflow to that environment?
Also, I have a question about how this installation with conda works. I know that Conda will create a distinct set of folders containing the libraries needed for each environment, but if I install Tensorflow, what happens to all the base low-level C++ and CUDA libraries that get compiled? Do they reside in my Conda environment's folder, or in some system-wide location closer to my root?
PS: I'm using Ubuntu 16.04, and have a GPU that I want to run Tensorflow on.
Thank you.
But I already have an existing environment with lots of packages I need, and I'm wondering if it's safe to add Tensorflow to that environment?
conda has this awesome feature called "revisions". You can show your environment's revision history with
conda list --revisions
which lists numbered states you can revert your conda environment to. This lets you install new packages with confidence that if something breaks you can always revert later. See this page for more info: https://www.continuum.io/blog/developer/advanced-features-conda-part-2. tl;dr: conda install --revision <revision_number>
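A hedged sketch of that revision workflow (the revision number 2 is illustrative):
# list the numbered revisions recorded for the active environment
conda list --revisions
# installing something new creates a new revision
conda install tensorflow
# roll the environment back to an earlier revision if something breaks
conda install --revision 2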
what happens to all the base low-level C++ and CUDA libraries that get compiled
Are you talking about libraries that get compiled when you run your own code, or about the C++/CUDA libraries themselves? If you mean the C++/CUDA libs, conda is not compiling them; it merely installs pre-compiled binaries into your environment's own folder, where they get picked up at runtime. If you're talking about your own code, then where exactly those files live depends on where you put them.
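If you want to see for yourself where those pre-compiled libraries end up, you can inspect the environment prefix; a sketch for Linux/macOS (exact paths and file names will vary with your packages):
# root folder of the currently active environment
echo "$CONDA_PREFIX"
# shared libraries installed by conda packages live under the prefix, not in system-wide locations like /usr/lib
ls "$CONDA_PREFIX/lib" | grep -i -E 'cud|tensorflow'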