inception model folder not available when installing with pip - tensorflow

The github source for tensorflow contains a folder called inception under models.
https://github.com/tensorflow/models/tree/master/inception/inception
However, there is no such folder when I install using pip.
The pip installation resides at:
/usr/lib/python2.7/site-packages/tensorflow/
Is this discrepancy because of the version I am using when installing with pip? I used this link, TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.9.0rc0-cp27-none-linux_x86_64.whl, when installing with pip. Is this correct?
Please check.

Related

RASA --- ERROR: Could not find a version that satisfies the requirement tensorflow

I am trying to install rasa and there is a problem with tensorflow (Windows 10).
As a pre-requisite, I have installed Anaconda and VC++.
Steps -
Open Anaconda with admin rights
activate rasa
pip install rasa-x --extra-index-url https://pypi.rasa.com/simple
pip install rasa
Error - ERROR: Could not find a version that satisfies the requirement tensorflow
I tried to install tensorflow before installing rasa, but the error remains the same even when installing tensorflow on its own. I need some pointers to install rasa and tensorflow so that I can move ahead.
You need to use Python version 3.6 or 3.7. Check that first.
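For example, a fresh environment pinned to a supported Python version can be created with conda (a sketch; the environment name rasa matches the one activated in the question's steps):
conda create -n rasa python=3.7
activate rasa
# or, with newer conda: conda activate rasa
python --version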
I tried the following and everything got fixed immediately.
The main problem I could see with my setup was that the Visual Studio installation was not complete.
Pre-requisites -
Anaconda 64 Bit
Visual Studio complete installation
Open Anaconda - Admin privilege
mkdir c:\RASA
cd RASA
activate rasa
pip install rasa-x --extra-index-url https://pypi.rasa.com/simple
pip install rasa[full]
pip install rasa[spacy]
python -m spacy download en_core_web_md
python -m spacy link en_core_web_md en
Following the above steps solved all the issues with the rasa installation.
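Once the install finishes, a quick sanity check (a sketch; rasa --version is the standard Rasa CLI version command, and the Python one-liner just confirms that tensorflow imports):
rasa --version
python -c "import tensorflow as tf; print(tf.__version__)"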

No module named "tensorflow"

I want to build tensorflow with Python libraries. Currently I have tensorflow installed, but I do not see the Python packages in /usr/local/lib/python3.6/dist-packages, so when I try to import the tensorflow module in a Python terminal, it fails. However, there is a library in /usr/lib and C++ programs work.
What flag/target is needed in the bazel build?
You can get the TensorFlow Python package in one of two ways:
Directly doing pip install tensorflow: this installs a precompiled version by downloading a wheel file.
Building from the source code on GitHub using bazel build.
For the second approach, the following steps are needed:
Take the source code from GitHub.
Compile the code using bazel build. It will give the ".whl" file.
Do pip install /tmp/TensorFlow-version-tags.whl
For detailed installation steps, follow this.
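A sketch of the second approach (the pip_package target below is the standard one from the TensorFlow build documentation; the exact wheel name under /tmp/tensorflow_pkg will vary with version and platform):
git clone https://github.com/tensorflow/tensorflow.git
cd tensorflow
./configure
# this target builds the script that packages the Python wheel
bazel build //tensorflow/tools/pip_package:build_pip_package
bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
pip install /tmp/tensorflow_pkg/tensorflow-*.whl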

Tensorflow will not run on GPU

I'm a newbie when it comes to AWS and Tensorflow and I've been learning about CNNs over the last week via Udacity's Machine Learning course.
Now I need to use an AWS GPU instance. I launched a p2.xlarge instance of the Deep Learning AMI with Source Code (CUDA 8, Ubuntu), which is what they recommended.
But now it seems that tensorflow is not using the GPU at all; it is still training on the CPU. I did some searching and found some answers to this problem, but none of them seemed to work.
When I run the Jupyter notebook, it still uses the CPU.
What do I do to get it to run on the GPU and not the CPU?
The problem of tensorflow not detecting the GPU can possibly be due to one of the following reasons.
Only the tensorflow CPU version is installed in the system.
Both the tensorflow CPU and GPU versions are installed in the system, but the Python environment prefers the CPU version over the GPU version.
Before proceeding to solve the issue, we assume that the installed environment is an AWS Deep Learning AMI with CUDA 8.0 and tensorflow version 1.4.1 installed. This assumption is derived from the discussion in the comments.
To solve the problem, we proceed as follows:
Check the installed version of tensorflow by executing the following command from the OS terminal.
pip freeze | grep tensorflow
If only the CPU version is installed, then remove it and install the GPU version by executing the following commands.
pip uninstall tensorflow
pip install tensorflow-gpu==1.4.1
If both CPU and GPU versions are installed, then remove both of them, and install the GPU version only.
pip uninstall tensorflow
pip uninstall tensorflow-gpu
pip install tensorflow-gpu==1.4.1
At this point, if all the dependencies of tensorflow are installed correctly, the tensorflow GPU version should work fine. A common error at this stage (as encountered by the OP) is a missing cuDNN library, which can result in the following error while importing tensorflow into a Python module:
ImportError: libcudnn.so.6: cannot open shared object file: No such file or directory
It can be fixed by installing the correct version of NVIDIA's cuDNN library. Tensorflow version 1.4.1 depends on cuDNN version 6.0 and CUDA 8, so we download the corresponding version from the cuDNN archive page (Download Link). We have to log in to an NVIDIA developer account to download the file, so it is not possible to download it using command-line tools such as wget or curl. A possible solution is to download the file on the host system and use scp to copy it onto the AWS instance.
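A sketch of that copy step, assuming an Ubuntu AMI; the key file name and the instance address below are placeholders:
scp -i my-key.pem cudnn-8.0-linux-x64-v6.0.tgz ubuntu@<ec2-public-dns>:~/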
Once copied to AWS, extract the file using the following command:
tar -xzvf cudnn-8.0-linux-x64-v6.0.tgz
The extracted directory should have a structure similar to the CUDA toolkit installation directory. Assuming that the CUDA toolkit is installed in the directory /usr/local/cuda, we can install cuDNN by copying the files from the downloaded archive into the corresponding folders of the CUDA toolkit installation directory, followed by the linker update command ldconfig, as follows:
cp cuda/include/* /usr/local/cuda/include
cp cuda/lib64/* /usr/local/cuda/lib64
ldconfig
After this, we should be able to import the tensorflow GPU version into our Python modules.
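As a quick sanity check (a sketch for TF 1.x; device_lib.list_local_devices() lists the devices TensorFlow can see, and a GPU entry should appear in its output):
python -c "from tensorflow.python.client import device_lib; print(device_lib.list_local_devices())"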
A few considerations:
If we are using Python3, pip should be replaced with pip3.
Depending upon user privileges, the commands pip, cp and ldconfig may need to be run with sudo.

How to find Tensorflow Serving version?

To find the Tensorflow version, we can do:
python -c 'import tensorflow as tf; print(tf.__version__)'
Tensorflow Serving is a separate install, so how do I find the version of Tensorflow Serving?
Is it the same as the Tensorflow version? I do not see any references, comments or documentation related to this.
After you have installed tensorflow_model_server, run this:
tensorflow_model_server --version
You will get the version of tf-serving.
In my case, I get:
TensorFlow ModelServer: 1.13.0-rc1+dev.sha.f16e777
TensorFlow Library: 1.13.1
When building TF-Serving, you are going to do:
git clone --recursive https://github.com/tensorflow/serving
cd serving
From there, you can run:
git branch --list -a
This will list all the possible TF-Serving versions. At the time of writing, I have:
remotes/origin/master
remotes/origin/r0.5.1
Then, you can checkout the branch you want before you build TF-Serving:
git checkout r0.5.1
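After checking out the branch, the build and version check would look roughly like this (a sketch; the model_servers target name is taken from the TF-Serving build docs and may differ between releases):
bazel build -c opt tensorflow_serving/model_servers:tensorflow_model_server
bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server --version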
If one installs Tensorflow Serving using apt-get, the installed version can be found with apt list --installed.
For a pip installation, use pip freeze.
For a conda installation, use conda list.
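For example, to filter those listings down to the serving packages (the package names here are assumptions for a typical setup: the Debian package is usually tensorflow-model-server and the pip/conda package tensorflow-serving-api):
apt list --installed 2>/dev/null | grep tensorflow-model-server
pip freeze | grep tensorflow-serving
conda list | grep tensorflow-serving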

translate.py can't be found in /rnn/translate folder

I installed tensorflow with this method:
export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.10.0-cp27-none-linux_x86_64.whl
sudo pip install --upgrade $TF_BINARY_URL
But when I go into the tensorflow folder, I can't find the translate.py file in the translate folder. All the files in translate are listed here:
It seems that the example files were excluded from the pip installation packages of tensorflow (see issue #4574) without updating the docs.
Cloning the models repo should help, as it contains all the tutorial files.
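A minimal sketch of that workaround, using the models repository mentioned in the first question (the tutorial code then lives in the cloned repo rather than in the pip package):
git clone https://github.com/tensorflow/models.git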