How to install TensorFlow on Google Cloud Platform

When I use the command pip install tensorflow, the download gets to only 99% complete and terminates at that point. How can I install TensorFlow using Google Cloud Shell?

Instead of installing it yourself, you can use the Machine Learning API and use TensorFlow for training or inference. Just follow these guidelines: https://cloud.google.com/ml/docs/quickstarts/training
You can submit a TensorFlow job like this:
gcloud beta ml jobs submit training ${JOB_NAME} \
--package-path=trainer \
--module-name=trainer.task \
--staging-bucket="${TRAIN_BUCKET}" \
--region=us-central1 \
-- \
--train_dir="${TRAIN_PATH}/train"

Related

freeze model for inference with output_node_name for ssd mobilenet v1 coco

I want to compile a TensorFlow graph to a Movidius graph. I have used the Model Zoo's ssd_mobilenet_v1_coco model and trained it on my own dataset.
Then I ran:
python object_detection/export_inference_graph.py \
--input_type=image_tensor \
--pipeline_config_path=/home/redtwo/nsir/ssd_mobilenet_v1_coco.config \
--trained_checkpoint_prefix=/home/redtwo/nsir/train/model.ckpt-3362 \
--output_directory=/home/redtwo/nsir/output
which generates frozen_inference_graph.pb and saved_model/saved_model.pb.
Now I want to convert this saved model into a Movidius graph. These are the commands given:
Export GraphDef file
python3 ../models/research/slim/export_inference_graph.py \
--alsologtostderr \
--model_name=inception_v3 \
--output_file=inception_v3.pb
Freeze model for inference
python3 ../tensorflow/tensorflow/python/tools/freeze_graph.py \
--input_graph=inception_v3.pb \
--input_binary=true \
--input_checkpoint=inception_v3.ckpt \
--output_graph=inception_v3_frozen.pb \
--output_node_name=InceptionV3/Predictions/Reshape_1
which can finally be fed to the Intel Movidius NCSDK:
mvNCCompile -s 12 inception_v3_frozen.pb -in=input -on=InceptionV3/Predictions/Reshape_1
All of this is given on the Intel Movidius website here: https://movidius.github.io/ncsdk/tf_modelzoo.html
My model is already trained, i.e. I have output/frozen_inference_graph.pb. Why do I need to freeze it again using slim/export_inference_graph.py? Or is it output/saved_model/saved_model.pb that should go as input to slim/export_inference_graph.py?
All I want is the equivalent of output_node_name=InceptionV3/Predictions/Reshape_1 for my model. How do I get this output_node_name structure, and what is inside it? I don't know what it contains.
What output node should I use for the Model Zoo's ssd_mobilenet_v1_coco model (trained on my own custom dataset)?
python freeze_graph.py \
--input_graph=/path/to/graph.pbtxt \
--input_checkpoint=/path/to/model.ckpt-22480 \
--input_binary=false \
--output_graph=/path/to/frozen_graph.pb \
--output_node_names="the nodes that you want to output, e.g. InceptionV3/Predictions/Reshape_1 for Inception V3"
Things I understand & don't understand:
input_checkpoint: ✓ [checkpoints that were created during training]
output_graph: ✓ [path to the output frozen graph]
output_node_names: ✗
I don't understand the output_node_names parameter and what should go in it, given that this is ssd_mobilenet, not inception_v3.
System information
What is the top-level directory of the model you are using:
Have I written custom code (as opposed to using a stock example script provided in TensorFlow):
OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 16.04
TensorFlow installed from (source or binary): TensorFlow installed with pip
TensorFlow version (use command below): 1.13.1
Bazel version (if compiling from source):
CUDA/cuDNN version: V10.1.168/7.*
GPU model and memory: 2080Ti 11Gb
Exact command to reproduce:
The graph in saved_model/saved_model.pb is the graph definition (graph architecture) of the pretrained inception_v3 model, without the weights loaded into the graph. The frozen_inference_graph.pb is the graph frozen with the checkpoints you have provided, using the default output nodes of the inception_v3 model.
To get the output node names, the summarize_graph tool can be used.
You can use the commands below to run the summarize_graph tool if Bazel is installed:
bazel build tensorflow/tools/graph_transforms:summarize_graph
bazel-bin/tensorflow/tools/graph_transforms/summarize_graph \
--in_graph=/tmp/inception_v3_inf_graph.pb
If Bazel is not installed, the output nodes can be obtained using TensorBoard or another graph visualization tool such as Netron.
The additional freeze_graph.py can be used to freeze the graph while specifying the output nodes (i.e., in a case where additional output nodes have been added to InceptionV3). The frozen_inference_graph.pb is an equally good fit for inference.
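If you prefer not to build the Bazel tool, a short Python sketch can also list candidate output nodes directly from a frozen graph. This is a minimal sketch assuming TensorFlow 1.x; the file name frozen_inference_graph.pb is an assumption based on the export step above:
import tensorflow as tf

# Load the frozen GraphDef (TF 1.x API; the path is an assumption).
graph_def = tf.GraphDef()
with tf.gfile.GFile('frozen_inference_graph.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

# Collect every node name that some other node consumes as an input.
consumed = set()
for node in graph_def.node:
    for inp in node.input:
        consumed.add(inp.split(':')[0].lstrip('^'))

# Nodes that nobody consumes are the candidate output nodes.
for node in graph_def.node:
    if node.name not in consumed:
        print(node.op, node.name)
For ssd_mobilenet_v1_coco exports, this typically surfaces the detection outputs (e.g. detection_boxes, detection_scores, detection_classes, num_detections) rather than an InceptionV3-style prediction node.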

Using TPU on Cloud ML Engine

I am trying to use a TPU on Cloud ML Engine, but I am at a loss as to how I should provide the tpu argument that TPUClusterResolver expects.
This is the environment I am using:
--python-version 3.5 \
--runtime-version 1.12 \
--region us-central1 \
--scale-tier BASIC_TPU
The job crashes with:
ValueError: Please provide a TPU Name to connect to.
As a separate issue, ML Engine seems to be adding --master grpc://10.129.152.2:8470 to my job on its own, which also crashes the job. As a workaround, I just added an unused --master flag to my code.
This was a known issue for runtimes 1.11 and 1.12, and it has been fixed: the service no longer appends --master to your training application. You should continue using TPUClusterResolver.
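For reference, here is a minimal sketch of wiring TPUClusterResolver into a TF 1.12-era training setup. Calling the resolver with no arguments, so that it picks up the TPU provided by the environment, is an assumption based on the BASIC_TPU scale tier above, and the model_dir path is hypothetical:
import tensorflow as tf

# On Cloud ML Engine with --scale-tier BASIC_TPU, the TPU address comes
# from the environment, so no explicit TPU name is passed here (assumption).
resolver = tf.contrib.cluster_resolver.TPUClusterResolver()

# Hand the resolver to a TPU-aware RunConfig (TF 1.12 contrib API).
run_config = tf.contrib.tpu.RunConfig(
    cluster=resolver,
    model_dir='gs://my-bucket/model',  # hypothetical output path
    tpu_config=tf.contrib.tpu.TPUConfig(iterations_per_loop=100),
)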

Google Cloud ML: Use Nightly TF Import Error No Module tensorflow

I want to train the NMT model from Google on Google Cloud ML.
NMT Model
Now I put all input data in a bucket and downloaded the git repository.
The model needs the nightly version of TensorFlow, so I declared it in setup.py. When I use the CPU version tf-nightly==1.5.0-dev20171115 and run the following command to run it locally on GCP, it works.
Train locally on Google Cloud:
gcloud ml-engine local train --package-path nmt/ \
--module-name nmt.nmt \
-- --src=en --tgt=de \
--hparams_path=$HPARAMAS_PATH \
--out_dir=$OUTPUT_DIR \
--vocab_prefix=$VOCAB_PREFIX \
--train_prefix=$TRAIN_PREFIX \
--dev_prefix=$DEV_PREFIX \
--test_prefix=$TEST_PREFIX
Now, when I use the GPU version with the following command, I get this error message a few minutes after submitting the job.
Train on the cloud:
gcloud ml-engine jobs submit training $JOB_NAME \
--runtime-version 1.2 \
--job-dir $JOB_DIR \
--package-path nmt/ \
--module-name nmt.nmt \
--scale-tier BASIC_GPU \
--region $REGION \
-- --src=en --tgt=de \
--hparams_path=$HPARAMAS_PATH \
--out_dir=$OUTPUT_DIR \
--vocab_prefix=$VOCAB_PREFIX \
--train_prefix=$TRAIN_PREFIX \
--dev_prefix=$DEV_PREFIX \
--test_prefix=$TEST_PREFIX
Error:
import tensorflow as tf ImportError: No module named tensorflow
setup.py:
from setuptools import find_packages
from setuptools import setup

REQUIRED_PACKAGES = ['tf-nightly-gpu==1.5.0-dev20171115']

setup(
    name="nmt",
    install_requires=REQUIRED_PACKAGES,
    packages=find_packages(),
    include_package_data=True,
    version='0.1.2'
)
Thank you all in advance
Markus
Update:
I have found a note in the GCP docs:
Note: Training with TensorFlow versions 1.3+ is limited to CPUs only. See the Cloud ML Engine release notes for updates.
So it seems this doesn't work currently; I think I have to go with Compute Engine instead.
Or is there any hack to get it working?
Thank you anyway for your help.
TensorFlow 1.5 may need a newer version of CUDA (i.e., CUDA 9), but the version Cloud ML Engine has installed is CUDA 8. Can you please try TensorFlow 1.4 instead, which works on CUDA 8? Please tell us here whether 1.4 works for you, or send us an email via cloudml-feedback@google.com.
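For concreteness, a minimal sketch of the corresponding setup.py change, assuming the suggestion above (the exact tensorflow-gpu==1.4.0 pin is an assumption):
from setuptools import find_packages
from setuptools import setup

# Pin a CUDA 8-compatible GPU build instead of the nightly build
# (the exact 1.4.x patch level is an assumption).
REQUIRED_PACKAGES = ['tensorflow-gpu==1.4.0']

setup(
    name="nmt",
    install_requires=REQUIRED_PACKAGES,
    packages=find_packages(),
    include_package_data=True,
    version='0.1.2'
)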

Tensorflow couldn't open CUDA library libcupti.so.8.0 during training on google cloud

I'm trying to train a model using TensorFlow on Google Cloud ML Engine. It seems that TensorFlow can't get to the libcupti files on the cloud compute machine because LD_LIBRARY_PATH does not point to the correct directory, as implied by the log entry below:
lineno: 126
message: "Couldn't open CUDA library libcupti.so.8.0.
LD_LIBRARY_PATH: /usr/local/cuda/lib64"
levelname: "INFO"
pathname: "tensorflow/stream_executor/dso_loader.cc"
created: 1491143889.84344
As far as I know, the libcupti files are all in /usr/local/cuda/extras/CUPTI/lib64, so I would need to append this to the LD_LIBRARY_PATH variable. But how would I do that when submitting a job via a gcloud ml-engine jobs submit training $JOB_NAME command? Or maybe there's an easier solution?
I tried using a GPU with TensorFlow on Google Cloud and it works for me. In my code I didn't do any GPU-specific setup (nor set anything with LD_LIBRARY_PATH).
I think you can start with simple, standard TensorFlow code; when you submit the job, attach a config, and the job should automatically use a GPU for the computation.
Try adding a file such as cloudml-gpu.yaml to your module with the following content:
trainingInput:
  scaleTier: CUSTOM
  # standard_gpu provides 1 GPU. Change to complex_model_m_gpu for 4 GPUs.
  masterType: standard_gpu
  runtimeVersion: "1.0"
Then add an option --config=trainer/cloudml-gpu.yaml (assuming your training module is in a folder called trainer). For example:
export BUCKET_NAME=tf-learn-simple-sentiment
export JOB_NAME="example_5_train_$(date +%Y%m%d_%H%M%S)"
export JOB_DIR=gs://$BUCKET_NAME/$JOB_NAME
export REGION=europe-west1
gcloud ml-engine jobs submit training $JOB_NAME \
--job-dir gs://$BUCKET_NAME/$JOB_NAME \
--runtime-version 1.0 \
--module-name trainer.example5-keras \
--package-path ./trainer \
--region $REGION \
--config=trainer/cloudml-gpu.yaml \
-- \
--train-file gs://tf-learn-simple-sentiment/sentiment_set.pickle

Keras on Google Cloud ML does not seem to use GPU? Is it possible to make it work?

I tried running Keras with the TensorFlow backend on Cloud ML (Google Cloud Platform). I find that Keras does not seem to use the GPU. Running one epoch takes 190 seconds on my CPU, which is equal to what I see in the dumped logs. Is there a way to identify whether code is running on the GPU or the CPU in Keras? Has anybody tried Keras on Cloud ML with the TensorFlow backend?
Update: As of March 2017, GPUs are publicly available. See Fuyang Liu's answer below.
GPUs are not currently available on CloudML. However, they will be in the upcoming months.
Yes, it is supported now.
Basically, you need to add a file such as cloudml-gpu.yaml to your module with the following content:
trainingInput:
  scaleTier: CUSTOM
  # standard_gpu provides 1 GPU. Change to complex_model_m_gpu for 4 GPUs.
  masterType: standard_gpu
  runtimeVersion: "1.0"
Then add an option --config=trainer/cloudml-gpu.yaml (assuming your training module is in a folder called trainer). For example:
export BUCKET_NAME=tf-learn-simple-sentiment
export JOB_NAME="example_5_train_$(date +%Y%m%d_%H%M%S)"
export JOB_DIR=gs://$BUCKET_NAME/$JOB_NAME
export REGION=europe-west1
gcloud ml-engine jobs submit training $JOB_NAME \
--job-dir gs://$BUCKET_NAME/$JOB_NAME \
--runtime-version 1.0 \
--module-name trainer.example5-keras \
--package-path ./trainer \
--region $REGION \
--config=trainer/cloudml-gpu.yaml \
-- \
--train-file gs://tf-learn-simple-sentiment/sentiment_set.pickle
You may also want to check out this URL for the regions where GPUs are available and other information.
import keras.backend.tensorflow_backend as K
K.set_session(K.tf.Session(config=K.tf.ConfigProto(log_device_placement=True)))
should make Keras print the device placement of each tensor to stdout or stderr.
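Alternatively, a minimal check that lists the devices TensorFlow can see (this uses the device_lib utility from TensorFlow 1.x; if a GPU is attached, a GPU-typed device entry should appear):
from tensorflow.python.client import device_lib

# Prints every device visible to TensorFlow; a GPU shows up as
# a device with device_type == 'GPU'.
for device in device_lib.list_local_devices():
    print(device.name, device.device_type)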