I'm trying to upload my saved model to ML Engine so I can consume my model online; however, I am getting the error below:
I am using TensorFlow 1.5 locally to train my model, based on the TensorFlow for Poets tutorial (https://codelabs.developers.google.com/codelabs/tensorflow-for-poets/).
I am then converting my model using the following save_model.py script:
import tensorflow as tf
from tensorflow.python.saved_model import signature_constants
from tensorflow.python.saved_model import tag_constants
from tensorflow.python.saved_model import builder as saved_model_builder

input_graph = 'retrained_graph.pb'
saved_model_dir = 'my_model'

with tf.Graph().as_default() as graph:
    # Read in the export graph
    with tf.gfile.FastGFile(input_graph, 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
        tf.import_graph_def(graph_def, name='')

    # Define SavedModel Signature (inputs and outputs)
    in_image = graph.get_tensor_by_name('DecodeJpeg/contents:0')
    inputs = {'image_bytes': tf.saved_model.utils.build_tensor_info(in_image)}

    out_classes = graph.get_tensor_by_name('final_result:0')
    outputs = {'prediction': tf.saved_model.utils.build_tensor_info(out_classes)}

    signature = tf.saved_model.signature_def_utils.build_signature_def(
        inputs=inputs,
        outputs=outputs,
        method_name='tensorflow/serving/predict'
    )

    with tf.Session(graph=graph) as sess:
        # Save out the SavedModel.
        b = saved_model_builder.SavedModelBuilder(saved_model_dir)
        b.add_meta_graph_and_variables(sess,
                                       [tf.saved_model.tag_constants.SERVING],
                                       signature_def_map={'serving_default': signature})
        b.save()
Is the error message saying "please use runtime 1.2 or above" talking about the TensorFlow version? Or is my save_model.py doing something incorrectly?
You will need to use gcloud to deploy your model. The console does not let you manually specify the runtime version (i.e. it assumes TensorFlow 1.0). Further note that 1.5 is not yet available but will be very soon. That said, your model might work with 1.4, so it's worth a try.
The command to run is (here v1 is a placeholder for whatever version name you choose):
gcloud ml-engine versions create v1 --model mymodel --origin=gs://mybucket --runtime-version 1.4
And in the near future you can use --runtime-version 1.5.
For more info, see the reference docs, particularly the gcloud examples.
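Once the version is created, a minimal Python sketch for calling the online prediction API could look like this (project, model, version, and image names are placeholders; the input key matches the signature from the question):

import base64
from googleapiclient import discovery

service = discovery.build('ml', 'v1')
name = 'projects/my-project/models/mymodel/versions/v1'

with open('daisy.jpg', 'rb') as f:
    # Input keys ending in '_bytes' must be sent base64-encoded in JSON.
    instances = [{'image_bytes': {'b64': base64.b64encode(f.read()).decode('utf-8')}}]

response = service.projects().predict(name=name, body={'instances': instances}).execute()
print(response.get('predictions', response))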
I'm trying to load a pretrained retinanet model with keras by running:
# import keras
import keras
# import keras_retinanet
from keras_retinanet import models
from keras_retinanet.utils.image import read_image_bgr, preprocess_image, resize_image
from keras_retinanet.utils.visualization import draw_box, draw_caption
from keras_retinanet.utils.colors import label_color

import os

# set tf backend to allow memory to grow, instead of claiming everything
import tensorflow as tf

def get_session():
    config = tf.ConfigProto()
    config.gpu_options.allow_growth = True
    return tf.Session(config=config)

model_path = os.path.join('sample_data/snapshots', sorted(os.listdir('sample_data/snapshots'), reverse=True)[0])
print(model_path)

# load retinanet model
model = models.load_model(model_path, backbone_name='resnet50')
model = models.convert_model(model)
I am facing the following error with both calls:
OSError: SavedModel file does not exist at: sample_data/snapshots/training_5000(640_480).h5/{saved_model.pbtxt|saved_model.pb}
The cause might be newer versions of Keras or TensorFlow, so I am going to list the versions that I am currently using:
keras.__version__
2.4.3
tf.__version__
2.4.1
Note: I am trying to run this code in my Colab.
I load a saved h5 model and want to save the model as pb.
The model is saved during training with the tf.keras.callbacks.ModelCheckpoint callback function.
TF version: 2.0.0a
edit: same issue also with 2.0.0-beta1
My steps to save a pb:
1. I first set K.set_learning_phase(0).
2. Then I load the model with tf.keras.models.load_model.
3. Then I define the freeze_session() function.
4. (Optionally, I compile the model.)
5. Then I call freeze_session() with tf.keras.backend.get_session.
The error I get, with and without compiling:
AttributeError: module 'tensorflow.python.keras.api._v2.keras.backend' has no attribute 'get_session'
My Question:
Does TF2 not have the get_session anymore?
(I know that tf.contrib.saved_model.save_keras_model does not exist anymore, and I also tried tf.saved_model.save, which did not really work.)
Or does get_session only work when I actually train the model, and just loading the h5 does not work?
Edit: Even with a freshly trained model, no get_session is available.
If so, how would I go about to convert the h5 without training to pb? Is there a good tutorial?
Thank you for your help
Update:
Since the official release of TF 2.x, the graph/session concept has changed; the SavedModel API should be used.
You can use tf.compat.v1.disable_eager_execution() with TF 2.x, and it will result in a pb file. However, I am not sure what kind of pb file it is, as the SavedModel composition changed from TF1 to TF2. I will keep digging.
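For reference, here is a minimal TF2-native sketch (paths are placeholders): load the h5 and write a SavedModel directory directly, with no graph/session handling.

import tensorflow as tf

# Load the trained h5 model and export it with the TF2 SavedModel API.
# This writes saved_model.pb plus a variables/ directory.
model = tf.keras.models.load_model('/path/to/model.h5')
tf.saved_model.save(model, '/path/to/saved_model_dir')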
Here is how I save the model to pb from an h5 model:
import logging
import tensorflow as tf
from tensorflow.compat.v1 import graph_util
from tensorflow.python.keras import backend as K
from tensorflow import keras

# necessary !!!
tf.compat.v1.disable_eager_execution()

h5_path = '/path/to/model.h5'
model = keras.models.load_model(h5_path)
model.summary()

# save pb
with K.get_session() as sess:
    output_names = [out.op.name for out in model.outputs]
    input_graph_def = sess.graph.as_graph_def()
    # clear device assignments so the graph is portable
    for node in input_graph_def.node:
        node.device = ""
    graph = graph_util.remove_training_nodes(input_graph_def)
    graph_frozen = graph_util.convert_variables_to_constants(sess, graph, output_names)
    # write_graph takes the directory and file name separately
    tf.io.write_graph(graph_frozen, '/path/to/pb', 'model.pb', as_text=False)
    logging.info("save pb successfully!")
I use TF2 to convert a model like this:
1. Pass keras.callbacks.ModelCheckpoint(save_weights_only=True) to model.fit and save a checkpoint while training;
2. After training, self.model.load_weights(self.checkpoint_path) loads the checkpoint;
3. self.model.save(h5_path, overwrite=True, include_optimizer=False) saves the model as h5;
4. Convert the h5 to pb just as above (see the sketch of steps 1-3 below).
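Here is a minimal, self-contained sketch of steps 1-3 with a toy model standing in for the real one (model, data, and paths are made up for illustration; the h5 save requires h5py):

import numpy as np
import tensorflow as tf

# Toy stand-in for the real model.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer='adam', loss='mse')

# 1. Save weights-only checkpoints while training.
ckpt = tf.keras.callbacks.ModelCheckpoint('ckpt/weights', save_weights_only=True)
model.fit(np.random.rand(8, 4), np.random.rand(8, 1), epochs=1, callbacks=[ckpt])

# 2. Reload the checkpoint after training.
model.load_weights('ckpt/weights')

# 3. Save as h5 without optimizer state; then convert h5 to pb as above.
model.save('model.h5', overwrite=True, include_optimizer=False)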
I'm wondering the same thing, as I'm trying to use get_session() and set_session() to free up GPU memory. These functions seem to be missing and aren't in the TF2.0 Keras documentation. I imagine it has something to do with Tensorflow's switch to eager execution, as direct session access is no longer required.
Use
from tensorflow.compat.v1.keras.backend import get_session
in Keras 2 and TensorFlow 2.2, then call:
import logging
import tensorflow as tf
from tensorflow.compat.v1 import graph_util
from tensorflow.python.keras import backend as K
from tensorflow import keras
from tensorflow.compat.v1.keras.backend import get_session

# necessary !!!
tf.compat.v1.disable_eager_execution()

h5_path = '/path/to/model.h5'
model = keras.models.load_model(h5_path)
model.summary()

# save pb
with get_session() as sess:
    output_names = [out.op.name for out in model.outputs]
    input_graph_def = sess.graph.as_graph_def()
    # clear device assignments so the graph is portable
    for node in input_graph_def.node:
        node.device = ""
    graph = graph_util.remove_training_nodes(input_graph_def)
    graph_frozen = graph_util.convert_variables_to_constants(sess, graph, output_names)
    # write_graph takes the directory and file name separately
    tf.io.write_graph(graph_frozen, '/path/to/pb', 'model.pb', as_text=False)
    logging.info("save pb successfully!")
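As a quick sanity check, a sketch that loads the frozen pb back and inspects it (paths are placeholders):

import tensorflow as tf

# Parse the frozen GraphDef from disk.
graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile('/path/to/pb/model.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

# Import it into a fresh graph and list a few node names.
with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name='')
print([n.name for n in graph_def.node][:5])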
I am looking to use Google Cloud ML to host my Keras models so that I can call the API and make some predictions. I am running into some issues from the Keras side of things.
So far I have been able to build a model using TensorFlow and deploy it on CloudML. In order for this to work I had to make some changes to my basic TF code. The changes are documented here: https://cloud.google.com/ml/docs/how-tos/preparing-models#code_changes
I have also been able to train a similar model using Keras. I can even save the model in the same export and export.meta format as I would get with TF.
import tensorflow as tf
from keras import backend as K

saver = tf.train.Saver()
session = K.get_session()
saver.save(session, 'export')
The part I am missing is: how do I add the placeholders for input and output to the graph I build in Keras?
After training your model on Google Cloud ML Engine (check out this awesome tutorial), I named the input and output of my graph with:
signature = predict_signature_def(inputs={'NAME_YOUR_INPUT': new_Model.input},
                                  outputs={'NAME_YOUR_OUTPUT': new_Model.output})
You can see the full exporting example for an already trained keras model 'model.h5' below.
import keras.backend as K
import tensorflow as tf
from keras.models import load_model, Sequential
from tensorflow.python.saved_model import builder as saved_model_builder
from tensorflow.python.saved_model import tag_constants, signature_constants
from tensorflow.python.saved_model.signature_def_utils_impl import predict_signature_def

# reset session
K.clear_session()
sess = tf.Session()
K.set_session(sess)

# disable loading of learning nodes
K.set_learning_phase(0)

# load model
model = load_model('model.h5')
config = model.get_config()
weights = model.get_weights()
new_Model = Sequential.from_config(config)
new_Model.set_weights(weights)

# export saved model
export_path = 'YOUR_EXPORT_PATH' + '/export'
builder = saved_model_builder.SavedModelBuilder(export_path)

signature = predict_signature_def(inputs={'NAME_YOUR_INPUT': new_Model.input},
                                  outputs={'NAME_YOUR_OUTPUT': new_Model.output})

with K.get_session() as sess:
    builder.add_meta_graph_and_variables(sess=sess,
                                         tags=[tag_constants.SERVING],
                                         signature_def_map={
                                             signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: signature})
    builder.save()
You can also see my full implementation.
edit: And if my answer solved your problem, just leave me an uptick here :)
I found out that in order to use Keras on Google Cloud, one has to install it with a setup.py script and place it in the same folder where you run the gcloud command:
├── setup.py
└── trainer
    ├── __init__.py
    ├── cloudml-gpu.yaml
    └── example5-keras.py
And in the setup.py you put content such as:
from setuptools import setup, find_packages

setup(name='example5',
      version='0.1',
      packages=find_packages(),
      description='example to run keras on gcloud ml-engine',
      author='Fuyang Liu',
      author_email='fuyang.liu@example.com',
      license='MIT',
      install_requires=[
          'keras',
          'h5py'
      ],
      zip_safe=False)
Then you can start your job on gcloud like this:
export BUCKET_NAME=tf-learn-simple-sentiment
export JOB_NAME="example_5_train_$(date +%Y%m%d_%H%M%S)"
export JOB_DIR=gs://$BUCKET_NAME/$JOB_NAME
export REGION=europe-west1
gcloud ml-engine jobs submit training $JOB_NAME \
  --job-dir gs://$BUCKET_NAME/$JOB_NAME \
  --runtime-version 1.0 \
  --module-name trainer.example5-keras \
  --package-path ./trainer \
  --region $REGION \
  --config=trainer/cloudml-gpu.yaml \
  -- \
  --train-file gs://tf-learn-simple-sentiment/sentiment_set.pickle
To use a GPU, add a file such as cloudml-gpu.yaml to your module with the following content:
trainingInput:
  scaleTier: CUSTOM
  # standard_gpu provides 1 GPU. Change to complex_model_m_gpu for 4 GPUs.
  masterType: standard_gpu
  runtimeVersion: "1.0"
I don't know much about Keras. I consulted with some experts, and the following should work:
import json
import tensorflow as tf
from keras import backend as K

# Build the model first
model = ...

# Declare the inputs and outputs for CloudML
inputs = dict(zip((layer.name for layer in model.input_layers),
                  (t.name for t in model.inputs)))
tf.add_to_collection('inputs', json.dumps(inputs))

outputs = dict(zip((layer.name for layer in model.output_layers),
                   (t.name for t in model.outputs)))
tf.add_to_collection('outputs', json.dumps(outputs))

# Fit/train the model
model.fit(...)

# Export the model
saver = tf.train.Saver()
session = K.get_session()
saver.save(session, 'export')
Some important points:
- You have to call tf.add_to_collection after you create the model but before you ever call K.get_session(), fit, etc.
- Be sure to set the names of the input and output layers when you add them to the graph, because you'll need to refer to them when you send prediction requests (see the sketch after this list).
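For illustration, a hypothetical sketch of naming layers at construction time (the names my_input and my_output are made up):

from keras.layers import Dense, Input
from keras.models import Model

# Explicit layer names give the inputs/outputs collections stable,
# human-readable keys to use in prediction requests.
inp = Input(shape=(10,), name='my_input')
out = Dense(1, name='my_output')(inp)
model = Model(inputs=inp, outputs=out)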
Here's another answer that may help. Assuming you already have a Keras model, you should be able to append this to the end of your script and get an ML Engine compatible version of the model (a protocol buffer). Note that you need to upload the saved_model.pb file and the sibling variables directory to ML Engine for it to work. Note also that the .pb file must be named saved_model.pb or saved_model.pbtxt.
Assuming your model is named model:
from keras import backend as K
from tensorflow import saved_model

# assumes `model` is your trained Keras model
model_builder = saved_model.builder.SavedModelBuilder("exported_model")

inputs = {
    'input': saved_model.utils.build_tensor_info(model.input)
}
outputs = {
    'earnings': saved_model.utils.build_tensor_info(model.output)
}

signature_def = saved_model.signature_def_utils.build_signature_def(
    inputs=inputs,
    outputs=outputs,
    method_name=saved_model.signature_constants.PREDICT_METHOD_NAME
)

model_builder.add_meta_graph_and_variables(
    K.get_session(),
    tags=[saved_model.tag_constants.SERVING],
    signature_def_map={saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: signature_def})

model_builder.save()
This will export the model to the exported_model directory.
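To sanity-check the export, a TF1-style sketch that loads the SavedModel back into a fresh session:

import tensorflow as tf

# Loading with the SERVING tag confirms the directory is servable.
with tf.Session(graph=tf.Graph()) as sess:
    tf.saved_model.loader.load(sess, [tf.saved_model.tag_constants.SERVING], 'exported_model')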
I was wondering how to use my model trained with keras on a production server.
I heard about tensorflow serving, but I can't figure out how to use it with my keras model.
I found this link : https://blog.keras.io/keras-as-a-simplified-interface-to-tensorflow-tutorial.html
But I don't know how to initialize the sess variable, since my model is already trained and everything.
Is there any way to do this?
You can initialise your session variable as:
from keras import backend as K
sess = K.get_session()
and go about exporting the model as in the tutorial (note that the import for exporter has changed):
import tensorflow as tf
from tensorflow.contrib.session_bundle import exporter

K.set_learning_phase(0)

export_path = ...  # where to save the exported graph
export_version = ...  # version number (integer)

saver = tf.train.Saver(sharded=True)
model_exporter = exporter.Exporter(saver)
signature = exporter.classification_signature(input_tensor=model.input,
                                              scores_tensor=model.output)
model_exporter.init(sess.graph.as_graph_def(),
                    default_graph_signature=signature)
model_exporter.export(export_path, tf.constant(export_version), sess)
A good alternative to TensorFlow Serving can be TensorCraft, a simple HTTP server that stores models (I'm the author of this tool). Currently it supports only the TensorFlow SavedModel format.
Before using the model, you need to export it using the TensorFlow API, pack it into a TAR archive, and push it to the server (a sketch of the packing step is below).
You can find more details in the project documentation.
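The packing step could look like this minimal sketch (assuming exported_model is a SavedModel directory; the actual push step is described in the TensorCraft docs):

import tarfile

# Pack the SavedModel directory into a TAR archive for upload.
with tarfile.open('model.tar', 'w') as tar:
    tar.add('exported_model', arcname='.')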
So I have trained the Inception model to recognize flowers according to this guide: https://www.tensorflow.org/versions/r0.8/how_tos/image_retraining/index.html
bazel build tensorflow/examples/image_retraining:retrain
bazel-bin/tensorflow/examples/image_retraining/retrain --image_dir ~/flower_photos
To classify the image via command line, I can do this:
bazel build tensorflow/examples/label_image:label_image && \
bazel-bin/tensorflow/examples/label_image/label_image \
--graph=/tmp/output_graph.pb --labels=/tmp/output_labels.txt \
--output_layer=final_result \
--image=$HOME/flower_photos/daisy/21652746_cc379e0eea_m.jpg
But how do I serve this graph via Tensorflow serving?
The guide about setting up TensorFlow Serving (https://tensorflow.github.io/serving/serving_basic) does not explain how to incorporate the graph (output_graph.pb). The server expects a different file format:
$>ls /tmp/mnist_model/00000001
checkpoint export-00000-of-00001 export.meta
To serve the graph after you have trained it, you would need to export it using this api: https://www.tensorflow.org/versions/r0.8/api_docs/python/train.html#export_meta_graph
That API generates the metagraph def that is needed by the serving code (this will generate the .meta file you are asking about).
Also, you need to save a checkpoint using Saver.save(), i.e. the Saver class: https://www.tensorflow.org/versions/r0.8/api_docs/python/train.html#Saver
Once you have done this, you will have both the metagraph def and the checkpoint files that are needed to restore the graph (a minimal sketch follows below).
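A minimal sketch of that flow (TF r0.8 era), using a toy variable in place of a real trained graph:

import tensorflow as tf

# Any trained graph works here; a single variable keeps the sketch short.
v = tf.Variable([1.0], name='v')
saver = tf.train.Saver(sharded=True)
with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    # Writes the sharded checkpoint (export-00000-of-00001) and export.meta.
    saver.save(sess, 'export')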
You have to export the model. I have a PR that exports the model during retraining. The gist of it is below:
import tensorflow as tf

def export_model(sess, architecture, saved_model_dir):
    if architecture == 'inception_v3':
        input_tensor = 'DecodeJpeg/contents:0'
    elif architecture.startswith('mobilenet_'):
        input_tensor = 'input:0'
    else:
        raise ValueError('Unknown architecture', architecture)
    in_image = sess.graph.get_tensor_by_name(input_tensor)
    inputs = {'image': tf.saved_model.utils.build_tensor_info(in_image)}

    out_classes = sess.graph.get_tensor_by_name('final_result:0')
    outputs = {'prediction': tf.saved_model.utils.build_tensor_info(out_classes)}

    signature = tf.saved_model.signature_def_utils.build_signature_def(
        inputs=inputs,
        outputs=outputs,
        method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME
    )

    legacy_init_op = tf.group(tf.tables_initializer(), name='legacy_init_op')

    # Save out the SavedModel.
    builder = tf.saved_model.builder.SavedModelBuilder(saved_model_dir)
    builder.add_meta_graph_and_variables(
        sess, [tf.saved_model.tag_constants.SERVING],
        signature_def_map={
            tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: signature
        },
        legacy_init_op=legacy_init_op)
    builder.save()
The above will create a variables directory and a saved_model.pb file. If you put them under a parent directory representing the version number (e.g. 1/), then you can call TensorFlow Serving via:
tensorflow_model_server --port=9000 --model_name=inception --model_base_path=/path/to/saved_models/
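A hedged client sketch for querying that server over gRPC, using the older TF Serving beta API (host, port, and image path are placeholders; 'image' matches the input key exported above):

import tensorflow as tf
from grpc.beta import implementations
from tensorflow_serving.apis import predict_pb2, prediction_service_pb2

channel = implementations.insecure_channel('localhost', 9000)
stub = prediction_service_pb2.beta_create_PredictionService_stub(channel)

request = predict_pb2.PredictRequest()
request.model_spec.name = 'inception'
request.model_spec.signature_name = 'serving_default'
with open('daisy.jpg', 'rb') as f:
    # The model's input is the raw JPEG bytes as a scalar string tensor.
    request.inputs['image'].CopyFrom(
        tf.contrib.util.make_tensor_proto(f.read(), shape=[]))

result = stub.Predict(request, 10.0)  # 10-second timeout
print(result.outputs['prediction'])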
Check out this gist on how to load your .pb output graph in a Session:
https://github.com/eldor4do/Tensorflow-Examples/blob/master/retraining-example.py