I have looked at several posts on Stack Overflow and have been at it for a few days now, but alas, I'm not able to properly serve an object detection model through TensorFlow Serving.
I have visited the following links:
How to properly serve an object detection model from Tensorflow Object Detection API?
and
https://github.com/tensorflow/tensorflow/issues/11863
Here's what I have done.
I have downloaded the ssd_mobilenet_v1_coco_11_06_2017.tar.gz, which contains the following files:
frozen_inference_graph.pb
graph.pbtxt
model.ckpt.data-00000-of-00001
model.ckpt.index
model.ckpt.meta
Using the following script, I was able to successfully convert frozen_inference_graph.pb to a SavedModel (under the directory ssd_mobilenet_v1_coco_11_06_2017/saved):
import tensorflow as tf
from tensorflow.python.saved_model import signature_constants
from tensorflow.python.saved_model import tag_constants
import ipdb

# Specify version 1
export_dir = './saved/1'
graph_pb = 'frozen_inference_graph.pb'

builder = tf.saved_model.builder.SavedModelBuilder(export_dir)

with tf.gfile.GFile(graph_pb, "rb") as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

sigs = {}

with tf.Session(graph=tf.Graph()) as sess:
    # name="" is important to ensure we don't get spurious prefixing
    tf.import_graph_def(graph_def, name="")
    g = tf.get_default_graph()
    ipdb.set_trace()
    inp = g.get_tensor_by_name("image_tensor:0")

    outputs = {}
    outputs["detection_boxes"] = g.get_tensor_by_name('detection_boxes:0')
    outputs["detection_scores"] = g.get_tensor_by_name('detection_scores:0')
    outputs["detection_classes"] = g.get_tensor_by_name('detection_classes:0')
    outputs["num_detections"] = g.get_tensor_by_name('num_detections:0')

    output_tensor = tf.concat([tf.expand_dims(t, 0) for t in outputs], 0)
    # or use tf.gather??
    # out = g.get_tensor_by_name("generator/Tanh:0")

    sigs[signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY] = \
        tf.saved_model.signature_def_utils.predict_signature_def(
            {"in": inp}, {"out": output_tensor})

    sigs["predict_images"] = \
        tf.saved_model.signature_def_utils.predict_signature_def(
            {"in": inp}, {"out": output_tensor})

    builder.add_meta_graph_and_variables(sess,
                                         [tag_constants.SERVING],
                                         signature_def_map=sigs)

builder.save()
When I start the model server as follows, I get the following error:
bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server \
    --port=9000 --model_base_path=/serving/ssd_mobilenet_v1_coco_11_06_2017/saved
2017-09-17 22:33:21.325087: W tensorflow_serving/sources/storage_path/file_system_storage_path_source.cc:268] No versions of servable default found under base path /serving/ssd_mobilenet_v1_coco_11_06_2017/saved/1
I understand I will need a client to connect to the server to do the prediction. However, I'm not even able to serve the model properly.
You need to change the export signature somewhat from what the original post did. The object_detection/export_inference_graph.py script makes the necessary changes for you:
$ OBJECT_DETECTION_CONFIG=object_detection/samples/configs/ssd_mobilenet_v1_pets.config

$ python object_detection/export_inference_graph.py \
    --input_type encoded_image_string_tensor \
    --pipeline_config_path ${OBJECT_DETECTION_CONFIG} \
    --trained_checkpoint_prefix ${YOUR_LOCAL_CHK_DIR}/model.ckpt-${CHECKPOINT_NUMBER} \
    --output_directory ${YOUR_LOCAL_EXPORT_DIR}
For more details on what the program is doing, see:
https://cloud.google.com/blog/big-data/2017/09/performing-prediction-with-tensorflow-object-detection-models-on-google-cloud-machine-learning-engine
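Once that export finishes, you can sanity-check the result before pointing TensorFlow Serving at it. This is just a sketch: the 'exported_model' path stands in for ${YOUR_LOCAL_EXPORT_DIR}, and the exporter writes the SavedModel into a saved_model subdirectory.
import tensorflow as tf
from tensorflow.python.saved_model import tag_constants

# Load the exported SavedModel and list the signatures it exposes.
with tf.Session(graph=tf.Graph()) as sess:
    meta_graph = tf.saved_model.loader.load(
        sess, [tag_constants.SERVING], 'exported_model/saved_model')
    print(list(meta_graph.signature_def.keys()))
    print(meta_graph.signature_def['serving_default'].inputs)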
Related
I deployed my retrained TensorFlow for Poets model on Google Cloud. Currently, I am trying to get predictions from it, but it gives the following error.
"error": "Prediction failed: Error during model execution: AbortionError(code=StatusCode.INVALID_ARGUMENT, details=\"contents must be scalar, got shape [1]\n\t [[{{node DecodeJpeg}}]]\")"
The following is the code I used to export the serving model:
import tensorflow as tf
from tensorflow.python.saved_model import signature_constants
from tensorflow.python.saved_model import tag_constants
from tensorflow.python.saved_model import builder as saved_model_builder
input_graph = 'retrained_graph.pb'
saved_model_dir = 'my_model'
with tf.Graph().as_default() as graph:
    # Read in the export graph
    with tf.gfile.FastGFile(input_graph, 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
        tf.import_graph_def(graph_def, name='')

    # Define SavedModel Signature (inputs and outputs)
    in_image = graph.get_tensor_by_name('DecodeJpeg/contents:0')
    inputs = {'image_bytes': tf.saved_model.utils.build_tensor_info(in_image)}

    out_classes = graph.get_tensor_by_name('final_result:0')
    outputs = {'prediction': tf.saved_model.utils.build_tensor_info(out_classes)}

    signature = tf.saved_model.signature_def_utils.build_signature_def(
        inputs=inputs,
        outputs=outputs,
        method_name='tensorflow/serving/predict'
    )

    with tf.Session(graph=graph) as sess:
        # Save out the SavedModel.
        b = saved_model_builder.SavedModelBuilder(saved_model_dir)
        b.add_meta_graph_and_variables(sess,
                                       [tf.saved_model.tag_constants.SERVING],
                                       signature_def_map={'serving_default': signature})
        b.save()
Code used to generate request.json:
python -c 'import base64, sys, json; img = base64.b64encode(open(sys.argv[1], "rb").read()); print json.dumps({"image_bytes": {"b64": img}})' test.jpg &> request.json
You can try:
{"instances": [{"image_bytes": {"b64": encoded_string}, "key": "0"}]}
I have successfully trained, exported and uploaded my 'retrained_graph.pb' to ML Engine. My export script is as follows:
import tensorflow as tf
from tensorflow.python.saved_model import signature_constants
from tensorflow.python.saved_model import tag_constants
from tensorflow.python.saved_model import builder as saved_model_builder
input_graph = 'retrained_graph.pb'
saved_model_dir = 'my_model'
with tf.Graph().as_default() as graph:
    # Read in the export graph
    with tf.gfile.FastGFile(input_graph, 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
        tf.import_graph_def(graph_def, name='')

    # Define SavedModel Signature (inputs and outputs)
    in_image = graph.get_tensor_by_name('DecodeJpeg/contents:0')
    inputs = {'image_bytes': tf.saved_model.utils.build_tensor_info(in_image)}

    out_classes = graph.get_tensor_by_name('final_result:0')
    outputs = {'prediction_bytes': tf.saved_model.utils.build_tensor_info(out_classes)}

    signature = tf.saved_model.signature_def_utils.build_signature_def(
        inputs=inputs,
        outputs=outputs,
        method_name='tensorflow/serving/predict'
    )

    with tf.Session(graph=graph) as sess:
        # Save out the SavedModel.
        b = saved_model_builder.SavedModelBuilder(saved_model_dir)
        b.add_meta_graph_and_variables(sess,
                                       [tf.saved_model.tag_constants.SERVING],
                                       signature_def_map={'serving_default': signature})
        b.save()
I build my prediction JSON using the following:
# Copy the image to local disk.
gsutil cp gs://cloud-ml-data/img/flower_photos/tulips/4520577328_a94c11e806_n.jpg flower.jpg
# Create request message in json format.
python -c 'import base64, sys, json; img = base64.b64encode(open(sys.argv[1], "rb").read()); print json.dumps({"image_bytes": {"b64": img}}) ' flower.jpg &> request.json
# Call prediction service API to get classifications
gcloud ml-engine predict --model ${MODEL_NAME} --json-instances request.json
However, this fails with the response:
{
  "error": "Prediction failed: Error during model execution: AbortionError(code=StatusCode.INVALID_ARGUMENT, details=\"contents must be scalar, got shape [1]\n\t [[Node: DecodeJpeg = DecodeJpeg[_output_shapes=[[?,?,3]], acceptable_fraction=1, channels=3, dct_method=\"\", fancy_upscaling=true, ratio=1, try_recover_truncated=false, _device=\"/job:localhost/replica:0/task:0/device:CPU:0\"](_arg_DecodeJpeg/contents_0_0)]]\")"
}
Any help appreciated, I'm so close I can taste it :D
Why do you have these lines:
in_image = graph.get_tensor_by_name('DecodeJpeg/contents:0')
inputs = {'image_bytes': tf.saved_model.utils.build_tensor_info(in_image)}
The shape of that tensor is scalar. Can you make sure you create an input with shape [None]?
https://github.com/GoogleCloudPlatform/cloudml-samples/blob/master/flowers/trainer/model.py#L364
The server will decode and batch all the inputs, so the input to your graph is essentially [base64_decode("xxx")], whereas you actually want to feed base64_decode("xxx"), since the op takes a string-type tensor. The server side assumes the input shape is [None, ...], i.e. the first dimension can be anything, to allow batching. In your case, that is [None]. You might want to create a tensor of that shape and then feed it into the op.
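A minimal sketch of that pattern, modeled on the flowers sample linked above (the tensor names and the 299x299 resize are illustrative assumptions, not values taken from your graph):
import tensorflow as tf

# Accept a batch of JPEG-encoded strings: shape [None], one scalar string per instance.
image_str_tensor = tf.placeholder(tf.string, shape=[None], name='image_bytes')

def decode_and_resize(image_str):
    # Each element handed to this function is a scalar string, which is what decode_jpeg expects.
    image = tf.image.decode_jpeg(image_str, channels=3)
    return tf.image.resize_images(image, [299, 299])

# Decode every element of the batch independently -> a [None, 299, 299, 3] float tensor
# that can be wired into the rest of the graph in place of DecodeJpeg/contents.
images = tf.map_fn(decode_and_resize, image_str_tensor, dtype=tf.float32)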
I've been following the TensorFlow for Poets 2 codelab on a model I've trained, and have created a frozen, quantized graph with embedded weights. It's captured in a single file - say my_quant_graph.pb.
Since I can use that graph for inference with the TensorFlow Android inference library just fine, I thought I could do the same with Cloud ML Engine, but it seems it only works with a SavedModel.
How can I simply convert a frozen/quantized graph in a single pb file to use on ML engine?
It turns out that a SavedModel provides some extra info around a saved graph. Assuming a frozen graph doesn't need assets, it only needs a serving signature specified.
Here's the python code I ran to convert my graph to a format that Cloud ML engine accepted. Note I only have a single pair of input/output tensors.
import tensorflow as tf
from tensorflow.python.saved_model import signature_constants
from tensorflow.python.saved_model import tag_constants
export_dir = './saved'
graph_pb = 'my_quant_graph.pb'
builder = tf.saved_model.builder.SavedModelBuilder(export_dir)
with tf.gfile.GFile(graph_pb, "rb") as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

sigs = {}

with tf.Session(graph=tf.Graph()) as sess:
    # name="" is important to ensure we don't get spurious prefixing
    tf.import_graph_def(graph_def, name="")
    g = tf.get_default_graph()
    inp = g.get_tensor_by_name("real_A_and_B_images:0")
    out = g.get_tensor_by_name("generator/Tanh:0")

    sigs[signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY] = \
        tf.saved_model.signature_def_utils.predict_signature_def(
            {"in": inp}, {"out": out})

    builder.add_meta_graph_and_variables(sess,
                                         [tag_constants.SERVING],
                                         signature_def_map=sigs)

builder.save()
Here is a sample with multiple output nodes:
# Convert ProtoBuf model to saved_model, the format for TF Serving
# https://cloud.google.com/ai-platform/prediction/docs/exporting-savedmodel-for-prediction
import shutil
import tensorflow.compat.v1 as tf
from tensorflow.python.saved_model import signature_constants
from tensorflow.python.saved_model import tag_constants

# TF Serving supports running different versions of the same model,
# so we put the current model in the '1' folder.
export_dir = './1'
graph_pb = 'frozen_inference_graph.pb'

# Clear out the folder
shutil.rmtree(export_dir, ignore_errors=True)

builder = tf.saved_model.builder.SavedModelBuilder(export_dir)

with tf.io.gfile.GFile(graph_pb, "rb") as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

sigs = {}

with tf.Session(graph=tf.Graph()) as sess:
    # Prepare inputs and outputs of the model
    tf.import_graph_def(graph_def, name="")
    g = tf.get_default_graph()
    image_tensor = g.get_tensor_by_name("image_tensor:0")
    num_detections = g.get_tensor_by_name("num_detections:0")
    detection_scores = g.get_tensor_by_name("detection_scores:0")
    detection_boxes = g.get_tensor_by_name("detection_boxes:0")
    detection_classes = g.get_tensor_by_name("detection_classes:0")

    sigs[signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY] = \
        tf.saved_model.signature_def_utils.predict_signature_def(
            {"input_image": image_tensor},
            {"num_detections": num_detections,
             "detection_scores": detection_scores,
             "detection_boxes": detection_boxes,
             "detection_classes": detection_classes})

    builder.add_meta_graph_and_variables(sess,
                                         [tag_constants.SERVING],
                                         signature_def_map=sigs)

builder.save()
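As a usage sketch, assuming the resulting SavedModel is served by TensorFlow Serving with its REST API on port 8501 under a hypothetical model name 'detector' (the image path is also a placeholder):
import json

import numpy as np
import requests
from PIL import Image

# Load an image as an [H, W, 3] uint8 array; the server stacks the instances
# into the batch dimension expected by image_tensor:0.
image = np.array(Image.open('test.jpg').convert('RGB'))

payload = {"instances": [{"input_image": image.tolist()}]}
response = requests.post('http://localhost:8501/v1/models/detector:predict',
                         data=json.dumps(payload))

# Each prediction carries the four named outputs from the signature above.
print(response.json()['predictions'][0]['num_detections'])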
Right now we are successfully able to serve models using TensorFlow Serving. We have used the following method to export the model and host it with TensorFlow Serving.
For exporting:
from tensorflow.contrib.session_bundle import exporter

K.set_learning_phase(0)
export_path = ...  # where to save the exported graph
export_version = ...  # version number (integer)

saver = tf.train.Saver(sharded=True)
model_exporter = exporter.Exporter(saver)
signature = exporter.classification_signature(input_tensor=model.input,
                                              scores_tensor=model.output)
model_exporter.init(sess.graph.as_graph_def(),
                    default_graph_signature=signature)
model_exporter.export(export_path, tf.constant(export_version), sess)
For hosting:
bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server --port=9000 --model_name=default --model_base_path=/serving/models
However, our issue is that we want Keras to be integrated with TensorFlow Serving; that is, we would like to serve the model through TensorFlow Serving using Keras. The reason we would like that is that, in our architecture, we follow a couple of different ways to train our model, such as deeplearning4j + Keras and
TensorFlow + Keras, but for serving we would like to use only one serving engine, namely TensorFlow Serving. We don't see any straightforward way to achieve that. Any comments?
Thank you.
Very recently, TensorFlow changed the way it exports the model, so the majority of the tutorials available on the web are outdated. I honestly don't know how deeplearning4j works, but I use Keras quite often. I managed to create a simple example that I already posted on this issue in the TensorFlow Serving GitHub.
I'm not sure whether this will help you, but I'd like to share how I did it, and maybe it will give you some insights. My first trial, prior to creating my custom model, was to use a trained model available in Keras, such as VGG19. I did this as follows.
Model creation
import keras.backend as K
from keras.applications import VGG19
from keras.models import Model
# very important to do this as a first thing
K.set_learning_phase(0)
model = VGG19(include_top=True, weights='imagenet')
# The creation of a new model might be optional depending on the goal
config = model.get_config()
weights = model.get_weights()
new_model = Model.from_config(config)
new_model.set_weights(weights)
Exporting the model
from tensorflow.python.saved_model import builder as saved_model_builder
from tensorflow.python.saved_model import utils
from tensorflow.python.saved_model import tag_constants, signature_constants
from tensorflow.python.saved_model.signature_def_utils_impl import build_signature_def, predict_signature_def
from tensorflow.contrib.session_bundle import exporter
export_path = 'folder_to_export'
builder = saved_model_builder.SavedModelBuilder(export_path)
signature = predict_signature_def(inputs={'images': new_model.input},
                                  outputs={'scores': new_model.output})

with K.get_session() as sess:
    builder.add_meta_graph_and_variables(sess=sess,
                                         tags=[tag_constants.SERVING],
                                         signature_def_map={'predict': signature})
    builder.save()
Some side notes
It can vary depending on the Keras, TensorFlow, and TensorFlow Serving versions. I used the latest ones.
Beware of the names of the signatures, since they should be used in the client as well.
When creating the client, all preprocessing steps that are needed for the model (preprocess_input(), for example) must be executed. I didn't try to add such a step in the graph itself, the way the Inception client example does.
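To illustrate that last point, here is a minimal client sketch in the style of the old gRPC examples (the image path, server address, and model name 'my_model_1' are assumptions; the key detail is that preprocess_input() runs on the client before the request is sent):
import numpy as np
import tensorflow as tf
from grpc.beta import implementations
from keras.applications.vgg19 import preprocess_input
from keras.preprocessing import image
from tensorflow_serving.apis import predict_pb2, prediction_service_pb2

# Client-side preprocessing: the exported graph does not contain preprocess_input().
img = image.load_img('cat.jpg', target_size=(224, 224))
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

# Build a gRPC PredictRequest against the 'predict' signature defined above.
channel = implementations.insecure_channel('localhost', 9000)
stub = prediction_service_pb2.beta_create_PredictionService_stub(channel)
request = predict_pb2.PredictRequest()
request.model_spec.name = 'my_model_1'
request.model_spec.signature_name = 'predict'
request.inputs['images'].CopyFrom(tf.contrib.util.make_tensor_proto(x, shape=x.shape))

result = stub.Predict(request, 10.0)  # 10-second timeout
print(result.outputs['scores'])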
With respect to serving different models within the same server, I think that something similar to the creation of a model_config_file might help you. To do so, you can create a config file similar to this:
model_config_list: {
  config: {
    name: "my_model_1",
    base_path: "/tmp/model_1",
    model_platform: "tensorflow"
  },
  config: {
    name: "my_model_2",
    base_path: "/tmp/model_2",
    model_platform: "tensorflow"
  }
}
Finally, you can run the server like this:
bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server --port=9000 --model_config_file=model_config.conf
Try this script I wrote; you can use it to convert Keras models into TensorFlow frozen graphs (I saw that some models give rise to strange behaviours when you export them without freezing the variables).
import sys
from keras.models import load_model
import tensorflow as tf
from keras import backend as K
from tensorflow.python.framework import graph_util
from tensorflow.python.framework import graph_io
from tensorflow.python.saved_model import signature_constants
from tensorflow.python.saved_model import tag_constants

K.set_learning_phase(0)
K.set_image_data_format('channels_last')

INPUT_MODEL = sys.argv[1]
NUMBER_OF_OUTPUTS = 1
OUTPUT_NODE_PREFIX = 'output_node'
OUTPUT_FOLDER = 'frozen'
OUTPUT_GRAPH = 'frozen_model.pb'
OUTPUT_SERVABLE_FOLDER = sys.argv[2]
INPUT_TENSOR = sys.argv[3]

# Load the Keras model
try:
    model = load_model(INPUT_MODEL)
except ValueError as err:
    print('Please check the input saved model file')
    raise err

# Rename the output nodes so they can be referenced by a predictable name
output = [None] * NUMBER_OF_OUTPUTS
output_node_names = [None] * NUMBER_OF_OUTPUTS
for i in range(NUMBER_OF_OUTPUTS):
    output_node_names[i] = OUTPUT_NODE_PREFIX + str(i)
    output[i] = tf.identity(model.outputs[i], name=output_node_names[i])
print('Output Tensor names: ', output_node_names)

# Freeze the graph: convert variables to constants and write the GraphDef to disk
sess = K.get_session()
try:
    frozen_graph = graph_util.convert_variables_to_constants(sess, sess.graph.as_graph_def(), output_node_names)
    graph_io.write_graph(frozen_graph, OUTPUT_FOLDER, OUTPUT_GRAPH, as_text=False)
    print(f'Frozen graph ready for inference/serving at {OUTPUT_FOLDER}/{OUTPUT_GRAPH}')
except:
    print('Error occurred')

# Wrap the frozen graph in a SavedModel with a serving signature
builder = tf.saved_model.builder.SavedModelBuilder(OUTPUT_SERVABLE_FOLDER)

with tf.gfile.GFile(f'{OUTPUT_FOLDER}/{OUTPUT_GRAPH}', "rb") as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

sigs = {}
OUTPUT_TENSOR = output_node_names

with tf.Session(graph=tf.Graph()) as sess:
    tf.import_graph_def(graph_def, name="")
    g = tf.get_default_graph()
    inp = g.get_tensor_by_name(INPUT_TENSOR)
    out = g.get_tensor_by_name(OUTPUT_TENSOR[0] + ':0')

    sigs[signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY] = \
        tf.saved_model.signature_def_utils.predict_signature_def(
            {"input": inp}, {"output": out})

    builder.add_meta_graph_and_variables(sess,
                                         [tag_constants.SERVING],
                                         signature_def_map=sigs)

try:
    builder.save()
    print(f'Model ready for deployment at {OUTPUT_SERVABLE_FOLDER}/saved_model.pb')
    print('Prediction signature : ')
    print(sigs['serving_default'])
except:
    print('Error occurred, please check the frozen graph')
I recently added this blog post that explains how to save a Keras model and serve it with TensorFlow Serving.
TL;DR:
Saving an Inception V3 pretrained model:
### Load a pretrained inception_v3
inception_model = keras.applications.inception_v3.InceptionV3(weights='imagenet')
# Define a destination path for the model
MODEL_EXPORT_DIR = '/tmp/inception_v3'
MODEL_VERSION = 1
MODEL_EXPORT_PATH = os.path.join(MODEL_EXPORT_DIR, str(MODEL_VERSION))
# We'll need to create an input mapping, and name each of the input tensors.
# In the inception_v3 Keras model, there is only a single input and we'll name it 'image'
input_names = ['image']
name_to_input = {name: t_input for name, t_input in zip(input_names, inception_model.inputs)}
# Save the model to the MODEL_EXPORT_PATH
# Note using 'name_to_input' mapping, the names defined here will also be used for querying the service later
tf.saved_model.simple_save(
    keras.backend.get_session(),
    MODEL_EXPORT_PATH,
    inputs=name_to_input,
    outputs={t.name: t for t in inception_model.outputs})
And then starting a TF Serving Docker container:
Copy the saved model to the host's specified directory (source=/tmp/inception_v3 in this example).
Run the docker:
docker run -d -p 8501:8501 --name keras_inception_v3 --mount type=bind,source=/tmp/inception_v3,target=/models/inception_v3 -e MODEL_NAME=inception_v3 -t tensorflow/serving
Verify that there's network access to the TensorFlow service. To get the local Docker IP (172.*.*.*) for testing, run:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' keras_inception_v3
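Once the container is up, a minimal sketch of a request against it could look like this (the image path is a placeholder; the 'image' input name comes from the name_to_input mapping above, and port 8501 is the REST port published by the docker run command):
import json

import numpy as np
import requests
from keras.applications.inception_v3 import preprocess_input
from keras.preprocessing import image

# Preprocess a test image the way InceptionV3 expects (299x299, scaled to [-1, 1]).
img = image.load_img('elephant.jpg', target_size=(299, 299))
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

payload = {"instances": [{"image": x[0].tolist()}]}
response = requests.post('http://localhost:8501/v1/models/inception_v3:predict',
                         data=json.dumps(payload))

# The prediction for the single output defined above (the ImageNet class scores).
print(response.json()['predictions'][0])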
I'm trying to benchmark the performance of the inference phase of my Keras model built with the TensorFlow backend. I was thinking that the TensorFlow Benchmark tool was the proper way to go.
I've managed to build and run the example on desktop with the tensorflow_inception_graph.pb and everything seems to work fine.
What I can't seem to figure out is how to save the Keras model as a proper .pb model. I'm able to get the TensorFlow graph from the Keras model as follows:
import keras.backend as K
K.set_learning_phase(0)
trained_model = function_that_returns_compiled_model()
sess = K.get_session()
sess.graph # This works
# Get the input tensor name for TF Benchmark
trained_model.input
> <tf.Tensor 'input_1:0' shape=(?, 360, 480, 3) dtype=float32>
# Get the output tensor name for TF Benchmark
trained_model.output
> <tf.Tensor 'reshape_2/Reshape:0' shape=(?, 360, 480, 12) dtype=float32>
I've now been trying to save the model in a couple of different ways.
import tensorflow as tf
from tensorflow.contrib.session_bundle import exporter
model = trained_model
export_path = "path/to/folder" # where to save the exported graph
export_version = 1 # version number (integer)
saver = tf.train.Saver(sharded=True)
model_exporter = exporter.Exporter(saver)
signature = exporter.classification_signature(input_tensor=model.input, scores_tensor=model.output)
model_exporter.init(sess.graph.as_graph_def(), default_graph_signature=signature)
model_exporter.export(export_path, tf.constant(export_version), sess)
Which produces a folder with some files I don't know what to do with.
I would now run the Benchmark tool with something like this
bazel-bin/tensorflow/tools/benchmark/benchmark_model \
--graph=tensorflow/tools/benchmark/what_file.pb \
--input_layer="input_1:0" \
--input_layer_shape="1,360,480,3" \
--input_layer_type="float" \
--output_layer="reshape_2/Reshape:0"
But no matter which file I try to use as the what_file.pb, I get an Error during inference: Invalid argument: Session was not created with a graph before Run()!
So I got this to work. I just needed to convert all the variables in the TensorFlow graph to constants and then save the graph definition.
Here's a small example:
import tensorflow as tf
from keras import backend as K
from tensorflow.python.framework import graph_util

K.set_learning_phase(0)
model = function_that_returns_your_keras_model()
sess = K.get_session()
output_node_name = "my_output_node"  # Name of your output node

with sess as sess:
    init_op = tf.global_variables_initializer()
    sess.run(init_op)
    graph_def = sess.graph.as_graph_def()
    output_graph_def = graph_util.convert_variables_to_constants(
        sess,
        graph_def,
        output_node_name.split(","))
    tf.train.write_graph(output_graph_def,
                         logdir="my_dir",
                         name="my_model.pb",
                         as_text=False)
Now just call the TensorFlow Benchmark tool with my_model.pb as the graph.
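Before benchmarking, a quick sanity check (just a sketch, using the my_dir/my_model.pb path from the snippet above) is to confirm the frozen GraphDef parses and imports cleanly:
import tensorflow as tf

# The benchmark tool expects a frozen GraphDef, so this file should parse and
# import without needing any variable initialization.
graph_def = tf.GraphDef()
with tf.gfile.GFile('my_dir/my_model.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name='')
    print([op.name for op in graph.get_operations()][:5])  # print a few node names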
You're saving the parameters of this model and not the graph definition; to save that, use tf.get_default_graph().as_graph_def().SerializeToString() and then write the result to a file.
That said, I don't think the benchmark tool will work, since it has no way to initialize the variables your model depends on.
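For reference, a minimal sketch of that suggestion (the output filename is arbitrary):
import tensorflow as tf

# Serialize the current graph definition (structure only, no variable values)
# and write it to disk as a binary GraphDef.
graph_def = tf.get_default_graph().as_graph_def()
with open('graph_def.pb', 'wb') as f:
    f.write(graph_def.SerializeToString())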