How do I modify a Keras model export so it accepts a b64 string for a RESTful API / Google Cloud ML (TensorFlow)?

The complete code for exporting the model (I've already trained it and am now loading it from a weights file):
def cnn_layers(inputs):
    conv_base = keras.applications.mobilenetv2.MobileNetV2(input_shape=(224, 224, 3), input_tensor=inputs, include_top=False, weights='imagenet')
    for layer in conv_base.layers[:-200]:
        layer.trainable = False
    last_layer = conv_base.output
    x = GlobalAveragePooling2D()(last_layer)
    x = keras.layers.GaussianNoise(0.3)(x)
    x = Dense(1024, name='fc-1')(x)
    x = keras.layers.BatchNormalization()(x)
    x = keras.layers.advanced_activations.LeakyReLU(0.3)(x)
    x = Dropout(0.4)(x)
    x = Dense(512, name='fc-2')(x)
    x = keras.layers.BatchNormalization()(x)
    x = keras.layers.advanced_activations.LeakyReLU(0.3)(x)
    x = Dropout(0.3)(x)
    out = Dense(10, activation='softmax', name='output_layer')(x)
    return out

model_input = layers.Input(shape=(224, 224, 3))
model_output = cnn_layers(model_input)
test_model = keras.models.Model(inputs=model_input, outputs=model_output)

weight_path = os.path.join(tempfile.gettempdir(), 'saved_wt.h5')
test_model.load_weights(weight_path)

export_path = 'export'

from tensorflow.python.saved_model import builder as saved_model_builder
from tensorflow.python.saved_model import utils
from tensorflow.python.saved_model import tag_constants, signature_constants
from tensorflow.python.saved_model.signature_def_utils_impl import build_signature_def, predict_signature_def
from tensorflow.contrib.session_bundle import exporter

builder = saved_model_builder.SavedModelBuilder(export_path)

signature = predict_signature_def(inputs={'image': test_model.input},
                                  outputs={'prediction': test_model.output})

with K.get_session() as sess:
    builder.add_meta_graph_and_variables(sess=sess,
                                         tags=[tag_constants.SERVING],
                                         signature_def_map={'predict': signature})
    builder.save()
And the output of python /tensorflow/python/tools/saved_model_cli.py show --dir /1 --all (dir 1 has saved_model.pb and the models dir) is:
MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:

signature_def['predict']:
  The given SavedModel SignatureDef contains the following input(s):
    inputs['image'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 224, 224, 3)
        name: input_1:0
  The given SavedModel SignatureDef contains the following output(s):
    outputs['prediction'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 107)
        name: output_layer/Softmax:0
  Method name is: tensorflow/serving/predict
To accept a b64 string:
The code was written for a (224, 224, 3) numpy array, so the modifications I made to the code above are:
_bytes should be added to the input name when it is passed as b64. So,
predict_signature_def(inputs={'image':......
          changed to
predict_signature_def(inputs={'image_bytes':.....
Earlier, test_model.input had shape (224, 224, 3) and dtype DT_FLOAT. So,
signature = predict_signature_def(inputs={'image': test_model.input},.....
          changed to (reference)
temp = tf.placeholder(shape=[None], dtype=tf.string)
signature = predict_signature_def(inputs={'image_bytes': temp},.....
Edit:
The code for sending the request with requests is (as mentioned in the comments):
encoded_image = None
with open('/1.jpg', "rb") as image_file:
    encoded_image = base64.b64encode(image_file.read())

object_for_api = {"signature_name": "predict",
                  "instances": [
                      {
                          "image_bytes": {"b64": encoded_image}
                          # "b64": encoded_image (or this way since "image" is not needed)
                      }
                  ]
                  }

p = requests.post(url='http://localhost:8501/v1/models/mnist:predict', json=json.dumps(object_for_api), headers=headers)
print(p)
I'm getting a <Response [400]> error. I think there's no error in the way I'm sending; something needs to be changed in the code for exporting the model, specifically in
temp = tf.placeholder(shape=[None], dtype=tf.string).

Looking at the docs you've provided, what you're looking to do is take the image and send it to the API. Images are easily transferable in a text format if you encode them, base64 being pretty much the standard. So what we want to do is create a JSON object with the image as base64 in the right place, and then send this JSON object to the REST API. Python's requests library makes sending a Python dictionary as JSON very easy.
So take the image, encode it, put it in a dictionary and send it off using requests:
import requests
import base64

encoded_image = None
with open("image.png", "rb") as image_file:
    # decode() turns the base64 bytes into a str so the dict is JSON serializable
    encoded_image = base64.b64encode(image_file.read()).decode('utf-8')

object_for_api = {"signature_name": "predict",
                  "instances": [
                      {
                          "image": {"b64": encoded_image}
                      }
                  ]
                  }

requests.post(url='http://localhost:8501/v1/models/mnist:predict', json=object_for_api)
You can also encode your numpy array into JSON but it doesn't seem that the API docs are looking for that.
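For completeness, here is a minimal sketch of that float-array alternative. It only works against the original float signature (the one exported with inputs={'image': ...}), not the b64 one, and the 224x224x3 shape is taken from the question's export:

import numpy as np
import requests

img = np.zeros((224, 224, 3), dtype=np.float32)        # stand-in for a real preprocessed image
payload = {"signature_name": "predict",
           "instances": [{"image": img.tolist()}]}      # nested lists are JSON serializable
requests.post(url='http://localhost:8501/v1/models/mnist:predict', json=payload)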

Two side notes: I encourage you to use tf.saved_model.simple_save, and you may find model_to_estimator convenient.
Also, while your model seems like it will work for requests (the output of saved_model_cli shows the outer dimension is None for both inputs and outputs), it's fairly inefficient to send JSON arrays of floats.
To the last point, it's often easier to modify the code to do the image decoding server side, so you're sending a base64-encoded JPG or PNG over the wire instead of an array of floats. Here's one example for Keras (I plan to update that answer with simpler code); a minimal sketch of the idea follows.
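A minimal sketch of that idea, reusing the test_model, imports, and builder code from the question (the resize and scaling steps here are illustrative assumptions and must match whatever preprocessing was used at training time):

def decode_and_preprocess(jpeg_bytes):
    img = tf.image.decode_jpeg(jpeg_bytes, channels=3)
    img = tf.image.resize_images(img, (224, 224))
    return tf.cast(img, tf.float32) / 127.5 - 1.0   # assumed MobileNetV2-style scaling

# String placeholder that receives the raw (already base64-decoded) JPEG bytes
image_bytes = tf.placeholder(shape=[None], dtype=tf.string, name='image_bytes')
images = tf.map_fn(decode_and_preprocess, image_bytes, dtype=tf.float32)
predictions = test_model(images)    # run the trained Keras model on the decoded batch

signature = predict_signature_def(inputs={'image_bytes': image_bytes},
                                  outputs={'prediction': predictions})

With this signature the client sends {"image_bytes": {"b64": "..."}}, and the serving layer base64-decodes the value before it reaches the string placeholder, so the graph only ever sees raw JPEG bytes.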

Related

Two models of the same architecture with same weights giving different results

Problem
After copying weights from a pretrained model, I do not get the same output.
Description
The tf2cv repository provides pretrained models in TF2 for various backbones. Unfortunately, the codebase is of limited use to me because they use subclassing via tf.keras.Model, which makes it very hard to extract intermediate outputs and gradients at will. I therefore embarked upon rewriting the code for the backbones using the functional API. After rewriting the ResNet architecture code, I copied their weights into my model and saved them in the SavedModel format. To test whether this was done correctly, I gave an input to my model instance and to theirs, and the results were different.
My approaches to debugging the problem
I checked the number of trainable and non-trainable parameters, and they are the same between my model instance and theirs.
I checked whether all trainable weights have been copied, which they have.
My present line of thinking
I think it might be possible that weights have not been copied to the correct layers. For example: Layer X and Layer Y might have weights of the same shape, but during weight copying, the weights of Layer Y might have gone into Layer X and vice versa. This is only possible if I have not mapped the layer names between the two models properly.
However, I have checked exhaustively and have not found any error so far.
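One quick way to test that hypothesis (a hypothetical helper, not part of the original code) is to list which layers actually share a weight shape, since only those could have been swapped unnoticed:

from collections import defaultdict

def shape_collisions(model):
    """Group weight names by shape; any group with more than one entry is a
    candidate for an accidental swap during the copy."""
    groups = defaultdict(list)
    for w in model.weights:
        groups[tuple(w.shape.as_list())].append(w.name)
    return {shape: names for shape, names in groups.items() if len(names) > 1}

Running this on both models and comparing the groups shows whether the name remapping could plausibly have crossed layers.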
The Code
My code is attached below. Their (tfcv) code for resnet can be found here
Please note that resnet_orig in the following snippet is the same as here
My converted code can be found here
from vision.image import resnet as myresnet
from glob import glob
from loguru import logger
import tensorflow as tf
import resnet_orig
import re
import os
import numpy as np
from time import time
from copy import deepcopy

tf.random.set_seed(time())

models = [
    'resnet10',
    'resnet12',
    'resnet14',
    'resnetbc14b',
    'resnet16',
    'resnet18_wd4',
    'resnet18_wd2',
    'resnet18_w3d4',
    'resnet18',
    'resnet26',
    'resnetbc26b',
    'resnet34',
    'resnetbc38b',
    'resnet50',
    'resnet50b',
    'resnet101',
    'resnet101b',
    'resnet152',
    'resnet152b',
    'resnet200',
    'resnet200b',
]

def zipdir(path, ziph):
    # ziph is zipfile handle
    for root, dirs, files in os.walk(path):
        for file in files:
            ziph.write(os.path.join(root, file),
                       os.path.relpath(os.path.join(root, file),
                                       os.path.join(path, '..')))

def find_model_file(model_type):
    model_files = glob('*.h5')
    for m in model_files:
        if '{}-'.format(model_type) in m:
            return m
    return None

def remap_our_model_variables(our_variables, model_name):
    remapped = list()
    reg = re.compile(r'(stage\d+)')
    for var in our_variables:
        newvar = var.replace(model_name, 'features/features')
        stage_search = re.search(reg, newvar)
        if stage_search is not None:
            stage_search = stage_search[0]
            newvar = newvar.replace(stage_search, '{}/{}'.format(stage_search, stage_search))
        newvar = newvar.replace('conv_preact', 'conv/conv')
        newvar = newvar.replace('conv_bn', 'bn')
        newvar = newvar.replace('logits', 'output1')
        remapped.append(newvar)
    remap_dict = dict([(x, y) for x, y in zip(our_variables, remapped)])
    return remap_dict

def get_correct_variable(variable_name, trainable_variable_names):
    for i, var in enumerate(trainable_variable_names):
        if variable_name == var:
            return i
    logger.info('Uffff.....')
    return None

layer_regexp_compiled = re.compile(r'(.*)\/.*')
model_files = glob('*.h5')
a = np.ones(shape=(1, 224, 224, 3), dtype=np.float32)
inp = tf.constant(a, dtype=tf.float32)

for model_type in models:
    logger.info('Model is {}.'.format(model_type))
    model = eval('myresnet.{}(input_height=224,input_width=224,'
                 'num_classes=1000,data_format="channels_last")'.format(model_type))
    model2 = eval('resnet_orig.{}(data_format="channels_last")'.format(model_type))
    model2.build(input_shape=(None, 224, 224, 3))
    model_name = find_model_file(model_type)
    logger.info('Model file is {}.'.format(model_name))
    original_weights = deepcopy(model2.weights)
    if model_name is not None:
        e = model2.load_weights(model_name, by_name=True, skip_mismatch=False)
        print(e)
        loaded_weights = deepcopy(model2.weights)
    else:
        logger.info('Pretrained model is not available for {}.'.format(model_type))
        continue
    diff = [np.mean(x.numpy() - y.numpy()) for x, y in zip(original_weights, loaded_weights)]
    our_model_weights = model.weights
    their_model_weights = model2.weights
    assert (len(our_model_weights) == len(their_model_weights))
    our_variable_names = [x.name for x in model.weights]
    their_variable_names = [x.name for x in model2.weights]
    remap_dict = remap_our_model_variables(our_variable_names, model_type)
    new_weights = list()
    for i in range(len(our_model_weights)):
        our_name = model.weights[i].name
        remapped_name = remap_dict[our_name]
        source_index = get_correct_variable(remapped_name, their_variable_names)
        new_weights.append(model2.weights[source_index].value())
        logger.debug('Copying from {} ({}) to {} ({}).'.format(
            model2.weights[source_index].name,
            model2.weights[source_index].value().shape,
            model.weights[i].name,
            model.weights[i].value().shape))
    logger.info(len(new_weights))
    logger.info('Setting new weights')
    model.set_weights(new_weights)
    logger.info('Finished setting new weights.')
    their_output = model2(inp)
    our_output = model(inp)
    logger.info(np.max(their_output.numpy() - our_output.numpy()))
    logger.info(diff)  # This must be 0.0
    break

TensorFlow Serving RESTful request: how to post a list of objects

I made a TensorFlow model and I use TensorFlow Serving to deploy it, but I can't figure out how to build the RESTful request parameters the model needs.
curl -d '{"instances": [[1], [1], [1], [1], [1], [1], [1], [1], [1], [1]]}'
-X POST http://localhost:8501/v1/models/shipping_predict:predict
It returns:
"error": "instances is a plain list, but expecting list of objects as multiple input tensors required as per tensorinfo_map"
This is my model:
# prepare each input head
in_layers = list()
em_layers = list()
#
# customer
in_layer_customer = Input(shape=(1,))
em_layer_customer = Embedding(5000, 10)(in_layer_customer)
em_layer_customer = layers.Reshape([1 * 10])(em_layer_customer)
in_layers.append(in_layer_customer)
em_layers.append(em_layer_customer)
# salesman
in_layer_sale = Input(shape=(1,))
em_layer_sale = Embedding(500, 10)(in_layer_sale)
em_layer_sale = layers.Reshape([1 * 10])(em_layer_sale)
in_layers.append(in_layer_sale)
em_layers.append(em_layer_sale)
# business_type
in_layer_businessType = Input(shape=(1,))
em_layer_businessType = Embedding(100, 10)(in_layer_businessType)
em_layer_businessType = layers.Reshape([1 * 10])(em_layer_businessType)
in_layers.append(in_layer_businessType)
em_layers.append(em_layer_businessType)
# 20
in_layer_20 = Input(shape=(1,))
em_layer_20 = layers.Dense(16, activation='relu')(in_layer_20)
in_layers.append(in_layer_20)
em_layers.append(em_layer_20)
# 40
in_layer_40 = Input(shape=(1,))
em_layer_40 = layers.Dense(16, activation='relu')(in_layer_40)
in_layers.append(in_layer_40)
em_layers.append(em_layer_40)
# 45
in_layer_45 = Input(shape=(1,))
em_layer_45 = layers.Dense(16, activation='relu')(in_layer_45)
in_layers.append(in_layer_45)
em_layers.append(em_layer_45)
# other
in_layer_other = Input(shape=(1,))
em_layer_other = layers.Dense(16, activation='relu')(in_layer_other)
in_layers.append(in_layer_other)
em_layers.append(em_layer_other)
# dischargingPort
in_layer_dischargingPortName = Input(shape=(1,))
em_layer_dischargingPortName = Embedding(5000, 10)(in_layer_dischargingPortName)
em_layer_dischargingPortName = layers.Reshape([1 * 10])(em_layer_dischargingPortName)
in_layers.append(in_layer_dischargingPortName)
em_layers.append(em_layer_dischargingPortName)
# MBL Method
in_layer_mbl = Input(shape=(1,))
em_layer_mbl = layers.Dense(16, activation='relu')(in_layer_mbl)
in_layers.append(in_layer_mbl)
em_layers.append(em_layer_mbl)
# atdMouth
in_layer_atdMouth = Input(shape=(1,))
em_layer_atdMouth = Embedding(100, 10)(in_layer_atdMouth)
em_layer_atdMouth = layers.Reshape([1 * 10])(em_layer_atdMouth)
in_layers.append(in_layer_atdMouth)
em_layers.append(em_layer_atdMouth)
merge = Concatenate()(em_layers)
dense = Dense(32, activation='relu')(merge)
output = Dense(1)(dense)
model = Model(inputs=in_layers, outputs=output)
instances is a plain list, but expecting list of objects as multiple
input tensors
It sounds like your model, for whatever reason, is expecting named tensors. This is not something I've worked with before, but there appears to be another way of sending requests to your model.
curl -X POST -i 'http://192.168.1.16:8501/v1/models/export:predict' --data '
{
  "signature_name": "serving_default",
  "inputs": [
    {
      "tokens_0": ["text text text text text text text text text text"],
      "length_0": [1],
      "tokens_1": ["01 01 01 01 01 01 01 01 01 01"],
      "length_1": [1],
      "tokens_2": ["4 4 4 1 1 4 4 4 4 4"],
      "length_2": [1]
    }
  ]
}'
I've just copied this example from here (credit to @ishaan-sharma).
Your model is non-trivial, so I won't try to create the exact request for you. If you're unsure about the tensor names etc., you can check the expected shapes using the saved_model_cli (an illustrative sketch of such a request follows the usage line below):
saved_model_cli show [-h] --dir DIR [--all]
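As a purely illustrative sketch (the input keys below are made up; use whatever names saved_model_cli actually reports for your signature), each instance becomes an object with one entry per named input tensor:

import requests

# Hypothetical tensor names -- replace them with the ones saved_model_cli prints.
instance = {"input_1": [1], "input_2": [1], "input_3": [1], "input_4": [1], "input_5": [1],
            "input_6": [1], "input_7": [1], "input_8": [1], "input_9": [1], "input_10": [1]}
payload = {"signature_name": "serving_default", "instances": [instance]}
requests.post("http://localhost:8501/v1/models/shipping_predict:predict", json=payload)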
A bit late with this answer, but I hope it helps someone. I have saved a Keras model that accepts multiple inputs. The model expects two inputs: text and a vector of integers.
import tensorflow as tf
from tensorflow.keras.models import Model
model = Model(inputs=[text_input, input2], outputs=out)
### some code for training and data preparation
export_path = "serving/5/"
tf.saved_model.save(model, export_path)
This finally saves the model so it can be served using TensorFlow Serving. A brief summary of the model is as follows:
$ saved_model_cli show --dir ./serving/5 --tag_set serve --signature_def serving_default
The given SavedModel SignatureDef contains the following input(s):
  inputs['input_1'] tensor_info:
      dtype: DT_FLOAT
      shape: (-1, 23)
      name: serving_default_input_1:0
  inputs['text'] tensor_info:
      dtype: DT_STRING
      shape: (-1)
      name: serving_default_text:0
The given SavedModel SignatureDef contains the following output(s):
  outputs['dense_2'] tensor_info:
      dtype: DT_FLOAT
      shape: (-1, 1)
      name: StatefulPartitionedCall:0
Method name is: tensorflow/serving/predict
Finally, to post my request to the serving container, I did:
import requests
import json

params = json.dumps(
    {
        "instances": [
            {
                "text": [features[0]],
                "input_1": features[1:]
            }
        ]
    }
)
response = requests.post(
    "http://localhost:8038/v1/models/serving:predict",
    headers={"Content-Type": "application/json"},
    data=params
)
The main issue was that I had to wrap all my inputs as JSON in instances with their tensor names (which I got from saved_model_cli).
An important thing to note is that the values need to be given as lists: features[0] is text, so I wrapped it in a list, while features[1:] is already a vector, which works fine.

How to read (decode) tfrecords with tf.data API

I have a custom dataset that I stored as a tfrecord, doing:
# toy example data
label = np.asarray([[1, 2, 3],
                    [4, 5, 6]]).reshape(2, 3, -1)
sample = np.stack((label + 200).reshape(2, 3, -1))

def bytes_feature(values):
    """Returns a TF-Feature of bytes.

    Args:
        values: A string.
    Returns:
        A TF-Feature.
    """
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[values]))

def labeled_image_to_tfexample(sample_binary_string, label_binary_string):
    return tf.train.Example(features=tf.train.Features(feature={
        'sample/image': bytes_feature(sample_binary_string),
        'sample/label': bytes_feature(label_binary_string)
    }))

def _write_to_tf_record():
    with tf.Graph().as_default():
        image_placeholder = tf.placeholder(dtype=tf.uint16)
        encoded_image = tf.image.encode_png(image_placeholder)
        label_placeholder = tf.placeholder(dtype=tf.uint16)
        encoded_label = tf.image.encode_png(image_placeholder)
        with tf.python_io.TFRecordWriter("./toy.tfrecord") as writer:
            with tf.Session() as sess:
                feed_dict = {image_placeholder: sample,
                             label_placeholder: label}
                # Encode image and label as binary strings to be written to tf_record
                image_string, label_string = sess.run(fetches=(encoded_image, encoded_label),
                                                      feed_dict=feed_dict)
                # Define structure of what is going to be written
                file_structure = labeled_image_to_tfexample(image_string, label_string)
                writer.write(file_structure.SerializeToString())
    return
However, I cannot read it back. First I tried (based on http://www.machinelearninguru.com/deep_learning/tensorflow/basics/tfrecord/tfrecord.html, https://medium.com/coinmonks/storage-efficient-tfrecord-for-images-6dc322b81db4 and https://medium.com/mostly-ai/tensorflow-records-what-they-are-and-how-to-use-them-c46bc4bbb564):
def read_tfrecord_low_level():
    data_path = "./toy.tfrecord"
    filename_queue = tf.train.string_input_producer([data_path], num_epochs=1)
    reader = tf.TFRecordReader()
    _, raw_records = reader.read(filename_queue)
    decode_protocol = {
        'sample/image': tf.FixedLenFeature((), tf.int64),
        'sample/label': tf.FixedLenFeature((), tf.int64)
    }
    enc_example = tf.parse_single_example(raw_records, features=decode_protocol)
    recovered_image = enc_example["sample/image"]
    recovered_label = enc_example["sample/label"]
    return recovered_image, recovered_label
I also tried variations casting enc_example and decoding it, such as in "Unable to read from Tensorflow tfrecord file". However, when I try to evaluate them, my Python session just freezes and gives no output or traceback.
Then I tried using eager execution to see what is happening, but apparently it is only compatible with the tf.data API. However, as far as I understand, transformations with the tf.data API are applied to the whole dataset. https://www.tensorflow.org/api_guides/python/reading_data mentions that a decode function must be written, but doesn't give an example of how to do that. All the tutorials I have found are made for TFRecordReader (which doesn't work for me).
Any help (pinpointing what I am doing wrong/ explaining what is happening/ indications on how to decode tfrecords with tf.data API) is highly appreciated.
According to https://www.youtube.com/watch?v=4oNdaQk0Qv4 and https://www.youtube.com/watch?v=uIcqeP7MFH0, tf.data is the best way to create input pipelines, so I am highly interested in learning that way.
Thanks in advance!
I am not sure why storing the encoded png causes the evaluation to not work, but here is a possible way of working around the problem. Since you mentioned that you would like to use the tf.data way of creating input pipelines, I'll show how to use it with your toy example:
label = np.asarray([[1, 2, 3],
                    [4, 5, 6]]).reshape(2, 3, -1)
sample = np.stack((label + 200).reshape(2, 3, -1))
First, the data has to be saved to the TFRecord file. The difference from what you did is that the image is not encoded to png.
def _bytes_feature(value):
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))

writer = tf.python_io.TFRecordWriter("toy.tfrecord")
example = tf.train.Example(features=tf.train.Features(feature={
    'label_raw': _bytes_feature(tf.compat.as_bytes(label.tostring())),
    'sample_raw': _bytes_feature(tf.compat.as_bytes(sample.tostring()))}))
writer.write(example.SerializeToString())
writer.close()
What happens in the code above is that the arrays are turned into strings (1d objects) and then stored as bytes features.
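As a quick illustration of that round trip in plain NumPy (independent of the answer's code): the shape information is lost when the array becomes a byte string, which is why the reshape step later is needed.

import numpy as np

arr = np.arange(6, dtype=np.int64).reshape(2, 3)
raw = arr.tostring()                        # flat byte string, no shape information
back = np.frombuffer(raw, dtype=np.int64)   # array([0, 1, 2, 3, 4, 5])
restored = back.reshape(2, 3)               # the shape has to be supplied again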
Then, to read the data back using the tf.data.TFRecordDataset and tf.data.Iterator class:
filename = 'toy.tfrecord'

# Create a placeholder that will contain the name of the TFRecord file to use
data_path = tf.placeholder(dtype=tf.string, name="tfrecord_file")

# Create the dataset from the TFRecord file
dataset = tf.data.TFRecordDataset(data_path)

# Use the map function to read every sample from the TFRecord file (_read_from_tfrecord is shown below)
dataset = dataset.map(_read_from_tfrecord)

# Create an iterator object that enables you to access all the samples in the dataset
iterator = tf.data.Iterator.from_structure(dataset.output_types, dataset.output_shapes)
label_tf, sample_tf = iterator.get_next()

# Similarly to tf.Variables, the iterators have to be initialised
iterator_init = iterator.make_initializer(dataset, name="dataset_init")

with tf.Session() as sess:
    # Initialise the iterator passing the name of the TFRecord file to the placeholder
    sess.run(iterator_init, feed_dict={data_path: filename})
    # Obtain the images and labels back
    read_label, read_sample = sess.run([label_tf, sample_tf])
The function _read_from_tfrecord() is:
def _read_from_tfrecord(example_proto):
    feature = {
        'label_raw': tf.FixedLenFeature([], tf.string),
        'sample_raw': tf.FixedLenFeature([], tf.string)
    }
    features = tf.parse_example([example_proto], features=feature)
    # Since the arrays were stored as strings, they are now 1d
    label_1d = tf.decode_raw(features['label_raw'], tf.int64)
    sample_1d = tf.decode_raw(features['sample_raw'], tf.int64)
    # In order to make the arrays in their original shape, they have to be reshaped.
    label_restored = tf.reshape(label_1d, tf.stack([2, 3, -1]))
    sample_restored = tf.reshape(sample_1d, tf.stack([2, 3, -1]))
    return label_restored, sample_restored
Instead of hard-coding the shape [2, 3, -1], you could also store that too into the TFRecord file, but for simplicity I didn't do it.
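If you did want to store the shape as well, a hypothetical variant adds an int64 feature next to the raw bytes and reads it back with the same feature spec:

def _int64_feature(values):
    return tf.train.Feature(int64_list=tf.train.Int64List(value=list(values)))

# At write time, add e.g. 'label_shape': _int64_feature(label.shape) to the feature dict.
# At read time, extend the spec with 'label_shape': tf.FixedLenFeature([3], tf.int64)
# and reshape with the stored value:
#     label_restored = tf.reshape(label_1d, features['label_shape'][0])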
I made a little gist with a working example.
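Since the question mentions eager execution: once eager mode is enabled at program start, the same dataset can be consumed without a session. A rough sketch for TF 1.x (on older 1.x versions you may need tf.contrib.eager.Iterator(dataset) instead of iterating the dataset directly):

import tensorflow as tf
tf.enable_eager_execution()   # must run before any graph is built

dataset = tf.data.TFRecordDataset('toy.tfrecord').map(_read_from_tfrecord)
for label_restored, sample_restored in dataset:
    print(label_restored.numpy().shape, sample_restored.numpy().shape)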
Hope this helps!

How to read a utf-8 encoded binary string in tensorflow?

I am trying to convert an encoded byte string back into the original array in the TensorFlow graph (using TensorFlow operations) in order to make a prediction with a TensorFlow model. The array-to-byte conversion is based on this answer, and it is the suggested input for TensorFlow model prediction on Google Cloud's ML Engine.
def array_request_example(input_array):
    input_array = input_array.astype(np.float32)
    byte_string = input_array.tostring()
    string_encoded_contents = base64.b64encode(byte_string)
    return string_encoded_contents.decode('utf-8')
Tensorflow code
byte_string = tf.placeholder(dtype=tf.string)
audio_samples = tf.decode_raw(byte_string, tf.float32)

audio_array = np.array([1, 2, 3, 4])
bstring = array_request_example(audio_array)
fdict = {byte_string: bstring}
with tf.Session() as sess:
    [tf_samples] = sess.run([audio_samples], feed_dict=fdict)
I have tried using decode_raw and decode_base64, but neither returns the original values.
I have tried setting the out_type of decode_raw to the different possible datatypes, and tried altering what data type I am converting the original array to.
So, how would I read the byte array in tensorflow? Thanks :)
Extra Info
The aim behind this is to create the serving input function for a custom Estimator to make predictions using gcloud ml-engine local predict (for testing) and using the REST API for the model stored on the cloud.
The serving input function for the Estimator is
def serving_input_fn():
    feature_placeholders = {'b64': tf.placeholder(dtype=tf.string,
                                                  shape=[None],
                                                  name='source')}
    audio_samples = tf.decode_raw(feature_placeholders['b64'], tf.float32)
    # Dummy function to save space
    power_spectrogram = create_spectrogram_from_audio(audio_samples)
    inputs = {'spectrogram': power_spectrogram}
    return tf.estimator.export.ServingInputReceiver(inputs, feature_placeholders)
Json request
I use .decode('utf-8') because, when attempting to JSON-dump the base64-encoded byte strings, I receive this error:
raise TypeError(repr(o) + " is not JSON serializable")
TypeError: b'longbytestring'
Prediction Errors
When passing the json request {'audio_bytes': 'b64': bytestring} with gcloud local I get the error
PredictionError: Invalid inputs: Expected tensor name: b64, got tensor name: [u'audio_bytes']
So perhaps Google Cloud's local predict does not automatically handle the audio bytes and base64 conversion? Or likely something's wrong with my Estimator setup.
And the request {'instances': [{'audio_bytes': 'b64': bytestring}]} to the REST API gives:
{'error': 'Prediction failed: Error during model execution: AbortionError(code=StatusCode.INVALID_ARGUMENT, details="Input to DecodeRaw has length 793713 that is not a multiple of 4, the size of float\n\t [[Node: DecodeRaw = DecodeRaw[_output_shapes=[[?,?]], little_endian=true, out_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_source_0_0)]]")'}
which confuses me as I explicitly define the request to be a float and do the same in the serving input receiver.
Removing audio_bytes from the request and utf-8 encoding the byte strings allows me to get predictions, though in testing the decoding locally, I think the audio is being incorrectly converted from the byte string.
The answer that you referenced is written assuming you are running the model on CloudML Engine's service. The service actually takes care of the JSON (including UTF-8) and base64 encoding.
To get your code working locally or in another environment, you'll need the following changes:
def array_request_example(input_array):
    input_array = input_array.astype(np.float32)
    return input_array.tostring()

byte_string = tf.placeholder(dtype=tf.string)
audio_samples = tf.decode_raw(byte_string, tf.float32)

audio_array = np.array([1, 2, 3, 4])
bstring = array_request_example(audio_array)
fdict = {byte_string: bstring}
with tf.Session() as sess:
    tf_samples = sess.run([audio_samples], feed_dict=fdict)
That said, based on your code, I suspect you are looking to send data as JSON; you can use gcloud local predict to simulate CloudML Engine's service. Or, if you prefer to write your own code, perhaps something like this:
def array_request_examples(input_arrays):
    """input_arrays is a list (batch) of np_arrays."""
    input_arrays = (a.astype(np.float32) for a in input_arrays)
    # Convert each image to byte strings
    bytes_strings = (a.tostring() for a in input_arrays)
    # Base64 encode the data (decode to str so json.dumps can serialize it)
    encoded = (base64.b64encode(b).decode('utf-8') for b in bytes_strings)
    # Create a list of images suitable to send to the service as JSON:
    instances = [{'audio_bytes': {'b64': e}} for e in encoded]
    # Create a JSON request
    return json.dumps({'instances': instances})

def parse_request(request):
    # non-TF to simulate the CloudML Service which does not expect
    # this to be in the submitted graphs.
    instances = json.loads(request)['instances']
    return [base64.b64decode(i['audio_bytes']['b64']) for i in instances]

byte_strings = tf.placeholder(dtype=tf.string, shape=[None])
decode = lambda raw_byte_str: tf.decode_raw(raw_byte_str, tf.float32)
audio_samples = tf.map_fn(decode, byte_strings, dtype=tf.float32)

audio_array = np.array([1, 2, 3, 4])
request = array_request_examples([audio_array])
fdict = {byte_strings: parse_request(request)}
with tf.Session() as sess:
    tf_samples = sess.run([audio_samples], feed_dict=fdict)

How to make predictions on TensorFlow's Wide and Deep model loaded in TensorFlow Servings model_server

Can someone assist me in making predictions on TensorFlow's Wide and Deep Learning model loaded into TensorFlow Serving's model_server?
If anyone could point me to a resource or documentation for the same would be really helpful.
You can try to invoke the predict method of the estimator and set as_iterable to False to get an ndarray:
y = m.predict(input_fn=lambda: input_fn(df_test), as_iterable=False)
However, note the deprecation note here for future compatibility.
If your model is exported using Estimator.export_savedmodel() and you successfully built TensorFlow Serving itself, you can do something like this:
from grpc.beta import implementations
from tensorflow_serving.apis import predict_pb2
from tensorflow_serving.apis import prediction_service_pb2

tf.app.flags.DEFINE_string('server', 'localhost:9000', 'Server host:port.')
tf.app.flags.DEFINE_string('model', 'wide_and_deep', 'Model name.')
FLAGS = tf.app.flags.FLAGS

...

def main(_):
    host, port = FLAGS.server.split(':')
    # Set up a connection to the TF Model Server
    channel = implementations.insecure_channel(host, int(port))
    stub = prediction_service_pb2.beta_create_PredictionService_stub(channel)

    # Create a request that will be sent for an inference
    request = predict_pb2.PredictRequest()
    request.model_spec.name = FLAGS.model
    request.model_spec.signature_name = 'serving_default'

    # A single tf.Example that will get serialized and turned into a TensorProto
    feature_dict = {'age': _float_feature(value=25),
                    'capital_gain': _float_feature(value=0),
                    'capital_loss': _float_feature(value=0),
                    'education': _bytes_feature(value='11th'.encode()),
                    'education_num': _float_feature(value=7),
                    'gender': _bytes_feature(value='Male'.encode()),
                    'hours_per_week': _float_feature(value=40),
                    'native_country': _bytes_feature(value='United-States'.encode()),
                    'occupation': _bytes_feature(value='Machine-op-inspct'.encode()),
                    'relationship': _bytes_feature(value='Own-child'.encode()),
                    'workclass': _bytes_feature(value='Private'.encode())}
    label = 0

    example = tf.train.Example(features=tf.train.Features(feature=feature_dict))
    serialized = example.SerializeToString()

    request.inputs['inputs'].CopyFrom(
        tf.contrib.util.make_tensor_proto(serialized, shape=[1]))

    # Create a future result, and set 5 seconds timeout
    result_future = stub.Predict.future(request, 5.0)
    prediction = result_future.result().outputs['scores']

    print('True label: ' + str(label))
    print('Prediction: ' + str(np.argmax(prediction)))
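The _float_feature and _bytes_feature helpers used above are not shown in the snippet; their usual definitions (an assumption on my part, matching the standard tf.train.Example pattern) are:

def _float_feature(value):
    return tf.train.Feature(float_list=tf.train.FloatList(value=[value]))

def _bytes_feature(value):
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))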
I wrote a simple tutorial, Exporting and Serving a TensorFlow Wide & Deep Model, with more details.
Hope it helps.