tf.parse_example used in mnist export example - tensorflow

I am new to TensorFlow and am reading mnist_export.py in the TensorFlow Serving examples.
There is something here I cannot understand:
sess = tf.InteractiveSession()
serialized_tf_example = tf.placeholder(tf.string, name='tf_example')
feature_configs = {
    'x': tf.FixedLenFeature(shape=[784], dtype=tf.float32),
}
tf_example = tf.parse_example(serialized_tf_example, feature_configs)
x = tf.identity(tf_example['x'], name='x') # use tf.identity() to assign name
Above, serialized_tf_example is a Tensor.
I have read the API documentation for tf.parse_example, but it seems that serialized should be a batch of serialized Example protos like:
serialized = [
  features
    { feature { key: "ft" value { float_list { value: [1.0, 2.0] } } } },
  features
    { feature []},
  features
    { feature { key: "ft" value { float_list { value: [3.0] } } }
]
So how should I understand tf_example = tf.parse_example(serialized_tf_example, feature_configs) here, given that serialized_tf_example is a Tensor, not an Example proto?

Here serialized_tf_example is the serialized string of a tf.train.Example. See the tf.parse_example documentation for the usage; the Reading data chapter gives some examples.
tf_example.SerializeToString() converts a tf.train.Example to a string, and tf.parse_example parses the serialized string into a dict of tensors.

The code below provides a simple example of using parse_example:
import tensorflow as tf

sess = tf.InteractiveSession()
# Placeholder that will be fed serialized tf.train.Example strings (batch of 1).
serialized_tf_example = tf.placeholder(tf.string, shape=[1], name='serialized_tf_example')
feature_configs = {'x': tf.FixedLenFeature(shape=[1], dtype=tf.float32)}
# Parse the serialized protos into a dict of tensors according to feature_configs.
tf_example = tf.parse_example(serialized_tf_example, feature_configs)

# Build a tf.train.Example in Python and serialize it to a string.
feature_dict = {'x': tf.train.Feature(float_list=tf.train.FloatList(value=[25]))}
example = tf.train.Example(features=tf.train.Features(feature=feature_dict))
f = example.SerializeToString()

# Feed the serialized string into the placeholder and run the parsing op.
sess.run(tf_example, feed_dict={serialized_tf_example: [f]})
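Running this should return a dict along the lines of {'x': array([[25.]], dtype=float32)}, i.e. the float value fed into the serialized Example is recovered by tf.parse_example.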

Related

Tensorflow Serving GRPC Call with additional signature changes the output response

I am using TensorFlow Serving to serve a model which has an additional signature. Having the additional signature changes the output response (just the keys; the values are correct). Why would that happen?
import tensorflow as tf
from tensorflow.python.keras import Input, Model
from tensorflow.python.keras.layers import Dense

input1 = Input(shape=(3,), dtype=tf.float32, name='value')
dense = Dense(1, activation='tanh', name='dense')(input1)
prediction = Dense(1, activation='tanh', name='prediction_label')(dense)
model = Model(inputs=input1, outputs=prediction, name='models')
tf.saved_model.save(model, "model_with_single_endpoint")

@tf.function(input_signature=[tf.TensorSpec(shape=[], dtype=tf.int32)])
def get_version(x):
    return tf.constant('1.1.1.1')

main_sig = tf.function(model, input_signature=[model._get_save_spec()]).get_concrete_function()
model.save("model_with_two_endpoint", signatures={'serving_default': main_sig, 'get_version': get_version}, save_format='tf')
When I load these two models in TF Serving, the responses for the same request are:
model_with_single_signature
outputs {
  key: "prediction_label"
  value {
    dtype: DT_FLOAT
    tensor_shape {
      dim {
        size: 1
      }
      dim {
        size: 1
      }
    }
    float_val: 0.2734697759151459
  }
}
model_with_two_signatures
outputs {
  key: "output_0"
  value {
    dtype: DT_FLOAT
    tensor_shape {
      dim {
        size: 1
      }
      dim {
        size: 1
      }
    }
    float_val: 0.2734697759151459
  }
}
Why is the outputs key different when I add a new signature? Thanks for the help!
tensorflow==2.3 and tensorflow-serving-api==2.3
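A quick way to see which output keys each exported signature advertises, independent of TF Serving, is to load the SavedModels back and print the structured outputs of their serving signatures. This is a minimal sketch, assuming the export directories from the code above:
import tensorflow as tf

# Load the two exports produced above and inspect their serving signatures.
single = tf.saved_model.load("model_with_single_endpoint")
double = tf.saved_model.load("model_with_two_endpoint")

# structured_outputs shows the output key names that end up in the serving response.
print(single.signatures["serving_default"].structured_outputs)
print(double.signatures["serving_default"].structured_outputs)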

tf.estimator serving function failing

I am using tf.estimator to train and serve my TensorFlow model. The training completed as expected, but serving fails. I read my data in as a TFRecordDataset. My parsing function applies a transformation to feature "x2": "x2" is a string that is split, and the transformed feature is "x3".
def parse_function(example_proto):
    features = {"x1": tf.FixedLenFeature((), tf.string),
                "x2": tf.FixedLenFeature((), tf.string),
                "label": tf.FixedLenFeature((), tf.int64)}
    parsed_features = tf.parse_example(example_proto, features)
    x3 = tf.string_split(parsed_features["x2"], ',')
    parsed_features["x3"] = x3
    return parsed_features, parsed_features["label"]
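For context, a hedged sketch of how a parse function like this is typically wired into the input pipeline; the filename and batch size here are assumptions. Since tf.parse_example (rather than tf.parse_single_example) is used, the dataset is batched before mapping:
import tensorflow as tf

# Assumed filename and batch size, for illustration only.
dataset = tf.data.TFRecordDataset(["train.tfrecord"])
dataset = dataset.batch(32)            # parse_example operates on a batch of serialized Examples
dataset = dataset.map(parse_function)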
My serving function is
def serving_input_fn():
    receiver_tensor = {}
    for feature_name in record_columns:
        if feature_name in {"x1", "x2", "x3"}:
            dtype = tf.string
        else:
            dtype = tf.int32
        receiver_tensor[feature_name] = tf.placeholder(dtype, shape=[None])
    features = {
        key: tf.expand_dims(tensor, -1)
        for key, tensor in receiver_tensor.items()
    }
    return tf.estimator.export.ServingInputReceiver(features, receiver_tensor)
It always worked in the past when I didn't have any transformations in my parsing function, but it now fails with the error:
cloud.ml.prediction.prediction_utils.PredictionError: Failed to run the provided model: Exception during running the graph: Cannot feed value of shape (2, 1) for Tensor u'Placeholder_2:0', which has shape '(?,)' (Error code: 2)
I think I have to apply the transformation to "x2" in my serving function, but I don't know how. Any help would be greatly appreciated.
Following this link,
I processed feature "x3" after creating the receiver_tensor. Splitting the string in the serving function required squeezing the tensor before splitting:
def serving_input_fn():
    receiver_tensor = {}
    receiver_tensor["x1"] = tf.placeholder(tf.string, shape=[None], name="x1")
    receiver_tensor["label"] = tf.placeholder(tf.int32, shape=[None], name="x2")
    receiver_tensor["x2"] = tf.placeholder(tf.string, shape=[None],
                                           name="string")
    features = {
        key: tf.expand_dims(tensor, -1)
        for key, tensor in receiver_tensor.items()
    }
    features["x3"] = tf.string_split(tf.squeeze(features["x2"]), ',')
    return tf.estimator.export.ServingInputReceiver(features, receiver_tensor)
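To make the squeeze step concrete, here is a small hedged sketch (TF 1.x, with dummy values of my own) showing that tf.string_split expects a rank-1 string tensor, so the [None, 1] tensor produced by expand_dims is squeezed back before splitting:
import tensorflow as tf

x2 = tf.placeholder(tf.string, shape=[None])             # e.g. ["a,b", "c,d"]
x2_expanded = tf.expand_dims(x2, -1)                     # shape [None, 1], as in `features`
x3 = tf.string_split(tf.squeeze(x2_expanded), ',')       # squeeze back to rank 1 before splitting

with tf.Session() as sess:
    # SparseTensorValue with values [b'a', b'b', b'c', b'd']
    print(sess.run(x3, feed_dict={x2: ["a,b", "c,d"]}))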

Create SavedModel for BERT

I'm using this Colab for the BERT model.
In the last cells, in order to make predictions, we have:
def getPrediction(in_sentences):
    labels = ["Negative", "Positive"]
    input_examples = [run_classifier.InputExample(guid="", text_a=x, text_b=None, label=0) for x in in_sentences]  # here, "" is just a dummy label
    input_features = run_classifier.convert_examples_to_features(input_examples, label_list, MAX_SEQ_LENGTH, tokenizer)
    predict_input_fn = run_classifier.input_fn_builder(features=input_features, seq_length=MAX_SEQ_LENGTH, is_training=False, drop_remainder=False)
    predictions = estimator.predict(predict_input_fn)
    return [(sentence, prediction['probabilities'], labels[prediction['labels']]) for sentence, prediction in zip(in_sentences, predictions)]
pred_sentences = [
    "That movie was absolutely awful",
    "The acting was a bit lacking",
    "The film was creative and surprising",
    "Absolutely fantastic!"
]
predictions = getPrediction(pred_sentences)
I want to create a 'SavedModel' to be used with TF serving. How to create a SavedModel for this model?
Normally I would define the following:
def serving_input_fn():
    """Create serving input function to be able to serve predictions later
    using provided inputs
    :return:
    """
    feature_placeholders = {
        'sentence': tf.placeholder(tf.string, [None]),
    }
    return tf.estimator.export.ServingInputReceiver(feature_placeholders,
                                                    feature_placeholders)
latest_ckpt = tf.train.latest_checkpoint(OUTPUT_DIR)
last_eval = estimator.evaluate(input_fn=test_input_fn, steps=None, checkpoint_path=latest_ckpt)
# Export the model to GCS for serving.
exporter = tf.estimator.LatestExporter('exporter', serving_input_fn, exports_to_keep=None)
exporter.export(estimator, OUTPUT_DIR, latest_ckpt, last_eval, is_the_final_export=True)
Not sure how to define my tf.estimator.export.ServingInputReceiver
If you look at the create_model function in the notebook, it takes some arguments. These are the features which will be passed to the model.
You need to update the serving_input_fn function to include them:
def serving_input_fn():
    feature_spec = {
        "input_ids": tf.FixedLenFeature([MAX_SEQ_LENGTH], tf.int64),
        "input_mask": tf.FixedLenFeature([MAX_SEQ_LENGTH], tf.int64),
        "segment_ids": tf.FixedLenFeature([MAX_SEQ_LENGTH], tf.int64),
        "label_ids": tf.FixedLenFeature([], tf.int64)
    }
    serialized_tf_example = tf.placeholder(dtype=tf.string,
                                           shape=[None],
                                           name='input_example_tensor')
    receiver_tensors = {'example': serialized_tf_example}
    features = tf.parse_example(serialized_tf_example, feature_spec)
    return tf.estimator.export.ServingInputReceiver(features, receiver_tensors)
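With this receiver, a client has to send serialized tf.train.Example protos. As a hedged sketch (the helper name and dummy values are mine; the feature names follow the feature_spec above), the request payload could be built like this:
import tensorflow as tf

def make_serialized_example(input_ids, input_mask, segment_ids, label_id=0):
    # Pack the BERT input features into a tf.train.Example and serialize it.
    feature = {
        "input_ids": tf.train.Feature(int64_list=tf.train.Int64List(value=input_ids)),
        "input_mask": tf.train.Feature(int64_list=tf.train.Int64List(value=input_mask)),
        "segment_ids": tf.train.Feature(int64_list=tf.train.Int64List(value=segment_ids)),
        "label_ids": tf.train.Feature(int64_list=tf.train.Int64List(value=[label_id])),
    }
    example = tf.train.Example(features=tf.train.Features(feature=feature))
    return example.SerializeToString()

# Lists of length MAX_SEQ_LENGTH would then be fed to the 'example' receiver tensor.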

How to create Tensorflow serving_input_receiver_fn with multiple features?

In the TF guide on saving models there is a paragraph on serving_input_receiver_fn that talks about implementing functions for preprocessing logic. I'm trying to do some normalization of input data for a DNNRegressor. Their code for the function looks like this:
feature_spec = {'foo': tf.FixedLenFeature(...),
                'bar': tf.VarLenFeature(...)}

def serving_input_receiver_fn():
    """An input receiver that expects a serialized tf.Example."""
    serialized_tf_example = tf.placeholder(dtype=tf.string,
                                           shape=[default_batch_size],
                                           name='input_example_tensor')
    receiver_tensors = {'examples': serialized_tf_example}
    features = tf.parse_example(serialized_tf_example, feature_spec)
    return tf.estimator.export.ServingInputReceiver(features, receiver_tensors)
My code looks like this:
feat_cols = [
    tf.feature_column.numeric_column(key="FEATURE1"),
    tf.feature_column.numeric_column(key="FEATURE2")
]

def serving_input_receiver_fn():
    feature_spec = tf.feature_column.make_parse_example_spec(feat_cols)
    default_batch_size = 1
    serialized_tf_example = tf.placeholder(dtype=tf.string, shape=[default_batch_size], name='tf_example')
    receiver_tensors = {'examples': serialized_tf_example}
    features = tf.parse_example(serialized_tf_example, feature_spec)
    fn_norm1 = lambda FEATURE1: normalize_input_data('FEATURE1', FEATURE1)
    fn_norm2 = lambda FEATURE2: normalize_input_data('FEATURE2', FEATURE2)
    features['FEATURE1'] = tf.map_fn(fn_norm1, features['FEATURE1'], dtype=tf.float32)
    features['FEATURE2'] = tf.map_fn(fn_norm2, features['FEATURE2'], dtype=tf.float32)
    return tf.estimator.export.ServingInputReceiver(features, receiver_tensors)
After all of that, the saved model has none of my features in the graph. I'm trying to figure out how this works if you have more than one feature you are trying to pass.
I created an example using the Keras MPG data. It is located here:
The features argument of ServingInputReceiver is passed directly to your model function. What you want is receiver_tensors or receiver_tensors_alternatives, that is, the second and third arguments to the ServingInputReceiver constructor.
For example, you may do this
serialized_tf_example = tf.placeholder(dtype=tf.string, shape=[default_batch_size], name='tf_example')
receiver_tensors = {'examples': serialized_tf_example}
raw_features = tf.parse_example(serialized_tf_example, feature_spec)
fn_norm1 = lambda FEATURE1: normalize_input_data('FEATURE1', FEATURE1)
fn_norm2 = lambda FEATURE2: normalize_input_data('FEATURE2', FEATURE2)
features = {}
features['FEATURE1'] = tf.map_fn(fn_norm1, raw_features['FEATURE1'], dtype=tf.float32)
features['FEATURE2'] = tf.map_fn(fn_norm2, raw_features['FEATURE2'], dtype=tf.float32)
return tf.estimator.export.ServingInputReceiver(
    features=features,
    receiver_tensors=receiver_tensors,
    receiver_tensors_alternatives={'SOME_KEY': raw_features})
If you don't ever need to feed the network with an Example proto, you can skip it entirely.
raw_features = {'FEATURE1': tf.placeholder(...), 'FEATURE2': tf.placeholder(...)}
features = preprocess(raw_features)
return tf.estimator.export.ServingInputReceiver(features, {'SOME_OTHER_KEY': raw_features})

What does `^` mean in tensorflow PB file?

I wrote the following code and executed it, and a PB file called test.pb was generated.
import tensorflow as tf
import numpy as np

x = tf.placeholder(tf.float32, shape=[1, 2, 3, 4], name="x")
mu, sigma = tf.nn.moments(x, [0, 1, 2])
in_data = np.array([i for i in range(24)]).reshape([1, 2, 3, 4])
sess = tf.InteractiveSession()
sess.run(tf.global_variables_initializer())
print(sess.run([mu, sigma], feed_dict={x: in_data}))
tf.train.write_graph(sess.graph_def, "./", "test.pb")
In test.pb, there's a node:
node {
  name: "moments/normalize/divisor"
  op: "Reciprocal"
  input: "moments/sufficient_statistics/Const"
  input: "^moments/sufficient_statistics/mean_ss"
  input: "^moments/sufficient_statistics/var_ss"
  attr {
    key: "T"
    value {
      type: DT_FLOAT
    }
  }
}
My question is: what does the punctuation ^ mean in ^moments/sufficient_statistics/mean_ss and ^moments/sufficient_statistics/var_ss?