tensorflow 2.3
ubuntu16.04
python=3.7.7
When using tf.keras.Input with sparse=True, the input tensor-info names in the serving signature are unreadable, e.g. args_0, args_0_1, args_0_2. As a result, it is very hard to tell the inputs apart when a model uses multiple sparse inputs.
Source code / logs
import os
import tensorflow as tf

def make_parse_example_test(serialized_example):
    # feature spec describing how each example is parsed
    data_dict_test = {  # parse the example
        'label': tf.io.FixedLenFeature([1], tf.float32),
        'features': tf.io.FixedLenFeature([38], tf.float32),
        # 'emb_arr': tf.io.FixedLenFeature([100], tf.float32),
        'scid_index': tf.io.VarLenFeature(tf.int64),
    }
    features = tf.io.parse_single_example(serialized_example, features=data_dict_test)
    label = features.pop('label')
    return features, label
def batch_input(file_dir, batchsize):
    # check whether the path is a directory and build the list of input files
    if os.path.isdir(file_dir):
        files = os.listdir(file_dir)
        filenamequeues = list(map(lambda x: file_dir + x, files))
    else:
        filenamequeues = [file_dir]
    print(filenamequeues)
    dataset = tf.data.TFRecordDataset(filenamequeues)
    # dataset = dataset.batch(batchsize)
    dataset = dataset.map(make_parse_example_test, num_parallel_calls=4)
    dataset = dataset.batch(batchsize)
    # read the data, shuffle it, and split it into batches
    dataset = dataset.prefetch(-1)
    return dataset
class EmbeddingLayer(tf.keras.layers.Layer):
    def __init__(self, input_dim, output_dim):
        super(EmbeddingLayer, self).__init__(trainable=True)
        self.params = tf.Variable(tf.random.truncated_normal([input_dim, output_dim]), trainable=True)

    def call(self, inputs):
        param = tf.nn.safe_embedding_lookup_sparse(self.params, inputs)
        return param

    def get_config(self):
        return super(EmbeddingLayer, self).get_config()
# leaky_relu is not defined in the original report; tf.nn.leaky_relu is a reasonable stand-in
leaky_relu = tf.nn.leaky_relu

def models():
    feature1 = tf.keras.layers.Input(shape=[38], name='features', dtype=tf.float32)
    scid_index = tf.keras.layers.Input(shape=[None], name='scid_index', dtype=tf.int64, sparse=True)
    scid_em = EmbeddingLayer(780000, 50)(scid_index)
    feature_all = tf.keras.layers.concatenate([
        feature1, scid_em
    ])
    h1 = tf.keras.layers.Dense(256, activation=leaky_relu, name='h1')(feature_all)
    h2 = tf.keras.layers.Dense(256, activation=leaky_relu, name="h2")(h1)
    h3 = tf.keras.layers.Dense(1, activation=leaky_relu, name="h3")(h2)
    output = tf.keras.layers.Activation(activation="sigmoid")(h3)
    out = tf.keras.models.Model(
        inputs=[feature1, scid_index],
        outputs=[output]
    )
    return out
train = batch_input('./part-r-00001',512)
model = models()
model.compile(loss="binary_crossentropy", optimizer="adam",
              metrics=[tf.keras.metrics.AUC(),
                       tf.keras.metrics.Precision(),
                       tf.keras.metrics.Recall(),
                       tf.keras.metrics.BinaryAccuracy()])
model.fit(train, epochs=1,verbose=1,class_weight={0:1,1:2},steps_per_epoch=5)
print(model.input_names)
model.save("./model3test")
Examining the exported model:
$ saved_model_cli show --dir ./model3test --all
MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:
signature_def['__saved_model_init_op']:
The given SavedModel SignatureDef contains the following input(s):
The given SavedModel SignatureDef contains the following output(s):
outputs['__saved_model_init_op'] tensor_info:
dtype: DT_INVALID
shape: unknown_rank
name: NoOp
Method name is:
signature_def['serving_default']:
The given SavedModel SignatureDef contains the following input(s):
inputs['args_0'] tensor_info:
dtype: DT_INT64
shape: (-1, 2)
name: serving_default_args_0:0
inputs['args_0_1'] tensor_info:
dtype: DT_INT64
shape: (-1)
name: serving_default_args_0_1:0
inputs['args_0_2'] tensor_info:
dtype: DT_INT64
shape: (2)
name: serving_default_args_0_2:0
The given SavedModel SignatureDef contains the following output(s):
outputs['label'] tensor_info:
dtype: DT_FLOAT
shape: (-1, 1)
name: StatefulPartitionedCall_18:0
Method name is: tensorflow/serving/predict
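One possible workaround (just a sketch, not an official fix) is to export an explicit serving signature that takes the sparse input's components as separately named dense tensors and rebuilds the SparseTensor inside the function. The component names (scid_index_indices, scid_index_values, scid_index_dense_shape) and the 'label' output key below are illustrative:
# Hedged workaround sketch: give the sparse input's components readable names
@tf.function(input_signature=[
    tf.TensorSpec([None, 38], tf.float32, name='features'),
    tf.TensorSpec([None, 2], tf.int64, name='scid_index_indices'),
    tf.TensorSpec([None], tf.int64, name='scid_index_values'),
    tf.TensorSpec([2], tf.int64, name='scid_index_dense_shape'),
])
def serve_fn(features, indices, values, dense_shape):
    # rebuild the sparse input from its named components
    scid_index = tf.SparseTensor(indices, values, dense_shape)
    return {'label': model({'features': features, 'scid_index': scid_index})}

model.save("./model3test", save_format='tf',
           signatures={'serving_default': serve_fn})
With this, the serving signature lists one tensor-info entry per named component instead of args_0, args_0_1, args_0_2.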
Related
I have an ML model developed with Keras, more precisely with the Functional API. Once I save the model and run the saved_model_cli tool on it:
$ saved_model_cli show --dir /serving_model_folder/1673549934 --tag_set serve --signature_def serving_default
2023-01-12 10:59:50.836255: I tensorflow/core/util/util.cc:169] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
The given SavedModel SignatureDef contains the following input(s):
inputs['f1'] tensor_info:
dtype: DT_FLOAT
shape: (-1, 1)
name: serving_default_f1:0
inputs['f2'] tensor_info:
dtype: DT_FLOAT
shape: (-1, 1)
name: serving_default_f2:0
inputs['f3'] tensor_info:
dtype: DT_FLOAT
shape: (-1, 1)
name: serving_default_f3:0
inputs['f4'] tensor_info:
dtype: DT_FLOAT
shape: (-1, 1)
name: serving_default_f4:0
The given SavedModel SignatureDef contains the following output(s):
outputs['output_0'] tensor_info:
dtype: DT_FLOAT
shape: (-1)
name: StatefulPartitionedCall_1:0
outputs['output_1'] tensor_info:
dtype: DT_FLOAT
shape: (-1)
name: StatefulPartitionedCall_1:1
outputs['output_2'] tensor_info:
dtype: DT_FLOAT
shape: (-1)
name: StatefulPartitionedCall_1:2
Method name is: tensorflow/serving/predict
As you can see, the 3 output attributes are named: output_0, output_1, and output_2. This is how I'm instantiating my model:
input_layers = {
    'f1': Input(shape=(1,), name='f1'),
    'f2': Input(shape=(1,), name='f2'),
    'f3': Input(shape=(1,), name='f3'),
    'f4': Input(shape=(1,), name='f4'),
}
x = layers.concatenate(input_layers.values())
x = layers.Dense(32, activation='relu', name="dense")(x)
output_layers = {
    't1': layers.Dense(1, activation='sigmoid', name='t1')(x),
    't2': layers.Dense(1, activation='sigmoid', name='t2')(x),
    't3': layers.Dense(1, activation='sigmoid', name='t3')(x),
}
model = models.Model(input_layers, output_layers)
I was hoping that the saved model would name the output attributes t1, t2, and t3. Searching online, I see that I can rename them if I subclass tf.Module:
class CustomModuleWithOutputName(tf.Module):
    def __init__(self):
        super(CustomModuleWithOutputName, self).__init__()
        self.v = tf.Variable(1.)

    @tf.function(input_signature=[tf.TensorSpec([], tf.float32)])
    def __call__(self, x):
        return {'custom_output_name': x * self.v}

module_output = CustomModuleWithOutputName()
call_output = module_output.__call__.get_concrete_function(tf.TensorSpec(None, tf.float32))
module_output_path = os.path.join(tmpdir, 'module_with_output_name')
tf.saved_model.save(module_output, module_output_path,
                    signatures={'serving_default': call_output})
But I would like to keep using the Functional API. Is there any way to specify the name of the output attributes while using Keras Functional API?
I managed to pull this off a different way. It relies on a custom serving signature and adds a new layer whose only job is to rename the output tensors.
import tensorflow as tf
from tensorflow.keras import layers

class CustomModuleWithOutputName(layers.Layer):
    def __init__(self):
        super(CustomModuleWithOutputName, self).__init__()

    def call(self, x):
        return {'t1': tf.identity(x[0]),
                't2': tf.identity(x[1]),
                't3': tf.identity(x[2])}

def _get_tf_examples_serving_signature(model):
    @tf.function(input_signature=[tf.TensorSpec(shape=[None, 1], dtype=tf.float32, name='f1'),
                                  tf.TensorSpec(shape=[None, 1], dtype=tf.float32, name='f2'),
                                  tf.TensorSpec(shape=[None, 1], dtype=tf.float32, name='f3'),
                                  tf.TensorSpec(shape=[None, 1], dtype=tf.float32, name='f4')])
    def serve_tf_examples_fn(f1, f2, f3, f4):
        """Returns the output to be used in the serving signature."""
        inputs = {'f1': f1, 'f2': f2, 'f3': f3, 'f4': f4}
        outputs = model(inputs)
        return model.naming_layer(outputs)
    return serve_tf_examples_fn

# This is the same model mentioned in the question (a Functional API model)
model = get_model()

# Any property name will do as long as it is not reserved
model.naming_layer = CustomModuleWithOutputName()

signatures = {
    'serving_default': _get_tf_examples_serving_signature(model),
}
model.save(output_dir, save_format='tf', signatures=signatures)
The takeaway from this code is the CustomModuleWithOutputName class. It's a subclass of Keras' Layer, and all it does is map the output indices to names. This layer is added to the model's graph in the serving_default signature before the model is saved. It's a somewhat clumsy solution, but it works; note that it relies on the order of the tensors returned by the original Functional API model.
I was hoping my original approach would work, but since it doesn't, at least this one foots the bill.
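A small way to double-check that the renaming took effect (assuming output_dir from the snippet above) is to reload the SavedModel and inspect the signature's structured outputs:
loaded = tf.saved_model.load(output_dir)
serving_fn = loaded.signatures['serving_default']
print(serving_fn.structured_outputs)  # the keys should now be 't1', 't2' and 't3'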
I am new to TensorFlow Serving; here's the data I post:
data_0 = {'inputs/input_x': mfcc[0], "inputs/is_training":False, "inputs/keep_prob":1}
data = json.dumps({"signature_name":'serving_default', 'instances':[data_0]})
headers = {"Content-type": "application/json"}
json_response = requests.post(URL, data=data, headers=headers)
I got an error that comes from the is_training flag of batch normalization (and I think the same applies to the dropout rate):
{ "error": "The second input must be a scalar, but it has shape [1]\n\t [[{{node conv_layer2/conv2/batch_normalization/cond/Switch}}]]" }
Then I saw a similar issue and modified my code to:
data_0 = {'inputs/input_x': mfcc[0], "inputs/is_training":[False], "inputs/keep_prob":[1]}
Then I got one more dimension:
{ "error": "The second input must be a scalar, but it has shape [1,1]\n\t [[{{node conv_layer2/conv2/batch_normalization/cond/Switch}}]]" }
And I tried to post without the brackets, like:
data = json.dumps({"signature_name":'serving_default', 'instances':data_0})
I got :
"error": "JSON Value:{...} Excepting \'instances\' to be an list/array" }
Information about my model:
MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:
signature_def['serving_default']:
The given SavedModel SignatureDef contains the following input(s):
inputs['inputs/input_x'] tensor_info:
dtype: DT_FLOAT
shape: (-1, 34, 20)
name: inputs/input_x:0
inputs['inputs/is_training'] tensor_info:
dtype: DT_BOOL
shape: unknown_rank
name: inputs/is_training:0
inputs['inputs/keep_prob'] tensor_info:
dtype: DT_FLOAT
shape: unknown_rank
name: inputs/keep_prob:0
The given SavedModel SignatureDef contains the following output(s):
outputs['Softmax'] tensor_info:
dtype: DT_FLOAT
shape: (-1, 2)
name: Softmax:0
Method name is: tensorflow/serving/predict
Need some help here, thanks!
For now I haven't gotten an answer, so I retrained my model without the is_training flag, and that works.
So maybe there is a problem with boolean values.
When using instances as the input key, all inputs must share the same 0-th (batch) dimension. Try inputs instead of instances. For example,
data = json.dumps({"signature_name":'serving_default', 'inputs':data_0})
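With the columnar inputs format, each value is fed as a whole tensor, so scalar flags can stay scalars. A rough sketch of the full request, reusing URL, headers, and the mfcc data from the question (assuming mfcc[0] has shape (34, 20); note the batch dimension on input_x has to be included explicitly here):
import json
import requests

data_0 = {'inputs/input_x': [mfcc[0]],   # batch of one example, shape (1, 34, 20)
          'inputs/is_training': False,   # plain scalar, matching the unknown-rank bool input
          'inputs/keep_prob': 1.0}       # plain scalar
data = json.dumps({"signature_name": "serving_default", "inputs": data_0})
headers = {"Content-type": "application/json"}
json_response = requests.post(URL, data=data, headers=headers)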
I'm trying to save a Faster R-CNN TF Hub model and use it with AI Platform via gcloud ai-platform local predict. The error I'm getting is:
Failed to run the provided model: Exception during running the graph: [_Derived_] Table not initialized.\n\t [[{{node hub_input/index_to_string_1_Lookup}}]]\n\t [[StatefulPartitionedCall_1/StatefulPartitionedCall/model/keras_layer/StatefulPartitionedCall]] (Error code: 2)\n'
The code for saving the model:
import tensorflow as tf
import tensorflow_hub as hub

model_url = "https://tfhub.dev/google/faster_rcnn/openimages_v4/inception_resnet_v2/1"

input = tf.keras.Input(shape=(), dtype=tf.string)
decoded = tf.keras.layers.Lambda(
    lambda y: tf.map_fn(
        lambda x: tf.image.resize(
            tf.image.convert_image_dtype(
                tf.image.decode_jpeg(x, channels=3), tf.float32), (416, 416)
        ),
        tf.io.decode_base64(y), dtype=tf.float32)
)(input)
results = hub.KerasLayer(model_url, signature_outputs_as_dict=True)(decoded)
model = tf.keras.Model(inputs=input, outputs=results)
model.save("./saved_model", save_format="tf")
The model works when I load it with tf.keras.models.load_model("./saved_model") and predict with it, but not with ai-platform local predict.
Command for ai-platform local predictions:
gcloud ai-platform local predict --model-dir ./saved_model --json-instances data.json --framework TENSORFLOW
Versions:
python 3.7.0
tensorflow==2.2.0
tensorflow-hub==0.7.0
Output of saved_model_cli:
MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:
signature_def['__saved_model_init_op']:
The given SavedModel SignatureDef contains the following input(s):
The given SavedModel SignatureDef contains the following output(s):
outputs['__saved_model_init_op'] tensor_info:
dtype: DT_INVALID
shape: unknown_rank
name: NoOp
Method name is:
signature_def['serving_default']:
The given SavedModel SignatureDef contains the following input(s):
inputs['image_bytes'] tensor_info:
dtype: DT_STRING
shape: (-1)
name: serving_default_image_bytes:0
The given SavedModel SignatureDef contains the following output(s):
outputs['keras_layer'] tensor_info:
dtype: DT_FLOAT
shape: (-1, 4)
name: StatefulPartitionedCall_1:0
outputs['keras_layer_1'] tensor_info:
dtype: DT_STRING
shape: (-1, 1)
name: StatefulPartitionedCall_1:1
outputs['keras_layer_2'] tensor_info:
dtype: DT_INT64
shape: (-1, 1)
name: StatefulPartitionedCall_1:2
outputs['keras_layer_3'] tensor_info:
dtype: DT_STRING
shape: (-1, 1)
name: StatefulPartitionedCall_1:3
outputs['keras_layer_4'] tensor_info:
dtype: DT_FLOAT
shape: (-1, 1)
name: StatefulPartitionedCall_1:4
Method name is: tensorflow/serving/predict
Any ideas on how to fix the error?
The problem is that your input is being interpreted as a scalar. Do:
input = tf.keras.Input(shape=(1,), dtype=tf.string)
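Note that with shape=(1,), each element seen by tf.map_fn is a length-1 string vector rather than a scalar, so the decode step may need a small adjustment as well. A sketch under that assumption, keeping the rest of the original pipeline:
input = tf.keras.Input(shape=(1,), dtype=tf.string)
decoded = tf.keras.layers.Lambda(
    lambda y: tf.map_fn(
        lambda x: tf.image.resize(
            tf.image.convert_image_dtype(
                # x has shape (1,) here, so index into it to get a scalar string
                tf.image.decode_jpeg(x[0], channels=3), tf.float32), (416, 416)),
        tf.io.decode_base64(y), dtype=tf.float32)
)(input)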
How should I save my trained model using tf.saved_model.simple_save so that I can make requests to it with TensorFlow Serving?
import tensorflow as tf

# Note: w, b, init and the MNIST input pipeline (mnist) are defined earlier in the
# original code and are omitted from the question.
x = tf.placeholder(tf.float32, [None, 784])
y = tf.placeholder(tf.float32, [None, 10])
values = tf.placeholder(tf.float32, [None, 1])

layer = tf.add(tf.matmul(x, w), b)
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(labels=y, logits=layer))
optimize = tf.train.GradientDescentOptimizer(0.001).minimize(cross_entropy)
correct_pred = tf.equal(tf.argmax(layer, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))

with tf.Session() as sess:
    sess.run(init)
    for _ in range(10000):
        batch = mnist.train.next_batch(100)
        sess.run(accuracy, feed_dict={x: batch[0], y: batch[1]})

    !rm -rf "/model"
    export_dir = "/model/1"

    # Problem here
    tf.saved_model.simple_save(
        sess,
        export_dir=export_dir,
        inputs={"x": x},
        outputs={"accuracy": accuracy}
    )
When I run:
!saved_model_cli show --dir {export_dir} --all
I get:
MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:
signature_def['serving_default']:
The given SavedModel SignatureDef contains the following input(s):
inputs['x'] tensor_info:
dtype: DT_FLOAT
shape: (-1, 784)
name: Placeholder:0
The given SavedModel SignatureDef contains the following output(s):
outputs['accuracy'] tensor_info:
dtype: DT_FLOAT
shape: ()
name: Mean_1:0
Method name is: tensorflow/serving/predict
My output has shape () instead of something like (-1, x).
When I send a request, I get no response, since accuracy is just an operation. How can I change it to a proper model output, or how can I use {t.name for t in model.outputs} as is done in Keras?
The output passed to simple_save is not correct: it should be layer, not accuracy.
The problem is in the last line of the code, outputs={"accuracy": accuracy}. The issue is resolved if accuracy is replaced with layer, so the code becomes:
tf.saved_model.simple_save(sess, export_dir=export_dir, inputs={"x": x},
                           outputs={"Predicted_Output": layer})
While using the following code and doing a gcloud ml-engine local predict I get:
InvalidArgumentError (see above for traceback): You must feed a value
for placeholder tensor 'Placeholder' with dtype string and shape [?]
[[Node: Placeholder = Placeholder[dtype=DT_STRING, shape=[?], _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]] (Error code: 2)
import tensorflow as tf

tf_files_path = './tf'
# os.makedirs(tf_files_path)  # temp dir
estimator = tf.keras.estimator.model_to_estimator(keras_model_path="model_data/yolo.h5",
                                                  model_dir=tf_files_path)
# up_one_dir(os.path.join(tf_files_path, 'keras'))

def serving_input_receiver_fn():
    def prepare_image(image_str_tensor):
        image = tf.image.decode_jpeg(image_str_tensor,
                                     channels=3)
        image = tf.divide(image, 255)
        image = tf.image.convert_image_dtype(image, tf.float32)
        return image

    # Ensure model is batchable
    # https://stackoverflow.com/questions/52303403/
    input_ph = tf.placeholder(tf.string, shape=[None])
    images_tensor = tf.map_fn(
        prepare_image, input_ph, back_prop=False, dtype=tf.float32)
    # 'model' refers to the Keras model loaded from model_data/yolo.h5 (not shown in the question)
    return tf.estimator.export.ServingInputReceiver(
        {model.input_names[0]: images_tensor},
        {'image_bytes': input_ph})

export_path = './export'
estimator.export_savedmodel(
    export_path,
    serving_input_receiver_fn=serving_input_receiver_fn)
The json I am sending to the ml engine looks like this:
{"image_bytes": {"b64": "/9j/4AAQSkZJRgABAQAAAQABAAD/2w..."}}
When not doing a local prediction, but sending it to ML engine itself, I get:
ERROR: (gcloud.ml-engine.predict) HTTP request failed. Response: {
"error": {
"code": 500,
"message": "Internal error encountered.",
"status": "INTERNAL"
}
}
The saved_model_cli gives:
saved_model_cli show --all --dir export/1547848897/
MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:
signature_def['serving_default']:
The given SavedModel SignatureDef contains the following input(s):
inputs['image_bytes'] tensor_info:
dtype: DT_STRING
shape: (-1)
name: Placeholder:0
The given SavedModel SignatureDef contains the following output(s):
outputs['conv2d_59'] tensor_info:
dtype: DT_FLOAT
shape: (-1, -1, -1, 255)
name: conv2d_59/BiasAdd:0
outputs['conv2d_67'] tensor_info:
dtype: DT_FLOAT
shape: (-1, -1, -1, 255)
name: conv2d_67/BiasAdd:0
outputs['conv2d_75'] tensor_info:
dtype: DT_FLOAT
shape: (-1, -1, -1, 255)
name: conv2d_75/BiasAdd:0
Method name is: tensorflow/serving/predict
Does anyone see what is going wrong here?
The issue has been resolved. The output of the model turned out to be too big for ML Engine to send back, and the failure surfaced only as a generic 500 internal error rather than a more specific exception. We added some post-processing steps to the model to shrink the output, and it works fine now.
The error from the gcloud ml-engine local predict command appears to be a bug: the model now works on ML Engine itself, but local prediction still returns this error.