I have trained a ResNet with an additional layer to build a dog-vs-cat classifier, and I am serving it from localhost using:
tensorflow_model_server --port=8500 --rest_api_port=8501 \
--model_name=resnet_model \
--model_base_path=/home/pc3/deep_learning/models/resnet
Running saved_model_cli show --all --dir resnet/1 shows this signature:
MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:
signature_def['__saved_model_init_op']:
The given SavedModel SignatureDef contains the following input(s):
The given SavedModel SignatureDef contains the following output(s):
outputs['__saved_model_init_op'] tensor_info:
dtype: DT_INVALID
shape: unknown_rank
name: NoOp
Method name is:
signature_def['serving_default']:
The given SavedModel SignatureDef contains the following input(s):
inputs['keras_layer_1_input'] tensor_info:
dtype: DT_FLOAT
shape: (-1, 256, 256, 3)
name: serving_default_keras_layer_1_input:0
The given SavedModel SignatureDef contains the following output(s):
outputs['dense_2'] tensor_info:
dtype: DT_FLOAT
shape: (-1, 2)
name: StatefulPartitionedCall:0
Method name is: tensorflow/serving/predict
Concrete Functions:
Function Name: '__call__'
Option #1
Callable with:
Argument #1
inputs: TensorSpec(shape=(None, 256, 256, 3), dtype=tf.float32, name='inputs')
Argument #2
DType: bool
Value: False
Argument #3
DType: NoneType
Value: None
Option #2
Callable with:
Argument #1
keras_layer_1_input: TensorSpec(shape=(None, 256, 256, 3), dtype=tf.float32, name='keras_layer_1_input')
Argument #2
DType: bool
Value: True
Argument #3
DType: NoneType
Value: None
Option #3
Callable with:
Argument #1
inputs: TensorSpec(shape=(None, 256, 256, 3), dtype=tf.float32, name='inputs')
Argument #2
DType: bool
Value: True
Argument #3
DType: NoneType
Value: None
Option #4
Callable with:
Argument #1
keras_layer_1_input: TensorSpec(shape=(None, 256, 256, 3), dtype=tf.float32, name='keras_layer_1_input')
Argument #2
DType: bool
Value: False
Argument #3
DType: NoneType
Value: None
Function Name: '_default_save_signature'
Option #1
Callable with:
Argument #1
keras_layer_1_input: TensorSpec(shape=(None, 256, 256, 3), dtype=tf.float32, name='keras_layer_1_input')
Function Name: 'call_and_return_all_conditional_losses'
Option #1
Callable with:
Argument #1
keras_layer_1_input: TensorSpec(shape=(None, 256, 256, 3), dtype=tf.float32, name='keras_layer_1_input')
Argument #2
DType: bool
Value: False
Argument #3
DType: NoneType
Value: None
Option #2
Callable with:
Argument #1
keras_layer_1_input: TensorSpec(shape=(None, 256, 256, 3), dtype=tf.float32, name='keras_layer_1_input')
Argument #2
DType: bool
Value: True
Argument #3
DType: NoneType
Value: None
Option #3
Callable with:
Argument #1
inputs: TensorSpec(shape=(None, 256, 256, 3), dtype=tf.float32, name='inputs')
Argument #2
DType: bool
Value: False
Argument #3
DType: NoneType
Value: None
Option #4
Callable with:
Argument #1
inputs: TensorSpec(shape=(None, 256, 256, 3), dtype=tf.float32, name='inputs')
Argument #2
DType: bool
Value: True
Argument #3
DType: NoneType
Value: None
When I POST a JSON request to http://localhost:8501/v1/models/resnet_model/versions/1:predict with this body:
{
"signature_name": "serving_default",
"instances": [{"b64": "iVBORw0KGgoAAAA..."}]
}
where iVBORw0KGgoAAAA... is the base64-encoded value of a PNG image that I have already resized to 256x256, I get this error in response:
{
"error": "Failed to process element: 0 of 'instances' list. Error: INVALID_ARGUMENT: JSON Value: {\n \"b64\": \"iVBORw0KGgoAAAA...\"\n} Type: Object is not of expected type: float"
}
and when I try "signature_name": "__saved_model_init_op" instead, I get:
{
"error": "Failed to get input map for signature: __saved_model_init_op"
}
After tons of googling, I could not find any tutorial or code example covering this particular scenario, so I'm left clueless: what is the correct way to POST an image to this model?
Take a look at this resnet_client.
According to the signature_def of your model, it does not accept base64 input: the serving_default signature expects a DT_FLOAT tensor of shape (-1, 256, 256, 3), so MODEL_ACCEPT_JPG is False for your case. Follow the link to see how the data is sent to the server. (As an aside, __saved_model_init_op is the model's initialization op, not a servable signature, which is why requesting it fails.)
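In other words, decode and preprocess the image client-side and send raw floats in the instances list. A minimal sketch, assuming [0, 1] scaling (match whatever preprocessing you used in training):

import json
import numpy as np
import requests
from PIL import Image

# The signature expects DT_FLOAT (-1, 256, 256, 3), so decode the PNG here,
# not on the server.
img = Image.open("cat.png").convert("RGB").resize((256, 256))
arr = np.asarray(img, dtype=np.float32) / 255.0  # assumed [0, 1] scaling

payload = {"signature_name": "serving_default", "instances": [arr.tolist()]}
resp = requests.post("http://localhost:8501/v1/models/resnet_model:predict",
                     data=json.dumps(payload))
print(resp.json())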
I am using tensorflow==1.14. During periodic evaluation, I save the best model using tf.estimator.BestExporter. I have the following questions.
1)
When I try to convert the saved_model.pb stored by BestExporter during training to a frozen graph using the freeze_graph() function, the usual input/output node names ("image_tensor" / ['detection_boxes', 'detection_classes', 'detection_scores', 'num_detections']) are not present in the saved model. When I inspect it using saved_model_cli, the input/output names are completely different from those in the saved model produced by export_inference_graph.py from a checkpoint and graph with pipeline.config.
"""
export_inference_graph's saved model SignatureDef
signature_def['serving_default']:
The given SavedModel SignatureDef contains the following input(s):
inputs['serialized_example'] tensor_info:
dtype: DT_STRING
shape: ()
name: tf_example:0
The given SavedModel SignatureDef contains the following output(s):
outputs['detection_boxes'] tensor_info:
dtype: DT_FLOAT
shape: (1, 150, 4)
name: Postprocessor/BatchMultiClassNonMaxSuppression/stack_4:0
outputs['detection_classes'] tensor_info:
dtype: DT_FLOAT
shape: (1, 150)
name: Postprocessor/BatchMultiClassNonMaxSuppression/stack_6:0
outputs['detection_scores'] tensor_info:
dtype: DT_FLOAT
shape: (1, 150)
name: Postprocessor/BatchMultiClassNonMaxSuppression/stack_5:0
outputs['num_detections'] tensor_info:
dtype: DT_FLOAT
shape: (1)
name: Postprocessor/ToFloat_3:0
Method name is: tensorflow/serving/predict
BestExporter SignatureDef
signature_def['serving_default']:
The given SavedModel SignatureDef contains the following input(s):
inputs['inputs'] tensor_info:
dtype: DT_UINT8
shape: (-1, -1, -1, 3)
name: image_tensor:0
The given SavedModel SignatureDef contains the following output(s):
outputs['detection_boxes'] tensor_info:
dtype: DT_FLOAT
shape: (-1, 150, 4)
name: detection_boxes:0
outputs['detection_classes'] tensor_info:
dtype: DT_FLOAT
shape: (-1, 150)
name: detection_classes:0
outputs['detection_scores'] tensor_info:
dtype: DT_FLOAT
shape: (-1, 150)
name: detection_scores:0
outputs['num_detections'] tensor_info:
dtype: DT_FLOAT
shape: (-1)
name: num_detections:0
Method name is: tensorflow/serving/predict
"""
As one can see, the two have completely different input/output names.
2)
An alternate approach I tried is directly using the saved_model.pb saved by BestExporter for inference. Inspecting the .pb file with saved_model_cli, the input it refers to is a string with no dimensions (see above), which again prevents me from using this approach: when I passed a numpy image for inference, it raised a shape mismatch error.
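For reference, a minimal sketch of how I would expect to drive the BestExporter signature printed above with the TF 1.x session API (the export path and the zero-valued image are placeholders):

import numpy as np
import tensorflow as tf  # tensorflow==1.14

with tf.Session(graph=tf.Graph()) as sess:
    # Load the SavedModel under the 'serve' tag, then feed/fetch the tensor
    # names reported by saved_model_cli above.
    tf.saved_model.loader.load(sess, ["serve"], "path/to/best_exporter/export")
    image = np.zeros((1, 300, 300, 3), dtype=np.uint8)  # placeholder image
    boxes, classes, scores, num = sess.run(
        ["detection_boxes:0", "detection_classes:0",
         "detection_scores:0", "num_detections:0"],
        feed_dict={"image_tensor:0": image})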
Can someone help me figure out how to use the SavedModel from BestExporter for inference, or how to convert it to a frozen graph with the correct input/output names so it can be used for inference?
Let me know if you need more information.
Thank you
I am new to TensorFlow Serving; here's the data I post:
import json
import requests

data_0 = {"inputs/input_x": mfcc[0], "inputs/is_training": False, "inputs/keep_prob": 1}
data = json.dumps({"signature_name": "serving_default", "instances": [data_0]})
headers = {"Content-type": "application/json"}
json_response = requests.post(URL, data=data, headers=headers)
I got an error that comes from the is_training flag of batch normalization (and I think the same would apply to the dropout rate):
{ "error": "The second input must be a scalar, but it has shape [1]\n\t [[{{node conv_layer2/conv2/batch_normalization/cond/Switch}}]]" }
Then I saw a similar issue and modified my code to:
data_0 = {'inputs/input_x': mfcc[0], "inputs/is_training":[False], "inputs/keep_prob":[1]}
Then I got one more dimension:
{ "error": "The second input must be a scalar, but it has shape [1,1]\n\t [[{{node conv_layer2/conv2/batch_normalization/cond/Switch}}]]" }
And I tried to post without the [], like:
data = json.dumps({"signature_name":'serving_default', 'instances':data_0})
I got:
{ "error": "JSON Value: {...} Excepting 'instances' to be an list/array" }
Information about my model:
MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:
signature_def['serving_default']:
The given SavedModel SignatureDef contains the following input(s):
inputs['inputs/input_x'] tensor_info:
dtype: DT_FLOAT
shape: (-1, 34, 20)
name: inputs/input_x:0
inputs['inputs/is_training'] tensor_info:
dtype: DT_BOOL
shape: unknown_rank
name: inputs/is_training:0
inputs['inputs/keep_prob'] tensor_info:
dtype: DT_FLOAT
shape: unknown_rank
name: inputs/keep_prob:0
The given SavedModel SignatureDef contains the following output(s):
outputs['Softmax'] tensor_info:
dtype: DT_FLOAT
shape: (-1, 2)
name: Softmax:0
Method name is: tensorflow/serving/predict
Need some help here, thanks!
For now I haven't gotten an answer, so I re-trained my model without the is_training flag, and that works.
So maybe there is a problem with boolean values.
When using instances as the input key, all inputs must share the same 0-th (batch) dimension, which a scalar like is_training cannot satisfy. Try the columnar inputs format instead of instances. For example,
data = json.dumps({"signature_name":'serving_default', 'inputs':data_0})
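A fuller sketch of that request, assuming mfcc and URL from the question (note that input_x gains an explicit batch dimension, while the scalars pass through with their own shapes):

import json
import requests

# Columnar "inputs" format: each named input keeps its own shape, so the
# scalar is_training and keep_prob no longer pick up a spurious batch dim.
data_0 = {"inputs/input_x": [mfcc[0]],
          "inputs/is_training": False,
          "inputs/keep_prob": 1.0}
data = json.dumps({"signature_name": "serving_default", "inputs": data_0})
json_response = requests.post(URL, data=data,
                              headers={"Content-type": "application/json"})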
I'm trying to save a faster R-CNN hub model and use it with AI Platform's gcloud ai-platform local predict. The error I'm getting is:
Failed to run the provided model: Exception during running the graph: [_Derived_] Table not initialized.\n\t [[{{node hub_input/index_to_string_1_Lookup}}]]\n\t [[StatefulPartitionedCall_1/StatefulPartitionedCall/model/keras_layer/StatefulPartitionedCall]] (Error code: 2)
The code for saving the model:
import tensorflow as tf
import tensorflow_hub as hub

model_url = "https://tfhub.dev/google/faster_rcnn/openimages_v4/inception_resnet_v2/1"

input = tf.keras.Input(shape=(), dtype=tf.string)
decoded = tf.keras.layers.Lambda(
    lambda y: tf.map_fn(
        lambda x: tf.image.resize(
            tf.image.convert_image_dtype(
                tf.image.decode_jpeg(x, channels=3), tf.float32), (416, 416)
        ),
        tf.io.decode_base64(y), dtype=tf.float32)
)(input)
results = hub.KerasLayer(model_url, signature_outputs_as_dict=True)(decoded)
model = tf.keras.Model(inputs=input, outputs=results)
model.save("./saved_model", save_format="tf")
The model works when I load it with tf.keras.models.load_model("./saved_model") and predict with it, but not with ai-platform local predict.
Command for ai-platform local predictions:
gcloud ai-platform local predict --model-dir ./saved_model --json-instances data.json --framework TENSORFLOW
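Each line of data.json holds one JSON instance; with the scalar string input above it would look roughly like this (the base64 value is a placeholder, and note that tf.io.decode_base64 expects URL-safe base64):

{"image_bytes": "iVBORw0KGgoAAAA..."}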
Versions:
python 3.7.0
tensorflow==2.2.0
tensorflow-hub==0.7.0
Output of saved_model_cli:
MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:
signature_def['__saved_model_init_op']:
The given SavedModel SignatureDef contains the following input(s):
The given SavedModel SignatureDef contains the following output(s):
outputs['__saved_model_init_op'] tensor_info:
dtype: DT_INVALID
shape: unknown_rank
name: NoOp
Method name is:
signature_def['serving_default']:
The given SavedModel SignatureDef contains the following input(s):
inputs['image_bytes'] tensor_info:
dtype: DT_STRING
shape: (-1)
name: serving_default_image_bytes:0
The given SavedModel SignatureDef contains the following output(s):
outputs['keras_layer'] tensor_info:
dtype: DT_FLOAT
shape: (-1, 4)
name: StatefulPartitionedCall_1:0
outputs['keras_layer_1'] tensor_info:
dtype: DT_STRING
shape: (-1, 1)
name: StatefulPartitionedCall_1:1
outputs['keras_layer_2'] tensor_info:
dtype: DT_INT64
shape: (-1, 1)
name: StatefulPartitionedCall_1:2
outputs['keras_layer_3'] tensor_info:
dtype: DT_STRING
shape: (-1, 1)
name: StatefulPartitionedCall_1:3
outputs['keras_layer_4'] tensor_info:
dtype: DT_FLOAT
shape: (-1, 1)
name: StatefulPartitionedCall_1:4
Method name is: tensorflow/serving/predict
Any ideas on how to fix the error?
The problem is that your input is being interpreted as a scalar. Do:
input = tf.keras.Input(shape=(1,), dtype=tf.string)
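With that change, the exported signature should take a (-1, 1) string tensor, so each instance in data.json wraps its base64 string in a one-element list, e.g. (placeholder value, assuming the input alias stays image_bytes):

{"image_bytes": ["iVBORw0KGgoAAAA..."]}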
I am trying to run a SavedModel using the C API.
When it comes to running TF_SessionRun, it always fails on various input nodes with the same kind of error:
TF_SessionRun status: 3:Input to reshape is a tensor with 6 values, but the requested shape has 36
TF_SessionRun status: 3:Input to reshape is a tensor with 19 values, but the requested shape has 361
TF_SessionRun status: 3:Input to reshape is a tensor with 3111 values, but the requested shape has 9678321
...
As can be seen, the number of requested shape values is always the square of the expected input size. It's quite odd.
The model runs fine with the saved_model_cli command.
The inputs are all either scalar DT_STRING or DT_FLOAT values; I'm not doing image recognition.
Here's the output of that command:
signature_def['serving_default']:
The given SavedModel SignatureDef contains the following input(s):
inputs['f1'] tensor_info:
dtype: DT_STRING
shape: (-1)
name: f1:0
inputs['f2'] tensor_info:
dtype: DT_STRING
shape: (-1)
name: f2:0
inputs['f3'] tensor_info:
dtype: DT_STRING
shape: (-1)
name: f3:0
inputs['f4'] tensor_info:
dtype: DT_FLOAT
shape: (-1)
name: f4:0
inputs['f5'] tensor_info:
dtype: DT_STRING
shape: (-1)
name: f5:0
The given SavedModel SignatureDef contains the following output(s):
outputs['o1_probs'] tensor_info:
dtype: DT_DOUBLE
shape: (-1, 2)
name: output_probs:0
outputs['o1_values'] tensor_info:
dtype: DT_STRING
shape: (-1, 2)
name: output_labels:0
outputs['predicted_o1'] tensor_info:
dtype: DT_STRING
shape: (-1, 1)
name: output_class:0
Method name is: tensorflow/serving/predict
Any clues as to what's going on are much appreciated. The saved_model.pb file comes from AutoML; my code merely queries that model. I don't change the graph.
It turns out that the issue was caused by me not using the TF_AllocateTensor function correctly.
The original code, which allocates a rank-0 (scalar) tensor, was:
TF_Tensor* t = TF_AllocateTensor(TF_STRING, nullptr, 0, sz);
when it appears it should have been a rank-1 tensor (the second and third arguments of TF_AllocateTensor are the dims array and the number of dims):
int64_t dims = 0;
TF_Tensor* t = TF_AllocateTensor(TF_STRING, &dims, 1, sz);
The following is the signature_def of the TensorFlow model being served:
MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:
signature_def['serving_default']:
The given SavedModel SignatureDef contains the following input(s):
inputs['in'] tensor_info:
dtype: DT_INT32
shape: (-1, 10)
name: input_sentences:0
The given SavedModel SignatureDef contains the following output(s):
outputs['out'] tensor_info:
dtype: DT_INT32
shape: (-1, 1)
name: output_sentences:0
Method name is: tensorflow/serving/predict
After serving this model, when I pass an input of shape [-1, 10], I get the following error:
"You must feed a value for placeholder tensor 'output_sentences' with dtype int32 and shape [?,1]"
even though output_sentences is part of outputs.
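For reference, a request matching the signature above would look roughly like this (the model name and token IDs are placeholders):

import json
import requests

# One instance is a single int32 sequence of length 10, per the (-1, 10) input.
data = json.dumps({"signature_name": "serving_default",
                   "instances": [[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]]})
response = requests.post("http://localhost:8501/v1/models/mymodel:predict", data=data)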
Please help me out.