I am using TensorRT to accelerate the inference speed of a Tacotron2 model, with TensorRT 5.0.2.6 and TensorFlow 1.13.0.rc0.
I converted the SavedModel to a TF-TRT SavedModel using the TensorRT API below:
import os
import tensorflow.contrib.tensorrt as trt  # TF 1.13 ships TF-TRT under contrib

trt.create_inference_graph(
    input_graph_def=None,
    outputs=None,
    max_batch_size=32,
    input_saved_model_dir=os.path.join(args.export_dir, args.version),
    output_saved_model_dir=args.output_saved_model_dir,
    precision_mode=args.precision_mode)
The output tensorrt_savedmodel.pb cannot be imported into TensorBoard for viewing, but it can be deployed with TF-Serving.
However, when a client sends a request to TF-Serving over gRPC, there is an error:
<_Rendezvous of RPC that terminated with:
status = StatusCode.INVALID_ARGUMENT
details = "The TF function for the TRT segment could not be empty
[[{{node model/inference/prenet/TRTEngineOp_33}}]]"
debug_error_string = "
{"created":"#1572417319.714936208","description":"Error received from peer ipv4:192.168.23.17:8500","file":"src/core/lib/surface/call.cc","file_line":1052,"grpc_message":"The TF function for the TRT segment could not be empty\n\t [[{{node model/inference/prenet/TRTEngineOp_33}}]]","grpc_status":3}
Are there any solutions to this issue?
TensorFlow's SavedModel tooling also provides a formal and consistent way to use TensorRT. You can try converting it with saved_model_cli and then deploying it to TF-Serving.
usage: saved_model_cli convert [-h] --dir DIR --output_dir OUTPUT_DIR
--tag_set TAG_SET
{tensorrt} ...
Usage example:
To convert the SavedModel to one that have TensorRT ops:
$saved_model_cli convert \
--dir /tmp/saved_model \
--tag_set serve \
--output_dir /tmp/saved_model_trt \
tensorrt
optional arguments:
-h, --help show this help message and exit
--dir DIR directory containing the SavedModel to convert
--output_dir OUTPUT_DIR
output directory for the converted SavedModel
--tag_set TAG_SET tag-set of graph in SavedModel to convert, separated by ','
conversion methods:
valid conversion methods
{tensorrt} the conversion to run with the SavedModel
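If you would rather do the conversion from Python instead of the CLI, the same functionality is exposed through the TrtGraphConverter class. A minimal sketch, assuming TensorFlow 1.14 or later; the two paths are placeholders:

# Sketch, assuming TF 1.14+; /tmp/saved_model and /tmp/saved_model_trt are placeholders.
from tensorflow.python.compiler.tensorrt import trt_convert as trt

converter = trt.TrtGraphConverter(
    input_saved_model_dir='/tmp/saved_model',  # SavedModel exported for serving
    max_batch_size=32,
    precision_mode='FP16')                     # or 'FP32' / 'INT8'
converter.convert()                            # rewrites TRT-compatible subgraphs into TRTEngineOps
converter.save('/tmp/saved_model_trt')         # writes a new SavedModel that TF-Serving can load

The resulting directory can then be pointed at by TF-Serving exactly like any other SavedModel.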
Related
I have trained an object detection model in TensorFlow.
My environment:
tf version == 1.15, network == SSD MobileNet v2
Now I want to convert my saved_model (.pb) file to TensorFlow.js (.json) format.
I followed the steps below:
pip install tensorflowjs==0.8.6 # not sure if it's compatible with tf version 1.15
Command:
tensorflowjs_converter --input_format=tf_saved_model --output_format=tfjs_graph_model --signature_name=serving_default --saved_model_tags=serve exported_path/saved_model exported_path/web_model_path
Error: AttributeError: module 'keras_applications' has no attribute 'set_keras_submodules'
Then I downgraded the keras_applications version.
Now I get the following error:
usage: TensorFlow.js model converters. [-h]
[--input_format {keras,tf_session_bundle,keras_saved_model,tf_hub,tf_saved_model,tensorflowjs,tf_frozen_model}]
[--output_format {keras,tensorflowjs}]
[--output_node_names OUTPUT_NODE_NAMES]
[--signature_name SIGNATURE_NAME]
[--saved_model_tags SAVED_MODEL_TAGS]
[--quantization_bytes {1,2}]
[--split_weights_by_layer] [--version]
[--skip_op_check SKIP_OP_CHECK]
[--strip_debug_ops STRIP_DEBUG_OPS]
[--output_json OUTPUT_JSON]
[input_path] [output_path]
TensorFlow.js model converters.: error: argument --output_format: invalid choice: 'tfjs_graph_model' (choose from 'keras', 'tensorflowjs')
So there is no tfjs_graph_model option for --output_format.
When I run pip install tensorflowjs without pinning a version, it installs tensorflowjs==3.3.0, uninstalls my current tf 1.15 and installs a new tf 2.x version, which I need to avoid at all costs.
Can somebody please guide me on how to convert the saved_model to TensorFlow.js format with tensorflow==1.15?
Thanks in advance.
Can you please upgrade TensorFlow to the latest 2.x release (stable versions 2.6/2.7) and let us know if the issue persists? TF 1.x is no longer supported, hence these errors.
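For reference, once you are on TF 2.x with a recent tensorflowjs (3.x), the tfjs_graph_model output format from the question is accepted. A minimal sketch that simply shells out to the converter from Python, reusing the flags from the question (the paths are placeholders):

# Sketch, assuming a TF 2.x environment with tensorflowjs 3.x installed.
# The flags are the same ones used above; the paths are placeholders.
import subprocess

subprocess.run([
    'tensorflowjs_converter',
    '--input_format=tf_saved_model',
    '--output_format=tfjs_graph_model',
    '--signature_name=serving_default',
    '--saved_model_tags=serve',
    'exported_path/saved_model',      # input SavedModel directory
    'exported_path/web_model_path',   # output directory for model.json and weight shards
], check=True)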
For model checkpoint files (usually consisting of .meta, .data and .index files) generated from TF 2.0, how can I convert them to onnx or a pb file?
Most of the existing tools I found, such as tf2onnx, seem to only support TF 1.x.
tf2onnx supports TF 2, SavedModels, and checkpoint files. I would recommend making a saved model first:
model.save("path")
Then use the tf2onnx command line:
python -m tf2onnx.convert --saved-model path/to/model --output model.onnx --opset 12
https://github.com/onnx/tensorflow-onnx#--saved-model
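If you would rather stay inside Python, recent tf2onnx releases also expose a conversion API. A minimal sketch, assuming TF 2.x, a Keras model, and tf2onnx 1.9 or later; 'path' is the directory written by model.save("path"):

# Sketch, assuming TF 2.x and tf2onnx >= 1.9.
import tensorflow as tf
import tf2onnx

model = tf.keras.models.load_model('path')
spec = (tf.TensorSpec(model.inputs[0].shape, model.inputs[0].dtype, name='input'),)
model_proto, _ = tf2onnx.convert.from_keras(
    model, input_signature=spec, opset=12, output_path='model.onnx')
print([o.name for o in model_proto.graph.output])  # quick sanity check of the ONNX outputs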
I trained a custom CNN model using Keras with TensorFlow 2.2.0 as the backend. I then saved the model as cp.ckpt, a folder containing assets, variables and a .pb file. To convert it into IR, the OpenVINO documentation says to use the following command:
To convert such TensorFlow model:
Go to the <INSTALL_DIR>/deployment_tools/model_optimizer directory
Run the mo_tf.py script with a path to the SavedModel directory to convert a model:
python3 mo_tf.py --saved_model_dir <SAVED_MODEL_DIRECTORY>
So I went to the directory mentioned above and tried the following command:
python3 mo_tf.py --saved_model_dir C:\Users\vyas\Desktop\saved_model\cp.ckpt
There is no output of any kind, and no error either.
Also, I tried the following command:
python3 mo_tf.py --saved_model_dir C:\Users\vyas\Desktop\saved_model\cp.ckpt --output_dir C:\Users\vyas\Desktop\out
Still there is no output.
I am using TensorFlow 2.2.0.
Can someone please help me with this?
--saved_model_dir must provide a path to the SavedModel directory.
Modify your command as follows:
python3 mo_tf.py --saved_model_dir C:\Users\vyas\Desktop\saved_model
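If it is unclear which folder is the actual SavedModel, a quick check like the sketch below (using the paths from the question) shows which directory contains saved_model.pb and a variables/ subfolder, which is what --saved_model_dir must point to:

# Sketch: figure out which directory mo_tf.py should be pointed at.
import os

for candidate in (r'C:\Users\vyas\Desktop\saved_model',
                  r'C:\Users\vyas\Desktop\saved_model\cp.ckpt'):
    has_pb = os.path.isfile(os.path.join(candidate, 'saved_model.pb'))
    has_vars = os.path.isdir(os.path.join(candidate, 'variables'))
    print(candidate, '-> looks like a SavedModel' if has_pb and has_vars else '-> not a SavedModel')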
I am trying to restore a TensorFlow Saver checkpoint (.ckpt.*) and convert it into a SavedModel (.pb) so that I can deploy it with TensorFlow Serving.
This is how I convert:
with tf.Session() as sess:
    # Restore the graph from (.meta .data .index)
    saver = tf.train.import_meta_graph(f"{checkpoint_path}/{meta_file_string}")
    saver.restore(sess, tf.train.latest_checkpoint(str(checkpoint_path)))

    # Convert into ".pb" using SavedModel API.
    model_path = f'{savedmodel_path}/1'
    builder = tf.saved_model.builder.SavedModelBuilder(model_path)
    builder.add_meta_graph_and_variables(
        sess, [tf.saved_model.SERVING],
        main_op=tf.tables_initializer(),
        strip_default_attrs=True)
    builder.save()
    print("Saved")
Saving seems to work fine when I run tree:
$ tree 1
1
├── saved_model.pb
└── variables
├── variables.data-00000-of-00001
└── variables.index
1 directory, 3 files
and when I use saved_model_cli:
$ saved_model_cli show --dir path/to/model/1
The given SavedModel contains the following tag-sets:
serve
However, when I run the TensorFlow serving docker container,
$ docker run \
-p 8500:8500 \
-v path/to/model:/models/aaa \
--env MODEL_NAME=aaa \
--name aaa \
tensorflow/serving
it complains that it cannot find the tag "serve", which I DID add:
2019-11-19 02:35:30.844163: I external/org_tensorflow/tensorflow/cc/saved_model/reader.cc:31] Reading SavedModel from: /models/aaa/1
2019-11-19 02:35:30.916952: I external/org_tensorflow/tensorflow/cc/saved_model/reader.cc:54] Reading meta graph with tags { serve }
2019-11-19 02:35:30.927640: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:311] SavedModel load for tags { serve }; Status: fail. Took 83527 microseconds.
2019-11-19 02:35:30.927781: E tensorflow_serving/util/retrier.cc:37] Loading servable: {name: aaa version: 1} failed: Not found: Could not find meta graph def matching supplied tags: { serve }. To inspect available tag-sets in the SavedModel, please use the SavedModel CLI: `saved_model_cli`
What have I done wrong, and how can I fix this?
Or otherwise, how can I dig into this issue more deeply?
I am using TensorFlow 1.14.0 and the docker image tensorflow/serving:1.14.0-devel.
You need to add a prediction signature to your builder:
from tensorflow.python.saved_model import builder as saved_model_builder
from tensorflow.python.saved_model import tag_constants

prediction_signature = tf.saved_model.signature_def_utils.predict_signature_def(
    {"input": inputs}, {"output": output})
builder = saved_model_builder.SavedModelBuilder('exported_model/')
builder.add_meta_graph_and_variables(
    session,
    [tag_constants.SERVING],
    signature_def_map={"classification": prediction_signature})
builder.save()
You can refer to this notebook for more detail: https://github.com/CS-savvy/tf-graph-preprocessing-addition/blob/master/keras%20inject.ipynb
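Applied to the restore-and-export code from the question, the same idea looks roughly like the sketch below. The tensor names input:0 and output:0 are placeholders for whatever your graph actually uses (you can list candidates with graph.get_operations()), and the sketch assumes TF 1.14:

# Sketch, assuming TF 1.14; 'input:0' and 'output:0' are placeholder tensor names.
import tensorflow as tf

with tf.Session() as sess:
    saver = tf.train.import_meta_graph(f"{checkpoint_path}/{meta_file_string}")
    saver.restore(sess, tf.train.latest_checkpoint(str(checkpoint_path)))

    graph = tf.get_default_graph()
    inputs = graph.get_tensor_by_name("input:0")    # replace with your real input tensor
    output = graph.get_tensor_by_name("output:0")   # replace with your real output tensor

    signature = tf.saved_model.signature_def_utils.predict_signature_def(
        {"input": inputs}, {"output": output})

    builder = tf.saved_model.builder.SavedModelBuilder(f"{savedmodel_path}/1")
    builder.add_meta_graph_and_variables(
        sess, [tf.saved_model.tag_constants.SERVING],
        signature_def_map={
            tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: signature},
        main_op=tf.tables_initializer(),
        strip_default_attrs=True)
    builder.save()

After saving, saved_model_cli show --dir 1 --all should list the serve tag together with the signature, which is what TensorFlow Serving needs to route prediction requests.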
Replacing the tensorflow/serving image with the :latest version (which is :2.0.0 so far) worked fine, even though my local training environment still uses TensorFlow 1.14.
No idea why this is so.
I trained an object detection model on ML Engine and exported it by invoking:
python object_detection/export_inference_graph.py \
--input_type encoded_image_string_tensor ....
Then I successfully tested prediction locally by invoking:
gcloud ml-engine local predict --model-dir ../saved_model --json-instances=inputs.json --runtime-version=1.2
where inputs.json contains:
{"b64": "base64 encoded png image"}
When I try to create a model version on ML Engine using the following command:
gcloud ml-engine versions create ${YOUR_VERSION} --model ${YOUR_MODEL} --origin=${YOUR_GCS_BUCKET}/saved_model --runtime-version=1.2
it fails with the following message:
ERROR: (gcloud.ml-engine.versions.create) Bad model detected with error: "Error loading the model: Could not load model. "
Does ML Engine NOT support model versions of input_type=encoded_image_string_tensor and how can I obtain more details on the error?
Creating a model version on ml-engine using an exported model with input_type=image_tensor works fine.
Can you verify that you're exporting the model with tensorflow 1.2?
gcloud ml-engine local predict doesn't have the --runtime-version flag, so if you have TF 1.3 installed and exported your model with that, then local predict would work using TF 1.3, but there may be incompatibilities in the model when trying to use TF 1.2 on the service.