For model checkpoint files (usually consisting of .meta, .data, and .index files) generated from TF 2.0, how can I convert them to ONNX or pb format? Most of the existing tools I found, such as tf2onnx, seem to support only TF 1.x.
tf2onnx supports TF2, SavedModels, and checkpoint files. I would recommend making a SavedModel:
model.save("path")
Then use the tf2onnx command line:
python -m tf2onnx.convert --saved-model path/to/model --output model.onnx --opset 12
https://github.com/onnx/tensorflow-onnx#--saved-model
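If you would rather stay in Python, newer tf2onnx releases (1.9+) also expose a conversion API; here is a minimal sketch, assuming a Keras model and the from_keras entry point:

import tensorflow as tf
import tf2onnx

# The SavedModel you exported with model.save("path")
model = tf.keras.models.load_model("path")

# Convert the in-memory model and write model.onnx in one call
model_proto, _ = tf2onnx.convert.from_keras(
    model, opset=12, output_path="model.onnx")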
I have some issues exporting a YOLOv5 model to TensorFlow. I'm new to TensorFlow and YOLOv5, so following the docs got me into trouble. What is the easiest way to export?
YOLOv5 supports export to two TF formats: TensorFlow SavedModel and TensorFlow GraphDef. To export to both:
python export.py --weights yolov5s.pt --include saved_model pb
For inference, run
python detect.py --weights yolov5s_saved_model
and
python detect.py --weights yolov5s.pb
respectively.
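If you want to run the exported SavedModel outside of detect.py, here is a minimal sketch with plain TensorFlow (the 640x640 input size and the serving_default signature are assumptions; check the actual signature with saved_model_cli show):

import numpy as np
import tensorflow as tf

# Load the SavedModel exported by export.py
model = tf.saved_model.load("yolov5s_saved_model")
infer = model.signatures["serving_default"]

# Dummy RGB image batch, scaled to [0, 1]
img = np.zeros((1, 640, 640, 3), dtype=np.float32)
outputs = infer(tf.constant(img))
print({k: v.shape for k, v in outputs.items()})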
I have trained an object detection model in TensorFlow.
My environment:
tf version == 1.15, network == SSD MobileNet v2
Now I want to convert my saved_model (.pb) file to TFJS (.json) format.
I followed the steps below:
pip install tensorflowjs==0.8.6 # not sure if it's compatible with tf version 1.15
Command:
tensorflowjs_converter --input_format=tf_saved_model --output_format=tfjs_graph_model --signature_name=serving_default --saved_model_tags=serve exported_path/saved_model exported_path/web_model_path
Error: AttributeError: module 'keras_applications' has no attribute 'set_keras_submodules'
Then I downgraded the keras_applications version.
Now I get the following error:
usage: TensorFlow.js model converters. [-h]
[--input_format {keras,tf_session_bundle,keras_saved_model,tf_hub,tf_saved_model,tensorflowjs,tf_frozen_model}]
[--output_format {keras,tensorflowjs}]
[--output_node_names OUTPUT_NODE_NAMES]
[--signature_name SIGNATURE_NAME]
[--saved_model_tags SAVED_MODEL_TAGS]
[--quantization_bytes {1,2}]
[--split_weights_by_layer] [--version]
[--skip_op_check SKIP_OP_CHECK]
[--strip_debug_ops STRIP_DEBUG_OPS]
[--output_json OUTPUT_JSON]
[input_path] [output_path]
TensorFlow.js model converters.: error: argument --output_format: invalid choice: 'tfjs_graph_model' (choose from 'keras', 'tensorflowjs')
So there is no option for tfjs_graph_model in --output_format.
Now, when I install tensorflowjs via pip without pinning a version, it installs tensorflowjs==3.3.0, uninstalls my current TF 1.15, and installs a new TF 2.x version, which I need to avoid at all costs.
Can somebody please guide me on how to convert the saved_model to TFJS format with tensorflow==1.15?
Thanks in advance.
Can you please upgrade TensorFlow to the latest 2.x release (the stable versions are TF 2.6/2.7) and let us know if the issue persists? TF 1.x is no longer supported, which is the cause of these errors.
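After upgrading, the same conversion can also be done from Python; a minimal sketch, assuming tensorflowjs 3.x and its convert_tf_saved_model helper (the paths are the ones from your command):

from tensorflowjs.converters import convert_tf_saved_model

# Writes model.json plus binary weight shards to the output directory
convert_tf_saved_model(
    "exported_path/saved_model",
    "exported_path/web_model_path")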
I trained a custom CNN model using Keras with TensorFlow 2.2.0 as the backend. I then saved the model as a cp.ckpt folder with assets, variables, and a .pb file as subfolders inside it. To convert it to IR, the OpenVINO documentation says to use the following command:
To convert such a TensorFlow model:
Go to the <INSTALL_DIR>/deployment_tools/model_optimizer directory
Run the mo_tf.py script with a path to the SavedModel directory to convert a model:
python3 mo_tf.py --saved_model_dir <SAVED_MODEL_DIRECTORY>
So I went to that directory and tried the following command:
python3 mo_tf.py --saved_model_dir C:\Users\vyas\Desktop\saved_model\cp.ckpt
There is no output of any kind, and no error either.
Also, I tried the following command:
python3 mo_tf.py --saved_model_dir C:\Users\vyas\Desktop\saved_model\cp.ckpt --output_dir C:\Users\vyas\Desktop\out
Still there is no output.
I am using TensorFlow 2.2.0.
Can someone please help me with this?
--saved_model_dir must be given the path to the SavedModel directory itself, not the checkpoint folder inside it.
Modify your command as follows:
python3 mo_tf.py --saved_model_dir C:\Users\vyas\Desktop\saved_model
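If mo_tf.py still exits silently, first verify that the path you pass is a valid SavedModel directory, i.e. that it directly contains saved_model.pb and a variables/ subfolder; a minimal sketch, assuming TF 2.2:

import os
import tensorflow as tf

sm_dir = r"C:\Users\vyas\Desktop\saved_model"
print(os.listdir(sm_dir))  # expect something like ['assets', 'saved_model.pb', 'variables']

# tf.saved_model.load raises if the directory is not a valid SavedModel
model = tf.saved_model.load(sm_dir)
print(list(model.signatures))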
I am using TensorRT to accelerate the inference speed of a Tacotron2 model, with TensorRT 5.0.2.6 and TensorFlow 1.13.0.rc0.
I converted the SavedModel to a TF-TRT SavedModel using the TensorRT API below:
import os
from tensorflow.contrib import tensorrt as trt  # TF-TRT API in TF 1.13

trt.create_inference_graph(
    input_graph_def=None,   # graph is read from the SavedModel instead
    outputs=None,
    max_batch_size=32,
    input_saved_model_dir=os.path.join(args.export_dir, args.version),
    output_saved_model_dir=args.output_saved_model_dir,
    precision_mode=args.precision_mode)
The output tensorrt_savedmodel.pb cannot be imported into TensorBoard for viewing, but it can be deployed with TF Serving.
However, when a client sends a request to TF Serving over gRPC, there is an error:
<_Rendezvous of RPC that terminated with:
status = StatusCode.INVALID_ARGUMENT
details = "The TF function for the TRT segment could not be empty
[[{{node model/inference/prenet/TRTEngineOp_33}}]]"
debug_error_string = "
{"created":"#1572417319.714936208","description":"Error received from peer ipv4:192.168.23.17:8500","file":"src/core/lib/surface/call.cc","file_line":1052,"grpc_message":"The TF function for the TRT segment could not be empty\n\t [[{{node model/inference/prenet/TRTEngineOp_33}}]]","grpc_status":3}
Any solutions to this issue?
TensorFlow's SavedModel tooling also provides a formal and consistent way to use TensorRT. You can try converting the model with saved_model_cli and then deploying it to TF Serving.
usage: saved_model_cli convert [-h] --dir DIR --output_dir OUTPUT_DIR
--tag_set TAG_SET
{tensorrt} ...
Usage example:
To convert the SavedModel to one that have TensorRT ops:
$saved_model_cli convert \
--dir /tmp/saved_model \
--tag_set serve \
--output_dir /tmp/saved_model_trt \
tensorrt
optional arguments:
-h, --help show this help message and exit
--dir DIR directory containing the SavedModel to convert
--output_dir OUTPUT_DIR
output directory for the converted SavedModel
--tag_set TAG_SET tag-set of graph in SavedModel to convert, separated by ','
conversion methods:
valid conversion methods
{tensorrt} the conversion to run with the SavedModel
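To confirm the conversion actually produced TensorRT segments, you can count the TRTEngineOp nodes in the converted graph; a minimal sketch using the TF 1.x loader API (the tag and output path are the ones from the usage example above):

import tensorflow as tf

with tf.Session(graph=tf.Graph()) as sess:
    tf.saved_model.loader.load(sess, ["serve"], "/tmp/saved_model_trt")
    node_ops = [n.op for n in sess.graph.as_graph_def().node]
    print("TRTEngineOp nodes:", node_ops.count("TRTEngineOp"))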
Faced an error when trying to create a TFLite pb file from a custom checkpoint produced by transfer training.
System information:
ubuntu - 18.0
tf.VERSION = 1.14.1-dev20190517
tf.GIT_VERSION = v1.12.1-2154-g3df6d99f3f
tf.COMPILER_VERSION = v1.12.1-2154-g3df6d99f3f
env : LD_LIBRARY_PATH /usr/local/cuda-10.0/lib64:/usr/local/cuda-10.0/lib64
Tensorflow command:
python /object_detection/export_tflite_ssd_graph.py \
    --pipeline_config_path models/ssd_mobilenet_v1_pets.config \
    --trained_checkpoint_prefix model.ckpt-23595 \
    --output_directory /tflite/ \
    --add_postprocessing_op=true
Error log:
TypeError: export_tflite_graph() takes 6 positional arguments but 7 were given
Any help appreciated!
This is a bug with the script. We will fix it shortly.
Thanks for flagging!