I have some issues exporting a YOLOv5 model to TensorFlow. I'm new to TensorFlow and YOLOv5, so following the docs got me into trouble. What is the easiest way to export?
YOLOv5 supports export to two TF formats: TensorFlow SavedModel and TensorFlow GraphDef. To export to both:
python export.py --weights yolov5s.pt --include saved_model pb
For inference, run:
python detect.py --weights yolov5s_saved_model
and
python detect.py --weights yolov5s.pb
respectively.
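If you want to sanity-check the exported SavedModel outside of detect.py, a minimal sketch like the one below loads it directly with TensorFlow; the 640x640 input size and the serving_default signature name are assumptions based on the default export settings:

import tensorflow as tf

# Load the directory produced by export.py above
model = tf.saved_model.load("yolov5s_saved_model")
print(list(model.signatures))  # check which serving signatures are available

infer = model.signatures["serving_default"]  # assumed default signature name

# Dummy 640x640 RGB input in [0, 1]; adjust if you exported with a different --imgsz
dummy = tf.zeros((1, 640, 640, 3), dtype=tf.float32)
outputs = infer(dummy)
print({name: tensor.shape for name, tensor in outputs.items()})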
For model checkpoint files (usually consisting of .meta, .data, and .index files) generated from TF 2.0, how can I convert them to an onnx or pb file? I found that most of the existing tools, such as tf2onnx, only support TF 1.x.
tf2onnx supports TF2, SavedModels, and checkpoint files. I would recommend making a SavedModel:
model.save("path")
Then use tf2onnx command line:
python -m tf2onnx.convert --saved-model path/to/model --output model.onnx --opset 12
https://github.com/onnx/tensorflow-onnx#--saved-model
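If you want a quick sanity check of the converted model, a short onnxruntime sketch along these lines works; the model.onnx filename and the float32 input type are assumptions, and dynamic dimensions are replaced with 1 here:

import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("model.onnx")
inp = sess.get_inputs()[0]
print(inp.name, inp.shape)

# Build a dummy input, substituting 1 for any dynamic/None dimensions
shape = [d if isinstance(d, int) else 1 for d in inp.shape]
dummy = np.zeros(shape, dtype=np.float32)
outputs = sess.run(None, {inp.name: dummy})
print([o.shape for o in outputs])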
I trained a custom CNN model using Keras with TensorFlow 2.2.0 as the backend. After that, I saved the model as a .ckpt folder containing assets, variables, and a .pb file as subfolders. To convert it into IR, the OpenVINO documentation says to use the following command:
To convert such a TensorFlow model:
Go to the <INSTALL_DIR>/deployment_tools/model_optimizer directory.
Run the mo_tf.py script with a path to the SavedModel directory to convert a model:
python3 mo_tf.py --saved_model_dir <SAVED_MODEL_DIRECTORY>
So, I went to that directory as mentioned and tried the following command:
python3 mo_tf.py --saved_model_dir C:\Users\vyas\Desktop\saved_model\cp.ckpt
There is no output or anything, and there is no error either.
Also, I tried the following command:
python3 mo_tf.py --saved_model_dir C:\Users\vyas\Desktop\saved_model\cp.ckpt --output_dir C:\Users\vyas\Desktop\out
Still there is no output. I am using TensorFlow 2.2.0. Can someone please help?
--saved_model_dir must be given the path to the SavedModel directory itself (the folder that directly contains saved_model.pb).
Modify your command as follows:
python3 mo_tf.py --saved_model_dir C:\Users\vyas\Desktop\saved_model
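For context, this is roughly how a TF 2.2 Keras model ends up in SavedModel format; the tiny Dense model is just a placeholder, and the point is that --saved_model_dir should be the folder that directly contains saved_model.pb, assets and variables:

import tensorflow as tf

# Placeholder model just to illustrate the directory layout
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])

# Saving to a plain directory path (no .h5 extension) writes the SavedModel format:
#   saved_model/
#     assets/
#     variables/
#     saved_model.pb
model.save(r"C:\Users\vyas\Desktop\saved_model")

# Then: python3 mo_tf.py --saved_model_dir C:\Users\vyas\Desktop\saved_model --output_dir C:\Users\vyas\Desktop\out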
I am using TensorRT to accelerate the inference speed of a Tacotron 2 model. I used TensorRT 5.0.2.6 and TensorFlow 1.13.0.rc0.
I converted the SavedModel to a TF-TRT SavedModel using the TensorRT API below:
import os
import tensorflow.contrib.tensorrt as trt  # TF-TRT module in TF 1.x

trt.create_inference_graph(
    input_graph_def=None,
    outputs=None,
    max_batch_size=32,
    input_saved_model_dir=os.path.join(args.export_dir, args.version),
    output_saved_model_dir=args.output_saved_model_dir,
    precision_mode=args.precision_mode)
The resulting tensorrt_savedmodel.pb cannot be loaded into TensorBoard for viewing, but it can be deployed with TF Serving.
However, when the client sends a gRPC request to TF Serving, there is an error:
<_Rendezvous of RPC that terminated with:
status = StatusCode.INVALID_ARGUMENT
details = "The TF function for the TRT segment could not be empty
[[{{node model/inference/prenet/TRTEngineOp_33}}]]"
debug_error_string = "
{"created":"#1572417319.714936208","description":"Error received from peer ipv4:192.168.23.17:8500","file":"src/core/lib/surface/call.cc","file_line":1052,"grpc_message":"The TF function for the TRT segment could not be empty\n\t [[{{node model/inference/prenet/TRTEngineOp_33}}]]","grpc_status":3}
Any solutions for this issue?
The TensorFlow SavedModel tooling also provides a formal and consistent way to use TensorRT. You can try converting the model with saved_model_cli and then deploying it to TF Serving.
usage: saved_model_cli convert [-h] --dir DIR --output_dir OUTPUT_DIR
--tag_set TAG_SET
{tensorrt} ...
Usage example:
To convert the SavedModel to one that have TensorRT ops:
$saved_model_cli convert \
--dir /tmp/saved_model \
--tag_set serve \
--output_dir /tmp/saved_model_trt \
tensorrt
optional arguments:
-h, --help show this help message and exit
--dir DIR directory containing the SavedModel to convert
--output_dir OUTPUT_DIR
output directory for the converted SavedModel
--tag_set TAG_SET tag-set of graph in SavedModel to convert, separated by ','
conversion methods:
valid conversion methods
{tensorrt} the conversion to run with the SavedModel
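If you'd rather do the same conversion from Python, newer TF 1.x releases (1.14+) ship a TrtGraphConverter wrapper; this is only a sketch under that assumption, with placeholder paths and precision, and it does not target the 1.13 release candidate used in the question:

from tensorflow.python.compiler.tensorrt import trt_convert as trt

converter = trt.TrtGraphConverter(
    input_saved_model_dir="/tmp/saved_model",
    max_batch_size=32,
    precision_mode="FP16")
converter.convert()
converter.save("/tmp/saved_model_trt")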
Does anyone know if TensorFlow Lite has GPU support for Python? I've seen guides for Android and iOS, but I haven't come across anything about Python. If tensorflow-gpu is installed and tensorflow.lite.python.interpreter is imported, will the GPU be used automatically?
According to this thread, it will not.
One solution is to convert the TFLite model to ONNX and use onnxruntime-gpu.
Convert to ONNX with https://github.com/onnx/tensorflow-onnx:
pip install tf2onnx
python3 -m tf2onnx.convert --opset 11 --tflite path/to/model.tflite --output path/to/model.onnx
Then pip install onnxruntime-gpu and run it like:
import onnxruntime

session = onnxruntime.InferenceSession('/path/to/model.onnx')
raw_output = session.run(['output_name'], {'input_name': img})
You can get the input names with:
for i in range(len(session.get_inputs())):
    print(session.get_inputs()[i].name)
and the same for the output names, replacing get_inputs with get_outputs.
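One extra note: recent onnxruntime-gpu releases expect the CUDA execution provider to be requested explicitly when the session is created, roughly like this:

import onnxruntime

session = onnxruntime.InferenceSession(
    '/path/to/model.onnx',
    providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
print(session.get_providers())  # confirm CUDA is actually being used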
You can force the computation to take place on a GPU:
import numpy as np
import tensorflow as tf

with tf.device('/gpu:0'):
    for i in range(10):
        t = np.random.randint(len(x_test))  # x_test: your test data, assumed to be defined
        ...
Hope this helps.
I trained an object detection model on ML Engine and exported it by invoking:
python object_detection/export_inference_graph.py \
--input_type encoded_image_string_tensor ....
Then I successfully tested prediction locally by invoking:
gcloud ml-engine local predict --model-dir ../saved_model --json-instances=inputs.json --runtime-version=1.2
where inputs.json contains:
{"b64": "base64 encoded png image"}
When I try to create a model version on ML Engine using the following command:
gcloud ml-engine versions create ${YOUR_VERSION} --model ${YOUR_MODEL} --origin=${YOUR_GCS_BUCKET}/saved_model --runtime-version=1.2
it fails with the following message:
ERROR: (gcloud.ml-engine.versions.create) Bad model detected with error: "Error loading the model: Could not load model. "
Does ML Engine not support model versions with input_type=encoded_image_string_tensor, and how can I obtain more details on the error?
Creating a model version on ML Engine using a model exported with input_type=image_tensor works fine.
Can you verify that you exported the model with TensorFlow 1.2?
gcloud ml-engine local predict doesn't have a --runtime-version flag, so if you have TF 1.3 installed and exported your model with it, local predict would run using TF 1.3, but there may be incompatibilities in the model when the service tries to load it with TF 1.2.
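A quick way to check is to print the TF version in the environment you exported from; if it isn't 1.2.x, re-export in an environment pinned to TF 1.2 (e.g. pip install tensorflow==1.2.0) before creating the version with --runtime-version=1.2:

import tensorflow as tf
print(tf.__version__)  # should report 1.2.x if you deploy with --runtime-version=1.2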