I trained a model using the MXNet framework. The inference time for the model is ~9 milliseconds.
The model mainly consists of conv layers and uses depthwise separable convolution.
I want to run that model in the browser. I converted the model to ONNX format, then from
ONNX -> tensorflow -> tensorflowjs.
The inference time for the TensorFlow.js model is ~129 milliseconds.
Any suggestion to improve the performance for the model?
I have also tried ONNX.js, but it seems it still has a few bugs.
Re-architecting would be a possibility since you're dealing with 129ms latency. You would have time to send images to an endpoint (EC2, or SageMaker + API Gateway) running a performant inference server.
Vishaal
My goal is to optimize a pre-trained model from TFHub for inference. Therefore I would like to use an object detection model with multiple outputs:
https://tfhub.dev/tensorflow/ssd_mobilenet_v2/fpnlite_640x640/1
where the archive contains a SavedModel file
https://tfhub.dev/tensorflow/ssd_mobilenet_v2/fpnlite_640x640/1?tf-hub-format=compressed
I came across the methods optimize_for_inference and freeze_graph, but read on the following thread that these are no longer supported in TF2:
https://stackoverflow.com/a/56384808/11687201
So how is optimization for inference done with TF2?
The plan is to use one of these pre-trained networks for transfer learning and to run it later on a hardware accelerator; the converter for this hardware requires a frozen graph as input.
I have trained an image classification model using pytorch.
Now, I want to move it from research to production pipeline.
I am thinking of using TensorFlow Extended (TFX). My question is whether I will be able to use my PyTorch-trained model in a TFX pipeline (I can convert the trained model to ONNX and then to a TensorFlow-compatible format).
I don't want to rewrite and retrain the model in TensorFlow, as that would be a lot of overhead.
Is this possible, or is there a better way to productionize PyTorch-trained models?
You should be able to convert your PyTorch image classification model to TensorFlow format using ONNX, as long as you are using standard layers. I would recommend doing the conversion and then comparing both model summaries to make sure they are relatively similar. Also, do some tests to make sure your converted model handles any particular edge cases you have. Once you have confirmed that the converted model works, save it in the TF SavedModel format and then you should be able to use it in TensorFlow Extended (TFX).
For more info on the conversion process, see this tutorial: https://learnopencv.com/pytorch-to-tensorflow-model-conversion/
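As a rough illustration of that conversion path, a minimal sketch using torch.onnx.export and the onnx-tf package might look like the following; the ResNet-18 placeholder, the 224x224 input shape, and the file paths are assumptions to be replaced with your own model:

```python
import torch
import torchvision
import onnx
from onnx_tf.backend import prepare

# Placeholder model: swap in your own trained classifier (in eval mode).
model = torchvision.models.resnet18().eval()

# Export to ONNX with a dummy input matching your input shape (assumed 1x3x224x224 here).
dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy_input, "classifier.onnx",
                  input_names=["input"], output_names=["output"])

# Convert the ONNX graph to a TensorFlow SavedModel using onnx-tf.
onnx_model = onnx.load("classifier.onnx")
prepare(onnx_model).export_graph("classifier_savedmodel")
```

After the conversion, you can load the SavedModel in TensorFlow and compare its outputs against the original PyTorch model on a few sample inputs before wiring it into TFX.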
You could also consider using the TorchX library. I haven't used it yet, but it seems to make it easier to deploy models by creating and running model pipelines. I don't think it has the same data validation functionality that TensorFlow Extended has, but maybe that will be added in the future.
I am using the TensorFlow centernet_resnet50_v2_512x512_kpts_coco17_tpu-8 object detection model on an Nvidia Tesla P100 to extract bounding boxes and keypoints for detecting people in a video. Using the pre-trained model from tensorflow.org, I am able to process about 16 frames per second. Is there any way I can improve the evaluation speed for this model? Here are some ideas I have been looking into:
- Pruning the model graph, since I am only detecting 1 type of object (people): I have not been successful in doing this. Changing the label_map when building the model does not seem to improve performance.
- Hard-coding the input size: I have not found a good way to do this.
- Compiling the model to an optimized form using something like TensorRT: initial attempts to convert to TensorRT did not yield any performance improvements.
- Batching predictions: it looks like the pre-trained model has the batch size hard-coded to 1, and so far when I try to change this using the model_builder I see a drop in performance.
My GPU utilization is about 75%, so I don't know if there is much to gain here.
TensorRT should in most cases give a large increase in frames per second compared to Tensorflow.
centernet_resnet50_v2_512x512_kpts_coco17_tpu-8 can be found in the TensorFlow Model Zoo.
Nvidia has released a blog post describing how to optimize models from the TensorFlow Model Zoo using Deepstream and TensorRT:
https://developer.nvidia.com/blog/deploying-models-from-tensorflow-model-zoo-using-deepstream-and-triton-inference-server/
Now regarding your suggestions:
- Pruning the model graph: pruning can be done by converting your TensorFlow model to a TF-TRT model.
- Hard-coding the input size: use the static mode in TF-TRT. This is the default mode and is enabled with is_dynamic_op=False.
- Compiling the model: my advice would be to convert your model to TF-TRT, or first to ONNX and then to TensorRT.
- Batching: specifying the batch size is also covered in the NVIDIA blog post.
Lastly, for my model a big increase in performance came from using FP16 (mixed precision) in my inference engine. You could even try INT8, although then you first have to calibrate.
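For illustration, a minimal TF-TRT FP16 conversion sketch for a TF2 SavedModel might look like the following; the SavedModel paths are placeholders, and the exact converter API differs slightly between TensorFlow versions:

```python
from tensorflow.python.compiler.tensorrt import trt_convert as trt

# FP16 (mixed precision) typically increases throughput on recent NVIDIA GPUs.
params = trt.DEFAULT_TRT_CONVERSION_PARAMS._replace(
    precision_mode=trt.TrtPrecisionMode.FP16)

converter = trt.TrtGraphConverterV2(
    input_saved_model_dir="centernet_saved_model",  # placeholder: path to your exported SavedModel
    conversion_params=params)
converter.convert()
converter.save("centernet_trt_fp16")  # optimized SavedModel, loadable with tf.saved_model.load
```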
I downloaded a TensorFlow model from Custom Vision and want to run it on a Coral TPU. I therefore converted it to TensorFlow Lite and applied hybrid post-training quantization (as far as I know, that's the only option, because I do not have access to the training data).
You can see the code here: https://colab.research.google.com/drive/1uc2-Yb9Ths6lEPw6ngRpfdLAgBHMxICk
When I then try to compile it for the edge tpu, I get the following:
Edge TPU Compiler version 2.0.258810407
INFO: Initialized TensorFlow Lite runtime.
Invalid model: model.tflite
Model not quantized
Any idea what my problem might be?
tflite models are not fully quantized using converter.optimizations = [tf.lite.Optimize.OPTIMIZE_FOR_SIZE]. You might have a look at post-training full integer quantization using a representative dataset: https://www.tensorflow.org/lite/performance/post_training_quantization#full_integer_quantization_of_weights_and_activations Simply adapt your generator function to yield representative samples (e.g. images similar to what your image classification network should predict). Very few images are enough for the converter to identify min and max values and quantize your model. However, accuracy is typically lower than with quantization-aware training.
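For illustration, a minimal full-integer post-training quantization sketch might look like this; the SavedModel path and the 224x224 input shape are assumptions, and the random arrays stand in for real representative images:

```python
import numpy as np
import tensorflow as tf

def representative_dataset():
    # Yield a handful of inputs shaped like the model's input (224x224 RGB assumed);
    # in practice, use real images similar to what the network will see.
    for _ in range(100):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")  # placeholder path
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Restrict to int8 ops so the Edge TPU compiler accepts the model.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

with open("model_int8.tflite", "wb") as f:
    f.write(converter.convert())
```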
I can't find the source, but I believe the Edge TPU currently only supports 8-bit quantized models, and no hybrid operators.
EDIT: Coral's FAQ mentions that the model needs to be fully quantized:
"You need to convert your model to TensorFlow Lite and it must be quantized using either quantization-aware training (recommended) or full integer post-training quantization."
I am working on the recently released "SSD-Mobilenet" model by Google for object detection.
Model downloaded from following location: https://github.com/tensorflow/models/blob/master/object_detection/g3doc/detection_model_zoo.md
The frozen graph file downloaded from the site is working as expected; however, after quantization the accuracy drops significantly (mostly random predictions).
I built TensorFlow r1.2 from source and used the following method to quantize:
bazel-bin/tensorflow/tools/graph_transforms/transform_graph --in_graph=frozen_inference_graph.pb --out_graph=optimized_graph.pb --inputs='image_tensor' --outputs='detection_boxes','detection_scores','detection_classes','num_detections' --transforms='add_default_attributes strip_unused_nodes(type=float, shape="1,224,224,3") fold_constants(ignore_errors=true) fold_batch_norms fold_old_batch_norms quantize_weights strip_unused_nodes sort_by_execution_order'
I tried various combinations in the "transforms" part; the transforms mentioned above sometimes gave correct predictions, but nowhere close to the original model.
Is there any other way to improve performance of the quantized model?
In this case, SSD uses MobileNet as its feature extractor in order to increase speed. If you read the MobileNet paper, it is a lightweight convolutional neural network that uses depthwise separable convolutions specifically to reduce the number of parameters.
As I understand it, separable convolution can lose information because of the channel-wise convolution.
So when quantizing a graph, the TF implementation converts ops to 16 bits and weights to 8 bits. If you read the TF quantization tutorial, they clearly mention that this operation is more like adding some noise to an already-trained net, hoping the model has generalized well.
So this works really well and is almost lossless in terms of accuracy for a heavy model like Inception, ResNet, etc. But given the lightness and simplicity of SSD with MobileNet, it really can cause an accuracy loss.
MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications
How to Quantize Neural Networks with TensorFlow