How to convert a YOLOv5 model to TensorFlow.js

Is it possible to convert the YOLOv5 PyTorch model to the Tensorflow.js model?
I am developing an object detection web app, so I have trained a model using YOLOv5, but now I am looking for the correct method to convert that model to TF.js.

I think this is what you are looking for: https://github.com/zldrobit/tfjs-yolov5-example
Inside the YOLOv5 repo, run the export.py command:
python export.py --weights yolov5s.pt --include tfjs
Then cd into the repo linked above and copy the exported web model folder into public/:
cp ./yolov5s_web_model public/web_model
Don't forget, you'll have to change the names array in src/index.js to match your custom model.
But unfortunately, it seems painfully slow, at about 1-2 seconds per inference. I wasn't able to get WebGL working.

This involves a few interim steps, but it works most of the time:
1. Export the PyTorch model to ONNX
2. Convert ONNX to a TF SavedModel
3. Convert the TF SavedModel to a TFJS Graph Model
When converting from ONNX to TF, you might need to adjust the target opset version if you run into unsupported ops.
Also, make sure to set the input resolution to a fixed value; dynamic input shapes tend to get mangled in this multi-step conversion.

Related

Build a TensorFlow model from a saved_model file

I trained a model using YOLOv5, then exported it to TensorFlow saved_model format; the result was a yolov5s.pt file. As far as I know YOLOv5 uses PyTorch, and I prefer TensorFlow. Now I want to build a model in TensorFlow using the saved_model file. How can I do it?
It would be preferable if the solution works in Google Colab. I didn't include my code because I have no idea how to start.
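Note that a saved_model export produces a directory; the .pt file is the PyTorch checkpoint, not the TensorFlow export. Once you have that directory, loading it in TensorFlow (Colab included) is straightforward. A minimal runnable sketch, using a tiny stand-in module since the real export isn't available here:

```python
import tensorflow as tf

# Tiny stand-in for the real export: YOLOv5's
# `python export.py --weights yolov5s.pt --include saved_model`
# produces a SavedModel *directory*.
class Doubler(tf.Module):
    @tf.function(input_signature=[tf.TensorSpec([1, 8], tf.float32, name="x")])
    def serve(self, x):
        return {"output": x * 2.0}

module = Doubler()
tf.saved_model.save(module, "demo_saved_model",
                    signatures={"serving_default": module.serve})

# In Colab, point this at your exported directory instead.
loaded = tf.saved_model.load("demo_saved_model")
infer = loaded.signatures["serving_default"]
out = infer(x=tf.ones((1, 8)))
print(out["output"].shape)
```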

PEGASUS From pytorch to tensorflow

I have fine-tuned PEGASUS model for abstractive summarization using this script which uses huggingface.
The output model is in pytorch.
Is there a way to transform it into a TensorFlow model so I can use it in a JavaScript backend?
There are several ways in which you can potentially achieve a conversion, some of which might not even need Tensorflow at all.
Firstly, the way that does what you intend to do: PEGASUS seems to be completely based on the BartForConditionalGeneration model, according to the transformer implementation notes. This is important, because there exists a script to convert PyTorch checkpoints to TF2 checkpoints. While this script does not explicitly allow you to convert a PEGASUS model, it does have options available for BART. Running it with the respective parameters should give you the desired output.
Alternatively, you can potentially achieve the same by exporting the model into the ONNX format, which also has JS deployment options. Specific details for how to convert a Huggingface model to ONNX can be found here.
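As a practical note, the PyTorch-to-TF2 loading can also be done through the transformers API itself, which can read a PyTorch checkpoint into the corresponding TF2 model class via from_pt. A minimal sketch, with a tiny randomly-initialised PEGASUS standing in for a real fine-tuned checkpoint so it runs without large downloads (paths are illustrative):

```python
from transformers import (PegasusConfig, PegasusForConditionalGeneration,
                          TFPegasusForConditionalGeneration)

# Tiny randomly-initialised PEGASUS standing in for a real
# fine-tuned checkpoint.
config = PegasusConfig(
    vocab_size=128, d_model=16,
    encoder_layers=1, decoder_layers=1,
    encoder_attention_heads=2, decoder_attention_heads=2,
    encoder_ffn_dim=32, decoder_ffn_dim=32,
)
pt_model = PegasusForConditionalGeneration(config)
pt_model.save_pretrained("pegasus-pt", safe_serialization=False)

# The actual conversion: from_pt=True loads the PyTorch weights
# into the TF2 model class, which is then saved as TF weights.
tf_model = TFPegasusForConditionalGeneration.from_pretrained(
    "pegasus-pt", from_pt=True)
tf_model.save_pretrained("pegasus-tf")
```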

How to convert a tensorflow hub pretrained model as to be consumable by tensorflow serving

I am trying to use this for my object detection task. The problems I am facing are:
On running the saved_model_cli command, I am getting the following output: there is no signature defined with tag-set "serve", and the method name is empty.
The variables folder in the model directory only contains a few bytes of data, which means the weights are not actually written to disk.
The model format seems to be HubModule V1, which seems to be the issue. Any tips on making the above model servable are highly appreciated.
TF2 SavedModels should not have this problem; only hub.Modules from TF1 do, since hub.Modules use signatures for other purposes. You can take a hub.Module and build a servable SavedModel, but it's quite complex and involves building the signatures yourself.
Instead, I recommend checking out the list of TF2 object detection models on TFHub.dev for a model you can use instead of the model you are using: https://tfhub.dev/s?module-type=image-object-detection&tf-version=tf2
These models should be servable with TF Serving.

TensorFlow Extended: Is it possible to use a PyTorch training loop in a TensorFlow Extended flow?

I have trained an image classification model using pytorch.
Now, I want to move it from research to production pipeline.
I am thinking of using TensorFlow Extended. I have a newbie question: will I be able to use my PyTorch-trained model in the TensorFlow Extended pipeline? (I can convert the trained model to ONNX and then to a TensorFlow-compatible format.)
I don't want to rewrite and retrain the training part in TensorFlow, as that would be a large overhead.
Is it possible, or is there a better way to productionize PyTorch-trained models?
You should be able to convert your PyTorch image classification model to TensorFlow format using ONNX, as long as you are using standard layers. I would recommend doing the conversion and then comparing both model summaries to make sure they are similar. Also, run some tests to make sure your converted model handles any particular edge cases you have. Once you have confirmed that the converted model works, save it in the TF SavedModel format; you should then be able to use it in TensorFlow Extended (TFX).
For more info on the conversion process, see this tutorial: https://learnopencv.com/pytorch-to-tensorflow-model-conversion/
You could consider using the TorchX library. I haven't used it yet, but it seems to make it easier to deploy models by creating and running model pipelines. I don't think it has the same data-validation functionality that TensorFlow Extended has, but maybe that will be added in the future.

How to do fine-tuning on a TFLite model

I would like to fine-tune a model on my own data. However, the model is distributed in tflite format. Is there any way to extract the model architecture and parameters from the tflite file?
One approach could be to convert the TFLite file to another format and import it into a deep learning framework that supports training.
Something like ONNX, using tflite2onnx, and then import into a framework of your choice. Not every framework can import from ONNX (PyTorch, for example, cannot). I believe you can train with ONNX Runtime and MXNet; I'm unsure whether you can train with TensorFlow.
I'm not sure I understand what you need, but if you want to know the exact architecture of your model, you can use Netron to inspect it; it renders the model graph as a diagram.
And for your information, TensorFlow Lite is not meant to be fine-tuned. You need to fine-tune a regular TensorFlow model and then convert it to TensorFlow Lite.