Issue with converting the model from Colab to a tf.keras h5 model - tensorflow

I'm having a really hard time converting this model to an h5 model so that I can then convert it to TensorFlow Lite. Has anyone managed to do that? I'd really appreciate any help I can get.
Here is my Colab:
https://colab.research.google.com/drive/1ZON8lvha8sI9ZCJEF0Ad8au2NNc9sUkU
I used this approach:
import os
from docproduct.models import MedicalQAModelwithBert

# pretrained_path is defined earlier in the notebook
medical_qa_model = MedicalQAModelwithBert(
    config_file=os.path.join(pretrained_path, 'bert_config.json'),
    checkpoint_file=os.path.join(pretrained_path, 'biobert_model.ckpt'))
medical_qa_model.save("model.h5")
The error that I get is
NotImplementedError: The save method requires the model to be a Functional model or a Sequential model. It does not work for subclassed models, because such models are defined via the body of a Python method, which isn't safely serializable. Consider using save_weights, in order to save the weights of the model.
I can use save_weights, but then I will have trouble converting to TFLite, because that conversion requires the whole model. Does anyone have suggestions for how to solve this issue?
My ultimate goal is to convert the model to tflite.
Thanks
Update:
It seems the issue is that they make their own subclass of Model and do not implement save().
https://github.com/re-search/DocProduct/blob/master/docproduct/models.py#L62
Is there any workaround to be able to convert the model without training it from scratch?
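One possible workaround, without retraining: skip save() entirely, wrap the subclassed model's forward pass in a tf.function with a fixed input signature, and hand the resulting concrete function straight to the TFLite converter. A minimal sketch, assuming a single float input; the real model expects BERT-style inputs (token/segment/mask IDs), so the TensorSpec shapes and dtypes will need adjusting:

import tensorflow as tf

# Hypothetical input spec: adjust to the model's actual inputs.
run_model = tf.function(lambda inputs: medical_qa_model(inputs))
concrete_func = run_model.get_concrete_function(
    tf.TensorSpec(shape=[1, 768], dtype=tf.float32))

# Convert from the concrete function instead of a saved .h5 file.
converter = tf.lite.TFLiteConverter.from_concrete_functions([concrete_func])
tflite_model = converter.convert()
with open("model.tflite", "wb") as f:
    f.write(tflite_model)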

Related

How to Convert tensorflow saved_model to frozen inference graph?

I trained a model with TensorFlow 2 to detect vehicles, but now I want to convert the TensorFlow saved_model to a frozen inference graph.
Can anyone help?
Freezing is not the recommended way to save your model, and I would suggest you use SavedModel instead.
People around here can help more if you explain why you want a frozen graph specifically and why SavedModel won't help.
If you still want to try freezing, you can use this internal method to do so.
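For reference, a minimal sketch of that freezing approach, assuming TF2 and an already-exported SavedModel directory (hypothetical path); convert_variables_to_constants_v2 lives in a private module, so it may change between releases:

import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import (
    convert_variables_to_constants_v2)

loaded = tf.saved_model.load("saved_model_dir")  # hypothetical path
concrete_func = loaded.signatures["serving_default"]

# Inline the variables as constants, then dump the frozen GraphDef.
frozen_func = convert_variables_to_constants_v2(concrete_func)
tf.io.write_graph(frozen_func.graph.as_graph_def(),
                  logdir=".", name="frozen_graph.pb", as_text=False)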

Can't manage to open TensorFlow SavedModel for usage in Keras

I'm kinda new to TensorFlow and Keras, so please excuse any accidental stupidity, but I have an issue. I've been trying to load in models from the TensorFlow Detection Zoo, but haven't had much success.
I can't figure out how to read these saved_model folders (they contain a saved_model.pb file, and assets and variables folders) so that they're accepted by Keras, nor can I figure out a way to convert these models so that they can be loaded. I've tried converting the SavedModel to ONNX and then converting the ONNX model to Keras, but that didn't work. Loading the original model as a saved_model and then trying to save it in another format gave me no success either.
Since you are new to TensorFlow (and, I guess, to deep learning), I would suggest you stick with the Object Detection API, because the detection-zoo models interface best with it. If you have already downloaded the model, you just need to export it using the exporter_main_v2.py script. This article explains it very well.
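For completeness, a sketch of loading and running one of the zoo's SavedModels directly; the path and input shape are assumptions, and TF2 detection-zoo models expect a batched uint8 image tensor:

import tensorflow as tf

# Point this at the folder that contains saved_model.pb.
detect_fn = tf.saved_model.load("ssd_mobilenet_v2/saved_model")  # hypothetical path

# TF2 detection-zoo models are callable on a batched uint8 image.
image = tf.zeros([1, 320, 320, 3], dtype=tf.uint8)  # dummy input
detections = detect_fn(image)
print(detections.keys())  # detection_boxes, detection_classes, detection_scores, ...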

PEGASUS From pytorch to tensorflow

I have fine-tuned PEGASUS model for abstractive summarization using this script which uses huggingface.
The output model is in pytorch.
Is there a way to transform it into a TensorFlow model so I can use it in a JavaScript backend?
There are several ways in which you can potentially achieve a conversion, some of which might not even need Tensorflow at all.
Firstly, the way that does what you intend to do: PEGASUS seems to be completely based on the BartForConditionalGeneration model, according to the transformers implementation notes. This is important, because there exists a script to convert PyTorch checkpoints to TF2 checkpoints. While this script does not explicitly allow you to convert a PEGASUS model, it does have options for BART, and running it with the respective parameters should give you the desired output.
Alternatively, you can potentially achieve the same by exporting the model into the ONNX format, which also has JS deployment options. Specific details for how to convert a Huggingface model to ONNX can be found here.
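If you stay inside the transformers library, there may be an even shorter route than the standalone conversion script: recent versions ship TF2 classes for PEGASUS, and from_pretrained can read a PyTorch checkpoint directly. A sketch, assuming the fine-tuned weights live in ./pegasus-finetuned (hypothetical path):

from transformers import TFPegasusForConditionalGeneration

# from_pt=True converts the PyTorch weights on the fly.
tf_model = TFPegasusForConditionalGeneration.from_pretrained(
    "./pegasus-finetuned", from_pt=True)
tf_model.save_pretrained("./pegasus-finetuned-tf")  # writes TF weights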

Tensorflow Extended: Is it possible to use pytorch training loop in Tensorflow extended flow

I have trained an image classification model using pytorch.
Now, I want to move it from research to production pipeline.
I am thinking of using TensorFlow Extended (TFX). I have a very noob doubt: will I be able to use my PyTorch-trained model in the TFX pipeline? (I can convert the trained model to ONNX and then to a TensorFlow-compatible format.)
I don't want to rewrite and retrain the training part in TensorFlow, as that would be a big overhead.
Is it possible or Is there any better way to productionize the PyTorch trained models?
You should be able to convert your PyTorch image classification model to TensorFlow format using ONNX, as long as you are using standard layers. I would recommend doing the conversion and then looking at both model summaries to make sure they are relatively similar. Also, run some tests to make sure your converted model handles any particular edge cases you have. Once you have confirmed that the converted model works, save it in the TF SavedModel format, and then you should be able to use it in TensorFlow Extended (TFX).
For more info on the conversion process, see this tutorial: https://learnopencv.com/pytorch-to-tensorflow-model-conversion/
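A rough sketch of that pipeline; the paths, input shape, and onnx-tf usage here are assumptions, so check them against the versions you install:

import torch
import onnx
from onnx_tf.backend import prepare

# 1. Export the trained PyTorch classifier to ONNX.
#    model is your trained torch.nn.Module (hypothetical).
dummy_input = torch.randn(1, 3, 224, 224)  # hypothetical input shape
torch.onnx.export(model, dummy_input, "classifier.onnx", opset_version=11)

# 2. Convert the ONNX graph to a TF SavedModel for use in TFX.
tf_rep = prepare(onnx.load("classifier.onnx"))
tf_rep.export_graph("classifier_savedmodel")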
You could also consider using the TorchX library. I haven't used it yet, but it seems to make it easier to deploy models by creating and running model pipelines. I don't think it has the same data-validation functionality that TensorFlow Extended has, but maybe that will be added in the future.

How to do fine tuning on TFlite model

I would like to fine-tune a model on my own data. However, the model is distributed in the tflite format. Is there any way to extract the model architecture and parameters from the tflite file?
One approach could be to convert the TFLite file to another format, and import into a deep learning framework that supports training.
Something like ONNX: use tflite2onnx, and then import into a framework of your choice. Note that not all frameworks can import from ONNX (PyTorch, for example, has no native ONNX import). I believe you can train with ONNX Runtime and with MXNet; I'm unsure whether you can train using TensorFlow.
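A sketch of the tflite2onnx step (the filenames are hypothetical):

# pip install tflite2onnx
import tflite2onnx

# Convert the TFLite flatbuffer into an ONNX graph.
tflite2onnx.convert("model.tflite", "model.onnx")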
I'm not sure I understand what you need, but if you want to know the exact architecture of your model, you can use Netron to inspect the tflite file. You will get something like this:
[Netron screenshot of the model graph]
And for your information, TensorFlow Lite models are not meant to be fine-tuned. You need to fine-tune a regular TensorFlow model and then convert it to TensorFlow Lite.
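For reference, once you have the fine-tuned Keras model, the standard conversion is short (a sketch; model stands for your fine-tuned tf.keras model):

import tensorflow as tf

# model is your fine-tuned tf.keras model (hypothetical).
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
with open("model.tflite", "wb") as f:
    f.write(tflite_model)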