is it possible to convert tflite to tf model? - tensorflow

I converted a float TF model into an integer TFLite model so that I could run inference on an edge device. TFLite models are lightweight and simple to deploy. However, TFLite has a few functions and input allocations that differ from TF, so I would like to convert the model back to TF. If anyone has any insight into this matter, please leave your thoughts in the comments.
Thanks.

Because some information is lost during the conversion (e.g. due to quantization and other optimization steps), there is no defined way to convert it back. In case you want to revert the flatbuffer (.tflite) back to the frozen graph (.pb), you can refer to this: Converting .tflite to .pb.
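That said, the weights stored in the flatbuffer are not lost, and you can at least inspect them with the TFLite Interpreter. A minimal sketch (the file name model.tflite is a placeholder); note that for an integer-quantized model you only get the quantized values plus their scale/zero-point parameters, so the original float weights can at best be approximated:

import tensorflow as tf

# Load the flatbuffer and list every tensor stored in it.
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

for detail in interpreter.get_tensor_details():
    # Each entry carries the tensor's name, shape and dtype; quantized
    # tensors also expose their scale/zero-point under
    # detail["quantization_parameters"].
    print(detail["name"], detail["shape"], detail["dtype"])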

Related

PEGASUS From pytorch to tensorflow

I have fine-tuned a PEGASUS model for abstractive summarization using this script, which uses huggingface.
The output model is in PyTorch.
Is there a way to transform it into a TensorFlow model so I can use it in a JavaScript backend?
There are several ways in which you can potentially achieve this conversion, some of which might not even need TensorFlow at all.
Firstly, the way that does what you intend to do: PEGASUS seems to be completely based on the BartForConditionalGeneration model, according to the transformers implementation notes. This is important because there exists a script to convert PyTorch checkpoints to TF2 checkpoints. While this script does not explicitly allow you to convert a PEGASUS model, it does have options available for BART. Running it with the respective parameters should give you the desired output.
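As a shortcut, the transformers library can also do this conversion directly at load time: the TF model classes accept from_pt=True. A sketch, assuming your fine-tuned checkpoint lives in ./pegasus-finetuned and your installed version ships the TF PEGASUS implementation:

from transformers import TFPegasusForConditionalGeneration

# from_pt=True loads the PyTorch weights and converts them to TF2 in memory.
tf_model = TFPegasusForConditionalGeneration.from_pretrained(
    "./pegasus-finetuned", from_pt=True)
tf_model.save_pretrained("./pegasus-finetuned-tf")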
Alternatively, you can potentially achieve the same by exporting the model into the ONNX format, which also has JS deployment options. Specific details for how to convert a Huggingface model to ONNX can be found here.
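For the ONNX route, transformers has shipped a command-line exporter; a hedged sketch (module availability and flags depend on your installed version, and newer releases moved this functionality to the optimum package):

python -m transformers.onnx --model=./pegasus-finetuned onnx/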

Tensorflow Extended: Is it possible to use pytorch training loop in Tensorflow extended flow

I have trained an image classification model using pytorch.
Now, I want to move it from research to production pipeline.
I am thinking of using TensorFlow Extended. I have a very noob doubt: will I be able to use my PyTorch-trained model in the TensorFlow Extended pipeline? (I can convert the trained model to ONNX and then to a TensorFlow-compatible format.)
I don't want to rewrite and retrain the training part in TensorFlow, as it would be a great overhead.
Is it possible, or is there any better way to productionize PyTorch-trained models?
You should be able to convert your PyTorch image classification model to TensorFlow format using ONNX, as long as you are using standard layers. I would recommend doing the conversion and then looking at both model summaries to make sure they are relatively similar. Also, do some tests to make sure your converted model handles any particular edge cases you have. Once you have confirmed that the converted model works, save it in the TF SavedModel format, and then you should be able to use it in TensorFlow Extended (TFX).
For more info on the conversion process, see this tutorial: https://learnopencv.com/pytorch-to-tensorflow-model-conversion/
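A minimal sketch of that route, assuming your classifier is held in a variable named model, uses standard layers, and that the onnx and onnx-tf packages are installed (file names and the input shape are illustrative):

import torch
import onnx
from onnx_tf.backend import prepare

model.eval()
dummy = torch.randn(1, 3, 224, 224)  # match your model's real input shape
torch.onnx.export(model, dummy, "classifier.onnx", opset_version=11)

onnx_model = onnx.load("classifier.onnx")
tf_rep = prepare(onnx_model)          # convert the ONNX graph to TensorFlow
tf_rep.export_graph("classifier_tf")  # recent onnx-tf versions write a SavedModel here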
You could consider using the TorchX library. I haven't used it yet, but it seems to make it easier to deploy models by creating and running model pipelines. I don't think it has the same data validation functionality that TensorFlow Extended has, but maybe that will be added in the future.

Convert PoseNet TensorFlow.js params to TensorFlow Lite

I'm fairly new to TensorFlow so I apologize if I'm saying something absurd.
I've been playing with the PoseNet model in the browser using TensorFlow.js. In this project, I can change the algorithm and parameters so I can get better results on the detection of certain poses. The most important params in my use case are the Multiplier, Quant Bytes and Output Stride.
So far so good, I have the results I want. However, I want to convert these results to TensorFlow Lite so I can use them in an iOS application. I managed to find the PoseNet model in a TensorFlow Lite file (tflite), and I even found an iOS app example provided by TensorFlow, so I'm able to load up the model file and have it working on iOS.
The problem is... I'm unable to change the params (Multiplier, Quant Bytes and Output Stride) in the iOS app. I can't find anywhere how to do this. I've tried searching for these params in the iOS app source code, and I've tried to find ways to convert a TensorFlow.js model to TensorFlow Lite so I can load the model with the params I want in the app, but no luck.
I'm writing this post so maybe you guys can point me in the right direction so I'm able to "translate" what I have on TensorFlow.js to TensorFlow Lite.
EDIT:
This is what I've learned in the last couple of days:
TFLite is designed for serving a fixed model with a lightweight runtime. Thus, modifying model parameters on demand is not a design goal for it.
I looked at the TF.js code for PoseNet and found a similar design. It only seems like you can modify parameters; they actually ship a different model for each parameter combination. https://github.com/tensorflow/tfjs-models/blob/b72c10bdbdec6b04a13f780180ed904736fa52a5/posenet/src/checkpoints.ts#L37
TFLite models generally don't support dynamic parameters. Output Stride, Multiplier and Quant Bytes are fixed params when the neural network is created.
So what I want to do is extract the weights from the TF.js model and put them into existing MobileNet code.
And that's where I need help now. Could anyone point me in the direction to load and change the model so I can then convert it to tflite with my own params?
EDIT2:
I found a repo that is helping me convert TF.js models to TF Lite: Griffin98/posenet_tfjs2tflite. I still can't define the Quant Bytes, though.
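From what I can tell, Quant Bytes has no direct counterpart in the TFLite converter; the closest analogue seems to be post-training quantization applied at conversion time. A minimal sketch, assuming the repo above leaves me with a SavedModel directory (posenet_savedmodel is a placeholder):

import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("posenet_savedmodel")
# Roughly comparable to quantBytes=1 (8-bit weights) in TF.js:
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("posenet_quant.tflite", "wb") as f:
    f.write(tflite_model)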

issue with converting the model from colab to tf.keras h5 model

I'm having a really hard time converting this model to an h5 model so I can then convert it to TensorFlow Lite. Has someone managed to do that?
I really appreciate any help that I can get.
Here is my colab:
https://colab.research.google.com/drive/1ZON8lvha8sI9ZCJEF0Ad8au2NNc9sUkU
I used this approach:
import os

from docproduct.models import MedicalQAModelwithBert

medical_qa_model = MedicalQAModelwithBert(
    config_file=os.path.join(pretrained_path, 'bert_config.json'),
    checkpoint_file=os.path.join(pretrained_path, 'biobert_model.ckpt'))
medical_qa_model.save("model.h5")
The error that I get is
NotImplementedError: The save method requires the model to be a Functional model or a Sequential model. It does not work for subclassed models, because such models are defined via the body of a Python method, which isn't safely serializable. Consider using save_weights, in order to save the weights of the model.
I can save_weights, but then I will have issues converting with tflite because that requires the whole model. Does anyone have any suggestions on how to solve this issue?
My ultimate goal is to convert the model to tflite.
Thanks
Update:
It seems the issue is that they make their own subclass of Model and do not implement save().
https://github.com/re-search/DocProduct/blob/master/docproduct/models.py#L62
Is there any workaround to be able to convert the model without training it from scratch?
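One workaround I'm considering (untested sketch): in TF2, tf.lite.TFLiteConverter.from_keras_model can also accept subclassed models once their variables have been built by a forward pass, which would skip the .h5 step entirely. Here dummy_input is a placeholder for a batch shaped like the real inputs:

import tensorflow as tf

# Build the subclassed model's variables by running a forward pass once.
_ = medical_qa_model(dummy_input)

converter = tf.lite.TFLiteConverter.from_keras_model(medical_qa_model)
tflite_model = converter.convert()

with open("medical_qa.tflite", "wb") as f:
    f.write(tflite_model)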

How to use SqueezeNet in CNTK?

I am a CNTK user. I use AlexNet but would like a more compact NN -- so SqueezeNet seems to be of interest. Or does someone have some other suggestion? How do CNTK users deploy when size matters? Does somebody have a CNTK implementation of SqueezeNet?
The new ONNX model format now has several pretrained vision models, including one for SqueezeNet. You can download the model and load it into CNTK:
import cntk as C

# format=C.ModelFormat.ONNX tells CNTK the file is an ONNX model:
z = C.Function.load(<path of your ONNX model>, format=C.ModelFormat.ONNX)
You can find tutorials for importing/exporting ONNX models in CNTK here.
SqueezeNet is a good choice for a small network with the possibility of a good accuracy. Take a look at DSDSqueezeNet for even better accuracy.
However, if it does not need to be as small as SqueezeNet you also could take a look at MobileNet or NasNet Mobile. These networks may be bigger, but they provide state of the art performance in the task of image classification.
Unfortunately, I do not have a CNTK implementation of SqueezeNet, but maybe a pretrained CNTK model that you can reuse and fine-tune via transfer learning is all you are looking for. In that case I can recommend MMdnn, a conversion tool which allows you to convert an existing pretrained Caffe network to the CNTK model format. In this issue you can find a step-by-step guide for SqueezeNet.
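For reference, MMdnn's one-step converter looks roughly like this (a sketch with placeholder file names; check mmconvert -h for the exact flags of your version):

mmconvert -sf caffe -in deploy.prototxt -iw squeezenet_v1.1.caffemodel -df cntk -om squeezenet.dnn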
I do not know of a method for especially small deployment, but you basically have two choices when it comes to saving your model: the standard CNTK model format and the new ONNX format, which CNTK already supports or will soon. I have not been able to try it myself yet, but maybe it offers a smaller size for the same network.
Since the CNTK model format already saves the model in binary, I would not expect big improvements from any format. Still, compressing the model could definitely be an option if size is very important.