Converting a TensorFlow 1.1 model into TensorFlow Lite - tensorflow

I want to convert my TensorFlow 1.1-based model into TensorFlow Lite in order to serve the model locally and remotely for a PWA. The official guide only documents the Python API from 1.11 onward, and the command-line tools only seem to be available starting with 1.7. Is it possible to convert a 1.1 model to TensorFlow Lite? Has anyone had experience with this?
The model is an out-of-the-box pre-trained BiDAF model. I am having difficulty serving the full TensorFlow app on Heroku, which cannot run it. I would like to try a TensorFlow Lite app to see whether hosting it locally makes it faster and easier to set up as a PWA.
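
For reference, a minimal sketch of one possible route, assuming the 1.1 graph can first be frozen to a .pb and then converted with a newer TensorFlow install (1.13+ or 2.x); the file name and tensor names below are placeholders, not the BiDAF model's real ones:

```python
# Sketch: freeze the TF 1.1 graph to a .pb, then convert it with a newer
# TensorFlow whose converter accepts frozen graphs.
# File name and tensor names are placeholders.
import tensorflow as tf

converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file="bidaf_frozen.pb",              # frozen GraphDef exported from TF 1.1
    input_arrays=["context_ids", "query_ids"],     # placeholder input tensor names
    output_arrays=["start_logits", "end_logits"],  # placeholder output tensor names
)
tflite_model = converter.convert()

with open("bidaf.tflite", "wb") as f:
    f.write(tflite_model)
```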

Related

Use a model trained by Google Cloud Vertex AI accelerated with TRT on Jetson Nano

I am trying to standardize our deployment workflow for machine vision systems, so we came up with the following workflow.
[Image: deployment workflow diagram]
We wanted to build a prototype, so we followed this workflow. There is no problem with the GCP side whatsoever, but when we export the models we trained on Vertex AI, we get the three formats mentioned in the workflow:
SavedModel
TFLite
TFJS
We then tried to convert these models to ONNX, but each attempt failed with different errors.
SavedModel - always fails with the same error regardless of the parameters we pass:
[Image: error in SavedModel conversion]
I tried to track the error down and found that the model does not even load in TensorFlow itself, which is odd, since it was exported from GCP Vertex AI, which is built on TensorFlow.
TFLite - converts successfully, but there is again a problem with the ONNX opset; with opset 15 the conversion succeeds, but then NVIDIA's TensorRT ONNX parser does not recognize the model during the ONNX-to-TRT conversion (see the conversion sketch after this question).
TFJS - not tried yet.
So we are blocked by these problems.
We can run the models exported directly from Vertex AI on the Jetson Nano, but TF-TRT / TensorFlow is not memory-optimized on the GPU, so the system freezes after 3 to 4 hours of running.
We tried this workflow once with Google Teachable Machine and it worked well; every step ran perfectly. So I am really confused about how to make sense of this, since the workflow works with a Teachable Machine model, which is created by Google, but not with a Vertex AI model, which is developed by the same company.
Or am I doing something wrong in this workflow?
For background, we are developing this workflow inside a C++ framework for a real-time application in an industrial environment.
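
For what it's worth, a minimal sketch of the SavedModel/TFLite-to-ONNX step, assuming the tf2onnx converter is installed; the paths and the opset value are placeholders and may need to match what the TensorRT ONNX parser on your JetPack version supports:

```python
# Sketch: drive the tf2onnx command-line converter from Python to turn the
# Vertex AI SavedModel (or the TFLite export) into ONNX with an explicit opset.
# The directory, file names and opset value are placeholders.
import subprocess

subprocess.run(
    [
        "python", "-m", "tf2onnx.convert",
        "--saved-model", "exported_model/",  # placeholder: Vertex AI SavedModel dir
        # "--tflite", "model.tflite",        # alternative input: the TFLite export
        "--output", "model.onnx",
        "--opset", "13",
    ],
    check=True,
)
```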

Sony Spresense SDK Tensorflow Lite - Person Detection

I am trying to do person detection with a camera using TensorFlow Lite on the Spresense board. This is a sample program from the Sony developers on the Spresense website, under the Spresense SDK CLI/GUI examples for TensorFlow Lite.
For this program they refer to an examples/tf_person_detection config. But when we try to run this command, we get an error saying the file doesn't exist.
They mention at the start that the TensorFlow LM has to be enabled in the Spresense Kconfig. We are not sure how exactly to do that.
Can anyone please help us out with how to create the tf_example file and configure Kconfig with the TensorFlow LM?
Thank you
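
A rough sketch of how the configuration step is usually driven, assuming the standard Spresense SDK tools/config.py script run from the sdk/ directory; the defconfig name comes from the Sony example, and exact option names may differ between SDK releases:

```python
# Sketch: configure the Spresense SDK from the sdk/ directory via tools/config.py.
# Names are taken from the question and may vary between SDK releases.
import subprocess

# Select the TensorFlow Lite person-detection example configuration.
subprocess.run(["./tools/config.py", "examples/tf_person_detection"], check=True)

# Open the interactive Kconfig menu to check that the TensorFlow LM option is enabled.
subprocess.run(["./tools/config.py", "-m"], check=True)
```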

Are there any plans to migrate the deeplab code to tensorflow 2?

The deeplab code under tensorflow/models/research/deeplab seems to be using an older version of TensorFlow. With the new version of TensorFlow out, are there any plans to migrate the existing repo to TF2, specifically the tf.keras API for model definition?

What happens when using a higher-version TF Serving to serve a model from a lower-version TensorFlow?

I have trained and exported a model with TensorFlow 1.12.
Then I tested tf-serving 2.1.0, 1.15.0, and 1.12.0 separately to serve the saved_model.
I got correct results with tf-serving 1.12.0, while the results from tf-serving 2.1.0 and 1.15.0 are identical to each other but wrong.
I ran another test, using tf-serving 2.1.0 and 1.15.0 separately to serve a model trained with TensorFlow 1.15. This time both results are the same and correct, so tf-serving 2.1.0 and 1.15.0 do seem to produce the same results.
Isn't tf serving backward compatible?
I didn't get any warnings or errors throughout these experiments.
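
A minimal sketch of how the comparison can be reproduced, assuming both serving versions expose the REST API; the host, ports, model name and request payload below are placeholders:

```python
# Sketch: send the same request to two TensorFlow Serving instances
# (e.g. 1.12.0 and 2.1.0) and compare the predictions they return.
# Host, ports, model name and the example instance are placeholders.
import requests

payload = {"instances": [{"input": [1.0, 2.0, 3.0]}]}  # placeholder request body

def predict(port):
    url = f"http://localhost:{port}/v1/models/my_model:predict"
    response = requests.post(url, json=payload)
    response.raise_for_status()
    return response.json()["predictions"]

predictions_112 = predict(8501)  # container running tf-serving 1.12.0
predictions_210 = predict(8502)  # container running tf-serving 2.1.0
print("identical:", predictions_112 == predictions_210)
```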

Does inception_v3 use dilations?

I'd like to ask whether the inception_v3 model uses dilations, as I am planning to run my model for inference on a server that only has TensorFlow 1.3 installed. That version isn't compatible with said dilations, so I'd like to make sure my model would work on the server.
No, inception_v3 does not use dilations and is compatible with TF 1.3 and up; MobileNet, however, requires TF 1.5 and up.
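
If you want to double-check a specific export, here is a minimal sketch that scans a frozen graph for dilated convolutions, assuming a local frozen .pb file (the path is a placeholder):

```python
# Sketch: scan a frozen inception_v3 GraphDef and report any convolution nodes
# whose "dilations" attribute is not all ones. The .pb path is a placeholder;
# an absent attribute also means no dilation is used.
import tensorflow as tf

graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile("inception_v3_frozen.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

for node in graph_def.node:
    if node.op in ("Conv2D", "DepthwiseConv2dNative") and "dilations" in node.attr:
        dilations = list(node.attr["dilations"].list.i)
        if any(d != 1 for d in dilations):
            print(node.name, dilations)
```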