SavedModels for TFLite models to use on the Edge TPU (edge-tpu)

Dear Google MediaPipe team,
Could you offer the quantized models (TFLite files) related to MediaPipe's pose, face, iris and hand?
I have used MediaPipe's Holistic on Android with a Qualcomm device.
I want to improve the performance by using Qualcomm's SNPE SDK.
The SDK requires quantized models.
If you can offer quantized models of Holistic, my plan is to try to replace the TFLite-related code with DLC code.
DLC (Deep Learning Container) is SNPE's format for running inference on the Qualcomm DSP, and the SDK provides a conversion tool for quantized TFLite files.
thanks,
Hoyeon
I have developed body gesture recognition using MediaPipe,
but its throughput couldn't meet our specs.
If I can use the SNPE SDK, I will be able to achieve my goal.
I checked Stack Overflow and the TensorFlow pages for how to convert a TFLite file to a quantized TFLite file, and I found that it requires a SavedModel.

Related

TF Lite Retraining on Mobile

Let's assume I made an app that has machine learning in it using a tflite file.
Is it possible that I could retrain this model right inside the app?
I have tried to use the Model Maker which is provided by TensorFlow, but without it, I don't think there's any other way to retrain your model with just the app I made.
Do you mean training on the device when the app is deployed? If yes, TFLite currently doesn't support training in general. But there's some experimental work in this direction with limited support as shown by https://github.com/tensorflow/examples/blob/master/lite/examples/model_personalization.
Currently, the retraining of a TFLite model, as you found out w/ Model Maker, has to happen offline w/ TF before the app is deployed.
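For what it's worth, a minimal offline retraining sketch with Model Maker might look roughly like this (the image-folder path and the image-classification task are placeholders, and the exact API can differ between Model Maker versions):

from tflite_model_maker import image_classifier
from tflite_model_maker.image_classifier import DataLoader

# Load your own labeled images (placeholder path) and split off a test set.
data = DataLoader.from_folder('path/to/your_images')
train_data, test_data = data.split(0.9)

# Retrain a default image classifier on the new data and export a .tflite
# file that can be bundled with the app.
model = image_classifier.create(train_data)
loss, accuracy = model.evaluate(test_data)
model.export(export_dir='export/')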

How to convert model trained on custom data-set for the Edge TPU board?

I have trained my custom dataset using the TensorFlow Object Detection API. I run my "prediction" script and it works fine on the GPU. Now, I want to convert the model to TF Lite and run it on the Google Coral Edge TPU board to detect my custom objects. I have gone through the documentation that the Google Coral Board website provides, but I found it very confusing.
How to convert and run it on the Google Coral Edge TPU Board?
Thanks
Without reading the documentation, it will be very hard to continue. I'm not sure what your "prediction script" means, but I'm assuming that the script loaded a .pb TensorFlow model, loaded some image data, and ran inference on it to produce prediction results. That means you have a .pb TensorFlow model at the "Frozen graph" stage of the following pipeline:
Image taken from coral.ai.
The next step would be to convert your .pb model to a "fully quantized .tflite model" using the post-training quantization technique. The documentation to do that is given here. I also created a GitHub gist containing an example of post-training quantization here. Once you have produced the .tflite model, you'll need to compile the model via the edgetpu_compiler. Although everything you need to know about the edgetpu compiler is in that link, for your purpose, compiling a model is as simple as:
$ edgetpu_compiler your_model_name.tflite
This will create a your_model_name_edgetpu.tflite model that is compatible with the Edge TPU. If, at this stage, instead of creating an edgetpu-compatible model you are getting some type of error, that means your model did not meet the requirements that are posted in the model requirements section.
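Going back to the post-training quantization step mentioned above, a minimal full-integer quantization sketch could look roughly like this (the SavedModel path and the random calibration data are placeholders; you would supply your own exported model and real preprocessed samples):

import numpy as np
import tensorflow as tf

# Placeholder calibration data shaped like the model input; replace the
# random arrays with real samples so the quantization ranges are meaningful.
calibration_images = np.random.rand(100, 300, 300, 3).astype(np.float32)

converter = tf.lite.TFLiteConverter.from_saved_model('exported_model/saved_model')

def representative_dataset():
    for image in calibration_images:
        yield [image[np.newaxis, ...]]

converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Force full-integer quantization so the edgetpu_compiler can map ops to the TPU.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

with open('your_model_name.tflite', 'wb') as f:
    f.write(converter.convert())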
Once you have produced a compiled model, you can then deploy it on an edgetpu device. Currently there are 2 main APIs that can be used to run inference with the model:
EdgeTPU API
  Python API
  C++ API
tflite API
  C++ API
  Python API
Ultimately, there are many demo examples to run inference on the model here.
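As a rough illustration of the tflite API route, running the compiled model with the Edge TPU delegate might look like this (the model path and the zeroed input are placeholders):

import numpy as np
import tflite_runtime.interpreter as tflite

# Load the compiled model and attach the Edge TPU delegate
# (libedgetpu.so.1 is the delegate library name on Linux-based boards).
interpreter = tflite.Interpreter(
    model_path='your_model_name_edgetpu.tflite',
    experimental_delegates=[tflite.load_delegate('libedgetpu.so.1')])
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Placeholder input; replace with a real preprocessed image.
frame = np.zeros(input_details[0]['shape'], dtype=input_details[0]['dtype'])
interpreter.set_tensor(input_details[0]['index'], frame)
interpreter.invoke()
result = interpreter.get_tensor(output_details[0]['index'])
print(result)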
The previous answer works with general classification models, but not with TF object detection API trained models.
You cannot do post-training quantization with TF Lite converter on TF object detection API models.
In order to run object detection models on EdgeTPU-s:
You must train the models in quantization-aware training mode, with this addition in the model config:
graph_rewriter {
  quantization {
    delay: 48000
    weight_bits: 8
    activation_bits: 8
  }
}
This might not work with all the models provided in the model-zoo, try a quantized model first.
After training, export the frozen graph with: object_detection/export_tflite_ssd_graph.py
Run the tensorflow/lite/toco tool on the frozen graph to make it TFLite-compatible (a conversion sketch is shown after these steps)
And finally run edgetpu_compiler on the .tflite file
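As a hedged sketch of the export-and-convert step above, using the TF 1.x Python converter (which wraps TOCO) instead of the raw toco binary; the paths, tensor names, input shape and quantization stats below are the usual SSD export defaults and are assumptions you may need to adjust for your own model:

import tensorflow as tf

# Convert the frozen graph produced by export_tflite_ssd_graph.py into a
# quantized .tflite file.
converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file='tflite_graph/tflite_graph.pb',
    input_arrays=['normalized_input_image_tensor'],
    output_arrays=[
        'TFLite_Detection_PostProcess',
        'TFLite_Detection_PostProcess:1',
        'TFLite_Detection_PostProcess:2',
        'TFLite_Detection_PostProcess:3',
    ],
    input_shapes={'normalized_input_image_tensor': [1, 300, 300, 3]})
converter.inference_type = tf.uint8
# (mean, std) matching the preprocessing used during quantization-aware training.
converter.quantized_input_stats = {'normalized_input_image_tensor': (128, 128)}
converter.allow_custom_ops = True  # the detection post-process op is a custom op

with open('detect.tflite', 'wb') as f:
    f.write(converter.convert())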
You can find a more in-depth guide here:
https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/running_on_mobile_tensorflowlite.md

I want to learn about TensorFlowInferenceInterface, which is used to create TensorFlow apps. What sources can be trusted?

I am working on an audio classifier app project. I am a beginner in Java. Also, if someone can help me figure out how to extract MFCC features from an audio signal, I would like to give them credit; please provide your contact details if you are interested.
On Android, you can use TensorFlow Lite, a lightweight solution for TensorFlow.
You can convert a TensorFlow model or Keras model to a TF Lite model (.tflite). See here.
This TF Lite model can be run on an Android or iOS device using TF Lite's Java and Swift APIs.
It may not support some layers like LSTM or BatchNormalization. Dense and all Conv layers work well.
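For example, converting a trained Keras model could look roughly like this (the .h5 filename is a placeholder for your own audio classifier model):

import tensorflow as tf

# Load your trained Keras model (placeholder filename) and convert it.
model = tf.keras.models.load_model('audio_classifier.h5')
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# Write the flatbuffer; this file is what you bundle in the Android app's assets.
with open('audio_classifier.tflite', 'wb') as f:
    f.write(tflite_model)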
Another method is using TensorFlow Mobile.
TensorFlow Mobile runs a protocol buffers file (.pb). It has been deprecated but can still be used; however, Google suggests that developers use TF Lite.
You can find a complete tutorial here.

How to retrieve original TensorFlow frozen graph from .tflite?

Basically, I am trying to use Google's pre-trained speaker-id model for speaker detection. But since this is a TensorFlow Lite model, I can't use it on my Linux PC. For that, I am trying to find a converter back to the frozen graph model.
Any help on this converter, or any direct way to use TensorFlow Lite pretrained models on the desktop itself, would be appreciated.
You can use the converter which generates tflite models to convert it back to a .pb file if that is what you're searching for.
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/lite/toco/g3doc/cmdline_examples.md
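Alternatively, if the goal is only to run the .tflite model on a desktop machine, a minimal sketch with the TF Lite Python interpreter would be (the model filename is a placeholder):

import numpy as np
import tensorflow as tf

# Load the .tflite model directly on the desktop with the TF Lite interpreter,
# instead of converting it back to a frozen graph.
interpreter = tf.lite.Interpreter(model_path='speaker_id.tflite')
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Dummy input with the shape and dtype the model expects; replace with real audio features.
dummy = np.zeros(input_details[0]['shape'], dtype=input_details[0]['dtype'])
interpreter.set_tensor(input_details[0]['index'], dummy)
interpreter.invoke()
result = interpreter.get_tensor(output_details[0]['index'])
print(result.shape)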

How do we deploy a trained tensorflow model on a mobile device?

One of the highlights of TensorFlow appears to be "true portability": seamless deployment of trained models across different platforms, and especially running a trained model on a mobile device. Do you have an example or some tutorial that walks through how a trained TensorFlow model can be packaged and executed within a mobile app?
The TensorFlow repository includes an example Android application that uses the mobile device camera as a data source, and the Inception image classification model for inference. The source can be found here, and the repository includes both the full source code and a link to download a trained model.
The model is the Inception model that won Imagenet’s Large Scale Visual Recognition Challenge in 2014.