How do we deploy a trained TensorFlow model on a mobile device? - tensorflow

One of the highlights of TensorFlow appears to be "true portability": seamless deployment of trained models across different platforms, especially running a trained model on a mobile device. Do you have an example or a tutorial that walks through how a trained TensorFlow model can be packaged and executed within a mobile app?

The TensorFlow repository includes an example Android application that uses the mobile device's camera as a data source and the Inception image classification model for inference. The source can be found here, and the repository includes both the full source code and a link to download a trained model.
The model is the Inception model that won the ImageNet Large Scale Visual Recognition Challenge in 2014.
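For context, here is a minimal desktop sketch of how such a packaged (frozen) graph is executed; the .pb filename and the input/output tensor names below are placeholders, not the demo's actual values:

    import numpy as np
    import tensorflow.compat.v1 as tf  # TF1-style graph loading

    tf.disable_eager_execution()

    # Placeholder file/tensor names -- adjust to the actual frozen model.
    GRAPH_PB = "tensorflow_inception_graph.pb"
    INPUT_TENSOR = "input:0"
    OUTPUT_TENSOR = "output:0"

    # Load the frozen GraphDef that the mobile app also bundles.
    with tf.gfile.GFile(GRAPH_PB, "rb") as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())

    with tf.Graph().as_default() as graph:
        tf.import_graph_def(graph_def, name="")

    # Run a single inference on a dummy image batch.
    with tf.Session(graph=graph) as sess:
        image = np.random.rand(1, 224, 224, 3).astype(np.float32)
        logits = sess.run(OUTPUT_TENSOR, feed_dict={INPUT_TENSOR: image})
        print("Predicted class:", int(np.argmax(logits)))

On the device itself, the Android demo performs the equivalent steps through its Java inference interface.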

Related

Saved models for TFLite to use on edge devices

Dear Google MediaPipe team,
Could you offer quantized models for MediaPipe's pose, face, iris, and hand TFLite files?
I have used MediaPipe's Holistic on Android with a Qualcomm device.
I want to improve the performance by using Qualcomm's SNPE SDK.
The SDK requires quantized models.
If you can offer quantized models of Holistic, my plan is to try to replace the TFLite-related code with DLC code.
DLC (Deep Learning Container) is SNPE's format for running inference on the Qualcomm DSP, and the SDK provides a conversion tool for quantized TFLite files.
Thanks,
Hoyeon
I have developed body gesture recognition using MediaPipe,
but its output throughput couldn't meet our specs.
If I can use the SNPE SDK, I will achieve my mission.
I checked Stack Overflow and the TensorFlow pages for how to convert a TFLite file to a quantized TFLite file, and I found it requires a SavedModel.
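For reference, a minimal sketch of that SavedModel-based full-integer quantization flow; the directory name, input shape, and sample count below are assumptions, not MediaPipe's actual values:

    import numpy as np
    import tensorflow as tf

    SAVED_MODEL_DIR = "holistic_saved_model"  # placeholder directory

    def representative_dataset():
        # Yield a few calibration samples matching the model's input signature.
        for _ in range(100):
            yield [np.random.rand(1, 256, 256, 3).astype(np.float32)]

    converter = tf.lite.TFLiteConverter.from_saved_model(SAVED_MODEL_DIR)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.representative_dataset = representative_dataset
    # Force full-integer quantization, as typically required by DSP back ends.
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.int8
    converter.inference_output_type = tf.int8

    tflite_quant_model = converter.convert()
    with open("holistic_int8.tflite", "wb") as f:
        f.write(tflite_quant_model)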

TensorFlow vs. TensorFlow Lite for a mobile app's ML data pipeline

I want to build an ML data pipeline for a recommender system in a dating mobile app.
Currently, I am at a very early stage, trying to figure out the infrastructure, but I am confused about TensorFlow and TensorFlow Lite.
Can I build the system using TensorFlow and then, after training, hyperparameter tuning, etc., deploy the model in the backend?
Is it mandatory to use TensorFlow Lite whenever you want to use ML on mobile, or is that only needed when you actually want to train the model on a phone?
TensorFlow Lite is mainly for inference use cases. After training TF models on the desktop/server side, you can convert the trained TF model to the corresponding TensorFlow Lite model to deploy it to mobile, optionally applying techniques such as quantization.
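To illustrate that train-on-server, convert-for-mobile flow, here is a minimal sketch; the tiny Keras model is only a stand-in for a real recommender:

    import tensorflow as tf

    # Stand-in model; replace with the actual trained recommender.
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(32,)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])

    # Convert the trained Keras model to TensorFlow Lite.
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    # Optional: dynamic-range quantization to shrink the model for mobile.
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    tflite_model = converter.convert()

    with open("recommender.tflite", "wb") as f:
        f.write(tflite_model)

Note that if the model is served from your backend rather than on the phone, you can deploy it with regular TensorFlow; the Lite conversion is only needed for on-device inference.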

TF Lite Retraining on Mobile

Let's assume I made an app that has machine learning in it, using a .tflite file.
Is it possible to retrain this model right inside the app?
I have tried to use the Model Maker provided by TensorFlow, but without it, I don't think there's any other way to retrain your model with just the app I made.
Do you mean training on the device after the app is deployed? If yes, TFLite currently doesn't support training in general, but there is some experimental work in this direction with limited support, as shown by https://github.com/tensorflow/examples/blob/master/lite/examples/model_personalization.
Currently, the retraining of a TFLite model, as you found out with Model Maker, has to happen offline with TF before the app is deployed.
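As a concrete example of that offline flow, a minimal Model Maker sketch (assuming an image-classification task and a hypothetical folder of labelled images) might look like:

    # pip install tflite-model-maker
    from tflite_model_maker import image_classifier
    from tflite_model_maker.image_classifier import DataLoader

    # Hypothetical folder of class-labelled images gathered for retraining.
    data = DataLoader.from_folder("retraining_images/")
    train_data, test_data = data.split(0.9)

    # Retrain (transfer-learn) a default image classifier offline.
    model = image_classifier.create(train_data)
    loss, accuracy = model.evaluate(test_data)

    # Export the updated .tflite to bundle with the next app release.
    model.export(export_dir="exported_model/")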

I want to learn about TensorFlowInferenceInterface, which is used to create TensorFlow apps. What sources can be trusted?

I am working on an audio classifier app project. I am a beginner in Java. Also, if someone can help me figure out how to extract MFCC features from an audio signal, I would like to give them credit; please provide contact details if you are interested.
On Android, you can use TensorFlow Lite, a lightweight solution for TensorFlow.
You can convert a TensorFlow model or Keras model to a TF Lite model (.tflite). See here.
This TF Lite model can then be run on an Android or iOS device using TF Lite's Java and Swift APIs.
It may not support some layers, such as LSTM or BatchNormalization; Dense and all Conv layers work well.
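Before wiring the converted model into the app, it can help to sanity-check the .tflite file with the Python interpreter; the filename below is a placeholder, and the same load/feed/invoke/read flow applies conceptually in the Java and Swift APIs:

    import numpy as np
    import tensorflow as tf

    # Placeholder model file produced by the converter.
    interpreter = tf.lite.Interpreter(model_path="audio_classifier.tflite")
    interpreter.allocate_tensors()

    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()

    # Feed a dummy input shaped like the model's expected feature tensor.
    dummy = np.random.rand(*input_details[0]["shape"]).astype(
        input_details[0]["dtype"])
    interpreter.set_tensor(input_details[0]["index"], dummy)
    interpreter.invoke()

    scores = interpreter.get_tensor(output_details[0]["index"])
    print("Predicted class:", int(np.argmax(scores)))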
Another method is using TensorFlow Mobile.
TensorFlow Mobile runs a protocol buffers file (.pb). It has been deprecated but can still be used; however, Google suggests that developers use TF Lite.
You can find a complete tutorial here.
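As for the MFCC part of the question, here is a minimal sketch using the librosa library; the filename, 16 kHz sample rate, and 13 coefficients are conventional placeholders, not requirements:

    # pip install librosa
    import librosa

    # Placeholder audio file; 16 kHz is a common rate for audio models.
    y, sr = librosa.load("example.wav", sr=16000)

    # 13 MFCCs per frame is a conventional starting point.
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    print(mfcc.shape)  # (13, number_of_frames)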

How can I get access to intermediate activation maps of the pre-trained models in NiftyNet?

I could download and successfully test the brain parcellation demo of the NiftyNet package. However, this only gives me the final parcellation result of a pre-trained network, whereas I also need access to the output of the intermediate layers.
According to this demo, the following line downloads a pre-trained model and a test MR volume:
wget -c https://www.dropbox.com/s/rxhluo9sub7ewlp/parcellation_demo.tar.gz -P ${demopath}
where ${demopath} is the path to the demo folder. Extracting the downloaded file creates a .ckpt file, which seems to contain a pre-trained TensorFlow model; however, I could not manage to load it into a TensorFlow session.
Is there a way I can load the pre-trained model and access all of its intermediate activation maps? In other words, how can I load the pre-trained models from the NiftyNet library into a TensorFlow session so that I can explore the model or probe a certain intermediate layer for any given input image?
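For reference, the generic TF1 checkpoint-probing approach would look roughly like the sketch below; it assumes a .meta graph file accompanies the .ckpt, and the tensor names are made-up examples, not NiftyNet's actual names:

    import tensorflow.compat.v1 as tf  # TF1-style API

    tf.disable_eager_execution()

    # Placeholder checkpoint prefix -- adjust to the extracted demo files.
    CKPT = "parcellation_demo/model.ckpt"

    with tf.Session() as sess:
        # Assumes a matching .meta file sits next to the checkpoint data files.
        saver = tf.train.import_meta_graph(CKPT + ".meta")
        saver.restore(sess, CKPT)

        graph = tf.get_default_graph()
        # Listing operations helps locate the layer you want to probe.
        for op in graph.get_operations()[:20]:
            print(op.name)

        # Made-up example names; pick real ones from the listing above, e.g.:
        # input_tensor = graph.get_tensor_by_name("input_placeholder:0")
        # activation = graph.get_tensor_by_name("HighRes3DNet/conv_1/conv:0")
        # feature_map = sess.run(activation, feed_dict={input_tensor: volume})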
Finally, on NiftyNet's website it is mentioned that "a number of models from the literature have been (re)implemented in the NiftyNet framework". Are the pre-trained weights of these models also available? The demo uses a pre-trained model called HighRes3DNet. If the pre-trained weights of other models are also available, what is the link to download those weights or saved TensorFlow models?
To answer your 'Finally' question first, NiftyNet has some network architectures implemented (e.g., VNet, UNet, DeepMedic, HighRes3DNet) that you can train on your own data. For a few of these, there are pre-trained weights for certain applications (e.g. brain parcellation with HighRes3DNet and abdominal CT segmentation with DenseVNet).
Some of these pre-trained weights are linked from the demos, like the parcellation one you linked to. We are starting to collect the pre-trained models into a model zoo, but this is still a work in progress.
Eli Gibson [NiftyNet developer]