How do you embed a tflite file into an Android application?

What are the step-by-step instructions for using a TFLite file and embedding it in an actual Android application? For reference, this is a regression task: the input will be an image, and the output should be a number. I have already looked at the TensorFlow documentation, but it does not explain how to do this from scratch.

The following steps are required to use TFLite in Android:
1. Add the dependency 'org.tensorflow:tensorflow-lite:+' to your build.gradle.
2. Make sure files of type .tflite will not be compressed, using aaptOptions { noCompress "tflite" } in your build.gradle.
3. Make the model's .tflite file available by putting it into your app's assets folder (to create one, right-click the res folder, then New > Folder > Assets Folder).
4. In the Java class that will handle the inference, import the TFLite Interpreter.
5. Load the model file as a MappedByteBuffer in your Java class.
6. Pass the MappedByteBuffer to your TFLite Interpreter.
7. Convert the input image to a float ByteBuffer.
8. Define an output array matching the size of your output layer (for a single-number regression, float[1][1]).
9. Use the loaded TFLite Interpreter to forward-pass the input ByteBuffer through your model and write the prediction into your output array (see the sketch below).
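
A minimal Java sketch of steps 4-9, assuming the model is named model.tflite, takes a square RGB input of INPUT_SIZE pixels normalized to [0, 1], and produces a single float; all of these are placeholders to adapt to your model:

import android.app.Activity;
import android.content.res.AssetFileDescriptor;
import android.graphics.Bitmap;

import org.tensorflow.lite.Interpreter;

import java.io.FileInputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class RegressionInference {
    private static final int INPUT_SIZE = 224;  // placeholder: your model's input size
    private final Interpreter interpreter;

    public RegressionInference(Activity activity) throws IOException {
        // Memory-map the model from the assets folder.
        AssetFileDescriptor fd = activity.getAssets().openFd("model.tflite");
        FileChannel channel = new FileInputStream(fd.getFileDescriptor()).getChannel();
        MappedByteBuffer model = channel.map(
                FileChannel.MapMode.READ_ONLY, fd.getStartOffset(), fd.getDeclaredLength());
        interpreter = new Interpreter(model);
    }

    public float predict(Bitmap bitmap) {
        // The bitmap must already be scaled to INPUT_SIZE x INPUT_SIZE.
        // Convert the image to a direct float ByteBuffer (NHWC, RGB).
        ByteBuffer input = ByteBuffer.allocateDirect(4 * INPUT_SIZE * INPUT_SIZE * 3)
                .order(ByteOrder.nativeOrder());
        int[] pixels = new int[INPUT_SIZE * INPUT_SIZE];
        bitmap.getPixels(pixels, 0, INPUT_SIZE, 0, 0, INPUT_SIZE, INPUT_SIZE);
        for (int pixel : pixels) {
            input.putFloat(((pixel >> 16) & 0xFF) / 255.0f);  // R
            input.putFloat(((pixel >> 8) & 0xFF) / 255.0f);   // G
            input.putFloat((pixel & 0xFF) / 255.0f);          // B
        }

        // Output array matching a single-value regression head.
        float[][] output = new float[1][1];
        interpreter.run(input, output);
        return output[0][0];
    }
}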
Let me know if something is unclear!

For Android applications, here is a quick example of using TF Lite for classification. You may be able to follow a similar structure to the one here: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/examples/android/app/src/main/java/org/tensorflow/demo/TFLiteImageClassifier.java

Related

Is SavedModel usage just for TensorFlow Serving or for retraining?

Is this SavedModel just for TensorFlow front-end applications, or can it be used to reload the model in Keras format? I created it using tf.saved_model.save and now I don't know what to make of it.
Following the guide above, I was able to load the SavedModel directory, but it seems to be of no use: it is not trainable, nor can it predict input like model.predict. And it's the only thing I have, since I lost the h5 file *cough* trash bin *cough*.
Note: I noticed the guide tells me to use tf.keras.models.load_model('inceptionv3'), and it returns the error shown in the attached image.
You have saved the model using tf.saved_model.save, so the correct way to load it back is tf.saved_model.load('inceptionv3'). This is also suggested in your error image.
After loading the model, you can try doing prediction as follows:
model = tf.saved_model.load('inceptionv3')
out = model(inputs)
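
If calling the loaded object directly does not work, the object returned by tf.saved_model.load also exposes the model's serving signatures. A minimal sketch, assuming the usual 'serving_default' signature key and an InceptionV3-sized input (check model.signatures for your actual keys):

import tensorflow as tf

model = tf.saved_model.load('inceptionv3')

# SavedModels expose their concrete functions through signatures.
infer = model.signatures['serving_default']

# Placeholder input: one 299x299 RGB image, the size InceptionV3 expects.
inputs = tf.random.uniform([1, 299, 299, 3])

# The result is a dict mapping output names to tensors.
out = infer(inputs)
print(out)

Note that a model loaded this way is not a Keras model, so Keras conveniences such as model.predict and model.fit are not available on it.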

How do I convert a Tensorflow model to .mlmodel?

I want to convert a Tensorflow model with the following structure to a .mlmodel file for use in an iOS app:
cub_image_experiment/
    logdir/
        val_summaries/
        test_summaries/
        finetune/
            val_summaries/
    cmds.txt
    config_train.yaml
    config_test.yaml
I'm following this tutorial: https://github.com/visipedia/tf_classification/wiki/CUB-200-Image-Classification
However, I'm having trouble understanding the structure of the project. Which files are important, and how do I convert all the separate config files and everything else into a single .mlmodel file that I can use in my application?
I've looked online, and all I could find was how to convert a .caffemodel or a .pb file to .mlmodel. Those are all single files, whereas my project has multiple files. I found a tutorial on how to convert a TF model into a single .pb file; however, that model's structure was different and it did not contain any yaml files. My project is not focused on creating a model at the moment, but merely on integrating one into an iOS app. I found this model interesting for an app idea and wanted to know if it can be integrated. If there are any tutorials out there that might help with this sort of problem, please let me know.
None of that stuff is used by the Core ML model; the yaml files etc. are used only to train the TF model.
All you need to provide is a frozen graph (a .pb file), which you then convert to an .mlmodel using tfcoreml.
It looks like your project doesn't have a frozen graph but checkpoints. There is a TF utility that you can use to convert the checkpoint to a frozen graph, see https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/tools/freeze_graph.py. A sketch of both steps follows.
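
A minimal sketch of the two steps. The node names ('images', 'Predictions/Softmax'), input size, and checkpoint paths below are placeholders; inspect your graph (e.g. with Netron) to find the real ones:

from tensorflow.python.tools import freeze_graph
import tfcoreml

# Step 1: freeze the checkpoint into a single .pb file.
# (This can also be run from the command line via freeze_graph.py.)
freeze_graph.freeze_graph(
    input_graph='logdir/graph.pbtxt',               # placeholder path
    input_saver='',
    input_binary=False,
    input_checkpoint='logdir/finetune/model.ckpt',  # placeholder path
    output_node_names='Predictions/Softmax',        # placeholder node name
    restore_op_name='save/restore_all',
    filename_tensor_name='save/Const:0',
    output_graph='frozen_graph.pb',
    clear_devices=True,
    initializer_nodes='')

# Step 2: convert the frozen graph to a Core ML model.
tfcoreml.convert(
    tf_model_path='frozen_graph.pb',
    mlmodel_path='cub_classifier.mlmodel',
    output_feature_names=['Predictions/Softmax:0'],
    input_name_shape_dict={'images:0': [1, 299, 299, 3]})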

How to check the quantized weights in tflite files

When I quantize a model with TensorFlow's lite module, I can't check the quantized weight values. Is there any way to view these values in the .tflite file, or to parse the .tflite file?
There are some neural network visualizers that can also provide an interface to inspect the file. I have been using Netron. You can click on the "weights" tab of the layer you are interested in to view the data. I haven't tried it yet, but there appears to be a floppy-disk save icon when you view weights/biases in the right side-bar.
The data format is a Google FlatBuffer, defined by the schema file here. You may prefer doing this if you want to do something with the data, like output it in a different format. I found the output from parsing it myself using the schema.fbs file to match Netron's for the CNNs I passed in. You can check out the FlatBuffers documentation here.
The first answer here is a guide on how to create a JSON view of a .tflite model; there you can see the quantized values.
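
If you would rather not parse the FlatBuffer yourself, the TFLite Python interpreter can also dump tensor values and quantization parameters. A minimal sketch (the tensor index is a placeholder; pick one from the printed listing):

import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path='model.tflite')
interpreter.allocate_tensors()

# List every tensor with its (scale, zero_point) quantization parameters.
for detail in interpreter.get_tensor_details():
    scale, zero_point = detail['quantization']
    print(detail['index'], detail['name'], detail['dtype'], scale, zero_point)

# Read the raw quantized values of one tensor by its index.
weights = interpreter.get_tensor(5)  # placeholder index
print(weights)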

Is there a C/C++ API for Tensorflow object detection

Is there a C/C++ API, pre-trained on the ImageNet dataset, for detection?
I have tried Yolo, with
./darknet -i 0 detector demo cfg/imagenet1k.data extraction.cfg extraction.weights
But it gives me the error
Last layer must produce detections
And for TensorFlow, it looks like there is only a Python API:
https://github.com/tensorflow/models/tree/master/research/object_detection
When you develop a model in TensorFlow, it can be output as a protobuf file (usually with a .pb extension; for more details on protobuf in TensorFlow check out this page). This protobuf file can then be used in applications written in any language TensorFlow has bindings for. A simple tutorial on how to accomplish this for a C++ application can be found here.
Regarding Yolo, you can generate a protobuf file from the Yolo weights with darkflow like this:
flow --model cfg/yolo.cfg --load bin/yolo.weights --savepb
(Further details on other parameters that can be passed can be found in the darkflow GitHub readme.)
The output protobuf file can then be loaded into your C++ application to perform object detection, as in the sketch below.
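
For reference, a minimal C++ sketch of loading such a frozen graph with the TensorFlow C++ API; the node names "input" and "output" and the 416x416 shape are placeholders that depend on the exported graph:

#include <memory>
#include <vector>

#include "tensorflow/core/framework/graph.pb.h"
#include "tensorflow/core/platform/env.h"
#include "tensorflow/core/public/session.h"

int main() {
  // Read the frozen graph produced by darkflow / freeze_graph.
  tensorflow::GraphDef graph_def;
  TF_CHECK_OK(tensorflow::ReadBinaryProto(
      tensorflow::Env::Default(), "yolo.pb", &graph_def));

  // Create a session and load the graph into it.
  std::unique_ptr<tensorflow::Session> session(
      tensorflow::NewSession(tensorflow::SessionOptions()));
  TF_CHECK_OK(session->Create(graph_def));

  // Placeholder input: a single 416x416 RGB image, values in [0, 1].
  tensorflow::Tensor input(
      tensorflow::DT_FLOAT, tensorflow::TensorShape({1, 416, 416, 3}));

  // Run the graph; "input" and "output" must match your graph's node names.
  std::vector<tensorflow::Tensor> outputs;
  TF_CHECK_OK(session->Run({{"input", input}}, {"output"}, {}, &outputs));
  return 0;
}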

Using retrained data with classify_image.py

I've been using TensorFlow image recognition. I've built many scripts which interact with classify_image.py.
I also retrained the model using retrain.py, with my own dataset.
How can I use the two generated files, output_graph.pb and output_labels.txt, with classify_image.py?
Ah, the docs say:
If you'd like to use the retrained model in a Python program, this example from #eldor4do shows what you'll need to do.
I just copied/edited that one file locally and ran python .\edited-retraining-example.py. That was easy.
Note that if you're on Windows, change all examples of /tmp/... to c:/tmp/....
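
For reference, a minimal sketch of the inference part of that example (TF 1.x style; 'final_result:0' and 'DecodeJpeg/contents:0' are the default node names used by retrain.py, so adjust them if your retraining differed):

import tensorflow as tf

# Load the retrained graph.
with tf.gfile.GFile('output_graph.pb', 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())
tf.import_graph_def(graph_def, name='')

# Load the labels written out by retrain.py, one per line.
labels = [line.strip() for line in tf.gfile.GFile('output_labels.txt')]

with tf.Session() as sess:
    # Feed raw JPEG bytes; the graph decodes them itself.
    image_data = tf.gfile.GFile('test.jpg', 'rb').read()
    preds = sess.run('final_result:0',
                     {'DecodeJpeg/contents:0': image_data})
    # Print the top-5 predictions.
    for i in preds[0].argsort()[::-1][:5]:
        print(labels[i], preds[0][i])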