I am using toco to optimize a frozen model (.pb).
How do I read the .tflite file in Python - something similar to tf.gfile.GFile('frozen.pb', 'rb')?
A .tflite file is in FlatBuffer format. As far as I know, there are two ways to parse information from it:
1. Parse with flatc and JSON. TensorFlow has implemented this parsing in visualize.py, which is in tensorflow/contrib/lite/tools; you can refer to it.
2. Parse in pure Python. A FlatBuffer file has a schema, from which code can be generated for different programming languages (link: https://google.github.io/flatbuffers/flatbuffers_guide_tutorial.html). Running flatc on the TFLite schema gives you a set of Python files, and you can use the following code to parse the .tflite file:
from Model import Model  # generated by flatc from the TFLite schema
buf = open('your-tflite-file', 'rb').read()
buf = bytearray(buf)
model = Model.GetRootAsModel(buf, 0)  # note: flatc generates GetRootAsModel
Now you can get information from the model object.
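For instance, the generated accessors expose the schema fields directly. A minimal sketch (method names follow the flatc-generated Python code for the TFLite schema):

print(model.Version())          # schema version of the model
print(model.SubgraphsLength())  # number of subgraphs
subgraph = model.Subgraphs(0)
print(subgraph.InputsLength(), subgraph.OutputsLength())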
The point isn't to read it in Python -- it's for Android and iOS where there are C++ libraries to read it (with a Java Wrapper for Android)
Related
I want to convert the rcmalli_vggface_tf_vgg16.h5 pre-trained model to JS format for use in a TensorFlow.js project.
I tried different ways to convert this file, but I could not solve the problem.
I'm using the converters.convert_tf_saved_model method to load the model and then convert it to a JSON file.
converters.convert_tf_saved_model('rcmalli_vggface_tf_vgg16.h5','web_model')
But every time the following error is shown:
SavedModel file does not exist at: rcmalli_vggface_tf_vgg16.h5
I am sure the h5 file is next to the file that is running the program.
I tried the full path as well, but the same error occurred. I do not know what the problem is.
You are trying to convert a TensorFlow SavedModel, but the model is actually a Keras layers model, because it is stored in an h5 file.
I find this the simplest way to convert the model, and it did the job in about 2 seconds.
Go to the TensorFlow.js converter GitHub.
Follow the installation instructions.
Download the model you point to from the GitHub repo.
The model is written in the HDF5 file format.
By the way, this is the oldest of the models; why not download a newer one?
Also, this is a huge model; unless it runs on a server it is of no use in a browser (the conversion produces 50 shard files of 4 MB each).
Perform the conversion*
tensorflowjs_converter --input_format=keras rcmalli_vggface_tf_vgg16.h5 ./converted
The output will then be a layers model, and this has to be loaded with the tf.loadLayersModel API.
For example, use const model = await tf.loadLayersModel('path/to/model.json');
Note*: ./converted is the output directory; be sure not to overwrite your own files.
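If you prefer to stay in Python, the tensorflowjs package also exposes the converter as an API. A minimal sketch, assuming pip install tensorflowjs and that the .h5 file loads as a plain Keras model:

import tensorflowjs as tfjs
from tensorflow import keras

# Load the Keras (HDF5) model, then write the TF.js layers-model artifacts.
model = keras.models.load_model('rcmalli_vggface_tf_vgg16.h5')
tfjs.converters.save_keras_model(model, './converted')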
I am currently working on object detection task.
Until TF 1.2 we could use toco to get a .pb file back from a .tflite model file. Is there any solution in TF 2.4 or TF 2.5?
If not, how can I test .tflite model performance on test TFRecords?
Thank you,
Anshu
You can read the TFRecord file with a library such as tf.data, convert each record into an np.float32 array (or whatever shape and dtype the model expects), and feed it to the TFLite interpreter.
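A minimal sketch of that pipeline (the feature name 'image' and the JPEG encoding are assumptions; adapt the parsing to whatever your records actually contain):

import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path='model.tflite')
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

def parse_example(serialized):
    # Hypothetical feature spec; replace with the features in your records.
    features = tf.io.parse_single_example(
        serialized, {'image': tf.io.FixedLenFeature([], tf.string)})
    image = tf.io.decode_jpeg(features['image'], channels=3)
    _, height, width, _ = input_details[0]['shape']  # model's expected shape
    return tf.image.resize(image, (height, width))

dataset = tf.data.TFRecordDataset('test.tfrecord').map(parse_example)
for image in dataset:
    batch = np.expand_dims(image.numpy(), 0).astype(np.float32)
    interpreter.set_tensor(input_details[0]['index'], batch)
    interpreter.invoke()
    prediction = interpreter.get_tensor(output_details[0]['index'])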
I have a TFLite Model.
How do I fetch the version of TFLite used to create the model?
During automation, I fetch TFLite models and run inference over them. Currently I am using the TFLite 2.4.1 library. Models created with a newer version, which may contain unsupported operations, need to error out.
What is the best way of handling this?
How do I get the TFLite version from the model?
The "min_runtime_version" model metadata in the TFLite model file contains the information that describes the minimal runtime version capable of running the given model.
This value in the TFLite flatbuffer schema can be read with the existing C++ and Python schema libraries. For example:
from tensorflow.lite.python import schema_py_generated as schema_fb

model_buf = bytearray(open('model.tflite', 'rb').read())
tflite_model = schema_fb.Model.GetRootAsModel(model_buf, 0)

# Gets metadata from the model file.
for i in range(tflite_model.MetadataLength()):
    meta = tflite_model.Metadata(i)
    if meta.Name().decode("utf-8") == "min_runtime_version":
        buffer_index = meta.Buffer()
        metadata = tflite_model.Buffers(buffer_index)
        min_runtime_version_bytes = metadata.DataAsNumpy().tobytes()
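The resulting bytes are, as far as I can tell, a null-padded version string, so strip the padding before comparing it against your runtime version:

# e.g. b'1.14.0\x00...' -> '1.14.0' (the null padding is an assumption
# based on how the converter writes this fixed-size buffer)
min_runtime_version = min_runtime_version_bytes.decode('utf-8').rstrip('\x00')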
References:
Model metadata table in TFLite flatbuffer schema
I am a newbie with the TensorFlow Object Detection library. I have a specific dataset that I produced and labeled myself, with thousands of JPGs, and I ran training to detect objects in these images. At the end of the process I got a frozen graph, and from the model.ckpt checkpoint I exported an inference graph folder. Everything went fine, and I tested the model.ckpt model in the object_detection.ipynb notebook, where it works fine. Up to this step there is no problem.
However, I am not able to understand how to convert that model.ckpt file to a model.tflite file to use in an Android Studio app.
I have seen many examples, but I have no idea what input_tensors = [...] and output_tensors = [...] actually are.
Could you show me how to convert it?
Use TensorBoard to find out your input and output layers. For reference, follow these links -
https://heartbeat.fritz.ai/intro-to-machine-learning-on-android-how-to-convert-a-custom-model-to-tensorflow-lite-e07d2d9d50e3
Tensorflow Convert pb file to TFLITE using python
If you don't know your inputs and outputs, use the summarize_graph tool and feed it your frozen model.
See the command here:
https://github.com/tensorflow/tensorflow/tree/master/tensorflow/tools/graph_transforms#inspecting-graphs
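Alternatively, a small Python sketch can list candidate input nodes directly from the frozen graph (frozen_model.pb is a placeholder path; Placeholder ops are usually the feed points):

import tensorflow as tf

graph_def = tf.compat.v1.GraphDef()
with open('frozen_model.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

for node in graph_def.node:
    if node.op == 'Placeholder':  # inputs are typically Placeholder ops
        print('input:', node.name)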
If you have trained your model from scratch, you should have the .meta file. You also need to specify the output node names, which you can then use to create a .pb file. Please refer to the link below for the steps to create this file:
Tensorflow: How to convert .meta, .data and .index model files into one graph.pb file
Once this is created, you can further convert your .pb to .tflite as below:
import tensorflow as tf

# Path to the frozen graph and the names of its input/output tensors.
graph_def_file = "model.pb"
input_arrays = ["model_inputs"]
output_arrays = ["model_outputs"]

converter = tf.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file, input_arrays, output_arrays)
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)
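As a quick, optional sanity check, you can load the converted file back and inspect its tensors:

interpreter = tf.lite.Interpreter(model_path="converted_model.tflite")
interpreter.allocate_tensors()
print(interpreter.get_input_details())   # names, shapes, dtypes of inputs
print(interpreter.get_output_details())  # same for outputs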
I am trying to generate a custom TensorFlow model (.tf/.tflite file) that I want to use in my mobile application.
I have gone through a few machine learning and TensorFlow blogs, and from there I started to generate a simple ML model.
https://www.datacamp.com/community/tutorials/tensorflow-tutorial
https://www.edureka.co/blog/tensorflow-object-detection-tutorial/
https://blog.metaflow.fr/tensorflow-how-to-freeze-a-model-and-serve-it-with-a-python-api-d4f3596b3adc
https://www.youtube.com/watch?v=ICY4Lvhyobk
All of these are really nice, and they guided me through the steps below:
i) Install all necessary tools (TensorFlow, Python, Jupyter, etc.).
ii) Load the training and testing data.
iii) Run the TensorFlow session to train and evaluate the results.
iv) Take steps to increase the accuracy.
But I am not able to generate the .tf/.tflite files.
I tried the following code, but it generates an empty file.
converter = tf.contrib.lite.TFLiteConverter.from_session(sess, [], [])
model = converter.convert()
file = open('model.tflite', 'wb')
file.write(model)
I have checked a few answers on Stack Overflow, and according to my understanding, in order to generate the .tflite file we need to create the .pb file, freeze it, and then generate the .tflite file from it.
But how can we achieve this?
TensorFlow provides the TFLite converter to convert a SavedModel to a TFLite model. For more details, look here.
tf.lite.TFLiteConverter.from_saved_model() (recommended): Converts a SavedModel.
tf.lite.TFLiteConverter.from_keras_model(): Converts a Keras model.
tf.lite.TFLiteConverter.from_concrete_functions(): Converts concrete functions.
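For example, a minimal sketch of the recommended path, assuming your model was saved to a directory saved_model_dir with model.save() or tf.saved_model.save():

import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model('saved_model_dir')
tflite_model = converter.convert()
with open('model.tflite', 'wb') as f:
    f.write(tflite_model)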