I am trying to generate a custom TensorFlow model (.tf/.tflite file) that I want to use in my mobile application.
I have gone through a few machine learning and TensorFlow blogs, and from there I started to build a simple ML model.
https://www.datacamp.com/community/tutorials/tensorflow-tutorial
https://www.edureka.co/blog/tensorflow-object-detection-tutorial/
https://blog.metaflow.fr/tensorflow-how-to-freeze-a-model-and-serve-it-with-a-python-api-d4f3596b3adc
https://www.youtube.com/watch?v=ICY4Lvhyobk
All of these are really helpful and guided me through the steps below:
i) Install all the necessary tools (TensorFlow, Python, Jupyter, etc.).
ii) Load the training and testing data.
iii) Run the TensorFlow session to train and evaluate the results.
iv) Take steps to increase the accuracy.
But I am not able to generate the .tf/.tflite files.
I tried the following code, but that generates an empty file.
converter = tf.contrib.lite.TFLiteConverter.from_session(sess, [], [])
model = converter.convert()
file = open('model.tflite', 'wb')
file.write(model)
I have checked a few answers on Stack Overflow, and my understanding is that in order to generate the .tf files we need to create the .pb file, freeze the .pb file, and then generate the .tf files.
But how can we achieve this?
TensorFlow provides the TFLite converter to convert a SavedModel to a TFLite model. For more details, see here.
tf.lite.TFLiteConverter.from_saved_model() (recommended): Converts a SavedModel.
tf.lite.TFLiteConverter.from_keras_model(): Converts a Keras model.
tf.lite.TFLiteConverter.from_concrete_functions(): Converts concrete functions.
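For illustration, a minimal SavedModel-to-TFLite conversion might look like the following sketch; my_saved_model is a placeholder for your own export directory:

import tensorflow as tf

# Path to an existing SavedModel directory (placeholder; use your own export path)
saved_model_dir = "my_saved_model"

# Build the converter from the SavedModel and run the conversion
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
tflite_model = converter.convert()

# Write the resulting FlatBuffer to disk
with open("model.tflite", "wb") as f:
    f.write(tflite_model)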
Related
I'm trying to create a .tflite model from a CycleGAN taken from GitHub (https://github.com/vanhuyz/CycleGAN-TensorFlow).
I am very new to this field and I do not understand how to convert the .pb model (which I have already created from the checkpoints) into a .tflite model.
I tried tflite_convert but without any result, partly because I don't know which parameters to pass as --input_arrays and --output_arrays.
Any ideas?
I would recommend using the TFLiteConverter Python API (https://www.tensorflow.org/lite/convert/python_api) with a SavedModel as your model input format. Otherwise, you can provide the input and output tensor names of your .pb model as input_arrays and output_arrays.
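If all you have is the frozen .pb, a sketch using the TF 1.x Python API could look like the one below. The tensor names and input shape are assumptions, since the actual CycleGAN input/output names depend on how the graph was exported; a tool like Netron, or printing the graph's node names, can reveal the real ones.

import tensorflow as tf

# Placeholder tensor names and shape: inspect your graph to find the real ones.
converter = tf.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file="cyclegan_frozen.pb",            # your frozen .pb
    input_arrays=["image_tensor"],                  # assumed input tensor name
    output_arrays=["output_image"],                 # assumed output tensor name
    input_shapes={"image_tensor": [1, 256, 256, 3]})
tflite_model = converter.convert()

with open("cyclegan.tflite", "wb") as f:
    f.write(tflite_model)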
I am using an implementation for face recognition which loads this file:
facenet.load_model("20170512-110547/20170512-110547.pb")
What is the use of this file? I am not sure how it works.
console log :
Model filename: 20170512-110547/20170512-110547.pb
distance = 0.72212267
Github link of the actual owner of the code
https://github.com/arunmandal53/facematch
pb stands for protobuf. In TensorFlow, the protobuf file contains the graph definition as well as the weights of the model. Thus, a pb file is all you need to be able to run a given trained model.
Given a pb file, you can load it as follows.
import tensorflow as tf

def load_pb(path_to_pb):
    # Read the serialized GraphDef from the .pb file
    with tf.gfile.GFile(path_to_pb, "rb") as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
    # Import it into a fresh graph
    with tf.Graph().as_default() as graph:
        tf.import_graph_def(graph_def, name='')
        return graph
Once you have loaded the graph, you can basically do anything. For instance, you can retrieve tensors of interest with
input = graph.get_tensor_by_name('input:0')
output = graph.get_tensor_by_name('output:0')
and use regular TensorFlow routines like:
sess.run(output, feed_dict={input: some_data})
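Putting the pieces together, a minimal end-to-end sketch (assuming TF 1.x, and that the graph really contains tensors named input:0 and output:0, which you should verify) might be:

import numpy as np
import tensorflow as tf

# load_pb is the helper defined above
graph = load_pb("20170512-110547/20170512-110547.pb")

# Tensor names are assumptions; list the real ones with
# [n.name for n in graph.as_graph_def().node]
input_tensor = graph.get_tensor_by_name('input:0')
output_tensor = graph.get_tensor_by_name('output:0')

with tf.Session(graph=graph) as sess:
    some_data = np.zeros((1, 160, 160, 3), dtype=np.float32)  # dummy input
    result = sess.run(output_tensor, feed_dict={input_tensor: some_data})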
Explanation
The .pb format is the protocol buffer (protobuf) format, and in TensorFlow this format is used to hold models. Protobufs are a general-purpose way of storing data, designed by Google, that is much nicer to transport, as it compacts the data more efficiently and enforces a structure on it. When used in TensorFlow, it's called a SavedModel protocol buffer, which is the default format when saving Keras/TensorFlow 2.0 models. More information about this format can be found here and here.
For example, the following code (specifically, m.save) will create a folder called my_new_model and save in it the saved_model.pb, an assets/ folder, and a variables/ folder.
import tensorflow as tf
import tensorflow_hub as hub

# first download a SavedModel from TFHub.dev, a website with models
m = tf.keras.Sequential([
    hub.KerasLayer("https://tfhub.dev/google/imagenet/mobilenet_v2_130_224/classification/4")
])
m.build([None, 224, 224, 3])  # Batch input shape.
m.save("my_new_model")  # defaults to saving as a SavedModel in TensorFlow 2
In some places, you may also see .h5 models, which were the default format for TF 1.X (source).
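If you specifically want that older HDF5 format, passing save a filename ending in .h5 selects it; a quick sketch with a throwaway model (the hub-based model above is normally saved as a SavedModel instead):

# Saving a small plain Keras model in the older HDF5 format (hypothetical model)
simple = tf.keras.Sequential([tf.keras.layers.Dense(10, input_shape=(4,))])
simple.save("simple_model.h5")  # the .h5 extension selects the HDF5 format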
Extra information: In TensorFlow Lite, the library for running models on mobile and IoT devices, FlatBuffers are used instead of protocol buffers. This is what the TensorFlow Lite Converter converts into (the .tflite format). This is another Google format which is also very efficient: it allows access to any part of the message without deserialization (unlike JSON or XML). For devices with less memory (RAM), it makes more sense to load only what you need from the model file, instead of loading the entire thing into memory to deserialize it.
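For illustration, loading and running a .tflite FlatBuffer with the TensorFlow Lite Python interpreter looks roughly like this; model.tflite is a placeholder path:

import numpy as np
import tensorflow as tf

# Load the FlatBuffer and allocate its tensors
interpreter = tf.lite.Interpreter(model_path="model.tflite")  # placeholder path
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a dummy input matching the model's expected shape and dtype
dummy = np.zeros(input_details[0]['shape'], dtype=input_details[0]['dtype'])
interpreter.set_tensor(input_details[0]['index'], dummy)
interpreter.invoke()
result = interpreter.get_tensor(output_details[0]['index'])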
Loading SavedModels in TensorFlow 2
I noticed BiBi's answer showing how to load models was popular, and there is a shorter way to do this in TF2:
import tensorflow as tf
model_path = "/path/to/directory/inception_v1_224_quant_20181026"
model = tf.saved_model.load(model_path)
Note:
the directory (i.e. inception_v1_224_quant_20181026) has to contain a saved_model.pb or saved_model.pbtxt, otherwise the code will crash. You cannot specify the .pb path; specify the directory.
you might get TypeError: 'AutoTrackable' object is not callable for older models; fix here.
if you load a TF1 model, I found that I don't get any errors, but the loaded object doesn't behave as expected (e.g. it doesn't have any functions on it, like predict).
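If the loaded object doesn't expose predict (common for non-Keras SavedModels), you can usually call it through a serving signature instead; a sketch, assuming the export actually contains the default serving_default signature:

import tensorflow as tf

model = tf.saved_model.load("/path/to/directory/inception_v1_224_quant_20181026")
print(list(model.signatures.keys()))           # e.g. ['serving_default']
infer = model.signatures["serving_default"]    # assumes this signature exists
# outputs = infer(tf.zeros([1, 224, 224, 3]))  # input shape/dtype depend on the model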
I have a model trained with tf.estimator, and it was exported after training as shown below:
serving_input_fn = tf.estimator.export.build_raw_serving_input_receiver_fn(
    feature_placeholders)
classifier.export_savedmodel(
    r'./path/to/model/trainedModel', serving_input_fn)
This gives me a saved_model.pb and a folder which contains weights as a .data file. I can reload the saved model using
predictor = tf.contrib.predictor.from_saved_model(r'./path/to/model/trainedModel')
I'd like to run this model on Android, and that requires the model to be in frozen .pb format. How can I freeze this predictor for use on the Android platform?
I don't deploy to Android, so you might need to customize the steps a bit, but this is how I do this:
Run <tensorflow_root_installation>/python/tools/freeze_graph.py with the arguments --input_saved_model_dir=<path_to_the_savedmodel_directory>, --output_node_names=<full_name_of_the_output_node> (you can get the name of the output node from graph.pbtxt, although that's not the most convenient way), and --output_graph=frozen_model.pb.
(optionally) Run <tensorflow_root_installation>/python/tools/optimize_for_inference.py with adequate arguments. Alternatively you can look up the Graph Transform Tool and selectively apply optimizations.
At the end of step 1 you'll already have a frozen model with no variables left, that you can then deploy to Android.
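For reference, a rough Python sketch of the same freezing step (assuming TF 1.x; the output node name below is hypothetical and must be replaced with the real one from graph.pbtxt):

import tensorflow as tf

# Hypothetical output node name: look it up in graph.pbtxt or by listing node names.
output_node_names = ["dnn/head/predictions/probabilities"]

with tf.Session(graph=tf.Graph()) as sess:
    # export_savedmodel() writes into a timestamped subdirectory; point at that one.
    tf.saved_model.loader.load(
        sess, [tf.saved_model.tag_constants.SERVING],
        r'./path/to/model/trainedModel/1234567890')
    # "Freeze" the graph: replace variables with constants so only graph + weights remain.
    frozen_graph_def = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph_def, output_node_names)
    with tf.gfile.GFile("frozen_model.pb", "wb") as f:
        f.write(frozen_graph_def.SerializeToString())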
I'm relatively new to TensorFlow and I'm having trouble modifying some of the examples to use batch/stream processing with input functions. More specifically, what is the 'best' way to modify this script to make it suitable for training and serving deployment on Google Cloud ML?
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/learn/text_classification.py
Something akin to this example:
https://github.com/GoogleCloudPlatform/cloudml-samples/tree/master/census/estimator/trainer
I can package it up and train it in the cloud, but I can't figure out how to apply even the simple vocab_processor transformations to an input tensor. I know how to do it with pandas, but there I can't apply the transformation to batches (using the chunk_size parameter). I would be very happy if I could reuse my pandas preprocessing pipelines in TensorFlow.
I think you have 3 options
1) You cannot reuse pandas preprocessing pipelines in TF. However, you could start TF with the output of your pandas preprocessing. So you could build a vocab and convert the text words to integers, and save a new preprocessed dataset to disk. Then read the integer data (which is encoding your text) in TF to do training.
2) You could build a vocab outside of TF in pandas. Then inside TF, after reading the words, you can make a table to map the text to integers. But if you are going to build a vocab outside of TF, you might as well do the transformation at the same time outside of TF, which is option 1.
3) Use tensorflow_transform. You can call tft.string_to_int() on the text column to automatically build the vocab and convert to integers. The output of tensorflow_transform is preprocessed data in tf.example format. Then training can start from the tf.example files. This is again option 1 but with tf.example files. If you want to run prediction on raw text data, this option allows you to make an exported graph that has the same text preprocessing built in, so you don't have to manage the preprocessing step at prediction time. However, this option is the most complicated as it introduces two additional ideas: tf.example files and beam pipelines.
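For a flavor of option 3, a preprocessing_fn using tft.string_to_int might look roughly like the sketch below; the column names are made up, the Beam pipeline wiring is omitted, and real free-form text usually needs to be split into words first:

import tensorflow_transform as tft

def preprocessing_fn(inputs):
    # 'text' and 'label' are hypothetical column names from the raw data
    return {
        # build a vocabulary over the string feature and map each value to an integer id
        'text_ids': tft.string_to_int(inputs['text']),
        'label': inputs['label'],
    }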
For examples of tensorflow_transform see https://github.com/GoogleCloudPlatform/cloudml-samples/tree/master/criteo_tft
and
https://github.com/GoogleCloudPlatform/cloudml-samples/tree/master/reddit_tft
I managed to retrain my specific classification model using the generic Inception model, following this tutorial. I would now like to deploy it on Google Cloud Machine Learning following these steps.
I already managed to export it as a MetaGraph, but I can't figure out the proper inputs and outputs.
Using it locally, my entry point to the graph is DecodeJpeg/contents:0, which is fed with a JPEG image in binary format. The outputs are my predictions.
The code I use locally (which is working) is:
softmax_tensor = sess.graph.get_tensor_by_name('final_result:0')
predictions = sess.run(softmax_tensor, {'DecodeJpeg/contents:0': image_data})
Should the input tensor be DecodeJpeg? What changes would I need to make if I wanted to use a base64-encoded image as input?
I defined the output as:
outputs = {'prediction':softmax_tensor.name}
Any help is highly appreciated.
In your example, the input tensor is 'DecodeJpeg/contents:0', so you would have something like:
inputs = {'image': 'DecodeJpeg/contents:0'}
outputs = {'prediction': 'final_result:0'}
(Be sure to follow all of the instructions for preparing a model).
The model directory you intend to export should have files such as:
gs://my_bucket/path/to/model/export.meta
gs://my_bucket/path/to/model/checkpoint*
When you deploy your model, be sure to set gs://my_bucket/path/to/model as the deployment_uri.
To send an image to the service, as you suggest, you will need to base64 encode the image bytes. The body of your request should look like the following (note the 'tag', 'b64', indicating the data is base-64 encoded):
{'instances': [{'b64': base64.b64encode(image)}]}
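Note that in Python 3, base64.b64encode returns bytes, so you typically decode it before JSON-serializing; a small sketch of building the request body (the image path is a placeholder):

import base64
import json

with open("image.jpg", "rb") as f:  # placeholder image file
    image_bytes = f.read()

body = {"instances": [{"b64": base64.b64encode(image_bytes).decode("utf-8")}]}
payload = json.dumps(body)  # send this as the JSON request body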
We've now released a tutorial on how to retrain the Inception model, including instructions for how to deploy the model on the CloudML service.
https://cloud.google.com/blog/big-data/2016/12/how-to-train-and-classify-images-using-google-cloud-machine-learning-and-cloud-dataflow