How to create model.graphdef and config.pbtxt for a trained model in TensorFlow?

In order to serve a model with Triton Inference Server, the model repository should look like this:
<model-repository-path>/
  <model-name>/
    config.pbtxt
    1/
      model.graphdef
But I am stuck at creating the above repository.
After you have trained a model in TensorFlow:
1) How do you create model.graphdef?
2) How do you create config.pbtxt?

model.graphdef is simply your trained TensorFlow model: a serialized (frozen) GraphDef, which normally has the .pb extension. When you export a trained TensorFlow graph with its weights frozen into constants, you get such a .pb file; rename it to model.graphdef and place it in the version folder (1/).
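If you need to produce model.graphdef yourself, here is a minimal sketch (TensorFlow 1.x), assuming hypothetical repository and model names and an output node called 'softmax': freeze the variables into constants and write the binary GraphDef into the version folder.
import os
import tensorflow as tf

model_repo = "model_repository"   # hypothetical paths
model_name = "my_model"
output_nodes = ["softmax"]        # replace with your real output node name(s)

with tf.Session() as sess:
    # ... restore or build your trained graph in this session ...
    frozen_graph_def = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph_def, output_nodes)

version_dir = os.path.join(model_repo, model_name, "1")
os.makedirs(version_dir, exist_ok=True)
# as_text=False writes the binary GraphDef that Triton's graphdef backend expects
tf.train.write_graph(frozen_graph_def, version_dir, "model.graphdef", as_text=False)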
In this case you do not need to create a config.pbtxt file, because Triton Inference Server can automatically generate the configuration for TensorFlow, TensorRT, and ONNX models. Simply start the server with the flag --strict-model-config=false and it will generate the config.pbtxt for you.
If you do wish to write your own config.pbtxt, you can do so as well. The details are available in the official Triton documentation.

The auto-generated config can then be viewed with:
curl localhost:8000/v2/models/<model_name>/config | jq
Use the information from this output as a starting point for your own config file.

Related

Using Estimator.evaluate() on a trained SageMaker TensorFlow model

After I've trained and deployed the model with AWS SageMaker, I want to evaluate it on several csv files:
- category-1-eval.csv (~700000 records)
- category-2-eval.csv (~500000 records)
- category-3-eval.csv (~800000 records)
...
The right way to do this seems to be to use the Estimator.evaluate() method, as it is fast.
The problem is that I cannot find a way to restore the SageMaker model into a TensorFlow Estimator. Is that possible?
I've tried to restore a model like this:
tf.estimator.DNNClassifier(
    feature_columns=...,
    hidden_units=[...],
    model_dir="s3://<bucket_name>/checkpoints",
)
The AWS SageMaker documentation describes a different approach - testing the actual endpoint from the notebook - but that takes too much time and requires a lot of API calls to the endpoint.
If you used the built-in TensorFlow container, your model has been saved in TensorFlow Serving format, e.g.:
$ tar tfz model.tar.gz
model/
model/1/
model/1/saved_model.pb
model/1/variables/
model/1/variables/variables.index
model/1/variables/variables.data-00000-of-00001
You can easily load it with TensorFlow Serving on your local machine and send it samples to predict. More info at https://www.tensorflow.org/tfx/guide/serving
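Alternatively, if you only need offline evaluation, you can load the extracted SavedModel directly in Python instead of running TensorFlow Serving. A minimal sketch (TensorFlow 2.x), assuming the archive layout above and the default 'serving_default' signature; the input keys depend on your serving_input_fn:
import tarfile
import tensorflow as tf

# Extract the SageMaker model artifact, then load the SavedModel it contains
with tarfile.open("model.tar.gz") as tar:
    tar.extractall("export")

loaded = tf.saved_model.load("export/model/1")
infer = loaded.signatures["serving_default"]
print(infer.structured_input_signature)   # inspect the expected input tensors
# predictions = infer(**{<input_key>: <batch_tensor>})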

I don't understand how to switch from TensorFlow to TensorFlow Lite in a project taken from GitHub

I'm trying to create a .tflite model from a CycleGAN taken from GitHub (https://github.com/vanhuyz/CycleGAN-TensorFlow).
I am very new to this field and I do not understand how to convert the .pb model (which I have already created from the checkpoints) into a .tflite model.
I tried tflite_convert, but without any result, partly because I don't know which values to pass as --input_arrays and --output_arrays.
Any ideas?
I would recommend using the TFLiteConverter Python API here: https://www.tensorflow.org/lite/convert/python_api and using a SavedModel as your model input format. Otherwise, you can provide the input and output tensor names of your .pb model as input_arrays and output_arrays.
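A minimal sketch of both routes, assuming placeholder tensor names ('input_image', 'output_image'); inspect the CycleGAN graph (e.g. with Netron or summarize_graph) to find the real ones:
import tensorflow as tf

# Route 1: convert from a SavedModel directory (preferred)
converter = tf.lite.TFLiteConverter.from_saved_model("path/to/saved_model")
tflite_model = converter.convert()

# Route 2 (TF 1.x API): convert from a frozen .pb with explicit tensor names
converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
    "cyclegan_frozen.pb",
    input_arrays=["input_image"],      # placeholder names
    output_arrays=["output_image"],
)
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)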

Understanding export_tflite_ssd_graph.py

Here is a tutorial about converting MobileNet+SSD to TFLite. At some point they use export_tflite_ssd_graph.py; as I understand it, this custom script is used to support the tf.image.non_max_suppression operation.
export CONFIG_FILE=gs://${YOUR_GCS_BUCKET}/data/pipeline.config
export CHECKPOINT_PATH=gs://${YOUR_GCS_BUCKET}/train/model.ckpt-2000
export OUTPUT_DIR=/tmp/tflite
python object_detection/export_tflite_ssd_graph.py \
    --pipeline_config_path=$CONFIG_FILE \
    --trained_checkpoint_prefix=$CHECKPOINT_PATH \
    --output_directory=$OUTPUT_DIR \
    --add_postprocessing_op=true
But I wonder what pipeline.config is and how to create it if I use a custom model (for example FaceBoxes) that uses the tf.image.non_max_suppression operation?
The main objective of export_tflite_ssd_graph.py is to export the training checkpoint files into a frozen graph that you can later use for transfer learning or for straight inference (because it contains the model structure as well as the trained weights). In fact, all the models listed in the model zoo are frozen graphs generated this way.
As for tf.image.non_max_suppression, export_tflite_ssd_graph.py is not used to 'support' it; rather, if --add_postprocessing_op is set to true, another custom op node is added to the frozen graph, and this custom node has functionality similar to tf.image.non_max_suppression. See the reference here.
Finally, the pipeline.config file directly corresponds to the config file you use for training (--pipeline_config_path); it is a copy of it, often with a modified score threshold (see the description of pipeline.config here), so you will have to create it before training if you use a custom model. To create a custom config file, here is the official tutorial.
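For completeness, a hedged sketch of the follow-up conversion step (TF 1.x API). The tensor names and the 300x300 input shape are the defaults commonly produced by export_tflite_ssd_graph.py for SSD models; verify them for your own graph. allow_custom_ops is needed because the added postprocessing node is a custom op.
import tensorflow as tf

converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
    "/tmp/tflite/tflite_graph.pb",
    input_arrays=["normalized_input_image_tensor"],
    output_arrays=[
        "TFLite_Detection_PostProcess",
        "TFLite_Detection_PostProcess:1",
        "TFLite_Detection_PostProcess:2",
        "TFLite_Detection_PostProcess:3",
    ],
    input_shapes={"normalized_input_image_tensor": [1, 300, 300, 3]},
)
converter.allow_custom_ops = True  # the postprocessing node is not a built-in op

with open("/tmp/tflite/detect.tflite", "wb") as f:
    f.write(converter.convert())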

Converting a model trained and saved with tf.estimator to .pb

I have a model trained with tf.estimator, and it was exported after training as below:
serving_input_fn = tf.estimator.export.build_raw_serving_input_receiver_fn(
    feature_placeholders)
classifier.export_savedmodel(
    r'./path/to/model/trainedModel', serving_input_fn)
This gives me a saved_model.pb and a folder which contains weights as a .data file. I can reload the saved model using
predictor = tf.contrib.predictor.from_saved_model(r'./path/to/model/trainedModel')
I'd like to run this model on Android, and that requires the model to be a frozen .pb graph. How can I freeze this predictor for use on the Android platform?
I don't deploy to Android, so you might need to customize the steps a bit, but this is how I do this:
1) Run <tensorflow_root_installation>/python/tools/freeze_graph.py with the arguments --input_saved_model_dir=<path_to_the_savedmodel_directory>, --output_node_names=<full_name_of_the_output_node> (you can get the name of the output node from graph.pbtxt, although that is not the most convenient way), and --output_graph=frozen_model.pb.
2) (Optionally) Run <tensorflow_root_installation>/python/tools/optimize_for_inference.py with adequate arguments. Alternatively, you can look up the Graph Transform Tool and selectively apply optimizations.
At the end of step 1 you'll already have a frozen model with no variables left, which you can then deploy to Android.
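For reference, a sketch of step 1 using the Python API instead of the script (TF 1.x; the exact argument list can vary between versions, and the output node name below is only a guess at a typical DNNClassifier head - check your own graph.pbtxt):
from tensorflow.python.tools import freeze_graph

freeze_graph.freeze_graph(
    input_graph="",
    input_saver="",
    input_binary=False,
    input_checkpoint="",
    output_node_names="dnn/head/predictions/probabilities",  # placeholder node name
    restore_op_name="",
    filename_tensor_name="",
    output_graph="frozen_model.pb",
    clear_devices=True,
    initializer_nodes="",
    input_saved_model_dir=r"./path/to/model/trainedModel/<timestamp>",
)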

Deploy Retrained inception model on Google cloud machine learning

I managed to retrain my specific classification model using the generic Inception model, following this tutorial. I would now like to deploy it on Google Cloud Machine Learning, following these steps.
I already managed to export it as a MetaGraph, but I can't figure out the proper inputs and outputs.
Using it locally, my entry point to the graph is DecodeJpeg/contents:0, which is fed with a JPEG image in binary format. The outputs are my predictions.
The code I use locally (which is working) is:
softmax_tensor = sess.graph.get_tensor_by_name('final_result:0')
predictions = sess.run(softmax_tensor,{'DecodeJpeg/contents:0': image_data})
Should the input tensor be DecodeJpeg? What changes would I need to make if I wanted to have a base64 image as input?
I defined the output as:
outputs = {'prediction':softmax_tensor.name}
Any help is highly appreciated.
In your example, the input tensor is 'DecodeJpeg/contents:0', so you would have something like:
inputs = {'image': 'DecodeJpeg/contents:0'}
outputs = {'prediction': 'final_result:0'}
(Be sure to follow all of the instructions for preparing a model).
The model directory you intend to export should have files such as:
gs://my_bucket/path/to/model/export.meta
gs://my_bucket/path/to/model/checkpoint*
When you deploy your model, be sure to set gs://my_bucket/path/to/model as the deployment_uri.
To send an image to the service, as you suggest, you will need to base64 encode the image bytes. The body of your request should look like the following (note the 'tag', 'b64', indicating the data is base-64 encoded):
{'instances': [{'b64': base64.b64encode(image)}]}
We've now released a tutorial on how to retrain the Inception model, including instructions for how to deploy the model on the CloudML service.
https://cloud.google.com/blog/big-data/2016/12/how-to-train-and-classify-images-using-google-cloud-machine-learning-and-cloud-dataflow
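Putting the request together in Python, a minimal sketch assuming the Cloud ML Engine v1 online-prediction API, the google-api-python-client package, and placeholder project/model names:
import base64
from googleapiclient import discovery

with open("image.jpg", "rb") as f:
    # b64encode returns bytes; decode to str so the body is JSON-serializable
    encoded = base64.b64encode(f.read()).decode("utf-8")

body = {"instances": [{"b64": encoded}]}

ml = discovery.build("ml", "v1")
name = "projects/my-project/models/my_model"   # hypothetical identifiers
response = ml.projects().predict(name=name, body=body).execute()
print(response)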