What are these 2 files in the CenterNet MobileNetV2 from the TensorFlow OD model zoo? Do we need them?

Do we need these files? The TensorFlow docs don't say anything about them.

The model.tflite file is the pretrained model in .tflite format, so if you want to use the model out of the box, this is the file to load.
The label_map.txt file is used to map the output of your network to actual, comprehensible results. That is, both files are needed if you want to use the model out of the box; neither is needed for re-training.
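For illustration, here is a minimal sketch of using both files with the TFLite interpreter (the one-label-per-line format of label_map.txt and the file locations are assumptions on my part):
import numpy as np
import tensorflow as tf

# label_map.txt is assumed here to hold one class name per line.
with open('label_map.txt') as f:
    labels = [line.strip() for line in f]

# Load the pretrained model and allocate its tensors.
interpreter = tf.lite.Interpreter(model_path='model.tflite')
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Run inference on a dummy image of the expected input shape.
dummy = np.zeros(input_details[0]['shape'], dtype=input_details[0]['dtype'])
interpreter.set_tensor(input_details[0]['index'], dummy)
interpreter.invoke()

# A detection model typically emits boxes, class indices, and scores;
# the class indices are translated to names via label_map.txt.
for detail in output_details:
    print(detail['name'], interpreter.get_tensor(detail['index']).shape)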

Related

How to generate .tf/.tflite files from Python

I am trying to generate a custom TensorFlow model (.tf/.tflite file) that I want to use in my mobile application.
I have gone through a few machine learning and TensorFlow blogs, and from there I started to generate a simple ML model.
https://www.datacamp.com/community/tutorials/tensorflow-tutorial
https://www.edureka.co/blog/tensorflow-object-detection-tutorial/
https://blog.metaflow.fr/tensorflow-how-to-freeze-a-model-and-serve-it-with-a-python-api-d4f3596b3adc
https://www.youtube.com/watch?v=ICY4Lvhyobk
All these are really nice, and they guided me through the steps below:
i) Install all necessary tools (TensorFlow, Python, Jupyter, etc.).
ii) Load the training and testing data.
iii) Run the TensorFlow session to train and evaluate the results.
iv) Take steps to increase the accuracy.
But I am not able to generate the .tf/.tflite files.
I tried the following code, but it generates an empty file:
converter = tf.contrib.lite.TFLiteConverter.from_session(sess, [], [])  # input and output tensor lists are empty here
model = converter.convert()
file = open('model.tflite', 'wb')
file.write(model)
I have checked a few answers on Stack Overflow, and according to my understanding, in order to generate the .tf files we need to create the .pb files, freeze the .pb file, and then generate the .tf files.
But how can we achieve this?
TensorFlow provides the TFLite converter to convert a SavedModel to a TFLite model. For more details, see the TensorFlow Lite converter documentation.
tf.lite.TFLiteConverter.from_saved_model() (recommended): Converts a SavedModel.
tf.lite.TFLiteConverter.from_keras_model(): Converts a Keras model.
tf.lite.TFLiteConverter.from_concrete_functions(): Converts concrete functions.
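For example, a minimal sketch of the recommended SavedModel route (the paths are placeholders):
import tensorflow as tf

# Convert a SavedModel directory to a TFLite flatbuffer.
converter = tf.lite.TFLiteConverter.from_saved_model('./path/to/saved_model')
tflite_model = converter.convert()

# Write the serialized model to disk.
with open('model.tflite', 'wb') as f:
    f.write(tflite_model)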

I don't understand how to switch from TensorFlow to TensorFlow Lite on a project taken from GitHub

I'm trying to create a .tflite model from a CycleGAN taken from GitHub (https://github.com/vanhuyz/CycleGAN-TensorFlow).
I am very new to this field, and I do not understand how to convert the .pb model (which I have already created from the checkpoints) into a .tflite model.
I tried tflite_convert but got no result, partly because I don't know which parameters to pass as --input_arrays and --output_arrays.
Any ideas?
I would recommend using the TFLiteConverter Python API here: https://www.tensorflow.org/lite/convert/python_api and using SavedModel as your model input format. Otherwise, you can provide the input and output tensor names of your .pb model as input_arrays and output_arrays.
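For the .pb route, a sketch along these lines should work in TF 1.x (the tensor names below are hypothetical; inspect the CycleGAN graph to find the real ones):
import tensorflow as tf

# 'input_image' and 'output_image' are placeholder names, not taken from
# the CycleGAN repo; print the graph's node names to find the actual ones.
converter = tf.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file='cyclegan.pb',
    input_arrays=['input_image'],
    output_arrays=['output_image'])
tflite_model = converter.convert()
with open('cyclegan.tflite', 'wb') as f:
    f.write(tflite_model)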

What are the input and output node names in Inception v3 with the Slim library?

I retrained an InceptionV3 model on my own data using TensorFlow Slim. The files below are generated after training:
graph.pbtxt, model.ckpt, model.meta, model.index, checkpoint,
events.out.tfevents
I want to freeze the graph files and create a .pb file. I don't know what the input node and output node in Inception v3 are, and using TensorBoard is complex for me.
What are the input/output nodes in InceptionV3 (in slim/nets)? Or how can I find the input/output nodes?
OS: Windows 7
(A). If you make it to the bottom of this link, you will find this (specific to InceptionV3):
input_layer=input
output_layer=InceptionV3/Predictions/Reshape_1
(B). Another way is to print all tensors of the model and pick out the input/output tensors:
from tensorflow.python.tools.inspect_checkpoint import print_tensors_in_checkpoint_file

# List every tensor stored in the checkpoint, with names and values.
ckpt_path = "model.ckpt"
print_tensors_in_checkpoint_file(file_name=ckpt_path, tensor_name='', all_tensors=True, all_tensor_names=True)
(C). If you need to print the tensor names of a .pb file, you can use a few lines of simple code.
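A minimal sketch of such code, assuming a TF 1.x frozen graph (the file name is a placeholder):
import tensorflow as tf

# Parse the frozen graph and print every node name; the input/output
# nodes are usually near the start and end of the list.
with tf.gfile.GFile('frozen_model.pb', 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())
for node in graph_def.node:
    print(node.name)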
Check what would work for you.

Converting a model trained and saved with tf.estimator to .pb

I have a model trained with tf.estimator, and it was exported after training as below:
serving_input_fn = tf.estimator.export.build_raw_serving_input_receiver_fn(
    feature_placeholders)
classifier.export_savedmodel(
    r'./path/to/model/trainedModel', serving_input_fn)
This gives me a saved_model.pb and a folder which contains the weights as a .data file. I can reload the saved model using:
predictor = tf.contrib.predictor.from_saved_model(r'./path/to/model/trainedModel')
I'd like to run this model on Android, and that requires the model to be in .pb format. How can I freeze this predictor for use on the Android platform?
I don't deploy to Android, so you might need to customize the steps a bit, but this is how I do it:
1. Run <tensorflow_root_installation>/python/tools/freeze_graph.py with the arguments --input_saved_model_dir=<path_to_the_savedmodel_directory>, --output_node_names=<full_name_of_the_output_node> (you can get the name of the output node from graph.pbtxt, although that's not the most comfortable of ways), and --output_graph=frozen_model.pb.
2. (Optionally) Run <tensorflow_root_installation>/python/tools/optimize_for_inference.py with adequate arguments. Alternatively, you can look up the Graph Transform Tool and selectively apply optimizations.
At the end of step 1 you'll already have a frozen model with no variables left, which you can then deploy to Android.
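For reference, the same freeze step can be driven from Python; a minimal sketch, assuming TF 1.x (the output node name 'Softmax' is a placeholder, read yours from graph.pbtxt):
from tensorflow.python.tools import freeze_graph

# Bake the SavedModel's variables into constants and write one frozen .pb.
# 'Softmax' is a hypothetical output node name, not taken from the question.
freeze_graph.freeze_graph(
    input_graph='', input_saver='', input_binary=False,
    input_checkpoint='', output_node_names='Softmax',
    restore_op_name='', filename_tensor_name='',
    output_graph='frozen_model.pb', clear_devices=True,
    initializer_nodes='',
    input_saved_model_dir='./path/to/model/trainedModel')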

[TensorFlow] Different graph architecture in COCO pre-trained model and re-trained model

I have followed the doc:
https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/running_locally.md
to retrain the model from ssd_mobilenet_v1_coco_2017_11_17.tar.gz.
I only modified the pipeline config by:
- setting finetune to TRUE
- setting the model path
- setting the training data path to the actual path of the tfrecord
Then I used the export script:
https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/exporting_models.md
Finally, I got the .pb trained on my own data,
but I found that the re-trained model did not have the same architecture as the one in ssd_mobilenet_v1_coco_2017_11_17.tar.gz.
Here are the pictures I captured from TensorBoard.
The retrained one, with my own data and pipeline config:
https://i.stack.imgur.com/kegcT.jpg
The original one, from the .pb file within ssd_mobilenet_v1_coco_2017_11_17.tar.gz:
https://i.stack.imgur.com/IAGi3.jpg
According to the pictures from TensorBoard, there were two input tensors in BoxPredictor in my re-trained model, but three in the original.
Could anyone help me solve this problem?
Thanks.
PS: I am using TensorFlow 1.4.0 (GPU).