I don't understand how to switch from TensorFlow to TensorFlow Lite on a project taken from GitHub

I'm trying to create a .tflite model from a CycleGAN taken from GitHub (https://github.com/vanhuyz/CycleGAN-TensorFlow).
I am very new to this field and I do not understand how to convert the .pb model (which I have already created from the checkpoints) into a .tflite model.
I tried tflite_convert, but without success, partly because I don't know which values to pass as --input_arrays and --output_arrays.
Any ideas?

I would recommend using the TFLiteConverter Python API here: https://www.tensorflow.org/lite/convert/python_api, and using SavedModel as your model input format. Otherwise, you can provide the input and output tensor names of your .pb model as input_arrays and output_arrays.
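For illustration, here is a minimal sketch of both routes, assuming the TF 1.x-style tf.lite API; the SavedModel directory, frozen-graph filename, and tensor names are placeholders you would replace with your own:

import tensorflow as tf

# Route 1: convert directly from a SavedModel directory (recommended,
# since it avoids the --input_arrays/--output_arrays question entirely).
converter = tf.lite.TFLiteConverter.from_saved_model("export_dir")
tflite_model = converter.convert()
with open("model.tflite", "wb") as f:
    f.write(tflite_model)

# Route 2: convert from a frozen .pb, naming the input/output tensors
# yourself (these names are hypothetical; inspect your graph to find them).
converter = tf.lite.TFLiteConverter.from_frozen_graph(
    "frozen_model.pb",
    input_arrays=["input_image"],
    output_arrays=["output_image"])
tflite_model = converter.convert()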

Related

How to deploy custom tensorflow model to web?

So I'm facing a problem deploying my custom sign-language recognition model. I converted my_ssd_mobnet with exporter_main_v2.py to saved_model.pb, and then I tried to use the tensorflowjs converter with this code:
import tensorflow as tf
import tensorflowjs as tfjs

def importModel(modelPath):
    model = tf.keras.models.load_model(modelPath)
    tfjs.converters.save_tf_model(model, "tfjsmodel")

importModel("saved_model")
#importModel("modelDirectory")
Then I got this error:
ValueError: Unable to create a Keras model from this SavedModel. This SavedModel was created with tf.saved_model.save, and lacks the Keras metadata. Please save your Keras model by calling model.save or tf.keras.models.save_model.
Finally, I decided to convert my model to .h5, but I don't know how.
How can I convert the my_ssd_mobnet model to .h5?
Thanks!
If you're creating a custom Keras layer in Python and want to export it to tfjs for in-browser prediction, then you'll most likely encounter an "Unknown layer" error and will have to implement the layer yourself in JS.
Instead of exporting the layers, it's best to export a graph, since you're only using the model for prediction, not training, in the browser.
tf.saved_model.save(model, 'saved_model')
This will save the files into the saved_model folder, which contains the .pb file.
Use the tensorflowjs_converter tool to convert the model into a tfjs graph model.
tensorflowjs_converter --input_format=tf_saved_model saved_model model
This will convert your saved model into the browser-compatible tfjs model without the custom layer. (The Keras layers will be built in.)
Move this folder to your website's public folder.
In the browser:
const model = await tf.loadGraphModel('/model/model.json')
const img = tf.browser.fromPixels(imageData, 3) // imageElement, videoElement, ImageData
.toFloat().resizeBilinear([224, 224]) // mobilenet dims
.div(tf.scalar(255)) // mobilenet [0,1] normalization
.expandDims()
const { values, indices } = model.predict(img).topk()
const label = indices.dataSync()[0]
const confidence = values.dataSync()[0]
NOTE: The .bin files will end up in the tens of MB, so run this inside a web worker. You can send buffered data from the main thread to the worker thread for processing.
First and foremost: if you used the exporter_main_v2.py script to export the model, you only get the model in the TensorFlow SavedModel format. This way of exporting is mainly meant for running inference on the trained model. So the main problem in your code is that you are trying to import a Keras model with the tf.keras.models.load_model() function. Instead of using exporter_main_v2.py, you have to use the tf.keras.models.save_model() function to export/save your model.
I am also giving you a link to a short video explanation that should clarify a few things for you:
https://www.youtube.com/watch?v=Lx7OCFXPG8o
After watching the video you might want to check out the following Colab notebook:
https://colab.research.google.com/github/tensorflow/examples/blob/master/courses/udacity_intro_to_tensorflow_for_deep_learning/l07c01_saving_and_loading_models.ipynb
This material is provided by Udacity in its introduction-to-TensorFlow course. It should be very helpful in your case for understanding the difference between a TensorFlow model file and a Keras model file.
Have a nice day.
Edit:
HDF5 format
Keras provides a basic save format using the HDF5 standard.
Create and train a new model instance:
model = create_model()
model.fit(train_images, train_labels, epochs=5)
Save the entire model to an HDF5 file; the '.h5' extension indicates that the model should be saved in HDF5 format:
model.save('my_model.h5')
You should add the '.h5' extension to the filename when calling the model.save function; that way the model will be saved in HDF5 format.

List operations of a TFLite model

I'm having trouble trying to list the operations of a TFLite model. I know operations can be listed given a frozen graph, but what about a TFLite .tflite model? Can operations be listed?
You can get a list of all used Tensorflow Lite Operations with the visualization script.
wget -O tflite_visualize.py https://raw.githubusercontent.com/tensorflow/tensorflow/master/tensorflow/lite/tools/visualize.py
Assuming your model is saved as model.tflite, create the HTML file using the downloaded script:
python tflite_visualize.py model.tflite model_visualization.html
The operations are listed right in the section labeled Ops.
As mentioned in the TensorFlow Lite docs, you need to use a tf.lite.Interpreter to parse a .tflite model.
# Load TFLite model and allocate tensors.
interpreter = tf.lite.Interpreter(model_path="converted_model.tflite")
interpreter.allocate_tensors()
Then use the get_tensor_details method to get the list of Tensors.
interpreter.get_tensor_details()
As per the docs,
Gets tensor details for every tensor with valid tensor details.
Tensors where required information about the tensor is not found are not added to the list. This includes temporary tensors without a name.
Returns: A list of dictionaries containing tensor information.
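For example, here is a minimal sketch that prints one line per tensor; the model path is a placeholder, and the dictionary keys used are the documented ones:

import tensorflow as tf

# Load the TFLite model and allocate its tensors.
interpreter = tf.lite.Interpreter(model_path="converted_model.tflite")
interpreter.allocate_tensors()

# Print the index, name, shape, and dtype of every tensor in the model.
for detail in interpreter.get_tensor_details():
    print(detail["index"], detail["name"], detail["shape"], detail["dtype"])

Note that this lists tensors rather than operations; for the actual list of ops, the visualization script above is the more direct route.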

How to convert a TensorFlow SavedModel graph to a Caffe model?

I want to use MMdnn to convert a tensorflow ResNet model to other frameworks. It seems that I can only use mmconvert to read from a .pb frozen graph file.
However, when using tf.estimator.Estimator, the .pb file that it creates is a SavedModelDef. I understand this to be a wrapper around the tf GraphDef. Thus the GraphDef .pb file can be extracted from the SavedModel using freeze_graph.py.
From there, I will need the name of the input node in the TF GraphDef, but I'm unsure how to identify that name by looking at the .pbtxt. The tf.Estimator takes its inputs from a tf.Dataset object, per the framework.
I'm guessing there should be a tf.Placeholder somewhere that accepts the input. But I'm not sure how to find what the input node actually is.
Answering my own question here. The freeze_graph utility that comes with TensorFlow is useful for extracting the GraphDef from the TF SavedModel format.
To find the name of the input node, make sure to save the TF SavedModel in pbtxt format. Open it up and look for the first node of your compute graph; e.g., if using the TF ResNet, the first nodes will be named resnet_model/*. Find the node that feeds this node, and you will have the name of the input node to specify to the MMdnn tools. I expected this to be a tf.Placeholder that the Estimator adds for inputs. This node was just named Placeholder, so that's what I specified as the input node.
First extract the compute graph.
freeze_graph --input_saved_model_dir <path/to/saved_model_dir> --output_node_names softmax --output_graph ./graph_def.pb
Then use MMdnn to convert it to caffe.
mmconvert -sf tensorflow -iw ./graph_def.pb --inNodeName Placeholder --inputShape 224,224,3 --dstNodeName softmax -df caffe -om tf_resnet
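As an alternative to scanning the .pbtxt by hand, you can also list the Placeholder nodes programmatically. A minimal sketch, assuming TF 1.x and the graph_def.pb produced by the freeze_graph command above:

import tensorflow as tf

# Load the frozen GraphDef from disk.
graph_def = tf.GraphDef()
with tf.gfile.GFile("./graph_def.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

# Print every Placeholder node; one of these is the input node
# to pass to mmconvert as --inNodeName.
for node in graph_def.node:
    if node.op == "Placeholder":
        print(node.name)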

Using model optimizer for tensorflow slim models

I am aiming to run inference on a TensorFlow Slim model with the Intel OpenVINO optimizer, using the OpenVINO docs and slides for inference and the TF Slim docs for training the model.
It's a multi-class classification problem. I have trained a TF Slim mobilenet_v2 model from scratch (using the script train_image_classifier.py). Evaluation of the trained model on the test set gives relatively good results to begin with (using the script eval_image_classifier.py):
eval/Accuracy[0.8017] eval/Recall_5[0.9993]
However, a single .ckpt file is not saved (even though at the end of the train_image_classifier.py run there is a message like "model.ckpt is saved to checkpoint_dir"); there are 3 files instead (.ckpt-180000.data-00000-of-00001, .ckpt-180000.index, .ckpt-180000.meta).
The OpenVINO model optimizer requires a single checkpoint file.
According to docs I call mo_tf.py with following params:
python mo_tf.py --input_model D:/model/mobilenet_v2_224.pb --input_checkpoint D:/model/model.ckpt-180000 -b 1
It gives the following error (the same occurs if I pass --input_checkpoint D:/model/model.ckpt):
[ ERROR ] The value for command line parameter "input_checkpoint" must be existing file/directory, but "D:/model/model.ckpt-180000" does not exist.
The error message is clear: there are no such files on disk. But as far as I know, most TF utilities convert .ckpt-????.meta to .ckpt under the hood.
Trying to call:
python mo_tf.py --input_model D:/model/mobilenet_v2_224.pb --input_meta_graph D:/model/model.ckpt-180000.meta -b 1
Causes:
[ ERROR ] Unknown configuration of input model parameters
It doesn't matter to me which way the graph gets transferred to the OpenVINO intermediate representation; I just need to reach that result.
Thanks a lot.
EDIT
I managed to run the OpenVINO model optimizer on the frozen graph of the TF Slim model. However, I still have no idea why my previous attempts (based on the docs) had failed.
You can try converting the model to frozen format (.pb) and then converting it using OpenVINO.
.ckpt-meta holds the metagraph, i.e. the computation graph structure without the variable values; this is the graph you can observe in TensorBoard.
.ckpt-data holds the variable values, without the skeleton or structure. To restore a model, we need both the meta and data files.
A .pb file saves the whole graph (meta + data).
As per the documentation of OpenVINO:
When a network is defined in Python* code, you have to create an inference graph file. Usually, graphs are built in a form that allows model training. That means that all trainable parameters are represented as variables in the graph. To use the graph with the Model Optimizer, it should be frozen.
https://software.intel.com/en-us/articles/OpenVINO-Using-TensorFlow
That is, OpenVINO optimizes the model by converting the weighted graph that is passed to it in frozen form.
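For reference, here is a minimal freezing sketch, assuming TF 1.x; the checkpoint prefix matches the files mentioned above, but the output node name is an assumption based on the standard TF Slim MobileNet V2 graph, so replace it with your graph's real output (visible in TensorBoard):

import tensorflow as tf

# Rebuild the graph from the .meta file and restore the variable values.
saver = tf.train.import_meta_graph("model.ckpt-180000.meta")
with tf.Session() as sess:
    saver.restore(sess, "model.ckpt-180000")
    # Bake the variables into constants so the graph becomes frozen.
    frozen = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph_def, ["MobilenetV2/Predictions/Reshape_1"])

tf.train.write_graph(frozen, ".", "frozen_mobilenet_v2.pb", as_text=False)

The resulting frozen_mobilenet_v2.pb can then be passed to mo_tf.py via --input_model, with no checkpoint arguments needed.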

Visualize a Word2Vec model using Embedding Projector

What is the best way to visualize a Word2Vec model using TensorFlow's Embedding Projector?
Is there a way to export the Word2Vec model's vectors to the format that the Embedding Projector expects, or is there a built-in function in TensorFlow for that?
Thanks!
Save your model using:
from gensim.models import Word2Vec
model = Word2Vec(sentences)  # `sentences` is your training corpus
model.wv.save_word2vec_format('model_name')
And then convert the model to the input files required by Embedding Projector:
python -m gensim.scripts.word2vec2tensor --input model_name --output model_name
This will produce two files: model_name_tensor.tsv and model_name_metadata.tsv.
This last script was introduced in a PR from the issue linked in a comment in your original question.
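Once you have the two .tsv files, you can load them in the Embedding Projector UI at https://projector.tensorflow.org (use the Load button and supply the tensor file and the metadata file) to explore the vectors interactively.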