Saving a Keras/sklearn model in Python and loading the saved model in tensorflow.js - tensorflow

I have a trained sklearn SVM model in .pkl format and a Keras .h5 model. Can I load these models using tensorflow.js in a browser?
I do most of my coding in Python and am not sure how to work with tensorflow.js.
My model-saving code looks like this:
from sklearn.externals import joblib  # deprecated in newer sklearn; plain `import joblib` works there
joblib.dump(svc, 'model.pkl')       # save the trained SVM
model = joblib.load('model.pkl')    # load it back
prediction = model.predict(X_test)
#------------------------------------------------------------------
from keras.models import load_model
model.save('model.h5')              # save the Keras model
model = load_model('model.h5')      # load it back (same filename as saved above)

In order to deploy your model with tensorflow.js, you need to use the tensorflowjs_converter, so you also need to install the tensorflowjs dependency. Note that the converter handles TensorFlow/Keras models only; a scikit-learn .pkl cannot be converted this way.
You can do that in Python via pip install tensorflowjs.
Next, you convert your trained model via this operation, according to your custom names: tensorflowjs_converter --input_format=keras /tmp/model.h5 /tmp/tfjs_model, where the last path is the output directory of the conversion result.
Note that, after the conversion, you will get a model.json (the architecture of your model) and a list of N shards (the weights split into N shards).
Then, in JavaScript, you need to use the function tf.loadLayersModel(MODEL_URL), where MODEL_URL is the URL pointing to your model.json. Ensure that the shards are located in the same place as the model.json.
Since this is an asynchronous operation (you do not want your web page to block while your model is loading), you need to use the JavaScript await keyword; hence await tf.loadLayersModel(MODEL_URL).
Please have a look at the following link to see an example: https://www.tensorflow.org/js/guide/conversion
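If you prefer to stay in Python for the conversion step, the tensorflowjs package also exposes a Python API; a minimal sketch (assumes `model` is the Keras model you already have in memory; the output directory name is arbitrary):
import tensorflowjs as tfjs
tfjs.converters.save_keras_model(model, 'tfjs_model')
This writes the same model.json plus weight shards that the CLI produces.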

Related

How to deploy custom tensorflow model to web?

I'm facing a problem deploying my custom sign-language recognition model. I converted my_ssd_mobnet with exporter_main_v2.py to saved_model.pb, and then I tried to use the tensorflowjs converter with this code:
import tensorflow as tf   # needed for tf.keras below (the original `from tensorflow import keras` left `tf` undefined)
import tensorflowjs as tfjs

def importModel(modelPath):
    model = tf.keras.models.load_model(modelPath)
    tfjs.converters.save_tf_model(model, "tfjsmodel")

importModel("saved_model")
#importModel("modelDirectory")
Then I got an error like this:
ValueError: Unable to create a Keras model from this SavedModel. This SavedModel was created with tf.saved_model.save, and lacks the Keras metadata. Please save your Keras model by calling model.save or tf.keras.models.save_model.
Finally I decided to convert my model to h5, but I don't know how.
How can I convert the my_ssd_mobnet model to h5?
Thanks!
If you're creating a custom Keras layer in Python and want to export it to tfjs for in-browser prediction, you'll most likely encounter "Unknown layer" errors and will have to reimplement the layers yourself in JS.
Instead of exporting the layers, it's best to export a graph, since you're only using the model for prediction, not training, in the browser.
tf.saved_model.save(model, 'saved_model')
This will save the files into the saved_model folder, which contains the .pb file.
Use the tensorflowjs_converter tool to convert the model into a graph tfjs model.
tensorflowjs_converter --input_format=tf_saved_model saved_model model
This will convert your saved model into the browser-compatible tfjs model without the custom-layer problem. (The Keras layers will be built into the graph.)
Move this folder to your website's public folder.
In the browser:
const model = await tf.loadGraphModel('/model/model.json')
const img = tf.browser.fromPixels(imageData, 3) // imageElement, videoElement, ImageData
  .toFloat()
  .resizeBilinear([224, 224]) // mobilenet dims
  .div(tf.scalar(255))        // mobilenet [0,1] normalization
  .expandDims()
const { values, indices } = model.predict(img).topk()
const label = indices.dataSync()[0]
const confidence = values.dataSync()[0]
NOTE: The .bin files can end up in the tens of MB, so load and run the model inside a web worker. You can send buffered data from the main thread to the worker thread for processing.
First and foremost: if you used the exporter_main_v2.py script to export the model, you only get a plain TensorFlow SavedModel. That way of exporting is mainly meant for running inference on the trained model. So the main problem in your code is that you are trying to load a SavedModel that lacks Keras metadata with the tf.keras.models.load_model() function. Instead of using exporter_main_v2.py, use the tf.keras.models.save_model() function to export/save your model.
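A minimal sketch of that suggestion (illustrative only; it assumes `model` is the tf.keras object you trained in Python, not the exporter_main_v2.py output):
import tensorflow as tf
tf.keras.models.save_model(model, 'model.h5')       # Keras H5, with the Keras metadata
reloaded = tf.keras.models.load_model('model.h5')   # now load_model works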
I am also giving you a simple video explanation link to clarify a few things for you:
https://www.youtube.com/watch?v=Lx7OCFXPG8o
After watching the video you might want to check out the following Colab notebook:
https://colab.research.google.com/github/tensorflow/examples/blob/master/courses/udacity_intro_to_tensorflow_for_deep_learning/l07c01_saving_and_loading_models.ipynb
This is material provided by Udacity in its introduction-to-TensorFlow course. It should be very helpful in your case for understanding the difference between a TensorFlow model file and a Keras model file.
Have a nice day.
Edit:
HDF5 format
Keras provides a basic save format using the HDF5 standard.
# Create and train a new model instance.
model = create_model()
model.fit(train_images, train_labels, epochs=5)
# Save the entire model to an HDF5 file.
# The '.h5' extension indicates that the model should be saved to HDF5.
model.save('my_model.h5')
You should add the '.h5' extension to the filename when calling model.save; this way the model will be saved in h5 format.
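For reference, these are equivalent ways to force the HDF5 format in TF 2.x (a sketch; `model` is any Keras model):
import tensorflow as tf
model.save('my_model.h5')                         # format inferred from the .h5 extension
model.save('my_model', save_format='h5')          # explicit save_format, extension-free path
tf.keras.models.save_model(model, 'my_model.h5')  # functional equivalent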

How to convert from Tensorflow.js (.json) model into Tensorflow (SavedModel) or Tensorflow Lite (.tflite) model?

I have downloaded a pre-trained PoseNet model for Tensorflow.js (tfjs) from Google, so it's a json file.
However, I want to use it on Android, so I need the .tflite model. Although someone has 'ported' a similar model from tfjs to tflite here, I have no idea which model (there are many variants of PoseNet) they converted. I want to do the steps myself. Also, I don't want to run some arbitrary code someone uploaded to a file on Stack Overflow:
Caution: Be careful with untrusted code—TensorFlow models are code. See Using TensorFlow Securely for details. Tensorflow docs
Does anyone know any convenient ways to do this?
You can find out which tfjs format you have by looking inside the json file. It often says "graph-model". The differences between the formats are described here.
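If in doubt, the json usually carries an explicit format field; a minimal sketch for checking it (assumes the file is named model.json):
import json
with open('model.json') as f:
    meta = json.load(f)
print(meta.get('format'))  # e.g. 'graph-model' or 'layers-model'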
From tfjs graph model to SavedModel (more common)
Use tfjs-to-tf by Patrick Levin.
import tensorflow as tf
import tfjs_graph_converter.api as tfjs

tfjs.graph_model_to_saved_model(
    "savedmodel/posenet/mobilenet/float/050/model-stride16.json",
    "realsavedmodel"
)
# Code below taken from https://www.tensorflow.org/lite/convert/python_api
converter = tf.lite.TFLiteConverter.from_saved_model("realsavedmodel")
tflite_model = converter.convert()
# Save the TF Lite model.
with tf.io.gfile.GFile('model.tflite', 'wb') as f:
    f.write(tflite_model)
From tfjs layers model to SavedModel
Note: This will only work for layers model format, not graph model format as in the question. I've written the difference between them here.
Install and use tensorflowjs_converter to convert the .json file into a Keras HDF5 file (from another SO thread).
On Mac, you'll face issues running pyenv (fix), and on Z shell, pyenv won't load correctly (fix). Also, once pyenv is running, use python -m pip install tensorflowjs instead of pip install tensorflowjs, because for me pyenv did not change the python used by pip.
Once you've followed the tensorflowjs_converter guide, run tensorflowjs_converter to verify it works with no errors; it should just warn you about a missing input_path argument. Then:
tensorflowjs_converter --input_format=tfjs_layers_model --output_format=keras tfjs_model.json hdf5_keras_model.hdf5
Convert the Keras HDF5 file into a SavedModel (the standard TensorFlow model format) or directly into a .tflite file using the TFLiteConverter. The following runs in a Python file:
import tensorflow as tf

# Convert the model.
model = tf.keras.models.load_model('hdf5_keras_model.hdf5')
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
# Save the TF Lite model.
with tf.io.gfile.GFile('model.tflite', 'wb') as f:
    f.write(tflite_model)
or to save to a SavedModel:
# Convert the model.
model = tf.keras.models.load_model('hdf5_keras_model.hdf5')
tf.keras.models.save_model(
    model, filepath, overwrite=True, include_optimizer=True, save_format=None,
    signatures=None, options=None
)

"Unkown (custom) loss function" when using tflite_convert on a {TF 2.0.0-beta1 ; Keras} model

Summary
My question is composed of:
A context in which I present my project, my working environment and my workflow
The detailed problem
The relevant parts of my code
The solutions I tried to solve my problem
A reminder of the question
Context
I've written a Python Keras implementation of a downgraded version of the original Super-Resolution GAN. Now I want to test it using Google Firebase Machine Learning Kit, by hosting it on the Google servers. That's why I have to convert my Keras program to a TensorFlow Lite one.
Environment and workflow (with the problem)
I'm training my program in the Google Colab working environment: there, I've installed TF 2.0.0-beta1 (this choice is motivated by this incorrect answer: https://datascience.stackexchange.com/a/57408/78409).
Workflow (and problem):
I write my Python Keras program locally, keeping in mind that it will run on TF 2. So I use TF 2 imports, for example: from tensorflow.keras.optimizers import Adam and also from tensorflow.keras.layers import Conv2D, BatchNormalization
I send my code to my Drive
I run my Google Colab notebook without any problem: TF 2 is used.
I get the output model in my Drive, and I download it.
I try to convert this model to the TFLite format by executing the following CLI command: tflite_convert --output_file=srgan.tflite --keras_model_file=srgan.h5. Here the problem appears.
The problem
Instead of outputting the TFLite-converted model from the TF (Keras) model, the previous CLI outputs this error:
ValueError: Unknown loss function:build_vgg19_loss_network
The function build_vgg19_loss_network is a custom loss function that I've implemented and that must be used by the GAN.
Parts of the code that raise this problem
Presenting the custom loss function
The custom loss function is implemented like this:
def build_vgg19_loss_network(ground_truth_image, predicted_image):
    # mean and square are presumably the Keras backend functions
    loss_model = Vgg19Loss.define_loss_model(high_resolution_shape)
    return mean(square(loss_model(ground_truth_image) - loss_model(predicted_image)))
Compiling the generator network with my custom loss function
generator_model.compile(optimizer=the_optimizer, loss=build_vgg19_loss_network)
What I've tried to do in order to solve the problem
As I read on StackOverflow (link at the beginning of this question), TF 2 was supposed to be sufficient to output a Keras model that my tflite_convert CLI would process correctly. But it obviously isn't.
As I read on GitHub, I tried to manually register my custom loss function among Keras' loss functions, by adding these lines: import tensorflow.keras.losses
tensorflow.keras.losses.build_vgg19_loss_network = build_vgg19_loss_network. It didn't work.
I read on GitHub that I could use custom objects with the load_model Keras function: but I only want to use the compile Keras function. Not load_model.
My final question
I want to make only minor changes to my code, since it works fine. So I don't want, for example, to replace compile with load_model. With this constraint, could you please help me make my tflite_convert CLI work with my custom loss function?
Since the TFLite conversion is failing due to a custom loss function, you can save the model file without keeping the optimizer details. To do that, set the include_optimizer parameter to False as shown below:
model.save('model.h5', include_optimizer=False)
Now, if all the layers inside your model are convertible, they should get converted into a TFLite file.
Edit:
You can then convert the h5 file like this:
import tensorflow as tf
model = tf.keras.models.load_model('model.h5') # srgan.h5 for you
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)
Usual practice to overcome the unsupported operators in TFLite conversion is documented here.
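For unsupported operators specifically, a common pattern (a sketch using the TF 2.x converter API, not specific to this model) is to let the converter fall back to select TensorFlow ops:
import tensorflow as tf
model = tf.keras.models.load_model('model.h5')
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,  # prefer native TFLite ops
    tf.lite.OpsSet.SELECT_TF_OPS,    # fall back to TensorFlow ops (larger binary)
]
tflite_model = converter.convert()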
I had the same error. I recommend changing the loss to "mse", since you already have a well-trained model and you don't need to train through the .tflite file.
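A minimal sketch of that suggestion (reusing generator_model and the_optimizer from the question; recompiling swaps the loss without touching the trained weights):
generator_model.compile(optimizer=the_optimizer, loss='mse')
generator_model.save('srgan.h5')  # no custom-loss reference left for tflite_convert to trip on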

What is the use of a *.pb file in TensorFlow and how does it work?

I am using an implementation for face recognition which uses this file:
facenet.load_model("20170512-110547/20170512-110547.pb")
What is the use of this file? I am not sure how it works.
console log :
Model filename: 20170512-110547/20170512-110547.pb
distance = 0.72212267
Github link of the actual owner of the code
https://github.com/arunmandal53/facematch
pb stands for protobuf. In TensorFlow, the protobuf file contains the graph definition as well as the weights of the model. Thus, a pb file is all you need to be able to run a given trained model.
Given a pb file, you can load it as follows.
import tensorflow as tf  # TF 1.x API (use tf.compat.v1 under TF 2)

def load_pb(path_to_pb):
    with tf.gfile.GFile(path_to_pb, "rb") as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
    with tf.Graph().as_default() as graph:
        tf.import_graph_def(graph_def, name='')
        return graph
Once you have loaded the graph, you can basically do anything. For instance, you can retrieve tensors of interest with
input = graph.get_tensor_by_name('input:0')
output = graph.get_tensor_by_name('output:0')
and run them in a regular TensorFlow session:
with tf.Session(graph=graph) as sess:
    result = sess.run(output, feed_dict={input: some_data})
Explanation
The .pb format is the protocol buffer (protobuf) format, and in TensorFlow this format is used to hold models. Protobufs are a general-purpose format by Google for storing data that is much nicer to transport, as it compacts the data more efficiently and enforces a structure on the data. When used in TensorFlow, it's called a SavedModel protocol buffer, which is the default format when saving Keras/TensorFlow 2.0 models. More information about this format can be found here and here.
For example, the following code (specifically, m.save) will create a folder called my_new_model and save into it the saved_model.pb, an assets/ folder, and a variables/ folder.
import tensorflow as tf
import tensorflow_hub as hub

# first download a SavedModel from TFHub.dev, a website with models
m = tf.keras.Sequential([
    hub.KerasLayer("https://tfhub.dev/google/imagenet/mobilenet_v2_130_224/classification/4")
])
m.build([None, 224, 224, 3])  # Batch input shape.
m.save("my_new_model")  # defaults to save as SavedModel in tensorflow 2
In some places, you may also see .h5 models, which was the default format for TF 1.X. source
Extra information: In TensorFlow Lite, the library for running models on mobile and IoT devices, flatbuffers are used instead of protocol buffers. This is what the TensorFlow Lite Converter converts into (.tflite format). This is another Google format which is also very efficient: it allows access to any part of the message without deserialization (unlike JSON or XML). For devices with less memory (RAM), it makes more sense to load only what you need from the model file, instead of loading the entire thing into memory to deserialize it.
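For illustration, this is roughly how a .tflite flatbuffer is consumed via the Python interpreter (a sketch; assumes a model.tflite file exists):
import tensorflow as tf
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()  # prepares the flatbuffer for inference without full deserialization
print(interpreter.get_input_details())
print(interpreter.get_output_details())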
Loading SavedModels in TensorFlow 2
I noticed BiBi's answer showing how to load models was popular, and there is a shorter way to do this in TF2:
import tensorflow as tf
model_path = "/path/to/directory/inception_v1_224_quant_20181026"
model = tf.saved_model.load(model_path)
Note:
the directory (i.e. inception_v1_224_quant_20181026) has to contain a saved_model.pb or saved_model.pbtxt, otherwise the code will crash. You cannot specify the .pb path; specify the directory.
you might get TypeError: 'AutoTrackable' object is not callable for older models, fix here.
If you load a TF1 model, I found that I don't get any errors, but the loaded file doesn't behave as expected (e.g. it doesn't have any functions on it, like predict).

Tensorflow Keras GCP ML engine model serving

I'm working on an image classifier with the TensorFlow estimator + Keras, retraining the last layer of the pretrained application inception_v3 on GCP ML Engine.
The Keras model is exported with tf.keras.estimator.model_to_estimator, and the input function receives the path of the image stored on GCP Cloud Storage, opens the image with tf.image.decode_jpeg, and returns a dataset with the following format: dict(zip(['inception_v3_input'], [image])), label
I'm trying to define the tf.estimator.export.ServingInputReceiver, but I'm having some trouble with it.
The model serves predictions correctly with the predict method, using the input function without the labels.
My idea was to reuse the input function to decode the image, passing only the path of the image on Cloud Storage to the prediction, also for the Google endpoint, but I can't understand how to do it.
Thanks for your help.
If I'm understanding correctly, your question is how to get the file from Cloud Storage, considering that you want to decode the image this way:
image_decoded = tf.image.decode_jpeg(image_string)
So, in this case, you can use (note that decode_jpeg expects the raw bytes, so open the file in binary mode and read its contents):
image_string = file_io.FileIO(filename, mode='rb').read()
by importing file_io first:
from tensorflow.python.lib.io import file_io
According to the comments on this question about reading input data from GCS, using the file_io functions should provide the same results, since "there was a bunch of work done to abstract file io and file systems, so there all the io functionality works consistently". You can also try the read_file function.
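Putting it together for the ServingInputReceiver part of the question, a hedged sketch (TF 1.x estimator API; the tensor names, image size and scaling are assumptions for illustration, not taken from the answer above):
import tensorflow as tf

def serving_input_receiver_fn():
    # Hypothetical receiver tensor: a batch of Cloud Storage image paths.
    paths = tf.placeholder(dtype=tf.string, shape=[None], name='image_paths')

    def _load(path):
        image = tf.image.decode_jpeg(tf.read_file(path), channels=3)
        image = tf.image.resize_images(image, [299, 299])  # inception_v3 input size
        return tf.cast(image, tf.float32) / 255.0

    images = tf.map_fn(_load, paths, dtype=tf.float32)
    return tf.estimator.export.ServingInputReceiver(
        features={'inception_v3_input': images},
        receiver_tensors={'image_paths': paths})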