Failure to load model in TensorFlow.js

I have converted several transfer-learned models (VGG16, InceptionV3, EfficientNetB0) from TensorFlow in Python to TensorFlow.js.
After importing them into TensorFlow.js, the models fail to load.
One of the errors is:
Uncaught Error: Unknown layer: Functional. This may be due to one of the following reasons:
1. The layer is defined in Python, in which case it needs to be ported to TensorFlow.js or your JavaScript code.
2. The custom layer is defined in JavaScript, but is not registered properly with tf.serialization.registerClass().
at jN (generic_utils.js:242)
at GI (serialization.js:31)
at e.fromConfig (models.js:1026)
at jN (generic_utils.js:277)
at GI (serialization.js:31)
at models.js:295
at u (runtime.js:45)
at Generator._invoke (runtime.js:274)
at Generator.forEach.t.<computed> [as next] (runtime.js:97)
at Wm (runtime.js:728)
There is also:
jquery-3.3.1.slim.min.js:2 Uncaught Error: Unknown layer: RandomFlip. This may be due to one of the following reasons:
1. The layer is defined in Python, in which case it needs to be ported to TensorFlow.js or your JavaScript code.
2. The custom layer is defined in JavaScript, but is not registered properly with tf.serialization.registerClass().
at jN (generic_utils.js:242)
at GI (serialization.js:31)
at e.fromConfig (models.js:1026)
at jN (generic_utils.js:277)
at GI (serialization.js:31)
at e.fromConfig (models.js:1026)
at jN (generic_utils.js:277)
at GI (serialization.js:31)
at models.js:295
at u (runtime.js:45)
Also,
Failed to load resource: the server responded with a status of 404 ()
What is the problem?
If I use the .json file generated by Teachable Machine, the model can be loaded. (However, the predictions become completely wrong for unknown reasons, and the problem seems to be more than just a labelling issue.)
But if I use a model.json file generated from a .h5 or SavedModel via the TensorFlow.js converter, the model cannot be loaded into the JavaScript, no matter which pretrained model I use or which file format (.h5 or SavedModel) I generate it from.
Please help!!

Go to the model.json file, search for the keyword Functional (you could use Ctrl+F), and replace the word with Model.
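In script form, a minimal sketch of that edit (assuming the converted file is named model.json and sits in the current directory; back the file up first):

# Rewrite the "Functional" class name, which this version of tfjs does not
# recognize, to the older "Model" name before loading. str.replace patches
# every occurrence, including nested models. This assumes the converter's
# default JSON spacing ("class_name": "Functional").
with open("model.json") as f:
    config = f.read()

config = config.replace('"class_name": "Functional"', '"class_name": "Model"')

with open("model.json", "w") as f:
    f.write(config)

Note this only addresses the Unknown layer: Functional error; Unknown layer: RandomFlip is a separate issue, since that preprocessing layer had no tfjs implementation at the time.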

Related

"Unkown (custom) loss function" when using tflite_convert on a {TF 2.0.0-beta1 ; Keras} model

Summary
My question consists of:
A context in which I present my project, my working environment, and my workflow
The detailed problem
The relevant parts of my code
The solutions I tried in order to solve my problem
A restatement of the question
Context
I've written a Python Keras implementation of a downgraded version of the original Super-Resolution GAN. Now I want to test it using the Google Firebase Machine Learning Kit, by hosting it on Google's servers. That's why I have to convert my Keras program to a TensorFlow Lite one.
Environment and workflow (with the problem)
I'm training my program in the Google Colab environment, where I've installed TF 2.0.0-beta1 (this choice is motivated by this incorrect answer: https://datascience.stackexchange.com/a/57408/78409).
Workflow (and problem):
I write my Python Keras program locally, keeping in mind that it will run on TF 2. So I use TF 2 imports, for example: from tensorflow.keras.optimizers import Adam and also from tensorflow.keras.layers import Conv2D, BatchNormalization
I upload my code to my Drive
I run my Google Colab notebook without any problem: TF 2 is used.
I get the output model in my Drive and download it.
I try to convert this model to the TFLite format by executing the following CLI command: tflite_convert --output_file=srgan.tflite --keras_model_file=srgan.h5. This is where the problem appears.
The problem
Instead of outputting the TF Lite converted model from the TF (Keras) model, the previous CLI command outputs this error:
ValueError: Unknown loss function:build_vgg19_loss_network
The function build_vgg19_loss_network is a custom loss function that I've implemented and that must be used by the GAN.
Parts of the code that raise this problem
Presenting the custom loss function
The custom loss function is implemented like this (mean and square come from the Keras backend; Vgg19Loss and high_resolution_shape are defined elsewhere in my code):
from tensorflow.keras.backend import mean, square

def build_vgg19_loss_network(ground_truth_image, predicted_image):
    loss_model = Vgg19Loss.define_loss_model(high_resolution_shape)
    return mean(square(loss_model(ground_truth_image) - loss_model(predicted_image)))
Compiling the generator network with my custom loss function
generator_model.compile(optimizer=the_optimizer, loss=build_vgg19_loss_network)
What I've tried to do in order to solve the problem
As I read on StackOverflow (link at the beginning of this question), TF 2 was supposed to be sufficient to output a Keras model that would be correctly processed by my tflite_convert CLI. But it obviously isn't.
As I read on GitHub, I tried to manually register my custom loss function among Keras' loss functions by adding these lines: import tensorflow.keras.losses
tensorflow.keras.losses.build_vgg19_loss_network = build_vgg19_loss_network. It didn't work.
I read on GitHub that I could pass custom objects to Keras' load_model function: but I only want to use Keras' compile function, not load_model.
My final question
I want to make only minor changes to my code, since it works fine. So I don't want, for example, to replace compile with load_model. With this constraint, could you please help me make my tflite_convert CLI work with my custom loss function?
Since you are claiming that TFLite conversion is failing due to a custom loss function, you can save the model file without keeping the optimizer details. To do that, set the include_optimizer parameter to False as shown below:
model.save('model.h5', include_optimizer=False)
Now, if all the layers inside your model are convertible, they should get converted into a TFLite file.
Edit:
You can then convert the h5 file like this:
import tensorflow as tf
model = tf.keras.models.load_model('model.h5') # srgan.h5 for you
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)
The usual practice for overcoming unsupported operators in TFLite conversion is documented here.
I had the same error. I recommend changing the loss to "mse", since you already have a well-trained model and you don't need to train with the .tflite file.
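A minimal sketch of that workaround, assuming the trained generator and its optimizer are still in memory as generator_model and the_optimizer (names taken from the question):

# Recompile with a built-in loss; the weights are untouched, so inference
# through the converted .tflite file is unaffected.
generator_model.compile(optimizer=the_optimizer, loss="mse")
generator_model.save("srgan.h5", include_optimizer=False)

The tflite_convert command from the question should then no longer encounter the custom loss during deserialization.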

Can't save/export and load a Keras model that uses eager execution

I'm following the RNN text-generation tutorial with eager execution pretty much line for line. I've trained the model with my own dataset and have saved a low-loss checkpoint. I'm able to load the weights and generate text, but I want to export/save the model so that I can learn how to deploy one using Flask. However, I can't figure out how. The version I'm using is '1.14.0-rc1'.
The tutorial: https://www.tensorflow.org/tutorials/sequences/text_generation
I have been able to save the model as an HDF5 file, but I cannot load it. I've also disabled eager execution, but that causes problems with running the code later on. I have tried the following and a few more snippets, but those led to nothing as well:
new_model = keras.models.load_model("/content/gdrive/My Drive/ColabNotebooks/ckpt4/my_model.h5")
However, I get
RuntimeError: tf.placeholder() is not compatible with eager execution.
Lastly, I found this in another post and tried it as well, but was met with another error:
tf.saved_model.save(model, "/content/gdrive/My Drive/Colab Notebooks/ckpt4/my_model.h5")
error:
AssertionError: Tried to export a function which references untracked object Tensor("StatefulPartitionedCall/args_2:0", shape=(), dtype=resource).TensorFlow objects (e.g. tf.Variable) captured by functions must be tracked by assigning them to an attribute of a tracked object or assigned to an attribute of the main object directly.

Tensorflow.js error: unknown layer: GaussianNoise

I converted a pretrained Keras model to use it with TensorFlow.js, following the steps in this guide.
Now, when I try to import it into JavaScript using
const model = tf.loadModel("{% static 'keras/model.json' %}");
The following error shows up:
Uncaught (in promise) Error: Unknown layer: GaussianNoise. This may be due to one of the following reasons:
1. The layer is defined in Python, in which case it needs to be ported to TensorFlow.js or your JavaScript code.
2. The custom layer is defined in JavaScript, but is not registered properly with
tf.serialization.registerClass().
at new t (errors.ts:48)
at deserializeKerasObject (generic_utils.ts:239)
at deserialize (serialization.ts:31)
at t.fromConfig (models.ts:940)
at deserializeKerasObject (generic_utils.ts:274)
at deserialize (serialization.ts:31)
at models.ts:302
at common.ts:14
at Object.next (common.ts:14)
at i (common.ts:14)
I'm using version 0.15.3 of TensorFlow.js, imported this way:
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs@0.15.3/dist/tf.min.js"></script>
I trained my neural network with Tensorflow 1.12.0 and Keras 2.2.4
You are using the layer tf.layers.gaussianNoise, which is not yet supported by tfjs.
Consider replacing this layer with a supported one.
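If the architecture allows it, one option is to drop the layer in Python before converting, since GaussianNoise only injects noise during training and is an identity at inference time. A rough sketch, with hypothetical file names and assuming a Sequential model:

from keras.models import Sequential, load_model
from keras.layers import GaussianNoise

model = load_model("model.h5")
# GaussianNoise is a no-op at inference time, so removing it should not
# change predictions; rebuild the model without it.
clean = Sequential([layer for layer in model.layers
                    if not isinstance(layer, GaussianNoise)])
clean.save("model_clean.h5")

Converting model_clean.h5 with the same guide should then avoid the unsupported layer.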

CoreMLTools convert causes "Error reading protobuf spec. validator error"

I have been working on building a custom convolutional network, which has been saved into a .h5 file. Then I applied transfer learning by popping the last (fully connected) layers and compiling the model with the new data. Again I saved the model in .h5 format.
The problem occurs when I try to convert this model to the .mlmodel format. I get the following error:
return _MLModelProxy(filename)
RuntimeError: Error compiling model: "Error reading protobuf spec. validator error: Layer 'conv2d_2__activation__' consumes a layer named 'conv2d_2__activation___output' which is not present in this network."
I am freezing the layers of the original convolutional neural network.
The versions I'm using are:
Keras (2.1.6)
Protobuf(3.6.0)
Tensorflow(1.8.0)
For the conversion:
coreml_model = coremltools.converters.keras.convert(
    pathToh5File,
    class_labels=['0','1','2','3','4','5','6','7','8','9']
)
I've tried adding input names and so on. Still getting the same result.
I would be grateful for any suggestion.
Thank you in advance!

Error with 8-bit Quantization in Tensorflow

I have been experimenting with the new 8-bit quantization feature available in TensorFlow. I could run the example given in the blog post (quantization of GoogLeNet) without any issue, and it works fine for me!
Now, I would like to apply the same to a simpler network. So I used a pre-trained network for CIFAR-10 (which was trained in Caffe), extracted its parameters, created the corresponding graph in TensorFlow, initialized the weights with these pre-trained weights, and finally saved it as a GraphDef object. See this IPython Notebook for the full procedure.
Now I applied the 8-bit quantization with the TensorFlow script as mentioned in Pete Warden's blog:
bazel-bin/tensorflow/contrib/quantization/tools/quantize_graph --input=cifar.pb --output=qcifar.pb --mode=eightbit --bitdepth=8 --output_node_names="ArgMax"
Now I wanted to run classification on this quantized network. So I loaded the new qcifar.pb into a TensorFlow session and passed the image (the same way I passed it to the original version). The full code can be found in this IPython Notebook.
But as you can see at the end, I am getting the following error:
NotFoundError: Op type not registered 'QuantizeV2'
Can anybody suggest what I am missing here?
Because the quantized ops and kernels are in contrib, you'll need to explicitly load them in your Python script. There's an example of that in the quantize_graph.py script itself:
from tensorflow.contrib.quantization import load_quantized_ops_so
from tensorflow.contrib.quantization.kernels import load_quantized_kernels_so
This is something that we should update the documentation to mention!
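A sketch of what the classification script might look like with those modules loaded (the Load() calls mirror the pattern used in quantize_graph.py at the time; treat the exact names as an assumption for other TF versions):

import tensorflow as tf
from tensorflow.contrib.quantization import load_quantized_ops_so
from tensorflow.contrib.quantization.kernels import load_quantized_kernels_so

# Register the contrib quantized ops and kernels before importing the graph;
# otherwise loading qcifar.pb fails with "Op type not registered 'QuantizeV2'".
load_quantized_ops_so.Load()
load_quantized_kernels_so.Load()

with tf.gfile.FastGFile("qcifar.pb", "rb") as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

with tf.Session() as sess:
    tf.import_graph_def(graph_def, name="")
    # Run inference exactly as with the unquantized graph, e.g.:
    # predictions = sess.run("ArgMax:0", feed_dict={...})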