"Unkown (custom) loss function" when using tflite_convert on a {TF 2.0.0-beta1 ; Keras} model - tensorflow

Summary
My question is composed of:
A context in which I present my project, my working environment and my workflow
The detailed problem
The concerned parts of my code
The solutions I tried to solve my problem
The question reminder
Context
I've written a Python Keras implementation of a downgraded version of the original Super-Resolution GAN. Now I want to test it using Google Firebase Machine Learning Kit, by hosting it in the Google servers. That's why I have to convert my Keras program to a TensorFlow Lite one.
Environment and workflow (with the problem)
I'm training my program in the Google Colab working environment: there, I've installed TF 2.0.0-beta1 (this choice is motivated by this incorrect answer: https://datascience.stackexchange.com/a/57408/78409).
Workflow (and problem):
I write my Python Keras program locally, keeping in mind that it will run on TF 2. So I use TF 2 imports, for example: from tensorflow.keras.optimizers import Adam and from tensorflow.keras.layers import Conv2D, BatchNormalization
I send my code to my Drive
I run my Google Colab Notebook without any problem: TF 2 is used.
I get the output model in my Drive, and I download it.
I try to convert this model to the TFLite format by executing the following CLI command: tflite_convert --output_file=srgan.tflite --keras_model_file=srgan.h5. Here the problem appears.
The problem
Instead of outputting the TF Lite model converted from the TF (Keras) model, the previous CLI command outputs this error:
ValueError: Unknown loss function:build_vgg19_loss_network
The function build_vgg19_loss_network is a custom loss function that I've implemented and that must be used by the GAN.
Parts of code that raise this problem
Presenting the custom loss function
The custom loss function is implemented as follows:
def build_vgg19_loss_network(ground_truth_image, predicted_image):
    loss_model = Vgg19Loss.define_loss_model(high_resolution_shape)
    return mean(square(loss_model(ground_truth_image) - loss_model(predicted_image)))
Compiling the generator network with my custom loss function
generator_model.compile(optimizer=the_optimizer, loss=build_vgg19_loss_network)
What I've tried to do in order to solve the problem
As I read on StackOverflow (link at the beginning of this question), using TF 2 was supposed to be sufficient to output a Keras model that would be correctly processed by my tflite_convert CLI. But obviously it's not.
As I read on GitHub, I tried to manually register my custom loss function among Keras' loss functions, by adding these lines:
import tensorflow.keras.losses
tensorflow.keras.losses.build_vgg19_loss_network = build_vgg19_loss_network
It didn't work.
I read on GitHub that I could use custom objects with the Keras load_model function, but I only want to use the Keras compile function, not load_model.
My final question
I want to make only minor changes to my code, since it works fine. So I don't want, for example, to replace compile with load_model. With this constraint, could you please help me make my tflite_convert CLI work with my custom loss function?

Since the TFLite conversion is failing due to a custom loss function, you can save the model file without keeping the optimizer details. To do that, set the include_optimizer parameter to False as shown below:
model.save('model.h5', include_optimizer=False)
Now, if all the layers inside your model are convertible, they should get converted into a TFLite file.
Edit:
You can then convert the h5 file like this:
import tensorflow as tf
model = tf.keras.models.load_model('model.h5') # srgan.h5 for you
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)
The usual practice for overcoming unsupported operators in TFLite conversion is documented here.
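If you prefer not to re-save the model first, a hedged alternative is to load the existing srgan.h5 with compile=False, so the custom loss function is never deserialized, and convert from there:
import tensorflow as tf
# Skipping compilation means the custom loss never has to be resolved
model = tf.keras.models.load_model('srgan.h5', compile=False)
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)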

I had the same error. I recommend changing the loss to "mse" since you already have a well-trained model and you don't need to train with the .tflite file.
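A minimal sketch of that idea, reusing the generator_model and the_optimizer variables from the question (the loss choice no longer matters for inference, so any built-in loss works):
# Re-compile with a built-in loss so saving/conversion has nothing custom to resolve
generator_model.compile(optimizer=the_optimizer, loss='mse')
generator_model.save('srgan.h5', include_optimizer=False)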

Related

How would I convert this TensorFlow image classification model to Core ML?

I’m learning TensorFlow and want to convert an image classification model to Core ML for use in an iOS app.
This TensorFlow image classification tutorial is a close match to what I want to do for the training, but I haven’t been able to figure out how to convert that to Core ML.
Here’s what I’ve tried, adding the following to the end of the Colab notebook for the tutorial:
# install coremltools
!pip install coremltools
# import coremltools
import coremltools as ct
# define the input type
image_input = ct.ImageType()
# create classifier configuration with the class labels
classifier_config = ct.ClassifierConfig(class_names)
# perform the conversion
coreml_model = ct.convert(
    model, inputs=[image_input], classifier_config=classifier_config,
)
# print info about the converted model
print(coreml_model)
# save the file
coreml_model.save('my_coreml_model')
That successfully creates an mlmodel file, but when I download the file and open it in Xcode to test it (under the “Preview” tab) it shows results like “Roses 900% Confidence” and “Tulips 1,120% Confidence”. For my uses, the confidence percentage needs to be from 0 to 100%, so I think I’m missing some parameter for the conversion.
On import coremltools as ct I do get some warnings like “WARNING:root:TensorFlow version 2.8.2 has not been tested with coremltools. You may run into unexpected errors.” but I’m guessing that’s not the problem since the conversion doesn’t report any errors.
Based on information here, I’ve also tried setting a scale on the image input:
image_input = ct.ImageType(scale=1/255.0)
… but that made things worse as it then has around 315% confidence that every image is a dandelion. A few other attempts at setting a scale / bias all resulted in the same thing.
At this point I’m not sure what else to try. Any help is appreciated!
The last layer of your model should be something like this:
layers.Dense(num_classes, activation='softmax')
The softmax function transforms your output into the probabilities you need.
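If you'd rather not retrain, a hedged alternative sketch (not part of the answer above) is to wrap the already-trained model in a Softmax layer just for the conversion, reusing model, image_input and classifier_config from the question:
import tensorflow as tf
import coremltools as ct

# Append a softmax so the converted classifier reports probabilities in [0, 1]
probability_model = tf.keras.Sequential([model, tf.keras.layers.Softmax()])
coreml_model = ct.convert(
    probability_model, inputs=[image_input], classifier_config=classifier_config,
)
coreml_model.save('my_coreml_model')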

keras multi_gpu_model saved_model failed to load model in TF2 code

I have trained a multi_gpu_model using TensorFlow 1.13/1.14 and saved it with keras.model.save('<.hdf5>').
Now, after migrating to TensorFlow 2.4.1, in which Keras is integrated as tensorflow.keras, I cannot call tensorflow.keras.models.load_model as I did before, due to the following error:
AttributeError: module 'tensorflow.python.keras.backend' has no attribute 'slice'
After trying to import keras.models.load_model, and trying different versions of keras (2.2.4 -> 2.4.1) and tensorflow (2.2 -> 2.4.1), I still cannot load the model from my .hdf5 file using my TF 2.2+ code.
I do know that in TF 2.X+ we can train using distributed machines by implementing the "strategy" scope, and it does work, but I have a lot of "old" models that need to work on the same code base, which is now being migrated to TF 2.4.1.
Apparently the problem was not the TF versions, but the way I was saving my models on my TF 1.X code versions.
I used the keras.multi_gpu_model class for both training and saving, which is wrong, as clearly stated in the Keras documentation:
"To save the multi-gpu model, use .save(fname) or .save_weights(fname)
with the template model (the argument you passed to multi_gpu_model),
rather than the model returned by multi_gpu_model."
So, after figuring this out, a method for model conversion using TF 1.X code was adopted (see the sketch after this list):
build your model from scratch, namely new_model
load your pre-trained weights from the multi_gpu_model, namely 'old_model'
copy your old_model's weights, found at old_model.layers[3] (due to the wrong usage of multi_gpu_model), to your new_model
save new_model as .hdf5 file
use new_model.hdf5 everywhere - TF 1.X and TF 2.X
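A minimal sketch of those steps in TF 1.X code, assuming create_model is your own function that rebuilds the original (template) architecture and 'old_multi_gpu.hdf5' is a hypothetical name for the file saved from the multi_gpu_model:
from keras.models import load_model

# Load the wrongly saved multi-GPU model (TF 1.X / standalone Keras environment)
old_model = load_model('old_multi_gpu.hdf5')
# Rebuild the template architecture from scratch
new_model = create_model()
# The template model sits inside the multi-GPU wrapper, here at layers[3]
new_model.set_weights(old_model.layers[3].get_weights())
# Save a clean single-GPU model usable in both TF 1.X and TF 2.X
new_model.save('new_model.hdf5')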

Tensorflow 2.3.0 -> 2.2.0 compatibility: ValueError: Unknown layer: Functional

I'm having a problem similar to the one described here:
ValueError: Unknown layer: Functional
import tensorflow as tf
model = tf.keras.models.load_model("model.h5")
which throws: ValueError: Unknown layer: Functional.
I'm pretty sure this is because the h5 file was saved in TF 2.3.0 and I'm trying to load it in 2.2.0. I'd rather not convert using tf 2.3.0 directly, and I'm hoping to find a way of manually fixing the h5py file itself, or passing the right custom object to the model loader. I've noticed that it seems like it's just an extra key wherever the config file is stored, e.g. https://github.com/tensorflow/tensorflow/issues/41929
The problem is, I'm not sure how to manually get rid of the Functional layer in the h5 file. Specifically, I've tried:
import h5py
f = h5py.File("model.h5",'r')
print(f['model_weights'].keys())
which gives:
<KeysViewHDF5 ['concatenate_1', 'conv1d_3', 'conv1d_4', 'conv1d_5', 'dense_1', 'dropout_4', 'dropout_5', 'dropout_6', 'dropout_7', 'embedding_1', 'global_average_pooling1d_1', 'global_max_pooling1d_1', 'input_2']>
and I don't see the Functional layer anywhere. Where exactly is the config for the model stored in this file? E.g. I'm looking for something like {"class_name": "Functional", "config": {"name": "model", "layers":...}}
Question: is there a way I can manually edit the h5 file using h5py to get rid of the Functional layer?
Alternatively, can I pass a specific custom_objects={'Functional': ???} to the load_model function?
I've tried {'Functional': tf.keras.models.Model}, but that returns ('Keyword argument not understood:', 'groups'), because I think it's trying to load a model into weights?
I had a similar problem. The only way I could solve it without changing the TensorFlow version and retraining the model is by building the model structure again using the Keras API in TensorFlow 2.2.0 and then calling:
model.load_weights(<h5 file>)
where the original h5 file was created using TensorFlow 2.3.0. If you already have the code that builds the model structure then this method should be relatively easy since all you have to do is replace load_model(<h5 file>) with the line above.
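A minimal sketch of that workaround, assuming build_model is your own (hypothetical) function that recreates the exact architecture used for training:
import tensorflow as tf

# Rebuild the architecture in TF 2.2.0, then load only the weights from the TF 2.3.0 file
model = build_model()
model.load_weights("model.h5")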
Just change
from keras.models import load_model
to
from tensorflow.keras.models import load_model
and then call
load_model('model.h5', compile=False)

Exporting a frozen graph .pb file in Tensorflow 2

I've been trying out the TensorFlow 2 alpha and I have been trying to freeze and export a model to a .pb GraphDef file.
In Tensorflow 1 I could do something like this:
# Freeze the graph.
frozen_graph_def = tf.graph_util.convert_variables_to_constants(
    sess,
    sess.graph_def,
    output_node_names)
# Save the frozen graph to .pb file.
with open('model.pb', 'wb') as f:
    f.write(frozen_graph_def.SerializeToString())
However this doesn't seem possible anymore as convert_variables_to_constants is removed and use of sessions is discouraged.
I looked around and found the freeze_graph utility
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/tools/freeze_graph.py, which works with SavedModel exports.
Is there still some way to do it within Python, or am I meant to switch and use this tool now?
I have also faced this same problem while migrating from TensorFlow 1.x to TensorFlow 2.0 beta.
This problem can be solved by 2 methods:
The first is to go to the TensorFlow 2.0 docs, search for the methods you have used, and change the syntax for each line.
The second is to use Google's tf_upgrade_v2 script:
tf_upgrade_v2 --infile your_tf1_script_file --outfile converted_tf2_file
Try the above command to change your TensorFlow 1.x script to TensorFlow 2.0; it should solve most of your problems.
Also, you can rename the method manually by referring to the documentation:
Rename 'tf.graph_util.convert_variables_to_constants' to 'tf.compat.v1.graph_util.convert_variables_to_constants'
The main problem is that in TensorFlow 2.0 much of the syntax and many functions have changed; try referring to the TensorFlow 2.0 docs or use Google's tf_upgrade_v2 script.
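For reference, a hedged sketch of the renamed call under the compat.v1 API, assuming a v1-style session and the output_node_names list from the original snippet:
import tensorflow as tf

tf.compat.v1.disable_eager_execution()
with tf.compat.v1.Session() as sess:
    # ... build or load your graph and initialize variables here ...
    frozen_graph_def = tf.compat.v1.graph_util.convert_variables_to_constants(
        sess, sess.graph_def, output_node_names)
    with open('model.pb', 'wb') as f:
        f.write(frozen_graph_def.SerializeToString())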
Not sure if you've seen this Tensorflow 2.0 issue, but this response seems to be a work-around:
https://github.com/tensorflow/tensorflow/issues/29253#issuecomment-530782763
Note: this hasn't worked for my NLP model, but maybe it will work for you. The suggested workaround is to use model.save_weights('weights.h5') while in the TF 2.0 environment. Then create a new environment with TF 1.14 and do all the following steps in the TF 1.14 env. Build your model with model = create_model() and use model.load_weights('weights.h5') to load the weights back into your model. Then save the entire model with model.save('final_model.h5'). If you have success with the above steps, follow the rest of the steps in the link to use freeze_graph.
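A minimal sketch of that two-environment flow, assuming create_model is your own function that rebuilds the same architecture:
# In the TF 2.0 environment: save only the weights
model.save_weights('weights.h5')

# In a separate TF 1.14 environment: rebuild, reload, and save the full model
model = create_model()
model.load_weights('weights.h5')
model.save('final_model.h5')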

Error with 8-bit Quantization in Tensorflow

I have been experimenting with the new 8-bit quantization feature available in TensorFlow. I could run the example given in the blog post (quantization of googlenet) without any issue, and it works fine for me!
Now, I would like to apply the same to a simpler network. So I used a pre-trained network for CIFAR-10 (which was trained in Caffe), extracted its parameters, created the corresponding graph in TensorFlow, initialized the weights with these pre-trained weights, and finally saved it as a GraphDef object. See this IPython Notebook for the full procedure.
Now I applied the 8-bit quantization with the TensorFlow script as mentioned in Pete Warden's blog:
bazel-bin/tensorflow/contrib/quantization/tools/quantize_graph --input=cifar.pb --output=qcifar.pb --mode=eightbit --bitdepth=8 --output_node_names="ArgMax"
Now I wanted to run the classification on this quantized network. So I loaded the new qcifar.pb into a TensorFlow session and passed the image (the same way I passed it to the original version). Full code can be found in this IPython Notebook.
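For context, a hedged sketch of how the quantized GraphDef is loaded and run (TF 1.x-era API; 'input:0' is a hypothetical placeholder name, not taken from the notebook):
import tensorflow as tf

# Load the quantized GraphDef produced by quantize_graph
with tf.gfile.GFile('qcifar.pb', 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

with tf.Session() as sess:
    tf.import_graph_def(graph_def, name='')
    argmax = sess.graph.get_tensor_by_name('ArgMax:0')
    # prediction = sess.run(argmax, feed_dict={'input:0': image})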
But as you can see at the end, I am getting the following error:
NotFoundError: Op type not registered 'QuantizeV2'
Can anybody suggest what I am missing here?
Because the quantized ops and kernels are in contrib, you'll need to explicitly load them in your python script. There's an example of that in the quantize_graph.py script itself:
from tensorflow.contrib.quantization import load_quantized_ops_so
from tensorflow.contrib.quantization.kernels import load_quantized_kernels_so
This is something that we should update the documentation to mention!