Tensorflow.js error: unknown layer: GaussianNoise - tensorflow

I converted a pretrained Keras model to use with TensorFlow.js, following the steps in this guide.
Now, when I try to load it in JavaScript using
const model = tf.loadModel("{% static 'keras/model.json' %}");
The following error shows up:
Uncaught (in promise) Error: Unknown layer: GaussianNoise. This may be due to one of the following reasons:
1. The layer is defined in Python, in which case it needs to be ported to TensorFlow.js or your JavaScript code.
2. The custom layer is defined in JavaScript, but is not registered properly with
tf.serialization.registerClass().
at new t (errors.ts:48)
at deserializeKerasObject (generic_utils.ts:239)
at deserialize (serialization.ts:31)
at t.fromConfig (models.ts:940)
at deserializeKerasObject (generic_utils.ts:274)
at deserialize (serialization.ts:31)
at models.ts:302
at common.ts:14
at Object.next (common.ts:14)
at i (common.ts:14)
I'm using 0.15.3 version of Tensorflow.js, imported this way:
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs@0.15.3/dist/tf.min.js"></script>
I trained my neural network with Tensorflow 1.12.0 and Keras 2.2.4

You are using the layer tf.layers.gaussianNoise, which is not yet supported by tfjs.
Consider replacing this layer with a supported one.
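Since tfjs cannot deserialize the layer, one option is to remove it on the Python side before converting: GaussianNoise only injects noise during training, so stripping it does not change inference behaviour. A minimal sketch, assuming a Sequential model (a functional graph would need manual rewiring):

```python
import numpy as np
import tensorflow as tf

def strip_gaussian_noise(model):
    """Rebuild a Sequential model without its GaussianNoise layers.

    GaussianNoise is only active during training, so removing it
    leaves inference behaviour unchanged. Sketch: assumes a
    Sequential model whose layers can be reused as-is.
    """
    kept = [layer for layer in model.layers
            if not isinstance(layer, tf.keras.layers.GaussianNoise)]
    return tf.keras.Sequential(kept)
```

After stripping, save the cleaned model and rerun the tensorflowjs converter on it.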

Related

Can not load saved model in keras / tensorflow?

I trained the model using autokeras with TensorFlow 2.5.
I saved the pre-trained model using both methods explained on Keras (TensorFlow) home page.
model.save(f'model_auto_keras{max_trials}.h5')
model.save("keras_test_save_model")
Again, when I try to load the saved model using
model = tf.keras.models.load_model(f'model_auto_keras{max_trials}.h5')
and
model1 = tf.keras.models.load_model("keras_test_save_model/")
both methods fail in my case, with:
ValueError: Unknown layer: Custom>MultiCategoryEncoding.
Please ensure this object is passed to the `custom_objects` argument. See
https://www.tensorflow.org/guide/keras/save_and_serialize#registering_the_custom_object for
details.
The main problem is the custom layer MultiCategoryEncoding, which is not available in Keras.
@krishna
You can try:
model = tf.keras.models.load_model('model.h5', custom_objects={'CategoryLayerName': tf.keras.layers.CategoryEncoding()})
In your model declaration, use the layer's name for the CategoryEncoding layer.
I'm not sure whether it should be tf.keras.layers.CategoryEncoding() or tf.keras.layers.CategoryEncoding.
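For what it's worth, `custom_objects` expects a mapping from the class name recorded in the saved file to the class itself (no parentheses), so Keras can rebuild the layer from its saved config. A self-contained sketch with a hypothetical `Double` layer standing in for the real custom layer:

```python
import numpy as np
import tensorflow as tf

class Double(tf.keras.layers.Layer):
    """Hypothetical custom layer standing in for MultiCategoryEncoding."""
    def call(self, inputs):
        return inputs * 2.0

model = tf.keras.Sequential([Double()])
model(np.zeros((1, 3), dtype="float32"))  # build the model once
model.save("toy_model.h5")

# Map the serialized class name to the class itself (no parentheses):
reloaded = tf.keras.models.load_model(
    "toy_model.h5", custom_objects={"Double": Double})
```

For AutoKeras models specifically, the AutoKeras export docs suggest passing its bundled registry, e.g. load_model(path, custom_objects=ak.CUSTOM_OBJECTS).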

Failure to load model in tensorflow.js

I have converted several transfer-trained models (VGG16, InceptionV3, EfficientNetB0) from TensorFlow in Python to TensorFlow.js.
After converting them to TensorFlow.js, the models fail to load.
One of the errors is:
Uncaught Error: Unknown layer: Functional. This may be due to one of the following reasons:
1. The layer is defined in Python, in which case it needs to be ported to TensorFlow.js or your JavaScript code.
2. The custom layer is defined in JavaScript, but is not registered properly with tf.serialization.registerClass().
at jN (generic_utils.js:242)
at GI (serialization.js:31)
at e.fromConfig (models.js:1026)
at jN (generic_utils.js:277)
at GI (serialization.js:31)
at models.js:295
at u (runtime.js:45)
at Generator._invoke (runtime.js:274)
at Generator.forEach.t.<computed> [as next] (runtime.js:97)
at Wm (runtime.js:728)
Also, there is
jquery-3.3.1.slim.min.js:2 Uncaught Error: Unknown layer: RandomFlip. This may be due to one of the following reasons:
1. The layer is defined in Python, in which case it needs to be ported to TensorFlow.js or your JavaScript code.
2. The custom layer is defined in JavaScript, but is not registered properly with tf.serialization.registerClass().
at jN (generic_utils.js:242)
at GI (serialization.js:31)
at e.fromConfig (models.js:1026)
at jN (generic_utils.js:277)
at GI (serialization.js:31)
at e.fromConfig (models.js:1026)
at jN (generic_utils.js:277)
at GI (serialization.js:31)
at models.js:295
at u (runtime.js:45)
Also,
Failed to load resource: the server responded with a status of 404 ()
What is the problem?
If I use the .json file generated by Teachable Machine, the model can be loaded. (However, the predictions become completely wrong for unknown reasons, and the problem seems to be more than just a labelling issue.)
But if I use a model.json file generated from .h5 or SavedModel via the tensorflowjs converter, the model cannot be loaded in JavaScript, no matter which pretrained model I use or which source format (.h5 or SavedModel) I convert from.
Please help!!
Open the model.json file, search for the keyword Functional (Ctrl+F "Functional"), and replace the word with Model.
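If you have to patch many converted models, the rename can be scripted. A minimal sketch, assuming the standard model.json layout where class names sit under "class_name" keys:

```python
import json

def patch_model_json(path):
    """Recursively replace every class_name "Functional" with "Model"
    so that older TensorFlow.js versions can deserialize the topology."""
    with open(path) as f:
        spec = json.load(f)

    def rename(node):
        if isinstance(node, dict):
            if node.get("class_name") == "Functional":
                node["class_name"] = "Model"
            for value in node.values():
                rename(value)
        elif isinstance(node, list):
            for value in node:
                rename(value)

    rename(spec)
    with open(path, "w") as f:
        json.dump(spec, f)
```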

Deployment of keras layer UpSampling2D to tensorRT

The Keras/TensorFlow layer UpSampling2D() cannot be deployed to TensorRT (known behaviour).
I am trying to find a workaround by replacing the UpSampling2D() layer with another Keras layer with equivalent behaviour.
In theory Conv2DTranspose() should do the job, by setting specific weights and freezing the layer's weights during training.
I am looking for some help on how to do that.
I did a test run by replacing all the UpSampling2D() layers with Conv2DTranspose() in my model and then converted it to UFF. (I only trained the model for 1 epoch to save time.)
The converter then complained about DataFormatVecPermute instead.
Converting conv2d_transpose_1/conv2d_transpose-0-VecPermuteNHWCToNCHW-LayoutOptimizer as custom op: DataFormatVecPermute
Warning: No conversion function registered for layer: DataFormatVecPermute yet.
And the parser in C++ couldn't parse this model successfully either.
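For reference, here is one way to build a Conv2DTranspose that reproduces UpSampling2D(size=2) (nearest-neighbour) exactly, with its kernel set by hand and frozen. This is only a sketch of the weight-fixing idea; it has not been validated against the UFF converter:

```python
import numpy as np
import tensorflow as tf

def nearest_upsample_conv(channels):
    """Conv2DTranspose configured to behave exactly like
    UpSampling2D(size=2) with nearest-neighbour interpolation.

    The kernel copies each input pixel into a 2x2 output block,
    and trainable=False keeps the weights fixed during training.
    """
    layer = tf.keras.layers.Conv2DTranspose(
        filters=channels, kernel_size=2, strides=2,
        padding="valid", use_bias=False, trainable=False)
    layer.build((None, None, None, channels))
    # Conv2DTranspose kernel shape: (kh, kw, out_channels, in_channels)
    kernel = np.zeros((2, 2, channels, channels), dtype=np.float32)
    for c in range(channels):
        kernel[:, :, c, c] = 1.0  # identity per channel
    layer.set_weights([kernel])
    return layer
```

Whether the resulting Conv2DTranspose node survives the UFF conversion without further layout issues is a separate question.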

Export MXNet model to ONNX with _contrib_MultiBoxPrior Error

I created an object detection model in AWS SageMaker, based on SSD/ResNet50 and in MXNet.
Now I would like to optimize it in TensorRT, for which I need to export to ONNX as a first step.
Searching for recommendations on converting _contrib_MultiBoxPrior to a supported symbol didn't yield any results for me.
Basic code
from mxnet.contrib import onnx as onnx_mxnet
import numpy as np

input_shape = (1, 3, 512, 512)
converted_model_path = onnx_mxnet.export_model(sym_file, params_file, [input_shape], np.float32, onnx_file)
The exact error message is
"AttributeError: No conversion function registered for op type _contrib_MultiBoxPrior yet."
What is the recommended way to solve this error?
The implementation of the MultiBoxPrior operator is dependent on ONNX supporting it. You can track the issue here: https://github.com/apache/incubator-mxnet/issues/15181
Alternatively you can try using mxnet-tensorrt. It uses the subgraph API, which means that the symbols that can be executed in TensorRT run in the TensorRT runtime, while the ones that cannot run in the MXNet runtime.
https://mxnet.incubator.apache.org/versions/master/tutorials/tensorrt/inference_with_trt.html
Note that the current version of this tutorial targets MXNet 1.3.0, I believe. An update is coming in the next release with a simpler API and better performance.

CoreMLTools convert causes "Error reading protobuf spec. validator error"

I have been working on building a custom convolutional network, which I saved to an .h5 file. I then applied transfer learning by popping the last (fully connected) layers and compiling the model on the new data, and again saved the model in .h5 format.
The problem occurs when I try to convert this model to the mlModel format. I get the following error:
return _MLModelProxy(filename)
RuntimeError: Error compiling model: "Error reading protobuf spec. validator error: Layer 'conv2d_2__activation__' consumes a layer named 'conv2d_2__activation___output' which is not present in this network."
I am freezing the layers of the original convolutional neural network.
The versions I'm using are:
Keras (2.1.6)
Protobuf(3.6.0)
Tensorflow(1.8.0)
For the conversion :
coreml_model = coremltools.converters.keras.convert(
    pathToh5File,
    class_labels=['0','1','2','3','4','5','6','7','8','9']
)
I've tried adding input names and so on. Still getting the same result.
I would be grateful for any suggestion.
Thank you in advance!