Why does the Keras Lambda layer cause a problem in Mask_RCNN? - tensorflow

I'm using the Mask_RCNN package from this repo: https://github.com/matterport/Mask_RCNN.
I tried to train my own dataset using this package but it gives me an error at the beginning.
2020-11-30 12:13:16.577252: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcuda.so.1
2020-11-30 12:13:16.587017: E tensorflow/stream_executor/cuda/cuda_driver.cc:314] failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
2020-11-30 12:13:16.587075: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (7612ade969e5): /proc/driver/nvidia/version does not exist
2020-11-30 12:13:16.587479: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN)to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2020-11-30 12:13:16.593569: I tensorflow/core/platform/profile_utils/cpu_utils.cc:104] CPU Frequency: 2300000000 Hz
2020-11-30 12:13:16.593811: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x1b2aa00 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-11-30 12:13:16.593846: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
Traceback (most recent call last):
File "machines.py", line 345, in <module>
model_dir=args.logs)
File "/content/Mask_RCNN/mrcnn/model.py", line 1837, in __init__
self.keras_model = self.build(mode=mode, config=config)
File "/content/Mask_RCNN/mrcnn/model.py", line 1934, in build
anchors = KL.Lambda(lambda x: tf.Variable(anchors), name="anchors")(input_image)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py", line 926, in __call__
input_list)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py", line 1117, in _functional_construction_call
outputs = call_fn(cast_inputs, *args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/layers/core.py", line 904, in call
self._check_variables(created_variables, tape.watched_variables())
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/layers/core.py", line 931, in _check_variables
raise ValueError(error_str)
ValueError:
The following Variables were created within a Lambda layer (anchors)
but are not tracked by said layer:
<tf.Variable 'anchors/Variable:0' shape=(1, 261888, 4) dtype=float32>
The layer cannot safely ensure proper Variable reuse across multiple
calls, and consquently this behavior is disallowed for safety. Lambda
layers are not well suited to stateful computation; instead, writing a
subclassed Layer is the recommend way to define layers with
Variables.
I looked up the part of the code responsible for the problem (located in /mrcnn/model.py, line 1935 in the repo):
anchors = KL.Lambda(lambda x: tf.Variable(anchors), name="anchors")(input_image)
If anyone has an idea how to solve this or has already solved it, please share the solution.

Go to mrcnn/model.py and add:
class AnchorsLayer(KL.Layer):
    def __init__(self, anchors, name="anchors", **kwargs):
        super(AnchorsLayer, self).__init__(name=name, **kwargs)
        self.anchors = tf.Variable(anchors)

    def call(self, dummy):
        return self.anchors

    def get_config(self):
        config = super(AnchorsLayer, self).get_config()
        return config
Then find the line:
anchors = KL.Lambda(lambda x: tf.Variable(anchors), name="anchors")(input_image)
and replace it with:
anchors = AnchorsLayer(anchors, name="anchors")(input_image)
Works like a charm in TF 2.4!

ROOT CAUSE:
The behavior of the Keras Lambda layer changed between TensorFlow 1.X and TensorFlow 2.X.
In Keras on TensorFlow 1.X, every tf.Variable and tf.get_variable was automatically tracked into layer.weights via a variable creator context, so it automatically received gradients and was trainable. That approach conflicted with the autograph compilation that converts Python code into an execution graph in TensorFlow 2.X, so it was removed, and the Lambda layer now checks for variable creation and raises the error you see. In short, a Lambda layer in TensorFlow 2.X has to be stateless. If you want to create a variable, the correct way in TensorFlow 2.X is to subclass the Layer class and add the trainable weight as a class member.
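A minimal sketch of the difference, assuming TF 2.x and a toy input tensor (not the Mask_RCNN code):
import tensorflow as tf
from tensorflow.keras import layers as KL

inp = tf.keras.Input(shape=(4,))
# Stateless Lambda: allowed, because no variables are created inside the function.
doubled = KL.Lambda(lambda x: x * 2.0)(inp)
# Stateful Lambda: creates a tf.Variable inside the function and would raise the
# same ValueError shown above, so it is left commented out here.
# bad = KL.Lambda(lambda x: tf.Variable(tf.zeros((4,))), name="bad")(inp)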
SOLUTIONS:
There are 2 choices -
Switch back to TensorFlow 1.X; this error will not be raised.
Replace the Lambda layer with a subclass of the Keras Layer class:
class AnchorsLayer(tensorflow.keras.layers.Layer):
    def __init__(self, anchors):
        super(AnchorsLayer, self).__init__()
        self.anchors_v = tf.Variable(anchors)

    def call(self, dummy):
        return self.anchors_v

# Then replace the Lambda call with this:
anchors = AnchorsLayer(anchors)(input_image)

Related

TensorFlow Keras SavedModel throws a TypeError after being saved and loaded twice

When I create a Keras model with one or more custom layers, I can use the model.save() method to persist the Keras model using the TensorFlow SavedModel format.
I can load this model from the filesystem using tf.keras.models.load_model() function and save it to the filesystem again.
But when I load the SavedModel from the filesystem a second time, it fails with this exception:
TypeError: f(inputs, training, training, training, training, *, training, training) missing 1 required argument: training
You can try replicating this issue with the following code:
import tensorflow as tf

class CustomLayer(tf.keras.layers.Layer):
    def call(self, inputs, *args, **kwargs):
        return inputs

model1 = tf.keras.Sequential([
    CustomLayer()
])
model1.build((None, 1))
model1.compile()
model1.save("model1")

model2 = tf.keras.models.load_model("model1")
model2.save("model2")

# This line should raise a TypeError.
model3 = tf.keras.models.load_model("model2")
Why the problem exists
The problem is that the TensorFlow SavedModel format does not actually serialize custom Python code. It only saves the TensorFlow graph generated by custom Keras layers and other Python objects.
The tf.keras.models.load_model() function--by default--does not return the Python layer. Instead, it returns a placeholder layer containing the same part of the TensorFlow computation graph. We can see this in the example in my question:
>>> model1.layers
[<__main__.CustomLayer at 0x7ff04c14ee20>]
>>> model2.layers
[<keras.saving.saved_model.load.CustomLayer at 0x7ff114fd7be0>]
When model2 is saved and loaded from the filesystem, TensorFlow cannot correctly parse the *args and **kwargs arguments in CustomLayer.call().
I don't know whether the actual bug is within the saving code, the loading code, or both.
The real fix needs to happen within TensorFlow/Keras, but in the meantime, there are
Workarounds
You can choose any ONE of the below workarounds to avoid serialization errors with custom Keras layers.
Change the signature on Layer.call()
Currently, the official method signature on Layer.call() is def call(self, inputs, *args, **kwargs):
But TensorFlow will throw a TypeError when trying to load a model with a custom layer with this signature. To fix the error, write all of your custom layers with a signature of def call(self, inputs):. If your layer behaves differently during training or inference, then you can use the method signature def call(self, inputs, training=None):
This makes it easier for TensorFlow to generate the placeholder layers in the keras.saving.saved_model.load module. But the placeholder layer is still not exactly the same as the original Python code.
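For example, a sketch of the safer signatures described above (the layer bodies here are just illustrative pass-throughs):
import tensorflow as tf

class PassThroughLayer(tf.keras.layers.Layer):
    # Plain signature: easiest for the SavedModel round-trip.
    def call(self, inputs):
        return inputs

class PassThroughWithTraining(tf.keras.layers.Layer):
    # Only use the training argument if the layer behaves differently
    # during training and inference.
    def call(self, inputs, training=None):
        return inputs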
Use the custom_objects parameter on tf.keras.models.load_model()
It is possible to load a model with its original Python layers instead of the placeholder layers. Just pass a dictionary mapping layer names to Python layer class objects. This requires your code to be able to import the original Python layer. The example in my question can be fixed as follows:
model3 = tf.keras.models.load_model(
    "model2",
    custom_objects=dict(
        CustomLayer=CustomLayer,
    ),
)
Make sure that your layer implements Layer.get_config() and returns a dictionary with all of the parameters needed to recreate the layer from scratch. The layer must be able to be recreated with Layer.from_config().
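As a sketch of what that can look like, here is a hypothetical layer with a single constructor argument (units is just an illustrative parameter):
import tensorflow as tf

class ConfigurableLayer(tf.keras.layers.Layer):
    def __init__(self, units=32, **kwargs):
        super().__init__(**kwargs)
        self.units = units

    def call(self, inputs):
        return inputs

    def get_config(self):
        # Return every constructor argument so from_config() can rebuild the layer.
        config = super().get_config()
        config.update({"units": self.units})
        return config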
Import the Python layer and add it to Keras's global registry
Keras maintains a global registry of custom Python classes and other objects to refer to when loading SavedModels. You can register your custom Keras layer with the @tf.keras.utils.register_keras_serializable() decorator. For example:
@tf.keras.utils.register_keras_serializable(
    package="my_python_package"
)
class CustomLayer(tf.keras.layers.Layer):
    def call(self, inputs, *args, **kwargs):
        return inputs
This method also requires that your layer properly implement Layer.get_config().
Install the Python layer object with tf.keras.utils.custom_object_scope()
Much like the above two solutions, the tf.keras.utils.custom_object_scope() context manager can specify which custom layers to use during deserialization.
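A minimal sketch, reusing the CustomLayer class and the "model2" directory from the example above:
import tensorflow as tf

with tf.keras.utils.custom_object_scope({"CustomLayer": CustomLayer}):
    model3 = tf.keras.models.load_model("model2")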

tflite converter error operation not supported

I was trying to convert a .pb model of ALBERT to TFLite.
I made the .pb model using https://github.com/google-research/albert in TF 1.15,
and I used
converter = tf.compat.v1.lite.TFLiteConverter.from_saved_model(saved_model_dir)  # path to the SavedModel directory
to make the tflite file (in TF 2.4.1),
but:
Traceback (most recent call last):
File "convert.py", line 7, in <module>
tflite_model = converter.convert()
File "/home/pgb/anaconda3/envs/test2/lib/python3.6/site-packages/tensorflow_core/lite/python/lite.py", line 983, in convert
**converter_kwargs)
File "/home/pgb/anaconda3/envs/test2/lib/python3.6/site-packages/tensorflow_core/lite/python/convert.py", line 449, in toco_convert_impl
enable_mlir_converter=enable_mlir_converter)
File "/home/pgb/anaconda3/envs/test2/lib/python3.6/site-packages/tensorflow_core/lite/python/convert.py", line 200, in toco_convert_protos
raise ConverterError("See console for info.\n%s\n%s\n" % (stdout, stderr))
tensorflow.lite.python.convert.ConverterError: See console for info.
2021-04-25 17:30:33.543663: I tensorflow/lite/toco/import_tensorflow.cc:659] Converting unsupported operation: ParseExample
2021-04-25 17:30:33.546255: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before Removing unused ops: 163 operators, 308 arrays (0 quantized)
2021-04-25 17:30:33.547201: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] After Removing unused ops pass 1: 162 operators, 301 arrays (0 quantized)
2021-04-25 17:30:33.548519: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before general graph transformations: 162 operators, 301 arrays (0 quantized)
2021-04-25 17:30:33.550930: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] After general graph transformations pass 1: 134 operators, 264 arrays (0 quantized)
2021-04-25 17:30:33.577037: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] After general graph transformations pass 2: 127 operators, 257 arrays (0 quantized)
2021-04-25 17:30:33.578278: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before Group bidirectional sequence lstm/rnn: 127 operators, 257 arrays (0 quantized)
2021-04-25 17:30:33.579051: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before dequantization graph transformations: 127 operators, 257 arrays (0 quantized)
2021-04-25 17:30:33.580196: I tensorflow/lite/toco/allocate_transient_arrays.cc:345] Total transient array allocated size: 0 bytes, theoretical optimal value: 0 bytes.
2021-04-25 17:30:33.580514: I tensorflow/lite/toco/toco_tooling.cc:454] Number of parameters: 11640702
2021-04-25 17:30:33.580862: E tensorflow/lite/toco/toco_tooling.cc:481] We are continually in the process of adding support to TensorFlow Lite for more ops. It would be helpful if you could inform us of how this conversion went by opening a github issue at https://github.com/tensorflow/tensorflow/issues/new?template=40-tflite-op-request.md
and pasting the following:
Some of the operators in the model are not supported by the standard TensorFlow Lite runtime. If those are native TensorFlow operators, you might be able to use the extended runtime by passing --enable_select_tf_ops, or by setting target_ops=TFLITE_BUILTINS,SELECT_TF_OPS when calling tf.lite.TFLiteConverter(). Otherwise, if you have a custom implementation for them you can disable this error with --allow_custom_ops, or by setting allow_custom_ops=True when calling tf.lite.TFLiteConverter(). Here is a list of builtin operators you are using: ADD, ARG_MAX, CAST, EXPAND_DIMS, FILL, FULLY_CONNECTED, GATHER, MEAN, MUL, PACK, POW, RESHAPE, RSQRT, SHAPE, SOFTMAX, SQUARED_DIFFERENCE, SQUEEZE, STRIDED_SLICE, SUB, TANH, TRANSPOSE. Here is a list of operators for which you will need custom implementations: BatchMatMul, ParseExample.
Traceback (most recent call last):
File "/home/pgb/anaconda3/envs/test2/bin/toco_from_protos", line 8, in <module>
sys.exit(main())
File "/home/pgb/anaconda3/envs/test2/lib/python3.6/site-packages/tensorflow_core/lite/toco/python/toco_from_protos.py", line 89, in main
app.run(main=execute, argv=[sys.argv[0]] + unparsed)
File "/home/pgb/anaconda3/envs/test2/lib/python3.6/site-packages/tensorflow_core/python/platform/app.py", line 40, in run
_run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
File "/home/pgb/anaconda3/envs/test2/lib/python3.6/site-packages/absl/app.py", line 300, in run
_run_main(main, args)
File "/home/pgb/anaconda3/envs/test2/lib/python3.6/site-packages/absl/app.py", line 251, in _run_main
sys.exit(main(argv))
File "/home/pgb/anaconda3/envs/test2/lib/python3.6/site-packages/tensorflow_core/lite/toco/python/toco_from_protos.py", line 52, in execute
enable_mlir_converter)
Exception: We are continually in the process of adding support to TensorFlow Lite for more ops. It would be helpful if you could inform us of how this conversion went by opening a github issue at https://github.com/tensorflow/tensorflow/issues/new?template=40-tflite-op-request.md
and pasting the following:
Some of the operators in the model are not supported by the standard TensorFlow Lite runtime. If those are native TensorFlow operators, you might be able to use the extended runtime by passing --enable_select_tf_ops, or by setting target_ops=TFLITE_BUILTINS,SELECT_TF_OPS when calling tf.lite.TFLiteConverter(). Otherwise, if you have a custom implementation for them you can disable this error with --allow_custom_ops, or by setting allow_custom_ops=True when calling tf.lite.TFLiteConverter(). Here is a list of builtin operators you are using: ADD, ARG_MAX, CAST, EXPAND_DIMS, FILL, FULLY_CONNECTED, GATHER, MEAN, MUL, PACK, POW, RESHAPE, RSQRT, SHAPE, SOFTMAX, SQUARED_DIFFERENCE, SQUEEZE, STRIDED_SLICE, SUB, TANH, TRANSPOSE. Here is a list of operators for which you will need custom implementations: BatchMatMul, ParseExample.
So I used
converter.allow_custom_ops = True
and it worked, but when I tried to measure the runtime on an Android device with the method from https://www.tensorflow.org/lite/performance/measurement,
nothing came out (and the CPU went idle).
In the ALBERT GitHub code I cannot find BatchMatMul or ParseExample; where did they come from?
Is there any way besides converter.allow_custom_ops = True?
Could the failure to run the model via adb be due to converter.allow_custom_ops = True?
Please consider using the Select TF option in order to fall back to the TF ops when TFLite builtin op coverage does not fit your case.
For the conversion procedure, you can enable the Select TF option as follows:
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,  # enable TensorFlow Lite ops.
    tf.lite.OpsSet.SELECT_TF_OPS     # enable TensorFlow ops.
]
tflite_model = converter.convert()
Allowing custom ops requires you to write the TFLite custom ops for the ops that are not covered by the TFLite builtin op set. For example, the BatchMatMul and ParseExample ops would need to be implemented by yourself. In most cases, using the existing TF op implementations is much easier than implementing custom ops.
Please refer to this link.
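Putting the pieces together, a sketch of the whole conversion with the Select TF ops option (this assumes the TF2 converter API; the SavedModel path and output filename are placeholders):
import tensorflow as tf

saved_model_dir = "path/to/albert_saved_model"  # placeholder path
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,  # TFLite builtin ops
    tf.lite.OpsSet.SELECT_TF_OPS,    # fall back to TF ops such as ParseExample
]
tflite_model = converter.convert()
with open("albert.tflite", "wb") as f:
    f.write(tflite_model)
Note that a model converted this way also needs the Select TF ops runtime linked into the Android app or benchmark binary; without it the fallback ops cannot be resolved at inference time, which may explain why nothing comes out on the device.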

Tf 2.0 MirroredStrategy on Albert TF Hub model (multi gpu)

I'm trying to run the ALBERT TensorFlow Hub version on multiple GPUs on the same machine. The model works perfectly on a single GPU.
This is the structure of my code:
strategy = tf.distribute.MirroredStrategy()
print('Number of devices: {}'.format(strategy.num_replicas_in_sync)) # it prints 2 .. correct

if __name__ == "__main__":
    with strategy.scope():
        run()
Where in run() function, I read the data, build the model, and fit it.
I'm getting this error:
Traceback (most recent call last):
File "Albert.py", line 130, in <module>
run()
File "Albert.py", line 88, in run
model = build_model(bert_max_seq_length)
File "Albert.py", line 55, in build_model
model.compile(loss="categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
File "/home/****/py_transformers/lib/python3.5/site-packages/tensorflow_core/python/training/tracking/base.py", line 457, in _method_wrapper
result = method(self, *args, **kwargs)
File "/home/bighanem/py_transformers/lib/python3.5/site-packages/tensorflow_core/python/keras/engine/training.py", line 471, in compile
' model.compile(...)'% (v, strategy))
ValueError: Variable (<tf.Variable 'bert/embeddings/word_embeddings:0' shape=(30000, 128) dtype=float32>) was not created in the distribution strategy scope of (<tensorflow.python.distribute.mirrored_strategy.MirroredStrategy object at 0x7f62e399df60>). It is most likely due to not all layers or the model or optimizer being created outside the distribution strategy scope. Try to make sure your code looks similar to the following.
with strategy.scope():
    model = _create_model()
    model.compile(...)
Is it possible that this error occurs because the Albert model was prepared beforehand by the TensorFlow team (built and compiled)?
Edited:
To be precise, the TensorFlow version is 2.1.
Also, this is the way I load the Albert pretrained model:
features = {"input_ids": in_id, "input_mask": in_mask, "segment_ids": in_segment}
albert = hub.KerasLayer(
    "https://tfhub.dev/google/albert_xxlarge/3",
    trainable=False, signature="tokens", output_key="pooled_output",
)
x = albert(features)
Following this tutorial: SavedModels from TF Hub in TensorFlow 2
Two-part answer:
1) TF Hub hosts two versions of ALBERT (each in several sizes):
https://tfhub.dev/google/albert_base/3 etc. from the Google research team that originally developed ALBERT comes in the hub.Module format for TF1. This will likely not work with a TF2 distribution strategy.
https://tfhub.dev/tensorflow/albert_en_base/1 etc. from the TensorFlow Model Garden comes in the revised TF2 SavedModel format. Please try this one for use in TF2 with a distribution strategy.
2) That said, the immediate problem appears to be what is explained in the error message (abridged):
Variable 'bert/embeddings/word_embeddings' was not created in the distribution strategy scope ... Try to make sure your code looks similar to the following.
with strategy.scope():
    model = _create_model()
    model.compile(...)
For a SavedModel (from TF Hub or otherwise), it's the loading that needs to happen under the distribution strategy scope, because that's what's re-creating the tf.Variable objects in the current program. Specifically, any of the following ways to load a TF2 SavedModel from TF Hub have to occur under the distribution strategy scope for distribution to work (see the sketch after the list):
tf.saved_model.load();
hub.load(), which just calls tf.saved_model.load() (after downloading if necessary);
hub.KerasLayer when used with a string-valued model handle, on which it then calls hub.load().
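A minimal sketch of the last option with hub.KerasLayer (the handle is the TF2 ALBERT SavedModel mentioned above; the rest of the model building is elided):
import tensorflow as tf
import tensorflow_hub as hub

strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    # Loading here means the tf.Variable objects are created inside the scope.
    albert = hub.KerasLayer(
        "https://tfhub.dev/tensorflow/albert_en_base/1",
        trainable=False,
    )
    # ... build the rest of the model and call model.compile() inside this scope too.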

How to predict with multiple models (a TensorFlow .pb model and Keras .h5 models) at the same time in Flask?

I will try to describe the situation completely, but due to my limited language ability there may be some unclear statements. Please let me know and I will try to explain what I mean.
Recently, I wanted to apply facenet (I mean davisking's project on GitHub) to my project, so I wrote a class:
class FacenetEmbedding:
    def __init__(self, model_path):
        self.sess = tf.InteractiveSession()
        self.sess.run(tf.global_variables_initializer())
        # Load the model
        facenet.load_model(model_path)
        # Get input and output tensors
        self.images_placeholder = tf.get_default_graph().get_tensor_by_name("input:0")
        self.tf_embeddings = tf.get_default_graph().get_tensor_by_name("embeddings:0")
        self.phase_train_placeholder = tf.get_default_graph().get_tensor_by_name("phase_train:0")

    def get_embedding(self, images):
        feed_dict = {self.images_placeholder: images, self.phase_train_placeholder: False}
        embedding = self.sess.run(self.tf_embeddings, feed_dict=feed_dict)
        return embedding

    def free(self):
        self.sess.close()
I can use this class independently in Flask.
model_path = "models/20191025-223514/"
fe = FacenetEmbedding(model_path)
But later I have different demands. I trained two more models using Keras, and I want to use them (.h5 models) together with the above facenet model for prediction. I load them first.
modelPic = load_model('models/pp.h5')
lePic = pickle.loads(open('models/pp.pickle', "rb").read())
print(modelPic.predict(np.zeros((1, 128, 128, 3))))
modelM = load_model('models/pv.h5')
leM = pickle.loads(open('models/pv.pickle', "rb").read())
print(modelM.predict(np.zeros((1, 128, 128, 3))))
I predict on a fake image to test the models, and it seems to work normally. But when I run the Flask server and try to post an image to this API, the following message pops up and the prediction doesn't work.
Tensor input_1_3:0, specified in either feed_devices or fetch_devices was not found in the Graph
Exception ignored in: <bound method BaseSession._Callable.__del__ of <tensorflow.python.client.session.BaseSession._Callable object at 0x7ff27d0f0dd8>>
Traceback (most recent call last):
File "/home/idgate/.virtualenvs/Line_POC/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1455, in __del__
self._session._session, self._handle, status)
File "/home/idgate/.virtualenvs/Line_POC/lib/python3.6/site-packages/tensorflow/python/framework/errors_impl.py", line 528, in __exit__
c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: No such callable handle: 140675571821088
If I use these two Keras models without loading the facenet model in the Flask server, it works normally. I think something must be colliding (maybe the session?) that keeps the three models from working simultaneously. But I don't know how to solve this problem. Please help me! Thanks in advance.
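For what it is worth, a common way to keep TF1-style models from colliding in one process is to give each model its own tf.Graph and tf.Session instead of sharing the default (interactive) session. A rough, untested sketch of the facenet class rewritten that way (facenet is the helper module already used in the question):
import tensorflow as tf
import facenet  # the helper module from the question's code

class FacenetEmbedding:
    def __init__(self, model_path):
        # A dedicated graph and session, so this model does not share state
        # with the Keras models loaded elsewhere in the Flask process.
        self.graph = tf.Graph()
        self.sess = tf.Session(graph=self.graph)
        with self.graph.as_default(), self.sess.as_default():
            facenet.load_model(model_path)
            self.images_placeholder = self.graph.get_tensor_by_name("input:0")
            self.tf_embeddings = self.graph.get_tensor_by_name("embeddings:0")
            self.phase_train_placeholder = self.graph.get_tensor_by_name("phase_train:0")

    def get_embedding(self, images):
        feed_dict = {self.images_placeholder: images,
                     self.phase_train_placeholder: False}
        return self.sess.run(self.tf_embeddings, feed_dict=feed_dict)

    def free(self):
        self.sess.close()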

Error converting delf to tensorflow js web

I'm following this [1] and trying to convert this [2] to tensorflow js with [0]. I run into [3]. Any chance anyone knows what's going on?
[0]
tensorflowjs_converter \
    --input_format=tf_hub \
    'https://tfhub.dev/google/delf/1' \
    delf
[1] https://github.com/tensorflow/tfjs-converter#step-1-converting-a-savedmodel-keras-h5-session-bundle-frozen-model-or-tensorflow-hub-module-to-a-web-friendly-format
[2] https://www.tensorflow.org/hub/modules/google/delf/1
[3]
Using TensorFlow backend.
2018-08-21 17:49:34.351121: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
Creating a model with inputs [u'score_threshold', u'image', u'image_scales', u'max_feature_num'] and outputs [u'module_apply_default/NonMaxSuppression/Gather/GatherV2_1', u'module_apply_default/NonMaxSuppression/Gather/GatherV2_3', u'module_apply_default/postprocess_1/pca_l2_normalization', u'module_apply_default/Reshape_4', u'module_apply_default/truediv_2', u'module_apply_default/NonMaxSuppression/Gather/GatherV2', u'module_apply_default/ExpandDims'].
Traceback (most recent call last):
File "/usr/local/bin/tensorflowjs_converter", line 11, in
sys.exit(main())
File "/Users/goto/Library/Python/2.7/lib/python/site-packages/tensorflowjs/converters/converter.py", line 286, in main
strip_debug_ops=FLAGS.strip_debug_ops)
File "/Users/goto/Library/Python/2.7/lib/python/site-packages/tensorflowjs/converters/tf_saved_model_conversion.py", line 420, in convert_tf_hub_module
graph = load_graph(frozen_file, ','.join(output_node_names))
File "/Users/goto/Library/Python/2.7/lib/python/site-packages/tensorflowjs/converters/tf_saved_model_conversion.py", line 63, in load_graph
tf.import_graph_def(graph_def, name='')
File "/Users/goto/Library/Python/2.7/lib/python/site-packages/tensorflow/python/util/deprecation.py", line 432, in new_func
return func(*args, **kwargs)
File "/Users/goto/Library/Python/2.7/lib/python/site-packages/tensorflow/python/framework/importer.py", line 422, in import_graph_def
raise ValueError(str(e))
ValueError: Input 0 of node module_apply_default/while/resnet_v1_50/conv1/Conv2D/ReadVariableOp/Enter was passed float from module/resnet_v1_50/conv1/weights:0 incompatible with expected resource.
What version of the tensorflowjs_converter are you using? My guess is that the DELF model uses some Ops which are unsupported by TFJS. The latest version of the TFJS converter should give clearer error messages about unsupported ops if that is in fact the issue.
Not all TensorFlow Hub modules are TFJS compatible. In particular, there are some Ops which are not implemented in TFJS and so the modules cannot be converted. You can find a list of supported TFJS Ops here
You can try updating to the latest version of the TFJS converter to get a better error message, and update TFJS to see if more of the ops are supported in a more recent version. Otherwise, you can search for open feature requests or file a new one here to request that the Op be supported.