Error converting DELF to TensorFlow.js (web) - tensorflow

I'm following this guide [1], trying to convert this module [2] to TensorFlow.js with the command [0], and running into the error [3]. Any chance anyone knows what's going on?
[0]
tensorflowjs_converter \
    --input_format=tf_hub \
    'https://tfhub.dev/google/delf/1' \
    delf
[1] https://github.com/tensorflow/tfjs-converter#step-1-converting-a-savedmodel-keras-h5-session-bundle-frozen-model-or-tensorflow-hub-module-to-a-web-friendly-format
[2] https://www.tensorflow.org/hub/modules/google/delf/1
[3]
Using TensorFlow backend.
2018-08-21 17:49:34.351121: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
Creating a model with inputs [u'score_threshold', u'image', u'image_scales', u'max_feature_num'] and outputs [u'module_apply_default/NonMaxSuppression/Gather/GatherV2_1', u'module_apply_default/NonMaxSuppression/Gather/GatherV2_3', u'module_apply_default/postprocess_1/pca_l2_normalization', u'module_apply_default/Reshape_4', u'module_apply_default/truediv_2', u'module_apply_default/NonMaxSuppression/Gather/GatherV2', u'module_apply_default/ExpandDims'].
Traceback (most recent call last):
File "/usr/local/bin/tensorflowjs_converter", line 11, in
sys.exit(main())
File "/Users/goto/Library/Python/2.7/lib/python/site-packages/tensorflowjs/converters/converter.py", line 286, in main
strip_debug_ops=FLAGS.strip_debug_ops)
File "/Users/goto/Library/Python/2.7/lib/python/site-packages/tensorflowjs/converters/tf_saved_model_conversion.py", line 420, in convert_tf_hub_module
graph = load_graph(frozen_file, ','.join(output_node_names))
File "/Users/goto/Library/Python/2.7/lib/python/site-packages/tensorflowjs/converters/tf_saved_model_conversion.py", line 63, in load_graph
tf.import_graph_def(graph_def, name='')
File "/Users/goto/Library/Python/2.7/lib/python/site-packages/tensorflow/python/util/deprecation.py", line 432, in new_func
return func(*args, **kwargs)
File "/Users/goto/Library/Python/2.7/lib/python/site-packages/tensorflow/python/framework/importer.py", line 422, in import_graph_def
raise ValueError(str(e))
ValueError: Input 0 of node module_apply_default/while/resnet_v1_50/conv1/Conv2D/ReadVariableOp/Enter was passed float from module/resnet_v1_50/conv1/weights:0 incompatible with expected resource.

What version of the tensorflowjs_converter are you using? My guess is that the DELF model uses some Ops which are unsupported by TFJS. The latest version of the TFJS converter should give clearer error messages about unsupported ops if that is in fact the issue.
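For reference, a quick way to check the installed converter version (assuming a pip install):

pip show tensorflowjs
# or, from Python:
python -c "import tensorflowjs; print(tensorflowjs.__version__)"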
Not all TensorFlow Hub modules are TFJS compatible. In particular, some ops are not implemented in TFJS, so those modules cannot be converted. You can find a list of supported TFJS ops here.
You can try updating to the latest version of the TFJS converter to get a better error message, and updating TFJS to see if more of the ops are supported in a more recent version. Otherwise, you can search for open feature requests or file a new one here to request that the op be supported.
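If you want to compare the DELF module's ops against that list yourself, here is a diagnostic sketch (my own, not from the answer above): it instantiates the module with TF1 and tensorflow_hub and prints the distinct op types in the graph. The input names match the converter log above; the threshold and scale values are just the module's documented example values, so treat them as placeholders.

import tensorflow as tf
import tensorflow_hub as hub

with tf.Graph().as_default() as graph:
    module = hub.Module('https://tfhub.dev/google/delf/1')
    # Build the apply graph with the module's default signature.
    outputs = module(
        {
            'image': tf.placeholder(tf.float32, shape=(None, None, 3)),
            'score_threshold': 100.0,
            'image_scales': [0.25, 0.3536, 0.5, 0.7071, 1.0, 1.4142, 2.0],
            'max_feature_num': 1000,
        },
        as_dict=True)
    # Distinct op types used by the module; compare against the TFJS supported-ops list.
    print(sorted({node.op for node in graph.as_graph_def().node}))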

Related

tflite converter error operation not supported

I was trying to convert a .pb model of ALBERT to TFLite.
I made the .pb model using https://github.com/google-research/albert in TF 1.15, and I used
converter = tf.compat.v1.lite.TFLiteConverter.from_saved_model(saved_model_dir) # path to the SavedModel directory
to make the TFLite file (in TF 2.4.1), but:
Traceback (most recent call last):
File "convert.py", line 7, in <module>
tflite_model = converter.convert()
File "/home/pgb/anaconda3/envs/test2/lib/python3.6/site-packages/tensorflow_core/lite/python/lite.py", line 983, in convert
**converter_kwargs)
File "/home/pgb/anaconda3/envs/test2/lib/python3.6/site-packages/tensorflow_core/lite/python/convert.py", line 449, in toco_convert_impl
enable_mlir_converter=enable_mlir_converter)
File "/home/pgb/anaconda3/envs/test2/lib/python3.6/site-packages/tensorflow_core/lite/python/convert.py", line 200, in toco_convert_protos
raise ConverterError("See console for info.\n%s\n%s\n" % (stdout, stderr))
tensorflow.lite.python.convert.ConverterError: See console for info.
2021-04-25 17:30:33.543663: I tensorflow/lite/toco/import_tensorflow.cc:659] Converting unsupported operation: ParseExample
2021-04-25 17:30:33.546255: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before Removing unused ops: 163 operators, 308 arrays (0 quantized)
2021-04-25 17:30:33.547201: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] After Removing unused ops pass 1: 162 operators, 301 arrays (0 quantized)
2021-04-25 17:30:33.548519: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before general graph transformations: 162 operators, 301 arrays (0 quantized)
2021-04-25 17:30:33.550930: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] After general graph transformations pass 1: 134 operators, 264 arrays (0 quantized)
2021-04-25 17:30:33.577037: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] After general graph transformations pass 2: 127 operators, 257 arrays (0 quantized)
2021-04-25 17:30:33.578278: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before Group bidirectional sequence lstm/rnn: 127 operators, 257 arrays (0 quantized)
2021-04-25 17:30:33.579051: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before dequantization graph transformations: 127 operators, 257 arrays (0 quantized)
2021-04-25 17:30:33.580196: I tensorflow/lite/toco/allocate_transient_arrays.cc:345] Total transient array allocated size: 0 bytes, theoretical optimal value: 0 bytes.
2021-04-25 17:30:33.580514: I tensorflow/lite/toco/toco_tooling.cc:454] Number of parameters: 11640702
2021-04-25 17:30:33.580862: E tensorflow/lite/toco/toco_tooling.cc:481] We are continually in the process of adding support to TensorFlow Lite for more ops. It would be helpful if you could inform us of how this conversion went by opening a github issue at https://github.com/tensorflow/tensorflow/issues/new?template=40-tflite-op-request.md
and pasting the following:
Some of the operators in the model are not supported by the standard TensorFlow Lite runtime. If those are native TensorFlow operators, you might be able to use the extended runtime by passing --enable_select_tf_ops, or by setting target_ops=TFLITE_BUILTINS,SELECT_TF_OPS when calling tf.lite.TFLiteConverter(). Otherwise, if you have a custom implementation for them you can disable this error with --allow_custom_ops, or by setting allow_custom_ops=True when calling tf.lite.TFLiteConverter(). Here is a list of builtin operators you are using: ADD, ARG_MAX, CAST, EXPAND_DIMS, FILL, FULLY_CONNECTED, GATHER, MEAN, MUL, PACK, POW, RESHAPE, RSQRT, SHAPE, SOFTMAX, SQUARED_DIFFERENCE, SQUEEZE, STRIDED_SLICE, SUB, TANH, TRANSPOSE. Here is a list of operators for which you will need custom implementations: BatchMatMul, ParseExample.
Traceback (most recent call last):
File "/home/pgb/anaconda3/envs/test2/bin/toco_from_protos", line 8, in <module>
sys.exit(main())
File "/home/pgb/anaconda3/envs/test2/lib/python3.6/site-packages/tensorflow_core/lite/toco/python/toco_from_protos.py", line 89, in main
app.run(main=execute, argv=[sys.argv[0]] + unparsed)
File "/home/pgb/anaconda3/envs/test2/lib/python3.6/site-packages/tensorflow_core/python/platform/app.py", line 40, in run
_run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
File "/home/pgb/anaconda3/envs/test2/lib/python3.6/site-packages/absl/app.py", line 300, in run
_run_main(main, args)
File "/home/pgb/anaconda3/envs/test2/lib/python3.6/site-packages/absl/app.py", line 251, in _run_main
sys.exit(main(argv))
File "/home/pgb/anaconda3/envs/test2/lib/python3.6/site-packages/tensorflow_core/lite/toco/python/toco_from_protos.py", line 52, in execute
enable_mlir_converter)
Exception: We are continually in the process of adding support to TensorFlow Lite for more ops. It would be helpful if you could inform us of how this conversion went by opening a github issue at https://github.com/tensorflow/tensorflow/issues/new?template=40-tflite-op-request.md
and pasting the following:
Some of the operators in the model are not supported by the standard TensorFlow Lite runtime. If those are native TensorFlow operators, you might be able to use the extended runtime by passing --enable_select_tf_ops, or by setting target_ops=TFLITE_BUILTINS,SELECT_TF_OPS when calling tf.lite.TFLiteConverter(). Otherwise, if you have a custom implementation for them you can disable this error with --allow_custom_ops, or by setting allow_custom_ops=True when calling tf.lite.TFLiteConverter(). Here is a list of builtin operators you are using: ADD, ARG_MAX, CAST, EXPAND_DIMS, FILL, FULLY_CONNECTED, GATHER, MEAN, MUL, PACK, POW, RESHAPE, RSQRT, SHAPE, SOFTMAX, SQUARED_DIFFERENCE, SQUEEZE, STRIDED_SLICE, SUB, TANH, TRANSPOSE. Here is a list of operators for which you will need custom implementations: BatchMatMul, ParseExample.
So I used
converter.allow_custom_ops = True
and it worked, but when I tried to measure the runtime on an Android device with the method from https://www.tensorflow.org/lite/performance/measurement, nothing comes out (and the CPU goes idle).
In the ALBERT GitHub code I can't find BatchMatMul or ParseExample; where did they come from?
Is there any way besides converter.allow_custom_ops = True?
Could the failure to run the model via adb be due to converter.allow_custom_ops = True?
Please consider using the Select TF ops option in order to fall back to TF ops when TFLite builtin op coverage does not fit your case.
For the conversion procedure, you can enable the Select TF ops option as follows:
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,  # enable TensorFlow Lite ops.
    tf.lite.OpsSet.SELECT_TF_OPS     # enable TensorFlow ops.
]
tflite_model = converter.convert()
Allowing custom ops requires you to write TFLite custom ops for the ops that are not covered by the TFLite builtin op set. For example, the BatchMatMul and ParseExample ops would need to be implemented by yourself. In most cases, using the existing TF op implementations is much easier than implementing custom ops.
Please refer to this link.
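Putting the above together for your ALBERT model, a minimal end-to-end sketch (assuming, as in your snippet, that saved_model_dir points at the SavedModel exported in TF 1.15):

import tensorflow as tf

converter = tf.compat.v1.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,  # TFLite builtin ops
    tf.lite.OpsSet.SELECT_TF_OPS     # fall back to TF ops (BatchMatMul, ParseExample)
]
tflite_model = converter.convert()
with open("albert.tflite", "wb") as f:
    f.write(tflite_model)

Note that a model converted with SELECT_TF_OPS also needs the Select TF ops runtime on the device: on Android that means adding the org.tensorflow:tensorflow-lite-select-tf-ops AAR alongside the standard TFLite dependency, and a benchmark binary built without those ops may fail to run the model, which could be why your measurement produced no output.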

Can't convert onnx model to tflite using TF 2.4.1

I have an ONNX model, which I can successfully convert to TF with TF 2.4.1. But when it comes to converting that saved model to TFLite, an error happens.
The code:
import onnx
import tensorflow as tf
from onnx_tf.backend import prepare
print(tf.__version__)
# Convert model.onnx to Tensorflow
onnx_model = onnx.load('model.onnx')
onnx.checker.check_model(onnx_model)
tf_rep = prepare(onnx_model)
tf_rep.export_graph('model')
# Convert saved model to tflite
converter = tf.lite.TFLiteConverter.from_saved_model('model')
tf_lite_model = converter.convert()
open('model.tflite', 'wb').write(tf_lite_model)
Everything goes OK until the conversion step, which ends like so:
>>> tf_lite_model = converter.convert()
2021-04-22 18:18:14.715046: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:316] Ignored output_format.
2021-04-22 18:18:14.715072: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:319] Ignored drop_control_dependency.
2021-04-22 18:18:14.715078: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:325] Ignored change_concat_input_ranges.
2021-04-22 18:18:14.716044: I tensorflow/cc/saved_model/reader.cc:32] Reading SavedModel from: model
2021-04-22 18:18:14.778050: I tensorflow/cc/saved_model/reader.cc:55] Reading meta graph with tags { serve }
2021-04-22 18:18:14.778083: I tensorflow/cc/saved_model/reader.cc:93] Reading SavedModel debug info (if present) from: model
2021-04-22 18:18:14.998062: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:196] None of the MLIR optimization passes are enabled (registered 0 passes)
2021-04-22 18:18:15.043862: I tensorflow/cc/saved_model/loader.cc:206] Restoring SavedModel bundle.
2021-04-22 18:18:15.438804: I tensorflow/cc/saved_model/loader.cc:190] Running initialization op on SavedModel bundle at path: model
2021-04-22 18:18:15.809851: I tensorflow/cc/saved_model/loader.cc:277] SavedModel load for tags { serve }; Status: success: OK. Took 1093808 microseconds.
2021-04-22 18:18:18.757257: I tensorflow/compiler/mlir/tensorflow/utils/dump_mlir_util.cc:194] disabling MLIR crash reproducer, set env var `MLIR_CRASH_REPRODUCER_DIRECTORY` to enable.
loc(callsite(callsite("Pad_16#__inference___call___16503" at "PartitionedCall#__inference_signature_wrapper_16752") at "PartitionedCall")): error: operand #0 does not dominate this use
Traceback (most recent call last):
File "/Users/decades/anaconda3/envs/py38/lib/python3.8/site-packages/tensorflow/lite/python/convert.py", line 210, in toco_convert_protos
model_str = wrap_toco.wrapped_toco_convert(model_flags_str,
File "/Users/decades/anaconda3/envs/py38/lib/python3.8/site-packages/tensorflow/lite/python/wrap_toco.py", line 32, in wrapped_toco_convert
return _pywrap_toco_api.TocoConvert(
Exception: <unknown>:0: error: loc(callsite(callsite("Pad_16#__inference___call___16503" at "PartitionedCall#__inference_signature_wrapper_16752") at "PartitionedCall")): operand #0 does not dominate this use
<unknown>:0: note: loc("PartitionedCall"): called from
<unknown>:0: note: loc(callsite(callsite("Pad_16#__inference___call___16503" at "PartitionedCall#__inference_signature_wrapper_16752") at "PartitionedCall")): operand defined here
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/decades/anaconda3/envs/py38/lib/python3.8/site-packages/tensorflow/lite/python/lite.py", line 739, in convert
result = _convert_saved_model(**converter_kwargs)
File "/Users/decades/anaconda3/envs/py38/lib/python3.8/site-packages/tensorflow/lite/python/convert.py", line 632, in convert_saved_model
data = toco_convert_protos(
File "/Users/decades/anaconda3/envs/py38/lib/python3.8/site-packages/tensorflow/lite/python/convert.py", line 216, in toco_convert_protos
raise ConverterError(str(e))
tensorflow.lite.python.convert.ConverterError: <unknown>:0: error: loc(callsite(callsite("Pad_16#__inference___call___16503" at "PartitionedCall#__inference_signature_wrapper_16752") at "PartitionedCall")): operand #0 does not dominate this use
<unknown>:0: note: loc("PartitionedCall"): called from
<unknown>:0: note: loc(callsite(callsite("Pad_16#__inference___call___16503" at "PartitionedCall#__inference_signature_wrapper_16752") at "PartitionedCall")): operand defined here
I have no idea what this message means, but if I switch to TF 2.2 the conversion passes without errors. The bad thing is that, due to another problem, the initial ONNX-to-TF conversion then fails.
Does anybody have an idea what this message means and what could be done about it?
TIA
Is it possible to share your saved model directory with me? I can help debug it.
The general advice is that there are two possibilities:
(1) The TF Lite converter may not handle the saved model correctly.
(2) The onnx conversion tool may not create a valid TF saved model.
Using a recent TF version (2.5 or tf-nightly) might help resolve this problem in case (1), but it's not guaranteed.
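To try that (shell; best done in a fresh virtualenv so it doesn't clobber your TF 2.4.1 install):

pip install tf-nightly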
I confirmed that the tf-nightly version could convert the attached saved model without any issue:
converter = tf.lite.TFLiteConverter.from_saved_model("/tmp/onnx_model")
tflite_model = converter.convert()
with open("/tmp/onnx.tflite", "wb") as f:
    f.write(tflite_model)

Converting TF 2.0 saved model for TensorRT on Jetson Nano

I am trying to convert a TF 2.0 saved_model to TensorRT on the Jetson Nano.
The model was saved in TF 2.0.0. The Nano has JetPack 4.2.2 w/ TensorRT __ and TensorFlow 1.14 (that is the latest TensorFlow release for Jetson).
I have been following the instructions from here, which describe how to convert a TF 2.0.0 saved_model into TensorRT.
Below is my code:
import tensorflow as tf
from tensorflow.python.compiler.tensorrt import trt_convert as trt
from tensorflow.python.framework import convert_to_constants
from tensorflow.python.saved_model import signature_constants, tag_constants

tf.enable_eager_execution()

converter = trt.TrtGraphConverterV2(input_saved_model_dir=input_saved_model_dir)
converter.convert()
converter.save(output_saved_model_dir)

saved_model_loaded = tf.saved_model.load(
    output_saved_model_dir, tags=[tag_constants.SERVING])
graph_func = saved_model_loaded.signatures[
    signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY]
frozen_func = convert_to_constants.convert_variables_to_constants_v2(
    graph_func)

def wrap_func(*args, **kwargs):
    # Assumes frozen_func has one output tensor
    return frozen_func(*args, **kwargs)[0]

output = wrap_func(input_data).numpy()
It seems to start converting successfully, but I get a KeyError: 'serving_default' when it reaches converter.convert(). My complete printout can be found here (too long for SO), but the Python traceback appears below. How can I fix this?
Thanks!
printout summary (complete printout here):
Traceback (most recent call last):
File "tst.py", line 38, in <module>
convert_savedmodel()
File "tst.py", line 24, in convert_savedmodel
converter.convert()
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/compiler/tensorrt/trt_convert.py", line 956, in convert
func = self._saved_model.signatures[self._input_saved_model_signature_key]
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/saved_model/signature_serialization.py", line 196, in __getitem__
return self._signatures[key]
KeyError: 'serving_default'
I can see two problems in your experiment:
(1) You are using the TF-TRT 2.0 API while having TF 1.14 installed. That is not supported: with TF 1.14 on the system, you need to use the TF-TRT 1.x API.
(2) Models saved in TF 2.0 are not compatible with TF 1.14, according to https://www.tensorflow.org/guide/versions
If you only have access to TF 1.14, I suggest re-generating the graph in TF 1.14, saving the model there, and then applying TF-TRT with the TF-TRT 1.x API.
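For reference, a rough sketch of the corresponding TF-TRT 1.x workflow (based on the TF 1.14 trt_convert API; the directory variables are placeholders and precision_mode is just an example):

import tensorflow as tf
from tensorflow.python.compiler.tensorrt import trt_convert as trt

converter = trt.TrtGraphConverter(
    input_saved_model_dir=input_saved_model_dir,  # TF1-format SavedModel
    precision_mode=trt.TrtPrecisionMode.FP16)
converter.convert()
converter.save(output_saved_model_dir)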

How to convert the body-pix models for tfjs to keras h5 or tensorflow frozen graph

I'm porting body-pix to Python and C++ and want to export the body-pix pre-trained model for TensorFlow.js to a TensorFlow frozen graph. Is that possible?
I've already downloaded the following files and tried to convert them using tensorflowjs_converter, but it didn't work.
https://storage.googleapis.com/tfjs-models/savedmodel/posenet_mobilenet_025_partmap/model.json
https://storage.googleapis.com/tfjs-models/savedmodel/posenet_mobilenet_025_partmap/group1-shard1of1
The result is here.
$ tensorflowjs_converter --input_format tfjs_layers_model --output_format keras posenet_mobilenet_025_partmap/model.json test.h5
Traceback (most recent call last):
File "/home/xxx/anaconda3/envs/tfjs_test2/bin/tensorflowjs_converter", line 10, in <module>
sys.exit(main())
File "/home/xxx/anaconda3/envs/tfjs_test2/lib/python3.6/site-packages/tensorflowjs/converters/converter.py", line 368, in main
FLAGS.output_path)
File "/home/xxx/anaconda3/envs/tfjs_test2/lib/python3.6/site-packages/tensorflowjs/converters/converter.py", line 169, in dispatch_tensorflowjs_to_keras_h5_conversion
model = keras_tfjs_loader.load_keras_model(config_json_path)
File "/home/xxx/anaconda3/envs/tfjs_test2/lib/python3.6/site-packages/tensorflowjs/converters/keras_tfjs_loader.py", line 218, in load_keras_model
use_unique_name_scope=use_unique_name_scope)
File "/home/xxx/anaconda3/envs/tfjs_test2/lib/python3.6/site-packages/tensorflowjs/converters/keras_tfjs_loader.py", line 65, in _deserialize_keras_model
model = keras.models.model_from_json(json.dumps(model_topology_json))
File "/home/xxx/anaconda3/envs/tfjs_test2/lib/python3.6/site-packages/tensorflow/python/keras/saving/model_config.py", line 96, in model_from_json
return deserialize(config, custom_objects=custom_objects)
File "/home/xxx/anaconda3/envs/tfjs_test2/lib/python3.6/site-packages/tensorflow/python/keras/layers/serialization.py", line 81, in deserialize
layer_class_name = config['class_name']
KeyError: 'class_name'
The converter version is:
tensorflowjs 1.0.1
Dependency versions:
keras 2.2.4-tf
tensorflow 2.0.0-dev20190405
This is on Ubuntu 16.04 LTS with Anaconda 3.
I've also tried tensorflowjs 0.8.5, but it didn't work either.
It would be helpful if you could tell me how to convert them. Either Keras format or a TensorFlow frozen graph is OK; I think the two can be converted to each other.
Download the model.json file, e.g.:
https://storage.googleapis.com/tfjs-models/savedmodel/bodypix/resnet50/float/model-stride16.json
Download the corresponding weights listed in manifest.json:
https://storage.googleapis.com/tfjs-models/savedmodel/bodypix/resnet50/float/manifest.json
Install tfjs_graph_converter from https://github.com/ajaichemmanam/tfjs-to-tf
Convert the model to a .pb file:
tfjs_graph_converter path/to/js/model path/to/frozen/model.pb
Here is an example of PoseNet converted to a Keras h5 model: https://github.com/tensorflow/tfjs/files/3943875/posenet.zip
You can convert the body-pix models the same way.
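Once you have the .pb, a minimal sketch for loading it from Python (standard TF1-style frozen-graph loading via the compat.v1 API; the path is a placeholder):

import tensorflow as tf

with tf.io.gfile.GFile('path/to/frozen/model.pb', 'rb') as f:
    graph_def = tf.compat.v1.GraphDef()
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    tf.compat.v1.import_graph_def(graph_def, name='')
    # List a few node names to locate the input/output tensors.
    print([node.name for node in graph_def.node][:10])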

Trying to restore model, but tf.train.import_meta_graph(meta_path) raises error

I downloaded the pretrained MobileNetV2 models from tensorflow models and tried to restore the graph, but got an unexpected error.
The code to reproduce the error is pretty concise:
import tensorflow as tf
meta_path = 'path/to/mobilenet_v2_0.35_224/mobilenet_v2_0.35_224.ckpt.meta'
sess = tf.Session(config=tf.ConfigProto(allow_soft_placement=True))
saver = tf.train.import_meta_graph(meta_path)
The last line then raises this error:
Traceback (most recent call last):
File "/home/CVAR/study/codes/languages/python/pycharm/learn_tensorflow/train_mobileNet_v2/test_of_functions/saver_test.py", line 21, in <module>
saver = tf.train.import_meta_graph(meta_path)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/saver.py", line 1960, in import_meta_graph
**kwargs)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/meta_graph.py", line 744, in import_scoped_meta_graph
producer_op_list=producer_op_list)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/util/deprecation.py", line 432, in new_func
return func(*args, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/importer.py", line 391, in import_graph_def
_RemoveDefaultAttrs(op_dict, producer_op_list, graph_def)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/importer.py", line 158, in _RemoveDefaultAttrs
op_def = op_dict[node.op]
KeyError: 'InfeedEnqueueTuple'
My system information is:
ubuntu 16.04
python 3.5
tensorflow-gpu 1.9
Any idea?
I recently also met this problem. It seems the reason is that the TensorFlow version you used to train the model is different from the version you use to read the graph description proto. What you need to do is reinstall TensorFlow at your training version; otherwise, retraining the model would work.
FYI, the TensorFlow version I used to train was 1.12.0; by contrast, the version I used to load the graph was 1.13.1. Reinstallation solved the problem.
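If you only need the weights (and can rebuild the graph in code), a workaround sketch that sidesteps the unknown-op problem entirely is to read the checkpoint variables directly instead of importing the meta graph (tf.train.NewCheckpointReader is a standard TF1 API; the path is the one from the question):

import tensorflow as tf

ckpt_path = 'path/to/mobilenet_v2_0.35_224/mobilenet_v2_0.35_224.ckpt'
reader = tf.train.NewCheckpointReader(ckpt_path)
# List every variable stored in the checkpoint, with its shape.
for name, shape in sorted(reader.get_variable_to_shape_map().items()):
    print(name, shape)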
Some ops are not defined; from conv_blocks import * will fix this bug, but I then hit another problem: "ValueError: NodeDef expected inputs 'float, int32' do not match 1 inputs specified". Still debugging, but I hope this tip solves your problem.