How to convert PoseNet ResNet50 tfjs model into tflite? - tensorflow

I'm trying to get this PoseNet ResNet50 model working in Swift on an iPhone XS. For that, I need to convert it to the tflite format. Since I did not manage to find any direct way to do that, the most promising approach seems to be converting the tfjs model to a keras_saved_model with the tfjs converter and then converting that to tflite with the TensorFlow Lite converter.
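For context, the second step of that plan (keras_saved_model to tflite) would look roughly like this; a minimal sketch, assuming the first step succeeds and writes a SavedModel directory (the directory and file names are placeholders):
import tensorflow as tf

# Convert the SavedModel produced by the tfjs converter into a tflite flatbuffer
converter = tf.lite.TFLiteConverter.from_saved_model('posenet_keras_saved_model')
tflite_model = converter.convert()

# Write the flatbuffer so it can be bundled with the iOS app
with open('posenet_resnet50.tflite', 'wb') as f:
    f.write(tflite_model)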
I'm stuck on the conversion to keras_saved_model. Here is what I tried and how to reproduce the error:
Download this model https://tfhub.dev/tensorflow/tfjs-model/posenet/resnet50/float/1/default/1
Open the downloaded folder and rename model-stride32.json to model.json
Move back to the parent folder and run tensorflowjs_wizard
Input the model path (./posenet_resnet50_float_1_default_1)
You will see an error similar to this:
Welcome to TensorFlow.js Converter.
? Please provide the path of model file or the directory that contains model files.
If you are converting TFHub module please provide the URL. ./posenet_resnet50_float_1_default_1
Traceback (most recent call last):
File "/Users/daniel/anaconda3/envs/tfjs-graph-converter/bin/tensorflowjs_wizard", line 8, in <module>
sys.exit(pip_main())
File "/Users/daniel/anaconda3/envs/tfjs-graph-converter/lib/python3.7/site-packages/tensorflowjs/converters/wizard.py", line 590, in pip_main
main([' '.join(sys.argv[1:])])
File "/Users/daniel/anaconda3/envs/tfjs-graph-converter/lib/python3.7/site-packages/tensorflowjs/converters/wizard.py", line 598, in main
run(dry_run)
File "/Users/daniel/anaconda3/envs/tfjs-graph-converter/lib/python3.7/site-packages/tensorflowjs/converters/wizard.py", line 410, in run
input_params[common.INPUT_PATH])
File "/Users/daniel/anaconda3/envs/tfjs-graph-converter/lib/python3.7/site-packages/tensorflowjs/converters/wizard.py", line 107, in detect_input_format
if get_tfjs_model_type(filename) == common.TFJS_LAYERS_MODEL_FORMAT:
File "/Users/daniel/anaconda3/envs/tfjs-graph-converter/lib/python3.7/site-packages/tensorflowjs/converters/wizard.py", line 67, in get_tfjs_model_type
data = json.load(f)
File "/Users/daniel/anaconda3/envs/tfjs-graph-converter/lib/python3.7/json/__init__.py", line 293, in load
return loads(fp.read(),
File "/Users/daniel/anaconda3/envs/tfjs-graph-converter/lib/python3.7/codecs.py", line 322, in decode
(result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8b in position 1: invalid start byte
TensorFlow.js 1.7.2
TensorFlow 2.1.0
macOS Catalina 10.15.5
I've also created this GitHub issue.
I appreciate any help on this:) Many thanks!

Related

How to convert YOLOv4-CSP darknet weights to Tensorflow format?

I tried using this repo, but it didn't work.
I got this error message:
Traceback (most recent call last):
File "save_model.py", line 58, in <module>
app.run(main)
File "C:\Python37\lib\site-packages\absl\app.py", line 303, in run
_run_main(main, args)
File "C:\Python37\lib\site-packages\absl\app.py", line 251, in _run_main
sys.exit(main(argv))
File "save_model.py", line 54, in main
save_tf()
File "save_model.py", line 49, in save_tf
utils.load_weights(model, FLAGS.weights, FLAGS.model, FLAGS.tiny)
File "D:\swap\20210319\tensorflow-yolov4-tflite\core\utils.py", line 63, in load_weights
conv_weights = conv_weights.reshape(conv_shape).transpose([2, 3, 1, 0])
ValueError: cannot reshape array of size 3791890 into shape (1024,512,3,3)
The repository you are using doesn't support conversion of Scaled YOLOv4 (YOLOv4-csp) yet; it's still a feature request according to this issue.
Luckily, there's a workaround. I found this repository that does the same thing; the only difference is that it converts the model to .h5 (Keras format) before converting it to TensorFlow format. It also supports YOLOv4-csp.
I made a Google Colab notebook that does the conversion, which can be found here.
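Once the intermediate .h5 file exists, the final Keras-to-TensorFlow step is essentially a load-and-save. A minimal sketch, assuming TF 2.x and placeholder file names rather than the repository's exact paths (custom layers, if any, may need to be passed via custom_objects):
import tensorflow as tf

# Load the intermediate Keras model produced by the conversion repository
model = tf.keras.models.load_model('yolov4-csp.h5', compile=False)

# Re-export it as a TensorFlow SavedModel for downstream use (e.g. TFLite)
model.save('yolov4-csp_saved_model', save_format='tf')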

How to convert the body-pix models for tfjs to keras h5 or tensorflow frozen graph

I'm porting body-pix to Python and C++ and want to export the pre-trained body-pix model for TensorFlow.js to a TensorFlow frozen graph. Is that possible?
I've already downloaded the following files and tried to convert them using tensorflowjs_converter, but it didn't work.
https://storage.googleapis.com/tfjs-models/savedmodel/posenet_mobilenet_025_partmap/model.json
https://storage.googleapis.com/tfjs-models/savedmodel/posenet_mobilenet_025_partmap/group1-shard1of1
The result is:
$ tensorflowjs_converter --input_format tfjs_layers_model --output_format keras posenet_mobilenet_025_partmap/model.json test.h5
Traceback (most recent call last):
File "/home/xxx/anaconda3/envs/tfjs_test2/bin/tensorflowjs_converter", line 10, in <module>
sys.exit(main())
File "/home/xxx/anaconda3/envs/tfjs_test2/lib/python3.6/site-packages/tensorflowjs/converters/converter.py", line 368, in main
FLAGS.output_path)
File "/home/xxx/anaconda3/envs/tfjs_test2/lib/python3.6/site-packages/tensorflowjs/converters/converter.py", line 169, in dispatch_tensorflowjs_to_keras_h5_conversion
model = keras_tfjs_loader.load_keras_model(config_json_path)
File "/home/xxx/anaconda3/envs/tfjs_test2/lib/python3.6/site-packages/tensorflowjs/converters/keras_tfjs_loader.py", line 218, in load_keras_model
use_unique_name_scope=use_unique_name_scope)
File "/home/xxx/anaconda3/envs/tfjs_test2/lib/python3.6/site-packages/tensorflowjs/converters/keras_tfjs_loader.py", line 65, in _deserialize_keras_model
model = keras.models.model_from_json(json.dumps(model_topology_json))
File "/home/xxx/anaconda3/envs/tfjs_test2/lib/python3.6/site-packages/tensorflow/python/keras/saving/model_config.py", line 96, in model_from_json
return deserialize(config, custom_objects=custom_objects)
File "/home/xxx/anaconda3/envs/tfjs_test2/lib/python3.6/site-packages/tensorflow/python/keras/layers/serialization.py", line 81, in deserialize
layer_class_name = config['class_name']
KeyError: 'class_name'
The converter versions are:
tensorflowjs 1.0.1
Dependency versions:
keras 2.2.4-tf
tensorflow 2.0.0-dev20190405
On ubuntu 16.04 LTS and anaconda 3.
I've also tried tensorflowjs 0.8.5, but it didn't work either.
It would be helpful if you could tell me how to convert them. Either Keras format or a TensorFlow frozen graph is fine; I think the two can be converted into each other.
Download the model.json file
Eg: https://storage.googleapis.com/tfjs-models/savedmodel/bodypix/resnet50/float/model-stride16.json
Download the corresponding weights listed in manifest.json
https://storage.googleapis.com/tfjs-models/savedmodel/bodypix/resnet50/float/manifest.json
Install tfjs_graph_converter
from https://github.com/ajaichemmanam/tfjs-to-tf
Convert model to .pb file
tfjs_graph_converter path/to/js/model path/to/frozen/model.pb
Here is an example of PoseNet converted to a Keras h5 model: https://github.com/tensorflow/tfjs/files/3943875/posenet.zip
You can convert the body-pix models in the same way.
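Once the frozen graph exists, loading it from Python looks roughly like this; a minimal sketch using the TF 1.x compat API, with the placeholder .pb path from the command above:
import tensorflow as tf

# Read the frozen graph produced by tfjs_graph_converter
graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile('path/to/frozen/model.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

# Import it into a fresh graph so it can be run with a tf.compat.v1.Session
graph = tf.Graph()
with graph.as_default():
    tf.compat.v1.import_graph_def(graph_def, name='')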

Error converting delf to tensorflow js web

I'm following this [1] and trying to convert this [2] to TensorFlow.js with [0]. I ran into [3]. Does anyone know what's going on?
[0]
tensorflowjs_converter --input_format=tf_hub 'https://tfhub.dev/google/delf/1' delf
[1] https://github.com/tensorflow/tfjs-converter#step-1-converting-a-savedmodel-keras-h5-session-bundle-frozen-model-or-tensorflow-hub-module-to-a-web-friendly-format
[2] https://www.tensorflow.org/hub/modules/google/delf/1
[3]
Using TensorFlow backend.
2018-08-21 17:49:34.351121: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
Creating a model with inputs [u'score_threshold', u'image', u'image_scales', u'max_feature_num'] and outputs [u'module_apply_default/NonMaxSuppression/Gather/GatherV2_1', u'module_apply_default/NonMaxSuppression/Gather/GatherV2_3', u'module_apply_default/postprocess_1/pca_l2_normalization', u'module_apply_default/Reshape_4', u'module_apply_default/truediv_2', u'module_apply_default/NonMaxSuppression/Gather/GatherV2', u'module_apply_default/ExpandDims'].
Traceback (most recent call last):
File "/usr/local/bin/tensorflowjs_converter", line 11, in
sys.exit(main())
File "/Users/goto/Library/Python/2.7/lib/python/site-packages/tensorflowjs/converters/converter.py", line 286, in main
strip_debug_ops=FLAGS.strip_debug_ops)
File "/Users/goto/Library/Python/2.7/lib/python/site-packages/tensorflowjs/converters/tf_saved_model_conversion.py", line 420, in convert_tf_hub_module
graph = load_graph(frozen_file, ','.join(output_node_names))
File "/Users/goto/Library/Python/2.7/lib/python/site-packages/tensorflowjs/converters/tf_saved_model_conversion.py", line 63, in load_graph
tf.import_graph_def(graph_def, name='')
File "/Users/goto/Library/Python/2.7/lib/python/site-packages/tensorflow/python/util/deprecation.py", line 432, in new_func
return func(*args, **kwargs)
File "/Users/goto/Library/Python/2.7/lib/python/site-packages/tensorflow/python/framework/importer.py", line 422, in import_graph_def
raise ValueError(str(e))
ValueError: Input 0 of node module_apply_default/while/resnet_v1_50/conv1/Conv2D/ReadVariableOp/Enter was passed float from module/resnet_v1_50/conv1/weights:0 incompatible with expected resource.
What version of the tensorflowjs_converter are you using? My guess is that the DELF model uses some ops that are unsupported by TFJS. The latest version of the TFJS converter should give clearer error messages about unsupported ops, if that is in fact the issue.
Not all TensorFlow Hub modules are TFJS compatible. In particular, some ops are not implemented in TFJS, so those modules cannot be converted. You can find a list of supported TFJS ops here.
You can try updating to the latest version of the TFJS converter to get a better error message, and updating TFJS to see if more of the ops are supported in a more recent version. Otherwise, you can search for open feature requests or file a new one here to request that the op be supported.

Trying to restore model, but tf.train.import_meta_graph(meta_path) raises error

I downloaded pretrained MobileNetV2 models from tensorflow models and tried to restore the graph, but got an unexpected error.
The code to reproduce the error is pretty concise:
import tensorflow as tf
meta_path = 'path/to/mobilenet_v2_0.35_224/mobilenet_v2_0.35_224.ckpt.meta'
sess = tf.Session(config=tf.ConfigProto(allow_soft_placement=True))
saver = tf.train.import_meta_graph(meta_path)
The last line then raises this error:
Traceback (most recent call last):
File "/home/CVAR/study/codes/languages/python/pycharm/learn_tensorflow/train_mobileNet_v2/test_of_functions/saver_test.py", line 21, in <module>
saver = tf.train.import_meta_graph(meta_path)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/saver.py", line 1960, in import_meta_graph
**kwargs)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/meta_graph.py", line 744, in import_scoped_meta_graph
producer_op_list=producer_op_list)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/util/deprecation.py", line 432, in new_func
return func(*args, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/importer.py", line 391, in import_graph_def
_RemoveDefaultAttrs(op_dict, producer_op_list, graph_def)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/importer.py", line 158, in _RemoveDefaultAttrs
op_def = op_dict[node.op]
KeyError: 'InfeedEnqueueTuple'
My system information:
ubuntu 16.04
python 3.5
tensorflow-gpu 1.9
Any idea?
I also ran into this problem recently. The reason seems to be that the TensorFlow version used to train the model differs from the version used to read the graph description proto. What you need to do is reinstall TensorFlow at your training version; otherwise, retraining the model would also work.
FYI, the TensorFlow version I used to train was 1.12.0, while the version I used to load the graph was 1.13.1. Reinstalling solved the problem.
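If you want to confirm the mismatch, here is a minimal sketch that reads the TF version recorded inside the .meta file (reusing the meta_path placeholder from the question):
import tensorflow as tf
from tensorflow.core.protobuf import meta_graph_pb2

meta_path = 'path/to/mobilenet_v2_0.35_224/mobilenet_v2_0.35_224.ckpt.meta'

# Parse the MetaGraphDef proto and compare the recorded TF version
# with the one currently installed
meta_graph_def = meta_graph_pb2.MetaGraphDef()
with open(meta_path, 'rb') as f:
    meta_graph_def.ParseFromString(f.read())

print('checkpoint written with TF:', meta_graph_def.meta_info_def.tensorflow_version)
print('currently installed TF:', tf.__version__)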
Some ops are not defined; from conv_blocks import * will fix this bug, but then I got another problem: "ValueError: NodeDef expected inputs 'float, int32' do not match 1 inputs specified". Still debugging, but I hope this tip solves your problem.

TF object detection API - Compute evaluation measures failed

I successfully trained a model on my own dataset, exported the inference graph, and ran inference on my test dataset.
I now have
the detections as a tfrecord file, specified in the input config
an eval_config file with the specified metrics set
When I try to compute the measures like in the new object detector inference and evaluation measure computation tutorial with
python object_detection/metrics/offline_eval_map_corloc.py --eval_dir=/media/sf_shared --eval_config_path=/media/sf_shared/eval_config.pbtxt --input_config_path=/media/sf_shared/input_config.pbtxt
It returns this AttributeError:
INFO:tensorflow:Processing file: /media/sf_shared/detections.record
INFO:tensorflow:Processed 0 images...
Traceback (most recent call last):
File "object_detection/metrics/offline_eval_map_corloc.py", line 173, in <module>
tf.app.run(main)
File "/home/chrza/anaconda2/envs/tf27/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 48, in run
_sys.exit(main(_sys.argv[:1] + flags_passthrough))
File "object_detection/metrics/offline_eval_map_corloc.py", line 166, in main
metrics = read_data_and_evaluate(input_config, eval_config)
File "object_detection/metrics/offline_eval_map_corloc.py", line 124, in read_data_and_evaluate
decoded_dict)
File "/home/chrza/anaconda2/envs/tf27/lib/python2.7/site-packages/tensorflow/models/research/object_detection/utils/object_detection_evaluation.py", line 174, in add_single_ground_truth_image_info
(groundtruth_dict[standard_fields.InputDataFields.groundtruth_difficult]
AttributeError: 'NoneType' object has no attribute 'size'
Any hints?
I fixed it (temporarily) as follows:
if (standard_fields.InputDataFields.groundtruth_difficult in groundtruth_dict.keys()) and groundtruth_dict[standard_fields.InputDataFields.groundtruth_difficult]:
    if groundtruth_dict[standard_fields.InputDataFields.groundtruth_difficult].size or not groundtruth_classes.size:
        groundtruth_difficult = groundtruth_dict[standard_fields.InputDataFields.groundtruth_difficult]
This goes in place of the existing lines (195-198) in object_detection/utils/object_detection_evaluation.py.
The error is caused by the fact that the size of the difficulty flag is checked even when no such flag was passed. This fails if you skipped that parameter in your tfrecords.
Perhaps this was the developers' intent, but the clarity of the documentation certainly leaves a lot to be desired.
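Alternatively, if you'd rather not patch the library, you can make sure the flag is present when you generate your records. A minimal sketch, assuming the object detection API's standard image/object/difficult key and a hypothetical per-image box count (all zeros means "not difficult"):
import tensorflow as tf

def int64_list_feature(values):
    return tf.train.Feature(int64_list=tf.train.Int64List(value=values))

# When building each tf.train.Example, include one difficulty flag per
# ground-truth box so downstream evaluation code that inspects this field
# does not hit a NoneType.
num_boxes = 3  # hypothetical number of ground-truth boxes in the image
feature = {
    'image/object/difficult': int64_list_feature([0] * num_boxes),
    # ... plus the usual fields (encoded image, bboxes, classes, ...)
}
example = tf.train.Example(features=tf.train.Features(feature=feature))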