YOLOv4 Darknet weights to TFLite - TensorFlow

I've successfully trained a YOLOv4 Darknet detection model to detect Lego bricks.
The Colab code is available here (with prefilled dataset): https://colab.research.google.com/drive/11d8BLJ4xnXrXumsV-0Z-jqPbz2cfWUUu
The generated config custom-yolov4-tiny-detector.cfg is here: https://pastebin.com/68WLs2JM
And the weights file custom-yolov4-tiny-detector_finals.weights is here: https://file.io/BeBokJG8G5lC
The test image produces awesome results with the following command:
!./darknet detect cfg/custom-yolov4-tiny-detector.cfg backup/custom-yolov4-tiny-detector_finals.weights {img_path} -dont-show
What I'd like to do now is export this model to TFLite.
I've tried to use the conversion code from https://github.com/hunglc007/tensorflow-yolov4-tflite.git,
and in particular this piece of code, which creates an intermediate pb model:
!python save_model.py \
--weights /content/darknet/backup/custom-yolov4-tiny-detector_final.weights \
--output ./checkpoints/yolov4-tiny-pretflite-416 \
--input_size 416 \
--model yolov4 \
--tiny \
--framework tflite
This intermediate pb model is then converted to the final TFLite model:
%cd /content/tensorflow-yolov4-tflite
!python convert_tflite.py --weights ./checkpoints/yolov4-tiny-pretflite-416 --output ./checkpoints/yolov4-tiny-416.tflite
That script raises many errors, which make it impossible to get a TFLite model:
Traceback (most recent call last):
File "convert_tflite.py", line 76, in <module>
app.run(main)
File "/usr/local/lib/python3.8/dist-packages/absl/app.py", line 308, in run
_run_main(main, args)
File "/usr/local/lib/python3.8/dist-packages/absl/app.py", line 254, in _run_main
sys.exit(main(argv))
File "convert_tflite.py", line 71, in main
save_tflite()
File "convert_tflite.py", line 45, in save_tflite
tflite_model = converter.convert()
File "/usr/local/lib/python3.8/dist-packages/tensorflow/lite/python/lite.py", line 929, in wrapper
return self._convert_and_export_metrics(convert_func, *args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/tensorflow/lite/python/lite.py", line 908, in _convert_and_export_metrics
result = convert_func(self, *args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/tensorflow/lite/python/lite.py", line 1212, in convert
return self._convert_from_saved_model(graph_def)
File "/usr/local/lib/python3.8/dist-packages/tensorflow/lite/python/lite.py", line 1095, in _convert_from_saved_model
result = _convert_saved_model(**converter_kwargs)
File "/usr/local/lib/python3.8/dist-packages/tensorflow/lite/python/convert_phase.py", line 212, in wrapper
raise converter_error from None # Re-throws the exception.
File "/usr/local/lib/python3.8/dist-packages/tensorflow/lite/python/convert_phase.py", line 205, in wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/tensorflow/lite/python/convert.py", line 809, in convert_saved_model
data = convert(
File "/usr/local/lib/python3.8/dist-packages/tensorflow/lite/python/convert.py", line 311, in convert
raise converter_error
tensorflow.lite.python.convert_phase.ConverterError: <unknown>:0: error: loc(callsite(callsite(fused["FusedBatchNormV3:", "model/batch_normalization/FusedBatchNormV3#__inference__wrapped_model_2529"] at fused["StatefulPartitionedCall:", "StatefulPartitionedCall#__inference_signature_wrapper_8064"]) at fused["StatefulPartitionedCall:", "StatefulPartitionedCall"])): 'tf.FusedBatchNormV3' op is neither a custom op nor a flex op
<unknown>:0: note: loc(fused["StatefulPartitionedCall:", "StatefulPartitionedCall"]): called from
<unknown>:0: note: loc(callsite(callsite(fused["FusedBatchNormV3:", "model/batch_normalization/FusedBatchNormV3#__inference__wrapped_model_2529"] at fused["StatefulPartitionedCall:", "StatefulPartitionedCall#__inference_signature_wrapper_8064"]) at fused["StatefulPartitionedCall:", "StatefulPartitionedCall"])): Error code: ERROR_NEEDS_FLEX_OPS
[... the same 'tf.FusedBatchNormV3' error and accompanying notes repeat for model/batch_normalization_1 through model/batch_normalization_18 ...]
<unknown>:0: error: failed while converting: 'main':
Some ops are not supported by the native TFLite runtime, you can enable TF kernels fallback using TF Select. See instructions: https://www.tensorflow.org/lite/guide/ops_select
TF Select ops: FusedBatchNormV3
Details:
tf.FusedBatchNormV3(tensor<?x104x104x32xf32>, tensor<32xf32>, tensor<32xf32>, tensor<32xf32>, tensor<32xf32>) -> (tensor<?x104x104x32xf32>, tensor<32xf32>, tensor<32xf32>, tensor<32xf32>, tensor<32xf32>, tensor<*xf32>) : {data_format = "NHWC", device = "", epsilon = 1.000000e-03 : f32, exponential_avg_factor = 1.000000e+00 : f32, is_training = false}
tf.FusedBatchNormV3(tensor<?x104x104x64xf32>, tensor<64xf32>, tensor<64xf32>, tensor<64xf32>, tensor<64xf32>) -> (tensor<?x104x104x64xf32>, tensor<64xf32>, tensor<64xf32>, tensor<64xf32>, tensor<64xf32>, tensor<*xf32>) : {data_format = "NHWC", device = "", epsilon = 1.000000e-03 : f32, exponential_avg_factor = 1.000000e+00 : f32, is_training = false}
tf.FusedBatchNormV3(tensor<?x13x13x128xf32>, tensor<128xf32>, tensor<128xf32>, tensor<128xf32>, tensor<128xf32>) -> (tensor<?x13x13x128xf32>, tensor<128xf32>, tensor<128xf32>, tensor<128xf32>, tensor<128xf32>, tensor<*xf32>) : {data_format = "NHWC", device = "", epsilon = 1.000000e-03 : f32, exponential_avg_factor = 1.000000e+00 : f32, is_training = false}
tf.FusedBatchNormV3(tensor<?x13x13x256xf32>, tensor<256xf32>, tensor<256xf32>, tensor<256xf32>, tensor<256xf32>) -> (tensor<?x13x13x256xf32>, tensor<256xf32>, tensor<256xf32>, tensor<256xf32>, tensor<256xf32>, tensor<*xf32>) : {data_format = "NHWC", device = "", epsilon = 1.000000e-03 : f32, exponential_avg_factor = 1.000000e+00 : f32, is_training = false}
tf.FusedBatchNormV3(tensor<?x13x13x512xf32>, tensor<512xf32>, tensor<512xf32>, tensor<512xf32>, tensor<512xf32>) -> (tensor<?x13x13x512xf32>, tensor<512xf32>, tensor<512xf32>, tensor<512xf32>, tensor<512xf32>, tensor<*xf32>) : {data_format = "NHWC", device = "", epsilon = 1.000000e-03 : f32, exponential_avg_factor = 1.000000e+00 : f32, is_training = false}
tf.FusedBatchNormV3(tensor<?x208x208x32xf32>, tensor<32xf32>, tensor<32xf32>, tensor<32xf32>, tensor<32xf32>) -> (tensor<?x208x208x32xf32>, tensor<32xf32>, tensor<32xf32>, tensor<32xf32>, tensor<32xf32>, tensor<*xf32>) : {data_format = "NHWC", device = "", epsilon = 1.000000e-03 : f32, exponential_avg_factor = 1.000000e+00 : f32, is_training = false}
tf.FusedBatchNormV3(tensor<?x26x26x128xf32>, tensor<128xf32>, tensor<128xf32>, tensor<128xf32>, tensor<128xf32>) -> (tensor<?x26x26x128xf32>, tensor<128xf32>, tensor<128xf32>, tensor<128xf32>, tensor<128xf32>, tensor<*xf32>) : {data_format = "NHWC", device = "", epsilon = 1.000000e-03 : f32, exponential_avg_factor = 1.000000e+00 : f32, is_training = false}
tf.FusedBatchNormV3(tensor<?x26x26x256xf32>, tensor<256xf32>, tensor<256xf32>, tensor<256xf32>, tensor<256xf32>) -> (tensor<?x26x26x256xf32>, tensor<256xf32>, tensor<256xf32>, tensor<256xf32>, tensor<256xf32>, tensor<*xf32>) : {data_format = "NHWC", device = "", epsilon = 1.000000e-03 : f32, exponential_avg_factor = 1.000000e+00 : f32, is_training = false}
tf.FusedBatchNormV3(tensor<?x52x52x128xf32>, tensor<128xf32>, tensor<128xf32>, tensor<128xf32>, tensor<128xf32>) -> (tensor<?x52x52x128xf32>, tensor<128xf32>, tensor<128xf32>, tensor<128xf32>, tensor<128xf32>, tensor<*xf32>) : {data_format = "NHWC", device = "", epsilon = 1.000000e-03 : f32, exponential_avg_factor = 1.000000e+00 : f32, is_training = false}
tf.FusedBatchNormV3(tensor<?x52x52x64xf32>, tensor<64xf32>, tensor<64xf32>, tensor<64xf32>, tensor<64xf32>) -> (tensor<?x52x52x64xf32>, tensor<64xf32>, tensor<64xf32>, tensor<64xf32>, tensor<64xf32>, tensor<*xf32>) : {data_format = "NHWC", device = "", epsilon = 1.000000e-03 : f32, exponential_avg_factor = 1.000000e+00 : f32, is_training = false}
Is there a way to convert YOLOv4 weights and config to a TFLite model?
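The converter error itself points to a possible workaround: enabling the TF Select (Flex) ops fallback so that unsupported ops such as FusedBatchNormV3 are executed by TensorFlow kernels (see the ops_select guide linked in the message). Below is a minimal sketch of that conversion, assuming the SavedModel written by save_model.py is at ./checkpoints/yolov4-tiny-pretflite-416 as in the commands above:
import tensorflow as tf

# Load the SavedModel produced by save_model.py (path taken from the commands above).
converter = tf.lite.TFLiteConverter.from_saved_model("./checkpoints/yolov4-tiny-pretflite-416")

# Allow native TFLite kernels plus the TF (Flex) ops fallback suggested by the error.
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,  # ops with native TFLite kernels
    tf.lite.OpsSet.SELECT_TF_OPS,    # TF kernels fallback, e.g. for FusedBatchNormV3
]

tflite_model = converter.convert()
with open("./checkpoints/yolov4-tiny-416.tflite", "wb") as f:
    f.write(tflite_model)
A model converted this way needs the Select TF ops support linked into the TFLite runtime on the target device, which increases binary size.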

Related

TypeError: _variable_v2_call() got an unexpected keyword argument 'initializer' in TensorFlow 2.0.0

phi = tf.Variable("phi", shape=(k), dtype=tf.float32, initializer=tf.zeros_initializer(),
                  trainable=False)
TypeError Traceback (most recent call last)
<ipython-input-29-da3533caa9df> in <module>
3 dtype=tf.float32,
4 initializer=tf.zeros_initializer(),
----> 5 trainable=False)
6
~\Anaconda3\envs\tf\lib\site-packages\tensorflow_core\python\ops\variables.py in __call__(cls, *args, **kwargs)
258 return cls._variable_v1_call(*args, **kwargs)
259 elif cls is Variable:
--> 260 return cls._variable_v2_call(*args, **kwargs)
261 else:
262 return super(VariableMetaclass, cls).__call__(*args, **kwargs)
TypeError: _variable_v2_call() got an unexpected keyword argument 'initializer'
Use tf.compat.v1.get_variable instead of tf.Variable; this works in TensorFlow 2.0 and above.
Use this (tf.Variable no longer accepts an initializer keyword argument in TensorFlow 2.0):
phi = tf.compat.v1.get_variable(
    "phi",
    shape=(k,),
    dtype=tf.float32,
    initializer=tf.zeros_initializer(),
    trainable=False)
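Alternatively, in native TF 2 style the variable can be created directly from an initial value, with no initializer argument at all; a minimal sketch (k is given an arbitrary example value here, since the question does not define it):
import tensorflow as tf

k = 10  # example size; the question only refers to a shape of (k,)
phi = tf.Variable(tf.zeros([k], dtype=tf.float32), trainable=False, name="phi")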

"undefined input shape at index" warning in training

TensorFlow 2 is used for training, and quite a number of warnings are printed out during object classification training.
What could be the reason for those warnings?
2020-04-17 12:15:16.091784: W tensorflow/core/common_runtime/shape_refiner.cc:88] Function instantiation has undefined input shape at index: 486 in the outer inference context.
2020-04-17 12:15:16.091846: W tensorflow/core/common_runtime/shape_refiner.cc:88] Function instantiation has undefined input shape at index: 414 in the outer inference context.
2020-04-17 12:15:16.091860: W tensorflow/core/common_runtime/shape_refiner.cc:88] Function instantiation has undefined input shape at index: 529 in the outer inference context.
2020-04-17 12:15:16.091882: W tensorflow/core/common_runtime/shape_refiner.cc:88] Function instantiation has undefined input shape at index: 351 in the outer inference context.
2020-04-17 12:15:16.091926: W tensorflow/core/common_runtime/shape_refiner.cc:88] Function instantiation has undefined input shape at index: 476 in the outer inference context.
[... dozens more warnings of the same form follow, each reporting an undefined input shape at a different index, repeated across two bursts of output ...]
I had similar warnings popping up. I was using a dataset from a generator without specifying the output shape of the generator. After adding the output shapes, no warnings were generated:
tf.data.Dataset.from_generator(generator, output_types=(tf.float32, tf.float32), output_shapes=(tf.TensorShape([2997, 16]), tf.TensorShape([None])))
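As a side note, on TensorFlow 2.4 and later the output_types/output_shapes arguments are deprecated in favour of a single output_signature; a minimal sketch of the equivalent call, using a hypothetical stand-in generator with the same shapes as above:
import numpy as np
import tensorflow as tf

def generator():
    # Hypothetical stand-in for the real data generator.
    yield np.zeros((2997, 16), np.float32), np.zeros((5,), np.float32)

dataset = tf.data.Dataset.from_generator(
    generator,
    output_signature=(
        tf.TensorSpec(shape=(2997, 16), dtype=tf.float32),
        tf.TensorSpec(shape=(None,), dtype=tf.float32),
    ),
)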

Keras with TensorFlow optimizer

I am trying to optimize a Keras model with a Tensorflow backend optimizer.
I can run the model, calculate a loss, and print all the weights and biases, but when I try to apply the gradient the model crashes.
PS: I am using a TensorFlow CUDA Docker container and the host system is Ubuntu 18.04.2 LTS.
Code:
import numpy as np
from tensorflow.python.keras.models import Model
from tensorflow.python.keras.layers import Dense, Input
from tensorflow.python.keras.optimizers import Adam
from tensorflow.python.keras import backend as K
import tensorflow as tf
data = np.random.uniform(low=0, high=5, size=50)
Y_Data = np.random.randint(low=0, high=3, size=(50, 3))
config = tf.ConfigProto(allow_soft_placement=True)
sess = tf.Session(config=config)
K.set_session(sess)
input = Input(shape=(1, ))
dens_1 = Dense(128, activation='relu')(input)
output = Dense(3, activation='linear')(dens_1)
Y_data_placeholder = tf.placeholder(tf.float32, shape=[None, 3])
loss = tf.losses.absolute_difference(Y_data_placeholder, output)
params = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES)
grads = tf.gradients(loss, params)
grads = list(zip(grads, params))
trainer = tf.train.RMSPropOptimizer(learning_rate=0.003, decay=0.99, epsilon=1e-5)
_train = trainer.apply_gradients(grads)
model = Model(inputs=input, outputs=output)
adam = Adam(lr=0.001)
model.compile(loss="mse", optimizer=adam)
model.summary()
print('')
print(model.predict(data)) # Runs fine.
print('')
print(sess.run(output, feed_dict={input: data.reshape(50, 1)})) # Runs fine.
out_data, out_loss, out_params = sess.run([output, loss, params], feed_dict={input: data.reshape(50, 1), Y_data_placeholder: Y_Data}) # Runs fine.
print('')
print(out_params)
print('')
print(out_data)
print('')
print(out_loss)
out_data, out_loss, out_train = sess.run([output, loss, _train], feed_dict={input: data.reshape(50, 1), Y_data_placeholder: Y_Data}) # Runs Error.
print('')
print(out_train)
Error:
2019-03-12 21:59:20.737232: W tensorflow/core/framework/op_kernel.cc:1261] Internal: Invalid variable reference.
2019-03-12 21:59:20.737326: W tensorflow/core/framework/op_kernel.cc:1261] Internal: Invalid variable reference.
2019-03-12 21:59:20.737359: W tensorflow/core/framework/op_kernel.cc:1261] Internal: Invalid variable reference.
2019-03-12 21:59:20.737436: W tensorflow/core/framework/op_kernel.cc:1273] OP_REQUIRES failed at training_ops.cc:2933 : Not found: Resource localhost/dense_1/bias/RMSProp/N10tensorflow3VarE does not exist.
2019-03-12 21:59:20.737524: W tensorflow/core/framework/op_kernel.cc:1261] Internal: Invalid variable reference.
2019-03-12 21:59:20.737569: W tensorflow/core/framework/op_kernel.cc:1261] Internal: Invalid variable reference.
2019-03-12 21:59:20.737605: W tensorflow/core/framework/op_kernel.cc:1261] Internal: Invalid variable reference.
2019-03-12 21:59:20.737647: W tensorflow/core/framework/op_kernel.cc:1273] OP_REQUIRES failed at training_ops.cc:2933 : Not found: Resource localhost/dense_1/kernel/RMSProp/N10tensorflow3VarE does not exist.
2019-03-12 21:59:20.737896: W tensorflow/core/framework/op_kernel.cc:1261] Internal: Invalid variable reference.
2019-03-12 21:59:20.737944: W tensorflow/core/framework/op_kernel.cc:1261] Internal: Invalid variable reference.
2019-03-12 21:59:20.737983: W tensorflow/core/framework/op_kernel.cc:1261] Internal: Invalid variable reference.
2019-03-12 21:59:20.738050: W tensorflow/core/framework/op_kernel.cc:1273] OP_REQUIRES failed at training_ops.cc:2933 : Not found: Resource localhost/dense/bias/RMSProp/N10tensorflow3VarE does not exist.
2019-03-12 21:59:20.738110: W tensorflow/core/framework/op_kernel.cc:1261] Internal: Invalid variable reference.
2019-03-12 21:59:20.738149: W tensorflow/core/framework/op_kernel.cc:1261] Internal: Invalid variable reference.
2019-03-12 21:59:20.738186: W tensorflow/core/framework/op_kernel.cc:1261] Internal: Invalid variable reference.
2019-03-12 21:59:20.738231: W tensorflow/core/framework/op_kernel.cc:1273] OP_REQUIRES failed at training_ops.cc:2933 : Not found: Resource localhost/dense/kernel/RMSProp/N10tensorflow3VarE does not exist.
Traceback (most recent call last):
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1334, in _do_call
return fn(*args)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1319, in _run_fn
options, feed_dict, fetch_list, target_list, run_metadata)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1407, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.InternalError: Invalid variable reference.
[[{{node RMSProp/update_dense_1/bias/ResourceApplyRMSProp}} = ResourceApplyRMSProp[T=DT_FLOAT, use_locking=false, _device="/job:localhost/replica:0/task:0/device:GPU:0"](dense_1/bias, dense_1/bias/RMSProp, dense_1/bias/RMSProp_1, RMSProp/learning_rate, RMSProp/decay, RMSProp/momentum, RMSProp/epsilon, gradients/dense_1/BiasAdd_grad/BiasAddGrad)]]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/project/test6.py", line 50, in <module>
out_data, out_loss, out_train = sess.run([output, loss, _train], feed_dict={input: data.reshape(50, 1), Y_data_placeholder: Y_Data}) # Runs Error.
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 929, in run
run_metadata_ptr)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1152, in _run
feed_dict_tensor, options, run_metadata)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1328, in _do_run
run_metadata)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1348, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InternalError: Invalid variable reference.
[[node RMSProp/update_dense_1/bias/ResourceApplyRMSProp (defined at /opt/project/test6.py:31) = ResourceApplyRMSProp[T=DT_FLOAT, use_locking=false, _device="/job:localhost/replica:0/task:0/device:GPU:0"](dense_1/bias, dense_1/bias/RMSProp, dense_1/bias/RMSProp_1, RMSProp/learning_rate, RMSProp/decay, RMSProp/momentum, RMSProp/epsilon, gradients/dense_1/BiasAdd_grad/BiasAddGrad)]]
Caused by op 'RMSProp/update_dense_1/bias/ResourceApplyRMSProp', defined at:
File "/opt/project/test6.py", line 31, in <module>
_train = trainer.apply_gradients(grads)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/optimizer.py", line 610, in apply_gradients
update_ops.append(processor.update_op(self, grad))
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/optimizer.py", line 167, in update_op
update_op = optimizer._resource_apply_dense(g, self._v)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/rmsprop.py", line 194, in _resource_apply_dense
use_locking=self._use_locking)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/gen_training_ops.py", line 2073, in resource_apply_rms_prop
use_locking=use_locking, name=name)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/util/deprecation.py", line 488, in new_func
return func(*args, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 3274, in create_op
op_def=op_def)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 1770, in __init__
self._traceback = tf_stack.extract_stack()
InternalError (see above for traceback): Invalid variable reference.
[[node RMSProp/update_dense_1/bias/ResourceApplyRMSProp (defined at /opt/project/test6.py:31) = ResourceApplyRMSProp[T=DT_FLOAT, use_locking=false, _device="/job:localhost/replica:0/task:0/device:GPU:0"](dense_1/bias, dense_1/bias/RMSProp, dense_1/bias/RMSProp_1, RMSProp/learning_rate, RMSProp/decay, RMSProp/momentum, RMSProp/epsilon, gradients/dense_1/BiasAdd_grad/BiasAddGrad)]]
I figured it out...
After defining the model, the loss and the TensorFlow optimizer, I need to initialize all variables first.
sess.run(tf.global_variables_initializer())
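To make the placement concrete, here is a minimal sketch of where that call goes in the script above (names as in the question); running the initializer after apply_gradients also initializes the RMSProp slot variables that the 'Resource ... does not exist' messages complain about:
# ... model, loss, grads and the RMSProp trainer defined as in the question ...
_train = trainer.apply_gradients(grads)

# Initialize all variables, including the RMSProp slots created by apply_gradients,
# before running the first training step.
sess.run(tf.global_variables_initializer())

out_data, out_loss, out_train = sess.run(
    [output, loss, _train],
    feed_dict={input: data.reshape(50, 1), Y_data_placeholder: Y_Data})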

Error while testing TensorFlow Object Detection training

I am getting the following error while running object_detection/train.py. Do I have to specify the regularizer in the pipeline config file?
File "/Users/test/Desktop/models-master/object_detection/builders/hyperparams_builder.py", line 117, in _build_regularizer
raise ValueError('Unknown regularizer function: {}'.format(regularizer_oneof))
ValueError: Unknown regularizer function: None
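For reference, the regularizer is specified inside the hyperparams blocks of the pipeline config. A hedged sketch of what such a block typically looks like in pipeline.config (the weight and stddev values are illustrative placeholders, not recommendations):
conv_hyperparams {
  regularizer {
    l2_regularizer {
      weight: 0.00004
    }
  }
  initializer {
    truncated_normal_initializer {
      stddev: 0.03
    }
  }
}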

"Create kernel failed" in Tensorflow when mapping nodes to different devices

I have done some manual graph partitioning in TensorFlow, using a simple hash function to map the nodes onto two different CPU devices.
When I map the whole graph onto the first device alone, or onto the second alone, it works (that is why I don't understand the "Create kernel failed" error message).
However, with the partitioned mapping the following error occurs. Do you have any idea what is wrong?
E tensorflow/core/framework/op_segment.cc:53] Create kernel failed: Invalid argument: AttrValue must not have reference type value of float_ref
for attr 'tensor_type'
; NodeDef: Variable/_9 = _Recv[_start_time=0, client_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/cpu:1", send_device_incarnation=1, tensor_name="edge_10_Variable", tensor_type=DT_FLOAT_REF, _device="/job:localhost/replica:0/task:0/cpu:0"](^zeros/_11); Op<name=_Recv; signature= -> tensor:tensor_type; attr=tensor_type:type; attr=tensor_name:string; attr=send_device:string; attr=send_device_incarnation:int; attr=recv_device:string; attr=client_terminated:bool,default=false; is_stateful=true>
E tensorflow/core/common_runtime/executor.cc:334] Executor failed to create kernel. Invalid argument: AttrValue must not have reference type value of float_ref
for attr 'tensor_type'
; NodeDef: Variable/_9 = _Recv[_start_time=0, client_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/cpu:1", send_device_incarnation=1, tensor_name="edge_10_Variable", tensor_type=DT_FLOAT_REF, _device="/job:localhost/replica:0/task:0/cpu:0"](^zeros/_11); Op<name=_Recv; signature= -> tensor:tensor_type; attr=tensor_type:type; attr=tensor_name:string; attr=send_device:string; attr=send_device_incarnation:int; attr=recv_device:string; attr=client_terminated:bool,default=false; is_stateful=true>
[[Node: Variable/_9 = _Recv[_start_time=0, client_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/cpu:1", send_device_incarnation=1, tensor_name="edge_10_Variable", tensor_type=DT_FLOAT_REF, _device="/job:localhost/replica:0/task:0/cpu:0"](^zeros/_11)]]
Traceback (most recent call last):
File "/Users/larissa/Desktop/GraphPartSched/Theorie/TensorFlow/TensorFlow_Tutorials/MNIST_For_ML_Beginners.py", line 53, in <module>
sess.run(init)
File "/Users/larissa/tensorflowSource/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 717, in run
run_metadata_ptr)
File "/Users/larissa/tensorflowSource/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 915, in _run
feed_dict_string, options, run_metadata)
File "/Users/larissa/tensorflowSource/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 965, in _do_run
target_list, options, run_metadata)
File "/Users/larissa/tensorflowSource/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 985, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors.InvalidArgumentError: AttrValue must not have reference type value of float_ref
for attr 'tensor_type'
; NodeDef: Variable/_9 = _Recv[_start_time=0, client_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/cpu:1", send_device_incarnation=1, tensor_name="edge_10_Variable", tensor_type=DT_FLOAT_REF, _device="/job:localhost/replica:0/task:0/cpu:0"](^zeros/_11); Op<name=_Recv; signature= -> tensor:tensor_type; attr=tensor_type:type; attr=tensor_name:string; attr=send_device:string; attr=send_device_incarnation:int; attr=recv_device:string; attr=client_terminated:bool,default=false; is_stateful=true>
[[Node: Variable/_9 = _Recv[_start_time=0, client_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/cpu:1", send_device_incarnation=1, tensor_name="edge_10_Variable", tensor_type=DT_FLOAT_REF, _device="/job:localhost/replica:0/task:0/cpu:0"](^zeros/_11)]]
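For context, the NodeDef in the error shows a reference-typed (DT_FLOAT_REF) edge for Variable being sent from cpu:1 to cpu:0, i.e. the partitioning has put the variable and one of its consumers on different devices. Below is a minimal sketch of the kind of hash-based placement described in the question, written against the tf.compat.v1 API; the two-CPU setup and the hash function are assumptions for illustration, not the asker's actual code:
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

def device_for(op):
    # Toy partitioner: spread ops across two CPU devices by hashing the op name.
    return "/cpu:%d" % (hash(op.name) % 2)

graph = tf.Graph()
with graph.as_default():
    with tf.device(device_for):
        v = tf.Variable(tf.zeros([10]), name="Variable")
        y = v + 1.0
    init = tf.global_variables_initializer()

config = tf.ConfigProto(device_count={"CPU": 2}, allow_soft_placement=True)
with tf.Session(graph=graph, config=config) as sess:
    sess.run(init)
    print(sess.run(y))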