Using the example at
https://www.tensorflow.org/tutorials/structured_data/preprocessing_layers
I created a model with my own data. I want to save it in TensorFlow Lite format. I save it as a SavedModel, but while converting I run into many errors. The last error I encountered:
WARNING:tensorflow:AutoGraph could not transform <function canonicalize_signatures.<locals>.signature_wrapper at 0x7f4f61cd0560> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: closure mismatch, requested ('signature_function', 'signature_key'), but source function had ()
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
WARNING: AutoGraph could not transform <function canonicalize_signatures.<locals>.signature_wrapper at 0x7f4f61cd0560> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: closure mismatch, requested ('signature_function', 'signature_key'), but source function had ()
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
WARNING:tensorflow:AutoGraph could not transform <function _trace_resource_initializers.<locals>._wrap_obj_initializer.<locals>.<lambda> at 0x7f4f61d28290> and will run it as-is.
Cause: could not parse the source code of <function _trace_resource_initializers.<locals>._wrap_obj_initializer.<locals>.<lambda> at 0x7f4f61d28290>: no matching AST found
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
WARNING: AutoGraph could not transform <function _trace_resource_initializers.<locals>._wrap_obj_initializer.<locals>.<lambda> at 0x7f4f61d28290> and will run it as-is.
Cause: could not parse the source code of <function _trace_resource_initializers.<locals>._wrap_obj_initializer.<locals>.<lambda> at 0x7f4f61d28290>: no matching AST found
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
WARNING:tensorflow:AutoGraph could not transform <function _trace_resource_initializers.<locals>._wrap_obj_initializer.<locals>.<lambda> at 0x7f4f61d28e60> and will run it as-is.
Cause: could not parse the source code of <function _trace_resource_initializers.<locals>._wrap_obj_initializer.<locals>.<lambda> at 0x7f4f61d28e60>: no matching AST found
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
WARNING: AutoGraph could not transform <function _trace_resource_initializers.<locals>._wrap_obj_initializer.<locals>.<lambda> at 0x7f4f61d28e60> and will run it as-is.
Cause: could not parse the source code of <function _trace_resource_initializers.<locals>._wrap_obj_initializer.<locals>.<lambda> at 0x7f4f61d28e60>: no matching AST found
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
INFO:tensorflow:Assets written to: /tmp/test_saved_model/assets
---------------------------------------------------------------------------
Exception Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/tensorflow/lite/python/convert.py in toco_convert_protos(model_flags_str, toco_flags_str, input_data_str, debug_info_str, enable_mlir_converter)
212 model body, the input/output will be quantized as well.
--> 213 inference_type: Data type for the activations. The default value is int8.
214 enable_numeric_verify: Experimental. Subject to change. Bool indicating
4 frames
Exception: <unknown>:0: error: loc("integer_lookup_1_index_table"): 'tf.MutableHashTableV2' op is neither a custom op nor a flex op
<unknown>:0: error: loc("string_lookup_1_index_table"): 'tf.MutableHashTableV2' op is neither a custom op nor a flex op
<unknown>:0: error: loc(callsite(callsite("model/string_lookup_1/string_lookup_1_index_table_lookup_table_find/LookupTableFindV2#__inference__wrapped_model_9475" at "StatefulPartitionedCall#__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.LookupTableFindV2' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: loc(callsite(callsite("model/category_encoding_3/bincount/add#__inference__wrapped_model_9475" at "StatefulPartitionedCall#__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.AddV2' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: loc(callsite(callsite("model/category_encoding_3/bincount/mul#__inference__wrapped_model_9475" at "StatefulPartitionedCall#__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.Mul' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: loc(callsite(callsite("model/category_encoding_3/bincount/DenseBincount#__inference__wrapped_model_9475" at "StatefulPartitionedCall#__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.DenseBincount' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: loc(callsite(callsite("model/integer_lookup_1/integer_lookup_1_index_table_lookup_table_find/LookupTableFindV2#__inference__wrapped_model_9475" at "StatefulPartitionedCall#__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.LookupTableFindV2' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: loc(callsite(callsite("model/category_encoding_2/bincount/add#__inference__wrapped_model_9475" at "StatefulPartitionedCall#__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.AddV2' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: loc(callsite(callsite("model/category_encoding_2/bincount/mul#__inference__wrapped_model_9475" at "StatefulPartitionedCall#__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.Mul' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: loc(callsite(callsite("model/category_encoding_2/bincount/DenseBincount#__inference__wrapped_model_9475" at "StatefulPartitionedCall#__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.DenseBincount' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: failed while converting: 'main': Ops that can be supported by the flex runtime (enabled via setting the -emit-select-tf-ops flag):
tf.AddV2 {device = ""}
tf.DenseBincount {T = f32, Tidx = i64, binary_output = true, device = ""}
tf.Mul {device = ""}Ops that need custom implementation (enabled via setting the -emit-custom-ops flag):
tf.LookupTableFindV2 {device = "/job:localhost/replica:0/task:0/device:CPU:0"}
tf.MutableHashTableV2 {container = "", device = "", key_dtype = !tf.string, shared_name = "table_704", use_node_name_sharing = false, value_dtype = i64}
tf.MutableHashTableV2 {container = "", device = "", key_dtype = i64, shared_name = "table_615", use_node_name_sharing = false, value_dtype = i64}
During handling of the above exception, another exception occurred:
ConverterError Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/tensorflow/lite/python/convert.py in toco_convert_protos(model_flags_str, toco_flags_str, input_data_str, debug_info_str, enable_mlir_converter)
214 enable_numeric_verify: Experimental. Subject to change. Bool indicating
215 whether to add NumericVerify ops into the debug mode quantized model.
--> 216
217 Returns:
218 Quantized model in serialized form (e.g. a TFLITE model) with floating-point
ConverterError: <unknown>:0: error: loc("integer_lookup_1_index_table"): 'tf.MutableHashTableV2' op is neither a custom op nor a flex op
<unknown>:0: error: loc("string_lookup_1_index_table"): 'tf.MutableHashTableV2' op is neither a custom op nor a flex op
<unknown>:0: error: loc(callsite(callsite("model/string_lookup_1/string_lookup_1_index_table_lookup_table_find/LookupTableFindV2#__inference__wrapped_model_9475" at "StatefulPartitionedCall#__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.LookupTableFindV2' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: loc(callsite(callsite("model/category_encoding_3/bincount/add#__inference__wrapped_model_9475" at "StatefulPartitionedCall#__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.AddV2' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: loc(callsite(callsite("model/category_encoding_3/bincount/mul#__inference__wrapped_model_9475" at "StatefulPartitionedCall#__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.Mul' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: loc(callsite(callsite("model/category_encoding_3/bincount/DenseBincount#__inference__wrapped_model_9475" at "StatefulPartitionedCall#__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.DenseBincount' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: loc(callsite(callsite("model/integer_lookup_1/integer_lookup_1_index_table_lookup_table_find/LookupTableFindV2#__inference__wrapped_model_9475" at "StatefulPartitionedCall#__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.LookupTableFindV2' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: loc(callsite(callsite("model/category_encoding_2/bincount/add#__inference__wrapped_model_9475" at "StatefulPartitionedCall#__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.AddV2' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: loc(callsite(callsite("model/category_encoding_2/bincount/mul#__inference__wrapped_model_9475" at "StatefulPartitionedCall#__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.Mul' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: loc(callsite(callsite("model/category_encoding_2/bincount/DenseBincount#__inference__wrapped_model_9475" at "StatefulPartitionedCall#__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.DenseBincount' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: failed while converting: 'main': Ops that can be supported by the flex runtime (enabled via setting the -emit-select-tf-ops flag):
tf.AddV2 {device = ""}
tf.DenseBincount {T = f32, Tidx = i64, binary_output = true, device = ""}
tf.Mul {device = ""}Ops that need custom implementation (enabled via setting the -emit-custom-ops flag):
tf.LookupTableFindV2 {device = "/job:localhost/replica:0/task:0/device:CPU:0"}
tf.MutableHashTableV2 {container = "", device = "", key_dtype = !tf.string, shared_name = "table_704", use_node_name_sharing = false, value_dtype = i64}
tf.MutableHashTableV2 {container = "", device = "", key_dtype = i64, shared_name = "table_615", use_node_name_sharing = false, value_dtype = i64}
My code:
import pathlib
import tensorflow as tf

# Save the model into a temp directory
export_dir = "/tmp/test_saved_model"
tf.saved_model.save(model, export_dir)

# Convert the model to TensorFlow Lite.
converter = tf.lite.TFLiteConverter.from_saved_model(export_dir)
tflite_model = converter.convert()

# Save the converted model.
tflite_model_file = pathlib.Path('/tmp/save_model_tflite.tflite')
tflite_model_file.write_bytes(tflite_model)
What is the cause of this error? My goal is to embed this model in an app with React Native. Thank you.
Looking at your trace, it seems like you have some HashTable ops. You need to set converter.allow_custom_ops = True in order to convert this model.
export_dir = "/content/test_saved_model"
tf.saved_model.save(model, export_dir)
# Convert the model into TF Lite.
converter = tf.lite.TFLiteConverter.from_saved_model(export_dir)
converter.allow_custom_ops = True
tflite_model = converter.convert()
#save model
tflite_model_files = pathlib.Path('/content/save_model_tflite.tflite')
tflite_model_files.write_bytes(tflite_model)
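If allow_custom_ops alone is not enough, the error log itself points at an alternative: the tf.AddV2 / tf.Mul / tf.DenseBincount ops can be handled by the flex runtime (Select TF ops), while the hash-table ops still need the custom-ops flag. A sketch of that variant, assuming the same SavedModel path as above (note that the resulting model then needs the Flex delegate, plus custom-op kernels, at inference time, which the TFLite runtime used by a React Native wrapper may not ship with):

import pathlib
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("/content/test_saved_model")
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,   # regular TFLite kernels
    tf.lite.OpsSet.SELECT_TF_OPS,     # Flex (Select TF) kernels for AddV2 / Mul / DenseBincount
]
converter.allow_custom_ops = True     # still needed for MutableHashTableV2 / LookupTableFindV2
tflite_model = converter.convert()

pathlib.Path('/content/save_model_tflite.tflite').write_bytes(tflite_model)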
Related
I am having trouble compiling a simple function in nopython mode with Numba:
import numpy as np
from numba import njit

@njit
def fun(x, y):
    points = np.hstack((x, y))
    return points

a = 5
b = 2
res = fun(a, b)
While this very simple script works without the @njit decorator, running it with the decorator throws the error:
TypingError: Failed in nopython mode pipeline (step: nopython frontend)
No implementation of function Function(<function hstack at 0x7f491475fe50>) found for signature:
>>> hstack(UniTuple(int64 x 2))
There are 4 candidate implementations:
- Of which 4 did not match due to:
Overload in function '_OverloadWrapper._build.<locals>.ol_generated': File: numba/core/overload_glue.py: Line 129.
With argument(s): '(UniTuple(int64 x 2))':
Rejected as the implementation raised a specific error:
TypeError: np.hstack(): expecting a non-empty tuple of arrays, got UniTuple(int64 x 2)
raised from /usr/local/lib/python3.8/dist-packages/numba/core/typing/npydecl.py:748
During: resolving callee type: Function(<function hstack at 0x7f491475fe50>)
During: typing of call at <ipython-input-41-7a0a3bcd4b1a> (28)
File "<ipython-input-41-7a0a3bcd4b1a>", line 28:
def fun(x, y):
points = np.hstack((x, y))
^
If I try to stack one scalar and one array (which might be the case in the original function from which this problem arose), the behavior doesn't change:
b = np.ones(3)
res = fun(a,b)
TypingError: Failed in nopython mode pipeline (step: nopython frontend)
No implementation of function Function(<function hstack at 0x7f491475fe50>) found for signature:
>>> hstack(Tuple(int64, array(float64, 1d, C)))
There are 4 candidate implementations:
- Of which 4 did not match due to:
Overload in function '_OverloadWrapper._build.<locals>.ol_generated': File: numba/core/overload_glue.py: Line 129.
With argument(s): '(Tuple(int64, array(float64, 1d, C)))':
Rejected as the implementation raised a specific error:
TypeError: np.hstack(): expecting a non-empty tuple of arrays, got Tuple(int64, array(float64, 1d, C))
raised from /usr/local/lib/python3.8/dist-packages/numba/core/typing/npydecl.py:748
During: resolving callee type: Function(<function hstack at 0x7f491475fe50>)
During: typing of call at <ipython-input-42-39bffd13df71> (28)
File "<ipython-input-42-39bffd13df71>", line 28:
def fun(x, y):
points = np.hstack((x, y))
This is very puzzling to me. I am using Numba 0.56.4, which should be the latest stable release.
A very similar behavior happens with np.concatenate, too.
Any suggestion would be much appreciated.
Thank you!
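For what it's worth, the error text above says Numba's np.hstack overload only accepts a tuple of arrays, not scalars. A minimal workaround sketch under that assumption, promoting the scalar to a 1-element array before the call:

import numpy as np
from numba import njit

@njit
def fun(x, y):
    # both arguments are expected to be 1-D arrays here
    return np.hstack((x, y))

a = np.array([5.0])   # the scalar 5 promoted to a 1-element array
b = np.ones(3)
res = fun(a, b)       # -> array([5., 1., 1., 1.])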
I want to use numpy.round_ in a method of a class. I want to accelerate any calculation done by methods of this class with Numba.
In general, this works fine, but I somehow cannot get numpy.round_ running: when using numpy.round_, Numba throws an error.
Here is the code of a reduced example:
from numba import types
from numba.experimental import jitclass
import numpy as np

spec = [
    ('arr', types.Array(types.uint8, 1, 'C')),
    ('quot', types.Array(types.float64, 1, 'C')),
]

@jitclass(spec)
class test:
    def __init__(self):
        self.arr = np.array((130, 190, 130), dtype=np.uint8)

    def rnd_(self):
        quot = np.zeros(3, dtype=np.float64)
        val = self.arr
        quot = np.round(val / 3.0)
        return quot

t = test()
a = t.rnd_()
It throws the following error:
TypingError: - Resolution failure for literal arguments:
Failed in nopython mode pipeline (step: nopython frontend)
No implementation of function Function(<function round_ at 0x0000021B3500D870>) found for signature:
round_(array(float64, 1d, C))
There are 4 candidate implementations:
- Of which 4 did not match due to:
Overload in function '_OverloadWrapper._build.<locals>.ol_generated': File: numba\core\overload_glue.py: Line 131.
With argument(s): '(array(float64, 1d, C))':
Rejected as the implementation raised a specific error:
TypingError: Failed in nopython mode pipeline (step: nopython frontend)
No implementation of function Function(<intrinsic stub>) found for signature:
stub(array(float64, 1d, C))
There are 2 candidate implementations:
- Of which 2 did not match due to:
Intrinsic of function 'stub': File: numba\core\overload_glue.py: Line 35.
With argument(s): '(array(float64, 1d, C))':
No match.
During: resolving callee type: Function(<intrinsic stub>)
During: typing of call at <string> (3)
File "<string>", line 3:
<source missing, REPL/exec in use?>
raised from C:\ProgramData\Anaconda3\envs\mybase_conda\lib\site-packages\numba\core\typeinfer.py:1086
During: resolving callee type: Function(<function round_ at 0x0000021B3500D870>)
During: typing of call at .......\python\playground\tmp.py (27)
File "tmp.py", line 27:
def rnd_(self):
<source elided>
val = self.arr
quot = np.round(val/3.0)
^
- Resolution failure for non-literal arguments:
None
During: resolving callee type: BoundFunction((<class 'numba.core.types.misc.ClassInstanceType'>, 'rnd_') for instance.jitclass.test#21b3c031930<arr:array(uint8, 1d, C),quot:array(float64, 1d, C)>)
During: typing of call at <string> (3)
What am I doing wrong?
It seems you need to pass round's optional arguments as well. I could reproduce the error with an even smaller example:
import numba as nb
import numpy as np

@nb.jit(nopython=True)
def foo(x):
    return np.round(x)
The fix to this is something like:
@nb.jit(nopython=True)
def foo(x):
    out = np.empty_like(x)
    np.round(x, 0, out)
    return out
So for your case, it should be:
def rnd_(self):
    quot = np.zeros(3, dtype=np.float64)
    np.round(self.arr / 3.0, 0, quot)
    return quot
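A quick self-contained check of that pattern (a sketch outside the jitclass, just to confirm that the three-argument form types):

import numba as nb
import numpy as np

@nb.jit(nopython=True)
def rounded_third(arr):
    out = np.empty_like(arr / 3.0)   # float64 output buffer
    np.round(arr / 3.0, 0, out)      # (values, decimals, out) is the form Numba resolves
    return out

print(rounded_third(np.array((130, 190, 130), dtype=np.uint8)))  # [43. 63. 43.]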
What does this error message mean?
TypeError: Could not build a TypeSpec for name: "tf.print/PrintV2"
op: "PrintV2"
input: "tf.print/StringFormat"
attr {
  key: "end"
  value {
    s: "\n"
  }
}
attr {
  key: "output_stream"
  value {
    s: "stdout"
  }
}
of unsupported type <class 'google3.third_party.tensorflow.python.framework.ops.Operation'>
I'm printing the shape of a tensor. My code "works" without the print, so I'm sure it is this statement, and the tensor is valid. I can print the shape of a tensor in a test colab. I'm clueless how to narrow this down and debug this. My failure is in a big hairy program.
I can't find any information on the web about what might be causing this error.
What does it mean when I get a TypeSpec error from a tf.print?
-- Malcolm
(TF 2.7.0)
I'm sorry for the tardy follow-up.
It turns out that the output from Keras layers is not a regular tf.Tensor. I still don't understand the reason, the error message, or how to give a better message. :-(
Here is a simple example of the problem (and the error message) and an (undocumented) solution.
import tensorflow as tf

keras_input = tf.keras.layers.Input([10])

tf.print(keras_input)
# ==> TypeError: Could not build a TypeSpec for name: "tf.print_2/PrintV2"

tf.keras.backend.print_tensor(keras_input)
# ==> <KerasTensor: shape=(None, 10) dtype=float32 (created by layer 'tf.keras.backend.print_tensor')>
So the moral of the story is: use tf.keras.backend.print_tensor when working with Keras models.
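For completeness, a minimal sketch of wiring print_tensor into a functional model (the Dense layer and the message text are just for illustration); the return value of print_tensor is used downstream, so the print actually runs when the model is called:

import tensorflow as tf

inputs = tf.keras.layers.Input([10])
x = tf.keras.backend.print_tensor(inputs, message="input batch:")
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)

model(tf.ones([2, 10]))  # prints the batch values, then returns the Dense output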
Excuse my English.
I've been trying to work with the Estimator API of TensorFlow (v2.x), but when I try to convert a model from tf.estimator to TFLite with this code:
import tensorflow as tf
import numpy as np

feature_name = "features"
feature_columns = [tf.feature_column.numeric_column(feature_name, shape=[2])]

classifier = tf.estimator.LinearClassifier(
    feature_columns=feature_columns,
    n_classes=2,
    model_dir="Z:\\tests\\iris")

feature_spec = {'features': tf.io.FixedLenFeature(shape=[2], dtype=np.float32)}
serving_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec)
classifier.export_saved_model(export_dir_base='Z:\\tests\\iris\\', serving_input_receiver_fn=serving_fn)

saved_model_obj = tf.saved_model.load("Z:\\tests\\iris\\1613055608")
concrete_func = saved_model_obj.signatures['serving_default']
converter = tf.lite.TFLiteConverter.from_concrete_functions([concrete_func])
print(saved_model_obj.signatures.keys())

converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.experimental_new_converter = True
tflite_model = converter.convert()

with open('Z:\\tests\\model.tflite_estimators', 'wb') as f:
    f.write(tflite_model)
I got the following error:
ConverterError: C:\Users\\.....\tensorflow\python\saved_model\load.py:909:0: error: 'tf.ParseExampleV2' op is neither a custom op nor a flex op
C:\Users\\.....\tensorflow\python\saved_model\load.py:859:0: note: called from
P:\\.....\sanstitre3.py:19:0: note: called from
C:\Users\\.....\AppData\Local\Continuum\anaconda3\lib\site-packages\spyder_kernels\customize\spydercustomize.py:465:0: note: called from
C:\Users\\.....\AppData\Local\Continuum\anaconda3\lib\site-packages\spyder_kernels\customize\spydercustomize.py:578:0: note: called from
<ipython-input-115-f30bf3b642d5>:1:0: note: called from
C:\Users\\.....\AppData\Local\Continuum\anaconda3\lib\site-packages\IPython\core\interactiveshell.py:3343:0: note: called from
C:\Users\\.....\AppData\Local\Continuum\anaconda3\lib\site-packages\IPython\core\interactiveshell.py:3263:0: note: called from
C:\Users\\.....\AppData\Local\Continuum\anaconda3\lib\site-packages\IPython\core\interactiveshell.py:3072:0: note: called from
C:\Users\\.....\AppData\Local\Continuum\anaconda3\lib\site-packages\IPython\core\async_helpers.py:68:0: note: called from
<unknown>:0: error: failed while converting: 'main': Ops that can be supported by the flex runtime (enabled via setting the -emit-select-tf-ops flag):
tf.ParseExampleV2 {dense_shapes = [#tf.shape<2>], device = "", num_sparse = 0 : i64, result_segment_sizes = dense<[0, 0, 0, 1, 0, 0]> : vector<6xi32>}
Someone online already proposed adding these two lines after converter.experimental_new_converter = True:
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS,
                                       tf.lite.OpsSet.SELECT_TF_OPS]
It compiles without errors, just warnings, but when I put the TFLite model on my STM32, it gives me the error TOOL ERROR/ Unknown layer type FlexParseExampleV2, stopping.
Can someone help me with this?
Have a nice day.
TensorFlow Lite Micro doesn't support the Flex delegate, so Select TF ops can't be run on MCUs. You can try restructuring your model with (for example) the Keras Sequential API instead, so that it converts with only built-in TFLite ops.
context: https://github.com/tensorflow/tensorflow/issues/34350#issuecomment-579027135
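As a rough illustration of that suggestion, here is a sketch of a comparable 2-feature binary classifier built with the Keras Sequential API (the layer sizes and paths are assumptions, not taken from your Estimator). It takes a plain float tensor instead of serialized tf.Example protos, so no tf.ParseExampleV2 ends up in the graph and the converter can stay on built-in ops only:

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(2,)),
    tf.keras.layers.Dense(1, activation='sigmoid'),  # linear binary classifier
])
model.compile(optimizer='sgd', loss='binary_crossentropy')
# ... train with model.fit(features, labels) ...

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS]  # builtins only
tflite_model = converter.convert()

with open('Z:\\tests\\model_keras.tflite', 'wb') as f:
    f.write(tflite_model)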
I'm trying to quantize a model with TensorFlow 2.3.0. I'm having some trouble with saving the final result, and it's not clear to me what the exact issue is. Here's my code:
import os
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "2"

import pathlib
import tensorflow as tf

saved_model_dir = "quantization/recognizer/"
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

tflite_model_quant_file = pathlib.Path("quantization/recognizer_quant.tflite")
tflite_model_quant_file.write_bytes(tflite_model)
When I run the above code, it just starts spitting out an endless stream of bytes that ultimately crashes my terminal. I've captured part of the output below:
loc(callsite(callsite(callsite(unknown at "functional_9/lstm_10/PartitionedCall#__inference__wrapped_model_37849") at "StatefulPartitionedCall#__inference_signature_wrapper_51309") at "StatefulPartitionedCall")): error: We cannot duplicate the value since it's not constant.
error: Failed to duplicate values for the stateful op
Exception Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/tensorflow/lite/python/convert.py in toco_convert_protos(model_flags_str, toco_flags_str, input_data_str, debug_info_str, enable_mlir_converter)
198 debug_info_str,
--> 199 enable_mlir_converter)
200 return model_str
/usr/local/lib/python3.6/dist-packages/tensorflow/lite/python/wrap_toco.py in wrapped_toco_convert(model_flags_str, toco_flags_str, input_data_str, debug_info_str, enable_mlir_converter)
37 debug_info_str,
---> 38 enable_mlir_converter)
39
Exception: <unknown>:0: error: loc(callsite(callsite(callsite(unknown at "functional_9/lstm_10/PartitionedCall#__inference__wrapped_model_37849") at "StatefulPartitionedCall#__inference_signature_wrapper_51309") at "StatefulPartitionedCall")): We cannot duplicate the value since it's not constant.
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: note: loc(callsite(callsite(callsite(unknown at "functional_9/lstm_10/PartitionedCall#__inference__wrapped_model_37849") at "StatefulPartitionedCall#__inference_signature_wrapper_51309") at "StatefulPartitionedCall")): see current operation: %123 = "tfl.unidirectional_sequence_lstm"(%118, %cst_55, %cst_56, %cst_57, %cst_58, %cst_47, %cst_48, %cst_49, %cst_50, %cst_111, %cst_111, %cst_111, %cst_51, %cst_52, %cst_53, %cst_54, %cst_111, %cst_111, %122, %122, %cst_111, %cst_111, %cst_111, %cst_111) {cell_clip = 1.000000e+01 : f32, fused_activation_function = "TANH", proj_clip = 0.000000e+00 : f32, time_major = false} : (tensor<?x?x128xf32>, tensor<128x128xf32>, tensor<128x128xf32>, tensor<128x128xf32>, tensor<128x128xf32>, tensor<128x128xf32>, tensor<128x128xf32>, tensor<128x128xf32>, tensor<128x128xf32>, none, none, none, tensor<128xf32>, tensor<128xf32>, tensor<128xf32>, tensor<128xf32>, none, none, tensor<?x128xf32>, tensor<?x128xf32>, none, none, none, none) -> tensor<?x?x128xf32>
<unknown>:0: error: Failed to duplicate values for the stateful op
<unknown>:0: note: see current operation: "func"() ( {
^bb0(%arg0: tensor<?x31x200x1xf32>): // no predecessors
%cst = "std.constant"() {value = dense<"0x24DCBABB3DE4A1BD88A370BDEA32E63DEE6B0ABCDA88983A0DCF663DA68FA2B90BE8013E52E7A3BD37CDFCBC3CDC4FBD480C3E3EA53BB1BB87379A3E62D1A4BC29FA0CBC35494ABBE4EE9FBBF45DDE3DA6FE86BBE4734D3D50DC40BEE242BA3D1E02EFBB72AFF8BD7AA8ED3DFCD65CBDB8C6D1BAA2E480BC89914CBDE92E023E60F8D03D4C0C423EDD5CA53EDE6E9DBD09E075BD0FAE6CBCFCA8863C6916DCBC94D941BCC93EDD3D5767883B8C3FA53DFB53953D3CE828BB70A3ADBD29ED9BBC56E6E8BCEA7839BD71EDA13DD5917D3EEBCAC43D047498BBCF196FBC2EF2473E0C1412BDCF8BCABB608C87BDD8AB993EB3E52E3E5FCC68BCBBD3043DE0D5BD3DD12282BB6B4B543E52333BBD87E2A8BD1DEAA0BDC36B7ABAF1D85DBDADF5B2BB109CA2BB3CC67B3CF53198BBB0BA8D3D73935D3CAE532B3AE236C83E144DBDBB623E383E8692683CC4E59E3E251AEABB65EC8EBD7CBD0DBDB40BA4BC45A8383ED9668EBC4253D0BC727D3EBC10CEAEBBEAB3D0BB6917FA3D77650E3DE5289ABB02AF85BD96AA80BC8FC2DDBCB30484BD6F0ECBBBFF91C43DAC484EBD5DB9093E78F846BD91F06FBBD6B3893EEAF42D3CD62BC0BD84C5873D6A2887BB0F372EBE0AFDC7BBD4F1ABBC03B57A3E66E245BD723BB2BDA2CE223D1CEA
Is it obvious from the above whether there's truly an error in the quantization process? What I cannot figure out is why it's still logging the model bytes (the last part, 0x24DCBABB3DE4A1BD88A370BD..., ultimately overwhelms the terminal), despite my attempt to disable TensorFlow's logging.
I faced the same problem while using TensorFlow 2.3.0.
I then used tf-nightly to run the optimization and it worked well.
Try saving your converted model with
open(tflite_model_quant_file, 'wb').write(tflite_model)
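The same save step as a small sketch with a context manager and explicit binary mode, so the serialized flatbuffer is not mangled on write:

with open(tflite_model_quant_file, 'wb') as f:
    f.write(tflite_model)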