I'm trying to quantize a model with TensorFlow 2.3.0. I'm having trouble saving the final result, and it's not clear to me what the exact issue is. Here's my code:
import os
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "2"
import tensorflow as tf
saved_model_dir = "quantization/recognizer/"
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()
tflite_model_quant_file = "quantization/recognizer_quant.tflite"
tflite_model_quant_file.write_bytes(tflite_model)
When I run the above code, it just starts spitting out an endless stream of bytes that ultimately crashes my terminal. I've captured part of the output below:
loc(callsite(callsite(callsite(unknown at "functional_9/lstm_10/PartitionedCall#__inference__wrapped_model_37849") at "StatefulPartitionedCall#__inference_signature_wrapper_51309") at "StatefulPartitionedCall")): error: We cannot duplicate the value since it's not constant.
error: Failed to duplicate values for the stateful op
Exception Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/tensorflow/lite/python/convert.py in toco_convert_protos(model_flags_str, toco_flags_str, input_data_str, debug_info_str, enable_mlir_converter)
198 debug_info_str,
--> 199 enable_mlir_converter)
200 return model_str
/usr/local/lib/python3.6/dist-packages/tensorflow/lite/python/wrap_toco.py in wrapped_toco_convert(model_flags_str, toco_flags_str, input_data_str, debug_info_str, enable_mlir_converter)
37 debug_info_str,
---> 38 enable_mlir_converter)
39
Exception: <unknown>:0: error: loc(callsite(callsite(callsite(unknown at "functional_9/lstm_10/PartitionedCall#__inference__wrapped_model_37849") at "StatefulPartitionedCall#__inference_signature_wrapper_51309") at "StatefulPartitionedCall")): We cannot duplicate the value since it's not constant.
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: note: loc(callsite(callsite(callsite(unknown at "functional_9/lstm_10/PartitionedCall#__inference__wrapped_model_37849") at "StatefulPartitionedCall#__inference_signature_wrapper_51309") at "StatefulPartitionedCall")): see current operation: %123 = "tfl.unidirectional_sequence_lstm"(%118, %cst_55, %cst_56, %cst_57, %cst_58, %cst_47, %cst_48, %cst_49, %cst_50, %cst_111, %cst_111, %cst_111, %cst_51, %cst_52, %cst_53, %cst_54, %cst_111, %cst_111, %122, %122, %cst_111, %cst_111, %cst_111, %cst_111) {cell_clip = 1.000000e+01 : f32, fused_activation_function = "TANH", proj_clip = 0.000000e+00 : f32, time_major = false} : (tensor<?x?x128xf32>, tensor<128x128xf32>, tensor<128x128xf32>, tensor<128x128xf32>, tensor<128x128xf32>, tensor<128x128xf32>, tensor<128x128xf32>, tensor<128x128xf32>, tensor<128x128xf32>, none, none, none, tensor<128xf32>, tensor<128xf32>, tensor<128xf32>, tensor<128xf32>, none, none, tensor<?x128xf32>, tensor<?x128xf32>, none, none, none, none) -> tensor<?x?x128xf32>
<unknown>:0: error: Failed to duplicate values for the stateful op
<unknown>:0: note: see current operation: "func"() ( {
^bb0(%arg0: tensor<?x31x200x1xf32>): // no predecessors
%cst = "std.constant"() {value = dense<"0x24DCBABB3DE4A1BD88A370BDEA32E63DEE6B0ABCDA88983A0DCF663DA68FA2B90BE8013E52E7A3BD37CDFCBC3CDC4FBD480C3E3EA53BB1BB87379A3E62D1A4BC29FA0CBC35494ABBE4EE9FBBF45DDE3DA6FE86BBE4734D3D50DC40BEE242BA3D1E02EFBB72AFF8BD7AA8ED3DFCD65CBDB8C6D1BAA2E480BC89914CBDE92E023E60F8D03D4C0C423EDD5CA53EDE6E9DBD09E075BD0FAE6CBCFCA8863C6916DCBC94D941BCC93EDD3D5767883B8C3FA53DFB53953D3CE828BB70A3ADBD29ED9BBC56E6E8BCEA7839BD71EDA13DD5917D3EEBCAC43D047498BBCF196FBC2EF2473E0C1412BDCF8BCABB608C87BDD8AB993EB3E52E3E5FCC68BCBBD3043DE0D5BD3DD12282BB6B4B543E52333BBD87E2A8BD1DEAA0BDC36B7ABAF1D85DBDADF5B2BB109CA2BB3CC67B3CF53198BBB0BA8D3D73935D3CAE532B3AE236C83E144DBDBB623E383E8692683CC4E59E3E251AEABB65EC8EBD7CBD0DBDB40BA4BC45A8383ED9668EBC4253D0BC727D3EBC10CEAEBBEAB3D0BB6917FA3D77650E3DE5289ABB02AF85BD96AA80BC8FC2DDBCB30484BD6F0ECBBBFF91C43DAC484EBD5DB9093E78F846BD91F06FBBD6B3893EEAF42D3CD62BC0BD84C5873D6A2887BB0F372EBE0AFDC7BBD4F1ABBC03B57A3E66E245BD723BB2BDA2CE223D1CEA
Is it obvious from the above whether there's truly an error in the quantization process? What I cannot figure out is why it's still logging the model bytes (the last part 0x24DCBABB3DE4A1BD88A370BD... ultimately overwhelms the terminal), despite my attempt to disable TensorFlow's logging.
I faced the same problem with TensorFlow 2.3.0.
I then used tf-nightly to run the optimization and it worked well.
Your save step would also fail, because write_bytes is a pathlib.Path method, not a str method. Try saving your converted model with
open(tflite_model_quant_file, 'wb').write(tflite_model)
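Putting the pieces together, here is a minimal sketch of the full conversion-and-save flow, assuming the SavedModel at quantization/recognizer/ converts cleanly under tf-nightly:
import os
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "2"
import pathlib
import tensorflow as tf

saved_model_dir = "quantization/recognizer/"
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

# write_bytes() only exists on pathlib.Path objects, so wrap the path first
# (or use the open(..., 'wb') form shown above)
tflite_model_quant_file = pathlib.Path("quantization/recognizer_quant.tflite")
tflite_model_quant_file.write_bytes(tflite_model)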
Related
Using the example at
https://www.tensorflow.org/tutorials/structured_data/preprocessing_layers
I created a model with my own data. I want to save it in TensorFlow Lite format. I save it as a SavedModel, but while converting I get many errors. The last error I encountered:
WARNING:tensorflow:AutoGraph could not transform <function canonicalize_signatures.<locals>.signature_wrapper at 0x7f4f61cd0560> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: closure mismatch, requested ('signature_function', 'signature_key'), but source function had ()
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
WARNING: AutoGraph could not transform <function canonicalize_signatures.<locals>.signature_wrapper at 0x7f4f61cd0560> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: closure mismatch, requested ('signature_function', 'signature_key'), but source function had ()
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
WARNING:tensorflow:AutoGraph could not transform <function _trace_resource_initializers.<locals>._wrap_obj_initializer.<locals>.<lambda> at 0x7f4f61d28290> and will run it as-is.
Cause: could not parse the source code of <function _trace_resource_initializers.<locals>._wrap_obj_initializer.<locals>.<lambda> at 0x7f4f61d28290>: no matching AST found
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
WARNING: AutoGraph could not transform <function _trace_resource_initializers.<locals>._wrap_obj_initializer.<locals>.<lambda> at 0x7f4f61d28290> and will run it as-is.
Cause: could not parse the source code of <function _trace_resource_initializers.<locals>._wrap_obj_initializer.<locals>.<lambda> at 0x7f4f61d28290>: no matching AST found
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
WARNING:tensorflow:AutoGraph could not transform <function _trace_resource_initializers.<locals>._wrap_obj_initializer.<locals>.<lambda> at 0x7f4f61d28e60> and will run it as-is.
Cause: could not parse the source code of <function _trace_resource_initializers.<locals>._wrap_obj_initializer.<locals>.<lambda> at 0x7f4f61d28e60>: no matching AST found
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
WARNING: AutoGraph could not transform <function _trace_resource_initializers.<locals>._wrap_obj_initializer.<locals>.<lambda> at 0x7f4f61d28e60> and will run it as-is.
Cause: could not parse the source code of <function _trace_resource_initializers.<locals>._wrap_obj_initializer.<locals>.<lambda> at 0x7f4f61d28e60>: no matching AST found
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
INFO:tensorflow:Assets written to: /tmp/test_saved_model/assets
---------------------------------------------------------------------------
Exception Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/tensorflow/lite/python/convert.py in toco_convert_protos(model_flags_str, toco_flags_str, input_data_str, debug_info_str, enable_mlir_converter)
212 model body, the input/output will be quantized as well.
--> 213 inference_type: Data type for the activations. The default value is int8.
214 enable_numeric_verify: Experimental. Subject to change. Bool indicating
4 frames
Exception: <unknown>:0: error: loc("integer_lookup_1_index_table"): 'tf.MutableHashTableV2' op is neither a custom op nor a flex op
<unknown>:0: error: loc("string_lookup_1_index_table"): 'tf.MutableHashTableV2' op is neither a custom op nor a flex op
<unknown>:0: error: loc(callsite(callsite("model/string_lookup_1/string_lookup_1_index_table_lookup_table_find/LookupTableFindV2#__inference__wrapped_model_9475" at "StatefulPartitionedCall#__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.LookupTableFindV2' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: loc(callsite(callsite("model/category_encoding_3/bincount/add#__inference__wrapped_model_9475" at "StatefulPartitionedCall#__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.AddV2' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: loc(callsite(callsite("model/category_encoding_3/bincount/mul#__inference__wrapped_model_9475" at "StatefulPartitionedCall#__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.Mul' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: loc(callsite(callsite("model/category_encoding_3/bincount/DenseBincount#__inference__wrapped_model_9475" at "StatefulPartitionedCall#__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.DenseBincount' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: loc(callsite(callsite("model/integer_lookup_1/integer_lookup_1_index_table_lookup_table_find/LookupTableFindV2#__inference__wrapped_model_9475" at "StatefulPartitionedCall#__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.LookupTableFindV2' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: loc(callsite(callsite("model/category_encoding_2/bincount/add#__inference__wrapped_model_9475" at "StatefulPartitionedCall#__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.AddV2' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: loc(callsite(callsite("model/category_encoding_2/bincount/mul#__inference__wrapped_model_9475" at "StatefulPartitionedCall#__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.Mul' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: loc(callsite(callsite("model/category_encoding_2/bincount/DenseBincount#__inference__wrapped_model_9475" at "StatefulPartitionedCall#__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.DenseBincount' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: failed while converting: 'main': Ops that can be supported by the flex runtime (enabled via setting the -emit-select-tf-ops flag):
tf.AddV2 {device = ""}
tf.DenseBincount {T = f32, Tidx = i64, binary_output = true, device = ""}
tf.Mul {device = ""}
Ops that need custom implementation (enabled via setting the -emit-custom-ops flag):
tf.LookupTableFindV2 {device = "/job:localhost/replica:0/task:0/device:CPU:0"}
tf.MutableHashTableV2 {container = "", device = "", key_dtype = !tf.string, shared_name = "table_704", use_node_name_sharing = false, value_dtype = i64}
tf.MutableHashTableV2 {container = "", device = "", key_dtype = i64, shared_name = "table_615", use_node_name_sharing = false, value_dtype = i64}
During handling of the above exception, another exception occurred:
ConverterError Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/tensorflow/lite/python/convert.py in toco_convert_protos(model_flags_str, toco_flags_str, input_data_str, debug_info_str, enable_mlir_converter)
214 enable_numeric_verify: Experimental. Subject to change. Bool indicating
215 whether to add NumericVerify ops into the debug mode quantized model.
--> 216
217 Returns:
218 Quantized model in serialized form (e.g. a TFLITE model) with floating-point
ConverterError: <unknown>:0: error: loc("integer_lookup_1_index_table"): 'tf.MutableHashTableV2' op is neither a custom op nor a flex op
<unknown>:0: error: loc("string_lookup_1_index_table"): 'tf.MutableHashTableV2' op is neither a custom op nor a flex op
<unknown>:0: error: loc(callsite(callsite("model/string_lookup_1/string_lookup_1_index_table_lookup_table_find/LookupTableFindV2#__inference__wrapped_model_9475" at "StatefulPartitionedCall#__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.LookupTableFindV2' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: loc(callsite(callsite("model/category_encoding_3/bincount/add#__inference__wrapped_model_9475" at "StatefulPartitionedCall#__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.AddV2' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: loc(callsite(callsite("model/category_encoding_3/bincount/mul#__inference__wrapped_model_9475" at "StatefulPartitionedCall#__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.Mul' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: loc(callsite(callsite("model/category_encoding_3/bincount/DenseBincount#__inference__wrapped_model_9475" at "StatefulPartitionedCall#__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.DenseBincount' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: loc(callsite(callsite("model/integer_lookup_1/integer_lookup_1_index_table_lookup_table_find/LookupTableFindV2#__inference__wrapped_model_9475" at "StatefulPartitionedCall#__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.LookupTableFindV2' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: loc(callsite(callsite("model/category_encoding_2/bincount/add#__inference__wrapped_model_9475" at "StatefulPartitionedCall#__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.AddV2' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: loc(callsite(callsite("model/category_encoding_2/bincount/mul#__inference__wrapped_model_9475" at "StatefulPartitionedCall#__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.Mul' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: loc(callsite(callsite("model/category_encoding_2/bincount/DenseBincount#__inference__wrapped_model_9475" at "StatefulPartitionedCall#__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.DenseBincount' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: failed while converting: 'main': Ops that can be supported by the flex runtime (enabled via setting the -emit-select-tf-ops flag):
tf.AddV2 {device = ""}
tf.DenseBincount {T = f32, Tidx = i64, binary_output = true, device = ""}
tf.Mul {device = ""}
Ops that need custom implementation (enabled via setting the -emit-custom-ops flag):
tf.LookupTableFindV2 {device = "/job:localhost/replica:0/task:0/device:CPU:0"}
tf.MutableHashTableV2 {container = "", device = "", key_dtype = !tf.string, shared_name = "table_704", use_node_name_sharing = false, value_dtype = i64}
tf.MutableHashTableV2 {container = "", device = "", key_dtype = i64, shared_name = "table_615", use_node_name_sharing = false, value_dtype = i64}
Here is my code:
import pathlib
# Save the model into a temp directory
export_dir = "/tmp/test_saved_model"
tf.saved_model.save(model, export_dir)
# Convert the model into TF Lite.
converter = tf.lite.TFLiteConverter.from_saved_model(export_dir)
tflite_model = converter.convert()
# Save the converted model
tflite_model_file = pathlib.Path('/tmp/save_model_tflite.tflite')
tflite_model_file.write_bytes(tflite_model)
What is the cause of this error? My goal is to embed this model in a React Native app. Thank you.
Looking at your trace, it seems like you have some HashTable ops. You need to set converter.allow_custom_ops = True in order to convert this model.
export_dir = "/content/test_saved_model"
tf.saved_model.save(model, export_dir)
# Convert the model into TF Lite.
converter = tf.lite.TFLiteConverter.from_saved_model(export_dir)
converter.allow_custom_ops = True
tflite_model = converter.convert()
#save model
tflite_model_files = pathlib.Path('/content/save_model_tflite.tflite')
tflite_model_files.write_bytes(tflite_model)
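If allow_custom_ops alone is not enough for your runtime, the error message itself points at another option: the ops listed under "can be supported by the flex runtime" can be kept as TensorFlow ops via Select TF ops. A minimal sketch of that variant (note the resulting model then needs the Flex delegate available at inference time, which may not suit every deployment target):
converter = tf.lite.TFLiteConverter.from_saved_model(export_dir)
# Keep unsupported kernels (tf.AddV2, tf.Mul, tf.DenseBincount, ...) as TF ops
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,   # built-in TFLite ops
    tf.lite.OpsSet.SELECT_TF_OPS,     # fall back to selected TensorFlow ops
]
# The hash-table lookups are reported as custom ops, so this flag is still needed
converter.allow_custom_ops = True
tflite_model = converter.convert()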
Excuse my English.
I've been trying to work with the Estimator API of TensorFlow (v2.x), but I run into trouble when converting a model from tf.estimator to TFLite with this code:
import tensorflow as tf
import numpy as np
feature_name = "features"
feature_columns = [tf.feature_column.numeric_column(feature_name, shape=[2])]
classifier = tf.estimator.LinearClassifier(
    feature_columns=feature_columns,
    n_classes=2,
    model_dir="Z:\\tests\\iris")
feature_spec = {'features': tf.io.FixedLenFeature(shape=[2], dtype=np.float32)}
serving_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec)
classifier.export_saved_model(export_dir_base='Z:\\tests\\iris\\', serving_input_receiver_fn=serving_fn)
saved_model_obj = tf.saved_model.load("Z:\\tests\\iris\\1613055608")
concrete_func = saved_model_obj.signatures['serving_default']
converter = tf.lite.TFLiteConverter.from_concrete_functions([concrete_func])
print(saved_model_obj.signatures.keys())
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.experimental_new_converter = True
tflite_model = converter.convert()
with open('Z:\\tests\\model.tflite_estimators', 'wb') as f:
    f.write(tflite_model)
I got the following error:
ConverterError: C:\Users\\.....\tensorflow\python\saved_model\load.py:909:0: error: 'tf.ParseExampleV2' op is neither a custom op nor a flex op
C:\Users\\.....\tensorflow\python\saved_model\load.py:859:0: note: called from
P:\\.....\sanstitre3.py:19:0: note: called from
C:\Users\\.....\AppData\Local\Continuum\anaconda3\lib\site-packages\spyder_kernels\customize\spydercustomize.py:465:0: note: called from
C:\Users\\.....\AppData\Local\Continuum\anaconda3\lib\site-packages\spyder_kernels\customize\spydercustomize.py:578:0: note: called from
<ipython-input-115-f30bf3b642d5>:1:0: note: called from
C:\Users\\.....\AppData\Local\Continuum\anaconda3\lib\site-packages\IPython\core\interactiveshell.py:3343:0: note: called from
C:\Users\\.....\AppData\Local\Continuum\anaconda3\lib\site-packages\IPython\core\interactiveshell.py:3263:0: note: called from
C:\Users\\.....\AppData\Local\Continuum\anaconda3\lib\site-packages\IPython\core\interactiveshell.py:3072:0: note: called from
C:\Users\\.....\AppData\Local\Continuum\anaconda3\lib\site-packages\IPython\core\async_helpers.py:68:0: note: called from
<unknown>:0: error: failed while converting: 'main': Ops that can be supported by the flex runtime (enabled via setting the -emit-select-tf-ops flag):
tf.ParseExampleV2 {dense_shapes = [#tf.shape<2>], device = "", num_sparse = 0 : i64, result_segment_sizes = dense<[0, 0, 0, 1, 0, 0]> : vector<6xi32>}
Someone online already proposed adding these two lines after converter.experimental_new_converter = True:
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS,
                                       tf.lite.OpsSet.SELECT_TF_OPS]
It compiles without errors, just warnings, but when I put the TFLite model on my STM32, it gives me the error TOOL ERROR/ Unknown layer type FlexParseExampleV2, stopping.
Can someone help me with this?
Have a nice day.
TensorFlow Lite Micro doesn't support the Flex delegate, so Select TF ops can't be run on MCUs. You can try restructuring your model with (for example) the Keras Sequential API instead, so that it converts using only built-in TFLite ops.
Context: https://github.com/tensorflow/tensorflow/issues/34350#issuecomment-579027135
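As an illustration only (the layer sizes and training details here are assumptions, not taken from your model), a plain Keras model over the two raw float features avoids the tf.Example parsing path entirely, so no ParseExampleV2 op ends up in the graph:
import tensorflow as tf

# Hypothetical stand-in for the LinearClassifier: two input features, two classes
model = tf.keras.Sequential([
    tf.keras.layers.Dense(2, activation="softmax", input_shape=(2,)),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# model.fit(x_train, y_train, ...)  # train as usual on your data

converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
with open("Z:\\tests\\model_keras.tflite", "wb") as f:
    f.write(tflite_model)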
I'm using tensorflow==1.15.3 and I'm hitting a segmentation fault attempting int8 post-training quantization. The documentation for the 1.15 version of the TFLiteConverter can be found here.
I found a similar issue on github, but their solution to provide --add_postprocessing_op=true has not solved the segmentation fault.
I've debugged it using PDB and found exactly where it crashes. It never reaches my representative_dataset function. It faults when running CreateWrapperCPPFromBuffer(model_content):
> .../python3.6/site-packages/tensorflow_core/lite/python/optimize/calibrator.py(51)__init__()
-> .CreateWrapperCPPFromBuffer(model_content))
(Pdb) s
Fatal Python error: Segmentation fault
Current thread 0x00007ff40ee9f740 (most recent call first):
File ".../python3.6/site-packages/tensorflow_core/lite/python/optimize/calibrator.py", line 51 in __init__
File ".../python3.6/site-packages/tensorflow_core/lite/python/lite.py", line 236 in _calibrate_quantize_model
File ".../python3.6/site-packages/tensorflow_core/lite/python/lite.py", line 993 in convert
File ".../convert_model_to_tflite_int8.py", line 97 in <module>
File "<string>", line 1 in <module>
File "/usr/lib/python3.6/bdb.py", line 434 in run
File "/usr/lib/python3.6/pdb.py", line 1548 in _runscript
File "/usr/lib/python3.6/pdb.py", line 1667 in main
File "/usr/lib/python3.6/pdb.py", line 1694 in <module>
File "/usr/lib/python3.6/runpy.py", line 85 in _run_code
File "/usr/lib/python3.6/runpy.py", line 193 in _run_module_as_main
[1] 17668 segmentation fault (core dumped) python -m pdb convert_model_to_tflite_int8.py --add_postprocessing_op=true
Here is my conversion code:
converter = tf.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file=pb_model_path,
    input_arrays=["device_0/input_node_name:1"],
    output_arrays=["device_0/output_node_name"],
    input_shapes={"device_0/input_node_name:1": [100, 16384]}
)
converter.allow_custom_ops = True
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

def test():
    pdb.set_trace()
    print(' ! ! ! representative_dataset_gen ! ! ! ')
    zeros = np.zeros(shape=(1, 100, 16384), dtype='int8')
    ds = tf.data.Dataset.from_tensor_slices((zeros)).batch(1)
    for input_value in ds.take(1):
        yield [input_value]

converter.representative_dataset = test
pdb.set_trace()
tflite_model = converter.convert()

tflite_model_size = open(model_name, 'wb').write(tflite_model)
print('TFLite Model is %d bytes' % tflite_model_size)
FWIW my model conversion works for tf.float16 (not using representative_dataset there, though).
Upgrading my tf version to 2.3 solved the segmentation fault. My model code isn't compatible with tf==2.x yet, but luckily the conversion code is independent of that, so the upgrade went smoothly.
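For reference, a sketch of what the conversion can look like after the upgrade, reusing pb_model_path, the node names, and the shapes from the question. In TF 2.x the frozen-graph entry point lives under tf.compat.v1, and the representative dataset should yield float32 inputs in the model's original range, so the zeros below are only placeholders:
import numpy as np
import tensorflow as tf  # 2.3

converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file=pb_model_path,
    input_arrays=["device_0/input_node_name:1"],
    output_arrays=["device_0/output_node_name"],
    input_shapes={"device_0/input_node_name:1": [100, 16384]},
)
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

def representative_dataset():
    # Placeholder calibration data: real samples should come from your
    # training/validation set, as float32, matching the model's input shape
    for _ in range(1):
        yield [np.zeros(shape=(100, 16384), dtype=np.float32)]

converter.representative_dataset = representative_dataset
tflite_model = converter.convert()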
I am a beginner in TensorFlow. I am trying to add summaries to the neural network code from this link: https://pythonprogramming.net/rnn-tensorflow-python-machine-learning-tutorial/
I got an error, but I can't figure out what is wrong.
Here is the code:
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
mnist=input_data.read_data_sets("/tmp/data",one_hot=True)
n_nodes_hl1=500
n_nodes_hl2=500
n_nodes_hl3=500
n_classes=10
batch_size=100
x=tf.placeholder("float",[None,784])
y=tf.placeholder("float")
def neural_net(data):
    hidden_1_layer = {"weight": tf.Variable(tf.random_normal([784, n_nodes_hl1])), "bias": tf.Variable(tf.random_normal([n_nodes_hl1]))}
    hidden_2_layer = {"weight": tf.Variable(tf.random_normal([n_nodes_hl1, n_nodes_hl2])), "bias": tf.Variable(tf.random_normal([n_nodes_hl2]))}
    hidden_3_layer = {"weight": tf.Variable(tf.random_normal([n_nodes_hl2, n_nodes_hl3])), "bias": tf.Variable(tf.random_normal([n_nodes_hl3]))}
    output_layer = {"weight": tf.Variable(tf.random_normal([n_nodes_hl3, n_classes])), "bias": tf.Variable(tf.random_normal([n_classes]))}
    l1 = tf.add(tf.matmul(data, hidden_1_layer["weight"]), hidden_1_layer["bias"])
    l1 = tf.nn.relu(l1)
    l2 = tf.add(tf.matmul(l1, hidden_2_layer["weight"]), hidden_2_layer["bias"])
    l2 = tf.nn.relu(l2)
    l3 = tf.add(tf.matmul(l2, hidden_3_layer["weight"]), hidden_3_layer["bias"])
    l3 = tf.nn.relu(l3)
    output = tf.matmul(l3, output_layer["weight"]) + output_layer["bias"]
    return output

def train_net(x):
    prediction = neural_net(x)
    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(prediction, y), name='cost')
    optimizer = tf.train.AdamOptimizer().minimize(cost)
    hm_epoch = 3
    for value in [x, y, prediction, cost]:
        tf.summary.scalar([value.op.name], value)
    summaries = tf.summary.merge_all()
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        summarywriter = tf.summary.FileWriter("layers", sess.graph)
        for epoch in range(hm_epoch):
            epoch_loss = 0
            for i in range(int(mnist.train.num_examples / batch_size)):
                epoch_x, epoch_y = mnist.train.next_batch(batch_size)
                # run the optimizer and cost together with the summaries so that
                # c is defined and epoch_loss can be accumulated
                _, c, summ = sess.run([optimizer, cost, summaries],
                                      feed_dict={x: epoch_x, y: epoch_y})
                summarywriter.add_summary(summ, i)
                epoch_loss += c
            print('epoch ', epoch, ' completed out of ', hm_epoch, " loss ", epoch_loss)
        correct = tf.equal(tf.argmax(prediction, 1), tf.argmax(y, 1))
        accuracy = tf.reduce_mean(tf.cast(correct, 'float'))
        print('accuracy ', accuracy.eval({x: mnist.test.images, y: mnist.test.labels}))
train_net(x)
Here is the error:
File "C:/Users/PC-Sara/AppData/Local/Programs/Python/Python35/tf-layers.py", line 69, in <module>
train_net(x)
File "C:/Users/PC-Sara/AppData/Local/Programs/Python/Python35/tf-layers.py", line 46, in train_net
tf.summary.scalar([value.op.name],value)
File "C:\Users\PC-Sara\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\summary\summary.py", line 114, in scalar
name = _clean_tag(name)
File "C:\Users\PC-Sara\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\summary\summary.py", line 86, in _clean_tag
new_name = _INVALID_TAG_CHARACTERS.sub('_', name)
TypeError: expected string or bytes-like object
tf.summary.scalar expects a name as the first argument, not an array. This should work instead:
tf.summary.scalar(value.op.name, value)
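Note that tf.summary.scalar also expects the tensor itself to hold a single value, so of the four tensors in your loop only cost is a natural fit; multi-dimensional tensors such as prediction are usually logged with tf.summary.histogram instead. A small sketch along those lines:
tf.summary.scalar(cost.op.name, cost)                  # scalar tensor -> scalar summary
tf.summary.histogram(prediction.op.name, prediction)   # distribution over elements
summaries = tf.summary.merge_all()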
I am trying to compute the gradients of a complex function in TensorFlow, but I am having some trouble.
Here is my code:
import numpy as np
import tensorflow as tf
def CompSEQ(seq, rho0):
    def EvolveRHO(prev, input):
        return tf.mul(tf.complex(input, 0.0), tf.matmul(rho0, prev))
    def ComputeP(p, rho):
        return p * tf.real(tf.trace(rho))
    rhos = tf.scan(EvolveRHO, seq, initializer=rho0)
    p = tf.scan(ComputeP, rhos, initializer=tf.constant(1.0))
    return tf.gather(p, [tf.size(seq)-1])[0]

N = 4
seq = tf.placeholder(tf.float32, shape=[5])
x = tf.Variable(tf.zeros([2*N*N], dtype=tf.float32))
seqP = CompSEQ(seq, tf.complex(tf.reshape(x[0:N*N], [N, N]),
                               tf.reshape(x[N*N:2*N*N], [N, N])))
#seqPp = tf.gradients([seqP], [x]) # THIS LINE CAUSES THE PROBLEM!!!

sess = tf.Session()
sess.run(tf.initialize_all_variables())
v = np.random.rand(2*N*N).astype(np.float32)
s0 = np.random.rand(5).astype(np.float32)
p = sess.run(seqP, feed_dict={seq: s0, x: v})
print('seqP', p)
I use an input float32 vector x that will be transformed into a complex matrix. All the computations are performed using complex numbers and the last tf.scan in the CompSEQ function transforms all the results into float32 by taking the real part.
If I comment out the call to tf.gradients (as in the code above), everything works fine, but when I try to compute the gradients I get the following error:
Traceback (most recent call last):
File "error.1.py", line 24, in <module>
seqPp = tf.gradients([seqP], [x]) # THIS LINE CAUSES THE PROBLEM!!!
File "/Users/tamburin/Library/Python/2.7/lib/python/site-packages/tensorflow/python/ops/gradients.py", line 486, in gradients
_VerifyGeneratedGradients(in_grads, op)
File "/Users/tamburin/Library/Python/2.7/lib/python/site-packages/tensorflow/python/ops/gradients.py", line 264, in _VerifyGeneratedGradients
dtypes.as_dtype(inp.dtype).name))
ValueError: Gradient type float32 generated for op name: "scan/while/Switch_1"
op: "Switch"
input: "scan/while/Merge_1"
input: "scan/while/LoopCond"
attr {
key: "T"
value {
type: DT_COMPLEX64
}
}
attr {
key: "_class"
value {
list {
s: "loc:#scan/while/Merge_1"
}
}
}
does not match input type complex64
Converting all the computations to float32 variables resolves the problem, but I need to keep the computation in complex variables (this is a simplified example of my real problem).