How to solve Tensorflow.js Converter error? - tensorflow

I'm trying to convert a frozen graph to a JSON file. I use this command:
tensorflowjs_converter --input_format=tf_frozen_model --output_node_names="SemanticPredictions" --saved_model_tags=serve frozen_inference_graph.pb mymodal
But it gives this error:
Traceback (most recent call last):
File "d:\programdata\anaconda3\envs\tensorflow0\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "d:\programdata\anaconda3\envs\tensorflow0\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "D:\ProgramData\Anaconda3\envs\tensorflow0\Scripts\tensorflowjs_converter.exe\__main__.py", line 7, in <module>
File "d:\programdata\anaconda3\envs\tensorflow0\lib\site-packages\tensorflowjs\converters\converter.py", line 645, in pip_main
main([' '.join(sys.argv[1:])])
File "d:\programdata\anaconda3\envs\tensorflow0\lib\site-packages\tensorflowjs\converters\converter.py", line 649, in main
convert(argv[0].split(' '))
File "d:\programdata\anaconda3\envs\tensorflow0\lib\site-packages\tensorflowjs\converters\converter.py", line 632, in convert
strip_debug_ops=args.strip_debug_ops)
File "d:\programdata\anaconda3\envs\tensorflow0\lib\site-packages\tensorflowjs\converters\tf_saved_model_conversion_v2.py", line 379, in convert_tf_frozen_model
strip_debug_ops=strip_debug_ops)
File "d:\programdata\anaconda3\envs\tensorflow0\lib\site-packages\tensorflowjs\converters\tf_saved_model_conversion_v2.py", line 133, in optimize_graph
graph.add_to_collection('train_op', graph.get_operation_by_name(name))
File "d:\programdata\anaconda3\envs\tensorflow0\lib\site-packages\tensorflow_core\python\framework\ops.py", line 3633, in get_operation_by_name
return self.as_graph_element(name, allow_tensor=False, allow_operation=True)
File "d:\programdata\anaconda3\envs\tensorflow0\lib\site-packages\tensorflow_core\python\framework\ops.py", line 3505, in as_graph_element
return self._as_graph_element_locked(obj, allow_tensor, allow_operation)
File "d:\programdata\anaconda3\envs\tensorflow0\lib\site-packages\tensorflow_core\python\framework\ops.py", line 3565, in _as_graph_element_locked
"graph." % repr(name))
KeyError: "The name 'SemanticPredictions' refers to an Operation not in the graph."
I don't know why it gives the KeyError: "The name 'SemanticPredictions' refers to an Operation not in the graph." error.
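One way to narrow this down is to check whether the graph actually contains an operation with that name before passing it to --output_node_names. A minimal sketch, assuming an install where tf.compat.v1 and tf.io.gfile are available (as the tensorflow_core paths in the traceback suggest):
import tensorflow as tf

# List the operations in frozen_inference_graph.pb whose names mention
# "Semantic", to see whether 'SemanticPredictions' is present in this graph.
graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile("frozen_inference_graph.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

for node in graph_def.node:
    if "Semantic" in node.name:
        print(node.name, node.op)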

Related

the error message while running model_test.py for tensorflow deeplab

I have been trying to test the installation of deeplab by following this:
# From tensorflow/models/research/
python deeplab/model_test.py
However, I got the following error message; specifically:
2018-04-25 10:54:23.488868: W tensorflow/core/framework/op_kernel.cc:1273] OP_REQUIRES failed at mkl_concat_op.cc:784 : Aborted: Operation received an exception:Status: 3, message: could not create a concat primitive descriptor, in file tensorflow/core/kernels/mkl_concat_op.cc:781
E...
======================================================================
ERROR: testForwardpassDeepLabv3plus (__main__.DeeplabModelTest)
----------------------------------------------------------------------
The complete traceback is as follows:
2018-04-25 10:54:23.488868: W tensorflow/core/framework/op_kernel.cc:1273] OP_REQUIRES failed at mkl_concat_op.cc:784 : Aborted: Operation received an exception:Status: 3, message: could not create a concat primitive descriptor, in file tensorflow/core/kernels/mkl_concat_op.cc:781
E...
======================================================================
ERROR: testForwardpassDeepLabv3plus (__main__.DeeplabModelTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/data/dsp_emerging/ugwz/virtualE/deeplab/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1327, in _do_call
return fn(*args)
File "/data/dsp_emerging/ugwz/virtualE/deeplab/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1312, in _run_fn
options, feed_dict, fetch_list, target_list, run_metadata)
File "/data/dsp_emerging/ugwz/virtualE/deeplab/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1420, in _call_tf_sessionrun
status, run_metadata)
File "/data/dsp_emerging/ugwz/virtualE/deeplab/lib/python3.6/site-packages/tensorflow/python/framework/errors_impl.py", line 516, in __exit__
c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.AbortedError: Operation received an exception:Status: 3, message: could not create a concat primitive descriptor, in file tensorflow/core/kernels/mkl_concat_op.cc:781
[[Node: concat = _MklConcatV2[N=2, T=DT_FLOAT, Tidx=DT_INT32, _kernel="MklOp", _device="/job:localhost/replica:0/task:0/device:CPU:0"](ResizeBilinear, aspp0/Relu, concat/axis, DMT/_283, aspp0/Relu:1, DMT/_284)]]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "deeplab/model_test.py", line 108, in testForwardpassDeepLabv3plus
outputs_to_scales_to_logits = sess.run(outputs_to_scales_to_logits)
File "/data/dsp_emerging/ugwz/virtualE/deeplab/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 905, in run
run_metadata_ptr)
File "/data/dsp_emerging/ugwz/virtualE/deeplab/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1140, in _run
feed_dict_tensor, options, run_metadata)
File "/data/dsp_emerging/ugwz/virtualE/deeplab/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1321, in _do_run
run_metadata)
File "/data/dsp_emerging/ugwz/virtualE/deeplab/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1340, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.AbortedError: Operation received an exception:Status: 3, message: could not create a concat primitive descriptor, in file tensorflow/core/kernels/mkl_concat_op.cc:781
[[Node: concat = _MklConcatV2[N=2, T=DT_FLOAT, Tidx=DT_INT32, _kernel="MklOp", _device="/job:localhost/replica:0/task:0/device:CPU:0"](ResizeBilinear, aspp0/Relu, concat/axis, DMT/_283, aspp0/Relu:1, DMT/_284)]]
Caused by op 'concat', defined at:
File "deeplab/model_test.py", line 120, in <module>
tf.test.main()
File "/data/dsp_emerging/ugwz/virtualE/deeplab/lib/python3.6/site-packages/tensorflow/python/platform/test.py", line 76, in main
return _googletest.main(argv)
File "/data/dsp_emerging/ugwz/virtualE/deeplab/lib/python3.6/site-packages/tensorflow/python/platform/googletest.py", line 99, in main
benchmark.benchmarks_main(true_main=main_wrapper)
File "/data/dsp_emerging/ugwz/virtualE/deeplab/lib/python3.6/site-packages/tensorflow/python/platform/benchmark.py", line 338, in benchmarks_main
true_main()
File "/data/dsp_emerging/ugwz/virtualE/deeplab/lib/python3.6/site-packages/tensorflow/python/platform/googletest.py", line 98, in main_wrapper
return app.run(main=g_main, argv=args)
File "/data/dsp_emerging/ugwz/virtualE/deeplab/lib/python3.6/site-packages/tensorflow/python/platform/app.py", line 126, in run
_sys.exit(main(argv))
File "/data/dsp_emerging/ugwz/virtualE/deeplab/lib/python3.6/site-packages/tensorflow/python/platform/googletest.py", line 69, in g_main
return unittest_main(argv=argv)
File "/data/dsp_emerging/ugwz/virtualE/deeplab/lib/python3.6/unittest/main.py", line 95, in __init__
self.runTests()
File "/data/dsp_emerging/ugwz/virtualE/deeplab/lib/python3.6/unittest/main.py", line 256, in runTests
self.result = testRunner.run(self.test)
File "/data/dsp_emerging/ugwz/virtualE/deeplab/lib/python3.6/unittest/runner.py", line 176, in run
test(result)
File "/data/dsp_emerging/ugwz/virtualE/deeplab/lib/python3.6/unittest/suite.py", line 84, in __call__
return self.run(*args, **kwds)
File "/data/dsp_emerging/ugwz/virtualE/deeplab/lib/python3.6/unittest/suite.py", line 122, in run
test(result)
File "/data/dsp_emerging/ugwz/virtualE/deeplab/lib/python3.6/unittest/suite.py", line 84, in __call__
return self.run(*args, **kwds)
File "/data/dsp_emerging/ugwz/virtualE/deeplab/lib/python3.6/unittest/suite.py", line 122, in run
test(result)
File "/data/dsp_emerging/ugwz/virtualE/deeplab/lib/python3.6/unittest/case.py", line 653, in __call__
return self.run(*args, **kwds)
File "/data/dsp_emerging/ugwz/virtualE/deeplab/lib/python3.6/unittest/case.py", line 605, in run
testMethod()
File "deeplab/model_test.py", line 105, in testForwardpassDeepLabv3plus
image_pyramid=[1.0])
File "/data/dsp_emerging/ugwz/virtualE/deeplab/models/research/deeplab/model.py", line 296, in multi_scale_logits
fine_tune_batch_norm=fine_tune_batch_norm)
File "/data/dsp_emerging/ugwz/virtualE/deeplab/models/research/deeplab/model.py", line 461, in _get_logits
fine_tune_batch_norm=fine_tune_batch_norm)
File "/data/dsp_emerging/ugwz/virtualE/deeplab/models/research/deeplab/model.py", line 424, in _extract_features
concat_logits = tf.concat(branch_logits, 3)
File "/data/dsp_emerging/ugwz/virtualE/deeplab/lib/python3.6/site-packages/tensorflow/python/ops/array_ops.py", line 1181, in concat
return gen_array_ops.concat_v2(values=values, axis=axis, name=name)
File "/data/dsp_emerging/ugwz/virtualE/deeplab/lib/python3.6/site-packages/tensorflow/python/ops/gen_array_ops.py", line 949, in concat_v2
"ConcatV2", values=values, axis=axis, name=name)
File "/data/dsp_emerging/ugwz/virtualE/deeplab/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "/data/dsp_emerging/ugwz/virtualE/deeplab/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3290, in create_op
op_def=op_def)
File "/data/dsp_emerging/ugwz/virtualE/deeplab/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1654, in __init__
self._traceback = self._graph._extract_stack() # pylint: disable=protected-access
AbortedError (see above for traceback): Operation received an exception:Status: 3, message: could not create a concat primitive descriptor, in file tensorflow/core/kernels/mkl_concat_op.cc:781
[[Node: concat = _MklConcatV2[N=2, T=DT_FLOAT, Tidx=DT_INT32, _kernel="MklOp", _device="/job:localhost/replica:0/task:0/device:CPU:0"](ResizeBilinear, aspp0/Relu, concat/axis, DMT/_283, aspp0/Relu:1, DMT/_284)]]
----------------------------------------------------------------------
Ran 5 tests in 23.571s
FAILED (errors=1)
Roll back to TensorFlow 1.6. This issue is still being addressed in versions 1.7 and above:
https://github.com/tensorflow/tensorflow/issues/17494
In Google Colab, with a Python 2 or Python 3 runtime and a GPU, I ran without any error using these commands:
!git clone https://github.com/tensorflow/models.git
%env PYTHONPATH=/env/python/:/content/models/research/:/content/models/research/slim
!python /content/models/research/deeplab/model_test.py
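If you go the rollback route instead, a quick sanity check (a minimal sketch, assuming TensorFlow was pinned back with pip) is to confirm the interpreter actually imports a 1.6.x build before re-running model_test.py:
import tensorflow as tf

# Confirm the rolled-back build is the one actually being imported.
assert tf.__version__.startswith("1.6"), tf.__version__
print("Using TensorFlow", tf.__version__)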

Tensorflow TF_records Generate Error

When I try to generate a TF record, I get the following error message:
Traceback (most recent call last):
File "generate_tfrecord.py", line 112, in <module>
tf.app.run()
File "/home/harisohmnaathss/anaconda3/envs/tensorflow/lib/python3.5/site-
packages/tensorflow/python/platform/app.
py", line 124, in run
_sys.exit(main(argv))
File "generate_tfrecord.py", line 98, in main
writer = tf.python_io.TFRecordWriter(FLAGS.output_path)
File "/home/harisohmnaathss/anaconda3/envs/tensorflow/lib/python3.5/site-
packages/tensorflow/python/lib/io/tf_rec
ord.py", line 106, in __init__
compat.as_bytes(path), compat.as_bytes(compression_type), status)
File "/home/harisohmnaathss/anaconda3/envs/tensorflow/lib/python3.5/site-
packages/tensorflow/python/framework/err
ors_impl.py", line 473, in __exit__
c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.NotFoundError: ; No such file or
directory
The command that I am trying to run is:
python generate_tfrecord.py --csv_input=data/Train_labels.csv --output_path=data/train.records
Any ideas on how to solve this issue?
You are supplying the output path as data/train.records instead of data/train.tfrecord.
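A small defensive sketch (a hypothetical helper, not part of generate_tfrecord.py) that checks the output path before handing it to TFRecordWriter, so a missing directory or an empty --output_path fails with a clearer message than the bare NotFoundError above:
import os
import tensorflow as tf

def open_writer(output_path):
    # Guard the path before TFRecordWriter touches it.
    if not output_path:
        raise ValueError("--output_path is empty; check how the flag was passed")
    out_dir = os.path.dirname(output_path) or "."
    if not os.path.isdir(out_dir):
        os.makedirs(out_dir)
    return tf.python_io.TFRecordWriter(output_path)

writer = open_writer("data/train.tfrecord")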

Tensorflow training error in last step

I am trying to train my own model for TensorFlow object detection. I followed this and this tutorial, and in the last step I tried to run this command
> python train.py --logtostderr --train_dir=training/ --pipeline_config_path=training/ssd_mobilenet_v1_pets.config
but I get this error:
> Traceback (most recent call last):
File "train.py", line 163, in <module>
tf.app.run()
File "C:\Program Files\Python36\lib\site-packages\tensorflow\python\platform\a
pp.py", line 48, in run
_sys.exit(main(_sys.argv[:1] + flags_passthrough))
File "train.py", line 91, in main
FLAGS.pipeline_config_path)
File "C:\Libraries\models-master\research\object_detection\utils\config_util.p
y", line 43, in get_configs_from_pipeline_file
text_format.Merge(proto_str, pipeline_config)
File "C:\Program Files\Python36\lib\site-packages\google\protobuf\text_format.
py", line 533, in Merge
descriptor_pool=descriptor_pool)
File "C:\Program Files\Python36\lib\site-packages\google\protobuf\text_format.
py", line 587, in MergeLines
return parser.MergeLines(lines, message)
File "C:\Program Files\Python36\lib\site-packages\google\protobuf\text_format.
py", line 620, in MergeLines
self._ParseOrMerge(lines, message)
File "C:\Program Files\Python36\lib\site-packages\google\protobuf\text_format.
py", line 635, in _ParseOrMerge
self._MergeField(tokenizer, message)
File "C:\Program Files\Python36\lib\site-packages\google\protobuf\text_format.
py", line 703, in _MergeField
(message_descriptor.full_name, name))
google.protobuf.text_format.ParseError: 195:3 : Message type "object_detection.p
rotos.TrainEvalPipelineConfig" has no field named "shuffle".
How can I solve it?
I fixed that problem by changing the .config file: I used ssd_mobilenet_v1_coco.config instead of ssd_mobilenet_v1_pets.config.
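When a pipeline config is rejected like this, it can also help to reproduce the parse outside of train.py and look at the reported line (195 here) directly. A minimal sketch, assuming the object_detection package from models/research is on PYTHONPATH and its protos are compiled:
from google.protobuf import text_format
from object_detection.protos import pipeline_pb2

# Parse the pipeline config by hand to see exactly what protobuf rejects.
pipeline_config = pipeline_pb2.TrainEvalPipelineConfig()
with open("training/ssd_mobilenet_v1_pets.config") as f:
    text_format.Merge(f.read(), pipeline_config)
print(pipeline_config.train_config)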

unorderable types: str() < tuple() when training a pet detector with the Google Object Detection API

I am training a pet detector with the Google Object Detection API and get the error below. Does it mean that the sorted function does not support dicts whose keys are tuples, and that the Object Detection API still does not support Python 3?
Traceback (most recent call last):
File "D:\Program Files\JetBrains\PyCharm 2017.1.1\helpers\pydev\pydevd.py", line 1578, in <module>
globals = debugger.run(setup['file'], None, None, is_module)
File "D:\Program Files\JetBrains\PyCharm 2017.1.1\helpers\pydev\pydevd.py", line 1015, in run
pydev_imports.execfile(file, globals, locals) # execute the script
File "D:\Program Files\JetBrains\PyCharm 2017.1.1\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "E:/Work/Lib/tensorflow/models/object_detection/train.py", line 198, in <module>
tf.app.run()
File "D:\Program Files\Python\Python35\lib\site-packages\tensorflow\python\platform\app.py", line 48, in run
_sys.exit(main(_sys.argv[:1] + flags_passthrough))
File "E:/Work/Lib/tensorflow/models/object_detection/train.py", line 194, in main
worker_job_name, is_chief, FLAGS.train_dir)
File "E:\Work\Lib\tensorflow\models\object_detection\trainer.py", line 184, in train
data_augmentation_options)
File "E:\Work\Lib\tensorflow\models\object_detection\trainer.py", line 77, in _create_input_queue
prefetch_queue_capacity=prefetch_queue_capacity)
File "E:\Work\Lib\tensorflow\models\object_detection\core\batcher.py", line 93, in __init__
num_threads=num_batch_queue_threads)
File "D:\Program Files\Python\Python35\lib\site-packages\tensorflow\python\training\input.py", line 919, in batch
name=name)
File "D:\Program Files\Python\Python35\lib\site-packages\tensorflow\python\training\input.py", line 697, in _batch
tensor_list = _as_tensor_list(tensors)
File "D:\Program Files\Python\Python35\lib\site-packages\tensorflow\python\training\input.py", line 385, in _as_tensor_list
return [tensors[k] for k in sorted(tensors)]
TypeError: unorderable types: str() < tuple()
I ran into the same problem. I traced it down to a Python 3 compatibility issue in TensorFlow, and I have submitted a fix for it here: https://github.com/tensorflow/tensorflow/pull/11039
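The underlying Python 3 behavior is easy to reproduce on its own; a minimal illustration (not the actual patch in that PR):
# In Python 3, str and tuple can no longer be compared, so sorting a dict
# whose keys mix the two types raises the same TypeError as in the traceback.
tensors = {"image": 1, ("groundtruth_boxes", 0): 2}

try:
    sorted(tensors)
except TypeError as e:
    print("sorted() failed:", e)

# Sorting on a comparable key (for example the string form) avoids it:
print(sorted(tensors, key=str))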

Error while running TensorFlow wide_n_deep Tutorial

I encountered the error:
AttributeError: 'NoneType' object has no attribute 'bucketize'
The full error is as follows:
Traceback (most recent call last):
File "wide_n_deep_tutorial_1.py", line 214, in <module>
train_and_eval()
File "wide_n_deep_tutorial_1.py", line 203, in train_and_eval
m.fit(input_fn=lambda: input_fn(df_train), steps=FLAGS.train_steps)
File "C:\Python35\lib\site-packages\tensorflow\contrib\learn\python\learn\estimators\dnn_linear_combined.py", line 711, in fit
max_steps=max_steps)
File "C:\Python35\lib\site-packages\tensorflow\python\util\deprecation.py", line 191, in new_func
return func(*args, **kwargs)
File "C:\Python35\lib\site-packages\tensorflow\contrib\learn\python\learn\estimators\estimator.py", line 355, in fit
max_steps=max_steps)
File "C:\Python35\lib\site-packages\tensorflow\contrib\learn\python\learn\estimators\estimator.py", line 699, in _train_model
train_ops = self._get_train_ops(features, labels)
File "C:\Python35\lib\site-packages\tensorflow\contrib\learn\python\learn\estimators\estimator.py", line 1052, in _get_train_ops
return self._call_model_fn(features, labels, model_fn_lib.ModeKeys.TRAIN)
File "C:\Python35\lib\site-packages\tensorflow\contrib\learn\python\learn\estimators\estimator.py", line 1019, in _call_model_fn
params=self.params)
File "C:\Python35\lib\site-packages\tensorflow\contrib\learn\python\learn\estimators\dnn_linear_combined.py", line 504, in _dnn_linear_combined_model_fn
scope=scope)
File "C:\Python35\lib\site-packages\tensorflow\contrib\layers\python\layers\feature_column_ops.py", line 526, in weighted_sum_from_feature_columns
transformed_tensor = transformer.transform(column)
File "C:\Python35\lib\site-packages\tensorflow\contrib\layers\python\layers\feature_column_ops.py", line 869, in transform
feature_column.insert_transformed_feature(self._columns_to_tensors)
File "C:\Python35\lib\site-packages\tensorflow\contrib\layers\python\layers\feature_column.py", line 1489, in insert_transformed_feature
name="bucketize")
File "C:\Python35\lib\site-packages\tensorflow\contrib\layers\python\ops\bucketization_op.py", line 48, in bucketize
return _bucketization_op.bucketize(input_tensor, boundaries, name=name)
AttributeError: 'NoneType' object has no attribute 'bucketize'
I got the same issue. It seems that on Windows we just get None for the bucketization op (see the source code).
Try to run this code on Linux, or try to remove the bucketization and the column crossing. For example, change the line:
flags.DEFINE_string("model_type", "wide_n_deep", "valid model types: {'wide', 'deep', 'wide_n_deep'}")
to
flags.DEFINE_string("model_type", "deep", "valid model types: {'wide', 'deep', 'wide_n_deep'}")
Follow this issue for updates: issue
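To see what this answer describes, you can inspect the op module that feature_column.py relies on; if the native library failed to load on Windows, the module-level handle is None and the later .bucketize call raises the AttributeError above. A minimal sketch, assuming the same TF 1.x contrib layout as in the traceback:
from tensorflow.contrib.layers.python.ops import bucketization_op

# On affected Windows builds this prints None, matching the answer above;
# on Linux it should print the loaded op module.
print(bucketization_op._bucketization_op)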