I am trying to train my own model for TensorFlow object detection. I followed this and this tutorial, and in the last step I tried to run this command:
python train.py --logtostderr --train_dir=training/ --pipeline_config_path=training/ssd_mobilenet_v1_pets.config
but I get this error:
Traceback (most recent call last):
File "train.py", line 163, in <module>
tf.app.run()
File "C:\Program Files\Python36\lib\site-packages\tensorflow\python\platform\app.py", line 48, in run
_sys.exit(main(_sys.argv[:1] + flags_passthrough))
File "train.py", line 91, in main
FLAGS.pipeline_config_path)
File "C:\Libraries\models-master\research\object_detection\utils\config_util.py", line 43, in get_configs_from_pipeline_file
text_format.Merge(proto_str, pipeline_config)
File "C:\Program Files\Python36\lib\site-packages\google\protobuf\text_format.py", line 533, in Merge
descriptor_pool=descriptor_pool)
File "C:\Program Files\Python36\lib\site-packages\google\protobuf\text_format.py", line 587, in MergeLines
return parser.MergeLines(lines, message)
File "C:\Program Files\Python36\lib\site-packages\google\protobuf\text_format.py", line 620, in MergeLines
self._ParseOrMerge(lines, message)
File "C:\Program Files\Python36\lib\site-packages\google\protobuf\text_format.py", line 635, in _ParseOrMerge
self._MergeField(tokenizer, message)
File "C:\Program Files\Python36\lib\site-packages\google\protobuf\text_format.py", line 703, in _MergeField
(message_descriptor.full_name, name))
google.protobuf.text_format.ParseError: 195:3 : Message type "object_detection.protos.TrainEvalPipelineConfig" has no field named "shuffle".
How can I solve it?
I fixed that problem by changing the .config file: I used ssd_mobilenet_v1_coco.config instead of ssd_mobilenet_v1_pets.config.
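For anyone hitting the same ParseError, a minimal sketch like the one below (assuming the object_detection package is on PYTHONPATH and the config path from the command above) parses the pipeline config by itself, so text_format reports exactly which field the compiled protos do not recognize:
# Minimal sketch, assuming object_detection is importable and the config
# path used in the training command above.
from google.protobuf import text_format
from object_detection.protos import pipeline_pb2

config = pipeline_pb2.TrainEvalPipelineConfig()
with open("training/ssd_mobilenet_v1_pets.config", "r") as f:
    text_format.Merge(f.read(), config)  # raises ParseError on unknown fields such as "shuffle"
print("Pipeline config parsed OK")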
Related
I'm trying to convert a frozen graph to a JSON file. I use this command:
tensorflowjs_converter --input_format=tf_frozen_model --output_node_names="SemanticPredictions" --saved_model_tags=serve frozen_inference_graph.pb mymodal
But it gives this error:
Traceback (most recent call last):
File "d:\programdata\anaconda3\envs\tensorflow0\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "d:\programdata\anaconda3\envs\tensorflow0\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "D:\ProgramData\Anaconda3\envs\tensorflow0\Scripts\tensorflowjs_converter.exe\__main__.py", line 7, in <module>
File "d:\programdata\anaconda3\envs\tensorflow0\lib\site-packages\tensorflowjs\converters\converter.py", line 645, in pip_main
main([' '.join(sys.argv[1:])])
File "d:\programdata\anaconda3\envs\tensorflow0\lib\site-packages\tensorflowjs\converters\converter.py", line 649, in main
convert(argv[0].split(' '))
File "d:\programdata\anaconda3\envs\tensorflow0\lib\site-packages\tensorflowjs\converters\converter.py", line 632, in convert
strip_debug_ops=args.strip_debug_ops)
File "d:\programdata\anaconda3\envs\tensorflow0\lib\site-packages\tensorflowjs\converters\tf_saved_model_conversion_v2.py", line 379, in convert_tf_frozen_model
strip_debug_ops=strip_debug_ops)
File "d:\programdata\anaconda3\envs\tensorflow0\lib\site-packages\tensorflowjs\converters\tf_saved_model_conversion_v2.py", line 133, in optimize_graph
graph.add_to_collection('train_op', graph.get_operation_by_name(name))
File "d:\programdata\anaconda3\envs\tensorflow0\lib\site-packages\tensorflow_core\python\framework\ops.py", line 3633, in get_operation_by_name
return self.as_graph_element(name, allow_tensor=False, allow_operation=True)
File "d:\programdata\anaconda3\envs\tensorflow0\lib\site-packages\tensorflow_core\python\framework\ops.py", line 3505, in as_graph_element
return self._as_graph_element_locked(obj, allow_tensor, allow_operation)
File "d:\programdata\anaconda3\envs\tensorflow0\lib\site-packages\tensorflow_core\python\framework\ops.py", line 3565, in _as_graph_element_locked
"graph." % repr(name))
KeyError: "The name 'SemanticPredictions' refers to an Operation not in the graph."
I don't know why it gives KeyError: "The name 'SemanticPredictions' refers to an Operation not in the graph."
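Not a full answer, but a quick way to see what the frozen graph actually contains is to load the GraphDef and print its node names; a minimal sketch, assuming a TF 1.x-style frozen_inference_graph.pb in the current directory:
# Minimal sketch (assumption: TF 1.x-style frozen GraphDef): list node names
# to check whether "SemanticPredictions" exists, and if not, what the output
# node is actually called.
import tensorflow as tf

graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile("frozen_inference_graph.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

for node in graph_def.node:
    print(node.name)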
I have TensorFlow and the Object Detection API installed on my machine. A test run shows that everything works:
~ $ cd models/research
research $ protoc object_detection/protos/*.proto --python_out=.
research $ export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/slim
research $ python3 object_detection/builders/model_builder_test.py
...............
----------------------------------------------------------------------
Ran 15 tests in 0.144s
OK
Then I tried to retrain a model and got a protobuf error:
research $ cd object_detection
object_detection $ python3 train.py --logtostderr --train_dir=training/ --pipeline_config_path=ssdlite_mobilenet_v2_coco_2018_05_09/pipeline.config
WARNING:tensorflow:From /Users/me/models/research/object_detection/trainer.py:257: create_global_step (from tensorflow.contrib.framework.python.ops.variables) is deprecated and will be removed in a future version.
Instructions for updating:
Please switch to tf.train.create_global_step
Traceback (most recent call last):
File "/Users/me/models/research/object_detection/utils/label_map_util.py", line 135, in load_labelmap
text_format.Merge(label_map_string, label_map)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/google/protobuf/text_format.py", line 533, in Merge
descriptor_pool=descriptor_pool)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/google/protobuf/text_format.py", line 587, in MergeLines
return parser.MergeLines(lines, message)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/google/protobuf/text_format.py", line 620, in MergeLines
self._ParseOrMerge(lines, message)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/google/protobuf/text_format.py", line 635, in _ParseOrMerge
self._MergeField(tokenizer, message)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/google/protobuf/text_format.py", line 735, in _MergeField
merger(tokenizer, message, field)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/google/protobuf/text_format.py", line 823, in _MergeMessageField
self._MergeField(tokenizer, sub_message)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/google/protobuf/text_format.py", line 722, in _MergeField
tokenizer.Consume(':')
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/google/protobuf/text_format.py", line 1087, in Consume
raise self.ParseError('Expected "%s".' % token)
google.protobuf.text_format.ParseError: 3:10 : Expected ":".
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/google/protobuf/internal/python_message.py", line 1083, in MergeFromString
if self._InternalParse(serialized, 0, length) != length:
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/google/protobuf/internal/python_message.py", line 1105, in InternalParse
(tag_bytes, new_pos) = local_ReadTag(buffer, pos)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/google/protobuf/internal/decoder.py", line 181, in ReadTag
while six.indexbytes(buffer, pos) & 0x80:
TypeError: unsupported operand type(s) for &: 'str' and 'int'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "train.py", line 184, in <module>
tf.app.run()
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/tensorflow/python/platform/app.py", line 126, in run
_sys.exit(main(argv))
File "train.py", line 180, in main
graph_hook_fn=graph_rewriter_fn)
File "/Users/me/models/research/object_detection/trainer.py", line 264, in train
train_config.prefetch_queue_capacity, data_augmentation_options)
File "/Users/me/models/research/object_detection/trainer.py", line 59, in create_input_queue
tensor_dict = create_tensor_dict_fn()
File "train.py", line 121, in get_next
dataset_builder.build(config)).get_next()
File "/Users/me/models/research/object_detection/builders/dataset_builder.py", line 155, in build
label_map_proto_file=label_map_proto_file)
File "/Users/me/models/research/object_detection/data_decoders/tf_example_decoder.py", line 245, in __init__
use_display_name)
File "/Users/me/models/research/object_detection/utils/label_map_util.py", line 152, in get_label_map_dict
label_map = load_labelmap(label_map_path)
File "/Users/me/models/research/object_detection/utils/label_map_util.py", line 137, in load_labelmap
label_map.ParseFromString(label_map_string)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/google/protobuf/message.py", line 185, in ParseFromString
self.MergeFromString(serialized)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/google/protobuf/internal/python_message.py", line 1089, in MergeFromString
raise message_mod.DecodeError('Truncated message.')
google.protobuf.message.DecodeError: Truncated message.
object_detection $
I tried the solutions to a bunch of similar problems, but they didn't work in my case. For example, this one suggests encoding the pbtxt file as ASCII.
Python 2 gives an error too. Here is its last line:
google.protobuf.message.DecodeError: Unexpected end-group tag.
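One way to narrow this down is to parse the label map file on its own; a minimal sketch, where data/label_map.pbtxt stands in for whatever label_map_path the pipeline config points at:
# Minimal sketch; data/label_map.pbtxt is a placeholder for the real
# label_map_path. text_format points at the exact line/column it cannot
# parse (here, 3:10).
from google.protobuf import text_format
from object_detection.protos import string_int_label_map_pb2

label_map = string_int_label_map_pb2.StringIntLabelMap()
with open("data/label_map.pbtxt", "r") as f:
    text_format.Merge(f.read(), label_map)
print(label_map)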
Context:
macOS 10.13.4
Local run on CPU
Python 3.6.4
protobuf 3.5.1
libprotoc 3.4.0
tensorflow 1.8.0
Google Cloud SDK 200.0.0
bq 2.0.33
core 2018.04.30
gsutil 4.31
System information
Running Python 3.6.4 on Windows
Describe the problem
I'm trying to run TensorFlow's lm_1b in sample mode by inputting:
$ bazel-bin/lm_1b/lm_1b_eval --mode sample --prefix "I love that I" --pbtxt data/vocab-2016-09-10.txt --vocab_file data/vocab-2016-09-10.txt --ckpt 'data/ckpt-*'
But I get the error:
google.protobuf.text_format.ParseError: 1:1 : Expected identifier or number, got <.
Any help would really be appreciated.
Source code / logs
Recovering graph.
Traceback (most recent call last):
File "\\?\C:\Users\snmsa\AppData\Local\Temp\Bazel.runfiles_9sq54ngc\runfiles\__main__\lm_1b\lm_1b_eval.py", line 308, in <module>
tf.app.run()
File "C:\Users\snmsa\Anaconda3\lib\site-packages\tensorflow\python\platform\app.py", line 48, in run
_sys.exit(main(_sys.argv[:1] + flags_passthrough))
File "\\?\C:\Users\snmsa\AppData\Local\Temp\Bazel.runfiles_9sq54ngc\runfiles\__main__\lm_1b\lm_1b_eval.py", line 298, in main
_SampleModel(FLAGS.prefix, vocab)
File "\\?\C:\Users\snmsa\AppData\Local\Temp\Bazel.runfiles_9sq54ngc\runfiles\__main__\lm_1b\lm_1b_eval.py", line 174, in _SampleModel
sess, t = _LoadModel(FLAGS.pbtxt, FLAGS.ckpt)
File "\\?\C:\Users\snmsa\AppData\Local\Temp\Bazel.runfiles_9sq54ngc\runfiles\__main__\lm_1b\lm_1b_eval.py", line 89, in _LoadModel
text_format.Merge(s, gd)
File "C:\Users\snmsa\Anaconda3\lib\site-packages\google\protobuf\text_format.py", line 533, in Merge
descriptor_pool=descriptor_pool)
File "C:\Users\snmsa\Anaconda3\lib\site-packages\google\protobuf\text_format.py", line 587, in MergeLines
return parser.MergeLines(lines, message)
File "C:\Users\snmsa\Anaconda3\lib\site-packages\google\protobuf\text_format.py", line 620, in MergeLines
self._ParseOrMerge(lines, message)
File "C:\Users\snmsa\Anaconda3\lib\site-packages\google\protobuf\text_format.py", line 635, in _ParseOrMerge
self._MergeField(tokenizer, message)
File "C:\Users\snmsa\Anaconda3\lib\site-packages\google\protobuf\text_format.py", line 679, in _MergeField
name = tokenizer.ConsumeIdentifierOrNumber()
File "C:\Users\snmsa\Anaconda3\lib\site-packages\google\protobuf\text_format.py", line 1152, in ConsumeIdentifierOrNumber
raise self.ParseError('Expected identifier or number, got %s.' % result)
google.protobuf.text_format.ParseError: 1:1 : Expected identifier or number, got <.
Your command line is wrong. It should be:
bazel-bin/lm_1b/lm_1b_eval --mode sample \
--prefix "I love that I" \
--pbtxt data/graph-2016-09-10.pbtxt \
...
You are passing a vocabulary file (--pbtxt data/vocab-2016-09-10.txt) where a serialized GraphDef file is expected.
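If in doubt, a minimal sketch like this (assuming TF 1.x and the graph file shipped with lm_1b) confirms that the file handed to --pbtxt really is a text-format GraphDef:
# Minimal sketch (assumption: TF 1.x, lm_1b graph file): a text-format
# GraphDef parses cleanly here, whereas the vocabulary file fails at 1:1.
import tensorflow as tf
from google.protobuf import text_format

graph_def = tf.compat.v1.GraphDef()
with open("data/graph-2016-09-10.pbtxt", "r") as f:
    text_format.Merge(f.read(), graph_def)
print(len(graph_def.node), "nodes parsed")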
When I try to generate TF record, I'm getting the following error message:
Traceback (most recent call last):
File "generate_tfrecord.py", line 112, in <module>
tf.app.run()
File "/home/harisohmnaathss/anaconda3/envs/tensorflow/lib/python3.5/site-packages/tensorflow/python/platform/app.py", line 124, in run
_sys.exit(main(argv))
File "generate_tfrecord.py", line 98, in main
writer = tf.python_io.TFRecordWriter(FLAGS.output_path)
File "/home/harisohmnaathss/anaconda3/envs/tensorflow/lib/python3.5/site-packages/tensorflow/python/lib/io/tf_record.py", line 106, in __init__
compat.as_bytes(path), compat.as_bytes(compression_type), status)
File "/home/harisohmnaathss/anaconda3/envs/tensorflow/lib/python3.5/site-packages/tensorflow/python/framework/errors_impl.py", line 473, in __exit__
c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.NotFoundError: ; No such file or directory
The command I'm trying to run is:
python generate_tfrecord.py --csv_input=data/Train_labels.csv --output_path=data/train.records
Any ideas to solve this issue?
You are supplying the output path as data/train.records instead of data/train.tfrecord.
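It can also help to rule out a path problem first; a minimal sketch (TF 1.x API, the output path is a placeholder for your --output_path value) that checks the directory exists and that TFRecordWriter can create the file:
# Minimal sketch (TF 1.x API; output_path is a placeholder): create the
# output directory if needed and confirm TFRecordWriter can open the file.
import os
import tensorflow as tf

output_path = "data/train.record"  # placeholder - use your --output_path value
os.makedirs(os.path.dirname(output_path), exist_ok=True)

with tf.python_io.TFRecordWriter(output_path) as writer:
    pass  # opening and closing is enough to verify the path is writable
print("Created", output_path)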
I am training the pet detector with the Google Object Detection API and get the error below. Does it mean that the sorted function does not support dict keys of type tuple, and that the Object Detection API still does not support Python 3?
Traceback (most recent call last):
File "D:\Program Files\JetBrains\PyCharm 2017.1.1\helpers\pydev\pydevd.py", line 1578, in <module>
globals = debugger.run(setup['file'], None, None, is_module)
File "D:\Program Files\JetBrains\PyCharm 2017.1.1\helpers\pydev\pydevd.py", line 1015, in run
pydev_imports.execfile(file, globals, locals) # execute the script
File "D:\Program Files\JetBrains\PyCharm 2017.1.1\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "E:/Work/Lib/tensorflow/models/object_detection/train.py", line 198, in <module>
tf.app.run()
File "D:\Program Files\Python\Python35\lib\site-packages\tensorflow\python\platform\app.py", line 48, in run
_sys.exit(main(_sys.argv[:1] + flags_passthrough))
File "E:/Work/Lib/tensorflow/models/object_detection/train.py", line 194, in main
worker_job_name, is_chief, FLAGS.train_dir)
File "E:\Work\Lib\tensorflow\models\object_detection\trainer.py", line 184, in train
data_augmentation_options)
File "E:\Work\Lib\tensorflow\models\object_detection\trainer.py", line 77, in _create_input_queue
prefetch_queue_capacity=prefetch_queue_capacity)
File "E:\Work\Lib\tensorflow\models\object_detection\core\batcher.py", line 93, in __init__
num_threads=num_batch_queue_threads)
File "D:\Program Files\Python\Python35\lib\site-packages\tensorflow\python\training\input.py", line 919, in batch
name=name)
File "D:\Program Files\Python\Python35\lib\site-packages\tensorflow\python\training\input.py", line 697, in _batch
tensor_list = _as_tensor_list(tensors)
File "D:\Program Files\Python\Python35\lib\site-packages\tensorflow\python\training\input.py", line 385, in _as_tensor_list
return [tensors[k] for k in sorted(tensors)]
TypeError: unorderable types: str() < tuple()
I ran into the same problem. I traced it down to a Python 3 compatibility issue in TensorFlow and have submitted a fix for it here: https://github.com/tensorflow/tensorflow/pull/11039
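For anyone wondering what the error itself means, a minimal sketch that reproduces it (the key names are made up; the point is that the tensor dict mixes str and tuple keys, which Python 3's sorted() refuses to compare):
# Minimal sketch reproducing the failure: Python 3 cannot order str and
# tuple keys, which is what input.py's sorted(tensors) hits. The exact
# message varies by version; Python 3.5 says "unorderable types".
tensors = {"image": 1, ("groundtruth_boxes", "extra"): 2}  # made-up keys
try:
    sorted(tensors)
except TypeError as e:
    print(e)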