Trouble running tf_pose with TensorFlow 2 - tensorflow

I'm trying to use tf-pose with TensorFlow 2.
!git clone https://github.com/gsethi2409/tf-pose-estimation.git > /dev/null
%cd tf-pose-estimation
!pip3 install -r requirements.txt
This is the repository I cloned from. But when I run the command below, it shows an error:
!python run.py --model=mobilenet_thin --resize=432x368 --image=./images/p1.jpg
Traceback (most recent call last):
  File "run.py", line 39, in <module>
    e = TfPoseEstimator(get_graph_path(args.model), target_size=(w, h))
  File "/content/tf-pose-estimation/tf_pose/estimator.py", line 337, in __init__
    self.tensor_image = self.graph.get_tensor_by_name('TfPoseEstimator/image:0')
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/ops.py", line 3902, in get_tensor_by_name
    return self.as_graph_element(name, allow_tensor=True, allow_operation=False)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/ops.py", line 3726, in as_graph_element
    return self._as_graph_element_locked(obj, allow_tensor, allow_operation)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/ops.py", line 3768, in _as_graph_element_locked
    "graph." % (repr(name), repr(op_name)))
KeyError: "The name 'TfPoseEstimator/image:0' refers to a Tensor which does not exist. The operation, 'TfPoseEstimator/image', does not exist in the graph."

In tf_pose/estimator.py, under the line that imports TensorFlow, add the following line:
tf.compat.v1.disable_eager_execution()
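A minimal sketch of the patched file header, assuming the TensorFlow import sits near the top of tf_pose/estimator.py:
# tf_pose/estimator.py
import tensorflow as tf
# tf-pose is written against the TF1 graph API; disabling eager execution
# lets graph.get_tensor_by_name('TfPoseEstimator/image:0') resolve under TF2.
tf.compat.v1.disable_eager_execution()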

Related

Tensorflow in Raspberry Pi - memory error

I am trying to install TensorFlow on a Raspberry Pi 4 with the following command:
pip install tensorflow
The following error occurs:
Looking in indexes: https://pypi.org/simple, https://www.piwheels.org/simple
Collecting tensorflow
Downloading https://www.piwheels.org/simple/tensorflow/tensorflow-1.14.0-cp37-none-linux_armv7l.whl (79.6MB)
100% |████████████████████████████████| 79.6MB 8.8MB/s
Exception:
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/pip/_internal/cli/base_command.py", line 143, in main
status = self.run(options, args)
File "/usr/lib/python3/dist-packages/pip/_internal/commands/install.py", line 338, in run
resolver.resolve(requirement_set)
File "/usr/lib/python3/dist-packages/pip/_internal/resolve.py", line 102, in resolve
self._resolve_one(requirement_set, req)
File "/usr/lib/python3/dist-packages/pip/_internal/resolve.py", line 256, in _resolve_one
abstract_dist = self._get_abstract_dist_for(req_to_install)
File "/usr/lib/python3/dist-packages/pip/_internal/resolve.py", line 209, in _get_abstract_dist_for
self.require_hashes
File "/usr/lib/python3/dist-packages/pip/_internal/operations/prepare.py", line 283, in prepare_linked_requirement
progress_bar=self.progress_bar
File "/usr/lib/python3/dist-packages/pip/_internal/download.py", line 836, in unpack_url
progress_bar=progress_bar
File "/usr/lib/python3/dist-packages/pip/_internal/download.py", line 677, in unpack_http_url
unpack_file(from_path, location, content_type, link)
File "/usr/lib/python3/dist-packages/pip/_internal/utils/misc.py", line 600, in unpack_file
flatten=not filename.endswith('.whl')
File "/usr/lib/python3/dist-packages/pip/_internal/utils/misc.py", line 489, in unzip_file
data = zip.read(name)
File "/usr/lib/python3.7/zipfile.py", line 1429, in read
return fp.read()
File "/usr/lib/python3.7/zipfile.py", line 885, in read
buf += self._read1(self.MAX_N)
File "/usr/lib/python3.7/zipfile.py", line 975, in _read1
data = self._decompressor.decompress(data, n)
MemoryError
I have tried installing it with the following command, which I've seen suggested on the internet as a fix, but it didn't work:
pip install --no-cache-dir tensorflow
Any clue on what I could do?
Thanks in advance.

Apache Beam pipeline fails when writing TF Records - AttributeError: 'str' object has no attribute 'iteritems'

The issue started appearing over the weekend. For some reason, it seems to be a Dataflow issue.
Previously, I was able to execute the script and write TF records just fine. However, now I am unable to initialize the computation graph to process the data.
The traceback is:
Traceback (most recent call last):
File "my_script.py", line 1492, in <module>
MyBeamClass()
File "my_script.py", line 402, in __init__
self.run()
File "my_script.py", line 514, in run
transform_fn_io.WriteTransformFn(path=self.JOB_DIR + '/transform/'))
File "/anaconda3/envs/ml27/lib/python2.7/site-packages/apache_beam/pipeline.py", line 426, in __exit__
self.run().wait_until_finish()
File "/anaconda3/envs/ml27/lib/python2.7/site-packages/apache_beam/runners/dataflow/dataflow_runner.py", line 1238, in wait_until_finish
(self.state, getattr(self._runner, 'last_error_msg', None)), self)
apache_beam.runners.dataflow.dataflow_runner.DataflowRuntimeException: Dataflow pipeline failed. State: FAILED, Error:
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/batchworker.py", line 649, in do_work
work_executor.execute()
File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/executor.py", line 176, in execute
op.start()
File "apache_beam/runners/worker/operations.py", line 531, in apache_beam.runners.worker.operations.DoOperation.start
def start(self):
File "apache_beam/runners/worker/operations.py", line 532, in apache_beam.runners.worker.operations.DoOperation.start
with self.scoped_start_state:
File "apache_beam/runners/worker/operations.py", line 533, in apache_beam.runners.worker.operations.DoOperation.start
super(DoOperation, self).start()
File "apache_beam/runners/worker/operations.py", line 202, in apache_beam.runners.worker.operations.Operation.start
def start(self):
File "apache_beam/runners/worker/operations.py", line 206, in apache_beam.runners.worker.operations.Operation.start
self.setup()
File "apache_beam/runners/worker/operations.py", line 480, in apache_beam.runners.worker.operations.DoOperation.setup
with self.scoped_start_state:
File "apache_beam/runners/worker/operations.py", line 485, in apache_beam.runners.worker.operations.DoOperation.setup
pickler.loads(self.spec.serialized_fn))
File "/usr/local/lib/python2.7/dist-packages/apache_beam/internal/pickler.py", line 247, in loads
return dill.loads(s)
File "/usr/local/lib/python2.7/dist-packages/dill/_dill.py", line 317, in loads
return load(file, ignore)
File "/usr/local/lib/python2.7/dist-packages/dill/_dill.py", line 305, in load
obj = pik.load()
File "/usr/lib/python2.7/pickle.py", line 864, in load
dispatch[key](self)
File "/usr/lib/python2.7/pickle.py", line 1232, in load_build
for k, v in state.iteritems():
AttributeError: 'str' object has no attribute 'iteritems'
I am using tensorflow==1.13.1, tensorflow-transform==0.9.0, and apache_beam==2.7.0. The relevant part of the pipeline:
with beam.Pipeline(options=self.pipe_opt) as p:
with beam_impl.Context(temp_dir=self.google_cloud_options.temp_location):
# rest of the script
_ = (
transform_fn
| 'WriteTransformFn' >>
transform_fn_io.WriteTransformFn(path=self.JOB_DIR + '/transform/'))
I was experiencing the same error.
It seems to be triggered by a mismatch between the tensorflow-transform version on your local (or master) machine and the one on the workers (specified in the setup.py file).
In my case I was running tensorflow-transform==0.13 on my local machine whereas the workers were running 0.8.
Downgrading the local version to 0.8 fixed the issue.
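If you ship a setup.py to the workers (via the setup_file pipeline option), a minimal sketch of the pin; the project name and versions here are placeholders, so match them to whatever your launching machine actually runs:
# setup.py passed to the Dataflow workers
import setuptools

setuptools.setup(
    name='my-beam-job',  # placeholder project name
    version='0.1.0',
    packages=setuptools.find_packages(),
    install_requires=[
        # Pin to the exact versions installed on the launching machine.
        'tensorflow-transform==0.8.0',
        'apache-beam[gcp]==2.7.0',
    ],
)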

Anaconda Pandas breaks on reading hdf file on Python 3.6.x

I am using an Anaconda environment with Python 3.6.8, created with conda create -n temp pandas pytables h5py python=3.6.8. When I try to read a .h5 file like:
f = pd.read_hdf(filename, key)
I get a ValueError exception:
Traceback (most recent call last):
File "read_data.py", line 6, in <module>
f = pd.read_hdf(filename, key)
File "/home/fauzanzaid/anaconda3/envs/temp/lib/python3.6/site-packages/pandas/io/pytables.py", line 394, in read_hdf
return store.select(key, auto_close=auto_close, **kwargs)
File "/home/fauzanzaid/anaconda3/envs/temp/lib/python3.6/site-packages/pandas/io/pytables.py", line 741, in select
return it.get_result()
File "/home/fauzanzaid/anaconda3/envs/temp/lib/python3.6/site-packages/pandas/io/pytables.py", line 1483, in get_result
results = self.func(self.start, self.stop, where)
File "/home/fauzanzaid/anaconda3/envs/temp/lib/python3.6/site-packages/pandas/io/pytables.py", line 734, in func
columns=columns)
File "/home/fauzanzaid/anaconda3/envs/temp/lib/python3.6/site-packages/pandas/io/pytables.py", line 2928, in read
ax = self.read_index('axis%d' % i, start=_start, stop=_stop)
File "/home/fauzanzaid/anaconda3/envs/temp/lib/python3.6/site-packages/pandas/io/pytables.py", line 2523, in read_index
_, index = self.read_index_node(getattr(self.group, key), **kwargs)
File "/home/fauzanzaid/anaconda3/envs/temp/lib/python3.6/site-packages/pandas/io/pytables.py", line 2621, in read_index_node
data = node[start:stop]
File "/home/fauzanzaid/anaconda3/envs/temp/lib/python3.6/site-packages/tables/vlarray.py", line 685, in __getitem__
return self.read(start, stop, step)
File "/home/fauzanzaid/anaconda3/envs/temp/lib/python3.6/site-packages/tables/vlarray.py", line 821, in read
listarr = self._read_array(start, stop, step)
File "tables/hdf5extension.pyx", line 2155, in tables.hdf5extension.VLArray._read_array
ValueError: cannot set WRITEABLE flag to True of this array
This problem goes away if I use an environment with Python 3.7 or 3.5. However, I need to use Python 3.6.
How can I resolve this error?
I downgraded numpy to 1.14.3 with the command below, and it worked for me:
pip3 install numpy==1.14.3
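A quick check that the environment picked up the downgrade before retrying the read; the file name and key here are placeholders:
import numpy as np
import pandas as pd

print(np.__version__)  # should now print 1.14.3
f = pd.read_hdf('data.h5', 'my_key')  # placeholder filename and key
print(f.head())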

Protobuf errors while using Tensorflow Object Detection API locally

I have TensorFlow and the Object Detection API set up on my machine.
A test run shows that everything works:
~ $ cd models/research
research $ protoc object_detection/protos/*.proto --python_out=.
research $ export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/slim
research $ python3 object_detection/builders/model_builder_test.py
...............
----------------------------------------------------------------------
Ran 15 tests in 0.144s
OK
Then I tried to retrain a model and got the following protobuf error:
research $ cd object_detection
object_detection $ python3 train.py --logtostderr --train_dir=training/ --pipeline_config_path=ssdlite_mobilenet_v2_coco_2018_05_09/pipeline.config
WARNING:tensorflow:From /Users/me/models/research/object_detection/trainer.py:257: create_global_step (from tensorflow.contrib.framework.python.ops.variables) is deprecated and will be removed in a future version.
Instructions for updating:
Please switch to tf.train.create_global_step
Traceback (most recent call last):
File "/Users/me/models/research/object_detection/utils/label_map_util.py", line 135, in load_labelmap
text_format.Merge(label_map_string, label_map)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/google/protobuf/text_format.py", line 533, in Merge
descriptor_pool=descriptor_pool)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/google/protobuf/text_format.py", line 587, in MergeLines
return parser.MergeLines(lines, message)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/google/protobuf/text_format.py", line 620, in MergeLines
self._ParseOrMerge(lines, message)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/google/protobuf/text_format.py", line 635, in _ParseOrMerge
self._MergeField(tokenizer, message)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/google/protobuf/text_format.py", line 735, in _MergeField
merger(tokenizer, message, field)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/google/protobuf/text_format.py", line 823, in _MergeMessageField
self._MergeField(tokenizer, sub_message)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/google/protobuf/text_format.py", line 722, in _MergeField
tokenizer.Consume(':')
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/google/protobuf/text_format.py", line 1087, in Consume
raise self.ParseError('Expected "%s".' % token)
google.protobuf.text_format.ParseError: 3:10 : Expected ":".
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/google/protobuf/internal/python_message.py", line 1083, in MergeFromString
if self._InternalParse(serialized, 0, length) != length:
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/google/protobuf/internal/python_message.py", line 1105, in InternalParse
(tag_bytes, new_pos) = local_ReadTag(buffer, pos)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/google/protobuf/internal/decoder.py", line 181, in ReadTag
while six.indexbytes(buffer, pos) & 0x80:
TypeError: unsupported operand type(s) for &: 'str' and 'int'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "train.py", line 184, in <module>
tf.app.run()
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/tensorflow/python/platform/app.py", line 126, in run
_sys.exit(main(argv))
File "train.py", line 180, in main
graph_hook_fn=graph_rewriter_fn)
File "/Users/me/models/research/object_detection/trainer.py", line 264, in train
train_config.prefetch_queue_capacity, data_augmentation_options)
File "/Users/me/models/research/object_detection/trainer.py", line 59, in create_input_queue
tensor_dict = create_tensor_dict_fn()
File "train.py", line 121, in get_next
dataset_builder.build(config)).get_next()
File "/Users/me/models/research/object_detection/builders/dataset_builder.py", line 155, in build
label_map_proto_file=label_map_proto_file)
File "/Users/me/models/research/object_detection/data_decoders/tf_example_decoder.py", line 245, in __init__
use_display_name)
File "/Users/me/models/research/object_detection/utils/label_map_util.py", line 152, in get_label_map_dict
label_map = load_labelmap(label_map_path)
File "/Users/me/models/research/object_detection/utils/label_map_util.py", line 137, in load_labelmap
label_map.ParseFromString(label_map_string)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/google/protobuf/message.py", line 185, in ParseFromString
self.MergeFromString(serialized)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/google/protobuf/internal/python_message.py", line 1089, in MergeFromString
raise message_mod.DecodeError('Truncated message.')
google.protobuf.message.DecodeError: Truncated message.
object_detection $
I tried the solutions to a bunch of similar problems, but they didn't work for my case. For example, this one suggests encoding the pbtxt file as ASCII.
Python 2 gives an error too. Here is its last line:
google.protobuf.message.DecodeError: Unexpected end-group tag.
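One way to narrow it down is to parse the label map by itself, outside of train.py. This sketch (the label map path is an assumption) runs the same text_format.Merge call that load_labelmap does, so a malformed file fails with a precise ParseError location:
from google.protobuf import text_format
from object_detection.protos import string_int_label_map_pb2

with open('data/label_map.pbtxt', 'r') as f:  # path is an assumption
    label_map = string_int_label_map_pb2.StringIntLabelMap()
    text_format.Merge(f.read(), label_map)  # raises ParseError if malformed
print(label_map)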
Context:
macOS 10.13.4
Local run on CPU
Python 3.6.4
protobuf 3.5.1
libprotoc 3.4.0
tensorflow 1.8.0
Google Cloud SDK 200.0.0
bq 2.0.33
core 2018.04.30
gsutil 4.31

ParseError 1:1 Using Tensorflow with Bazel

System information
Running Python 3.6.4 on Windows
Describe the problem
I'm trying to run TensorFlow's lm_1b in sample mode by running:
$ bazel-bin/lm_1b/lm_1b_eval --mode sample --prefix "I love that I" --pbtxt data/vocab-2016-09-10.txt --vocab_file data/vocab-2016-09-10.txt --ckpt 'data/ckpt-*'
But I get the error:
google.protobuf.text_format.ParseError: 1:1 : Expected identifier or number, got <.
Any help would really be appreciated.
Source code / logs
Recovering graph.
Traceback (most recent call last):
File "\\?\C:\Users\snmsa\AppData\Local\Temp\Bazel.runfiles_9sq54ngc\runfiles\__main__\lm_1b\lm_1b_eval.py", line 308, in <module>
tf.app.run()
File "C:\Users\snmsa\Anaconda3\lib\site-packages\tensorflow\python\platform\app.py", line 48, in run
_sys.exit(main(_sys.argv[:1] + flags_passthrough))
File "\\?\C:\Users\snmsa\AppData\Local\Temp\Bazel.runfiles_9sq54ngc\runfiles\__main__\lm_1b\lm_1b_eval.py", line 298, in main
_SampleModel(FLAGS.prefix, vocab)
File "\\?\C:\Users\snmsa\AppData\Local\Temp\Bazel.runfiles_9sq54ngc\runfiles\__main__\lm_1b\lm_1b_eval.py", line 174, in _SampleModel
sess, t = _LoadModel(FLAGS.pbtxt, FLAGS.ckpt)
File "\\?\C:\Users\snmsa\AppData\Local\Temp\Bazel.runfiles_9sq54ngc\runfiles\__main__\lm_1b\lm_1b_eval.py", line 89, in _LoadModel
text_format.Merge(s, gd)
File "C:\Users\snmsa\Anaconda3\lib\site-packages\google\protobuf\text_format.py", line 533, in Merge
descriptor_pool=descriptor_pool)
File "C:\Users\snmsa\Anaconda3\lib\site-packages\google\protobuf\text_format.py", line 587, in MergeLines
return parser.MergeLines(lines, message)
File "C:\Users\snmsa\Anaconda3\lib\site-packages\google\protobuf\text_format.py", line 620, in MergeLines
self._ParseOrMerge(lines, message)
File "C:\Users\snmsa\Anaconda3\lib\site-packages\google\protobuf\text_format.py", line 635, in _ParseOrMerge
self._MergeField(tokenizer, message)
File "C:\Users\snmsa\Anaconda3\lib\site-packages\google\protobuf\text_format.py", line 679, in _MergeField
name = tokenizer.ConsumeIdentifierOrNumber()
File "C:\Users\snmsa\Anaconda3\lib\site-packages\google\protobuf\text_format.py", line 1152, in ConsumeIdentifierOrNumber
raise self.ParseError('Expected identifier or number, got %s.' % result)
google.protobuf.text_format.ParseError: 1:1 : Expected identifier or number, got <.
Your command line is wrong. It should be:
bazel-bin/lm_1b/lm_1b_eval --mode sample \
--prefix "I love that I" \
--pbtxt data/graph-2016-09-10.pbtxt \
...
You are passing a vocabulary file --pbtxt data/vocab-2016-09-10.txt where a serialized GraphDef file is expected.
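That also explains the 1:1 position in the error: the first character of the vocab file is < (its entries are presumably tokens like <S>), which the text-format parser cannot tokenize as a GraphDef. A minimal reproduction sketch, using the path from the question:
from google.protobuf import text_format
from tensorflow.core.framework import graph_pb2

gd = graph_pb2.GraphDef()
with open('data/vocab-2016-09-10.txt') as f:
    # fails with ParseError: 1:1 : Expected identifier or number, got <.
    text_format.Merge(f.read(), gd)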