I am trying to run the TensorFlow DeepLab tutorial on the Cityscapes dataset. I downloaded the gtFine dataset, cloned the cityscapesScripts repository, and set up the directories as recommended in the tutorial. When I ran the following line from the tutorial,
sh convert_cityscapes.sh,
I received an error message stating "Failed to find all Cityscapes modules".
I checked the cityscapesScripts documentation and I think I am missing the labels module, which is likely causing the error. Where can I clone or download the missing module(s)?
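For what it's worth, the labels helper (cityscapesscripts/helpers/labels.py) ships with the main cityscapesScripts repository, so cloning that one repo into the location the tutorial expects should provide every module the script checks for; the same code is also published on PyPI as the cityscapesscripts package:
git clone https://github.com/mcordts/cityscapesScripts.git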
Among the dependencies of convert_cityscapes.sh, there's a file with syntax that is invalid under Python 3.
You can get it to work on Python 3 by commenting out the line
print type(obj).__name__
at line 238 of cityscapesScripts/helpers/annotation.py.
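If you would rather keep the debug output than comment it out, a minimal Python 3 compatible rewrite of that line (a sketch; obj is the variable already in scope in the original code):
# print is a function in Python 3, so the Python 2 statement
#     print type(obj).__name__
# becomes:
print(type(obj).__name__)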
I got "module '0b1a516c7ccf3157373118bcf0f434168745c8a4' has no attribute 'entropy_decode_index' error after a clean intall of tensorflow federated (TFF) on Ubuntu 22.04. System: AMD 6900HS, Nvidia3050ti. The first "import tensorflow_federated" line fails.
There is not even a single entry on google concerning this error message and I am shocked.
The detailed error message is:
File "/home/egosis/venv/lib/python3.9/site-packages/tensorflow_compression/python/ops/init.py", line 17, in
from tensorflow_compression.python.ops.gen_ops import *
AttributeError: module '0b1a516c7ccf3157373118bcf0f434168745c8a4' has no attribute 'entropy_decode_index'
Any answer is gladly appreciated.
I tried installing TFF v0.46.0, v0.45.0, and v0.44.0, but it did not help.
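Not a confirmed fix, but a minimal diagnostic sketch: the hash-like module name in the error comes from tensorflow_compression's generated ops, and a version mismatch between tensorflow and the tensorflow-compression build pulled in by TFF is a common cause of missing-op errors, so checking the installed TensorFlow version before the failing import can make the mismatch visible:
# Diagnostic only (assumes the failure is a TF / tensorflow-compression
# version mismatch): print the TF version, then trigger the failing import.
import tensorflow as tf
print(tf.__version__)
import tensorflow_federated  # raises the AttributeError reported above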
Recently, I started one of the GAN tutorials built with TensorFlow/Keras.
See link: https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter4-gan/dcgan-mnist-4.2.1.py
But in the initializer (i.e. the if __name__ == '__main__' block) I am having issues running it, since I run it
from Colab and it is a .py program. It gives this error:
usage: ipykernel_launcher.py [-h] [-g GENERATOR]
ipykernel_launcher.py: error: unrecognized arguments: -f
/root/.local/share/jupyter/runtime/kernel-bcd12960-b051-4b8d-b6b0-3ff02367dbcc.json
An exception has occurred, use %tb to see the full traceback.
Would anyone know how to debug this when running from an .ipynb file?
Thank you in advance
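One common workaround (a sketch, not taken verbatim from the tutorial script) is to let argparse ignore the extra -f argument that the Jupyter/Colab kernel injects, by using parse_known_args() instead of parse_args():
import argparse

# The -g/--generator flag is inferred from the usage line in the error above.
# parse_known_args() returns (known_args, leftover_args) and silently skips
# anything it does not recognize, such as Jupyter's "-f /path/to/kernel.json",
# instead of exiting with "unrecognized arguments".
parser = argparse.ArgumentParser()
parser.add_argument('-g', '--generator', help='load generator h5 model with trained weights')
args, unknown = parser.parse_known_args()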
I recently upgraded one of my small Ubuntu (16.04) servers from tensorflow-gpu 1.4 to tensorflow-gpu 1.5 to work with the Object Detection API. I git cloned the latest version of the API, which is supposed to work with TensorFlow 1.5.
CUDA/cuDNN and other TensorFlow programs are up and running after the upgrade, and all test scripts in the Object Detection API run fine.
Despite this, when I attempt to run train.py it fails immediately with the following error:
File "/home/arvid/ownCloud/tensorflow/models/research/object_detection/train.py", line 167, in <module> tf.app.run()
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/platform/app.py", line 124, in run _sys.exit(main(argv))
File "/home/arvid/ownCloud/tensorflow/models/research/object_detection/train.py", line 107, in main overwrite=True)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/lib/io/file_io.py", line 385, in copy compat.as_bytes(oldpath), compat.as_bytes(newpath), overwrite, status)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/errors_impl.py", line 473, in __exit__ c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.NotFoundError: ; No such file or directory
This error arises when some input file is missing, but the problem here is that no file name is given in the error.
Usually the missing file name appears between the colon and the semicolon, but in this error it is just a blank space.
I can reproduce the same error on my working server running TensorFlow 1.4 by inserting a space between --train_dir= and the path:
--train_dir= {some_path}
But that is not the case here!
Additional info: when I run train.py, the 'train' directory is created at the location I specify, so TensorFlow seems able to resolve paths correctly.
Any input on how to debug this would be greatly appreciated!!
(Ok, I'm feeling a bit stupid right now...)
The solution was simple - the names of the flags for train.py changed with the update...
It used to be:
--pipeline_config={some_path}
But now it's:
--pipeline_config_path={some_path}
Still, a more informative error message would have been useful...
Remove the spaces in --train_dir= {some_path} and --pipeline_config_path= {some_path}.
It works for me.
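For completeness, the invocation without the stray spaces ({some_path} stands in for your actual paths):
python train.py --logtostderr --train_dir={some_path} --pipeline_config_path={some_path}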
I installed the Object Detection API following https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/installation.md and verified the installation by running model_builder_test.py.
This gave me an OK result.
Then I moved on to running train.py on my dataset, using the following command:
python train.py --logtostderr --pipeline_config_path=pipeline.config --train_dir=train_file
And I am getting the error ImportError: cannot import name 'preprocessor_pb2'
This particular preprocessor_pb2.py exists in the path where it is being looked for, i.e.
C:\Users\SP-TestMc\Anaconda3\envs\tensorflow\models-master\models-master\research\object_detection\protos
What could be the reason for this error then?
See https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/installation.md#protobuf-compilation. Sounds like you need to compile the protos before using the Python scripts.
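For reference, the compilation step from that guide is run from the models-master/research directory; on Windows the wildcard may not expand, so the .proto files may need to be listed explicitly:
protoc object_detection/protos/*.proto --python_out=.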
I am learning transfer learning according to How to Retrain Inception's Final Layer for New Categories. However, when I build retrain.py using Bazel, the following error occurs:
python configuration error: 'PYTHON_BIN_PATH' environment variable is not set and referenced by '//third_party/py/numpy:headers'
I'm sorry, I did my best to include an image of the error, but unfortunately I failed.
I use Python 2.7, Anaconda 2, Bazel 0.6.1, and TensorFlow 1.3.
I'd appreciate any reply!
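One thing to try (an assumption based on the error text, not a verified fix for this setup): point PYTHON_BIN_PATH at your Anaconda interpreter before invoking Bazel; this is the same variable TensorFlow's ./configure script normally sets.
export PYTHON_BIN_PATH=$(which python)
bazel build tensorflow/examples/image_retraining:retrain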