I am trying to learn more about TensorFlow and building a custom model with it.
I am following a set of instructions in which I build and run a Docker container with the following steps:
git clone https://github.com/tensorflow/models.git
cd models
docker build -f research/object_detection/dockerfiles/tf2/Dockerfile -t od .
docker run -it od
All works OK.
The next step is to run a test script inside the container:
python object_detection/builders/model_builder_tf2_test.py
This fails and outputs:
Traceback (most recent call last):
File "object_detection/builders/model_builder_tf2_test.py", line 22, in <module>
import tensorflow.compat.v1 as tf
File "/home/tensorflow/.local/lib/python3.6/site-packages/tensorflow/__init__.py", line 438, in <module>
_ll.load_library(_main_dir)
File "/home/tensorflow/.local/lib/python3.6/site-packages/tensorflow/python/framework/load_library.py", line 154, in load_library
py_tf.TF_LoadLibrary(lib)
tensorflow.python.framework.errors_impl.NotFoundError: /usr/local/lib/python3.6/dist-packages/tensorflow/core/kernels/libtfkernel_sobol_op.so: undefined symbol: _ZN10tensorflow8OpKernel11TraceStringEPNS_15OpKernelContextEb
There is a question about this on Stack Overflow, and also a follow-up.
It says, if I understand correctly, to uninstall and reinstall TensorFlow, but that does not work.
The version is 2.6.1. If I try to downgrade, I run into all kinds of other problems.
I am stuck ;-( , any clues?
I am working on a VPS; is it possible to run the TensorFlow Docker image on 'normal' hardware?
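For what it's worth, the traceback mixes two install locations (/home/tensorflow/.local/... and /usr/local/lib/python3.6/dist-packages/...), so I suspect two TensorFlow copies with different versions are clashing. A minimal check inside the container, assuming those paths, would be:
# Show which TensorFlow actually gets imported and its version
python -c "import tensorflow as tf; print(tf.__version__, tf.__file__)"
# List any per-user copy under ~/.local and any system-wide copy
pip list --user 2>/dev/null | grep -i tensorflow
ls /usr/local/lib/python3.6/dist-packages | grep -i tensorflow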
On a Raspberry Pi OS Bullseye system, I tried to install numpy with pipenv using a specific python version and got this:
$ pipenv --python /opt/python/3.7/bin/python3 install numpy --verbose
Creating a virtualenv for this project…
Using /opt/python/3.7/bin/python3 (3.7.9) to create virtualenv…
⠋created virtual environment CPython3.7.9.final.0-32 in 410ms
creator CPython3Posix(dest=/home/pi/.local/share/virtualenvs/deep-dregs-eaJke9eC, clear=False, no_vcs_ignore=False, global=False)
seeder FromAppData(download=False, pip=bundle, setuptools=bundle, wheel=bundle, via=copy, app_data_dir=/home/pi/.local/share/virtualenv)
added seed packages: pip==20.3.4, pkg_resources==0.0.0, setuptools==44.1.1, wheel==0.34.2
activators BashActivator,CShellActivator,FishActivator,PowerShellActivator,PythonActivator,XonshActivator
Virtualenv location: /home/pi/.local/share/virtualenvs/deep-dregs-eaJke9eC
Installing numpy…
⠙Installing 'numpy'
$ "/home/pi/.local/share/virtualenvs/deep-dregs-eaJke9eC/bin/pip" install --verbose "numpy" -i https://pypi.org/simple --exists-action w
⠙
Error: An error occurred while installing numpy!
Traceback (most recent call last):
File "/home/pi/.local/share/virtualenvs/deep-dregs-eaJke9eC/bin/pip", line 5, in <module>
from pip._internal.cli.main import main
File "/home/pi/.local/share/virtualenvs/deep-dregs-eaJke9eC/lib/python3.7/site-packages/pip/_internal/cli/main.py", line 10, in <module>
from pip._internal.cli.autocompletion import autocomplete
File "/home/pi/.local/share/virtualenvs/deep-dregs-eaJke9eC/lib/python3.7/site-packages/pip/_internal/cli/autocompletion.py", line 9, in <module>
from pip._internal.cli.main_parser import create_main_parser
File "/home/pi/.local/share/virtualenvs/deep-dregs-eaJke9eC/lib/python3.7/site-packages/pip/_internal/cli/main_parser.py", line 7, in <module>
from pip._internal.cli import cmdoptions
File "/home/pi/.local/share/virtualenvs/deep-dregs-eaJke9eC/lib/python3.7/site-packages/pip/_internal/cli/cmdoptions.py", line 23, in <module>
from pip._vendor.packaging.utils import canonicalize_name
ModuleNotFoundError: No module named 'pip._vendor.packaging'
Looking at the verbose output I see that the path to pip used by pipenv is /home/pi/.local/share/virtualenvs/deep-dregs-eaJke9eC/bin/pip.
Calling this pip directly indeed leads to the same error:
$ /home/pi/.local/share/virtualenvs/deep-dregs-eaJke9eC/bin/pip --version
Traceback (most recent call last):
File "/home/pi/.local/share/virtualenvs/deep-dregs-eaJke9eC/bin/pip", line 5, in <module>
from pip._internal.cli.main import main
File "/home/pi/.local/share/virtualenvs/deep-dregs-eaJke9eC/lib/python3.7/site-packages/pip/_internal/cli/main.py", line 10, in <module>
from pip._internal.cli.autocompletion import autocomplete
File "/home/pi/.local/share/virtualenvs/deep-dregs-eaJke9eC/lib/python3.7/site-packages/pip/_internal/cli/autocompletion.py", line 9, in <module>
from pip._internal.cli.main_parser import create_main_parser
File "/home/pi/.local/share/virtualenvs/deep-dregs-eaJke9eC/lib/python3.7/site-packages/pip/_internal/cli/main_parser.py", line 7, in <module>
from pip._internal.cli import cmdoptions
File "/home/pi/.local/share/virtualenvs/deep-dregs-eaJke9eC/lib/python3.7/site-packages/pip/_internal/cli/cmdoptions.py", line 23, in <module>
from pip._vendor.packaging.utils import canonicalize_name
ModuleNotFoundError: No module named 'pip._vendor.packaging'
Which python is used in that case? Looking at the shebang line it would seem it's the one I passed to pipenv initially:
$ head -n 1 /home/pi/.local/share/virtualenvs/deep-dregs-eaJke9eC/bin/pip
#!/home/pi/.local/share/virtualenvs/deep-dregs-eaJke9eC/bin/python
$ ls -l /home/pi/.local/share/virtualenvs/deep-dregs-eaJke9eC/bin/python
lrwxrwxrwx 1 pi pi 27 Dec 11 11:00 /home/pi/.local/share/virtualenvs/deep-dregs-eaJke9eC/bin/python -> /opt/python/3.7/bin/python3
But when I explicitly use that exact interpreter there is no error:
$ /opt/python/3.7/bin/python3 /home/pi/.local/share/virtualenvs/deep-dregs-eaJke9eC/bin/pip --version
pip 20.1.1 from /opt/python/3.7/lib/python3.7/site-packages/pip (python 3.7)
The difference seems to be that in the case it goes wrong, the pip installation in /home/pi/.local/share/virtualenvs/deep-dregs-eaJke9eC/lib/python3.7/site-packages/pip is used while in the working case it's the one in /opt/python/3.7/lib/python3.7/site-packages/pip.
But why? My understanding of the shebang is that it points to the interpreter that is to be used. In the working example all I do is call that interpreter explicitly myself. Why is there a difference in behaviour?
And also, why did pipenv even install its own pip in /home/pi/.local/share/virtualenvs/deep-dregs-eaJke9eC/lib/python3.7/site-packages/pip ? Why didn't it reuse the pip that comes with the python version I passed? And if that's just how pipenv works, why is its pip broken? What's going on? And how can I fix it?
EDIT
When I use my system Python 3.9 installation it works fine.
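If it helps with diagnosing: both invocations run the same binary, but the two commands do not end up in the same environment. A quick check (a minimal sketch, using the venv path from above):
# Invoked via the venv symlink: python finds the venv's pyvenv.cfg and uses the venv's site-packages
/home/pi/.local/share/virtualenvs/deep-dregs-eaJke9eC/bin/python -c "import sys; print(sys.prefix)"
# Invoked directly: no pyvenv.cfg next to it, so /opt/python/3.7 and its own site-packages are used
/opt/python/3.7/bin/python3 -c "import sys; print(sys.prefix)"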
I'm trying to train TF2 for object detection. When I run model_main_tf2.py, I get the following error:
Traceback (most recent call last):
File "C:\Python_venv\trained_models\model_main_tf2.py", line 32, in <module>
from object_detection import model_lib_v2
ImportError: cannot import name 'model_lib_v2' from 'object_detection' (c:\Python_venv\tensorflow\lib\site-packages\object_detection\__init__.py)
How do I install model_lib_v2?
I tried reinstalling TF and reinstalling the TensorFlow Object Detection API, but no luck. I went all over the internet looking for answers.
I found:
https://github.com/tensorflow/models/issues/7920
But they don't say how to install model_lib_v2.
Unfortunately I cannot use TF1; the goal is to use TF2.
See https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2.md
git clone https://github.com/tensorflow/models.git
## Python Package Installation
cd models/research
### Compile protos.
protoc object_detection/protos/*.proto --python_out=.
### Install TensorFlow Object Detection API.
cp object_detection/packages/tf2/setup.py .
python3 -m pip install --user --use-feature=2020-resolver .
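Once the install finishes, a quick sanity check (run it from outside models/research, so you test the installed package rather than the source tree) would be something like:
# Should print the path of the installed module instead of raising ImportError
python3 -c "from object_detection import model_lib_v2; print(model_lib_v2.__file__)"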
You should be able to just go to line 32 in "C:\Python_venv\trained_models\model_main_tf2.py" and remove "from object_detection", since you are already inside the package.
Replace:
"from object_detection import model_lib_v2"
with:
"import model_lib_v2"
I am trying to install the Text add-on (version 0.7.3) for Orange (version 3.23) on Windows 10, but I am getting the following error while building "ufal_udpipe":
Command failed: python python -m pip install --constraint 'C:\Users\Jakub\AppData\Local\Temp\tmpb4fneogu.txt' Orange3-Text exited with non zero status.
ERROR: Command errored out with exit status 1: 'C:\Users\Jakub\AppData\Local\Orange\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\Jakub\\AppData\\Local\\Temp\\pip-install-r71cit89\\ufal.udpipe\\setup.py'"'"'; __file__='"'"'C:\\Users\\Jakub\\AppData\\Local\\Temp\\pip-install-r71cit89\\ufal.udpipe\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\Jakub\AppData\Local\Temp\pip-record-gtt8e801\install-record.txt' --single-version-externally-managed --compile Check the logs for full command output.
I also tried to install Microsoft Visual C++, because it's required according to the log, but the problem is the same after the installation.
I tried to install the add-on through the Anaconda Prompt:
conda config --add channels conda-forge
conda install orange3-text
This process failed, because:
Preparing transaction: done
Verifying transaction: done
Executing transaction: failed
ERROR conda.core.link:_execute(502): An error occurred while installing package 'conda-forge::commonmark-0.9.0-py_0'.
CondaError: Cannot link a source that does not exist. C:\Users\Jakub\Miniconda3\Scripts\conda.exe
Running `conda clean --packages` may resolve your problem.
Attempting to roll back.
Rolling back transaction: done
The result of the conda clean command is the following:
Traceback (most recent call last):
File "C:\Users\Jakub\Miniconda3\Scripts\conda-script.py", line 10, in <module>
sys.exit(main())
File "C:\Users\Jakub\Miniconda3\lib\site-packages\conda\cli\main.py", line 112, in main
from ..exceptions import conda_exception_handler
File "C:\Users\Jakub\Miniconda3\lib\site-packages\conda\exceptions.py", line 18, in <module>
from .common.io import timeout
File "C:\Users\Jakub\Miniconda3\lib\site-packages\conda\common\io.py", line 28, in <module>
from .._vendor.tqdm import tqdm
File "C:\Users\Jakub\Miniconda3\lib\site-packages\conda\_vendor\tqdm\__init__.py", line 8, in <module>
from ._tqdm import tqdm
File "C:\Users\Jakub\Miniconda3\lib\site-packages\conda\_vendor\tqdm\_tqdm.py", line 13, in <module>
from ._utils import _supports_unicode, _environ_cols_wrapper, _range, _unich, \
File "C:\Users\Jakub\Miniconda3\lib\site-packages\conda\_vendor\tqdm\_utils.py", line 31, in <module>
colorama.init()
AttributeError: module 'colorama' has no attribute 'init'
Reinstalling Orange and Anaconda didn't help.
A full extract of the logs is here: Google Drive
Thank you for your help!
Jakub
I encountered just the first line of your error. I checked whether I had Python and Anaconda installed properly (and I had). I then kept trying to install, and on the 4th attempt it installed without an issue. My guess is that network issues caused the error.
Another thing suggested to me by their support was: "Yes, we are aware of the issue. Somewhere in the process we accidentally disabled the necessary setting. Go to Options - Settings, find the Add-ons tab and select 'Install add-ons with conda'. This should install Text successfully."
Okay, so I found a fix (at least for me; comment if it works for you).
How to fix it:
Download colorama.
Copy the "colorama" folder from the download and use it to replace the folder named "colorama" in C:\Program Files\Anaconda3\Lib\site-packages.
I suspect this issue occurs because of (issue 8842).
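An alternative to copying the folder by hand might be to reinstall colorama with pip from the Anaconda Prompt (a hedged suggestion; conda itself can't be used here because its command line is the thing that is failing):
# Reinstall the colorama package into the active (base) Anaconda environment
python -m pip install --force-reinstall colorama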
I tried running the TensorFlow Object Detection API on Colab according to:
Inline Link
I got the following error at the first step, "Install required packages".
How can I solve it?
Background: Python 2, GPU
/root
fatal: destination path 'models' already exists and is not an empty directory.
/root/models/research
Traceback (most recent call last):
File "object_detection/builders/model_builder_test.py", line 23, in <module>
from object_detection.builders import model_builder
ImportError: No module named object_detection.builders
I'm not clear on which directory you are executing the command from.
If you are executing it from the content directory, then go to the models directory and then to the research directory:
%cd ~/models/research
!python object_detection/builders/model_builder_test.py
If you don't have the models directory, clone it by using:
!git clone --quiet https://github.com/tensorflow/models.git
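If the import still fails after changing directory, the older TF1 setup for this test also expects the protos to be compiled and both research and research/slim to be on PYTHONPATH. A rough sketch for a Colab cell, assuming the repository is at /root/models as in your output:
%cd ~/models/research
!protoc object_detection/protos/*.proto --python_out=.
import os
os.environ['PYTHONPATH'] = os.environ.get('PYTHONPATH', '') + ':/root/models/research:/root/models/research/slim'
!python object_detection/builders/model_builder_test.py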
I am following this ml-engine guide. I set up gcloud and also created a VM. For TensorFlow, I am using Anaconda 3 to create my Python environment. I created a new environment with python=3.6. But when I run this:
gcloud ml-engine local train --module-name trainer.task --package-path trainer -- --train-files c:\Anaconda3\mytensorflowcode\cloudml-samples-master\census\estimator\data\adult.data.csv --eval-files c:\Anaconda3\mytensorflowcode\cloudml-samples-master\census\estimator\data\adult.test.csv --train-steps 1000 --job-dir c:\Anaconda3\mytensorflowcode\cloudml-samples-master\census\estimator\output --eval-steps 100
I am getting the following error:
Traceback (most recent call last):
File "D:\gcsdk174\google-cloud-sdk\platform\bundledpython\lib\runpy.py", line 174, in _run_module_as_main
"__main__", fname, loader, pkg_name)
File "D:\gcsdk174\google-cloud-sdk\platform\bundledpython\lib\runpy.py", line 72, in _run_code
exec code in run_globals
File "C:\Anaconda3\mytensorflowcode\cloudml-samples-master\census\estimator\trainer\task.py", line 4, in <module>
import model
File "trainer\model.py", line 20, in <module>
import tensorflow as tf
ImportError: No module named tensorflow
I was able to install TensorFlow successfully with the pip install -r ../requirements.txt command as per the guide.
Can anybody point out what I am doing wrong?
Update: this issue should now be fixed with the most recent version of gcloud. Can you give it a try and see if it works for you? First do:
gcloud components update
What's happening is that gcloud is (silently) requiring Python 2.7, which is causing your import error. This is a bug that we will fix soon. (It's particularly problematic for Windows, since TF doesn't support a 2.7 install on Windows.) We'll update here when it's fixed.
In the meantime, the best option is probably to test locally by just running your python script directly (unless you are trying to test distributed training locally).
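For example, reusing the flags from your gcloud command, running the trainer module directly might look something like this (a rough sketch; the paths and an activated Anaconda environment with TensorFlow installed are assumptions based on your command above):
# From the sample directory, run the trainer package as a module
cd c:\Anaconda3\mytensorflowcode\cloudml-samples-master\census\estimator
python -m trainer.task ^
  --train-files data\adult.data.csv ^
  --eval-files data\adult.test.csv ^
  --train-steps 1000 ^
  --job-dir output ^
  --eval-steps 100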
If you are trying to test distributed training locally, then your best temporary option is probably to use Docker and the TensorFlow docker container.
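For instance, something along these lines should give you a shell with TensorFlow preinstalled (the image tag is left at the default here; pick whichever official tensorflow/tensorflow tag matches the version you need):
# Pull the official TensorFlow image and start an interactive shell in it
docker pull tensorflow/tensorflow
docker run -it tensorflow/tensorflow bash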