I'm working with an example program that uses the MNIST dataset.
It tries to load the dataset using this line:
dataset = tfds.load(name='mnist', split=split)
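For context, the surrounding code is roughly the following minimal script (the split value 'train' is an assumption; the original program passes its own split variable):
import tensorflow_datasets as tfds

# Load the MNIST training split from the public TFDS catalogue.
split = 'train'
dataset = tfds.load(name='mnist', split=split)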
However, this yields the following error:
2020-07-30 12:08:17.926262: W tensorflow/core/platform/cloud/google_auth_provider.cc:184] All attempts to get a Google authentication bearer token failed, returning an empty token. Retrieving token from files failed with "Not found: Could not locate the credentials file.". Retrieving token from GCE failed with "Failed precondition: Error executing an HTTP request: libcurl code 6 meaning 'Couldn't resolve host name', error details: Couldn't resolve host 'metadata'".
Traceback (most recent call last):
  File "/home/tflynn/pylocal/lib/python3.7/site-packages/tensorflow_datasets/core/utils/py_utils.py", line 399, in try_reraise
    yield
  File "/home/tflynn/pylocal/lib/python3.7/site-packages/tensorflow_datasets/core/registered.py", line 244, in builder
    return builder_cls(name)(**builder_kwargs)
  File "/home/tflynn/pylocal/lib/python3.7/site-packages/tensorflow_datasets/core/api_utils.py", line 69, in disallow_positional_args_dec
    return fn(*args, **kwargs)
  File "/home/tflynn/pylocal/lib/python3.7/site-packages/tensorflow_datasets/core/dataset_builder.py", line 206, in __init__
    self.info.initialize_from_bucket()
  File "/home/tflynn/pylocal/lib/python3.7/site-packages/tensorflow_datasets/core/dataset_info.py", line 423, in initialize_from_bucket
    data_files = gcs_utils.gcs_dataset_info_files(self.full_name)
  File "/home/tflynn/pylocal/lib/python3.7/site-packages/tensorflow_datasets/core/utils/gcs_utils.py", line 71, in gcs_dataset_info_files
    return gcs_listdir(posixpath.join(GCS_DATASET_INFO_DIR, dataset_dir))
  File "/home/tflynn/pylocal/lib/python3.7/site-packages/tensorflow_datasets/core/utils/gcs_utils.py", line 64, in gcs_listdir
    if is_gcs_disabled() or not tf.io.gfile.exists(root_dir):
  File "/home/tflynn/.local/lib/python3.7/site-packages/tensorflow/python/lib/io/file_io.py", line 267, in file_exists_v2
    _pywrap_file_io.FileExists(compat.as_bytes(path))
tensorflow.python.framework.errors_impl.FailedPreconditionError: Error executing an HTTP request: libcurl code 77 meaning 'Problem with the SSL CA cert (path? access rights?)', error details: error setting certificate verify locations:
  CAfile: /etc/ssl/certs/ca-certificates.crt
  CApath: none
when reading metadata of gs://tfds-data/dataset_info/mnist/3.0.1
I've searched on Google but couldn't find any other instances of this error with TensorFlow. The node is connected to the internet, if that makes a difference.
I had the same issue on a Fedora 32 system. The directory /etc/ssl/certs/ exists and contains a file named ca-bundle.crt. The following command solved the problem:
sudo ln -s /etc/ssl/certs/ca-bundle.crt /etc/ssl/certs/ca-certificates.crt
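If you prefer to apply the same fix from Python instead of the shell (a rough equivalent, not the original answer; it needs root and assumes the bundle really lives at /etc/ssl/certs/ca-bundle.crt):
import os

src = '/etc/ssl/certs/ca-bundle.crt'        # CA bundle shipped by Fedora
dst = '/etc/ssl/certs/ca-certificates.crt'  # path libcurl is looking for

# Create the symlink only if it is missing; this requires root privileges.
if os.path.exists(src) and not os.path.exists(dst):
    os.symlink(src, dst)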
Probably the same as [1]. If on Linux, run
apt-get update
apt-get install -y ca-certificates
before executing your code, or run commands of similar effect on your OS.
[1] https://github.com/tensorflow/serving/issues/1022
If you are unable to create a symbolic link or install using apt, you can try upgrading to a more recent version of tfds. This issue is not present in the nightly build, version 3.2.1.
pip install tfds-nightly: released every day, contains the latest versions of the datasets.
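If you go that route, a quick way to confirm which tfds build actually got picked up is to print its version (a minimal sketch, assuming tensorflow_datasets imports cleanly):
import tensorflow_datasets as tfds

# Should report the nightly version rather than the older release you had installed.
print(tfds.__version__)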
According to the tensorflow/datasets GitHub repository, one commenter suggests downgrading to 3.0.0; however, I have not tried this to see if it works.
The error says curl couldn't find the file on your computer that holds the root SSL certificates. On my machine this file is stored at /etc/ssl/certs/ca-bundle.crt.
You can override the path curl looks in by setting the CURL_CA_BUNDLE environment variable. For example, either add this to the top of your Python script/notebook:
import os
os.environ['CURL_CA_BUNDLE'] = "/etc/ssl/certs/ca-bundle.crt"
or you could set the environment variable in a shell, e.g.
export CURL_CA_BUNDLE=/etc/ssl/certs/ca-bundle.crt
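Putting the pieces together, a minimal sketch of a script that sets the variable before touching tfds could look like this (the bundle path is taken from my machine and may differ on yours):
import os

# Point libcurl at a CA bundle that actually exists on this machine; set it
# before the first tensorflow_datasets call so the GCS requests pick it up.
os.environ['CURL_CA_BUNDLE'] = '/etc/ssl/certs/ca-bundle.crt'

import tensorflow_datasets as tfds

dataset = tfds.load(name='mnist', split='train')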
Related
I'm trying to compile CEF locally on my Ubuntu 20.10 machine, but my automate-git.py can't finish due to a strange error while running hooks:
Apply runhooks.patch in /home/user/code/chromium_git/chromium/src
9 5 build/toolchain/win/setup_toolchain.py
11 0 build/vs_toolchain.py
... successfully applied.
-------- Running "gclient runhooks --jobs 16" in "/home/user/code/chromium_git/chromium"...
Running hooks: 5% ( 6/101) nacltools
________ running 'vpython src/build/download_nacl_toolchains.py --mode nacl_core_sdk sync --extract' in '/home/user/code/chromium_git/chromium'
INFO: --Syncing arm_trusted to revision 2--
INFO: Downloading package archive: emulator_arm_trusted_precise.tgz (1/1)
package_version: Could not download URL (https://storage.googleapis.com/nativeclient-archive2/toolchain/2/emulator_arm_trusted_precise.tgz): <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:727)>
Error: Command 'vpython src/build/download_nacl_toolchains.py --mode nacl_core_sdk sync --extract' returned non-zero exit status 1 in /home/user/code/chromium_git/chromium
Traceback (most recent call last):
File "../automate/automate-git.py", line 1385, in <module>
run("gclient runhooks --jobs 16", chromium_dir, depot_tools_dir)
File "../automate/automate-git.py", line 69, in run
return subprocess.check_call(
File "/usr/lib/python3.8/subprocess.py", line 364, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['gclient', 'runhooks', '--jobs', '16']' returned non-zero exit status 2.
On restart it does succeed, but there are compilation errors later on. I put check_certificate = off in ~/.wgetrc and insecure in ~/.curlrc, but no luck yet. What do I do?
I solved this issue by setting up a VM and compiling CEF inside it, and it all magically started working, so I guess it was my system's issue.
I am new to TensorFlow. I am studying the DeepSpeech project (https://github.com/mozilla/DeepSpeech), but when I run evaluate.py, I get the error:
ValueError: Scorer initialization failed with error code 1
The details are as follows. Can anybody help me resolve this issue? Thanks!
Stack information:
File "/home/zhangp/Desktop/sr/DeepSpeech-master/evaluate.py", line 49, in evaluate
FLAGS.scorer_path, Config.alphabet)
File "/home/zhangp/.local/lib/python3.6/site-packages/ds_ctcdecoder/__init__.py", line 41, in __init__
raise ValueError('Scorer initialization failed with error code {}'.format(err))
ValueError: Scorer initialization failed with error code 1
I'm not the most expert in Python / DeepSpeech out there, but after facing the same issue, here is what their Mozilla Discourse says is happening and how to fix it:
"It means the scorer hasn't been installed. You need to install git-lfs and do git lfs pull to download it."
On macOS git-lfs is installed via brew (or apt-get install on Linux), and then run "git lfs pull".
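A quick way to tell whether the scorer on disk is the real model or just an un-pulled git-lfs pointer is to look at its first bytes (my own sketch, not part of DeepSpeech; the scorer path is hypothetical):
scorer_path = 'deepspeech-models/kenlm.scorer'  # hypothetical path to your scorer

# git-lfs pointer files are tiny text files that start with this header;
# the real scorer is a large binary blob.
with open(scorer_path, 'rb') as f:
    head = f.read(64)

if head.startswith(b'version https://git-lfs.github.com/spec/v1'):
    print('This is a git-lfs pointer; run `git lfs pull` to fetch the real scorer.')
else:
    print('The scorer looks like a real binary file.')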
Buildroot version: 2017-02
I need to integrate the Python package 'chardet' into my build. chardet requires the 'pytest-runner' package.
Case 1
'chardet' is marked to be integrated into the build. 'pytest-runner' is not pre-fetched, and the 'chardet' package was updated to the latest version (3.0.3) before the build with the scanpypi script. When make is run, the following error message is shown, indicating problems with 'pytest-runner':
>>> python-chardet 3.0.3 Building
(cd /home/nnnn/bldr_lab/buildroot/output/build/python-chardet-3.0.3//;
PATH="/home/nnnn/bldr_lab/buildroot/output/host/bin:/home/nnnn/bldr_lab
/buildroot/output/host/sbin:/home/nnnn/bldr_lab/buildroot/output/host/usr
/bin:/home/nnnn/bldr_lab/buildroot/output/host/usr/sbin:/home/nnnn/x-tools
/arm-cortex_a8-linux-gnueabihf/bin:/home/nnnn/bin:/home/nnnn/.local
/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr
/games:/usr/local/games" PYTHONPATH="/home/nnnn/bldr_lab/buildroot/output
/target/usr/lib/python2.7/sysconfigdata/:/home/nnnn/bldr_lab/buildroot
/output/target/usr/lib/python2.7/site-packages/" _python_sysroot=/home
/nnnn/bldr_lab/buildroot/output/host/usr/arm-buildroot-linux-gnueabihf
/sysroot _python_prefix=/usr _python_exec_prefix=/usr /home/nnnn/bldr_lab
/buildroot/output/host/usr/bin/python setup.py build )
Download error on https://pypi.python.org/simple/pytest-runner/: unknown url type: https -- Some packages may not be found!
Couldn't find index page for 'pytest-runner' (maybe misspelled?)
Download error on https://pypi.python.org/simple/: unknown url type: https -- Some packages may not be found!
No local packages or download links found for pytest-runner
Traceback (most recent call last):
  File "setup.py", line 52, in <module>
    ['chardetect = chardet.cli.chardetect:main']})
  File "/home/nnnn/bldr_lab/buildroot/output/host/usr/lib/python2.7/distutils/core.py", line 111, in setup
    _setup_distribution = dist = klass(attrs)
  File "build/bdist.linux-x86_64/egg/setuptools/dist.py", line 268, in __init__
  File "build/bdist.linux-x86_64/egg/setuptools/dist.py", line 313, in fetch_build_eggs
  File "build/bdist.linux-x86_64/egg/pkg_resources/__init__.py", line 846, in resolve
  File "build/bdist.linux-x86_64/egg/pkg_resources/__init__.py", line 1091, in best_match
  File "build/bdist.linux-x86_64/egg/pkg_resources/__init__.py", line 1103, in obtain
  File "build/bdist.linux-x86_64/egg/setuptools/dist.py", line 380, in fetch_build_egg
  File "build/bdist.linux-x86_64/egg/setuptools/command/easy_install.py", line 633, in easy_install
distutils.errors.DistutilsError: Could not find suitable distribution for Requirement.parse('pytest-runner')
package/pkg-generic.mk:216: recipe for target '/home/nnnn/bldr_lab/buildroot/output/build/python-chardet-3.0.3/.stamp_built' failed
make[1]: *** [/home/nnnn/bldr_lab/buildroot/output/build/python-chardet-3.0.3/.stamp_built] Error 1
Makefile:79: recipe for target '_all' failed
make: *** [_all] Error 2
Case 2
When I try to create the 'pytest-runner' package with scanpypi, the following error message is shown:
nnnn#xxxx:~/bldr2/buildroot$ ./support/scripts/scanpypi pytest-runner -o package
buildroot package name for pytest-runner: python-pytest-runner
Package: python-pytest-runner
Fetching package pytest-runner
Downloading package pytest-runner from https://pypi.python.org/packages/9e/4d/08889e5e27a9f5d6096b9ad257f4dea1faabb03c5ded8f665ead448f5d8a/pytest-runner-2.11.1.tar.gz...
Error: Could not install package pytest-runner
When I download pytest-runner-2.11.1.tar.gz from PyPI manually, it looks like a normal PyPI tar file. Any idea what the root cause is and how to solve the problem?
-timo-
I am getting this error while running the sample file given with TensorFlow, in the imagenet model:
File "classify_image.py", line 154, in run_inference_on_image
if not tf.gfile.Exists(image):
AttributeError: 'module' object has no attribute 'gfile'
I have tried installing both from pip and from source, as well as on a virtual server, but I still get this error.
There are two ways to go about solving this problem:
Option 1
This is an expansion on Yaroslav's comment above, and is easier than option 2.
Modify classify_image.py as follows:
Replace all instances of tf.gfile.Exists with os.path.exists, and
replace all instances of tf.gfile.GFile and tf.gfile.FastGFile with open.
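For example, the touched lines could end up looking roughly like this (an illustrative sketch, not the exact upstream source of classify_image.py):
import os

image = 'cropped_panda.jpg'  # hypothetical image path, for illustration only

# Before: if not tf.gfile.Exists(image):
if not os.path.exists(image):
    print('File does not exist: %s' % image)
else:
    # Before: with tf.gfile.FastGFile(image, 'rb') as f:
    with open(image, 'rb') as f:
        image_data = f.read()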
Then run the modified classify_image.py, and it should work.
Option 2
Update TensorFlow to the latest version that includes gfile, as described here.
However, after you do that, you might encounter the following error when you try to run classify_image.py:
$ python classify_image.py
>> Downloading inception-2015-12-05.tgz 100.0%
Succesfully downloaded inception-2015-12-05.tgz 88931400 bytes.
[libprotobuf ERROR google/protobuf/src/google/protobuf/io/coded_stream.cc:207] A protocol message was rejected because it was too big (more than 67108864 bytes). To increase the limit (or to disable these warnings), see CodedInputStream::SetTotalBytesLimit() in google/protobuf/io/coded_stream.h.
Traceback (most recent call last):
File "classify_image.py", line 213, in <module>
tf.app.run()
File "/Users/USERNAME/.virtualenvs/mlnd/lib/python2.7/site-packages/tensorflow/python/platform/default/_app.py", line 30, in run
sys.exit(main(sys.argv))
File "classify_image.py", line 209, in main
run_inference_on_image(image)
File "classify_image.py", line 159, in run_inference_on_image
create_graph()
File "classify_image.py", line 141, in create_graph
graph_def.ParseFromString(f.read())
google.protobuf.message.DecodeError: Error parsing message
To fix this, you can change a line in the source code as described here and then recompile TensorFlow on your machine.
Option 2 might be a bit of work, especially if you're on a Mac.
The following steps helped me resolve the issue:
1. Update TensorFlow as described here: https://www.tensorflow.org/versions/r0.7/get_started/os_setup.html
2. For me this resulted in a protobuf error. To resolve it, I ran
$ sudo pip uninstall protobuf
$ sudo pip install --upgrade https://storage.googleapis.com/tensorflow/mac/tensorflow-0.7.1-cp27-none-any.whl
I'm using CKAN as my open data portal and have successfully installed the CKAN Quality Assurance Extension according to the instructions at https://github.com/ckan/ckanext-qa/. I'm currently facing a problem with this step:
This step can be performed by running the associated paster command from the ckanext-qa directory.
$ paster qa update|clean [package name/id] --config=<path to ckan config file>
I am getting this error:
/usr/lib/ckan/default/src/ckanext-qa-master$ paster qa update|clean --config=/etc/ckan/default
No command 'clean' found, did you mean:
Command 'uclean' from package 'svn-buildpackage' (universe)
Command 'clear' from package 'ncurses-bin' (main)
clean: command not found
Traceback (most recent call last):
File "/usr/lib/ckan/default/bin/paster", line 9, in <module>
load_entry_point('PasteScript==1.7.5', 'console_scripts', 'paster')()
File "/usr/lib/ckan/default/local/lib/python2.7/site-packages/paste/script/command.py", line 104, in run
invoke(command, command_name, options, args[1:])
File "/usr/lib/ckan/default/local/lib/python2.7/site-packages/paste/script/command.py", line 143, in invoke
exit_code = runner.run(args)
File "/usr/lib/ckan/default/local/lib/python2.7/site-packages/paste/script/command.py", line 238, in run
result = self.command()
File "/usr/lib/ckan/default/src/ckanext-qa-master/ckanext/qa/commands.py", line 50, in command
self._load_config()
File "/usr/lib/ckan/default/src/ckan/ckan/lib/cli.py", line 91, in _load_config
conf = self._get_config()
File "/usr/lib/ckan/default/src/ckan/ckan/lib/cli.py", line 86, in _get_config
raise AssertionError('Config filename %r does not exist.' % self.filename)
AssertionError: Config filename '/usr/lib/ckan/default/src/ckanext-qa-master/development.ini' does not exist.
My ckanext-qa directory is /usr/lib/ckan/default/src/ckanext-qa-master and my ckan config file is located at /etc/ckan/default. Did I run the command correctly?
The command that you ran has two mistakes in it. First of all, "update|clean" stands for "update or clean": you are meant to run one of the two, and in your command the shell interpreted the | as a pipe, which is why it complained that 'clean' was not found. Also, you didn't specify the correct config file path. See the correct update and clean commands below:
paster qa update --config=/etc/ckan/default/development.ini
paster qa clean --config=/etc/ckan/default/development.ini
Additionally, there are two ways to run extension-specific paster commands:
1. Navigate to the ckanext-qa directory and execute the command:
paster qa update --config=/etc/ckan/default/development.ini
2. Explicitly specify the extension name and then run the command:
paster --plugin=ckanext-qa qa update --config=/etc/ckan/default/development.ini