Python3 replacing Python errors with different output - error-handling

I have one issue that I would like to fix but have not been able to. I have a small script that requires an SSH key to work properly. When the SSH key is not properly loaded, the following Python error appears:
SSH: Permission denied (publickey).
Traceback (most recent call last):
  File "/path/to/python3file.py", line 117, in
    func.func_check()
  File "/path/to/python3file.py", line 18, in func_check
    ssh = subprocess.check_output(["ssh", "-p22", "{}#{}".format("user", self.host), command])
  File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/subprocess.py", line 411, in check_output
    return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
  File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/subprocess.py", line 512, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['ssh', '-p22', 'user#chost', ' script func user']' returned non-zero exit status 255.
Basically, I want to replace that entire error output with something more user-friendly, like:
Please import SSH key
Is that even possible?
Thank you.

Use try/except and specify the type of error you are handling. In this case it is subprocess.CalledProcessError, which subprocess.check_output raises when the command returns a non-zero exit status (as shown in your traceback).
try:
    print('my code here')
except subprocess.CalledProcessError as e:
    print('Please import SSH key')
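For a fuller sketch applied to the call from the traceback (the names below mirror the question; the exact structure of func_check is an assumption, not the original code):
import subprocess
import sys

def func_check(host, command):
    try:
        # Same call as in the traceback; the "user"/host formatting is copied from the question.
        return subprocess.check_output(
            ["ssh", "-p22", "{}#{}".format("user", host), command]
        )
    except subprocess.CalledProcessError:
        # ssh exits with status 255 on connection or authentication failures,
        # so this replaces the long traceback with a short message.
        sys.exit("Please import SSH key")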

Related

Random tables in a DAG fail in the create table stage with FileNotFoundError: beeline error

The create table stage -
CREATE TABLE IF NOT EXISTS `aa_db_aaa_prod.BASE` (`UpdatedByName` STRING, `UpdatedOn` BIGINT, `UpdatedOnTimeZoneOffset` INTEGER);
ERROR - Task failed with exception
Traceback (most recent call last):
  File "/opt/bitnami/airflow/venv/lib/python3.8/site-packages/cloudera/cdp/airflow/operators/cdw_operator.py", line 108, in execute
    self.hook.run_cli(hql=self.hql, schema=self.schema, hive_conf=self.hiveconfs)
  File "/opt/bitnami/airflow/venv/lib/python3.8/site-packages/cloudera/cdp/airflow/hooks/cdw_hook.py", line 204, in run_cli
    sub_process = subprocess.Popen(
  File "/opt/bitnami/python/lib/python3.8/subprocess.py", line 858, in __init__
    self._execute_child(args, executable, preexec_fn, close_fds,
  File "/opt/bitnami/python/lib/python3.8/subprocess.py", line 1704, in _execute_child
    raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'beeline'
However, my confusion is that this is not a new installation and nothing has changed. Also, other tables in the DAG are successful, and a Beeline connection through the CLI on the Airflow server with the same user works. I can't understand the cause of this error. Any leads on this, please?
The cause was that /repos/cloudera/parcels/CDH/bin/beeline had been removed on one of the Airflow nodes, which was causing this issue.
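A quick way to confirm this on each node is to check whether the binary is resolvable from the environment the Airflow process runs in; a small sketch (it assumes only that beeline should be on the PATH, as it must be for the subprocess.Popen call above):
import shutil

# shutil.which returns the full path if 'beeline' is on PATH for this process, else None.
path = shutil.which("beeline")
if path is None:
    print("beeline not found on this node - restore the binary or fix PATH")
else:
    print("beeline found at", path)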

Problem with building Chromium for Windows on Mac host

I am trying to cross-build Chromium on a macOS host. I have followed the instructions from https://chromium.googlesource.com/chromium/src.git/+/HEAD/docs/win_cross.md and everything seems fine. But when I run autoninja -C out/win chrome I get the following error. Any help would be appreciated.
ninja: Entering directory `out/win'
[0/1] Regenerating ninja files
ERROR at //build/config/win/visual_studio_version.gni:27:7: Script returned non-zero exit code.
exec_script("../../vs_toolchain.py", [ "get_toolchain_dir" ], "scope")
^----------
Current dir: /chromium/src/out/win/
Command: python3 /chromium/src/build/vs_toolchain.py get_toolchain_dir
Returned 1.
stderr:
Traceback (most recent call last):
File "/chromium/src/build/vs_toolchain.py", line 587, in <module>
sys.exit(main())
File "/chromium/src/build/vs_toolchain.py", line 583, in main
return commands[sys.argv[1]](*sys.argv[2:])
File "/chromium/src/build/vs_toolchain.py", line 561, in GetToolchainDir
win_sdk_dir = SetEnvironmentAndGetSDKDir()
File "/chromium/src/build/vs_toolchain.py", line 554, in SetEnvironmentAndGetSDKDir
return NormalizePath(os.environ['WINDOWSSDKDIR'])
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/os.py", line 679, in __getitem__
raise KeyError(key) from None
KeyError: 'WINDOWSSDKDIR'
See //third_party/angle/gni/angle.gni:35:5: whence it was imported.
import("//build/config/win/visual_studio_version.gni")
^----------------------------------------------------
See //chrome/elevation_service/BUILD.gn:7:1: whence it was imported.
import("//build/toolchain/win/midl.gni")
^--------------------------------------
See //BUILD.gn:660:7: which caused the file to be included.
"//chrome/elevation_service:elevation_service_unittests",
^-------------------------------------------------------
FAILED: build.ninja
../../buildtools/mac/gn --root=../.. -q --ide=vs --regeneration gen .
ninja: error: rebuilding 'build.ninja': subcommand failed
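For context, the failing frame in vs_toolchain.py is a plain environment lookup, and os.environ[...] raises KeyError when the variable is unset, which is exactly what the traceback shows. A minimal sketch of the difference (the fallback form is only illustrative; it is not what vs_toolchain.py does):
import os

# os.environ[...] raises KeyError if WINDOWSSDKDIR is not set - the failure in the traceback.
try:
    sdk_dir = os.environ['WINDOWSSDKDIR']
except KeyError:
    print("WINDOWSSDKDIR is not set in this environment")

# A lookup with a default would not raise:
sdk_dir = os.environ.get('WINDOWSSDKDIR', '')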

TensorFlow data loading issue

I'm working with an example program that uses the MNIST dataset.
It tries to load the dataset using this line:
dataset = tfds.load(name='mnist', split=split)
However, this yields the following error:
2020-07-30 12:08:17.926262: W tensorflow/core/platform/cloud/google_auth_provider.cc:184] All attempts to get a Google authentication bearer token failed, returning an empty token. Retrieving token from files failed with "Not found: Could not locate the credentials file.". Retrieving token from GCE failed with "Failed precondition: Error executing an HTTP request: libcurl code 6 meaning 'Couldn't resolve host name', error details: Couldn't resolve host 'metadata'".
Traceback (most recent call last):
  File "/home/tflynn/pylocal/lib/python3.7/site-packages/tensorflow_datasets/core/utils/py_utils.py", line 399, in try_reraise
    yield
  File "/home/tflynn/pylocal/lib/python3.7/site-packages/tensorflow_datasets/core/registered.py", line 244, in builder
    return builder_cls(name)(**builder_kwargs)
  File "/home/tflynn/pylocal/lib/python3.7/site-packages/tensorflow_datasets/core/api_utils.py", line 69, in disallow_positional_args_dec
    return fn(*args, **kwargs)
  File "/home/tflynn/pylocal/lib/python3.7/site-packages/tensorflow_datasets/core/dataset_builder.py", line 206, in __init__
    self.info.initialize_from_bucket()
  File "/home/tflynn/pylocal/lib/python3.7/site-packages/tensorflow_datasets/core/dataset_info.py", line 423, in initialize_from_bucket
    data_files = gcs_utils.gcs_dataset_info_files(self.full_name)
  File "/home/tflynn/pylocal/lib/python3.7/site-packages/tensorflow_datasets/core/utils/gcs_utils.py", line 71, in gcs_dataset_info_files
    return gcs_listdir(posixpath.join(GCS_DATASET_INFO_DIR, dataset_dir))
  File "/home/tflynn/pylocal/lib/python3.7/site-packages/tensorflow_datasets/core/utils/gcs_utils.py", line 64, in gcs_listdir
    if is_gcs_disabled() or not tf.io.gfile.exists(root_dir):
  File "/home/tflynn/.local/lib/python3.7/site-packages/tensorflow/python/lib/io/file_io.py", line 267, in file_exists_v2
    _pywrap_file_io.FileExists(compat.as_bytes(path))
tensorflow.python.framework.errors_impl.FailedPreconditionError: Error executing an HTTP request: libcurl code 77 meaning 'Problem with the SSL CA cert (path? access rights?)', error details: error setting certificate verify locations:
  CAfile: /etc/ssl/certs/ca-certificates.crt
  CApath: none
when reading metadata of gs://tfds-data/dataset_info/mnist/3.0.1
I've searched on Google, but couldn't find any other instances of this error with TensorFlow. The node is connected to the internet, if that makes a difference.
I had the same issue on a Fedora 32 system. The directory /etc/ssl/certs/ exists and contains a file ca-bundle.crt. The following command solved the problem:
sudo ln -s /etc/ssl/certs/ca-bundle.crt /etc/ssl/certs/ca-certificates.crt
This is probably the same as [1]. If you are on Linux, run
apt-get update
apt-get install -y ca-certificates
before executing your code, or the equivalent commands for your OS.
[1] https://github.com/tensorflow/serving/issues/1022
If you are unable to create a symbolic link or install packages with apt, you can try upgrading to a more recent version of tfds. This issue is not present in the nightly build, version 3.2.1.
pip install tfds-nightly: released every day, contains the latest versions of the datasets.
According to the tensorflow/datasets GitHub repository, one commenter suggests downgrading to 3.0.0; however, I have not tried this to see if it works.
The error says curl could not find the file with root SSL certificates on your computer. On my machine this file is stored at /etc/ssl/certs/ca-bundle.crt.
You can override the path curl looks in by setting the CURL_CA_BUNDLE environment variable. For example, either add this to the top of your Python script/notebook:
import os
os.environ['CURL_CA_BUNDLE'] = "/etc/ssl/certs/ca-bundle.crt"
or you could set the environment variable in a shell, e.g.
export CURL_CA_BUNDLE=/etc/ssl/certs/ca-bundle.crt
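Putting this together with the load call from the question, a minimal sketch (the CA bundle path is the one from this answer and may differ on your system; setting the variable before importing TensorFlow is a precaution, not a documented requirement):
import os

# Point libcurl at a CA bundle that actually exists on this machine (adjust the path).
os.environ['CURL_CA_BUNDLE'] = "/etc/ssl/certs/ca-bundle.crt"

import tensorflow_datasets as tfds

# Same call as in the question.
dataset = tfds.load(name='mnist', split='train')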

Why does a RuntimeError happen after importing MeCab?

What is the problem?
I use Python 3 on Windows 10; the environment is Anaconda.
m=MeCab.Tagger("-Ochasen")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\a.sakata\Anaconda3\lib\site-packages\MeCab.py", line 253, in __init__
_MeCab.Tagger_swiginit(self, _MeCab.new_Tagger(*args))
RuntimeError
Your dicrc probably doesn't include the chasen output format. This causes the MeCab C library to abort with an error, which surfaces as the bare RuntimeError in Python.
I get the same error, and if I run mecab on the command line I get this output:
$ mecab -Ochasen
writer.cpp(63) [!tmp.empty()] unkown format type [chasen]
If you don't get an error on the command line the cause might be something else.
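If you want the Python side to fail with a clearer message while you fix the dictionary configuration, a small sketch (the -Ochasen argument is from the question; the wording of the message is just an example):
import MeCab

try:
    tagger = MeCab.Tagger("-Ochasen")
except RuntimeError:
    # MeCab raises a bare RuntimeError when the requested output format
    # is missing from dicrc; point at the likely cause instead.
    raise SystemExit(
        "MeCab could not initialise with -Ochasen; check that your dicrc "
        "defines the chasen format (try running `mecab -Ochasen` on the command line)."
    )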

JHBuild runtime error "Failed to close %s stream" (MacOS)

I started a JHBuild with the wrong arguments (forgot 'build') and hit control-C at what appears to have been the wrong moment.
Now when I try any JHBuild command, e.g. jhbuild bootstrap, I get:
Traceback (most recent call last):
File "/Users/gnucashdev/Source/jhbuild/jhbuild/config.py", line 197, in load
execfile(filename, config)
File "/Users/gnucashdev/.jhbuildrc", line 408, in <module>
execfile(_userrc)
File "/Users/gnucashdev/.jhbuildrc-custom", line 22, in <module>
setup_sdk()
File "/Users/gnucashdev/.jhbuildrc", line 260, in setup_sdk
gcc = _popen("xcrun -f gcc")
File "/Users/gnucashdev/.jhbuildrc", line 41, in _popen
raise RuntimeError, "Failed to close %s stream" % cmd_arg
RuntimeError: Failed to close xcrun -f gcc stream
jhbuild: could not load config file
I've tried re-installing jhbuild with
./gtk-osx-build-setup.sh
but the next step - i.e.
jhbuild bootstrap
yields the above error. Some file appears to have been compromised, perhaps truncated. But I'm having a hard time figuring out which.
I had the same error. xcrun is returning an error, probably due to an incorrect environment variable. In my case, I was running jhbuild while inside a jhbuild shell, which caused the SDKDIR environment variable to contain two copies of the path to the SDK directory. Exiting the jhbuild shell fixed the problem.
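For reference, the .jhbuildrc helper that raises this error follows the usual os.popen pattern: it reads the command's output and treats a non-zero exit status (reported by close()) as a failure. A rough sketch of that pattern, not the actual file contents:
import os

def _popen(cmd_arg):
    stream = os.popen(cmd_arg)
    output = stream.read().strip()
    # close() returns None on success and the exit status otherwise,
    # so any failure of the command surfaces as this RuntimeError.
    if stream.close() is not None:
        raise RuntimeError("Failed to close %s stream" % cmd_arg)
    return output

# If `xcrun -f gcc` fails (for example because of a broken environment),
# this reproduces the "Failed to close xcrun -f gcc stream" error.
gcc = _popen("xcrun -f gcc")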