Update (2019-02-07): the issue has now been fixed, so if you're still running into this, try gcloud components update.
At some point during the past few months, my bq tool stopped working. Even the simplest command shows this error:
$ bq show
BigQuery error in show operation: Cannot contact server. Please try again.
Traceback: Traceback (most recent call last):
File "/opt/google-cloud-sdk/platform/bq/bigquery_client.py", line 685, in BuildApiClient
response_metadata, discovery_document = http.request(discovery_url)
File "/opt/google-cloud-sdk/platform/bq/third_party/oauth2client_4_0/transport.py", line 176, in new_request
redirections, connection_type)
File "/opt/google-cloud-sdk/platform/bq/third_party/oauth2client_4_0/transport.py", line 283, in request
connection_type=connection_type)
File "/opt/google-cloud-sdk/platform/bq/third_party/httplib2/__init__.py", line 1626, in request
(response, content) = self._request(conn, authority, uri, request_uri, method, body, headers, redirections, cachekey)
File "/opt/google-cloud-sdk/platform/bq/third_party/httplib2/__init__.py", line 1368, in _request
(response, content) = self._conn_request(conn, request_uri, method, body, headers)
File "/opt/google-cloud-sdk/platform/bq/third_party/httplib2/__init__.py", line 1288, in _conn_request
conn.connect()
File "/opt/google-cloud-sdk/platform/bq/third_party/httplib2/__init__.py", line 1082, in connect
raise SSLHandshakeError(e)
SSLHandshakeError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:726)
I've tried the following:
sudo gcloud components update (version 221.0.0).
sudo pacman -Syu (system update) to get the latest set of SSL certificates. This is Arch Linux, so pretty much always bleeding edge.
sudo gcloud components reinstall.
Uninstalling google-cloud-sdk, wiping out remaining /opt/google-cloud-sdk and reinstalling entirely from AUR.
Adding --httplib2_debuglevel=3 (valid values are not documented, found the value 3 here). This does not give any extra output.
Adding --ca_certificates_file= pointing at /etc/ca-certificates/extracted/tls-ca-bundle.pem, /etc/ca-certificates/extracted/ca-bundle.trust.crt, or /etc/ssl/certs/ca-certificates.crt; one of these must surely be the bundle of root certificates on my system. The last one is used by curl, which can talk to www.googleapis.com just fine.
Poking at the source code to discover that /opt/google-cloud-sdk/platform/bq/third_party/httplib2/cacerts.txt is the cert bundle used by default. If I try this one with curl --cacert ..., it still works (the exact commands are sketched after this list).
Setting the GOOGLE_APPLICATION_CREDENTIALS environment variable in this shell. As expected, this also doesn't make a difference; the SSL error occurs before bq has even had a chance to begin the OAuth handshake.
Adding --disable_ssl_validation. This "works" but is obviously not secure.
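In case it helps anyone reproduce this, here is roughly how I've been checking things from outside bq (a sketch only; CLOUDSDK_PYTHON is my assumption about how the SDK picks its interpreter, and it may simply fall back to the system python2):
# Which interpreter bq runs under, and which OpenSSL that interpreter links against
"${CLOUDSDK_PYTHON:-python2}" -c "import ssl; print(ssl.OPENSSL_VERSION)"
# The exact discovery request from the traceback, using the SDK's bundled CA file
curl -v -o /dev/null \
  --cacert /opt/google-cloud-sdk/platform/bq/third_party/httplib2/cacerts.txt \
  https://www.googleapis.com/discovery/v1/apis/bigquery/v2/rest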
Anyone else seeing this, or have ideas how to debug/solve?
I'm seeing the exact same issue using Arch Linux as well.
However, when you issue a bq command on the command line, I'm pretty sure the certificate file at /opt/google-cloud-sdk/platform/bq/third_party/httplib2/cacerts.txt is not used, because the flag --ca_certificates_file=/etc/ssl/certs/ca-certificates.crt is put into the flags automatically during the application bootstrap process. On Arch Linux, this file is a symlink to /etc/ca-certificates/extracted/tls-ca-bundle.pem.
I've tried using curl and openssl s_client with this CA bundle against the API URL being called, which is
https://www.googleapis.com/discovery/v1/apis/bigquery/v2/rest
and it works just fine.
My assumption is that this is not an issue with missing or expired certificates. My pyopenssl package is at version 18.0.0, so I'm on the newest version there. Instead, I think this issue is caused by unsupported ciphers or algorithms in the TLS handshake.
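A rough way to check that hypothesis (a sketch only, not something I've confirmed): compare the protocol and cipher the server actually negotiates with what the OpenSSL used by the bq interpreter can offer.
# Protocol and cipher negotiated with the system OpenSSL
openssl s_client -connect www.googleapis.com:443 </dev/null 2>/dev/null | grep -E 'Protocol|Cipher'
# OpenSSL version seen by the interpreter bq runs under (python2 here is an assumption)
python2 -c "import ssl; print(ssl.OPENSSL_VERSION)"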
There's a public issue tracker describing behavior similar to what you're seeing. I suggest starring it to stay updated, as well as adding your own scenario.
If you're behind a corporate proxy, comment #8 describes a scenario in which the proxy replaces the certificate, and a workaround is provided in comment #16.
Hope it helps.
Edit 1: I am using the Floobits plugin (latest release), which I uninstalled and installed again from the package manager. I am not getting a traceback, but an error window with the following message: Unable to join workspace. CERTIFICATE_VERIFY_FAILED unable to get local issuer certificate (_ssl.c:1124).
Edit 2: I had tried Package Control's Upgrade Package and Satisfy Dependencies commands, but that did not fix it.
I was able to fix it (answer below).
I have been stuck on this issue for days now. When I try to connect to a floobits workspace in sublime text, I get the error message that CERTIFICATE_VERIFY_FAILED unable to get local issuer certificate (_ssl.c:1124).
I've searched about this a lot, but I still don't know what's wrong.
I started by upgrading certifi and pip itself.
Then I read somewhere that I should check whether OpenSSL (and Python's requests library) and cURL can open the URL (floobits.com), since the workspace is hosted there. curl returned no errors, but OpenSSL (and requests) couldn't verify and gave the same error.
So I downloaded the certificates for the site by opening it in Chrome. I downloaded all three certificates (for the root, the intermediate and the website itself) and appended them to cacert.pem inside the certifi package folder. After that, OpenSSL was able to open it (and the requests library too, which got a 200 response code).
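For reference, this is roughly how I did the append (a sketch; root.pem, intermediate.pem and site.pem are just my names for the three downloaded certificates):
# certifi can print the path of the bundle it uses
python -m certifi
# append the downloaded certificates to that bundle
cat root.pem intermediate.pem site.pem >> "$(python -m certifi)"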
But, floobits still wasn't able to connect and gave the same error. I know that there is nothing wrong with floobits.com and the sublime extension since a friend can still open the workspace without problems.
Please tell me what I can do to fix this.
So I managed to 'solve' the problem by bringing Sublime back to a fresh state and then copying my files back. Answering here in case anyone else hits this problem.
Method:
Go to Sublime > Preferences > Browse Packages
In this folder, copy and backup the 'User' folder to someplace else.
Delete all the folders in this location (Sublime need not be closed during this but it's best if you do close it)
Copy the backed-up 'User' folder back here (remember the location you deleted from; on Windows it's AppData\Roaming\Sublime Text 3\Packages)
All set, you now have a fresh installation of sublime.
Sublime will start installing all packages again, wait for it to finish and resume work.
This solved the problem I was having. Unfortunately, I couldn't track down the problem.
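If you prefer doing this from a terminal, a rough equivalent of the steps above on Linux looks like this (the packages path is an assumption based on the default Sublime Text 3 location; on Windows use the AppData path mentioned above):
PKG_DIR="$HOME/.config/sublime-text-3/Packages"    # default ST3 location on Linux
cp -r "$PKG_DIR/User" ~/sublime-user-backup        # back up your settings
rm -rf "$PKG_DIR"                                  # wipe the packages folder
mkdir -p "$PKG_DIR"
cp -r ~/sublime-user-backup "$PKG_DIR/User"        # restore your settings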
I have downloaded YouTube videos using ClipGrab before (v3.8.11), but after updating it (to v3.9.6) I get an error.
First I am informed that an additional dependency "youtube-dl" must be downloaded.
But when I try to do this I get: Error downloading youtube-dl: SSL handshake failed
I tried installing youtube-dl separately using brew install youtube-dl and it appears to have succeeded, but ClipGrab doesn't see this installation, and still gives the SSL error.
I tried going back to my old version of ClipGrab (v3.8.11) but it now gives a different error: Could not retrieve video link. So maybe there is some server issue or other problem not local to the app, that is preventing the old version from working too.
(1) Is there a way to get around the SSL error or make ClipGrab recognize my brew installation of youtube-dl? (2) If there is no way to get the current v3.9.6 working, is there a way to get an older version working?
Thanks
Hopefully this will work for you (it worked for me). I'm on Kubuntu 22.04; I had to manually download yt-dlp:
mkdir -p ~/.local/share/ClipGrab/ClipGrab
wget -P ~/.local/share/ClipGrab/ClipGrab/ \
https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp
After that, ClipGrab started working.
Not sure about Mac with Homebrew, but on Linux this was my solution.
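One extra note (an assumption on my part, since ClipGrab runs yt-dlp as a program): if it still isn't picked up, make sure the downloaded file is executable:
chmod +x ~/.local/share/ClipGrab/ClipGrab/yt-dlp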
Running Kubuntu 22.04, same SSL handshake error here.
Here is the fix that worked for me:
Download the yt-dlp file from:
https://github.com/yt-dlp/yt-dlp#release-files
The yt-dlp file is the first file in the recommended column of the listing on that page.
On your computer:
Allow your file manager to show invisible directories/files. Or navigate with the terminal.
Navigate to /home/<user>/.local/share/ClipGrab/ClipGrab/yt-dlp
Overwrite the existing file (which will be there, but empty) with the file you downloaded from the link above.
Open ClipGrab and the SSL error will be gone and the app will function.
I did a manual installation of Python 3.7.5 on Debian 8. When I run the script, I get this error:
<urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1076)>
I've seen several questions about this here on Stack Overflow, mostly regarding macOS; in my case the error is on Linux.
I had the same issue. Here is what I found helped my problem.
import ssl
# Globally disable certificate verification (insecure; use only as a stopgap)
ssl._create_default_https_context = ssl._create_unverified_context
Please see here for the original answer from markroxor. Hope it will help your problem as well.
I had
<urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1123)>
with python3 3.9.2-3 and other Python-related packages at the same or similar versions, on Debian GNU/Linux 11 (Bullseye).
At first, using the suggested
import ssl
ssl._create_default_https_context = ssl._create_unverified_context
managed to solve the problem, though I do not know what downsides or other side effects this solution has.
Afterwards, I noticed the /etc/ssl/certs/ folder was empty. Installing the ca-certificates package fills in this folder, which seems to be another solution, one in which those two Python ssl lines are not required. You can see here the detailed list of files the ca-certificates package installs. This article, from 2015, with a last comment from 2017, discusses the location of ca-certificates in various OSs/distributions. I think the ca-certificates package is fairly basic and is usually installed as part of the initial installation of the machine; I do not know how it came to be missing from that particular machine.
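On Debian, that second fix boils down to the standard commands below (a sketch; nothing here is specific to my machine):
sudo apt-get install --reinstall ca-certificates   # puts the bundle files in place
sudo update-ca-certificates                        # regenerates /etc/ssl/certs
ls /etc/ssl/certs | head                           # should no longer be empty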
What worked for me on macOS:
Open the finder
Find the version of Python that you are using
Open its folder
Click on the "Install Certificates.command". It will open a terminal and install the certificate.
In my case (Dell computer), the SSL problem was caused by Dell software itself, as reported here. In that case, according to this answer to another question on the SE network, you can solve the problem by running the following command:
sudo cp /usr/lib/x86_64-linux-gnu/libcrypto.so.3 /opt/dell/dcc/libcrypto.so.3
And then, run this:
sudo update-ca-certificates --fresh
It worked for me on a Dell Latitude 7310, LinuxMint21. November 2022.
THIS IS NOT A SOLUTION:
I have encountered this several times. Note, however, that I'm using Windows; I would assume the resolution methods are generally the same in principle for Mac/Linux.
What I used to do is force it not to verify the certificate, using the command below:
conda config --set ssl_verify false
Note this is not a solution to the issue; it's just a way to make the code run temporarily, or, if you're trying to download a library, it should do the trick until the download finishes. Disabling verification is not usually recommended, so if you do it, remember to turn it back on after running your code or downloading your library:
conda config --set ssl_verify true
If this happened after you installed a Python version manually, open the Python application folder and double-click the "Install Certificates.command" file; that should fix it.
I have a script that uses SCP to pull a file from a remote Linux host on AWS. After running the same code nightly for about 6 months without issue, it started failing today with protocol error: filename does not match request. I reproduced the issue on some simpler filenames below:
$ scp -i $IDENT $HOST_AND_DIR/"foobar" .
# the file is copied successfully
$ scp -i $IDENT $HOST_AND_DIR/"'foobar'" .
protocol error: filename does not match request
# used to work, i swear...
$ scp -i $IDENT $HOST_AND_DIR/"'foobarbaz'" .
scp: /home/user_redacted/foobarbaz: No such file or directory
# less surprising...
The reason for my single quotes was that I was grabbing a file with spaces in the name originally. To deal with the spaces, I had done $HOST_AND_DIR/"'foo bar'" for many months, but starting today, it would only accept $HOST_AND_DIR/"foo\ bar". So, my issue is fixed, but I'm still curious about what's going on.
I Googled the error message, but I don't see any real mentions of it, which surprises me.
Both hosts involved show OpenSSL 1.0.2g in the output of ssh -v localhost, and bash --version says GNU bash, version 4.3.48(1)-release (x86_64-pc-linux-gnu).
Any ideas?
I ended up having a look through the source code and found the commit where this error is thrown:
GitHub Commit
remote->local directory copies satisfy the wildcard specified by the
user.
This checking provides some protection against a malicious server
sending unexpected filenames, but it comes at a risk of rejecting
wanted files due to differences between client and server wildcard
expansion rules.
For this reason, this also adds a new -T flag to disable the check.
They added a new -T flag that disables this new check, so it is backwards compatible. However, I suppose we should look into why the filenames we're using are being flagged.
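To make that concrete, here is a sketch of the two options as they apply to the question's 'foo bar' example (paths and variables are the ones from the question, not something I've run verbatim):
# keep the new check, but escape the space so the client-side pattern matches what the server sends
scp -i "$IDENT" "$HOST_AND_DIR/foo\ bar" .
# or disable the strict filename check entirely with the new -T flag
scp -T -i "$IDENT" "$HOST_AND_DIR/'foo bar'" .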
In my case, I had [] characters in the filename that needed to be escaped using one of the options listed here. For example:
scp USERNAME@IP_ADDR:"/tmp/foo\[bar\].txt" /tmp
This is driving me nuts.
I'm setting up airflow in a cloud environment. I have one server running the scheduler and the webserver and one server as a celery worker, and I'm using airflow 1.8.0.
Running jobs works fine. What refuses to work is logging.
I've set up the correct path in airflow.cfg on both servers:
remote_base_log_folder = s3://my-bucket/airflow_logs/
remote_log_conn_id = s3_logging_conn
I've set up s3_logging_conn in the airflow UI, with the access key and the secret key as described here.
I checked the connection using
# run from an interactive Python session on both machines
import airflow.hooks
s3 = airflow.hooks.S3Hook('s3_logging_conn')
s3.load_string('test', 'test', bucket_name='my-bucket')
This works on both servers. So the connection is properly set up. Yet all I get whenever I run a task is
*** Log file isn't local.
*** Fetching here: http://*******
*** Failed to fetch log file from worker.
*** Reading remote logs...
Could not read logs from s3://my-bucket/airflow_logs/my-dag/my-task/2018-02-15T21:46:47.577537
I tried manually uploading a log following the expected conventions, and the webserver still can't pick it up, so the problem is on both ends. I'm at a loss as to what to do; everything I've read so far tells me this should be working. I'm close to just installing 1.9.0, which I hear changes logging, to see if I have more luck.
UPDATE: I made a clean install of Airflow 1.9 and followed the specific instructions here.
Webserver won't even start now with the following error:
airflow.exceptions.AirflowConfigException: section/key [core/remote_logging] not found in config
There is an explicit reference to this section in this config template.
So I tried removing that reference and just loading the S3 handler without checking first, and I got the following error message instead:
Unable to load the config, contains a configuration error.
Traceback (most recent call last):
File "/usr/lib64/python3.6/logging/config.py", line 384, in resolve:
self.importer(used)
ModuleNotFoundError: No module named
'airflow.utils.log.logging_mixin.RedirectStdHandler';
'airflow.utils.log.logging_mixin' is not a package
I get the feeling that this shouldn't be this hard.
Any help would be much appreciated, cheers
Solved:
upgraded to 1.9
ran the steps described in this comment
added
[core]
remote_logging = True
to airflow.cfg
ran
pip install --upgrade airflow[log]
Everything's working fine now.
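For anyone landing here, the relevant airflow.cfg entries from the steps above end up looking roughly like this (bucket name and connection id are the ones from the question; treat it as a sketch for 1.9, not a complete config):
[core]
remote_logging = True
remote_base_log_folder = s3://my-bucket/airflow_logs/
remote_log_conn_id = s3_logging_conn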