Unable to establish SSL connection when using wget to download GEDI data from LP DAAC data pool

I was using wget to download GEDI data from the LP DAAC data pool, but it always fails with "Unable to establish SSL connection". I tried wget both from the command prompt and from PyCharm, and added the "--no-check-certificate" option.
wget is the newest release (1.21.3, 64-bit).
OS: Windows 11.
From the messages below, I guess the connection to EarthData succeeds, because the server returns the data download link, which I can open manually in a browser to start the download. The error seems to happen in the last step, when wget accesses the returned link and starts downloading.
Returned messages:
--2022-08-14 09:51:09-- https://e4ftl01.cr.usgs.gov//GEDI_L1_L2/GEDI/GEDI01_B.002/2019.04.20/GEDI01_B_2019110092939_O01996_01_T03334_02_005_01_V002.h5
Resolving e4ftl01.cr.usgs.gov (e4ftl01.cr.usgs.gov)... 2001:49c8:4000:127d::133:130, 152.61.133.130
Connecting to e4ftl01.cr.usgs.gov (e4ftl01.cr.usgs.gov)|2001:49c8:4000:127d::133:130|:443... failed: Bad file descriptor.
Connecting to e4ftl01.cr.usgs.gov (e4ftl01.cr.usgs.gov)|152.61.133.130|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://urs.earthdata.nasa.gov/oauth/authorize?scope=uid&app_type=401&client_id=ijpRZvb9qeKCK5ctsn75Tg&response_type=code&redirect_uri=https%3A%2F%2Fe4ftl01.cr.usgs.gov%2Foauth&state=aHR0cHM6Ly9lNGZ0bDAxLmNyLnVzZ3MuZ292Ly9HRURJX0wxX0wyL0dFREkvR0VESTAxX0IuMDAyLzIwMTkuMDQuMjAvR0VESTAxX0JfMjAxOTExMDA5MjkzOV9PMDE5OTZfMDFfVDAzMzM0XzAyXzAwNV8wMV9WMDAyLmg1 [following]
--2022-08-14 09:51:55-- https://urs.earthdata.nasa.gov/oauth/authorize?scope=uid&app_type=401&client_id=ijpRZvb9qeKCK5ctsn75Tg&response_type=code&redirect_uri=https%3A%2F%2Fe4ftl01.cr.usgs.gov%2Foauth&state=aHR0cHM6Ly9lNGZ0bDAxLmNyLnVzZ3MuZ292Ly9HRURJX0wxX0wyL0dFREkvR0VESTAxX0IuMDAyLzIwMTkuMDQuMjAvR0VESTAxX0JfMjAxOTExMDA5MjkzOV9PMDE5OTZfMDFfVDAzMzM0XzAyXzAwNV8wMV9WMDAyLmg1
Resolving urs.earthdata.nasa.gov (urs.earthdata.nasa.gov)... 2001:4d0:241a:4081::89, 198.118.243.33
Connecting to urs.earthdata.nasa.gov (urs.earthdata.nasa.gov)|2001:4d0:241a:4081::89|:443... failed: Bad file descriptor.
Connecting to urs.earthdata.nasa.gov (urs.earthdata.nasa.gov)|198.118.243.33|:443... connected.
Unable to establish SSL connection.
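For reference, this is roughly the full command I run, following the usual LP DAAC wget recipe (MYUSER is a placeholder for my Earthdata login; the cookie options carry the session across the OAuth redirect shown above, and --inet4-only merely skips the IPv6 attempts that fail with "Bad file descriptor"):

wget --load-cookies .urs_cookies --save-cookies .urs_cookies --keep-session-cookies --user=MYUSER --ask-password --inet4-only --no-check-certificate "https://e4ftl01.cr.usgs.gov//GEDI_L1_L2/GEDI/GEDI01_B.002/2019.04.20/GEDI01_B_2019110092939_O01996_01_T03334_02_005_01_V002.h5"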

Related

https certificate issue in wget

What is the difference between the two HTTPS wget requests? One downloads fine while the other has a certificate issue, on the same machine.
-- not working
/home/bsoft>wget https://storage.googleapis.com/minikube/iso/minikube-v1.25.2.iso
--2022-04-29 01:16:42-- https://storage.googleapis.com/minikube/iso/minikube-v1.25.2.iso
Resolving storage.googleapis.com (storage.googleapis.com)... 216.58.221.48, 142.250.194.240, 142.250.206.112, ...
Connecting to storage.googleapis.com (storage.googleapis.com)|216.58.221.48|:443... connected.
ERROR: cannot verify storage.googleapis.com's certificate, issued by ‘emailAddress=certadmin@netskope.com,CN=ca.stlgs.goskope.com,OU=86e3620a322d5cba9f90e0eedfd92cdd,O=bsoft technology,L=Gurugram,ST=IN,C=IN’:
Self-signed certificate encountered.
To connect to storage.googleapis.com insecurely, use `--no-check-certificate'.
/home/bsoft>
-- working
/home/ravi>wget https://speed.hetzner.de/100MB.bin
--2022-04-29 09:57:31-- https://speed.hetzner.de/100MB.bin
Resolving speed.hetzner.de (speed.hetzner.de)... 88.198.248.254, 2a01:4f8:0:59ed::2
Connecting to speed.hetzner.de (speed.hetzner.de)|88.198.248.254|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 104857600 (100M) [application/octet-stream]
Saving to: ‘100MB.bin.2’
100MB.bin.2 100%[=================================================================================================================================================================>] 100.00M 783KB/s in 98s
2022-04-29 09:59:10 (1.02 MB/s) - ‘100MB.bin.2’ saved [104857600/104857600]
/home/ravi>
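The failing case is not really a wget problem: the certificate subject above shows a Netskope proxy CA (CN=ca.stlgs.goskope.com) re-signing storage.googleapis.com, i.e. the corporate network intercepts that TLS connection, while speed.hetzner.de is passed through untouched. Instead of --no-check-certificate, one option (a sketch, assuming the Netskope root CA can be exported; the .pem path is a placeholder) is to tell wget to trust that CA explicitly:

wget --ca-certificate=/path/to/netskope-root-ca.pem https://storage.googleapis.com/minikube/iso/minikube-v1.25.2.iso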

failed retrieving file from mirror.erickochen.nl

When I run pacman -Syu to update, it first shows no errors and I update everything normally. But when I run pacman -Syu again afterwards, it shows the following. What is the reason, and is there any solution?
:: Synchronizing package databases...
core is up to date
extra is up to date
community is up to date
error: failed retrieving file 'core.db' from mirror.erickochen.nl : Failed to connect to mirror.erickochen.nl port 443 after 5241 ms: Connection timed out
error: failed retrieving file 'extra.db' from mirror.erickochen.nl : Failed to connect to mirror.erickochen.nl port 443 after 5202 ms: Connection timed out
error: failed retrieving file 'community.db' from mirror.erickochen.nl : Failed to connect to mirror.erickochen.nl port 443 after 5202 ms: Connection timed out
warning: too many errors from mirror.erickochen.nl, skipping for the remainder of this transaction
:: Starting full system upgrade...
there is nothing to do
Sometimes mirrors go offline. It's recommended to have multiple mirrors so you don't have a single point of failure, and to keep your mirror list updated. Using reflector is recommended, since it also finds fast candidates based on your location.
For the time being, edit /etc/pacman.d/mirrorlist, uncomment a couple of mirrors, and then try updating again.
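For example (a sketch; adjust the country and limits to your location):

sudo reflector --protocol https --country Netherlands --latest 20 --sort rate --save /etc/pacman.d/mirrorlist
sudo pacman -Syyu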

Snowflake SSL peer certificate or SSH remote key was not OK

I am doing a data load via ODBC, extracting data from an on-prem database into files and then importing those into a Snowflake target table.
During that process I get the error below. What could be the cause of this error?
Exception calling "Open" with "0" argument(s): "ERROR [HY000] [Snowflake][Snowflake] (4)
REST request for URL <hiding Snowflake URL> failed: CURLerror (curl_easy_perform() failed) - code=60 msg='SSL peer certificate or SSH remote key was not OK' osCode=2 osMsg='No such file or directory'.
ERROR [HY000] [Snowflake][Snowflake] (4)
REST request for URL <hiding Snowflake URL> failed: CURLerror (curl_easy_perform() failed) - code=60 msg='SSL peer certificate or SSH remote key was not OK' osCode=2 osMsg='No such file or directory'.
"
At C:\temp\wsla3205x.ps1:254 char:5
+ $sfOdbc.Open()
Is there any more information required on this from my side?
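One way to narrow it down (a sketch; the account URL is a placeholder) is to reproduce the TLS handshake outside the ODBC driver. curl's code=60 means the peer certificate could not be verified, and the osMsg='No such file or directory' part may point at a CA bundle file the driver cannot find:

curl -v https://myaccount.snowflakecomputing.com 2>&1 | grep -i -e certificate -e ssl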

mbsync authentication failed

I was able to configure mbsync and mu4e to use my Gmail account (so far everything works fine). I am now in the process of using mu4e-context to control multiple accounts.
However, I cannot retrieve emails from my openmailbox account; I get this error:
Reading configuration file .mbsyncrc
Channel ombx
Opening master ombx-remote...
Resolving imap.ombx.io... ok
Connecting to imap.ombx.io (*.*.10*.16*:*9*)...
Opening slave ombx-local...
Connection is now encrypted
Logging in...
IMAP command 'LOGIN <user> <pass>' returned an error: NO [AUTHENTICATIONFAILED] Authentication failed.
In other posts I've seen people suggesting AuthMechs LOGIN or PLAIN, but mbsync doesn't recognize the command. Here is my .mbsyncrc file:
IMAPAccount openmailbox
Host imap.ombx.io
User user@openmailbox.org
UseIMAPS yes
# AuthMechs LOGIN
RequireSSl yes
PassCmd "echo ${PASSWORD:-$(gpg2 --no-tty -qd ~/.authinfo.gpg | sed -n 's,^machine imap.ombx.io .*password \\([^ ]*\\).*,\\1,p')}"
IMAPStore ombx-remote
Account openmailbox
MaildirStore ombx-local
Path ~/Mail/user@openmailbox.org/
Inbox ~/Mail/user@openmailbox.org/Inbox/
Channel ombx
Master :ombx-remote:
Slave :ombx-local:
# Exclude everything under the internal [Gmail] folder, except the interesting folders
Patterns *
Create Slave
Expunge Both
Sync All
SyncState *
I am using Linux Mint, and my isync version is 1.1.2.
Thanks in advance for any help.
EDIT: I have run it with a debug option and upgraded isync to version 1.2.1.
This is what the debug run returned:
Reading configuration file .mbsyncrc
Channel ombx
Opening master store ombx-remote...
Resolving imap.ombx.io... ok
Connecting to imap.ombx.io (*.*.10*.16*:*9*)...
Opening slave store ombx-local...
pattern '*' (effective '*'): Path, no INBOX
got mailbox list from slave:
Connection is now encrypted
* OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE IDLE AUTH=PLAIN AUTH=LOGIN] Openmailbox is ready to
handle your requests.
Logging in...
Authenticating with SASL mechanism PLAIN...
>>> 1 AUTHENTICATE PLAIN <authdata>
1 NO [AUTHENTICATIONFAILED] Authentication failed.
IMAP command 'AUTHENTICATE PLAIN <authdata>' returned an error: NO [AUTHENTICATIONFAILED] Authentication failed.
My .mbsyncrc file now contains these options instead:
SSLType IMAPS
SSLVersions TLSv1.2
AuthMechs PLAIN
In the end, the solution was to use the correct password: openmailbox uses an application password for third-party e-mail clients, and I was using the original account password instead of the application password.
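For anyone hitting a similar NO [AUTHENTICATIONFAILED], the credentials can be checked outside mbsync (a sketch; the login and application password are placeholders, and -crlf makes openssl send the CRLF line endings IMAP expects):

openssl s_client -connect imap.ombx.io:993 -crlf -quiet
a LOGIN user@openmailbox.org my-app-password
(the server replies "a OK" once the password is accepted)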

Puppet agent is not running successfully after updating ssl certs

I am running Puppet 3.7. The certs are expiring, so I have updated them (after creating a backup, so I can get back to the original state, and that's fine). After updating the certs on the puppetmaster using this, updating the certs on the agent using this, and updating the certs on puppetdb using this, I am unable to run the puppet agent successfully on a client box. It gives me the following error:
root@ip-10-181-36:/var/lib/puppet# sudo puppet agent -t
Warning: Setting templatedir is deprecated. See http://links.puppetlabs.com/env-settings-deprecations
(at /usr/lib/ruby/vendor_ruby/puppet/settings.rb:1139:in 'issue_deprecation_warning')
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Loading facts
Warning: Unable to fetch my node definition, but the agent run will continue:
Warning: Error 403 on SERVER: Forbidden request: newer-generic-host(127.0.0.1) access to /node/ip-10-181-36 [find] authenticated at :39
Error: Could not retrieve catalog from remote server: Error 403 on SERVER: Forbidden request: newer-generic-host(127.0.0.1) access to /catalog/ip-10-181-36 [find] authenticated at :1
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run
Error: Could not send report: Error 403 on SERVER: Forbidden request: newer-generic-host(127.0.0.1) access to /report/ip-10-181-36 [save] authenticated at :91
I am stuck at this point, and no amount of googling, reading docs, or checking the logs is helping. Does anyone have any ideas?
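One thing I have not tried yet is regenerating the agent's certificate from scratch (a sketch of the standard Puppet 3.x procedure as I understand it; the certname matches the hostname in the logs above):

# on the master: remove the agent's old certificate
sudo puppet cert clean ip-10-181-36
# on the agent: wipe the old SSL state and request a new certificate
sudo rm -r /var/lib/puppet/ssl
sudo puppet agent -t --waitforcert 60
# back on the master: sign the new request
sudo puppet cert sign ip-10-181-36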