failed retrieving file from mirror.erickochen.nl - archlinux

When I run pacman -Syu to update, it shows no errors at first and I update everything as usual. But when I run pacman -Syu again afterwards, it shows the following. What is the reason, and is there a solution?
:: Synchronizing package databases...
core is up to date
extra is up to date
community is up to date
error: failed retrieving file 'core.db' from mirror.erickochen.nl : Failed to connect to mirror.erickochen.nl port 443 after 5241 ms: Connection timed out
error: failed retrieving file 'extra.db' from mirror.erickochen.nl : Failed to connect to mirror.erickochen.nl port 443 after 5202 ms: Connection timed out
error: failed retrieving file 'community.db' from mirror.erickochen.nl : Failed to connect to mirror.erickochen.nl port 443 after 5202 ms: Connection timed out
warning: too many errors from mirror.erickochen.nl, skipping for the remainder of this transaction
:: Starting full system upgrade...
there is nothing to do

Sometimes mirrors go offline. It's recommended to have multiple mirrors so you don't have a single point of failure, and to keep your mirror list up to date. Using reflector is recommended, since it also finds fast candidates based on your location.
For the time being, edit /etc/pacman.d/mirrorlist, uncomment a couple of additional mirrors, and try updating again.
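A minimal sketch of the reflector approach (assuming reflector is installed and you want recently synced HTTPS mirrors sorted by download rate; adjust the options to your needs):
# Back up the current list, then regenerate it from recently synced HTTPS mirrors
sudo cp /etc/pacman.d/mirrorlist /etc/pacman.d/mirrorlist.bak
sudo reflector --protocol https --latest 20 --sort rate --save /etc/pacman.d/mirrorlist
# Force-refresh the package databases from the new mirrors
sudo pacman -Syyu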

Related

Unable to establish SSL connection when using wget to download GEDI data from LP DAAC data pool

I was using wget to download GEDI data from the LP DAAC data pool. It always returns the error "unable to establish SSL connection". I tried wget from the command prompt and from PyCharm, and added the "--no-check-certificate" option.
The wget is the newest release (1.21.3, 64-bit).
OS: Windows 11.
From the following messages, I guess the connection to EarthData is successful, because it returns the download link, which I can open manually in the browser and then start downloading. The error seems to happen in the last step, when wget accesses the returned link and starts downloading.
Returned messages:
--2022-08-14 09:51:09-- https://e4ftl01.cr.usgs.gov//GEDI_L1_L2/GEDI/GEDI01_B.002/2019.04.20/GEDI01_B_2019110092939_O01996_01_T03334_02_005_01_V002.h5
Resolving e4ftl01.cr.usgs.gov (e4ftl01.cr.usgs.gov)... 2001:49c8:4000:127d::133:130, 152.61.133.130
Connecting to e4ftl01.cr.usgs.gov (e4ftl01.cr.usgs.gov)|2001:49c8:4000:127d::133:130|:443... failed: Bad file descriptor.
Connecting to e4ftl01.cr.usgs.gov (e4ftl01.cr.usgs.gov)|152.61.133.130|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://urs.earthdata.nasa.gov/oauth/authorize?scope=uid&app_type=401&client_id=ijpRZvb9qeKCK5ctsn75Tg&response_type=code&redirect_uri=https%3A%2F%2Fe4ftl01.cr.usgs.gov%2Foauth&state=aHR0cHM6Ly9lNGZ0bDAxLmNyLnVzZ3MuZ292Ly9HRURJX0wxX0wyL0dFREkvR0VESTAxX0IuMDAyLzIwMTkuMDQuMjAvR0VESTAxX0JfMjAxOTExMDA5MjkzOV9PMDE5OTZfMDFfVDAzMzM0XzAyXzAwNV8wMV9WMDAyLmg1 [following]
--2022-08-14 09:51:55-- https://urs.earthdata.nasa.gov/oauth/authorize?scope=uid&app_type=401&client_id=ijpRZvb9qeKCK5ctsn75Tg&response_type=code&redirect_uri=https%3A%2F%2Fe4ftl01.cr.usgs.gov%2Foauth&state=aHR0cHM6Ly9lNGZ0bDAxLmNyLnVzZ3MuZ292Ly9HRURJX0wxX0wyL0dFREkvR0VESTAxX0IuMDAyLzIwMTkuMDQuMjAvR0VESTAxX0JfMjAxOTExMDA5MjkzOV9PMDE5OTZfMDFfVDAzMzM0XzAyXzAwNV8wMV9WMDAyLmg1
Resolving urs.earthdata.nasa.gov (urs.earthdata.nasa.gov)... 2001:4d0:241a:4081::89, 198.118.243.33
Connecting to urs.earthdata.nasa.gov (urs.earthdata.nasa.gov)|2001:4d0:241a:4081::89|:443... failed: Bad file descriptor.
Connecting to urs.earthdata.nasa.gov (urs.earthdata.nasa.gov)|198.118.243.33|:443... connected.
Unable to establish SSL connection.
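Not from the original post, but for reference: Earthdata/LP DAAC downloads with wget usually rely on a .netrc file plus session cookies. A rough sketch of that pattern (the credentials and the cookies.txt path are placeholders; on Windows the home-directory location of .netrc can vary by wget build):
# ~/.netrc (readable only by you), one line:
# machine urs.earthdata.nasa.gov login YOUR_USERNAME password YOUR_PASSWORD
wget --auth-no-challenge --keep-session-cookies \
     --save-cookies cookies.txt --load-cookies cookies.txt \
     "https://e4ftl01.cr.usgs.gov//GEDI_L1_L2/GEDI/GEDI01_B.002/2019.04.20/GEDI01_B_2019110092939_O01996_01_T03334_02_005_01_V002.h5"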

Found error 'CRASH REPORT Process' on RabbitMQ in every 10 mins

I'm finding an error in RabbitMQ every 10 minutes. Please help me investigate this problem.
Error message:
2021-09-09 13:25:30.084 [error] <0.14464.32> CRASH REPORT Process <0.14464.32> with 0 neighbours crashed with reason: no function clause matching rabbit_mgmt_wm_node:find_type(rabbit@controller1, []) line 79
2021-09-09 13:25:30.085 [error] <0.14457.32> Ranch listener rabbit_web_dispatch_sup_15672, connection process <0.14457.32>, stream 1 had its request process <0.14464.32> exit with reason function_clause and stacktrace [{rabbit_mgmt_wm_no
I had the same issue with Zabbix monitoring the RabbitMQ server every minute, which generated a crash error with the same frequency.
The URL used by Zabbix for monitoring contained a domain part in the node name, i.e. rabbit@my_host.zzz.aws instead of the actual node name as displayed by the console: rabbit@my_host. This explains why rabbit_mgmt_wm_node:find_type failed and crashed.
This was verified using curl as shown below:
curl -v -u user:passwd 'http://127.0.0.1:15672/api/nodes/rabbit@my_host?memory=true'
which returned a valid response, HTTP/1.1 200 OK, when the node name matched and the crash/error when it did not.
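A quick way to confirm the exact node name the management API expects (a sketch; it assumes the management plugin on the default port 15672 and that jq is available):
# List the node names known to the management API
curl -s -u user:passwd 'http://127.0.0.1:15672/api/nodes' | jq -r '.[].name'
# Or ask the broker directly; the first line reports the node name
rabbitmqctl status | head -n 1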
Please refer to this thread:
https://groups.google.com/g/rabbitmq-users/c/N0EgrLn55XQ

Error while running query on Impala with Superset

I'm trying to connect Impala to Superset. When I test the connection it prints "Seems OK!", and when I browse the databases on Impala with the SQL Editor on the left side, it shows all the databases without problems.
Preview of Databases/Tables
But when I write a query and click on "Run Query", it gives the error: "Could not start SASL: b'Error in sasl_client_start (-1) SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure. Minor code may provide more information (Ticket expired)'"
Error running query
I'm running Superset with SSL and in production mode (with Gunicorn), and Impala with SSL in a Kerberized Hadoop cluster. My Impala database config is:
Impala Config
And in the extras I put:
{
  "metadata_params": {},
  "engine_params": {
    "connect_args": {
      "port": 21050,
      "use_ssl": "True",
      "ca_cert": "path/to/my/ca_cert.pem",
      "auth_mechanism": "GSSAPI"
    }
  },
  "metadata_cache_timeout": {},
  "schemas_allowed_for_csv_upload": []
}
How can I solve this error? In my superset log it only shows:
Triggering query_id: 65
INFO:superset.views.core:Triggering query_id: 65
Query 65: Running query on a Celery worker
INFO:superset.views.core:Query 65: Running query on a Celery worker
Versions: Superset 0.36.0, Impyla 0.16.2
I was able to fix this error by doing these steps:
1 - Created a service user for the celery worker, created a Kerberos ticket for it, and set up a crontab to renew the ticket (a sketch is shown after this list).
2 - Ran the celery worker as this service user instead of running it as root.
3 - Killed a celery worker that was running on another machine of my cluster.
4 - Restarted Impala and Superset.
I think this error occurred because some queries, instead of using the celery worker on my Superset machine, used the celery worker on another machine that did not have a valid Kerberos ticket. I could figure this out because, while reading the celery worker log, it showed that a connection to the celery worker on the other machine failed while a query was running.
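A minimal sketch of the ticket-renewal part, assuming the service user has a keytab (the keytab path, principal, and renewal schedule below are placeholders, not values from the original setup):
# Obtain a ticket from the keytab and verify it
kinit -kt /etc/security/keytabs/superset.keytab superset/host.example.com@EXAMPLE.COM
klist
# Crontab entry for the service user: renew the ticket every 8 hours
0 */8 * * * kinit -kt /etc/security/keytabs/superset.keytab superset/host.example.com@EXAMPLE.COM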

How to fix etcd cluster "error "tls: first record does not look like a TLS handshake""

I created a three-node etcd cluster; the configuration and startup are already OK, but when I check /var/log/messages, it shows
etcd: rejected connection from "172.17.0.3:43192" (error "tls: first record does not look like a TLS handshake", ServerName "")
How can I fix it?
I have checked the health of etcd:
member 48b0dff99d5c867e is healthy: got healthy result from https://172.17.0.9:2379
member 646dab89331aabab is healthy: got healthy result from https://172.17.0.8:2379
member b45603216bfac234 is healthy: got healthy result from https://172.17.0.10:2379
That looks OK, but when I cat /var/log/messages, it always shows this error:
Jan 12 20:08:57 master etcd: rejected connection from "172.17.0.3:43160" (error "tls: first record does not look like a TLS handshake", ServerName "")
Jan 12 20:08:57 master etcd: rejected connection from "172.17.0.3:43162" (error "tls: oversized record received with length 21536", ServerName "")
I got this message on the etcd peer communication when switching it from HTTP to HTTPS. Apparently etcd has persistent peer information that overrides the command-line options, so it continued to use HTTP for peer communication in spite of the command-line options.
In the end, since this was a test cluster, I nuked /var/lib/etcd and the new CLI configuration took hold.
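For reference, a sketch of a health check that talks to the cluster explicitly over TLS (the endpoints match the output above, but the certificate paths are placeholders; with the v2 API, which the cluster-health output above suggests, the equivalent flags are --ca-file/--cert-file/--key-file and the command is cluster-health):
ETCDCTL_API=3 etcdctl \
  --endpoints=https://172.17.0.9:2379,https://172.17.0.8:2379,https://172.17.0.10:2379 \
  --cacert=/etc/etcd/ssl/ca.pem --cert=/etc/etcd/ssl/client.pem --key=/etc/etcd/ssl/client-key.pem \
  endpoint health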
I don't have a solution that will fully resolve your issue, but I've found a couple of links that might help you in further investigation. Read them carefully, try the solutions, and I hope you will resolve the problem.
Github question #9917: check the ETCDCTL_API variable, and especially make sure --endpoints is configured with https.
Runtime reconfiguration: try to reconfigure your etcd cluster by updating/removing/adding etcd members.
nginx ingress: check your nginx ingress annotations in case you are using nginx.
Google Groups TLS handshake topic: check this topic, especially the comments related to the VAULT_ADDR variable. I will copy-paste the last comment from the thread here:
We were able to get everything to work, after understanding the permission issues.
You asked: "Please confirm if you are seeing server error messages before initializing Vault." Upon further examination, I did determine that the errors were not happening before initializing the Vault.
The problem ended up not being related to VAULT_ADDR, and we used the value: "http://127.0.0.1:8200".
I have the setup operation scripted, and it appears that not everything was being run with the proper permissions. At first I was running the scripts using the "sudo" command, which resulted in the failures. I discovered that the permissions for the certificate key were restricted and the file could not be accessed by my user. There may have been other permission issues as well. But once I switched user to root and ran the script, everything behaved correctly.
Thanks

Puppet agent is not running successfully after updating ssl certs

I am running Puppet 3.7. The certs are expiring, so I have updated them (after creating a backup, so I am able to get back to the original state; that's fine). After updating the certs on the puppetmaster using this, updating the certs on the agent using this, and updating the certs on puppetdb using this, I am unable to run the puppet agent successfully on a client box. It gives me the following error:
root@ip-10-181-36:/var/lib/puppet# sudo puppet agent -t
Warning: Setting templatedir is deprecated. See http://links.puppetlabs.com/env-settings-deprecations
(at /usr/lib/ruby/vendor_ruby/puppet/settings.rb:1139:in 'issue_deprecation_warning')
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Loading facts
Warning: Unable to fetch my node definition, but the agent run will continue:
Warning: Error 403 on SERVER: Forbidden request: newer-generic-host(127.0.0.1) access to /node/ip-10-181-36 [find] authenticated at :39
Error: Could not retrieve catalog from remote server: Error 403 on SERVER: Forbidden request: newer-generic-host(127.0.0.1) access to /catalog/ip-10-181-36 [find] authenticated at :1
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run
Error: Could not send report: Error 403 on SERVER: Forbidden request: newer-generic-host(127.0.0.1) access to /report/ip-10-181-36 [save] authenticated at :91
I am stuck at this point, and no amount of googling, reading the docs, or checking the logs is helping. Does anyone have any ideas?
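Not a full answer, but a sketch of checks that are sometimes useful when a 403 like this appears after rotating certs (the paths assume the default Puppet 3.x ssldir, and the master hostname is a placeholder):
# On the agent: confirm which certname, ssldir and master are actually in use
puppet config print certname ssldir server
# Check that the agent trusts the master's new CA and inspect the cert the master presents
openssl s_client -connect puppetmaster.example.com:8140 \
  -CAfile /var/lib/puppet/ssl/certs/ca.pem < /dev/null
# On the master: verify the client's certname is still among the signed certs
puppet cert list --all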