centos7 wget over ssl : Unable to establish SSL connection - ssl

I have tried upgrading wget and it is now the latest version, but the problem still exists. How can I solve this?
PS: the problem appears on the PC that was set up with a minimal installation and had GNOME installed afterwards; another PC, on which GNOME was selected directly during installation, is fine.
wget gives:
--no-check-certificate does not work either.
openssl s_client shows:

According to the output, you connect to 172.31.106.79 when trying to reach www.facebook.com. This is not an IP address for Facebook but an address reserved for use in internal networks. It looks like you have a DNS server that returns the wrong IP address for www.facebook.com. This might be a captive portal, an attack, or a firewall blocking Facebook, or something similar.
To fix the problem, make sure you are using a network which is not affected by this interception.
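A quick way to confirm that DNS is the culprit is to compare the answer from the system resolver with one from a public resolver (a sketch; 8.8.8.8 is just an example of a public DNS server):
# what the system resolver returns
getent hosts www.facebook.com
# what a public resolver returns
dig +short www.facebook.com @8.8.8.8
# if the first answer is a 172.16.0.0/12 address and the second is not, the local DNS is rewriting it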

Use this command and then try again:
sudo yum update wget

Related

Compute Engine - GCP Click to Deploy solution crashed

I am using Google Cloud deployment manager - Wordpress click to deploy solution.
I installed a certificate through the virtual machine SSH on the Compute Engine page using Certbot. Immediately after I installed the certificate, the page started showing
"ssl_error_bad_cert_domain" and didn't open.
I went back to the SSH session and deleted the certificate with the command sudo certbot delete. Since that didn't solve the error, I tried turning the VM off and on and also restarting it, which didn't resolve the issue either.
I could see in the Logs Explorer that there was an error coming from the VM saying: Invalid ssh key entry - expired key:[expired_key], so I requested a new one through the VM SSH using:
ssh-keygen -t rsa -f ~/.ssh/gcloud_instance1 -C username, printed the contents with cd ~/.ssh && cat gcloud_instance1.pub, and then added that to the VM ssh-keys text area. That stopped the errors in the Logs Explorer but didn't solve the issue, since the WordPress deployment still doesn't open.
Another thing to add: when the VM was turned off and on, the IP address changed; I am not sure whether that is also contributing to the crash.
This is how the page currently looks: webpage failing
This is the logs explorer : https://docs.google.com/spreadsheets/d/1cKXWkfaFbmUFakomwM_-TtoSelxWKBDxNBK7fnMy_PA/edit?usp=sharing
Any ideas what could be happening?
Thanks
The "ssl_error_bad_cert_domain" error can happen when the SSL certificate is not properly installed or is not issued for the correct domain.
My guess is that you had a typo in the domain name, or you need a static IP.
You can use the sudo certbot certificates command to verify the installation.
It is recommended to have a static IP for better stability, especially if you are using SSL certificates.
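As a minimal sketch of that check (example.com stands in for the real domain, and the --apache plugin is an assumption about how the certificate was originally issued):
# list the certificates certbot manages and the domains they cover
sudo certbot certificates
# if the domain is wrong or missing, request a certificate for the correct name
sudo certbot --apache -d example.com -d www.example.com
Reserving a static external IP for the VM in the GCP console also keeps the DNS record from going stale when the instance is stopped and started.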

Influxdb over SSL connection

I'm a little confused about HTTPS communication with InfluxDB. I am running an InfluxDB 1.8 instance on a virtual machine with a public IP. Apache2 is also installed, but for now I am not willing to use it as a web server to serve pages to clients; I want to use the machine as a database server for InfluxDB.
I obtained a valid certificate from Let's Encrypt, indeed the welcome page https://datavm.bo.cnr.it works properly over encrypted connection.
Then I followed all the instructions in the docs to enable HTTPS: I put the fullchain.pem file in the /etc/ssl directory, set the file permissions (not sure about the meaning of this step, though), and edited influxdb.conf with https-enabled = true and the paths for https-certificate and https-private-key (fullchain.pem for both; is that right?). Then systemctl restart influxdb. When I run influx -ssl -host datavm.bo.cnr.it I get the following:
Failed to connect to https://datavm.bo.cnr.it:8086: Get https://datavm.bo.cnr.it:8086/ping: http: server gave HTTP response to HTTPS client
Please check your connection settings and ensure 'influxd' is running.
Any help in understanding what I am doing wrong is very appreciated! Thank you
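For reference, the settings described above live in the [http] section of influxdb.conf; a minimal sketch (the file names follow the question and the usual Let's Encrypt layout, so treat them as assumptions):
[http]
  https-enabled = true
  https-certificate = "/etc/ssl/fullchain.pem"
  # the private key is a separate file; Let's Encrypt's fullchain.pem does not contain it
  https-private-key = "/etc/ssl/privkey.pem"
After systemctl restart influxdb, the CLI client is invoked as influx -ssl -host datavm.bo.cnr.it.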
I figured out at least part of the problem. It was related to permissions on the *.pem files. This looks weird, because if I type the following, as the documentation says, it does not connect.
sudo chmod 644 /etc/ssl/<CA-certificate-file>
sudo chmod 600 /etc/ssl/<private-key-file>
If, instead, I run the second line with 644, everything works perfectly. But this way I'm giving anyone permission to read the private key! I'm not able to figure this part out.
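A common way around the world-readable key (a sketch, assuming the packaged service runs as the influxdb user, which is the default) is to give that user ownership of the file instead of loosening its mode:
sudo chown influxdb:influxdb /etc/ssl/<private-key-file>
sudo chmod 600 /etc/ssl/<private-key-file>
sudo systemctl restart influxdb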
UPDATE
If I put symlinks inside /etc/ssl/ that point to the .pem files living inside /etc/letsencrypt/live/hostname, the connection is refused. Only if I put a copy of the files does the SSL connection start.
The reason I want to put the links inside /etc/ssl/ is the automatic renewal of the certificates.
Can anyone help?
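One possible workaround, assuming certbot handles the renewals: the symlinks probably fail because they point into /etc/letsencrypt/live and /etc/letsencrypt/archive, which are readable only by root, so the influxdb user cannot follow them. A deploy hook that copies the renewed files into /etc/ssl after every renewal keeps the automation without the symlinks (a sketch; the script name is arbitrary):
#!/bin/sh
# e.g. /etc/letsencrypt/renewal-hooks/deploy/influxdb-certs.sh (any executable name works)
cp /etc/letsencrypt/live/datavm.bo.cnr.it/fullchain.pem /etc/ssl/fullchain.pem
cp /etc/letsencrypt/live/datavm.bo.cnr.it/privkey.pem /etc/ssl/privkey.pem
chown influxdb:influxdb /etc/ssl/fullchain.pem /etc/ssl/privkey.pem
chmod 600 /etc/ssl/privkey.pem
systemctl restart influxdb
Make the script executable with chmod +x; a reasonably recent certbot runs everything in that deploy directory after each successful renewal.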

Why can't I see https webpage after using sudo certbot --apache

I've just done a fresh install of Ubuntu 20.04 and followed the Digital Ocean instructions to get my apache server up and running:
https://www.digitalocean.com/community/tutorials/how-to-install-linux-apache-mysql-php-lamp-stack-on-ubuntu-20-04
That worked fine for HTTP traffic. Then I used the Digital Ocean instructions (which I already knew, but followed anyway) to set up SSL (https) access:
https://www.digitalocean.com/community/tutorials/how-to-secure-apache-with-let-s-encrypt-on-ubuntu-20-04
I selected the option to redirect all traffic to https. I opened my firewall using sudo ufw allow 'Apache Full'.
But I am unable to see my sites - the browsers just time out. I have tried disabling ufw just to see, and nope, nothing.
SSL Labs just gives me an "Assessment failed: Unable to connect to the server" error.
I also ran https://check-your-website.server-daten.de/?q=juglugs.com
and it timed out.
I have deleted the letsencrypt setup and run through it again three times with the same result, and now I'm stuck...
Everything I've searched points to a firewall error, but as I've said, I've disabled that and have the same result. The router settings have not been changed since I did my fresh Ubuntu install.
Any help gratefully received.
Thanks in advance.
on8tom answered this one for me - in setting up the new build of Ubuntu, the local IP address of the Apache server had changed, and my Virgin Media Hub only had port 443 forwarded to the old IP address.
Many thanks for pointing me at that (but I should have checked it before posting this - kicking myself!)
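For anyone hitting the same symptom, a quick sanity check from the server itself (a sketch; it only confirms that Apache is serving 443 locally and shows the LAN address the router must forward to):
# is Apache listening on 443?
sudo ss -tlnp | grep ':443'
# does HTTPS answer locally? (-k skips certificate checks for this test)
curl -kI https://localhost/
# current LAN address - compare it with the router's port-forwarding rule
ip -4 addr show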

Unable to register host while creating Apache Ambari cluster

I am trying to create a localhost Apache Ambari cluster on CentOS 7. I am using Ambari 2.2.2 binaries downloaded and installed from the Ambari repository with the following commands:
cd /etc/yum.repos.d/
wget http://public-repo-1.hortonworks.com/ambari/centos7/2.x/updates/2.2.2.0/ambari.repo
yum install ambari-server
ambari-server setup
ambari-server start
Before starting the server I completed all the necessary preparation steps described in the Hortonworks documentation, including the setup of passwordless SSH, which is a frequent cause of problems according to posts found on the internet. I verified it with
ssh root@localhost
During the creation of the cluster, in the "Install options" window, I enter the name of the host I want to create (localhost in my case). I have already tried both of the options, which are:
providing the RSA private key directly - in this case the next window simply gets stuck in the "Installing" stage and does not go any further, showing no errors;
performing manual registration of hosts.
For the second option I have downloaded and installed ambari-agent
yum install ambari-agent
ambari-agent start
In the case of manual host registration I get the following error:
"Host checks were skipped on 1 hosts that failed to register.".
When I click on "Failed", which in some cases described over the internet is supposed to deliver more precise description of a problem I see the following
"Registering with the server...
Registration with the server failed."
As a result, I don't even know where to start searching for the possible reasons for this error.
Ambari cluster nodes need to be configured with a Fully Qualified Domain Name (FQDN). localhost is not an FQDN. You will need to configure the node with an FQDN and then retry the installation. You could use something like: localhost.local which is an FQDN. This requirement and how to configure the node to meet it are documented in the pre-requirements. From the HDP documentation:
All hosts in your system must be configured for both forward and reverse DNS.
If you are unable to configure DNS in this way, you should edit the /etc/hosts file on every host in your cluster to contain the IP address and Fully Qualified Domain Name of each of your hosts.
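As a sketch (the name and address below are only examples), giving the node an FQDN and making it resolvable without DNS could look like this:
sudo hostnamectl set-hostname ambari.example.local
# add a matching line to /etc/hosts, e.g.:
#   192.168.1.10   ambari.example.local   ambari
# then check what the agents will report
hostname -f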
I had the same "Registering with the server... Registration with the server failed." problem just recently.
I found a response on the same topic recommending to take a look at the log file located at /var/log/ambari-agent/ambari-agent.log. From there I was able to see that the hostname had been set up incorrectly during some other phase of the installation (I had something like ambari.hadoop instead of localhost). So I went to /etc/ambari-agent/conf/ambari-agent.ini and fixed it there.
I know I'm digging up quite an old question, but it seems that compiling all of this in one place might help someone with the same problem.
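For reference, the setting in question sits in the [server] section of /etc/ambari-agent/conf/ambari-agent.ini (the value shown is just the one from this setup; it must match the hostname the Ambari server is actually reachable at):
[server]
hostname=localhost
After editing the file, restart the agent with ambari-agent restart so it re-registers.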

SSL verification behind McAfee Proxy on LAMP VM

I've been trying off and on to get a LAMP development server operational behind my corporate firewall (McAfee Web Gateway). I have an Ubuntu/Trusty64 image on a VirtualBox VM provisioned through Vagrant. I cannot get "some" {most} repositories to load for a proper sudo apt-get update: I'm getting a 401 authentication required error on all 'security.ubuntu.com trusty-security/*' and 'archive.ubuntu.com trusty/*' sources, and they all fail to fetch. Therefore almost every sudo apt-get install {whatever} fails, and I cannot add the PPA repository needed to install the LAMP environment I want.
I can turn off SSL verification for some things and can get many things installed - but I need SSL working correctly within this environment.
Digging deeper, I find that if I curl -v https://url.com:443, I get the
curl(60): ssl certificate error: unable to get local issuer certificate.
I have the generic 'ca-bundle.crt' bundle installed locally in /usr/local/share/ca-certificates/ and ran sudo update-ca-certificates, which seemed to update ca-certificates.crt in /etc/ssl/certs/.
I ran strace -o stracker.out curl -v https://url.com:443 and searched for the failing stat() as suggested here by No-Bugs_Hare, and found that curl was looking for 'c099e901.0' in /etc/ssl/certs/ and it isn't there. Googling that particular hex ID is no joy, and I am stuck at this step.
Next I tried strace -o traceOppenSSL.out openssl s_client -connect url.com:443 to see if I can get more detail but can't see what causes the
verify error:num=20:unable to get local issuer certificate
followed by two other errors (I'm sure all related to the first one); it then displays the "Server Certificate" within a BEGIN / END block, followed by a bunch of other metadata. The entire session ends with
Verify return code: 21 (unable to verify the first certificate).
So, this is not my forte, and I'm doing what I can to get this VM operational. As I said earlier, I've been trying many things and understand that most of the issue is the fact that I'm behind a McAfee firewall within my corporate network. I don't know how to troubleshoot beyond what I've explained above, but I'm willing to dig deeper.
I have a few questions. Why is curl looking for that particular hex ID, and where would I find or generate the beast? Are there other troubleshooting steps I should try? The VM is a server-class Ubuntu install, so I only have an SSH CLI terminal and no window-manager GUI to work with.
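Regarding the hex ID: OpenSSL looks up CA certificates in /etc/ssl/certs by a hash of the certificate subject, so c099e901.0 is simply the subject hash of whichever issuer the McAfee gateway presented when it re-signed the connection. A sketch of how to check and fix it, assuming you can export the gateway's root CA as mcafee-root.crt (a hypothetical file name):
# which hash file would this CA map to? (should print c099e901 if it is the missing issuer)
openssl x509 -noout -subject_hash -in mcafee-root.crt
# install it where update-ca-certificates picks it up (the file must end in .crt)
sudo cp mcafee-root.crt /usr/local/share/ca-certificates/
sudo update-ca-certificates
# this regenerates the <hash>.0 symlinks in /etc/ssl/certs, after which curl should verify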