IP-based virtual hosts are not working properly after upgrading Mozilla NSS - Apache

We are using NSS as the SSL engine in our Apache server. We recently applied the latest SUSE Linux Enterprise Server patches to an Apache server hosting two IP-based virtual hosts. After the upgrade, the first virtual host works fine but the second one does not.
The error log shows "Hostname vhost1.xxyyzz.com provided via SNI and hostname vhost2.xxyyzz.com provided via HTTP are different" when accessing vhost2.xxyyzz.com.
If we switch back to mod_ssl, the issue goes away, so it is clearly related to the following patches. Any help would be appreciated.
mozilla-nss 3.16.4-0.8.1
mozilla-nss-tools 3.16.4-0.8.1
apache2-mod_nss 1.0.8-0.4.9.1

Check your /etc/hosts file to see whether you might be assigning the domain name to a local internal IP address or interface.
This caused the same error message for me, along with many 400 errors.
After changing /etc/hosts, don't forget to restart the name service cache daemon (service nscd restart).
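As an illustration (the hostname is the one from the question; the entry itself is hypothetical), a stale mapping like this in /etc/hosts:
127.0.0.1   vhost2.xxyyzz.com
can send requests for vhost2.xxyyzz.com to the wrong address or interface and produce exactly this kind of SNI/Host mismatch. Remove or correct the entry, then restart nscd as described above.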

SNI isn't technically fully supported in that version of mod_nss, but support has since been added: https://www.suse.com/support/update/announcement/2015/suse-ru-20150591-1.html
I saw the same error, and it went away after applying the referenced update.
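To confirm which builds are actually installed before and after applying the update, the package versions can be queried with rpm (package names as listed in the question):
rpm -q apache2-mod_nss mozilla-nss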

Related

Mamp-Pro SSL for local virtualhost

I've seen a lot of similar questions but none of the answers helped me (and there's one addition I didn't see anywhere).
So, I'm using MAMP PRO 6.0.1 for local testing. I have a domain set up (www.mydomain.lo), enabled SSL, and used a self-signed certificate I created with the button in MAMP.
I added the cert to my keychain (I'm on a Mac) and set it to "always trust" in the keychain info.
But when I try to access the local page with https://www.mydomain.lo, I get an error saying:
There was an error connecting to … SSL received an entry which exceeds the max allowed length. Error-Code: SSL_ERROR_RX_RECORD_TOO_LONG
(this is loosely translated from German).
The page works over http://, but I'd like to test the SSL version, too.
Any ideas?
I was able to partly solve this riddle.
SSL just doesn't work on local hosts when the standard port (443) is used, but it does work when the "default MAMP ports" are used.
In MAMP PRO, go to "Ports & User" and click on "Set default MAMP ports".
The ports change as follows:
Apache 8888 - SSL 8890
Nginx 7888 - SSL 7890
MySQL 8889
…
It is important that you don't change any of these. I tried changing only the Apache SSL port to 8890 and leaving the other ports at their standard values (Apache 80, MySQL 3306, …), but then the MySQL server didn't respond.
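A quick way to check that the TLS handshake now succeeds on the MAMP SSL port (assuming the default Apache SSL port 8890 from the list above, and the example domain from the question):
curl -vk https://www.mydomain.lo:8890/
The -k flag skips certificate verification, which is useful with a self-signed certificate.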

How to tell vsftpd which SSL certificate to use

I already have vsftpd set up with an SSL certificate, which is working fine. The issue is that the certificate is for the server's hostname and not one of my clients'. This client has to be PCI compliant, so when the PCI scan takes place it checks the FTP ports and sees that the certificate is not associated with my client's URL. My question is: how can I set up vsftpd to serve a certificate based on the IP address or the hostname?
vsftpd version 3.0.3
Red Hat 8.2
I finally found the answer to this on Red Hat's site (https://access.redhat.com/solutions/5172631).
Essentially, the default configuration file is located at /etc/vsftpd/vsftpd.conf. You need to update this file to listen on the server's default IP address using listen_address=.... Then copy that file to /etc/vsftpd/[site].conf and change listen_address to the address for the other site. (Obviously, you need different IP addresses for the different sites for this to work.)
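As a rough sketch, the two files might then differ only in listen_address and the certificate directives (the IP addresses, file names, and certificate paths below are placeholders, not values from the Red Hat article):
# /etc/vsftpd/vsftpd.conf - default site
listen=YES
listen_address=192.0.2.10
ssl_enable=YES
rsa_cert_file=/etc/pki/tls/certs/server.example.com.crt
rsa_private_key_file=/etc/pki/tls/private/server.example.com.key
# /etc/vsftpd/site2.conf - client site with its own certificate
listen=YES
listen_address=192.0.2.20
ssl_enable=YES
rsa_cert_file=/etc/pki/tls/certs/client.example.com.crt
rsa_private_key_file=/etc/pki/tls/private/client.example.com.key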
Once done, enable vsftpd.target and start it:
systemctl enable vsftpd.target
systemctl start vsftpd.target
I also had to restart vsftpd to get this to work:
systemctl restart vsftpd
After that, connecting to FTP for site 1 worked as expected, and when connecting to site 2 (the one with its own unique SSL certificate) I got the correct certificate.
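To verify which certificate each address serves, openssl can make an explicit FTPS STARTTLS connection (the IP is a placeholder matching the sketch above):
openssl s_client -connect 192.0.2.20:21 -starttls ftp
The certificate subject printed in the output should match the hostname intended for that site.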

Why can't I see https webpage after using sudo certbot --apache

I've just done a fresh install of Ubuntu 20.04 and followed the Digital Ocean instructions to get my apache server up and running:
https://www.digitalocean.com/community/tutorials/how-to-install-linux-apache-mysql-php-lamp-stack-on-ubuntu-20-04
Which worked fine for HTTP traffic, then I used the Digital Ocean instructions (which I knew, but followed them anyway) to set up for SSL (https) access:
https://www.digitalocean.com/community/tutorials/how-to-secure-apache-with-let-s-encrypt-on-ubuntu-20-04
I selected the option to redirect all traffic to https. I opened my firewall using sudo ufw allow 'Apache Full'.
But I am unable to see my sites - the browsers just time out. I have tried disabling ufw just to see, and nope, nothing.
SSL Labs just gives me an "Assessment failed: Unable to connect to the server" error.
I also ran https://check-your-website.server-daten.de/?q=juglugs.com and it timed out as well.
I have deleted the Let's Encrypt configuration and run through the process again three times with the same result, and now I'm stuck...
Everything I've searched points to a firewall error, but as I've said, I've disabled that and have the same result. The router settings have not been changed since I did my fresh Ubuntu install.
Any help gratefully received.
Thanks in advance.
on8tom answered this one for me - in setting up the new build of Ubuntu, the local IP address of my Apache server had changed, and my Virgin Media Hub was only forwarding port 443 to the old IP address.
Many thanks for pointing me at that (I should have checked it before posting - kicking myself!)
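For anyone hitting the same symptom, it helps to confirm that Apache is listening locally and then test reachability from outside the network (the domain is the one from the question):
sudo ss -tlnp | grep ':443'
curl -v --connect-timeout 10 https://juglugs.com/
If the first command shows Apache bound to port 443 but an external curl still times out, the problem is almost certainly port forwarding or a firewall, not the certificate setup.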

Unable to register host while creating Apache Ambari cluster

I am trying to create a localhost Apache Ambari cluster on CentOS 7. I am using Ambari 2.2.2 binaries downloaded and installed from the Ambari repository with the following commands:
cd /etc/yum.repos.d/
wget http://public-repo-1.hortonworks.com/ambari/centos7/2.x/updates/2.2.2.0/ambari.repo
yum install ambari-server
ambari-server setup
ambari-server start
Before starting the server I performed all the necessary preparation steps described in the Hortonworks documentation, including the setup of passwordless ssh, which is a frequent cause of problems according to posts found on the internet. I verify it with
ssh root@localhost
During the creation of the cluster, in the "Install options" window I enter the name of the host I want to create (localhost in my case). I have already tried both of the available options:
- providing the RSA private key directly - in this case the next window simply gets stuck at the "Installing" stage and does not go any further, showing no errors
- performing manual registration of hosts.
For the second option I have downloaded and installed ambari-agent
yum install ambari-agent
ambari-agent start
In case of manual host registration I am getting the following error:
"Host checks were skipped on 1 hosts that failed to register."
When I click on "Failed", which according to posts on the internet is supposed to deliver a more precise description of the problem, I see the following:
"Registering with the server...
Registration with the server failed."
As a result I don't even know where to start searching for the possible causes of this error.
Ambari cluster nodes need to be configured with a Fully Qualified Domain Name (FQDN). localhost is not an FQDN. You will need to configure the node with an FQDN and then retry the installation. You could use something like localhost.local, which is an FQDN. This requirement, and how to configure the node to meet it, is documented in the prerequisites. From the HDP documentation:
All hosts in your system must be configured for both forward and reverse DNS.
If you are unable to configure DNS in this way, you should edit the /etc/hosts file on every host in your cluster to contain the IP address and Fully Qualified Domain Name of each of your hosts.
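A minimal way to satisfy this without DNS is an /etc/hosts entry that maps the node's IP address to an FQDN, plus a matching hostname (the IP below is a placeholder; localhost.local follows the suggestion above):
192.0.2.15   localhost.local   localhost
hostnamectl set-hostname localhost.local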
I had the same "Registering with the server... Registration with the server failed." problem just recently.
I found a response on the same topic recommending a look at the log file, which is located at /var/log/ambari-agent/ambari-agent.log. From there I was able to see that the hostname had been set up incorrectly during some other phase of installation (mine was something like ambari.hadoop instead of localhost), so I went to /etc/ambari-agent/conf/ambari-agent.ini and fixed it there.
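For reference, the relevant setting lives in the [server] section of that file; a corrected excerpt might look like this (the hostname value is just an example matching the question's setup):
[server]
hostname=localhost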
I know that I'm digging up quite an old question, but it seems that compiling all of this in one place might help someone with the same problem.

Vagrant share producing a 400 bad request

I'm using Vagrant with apache2 and specifically the command
vagrant share --https 443
It all starts fine and provides a URL. When I access that URL I'm presented with a 400 error:
Bad Request
Your browser sent a request that this server could not understand.
Apache/2.4.12 (Ubuntu) Server at *.vagrantshare.com Port 443
I have been accessing the Vagrant machine over https just fine, but it doesn't seem to work with vagrant share.
This is a known Vagrant Share bug: https://github.com/webdevops/vagrant-docker-vm/issues/51
The only workarounds I've seen discussed are to use a custom domain or to use another product entirely (e.g. ngrok) to create the share. See the bug discussion here: https://github.com/mitchellh/vagrant/issues/5493#issuecomment-159792794
Vagrant Share docs for custom domains are here: https://atlas.hashicorp.com/help/vagrant/shares/custom-domains
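As a workaround sketch using the second option, ngrok can expose the guest's HTTPS endpoint instead of vagrant share (this assumes ngrok is installed on the host and that guest port 443 is forwarded to host port 8443 in the Vagrantfile - both are assumptions, not part of the original setup):
ngrok http https://localhost:8443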