How to tell vsftpd which SSL certificate to use

I already have vsftpd set up with an SSL certificate, which is working fine. The issue is that the certificate is for the server's host name and not one of my client's. This client has to be PCI compliant, so when the PCI scan takes place it checks the FTP ports and sees that the certificate is not associated with my client's domain. My question is: how can I set vsftpd up to serve a certificate based on the IP address or the hostname?
vsftpd version 3.0.3
Red Hat 8.2

I finally found the answer to this on Red Hat's site (https://access.redhat.com/solutions/5172631).
Essentially, the default configuration file is located at /etc/vsftpd/vsftpd.conf. You need to update this file to listen on the server's default IP address using listen_address=.... Then copy that file to /etc/vsftpd/[site].conf and change the listen_address to the one for the other site. (Obviously, you need different IP addresses for different sites for this to work.)
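As a sketch of the two files (the IP addresses, the site name, and the certificate paths below are assumptions; substitute your own):

# /etc/vsftpd/vsftpd.conf -- default site on the server's primary IP
listen=YES
listen_address=192.0.2.10
ssl_enable=YES
rsa_cert_file=/etc/pki/tls/certs/server-hostname.crt
rsa_private_key_file=/etc/pki/tls/private/server-hostname.key

# /etc/vsftpd/site2.conf -- the client's site on its own IP, with its own certificate
listen=YES
listen_address=192.0.2.11
ssl_enable=YES
rsa_cert_file=/etc/pki/tls/certs/client-domain.crt
rsa_private_key_file=/etc/pki/tls/private/client-domain.key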
Once done, enable vsftpd.target and start it:
systemctl enable vsftpd.target
systemctl start vsftpd.target
I also had to restart vsftpd to get this to work:
systemctl restart vsftpd
After that, when connecting to FTP for site 1, everything worked as expected. When connecting to site 2 (the one with its own unique certificate), I got the correct certificate.

Related

Get SSL to work on Google Compute Engine with a VM Instance running a webserver (nginx)?

I am a bit new to Google Compute Engine. I managed to get a web server with nginx working on my Google domain and installed WordPress; HTTP access was working. Now I want to get HTTPS working as well.
I noticed that I didn't have SSL running, so I ended up using Cloudflare, made the necessary changes to my nginx server, and changed the nameservers for my web server's IP address on Google Compute Engine. That works fine, although there are still some errors when accessing the IP address instead of the domain name (400 Bad Request: No required SSL certificate was sent, nginx/1.18.0 (Ubuntu)).
So, I heard Google can provide SSL on my Google domain, but I am really stuck with the documentation: https://cloud.google.com/appengine/docs/standard/python/securing-custom-domains-with-ssl?authuser=2#upgrading_to_managed_ssl_certificates. It talks about Google App Engine, and I haven't found documentation for applying SSL certificates to my Google Compute Engine instance. I did add a custom domain there, but it points to a different IP address than my web server on Compute Engine. That surely can't be the right way?
Hence, does anyone know how I can get SSL from Google to work on my webserver using a VM instance on Google Compute Engine?
(Note to myself: https://www.digitalocean.com/community/tutorials/how-to-secure-nginx-with-let-s-encrypt-on-ubuntu-20-04)
It is very easy to set up SSL on Compute Engine.
STEP 1: Domain names
Determine which domain names you want SSL certificates for. Typically you want two: the naked domain (example.com) and the www subdomain (www.example.com). Replace example.com with your actual domain name.
Note: Let's Encrypt will not issue SSL certificates for an IP address. This also means you cannot access your web server over SSL by specifying an IP address instead of a domain name; trying something like https://my-ip-address.com will generate an error.
STEP 2: Setup DNS
Change your DNS so each domain name points directly to your Compute Engine instance's reserved static IP address. At this point, do not use Cloudflare; Let's Encrypt will talk directly to your Nginx web server. Validate that each domain name is configured correctly and that you can access your site via HTTP (http://example.com and http://www.example.com).
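As a quick sanity check (example.com is a placeholder here; dig ships with the dnsutils/bind-utils packages), each name should resolve to the instance's static IP:

dig +short example.com
dig +short www.example.com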
The following instructions are OS dependent and are for Debian-based systems such as Debian and Ubuntu. There are similar steps for CentOS, Red Hat, etc.
STEP 3: Install Certbot
Certbot is the software agent for Let's Encrypt. It requires Python 3 to be installed on your system; most Google Cloud instances have Python 3 installed.
Run the following commands on your VM instance:
sudo apt update
sudo apt upgrade -y
sudo apt install certbot python3-certbot-nginx
STEP 4: VPC Firewall
Make sure that ports 80 and 443 are allowed in the Google Cloud VPC firewall (see "Using firewall rules" in the Google Cloud documentation).
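If you prefer the command line, here is a sketch using the gcloud CLI (the rule names are arbitrary and the default VPC network is assumed; adjust --network or --target-tags for your setup):

gcloud compute firewall-rules create allow-http --allow=tcp:80
gcloud compute firewall-rules create allow-https --allow=tcp:443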
STEP 5: Issue the SSL Certificate
Run the following command on your VM instance. Replace example.com with your domain names.
sudo certbot --nginx -d example.com -d www.example.com
Summary
Your server now has SSL configured. The SSL certificate will auto-renew. Provided that you do not change the domain names or DNS settings, SSL will continue to function.
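To verify that renewal will work, you can ask Certbot for a dry run (a standard Certbot command):

sudo certbot renew --dry-run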
In the future, you may decide to offload SSL certificates to another service such as Cloudflare or a Google HTTP(S) Load Balancer. I recommend understanding how to set up SSL directly on your instance so that encryption is end-to-end. Then you can decide on SSL-offloading, caching, load balancing, auto-scaling, and more options.

MAMP Pro SSL for local virtual host

I've seen a lot of similar questions but none of the answers helped me (and there's one addition I didn't see anywhere).
So, I'm using MAMP Pro 6.0.1 for local testing. I have a domain set up (www.mydomain.lo), enabled SSL, and used a self-signed certificate I created with the button in MAMP.
I added the certificate to my keychain (I'm on a Mac) and set it to "always trust" in the keychain info.
But when I try to access the local page with https://www.mydomain.lo, I get an error saying:
There was an error connecting to … SSL received an entry which exceeds the max allowed length. Error-Code: SSL_ERROR_RX_RECORD_TOO_LONG
(this is loosely translated from German).
The page works with http:// but I'd like to test the SSL-Version, too.
Any ideas?
I was able to partly solve this riddle.
SSL just doesn't work on local hosts when the standard port (443) is used.
But it works when the "default MAMP ports" are used.
In MAMP Pro, go to "Ports & User" and click "Set default MAMP ports".
The ports change as follows:
Apache 8888 - SSL 8890
Nginx 7888 - SSL 7890
MySQL 8889
…
It is important that you don't change any of these. I tried changing only the Apache SSL port to 8890 and leaving the other ports on their standard values (Apache 80, MySQL 3306, …), but then the MySQL server didn't respond.
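With the default MAMP ports active, you then reach the HTTPS version of the site by adding the SSL port to the URL, e.g. (host name taken from the question):

https://www.mydomain.lo:8890/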

Check SSL installed correctly without domain name

Is there a way to check if SSL is correctly set up on a server before pointing the domain at it? (The site has SSL on its current server, and I want to make sure SSL is ready to go on the new server before I change the A record.)
The site, on the new server, will not be in the root directory of the web server, so going to the server's IP address in my browser or using online SSL checker tools won't work (or is there a way to test just with the IP address?).
The new server is Apache.
Thanks
Set up everything on the new server, then populate both its /etc/hosts and yours (or the equivalent on your OS) with a mapping between its IP address and the name.
That way, at least the browser on your machine should, based on /etc/hosts, query the new server before you make the same change in the DNS for anyone else to see.
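For example (203.0.113.10 is a documentation placeholder; use the new server's real address and your real names), a line like this in /etc/hosts:

203.0.113.10    example.com www.example.com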
HTTPS and direct browsing by IP address do not mix well because:
certificates are issued for hostnames, not IP addresses
with SNI, the client needs to pass a hostname at the TLS level for the server to select the proper certificate when multiple hosts share a single IP address
If it's enough to test SSL/TLS, and not the HTTP level (including things like redirects and linked resources: CSS, JS, images, etc.), you can use:
openssl s_client -connect address:port -servername hostname_for_SNI </dev/null
# or <NUL: on Windows
# optionally add -quiet to suppress most non-error output
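To go a step further and inspect the certificate the server actually presents (the address and hostname below are placeholders), pipe the output through openssl x509:

openssl s_client -connect 203.0.113.10:443 -servername example.com </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer -enddate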

SSL for AWS EC2 Flask application

I have registered a free domain name from freenom.com and added nameservers from AWS Route 53. Now my domain <blabla>.ga successfully resolves to my EC2 Python Flask server. But I really can't figure out how to add SSL using Let's Encrypt. I am following https://ivopetkov.com/b/let-s-encrypt-on-ec2/ to SSL-ify my EC2 instance. After running letsencrypt-auto, I add the domain names and press Enter, then I get:
[ec2-user@ip-172-31-40-218 letsencrypt]$ cd /opt/letsencrypt/
[ec2-user@ip-172-31-40-218 letsencrypt]$ ./letsencrypt-auto
Requesting to rerun ./letsencrypt-auto with root privileges...
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator apache, Installer apache
No names were found in your configuration files. Please enter in your domain
name(s) (comma and/or space separated) (Enter 'c' to cancel): iotserver.ga www.iotserver.ga
Obtaining a new certificate
Performing the following challenges:
http-01 challenge for iotserver.ga
http-01 challenge for www.iotserver.ga
Cleaning up challenges
Unable to find a virtual host listening on port 80 which is currently needed for Certbot to prove to the CA that you control your domain. Please add a virtual host for port 80.
A similar question has been asked here, but I've already done most of what is explained in both of the answers. Can anyone tell me what I am missing?
Try the following tutorials:
https://www.digitalocean.com/community/tutorials/how-to-secure-apache-with-let-s-encrypt-on-ubuntu-14-04
https://www.digitalocean.com/community/tutorials/how-to-deploy-a-flask-application-on-an-ubuntu-vps
Make sure that you are able to access the web app without HTTPS, then try to install SSL. As I can see, you are getting the following error:
Unable to find a virtual host listening on port 80 which is currently needed for Certbot to prove to the CA that you control your domain. Please add a virtual host for port 80.
There must be some configuration issue: Certbot's Apache plugin looks for a virtual host on port 80 whose ServerName (or ServerAlias) matches the domain you are requesting a certificate for. Please debug it and let me know.
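A minimal sketch of such a virtual host (the file path, domain names, and the Flask port are assumptions, and the proxy lines require mod_proxy/mod_proxy_http to be enabled):

# /etc/httpd/conf.d/iotserver.conf on Amazon Linux
# (on Debian/Ubuntu, a file under /etc/apache2/sites-available/ instead)
<VirtualHost *:80>
    ServerName iotserver.ga
    ServerAlias www.iotserver.ga
    # Forward plain-HTTP traffic to the Flask app, assumed to listen on 127.0.0.1:5000
    ProxyPass / http://127.0.0.1:5000/
    ProxyPassReverse / http://127.0.0.1:5000/
</VirtualHost>

After reloading Apache, rerun ./letsencrypt-auto; it should now find the port-80 virtual host.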

IP based virtual hosts are not working properly after upgrading Mozilla NSS

We are using NSS as the SSL engine in an Apache server. Recently we applied the latest SUSE Linux Enterprise Server patches on an Apache server which is hosting two IP-based virtual hosts. After the upgrade, the first virtual host is working fine but the second one is not.
Error log shows "Hostname vhost1.xxyyzz.com provided via SNI and hostname vhost2.xxyyzz.com provided via HTTP are different" when accessing vhost2.xxyyzz.com.
If we switch back to mod_ssl, the issue goes away. Obviously the issue is related to the following patches. Any help would be appreciated.
mozilla-nss 3.16.4-0.8.1
mozilla-nss-tools 3.16.4-0.8.1
apache2-mod_nss 1.0.8-0.4.9.1
Check your /etc/hosts file to see if you might be assigning the domain name to a local internal IP address or interface.
This caused the same error message for me and many 400 errors.
After changing /etc/hosts, don't forget to restart the name service cache daemon (service nscd restart).
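For example (a hypothetical entry using the hostname from the question), a stale line like this in /etc/hosts would shadow the public DNS record and send requests to the wrong interface:

127.0.0.1    vhost2.xxyyzz.com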
SNI isn't technically fully supported in that version of mod_nss, but support has since been added: https://www.suse.com/support/update/announcement/2015/suse-ru-20150591-1.html
I saw the same error, and it went away after applying the referenced patch.