Get SSL to work on Google Compute Engine with a VM instance running a webserver (nginx)?

I am a bit new to Google Compute Engine. I managed to get a webserver with nginx running on my Google domain and installed WordPress. HTTP access was working; now I want to get HTTPS to work as well.
Since I noticed I didn't have SSL running, I ended up using Cloudflare: I made the necessary changes to my nginx server and also updated the nameservers for the domain that points at my webserver's IP address on Google Compute Engine. That works fine, although there are still some errors when accessing the IP address instead of the domain name (400 Bad Request: No required SSL certificate was sent, nginx/1.18.0 (Ubuntu)).
I heard Google can provide SSL for my Google domain, but I am really stuck with the documentation, https://cloud.google.com/appengine/docs/standard/python/securing-custom-domains-with-ssl?authuser=2#upgrading_to_managed_ssl_certificates. It covers Google App Engine, and I haven't found documentation on applying SSL certificates to a Google Compute Engine instance. I did add a custom domain there, but it points to a different IP address than my webserver on Compute Engine, which surely can't be the right way?
Hence, does anyone know how I can get SSL from Google to work on my webserver using a VM instance on Google Compute Engine?
(Note to myself: https://www.digitalocean.com/community/tutorials/how-to-secure-nginx-with-let-s-encrypt-on-ubuntu-20-04)

It is very easy to set up SSL on Compute Engine.
STEP 1: Domain names
Determine which domain names you want SSL certificates for. Typically you want two: the naked domain (example.com) and the www subdomain (www.example.com). Replace example.com with your actual domain name.
Note: Let's Encrypt will not issue SSL certificates for an IP address. This also means you cannot access your web server over SSL by specifying an IP address instead of a domain name; attempting something like https://<your-ip-address> will generate an error.
STEP 2: Setup DNS
Change your DNS servers to point directly to your Compute Engine instance reserved static IP address. At this point, do not use CloudFlare. Let's Encrypt will talk directly to your Nginx web server. Validate that each domain name is configured correctly and that you can access your site via HTTP (http://example.com and http://www.example.com).
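A quick sanity check from your workstation might look like the following (example.com is a placeholder for your own domain):
# Both names should resolve to the instance's reserved static IP
dig +short example.com
dig +short www.example.com
# Plain HTTP should already work before you request certificates
curl -I http://example.com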
The following instructions are OS dependent and are for Debian-based systems such as Debian and Ubuntu. There are similar steps for CentOS, Red Hat, etc.
STEP 3: Install Certbot
Certbot is the software agent for Let's Encrypt. It requires Python 3, which most Google Cloud instances already have installed.
Run the following commands on your VM instance:
sudo apt update
sudo apt upgrade -y
sudo apt install certbot python3-certbot-nginx
STEP 4: VPC Firewall
Make sure that ports 80 and 443 are allowed in the Google Cloud VPC Firewall.
Using firewall rules
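If the required rules are not already present, a sketch of creating one with gcloud could look like this (the rule name and the default network are assumptions; adjust to your setup):
gcloud compute firewall-rules create allow-http-https \
  --network=default --direction=INGRESS --action=ALLOW \
  --rules=tcp:80,tcp:443 --source-ranges=0.0.0.0/0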
STEP 5: Issue the SSL Certificate
Run the following command on your VM instance. Replace example.com with your domain names.
sudo certbot --nginx -d example.com -d www.example.com
Summary
Your server now has SSL configured. The SSL certificate will auto-renew. Provided that you do not change the domain names or DNS settings, SSL will continue to work.
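If you want to confirm that renewal is wired up correctly, Certbot supports a dry run:
# Simulates renewal without touching the real certificates
sudo certbot renew --dry-run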
In the future, you may decide to offload SSL certificates to another service such as Cloudflare or a Google HTTP(S) Load Balancer. I recommend understanding how to set up SSL directly on your instance so that encryption is end-to-end. Then you can decide on SSL-offloading, caching, load balancing, auto-scaling, and more options.

Related

HTTPS Connection over LAN

I am new to server management and all that HTTP stuff. I am setting up an internal server at home to serve websites internally. My website needs to register a service worker, and for that I need an SSL certificate and an HTTPS connection, which seems impossible in my case, since localhost and internal IPs are served over HTTP or with untrusted SSL certificates.
Can anyone suggest a way to serve websites over HTTPS with trusted certificates so that the service worker can be used?
Note: I'll be using Xampp Apache for my Linux server with a static internal IP.
If you need a certificate trusted by any client, I would say there is no way.
But if you only need the certificate to be trusted by your own clients, there is a way to do that.
I assume you created a self-signed SSL certificate for your Apache server. In that case, you just install that certificate on your client.
For example, the following link describes the case where the client is Chrome on Windows:
https://peacocksoftware.com/blog/make-chrome-auto-accept-your-self-signed-certificate
If you use a programming language as a client, you may need a different way to install the certificate.
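For example, on a Debian/Ubuntu client you could trust the certificate system-wide (which many tools respect) or pass it explicitly to a single tool; the file name server.crt and the IP address are assumptions for illustration:
# Trust the self-signed cert system-wide on the client
sudo cp server.crt /usr/local/share/ca-certificates/my-lan-server.crt
sudo update-ca-certificates
# Or point one tool at it instead (replace 192.168.1.10 with your server's internal IP)
curl --cacert server.crt https://192.168.1.10/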

Check SSL installed correctly without domain name

Is there a way to check if SSL is correctly set up on a server before pointing the domain at it? (The site has SSL on its current server, and I want to make sure SSL is ready to go on the new server before I change the A record.)
The site on the new server will not be in the root directory of the web server, so going to the server's IP address in my browser or using online SSL checker tools won't work (or is there a way to test with just the IP address?).
The new server is Apache.
Thanks
Set up everything on the new server, then populate both its /etc/hosts and yours (or the equivalent on your OS) with a mapping between its IP address and the hostname.
That way, at least the browser on your machine will query the new server based on /etc/hosts, before you make the same change in DNS for everyone else to see.
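For example, an /etc/hosts entry might look like this (203.0.113.10 stands in for the new server's IP, example.com for your domain):
# /etc/hosts
203.0.113.10    example.com www.example.com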
HTTPS and browsing directly by IP address do not mix well because:
certificates are based on hostnames, not IP addresses
with SNI, the client needs to pass a hostname at the TLS level so that the server can select the proper certificate when multiple sites are hosted on a single IP address
If it's enough to test just SSL/TLS, and not the HTTP level (including things like redirects and linked resources such as CSS, JS, and images), you can use:
openssl s_client -connect address:port -servername hostname_for_SNI </dev/null
# or <NUL: on Windows
# optionally add -quiet to suppress most non-error output

ssl for aws EC2 Flask application

I have registered a free domain name from freenom.com and added nameservers from AWS Route 53. Now my domain <blabla>.ga successfully points to my EC2 Python Flask server, but I really can't figure out how to add SSL using Let's Encrypt. I am following https://ivopetkov.com/b/let-s-encrypt-on-ec2/ to SSLify my EC2 instance. After running letsencrypt-auto I enter the domain names and press Enter, and then I get:
[ec2-user@ip-172-31-40-218 letsencrypt]$ cd /opt/letsencrypt/
[ec2-user@ip-172-31-40-218 letsencrypt]$ ./letsencrypt-auto
Requesting to rerun ./letsencrypt-auto with root privileges...
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator apache, Installer apache
No names were found in your configuration files. Please enter in your domain
name(s) (comma and/or space separated) (Enter 'c' to cancel): iotserver.ga www.iotserver.ga
Obtaining a new certificate
Performing the following challenges:
http-01 challenge for iotserver.ga
http-01 challenge for www.iotserver.ga
Cleaning up challenges
Unable to find a virtual host listening on port 80 which is currently needed for Certbot to prove to the CA that you control your domain. Please add a virtual host for port 80.
A similar question is asked here, but I've already done most of what is explained in both of the answers. Can anyone tell me what I am missing here?
Try the following tutorials:
https://www.digitalocean.com/community/tutorials/how-to-secure-apache-with-let-s-encrypt-on-ubuntu-14-04
https://www.digitalocean.com/community/tutorials/how-to-deploy-a-flask-application-on-an-ubuntu-vps
Make sure that you are able to access the web app without HTTPS first, then try to install SSL. As I can see, you are getting the following error:
Unable to find a virtual host listening on port 80 which is currently needed for Certbot to prove to the CA that you control your domain. Please add a virtual host for port 80.
There must be some configuration issue. Please debug it and let me know.
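As a sketch, a minimal port-80 virtual host that satisfies Certbot's http-01 challenge might look like this (the file path and DocumentRoot are assumptions; on Amazon Linux the config usually lives under /etc/httpd/conf.d/):
# e.g. /etc/httpd/conf.d/iotserver.conf (or a sites-available file on Debian/Ubuntu)
<VirtualHost *:80>
    ServerName iotserver.ga
    ServerAlias www.iotserver.ga
    DocumentRoot /var/www/html
</VirtualHost>
After reloading Apache (sudo systemctl reload httpd, or apache2 on Debian/Ubuntu), re-run letsencrypt-auto and the challenge should find the virtual host.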

AWS and TLS Authentication

Does anyone know how I can enable TLS authentication on an application running inside an AWS Ubuntu machine?
To be specific, I have an Ubuntu machine on AWS running Linux Containers (LXC) and LXD (a framework on top of LXC that provides REST APIs to access Linux containers, among other things). I generated a certificate and key on the Ubuntu host using the LXC command-line utility. I then tested whether the certificate works locally by running a curl command with the --cert and --key options, and everything works fine.
I then copied the certificate over to my local machine's (Mac OS X) Keychain and tried accessing the Ubuntu server (which, by the way, has an open security group that allows traffic from everywhere on any port). It gives me the error: "This server could not prove that it is X.X.X.X. Its security certificate is from ip-X.X.X.X".
I noticed that the certificate has the DNS name value as the private IP address given to the machine by AWS instead of public IP address.
Does any one know how I can access my TLS enabled application inside an AWS Ubuntu machine from outside, public network?
Please let me know if things are not clear and I would be happy to provide more details.
Within the certificate is a field that specifies what machine name or IP address the certificate should be coming from. This prevents another site from grabbing the same certificate and presenting it as the other site's certificate. The issue in this case is that your certificate specifies the AWS internal address, but the client sees the external address of the server.
The solution is simple: generate a security certificate with a subject alternative name (SAN) that is the external IP address rather than the internal IP address. External clients will then see the certificate IP address as matching the address they went to.
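As a generic openssl sketch (203.0.113.25 is a placeholder for your instance's public IP; LXD also has its own certificate tooling, so treat this only as an illustration):
# Requires OpenSSL 1.1.1+ for the -addext option
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout server.key -out server.crt \
  -subj "/CN=203.0.113.25" \
  -addext "subjectAltName=IP:203.0.113.25"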

Amazon EC2 + SSL

I want to enable ssl on an EC2 instance. I know how to install third party SSL. I have also enabled ssl in security group.
I just want to use a URL like this: ec2-xx-xxx-xxx-xx.compute-1.amazonaws.com with https.
I couldn't find the steps anywhere.
It would be great if someone can direct me to some document or something.
Edit:
I have an instance on EC2 on which I have installed LAMP. I have also enabled http, https and ssh in the security group policy.
When I open the public DNS URL in the browser, I can see the web server running perfectly.
But when I add https to the URL, nothing happens.
Is there something I am missing? I really don't want to use a custom domain on this instance because I will terminate it after a month.
For development, demos, and internal testing (which is a common case for me), you can achieve demo-grade HTTPS on EC2 with tunneling tools. Within a few minutes, especially for internal testing purposes, ngrok will give you HTTPS (demo-grade; traffic goes through the tunnel).
Tool 1: https://ngrok.com Steps:
Download ngrok to your EC2 instance: wget https://bin.equinox.io/c/4VmDzA7iaHb/ngrok-stable-linux-amd64.zip (the link at the time of writing; you will see the current link on the ngrok home page once you log in).
Enable 8080, 4443, 443, 22, 80 in your AWS security group.
Register and log in to ngrok, then copy the command that activates it with your token: ./ngrok authtoken shjfkjsfkjshdfs (you will see it on their home page once you log in)
Run your plain HTTP (non-HTTPS) server (Node.js, Python, whatever) on EC2
Run ngrok: ./ngrok http 80 (or a different port if your HTTP server listens on a different port)
You will get an https link to your server.
Tool 2: cloudflare wrap
Alternatively, I think you can use an ngrok alternative called Cloudflare wrap, but I haven't tried it.
Tool 3: localtunnel
A third alternative could be https://localtunnel.github.io, which, as opposed to ngrok, can give you a subdomain for free. It is not permanent, but you can ask for a specific subdomain rather than a random string.
--subdomain request a named subdomain on the localtunnel server (default is random characters)
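A minimal usage sketch (the subdomain name is made up; the client is installed with npm install -g localtunnel):
# Expose the local HTTP server on port 80 under a chosen subdomain
lt --port 80 --subdomain myiotdemo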
Tool 4: https://serveo.net/
It turns out that Amazon does not provide SSL certificates for EC2 instances out of the box; I had skipped over the fact that they are just a virtual server provider.
To install an SSL certificate, even a basic one, you need to obtain one from a certificate authority and install it manually on your server.
I used startssl.com; they provide free basic SSL certificates.
Create a self-signed SSL certificate using openssl. Check this link for more information.
Install that certificate on your web server. Since you mentioned LAMP, I assume it is Apache, so check this link for installing SSL on Apache.
If you stop and start your instance, you will get a different public DNS name, so be aware of this, or attach an Elastic IP address to your instance.
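A minimal Apache SSL virtual host sketch for step 2 (the certificate/key paths are assumptions, and the ServerName is the public DNS placeholder from the question):
# Enable mod_ssl first, e.g. sudo a2enmod ssl on Debian/Ubuntu
<VirtualHost *:443>
    ServerName ec2-xx-xxx-xxx-xx.compute-1.amazonaws.com
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/selfsigned.crt
    SSLCertificateKeyFile /etc/ssl/private/selfsigned.key
</VirtualHost>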
But When I add https to URL, nothing happens.
Correct, your web server needs an SSL certificate and private key installed to serve traffic over HTTPS. Once that is done, you should be good to go. Also, if you use a self-signed cert, your web browser will complain about an untrusted certificate; you can ignore that warning and proceed to the web page.
You can enable SSL on an EC2 instance without a custom domain using a combination of Caddy and nip.io.
nip.io allows you to map any IP address to a hostname without needing to edit a hosts file or create DNS records; for example, 203.0.113.10.nip.io resolves to 203.0.113.10.
Caddy is a powerful open source web server with automatic HTTPS.
Install Caddy on your server
Create a Caddyfile and add your config (this config will forward all requests to port 8000)
<EC2 Public IP>.nip.io {
    reverse_proxy localhost:8000
}
Start Caddy using the command caddy start
You should now be able to access your server over https://<IP>.nip.io
I wrote an in-depth article on the setup here: Configure HTTPS on AWS EC2 without a Custom Domain