Does anyone know how I can enable TLS authentication for an application running inside an AWS Ubuntu machine?
To be specific, I have an Ubuntu machine on AWS running Linux Containers (LXC) and LXD (a framework on top of LXC that provides a REST API for accessing Linux containers, among other things). I generated a certificate and key on the Ubuntu host using the LXC command-line utility. I then tested whether the certificate works locally by running curl with the --cert and --key options, and everything works fine.
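For reference, the local test was roughly of this shape (a minimal sketch; the file names and address are placeholders, and 8443 is LXD's usual HTTPS API port):
# Hypothetical file names; --cacert pins the LXD server certificate so curl can verify it,
# while --cert/--key present the client certificate and key.
curl --cacert server.crt --cert client.crt --key client.key https://127.0.0.1:8443/1.0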
I then copied the certificate over to my local machine's (Mac OS X) Keychain and tried accessing the Ubuntu server (which, by the way, has an open security group that allows traffic from everywhere on any port). It gives me the error: "This server could not prove that it is X.X.X.X. Its security certificate is from ip-X.X.X.X".
I noticed that the certificate's DNS name field holds the private IP address given to the machine by AWS instead of the public IP address.
Does anyone know how I can access my TLS-enabled application inside an AWS Ubuntu machine from the outside, public network?
Please let me know if things are not clear and I would be happy to provide more details.
Within the certificate is a field that specifies which machine name or IP address the certificate should be served from. This prevents another site from grabbing the same certificate and presenting it as its own. The issue in this case is that your certificate specifies the AWS internal address, but the client sees the external address of the server.
The solution is simple: generate a certificate with a subject alternative name (SAN) that is the external IP address rather than the internal one. External clients will then see a certificate IP address that matches the address they connected to.
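As a sketch of what that can look like with plain OpenSSL (203.0.113.10 is a placeholder for the instance's public IP, ideally an Elastic IP so the address does not change; -addext requires OpenSSL 1.1.1 or newer):
# Generate a self-signed certificate whose SAN carries the public IP.
openssl req -x509 -newkey rsa:4096 -sha256 -days 365 -nodes \
  -keyout server.key -out server.crt \
  -subj "/CN=203.0.113.10" \
  -addext "subjectAltName=IP:203.0.113.10"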
Related
I am a bit new to Google Compute Engine and managed to get a web server with nginx working on my Google domain and installed WordPress. HTTP access was working. Now I want to get HTTPS to work as well.
I noticed that I didn't have SSL running, so I ended up using Cloudflare, made the necessary changes to my nginx server, and changed the nameservers for my web server's IP address on Google Compute Engine. That works fine, although there are still some errors when accessing the IP address instead of the domain name (400 Bad Request: No required SSL certificate was sent, nginx/1.18.0 (Ubuntu)).
So, I heard Google can do SSL on my Google domain, but I am really stuck with the documentation: https://cloud.google.com/appengine/docs/standard/python/securing-custom-domains-with-ssl?authuser=2#upgrading_to_managed_ssl_certificates. It talks about Google App Engine, and I haven't found documentation for applying SSL certificates to my Google Compute Engine instance. I did add a custom domain there, but it points to a different IP address than my web server on Compute Engine. That surely can't be the right way?
Hence, does anyone know how I can get SSL from Google to work on my webserver using a VM instance on Google Compute Engine?
(Note to myself: https://www.digitalocean.com/community/tutorials/how-to-secure-nginx-with-let-s-encrypt-on-ubuntu-20-04)
It is very easy to set up SSL on Compute Engine.
STEP 1: Domain names
Determine which domain names you want SSL certificates for. Typically you want two: the naked domain (example.com) and the www subdomain (www.example.com). Replace example.com with your actual domain name.
Note: Let's Encrypt will not issue SSL certificates for an IP address. This also means you cannot access your web server over SSL by specifying an IP address instead of a domain name (e.g. https://<your-ip-address>); trying this will generate an error.
STEP 2: Setup DNS
Change your DNS records to point directly to your Compute Engine instance's reserved static IP address. At this point, do not use Cloudflare; Let's Encrypt will talk directly to your nginx web server. Validate that each domain name is configured correctly and that you can access your site via HTTP (http://example.com and http://www.example.com), as in the sketch below.
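A quick way to run that validation from any machine (example.com is a placeholder):
# Each name should resolve to the instance's reserved static IP...
dig +short example.com
dig +short www.example.com
# ...and plain HTTP should already answer before you request a certificate.
curl -I http://example.com
curl -I http://www.example.com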
The following instructions are OS dependent and are for Debian-based systems such as Debian and Ubuntu. There are similar steps for CentOS, Red Hat, etc.
STEP 3: Install Certbot
Certbot is the software agent for Let's Encrypt. It requires Python 3 to be installed on your system; most Google Cloud instances have Python 3 installed.
Run the following commands on your VM instance:
sudo apt update
sudo apt upgrade -y
sudo apt install certbot python3-certbot-nginx
STEP 4: VPC Firewall
Make sure that ports 80 and 443 are allowed in the Google Cloud VPC Firewall.
See the Google Cloud documentation: Using firewall rules.
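If you prefer the command line over the console, the gcloud CLI can create the rules; a sketch (the rule names are arbitrary, and your VPC's default rules may already allow this traffic):
gcloud compute firewall-rules create allow-http --allow=tcp:80 --direction=INGRESS
gcloud compute firewall-rules create allow-https --allow=tcp:443 --direction=INGRESS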
STEP 5: Issue the SSL Certificate
Run the following command on your VM instance. Replace example.com with your domain names.
sudo certbot --nginx -d example.com -d www.example.com
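Once it completes, you can list what was issued and where the certificate files live:
sudo certbot certificates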
Summary
Your server now has SSL configured. The SSL certificate will auto-renew. Provided that you do not change the domain names or DNS settings, SSL will continue to function.
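To confirm that automatic renewal will work without waiting for the certificate to near expiry, Certbot offers a dry-run mode:
sudo certbot renew --dry-run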
In the future, you may decide to offload SSL certificates to another service such as Cloudflare or a Google HTTP(S) Load Balancer. I recommend understanding how to set up SSL directly on your instance so that encryption is end-to-end. Then you can decide on SSL-offloading, caching, load balancing, auto-scaling, and more options.
I have my website https://www.MyWebSite.com running on port 443. But I also have an admin login that is only available from the office local network: http://MyServer:9999/Login.aspx. Both addresses point to the same site but use different bindings.
Is it possible to get the one on port 9999 to use HTTPS? I tried creating a self-signed certificate in IIS, but my browser still complained, even though I exported the certificate and stored it in my Trusted Root Certification Authorities store.
So just to sum everything:
My regular site: https://MyWebSite.com <-- working fine
My admin login, only accessible via local network: http://MyServer:9999/Login.aspx works fine.
When I add a self-signed certificate issued to "MyServer" (not MyWebSite) and add the new binding on port 9999, I can browse to the website, but Chrome gives me a NET::ERR_CERT_COMMON_NAME_INVALID warning, even though the cert is issued to MyServer and is trusted.
Is it possible to get the one on port 9999 to use https?
Yes, it is possible to set up another port with a self-signed certificate.
Normally a self-signed certificate will carry the fully qualified machine name, e.g. machinename.subdomain.domain, so you have to browse using https://machinename.subdomain.domain:9999/.
Please double-check what error you are running into. In Chrome:
Your connection is not private
Attackers might be trying to steal your information from in08706523d (for example, passwords, messages, or credit cards). NET::ERR_CERT_COMMON_NAME_INVALID
In IE, you may get:
There is a problem with this website’s security certificate.
The security certificate presented by this website was issued for a different website's address.
Security certificate problems may indicate an attempt to fool you or intercept any data you send to the server.
In that case, assuming you have set the hostname to * in the IIS binding and installed the self-signed certificate into your "Trusted Root Certification Authorities" store, you should be able to browse to https://machinename.subdomain.domain:9999/ without any issues.
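One note on the Chrome warning specifically: modern Chrome ignores the certificate's Common Name and requires the hostname to appear in the subject alternative name (SAN), so a certificate merely "Issued To MyServer" can still trigger NET::ERR_CERT_COMMON_NAME_INVALID. A sketch of creating a suitable certificate with PowerShell (run as Administrator; New-SelfSignedCertificate places the -DnsName value in the SAN, and the store location shown is the standard machine store):
# Creates a self-signed certificate whose SAN contains "MyServer".
New-SelfSignedCertificate -DnsName "MyServer" -CertStoreLocation "cert:\LocalMachine\My"
You would still need to bind it to port 9999 in IIS and place a copy in Trusted Root Certification Authorities for the browser to trust it.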
I am trying to use knife from my laptop to connect to a newly configured Chef server hosted on AWS. I know what is listed below is the right direction for me, but I'm not sure how to go about it exactly.
If you are not able to connect to the server using the hostname ip-xx-x-x-xx.ec2.internal, you will have to update the certificate on the server to use the correct hostname.
I had this same problem. The problem is that EC2 instances place their private IP into their hostname file, which causes Chef to self-sign certs for the internal IP. When you run knife ssl check, you'll probably get an error message that looks like this:
ERROR: The SSL cert is signed by a trusted authority but is not valid for the given hostname
ERROR: You are attempting to connect to: 'ec2-x-x-x-x.us-west-2.compute.amazonaws.com'
ERROR: The server's certificate belongs to 'ip-y-y-y-y.us-west-2.compute.internal'
Connecting to the public IP is correct; however, you'll continue to get this error if you don't configure your Chef server to use your public DNS when signing the cert.
EDIT: Chef's documentation used to have steps to correct this issue, but since the time I initially answered this question they have removed those steps from their tutorial. The following steps worked for me with Chef 12 and Ubuntu 16 on an EC2 instance.
SSH onto your Chef server.
Open your hostname file with the following command: sudo vim /etc/hostname
Remove the line containing your internal IP, replace it with your public IP, and save the file.
Reboot the server with sudo reboot.
Run sudo chef-server-ctl reconfigure (this signs a new certificate, among other things).
Go back to your workstation and run knife ssl fetch followed by knife ssl check, and you should be good to go.
What you could ALSO do is just complete steps 1-4 before you even install Chef onto the server:
Update the public IP on the Chef server.
Run chef-server-ctl reconfigure on the server (no reboot needed).
Update the knife.rb on your workstation with the new IP address.
Run 'knife ssl fetch' on the Chef workstation.
This should resolve the issue; to confirm, run 'knife client list'.
You can't connect to an internal IP (or DNS that points to an internal IP) from outside AWS. Those are nonroutable IP addresses.
Instead, connect to the public IP of the instance, if you have one.
We have IBM Sterling Connect Direct 4.2 on Windows Server 2003, and everything is working fine, even the SSL configuration; we exchange files properly. Now I have migrated all the configuration to a Windows Server 2008 cluster environment. I have configured IBM Sterling Connect Direct 4.6.0.1, including the SSL configuration (we just made a copy/paste of the certificates, keycerts, and trusted files). Everything is OK and we are able to receive files over an SSL session. But there is an exception. The problem we are facing is that when we try to send files to our partners, we get this error:
Message ID: CSPA311E
SSL Certificate verification failed, reason= self certificate in certificate chain:
Followed by this error:
Message ID: CSPA309E
SSL3_GET_SERVER_CERTIFICATE certificate verify failed:
We are using exactly the same configuration, except for the IP and server name, which have changed. Are the certificates in any way linked to the server name or the IP?
Any hint on this issue is very appreciated.
A certificate is issued for a specific domain name or IP address. I'm pretty sure that this is the reason for your error. You can check this with keytool.exe, which ships with a JRE or JDK installation and is located in the /bin directory. So issue the following from your command line:
keytool.exe -printcert -file C:\path\to\your\file.crt
This will give an output like:
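(Illustrative output only; the values below are placeholders:)
Owner: CN=localhost, OU=..., O=..., L=..., ST=..., C=...
Issuer: CN=localhost, OU=..., O=..., L=..., ST=..., C=...
Serial number: ...
Valid from: ... until: ...
Certificate fingerprints:
         SHA1: ...
         SHA256: ...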
In the Owner line you can see CN=localhost, which means that this certificate was issued for localhost.
If this CN entry differs from the new IP address or domain name, you have two possibilities:
Create a new certificate which is issued for that specific IP or domain. You can use the Java keytool.exe again (see the sketch after this list).
Update your client application, which checks the validity of the certificate, so that it does not check the cert's CN against the real IP address or domain name of the remote server. (Not recommended, for security reasons.)
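For the first option, a sketch with keytool (the alias, keystore name, and host name are placeholders; -ext SAN=... adds a subject alternative name, which modern clients check instead of the CN):
keytool -genkeypair -alias server -keyalg RSA -keysize 2048 -validity 365 -keystore server.jks -dname "CN=newhost.example.com" -ext SAN=dns:newhost.example.com
keytool -exportcert -alias server -keystore server.jks -file server.crt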
We allow users to dial-in to our system.
We run a firewall on the dial-in system that blocks all access by default and we only allow certain servers to be accessed by adding specific rules.
We have a web service that contacts our server. The service calls are made over SSL.
The SSL Cert is from GoDaddy.
We have found that when connecting to the service for the first time, something tries to verify the SSL certificate: we are seeing dropped packets to Microsoft IP addresses on port 80.
If we allow access to the Microsoft IP, the software works perfectly.
The issue is that the IP is random, so I have been adding a few different IP hosts.
It looks like some type of SSL verification system or something... has anyone ever run into something like this? Or does anyone know of a block of IPs or hostnames that I can allow in the firewall?
It's most likely trying to contact the Certificate Authority (CA) to verify the SSL cert.
It smells like the browser is trying to connect to a CRL server. Try to reverse-resolve the IP addresses to a domain name and you should get some clue.
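The revocation endpoints a client will contact (CRL and OCSP URLs) are embedded in the certificate itself, so you can read them out and allow those hostnames instead of chasing individual IPs. A sketch (the certificate file name is a placeholder):
# Print the revocation-related URLs carried in the certificate.
openssl x509 -in server-cert.pem -noout -text | grep -E -A 4 "CRL Distribution|Authority Information Access"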