Kubernetes/Ingress/TLS - block access with IP Address in URL - ssl

A pod is accessible via nginx-ingress at https://FQDN. That works well with the configured public certificates. But if someone uses https://IP_ADDRESS, they will get a certificate error because of the default "Kubernetes Fake Certificate". Is it possible to block access via the IP_ADDRESS URL completely?

I think you would first need the TLS handshake to complete before Nginx could deny access.
HAProxy, on the other hand, may be able to close the connection while checking the ServerName (SNI), say by setting an ACL in your HTTPS frontend that routes applications to their backends. Though I'm not sure this would be doable without mounting a custom HAProxy configuration template into your ingress controller.
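As a rough, hedged sketch only (not something an ingress controller generates out of the box), an SNI check in a TCP-mode HAProxy frontend could look something like this; app.example.com and the backend address are placeholders:
frontend tls-in
    bind *:443
    mode tcp
    # wait (up to 5s) for the TLS ClientHello so the requested server name is known
    tcp-request inspect-delay 5s
    acl expected_sni req_ssl_sni -i app.example.com
    # accept handshakes for the expected name, close everything else,
    # e.g. clients that connected with https://IP_ADDRESS and sent no SNI
    tcp-request content accept if expected_sni
    tcp-request content reject
    default_backend app-backend
backend app-backend
    mode tcp
    # placeholder address of the actual TLS endpoint
    server ingress 10.0.0.10:443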

Related

Nagios check_cert - CRITICAL - Cannot make SSL connection

I am using Nagios Core at the moment to monitor the status of my domain and SSL cert. However, for one of my sites, I can't get the expiry information of the SSL certificate.
The error shown on Nagios is as below:
CRITICAL - Cannot make SSL connection
The check_cert settings for the site is as below:
# cert.cfg
define service {
    use                 generic-service
    host_name           cert
    service_description [cert] {mysite's domain info}
    check_command       check_cert!{mysite's domain info}!-C 30,15
}
I am currently monitoring the domain status over Nagios for the same site as well, but it is working completely fine and there isn't any notice or error shown on Nagios.
Does anyone know why the domain connection is working whereas the ssl connection is not?
P.S. The OS that I use to monitor my site (the one on which I installed Nagios) is CentOS 7.
Reading through the Nagios documentation, I figured out why Nagios wasn't able to make an SSL connection. Apparently, an SNI handshake is needed prior to the certificate check, so without specifying that option the following error appeared:
https://nagios-plugins.org/doc/man/check_http.html
CRITICAL - Cannot make SSL connection
To solve the problem, the --sni option is what I needed:
define service {
    use                 generic-service
    host_name           cert
    service_description [cert] {mysite's domain info}
    check_command       check_cert!{mysite's domain info}!-C 30,15 --sni
}
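Assuming check_cert is a wrapper around the standard check_http plugin, the equivalent manual check from the shell would be something along these lines (the plugin path is the typical CentOS 7 location and the host name is a placeholder):
# check certificate expiry (warn at 30 days, critical at 15) with SNI enabled
/usr/lib64/nagios/plugins/check_http -H www.example.com -C 30,15 --sni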

Internal and external services running behind Traefik in Docker Swarm mode

I'm having some trouble finding any way to make my situation workable. I have 2 applications:
1: An external web application running on sub1.domain.com. If I run this application behind Traefik with ACME (Let's Encrypt) it works fine. I have a few more backend services (api/auth) that all run with a valid Let's Encrypt certificate and get their HTTP traffic redirected to HTTPS by Traefik:
[entryPoints.http.redirect]
  entryPoint = "https"
I have to have some form of http to https forwarding for this service.
2: An internal web application running on sub2.domain.com. I have a self-signed trusted certificate (internal CA) which works fine behind Traefik if I set it as the default certificate, or if I use it in the application itself (inside Tomcat). Since it is an internal service I can live without SSL here if that solves my problem, but that does not work with Traefik's HTTP to HTTPS forwarding.
I have been trying to get these 2 services to run behind the same traefik instance but all the possible scenarios I could think of do not work because they are either still work in progress or just plain not working.
Scenarios
1: No http to https redirect, don't bother with https for the internal service and just use http. Then inside the backend for the external webservice redirect to https.
Problems:
- Unable to have two Traefik ports which Traefik forwards to
- Unable to forward one single port to another protocol (since the backend is always either the http or the https port)
2: Use ACME over the default certificate.
Someone else thought this was a good idea. It's just not working yet.
3: Re-use the backend SSL certificate and have Traefik just redirect without SSL termination. I'm not sure if this is the same thing, but there is an option called "passTLSCert". However, it seems this is only possible with frontends defined in the .toml file, which do not work (probably because I use Docker for the backends).
4: Use the DNS-01 challenge to create an SSL certificate for my internal service.
Sounds like this could work, so I'm now using CloudFlare and have an API key. However, it does not seem to work for subdomains, and there is no reply on my issue report: https://github.com/containous/traefik/issues/1953
EDIT: I might be able to fix the issue described in 4 to get this to work. It seems the internal DNS might be conflicting with Traefik.
Someone decided that on our internal DNS, zones would be added per subdomain, meaning that the SOA request returned the subdomain as the name. This does not play nice with CloudFlare, since the internal DNS zone is not the same as the CloudFlare DNS.
Changing this to a main zone with A records for the subdomains fixed the issue (in combination with the delayDontCheckDNS option).
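For reference, a hedged sketch of what the relevant acme section of traefik.toml might have looked like in the Traefik 1.x releases this question is about; the exact option names (dnsProvider, delayDontCheckDNS) and the CloudFlare credential environment variables (CLOUDFLARE_EMAIL, CLOUDFLARE_API_KEY) depend on the version, and the domain and email are placeholders:
[acme]
email = "admin@domain.com"
storage = "acme.json"
entryPoint = "https"
# request the certificate via the DNS-01 challenge through CloudFlare
# (credentials assumed to come from CLOUDFLARE_EMAIL / CLOUDFLARE_API_KEY)
dnsProvider = "cloudflare"
# wait instead of pre-checking the TXT record against the local resolver,
# which is what conflicted with the internal DNS described above
delayDontCheckDNS = 10
[[acme.domains]]
  main = "sub2.domain.com"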

icCube XMLA requests in SSL (HTTPS)

SSL is enabled in iccube.xml using a self-signed certificate. As described in the icCube documentation, 8443 is assigned to the sslPortNumber variable, the certificate location is assigned to sslKeyStorePath, and the right password is specified for sslKeyStorePassword. The web administration application and gvi requests work fine when using 8443 as the port in the URL.
Unfortunately, sending XMLA requests in SSL using the same port (8443) does not work. What's wrong with the configuration file? Is there another section to fill out in the configuration file that is specific to XMLA? Is there a special way to build the calling URL?
Thank you

How to manage SSL key for self host HTTPS

I have a Windows service that listens for HTTPS requests on an end user's machine. Is there an accepted way of creating or distributing the private key in this circumstance? Should I be packaging a real key made specifically for localhost requests (e.g. local.mydomain.com) or generating a self-signed key and adding a trusted root CA at install time?
If it matters, the service uses Nancy Self Host for handling the requests and it runs as the SYSTEM user. We have a web app running over HTTPS that will be making CORS requests to the service, and the user will be using it in a standard web browser (>=IE10). Only the machine onto which the service is installed will be making requests to it.
Thanks.
I have 2 options for you: doing it the right way, and not doing it at all.
The right way
(Warning: it costs plenty)
Let's say your application is hosted in the cloud, under kosunen.fi. The main portion is served from the cloud at https://www.kosunen.fi.
Buy DNS delegation for this domain. Resolve localhost-a7b3ff.kosunen.fi to either 127.0.0.1 / ::1 or the actual client's local IP address 10.0.0.63 / fe80::xxxx.
Buy a subCA or get a mass certificate purchase agreement, then issue certificates (the former) or have certificates issued (the latter) on demand for each localhost-a7b3ff.kosunen.fi. These certificates will emanate from a trusted global CA and are therefore trusted by all browsers. Each certificate is only used by one PC.
Set up CORS/XSRF/etc bits for *.kosunen.fi.
Done.
Not doing it
Realise that localhost traffic is, in practice, quite secure. Browsers typically refuse http://localhost and http://127.0.0.1 URLs (to prevent JS loaded from the internet from probing your local servers).
You'll still need at least one DNS entry, localhost.kosunen.fi, that resolves to 127.0.0.1 / ::1; browsers will happily accept http://localhost.kosunen.fi even though the host name resolves to 127.0.0.1.
What are the risks?
Someone running Wireshark on the client machine -- if someone has those privileges, your model is done for anyway.
Someone hijacks or poisons DNS -- sets it up so that www.kosunen.fi resolves to the correct IP, but localhost.kosunen.fi resolves to their internet IP. They steal the requests the user's browser makes and can inject JS.
Mitigate that ad hoc -- only serve data from localhost, not scripts. Set up restrictive CORS/XSRF/CSRF.
You are still left with CORS for HTTP x HTTPS; there are solutions to that.
Super-simple CORS
Here it's between 2 ports, 4040 and 5050, which is just as distinct as between different hosts (localhost vs your.com) or protocols (HTTPS vs HTTP). This is the cloud server:
import bottle

@bottle.route("/")
def foo():
    # serve a page whose inline JS fetches JSON from the localhost server
    return """
<html><head></head>
<body><p id="42">Foobar</p>
<script>
fetch("http://localhost:5050/").then(
    function(response) {
        console.log("status " + response.status);
        response.json().then(
            function(data) {
                console.log(data);
            }
        );
    }
);
</script>
</body></html>""".strip()

bottle.run(host="localhost", port=4040, debug=True)
And this is the localhost server:
import bottle

@bottle.route("/")
def foo():
    # wildcard Allow-Origin is unsafe! see the note below about echoing a validated Origin instead
    bottle.response.headers["Access-Control-Allow-Origin"] = "*"
    bottle.response.headers["Access-Control-Allow-Methods"] = "HEAD, GET, POST, PUT, OPTIONS"
    bottle.response.headers["Access-Control-Allow-Headers"] = "Origin, Accept, Content-Type, X-Requested-With, X-CSRF-Token"
    return """{"foo": 42}"""

bottle.run(host="localhost", port=5050, debug=True)
Making it safe(r): in the localhost server, read the request Origin, validate it, e.g. startswith("https://your.com/"), and then return the same Allow-Origin as the request Origin. IMO that ensures that a compliant browser will only serve your localhost content to JS loaded in the your.com context. A broken browser, or any program running on the same machine, can of course trick your server.
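A minimal sketch of that idea, reusing the bottle setup above (the allowed origin is a placeholder):
import bottle

ALLOWED_ORIGIN = "https://your.com"  # placeholder for the real web app origin

@bottle.route("/")
def foo():
    origin = bottle.request.headers.get("Origin", "")
    # only echo the Origin back if it belongs to the trusted web app
    if origin.startswith(ALLOWED_ORIGIN):
        bottle.response.headers["Access-Control-Allow-Origin"] = origin
    return """{"foo": 42}"""

bottle.run(host="localhost", port=5050, debug=True)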
The best way to go about this is to create a self-signed certificate on your local hostname and add an exception to that in IE.
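On recent Windows versions, a hedged sketch of that step using PowerShell (the host name is a placeholder; the certificate still has to be trusted separately, e.g. by adding it to the Trusted Root store or as an IE exception):
# create a self-signed certificate for the local host name in the machine store
New-SelfSignedCertificate -DnsName "local.mydomain.com" -CertStoreLocation "Cert:\LocalMachine\My"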
There are a few services that offer 'localhost over SSL', but these all require the private key to be shared by all users of the service, effectively making the key unusable from a security perspective. You might not care about that too much as long as you only allow connections on the local network interface, but CAs try to revoke these certificates as they compromise the integrity of SSL (see for instance http://readme.localtest.me/).
It should be possible to make a mixed-content (HTTPS to HTTP) CORS request on IE11 (see https://social.technet.microsoft.com/Forums/ie/en-US/ffec5b73-c003-4ac3-a0fd-d286d9aee89a/https-to-http-cors-issue-in-ie10?forum=ieitprocurrentver). You just have to make sure that both sites are in a trusted zone and allow mixed-content. I'm not so sure if the web application can / should be trusted to run mixed-content, so the self-signed certificate is more explicit and provides a higher level of trust!
You can probably not use a real certificate signed by a trusted CA since there is no way for a CA to validate your identity based on a hostname that resolves to 127.0.0.1. Unless you create a wildcard certificate on a domain name (i.e. *.mydomain.com) that also has a subdomain local.mydomain.com resolving to your local machine, but this might interfere with existing SSL infrastructure already in place on mydomain.com.
If you already have a certificate on a hostname, it might be possible to set that hostname to 127.0.0.1 in your client's hosts file to work around this, but again this is more trouble and less explicit than just making a self-signed certificate and trusting that.
BTW: Don't rely on the security of the localhost hostname for the self-signed key as discussed here: https://github.com/letsencrypt/boulder/issues/137. The reason behind this is that some DNS resolvers send localhost requests to the network. A misconfigured or compromised router could then be used to intercept the traffic.

Amazon EC2 + SSL

I want to enable SSL on an EC2 instance. I know how to install a third-party SSL certificate. I have also enabled SSL in the security group.
I just want to use a URL like this: ec2-xx-xxx-xxx-xx.compute-1.amazonaws.com with https.
I couldn't find the steps anywhere.
It would be great if someone could direct me to some document or something.
Edit:
I have an instance on EC2 on which I have installed LAMP. I have also enabled http, https and ssh in the security group policy.
When I open the public DNS URL in a browser, I can see the web server running perfectly.
But when I add https to the URL, nothing happens.
Is there something I am missing? I really don't want to use a custom domain on this instance because I will terminate it after a month.
For development, demos, and internal testing (which is a common case for me) you can achieve demo-grade HTTPS on EC2 with tunneling tools. Within a few minutes, especially for internal testing purposes, ngrok would give you HTTPS (demo-grade; traffic goes through the tunnel).
Tool 1: https://ngrok.com Steps:
Download ngrok to your EC2 instance: wget https://bin.equinox.io/c/4VmDzA7iaHb/ngrok-stable-linux-amd64.zip (the link at the time of writing; you will see it on the ngrok home page once you log in).
Enable 8080, 4443, 443, 22, 80 in your AWS security group.
Register and log in to ngrok, then copy the command to activate it with your token: ./ngrok authtoken shjfkjsfkjshdfs (you will see it on their home page once you log in).
Run your HTTP (non-HTTPS) server (any: nodejs, python, whatever) on EC2.
Run ngrok: ./ngrok http 80 (or a different port if your simple HTTP server runs on a different port).
You will get an https link to your server.
Tool 2: Cloudflare Warp
Alternatively, I think you can use Cloudflare Warp instead of ngrok, but I haven't tried it.
Tool 3: localtunnel
A third alternative could be https://localtunnel.github.io, which, as opposed to ngrok, can provide you a subdomain for free. It's not permanent, but you can ask for a specific subdomain rather than a random string.
--subdomain request a named subdomain on the localtunnel server (default is random characters)
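For example, a hypothetical invocation (the subdomain name is a placeholder) could look like:
# expose local port 80 under a requested subdomain via localtunnel
lt --port 80 --subdomain mydemo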
Tool 4: https://serveo.net/
It turns out that Amazon does not provide SSL certificates for their EC2 instances out of the box. I had overlooked the fact that they are a virtual server provider.
To install an SSL certificate, even a basic one, you need to get it from someone and install it manually on your server.
I used startssl.com. They provide free basic SSL certificates.
Create a self-signed SSL certificate using openssl (a sketch of the command is shown after these steps). Check this link for more information.
Install that certificate on your web server. As you have mentioned LAMP, I guess it is Apache, so check this link for installing SSL on Apache.
If you stop and start your instance, you will get a different public DNS, so be aware of this, or attach an Elastic IP address to your instance.
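A minimal sketch of the openssl step mentioned above, with placeholder file paths and the instance's public DNS name as the subject:
# generate a self-signed certificate and key, valid for one year
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -keyout /etc/pki/tls/private/server.key \
    -out /etc/pki/tls/certs/server.crt \
    -subj "/CN=ec2-xx-xxx-xxx-xx.compute-1.amazonaws.com"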
But When I add https to URL, nothing happens.
Correct; your web server needs to have an SSL certificate and private key installed to serve traffic over HTTPS. Once that is done, you should be good to go. Also, if you use a self-signed cert, your web browser will complain about a non-trusted certificate. You can ignore that warning and proceed to access the web page.
You can enable SSL on an EC2 instance without a custom domain using a combination of Caddy and nip.io.
nip.io allows you to map any IP address to a hostname without needing to edit a hosts file or create rules in DNS management.
Caddy is a powerful open source web server with automatic HTTPS.
Install Caddy on your server
Create a Caddyfile and add your config (this config will forward all requests to port 8000)
<EC2 Public IP>.nip.io {
    reverse_proxy localhost:8000
}
Start Caddy using the command caddy start
You should now be able to access your server over https://<IP>.nip.io
I wrote an in-depth article on the setup here: Configure HTTPS on AWS EC2 without a Custom Domain