Internal and external services running behind Traefik in Docker Swarm mode - traefik

I'm having some trouble finding any way to make my situation workable. I have 2 applications:
1: External service web application running on sub1.domain.com. If I run this application behind traefik with acme (LetsEncrypt) it works fine. I have a few more backend services (api/auth) that all run with a valid LetsEncrypt certificate and get their http traffic redirected to https by traefik:
[entryPoints.http.redirect]
entryPoint = "https"
I have to have some form of http to https forwarding for this service.
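For reference, that redirect lives inside the entryPoints block of traefik.toml; a minimal sketch of the whole block, assuming Traefik 1.x on the standard ports:

[entryPoints]
  [entryPoints.http]
  address = ":80"
    # send all plain-http traffic to the https entrypoint
    [entryPoints.http.redirect]
    entryPoint = "https"
  [entryPoints.https]
  address = ":443"
    [entryPoints.https.tls]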
2: Internal service web application running on sub2.domain.com. I have a self-signed trusted certificate (from an internal CA) which works fine behind traefik if I set it as the default certificate, or if I use it in the application itself (inside Tomcat). Since it is an internal service I can live without SSL here if that solves my problem, but that does not work with traefik's http to https forwarding either.
I have been trying to get these 2 services to run behind the same traefik instance, but every scenario I could think of fails because the required features are either still a work in progress or just plain not working.
Scenarios
1: No http to https redirect, don't bother with https for the internal service and just use http. Then inside the backend for the external webservice redirect to https.
Problems:
Unable to have two ports which traefik forwards to.
Unable to forward a single port to another protocol (since the backend is always either the http or the https port).
2: Use ACME over default certificate
Someone else thought this was a good idea; it's just not working yet.
3: Re-use the backend's SSL certificate. Have traefik just forward without SSL termination. I'm not sure if this is the same thing, but there is an option called "passTLSCert". However, it seems this is only possible with frontends defined in the .toml file, which do not work for me (probably because I use docker for the backends).
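For reference, a file-defined frontend with that option would look roughly like this (a sketch in Traefik 1.x syntax; the backend URL is a placeholder, and as said, this route did not work for my docker backends):

[frontends]
  [frontends.internal]
  backend = "internal"
  # the option mentioned above; only available on file-defined frontends
  passTLSCert = true
    [frontends.internal.routes.main]
    rule = "Host:sub2.domain.com"

[backends]
  [backends.internal]
    [backends.internal.servers.server1]
    url = "https://10.0.0.5:8443"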
4: use DNS-01 challenge to create an SSL certificate for my internal service.
Sounds like this could work, so I'm now using CloudFlare and have an API key. However, it does not seem to work for subdomains, and there is no reply on my issue report: https://github.com/containous/traefik/issues/1953
EDIT: I might be able to fix the issue described in 4 to get this to work. It seems the internal DNS might be conflicting with traefik.

Someone had decided that in our internal DNS, zones would be added per subdomain, meaning that an SOA query returned the subdomain as the zone name. This does not play nice with CloudFlare, since the internal DNS zone is then not the same as the CloudFlare DNS zone.
Changing this to a single main zone with A records for the subdomains fixed the issue (in combination with the delayDontCheckDNS option).
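For anyone hitting the same thing, the relevant acme section ended up looking roughly like this (a sketch for Traefik 1.x with the CloudFlare provider; the domain names are placeholders, and CLOUDFLARE_EMAIL / CLOUDFLARE_API_KEY must be set in traefik's environment):

[acme]
email = "admin@domain.com"
storage = "acme.json"
entryPoint = "https"
dnsProvider = "cloudflare"
# skip traefik's own pre-check of the TXT record (which would hit the
# conflicting internal DNS) and simply wait before asking LetsEncrypt
delayDontCheckDNS = 30

[[acme.domains]]
  main = "domain.com"
  sans = ["sub1.domain.com", "sub2.domain.com"]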

Related

HTTPS Connection over LAN

I am new to server management and all that HTTP stuff. I am setting up an internal server at home to serve websites internally. My website needs to register a service worker, and for that I'll need an SSL certificate and an HTTPS connection, which seems impossible in my case as all localhost or internal IPs are served over HTTP, or over HTTPS with untrusted SSL certificates.
If anyone could suggest a way around serving websites over HTTPS with trusted certificates so that the service worker can be used, that would be great.
Note: I'll be using Xampp Apache for my Linux server with a static internal IP.
If you need a cert trusted by any client, I'd say there is no way.
But if you only need a cert trusted by your own clients, there is a way to do that.
I guess you issued a self-signed cert for your Apache. In that case, you just install that cert on your clients.
For example, the following link describes the case where the client is Chrome on Windows:
https://peacocksoftware.com/blog/make-chrome-auto-accept-your-self-signed-certificate
If you use a programming language as a client, you may need another way to install the cert.
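To give an idea of the server side, generating such a self-signed cert could look like this (a sketch assuming OpenSSL 1.1.1 or newer; the hostname and IP are placeholders for your server's):

# one self-signed cert, valid one year, with SANs so modern browsers accept it
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout server.key -out server.crt \
  -subj "/CN=myserver.lan" \
  -addext "subjectAltName=DNS:myserver.lan,IP:192.168.1.10"

Point Apache's SSLCertificateFile / SSLCertificateKeyFile at these files, then import server.crt into each client's trust store as in the link above.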

Kubernetes/Ingress/TLS - block access with IP Address in URL

A pod is accessible via nginx-ingress and https://FQDN. That works well with the configured public certificates. But if someone uses https://IP_ADDRESS - he will get a certificate error because of the default "Kubernetes Fake Certificate". Is it possible to block access completely using the IP_ADDRESS url?
I think you would first need the TLS handshake to complete before Nginx could deny access.
On the other hand, HAProxy may be able to close the connection while checking the ServerName, say by setting some ACL in your https frontend that routes applications to their backends. Though I'm not sure this would be doable without mounting a custom HAProxy configuration template into your ingress controller.
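To sketch the idea (a fragment, not a complete config; the hostname and backend address are hypothetical):

frontend https
    bind :443
    mode tcp
    # wait (up to 5s) for the TLS ClientHello so the SNI can be read
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    # route only the expected server name; a bare-IP request carries no
    # SNI, matches nothing, and with no default_backend defined the
    # connection is simply closed
    use_backend app if { req.ssl_sni -i app.example.com }

backend app
    mode tcp
    server app1 10.0.0.12:443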

How to manage SSL key for self-hosted HTTPS

I have a Windows service that listens for https requests on an end user's machine. Is there an accepted way of creating or distributing the private key in this circumstance? Should I be packaging a real key specifically made for localhost requests (e.g. local.mydomain.com), or generating a self-signed key and adding a trusted root CA at install time?
If it matters, the service uses Nancy Self Host for handling the requests and runs as the SYSTEM user. We have a web app running over https that will be making CORS requests to the service; the user will be using it in a standard web browser (>=IE10). Only the machine onto which the service is installed will be making requests to it.
Thanks.
I have two options for you: doing it the right way, and not doing it at all.
The right way
(Warning: it costs plenty.)
Let's say your application is hosted in the cloud, under kosunen.fi. Main portion is served from the cloud at https://www.kosunen.fi.
Buy DNS delegation for this domain. Resolve localhost-a7b3ff.kosunen.fi to either 127.0.0.1 / ::1 or to the actual client's local IP address 10.0.0.63 / fe80::xxxx.
Buy a subCA, or get a mass certificate purchase agreement, and issue certificates (former) or get certificates issued (latter) on demand for each localhost-a7b3ff.kosunen.fi. These certificates will emanate from a trusted global CA and are therefore trusted by all browsers. Each certificate is only used by one PC.
Set up CORS/XSRF/etc bits for *.kosunen.fi.
Done.
Not doing it
Realise that localhost traffic is, in practice, quite secure. Browsers typically refuse http://localhost and http://127.0.0.1 URLs (to prevent JS loaded from the internet from probing your local servers).
You'll still need at least one DNS entry, localhost.kosunen.fi, that resolves to 127.0.0.1 / ::1; browsers will happily accept http://localhost.kosunen.fi even though the host name resolves to 127.0.0.1.
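In zone-file terms those entries are just (kosunen.fi being the example domain from above):

localhost.kosunen.fi.  3600  IN  A     127.0.0.1
localhost.kosunen.fi.  3600  IN  AAAA  ::1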
What are the risks?
Someone running wireshark on the client machine -- if someone has those privileges, your model is done for anyway.
Someone hijacks or poisons DNS -- sets it up so that www.kosunen.fi resolves to the correct IP, but localhost.kosunen.fi resolves to their internet IP. They steal the requests the user's browser makes and can inject JS.
Mitigate that ad hoc -- only serve data from localhost, not scripts, and set up restrictive CORS/XSRF/CSRF.
You are still left with CORS between HTTP and HTTPS; there are solutions to that.
Super-simple CORS
Here it is between two ports, 4040 and 5050, which is just as distinct as between different hosts (localhost vs your.com) or protocols (HTTPS vs HTTP). This is the cloud server:
import bottle

@bottle.route("/")
def foo():
    # Page served from the "cloud" origin; its JS fetches from the
    # localhost origin on another port, which is a cross-origin request.
    return """
<html><head></head>
<body><p id="42">Foobar</p>
<script>
fetch("http://localhost:5050/").then(
    function(response) {
        console.log("status " + response.status);
        response.json().then(
            function(data) {
                console.log(data);
            }
        );
    }
);
</script>
</body></html>""".strip()

bottle.run(host="localhost", port=4040, debug=True)
And this is localhost server:
And this is localhost server:
import bottle

@bottle.route("/")
def foo():
    # Wide-open CORS headers -- unsafe, see the note below!
    bottle.response.headers["Access-Control-Allow-Origin"] = "*"
    bottle.response.headers["Access-Control-Allow-Methods"] = "HEAD, GET, POST, PUT, OPTIONS"
    bottle.response.headers["Access-Control-Allow-Headers"] = "Origin, Accept, Content-Type, X-Requested-With, X-CSRF-Token"
    return """{"foo": 42}"""

bottle.run(host="localhost", port=5050, debug=True)
Making it safe(r): in the localhost server, read the request's Origin header, validate it, e.g. startswith("https://your.com/"), and then return the same Allow-Origin as the request's Origin. IMO that ensures a compliant browser will only serve your localhost content to JS loaded in a your.com context. A broken browser, or any program running on the same machine, can of course still trick your server.
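A minimal sketch of that check in the localhost server (the allowed origin is the example domain from above):

import bottle

ALLOWED_ORIGINS = {"https://your.com"}  # example origin, adjust to your site

@bottle.route("/")
def foo():
    origin = bottle.request.headers.get("Origin", "")
    if origin in ALLOWED_ORIGINS:
        # echo back only a validated Origin instead of "*"
        bottle.response.headers["Access-Control-Allow-Origin"] = origin
    return """{"foo": 42}"""

bottle.run(host="localhost", port=5050, debug=True)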
The best way to go about this is to create a self-signed certificate on your local hostname and add an exception to that in IE.
There are a few services that offer 'localhost over SSL', but these all require the private key to be shared by all users using the service, effectively making the key unusable from a security perspective. You might not care about that too much as long as you only allow connections on the local network interface, but CA's try and revoke these certificates as they compromise the integrity of SSL (see for instance http://readme.localtest.me/).
It should be possible to make a mixed-content (HTTPS to HTTP) CORS request on IE11 (see https://social.technet.microsoft.com/Forums/ie/en-US/ffec5b73-c003-4ac3-a0fd-d286d9aee89a/https-to-http-cors-issue-in-ie10?forum=ieitprocurrentver). You just have to make sure that both sites are in a trusted zone and allow mixed-content. I'm not so sure if the web application can / should be trusted to run mixed-content, so the self-signed certificate is more explicit and provides a higher level of trust!
You probably cannot use a real certificate signed by a trusted CA, since there is no way for a CA to validate your identity based on a hostname that resolves to 127.0.0.1. Unless you create a wildcard certificate for a domain name (i.e. *.mydomain.com) that also has a subdomain local.mydomain.com resolving to your local machine, but this might interfere with existing SSL infrastructure already in place on mydomain.com.
If you already have a certificate for a hostname, it might be possible to point that hostname to 127.0.0.1 in your client's hosts file to work around this, but again this is more trouble and less explicit than just making a self-signed certificate and trusting it.
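For reference, that workaround is a single line in the hosts file (/etc/hosts on Linux, C:\Windows\System32\drivers\etc\hosts on Windows; the hostname is the example from the question):

127.0.0.1    local.mydomain.com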
BTW: Don't rely on the security of the localhost hostname for the self-signed key, as discussed here: https://github.com/letsencrypt/boulder/issues/137. The reason is that some DNS resolvers send localhost requests out to the network; a misconfigured or compromised router could then be used to intercept the traffic.

SSL: where is the certificate hosted? When does verification occur?

I am quite confused here:
I use DNSMadeEasy to manage my DNS. I have two apps.
One is Heroku hosted, and has https on https://example.com - Heroku has many great tutorials to setup the certificate, it hasn't been a problem.
The other one is a wordpress, hosted in 1and1 (though it shouldn't matter here), and is reachable at http://subdomain.example.com and we want it to be available at https://subdomain.example.com
1and1 does sell SSL certificates, but their automated setup only works when one also uses their DNS services, or so they say. Their support says it should be DNSMadeEasy hosting our SSL certificate. I have the feeling that is not true, because for https://example.com, DNSMadeEasy was never involved.
Questions:
When does certificate querying occur? Before, after, or in parallel with DNS resolution?
Who is hosting a certificate? The DNS provider? The server (accessible like a sitemap.xml at the root for instance)? A third party?
To enlarge the case: in general, if I have a personal server with a fixed IP, how can I communicate over https with a valid certificate?
In my case, how can I get my way out of it to make https://subdomain.example.com work?
You are right for not believing the 1and1 suggestion.
To answer your questions:
When does certificate querying occur? Before, after, or in parallel with DNS resolution?
A client resolves domain name to an IP address first. So DNS resolution happens first.
Who is hosting a certificate?
The server (in simplistic terms) hosts the certificate.
When a client wants to connect to your site (via HTTPS) it will first establish a secure connection with that IP address on port 443 (this is why usually (without SNI) you can only have one SSL certificate per IP address). As part of this process (which is called handshake) a client can also specify a server name (so-called server name extension) - this is a domain name of your site. This is useful if you have an SSL certificate that is valid for multiple domains.
A good/detailed explanation how it works can be found here
http://www.moserware.com/2009/06/first-few-milliseconds-of-https.html
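You can watch both steps from the command line (using the subdomain from the question; DNS lookup first, then a handshake that carries the name again in the SNI extension):

# 1. DNS resolution happens first
dig +short subdomain.example.com
# 2. then the TLS handshake against the resolved address; -servername
#    sets the SNI so the server can pick the matching certificate
openssl s_client -connect subdomain.example.com:443 -servername subdomain.example.com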
if I have a personal server with a fixed IP, how can I communicate over https with a valid certificate?
Your server will need to be able to respond on port 443 and have/host an SSL certificate for a domain that resolves to that IP address.
In my case, how can I get my way out of it to make https://subdomain.example.com work?
You need to purchase a certificate for subdomain.example.com and install it on the wordpress server.
Usually in a hosted solution like yours you have two options:
Buy the SSL certificate via the provider (1and1 in your case) - a simpler option, they will configure everything for you.
Buy the SSL certificate yourself. Here you will most likely need to log in to your 1and1/Wordpress management interface and generate a CSR (essentially a certificate request). Then you purchase the SSL certificate using this CSR, and you can install it via the same management interface.
The process will look similar to this:
http://wpengine.com/support/add-ssl-site/
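If you end up generating the CSR yourself rather than via the management interface, it could look like this (a sketch; file names and key size are just reasonable defaults):

# new private key plus a certificate signing request for the subdomain
openssl req -new -newkey rsa:2048 -nodes \
  -keyout subdomain.example.com.key \
  -out subdomain.example.com.csr \
  -subj "/CN=subdomain.example.com"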

Amazon EC2 + SSL

I want to enable SSL on an EC2 instance. I know how to install a third-party SSL certificate, and I have enabled SSL in the security group.
I just want to use a url like this: ec2-xx-xxx-xxx-xx.compute-1.amazonaws.com with https.
I couldn't find the steps anywhere.
It would be great if someone can direct me to some document or something.
Edit:
I have an instance on EC2, on which I have installed LAMP. I have also enabled http, https and ssh in the security group policy.
When I open the Public DNS URL in a browser, I can see the web server running perfectly.
But when I add https to the URL, nothing happens.
Is there a step I am missing? I really don't want to use a custom domain on this instance because I will terminate it after a month.
For development, demos, and internal testing (which is a common case for me), you can achieve demo-grade HTTPS on EC2 with tunneling tools. Especially for internal testing purposes, ngrok gives you HTTPS within a few minutes (demo-grade: the traffic goes through the tunnel).
Tool 1: https://ngrok.com
Steps:
Download ngrok to your EC2 instance: wget https://bin.equinox.io/c/4VmDzA7iaHb/ngrok-stable-linux-amd64.zip (correct at the time of writing; you will see the current link on the ngrok home page once you log in).
Enable 8080, 4443, 443, 22, 80 in your AWS security group.
Register, log in to ngrok, and copy the command that activates it with your token: ./ngrok authtoken shjfkjsfkjshdfs (you will see it on their home page once you log in).
Run your plain-http server (nodejs, python, whatever) on the EC2 instance.
Run ngrok: ./ngrok http 80 (or a different port if your http server listens on a different port).
You will get an https link to your server.
Tool 2: Cloudflare Warp
Alternatively, I think you can use an alternative to ngrok called Cloudflare Warp (cloudflared), but I haven't tried it.
Tool 3: localtunnel
A third alternative could be https://localtunnel.github.io which, as opposed to ngrok, can give you a subdomain for free. It's not permanent, but you can ask for a specific subdomain rather than a random string.
--subdomain request a named subdomain on the localtunnel server (default is random characters)
Tool 4: https://serveo.net/
It turns out that Amazon does not provide SSL certificates for EC2 instances out of the box. I had skipped over the fact that they are a virtual server provider.
To install an SSL certificate, even a basic one, you need to get it from someone and install it manually on your server.
I used startssl.com. They provide free basic SSL certificates.
Create a self-signed SSL certificate using openssl. Check this link for more information.
Install that certificate on your web server. As you have mentioned LAMP, I guess it is Apache, so check this link for installing SSL on Apache.
In case you reboot your instance you will get a different public DNS name, so be aware of this, or attach an Elastic IP address to your instance.
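Once you have the certificate and key, the Apache side is a small vhost entry (a sketch; paths and DocumentRoot are typical Debian/Ubuntu defaults, and the ServerName is the public DNS name from the question):

# e.g. /etc/apache2/sites-available/default-ssl.conf
<VirtualHost *:443>
    ServerName ec2-xx-xxx-xxx-xx.compute-1.amazonaws.com
    SSLEngine on
    SSLCertificateFile /etc/ssl/certs/server.crt
    SSLCertificateKeyFile /etc/ssl/private/server.key
    DocumentRoot /var/www/html
</VirtualHost>

Enable it with a2enmod ssl and a2ensite default-ssl, then restart Apache.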
But When I add https to URL, nothing happens.
Correct, your web server needs an SSL certificate and private key installed to serve traffic on https. Once that is done, you should be good to go. Also, if you use a self-signed cert, your web browser will complain about a non-trusted certificate; you can ignore that warning and proceed to the web page.
You can enable SSL on an EC2 instance without a custom domain using a combination of Caddy and nip.io.
nip.io allows you to map any IP address to a hostname without the need to edit a hosts file or create rules in DNS management.
Caddy is a powerful open source web server with automatic HTTPS.
Install Caddy on your server
Create a Caddyfile and add your config (this config will forward all requests to port 8000)
<EC2 Public IP>.nip.io {
reverse_proxy localhost:8000
}
Start Caddy using the command caddy start
You should now be able to access your server over https://<IP>.nip.io
I wrote an in-depth article on the setup here: Configure HTTPS on AWS EC2 without a Custom Domain