We run a lot of applications in Kubernetes and handle TLS termination inside the pod with HAProxy and a certificate generated with Let's Encrypt.
This works really well for traffic coming from outside the Kubernetes cluster because the requests use the domain name as specified in the certificate.
However, for internal communication that uses the service name - a URL like https://service-name/ - the host part of the URL no longer matches what's defined in the certificate, so the TLS handshake fails.
Is there any way to have Kubernetes' DNS system resolve the full domain name to a specific service, so the request doesn't get routed outside the cluster?
I can think of a couple of options that you could pursue:
You could run the requests that only transit the cluster over plain HTTP instead of HTTPS, if you trust the security of your cluster network.
You could have your HAProxy instance serve a different certificate to internal requests using SNI (see the sketch below). You would need a way to generate and distribute the internal certificates, but it would allow you to present the client with a certificate that matches the Kubernetes service name.
You could continue to resolve the FQDN and not worry about requests being routed out of and then back into the cluster. This actually isn't that different from the upcoming cross-cluster service discovery/federation feature being built into Kubernetes cluster federation.
There isn't really a way to inject/overwrite the external FQDN resolution to return the internal service IP.
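For the SNI option above, a minimal sketch of distributing an internal certificate to the HAProxy pod as a Kubernetes TLS Secret - all names here are placeholders, and generating the internal cert (e.g. from a private CA you trust inside the cluster) is left out:

apiVersion: v1
kind: Secret
metadata:
  name: internal-tls          # hypothetical name
type: kubernetes.io/tls
data:
  tls.crt: BASE64_CERT_FOR_THE_SERVICE_NAME
  tls.key: BASE64_KEY
---
# Fragment of the pod spec: mount the secret alongside the public cert.
# HAProxy selects the matching certificate by SNI when its bind line
# points at a directory (bind :443 ssl crt /etc/haproxy/certs/); note
# that HAProxy expects cert and key combined in a single PEM, so the
# two files may need to be concatenated in an init step.
spec:
  containers:
  - name: haproxy
    image: haproxy
    volumeMounts:
    - name: internal-tls
      mountPath: /etc/haproxy/certs/internal
      readOnly: true
  volumes:
  - name: internal-tls
    secret:
      secretName: internal-tls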
In my example I am using the https://portmap.io VPN service, which is not exactly a pure VPN service but still uses VPN technology to get around my ISP's restrictions, allowing port forwarding to my own home server running on my Android device.
So if I browse to 193.161.193.99:1200, my website comes up. Port 1200 is mapped to my local Python server running on port 1000; port 1200 is assigned by the VPN provider.
However, if I try 193.161.193.99 without port 1200, the official portmap website comes up, because that's the website's IP. So basically each user of this VPN service has their own port to work with.
Question: I don't have any public IP fully under my own control with which to get an SSL certificate, which requires file-based verification by the CA (CSR). So, is it at all possible to get an SSL certificate using 193.161.193.99:1200?
Note: services like zerossl.com will issue a certificate for a public IPv4 address, so an FQDN is not always essential to get a cert.
Yes, this is possible. You will need a domain pointing to the VPN/portmap IP, and can then obtain an SSL certificate from Let's Encrypt for that domain. This can be your own domain or one provided by a dynamic DNS service such as Duck DNS.
I'll describe how I have done it with Docker and Duck DNS in detail:
Sign in to Duck DNS, create a subdomain and point it to the VPN/portmap IP, and note the token at the top of the page.
Deploy a Docker container from LinuxServer.io's SWAG image.
Make sure to provide the required environment variables in your docker-compose.yml (or with the docker run command):
- VALIDATION=duckdns
- DUCKDNSTOKEN={your token}
- URL={yourdomain}.duckdns.org
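For reference, a minimal docker-compose.yml sketch, assuming the current LinuxServer.io image name - the domain, token, and host paths are placeholders:

version: "3"
services:
  swag:
    image: lscr.io/linuxserver/swag
    cap_add:
      - NET_ADMIN              # optional, used by the bundled fail2ban
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Etc/UTC
      - URL=yourdomain.duckdns.org
      - VALIDATION=duckdns
      - DUCKDNSTOKEN=your-token
    volumes:
      - ./config:/config       # certificates appear under config/etc/letsencrypt
    ports:
      - 443:443
      - 80:80
    restart: unless-stopped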
Note: if you want everything behind your VPN, there is a great Docker container called gluetun which allows you to run the SWAG container behind your VPN.
You will find your SSL certificates in the /config/etc/letsencrypt/live/{yourdomain}.duckdns.org folder of the SWAG container. Use those for the website/service that is running behind your forwarded port.
The certificates will be renewed automatically 30 days before they expire. There is also a PKCS#12 file, privkey.pfx, which is needed for services like Emby. For more information on SWAG see the LinuxServer.io docs. You may or may not need another container that updates the Duck DNS IP periodically; I'm not sure whether the SWAG container already does that.
All of this can of course be done without Docker and with your own domain. In that case you will need to map your domain or subdomain to the VPN IP in the DNS records section of your domain provider, and then use certbot to create certificates for that domain. Docker just automates the renewal part.
I'm currently building an application that will reside at app.mydomain.com, running on Heroku. All users will have their own entry points, like app.mydomain.com/client1, app.mydomain.com/client2, etc. I want clients to be able to set up their own domain (www.clientdomain.com) and CNAME it to their entry point. I understand this is pretty straightforward up until now.
All my DNS is handled by Cloudflare, and I believe I can configure Cloudflare into Full (Strict) mode; all I need to do is install their Origin Cert onto my Heroku dyno. This will ensure that all direct connections to my domain are secure (going to app.mydomain.com/client1).
The question is, how does a client go about getting an SSL'ed connection for their domain? Do I need to get a multi-domain cert and keep adding domains to it as I get clients, am I supposed to install their cert onto Heroku (I believe I can only install one, so that's a no-go), is it supposed to live on Cloudflare somewhere, or are there additional options I'm not seeing (I hope there are!)?
I'm not wondering what to do for my own domains, but rather how clients set up an SSL connection with their domains that resolve onto my servers.
This is rather perplexing!
The flow would be (I think):
User Browser -> Client's DNS -> (CNAME to) My Cloudflare -> Heroku
Hmm, it looks like this might be a pretty solid solution to this issue...
https://blog.cloudflare.com/introducing-ssl-for-saas/
Edit - after clarification
I'm currently building an application that will reside at
app.mydomain.com which is running on Heroku. All users will have their
own entry points, like app.mydomain.com/client1,
app.mydomain.com/client2, etc. Question is, how does a client go about
getting an SSL'ed connection for their domain; do I need to get a
multidomain cert and start adding domains to it as I get clients?
If you are going to use the same Heroku app for all of your clients (I think this is a bad idea, by the way, but you might be required to), then yes: you should get a multi-domain certificate and keep adding domains to it as your list of clients expands.
Original answer - which explains SSL + Load Balancing on Heroku.
Im currently building an application that will reside at
app.mydomain.com which is running on Heroku. I want clients to be able
to setup their own domain www.clientdomain.com and cname it to mine.
You will need a wildcard certificate to cover your subdomains (such as app.mydomain.com). You'll have to use that cert in Heroku.
...all I need to do is install their Origin Cert onto my Heroku dyno.
You are correct - except it's not on your Heroku dyno, it's on your Heroku app endpoint. There's a good read here: https://serverfault.com/questions/68753/does-each-server-behind-a-load-balancer-need-their-own-ssl-certificate
If you do your load balancing on the TCP or IP layer (OSI layer 4/3,
a.k.a L4, L3), then yes, all HTTP servers will need to have the SSL
certificate installed.
If you load balance on the HTTPS layer (L7), then you'd commonly
install the certificate on the load balancer alone, and use plain
un-encrypted HTTP over the local network between the load balancer and
the webservers (for best performance on the web servers).
So you should install your SSL certificate to your Heroku endpoint and let Heroku handle the rest.
Question is, how does a client go about getting an SSL'ed connection;
do I need to get a multidomain cert and start adding domains to it as
I get clients, am i supposed to install their cert onto Heroku (I
believe I can only install 1 so thats a no go) or is it supposed to
live on Cloudflare somewhere?
If you're referring to adding servers to your service from Heroku, all you need to do is increase the number of web dynos. Heroku will handle the load balancing between these dynos. Your SSL certificate is terminated at the load balancer, so your dynos will be serving requests for the same endpoint. You shouldn't need another SSL certificate for the endpoint you've defined, as long as you're serving traffic from multiple dynos attached to it.
Considering an Nginx reverse proxy handling TLS with Let's Encrypt certificates "in front of" a backend service, what is a good deployment architecture for this setup on Kubernetes?
My first thought was to make a container with both Nginx and my server in it, run as a StatefulSet.
All those StatefulSet pods would have access to a volume mounted at /etc/nginx/certificates.
All those containers would run a cron job and be allowed to renew those certificates.
However, I do not think this is the best approach. This type of architecture is meant to be split up, not to run completely independent services everywhere.
Maybe I should run an independent proxy service which handles certificates and forwards to the backend server deployment (an Ingress plus a Job for certificate renewal)?
If you are using a managed service (such as the GCP HTTPS Load Balancer), how do you issue a publicly trusted certificate and renew it?
You want kube-lego.
kube-lego automatically requests certificates for Kubernetes Ingress resources from Let's Encrypt
It works with GKE+LoadBalancer and with nginx-ingress as well. Usage is trivial: automatic certificate requests (including renewals), using Let's Encrypt.
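Enabling it for an Ingress is just an annotation plus a tls section; a minimal sketch, with placeholder host and names (the annotation is the one documented in the kube-lego README):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app
  annotations:
    # tells kube-lego to request and renew a certificate for the hosts below
    kubernetes.io/tls-acme: "true"
spec:
  tls:
  - hosts:
    - app.example.com
    secretName: my-app-tls     # kube-lego stores the issued certificate here
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-app
          servicePort: 80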
The README says - perhaps tongue in cheek - that you need a non-production use case. I have been using it in production and have found it reliable enough.
(Full disclosure: I'm loosely associated with the authors but not paid to advertise the product)
I am getting into load balancing and how SSL certificates can be integrated with a load balancer.
Let's say that I want to expose several copies of the same RESTful web service over Amazon Elastic Load Balancer. All should be fine and smooth up until now. However, security has not yet been taken into consideration.
Now, let's say that we want the communication to be secured with an SSL certificate, so we go ahead and buy a certificate. We will have several IP addresses which are all exposing the same RESTful server with the load balancer. These IP addresses will all get mapped to the same domain name (https://thedomain.com). This way, the clients always connect to the same domain. It is then up to the load balancer to redirect to the web service which is getting the least traffic.
The main question is: is such an architecture possible with a single SSL certificate? If so, it would be possible to scale the number of services dynamically without having to change the security setup.
It is then up to the load balancer to redirect to the web service which is getting the least traffic.
AFAIK, the ELB supports only round-robin and sticky sessions, so what you describe above will not happen.
is it possible for such an architecture with a single SSL certificate?
You can install the SSL certificate on the ELB and let it do the SSL termination; the traffic between the ELB and your web nodes is then unencrypted. You should explore AWS VPC, where you can have a public-facing ELB while your web nodes sit in a private subnet.
Also, ELB supports TCP load balancing. In that case you install the certificate on the web nodes; the ELB accepts traffic on port 443 from the internet and simply forwards it to port 443 on the web nodes, which then do the SSL encryption/decryption themselves.
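For illustration, a CloudFormation-style sketch of both variants - the certificate ARN, account id, zone, and names are placeholders:

Resources:
  WebELB:
    Type: AWS::ElasticLoadBalancing::LoadBalancer
    Properties:
      AvailabilityZones:
        - us-east-1a
      Listeners:
        # Variant 1: terminate SSL at the ELB, plain HTTP to the web nodes
        - LoadBalancerPort: "443"
          Protocol: HTTPS
          InstancePort: "80"
          InstanceProtocol: HTTP
          SSLCertificateId: arn:aws:iam::123456789012:server-certificate/thedomain
        # Variant 2 (instead of the above): TCP passthrough, with the
        # certificate installed on the web nodes themselves
        # - LoadBalancerPort: "443"
        #   Protocol: TCP
        #   InstancePort: "443"
        #   InstanceProtocol: TCP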
Hope this helps.
Context
I developed an application deployed on Glassfish 3.1. This application is accessed only over HTTPS, and sometimes it must connect to third-party web services located outside the customer's network. The customer has other applications inside his network; mine is just one new "service".
Topology approximation
The F5 BIG-IP is the SSL endpoint; the customer has the valid certificate on this device.
IIS redirects by domain to the respective service.
glassfish is the machine with the application (running, of course, on Glassfish 3.1).
How it works
When a user tries to connect to https://somedomain, the request arrives at the F5, where the SSL encryption ends; now we have a request to http://somedomain. In the next step the F5 forwards this request to the IIS, which finally forwards it to Glassfish. These requests are processed successfully.
Points of interest
I have full control over the Glassfish server and the OS of the VM where it is located. No other applications are or will be deployed on this server; it's a dedicated server for the app and some services it needs. Glassfish runs on a VM with a Debian distribution as the OS. The VM is provided by a VM server, but I don't know the brand, model, etc. Glassfish has the default HTTP listener configuration.
I don't have any more information about the network and other devices, and I can't access the configuration of any other device. I can't modify any part of the network myself, but I can ask for, suggest, or advise a change. The network's behavior should not change.
Currently, users reach the application without problems.
The certificate in use is a simple domain certificate issued by Verisign.
The customer has no idea how to solve this.
The problem
All the third-party web services the application must access are reached over HTTPS and, in some cases, the required authentication is mutual (two-way), and here we find the problem. When the application wants to connect to a web service with mutual SSL authentication, it sends the certificate configured in the Glassfish local keystore. The customer wants, if possible, to use the same cert for incoming and outgoing secure connections. This cert lives on the F5, and I can't add it to the Glassfish keystore because doing so would break the Verisign contract requirements. I've been looking for a solution on Google, here (Stack Overflow), jita, ... but I've only found solutions for incoming traffic. I understand that maybe an SSL proxy is required, but I haven't found any example or alternative solution for the outgoing SSL connections.
What I'm asking for
I'm not a native English speaker (isn't it obvious?) and maybe I'm not using the correct search terms. I understand that this situation can be a nightmare and hard to solve, but I will persist... The first thing I need to know is whether a solution (or solutions) exists for this problem, and if so, where or how I can find it (or them). I've prepared different alternatives to negotiate with the customer, but I need to know the truth. I've spent tons of hours on this.
There are a couple of solutions.
1) Pay Verisign more money for a second "license/cert". They will be happy to take your money for the "privilege". :)
2) Create a different virtual server listening on 443 which points to a pool that has the third-party server's address as the pool member. Then, on the virtual server, attach a serverssl profile configured to use the same cert you are using for the incoming connections. The F5 would then authenticate with that cert, and your app server would not need a client cert installed. Also, if they need to initiate a session to you, you would have to set up a virtual server with a clientssl profile that uses the same cert and requires a client cert to connect.
If your destinations are not static addresses, then an iRule (or iRules) would have to be created to deal with that. This can be handled in version 10 or later code with a DNS call in the iRule that sets the node for the session to go to.