I want to migrate existing web infrastructure containing multiple websites and services into Docker containers. Those websites are reachable from many different public domains. I'm using Traefik 2.9 as a reverse proxy to route to the services/containers, but it's not clear how to configure the TLS certificate individually for each service. I have certificates stored as CER/KEY files for each public domain. The global tls.certificates section is a flat list, and in the entryPoint's TLS section there is no place to assign certificates. Do you have any idea how to get this done right, or maybe Traefik isn't the right solution?
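In Traefik 2.x the flat tls.certificates list in the dynamic configuration is normally all that's needed: Traefik selects the matching certificate per request via SNI, so nothing has to be assigned per service or per entryPoint. A minimal sketch of such a dynamic configuration file, assuming hypothetical domain names and paths:

# dynamic/certs.yml, loaded via Traefik's file provider; domains and paths are examples
tls:
  certificates:
    - certFile: /certs/site-one.example.cer
      keyFile: /certs/site-one.example.key
    - certFile: /certs/site-two.example.cer
      keyFile: /certs/site-two.example.key
  stores:
    default:
      # optional: served when no certificate's SAN matches the requested SNI
      defaultCertificate:
        certFile: /certs/site-one.example.cer
        keyFile: /certs/site-one.example.key

Routers then only need TLS enabled (tls: {}); Traefik serves whichever certificate's SAN matches the requested hostname.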
Related
I have 3 servers: one is an IIS-ARR load balancer, and two are IIS web servers hosting my website. I want to run the website over HTTPS. So how many SSL certificates are needed, on which server do I create the SSL CSR request, and on which servers do I install them?
I found a solution for this in another question. I need to use a certificate with Subject Alternative Names (SANs), which is authorized for multiple domains.
Such a certificate can be installed on the server where the CSR was created (so that it is paired with the private key), then exported as a bundle including the private key and imported on all the other servers involved.
So just one certificate solves the problem.
I am developing a SaaS web application (https://mywebsite.example) which will be hosted in AWS and will have subdomains for individual customers, like https://customer1.mywebsite.example and https://customer2.mywebsite.example.
As a second step I would like to introduce custom domain names and map them to the subdomains of mywebsite.example through CNAME records:
https://customer1.example --> https://customer1.mywebsite.example
Here is what I have analysed so far:
Using certificates on the AWS load balancer with the custom domains as SANs in the certificate. However, the AWS load balancer certificate limits are lower than the number of customers I expect to add.
A Cloudflare DNS setup for mywebsite.example and its subdomains, with SSL certificates configured in Cloudflare. However, Cloudflare allows third-party (custom domain) CNAME redirections only on the Enterprise plan.
Is there any other alternative service, or is there another way of achieving this use case?
It seems that this solution, available in the AWS Marketplace, should solve your problem.
You can try it; there is a trial available. It's called Kilo SSL:
https://aws.amazon.com/marketplace/pp/prodview-nedlvgpke4hdk?sr=0-1&ref_=beagle&applicationId=AWSMPContessa
It is also possible to map your customers' domains to your SaaS. The algorithm is:
Create an EC2 instance, then allocate and associate a public IP to it.
Create a domain name that points to this instance. You will use this domain name as the CNAME target when pointing your own subdomains in your DNS provider (but there is a limit of 50 certificates per week per domain, so you can only create 50 subdomains like customer1.yourdomain.com ... customer50.yourdomain.com per week).
For customers who want to use their own domains (like app.customer1.com), you also provide them your CNAME target and ask them to set the DNS record. After they do, you will be able to create a certificate for their domain using this service.
This service also allows pointing different domains to different URLs. We started using this in our SaaS application for URL shortening: we have several hundred customers who use their own domains, so we can create certificates for them automatically, and everything is automated via the API. We also use the same machine to provide SSL for all our company's domains.
Available API methods: https://docs.kilossl.com/
I have an application that is internal and exposed to other applications on the cluster only through a ClusterIP service. Other services access this application via its DNS name (serviceName.namespace.svc.cluster.local). This application handles sensitive data, so although all the communication stays inside the cluster, I would like to use TLS to secure the connections to it.
My question is: how can I enable TLS on a service? Does something already exist for this, or should I handle it in the application code? Also, is there already a CA on the cluster that can sign certificates for .svc.cluster.local?
To clarify, I know I can use an Ingress for this purpose. The only problem is keeping this service internal, so that only services inside the cluster can access it.
I just found that the Kubernetes API can be used to issue a certificate that will be trusted by all the pods running on the cluster. This option might be simpler than the alternatives. You can find the documentation here, including the full flow of generating a certificate and using it.
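As a rough sketch of what that flow involves (the object name and the signer are placeholders, and the base64 CSR payload is elided):

apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: my-svc.my-namespace              # placeholder name
spec:
  # base64-encoded PEM CSR whose SAN is my-svc.my-namespace.svc.cluster.local
  request: <base64-encoded-csr>
  signerName: example.com/internal-ca    # assumption: a cluster-specific signer
  usages:
    - digital signature
    - key encipherment
    - server auth

Once an administrator approves the request (kubectl certificate approve my-svc.my-namespace), the signed certificate appears in the object's status.certificate field and can be mounted into the pod alongside its private key.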
Following VonC's comments below, I think I have a solution:
Purchase a valid public domain for this service (e.g. something.mycompany.com).
Use CoreDNS to add a rewrite rule so that all requests to something.mycompany.com go to something.namespace.svc.cluster.local, as the service is not exposed externally (for my use case this could also be done with a normal A record); see the sketch after this list.
Use Nginx or something else to handle TLS with the certificate for something.mycompany.com.
This sounds pretty complicated but might work. What do you think?
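For step 2, a sketch of what the CoreDNS override could look like, assuming the stock coredns ConfigMap in kube-system and reusing the example names above:

apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        # resolve the public name to the in-cluster service
        rewrite name something.mycompany.com something.namespace.svc.cluster.local
        kubernetes cluster.local in-addr.arpa ip6.arpa
        forward . /etc/resolv.conf
        cache 30
    }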
Check if the tutorial "Secure Kubernetes Services with Ingress, TLS and LetsEncrypt" could apply to you:
Ingress can be backed by different implementations through the use of different Ingress Controllers. The most popular of these is the Nginx Ingress Controller, however there are other options available such as Traefik, Rancher, HAProxy, etc. Each controller should support a basic configuration, but can even expose other features (e.g. rewrite rules, auth modes) via annotations.
Give it a domain name and enable TLS. LetsEncrypt is a free TLS certificate authority, and using the kube-lego controller we can automatically request and renew LetsEncrypt certificates for public domain names, simply by adding a few lines to our Ingress definition!
In order for this to work correctly, a public domain name is required and should have an A record pointing to the external IP of the Nginx service.
For limiting access to the cluster domain only (svc.cluster.local), though, you might need CoreDNS.
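For the public-facing case the tutorial covers, the Ingress definition looks roughly like the sketch below (host, secret, and service names are placeholders; the tls-acme annotation is the hint kube-lego-style controllers watch for):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    kubernetes.io/tls-acme: "true"       # picked up by kube-lego / cert-manager-style controllers
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - app.example.com
      secretName: app-example-com-tls    # the issued certificate is stored and renewed here
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80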
On Google Cloud you can make the load balancer Service internal with an annotation, like this:
annotations = {
  "cloud.google.com/load-balancer-type" = "Internal"
}
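That fragment is Terraform-style. Expressed directly as a Kubernetes Service manifest, the same annotation would look roughly like this (service name, selector, and ports are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: my-internal-app
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
spec:
  type: LoadBalancer      # provisioned as an internal load balancer by the annotation
  selector:
    app: my-internal-app
  ports:
    - port: 443
      targetPort: 8443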
We have a web application that should interact with a desktop application that behaves like a helper tool (e.g. no setup, no admin privileges needed). The helper listens via HTTP/HTTPS on a simple port bound to localhost.
The web application uses an SSL certificate. Every customer has a dedicated machine for their data. For clarification: the web application runs on a server, serving one customer but multiple people.
The problem is that the web application cannot reach the helper tool via HTTPS (using an image or iframe). The main issue is that the local web server listening on localhost has no signed certificate, so the web browser blocks the interaction.
Is there any way around this? I assume I cannot get a certificate for localhost, because no one would sign it.
I know that I cannot use XMLHttpRequest for this, but that's not the point.
The goal is to have a customer-friendly, no-install, just-works solution. The customer should not do ANY configuration, just download and start the tool. We'd like to have direct communication with the tool (i.e. no outbound connection to the web server).
Is there any solution for this?
If it is an Active Directory environment, you can create your own CA, sign certificates, and distribute them across the domain. You can also add the site to trusted sites through domain policies; this way nothing needs to be configured on the client side.
We would like to set up an application on Windows Azure at abc.cloudapp.net, have a CNAME record for www.mydomain.com pointing to it, and then allow clients to do the same. Our application would then look at the requested URL and pull out the relevant data based on the requested domain (abc.theirdomain.com or www.theirotherdomain.com).
Our initial tests show that this should work; the problem is that we need the site to be secure. So we'd like clients to be able to set up shared SSL certs with us, which we would upload to our Azure subscription, allowing them to create a CNAME record (abc.theirdomain.com or www.theirotherdomain.com) that points to either www.mydomain.com or abc.cloudapp.net.
Is this possible?
Edit: I'm not sure if this is the same question as Azure web role - Multiple ssl certs pointing to a single endpoint.
We've used a multi-domain certificate in this situation - see http://www.comodo.com/business-security/digital-certificates/multi-domain-ssl.php for details. This will work for up to 100 different top-level domains.
The problem with a multi-domain certificate is that it is more expensive than a "normal" certificate and that every time you add a new domain, you will have to deploy a new package with the updated certificate.
On the other hand, you could have multiple SSL certificates (one for each domain); in that case, the answer you seek is in Azure web role - Multiple ssl certs pointing to a single endpoint.
No, I don't think your setup would be possible with a single SSL cert. In general, SSL certs are tied to the hostname (e.g. foo.domain.com and foo.domain2.com need different certs). However, you can purchase a wildcard SSL cert that will help if you use the same root domain, but different subdomains (e.g. foo.domain.com and foo2.domain.com can share a wildcard cert).
So in your case, since you are allowing different root domains, you need a different SSL cert for each. If instead you allow different subdomains on the same root domain, you can get away with a wildcard cert.