I have the following infrastructure for my Artifactory web server:
Artifactory Server - myMainartifactory.myCompany.com
load balancer - myLoadbalancer01.myCompany.com
load balancer - myLoadBlancer02.myCompany.com
The SSL certificate was purchased for the domain myMainArtifactory.myCompany.com, which both load balancers serve. From some posts, including this one, I was under the impression that I could use the same private key and certificate on the load balancers.
But when I try to ping the Docker registry, it says that the certificate is for the myMainArtifactory.mycompany.com domain. How does this work?
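To see what is actually being presented, it can help to pull the certificate from each endpoint and compare its subject and Subject Alternative Names against the hostname the Docker client connects to; a client rejects the handshake unless the name it dialed appears among those names. A rough diagnostic sketch, assuming Python 3 with the cryptography package installed and network access to port 443 on the hosts named above:

```python
import ssl

from cryptography import x509
from cryptography.x509.oid import ExtensionOID

# Hostnames taken from the question above; all three should present the same
# certificate if the load balancers really reuse the key/cert issued for the
# main Artifactory domain.
for host in ("myMainartifactory.myCompany.com",
             "myLoadbalancer01.myCompany.com",
             "myLoadBlancer02.myCompany.com"):
    # Fetch the served certificate in PEM form (no validation is performed here).
    pem = ssl.get_server_certificate((host, 443))
    cert = x509.load_pem_x509_certificate(pem.encode())
    # Raises x509.ExtensionNotFound if the certificate has no SAN extension.
    sans = cert.extensions.get_extension_for_oid(
        ExtensionOID.SUBJECT_ALTERNATIVE_NAME
    ).value.get_values_for_type(x509.DNSName)
    print(f"{host}: subject={cert.subject.rfc4514_string()}, SANs={sans}")
```

If the load balancer hostnames do not appear in the SAN list, reusing the key/cert on them will keep failing for clients that address those names directly; a certificate whose SANs (or a wildcard) cover the load balancer names would be the usual fix.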
I am using Google Domains with a gcloud static bucket and an HTTPS load balancer. A while ago the load balancer was set up to serve Google-managed GTS SSL certs. This has now been revised to a self-managed third-party SSL cert. The settings on the balancer do refer to the new cert, and gcloud does list the new cert as available.
Yet the load balancer actually serves the 'old' GTS SSL cert no matter what. I've even removed the GTS cert via the console, but that did not help.
How can I force the load balancer to serve the new cert?
I have a Node.js app deployed onto an EC2 instance running on port 300, and it is exposed to the internet on ports 80 and 443 via HTTP load balancers.
My security group allows inbound traffic on 80 and 443. I have created an SSL certificate with ACM for a domain registered on GoDaddy (domain name: www.abcd-example.com).
For the validation CNAME I added these values: Host (Name): _57xxxxxxxxxxxxxxx5d, Points to (Value): _68xxxxxxxxxxx67.bxxxxxxxxxxxj.acm-validations.aws.
My ACM certificate was issued and I loaded the certificate onto the load balancer. Now when I try to access the load balancer with https://, I get this error: NET::ERR_CERT_COMMON_NAME_INVALID.
I am not sure why this is happening, as I followed all the steps mentioned in the AWS docs to the dot. Can anyone help me out with this?
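One thing worth checking: NET::ERR_CERT_COMMON_NAME_INVALID generally means the hostname in the address bar does not match any name on the certificate, for example when browsing the load balancer's own DNS name instead of www.abcd-example.com. A small boto3 sketch to confirm which names the issued ACM certificate actually covers; the region and certificate ARN below are placeholders for your own values:

```python
import boto3

# Inspect the issued ACM certificate and list the names it covers.
acm = boto3.client("acm", region_name="us-east-1")
cert = acm.describe_certificate(
    CertificateArn="arn:aws:acm:us-east-1:111122223333:certificate/example-arn"
)["Certificate"]

print("Domain:", cert["DomainName"])
print("SANs:  ", cert.get("SubjectAlternativeNames", []))
print("Status:", cert["Status"])
```

If only www.abcd-example.com is listed, the site has to be reached through that name (with a DNS record pointing it at the load balancer), not through the load balancer's default DNS name.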
We have 10 different Kubernetes pods which run inside a private VPN; these pods are HTTP serving endpoints (not HTTPS). But these services would interact with HTTPS serving endpoints. Logically, to make a call to an HTTPS serving endpoint from an HTTP serving pod, trust of the SSL server certificate is required. Hence we decided to store the SSL certificates inside each HTTP service pod to make calls to the HTTPS serving pods.
I am wondering whether there are any alternative approaches for managing SSL certificates across different pods in a Kubernetes cluster? How about kubeadm for K8s certificate management... any suggestions?
This is more of a general SSL certificate question rather than specific to Kubernetes.
If the containers/pods providing the HTTPS endpoint already have their SSL correctly configured, and the SSL certificate you are using was purchased/generated from a known, trusted CA (like Let's Encrypt or any of the known, trusted certificate companies out there), then there is no reason your other container apps that make connections to your HTTPS endpoint serving pods would need anything special stored in them.
The only exception to this is if you have your own private CA and you've generated certificates on it internally and are installing them in your HTTPS serving containers (or if you are generating self-signed certs). Your pods/containers connecting to the HTTPS endpoints would then need to know about the CA certificate. Here is a Stack Overflow question/answer that deals with this scenario:
How do I add a CA root certificate inside a docker image?
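For the private-CA case, the client-side change is usually just pointing the HTTP library at the CA bundle that signed the server certificates. A minimal sketch in Python with requests, assuming the bundle is mounted into the pod from a Secret or ConfigMap; the path and URL below are hypothetical:

```python
import requests

# Hypothetical path where the private CA bundle is mounted into the container.
PRIVATE_CA_BUNDLE = "/etc/ssl/private-ca/ca.crt"

# Call an in-cluster HTTPS service whose certificate was signed by the private CA.
resp = requests.get(
    "https://billing.internal.example.com/healthz",  # example URL only
    verify=PRIVATE_CA_BUNDLE,  # trust certificates signed by the private CA
    timeout=5,
)
resp.raise_for_status()
print(resp.status_code)
```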
Lastly, there are better patterns to manage SSL in containers and container schedulers like Kubernetes. It all depends on your design/architecture.
Some general ideas:
Terminate SSL at a load balancer before traffic hits your pods. The load balancer then handles the traffic from itself to the pods as plain HTTP, and your clients terminate SSL at the load balancer. (This doesn't really tackle your specific use case, though.)
Use something like HashiCorp Vault as an internal CA, and use automation around this product and Kubernetes to manage certificates automatically.
Use something like cert-manager by Jetstack to manage SSL in your Kubernetes environment automatically. It can connect to a multitude of 'providers' such as Let's Encrypt for free SSL. https://github.com/jetstack/cert-manager
Hope that helps.
I have an EC2 instance with Apache and Tomcat servers, and I want to add SSL certificates for HTTPS access. Since I am new to server technologies, can anybody help me with this? Where do I configure the SSL certificates?
Set up an Elastic Load Balancer (ELB) in front of your EC2 instance; you can upload SSL certs to these:
http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/US_UpdatingLoadBalancerSSL.html
ELB: http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/gs-ec2classic.html
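As a rough illustration of that setup, here is a boto3 sketch that creates a classic ELB terminating HTTPS and registers the EC2 instance behind it; the names, availability zone, certificate ARN, and instance ID are placeholders for your own values:

```python
import boto3

elb = boto3.client("elb", region_name="us-east-1")

# Create a classic load balancer that terminates HTTPS and forwards plain HTTP
# to the instance running Apache/Tomcat.
elb.create_load_balancer(
    LoadBalancerName="my-web-elb",
    AvailabilityZones=["us-east-1a"],
    Listeners=[{
        "Protocol": "HTTPS",
        "LoadBalancerPort": 443,
        "InstanceProtocol": "HTTP",
        "InstancePort": 80,
        "SSLCertificateId": "arn:aws:iam::111122223333:server-certificate/my-cert",
    }],
)

# Register the EC2 instance behind the load balancer.
elb.register_instances_with_load_balancer(
    LoadBalancerName="my-web-elb",
    Instances=[{"InstanceId": "i-0123456789abcdef0"}],
)
```

With this in place, Apache/Tomcat only need to listen on plain HTTP behind the load balancer; the certificate itself lives on the ELB.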
Background:
I had one instance created on EC2
I had my domain pointing to this instance
I had SSL installed for this instance (things were running great)
Furthermore:
I opted to create a second instance (using a custom AMI from the first instance)
I created a load balancer (things were working great, from what I can tell...)
Question:
Why do I need to install the SSL certificate on the load balancer when it seems to have already been working?
I would presume that when you say load balancer, you are referring to AWS ELB. If this is not the case, then disregard my answer.
Well, the best practice is to install SSL certs on the load balancer and do the SSL termination there. Let the load balancer do the SSL encryption/decryption so that your web servers can do what they do best: serving the web pages.
Why do I need to install the SSL certificate on the load balancer
Now, technically your setup is fine and you don't have to install SSL on the load balancer. But then you have to use the TCP load balancing feature of AWS ELB, wherein the ELB will simply accept traffic on 443 and forward it to the web servers on 443, letting your web servers do the SSL work.
I think this is what you are looking for.
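For illustration, a boto3 sketch of the two listener styles on a classic ELB; the load balancer name and certificate ARN are placeholders:

```python
import boto3

elb = boto3.client("elb", region_name="us-east-1")

# Option A: SSL termination at the ELB. The certificate lives on the load
# balancer and the backends receive plain HTTP on port 80.
elb.create_load_balancer_listeners(
    LoadBalancerName="my-load-balancer",
    Listeners=[{
        "Protocol": "HTTPS",
        "LoadBalancerPort": 443,
        "InstanceProtocol": "HTTP",
        "InstancePort": 80,
        "SSLCertificateId": "arn:aws:iam::111122223333:server-certificate/my-cert",
    }],
)

# Option B: TCP passthrough. The ELB forwards raw traffic on 443 and the web
# servers keep doing the SSL work with the certificates they already have.
elb.create_load_balancer_listeners(
    LoadBalancerName="my-load-balancer",
    Listeners=[{
        "Protocol": "TCP",
        "LoadBalancerPort": 443,
        "InstanceProtocol": "TCP",
        "InstancePort": 443,
    }],
)
```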