In our company's internal network, we have self-signed certificates used for applications that run on DEV or staging environments. On our local machines they are already trusted, because Active Directory provides that via Group Policy Objects. But in the Kubernetes (OpenShift) world, we have to do some additional work to get SSL/TLS traffic working.
In the Dockerfile of the related application, we copy the certificate into the container and trust it while building the Docker image. After that, requests from the application running in the container to an HTTPS endpoint served with that self-signed certificate succeed. Otherwise we encounter errors like "SSL/TLS secure channel cannot be established":
# Copy the internal CA certificate into the image and register it
# in the system trust store at build time
COPY ./mycertificate.crt /usr/local/share/ca-certificates/
RUN chmod 644 /usr/local/share/ca-certificates/mycertificate.crt && update-ca-certificates
However, I don't think this is the best way to do it. It requires a lot of operational work whenever the certificate expires. In short, it's hard to manage and maintain. I wonder what the efficient way to handle this is.
Thanks in advance for your support.
Typically that should be configured cluster-wide by your OpenShift administrators using the following documentation so that your containers trust your internal root CA by default (additionalTrustBundle):
https://docs.openshift.com/container-platform/4.6/networking/configuring-a-custom-pki.html#nw-proxy-configure-object_configuring-a-custom-pki
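For reference, the setup from that page boils down to two resources along these lines (a sketch; user-ca-bundle is the ConfigMap name the docs use):

# ConfigMap in openshift-config holding the internal root CA
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-ca-bundle
  namespace: openshift-config
data:
  ca-bundle.crt: |
    -----BEGIN CERTIFICATE-----
    ... your internal root CA ...
    -----END CERTIFICATE-----
---
# Cluster-wide proxy object that references the bundle above
apiVersion: config.openshift.io/v1
kind: Proxy
metadata:
  name: cluster
spec:
  trustedCA:
    name: user-ca-bundle

Per the same page, workloads can then pick up the merged bundle by creating a ConfigMap labeled config.openshift.io/inject-trusted-cabundle: "true" and mounting it, instead of baking the CA into every image.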
"Best" is highly relative, but you could start by pulling the certificate out into a ConfigMap and mounting it into your container(s). That pushes all the work of updating it out to runtime, but introduces a fair bit of complexity. It depends on how often the certificate changes and how much you can automate the rebuilds/redeploys when it does.
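A minimal sketch of that approach (the names internal-ca and myapp are illustrative), assuming the image's entrypoint still runs update-ca-certificates or the application reads the mounted bundle path directly:

# Created e.g. with: kubectl create configmap internal-ca --from-file=mycertificate.crt
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: myapp
    image: myapp:latest
    volumeMounts:
    - name: internal-ca
      mountPath: /usr/local/share/ca-certificates/mycertificate.crt
      subPath: mycertificate.crt
      readOnly: true
  volumes:
  - name: internal-ca
    configMap:
      name: internal-ca

Renewing the certificate then means updating the ConfigMap and restarting the pods; no image rebuild is required.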
Related
I have a Kubernetes cluster in a corporate environment where all HTTPS traffic is man-in-the-middled and the certificates are replaced with the company's own. Right now, all the applications running on the cluster get the company's certificates injected by rebuilding the Docker image or by mounting them from a secret and adding them to the local store. This is painful and makes it harder to use public Helm charts and Docker images without modifying them.
For example, I'm running Jenkins on the cluster, which tries to install plugins from https://updates.jenkins-ci.org/. This would normally fail in my case with an SSL exception, unless I add the certificates to the Jenkins keystore.
I was wondering if there's a way to set this up at the cluster level, so that some component deals with this and the applications can access the internet normally, without being aware of the whole certificate situation.
My thoughts were:
A cluster-wide proxy pod that all the applications then use
An ambassador container in each pod that the apps connect to
I would imagine I'm not the only one in this situation, but I couldn't find a generic solution for this.
You could have a look at Istio. It's a service mesh that uses sidecar proxies to (among other things) take over responsibility for encrypting traffic between applications.
The proxies use mutual TLS (mTLS), where all connections inside the mesh are encrypted out of the box. The applications themselves don't have to bother with certificates and can send messages in plain text.
Istio also provides a migration path to mTLS, so you can bring your applications into the mesh one by one, switch them to mTLS, and get rid of your own certificate overhead.
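For illustration, mesh-wide mTLS in Istio comes down to a single resource (a sketch; placing it in istio-system makes it mesh-wide):

apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    # PERMISSIVE accepts both plain text and mTLS during migration;
    # switch to STRICT once all workloads are in the mesh
    mode: PERMISSIVE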
You can set everything up with your own PKI, so you're still using your company's certificates. You also get a bunch of other features, like enhanced observability, canary deployments, on-the-fly token-based authentication/authorization, and more.
We are trying to secure our AKS cluster by providing trusted CAs (SSL certs) to the Kubernetes control plane.
The default API server certificate is issued automatically while the cluster is created.
Is there any way that we can embed trusted Certificates into the control plane before provisioning the cluster?
For example, when we try to reach the Kubernetes API server, it shows an SSL certificate error.
To get rid of this, we must be able to add our organization's certificates to the API server.
When we create a cluster in the cloud (managed Kubernetes cluster), we do not have access to the control plane nodes, so we aren't able to configure the API server.
Could anyone please help me figure out how to add SSL certs to the control plane of Kubernetes?
When we create a cluster in the cloud (managed Kubernetes cluster), we do not have access to the control plane nodes, so we aren't able to configure the API server.
And that's the biggest inconvenience and pain point for everyone who wants anything other than out-of-the-box solutions...
My answer is NO. No, unfortunately you can't achieve this with AKS.
By the way, there is also interesting info here: Self signed certificates used on management API. I'll copy-paste it here for future reference, even though the answer doesn't help you:
You are correct that the normal PKI specification dictates the use of non-self-signed certificates for SSL transport. However, the reasons we do not currently support fully signed certificates are:
Kubernetes requires the ability to self-generate and sign certificates
Users injecting their own CA is known to be error-prone in Kubernetes as a whole
We are aware of the desire to move away from self-signed certificates; however, this requires upstream work to make it much more likely to succeed. The official documentation explains a lot of this, as well as the requirements:
https://kubernetes.io/docs/concepts/cluster-administration/certificates/
https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/
https://kubernetes.io/docs/setup/best-practices/certificates/
Additionally, this post goes deeper into the issues around cert management:
https://jvns.ca/blog/2017/08/05/how-kubernetes-certificates-work/
I'm new to Docker. I've been trying to set up an environment that emulates a standard LAMP stack, to develop PHP applications locally and deploy them easily.
So far I've followed this setup for my Docker environment. It seems to be working fine, but I'm having trouble with certificates. On a normal server I would just run Certbot, select the Apache site to enable HTTPS for, and be done with it.
On Docker, however, I have no idea how to do this. My certificates should be placed inside ./cert/. Does that mean that I have to run commands to add the PPA, install Certbot, then create a certificate and place it in the folder I want? Or is there a simpler way to do this?
Googling brought me to a whole lot of Docker images that automatically create a certificate and also spin up an Apache instance, but I'd like to keep this as vanilla as possible.
What is the process of using a Let's Encrypt certificate with Docker?
Should I even install one locally or is that bad practice?
My certificates should be placed inside ./cert/. Does that mean that I have to run commands to add the PPA, install Certbot, then create a certificate and place it in the folder I want? Or is there a simpler way to do this?
Yes, you can proceed like this and store the certificate in a volume that points to ./cert/.
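One way to do that with docker-compose (a sketch; example.com, the e-mail address, and the paths are placeholders) is to run Certbot as its own container and share the certificate with Apache through the ./cert volume:

services:
  certbot:
    image: certbot/certbot
    # Webroot challenge: the web container must already be serving ./www on
    # port 80 when this runs; certificates end up under ./cert/live/example.com/
    command: certonly --webroot -w /var/www/html -d example.com --email admin@example.com --agree-tos --non-interactive
    volumes:
      - ./cert:/etc/letsencrypt
      - ./www:/var/www/html
  web:
    image: httpd:2.4
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./cert:/usr/local/apache2/cert:ro
      - ./www:/usr/local/apache2/htdocs

Renewal is then a periodic docker compose run certbot renew (e.g. from cron) plus an Apache reload.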
What is the process of using a Let's Encrypt certificate with Docker?
Should I even install one locally or is that bad practice?
Docker itself has no certificate management. Yes, you can manage the certificate inside your container, but it would be hard to maintain (renewal, etc.).
The correct approach would be to use Traefik as a load balancer; it has a built-in certificate manager that handles everything necessary.
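As a sketch of what that looks like (example.com and the e-mail are placeholders), Traefik terminates TLS in front of your container and obtains/renews the Let's Encrypt certificate on its own:

services:
  traefik:
    image: traefik:v2.10
    command:
      - --providers.docker=true
      - --entrypoints.web.address=:80
      - --entrypoints.websecure.address=:443
      # ACME resolver: stores obtained certificates in acme.json and renews them
      - --certificatesresolvers.le.acme.email=admin@example.com
      - --certificatesresolvers.le.acme.storage=/letsencrypt/acme.json
      - --certificatesresolvers.le.acme.httpchallenge.entrypoint=web
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./letsencrypt:/letsencrypt
      - /var/run/docker.sock:/var/run/docker.sock:ro
  app:
    image: httpd:2.4
    labels:
      - traefik.http.routers.app.rule=Host(`example.com`)
      - traefik.http.routers.app.entrypoints=websecure
      - traefik.http.routers.app.tls.certresolver=le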
A Twelve factor app is expected to store configuration in the environment.
Is this intended to include SSL certificate and key files, which can be "large" (multiple kilobytes at least) and, depending on the format, often contain non-printable characters (newlines at minimum)?
Or is the environment just expected to point to the cert/key file names? (This seems non-ideal when trying to deploy via Docker, for instance; we don't really want to store private keys in the Docker image, do we? But maybe that's a separate question.)
An SSL certificate is (strictly speaking) not configuration but an asset file.
How you provide this asset depends on your hosting setup. But here are some options:
An easy way is to integrate Let's Encrypt and use Certbot, which handles downloading the certs securely and automatically. Let's Encrypt has integrations for some languages (e.g. Go has several clients that can be integrated into an app).
You could use a load balancer and terminate the SSL at the load balancer. In this case your app does not need to know anything about the cert.
Kubernetes provides secrets that can store certs safely and copy those files on deployment to a pod (simplified: a pod is a package that thinly wraps a Docker container, including your app).
Kubernetes can also use an Ingress as a load balancer, which terminates the SSL (see the sketch after this list).
Another option is to use HashiCorp's Vault. This is a service that manages and distributes secrets.
Surely there are more options, and these are just hints. But the secure storage and distribution of SSL certificates is no easy task. I hope I have given some good hints.
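Picking up the Kubernetes options above, here is a rough sketch (app-tls, app.example.com, and the service name are illustrative): the cert and key live in a TLS secret, created e.g. with kubectl create secret tls app-tls --cert=tls.crt --key=tls.key, and an Ingress terminates SSL with it:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app
spec:
  tls:
  - hosts:
    - app.example.com
    secretName: app-tls   # the TLS secret created above
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app
            port:
              number: 8080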
I am no expert on twelve-factor app best practices; however, from the linked article on twelve-factor app config, it sounds like the prescribed practice is to use environment variables to identify configurable aspects of the program.
Perhaps SSL_CERT_FILE and SSL_KEY_FILE environment variables could be read as paths to the respective files, so that they can easily be overridden in different environments where the files might reside at different locations.
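As a sketch of that idea in a container setting (all names hypothetical): the environment carries only the paths, while the files themselves come from a mounted secret, so the private key never has to go into the image:

apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: myapp:latest
    env:
    - name: SSL_CERT_FILE   # the app reads these paths from the environment
      value: /etc/tls/tls.crt
    - name: SSL_KEY_FILE
      value: /etc/tls/tls.key
    volumeMounts:
    - name: tls
      mountPath: /etc/tls
      readOnly: true
  volumes:
  - name: tls
    secret:
      secretName: app-tls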
I am personally fond of software that "just works" without the need for additional configuration, so you might also consider embedding the certificate and key files in the executable itself (if that's feasible), so that the program works "out of the box" while still letting users specify alternate cert/key file locations if the environment or application requires it.
There are different kinds of configuration elements. The motivation for twelve-factor apps to store configuration in the environment serves a specific goal: to make the app easy to redeploy in a new environment. Thus, only those configuration elements that contribute to this goal belong in the environment. Other domain- or application-specific configuration elements can remain bundled with the application's local or technology-specific configuration method of choice.
For SSL certificates, it appears that these will not change from environment to environment, so you are not bound to keep them in environment variables, IMO.
In our environment, we often reinstall machines with the same hostname.
So every time I sync the configuration via Puppet, SSL authentication fails.
How can I totally disable SSL authentication?
The assumption that connections are authenticated and identified through trusted SSL certificates runs very deep in Puppet's core. There is no way to make Puppet run through unencrypted channels only.
Things you can try:
holding on to signed certificates and getting them back onto machines just when they are re-provisioned
creating a hook that will make the master revoke the old certificate when a machine is decommissioned/recommissioned
With the latter approach, signing fresh certificates becomes considerably easier. You might even implement an autosigning scheme, although that is tricky to do securely.