Can I just provide my own admin.conf before running kubeadm init, so that kubeadm uses that file for authorization afterwards? This would be really convenient for accessing the cluster from outside, because I wouldn't need to handle the certs and keys each time.
I have a Node.js app that uses the kafkajs package to connect to AWS MSK.
We are moving to Strimzi Kafka because we already have a Kubernetes cluster and no longer need MSK.
Until now we connected over SSL without having to specify any CA path or anything similar. We used this kind of connection both in our Node.js apps and in kafka-ui, and it worked with no issues.
We are trying to do the same with Strimzi Kafka, but we get an SSL handshake failed error.
My understanding is that AWS MSK uses Amazon certificates that are publicly trusted, while Strimzi Kafka generates self-signed certificates, which is fine by us.
How can I keep connecting the same way we did with AWS MSK, i.e. with just ssl: true in kafkajs (which works there)?
Thanks.
The easiest way to use a certificate signed by a public CA is to use the listener certificate, which lets you provide your own server certificate for a given listener. I'm not sure how the Amazon CA works, but this blog post shows how to do it, for example, using Cert-Manager and Let's Encrypt.
Keep in mind that to use public CAs, you usually need proper domain names and not just internal Kubernetes services. This might, for example, increase costs or latency if your applications run in the same Kubernetes cluster, because the traffic might need to go through a load balancer or ingress.
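A minimal sketch of what that looks like in the Kafka custom resource, assuming a cluster named my-cluster and a Secret my-listener-cert holding the externally issued certificate and key (both names are illustrative):

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 3
    listeners:
      - name: external
        port: 9094
        type: loadbalancer
        tls: true
        configuration:
          # Serve your own certificate on this listener instead of the
          # Strimzi-generated self-signed one
          brokerCertChainAndKey:
            secretName: my-listener-cert   # illustrative Secret name
            certificate: tls.crt
            key: tls.key
    storage:
      type: ephemeral
  zookeeper:
    replicas: 3
    storage:
      type: ephemeral

With a publicly trusted certificate on that listener, kafkajs clients should again work with just ssl: true, as they did against MSK.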
I have a Kubernetes cluster in a corporate environment, where all HTTPS traffic is man-in-the-middled and the certificates are replaced with the company's own. Right now, all the applications running on the cluster get the company's certificates injected by rebuilding the Docker image or by mounting them from a Secret and adding them to the local trust store. This is painful and makes it harder to just use public Helm charts and Docker images without modifying them.
For example, I'm running Jenkins on the cluster, which tries to install plugins from https://updates.jenkins-ci.org/. This would normally fail in my case with an SSL exception, unless I add the certificates to the Jenkins keystore.
I was wondering if there's a way to set this up at the cluster level, so that some component deals with this and the applications can then access the internet normally, without being aware of the whole certificate situation.
My thoughts were:
A cluster-wide proxy pod that all the applications then use.
An ambassador container in each pod that the apps connect to.
I would imagine I'm not the only one in this situation, but I couldn't find a generic solution for this.
You could have a look at Istio. It's a service mesh that uses sidecar proxies to (among other things) take over responsibility for encrypting traffic between applications.
The proxies use the concept of mutual TLS (mTLS), where all connections inside the mesh are encrypted out of the box. The applications themselves don't have to bother with certificates and can send messages in plain text.
Istio also provides a mechanism to migrate to mTLS, so you can bring your applications into the mesh one by one, switch to mTLS, and drop your own certificate handling.
You can set everything up with your own PKI, so you're still using your company's certificates. You also get a bunch of other features, like enhanced observability, canary deployments, on-the-fly token-based authentication/authorization, and more.
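To give a rough idea of the migration mechanism, the mTLS mode is controlled per namespace with a PeerAuthentication resource; the namespace name below is illustrative:

apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: my-apps   # illustrative namespace
spec:
  mtls:
    # PERMISSIVE accepts both plain-text and mTLS traffic while you migrate;
    # switch to STRICT once every workload is in the mesh
    mode: PERMISSIVE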
In our company's internal network, we have self-signed certificates used for applications that run on DEV or staging environments. For our local machines they are already trusted, because Active Directory provides that using Group Policy Objects. But in the Kubernetes (OpenShift) world, we have to do some additional work to get SSL/TLS traffic working.
In the Dockerfile of the related application, we copy the certificate into the container and trust it while building the Docker image. After that, requests from the application running in the container to an HTTPS endpoint served with that self-signed certificate succeed. Otherwise, we encounter errors like "SSL/TLS secure channel cannot be established":
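# Add the internal root CA to the image's trust store at build time
# (assumes a Debian/Ubuntu-style base image that ships update-ca-certificates)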
COPY ./mycertificate.crt /usr/local/share/ca-certificates/
RUN chmod 644 /usr/local/share/ca-certificates/mycertificate.crt && update-ca-certificates
However, I don't think this is the best way to do it. It requires a lot of operational work whenever the certificate expires. In short, it's hard to manage and maintain. I wonder what the most efficient way to handle this is.
Thanks in advance for your support.
Typically that should be configured cluster-wide by your OpenShift administrators using the following documentation so that your containers trust your internal root CA by default (additionalTrustBundle):
https://docs.openshift.com/container-platform/4.6/networking/configuring-a-custom-pki.html#nw-proxy-configure-object_configuring-a-custom-pki
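Roughly, that page boils down to creating a ConfigMap with the CA bundle in openshift-config and pointing the cluster proxy at it (the file path and ConfigMap name below are placeholders):

oc create configmap custom-ca \
  --from-file=ca-bundle.crt=/path/to/company-root-ca.crt \
  -n openshift-config
oc patch proxy/cluster \
  --type=merge \
  --patch='{"spec":{"trustedCA":{"name":"custom-ca"}}}'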
"Best" is highly relative, but you could start by pulling the certificate out into a ConfigMap and mounting it into your container(s). That pushes all the work of updating it out to runtime, but introduces a fair bit of complexity. It depends on how often the certificate changes and how much you can automate the rebuilds/redeploys when it does.
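A minimal sketch of that approach, assuming the ConfigMap is created with kubectl create configmap internal-ca --from-file=mycertificate.crt (all names here are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: my-app
      image: my-app:latest   # illustrative image
      volumeMounts:
        # Mounted under the directory that update-ca-certificates scans; depending
        # on the base image you may still need to run update-ca-certificates once,
        # e.g. in an entrypoint script.
        - name: internal-ca
          mountPath: /usr/local/share/ca-certificates/internal
          readOnly: true
  volumes:
    - name: internal-ca
      configMap:
        name: internal-ca

When the certificate changes you update the ConfigMap instead of rebuilding images, though whether a running pod picks it up without a restart depends on how the application loads its trust store.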
Basically, I have three questions:
How to create an HA Rancher cluster with a custom CA.
How to create a Kubernetes cluster using the same Rancher and the custom CA.
How to get the etcd certificates from the etcd machines in order to monitor etcd with Prometheus on Kubernetes over SSL.
I have tried multiple forums and the Rancher documentation. I also tried generating certificates with RKE.
I have two different problems:
How to use custom certificates.
How to get the certificates from etcd so that I can run this against the Rancher cluster: kubectl -n monitoring create secret generic etcd-certs --from-file=/tmp/etcdcerts/kube-etcd.pem --from-file=/tmp/etcdcerts/kube-etcd-key.pem --from-file=/tmp/etcdcerts/kube-ca.pem
Right now I am using scp on the etcd machines to fetch those certificates after the Rancher agent runs. I want to create the certificates myself and create the cluster with them.
You can bring in your own certificates when installing Rancher (a sketch follows after this answer). See here for more info: https://rancher.com/docs/rancher/v2.x/en/installation/ha/helm-rancher/
When you create a Cluster in Rancher, the certificates are automatically managed for you.
Starting with Rancher v2.2.x, Prometheus is integrated into Rancher. You just have to enable it in Settings. After the installation, you can access the etcd metrics by clicking the Grafana icon on the cluster page in the UI.
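For the first point, per the linked docs, a private-CA install roughly looks like this (the hostname, file names, and release name are placeholders):

kubectl create namespace cattle-system
kubectl -n cattle-system create secret tls tls-rancher-ingress \
  --cert=tls.crt --key=tls.key
kubectl -n cattle-system create secret generic tls-ca \
  --from-file=cacerts.pem=./cacerts.pem
helm install rancher rancher-stable/rancher \
  --namespace cattle-system \
  --set hostname=rancher.example.com \
  --set ingress.tls.source=secret \
  --set privateCA=true

The tls-ca secret is what tells Rancher about your private CA; the linked page has the authoritative, version-specific steps.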
I have been messing around with OpenShift and reading as much documentation as I can. Yet, the authentication performed by default (using the admin .kubeconfig) puzzles me.
1) Are client-certificate-data and client-key-data the same as the admin certificate and key? I ask because the contents of the certificate/key files are not the same as in .kubeconfig.
2) .kubeconfig (AFAIK) is used to authenticate against a Kubernetes master. Yet, in OpenShift we are authenticating against the OpenShift master (right?). Why use .kubeconfig?
Kind regards, and thank you for your patience.
OpenShift builds on top of Kubernetes - it exposes both the OpenShift APIs (builds, deployments, images, projects) and the Kubernetes APIs (pods, replication controllers, services). A client connecting to OpenShift will use both sets of APIs. OpenShift can run on top of an existing Kubernetes cluster, in which case it will proxy API calls to the Kubernetes master and then apply security policy on top (via the OpenShift policy engine which may eventually become part of Kube).
So the client is really an extension of kubectl that offers some additional functionality, and it can use .kubeconfig to stay consistent with a kubectl setup. You can talk to an OpenShift cluster via kubectl, so the reverse seems fair.
The client-certificate-data and key-data are base64 encoded versions of the files on disk. They should be the same once you decode them. We do that so the .kubeconfig can be shipped around as one unit, but you can also set it up to reference files on disk.
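You can verify that yourself by decoding the embedded data and comparing it with the file on disk (the paths below are placeholders):

# Pull the base64-encoded client certificate out of the kubeconfig and decode it
grep client-certificate-data ~/.kube/config | awk '{print $2}' | base64 -d > /tmp/client.crt
# Compare with the certificate file on disk; no output means they match
diff /tmp/client.crt /path/to/admin/cert.crt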