OpenShift .kubeconfig file and certificate authentication - openshift-origin

I have been messing around with OpenShift and reading as much documentation as I can, yet the authentication performed by default (using the admin .kubeconfig) puzzles me.
1) Are client-certificate-data and client-key-data the same as the admin certificate and key? I ask because the contents of the certificate/key files are not the same as in .kubeconfig.
2) .kubeconfig (AFAIK) is used to authenticate against a Kubernetes master. Yet in OpenShift we are authenticating against the OpenShift master (right?). Why use .kubeconfig?
Kind regards, and thank you for your patience.

OpenShift builds on top of Kubernetes - it exposes both the OpenShift APIs (builds, deployments, images, projects) and the Kubernetes APIs (pods, replication controllers, services). A client connecting to OpenShift will use both sets of APIs. OpenShift can run on top of an existing Kubernetes cluster, in which case it will proxy API calls to the Kubernetes master and then apply security policy on top (via the OpenShift policy engine which may eventually become part of Kube).
So, the client is really an extension of kubectl that offers some additional functionality, and it can use .kubeconfig to be consistent with a kubectl setup. You can talk to an OpenShift cluster via kubectl, so the reverse seems fair.
The client-certificate-data and key-data are base64 encoded versions of the files on disk. They should be the same once you decode them. We do that so the .kubeconfig can be shipped around as one unit, but you can also set it up to reference files on disk.
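As a rough sketch (the file paths and the truncated base64 values below are placeholders, not taken from the question), the two equivalent ways of wiring the same admin credential into a .kubeconfig look like this:

```yaml
# Hypothetical .kubeconfig snippet: the same admin credential can either be
# embedded as base64 (client-certificate-data / client-key-data) or referenced
# as files on disk (client-certificate / client-key).
apiVersion: v1
kind: Config
users:
- name: admin-embedded
  user:
    client-certificate-data: LS0tLS1CRUdJTi...   # base64 of admin.crt (truncated placeholder)
    client-key-data: LS0tLS1CRUdJTi...           # base64 of admin.key (truncated placeholder)
- name: admin-files
  user:
    client-certificate: /path/to/admin.crt       # placeholder path
    client-key: /path/to/admin.key               # placeholder path
```

Decoding client-certificate-data (for example with base64 -d) should reproduce the admin certificate file byte for byte.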


kubernetes traffic with own certificates

I have a Kubernetes cluster in a corporate environment where all HTTPS traffic is man-in-the-middled and the certificates are replaced with the company's own. Right now, all the applications running on the cluster get the company's certificates injected by rebuilding the Docker image or by mounting them from a secret and adding them to the local store. This is painful and makes it harder to just use public Helm charts and Docker images without modifying them.
For example, I'm running Jenkins on the cluster, which tries to install plugins from https://updates.jenkins-ci.org/. This would normally fail in my case with an SSL exception, unless I add the certificates to the Jenkins keystore.
I was wondering if there's a way to set this up at the cluster level, so that there's some component that deals with this and the applications can then access the internet normally, without being aware of the whole certificate situation?
My thoughts were:
A cluster proxy pod, that all the applications then use.
An ambassador container on each pod that the apps connect to
I would imagine I'm not the only one in this situation but couldn't find a generic solution for this.
You could have a look at Istio. It's a service mesh that uses sidecar proxies to (among other things) take over responsibility for encrypting traffic between applications.
The proxies use mutual TLS (mTLS), where all connections inside the mesh are encrypted out of the box. The applications themselves don't have to bother with certificates and can send messages in plain text.
Istio also provides a mechanism to migrate to mTLS, so you can bring your applications into the mesh one by one, switch them to mTLS, and drop your own certificate handling.
You can set everything up with your own PKI, so you're still using your company's certificates. You also get a bunch of other features like enhanced observability, canary deployments, on-the-fly token-based authentication/authorization, and more.
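As a minimal sketch of that migration step (the namespace name is an assumption, and the exact API depends on your Istio version; older releases used the authentication Policy/MeshPolicy objects instead of PeerAuthentication):

```yaml
# Hypothetical namespace-wide mTLS policy: PERMISSIVE accepts both plain text
# and mTLS while workloads are moved into the mesh one by one; switch the mode
# to STRICT once every client speaks mTLS.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: my-apps        # assumed namespace
spec:
  mtls:
    mode: PERMISSIVE        # later: STRICT
```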

Kubernetes application authentication

Maybe this is a dumb question, but I really don't know if I have to secure applications with tokens etc. within a Kubernetes cluster.
So, for example, I make a gRPC call from a client within the cluster to a server within the cluster.
I thought this should be secure without authenticating the client with a token or something like that, because (if I understood it right) Kubernetes pods and services run on an internal network that isn't exposed unless you explicitly expose it.
But is this really secure? Should I somehow build an authorization system within my cluster?
Also, how can I use a Service to load-balance the gRPC calls over the server pods without exposing the server outside the cluster?
If you have a Service, it already load-balances across the replicas out of the box when you have more than one.
Also, Kubernetes traffic stays internal to the cluster by default, unless you explicitly expose it using a LoadBalancer, Ingress, or NodePort.
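As a sketch of that internal setup (the labels and port are assumptions, not from the question), an ordinary ClusterIP Service keeps the gRPC server reachable only from inside the cluster:

```yaml
# Hypothetical internal Service for the gRPC server: ClusterIP (the default
# type) is only reachable from inside the cluster and spreads connections
# across the pods matching the selector.
apiVersion: v1
kind: Service
metadata:
  name: grpc-server
spec:
  type: ClusterIP
  selector:
    app: grpc-server        # assumed pod label
  ports:
  - name: grpc
    port: 50051             # assumed gRPC port
    targetPort: 50051
```

One design note: because gRPC multiplexes calls over long-lived HTTP/2 connections, a plain ClusterIP Service balances per connection rather than per call; a headless Service combined with client-side load balancing is a common refinement.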
Does it mean traffic is safe? No.
By default, everything is allowed within a Kubernetes cluster, so every service can reach every other service or pod (including pods in StatefulSet apps).
You can use NetworkPolicy to allow traffic from one service to another service and nothing else. That would increase security.
Does it mean traffic is safe now? It depends.
Authentication would add an additional security layer in case a container is compromised. There could be more scenarios, but I can't think of others right now.
So internal authentication is usually used to improve security in production systems.
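To make the NetworkPolicy point concrete, here is a minimal sketch (the labels, namespace, and port are assumptions) that lets the server pods receive traffic only from the client pods:

```yaml
# Hypothetical policy: only pods labelled app=grpc-client may reach the
# grpc-server pods on the gRPC port; all other ingress to those pods is
# denied once the policy selects them.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-client-to-grpc-server
  namespace: default          # assumed namespace
spec:
  podSelector:
    matchLabels:
      app: grpc-server        # assumed server pod label
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: grpc-client    # assumed client pod label
    ports:
    - protocol: TCP
      port: 50051             # assumed gRPC port
```

Note that NetworkPolicy is only enforced if the cluster's network plugin supports it.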
I hope it answers the question.

How to set up an architecture of scalable custom domains & auto-SSL on Google Kubernetes Engine

We are researching the best solution to allow customers to use their domain names with our hosting services. The hosting services are based on Google App Engine standard. The requirements are:
Customers can point their domain name to our server via CNAME or A record
Our server should be able to generate SSL certs for them automatically using Let's Encrypt
Our server should be able to handle custom SSL certs uploaded by customers
Should be robust and reliable when adding new customers (new confs, SSL certs etc.) into our servers
Should be scalable, and can handle a large number of custom domains and traffic (e.g. from 0 to 10000)
Minimum operation costs (the less time needed for maintaining the infrastructure, the better)
It seems Google Kubernetes Engine (formerly known as Google Container Engine) would be the direction to go. Is there a specific, proven way to set it up? Any suggestions/experiences sharing would be appreciated.
I would recommend going through this link to get started with setting up a GKE cluster.
For your purpose of SSL on GKE I would recommend creating an Ingress as specified in this link, which automatically creates a load balancer resource in GCP if you use the default GLBC ingress controller. The resulting load balancer's configuration (ports, host path rules, certificates, backend services, etc.) is defined by the configuration of the Ingress object itself. You can then point the domain name to the IP of the load balancer.
If you want to configure your Ingress (and consequently the resulting load balancer) to use certs created by Let's Encrypt, you would be modifying the configuration in the YAML of the Ingress.
For actually integrating Let's Encrypt with Kubernetes, it is possible to use a service called cert-manager to automate the process of obtaining TLS/SSL certificates and storing them inside secrets.
This link shows how to use cert-manager with GKE.
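As a rough sketch of what that usually ends up looking like (the issuer name, domain, and secret/service names are assumptions, and the exact annotation and API version depend on your cert-manager and Kubernetes releases):

```yaml
# Hypothetical Ingress wired to cert-manager: the annotation asks cert-manager
# to obtain a Let's Encrypt certificate via the named ClusterIssuer and store
# it in the Secret referenced under spec.tls.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod   # assumed issuer name
spec:
  tls:
  - hosts:
    - app.example.com                                  # assumed customer domain
    secretName: app-example-com-tls                    # Secret populated by cert-manager
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web                                  # assumed backend Service
            port:
              number: 80
```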
If you want to use self-managed SSL certificates, please see this link for more information. GKE is scalable via the cluster autoscaler, which automatically resizes node pools based on the demands of the workloads you want to run.
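For the customer-uploaded certificates, a hedged sketch (the names are made up): each custom certificate can be stored as a TLS Secret and then referenced from the Ingress via spec.tls[].secretName, exactly like the cert-manager-managed one above:

```yaml
# Hypothetical kubernetes.io/tls Secret holding a customer-supplied certificate.
# The angle-bracket values are placeholders; typically the Secret is created with:
#   kubectl create secret tls customer1-example-org-tls --cert=tls.crt --key=tls.key
apiVersion: v1
kind: Secret
metadata:
  name: customer1-example-org-tls   # assumed name
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate chain>
  tls.key: <base64-encoded private key>
```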

Security in Azure Managed Kubernetes Service (AKS)

I am trying to get more documentation, understanding about security in Azure Managed Kubernetes Service (AKS).
Does Azure encrypt the containers deployed to the AKS cluster at "rest"? If so, how is data encryption achieved at rest, and in motion?
What are the ways to achieve SSL/TLS in AKS, any documentation is appreciated.
Thanks in advance
I can definitely tell you TLS termination is supported in AKS. I've been able to implement this.
HTTPS Ingress on Azure Kubernetes Service (AKS)
This documentation is slightly out of date, though. You should use cert-manager instead of kube-lego.
I would welcome a more authoritative answer, but as far as I have determined, managed disks are always encrypted (https://azure.microsoft.com/en-us/blog/azure-managed-disks-sse/), but the worker nodes are not encrypted by default. It would be necessary to run az vm encryption enable on every node (quite a chore if you are scaling up and down!). If you do that you should be covered, though.
As for SSL/TLS, Kubernetes supports TLS for Ingress (see https://kubernetes.io/docs/concepts/services-networking/ingress), but I haven't tested it in AKS. We are using our own Nginx server as a gateway, and with that approach you can use any TLS tutorial out there. We feel that we have more control that way.
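For reference, the Ingress-based approach from the Kubernetes docs looks roughly like this (the host, secret, service, and ingress-class names are assumptions; the Secret can be created from your own certificate or populated by cert-manager):

```yaml
# Hypothetical TLS-terminating Ingress: the ingress controller terminates
# HTTPS using the certificate in the referenced kubernetes.io/tls Secret and
# forwards plain HTTP to the default backend Service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-tls
spec:
  ingressClassName: nginx              # assumed ingress controller
  tls:
  - hosts:
    - aks.example.com                  # assumed host
    secretName: aks-example-com-tls    # assumed Secret name
  defaultBackend:
    service:
      name: web                        # assumed backend Service
      port:
        number: 80
```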

Will there be support to establish a private connection to Azure AKS

My client is currently evaluating AKS, which seems to be really promising. Our current platform is based on Azure VMs we provision ourselves. We would like to create private communication between our existing platform and the managed AKS cluster, but so far that does not seem to be supported.
Some example use cases for us are:
- Proxying incoming HTTP traffic via our main entrypoint, a Varnish server, to the new AKS environment so we don't have to change URLs
- Accessing non-publicly exposed APIs from the AKS environment
Right now the AKS cluster is in a different subscription and resource group than other parts of our platform. The main reason we can't connect, though, seems to be that it's not possible to specify which private IP range should be used when creating an AKS cluster.
Is there support planned for this or is there a reliable workaround?
Thanks for the inquiry. There is a workaround for the stated case through the use of ACS Engine: "ACS Engine, for Azure Container Service Engine, is a CLI tool that helps generate Azure Resource Manager templates to deploy Docker-enabled clusters on Microsoft Azure. It works with all the orchestrators supported by ACS: Docker Swarm, Mesosphere DC/OS and Kubernetes."
So using this solution will allow you to integrate an Azure Container Service cluster into an existing virtual network. More details and a step-by-step guide can be found here: https://blogs.msdn.microsoft.com/jcorioland/2017/01/10/how-to-integrate-a-new-azure-container-service-cluster-into-an-existing-virtual-network-using-acs-engine/