Security in Azure Managed Kubernetes Service (AKS) - azure-container-service

I am trying to find more documentation and get a better understanding of security in Azure Kubernetes Service (AKS).
Does Azure encrypt the containers deployed to the AKS cluster at rest? If so, how is data encryption achieved at rest, and in motion?
What are the ways to achieve SSL/TLS in AKS? Any documentation is appreciated.
Thanks in advance

I can definitely tell you TLS termination is supported in AKS. I've been able to implement this.
HTTPS Ingress on Azure Kubernetes Service (AKS)
This documentation is slightly out of date though. You should use cert-manager instead of KUBE-LEGO.
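For reference, here is a minimal sketch of what that setup can look like with cert-manager: a ClusterIssuer for Let's Encrypt plus an Ingress annotated so cert-manager requests and renews the certificate. The names (letsencrypt, aks.example.com, my-service) and the nginx ingress class are assumptions for illustration, not taken from the linked docs.

```yaml
# Sketch only: ClusterIssuer + Ingress with cert-manager-managed TLS.
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com            # placeholder
    privateKeySecretRef:
      name: letsencrypt-account-key
    solvers:
      - http01:
          ingress:
            class: nginx
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - aks.example.com
      secretName: aks-example-com-tls   # cert-manager creates and renews this Secret
  rules:
    - host: aks.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service        # placeholder backend Service
                port:
                  number: 80
```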

I would welcome a more authoritative answer, but as far as I have determined, managed disks are always encrypted (https://azure.microsoft.com/en-us/blog/azure-managed-disks-sse/), but the worker nodes are not encrypted by default. It would be necessary to run `az vm encryption enable` on every node (quite a chore if you are scaling up and down!). If you do that, you should be covered, though.
As for SSL/TLS, Kubernetes supports TLS for Ingress (see https://kubernetes.io/docs/concepts/services-networking/ingress), but I haven't tested it in AKS. We are using our own Nginx server as a gateway, and with that approach you can use any TLS tutorial out there. We feel that we have more control that way.
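As a rough illustration of the plain Ingress TLS route (bringing your own certificate rather than using cert-manager), you store the certificate and key in a kubernetes.io/tls Secret and reference it from the Ingress. All names, the host, and the backend Service below are hypothetical:

```yaml
# Sketch only: TLS Secret created from your own cert/key, referenced by an Ingress.
# (equivalently: kubectl create secret tls gateway-tls --cert=tls.crt --key=tls.key)
apiVersion: v1
kind: Secret
metadata:
  name: gateway-tls
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>
  tls.key: <base64-encoded private key>
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: gateway
spec:
  tls:
    - hosts:
        - example.com
      secretName: gateway-tls
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx-gateway     # placeholder backend Service
                port:
                  number: 80
```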

Related

Strimzi Kafka SSL validation failed vs AWS MSK SSL passed

I have a Node.js app that uses the kafkajs package to connect to AWS MSK.
We are moving to Strimzi Kafka because we already have a Kubernetes cluster and don't need MSK anymore.
Until now we connected with SSL without having to specify any CA path or anything. We used this way of connecting both in our Node.js apps and in kafka-ui, and it worked with no issues.
We are trying to do the same with Strimzi Kafka, but we get an SSL handshake failure.
From my understanding, AWS MSK uses Amazon certificates that are publicly trusted, while Strimzi Kafka generates self-signed certificates, which is fine by us.
How can I keep connecting the same way we did with AWS MSK, with just `ssl: true` in kafkajs (which works there)?
Thanks.
The easiest way to use a certificate signed by some public CA is to use a listener certificate, which lets you provide your own server certificate for a given listener. I'm not sure how the Amazon CA works, but this blog post shows how to do it, for example using Cert-Manager and Let's Encrypt.
Keep in mind that to use public CAs, you usually need proper domain names, not just internal Kubernetes services. This might, for example, increase costs or latency if your applications run in the same Kubernetes cluster, because the traffic might need to go through a load balancer or ingress.
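To illustrate the listener certificate approach, here is a rough sketch of a Strimzi Kafka resource where an external listener serves a certificate you supply in a Secret. The cluster name, Secret name, and listener type are placeholders, so double-check the field names against the Strimzi documentation for your version:

```yaml
# Sketch only: serving your own (e.g. publicly trusted) certificate on a listener,
# so clients can connect with just ssl: true instead of importing the cluster CA.
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    listeners:
      - name: external
        port: 9094
        type: loadbalancer            # or ingress/nodeport, depending on your setup
        tls: true
        configuration:
          brokerCertChainAndKey:
            secretName: external-tls-secret   # Secret holding the cert chain and key
            certificate: tls.crt
            key: tls.key
    # remaining kafka, zookeeper, and entityOperator configuration omitted for brevity
```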

Certificate Management in Managed Kubernetes

We are trying to secure our AKS cluster by providing trusted CA certificates (SSL certs) to the Kubernetes control plane.
The default API server certificate is issued while the cluster is created.
Is there any way we can embed trusted certificates into the control plane before provisioning the cluster?
For example, when we try to reach the Kubernetes API server, it shows an SSL certificate issue.
To get rid of this, we must be able to add our organization's certificates to the API server.
When we create a cluster in the cloud (a managed Kubernetes cluster), we do not have access to the control plane nodes, so we are not able to configure the API server.
Could anyone please help me figure out how to add SSL certs to the control plane of Kubernetes?
When we create a cluster in the cloud (a managed Kubernetes cluster), we do not have access to the control plane nodes, so we are not able to configure the API server.
And that's the biggest inconvenience and pain point for everyone who wants anything other than out-of-the-box solutions...
My answer is no. Unfortunately, you can't achieve this with AKS.
By the way, there is also some interesting info here: Self signed certificates used on management API. I'm copying it here for future reference, even though that answer doesn't help you directly.
You are correct that the normal PKI specification dictates the use of non-self-signed certificates for SSL transport. However, the reasons we do not currently support fully signed certificates are:
Kubernetes requires the ability to self-generate and sign certificates
Users injecting their own CA is known to be error-prone in Kubernetes as a whole
We are aware of the desire to move away from self-signed certificates; however, this requires work upstream to make it much more likely to succeed. The official documentation explains a lot of this, as well as the requirements:
https://kubernetes.io/docs/concepts/cluster-administration/certificates/
https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/
https://kubernetes.io/docs/setup/best-practices/certificates/
Additionally, this post goes deeper into the issues around cert management:
https://jvns.ca/blog/2017/08/05/how-kubernetes-certificates-work/
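What you can do is make sure your clients trust the cluster CA that AKS generated: `az aks get-credentials` writes a kubeconfig with that CA embedded, which is why kubectl normally does not show the certificate error. Conceptually, the relevant part looks roughly like the sketch below; the cluster name, server address, and data value are placeholders.

```yaml
# Sketch of the cluster entry in a kubeconfig written by `az aks get-credentials`.
# The embedded CA data is what lets kubectl verify the API server's
# self-signed certificate without an SSL/unknown-authority error.
apiVersion: v1
kind: Config
clusters:
  - name: my-aks-cluster
    cluster:
      server: https://my-aks-cluster.example.azmk8s.io:443          # placeholder
      certificate-authority-data: <base64-encoded cluster CA certificate>
```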

Certificates per cluster or certificate per service provider?

We have a service provider that takes a request and creates an Elasticsearch cluster.
What is the best practice for issuing SSL certificates?
1. Should we issue a certificate per cluster?
2. Or is one certificate for my service provider enough, to be used for accessing all clusters?
I am assuming that issuing a new certificate when creating each cluster is better.
Please give me your input.
Also, inside the cluster, do I really need to enable SSL so that pods talk to each other using certificates?
Yes, you should definitely use TLS to encrypt network traffic to, from, and within your Elasticsearch clusters running on a shared, managed Kubernetes service (GKE).
Additionally, I would opt for maximum separation of customer spaces with:
Kubernetes namespaces
namespaced ServiceAccounts/RoleBindings
and even PD-SSD based volumes with customer-supplied encryption keys
I'm not sure if you are aware of the existence of Elastic Cloud on Kubernetes (ECK) - it applies the Kubernetes Operator pattern for running and operating Elasticsearch clusters on your own Kubernetes cluster in GCP. Treat it also as a collection of best practices for running an Elasticsearch cluster in the most secure way; there is a quick start tutorial.
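For illustration, a minimal ECK manifest for a per-customer cluster in its own namespace might look roughly like this (the namespace, cluster name, and version are hypothetical; the ECK operator provisions TLS certificates for the cluster by default):

```yaml
# Sketch only: one Elasticsearch cluster per customer namespace, managed by ECK.
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: customer-a-es
  namespace: customer-a            # one namespace per customer
spec:
  version: 8.13.0                  # example version
  nodeSets:
    - name: default
      count: 3
      config:
        node.store.allow_mmap: false
```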

Can logstash access a redis input via ssl?

I'm setting up a cloud service on Azure and want to buffer logs in Redis. However, running Redis as a web service on Azure means my requests have to leave my virtual network, which means encryption is a must.
I've searched for hours but haven't found any clues as to whether Logstash can read from Redis via SSL. Is that possible at all?
It seems Redis itself isn't able to talk SSL, and the Redis web service on Azure comes with its own custom SSL support, which seems to be the reason why there is no SSL support in the Logstash redis input.
However this solution (stunnel) helped me solving my problem: http://bencane.com/2014/02/18/sending-redis-traffic-through-an-ssl-tunnel-with-stunnel/

Openshift .kubeconfig file and certificate authentication

I have been messing around with OpenShift and reading as much documentation as I can. Yet, the authentication performed by default (using the admin .kubeconfig) puzzles me.
1) Are client-certificate-data and client-key-data the same as the admin certificate and key? I ask because the contents of the certificate/key files are not the same as in .kubeconfig.
2) .kubeconfig (AFAIK) is used to authenticate against a Kubernetes master. Yet, in OpenShift we are authenticating against the OpenShift master (right?). Why use .kubeconfig?
Kind regards, and thank you for your patience.
OpenShift builds on top of Kubernetes - it exposes both the OpenShift APIs (builds, deployments, images, projects) and the Kubernetes APIs (pods, replication controllers, services). A client connecting to OpenShift will use both sets of APIs. OpenShift can run on top of an existing Kubernetes cluster, in which case it will proxy API calls to the Kubernetes master and then apply security policy on top (via the OpenShift policy engine which may eventually become part of Kube).
So, the client is really an extension of kubectl that offers some additional functionality, and it can use .kubeconfig to be consistent with a kubectl setup. You can talk to an OpenShift cluster via kubectl, so vice versa seems fair.
The client-certificate-data and key-data are base64 encoded versions of the files on disk. They should be the same once you decode them. We do that so the .kubeconfig can be shipped around as one unit, but you can also set it up to reference files on disk.
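To make that concrete, here is a hedged sketch of the two equivalent forms a kubeconfig user entry can take; the user name and file paths are placeholders:

```yaml
# Form 1: credentials embedded as base64 (what the generated .kubeconfig contains;
# decoding the data, e.g. with `base64 -d`, should yield the files on disk).
users:
  - name: admin
    user:
      client-certificate-data: <base64 of admin.crt>
      client-key-data: <base64 of admin.key>
# Form 2: the same credentials referenced as files on disk instead:
#   users:
#     - name: admin
#       user:
#         client-certificate: /path/to/admin.crt
#         client-key: /path/to/admin.key
```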