How to monitor SSL certificates with Datadog?

I have an nginx pod which redirects traffic to Kubernetes services and stores the related certificates inside its volume. I want to monitor these certificates - mainly their expiration.
I found out that there is a TLS integration in Datadog (we use Datadog in our cluster): https://docs.datadoghq.com/integrations/tls/?tab=host.
They provide a sample file, which can be found here: https://github.com/DataDog/integrations-core/blob/master/tls/datadog_checks/tls/data/conf.yaml.example
To be honest, I am completely lost and do not understand the comments in the sample file, such as:
## @param server - string - required
## The hostname or IP address with which to connect.
I want to monitor the certificates that are stored in the pod. Does that mean this value should be localhost, or do I need to somehow iterate over all the stored certificates using this value (such as server_names in nginx.conf)?
If anyone could help me with setting up a sample configuration, I would be really grateful. If there are any more details I should provide, that is not a problem at all.
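For what it's worth, my current guess is that, since the certificates live on disk inside the pod rather than behind a single endpoint, I would point the check at the files directly instead of at a server. Something along these lines, although I am not sure local_cert_path is even the right parameter name or that the Agent can reach the pod's volume:

init_config:

instances:
    # just my guess - this path is where my nginx pod stores one of the certs
  - local_cert_path: /etc/nginx/certs/example.com.crt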

TLS Setup on Host
You can use a host-type instance to track all your certificate expiration dates.
1- Install the TLS integration from the Datadog UI
2- Create an instance and install the Datadog Agent on it
3- Create /etc/datadog-agent/conf.d/tls.d/conf.yaml
4- Edit the following template for your needs:
init_config:

instances:

    ## @param server - string - required
    ## The hostname or IP address with which to connect.
    #
  - server: https://yourDNS1.com/
    tags:
      - dns:yourDNS1.com

  - server: https://yourDNS2.com/
    tags:
      - dns:yourDNS2

  - server: yourDNS3.com
    tags:
      - dns:yourDNS3

  - server: https://yourDNS4.com/
    tags:
      - dns:yourDNS4.com

  - server: https://yourDNS5.com/
    tags:
      - dns:yourDNS5.com

  - server: https://yourDNS6.com/
    tags:
      - dns:yourDNS6.com
5- Restart the Datadog Agent
systemctl restart datadog-agent
6- Check the status and confirm that the tls check is running successfully
watch systemctl status datadog-agent
7- Create a TLS Overview dashboard
8- Create a monitor to get alerted on upcoming expiration dates (a sample query is sketched below)
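For the monitor, a metric-based alert on the days remaining works well. A minimal sketch of a monitor query, assuming the check reports the tls.days_left metric (verify the exact metric name on the integration's Data Collected page before relying on it):

min(last_1h):min:tls.days_left{*} by {server} < 30

This triggers whenever any monitored server's certificate has fewer than 30 days left.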
TLS Setup on Kubernetes
1- Create a ConfigMap with the tls.d/conf.yaml check configuration and mount it into the Agent as a volume (a minimal sketch follows); the procedure is documented here:
https://docs.datadoghq.com/agent/kubernetes/integrations/?tab=configmap
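A minimal sketch of such a ConfigMap and how it could be mounted into the Agent, assuming the Agent runs as a DaemonSet and picks up check configs from /etc/datadog-agent/conf.d/ (the object names and the server list below are illustrative only):

apiVersion: v1
kind: ConfigMap
metadata:
  name: datadog-tls-check
data:
  conf.yaml: |
    init_config:
    instances:
      - server: yourDNS1.com
        tags:
          - dns:yourDNS1.com

Then reference it from the Agent's pod spec (fragment, illustrative):

  volumes:
    - name: tls-check-config
      configMap:
        name: datadog-tls-check
  containers:
    - name: agent
      volumeMounts:
        - name: tls-check-config
          mountPath: /etc/datadog-agent/conf.d/tls.d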

Related

Minio does not seem to recognize TLS/https certificates

I have been searching for hours now to make Minio work with self-signed TLS certs using Docker.
According to the documentation, the certs just need to be placed at /root/.minio/certs/CAs or /root/.minio/ inside the Minio container.
I tried both with no success.
This is how I start Minio (using SaltStack):
minio:
  docker_container.running:
    - order: 10
    - hostname: backup
    - container_name: backup
    - binds:
      - /root/backup:/data
      - /srv/salt/minio/certs:/root/.minio
    - image: minio/minio:latest
    - port_bindings:
      - 10.10.10.1:9000:443
    - environment:
      - MINIO_BROWSER=off
      - MINIO_ACCESS_KEY=BlaBlaBla
      - MINIO_SECRET_KEY=BlaBlaBla
    - privileged: false
    - entrypoint: sh
    - command: -c 'mkdir -p /data/backup && /usr/bin/minio server --address ":443" /data'
    - restart_policy: always
If I do "docker logs minio" I just see http instead of https:
Endpoint: http://172.17.0.3:443 http://127.0.0.1:443
Both the public and private keys are mounted at the correct location inside the container, but Minio does not seem to recognize them ...
Can somebody help? Do I need to add some extra parameter here?
Thanks in advance
Per the docs (https://docs.minio.io/docs/how-to-secure-access-to-minio-server-with-tls.html), your keys must be named public.crt and private.key, respectively, and mounted at ~/.minio/certs (e.g. /root/.minio/certs). The CAs directory is for the public certs of other servers you want to trust, for example in a distributed setup.
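Applied to the Salt state in the question, that means mounting the directory one level deeper and making sure the file names match - a minimal sketch, assuming the cert and key already sit in /srv/salt/minio/certs on the host:

    - binds:
      - /root/backup:/data
      # mount into ~/.minio/certs (not ~/.minio), with the files inside
      # named exactly public.crt and private.key
      - /srv/salt/minio/certs:/root/.minio/certs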
You don't need to set up the cert in Minio. Use an nginx server and reverse-proxy the Minio port (e.g. 127.0.0.1:9000), then use the cert files in the nginx server block. That solves the whole problem.

How to add extra nodes to the certificate-authority-data from a self-signed k8s cluster?

I am trying to create an HA cluster with HAProxy in front of 3 master nodes.
On the proxy I am following the official documentation, High Availability Considerations / haproxy configuration. I am passing the SSL verification to the API server with the ssl-hello-chk option.
Having said that, I understand that in my ~/.kube/config file I am using the wrong certificate-authority-data, which I picked up from the prime master node, e.g.:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <something-something>
    server: https://ip:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: <something-something>
    client-key-data: <something-something>
    token: <something-something>
I found a relevant GitHub ticket, Unable to connect to the server: x509: certificate signed by unknown authority / onesolpark, which suggests that I should extract the certificate-authority-data of the proxy.
In that case I assume that I should extract the certificate-authority-data from one of the certs in /etc/kubernetes/pki/, most likely apiserver.*?
Any idea on this?
Thanks in advance for your time and effort.
Okay, I managed to figure it out.
When a k8s admin decides to create an HA cluster, they should have at minimum one LB, but ideally two LBs that are both able to load-balance towards all master nodes (3, 5, etc.).
So when the user wants to send a request to the API server on one of the master nodes, the request ideally goes through a virtual IP and is forwarded to one of the LBs; as a second step, the LB forwards the request to one of the master nodes.
The problem I wanted to solve is that the API server had no record of the IP of the LB(s).
As a result, the user gets the error Unable to connect to the server: x509: certificate signed by unknown authority.
The solution can be found in this relevant question: How can I add an additional IP / hostname to my Kubernetes certificate?.
The straight answer is to simply add the LB(s) to the kubeadm config file before launching the prime master node, e.g.:
apiServer:
  certSANs:
  - "ip-of-LB1"
  - "domain-of-LB1"
  - "ip-of-LB2"
  - "domain-of-LB2" # etc etc
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
But, as is also mentioned there, the detailed documentation can be found here: Adding a Name to the Kubernetes API Server Certificate.
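To double-check which names and IPs actually ended up in the API server certificate, you can inspect its SANs on a master node (a small sketch; the path assumes the default kubeadm PKI directory):

# print the Subject Alternative Names of the API server certificate
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'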
Also, if the user decides to create their own certificates instead of using the default self-signed certificates (generated by k8s by default), they can add the nodes manually as documented on the official site: Certificates.
Then, if you want to copy the ca.crt, it is under the default dir /etc/kubernetes/pki/ca.crt (unless defined differently), or the user can simply copy the ~/.kube/config file for kubectl communication.
Hope this helps someone else spend less time on this in the future.

kubernetes authentication against the API server

I have set up a Kubernetes cluster from scratch. This just means I did not use services provided by others, but used the k8s installer itself. Before, we used to have other clusters, but with providers, and they give you the TLS cert and key for auth, etc. Now that this cluster was set up by myself, I have access via kubectl:
$ kubectl get all
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   21h
$
I also tried this, and I can add a custom key, but then when I try to query via curl I get pods is forbidden: User "system:anonymous" cannot list resource "pods" in API group "" at the cluster scope.
I cannot figure out where to get the cert and key for a user to authenticate against the API with TLS auth. I have tried to understand the official docs, but have gotten nowhere. Can someone help me find where those files are, or how to add or get certificates that I can use for the REST API?
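What I imagine I am after is something like calling the API directly with a client certificate, roughly along these lines (my own rough sketch, untested; the file names are made up and the data is just the base64-encoded entries from my kubeconfig):

# decode the embedded credentials from the kubeconfig into PEM files
grep 'client-certificate-data' ~/.kube/config | awk '{print $2}' | base64 -d > client.crt
grep 'client-key-data' ~/.kube/config | awk '{print $2}' | base64 -d > client.key
grep 'certificate-authority-data' ~/.kube/config | awk '{print $2}' | base64 -d > ca.crt

# call the API with mutual TLS instead of hitting it as system:anonymous
curl --cacert ca.crt --cert client.crt --key client.key https://private_IP:6443/api/v1/namespaces/default/pods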
Edit 1: my ~/.kube/config file looks like this:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0t(...)=
    server: https://private_IP:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: LS0tLS(...)Qo=
    client-key-data: LS0(...)tCg==
It works from localhost just fine.
On the other hand, I noticed something: from localhost I can access the cluster by generating the token using this method.
Also note that for now I do not care about creating multiple roles for multiple users, etc. I just need access to the API from remote, and it can use "default" authentication or roles.
Now when I try to do the same from remote I get the following:
I tried using that config to run kubectl get all from remote; it runs for a while and then ends in Unable to connect to the server: dial tcp private_IP:6443: i/o timeout.
This happens because the config has private_IP, so I changed the IP to Public_IP:6443 and now get the following: Unable to connect to the server: x509: certificate is valid for some_private_IP, My_private_IP, not Public_IP.
Keep in mind that this is an AWS EC2 instance with an Elastic IP (you can think of an Elastic IP as just a public IP on a traditional setup, but this public IP sits on your public router, which routes requests to your actual server on the private network). For AWS fans, as I said, I cannot use the EKS service here.
So how do I get this to work with the public IP?
It seems your main problem is TLS server certificate validation.
One option is to tell kubectl to skip validation of the server certificate:
kubectl --insecure-skip-tls-verify ...
This obviously has the potential to be "insecure", but whether that matters depends on your use case.
Another option is to recreate the cluster with the public IP address added to the server certificate. It should also be possible to recreate only the certificate with kubeadm, without recreating the whole cluster. Details about the latter two points can be found in this answer.
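If the cluster was built with kubeadm, the certificate-only route looks roughly like this (a sketch, not verified against your exact version; back up /etc/kubernetes/pki first):

# remove the current API server cert/key and regenerate them with the
# public IP added as an extra SAN (kubeadm >= 1.13 phase syntax)
sudo rm /etc/kubernetes/pki/apiserver.crt /etc/kubernetes/pki/apiserver.key
sudo kubeadm init phase certs apiserver --apiserver-cert-extra-sans=Public_IP
# then restart the kube-apiserver static pod so it picks up the new certificate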
You need to set up RBAC for the user: define roles and role bindings. Follow this link for reference -> https://docs.bitnami.com/kubernetes/how-to/configure-rbac-in-your-kubernetes-cluster/
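A minimal sketch of what that could look like for a certificate-authenticated user (the user name, namespace, and permissions below are made up for illustration; the user name must match the CN of the client certificate):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: default
  name: read-pods
subjects:
- kind: User
  name: remote-user        # CN of the client certificate
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io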

OpenShift Pro - Using SSL in Passthrough Mode

By watching this video, OpenShift Demo Part 13: Using SSL, I have managed to get SSL working on my OpenShift Pro web app, but only by using the Edge termination type. I believe the Passthrough termination type is more secure, so I thought I'd try that too.
Unfortunately, I cannot get Passthrough to work. The video example uses a pre-written template to create the app, which I cannot use.
I believe the reason is that I have not specified the certificate and private key in the YAML:
spec:
  host: www.bookshop.com
  port:
    targetPort: 8080-tcp
  tls:
    insecureEdgeTerminationPolicy: Redirect
    termination: passthrough
  to:
    kind: Service
    name: web
    weight: 100
I have created a secret to hold the keystore.jks:
oc secrets new ssl-cert-secret /Users/infomedia/Clients/PJLocums/ssl/keystore.jks
And created a Service Account that specifies ssl-cert-secret:
oc create -f /Users/infomedia/Clients/PJLocums/ssl/ssl-cert-service-account.txt
And I'm guessing that one or other of the secret or the service account should be specified in the route YAML.
Any ideas or suggestions would be much appreciated.

setting up a drone server to use TLS/SSL

The default installation instructions show how to set up a server on port 80 using HTTP and WS (i.e. unencrypted).
The agent installation shows that TLS-enabled servers are possible (I'd link here, but I'm not allowed to).
The server configuration options show that DRONE_SERVER_CERT and DRONE_SERVER_KEY are available: http://readme.drone.io/0.5/install/server-configuration/
Are there any fuller instructions to set this up? e.g. have port 80 forward to port 443 and have all agents talking to the server over encrypted channels.
If you were using certificates with Drone 0.4, it will be the same configuration, although the names may have changed slightly. You will need to pass the following variables to your container:
DRONE_SERVER_CERT=/path/to/drone.cert
DRONE_SERVER_KEY=/path/to/drone.key
These certificates exist on your host machine, which means their paths need to be mounted into your Drone server container:
--volume=/path/to/drone.cert:/path/to/drone.cert
--volume=/path/to/drone.key:/path/to/drone.key
You can also instruct Docker to expose 443 and forward it to Drone's default port 8000:
-p 443:8000
When you configure the agent, you will of course need to update the configuration to use wss. You can read more in the agent docs, but essentially it looks something like this:
DRONE_SERVER=wss://drone.server.com/ws/broker
And finally, if you get cert errors, I recommend including the cert chain in your bundle. Bottom line: Drone does not parse certs. Drone uses http.ListenAndServeTLS(cert, key), so any cert issues come from the standard library directly, and questions should therefore be directed to the Go support channels.
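Putting those pieces together, a complete server invocation might look roughly like this (a sketch for Drone 0.5; the image tag, data volume, and any remaining environment variables are illustrative and need to match your existing setup):

docker run \
  --volume=/var/lib/drone:/var/lib/drone \
  --volume=/path/to/drone.cert:/path/to/drone.cert \
  --volume=/path/to/drone.key:/path/to/drone.key \
  --env=DRONE_SERVER_CERT=/path/to/drone.cert \
  --env=DRONE_SERVER_KEY=/path/to/drone.key \
  --publish=443:8000 \
  --detach=true \
  --restart=always \
  --name=drone \
  drone/drone:0.5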