I'm using Kubernetes to host my app, and I want to automatically generate and renew Let's Encrypt certificates using [cert-manager].
My project is open source and all Kubernetes configs are publicly available here. The domain I'm requesting a certificate for is pychat.org, and its DNS is managed by Cloudflare. Kubernetes is already set up, the domain points to the load balancer's IP address, and it serves index.html correctly.
So I'm following the guide:
1. Installing the cert-manager CRDs and cert-manager itself (the static manifest and the Helm template below largely install the same thing, so one of the two would have been enough):
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.8.0/cert-manager.yaml
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm template cert-manager jetstack/cert-manager --namespace cert-manager --version v1.8.0 | kubectl apply -f -
2. Defining the Cloudflare API token in cf-secret.yaml:
apiVersion: v1
kind: Secret
metadata:
  name: cloudflare-api-token-secret
type: Opaque
stringData:
  api-token: cf-token-copied-from-api
Defining the Issuer and Certificate in cert-manager.yaml:
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    email: deathangel908@gmail.com
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
      - dns01:
          cloudflare:
            email: deathangel908@gmail.com
            apiTokenSecretRef:
              name: cloudflare-api-token-secret
              key: api-token
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: pychat-domain
  namespace: pychat
spec:
  secretName: pychat-tls
  issuerRef:
    name: letsencrypt-prod
  duration: 2160h # 90 days
  renewBefore: 720h # renew 30 days before the certificate expires
  dnsNames:
    - "pychat.org"
    - "*.pychat.org"
3. If I check the certificate, it seems to have been generated correctly:
kubectl get secret pychat-tls -n default -o yaml

apiVersion: v1
data:
  tls.crt: LS0tLS1CRUdJTiB...
  tls.key: LS0tLS1CRUdJTiBSU0E...
metadata:
  annotations:
    cert-manager.io/alt-names: '*.pychat.org,pychat.org'
    cert-manager.io/certificate-name: pychat-domain
    cert-manager.io/common-name: pychat.org
    cert-manager.io/ip-sans: ""
    cert-manager.io/issuer-group: ""
    cert-manager.io/issuer-kind: Issuer
    cert-manager.io/issuer-name: letsencrypt-prod
    cert-manager.io/uri-sans: ""
  creationTimestamp: "2022-05-21T22:44:22Z"
  name: pychat-tls
  namespace: default
  resourceVersion: "1800"
  uid: f38c228b-b3a6-4649-aaf6-d9727685569c
type: kubernetes.io/tls
echo 'LS0tLS1CRUdJTiB...' | base64 -d > lol.cert
openssl x509 -in ./lol.cert -noout -text
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            04:b4:24:ae:61:c9:24:b6:50:5d:c2:50:0c:28:0f:c1:d5:17
        Signature Algorithm: sha256WithRSAEncryption
        Issuer: C = US, O = Let's Encrypt, CN = R3
        Validity
            Not Before: May 21 21:46:14 2022 GMT
            Not After : Aug 19 21:46:13 2022 GMT
        Subject: CN = pychat.org
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                RSA Public-Key: (2048 bit)
                Modulus:
                    00:c7:7f:08:....
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature, Key Encipherment
            X509v3 Extended Key Usage:
                TLS Web Server Authentication, TLS Web Client Authentication
            X509v3 Basic Constraints: critical
                CA:FALSE
            X509v3 Subject Key Identifier:
                38:41:8D:F9:5A...
            X509v3 Authority Key Identifier:
                keyid:14:2E:B3..
            Authority Information Access:
                OCSP - URI:http://r3.o.lencr.org
                CA Issuers - URI:http://r3.i.lencr.org/
            X509v3 Subject Alternative Name:
                DNS:*.pychat.org, DNS:pychat.org
            X509v3 Certificate Policies:
                Policy: 2.23.140.1.2.1
                Policy: 1.3.6.1.4.1.44947.1.1.1
                  CPS: http://cps.letsencrypt.org
            CT Precertificate SCTs:
                Signed Certificate Timestamp:
                    Version   : v1 (0x0)
                    Log ID    : 46:A5:55:..
                    Extensions: none
                    Signature : ecdsa-with-SHA256
                        30:45:02:21:00:...
                Signed Certificate Timestamp:
                    Version   : v1 (0x0)
                    Log ID    : 6F:53:76:A...
                    Timestamp : May 21 22:46:14.722 2022 GMT
                    Extensions: none
                    Signature : ecdsa-with-SHA256
                        30:46:02:...
    Signature Algorithm: sha256WithRSAEncryption
         6b:21:da:3a:ea:d8:...
If I check the ingress, it shows this:
$ kubectl describe ingress ingress
Name:             ingress
Labels:           <none>
Namespace:        pychat
Address:          194.195.247.104
Default backend:  frontend-service:80 (10.2.0.15:80)
TLS:
  pychat-tls terminates pychat.org
Rules:
  Host        Path  Backends
  ----        ----  --------
  pychat.org
              /api   backend-service:8888 (10.2.0.16:8888,10.2.0.19:8888)
              /ws    backend-service:8888 (10.2.0.16:8888,10.2.0.19:8888)
              /      frontend-service:80 (10.2.0.15:80)
Annotations:  cert-manager.io/issuer: letsencrypt-prod
Events:
  Type    Reason             Age                From                       Message
  ----    ------             ---                ----                       -------
  Normal  CreateCertificate  15m                cert-manager-ingress-shim  Successfully created Certificate "pychat-tls"
  Normal  Sync               15m (x2 over 15m)  nginx-ingress-controller   Scheduled for sync
But if I make a request, it returns the self-signed default certificate of the ingress controller:
$ curl -vk https://pychat.org/ 2>&1 | grep -e subject: -e issuer:
* subject: O=Acme Co; CN=Kubernetes Ingress Controller Fake Certificate
* issuer: O=Acme Co; CN=Kubernetes Ingress Controller Fake Certificate
The server is hosted on Linode and uses its load balancer, if that matters:
helm repo add stable https://charts.helm.sh/stable
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install nginx-ingress ingress-nginx/ingress-nginx
The ingress.yaml configuration looks like this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/issuer: "letsencrypt-prod"
  name: ingress
  namespace: pychat
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - pychat.org
      secretName: pychat-tls
  defaultBackend:
    service:
      name: frontend-service
      port:
        number: 80
  rules:
    - host: pychat.org
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: backend-service
                port:
                  number: 8888
          - path: /ws
            pathType: Prefix
            backend:
              service:
                name: backend-service
                port:
                  number: 8888
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend-service
                port:
                  number: 80
Found the issue; the guides just don't tell you much about it:
All the resources you create should be in the same namespace as your server: the cloudflare-api-token-secret Secret, the letsencrypt-prod Issuer, and the pychat-domain Certificate should all have metadata -> namespace set to your server's namespace, rather than the cert-manager or default one.
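For illustration, a minimal sketch of the corrected layout, assuming the `pychat` namespace used by the Ingress above (the omitted spec fields stay exactly as in the manifests earlier):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: cloudflare-api-token-secret
  namespace: pychat        # was implicitly "default" before
type: Opaque
stringData:
  api-token: cf-token-copied-from-api
---
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: letsencrypt-prod
  namespace: pychat        # Issuer is namespaced; it must sit next to the Certificate
spec:
  acme:
    # ...same acme block as above...
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: pychat-domain
  namespace: pychat        # already correct in the original manifest
spec:
  secretName: pychat-tls   # the resulting Secret lands in pychat, where the Ingress can find it
  # ...rest as above...
```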
Further tip: to debug this issue, I took a look at the pod logs:
kubectl get all -n cert-manager
kubectl logs pod/cert-manager-id-from-top -n cert-manager
Related
Hi team, I have followed this link to configure cert-manager for my Istio setup, but I am still not able to access the app through the Istio ingress.
My manifest file looks like this:
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: test-cert
  namespace: testing
spec:
  secretName: test-cert
  dnsNames:
    - "example.com"
  issuerRef:
    name: test-letsencrypt
    kind: ClusterIssuer
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: test-letsencrypt
  namespace: testing
spec:
  acme:
    email: abc@example.com
    privateKeySecretRef:
      name: testing-letsencrypt-private-key
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    solvers:
      - http01:
          ingress:
            class: istio
        selector: {}
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  annotations:
    certmanager.k8s.io/acme-challenge-type: http01
    certmanager.k8s.io/cluster-issuer: test-letsencrypt
  name: test-gateway
  namespace: testing
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
    - port:
        number: 443
        name: https
        protocol: HTTPS
      hosts:
        - "example.com"
      tls:
        mode: SIMPLE
        credentialName: test-cert
Can anyone help me with what I am missing here?
Error from the browser:
Secure Connection Failed
An error occurred during a connection to skydeck-test.asteria.co.in. PR_CONNECT_RESET_ERROR
The page you are trying to view cannot be shown because the authenticity of the received data could not be verified.
Please contact the web site owners to inform them of this problem.
Learn more…
These are a few logs that may be helpful:
  Normal  Generated  5m13s  cert-manager  Stored new private key in temporary Secret resource "test-cert-sthkc"
  Normal  Requested  5m13s  cert-manager  Created new CertificateRequest resource "test-cert-htxcr"
  Normal  Issuing    4m33s  cert-manager  The certificate has been successfully issued
samirparhi@Samirs-Mac ~ % k get certificate -n testing
NAME        READY   SECRET      AGE
test-cert   True    test-cert   19m
Note: this namespace (testing) has Istio sidecar injection enabled, and all HTTP requests work, but when I try to set up HTTPS it fails.
I encountered the same problem when my certificate was signed by me instead of being authenticated by a trusted third party. I had to add an exception to my browser in order to access the site, so it was really just a money issue.
I was also able to add my certificate to the /etc/ssl directory of the client machine to connect without problems.
I was also able to add certificates by using TLS secrets and referencing them from my virtual service configuration. You can try that too.
Examples:
TLS Secret:
kubectl create -n istio-system secret tls my-tls-secret --key=www.example.com.key --cert=www.example.com.crt
I assumed that you already have your certificate and its key, but in case you need them:
Certificate creation:
openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -subj '/O=My Company Inc./CN=example.com' -keyout example.com.key -out example.com.crt
openssl req -out www.example.com.csr -newkey rsa:2048 -nodes -keyout www.example.com.key -subj "/CN=www.example.com/O=World Wide Example organization"
openssl x509 -req -days 365 -CA example.com.crt -CAkey example.com.key -set_serial 0 -in www.example.com.csr -out www.example.com.crt
Just don't forget to fill in the -subj fields in a reasonable manner; as I understand it, they are what carries authenticity when it comes to SSL certs. For example, the first certificate-creation line creates a key and certificate for your organisation, which is not approved by any authority to be added to Mozilla's, Chrome's, or your OS's CA database.
That is why you get the "Untrusted certificate" message. For that reason, you can create your DNS records in a trusted third party's DNS zone and, by paying them, use their trusted organisation certificates to authenticate your own site.
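Before loading the files into the secret, the chain can be sanity-checked locally with `openssl verify` (same commands as above, re-run here so the snippet is self-contained; on success the last command prints `www.example.com.crt: OK`):

```shell
# Recreate the self-made CA and the leaf certificate from the commands above
openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 \
  -subj '/O=My Company Inc./CN=example.com' \
  -keyout example.com.key -out example.com.crt
openssl req -out www.example.com.csr -newkey rsa:2048 -nodes \
  -keyout www.example.com.key \
  -subj "/CN=www.example.com/O=World Wide Example organization"
openssl x509 -req -days 365 -CA example.com.crt -CAkey example.com.key \
  -set_serial 0 -in www.example.com.csr -out www.example.com.crt

# Verify that the leaf really chains to the self-made CA
openssl verify -CAfile example.com.crt www.example.com.crt
```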
Gateway:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: mygateway
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 443
        name: https
        protocol: HTTPS
      tls:
        mode: SIMPLE
        credentialName: my-tls-secret # must match the secret name
      hosts:
        - www.example.com
Hope it helps.
Feel free to share "app" details.
I am using an AWS NLB, and therefore SSL should terminate on the argocd (1.7.8) side. However, no matter what I do, Argo CD always uses a self-signed cert.
➜ curl -vvI https://argocd-dev.example.com
* Trying 54.18.49.47:443...
* Connected to argocd-dev.example.com (54.18.49.47) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/pki/tls/certs/ca-bundle.crt
CApath: none
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (OUT), TLS alert, unknown CA (560):
* SSL certificate problem: self signed certificate
* Closing connection 0
curl: (60) SSL certificate problem: self signed certificate
More details here: https://curl.haxx.se/docs/sslcerts.html
curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.
This is my ingress:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: argocd-server-ingress
  namespace: argocd
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/proxy-body-size: 100m
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  rules:
    - host: argocd-dev.example.com
      http:
        paths:
          - backend:
              serviceName: argocd-server
              servicePort: https
This is how I start argocd-server:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/component: server
    app.kubernetes.io/name: argocd-server
    app.kubernetes.io/part-of: argocd
  name: argocd-server
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: argocd-server
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
  template:
    metadata:
      labels:
        app.kubernetes.io/name: argocd-server
    spec:
      containers:
        - command:
            - argocd-server
            - --staticassets
            - /shared/app
            - --loglevel
            - debug
            - --client-certificate
            - /var/ssl-cert/tls.crt
            - --client-key
            - /var/ssl-cert/tls.key
          image: argoproj/argocd:v1.7.8
          imagePullPolicy: Always
          name: argocd-server
          ports:
            - containerPort: 8080
            - containerPort: 8083
          readinessProbe:
            httpGet:
              path: /healthz
              port: 8080
            initialDelaySeconds: 3
            periodSeconds: 30
          volumeMounts:
            - mountPath: /app/config/ssh
              name: ssh-known-hosts
            - mountPath: /app/config/tls
              name: tls-certs
            - mountPath: /var/ssl-cert
              name: ssl-cert
              readOnly: true
      serviceAccountName: argocd-server
      volumes:
        - emptyDir: {}
          name: static-files
        - configMap:
            name: argocd-ssh-known-hosts-cm
          name: ssh-known-hosts
        - configMap:
            name: argocd-tls-certs-cm
          name: tls-certs
        - name: ssl-cert
          secret:
            secretName: tls-secret
You should take a look at https://argoproj.github.io/argo-cd/operator-manual/ingress/
ArgoCD requires some unusual configuration.
Either you need to start Argo in HTTP (insecure) mode, if your load balancer is doing the SSL termination, or you need to pass your secret into the Kubernetes Ingress.
Argo CD expects the certificate and its key in the tls.crt and tls.key keys of the argocd-secret Secret: https://github.com/argoproj/argo-cd/blob/master/docs/operator-manual/argocd-secret.yaml#L11
A restart is not required; the new certificate is used as soon as the secret is updated.
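For the first option (the NLB terminates TLS), a sketch of the relevant change to the Deployment above is to drop the client-certificate flags and run the server in insecure mode (`--insecure` is a real argocd-server flag; TLS then happens only at the load balancer):

```yaml
      containers:
        - command:
            - argocd-server
            - --staticassets
            - /shared/app
            - --insecure   # serve plain HTTP; the NLB terminates TLS in front
          image: argoproj/argocd:v1.7.8
```

With this, the `ssl-passthrough` annotation on the Ingress should also be removed, since there is no longer TLS to pass through.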
I am trying to create an API that serves HTTPS traffic publicly and is reachable by an IP address (not a domain), using a GKE cluster. The Docker images have been tested locally; they were capable of serving HTTPS on their own, but as far as I've come to realize, this is not necessary for the setup I'm imagining.
What I've come up with so far is a Kubernetes Service exposing port 8443 and an Ingress load balancer mapping to that port, using self-signed certificates created with this tutorial (the basic-ingress-secret referred to in the template). The only thing I skipped is the domain binding, since I don't own a domain. I hoped the certificate would be bound to the external IP, but unfortunately that's not the case (I have tried putting the IP in the certificate's CN, as some users suggested here).
This is my yaml for service:
apiVersion: v1
kind: Service
metadata:
  name: some-node
spec:
  selector:
    app: some
  ports:
    - protocol: "TCP"
      port: 8443
      targetPort: 8443
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: some-node-deploy
spec:
  selector:
    matchLabels:
      app: some
  replicas: 3
  template:
    metadata:
      labels:
        app: some
    spec:
      containers:
        - name: some-container
          image: "gcr.io/some-27417/some:latest"
This is my yaml for Ingress:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: basic-ingress
  annotations:
    kubernetes.io/ingress.allow-http: "false"
spec:
  tls:
    - secretName: basic-ingress-secret
  rules:
    - http:
        paths:
          - path: /
            backend:
              serviceName: some-node
              servicePort: 8443
It is way simpler than the documentation makes it look.
1.- Create the self-signed certs
openssl req -newkey rsa:2048 -nodes -keyout tls.key -out tls.csr
openssl x509 -in tls.csr -out tls.crt -req -signkey tls.key
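The two commands above prompt interactively for the subject fields; a non-interactive variant (with a hypothetical CN filled in via -subj) plus a quick inspection of the result looks like this:

```shell
# Create key + CSR without prompts, then self-sign the CSR for one year
openssl req -newkey rsa:2048 -nodes -subj '/CN=my-service.local' \
  -keyout tls.key -out tls.csr
openssl x509 -in tls.csr -out tls.crt -req -signkey tls.key -days 365

# Check what ended up in the cert before feeding it to the Secret
openssl x509 -in tls.crt -noout -subject -dates
```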
2.- Create the secret
kubectl create secret tls basic-ingress-secret --cert tls.crt --key tls.key
3.- Create the Ingress object
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: basic-ingress
  annotations:
    kubernetes.io/ingress.allow-http: "false"
spec:
  tls:
    - secretName: basic-ingress-secret
  rules:
    - http:
        paths:
          - path: /
            backend:
              serviceName: some-node
              servicePort: 8443
Note: To make it work on GKE, your service must be of type NodePort:
$ kubectl describe svc some-node
Name:                     some-node
Namespace:                default
Labels:                   run=nginx
Annotations:              <none>
Selector:                 run=nginx
Type:                     NodePort
IP:                       10.60.6.214
Port:                     <unset>  8443/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  30250/TCP
Endpoints:                10.56.0.17:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
I have found a final solution that suits my current needs.
The issue with the setup above was that I didn't have a nodePort value set, and the SSL certificate was not working properly, so I purchased a domain, secured a static IP for the Ingress load balancer using gcloud compute addresses create some-static-ip --global, and pointed the domain at that IP. I then created a self-signed SSL certificate again, following this tutorial and using my new domain.
The final yaml for the service:
apiVersion: v1
kind: Service
metadata:
  name: some-node
spec:
  selector:
    app: some
  ports:
    - port: 80
      targetPort: 30041
      nodePort: 30041
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: some-node-deploy
spec:
  selector:
    matchLabels:
      app: some
  replicas: 3
  template:
    metadata:
      labels:
        app: some
    spec:
      containers:
        - name: some-container
          image: "gcr.io/some-project/some:v1"
The final yaml for the LB:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: basic-ingress
  annotations:
    kubernetes.io/ingress.class: "gce"
    kubernetes.io/ingress.global-static-ip-name: "some-static-ip"
    kubernetes.io/ingress.allow-http: "false"
spec:
  tls:
    - secretName: basic-ingress-secret
  rules:
    - host: my.domain.com
      http:
        paths:
          - path: /some/*
            backend:
              serviceName: some-node
              servicePort: 80
The service now serves HTTPS traffic (port 443 only) on my.domain.com/some/* (placeholder domain and path). It is a simple HTTPS service setup that would only require purchasing a CA-issued SSL certificate and a proper scaling configuration to be fully production-ready from a DevOps standpoint, unless someone has found serious drawbacks to this setup.
I am running a bare-metal Kubernetes cluster with nginx ingress and MetalLB, and some hostnames mapped to the external IP provided by MetalLB.
I have created an nginx deployment, exposed it via a service, and created an ingress with the hostname.
I have created a self-signed certificate with openssl:
openssl req -x509 -newkey rsa:4096 -sha256 -nodes -keyout tls.key -out tls.crt -subj "/CN=fake.example.com" -days 365
Then I created a secret in the correct namespace:
kubectl -n demo create secret tls fake-self-secret --cert=tls.crt --key=tls.key
Then I created the ingress:
apiVersion: v1
items:
  - apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      annotations:
        kubernetes.io/ingress.class: nginx
        nginx.ingress.kubernetes.io/ssl-redirect: "false"
      name: demo-ingress
      namespace: demo
    spec:
      rules:
        - host: fake.example.com
          http:
            paths:
              - backend:
                  serviceName: nginx
                  servicePort: 80
                path: /
      tls:
        - hosts:
            - fake.example.com
          secretName: fake-self-secret
HTTP works (because of the ssl-redirect: "false" annotation), but HTTPS returns SSL_ERROR_RX_RECORD_TOO_LONG. In the nginx ingress controller log I see something like:
openssl s_client -connect fake.example.com:443 -servername fake.example.com -crlf
140027703674768:error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol:s23_clnt.c:794:
The nginx ingress-controller version is 0.30, with the default configuration; the ssl-protocols enabled in the configmap are TLSv1 TLSv1.1 TLSv1.2 TLSv1.3.
Any help / new ideas are welcome :)
I have switched from the kubernetes nginx ingress controller to the NGINX Ingress Controller (version nginx/nginx-ingress:1.7.0), and the config works.
We purchased a Komodo SSL certificate, which came in 5 files:
I am looking for a guide on how to apply it to our Kubernetes Ingress.
As described in the documentation:
you need to create a secret with your cert:
apiVersion: v1
kind: Secret
metadata:
  name: secret-tls
  namespace: default
type: kubernetes.io/tls
data:
  tls.crt: base64_encoded_content_of_file_condohub_com_br.crt
  tls.key: base64_encoded_content_of_file_HSSL-5beedef526b9e.key
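Note that values under `data:` must be base64-encoded (alternatively, `kubectl create secret tls secret-tls --cert=condohub_com_br.crt --key=HSSL-5beedef526b9e.key` builds the Secret and does the encoding for you). Encoding by hand looks like this (placeholder file contents are used here so the commands run as-is; substitute the real Komodo files):

```shell
# Placeholders standing in for the real certificate and key files
printf 'certificate contents' > condohub_com_br.crt
printf 'private key contents' > HSSL-5beedef526b9e.key

# Single-line base64 (no wrapping) for the Secret's data: fields
base64 -w0 condohub_com_br.crt; echo
base64 -w0 HSSL-5beedef526b9e.key; echo
```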
and then update your ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: tls-example-ingress
spec:
  tls:
    - hosts:
        - your.amazing.host.com
      secretName: secret-tls
  rules:
    - host: your.amazing.host.com
      http:
        paths:
          - path: /
            backend:
              serviceName: service1
              servicePort: 80
The Ingress will then use the certs from the secret.