We purchased a Comodo SSL certificate, which came as 5 files:
I am looking for a guide for how to apply it on our Kubernetes Ingress.
As described in the documentation, you need to create a Secret with your cert:
apiVersion: v1
data:
  tls.crt: content_of_file_condohub_com_br.crt
  tls.key: content_of_file_HSSL-5beedef526b9e.key
kind: Secret
metadata:
  name: secret-tls
  namespace: default
type: Opaque
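In practice it is usually easier to let kubectl build this Secret for you: the values under data: have to be base64-encoded, and kubectl create secret tls handles that (and sets type: kubernetes.io/tls) automatically. A minimal sketch, assuming the file names from the question (if the CA bundle came as separate files, concatenate the intermediates after the leaf certificate in the .crt file first):

kubectl create secret tls secret-tls \
  --cert=condohub_com_br.crt \
  --key=HSSL-5beedef526b9e.key \
  --namespace=default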
and then update your ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: tls-example-ingress
spec:
  tls:
  - hosts:
    - your.amazing.host.com
    secretName: secret-tls
  rules:
  - host: your.amazing.host.com
    http:
      paths:
      - path: /
        backend:
          serviceName: service1
          servicePort: 80
The Ingress will then use the certificate from that Secret.
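To verify which certificate the ingress actually serves, a quick check along these lines (using the placeholder host from the example above) can help:

curl -vk https://your.amazing.host.com 2>&1 | grep -e subject: -e issuer: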
Related
I'm using Kubernetes to host my app. I want to automatically generate and renew Let's Encrypt certificates using cert-manager.
My project is open-source and all Kubernetes configs are publicly available here. The domain I'm requesting a certificate for is pychat.org, and it is managed by Cloudflare. Kubernetes is already set up, the domain points to the load-balancer IP address, and it returns index.html correctly.
So I'm following the guide:
1. Installing the cert-manager CRDs and the cert-manager controller itself:
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.8.0/cert-manager.yaml
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm template cert-manager jetstack/cert-manager --namespace cert-manager --version v1.8.0 | kubectl apply -f -
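A quick way to sanity-check this step (assuming the default cert-manager namespace) is:

kubectl get pods -n cert-manager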
2. Defining the Cloudflare API token in cf-secret.yaml:
apiVersion: v1
kind: Secret
metadata:
  name: cloudflare-api-token-secret
type: Opaque
stringData:
  api-token: cf-token-copied-from-api
Defining the Issuer and Certificate in cert-manager.yaml:
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    email: deathangel908@gmail.com
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - dns01:
        cloudflare:
          email: deathangel908@gmail.com
          apiTokenSecretRef:
            name: cloudflare-api-token-secret
            key: api-token
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: pychat-domain
  namespace: pychat
spec:
  secretName: pychat-tls
  issuerRef:
    name: letsencrypt-prod
  duration: 2160h # 90d
  renewBefore: 720h # renew 30d before the cert expires
  dnsNames:
  - "pychat.org"
  - "*.pychat.org"
3. If I check the certificate, it seems to be generated correctly:
kubectl get secret pychat-tls -n default -o yaml
apiVersion: v1
data:
  tls.crt: LS0tLS1CRUdJTiB...
  tls.key: LS0tLS1CRUdJTiBSU0E...
metadata:
  annotations:
    cert-manager.io/alt-names: '*.pychat.org,pychat.org'
    cert-manager.io/certificate-name: pychat-domain
    cert-manager.io/common-name: pychat.org
    cert-manager.io/ip-sans: ""
    cert-manager.io/issuer-group: ""
    cert-manager.io/issuer-kind: Issuer
    cert-manager.io/issuer-name: letsencrypt-prod
    cert-manager.io/uri-sans: ""
  creationTimestamp: "2022-05-21T22:44:22Z"
  name: pychat-tls
  namespace: default
  resourceVersion: "1800"
  uid: f38c228b-b3a6-4649-aaf6-d9727685569c
type: kubernetes.io/tls
echo 'tls.crt...LS0tLS1CRUdJTiB' |base64 -d > lol.cert
openssl x509 -in ./lol.cert -noout -text
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            04:b4:24:ae:61:c9:24:b6:50:5d:c2:50:0c:28:0f:c1:d5:17
        Signature Algorithm: sha256WithRSAEncryption
        Issuer: C = US, O = Let's Encrypt, CN = R3
        Validity
            Not Before: May 21 21:46:14 2022 GMT
            Not After : Aug 19 21:46:13 2022 GMT
        Subject: CN = pychat.org
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                RSA Public-Key: (2048 bit)
                Modulus:
                    00:c7:7f:08:....
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature, Key Encipherment
            X509v3 Extended Key Usage:
                TLS Web Server Authentication, TLS Web Client Authentication
            X509v3 Basic Constraints: critical
                CA:FALSE
            X509v3 Subject Key Identifier:
                38:41:8D:F9:5A...
            X509v3 Authority Key Identifier:
                keyid:14:2E:B3..
            Authority Information Access:
                OCSP - URI:http://r3.o.lencr.org
                CA Issuers - URI:http://r3.i.lencr.org/
            X509v3 Subject Alternative Name:
                DNS:*.pychat.org, DNS:pychat.org
            X509v3 Certificate Policies:
                Policy: 2.23.140.1.2.1
                Policy: 1.3.6.1.4.1.44947.1.1.1
                  CPS: http://cps.letsencrypt.org
            CT Precertificate SCTs:
                Signed Certificate Timestamp:
                    Version   : v1 (0x0)
                    Log ID    : 46:A5:55:..
                    Extensions: none
                    Signature : ecdsa-with-SHA256
                        30:45:02:21:00:...
                Signed Certificate Timestamp:
                    Version   : v1 (0x0)
                    Log ID    : 6F:53:76:A...
                    Timestamp : May 21 22:46:14.722 2022 GMT
                    Extensions: none
                    Signature : ecdsa-with-SHA256
                        30:46:02:...
    Signature Algorithm: sha256WithRSAEncryption
         6b:21:da:3a:ea:d8:...
If I check the ingress, it shows this:
$ kubectl describe ingress ingress
Name:             ingress
Labels:           <none>
Namespace:        pychat
Address:          194.195.247.104
Default backend:  frontend-service:80 (10.2.0.15:80)
TLS:
  pychat-tls terminates pychat.org
Rules:
  Host        Path  Backends
  ----        ----  --------
  pychat.org
              /api   backend-service:8888 (10.2.0.16:8888,10.2.0.19:8888)
              /ws    backend-service:8888 (10.2.0.16:8888,10.2.0.19:8888)
              /      frontend-service:80 (10.2.0.15:80)
Annotations:  cert-manager.io/issuer: letsencrypt-prod
Events:
  Type    Reason             Age                From                       Message
  ----    ------             ----               ----                       -------
  Normal  CreateCertificate  15m                cert-manager-ingress-shim  Successfully created Certificate "pychat-tls"
  Normal  Sync               15m (x2 over 15m)  nginx-ingress-controller   Scheduled for sync
But if I make a request, it returns the self-signed default Kubernetes certificate:
$ curl -vk https://pychat.org/ 2>&1 | grep -e subject: -e issuer:
* subject: O=Acme Co; CN=Kubernetes Ingress Controller Fake Certificate
* issuer: O=Acme Co; CN=Kubernetes Ingress Controller Fake Certificate
The server is hosted on Linode and uses its load balancer, if that matters:
helm repo add stable https://charts.helm.sh/stable
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install nginx-ingress ingress-nginx/ingress-nginx
The ingress.yaml configuration looks like this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/issuer: "letsencrypt-prod"
  name: ingress
  namespace: pychat
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - pychat.org
    secretName: pychat-tls
  defaultBackend:
    service:
      name: frontend-service
      port:
        number: 80
  rules:
  - host: pychat.org
    http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: backend-service
            port:
              number: 8888
      - path: /ws
        pathType: Prefix
        backend:
          service:
            name: backend-service
            port:
              number: 8888
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend-service
            port:
              number: 80
Found the issue; the guides just don't tell you much about it:
All the resources you create should be in the same namespace as your server: the cloudflare-api-token-secret Secret, the letsencrypt-prod Issuer, and the pychat-domain Certificate should all have metadata -> namespace set to your server's namespace, rather than the cert-manager or default one.
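In other words, the metadata of the Secret and the Issuer should look roughly like this (a sketch of the fix described above, using the pychat namespace from this setup):

apiVersion: v1
kind: Secret
metadata:
  name: cloudflare-api-token-secret
  namespace: pychat
---
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: letsencrypt-prod
  namespace: pychat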
A further tip: to debug this issue, I took a look at the pod logs:
kubectl get all -n cert-manager
kubectl logs pod/cert-manager-id-from-top -n cert-manager
I am using an AWS NLB, and therefore SSL should be handled on the Argo CD (1.7.8) side. However, no matter what I do, Argo CD always uses a self-signed certificate.
➜ curl -vvI https://argocd-dev.example.com
* Trying 54.18.49.47:443...
* Connected to argocd-dev.example.com (54.18.49.47) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/pki/tls/certs/ca-bundle.crt
CApath: none
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (OUT), TLS alert, unknown CA (560):
* SSL certificate problem: self signed certificate
* Closing connection 0
curl: (60) SSL certificate problem: self signed certificate
More details here: https://curl.haxx.se/docs/sslcerts.html
curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.
This is my ingress:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: argocd-server-ingress
  namespace: argocd
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/proxy-body-size: 100m
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  rules:
  - host: argocd-dev.example.com
    http:
      paths:
      - backend:
          serviceName: argocd-server
          servicePort: https
This is how I start argocd-server:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/component: server
    app.kubernetes.io/name: argocd-server
    app.kubernetes.io/part-of: argocd
  name: argocd-server
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: argocd-server
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
  template:
    metadata:
      labels:
        app.kubernetes.io/name: argocd-server
    spec:
      containers:
      - command:
        - argocd-server
        - --staticassets
        - /shared/app
        - --loglevel
        - debug
        - --client-certificate
        - /var/ssl-cert/tls.crt
        - --client-key
        - /var/ssl-cert/tls.key
        image: argoproj/argocd:v1.7.8
        imagePullPolicy: Always
        name: argocd-server
        ports:
        - containerPort: 8080
        - containerPort: 8083
        readinessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 3
          periodSeconds: 30
        volumeMounts:
        - mountPath: /app/config/ssh
          name: ssh-known-hosts
        - mountPath: /app/config/tls
          name: tls-certs
        - mountPath: /var/ssl-cert
          name: ssl-cert
          readOnly: true
      serviceAccountName: argocd-server
      volumes:
      - emptyDir: {}
        name: static-files
      - configMap:
          name: argocd-ssh-known-hosts-cm
        name: ssh-known-hosts
      - configMap:
          name: argocd-tls-certs-cm
        name: tls-certs
      - name: ssl-cert
        secret:
          secretName: tls-secret
You should take a look at https://argoproj.github.io/argo-cd/operator-manual/ingress/
Argo CD requires some unusual configuration.
Either you need to start Argo CD in HTTP (insecure) mode, if your load balancer is doing the SSL termination, or you need to pass your secret to the Kubernetes Ingress.
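For the first option, the usual approach is to run argocd-server with its insecure flag so that it serves plain HTTP behind the TLS-terminating load balancer. A sketch of the relevant part of the container command (the --insecure flag is the standard Argo CD option for this; verify it against your Argo CD version):

containers:
- command:
  - argocd-server
  - --staticassets
  - /shared/app
  - --insecure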
For the second option, Argo CD expects the certificate and its key in the tls.crt and tls.key keys of the argocd-secret Secret: https://github.com/argoproj/argo-cd/blob/master/docs/operator-manual/argocd-secret.yaml#L11
A restart is not required; the new certificate should be used as soon as the secret is updated.
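One way to drop an existing certificate and key into that Secret without editing it by hand (a sketch, assuming tls.crt and tls.key files in the current directory):

kubectl -n argocd patch secret argocd-secret \
  --patch "$(kubectl create secret generic argocd-secret --from-file=tls.crt --from-file=tls.key --dry-run=client -o yaml)"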
I'm trying to set up Istio (v1.7.3) on AKS (v1.16.13) in such a way that, for all HTTPS requests within my domain, the TLS handshake is performed transparently by the egress gateway.
I ended up with something like this (abc.mydomain.com is an external URL, which is why I created a ServiceEntry for it):
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: istio-egressgateway
spec:
  selector:
    istio: egressgateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    hosts:
    - "*.mydomain.com"
    tls:
      mode: PASSTHROUGH
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: egressgateway-for-mydomain
spec:
  host: istio-egressgateway.istio-system.svc.cluster.local
  subsets:
  - name: mydomain
    trafficPolicy:
      tls:
        mode: SIMPLE
        caCertificates: /etc/istio/mydomain-ca-certs/mydomain.crt
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: direct-mydomain-through-egress-gateway
spec:
  hosts:
  - "*.mydomain.com"
  gateways:
  - mesh
  - istio-egressgateway
  tls:
  - match:
    - gateways:
      - mesh
      port: 443
      sniHosts:
      - "*.mydomain.com"
    route:
    - destination:
        host: istio-egressgateway.istio-system.svc.cluster.local
        subset: mydomain
        port:
          number: 443
      weight: 100
  - match:
    - gateways:
      - istio-egressgateway
      port: 443
      sniHosts:
      - "*.mydomain.com"
    route:
    - destination:
        host: abc.mydomain.com
        port:
          number: 443
      weight: 100
---
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: www-mydomain
spec:
  hosts:
  - abc.mydomain.com
  ports:
  - number: 443
    name: https
    protocol: HTTPS
  resolution: DNS
I have mounted my certificate in the egress gateway and verified it with: kubectl exec -n istio-system "$(kubectl -n istio-system get pods -l istio=egressgateway -o jsonpath='{.items[0].metadata.name}')" -- ls -al /etc/istio/mydomain-ca-certs
I’m getting the following when invoking curl -vvI https://abc.mydomain.com from one of the pods running in another namespace:
curl: (35) OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to abc.mydomain.com:443
I've also tried what's described here (Trust custom Root CA on Egress Gateway), but I'm getting the same error as above.
Any idea what I might be doing wrong?
UPDATE1
Here is the output of istioctl proxy-status (the egress gateway's RDS is STALE):
istio-egressgateway-695dc4fc7c-p5p42.istio-system SYNCED SYNCED SYNCED STALE istiod-5c6b7b5b8f-csggg 1.7.3 41d
istio-ingressgateway-5689f7c67-j54m7.istio-system SYNCED SYNCED SYNCED SYNCED istiod-5c6b7b5b8f-csggg 1.7.3 118d
Output of curl -vvI https://abc.mydomain.com:
* Expire in 0 ms for 1 (transfer 0x55ce54104f50)
* Trying 10.223.24.254...
* TCP_NODELAY set
* Expire in 200 ms for 4 (transfer 0x55ce54104f50)
* Connected to abc.mydomain.com (10.223.24.254) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: none
CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to abc.mydomain.com:443
* Closing connection 0
curl: (35) OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to abc.mydomain.com:443
Output of openssl s_client -connect abc.mydomain.com:443
CONNECTED(00000003)
write:errno=104
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 0 bytes and written 328 bytes
Verification: OK
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
Early data was not sent
Verify return code: 0 (ok)
---
I am trying to create an API that serves HTTPS traffic publicly and is reachable by an IP address (not a domain), using a GKE cluster. The Docker images have been tested locally. They were capable of serving HTTPS on their own, but as far as I've come to realize, this is not necessary for the setup I am imagining.
So what I've come up with so far is a Kubernetes Service exposing its 8443 port, and an Ingress load balancer mapping to that port and using self-signed certificates created with this tutorial - the basic-ingress-secret referred to in the template. The only thing I have skipped is the domain binding, given that I do not own a domain. I hoped it would bind the certificate to the external IP, but this is unfortunately not the case (I have tried attaching the IP to the CN of the certificate, as some users have noted here).
This is my YAML for the Service:
apiVersion: v1
kind: Service
metadata:
  name: some-node
spec:
  selector:
    app: some
  ports:
  - protocol: "TCP"
    port: 8443
    targetPort: 8443
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: some-node-deploy
spec:
  selector:
    matchLabels:
      app: some
  replicas: 3
  template:
    metadata:
      labels:
        app: some
    spec:
      containers:
      - name: some-container
        image: "gcr.io/some-27417/some:latest"
This is my YAML for the Ingress:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: basic-ingress
  annotations:
    kubernetes.io/ingress.allow-http: "false"
spec:
  tls:
  - secretName: basic-ingress-secret
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: some-node
          servicePort: 8443
It is way simpler than the documentation makes it seem.
1.- Create the self-signed certs
openssl req -newkey rsa:2048 -nodes -keyout tls.key -out tls.csr
openssl x509 -in tls.csr -out tls.crt -req -signkey tls.key
2.- Create the secret
kubectl create secret tls basic-ingress-secret --cert tls.crt --key tls.key
3.- Create the Ingress object
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: basic-ingress
  annotations:
    kubernetes.io/ingress.allow-http: "false"
spec:
  tls:
  - secretName: basic-ingress-secret
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: some-node
          servicePort: 8443
Note: To make it work on GKE, your Service must be of type NodePort:
$ kubectl describe svc some-node
Name: some-node
Namespace: default
Labels: run=nginx
Annotations: <none>
Selector: run=nginx
Type: NodePort
IP: 10.60.6.214
Port: <unset> 8443/TCP
TargetPort: 80/TCP
NodePort: <unset> 30250/TCP
Endpoints: 10.56.0.17:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
I have found a final solution that suits my current needs.
The issue with the setup above was that I didn't have a nodePort: value set, and the SSL certificate was not working properly, so I purchased a domain, secured a static IP for the Ingress load balancer using gcloud compute addresses create some-static-ip --global, and pointed that domain to the IP. I then created a self-signed SSL certificate again, following this tutorial and using my new domain.
The final YAML for the service:
apiVersion: v1
kind: Service
metadata:
  name: some-node
spec:
  selector:
    app: some
  ports:
  - port: 80
    targetPort: 30041
    nodePort: 30041
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: some-node-deploy
spec:
  selector:
    matchLabels:
      app: some
  replicas: 3
  template:
    metadata:
      labels:
        app: some
    spec:
      containers:
      - name: some-container
        image: "gcr.io/some-project/some:v1"
The final YAML for the LB:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: basic-ingress
  annotations:
    kubernetes.io/ingress.class: "gce"
    kubernetes.io/ingress.global-static-ip-name: "some-static-ip"
    kubernetes.io/ingress.allow-http: "false"
spec:
  tls:
  - secretName: basic-ingress-secret
  rules:
  - host: my.domain.com
    http:
      paths:
      - path: /some/*
        backend:
          serviceName: some-node
          servicePort: 80
The service now serves HTTPS traffic (port 443 only) on my.domain.com/some/* (placeholder domain and path). It is a simple HTTPS service setup that would only require the purchase of a CA-issued SSL certificate and a proper scaling configuration to be fully production-ready, from the DevOps standpoint. Unless someone has found some serious drawbacks to this setup.
I am running a bare-metal Kubernetes cluster, with nginx ingress and MetalLB, and some hostnames mapped to the external IP provided by MetalLB.
I have created an nginx deployment, exposed it via a service, and created an ingress with the hostname.
I have created a self-signed certificate with openssl:
openssl req -x509 -newkey rsa:4096 -sha256 -nodes -keyout tls.key -out tls.crt -subj "/CN=fake.example.com" -days 365
Then created a secret in the correct namespace:
kubectl -n demo create secret tls fake-self-secret --cert=tls.crt --key=tls.key
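To double-check that the Secret actually holds the certificate you expect, one option (not part of the original steps) is:

kubectl -n demo get secret fake-self-secret -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -noout -subject -dates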
Then created the ingress:
apiVersion: v1
items:
- apiVersion: extensions/v1beta1
  kind: Ingress
  metadata:
    annotations:
      kubernetes.io/ingress.class: nginx
      nginx.ingress.kubernetes.io/ssl-redirect: "false"
    name: demo-ingress
    namespace: demo
  spec:
    rules:
    - host: fake.example.com
      http:
        paths:
        - backend:
            serviceName: nginx
            servicePort: 80
          path: /
    tls:
    - hosts:
      - fake.example.com
      secretName: fake-self-secret
HTTP works (because of the ssl-redirect: "false" annotation), but HTTPS returns SSL_ERROR_RX_RECORD_TOO_LONG. In the nginx ingress controller log I see something like:
"\x16\x03\x01\x00\xA8\x01\x00\x00\xA4\x03\x03*\x22\xA8\x8F\x07q\xAD\x98\xC1!\
openssl s_client -connect fake.example.com:443 -servername fake.example.com -crlf
140027703674768:error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol:s23_clnt.c:794:
The nginx ingress-controller version is 0.30, with the default configuration; the ssl-protocols enabled in the ConfigMap are: TLSv1 TLSv1.1 TLSv1.2 TLSv1.3
Any help / new ideas are welcomed :)
I have switched from the Kubernetes nginx ingress controller to the NGINX Ingress Controller (version nginx/nginx-ingress:1.7.0), and the config works.