I've set up cert-manager on MicroK8s following these instructions. I had it working six months ago but have since had to start again from scratch. Now, when I set up my ClusterIssuer, I get the error below.
Everything else seems fine and in a good state. I'm struggling to know where to start debugging this.
Error initializing issuer: Get "https://acme-v02.api.letsencrypt.org/directory": remote error: tls: handshake failure
ClusterIssuer YAML:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    email: <myemail>
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: prod-issuer-account-key
    solvers:
    - http01:
        ingress:
          class: nginx
UPDATE
Some extra info
All the cert-manager pods are running; here are the logs.
cert-manager pod logs
The cert-manager-cainjector logs only show some warnings about deprecated APIs.
cert-manager-webhook logs
Describe ClusterIssuer
I've tried to get a cert for an Ingress resource, but it errors saying the ClusterIssuer isn't in a ready state.
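A quick way to separate cert-manager from cluster networking (a debugging suggestion; the image is just an example) is to hit the ACME directory from a throwaway pod. If this also fails with a handshake error, the problem is cluster egress or DNS rather than cert-manager:

kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- \
  curl -v https://acme-v02.api.letsencrypt.org/directory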
After uninstalling and reinstalling everything, including MicroK8s, I tried again with no luck. Then I tried the latest Helm chart, v1.0.2, which shipped a newer cert-manager version, and it seemed to work straight away.
Another note, mainly to myself: this issue was also caused by having search domains set up in netplan; once they were removed, everything started working.
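For posterity, a minimal sketch of the kind of netplan entry involved (the interface name and search domain here are hypothetical):

network:
  version: 2
  ethernets:
    eth0:                            # hypothetical interface name
      dhcp4: true
      nameservers:
        search: [example.internal]   # search domains like this were what broke the TLS handshake for me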
Related
I am deploying the standard Kubernetes Dashboard to the K3s cluster I have deployed on my RPi cluster. I am using Let's Encrypt (via Jetstack's cert-manager) to provision the TLS cert, and setting the following options on the dashboard deployment:
spec:
  args:
    - --tls-cert-file=/tls.crt
    - --tls-key-file=/tls.key
  volumeMounts:
    - mountPath: /certs
      name: kubernetes-dashboard-certs
volumes:
  - name: kubernetes-dashboard-certs
    secret:
      secretName: cluster.smigula.com-tls
The cert is valid when I visit the URL in my browser; however, the pod raises this:
http: TLS handshake error from 10.42.0.8:43704: remote error: tls: bad certificate
It appears that the ingress is terminating the TLS connection when the pod expects to terminate it. What should I do? Thanks.
[edit] I changed the resource kind from Ingress to IngressRouteTCP and set passthrough: true in the tls: section. Still the same result.
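For reference, a sketch of that edit; assuming Traefik (the K3s default ingress), the resource would look roughly like this (the entry point and service name are assumptions):

apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
  name: dashboard-passthrough        # assumed name
spec:
  entryPoints:
    - websecure
  routes:
    - match: HostSNI(`cluster.smigula.com`)
      services:
        - name: kubernetes-dashboard   # assumed service name
          port: 443
  tls:
    passthrough: true                # hand the raw TLS stream to the pod instead of terminating at the proxy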
What configuration is needed to use Helm within a k8s Job? The error given is x509: certificate signed by unknown authority. What is needed to verify the certificates?
I am trying to use the Helm CLI tool within a k8s Job using the alpine/helm image. My cluster is running locally using minikube. For testing, I am simply trying to list all Helm releases. Regular kubectl commands work on the cluster within a Job. I suspect something special is needed to configure the CA certificate and make it accessible to Helm.
apiVersion: batch/v1
kind: Job
metadata:
  name: helm-job
spec:
  template:
    spec:
      containers:
        - name: helm
          image: alpine/helm
          imagePullPolicy: Always
          env:
            - name: HELM_KUBEAPISERVER
              value: "https://kubernetes.default.svc"
            - name: HELM_DEBUG
              value: "true"
          command: [ "helm", "list" ]
      restartPolicy: Never
      automountServiceAccountToken: false
If automountServiceAccountToken is not set to false, the Job crashes on startup with: MountVolume.SetUp failed for volume "kube-api-access-rqc6l" : object "default"/"kube-root-ca.crt" not registered. When it is set to false, the Job produces the following debug logs:
Error: Kubernetes cluster unreachable: Get "https://kubernetes.default.svc/version": x509: certificate signed by unknown authority
helm.go:84: [debug] Get "https://kubernetes.default.svc/version": x509: certificate signed by unknown authority
Kubernetes cluster unreachable
helm.sh/helm/v3/pkg/kube.(*Client).IsReachable
helm.sh/helm/v3/pkg/kube/client.go:121
helm.sh/helm/v3/pkg/action.(*List).Run
helm.sh/helm/v3/pkg/action/list.go:148
main.newListCmd.func1
helm.sh/helm/v3/cmd/helm/list.go:80
github.com/spf13/cobra.(*Command).execute
github.com/spf13/cobra@v1.3.0/command.go:856
github.com/spf13/cobra.(*Command).ExecuteC
github.com/spf13/cobra@v1.3.0/command.go:974
github.com/spf13/cobra.(*Command).Execute
github.com/spf13/cobra@v1.3.0/command.go:902
main.main
helm.sh/helm/v3/cmd/helm/helm.go:83
runtime.main
runtime/proc.go:255
runtime.goexit
runtime/asm_amd64.s:1581
Unfortunately, helm: x509: certificate signed by unknown authority is not a solution. The answers given there use helm init, which is Helm v2, not v3. When setting insecure-skip-tls-verify: true, the following error is thrown: Unable to restart cluster, will reset it: getting k8s client: specifying a root certificates file with the insecure flag is not allowed.
Edit:
The default service account is available on all my nodes. I also tried creating separate service accounts.
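One reading of these symptoms (an assumption on my part, not something stated in the thread): with automountServiceAccountToken: false the pod has neither the service-account token nor the cluster CA bundle, so Helm has nothing with which to verify the API server. A sketch that keeps the token mounted and grants the default service account read access to Helm's release secrets might look like this (the Role/RoleBinding names are hypothetical; the original kube-root-ca.crt not registered error is a separate mount problem that may still need fixing on the minikube side):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: helm-release-reader        # hypothetical name
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["secrets"]         # Helm v3 stores release state in Secrets
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: helm-job-release-reader    # hypothetical name
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: helm-release-reader
subjects:
  - kind: ServiceAccount
    name: default
    namespace: default
---
apiVersion: batch/v1
kind: Job
metadata:
  name: helm-job
spec:
  template:
    spec:
      automountServiceAccountToken: true   # Helm then finds the token and CA under /var/run/secrets/kubernetes.io/serviceaccount
      containers:
        - name: helm
          image: alpine/helm
          command: ["helm", "list"]
      restartPolicy: Never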
I'm setting up a Kubernetes cluster in AWS using Kops. I've got an nginx ingress controller, and I'm trying to use Let's Encrypt to set up TLS. Right now I can't get my Ingress up and running because my certificate challenges get this error:
Waiting for http-01 challenge propagation: failed to perform self check GET request 'http://critsit.io/.well-known/acme-challenge/[challengeId]': Get http://critsit.io/.well-known/acme-challenge/[challengeId]: EOF
I've got a LoadBalancer service that's taking public traffic, and the certificate issuer automatically creates two other services, which don't have public IPs.
What am I doing wrong here? Is there some networking issue preventing the pods from finishing the ACME flow? Or maybe something else?
Note: I have set up an A record in Route 53 to direct traffic to the LoadBalancer.
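One way to test what the self check sees (a suggestion; the challenge ID is left elided as in the error) is to request the challenge URL from both outside and inside the cluster. A hairpin/NAT problem shows up as only the inside request failing:

# From a workstation, outside the cluster
curl -v http://critsit.io/.well-known/acme-challenge/[challengeId]

# From inside the cluster, where cert-manager's self check runs
kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- \
  curl -v http://critsit.io/.well-known/acme-challenge/[challengeId]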
> kubectl get services
NAME                        TYPE           CLUSTER-IP       EXTERNAL-IP                           PORT(S)                      AGE
cm-acme-http-solver-m2q2g   NodePort       100.69.86.241    <none>                                8089:31574/TCP               3m34s
cm-acme-http-solver-zs2sd   NodePort       100.67.15.149    <none>                                8089:30710/TCP               3m34s
default-http-backend        NodePort       100.65.125.130   <none>                                80:32485/TCP                 19h
kubernetes                  ClusterIP      100.64.0.1       <none>                                443/TCP                      19h
landing                     ClusterIP      100.68.115.188   <none>                                3001/TCP                     93m
nginx-ingress               LoadBalancer   100.67.204.166   [myELB].us-east-1.elb.amazonaws.com   443:30596/TCP,80:30877/TCP   19h
Here's my ingress setup:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: critsit-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
    cert-manager.io/acme-challenge-type: "http01"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  tls:
  - hosts:
    - critsit.io
    - app.critsit.io
    secretName: letsencrypt-prod
  rules:
  - host: critsit.io
    http:
      paths:
      - path: /
        backend:
          serviceName: landing
          servicePort: 3001
And my certificate issuer:
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # The ACME server URL
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: michael.vegeto@gmail.com
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-prod
    # Enable the HTTP-01 challenge provider
    solvers:
    - http01:
        ingress:
          class: nginx
        selector: {}
Update: I've noticed that my load balancer has all of the instances marked as OutOfService because they're failing health checks. I wonder if that's related to the issue.
Second update: I abandoned this route altogether and rebuilt my networking/ingress system using Istio.
The error message you are getting can mean a wide variety of issues. However, there are a few things you can check/do to make it work:
Delete the Ingress, the certificates, and cert-manager entirely, then add them all back to make sure everything installs cleanly. Sometimes stale certs or bad/multiple Ingress paths are the issue. For example, you can use Helm:
helm install my-nginx-ingress stable/nginx-ingress
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager --namespace cert-manager --version v0.15.0 --set installCRDs=true
Make sure your traffic allows HTTP, or uses HTTPS with a trusted cert.
Check the hairpin mode of your load balancer and make sure it is working.
Add the nginx.ingress.kubernetes.io/ssl-redirect: "false" annotation to the Ingress rule (see the snippet after this list). Wait a moment and see if a valid cert gets created.
You can manually issue certificates in your Kubernetes cluster. To do so, please follow this guide.
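For the ssl-redirect suggestion above, a minimal sketch of where the annotation goes, reusing the names from the question (an illustration, not a drop-in fix):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: critsit-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"   # keep plain HTTP reachable so the http-01 self check can succeed
spec:
  rules:
  - host: critsit.io
    http:
      paths:
      - path: /
        backend:
          serviceName: landing
          servicePort: 3001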
The problem can also solve itself in time. Currently, if the self check fails, cert-manager updates the status information with the reason (like: self check failed) and then tries again later (to allow for propagation). This is expected behavior.
This is an ongoing issue that is being tracked here and here.
I'm trying to use SSL in several Ingresses on k8s. A first look led me to cert-manager, but I can't make it work, and I suspect the cause is that my cloud provider (OVH) is not supported.
I'm using Kubernetes 1.17 and cert-manager 0.13.0.
The first error I encountered was related to the webhook secret, and no solutions worked for me.
Because of this I deployed cert-manager without the webhook, but I still couldn't get a ClusterIssuer up and running. When I apply the following:
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    email: leonard.panichi@gmail.com
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-staging
    solvers:
    - http01:
        ingress:
          class: nginx
a ClusterIssuer is created, but there is no Status when I run describe on it.
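An empty Status usually means nothing is reconciling the resource at all. Two checks that help at this point (assuming cert-manager runs in the cert-manager namespace):

kubectl describe clusterissuer letsencrypt-staging
kubectl logs -n cert-manager deploy/cert-manager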
So, after new searches (and more than a day of struggling), I found that OVH might not be compatible with cert-manager, which makes me think I'm losing my time with this strategy.
Hence I'm looking for a new strategy.
How can I use certbot (there is a Docker image, certbot/certbot) to create and renew a few SSL certificates in Kubernetes secrets, in order to use them in my Ingresses? Is there any other way that does not require GKE or AWS: something simple, portable, production-ready, etc.?
Sincerely,
me
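For illustration, a hedged sketch of the certbot-in-a-CronJob idea asked about above. It assumes certbot's standalone authenticator and a hypothetical wrapper script, store-cert.sh, that writes the resulting PEMs into a TLS Secret with kubectl; routing the http-01 challenge traffic to this pod is the hard part and is not shown:

apiVersion: batch/v1beta1            # CronJob API level for Kubernetes 1.17
kind: CronJob
metadata:
  name: certbot-renew                # hypothetical name
spec:
  schedule: "0 3 * * 1"              # weekly renewal attempt
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: certbot          # needs RBAC to create/update the target Secret
          containers:
            - name: certbot
              image: certbot/certbot
              command: ["/bin/sh", "-c"]
              args:
                - |
                  certbot certonly --standalone -n --agree-tos \
                    -m leonard.panichi@gmail.com -d example.com && \
                  /scripts/store-cert.sh example.com   # hypothetical: kubectl create secret tls from the PEMs
          restartPolicy: OnFailure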
We have set up Kubernetes with nginx-ingress combined with cert-manager to automatically obtain and use SSL certificates for Ingress domains from Let's Encrypt, using this guide: https://medium.com/@maninder.bindra/auto-provisioning-of-letsencrypt-tls-certificates-for-kubernetes-services-deployed-to-an-aks-52fd437b06b0. The result is that each Ingress defines its own SSL certificate that is automatically provisioned by cert-manager.
This all works well, except for one problem: the source IP address of the traffic is lost to applications in Pods.
There is an annotation that is commonly advised for the nginx-ingress controller service: service.beta.kubernetes.io/aws-load-balancer-backend-protocol: '*'. This has the effect of preserving source IP addresses. However, applying it breaks SSL:
An error occurred during a connection to {my.domain.com}. SSL received a record that exceeded the maximum permissible length. Error code: SSL_ERROR_RX_RECORD_TOO_LONG
My head is starting to spin. Does anyone know of any approaches to this? (It seems to me that this would be a common requirement.)
Ingress configuration:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-http-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod
spec:
  rules:
  - host: my.host.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-http-service
          servicePort: 80
  tls:
  - hosts:
    - "my.host.com"
    secretName: malcolmqa-tls
As @dom_watson mentioned in the comments, adding the parameter controller.service.externalTrafficPolicy=Local to the Helm install configuration solved the issue: the Local value preserves the client source IP while still routing traffic to the target Pod in the Kubernetes cluster. Find more information in the official Kubernetes guidelines.
helm upgrade my-nginx stable/nginx-ingress --set rbac.create=true --set controller.service.externalTrafficPolicy=Local
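For reference, the same setting expressed directly on the controller Service rather than through Helm (a sketch; the Service name, selector, and ports depend on your install):

apiVersion: v1
kind: Service
metadata:
  name: my-nginx-nginx-ingress-controller   # name depends on the Helm release
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local   # preserve the client source IP; only nodes running the pod receive traffic
  selector:
    app: nginx-ingress           # placeholder selector
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: https
      port: 443
      targetPort: https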