Debugging cert-manager certificate creation failure on AKS

I'm deploying cert-manager on Azure AKS and trying to have it request a Let's Encrypt certificate. It fails with a "certificate signed by unknown authority" error, and I'm having trouble troubleshooting it further.
I'm not sure whether this is a problem with trusting the Let's Encrypt server, the tunnelfront pod, or maybe an internal AKS self-generated CA. So my questions are:
how do I force cert-manager into debug mode (display more information) about the certificate it does not trust?
is this a problem that occurs regularly and has a known solution?
what steps should be undertaken to debug the issue further?
I created an issue on the jetstack/cert-manager GitHub page but got no answer, so I'm asking here.
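So far the only extra detail I have found is the pod logs plus the logLevel value in the manifest below (set to 2 here), which as far as I can tell is passed to the controller as its -v verbosity. A minimal sketch, assuming the default deployment names from the static manifests:
# deployment names assumed from the default static manifests; adjust if renamed
kubectl -n cert-manager logs deploy/cert-manager --tail=100
kubectl -n cert-manager logs deploy/cert-manager-webhook --tail=100
kubectl -n cert-manager logs deploy/cert-manager-cainjector --tail=100
Raising logLevel to 5 or 6 and re-applying the manifests should make the controller considerably more verbose.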
The whole story is as follows:
Certificates are not created. The following errors are reported:
The Certificate resource:
Error from server: conversion webhook for &{map[apiVersion:cert-manager.io/v1alpha2 kind:Certificate metadata:map[creationTimestamp:2020-05-13T17:30:48Z generation:1 name:xxx-tls namespace:test ownerReferences:[map[apiVersion:extensions/v1beta1 blockOwnerDeletion:true controller:true kind:Ingress name:xxx-ingress uid:6d73b182-bbce-4834-aee2-414d2b3aa802]] uid:d40bc037-aef7-4139-868f-bd615a423b38] spec:map[dnsNames:[xxx.test.domain.com] issuerRef:map[group:cert-manager.io kind:ClusterIssuer name:letsencrypt-prod] secretName:xxx-tls] status:map[conditions:[map[lastTransitionTime:2020-05-13T18:55:31Z message:Waiting for CertificateRequest "xxx-tls-1403681706" to complete reason:InProgress status:False type:Ready]]]]} failed: Post https://cert-manager-webhook.cert-manager.svc:443/convert?timeout=30s: x509: certificate signed by unknown authority
The cert-manager-webhook container:
cert-manager 2020/05/15 14:22:58 http: TLS handshake error from 10.20.0.19:35350: remote error: tls: bad certificate
where 10.20.0.19 is the IP of the tunnelfront pod.
Debugging along the lines of https://cert-manager.io/docs/faq/acme/ more or less fails at the kubectl describe order ... step: kubectl describe certificaterequest ... returns the CSR contents together with the error above, but not the order name.
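The only additional check I could come up with is comparing the CA bundle the API server uses for the conversion webhook with the certificate the webhook actually serves; a sketch, assuming the default cert-manager-webhook-tls secret name from the static manifests (adjust if yours differs):
# CA bundle injected into the CRD conversion config (should belong to the webhook's CA)
kubectl get crd certificates.cert-manager.io -o yaml | grep -B2 -A3 caBundle
# certificate the webhook is actually serving (secret name is the chart default; adjust if different)
kubectl -n cert-manager get secret cert-manager-webhook-tls -o jsonpath='{.data.tls\.crt}' \
  | base64 -d | openssl x509 -noout -issuer -subject -enddate
If the two disagree, cainjector has not (re)injected the CA, which would match the x509 error above.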
Environment details:
Kubernetes version: 1.15.10
Cloud-provider/provisioner : Azure (AKS)
cert-manager version: 0.14.3
Install method: static manifests (see below) + cluster issuer (see below) + regular CRDs (not legacy)
The cluster issuer:
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
  namespace: cert-manager
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: x
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - dns01:
        azuredns:
          clientID: x
          clientSecretSecretRef:
            name: cert-manager-stage
            key: CLIENT_SECRET
          subscriptionID: x
          tenantID: x
          resourceGroupName: dns-stage
          hostedZoneName: x
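For completeness: clientSecretSecretRef points at a secret that, for a ClusterIssuer, has to live in the cluster resource namespace (the cert-manager namespace with the defaults below). A sketch of how such a secret can be created, with a placeholder for the service principal password:
# placeholder value; the key name must match clientSecretSecretRef.key
kubectl -n cert-manager create secret generic cert-manager-stage \
  --from-literal=CLIENT_SECRET='<service-principal-password>'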
The manifest:
imagePullSecrets: []
isOpenshift: false
priorityClassName: ""
rbac:
  create: true
podSecurityPolicy:
  enabled: false
logLevel: 2
leaderElection:
  namespace: "kube-system"
replicaCount: 1
strategy: {}
image:
  repository: quay.io/jetstack/cert-manager-controller
  pullPolicy: IfNotPresent
  tag: v0.14.3
clusterResourceNamespace: ""
serviceAccount:
  create: true
  name:
  annotations: {}
extraArgs: []
extraEnv: []
resources: {}
securityContext:
  enabled: false
  fsGroup: 1001
  runAsUser: 1001
podAnnotations: {}
podLabels: {}
nodeSelector: {}
ingressShim:
  defaultIssuerName: letsencrypt-prod
  defaultIssuerKind: ClusterIssuer
prometheus:
  enabled: true
  servicemonitor:
    enabled: false
    prometheusInstance: default
    targetPort: 9402
    path: /metrics
    interval: 60s
    scrapeTimeout: 30s
    labels: {}
affinity: {}
tolerations: []
webhook:
  enabled: true
  replicaCount: 1
  strategy: {}
  podAnnotations: {}
  extraArgs: []
  resources: {}
  nodeSelector: {}
  affinity: {}
  tolerations: []
  image:
    repository: quay.io/jetstack/cert-manager-webhook
    pullPolicy: IfNotPresent
    tag: v0.14.3
  injectAPIServerCA: true
  securePort: 10250
cainjector:
  replicaCount: 1
  strategy: {}
  podAnnotations: {}
  extraArgs: []
  resources: {}
  nodeSelector: {}
  affinity: {}
  tolerations: []
  image:
    repository: quay.io/jetstack/cert-manager-cainjector
    pullPolicy: IfNotPresent
    tag: v0.14.3

It seems that v0.14.3 had a bug of some sort. The problem does not occur with v0.15.0.
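A sketch of one way such an upgrade can be done with the same static-manifest flow as above (not necessarily the exact steps taken here; the CRD URL is the upstream release asset and may differ for your install):
# apply the v0.15.0 CRDs first (regular, non-legacy)
kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v0.15.0/cert-manager.crds.yaml
# re-render the manifests with the bumped chart/image version and apply them
helm template cert-manager jetstack/cert-manager --namespace cert-manager \
  --version v0.15.0 -f values.yaml | kubectl apply -f -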

Related

cert-manager challenges are pending

I'm using the cert-manager to manage my ssl certificates in my Kubernetes cluster. The cert-manager creates the pods and the challenges, but the challenges are never getting fulfilled. They're always saying:
Waiting for HTTP-01 challenge propagation: failed to perform self check GET request 'http://somedomain/.well-known/acme-challenge/VqlmMCsb019CCFDggs03RyBLZJ0jo53LO...': Get "http://somedomain/.well-known/acme-challenge/VqlmMCsb019CCFDggs03RyBLZJ0jo53LO...": EOF
But when I open the url (http:///.well-known/acme-challenge/VqlmMCsb019CCFDggs03RyBLZJ0jo53LO...), it returns the expected code:
vzCVdTk1q55MQCNH...zVkKYGvBJkRTvDBHQ.YfUcSfIKvWo_MIULP9jvYcgtsGxwfJMLWUGsB5kFKRc
When I do kubectl get certs, it says that the certs are ready:
NAME          READY   SECRET        AGE
crt1          True    crt1-secret   65m
crt1-secret   True    crt1-secret   65m
crt2          True    crt2-secret   65m
crt2-secret   True    crt2-secret   65m
It looks like Let's Encrypt never calls these URLs to verify (or cert-manager never instructs it to).
When I list the challenges with kubectl describe challenges, it says:
Name:         crt-secret-hcgcf-269956107-974455061
Namespace:    default
Labels:       <none>
Annotations:  <none>
API Version:  acme.cert-manager.io/v1
Kind:         Challenge
Metadata:
  Creation Timestamp:  2021-07-23T10:47:27Z
  Finalizers:
    finalizer.acme.cert-manager.io
  Generation:  1
  Managed Fields:
    API Version:  acme.cert-manager.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:finalizers:
          .:
          v:"finalizer.acme.cert-manager.io":
        f:ownerReferences:
          .:
          k:{"uid":"09e39ad0-cc39-421f-80d2-07c2f82680af"}:
            .:
            f:apiVersion:
            f:blockOwnerDeletion:
            f:controller:
            f:kind:
            f:name:
            f:uid:
      f:spec:
        .:
        f:authorizationURL:
        f:dnsName:
        f:issuerRef:
          .:
          f:group:
          f:kind:
          f:name:
        f:key:
        f:solver:
          .:
          f:http01:
            .:
            f:ingress:
              .:
              f:class:
              f:ingressTemplate:
  UID:               09e39ad0-cc39-421f-80d2-07c2f82680af
  Resource Version:  19014474
  UID:               b914ad18-2f5c-45cd-aa34-4ad7a2786536
Spec:
  Authorization URL:  https://acme-v02.api.letsencrypt.org/acme/authz-v3/1547...9301
  Dns Name:           mydomain.something
  Issuer Ref:
    Group:  cert-manager.io
    Kind:   Issuer
    Name:   letsencrypt
  Key:      VqlmMCsb019CCFDggs03RyBLZ...nc767h_g.YfUcSfIKv...GxwfJMLWUGsB5kFKRc
  Solver:
    http01:
      Ingress:
        Class:  nginx
        Ingress Template:
          Metadata:
            Annotations:
              nginx.org/mergeable-ingress-type:  minion
        Service Type:  ClusterIP
  Token:     VqlmMCsb019CC...03RyBLZJ0jo53LOiqnc767h_g
  Type:      HTTP-01
  URL:       https://acme-v02.api.letsencrypt.org/acme/chall-v3/15...49301/X--4pw
  Wildcard:  false
Events:  <none>
Any idea how I can solve this issue?
Update 1:
When I run curl http://some-domain.tld/.well-known/acme-challenge/VqlmMCsb019CC...gs03RyBLZJ0jo53LOiqnc767h_g in another pod, it returns:
curl: (52) Empty reply from server
When I do it locally (on my PC), it returns the expected challenge response.
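For reference, here is how the same check can be reproduced from inside the cluster (a sketch; the image and the token are placeholders):
# run curl from inside the cluster against the same public URL
kubectl run curl-debug --rm -it --restart=Never --image=curlimages/curl --command -- \
  curl -v http://somedomain/.well-known/acme-challenge/<token>
# the temporary solver ingress, service and pod cert-manager creates are named cm-acme-http-solver-*
kubectl get ingress,svc,pods -A | grep cm-acme-http-solver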
Make sure your Pod returns something on the home URL, i.e. the root path of the domain you are configuring as the Ingress host.
You can also use the DNS-01 method for verification if HTTP-01 is not working.
Here is an example of DNS-01, issuing a wildcard certificate with cert-manager:
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    email: test123@gmail.com
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - selector:
        dnsZones:
        - "devops.example.in"
      dns01:
        route53:
          region: us-east-1
          hostedZoneID: Z0152EXAMPLE
          accessKeyID: AKIA5EXAMPLE
          secretAccessKeySecretRef:
            name: route53-secret
            key: secret-access-key
---
apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
  name: le-crt
spec:
  secretName: tls-secret
  issuerRef:
    kind: Issuer
    name: letsencrypt-prod
  commonName: "*.devops.example.in"
  dnsNames:
  - "*.devops.example.in"
Try using the DNS-01 challenge instead of HTTP-01.
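The route53-secret referenced above has to exist before the Issuer can present the challenge. A sketch of creating it (for a namespaced Issuer the secret goes in the Issuer's own namespace, and the key name must match secretAccessKeySecretRef.key):
# placeholder credentials
kubectl create secret generic route53-secret \
  --from-literal=secret-access-key='<AWS_SECRET_ACCESS_KEY>' -n <issuer-namespace>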

Cert-Manager Certificate creation stuck at Created new CertificateRequest resource

I am using cert-manager v1.0.0 on GKE. I tried the ACME staging environment and it worked fine, but after switching to production the created certificate is stuck at "Created new CertificateRequest resource" and nothing changes after that.
I expect certificate creation to succeed and the certificate's Ready status to change from False to True, as it does in staging.
Environment details:
Kubernetes version: v1.18.9
Cloud-provider/provisioner: GKE
cert-manager version: v1.0.0
Install method: helm
Here is my ClusterIssuer YAML file:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: i-storage-ca-issuer-prod
  namespace: default
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: MY_EMAIL_HERE
    privateKeySecretRef:
      name: i-storage-ca-issuer-prod
    solvers:
    - http01:
        ingress:
          class: gce
And here is my Ingress YAML file:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: i-storage-core
  namespace: i-storage
  annotations:
    kubernetes.io/ingress.global-static-ip-name: i-storage-core-ip
    cert-manager.io/cluster-issuer: i-storage-ca-issuer-prod
  labels:
    app: i-storage-core
spec:
  tls:
  - hosts:
    - i-storage.net
    secretName: i-storage-core-prod-cert
  rules:
  - host: i-storage.net
    http:
      paths:
      - path: /*
        backend:
          serviceName: i-storage-core-service
          servicePort: 80
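For reference, a quick way to see whether the HTTP-01 solver path is reachable through the GCE load balancer (a sketch; any bogus token works, a 404 from the backend is fine, a timeout is not):
# check the challenge path from outside the cluster
curl -I http://i-storage.net/.well-known/acme-challenge/test
# the temporary solver objects cert-manager creates are named cm-acme-http-solver-*
kubectl get ingress,pods -n i-storage | grep cm-acme-http-solver
With the gce ingress class it can take several minutes for the load balancer to pick up the solver path, during which the order stays pending.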
kubectl describe certificaterequest output:
Name:         i-storage-core-prod-cert-stb6l
Namespace:    i-storage
Labels:       app=i-storage-core
Annotations:  cert-manager.io/certificate-name: i-storage-core-prod-cert
              cert-manager.io/certificate-revision: 1
              cert-manager.io/private-key-secret-name: i-storage-core-prod-cert-2pw26
API Version:  cert-manager.io/v1
Kind:         CertificateRequest
Metadata:
  Creation Timestamp:  2020-10-31T15:44:57Z
  Generate Name:       i-storage-core-prod-cert-
  Generation:          1
  Managed Fields:
    API Version:  cert-manager.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:cert-manager.io/certificate-name:
          f:cert-manager.io/certificate-revision:
          f:cert-manager.io/private-key-secret-name:
        f:generateName:
        f:labels:
          .:
          f:app:
        f:ownerReferences:
          .:
          k:{"uid":"f3442651-3941-49af-81de-dcb937e8ba40"}:
            .:
            f:apiVersion:
            f:blockOwnerDeletion:
            f:controller:
            f:kind:
            f:name:
            f:uid:
      f:spec:
        .:
        f:issuerRef:
          .:
          f:group:
          f:kind:
          f:name:
        f:request:
      f:status:
        .:
        f:conditions:
    Manager:    controller
    Operation:  Update
    Time:       2020-10-31T15:44:57Z
  Owner References:
    API Version:           cert-manager.io/v1
    Block Owner Deletion:  true
    Controller:            true
    Kind:                  Certificate
    Name:                  i-storage-core-prod-cert
    UID:                   f3442651-3941-49af-81de-dcb937e8ba40
  Resource Version:  18351251
  Self Link:         /apis/cert-manager.io/v1/namespaces/i-storage/certificaterequests/i-storage-core-prod-cert-stb6l
  UID:               83412862-903f-4fff-a736-f170e840748e
Spec:
  Issuer Ref:
    Group:  cert-manager.io
    Kind:   ClusterIssuer
    Name:   i-storage-ca-issuer-prod
Request: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlJQ2ZUQ0NBV1VDQVFBd0FEQ0NBU0l3RFFZSktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQU5HcQovRDRVZlRhV0xFa01GUzdsdVN1RmRlR0NNVjJ4czREcG5Pem1HbjJxSlRUTlBnS2hHbGVEd0p2TkZIaTc5WWxHCmpYcjhjNDFHU1JUT2U4UDdUS3AvWXpBSUtxSXpPMllIeHY5VzA5bEZDWWQ4MTByMUNsOG5jb2NYa3BGZlAxMzAKZURlczZ6SUkwZW9ZTW1uRXQ3cmRUNk52dHhuZ1ZZVmlnai9VcXpxSkZ4NmlLa0R6V1VHK3lNcWtQM1ZKa1lYeApZUFNTNWZsWXlTdkI4emdxb3pnNUNJUndra09KTU1aRlNoWHVxYkpNZnJvQmR2YW9nQWtEYmZYSWs0SVRIaXlYCkV4UDFBaFdieGhPbndDd2h5bXpGWmgzSkZUZHhzeFdtRDZJMmp3MzV1SXZ1WWlIWEJ4VTBCMG50K3FYMVVWaWwKSkRlOFdNcTdjT3AzWmtlT2FHa0NBd0VBQWFBNE1EWUdDU3FHU0liM0RRRUpEakVwTUNjd0dBWURWUjBSQkJFdwpENElOYVMxemRHOXlZV2RsTG01bGREQUxCZ05WSFE4RUJBTUNCYUF3RFFZSktvWklodmNOQVFFTEJRQURnZ0VCCkFLMkhhSEQxd3dDZVFqS1diU1N0SFkxMm1Da1A1amQ0RnFmZFFYRG5XR3grK3FCWExGY0F4TVZhbVF2cStQK0gKLzExQjhvdlUydU9icGRHRktoak9aNDJsdjNNMVllRWk5UG5nS0RFdndCbER0Q0Vsa0lHQzV4T1ZENCtheVlmaApEMUI2L20vdEJsdlhYNS8zRDlyejJsTWNRSzRnSTNVQ3Mxd0Y0bmduQ3JYMEhoSDJEendheXI5d2QvY1V1clZlClloYS9HZjcyaEFCcGQxSmkrR2hKaGxzVDlGbTNVZVNUTi9OYkpVWmk4NkM1S1dTRW1DblNjV3dzWGNoVW1vVisKVHpGQmNhOEhqOUxsVFdJVVBSYVl0bFQ2TEhrUjVLUW1EL2tJRTZDajlidTNXMG9oWDZ2UC9CQ012SWdaTVZEUgoyeFVwY3lhUmJad2ttWTQ2MktNZ25wUT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUgUkVRVUVTVC0tLS0tCg==
Status:
  Conditions:
    Last Transition Time:  2020-10-31T15:44:57Z
    Message:               Waiting on certificate issuance from order i-storage/i-storage-core-prod-cert-stb6l-177980933: "pending"
    Reason:                Pending
    Status:                False
    Type:                  Ready
Events:  <none>
kubectl describe order output:
Name:         i-storage-core-prod-cert-stb6l-177980933
Namespace:    i-storage
Labels:       app=i-storage-core
Annotations:  cert-manager.io/certificate-name: i-storage-core-prod-cert
              cert-manager.io/certificate-revision: 1
              cert-manager.io/private-key-secret-name: i-storage-core-prod-cert-2pw26
API Version:  acme.cert-manager.io/v1
Kind:         Order
Metadata:
  Creation Timestamp:  2020-10-31T15:44:57Z
  Generation:          1
  Managed Fields:
    API Version:  acme.cert-manager.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:cert-manager.io/certificate-name:
          f:cert-manager.io/certificate-revision:
          f:cert-manager.io/private-key-secret-name:
        f:labels:
          .:
          f:app:
        f:ownerReferences:
          .:
          k:{"uid":"83412862-903f-4fff-a736-f170e840748e"}:
            .:
            f:apiVersion:
            f:blockOwnerDeletion:
            f:controller:
            f:kind:
            f:name:
            f:uid:
      f:spec:
        .:
        f:dnsNames:
        f:issuerRef:
          .:
          f:group:
          f:kind:
          f:name:
        f:request:
      f:status:
        .:
        f:authorizations:
        f:finalizeURL:
        f:state:
        f:url:
    Manager:    controller
    Operation:  Update
    Time:       2020-10-31T15:44:57Z
  Owner References:
    API Version:           cert-manager.io/v1
    Block Owner Deletion:  true
    Controller:            true
    Kind:                  CertificateRequest
    Name:                  i-storage-core-prod-cert-stb6l
    UID:                   83412862-903f-4fff-a736-f170e840748e
  Resource Version:  18351252
  Self Link:         /apis/acme.cert-manager.io/v1/namespaces/i-storage/orders/i-storage-core-prod-cert-stb6l-177980933
  UID:               92165d9c-e57e-4d6e-803d-5d28e8f3033a
Spec:
  Dns Names:
    i-storage.net
  Issuer Ref:
    Group:  cert-manager.io
    Kind:   ClusterIssuer
    Name:   i-storage-ca-issuer-prod
Request: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlJQ2ZUQ0NBV1VDQVFBd0FEQ0NBU0l3RFFZSktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQU5HcQovRDRVZlRhV0xFa01GUzdsdVN1RmRlR0NNVjJ4czREcG5Pem1HbjJxSlRUTlBnS2hHbGVEd0p2TkZIaTc5WWxHCmpYcjhjNDFHU1JUT2U4UDdUS3AvWXpBSUtxSXpPMllIeHY5VzA5bEZDWWQ4MTByMUNsOG5jb2NYa3BGZlAxMzAKZURlczZ6SUkwZW9ZTW1uRXQ3cmRUNk52dHhuZ1ZZVmlnai9VcXpxSkZ4NmlLa0R6V1VHK3lNcWtQM1ZKa1lYeApZUFNTNWZsWXlTdkI4emdxb3pnNUNJUndra09KTU1aRlNoWHVxYkpNZnJvQmR2YW9nQWtEYmZYSWs0SVRIaXlYCkV4UDFBaFdieGhPbndDd2h5bXpGWmgzSkZUZHhzeFdtRDZJMmp3MzV1SXZ1WWlIWEJ4VTBCMG50K3FYMVVWaWwKSkRlOFdNcTdjT3AzWmtlT2FHa0NBd0VBQWFBNE1EWUdDU3FHU0liM0RRRUpEakVwTUNjd0dBWURWUjBSQkJFdwpENElOYVMxemRHOXlZV2RsTG01bGREQUxCZ05WSFE4RUJBTUNCYUF3RFFZSktvWklodmNOQVFFTEJRQURnZ0VCCkFLMkhhSEQxd3dDZVFqS1diU1N0SFkxMm1Da1A1amQ0RnFmZFFYRG5XR3grK3FCWExGY0F4TVZhbVF2cStQK0gKLzExQjhvdlUydU9icGRHRktoak9aNDJsdjNNMVllRWk5UG5nS0RFdndCbER0Q0Vsa0lHQzV4T1ZENCtheVlmaApEMUI2L20vdEJsdlhYNS8zRDlyejJsTWNRSzRnSTNVQ3Mxd0Y0bmduQ3JYMEhoSDJEendheXI5d2QvY1V1clZlClloYS9HZjcyaEFCcGQxSmkrR2hKaGxzVDlGbTNVZVNUTi9OYkpVWmk4NkM1S1dTRW1DblNjV3dzWGNoVW1vVisKVHpGQmNhOEhqOUxsVFdJVVBSYVl0bFQ2TEhrUjVLUW1EL2tJRTZDajlidTNXMG9oWDZ2UC9CQ012SWdaTVZEUgoyeFVwY3lhUmJad2ttWTQ2MktNZ25wUT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUgUkVRVUVTVC0tLS0tCg==
Status:
  Authorizations:
    Challenges:
      Token:  EMTpMo_Jt5YkITiwk_lOuL66Xu_Q38scNMf1o0LPgvs
      Type:   http-01
      URL:    https://acme-v02.api.letsencrypt.org/acme/chall-v3/8230128790/0EcdqA
      Token:  EMTpMo_Jt5YkITiwk_lOuL66Xu_Q38scNMf1o0LPgvs
      Type:   dns-01
      URL:    https://acme-v02.api.letsencrypt.org/acme/chall-v3/8230128790/9chkYQ
      Token:  EMTpMo_Jt5YkITiwk_lOuL66Xu_Q38scNMf1o0LPgvs
      Type:   tls-alpn-01
      URL:    https://acme-v02.api.letsencrypt.org/acme/chall-v3/8230128790/BaReZw
    Identifier:     i-storage.net
    Initial State:  pending
    URL:            https://acme-v02.api.letsencrypt.org/acme/authz-v3/8230128790
    Wildcard:       false
  Finalize URL:  https://acme-v02.api.letsencrypt.org/acme/finalize/100748195/5939190036
  State:         pending
  URL:           https://acme-v02.api.letsencrypt.org/acme/order/100748195/5939190036
Events:  <none>
List all certificates that you have:
kubectl get certificate --all-namespaces
Try to figure out the problem using the describe command:
kubectl describe certificate CERTIFICATE_NAME -n YOUR_NAMESPACE
The output of the above command contains the name of the associated certificate request. Dig into more detail using the describe command once again:
kubectl describe certificaterequest CERTIFICATE_REQUEST_NAME -n YOUR_NAMESPACE
You may also want to troubleshoot challenges with the following command:
kubectl describe challenges --all-namespaces
In my case, to make it work, I had to replace ClusterIssuer with just Issuer, for reasons explained in the comment.
Here is my Issuer manifest:
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: cert-manager-staging
  namespace: YOUR_NAMESPACE
spec:
  acme:
    # You must replace this email address with your own.
    # Let's Encrypt will use this to contact you about expiring
    # certificates, and issues related to your account.
    email: example@example.com
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      # Secret resource that will be used to store the account's private key.
      name: cert-manager-staging-private-key
    # Add a single challenge solver, HTTP01 using nginx
    solvers:
    - http01:
        ingress:
          class: nginx
Here is my simple Ingress manifest:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/issuer: cert-manager-staging
  name: YOUR_NAME
  namespace: YOUR_NAMESPACE
spec:
  tls:
  - hosts:
    - example.com
    secretName: example-com-staging-certificate
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example.com
            port:
              number: 80
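Once the staging Issuer above issues successfully, switching to production is mostly a matter of pointing the same Issuer at https://acme-v02.api.letsencrypt.org/directory, using a different privateKeySecretRef name, and letting the certificate re-issue. A sketch, using the secret names from the manifests above:
# force re-issuance against the new issuer by removing the staging certificate secret
kubectl delete secret example-com-staging-certificate -n YOUR_NAMESPACE
kubectl describe certificate -n YOUR_NAMESPACE   # wait for Ready=True from the production issuer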

K8S: Unable to Create wildcard SSL using Issuer with acmedns provider

I have tried to create wildcard SSL certificate using k8s certmanager and issuer with acmedns acme provider. I have created the credentials by POST requesting to /register URL and tested the acmedns successfully. However I am unable to create new wildcard SSL certificate using the k8s issuer. I am adding my issuer YAML file below,
apiVersion: certmanager.k8s.io/v1alpha1
kind: Issuer
metadata:
  annotations:
  name: letsencrypt-wildcard-prod
  namespace: default
spec:
  acme:
    dns01:
      providers:
        acmedns:
          accountSecretRef:
            key: acmedns.json
            name: acme-dns
          host: http://auth.mydomain.com
    email: info@mydomain.com
    privateKeySecretRef:
      name: letsencrypt-prod
    server: https://acme-v02.api.letsencrypt.org/directory
I have created the acme-dns secret using the JSON output from the /register call.
Here is the Kubernetes Certificate YAML as well:
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: wildcard-mydomain.com
  namespace: default
spec:
  acme:
    config:
    - dns01:
        provider: acmedns
      domains:
      - '*.mydomain.com'
  commonName: '*.mydomain.com'
  dnsNames:
  - '*.mydomain.com'
  issuerRef:
    kind: Issuer
    name: letsencrypt-wildcard-prod
  secretName: wildcard-mydomain.com-tls
I am getting the following error from cert-manager:
E1129 16:30:31.881025 1 reflector.go:205] github.com/jetstack/cert-manager/pkg/client/informers/externalversions/factory.go:71: Failed to list *v1alpha1.Issuer: v1alpha1.IssuerList: Items: []v1alpha1.Issuer: v1alpha1.Issuer: Spec: v1alpha1.IssuerSpec: IssuerConfig: ACME: v1alpha1.ACMEIssuer: DNS01: v1alpha1.ACMEIssuerDNS01Config: Providers: []v1alpha1.ACMEIssuerDNS01Provider: ReadArrayCB: expect [ or n, but found {, error found in #10 byte of ...|oviders":{"acmedns":|..., bigger context ...|81551da95"},"spec":{"acme":{"dns01":{"providers":{"acmedns":{"accountSecretRef":{"key":"acmedns.json|...
E1129 16:30:32.887374 1 reflector.go:205] github.com/jetstack/cert-manager/pkg/client/informers/externalversions/factory.go:71: Failed to list *v1alpha1.Issuer: v1alpha1.IssuerList: Items: []v1alpha1.Issuer: v1alpha1.Issuer: Spec: v1alpha1.IssuerSpec: IssuerConfig: ACME: v1alpha1.ACMEIssuer: DNS01: v1alpha1.ACMEIssuerDNS01Config: Providers: []v1alpha1.ACMEIssuerDNS01Provider: ReadArrayCB: expect [ or n, but found {, error found in #10 byte of ...|oviders":{"acmedns":|..., bigger context ...|81551da95"},"spec":{"acme":{"dns01":{"providers":{"acmedns":{"accountSecretRef":{"key":"acmedns.json|...

Can't get kubernetes to pass my tls certificate to browsers

I've been struggling for a while trying to get HTTPS access to my Elasticsearch cluster in Kubernetes.
I think the problem is that Kubernetes doesn't like the TLS certificate I'm trying to use, which is why it's not passing it all the way through to the browser.
Everything else seems to work, since when I accept the Kubernetes Ingress Controller Fake Certificate, the requests go through as expected.
In my attempt to do this I've set up:
The cluster itself
An nginx-ingress controller
An ingress resource
Here's the related YAML:
The cluster's Service:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2018-08-03T03:20:47Z
  labels:
    run: my-es
  name: my-es
  namespace: default
  resourceVersion: "3159488"
  selfLink: /api/v1/namespaces/default/services/my-es
  uid: 373047e0-96cc-11e8-932b-42010a800043
spec:
  clusterIP: 10.63.241.39
  ports:
  - name: http
    port: 8080
    protocol: TCP
    targetPort: 9200
  selector:
    run: my-es
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
The ingress resource:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/cors-allow-methods: PUT, GET, POST, OPTIONS
    nginx.ingress.kubernetes.io/cors-origins: http://localhost:3425 https://mydomain.ca https://myOtherDomain.ca
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /
  creationTimestamp: 2018-08-12T08:44:29Z
  generation: 16
  name: es-ingress
  namespace: default
  resourceVersion: "3159625"
  selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/es-ingress
  uid: ece0071d-9e0b-11e8-8a45-42001a8000fc
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: my-es
          servicePort: 8080
        path: /
  tls:
  - hosts:
    - mydomain.ca
    secretName: my-tls-secret
status:
  loadBalancer:
    ingress:
    - ip: 130.211.179.225
The nginx-ingress controller:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2018-08-12T00:41:32Z
  labels:
    app: nginx-ingress
    chart: nginx-ingress-0.23.0
    component: controller
    heritage: Tiller
    release: nginx-ingress
  name: nginx-ingress-controller
  namespace: default
  resourceVersion: "2781955"
  selfLink: /api/v1/namespaces/default/services/nginx-ingress-controller
  uid: 755ee4b8-9dc8-11e8-85a4-4201a08000fc
spec:
  clusterIP: 10.63.250.256
  externalTrafficPolicy: Cluster
  ports:
  - name: http
    nodePort: 32084
    port: 80
    protocol: TCP
    targetPort: http
  - name: https
    nodePort: 31182
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    app: nginx-ingress
    component: controller
    release: nginx-ingress
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 35.212.6.131
I feel like I'm missing something basic, because it doesn't seem like it should be this hard to expose something this simple...
To get my certificate, I just requested one for mydomain.ca from GoDaddy.
Do I need to somehow get a certificate using my ingress resource's cluster IP as the common name?
It doesn't seem possible to verify ownership of an IP.
I've seen people mention ways for Kubernetes to automatically create certificates for ingress resources, but those seem to be self-signed.
Here are some logs from the nginx-ingress controller:
This one complains about obtaining the PEM for the TLS secret, but it's only a warning.
{
  insertId: "1kvvhm7g1q7e0ej"
  labels: {
    compute.googleapis.com/resource_name: "fluentd-gcp-v2.0.17-5b82n"
    container.googleapis.com/namespace_name: "default"
    container.googleapis.com/pod_name: "nginx-ingress-controller-58f57fc597-zl25s"
    container.googleapis.com/stream: "stderr"
  }
  logName: "projects/project-7d320/logs/nginx-ingress-controller"
  receiveTimestamp: "2018-08-14T02:58:42.135388365Z"
  resource: {
    labels: {
      cluster_name: "my-elasticsearch-cluster"
      container_name: "nginx-ingress-controller"
      instance_id: "2341889542400230234"
      namespace_id: "default"
      pod_id: "nginx-ingress-controller-58f57fc597-zl25s"
      project_id: "project-7d320"
      zone: "us-central1-a"
    }
    type: "container"
  }
  severity: "WARNING"
  textPayload: "error obtaining PEM from secret default/my-tls-cert: error retrieving secret default/my-tls-cert: secret default/my-tls-cert was not found"
  timestamp: "2018-08-14T02:58:37Z"
}
I have a few occurrences of this handshake error, which may be a result of the last warning...
{
  insertId: "148t6rfg1xmz978"
  labels: {
    compute.googleapis.com/resource_name: "fluentd-gcp-v2.0.17-5b82n"
    container.googleapis.com/namespace_name: "default"
    container.googleapis.com/pod_name: "nginx-ingress-controller-58f57fc597-zl25s"
    container.googleapis.com/stream: "stderr"
  }
  logName: "projects/project-7d320/logs/nginx-ingress-controller"
  receiveTimestamp: "2018-08-14T15:55:52.438035706Z"
  resource: {
    labels: {
      cluster_name: "my-elasticsearch-cluster"
      container_name: "nginx-ingress-controller"
      instance_id: "2341889542400230234"
      namespace_id: "default"
      pod_id: "nginx-ingress-controller-58f57fc597-zl25s"
      project_id: "project-7d320"
      zone: "us-central1-a"
    }
    type: "container"
  }
  severity: "ERROR"
  textPayload: "2018/08/14 15:55:50 [crit] 1548#1548: *860 SSL_do_handshake() failed (SSL: error:1417D18C:SSL routines:tls_process_client_hello:version too low) while SSL handshaking, client: 127.0.0.1, server: 0.0.0.0:442"
  timestamp: "2018-08-14T15:55:50Z"
}
The above logs make it seem like my TLS secret isn't working, but when I run kubectl describe ingress, it says my secret terminates.
aaronmw@project-7d320:~$ kubectl describe ing
Name:             es-ingress
Namespace:        default
Address:          130.221.179.212
Default backend:  default-http-backend:80 (10.61.3.7:8080)
TLS:
  my-tls-secret terminates mydomain.ca
Rules:
  Host  Path  Backends
  ----  ----  --------
  *
        /   my-es:8080 (<none>)
Annotations:
Events:  <none>
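For reference, the secret the Ingress points at can be created from the issued certificate files with kubectl; a sketch (the file names are placeholders, and the certificate file should contain the full chain including the GoDaddy intermediates):
# placeholder file names from the CA download bundle
kubectl -n default create secret tls my-tls-secret \
  --cert=mydomain.ca.fullchain.crt --key=mydomain.ca.key
Whatever name is used here has to match both spec.tls[].secretName in the Ingress and the secret name the controller logs complain about.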
I figured it out!
What I ended up doing was adding a default SSL certificate to my nginx-ingress controller on creation, using the following command:
helm install --name nginx-ingress --set controller.extraArgs.default-ssl-certificate=default/search-tls-secret stable/nginx-ingress
Once I had that, it was passing the cert as expected, but I still had the wrong cert, as the CN didn't match my load balancer IP.
So what I did was:
Make my load balancer IP static
Add an A record to my domain, to map a subdomain to that IP
Re-key my cert to match that new subdomain
And I'm in business!
Thanks to @Crou, whose comment reminded me to look at the logs and got me on the right track.
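A sketch of the "make my load balancer IP static" step on GKE, in case it helps someone (the address name, region and placeholder IP are hypothetical):
# reserve a regional static address and read it back
gcloud compute addresses create nginx-ingress-ip --region us-central1
gcloud compute addresses describe nginx-ingress-ip --region us-central1 --format='value(address)'
# pin the ingress controller's LoadBalancer Service to it
kubectl patch svc nginx-ingress-controller \
  -p '{"spec":{"loadBalancerIP":"<RESERVED_IP>"}}'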

Kubernetes executor on Gitlab - ERROR: Job failed (system failure): Post *api/v1/namespaces/gitlab/pods: x509: certificate signed by unknown authority

I'm trying to set up the Kubernetes executor for GitLab, but I get this error:
ERROR: Job failed (system failure): Post https://api.kubernetes.de/api/v1/namespaces/gitlab/pods: x509: certificate signed by unknown authority
This is my configmap.yml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: gitlab-runner
  namespace: gitlab
data:
  config.toml: |
    concurrent = 4
    [[runners]]
      name = "Kubernetes Runner"
      url = "http://########/ci"
      token = "############"
      executor = "kubernetes"
      [runners.kubernetes]
        host = "https://api.kubernetes.de"
        namespace = "gitlab"
        namespace_overwrite_allowed = "ci-.*"
        privileged = true
        cpu_limit = "1"
        memory_limit = "1Gi"
        service_cpu_limit = "1"
        service_memory_limit = "1Gi"
        helper_cpu_limit = "500m"
        helper_memory_limit = "100Mi"
        poll_interval = 5
        poll_timeout = 3600
        [runners.kubernetes.node_selector]
          gitlab = "true"
And this is deployment.yml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: gitlab-runner
  namespace: gitlab
spec:
  replicas: 1
  selector:
    matchLabels:
      name: gitlab-runner
  template:
    metadata:
      labels:
        name: gitlab-runner
    spec:
      containers:
      - args:
        - run
        image: gitlab/gitlab-runner:latest
        imagePullPolicy: Always
        name: gitlab-runner
        volumeMounts:
        - mountPath: /etc/gitlab-runner
          name: config
        - mountPath: /etc/ssl/certs
          name: cacerts
          readOnly: true
      restartPolicy: Always
      volumes:
      - configMap:
          name: gitlab-runner
        name: config
      - hostPath:
          path: /usr/share/ca-certificates/mozilla
        name: cacerts
You are using HTTPS, so where are the certs? Are they self-signed certs? If yes, you have to mention the --tls-cert-file and --tls-private-key-file flags in your ConfigMap for the kubelet.
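A quick way to confirm whether the API endpoint really presents a certificate from a private CA (a sketch):
# show who issued the API server's certificate
openssl s_client -connect api.kubernetes.de:443 -showcerts </dev/null 2>/dev/null \
  | openssl x509 -noout -issuer -subject
If it is a private CA, the runner can also be pointed at the CA bundle directly via the documented ca_file setting in [runners.kubernetes] (the path below is hypothetical):
[runners.kubernetes]
  host = "https://api.kubernetes.de"
  # hypothetical mount path; the file must contain the cluster CA certificate
  ca_file = "/etc/gitlab-runner/certs/kube-ca.crt"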