Kubernetes cert-manager SSL error: can't verify ACME account

I can't create a wildcard SSL certificate with cert-manager. I added my domain to Cloudflare, but cert-manager can't verify the ACME account. How do I resolve this problem?
I want a wildcard certificate for my domain so I can use it with any deployment. How could I do that?
I found the error, but I don't know how to resolve it: my Kubernetes cluster can't resolve the DNS name acme-v02.api.letsencrypt.org.
My Kubernetes version:
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.3-k3s.1", GitCommit:"8343999292c55c807be4406fcaa9f047e8751ffd", GitTreeState:"clean", BuildDate:"2019-06-12T04:56+00:00Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
Error log:
I0716 13:06:11.712878 1 controller.go:153] cert-manager/controller/issuers "level"=0 "msg"="syncing item" "key"="default/issuer-letsencrypt"
I0716 13:06:11.713218 1 setup.go:162] cert-manager/controller/issuers "level"=0 "msg"="ACME server URL host and ACME private key registration host differ. Re-checking ACME account registration" "related_resource_kind"="Secret" "related_resource_name"="issuer-letsencrypt" "related_resource_namespace"="default" "resource_kind"="Issuer" "resource_name"="issuer-letsencrypt" "resource_namespace"="default"
I0716 13:06:11.713245 1 logger.go:88] Calling GetAccount
E0716 13:06:16.714911 1 setup.go:172] cert-manager/controller/issuers "msg"="failed to verify ACME account" "error"="Get https://acme-v02.api.letsencrypt.org/directory: dial tcp: i/o timeout" "related_resource_kind"="Secret" "related_resource_name"="issuer-letsencrypt" "related_resource_namespace"="default" "resource_kind"="Issuer" "resource_name"="issuer-letsencrypt" "resource_namespace"="default"
I0716 13:06:16.715527 1 sync.go:76] cert-manager/controller/issuers "level"=0 "msg"="Error initializing issuer: Get https://acme-v02.api.letsencrypt.org/directory: dial tcp: i/o timeout" "resource_kind"="Issuer" "resource_name"="issuer-letsencrypt" "resource_namespace"="default"
E0716 13:06:16.715609 1 controller.go:155] cert-manager/controller/issuers "msg"="re-queuing item due to error processing" "error"="Get https://acme-v02.api.letsencrypt.org/directory: dial tcp: i/o timeout" "key"="default/issuer-letsencrypt"
My Issuer:
apiVersion: certmanager.k8s.io/v1alpha1
kind: Issuer
metadata:
  name: issuer-letsencrypt
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: yusufkaan142@gmail.com
    privateKeySecretRef:
      name: issuer-letsencrypt
    dns01:
      providers:
      - name: cf-dns
        cloudflare:
          email: mail@gmail.com
          apiKeySecretRef:
            name: cloudflare-api-key
            key: api-key.txt
Secret:
apiVersion: v1
kind: Secret
metadata:
  name: cloudflare-api-key
  namespace: cert-manager
type: Opaque
data:
  api-key.txt: base64encoded
My Certificate:
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: wilcard-theykk-net
  namespace: cert-manager
spec:
  secretName: wilcard-theykk-net
  issuerRef:
    name: issuer-letsencrypt
    kind: Issuer
  commonName: '*.example.net'
  dnsNames:
  - '*.example.net'
  acme:
    config:
    - dns01:
        provider: cf-dns
      domains:
      - '*.example.net'
      - 'example.net'
DNS ConfigMap for the cluster:
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
  name: kube-dns
  namespace: kube-system
data:
  upstreamNameservers: |
    ["1.1.1.1","8.8.8.8"]

I would start by debugging the DNS resolution function within your K8s cluster.
Spin up a container with basic network tools on board:
kubectl run -i -t busybox --image=radial/busyboxplus:curl --restart=Never
From within the busybox container, check the /etc/resolv.conf file and ensure that it points to the Kubernetes DNS service kube-dns:
$ cat /etc/resolv.conf
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local c.org-int.internal google.internal
options ndots:5
Make a lookup request for kubernetes.default, which should resolve through the cluster nameserver without any issues:
$ nslookup kubernetes.default
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: kubernetes.default
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
Since you've defined upstreamNameservers in the corresponding kube-dns ConfigMap, also check whether the upstream nameservers 1.1.1.1 and 8.8.8.8 are reachable from within a Pod.
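For example, from the busybox pod you can query the failing hostname both through the cluster DNS and directly against each upstream server; if the direct queries succeed while the default one times out, the problem sits between kube-dns and the upstreams. A sketch (busybox nslookup takes an optional server argument):
nslookup acme-v02.api.letsencrypt.org
nslookup acme-v02.api.letsencrypt.org 1.1.1.1
nslookup acme-v02.api.letsencrypt.org 8.8.8.8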
Check the DNS pod logs for any suspicious events, for each container (kubedns, dnsmasq, sidecar):
kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name | head -1) -c kubedns
kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name | head -1) -c dnsmasq
kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name | head -1) -c sidecar
If all of the preceding steps look fine, then DNS discovery is working properly, and you can also inspect the Cloudflare DNS firewall configuration to rule out restrictions on their side. You can find more information about troubleshooting DNS issues in the official K8s documentation.
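One more check worth doing: the log shows dial tcp: i/o timeout, so once DNS itself is ruled out you can try the ACME endpoint directly from a Pod; if this also times out, egress traffic from the pod network is being blocked somewhere. A sketch, reusing the busyboxplus:curl image from above:
curl -v --max-time 10 https://acme-v02.api.letsencrypt.org/directory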

Related

What is the quickest way to expose a LoadBalancer service over HTTPS?

I have a simple web server running in a single pod on GKE. I have also exposed it using a load balancer service. What is the easiest way to make this pod accessible over HTTPS?
gcloud container clusters list
NAME LOCATION MASTER_VERSION MASTER_IP MACHINE_TYPE NODE_VERSION NUM_NODES STATUS
personal..... us-central1-a 1.19.14-gke.1900 34.69..... e2-medium 1.19.14-gke.1900 1 RUNNING
kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10..... <none> 443/TCP 437d
my-service LoadBalancer 10..... 34.71...... 80:30066/TCP 12d
kubectl get pods
NAME READY STATUS RESTARTS AGE
nodeweb-server-9pmxc 1/1 Running 0 2d15h
EDIT: I also have a domain name registered if it's easier to use that instead of https://34.71....
First, your cluster should have Config Connector installed and functioning properly.
Start by deleting your existing load balancer service: kubectl delete service my-service
Create a static IP.
apiVersion: compute.cnrm.cloud.google.com/v1beta1
kind: ComputeAddress
metadata:
  name: <name your IP>
spec:
  location: global
Retrieve the created IP: kubectl get computeaddress <the named IP> -o jsonpath='{.spec.address}'
Create a DNS "A" record that maps your registered domain to the created IP address. Check with nslookup <your registered domain name> to ensure the correct IP is returned.
Update your load balancer service spec by inserting the following line after type: LoadBalancer: loadBalancerIP: "<the created IP address>". The result might look like the sketch below.
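A sketch of the updated Service; the selector label and ports are assumptions, so match them to your actual pod:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer
  loadBalancerIP: "<the created IP address>"  # the reserved static IP
  selector:
    app: nodeweb-server  # assumed label
  ports:
  - port: 80
    targetPort: 80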
Re-create the service and check that kubectl get service my-service shows the EXTERNAL-IP set correctly.
Create ManagedCertificate.
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: <name your cert>
spec:
  domains:
  - <your registered domain name>
Then create the Ingress.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: <name your ingress>
  annotations:
    networking.gke.io/managed-certificates: <the named certificate>
spec:
  rules:
  - host: <your registered domain name>
    http:
      paths:
      - pathType: ImplementationSpecific
        backend:
          service:
            name: my-service
            port:
              number: 80
Check with kubectl describe ingress <named ingress> and review the rules and annotations sections.
NOTE: It can take up to 15 minutes for the load balancer to be fully ready. Test with curl https://<your registered domain name>.

Kubernetes: Why are my acme challenges getting EOF/no response?

I'm setting up a Kubernetes cluster in AWS using Kops. I've got an nginx ingress controller, and I'm trying to use letsencrypt to setup tls. Right now I can't get my ingress up and running because my certificate challenges get this error:
Waiting for http-01 challenge propagation: failed to perform self check GET request 'http://critsit.io/.well-known/acme-challenge/[challengeId]': Get http://critsit.io/.well-known/acme-challenge/[challengeId]: EOF
I've got a LoadBalancer service that's taking public traffic, and the certificate issuer automatically creates 2 other services which don't have public IPs.
What am I doing wrong here? Is there some networking issue preventing the pods from finishing the acme flow? Or maybe something else?
Note: I have set up an A record in Route53 to direct traffic to the LoadBalancer.
> kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
cm-acme-http-solver-m2q2g NodePort 100.69.86.241 <none> 8089:31574/TCP 3m34s
cm-acme-http-solver-zs2sd NodePort 100.67.15.149 <none> 8089:30710/TCP 3m34s
default-http-backend NodePort 100.65.125.130 <none> 80:32485/TCP 19h
kubernetes ClusterIP 100.64.0.1 <none> 443/TCP 19h
landing ClusterIP 100.68.115.188 <none> 3001/TCP 93m
nginx-ingress LoadBalancer 100.67.204.166 [myELB].us-east-1.elb.amazonaws.com 443:30596/TCP,80:30877/TCP 19h
Here's my ingress setup:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: critsit-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
    cert-manager.io/acme-challenge-type: "http01"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  tls:
  - hosts:
    - critsit.io
    - app.critsit.io
    secretName: letsencrypt-prod
  rules:
  - host: critsit.io
    http:
      paths:
      - path: /
        backend:
          serviceName: landing
          servicePort: 3001
And my certificate issuer:
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # The ACME server URL
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: michael.vegeto@gmail.com
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-prod
    # Enable the HTTP-01 challenge provider
    solvers:
    - http01:
        ingress:
          class: nginx
        selector: {}
Update: I've noticed that my load balancer has all of the instances marked as OutOfOrder because they're failing health checks. I wonder if that's related to the issue.
Second update: I abandoned this route altogether and rebuilt my networking/ingress system using Istio.
The error message you are getting can mean a wide variety of issues. However, there are a few things you can check and do in order to make it work:
Delete the Ingress, the certificates, and cert-manager fully. After that, add them all back to make sure everything installs cleanly. Sometimes stale certs or bad/multiple Ingress paths might be the issue. For example, you can use Helm:
helm install my-nginx-ingress stable/nginx-ingress
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager --namespace cert-manager --version v0.15.0 --set installCRDs=true
Make sure your traffic allows HTTP or has HTTPS with a trusted cert.
Check the hairpin mode of your load balancer and make sure it is working.
Add the nginx.ingress.kubernetes.io/ssl-redirect: "false" annotation to the Ingress rule (see the sketch below). Wait a moment and see if a valid cert gets created.
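For reference, the annotation sits alongside the others in the Ingress metadata; a sketch based on the Ingress from the question:
metadata:
  name: critsit-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"  # let the HTTP-01 self check travel over plain HTTP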
You can manually issue certificates in your Kubernetes cluster. To do so, please follow this guide.
The problem can also resolve itself in time. Currently, if the self check fails, cert-manager updates the status information with the reason (like: self check failed) and then tries again later (to allow for propagation). This is expected behavior.
This is an ongoing issue that is being tracked here and here.
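While waiting, you can watch the self check from the cluster side by inspecting the Challenge resources that cert-manager creates; a sketch (resource names will differ in your cluster):
kubectl get challenges --all-namespaces
kubectl describe challenge <challenge-name>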

Istio + Kubernetes: Gateway more than one TLS Certificate

I have a Kubernetes cluster with multiple tenants (in different namespaces). I'd like to deploy an independent Istio Gateway object into each tenant, which I seem to be able to do. However, setting up TLS requires a K8s secret that contains the TLS key/cert. The docs indicate that the "secret must be named istio-ingressgateway-certs in the istio-system namespace". This would seem to indicate that I can only have one TLS secret per cluster. Maybe I'm not reading this correctly. Is there a way to configure independent Istio Gateways in their own namespaces, with their own TLS secrets? How might I go about doing that?
Here is the doc that I'm referencing.
https://istio.io/docs/tasks/traffic-management/ingress/secure-ingress-mount/
Any thoughts are much appreciated.
As described in the Istio documentation, it's possible.
In this section you will configure an ingress gateway for multiple hosts, httpbin.example.com and bookinfo.com.
So you need to create private keys and certificates, in this example for bookinfo and httpbin, and update the istio-ingressgateway to mount them (a sketch of that follows).
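A sketch of creating the bookinfo secret that the gateway below mounts (the key/cert file names are assumptions; the httpbin pair goes into istio-ingressgateway-certs the same way):
kubectl create -n istio-system secret tls istio-ingressgateway-bookinfo-certs \
  --key bookinfo.com.key --cert bookinfo.com.crt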
I created them both and they exist.
bookinfo certs and gateway
kubectl exec -it -n istio-system $(kubectl -n istio-system get pods -l istio=ingressgateway -o jsonpath='{.items[0].metadata.name}') -- ls -al /etc/istio/ingressgateway-bookinfo-certs
lrwxrwxrwx 1 root root 14 Jan 3 10:12 tls.crt -> ..data/tls.crt
lrwxrwxrwx 1 root root 14 Jan 3 10:12 tls.key -> ..data/tls.key
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: bookinfo-gateway
spec:
  selector:
    istio: ingressgateway # use istio default ingress gateway
  servers:
  - port:
      number: 443
      name: https-bookinfo
      protocol: HTTPS
    tls:
      mode: SIMPLE
      serverCertificate: /etc/istio/ingressgateway-bookinfo-certs/tls.crt
      privateKey: /etc/istio/ingressgateway-bookinfo-certs/tls.key
    hosts:
    - "bookinfo.com"
httpbin certs and gateway
kubectl exec -it -n istio-system $(kubectl -n istio-system get pods -l istio=ingressgateway -o jsonpath='{.items[0].metadata.name}') -- ls -al /etc/istio/ingressgateway-certs
lrwxrwxrwx 1 root root 14 Jan 3 10:07 tls.crt -> ..data/tls.crt
lrwxrwxrwx 1 root root 14 Jan 3 10:07 tls.key -> ..data/tls.key
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: httpbin-gateway
spec:
  selector:
    istio: ingressgateway # use istio default ingress gateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
      privateKey: /etc/istio/ingressgateway-certs/tls.key
    hosts:
    - "httpbin.example.com"
I haven't made a full reproduction to check that they both work, but if this doesn't work for you, I will try to reproduce it and update the answer.
This link might be helpful.

gke cert manager certificate in progress

I'm trying to make my Google services more secure by moving from HTTP to HTTPS. I've been following the cert-manager documentation to get it working.
https://cert-manager.io/docs/configuration/acme/dns01/google/
I can't install Helm on the cluster, nor an nginx ingress; that's why I'm using the dns01 challenge instead of http01.
I installed cert-manager with regular manifests v0.11.0.
After creating a DNS admin service account, I used this YAML to create the issuer:
apiVersion: cert-manager.io/v1alpha2
kind: Issuer
metadata:
  name: example-issuer
spec:
  acme:
    email: email@gmail.com
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      # Secret resource used to store the account's private key.
      name: example-issuer-account-key
    # Add a single challenge solver, DNS01 using CloudDNS
    solvers:
    - dns01:
        clouddns:
          project: my-project-id
          # This is the secret used to access the service account
          serviceAccountSecretRef:
            name: clouddns-dns01-solver-svc-acct
            key: key.json
And my certificate object:
apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
  name: example-com
  namespace: default
spec:
  secretName: example-com-tls
  issuerRef:
    # The issuer created previously
    name: example-issuer
  commonName: my-domain.com
  dnsNames:
  - my-domain.com
  - www.my-domain.com
After applying these files, I had these results:
$ kubectl describe issuer
Name:         example-issuer
Namespace:    default
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"cert-manager.io/v1alpha2","kind":"Issuer","metadata":{"annotations":{},"name":"example-issuer","namespace":"default"},"spec...
API Version:  cert-manager.io/v1alpha2
Kind:         Issuer
Metadata:
  Creation Timestamp:  2019-11-28T15:00:33Z
  Generation:          1
  Resource Version:    306180
  Self Link:           /apis/cert-manager.io/v1alpha2/namespaces/default/issuers/example-issuer
  UID:                 d3d1f66e-11ef-11ea-856a-42010a8401a2
Spec:
  Acme:
    Email:  email@gmail.com
    Private Key Secret Ref:
      Name:  example-issuer-account-key
    Server:  https://acme-staging-v02.api.letsencrypt.org/directory
    Solvers:
      dns01:
        Clouddns:
          Project:  my-project-id
          Service Account Secret Ref:
            Key:   key.json
            Name:  clouddns-dns01-solver-svc-acct
Status:
  Acme:
    Last Registered Email:  email@gmail.com
    Uri:                    https://acme-staging-v02.api.letsencrypt.org/acme/acct/11671464
  Conditions:
    Last Transition Time:  2019-11-28T15:00:34Z
    Message:               The ACME account was registered with the ACME server
    Reason:                ACMEAccountRegistered
    Status:                True
    Type:                  Ready
Events:  <none>
$ kubectl get certificates -o wide
NAME READY SECRET ISSUER STATUS AGE
example-com False example-com-tls example-issuer Waiting for CertificateRequest "example-com-1030278725" to complete 49m
$ kubectl get CertificateRequest -o wide
NAME READY ISSUER STATUS AGE
example-com-1030278725 False example-issuer Waiting on certificate issuance from order default/example-com-1030278725-1017944607: "pending" 50m
The problem is that you are trying to complete DNS01 challenges for a domain managed by Google Domains DNS Servers. This is not possible at this time.
Google Domains DNS is not Google Cloud DNS. You cannot use Cert Manager for automatic DNS01 challenges with Google Domains. There is no API to setup TXT records in Google Domains. There is a supported API for Cert Manager for Google Cloud DNS.
My recommendation: move your domain's DNS servers to Cloud DNS.
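A sketch of that move with the gcloud CLI; the zone name is a placeholder and the final step happens in the Google Domains UI:
# Create a Cloud DNS zone for the domain
gcloud dns managed-zones create my-zone --dns-name="my-domain.com." --description="zone for my-domain.com"
# Look up the nameservers assigned to the zone, then set them at the registrar
gcloud dns managed-zones describe my-zone --format="value(nameServers)"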

Kubernetes NGINX Ingress Controller not picking up TLS Certificates

I set up a new Kubernetes cluster on GKE using the nginx-ingress controller. TLS is not working; it's using the fake certificates.
There is a lot of configuration detail so I made a repo - https://github.com/jobevers/test_ssl_ingress
In short the steps were
create a new cluster without GKE's load balancer
create a tls secret with my key and cert
create an nginx-ingress deployment / pod
create an ingress controller
The nginx-ingress config comes from https://zihao.me/post/cheap-out-google-container-engine-load-balancer/ (and looks very similar to a lot of the examples in the ingress-nginx repo).
My ingress.yaml is nearly identical to the example one
When I run curl, I get
$ curl -kv https://35.196.134.52
[...]
* common name: Kubernetes Ingress Controller Fake Certificate (does not match '35.196.134.52')
[...]
* issuer: O=Acme Co,CN=Kubernetes Ingress Controller Fake Certificate
[...]
which shows that I'm still using the default certificates.
How am I supposed to get it using mine?
Ingress definition
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ssl-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
  - secretName: tls-secret
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: demo-echo-service
          servicePort: 80
Creating the secret:
kubectl create secret tls tls-secret --key tls/privkey.pem --cert tls/fullchain.pem
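To double-check what actually landed in the secret, you can decode the certificate and inspect its subject; a sketch:
kubectl get secret tls-secret -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -noout -subject -issuer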
Debugging further, the certificate is found and does exist on the server:
$ kubectl -n kube-system exec -it $(kubectl -n kube-system get pods | grep ingress | head -1 | cut -f 1 -d " ") -- ls -1 /ingress-controller/ssl/
default-fake-certificate-full-chain.pem
default-fake-certificate.pem
default-tls-secret-full-chain.pem
default-tls-secret.pem
And, from the log, I see
kubectl -n kube-system log -f $(kubectl -n kube-system get pods | grep ingress | head -1 | cut -f 1 -d " ")
[...]
I1013 17:21:45.423998 6 queue.go:111] syncing default/test-ssl-ingress
I1013 17:21:45.424009 6 backend_ssl.go:40] starting syncing of secret default/tls-secret
I1013 17:21:45.424135 6 ssl.go:60] Creating temp file /ingress-controller/ssl/default-tls-secret.pem236555242 for Keypair: default-tls-secret.pem
I1013 17:21:45.424946 6 ssl.go:118] parsing ssl certificate extensions
I1013 17:21:45.743635 6 backend_ssl.go:102] found 'tls.crt' and 'tls.key', configuring default/tls-secret as a TLS Secret (CN: [...])
[...]
But, looking at the nginx.conf, it's still using the fake certs:
$ kubectl -n kube-system exec -it $(kubectl -n kube-system get pods | grep ingress | head -1 | cut -f 1 -d " ") -- cat /etc/nginx/nginx.conf | grep ssl_cert
ssl_certificate /ingress-controller/ssl/default-fake-certificate.pem;
ssl_certificate_key /ingress-controller/ssl/default-fake-certificate.pem;
Turns out that the ingress definition needs to look like:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ssl-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
  - hosts:
    - app.example.com
    secretName: tls-secret
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: demo-echo-service
          servicePort: 80
The host entry under rules needs to match one of the hosts entries under tls.
I just faced that issue as well with v0.30.0, and it turns out that having an ingress config like this, without explicit hostnames, is OK:
spec:
  tls:
  - secretName: ssl-certificate
On my side, the problem was that I had an annotation on the Ingress with an int64 value that was not parsed correctly, and below it was the kubernetes.io/ingress.class definition, so essentially nginx did not find the Ingress; this was stated correctly in the logs:
ignoring add for ingress <ingressname> based on annotation kubernetes.io/ingress.class with value
So using strings in the annotations fixed the problem (see the sketch below).
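A sketch of the fix: annotation values must be strings, so quote anything numeric (the timeout annotation here is only a hypothetical example of such a value):
metadata:
  annotations:
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "60"  # quoted, not a bare 60
    kubernetes.io/ingress.class: "nginx"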
You need to add the root CA certificate to the trusted authorities in places such as Chrome, Firefox, or the server's certificate pool:
Create a directory called /usr/share/ca-certificates/extras
Change the extension of the .pem file to .crt and copy this file to the directory you created
Run sudo dpkg-reconfigure ca-certificates
In the window that opens, first press Enter, then select the file you added from the list that appears using the space key, and press Enter again
Your computer will now automatically recognize other certificates you have generated with this certificate.
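The same steps as shell commands; a sketch, where rootCA.pem is an assumed file name:
sudo mkdir -p /usr/share/ca-certificates/extras
sudo cp rootCA.pem /usr/share/ca-certificates/extras/rootCA.crt
sudo dpkg-reconfigure ca-certificates  # select extras/rootCA.crt in the dialog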
I found that to use a wildcard host for TLS, the tls host name and the rules host name both need to use the wildcard, for example:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ssl-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
  - hosts:
    - "*.example.com"
    secretName: tls-secret
  rules:
  - host: "*.example.com"
    http:
      paths:
      - path: /
        backend:
          serviceName: demo-echo-service
          servicePort: 80