cert-manager + kubernetes wildcard problem [closed] - ssl

I'm trying to create a wildcard certificate on Rancher Kubernetes Engine behind a cloud load balancer.
After installing Rancher I have this Issuer:
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  annotations:
    meta.helm.sh/release-name: rancher
    meta.helm.sh/release-namespace: cattle-system
  creationTimestamp: "2021-09-21T12:10:25Z"
  generation: 1
  labels:
    app: rancher
    app.kubernetes.io/managed-by: Helm
    chart: rancher-2.5.9
    heritage: Helm
    release: rancher
  name: rancher
  namespace: cattle-system
  resourceVersion: "1318"
  selfLink: /apis/cert-manager.io/v1/namespaces/cattle-system/issuers/rancher
  uid: #
spec:
  acme:
    email: #
    preferredChain: ""
    privateKeySecretRef:
      name: letsencrypt-production
    server: https://acme-v02.api.letsencrypt.org/directory
    solvers:
    - http01:
        ingress: {}
status:
  acme:
    lastRegisteredEmail: #
    uri: https://acme-v02.api.letsencrypt.org/#
  conditions:
  - lastTransitionTime: "2021-09-21T12:10:27Z"
    message: The ACME account was registered with the ACME server
    reason: ACMEAccountRegistered
    status: "True"
    type: Ready
This is the Order:
kubectl describe order wildcard-dev-mctqj-4171528257 -n cattle-system
Name:         wildcard-dev-mctqj-4171528257
Namespace:    cattle-system
Labels:       <none>
Annotations:  cert-manager.io/certificate-name: wildcard-dev
              cert-manager.io/certificate-revision: 1
              cert-manager.io/private-key-secret-name: wildcard-dev-2g4rc
API Version:  acme.cert-manager.io/v1
Kind:         Order
Metadata:
  Creation Timestamp:  2021-09-21T14:10:50Z
  Generation:          1
  Managed Fields:
    API Version:  acme.cert-manager.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:cert-manager.io/certificate-name:
          f:cert-manager.io/certificate-revision:
          f:cert-manager.io/private-key-secret-name:
          f:kubectl.kubernetes.io/last-applied-configuration:
        f:ownerReferences:
          .:
          k:{"uid":"}
            .:
            f:apiVersion:
            f:blockOwnerDeletion:
            f:controller:
            f:kind:
            f:name:
            f:uid:
      f:spec:
        .:
        f:commonName:
        f:dnsNames:
        f:issuerRef:
          .:
          f:kind:
          f:name:
        f:request:
      f:status:
        .:
        f:authorizations:
        f:finalizeURL:
        f:state:
        f:url:
    Manager:    controller
    Operation:  Update
    Time:       2021-09-21T14:10:52Z
  Owner References:
    API Version:           cert-manager.io/v1
    Block Owner Deletion:  true
    Controller:            true
    Kind:                  CertificateRequest
    Name:                  wildcard-dev-mctqj
    UID:                   #
  Resource Version:        48930
  Self Link:               /apis/acme.cert-manager.io/v1/namespaces/cattle-system/orders/wildcard-dev-mctqj-4171528257
  UID:                     #
Spec:
  Common Name:  *.
  Dns Names:
    *.rancher-dev.com
  Issuer Ref:
    Kind:  Issuer
    Name:  rancher
  Request:
Status:
  Authorizations:
    Challenges:
      Token:  #######
      Type:   dns-01
      URL:    https://acme-v02.api.letsencrypt.org/acme/chall-v3/##
    Identifier:     rancher.dev.com
    Initial State:  pending
    URL:            https://acme-v02.api.letsencrypt.org/acme/authz-v3/##
    Wildcard:       true
  Finalize URL:  https://acme-v02.api.letsencrypt.org/acme/finalize/###
  State:         pending
  URL:           https://acme-v02.api.letsencrypt.org/acme/order/###
Events:
  Type     Reason  Age  From          Message
  ----     ------  ---  ----          -------
  Warning  Solver  49m  cert-manager  Failed to determine a valid solver configuration for the set of domains on the Order: no configured challenge solvers can be used for this challenge
(The DNS names have been changed, of course.)
Certificate:
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: wildcard-dev
  namespace: cattle-system
spec:
  secretName: wildcard-dev
  issuerRef:
    kind: Issuer
    name: rancher
  commonName: '*.rancher.dev.com'
  dnsNames:
  - '*.rancher.dev.com'
I haven't created an Ingress yet.
I think the trouble is in the Order:
Type: dns-01
What am I doing wrong? Should I create a second Issuer?
Ultimately I want to create a single wildcard certificate and replicate it with kubed, because I need the same wildcard certificate in many namespaces of the cluster. What would you advise?

As documented in serving-a-wildcard-to-ingress, the http01 solver does not support wildcard certificates. You should use the dns01 solver for wildcards instead.
See the documentation for the dns01 solver.
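For example, a dns01-capable Issuer might look roughly like the sketch below. This is only a sketch: it assumes the rancher-dev.com zone is hosted on Cloudflare, and the Issuer name, email, and Secret names are placeholders; use the solver block for whichever DNS provider actually hosts the zone, then point the Certificate's issuerRef at this Issuer.
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: rancher-dns01                   # hypothetical second Issuer, kept separate from the Helm-managed one
  namespace: cattle-system
spec:
  acme:
    email: admin@rancher-dev.com        # placeholder
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-production-dns01   # ACME account key for this Issuer
    solvers:
    - dns01:
        cloudflare:                     # assumption: the zone is managed by Cloudflare
          apiTokenSecretRef:
            name: cloudflare-api-token  # Secret in cattle-system holding the API token
            key: api-token
      selector:
        dnsNames:
        - '*.rancher-dev.com'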
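For the second part of the question, replicating the resulting wildcard Secret into many namespaces with kubed (Config Syncer) is usually done by annotating the source Secret. A sketch, assuming kubed is already installed; the cert-sync=enabled label and the my-app namespace are arbitrary examples:
# Sync the wildcard TLS Secret to every namespace carrying the label cert-sync=enabled
kubectl annotate secret wildcard-dev -n cattle-system kubed.appscode.com/sync="cert-sync=enabled"
# Label each namespace that should receive a copy
kubectl label namespace my-app cert-sync=enabled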

Related

AWS EKS ingress - Entity too large

I am running a Laravel 8 API in the cluster and I have this Ingress:
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"alb.ingress.kubernetes.io/scheme":"internet-facing","alb.ingress.kubernetes.io/target-type":"ip","kubernetes.io/ingress.class":"alb"},"labels":{"app":"voterapi"},"name":"rapp-ingress","namespace":"voterapp"},"spec":{"rules":[{"http":{"paths":[{"backend":{"serviceName":"app-service","servicePort":80},"path":"/*"}]}}]}}
    kubernetes.io/ingress.class: alb
    nginx.ingress.kubernetes.io/proxy-body-size: 100m
  creationTimestamp: "2022-05-26T08:25:50Z"
  finalizers:
  - ingress.k8s.aws/resources
  generation: 1
  labels:
    app: appapi
  name: app-ingress
  namespace: app
  resourceVersion: "94262558"
  uid: ec29661a-f4be-4ae1-a0e0-29c3d8bff0e5
spec:
  rules:
  - http:
      paths:
      - backend:
          service:
            name: app-service
            port:
              number: 80
        path: /*
        pathType: ImplementationSpecific
status:
  loadBalancer:
    ingress:
    - hostname: XXX
I am trying to upload a file through the API and I am getting:
413 Request Entity Too Large
I don't see this error in my PHP log, so it looks like the request is not even reaching the application.
Can anyone help me solve the issue?
Update: Try updating your Ingress by adding the nginx.org/client-max-body-size annotation:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    nginx.org/proxy-read-timeout: "40s"
    nginx.org/proxy-connect-timeout: "40s"
    nginx.org/client-max-body-size: "100m"
In some cases, you might need to increase the max size for all post body data and file uploads.
Try updating the post_max_size and upload_max_filesize values in the php.ini configuration:
post_max_size = 100M
upload_max_filesize = 100M
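If the app runs on one of the official PHP images, one way to apply those values in Kubernetes is an extra .ini file mounted from a ConfigMap. A sketch; the ConfigMap name is made up, and the conf.d path assumes the official php-fpm image:
apiVersion: v1
kind: ConfigMap
metadata:
  name: php-upload-limits        # hypothetical name
  namespace: voterapp
data:
  uploads.ini: |
    post_max_size = 100M
    upload_max_filesize = 100M
Then mount it into the container at /usr/local/etc/php/conf.d/uploads.ini (for example with a volumeMount using subPath: uploads.ini) and restart the pods.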
Reference:
NGINX Ingress Controller to increase the client request body
NGINX Ingress Controller: Advanced Configuration with Annotations
https://laracasts.com/discuss/channels/laravel/increase-file-upload-size

Cert manager letsencrypt only accepting default value for certificate, ignoring duration and renewBefore

I am creating a certificate using cert-manager (1.6.3). The issue is that duration and renewBefore are not taking my custom values; instead the default value (90 days) is used.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: test-tls-com
  namespace: api
spec:
  issuerRef:
    kind: ClusterIssuer
    name: letsencrypt-prod
  duration: 10h
  renewBefore: 1h
  commonName: "*.domain-name.in"
  dnsNames:
  - "*.domain-name.in"
  secretName: test-tls-wild
But when I describe the certificate I can see that the Renewal Time does not match:
kubectl -n api describe cert test-tls-com
---
Not After: 2022-08-14T15:30:38Z
Not Before: 2022-05-16T15:30:39Z
Renewal Time: 2022-08-14T13:30:38Z <--it is not 1h renewal time
My ClusterIssuer looks like this:
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
  namespace: cert-manager
spec:
  acme:
    # The ACME server URL
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: xxx@gmail.com
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-prod
    # DNS-01 challenge solver (DigitalOcean)
    solvers:
    - dns01:
        digitalocean:
          tokenSecretRef:
            name: digitalocean-dns
            key: access-token
kubectl version
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.5", GitCommit:"6b1d87acf3c8253c123756b9e61dac642678305f", GitTreeState:"clean", BuildDate:"2021-03-18T01:10:43Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.11", GitCommit:"38d3c1f3d5306401bcf39a71bad3b5a5106033d7", GitTreeState:"clean", BuildDate:"2022-03-16T14:02:06Z", GoVersion:"go1.16.15", Compiler:"gc", Platform:"linux/amd64"}
Using the above configuration I am able to create the certificate, but I'm not sure why the Renewal Time does not match the duration and renewBefore.
Renewal Time does not match duration and renewBefore because the renewBefore and duration fields must be specified using Go's time.Duration string format, which does not allow the d (days) suffix. You must specify these values using the s, m, and h suffixes instead.
duration and renewBefore should be written as below in the YAML file, as prescribed by cert-manager; only then will they line up with the renewal time and duration.
Example:
duration: 2160h     # 90 days
renewBefore: 360h   # 15 days - custom values can be given here
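For completeness, a sketch of the same Certificate with hour-based values. Note that an ACME CA such as Let's Encrypt always issues 90-day certificates, so a requested duration shorter than that will not change the Not After date; renewBefore, however, is honoured by cert-manager.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: test-tls-com
  namespace: api
spec:
  issuerRef:
    kind: ClusterIssuer
    name: letsencrypt-prod
  duration: 2160h     # 90 days
  renewBefore: 360h   # renew 15 days before expiry
  commonName: "*.domain-name.in"
  dnsNames:
  - "*.domain-name.in"
  secretName: test-tls-wild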

Lets Encrypt DNS challenge using HTTP

I'm trying to set up a Let's Encrypt certificate on Google Cloud. I recently changed it from the http01 to the dns01 challenge type so that I could create Cloud DNS zones and the ACME challenge TXT record would be added automatically.
Here's my certificate.yaml
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: san-tls
  namespace: default
spec:
  secretName: san-tls
  issuerRef:
    name: letsencrypt
  commonName: www.evolut.net
  altNames:
  - portal.evolut.net
  dnsNames:
  - www.evolut.net
  - portal.evolut.net
  acme:
    config:
    - dns01:
        provider: clouddns
      domains:
      - www.evolut.net
      - portal.evolut.net
However now I get the following error when I kubectl describe certificate:
Message: DNS names on TLS certificate not up to date: ["portal.evolut.net" "www.evolut.net"]
Reason: DoesNotMatch
Status: False
Type: Ready
More worryingly, when I kubectl describe order I see the following:
Status:
  Challenges:
    Authz URL:  https://acme-v02.api.letsencrypt.org/acme/authz/redacted
    Config:
      Http 01:
    Dns Name:  portal.evolut.net
    Issuer Ref:
      Kind:  Issuer
      Name:  letsencrypt
    Key:       redacted
    Token:     redacted
    Type:      http-01
    URL:       https://acme-v02.api.letsencrypt.org/acme/challenge/redacted
    Wildcard:  false
    Authz URL:  https://acme-v02.api.letsencrypt.org/acme/authz/redacted
    Config:
      Http 01:
Notice how the Type is always http-01, although in the certificate the domains are listed under dns01.
This means that the ACME TXT record is never created in Cloud DNS and of course the domains aren't validated.
This seems to be related to an issue with the use of multiple domains. I suggest using two different namespaces. You can check an example at the following link:
Failed to list *v1alpha1.Order: orders.certmanager.k8s.io is forbidden
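For reference, with the legacy certmanager.k8s.io/v1alpha1 API the dns01 provider referenced by the Certificate (provider: clouddns) also had to be declared on the Issuer itself. A rough sketch of what that looked like; the project and Secret details are placeholders:
apiVersion: certmanager.k8s.io/v1alpha1
kind: Issuer
metadata:
  name: letsencrypt
  namespace: default
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@evolut.net                  # placeholder
    privateKeySecretRef:
      name: letsencrypt-account-key
    dns01:
      providers:
      - name: clouddns
        clouddns:
          project: my-gcp-project            # placeholder GCP project ID
          serviceAccountSecretRef:
            name: clouddns-dns01-solver-svc-acct
            key: key.json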

Cert-manager certificates not found and challenges not created

I followed https://docs.cert-manager.io/en/venafi/tutorials/quick-start/index.html from start to end and everything seems to be working except that I'm not getting an external ip for my ingress.
NAME HOSTS ADDRESS PORTS AGE
staging-site-ingress staging.site.io,staging.admin.site.io, 80, 443 1h
I'm able to use the nginx ingress controller's external IP and DNS to reach the sites, though. When I go to the URLs I'm redirected to HTTPS, so I assume that part is working.
It redirects to HTTPS but still says "not secure", so no certificate is being issued.
When I'm debugging I get the following information:
Ingress:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal CreateCertificate 54m cert-manager Successfully created Certificate "tls-secret-staging"
Normal UPDATE 35m (x3 over 1h) nginx-ingress-controller Ingress staging/staging-site-ingress
Normal CreateCertificate 23m (x2 over 35m) cert-manager Successfully created Certificate "letsencrypt-staging-tls"
Certificate:
Status:
Conditions:
Last Transition Time: 2019-02-27T14:02:29Z
Message: Certificate does not exist
Reason: NotFound
Status: False
Type: Ready
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal OrderCreated 3m (x2 over 14m) cert-manager Created Order resource "letsencrypt-staging-tls-593754378"
Secret:
Name: letsencrypt-staging-tls
Namespace: staging
Labels: certmanager.k8s.io/certificate-name=staging-site-io
Annotations: <none>
Type: kubernetes.io/tls
Data
====
ca.crt: 0 bytes
tls.crt: 0 bytes
tls.key: 1679 bytes
Order:
Status:
Certificate: <nil>
Finalize URL:
Reason:
State:
URL:
Events: <none>
So it seems something goes wrong in the Order and no challenges are created.
Here are my ingress.yaml and issuer.yaml:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: staging-site-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    certmanager.k8s.io/issuer: "letsencrypt-staging"
    certmanager.k8s.io/acme-challenge-type: http01
spec:
  tls:
  - hosts:
    - staging.site.io
    - staging.admin.site.io
    - staging.api.site.io
    secretName: letsencrypt-staging-tls
  rules:
  - host: staging.site.io
    http:
      paths:
      - backend:
          serviceName: frontend-service
          servicePort: 80
        path: /
  - host: staging.admin.site.io
    http:
      paths:
      - backend:
          serviceName: frontend-service
          servicePort: 80
        path: /
  - host: staging.api.site.io
    http:
      paths:
      - backend:
          serviceName: gateway-service
          servicePort: 9000
        path: /

apiVersion: certmanager.k8s.io/v1alpha1
kind: Issuer
metadata:
  name: letsencrypt-staging
  namespace: staging
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: hello@site.io
    privateKeySecretRef:
      name: letsencrypt-staging-tls
    http01: {}
Does anyone know what I can do to fix this or what went wrong? cert-manager is definitely installed correctly; I'm just not sure about the Ingress and what went wrong in the Order.
Thanks in advance!
EDIT: I found this in the nginx-ingress-controller:
W0227 14:51:02.740081 8 controller.go:1078] Error getting SSL certificate "staging/letsencrypt-staging-tls": local SSL certificate staging/letsencrypt-staging-tls was not found. Using default certificate
It's being logged constantly; the CPU load is always at 0.003 and the CPU graph is full (the other services use almost nothing).
I stumbled over the same issue once, following exactly the same official tutorial.
As @mikebridge mentioned, the issue is the namespace mismatch between the Issuer and the Secret.
For me, the best solution was to switch from Issuer to ClusterIssuer, which is not scoped to a single namespace.
The reason your certificate order is not completing is that the challenge is failing to complete successfully. Review the solver configuration in your Issuer or ClusterIssuer.
See my answer here for more details.
https://stackoverflow.com/a/75454772/4820940
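A sketch of what the ClusterIssuer could look like with the same legacy API the question uses; note that the ACME account key gets its own Secret name here (separate from the TLS secret), and the Ingress annotation would change to certmanager.k8s.io/cluster-issuer:
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: hello@site.io
    privateKeySecretRef:
      name: letsencrypt-staging-account-key   # stored in cert-manager's own namespace
    http01: {}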

Kubernetes PersistentVolume and PersistentVolumeClaim could be causing issues for my pod which crashes while copying logs

I have a PersistentVolume that I specified as the following:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mypv-shared
spec:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 5Gi
  hostPath:
    path: /data/mypv-shared/
Then I created a PersistentVolumeClaim with the following specifications:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypv-shared-claim
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
But when I create the PVC, running kubectl get pv shows that it is bound to a randomly generated PV
NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-38c77920-a223-11e7-89cc-08002719b642 5Gi RWX Delete Bound default/mypv-shared standard 16m
I believe this is causing issues for my pods when running tests, because I am not sure the pod is mounting the intended directory. My pods crash when trying to copy over the test logs at the end of the run.
Is the cause really the PersistentVolume/Claim, or should I be looking at something else? Thanks!
Creating the PVC dynamically provisioned a PV instead of using the one you created manually with the hostPath. On the PVC, simply set .spec.storageClassName to an empty string ("").
From the documentation:
A PVC with its storageClassName set equal to "" is always interpreted to be requesting a PV with no class, so it can only be bound to PVs with no class (no annotation or one set equal to ""). A PVC with no storageClassName is not quite the same ...
So create something like this (I've also added labels and a selector to make sure that the intended PV is paired up with the PVC; you might not need that constraint):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mypv-shared
  labels:
    name: mypv-shared
spec:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 5Gi
  hostPath:
    path: /data/mypv-shared/
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypv-shared-claim
spec:
  storageClassName: ""
  selector:
    matchLabels:
      name: mypv-shared
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
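To verify that the claim binds to the intended hostPath volume, a throwaway pod along these lines can help (the image and paths are only examples):
apiVersion: v1
kind: Pod
metadata:
  name: pv-check
spec:
  restartPolicy: Never
  containers:
  - name: writer
    image: busybox:1.36
    command: ["sh", "-c", "echo test-log > /logs/check.txt && ls -l /logs"]
    volumeMounts:
    - name: shared
      mountPath: /logs
  volumes:
  - name: shared
    persistentVolumeClaim:
      claimName: mypv-shared-claim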