How can I use cert-manager letsencrypt-prod in my Kubernetes service? - ssl

I have 4 YAML files:
Deployment.yaml
Service.yaml
Ingress.yaml
issuer.yaml
I want to use letsencrypt-prod to get a certificate for my service, but it doesn't work.
When I check whether the Ingress and the Issuer are working, both of them look fine:
kubectl get ing
kubectl get issuer
But when I run:
kubectl get cert
the certificate has not become ready for 2 days.
The certificate is not being bound to mandrakee.xyz, and mandrakee.xyz still looks not secure. How can I make my website secure via cert-manager?
Deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echo-server
  template:
    metadata:
      labels:
        app: echo-server
    spec:
      containers:
        - name: httpapi-host
          image: jmalloc/echo-server
          imagePullPolicy: Always
          resources:
            requests:
              memory: "128Mi"
              cpu: "500m"
          ports:
            - containerPort: 80
Service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: echo-service
spec:
  ports:
    - name: http-port
      port: 80
      targetPort: 8080
  selector:
    app: echo-server
Ingress.yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: ambassador
    cert-manager.io/issuer: letsencrypt-prod
  name: test-ingress
spec:
  tls:
    - hosts:
        - mandrakee.xyz
      secretName: letsencrypt-prod
  rules:
    - host: mandrakee.xyz
      http:
        paths:
          - backend:
              service:
                name: echo-service
                port:
                  number: 80
            path: /
            pathType: Prefix
issuer.yaml:
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    email: ykaratoprak#sphereinc.com
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
      - dns01:
          digitalocean:
            tokenSecretRef:
              name: digitalocean-dns
              key: ce28952b5b4e33ea7d98de190f3148a7cc82d31f030bde966ad13b22c1abc524

If you have set up your issuer correctly, which you have assured us you have, you will see a pod belonging to cert-manager in your namespace. cert-manager spawns this pod to validate that the server requesting the certificate resolves to the DNS record.
In your case, you would need to point your DNS record at your ingress.
If this is done successfully, the next stage of debugging is to validate that both port 443 and port 80 are reachable. The validation pod created by cert-manager uses port 80 to validate the communication. A common mistake is to assume that only port 443 will be used for SSL and to disable port 80 for security reasons, only to find out later that Let's Encrypt can't validate the hostname without port 80.
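A quick way to check both of those points from a workstation is something like this (illustrative commands only, using dig and curl; mandrakee.xyz is the domain from the question):
# does the domain resolve to the ingress / load balancer IP?
dig +short mandrakee.xyz
# is port 80 reachable? (expect something like "Not Found" from the ingress)
curl -i http://mandrakee.xyz/
# is port 443 reachable? -k skips verification of the placeholder certificate
curl -ik https://mandrakee.xyz/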
Otherwise, the common scenario is that cert-manager is installed in the cert-manager namespace, so you should check the logs of the controller there. These logs are fairly limited and can sometimes be cryptic when you are hunting for the remedy to your issue.
To find the direct error, the pod spawned by cert-manager in the namespace where you deployed the ingress is a good place to focus on.
A test I would run is to set up the ingress with both 80 and 443: if you open your domain in a browser, you should get the generic, invalid Kubernetes certificate on port 443 and just "Not Found" on port 80. If this works, it rules out the limitation I mentioned before.
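For reference, a typical way to chase the failure down the cert-manager resource chain looks like this (resource names are placeholders; the Order and Challenge objects only exist while an ACME order is in flight):
kubectl describe certificate <certificate-name>
kubectl get certificaterequest
kubectl describe certificaterequest <certificaterequest-name>
kubectl get order,challenge
kubectl describe challenge <challenge-name>
kubectl logs -n cert-manager deploy/cert-manager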

Related

AKS Istio Ingress gateway Certificate is not valid

I have an AKS cluster with Istio installed, and I'm trying to deploy a containerised web API with TLS.
The API runs and is accessible, but it shows as "Not secure".
I have followed the directions on Istio's website to set this up, so I'm not sure what I've missed.
I have created the secret with the command
kubectl create secret tls mycredential -n istio-system --key mycert.key --cert mycert.crt
and set up a gateway as follows:
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: my-gateway
  namespace: mynamespace
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 443
        name: https
        protocol: HTTPS
      tls:
        mode: SIMPLE
        credentialName: mycredential # must be the same as secret
      hosts:
        - 'dev.api2.mydomain.com'
I have the following virtual service:
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: myapi
  namespace: mynamespace
spec:
  hosts:
    - "dev.api2.mydomain.com"
  gateways:
    - my-gateway
  http:
    - match:
        - uri:
            prefix: "/myendpoint"
      rewrite:
        uri: " "
      route:
        - destination:
            port:
              number: 8080
            host: myapi
and the following service:
apiVersion: v1
kind: Service
metadata:
  name: myapi
  namespace: mynamespace
  labels:
    app: myapi
    service: myapi
spec:
  ports:
    - name: http
      port: 8080
      targetPort: 80
  selector:
    app: myapi
The container exposes port 80
Can someone please point me in the right direction? I'm not sure what I've done wrong.
I managed to resolve the issue by setting up cert-manager and pointing it at Let's Encrypt to generate the certificate, rather than using the pre-purchased one I was trying to add manually.
Although it took some searching to find out how to configure this correctly, it is now working and actually saves having to purchase certificates, so win-win :)
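For anyone landing here later, a minimal sketch of the cert-manager side of that fix could look like the following, assuming cert-manager and an ACME ClusterIssuer named letsencrypt-prod are already installed (the names below are placeholders matching the Gateway above):
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: ingress-cert
  namespace: istio-system        # the secret must live in the ingress gateway's namespace
spec:
  secretName: mycredential       # must match credentialName in the Gateway
  dnsNames:
    - dev.api2.mydomain.com
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer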

Cert manager doesn't get the challenge done

I'm setting up a k3s cluster for local development.
To be clear, I do not have a public IP address.
At this moment I'm looking for a solution to get the certificate process automated (via cert-manager).
In order to get this to work I've did the following:
Deployed k3s
Deployed cert-manager
Deployed traefik
Purchased a domain
Created a cloudflare account and added the domain there
Created an API token to do the acme challenge (based on https://cert-manager.io/docs/configuration/acme/dns01/cloudflare/)
Created a simple test website
When I add the test website I get the following error:
Found no Zones for domain _acme-challenge.. (neither in
the sub-domain noir in the SLD) please make sure your domain-entries
in the config are correct and the API is correctly setup with
Zone.read rights.
I have the following configuration:
ClusterIssuer
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    email: my#emailaddress.com
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt
    solvers:
      - dns01:
          cloudflare:
            email: my#emailaddress.com
            apiKeySecretRef:
              name: cloudflare-api-key-secret
              key: api-key
Test website
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
  namespace: test
  annotations:
    kubernetes.io/ingress.class: "traefik"
    cert-manager.io/cluster-issuer: letsencrypt
spec:
  rules:
    - host: test.<mydomain>
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx-svc
                port:
                  number: 80
  tls:
    - secretName: test.<mydomain>
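No answer is recorded here, but one detail worth double-checking against that error: the ClusterIssuer above uses apiKeySecretRef, which cert-manager expects to hold the Cloudflare global API key, while the setup steps mention creating an API token. For a token, the solver would use apiTokenSecretRef instead; a hedged sketch (the secret name and key are illustrative) is:
solvers:
  - dns01:
      cloudflare:
        apiTokenSecretRef:
          name: cloudflare-api-token-secret
          key: api-token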

Unable to successfully setup TLS on a Multi-Tenant GKE+Istio with LetsEncrypt (via Cert Manager)

I'm trying to configure TLS (LetsEncrypt) on a multi-tenant GKE+Istio setup.
I mainly followed this guide -> Full Isolation in Multi-Tenant SAAS with Kubernetes & Istio for setting up the multi-tenancy in GKE+Istio, which I was able to pull off successfully. I'm able to deploy simple apps in their separate namespaces, accessible through their respective subdomains.
I then tried to move forward and set up TLS with LetsEncrypt. For this I mainly followed a different guide, which can be found here -> istio-gke. But unfortunately, following this guide didn't produce the result I wanted; when I was done with it, LetsEncrypt wasn't even issuing certificates for my deployment or domain.
So I tried to follow yet another guide -> istio-gateway-tls-setup. Here I managed to get LetsEncrypt to issue a certificate for my domain, but when I tried to test it out with openssl or other online SSL checkers, they say that I'm still not communicating securely.
Below are the results when I try describe the configurations of my certificates, issuer & gateway:
Certificate: kubectl -n istio-system describe certificate istio-gateway
Issuer: kubectl -n istio-system describe issuer letsencrypt-prod
Gateway: kubectl -n istio-system describe gateway istio-gateway
And here are the dry-run results for my helm install <tenant>:
install.go:173: [debug] Original chart version: ""
install.go:190: [debug] CHART PATH: /home/cjcabero/projects/aqt-ott-msging-dev/gke-setup/helmchart
NAME: tenanta
LAST DEPLOYED: Wed Feb 17 21:15:08 2021
NAMESPACE: default
STATUS: pending-install
REVISION: 1
TEST SUITE: None
USER-SUPPLIED VALUES:
{}
COMPUTED VALUES:
frontend:
  image:
    pullPolicy: IfNotPresent
    repository: paulbouwer/hello-kubernetes
    tag: "1.8"
  ports:
    containerPort: 8080
  service:
    name: http
    port: 80
    type: ClusterIP
HOOKS:
MANIFEST:
---
# Source: helmchart/templates/namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: tenanta
  labels:
    istio-injection: enabled
---
# Source: helmchart/templates/frontend-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend
  namespace: tenanta
spec:
  type: ClusterIP
  ports:
    - name: http
      port: 80
      targetPort: 8080
  selector:
    app: frontend
---
# Source: helmchart/templates/frontend-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  namespace: tenanta
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: hello-kubernetes
          image: paulbouwer/hello-kubernetes:1.8
          ports:
            - containerPort: 8080
          env:
            - name: MESSAGE
              value: Hello tenanta
---
# Source: helmchart/templates/virtualservice.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: tenanta-frontend-ingress
  namespace: istio-system
spec:
  hosts:
    - tenanta.cjcabero.dev
  gateways:
    - istio-gateway
  http:
    - route:
        - destination:
            host: frontend.tenanta.svc.cluster.local
            port:
              number: 80
I don't understand how, even though LetsEncrypt seems to be able to issue the certificate for my domain, I'm still not communicating securely.
Google Domains even managed to find that a certificate was issued for the domain in its Transparency Report.
Anyway, I'm not sure if this could help, but I also tried to check the domain with an online SSL checker and here are the results -> https://check-your-website.server-daten.de/?q=cjcabero.dev.
By the way, I used Istio on GKE, which results in Istio v1.4.10 & Kubernetes v1.18.15-gke.1100.

SSL Certificates on Kubernetes Using ACME

I have been following this tutorial: https://cert-manager.io/docs/ . After installing cert-manager I made sure its pods are running with kubectl get pods --namespace cert-manager:
cert-manager-5597cff495-l5hjs 1/1 Running 0 91m
cert-manager-cainjector-bd5f9c764-xrb2t 1/1 Running 0 91m
cert-manager-webhook-5f57f59fbc-q5rqs 1/1 Running 0 91m
I then configured cert-manager with an ACME issuer by following this tutorial: https://cert-manager.io/docs/configuration/acme/ .
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    # You must replace this email address with your own.
    # Let's Encrypt will use this to contact you about expiring
    # certificates, and issues related to your account.
    email: aidenhsy#gmail.com
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      # Secret resource that will be used to store the account's private key.
      name: letsencrypt-staging
    # Add a single challenge solver, HTTP01 using nginx
    solvers:
      - http01:
          ingress:
            class: nginx
Here is my full ingress config file:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-srv
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: 'true'
spec:
  rules:
    - host: www.hyhaus.xyz
      http:
        paths:
          - path: /api/?(.*)
            backend:
              serviceName: devback-srv
              servicePort: 4000
          - path: /?(.*)
            backend:
              serviceName: devfront-srv
              servicePort: 3000
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: 'true'
    service.beta.kubernetes.io/do-loadbalancer-hostname: 'www.hyhaus.xyz'
  labels:
    helm.sh/chart: ingress-nginx-2.0.3
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.32.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    # You must replace this email address with your own.
    # Let's Encrypt will use this to contact you about expiring
    # certificates, and issues related to your account.
    email: aidenhsy#gmail.com
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      # Secret resource that will be used to store the account's private key.
      name: letsencrypt-staging
    # Add a single challenge solver, HTTP01 using nginx
    solvers:
      - http01:
          ingress:
            class: nginx
However, when I browse to my site, the browser warns that the security certificate is not trusted by my computer's operating system. And when I took a look at the certificate, it shows as self-signed, which is not really what I want. Am I doing something wrong here?
This is a placeholder certificate provided by the nginx ingress controller. When you see it, it means there is no other (dedicated) certificate for the endpoint.
Now, the first reason this can happen is that your Ingress doesn't have the necessary data. Update it with this:
metadata:
  annotations:
    # which issuer to use
    cert-manager.io/cluster-issuer: "letsencrypt-staging"
spec:
  tls: # placing a host in TLS config indicates that a certificate should be created
    - hosts:
        - example.org
        - www.example.org
        - xyz.example.org
      secretName: myingress-cert # cert-manager will store the created certificate in this secret
Documentation for ingress objects is here.
If the above didn't help, try the troubleshooting steps offered by the documentation. In my experience, checking the CertificateRequest and Certificate resources was enough in most cases to determine the problem.
$ kubectl get certificate
$ kubectl describe certificate <certificate-name>
$ kubectl get certificaterequest
$ kubectl describe certificaterequest <CertificateRequest name>
Remember that these objects are namespaced, meaning that they'll be in the same namespace as the ingress object.
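If the Certificate and CertificateRequest look fine but nothing progresses, the ACME-level objects are the next place to look (an addition to the list above, not part of the original answer):
$ kubectl get order,challenge
$ kubectl describe challenge <challenge-name>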
To secure an Ingress, first you have to reference the ClusterIssuer on your Ingress resource (metadata -> annotations -> cert-manager.io/cluster-issuer: nameOfClusterIssuer); cert-manager will then pick it up and create the Certificate resource for you.
Second, you have to add a tls section <= this triggers the creation of the certificate (key/cert pair) by cert-manager via the ClusterIssuer.
Third, you have to add secretName: myingress <= this is where cert-manager will store the TLS secret after creating the key/cert pair for you. A minimal example combining these three pieces is sketched below.
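A minimal sketch putting the three pieces together, reusing the names from the question (myingress-cert and the staging issuer are placeholders):
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-srv
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-staging   # which ClusterIssuer to use
spec:
  tls:
    - hosts:
        - www.hyhaus.xyz
      secretName: myingress-cert   # cert-manager stores the issued key/cert pair here
  rules:
    - host: www.hyhaus.xyz
      http:
        paths:
          - path: /
            backend:
              serviceName: devfront-srv
              servicePort: 3000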

GCP Health Checks with SSL enabled

I'm kind of new to Kubernetes and I'm trying to improve a current system we have here.
The application is developed using Spring Boot and until now it was using HTTP (port 8080) without any encryption. The system requirement is to enable end-to-end encryption for all data in transit. So here is the problem.
Currently, we have a GCE Ingress with TLS enabled using Let's Encrypt to provide the certificates at the cluster entrance. This is working fine. Our Ingress has some path rules to redirect the traffic to the correct microservice, and those microservices are not using TLS for their communication.
I managed to create a self-signed certificate and embed it inside the WAR, and this works just fine on the local machine (with certificate validation disabled). When I deploy this on GKE, the GCP health check and Kubernetes probes are not working at all (I can't see any communication attempt in the application logs).
When I try to configure the backend and health check on GCP, changing both to HTTPS, they don't show any error, but after some time they quietly switch back to HTTP.
Here are my YAML files:
admin-service.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: admin-service
  namespace: default
spec:
  type: NodePort
  selector:
    app: admin
  ports:
    - port: 443
      targetPort: 8443
      name: https
      protocol: TCP
admin-deployment.yaml
---
apiVersion: "apps/v1"
kind: "Deployment"
metadata:
  name: "admin"
  namespace: "default"
  labels:
    app: "admin"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: "admin"
  template:
    metadata:
      labels:
        app: "admin"
    spec:
      containers:
        - name: "backend-admin"
          image: "gcr.io/my-project/backend-admin:X.Y.Z-SNAPSHOT"
          livenessProbe:
            httpGet:
              path: /actuator/health/liveness
              port: 8443
              scheme: HTTPS
            initialDelaySeconds: 8
            periodSeconds: 30
          readinessProbe:
            httpGet:
              path: /actuator/health/readiness
              port: 8443
              scheme: HTTPS
            initialDelaySeconds: 8
            periodSeconds: 30
          env:
            - name: "FIREBASE_PROJECT_ID"
              valueFrom:
                configMapKeyRef:
                  key: "FIREBASE_PROJECT_ID"
                  name: "service-config"
---
apiVersion: "autoscaling/v2beta1"
kind: "HorizontalPodAutoscaler"
metadata:
  name: "admin-etu-vk1a"
  namespace: "default"
  labels:
    app: "admin"
spec:
  scaleTargetRef:
    kind: "Deployment"
    name: "admin"
    apiVersion: "apps/v1"
  minReplicas: 1
  maxReplicas: 3
  metrics:
    - type: "Resource"
      resource:
        name: "cpu"
        targetAverageUtilization: 80
ingress.yaml
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: my-ingress-addr
    kubernetes.io/ingress.class: "gce"
    kubernetes.io/tls-acme: "true"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    acme.cert-manager.io/http01-edit-in-place: "true"
    kubernetes.io/ingress.allow-http: "false"
spec:
  tls:
    - hosts:
        - my-domain.com
      secretName: mydomain-com-tls
  rules:
    - host: my-domain.com
      http:
        paths:
          - path: /admin/v1/*
            backend:
              serviceName: admin-service
              servicePort: 443
status:
  loadBalancer:
    ingress:
      - ip: XXX.YYY.WWW.ZZZ
Reading this document from GCP, I understood that the load balancer is compatible with self-signed certificates.
I would appreciate any insight or new directions you guys can provide.
Thanks in advance.
EDIT 1: I've added here the ingress YAML file which may help to a better understanding of the issue.
EDIT 2: I've updated the deployment YAML with the solution I found for liveness and readiness probes (scheme).
EDIT 3: I've found the solution for GCP health checks using an annotation on the Service declaration. I will put all the details in the response to my own question.
Here is what I found on how to fix the issue.
After reading a lot of documentation related to Kubernetes and GCP, I found a document on GCP explaining how to use annotations on the Service declaration. Take a look at lines 7-8.
---
apiVersion: v1
kind: Service
metadata:
  name: admin-service
  namespace: default
  annotations:
    cloud.google.com/app-protocols: '{"https":"HTTPS"}'
spec:
  type: NodePort
  selector:
    app: iteam-admin
  ports:
    - port: 443
      targetPort: 8443
      name: https
      protocol: TCP
This hints to GCP that the backend service and health check should be created using HTTPS, and everything works as expected.
Reference: https://cloud.google.com/kubernetes-engine/docs/concepts/ingress-xlb#https_tls_between_load_balancer_and_your_application
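To confirm the annotation has been picked up (a hedged follow-up, not part of the original answer), describe the Service and the Ingress; on GKE the ingress.kubernetes.io/backends annotation on the Ingress reports the health of each backend:
kubectl describe service admin-service
kubectl describe ingress my-ingress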