How to configure HTTPS in a deployment YAML file for an ASP.NET Core app locally in minikube? - asp.net-core

I have an ASP.NET Core app that I want to configure with HTTPS in my local Kubernetes cluster using minikube.
The deployment YAML file is:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kube-volume
  labels:
    app: kube-volume-app
spec:
  replicas: 1
  selector:
    matchLabels:
      component: web
  template:
    metadata:
      labels:
        component: web
    spec:
      containers:
        - name: ckubevolume
          image: kubevolume
          imagePullPolicy: Never
          ports:
            - containerPort: 80
            - containerPort: 443
          env:
            - name: ASPNETCORE_ENVIRONMENT
              value: Development
            - name: ASPNETCORE_URLS
              value: https://+:443;http://+:80
            - name: ASPNETCORE_HTTPS_PORT
              value: '443'
            - name: ASPNETCORE_Kestrel__Certificates__Default__Password
              value: mypass123
            - name: ASPNETCORE_Kestrel__Certificates__Default__Path
              value: /app/https/aspnetapp.pfx
          volumeMounts:
            - name: ssl
              mountPath: "/app/https"
      volumes:
        - name: ssl
          configMap:
            name: game-config
You can see I have added environment variables for HTTPS in the YAML file.
I also created a service for this deployment. The YAML file of the service is:
apiVersion: v1
kind: Service
metadata:
  name: service-1
spec:
  type: NodePort
  selector:
    component: web
  ports:
    - name: http
      protocol: TCP
      port: 100
      targetPort: 80
    - name: https
      protocol: TCP
      port: 200
      targetPort: 443
But unfortunately the app does not open via the service when I run the minikube service service-1 command.
However, when I remove the environment variables for HTTPS, the app does open via the service. These are the lines that, when removed, allow the app to open:
- name: ASPNETCORE_URLS
  value: https://+:443;http://+:80
- name: ASPNETCORE_HTTPS_PORT
  value: '443'
- name: ASPNETCORE_Kestrel__Certificates__Default__Password
  value: mypass123
- name: ASPNETCORE_Kestrel__Certificates__Default__Path
  value: /app/https/aspnetapp.pfx
I also confirmed with a shell that the certificate is present in the /app/https folder.
What am I doing wrong?

I think your approach does not fit well with the architecture of Kubernetes. A TLS certificate (for HTTPS) is coupled to a hostname.
I would recommend one of two different approaches:
Expose your app with a Service of type: LoadBalancer
Expose your app with an Ingress resource
Expose your app with a Service of type LoadBalancer
This is typically called a Network LoadBalancer, as it exposes your app directly over TCP or UDP.
See LoadBalancer access in the Minikube documentation. But beware that your app gets an external address from the LoadBalancer, and your TLS certificate probably has to match that.
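As a rough sketch only (not a tested configuration), the Service from the question could be switched to type: LoadBalancer and exposed locally with minikube tunnel:
apiVersion: v1
kind: Service
metadata:
  name: service-1
spec:
  type: LoadBalancer    # gets an external address via `minikube tunnel`
  selector:
    component: web
  ports:
    - name: https
      port: 443
      targetPort: 443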
Expose your app with an Ingress resource
This is the most common approach for microservices in Kubernetes. In addition to your Service of type: NodePort, you also need to create an Ingress resource for your app.
The cluster needs an Ingress controller, and the gateway then handles your TLS certificate instead of your app.
See How to use custom TLS certificate with ingress addon for how to configure both the Ingress and the TLS certificate in Minikube.
I would recommend going this route; a sketch follows below.
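As a sketch only (it assumes the minikube ingress addon is enabled, a hostname such as myapp.local resolves to the minikube IP, and a TLS secret for that hostname has been created, e.g. with kubectl create secret tls my-app-tls --key tls.key --cert tls.crt; the names are illustrative), the Ingress could look roughly like this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kube-volume-ingress
spec:
  tls:
    - hosts:
        - myapp.local
      secretName: my-app-tls    # TLS secret created beforehand
  rules:
    - host: myapp.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service-1   # the NodePort service from the question
                port:
                  number: 100
With this setup TLS terminates at the ingress controller, so the app itself can keep listening on plain HTTP port 80.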

Related

AKS Istio Ingress gateway Certificate is not valid

I have an AKS cluster with Istio installed and I'm trying to deploy a containerised web API with TLS.
The API runs and is accessible, but it is showing as Not secure.
I have followed the directions on Istio's website to set this up, so I'm not sure what I've missed.
I have created the secret with the command
kubectl create secret tls mycredential -n istio-system --key mycert.key --cert mycert.crt
and set up a gateway as follows:
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: my-gateway
  namespace: mynamespace
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 443
        name: https
        protocol: HTTPS
      tls:
        mode: SIMPLE
        credentialName: mycredential # must be the same as secret
      hosts:
        - 'dev.api2.mydomain.com'
The following virtual service
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: myapi
  namespace: mynamespace
spec:
  hosts:
    - "dev.api2.mydomain.com"
  gateways:
    - my-gateway
  http:
    - match:
        - uri:
            prefix: "/myendpoint"
      rewrite:
        uri: " "
      route:
        - destination:
            port:
              number: 8080
            host: myapi
and service
apiVersion: v1
kind: Service
metadata:
  name: myapi
  namespace: mynamespace
  labels:
    app: myapi
    service: myapi
spec:
  ports:
    - name: http
      port: 8080
      targetPort: 80
  selector:
    app: myapi
The container exposes port 80.
Can someone please point me in the right direction, because I'm not sure what I've done wrong.
I managed to resolve the issue by setting up cert-manager and pointing it at Let's Encrypt to generate the certificate, rather than using the pre-purchased one I was trying to add manually.
Although it took some searching to find how to configure this correctly, it is now working, and it actually saves having to purchase certificates, so win-win :)
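For reference, the cert-manager piece can look roughly like this (a sketch only; it assumes cert-manager is installed and a Let's Encrypt ClusterIssuer named letsencrypt-prod already exists, and the resource name is illustrative):
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: api-cert
  namespace: istio-system   # the secret must live in the ingress gateway's namespace
spec:
  secretName: mycredential  # matches credentialName in the Gateway above
  dnsNames:
    - dev.api2.mydomain.com
  issuerRef:
    name: letsencrypt-prod  # assumed existing ClusterIssuer
    kind: ClusterIssuer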

Unable to successfully setup TLS on a Multi-Tenant GKE+Istio with LetsEncrypt (via Cert Manager)

I'm trying to configure TLS (Let's Encrypt) on a multi-tenant GKE + Istio setup.
I mainly followed this guide -> Full Isolation in Multi-Tenant SAAS with Kubernetes & Istio for setting up the multi-tenancy in GKE + Istio, which I was able to successfully pull off. I'm able to deploy simple apps in their separate namespaces, which are accessible through their respective subdomains.
I then tried to move forward and set up TLS with Let's Encrypt. For this I mainly followed a different guide, which can be found here -> istio-gke. But unfortunately, following this guide didn't produce the result I wanted. When I was done with it, Let's Encrypt wasn't even issuing certificates to my deployment or domain.
Thus I tried to follow a different guide, which is as follows -> istio-gateway-tls-setup. Here I managed to get Let's Encrypt to issue a certificate for my domain, but when I tried to test it out with openssl or other online SSL checkers, it said that I still wasn't communicating securely.
Below are the results when I describe the configuration of my certificate, issuer & gateway:
Certificate: kubectl -n istio-system describe certificate istio-gateway
Issuer: kubectl -n istio-system describe issuer letsencrypt-prod
Gateway: kubectl -n istio-system describe gateway istio-gateway
And here are the dry-run results for my helm install <tenant>:
install.go:173: [debug] Original chart version: ""
install.go:190: [debug] CHART PATH: /home/cjcabero/projects/aqt-ott-msging-dev/gke-setup/helmchart
NAME: tenanta
LAST DEPLOYED: Wed Feb 17 21:15:08 2021
NAMESPACE: default
STATUS: pending-install
REVISION: 1
TEST SUITE: None
USER-SUPPLIED VALUES:
{}
COMPUTED VALUES:
frontend:
  image:
    pullPolicy: IfNotPresent
    repository: paulbouwer/hello-kubernetes
    tag: "1.8"
  ports:
    containerPort: 8080
  service:
    name: http
    port: 80
    type: ClusterIP
HOOKS:
MANIFEST:
---
# Source: helmchart/templates/namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: tenanta
  labels:
    istio-injection: enabled
---
# Source: helmchart/templates/frontend-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend
  namespace: tenanta
spec:
  type: ClusterIP
  ports:
    - name: http
      port: 80
      targetPort: 8080
  selector:
    app: frontend
---
# Source: helmchart/templates/frontend-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  namespace: tenanta
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: hello-kubernetes
          image: paulbouwer/hello-kubernetes:1.8
          ports:
            - containerPort: 8080
          env:
            - name: MESSAGE
              value: Hello tenanta
---
# Source: helmchart/templates/virtualservice.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: tenanta-frontend-ingress
  namespace: istio-system
spec:
  hosts:
    - tenanta.cjcabero.dev
  gateways:
    - istio-gateway
  http:
    - route:
        - destination:
            host: frontend.tenanta.svc.cluster.local
            port:
              number: 80
I don't understand how, even though Let's Encrypt seems to be able to issue the certificate for my domain, it still isn't communicating securely.
Google Domains even managed to find that a certificate was issued for the domain in its Transparency Report.
Anyway, I'm not sure if this could help, but I also tried to check the domain with an online SSL checker and here are the results -> https://check-your-website.server-daten.de/?q=cjcabero.dev.
By the way, I did use Istio on GKE, which results in Istio v1.4.10 & Kubernetes v1.18.15-gke.1100.

GCP Health Checks with SSL enabled

I'm kind of new to Kubernetes and I'm trying to improve one current system we have here.
The application is developed using Spring Boot, and until now it was using HTTP (port 8080) without any encryption. The system requirement is to enable end-to-end encryption for all data in transit. So here is the problem.
Currently, we have a GCE Ingress with TLS enabled using Let's Encrypt to provide the certificates at the cluster entrance. This is working fine. Our Ingress has some path rules to redirect the traffic to the correct microservice, and those microservices are not using TLS on the communication.
I managed to create a self-signed certificate and embed it inside the WAR, and this works on the local machine just fine (with certificate validation disabled). When I deploy this on GKE, the GCP health check and Kubernetes probes are not working at all (I can't see any communication attempt in the application logs).
When I try to configure the backend and health check on GCP, changing both to HTTPS, they don't show any error, but after some time they quietly switch back to HTTP.
Here are my YAML files:
admin-service.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: admin-service
  namespace: default
spec:
  type: NodePort
  selector:
    app: admin
  ports:
    - port: 443
      targetPort: 8443
      name: https
      protocol: TCP
admin-deployment.yaml
---
apiVersion: "apps/v1"
kind: "Deployment"
metadata:
  name: "admin"
  namespace: "default"
  labels:
    app: "admin"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: "admin"
  template:
    metadata:
      labels:
        app: "admin"
    spec:
      containers:
        - name: "backend-admin"
          image: "gcr.io/my-project/backend-admin:X.Y.Z-SNAPSHOT"
          livenessProbe:
            httpGet:
              path: /actuator/health/liveness
              port: 8443
              scheme: HTTPS
            initialDelaySeconds: 8
            periodSeconds: 30
          readinessProbe:
            httpGet:
              path: /actuator/health/readiness
              port: 8443
              scheme: HTTPS
            initialDelaySeconds: 8
            periodSeconds: 30
          env:
            - name: "FIREBASE_PROJECT_ID"
              valueFrom:
                configMapKeyRef:
                  key: "FIREBASE_PROJECT_ID"
                  name: "service-config"
---
apiVersion: "autoscaling/v2beta1"
kind: "HorizontalPodAutoscaler"
metadata:
  name: "admin-etu-vk1a"
  namespace: "default"
  labels:
    app: "admin"
spec:
  scaleTargetRef:
    kind: "Deployment"
    name: "admin"
    apiVersion: "apps/v1"
  minReplicas: 1
  maxReplicas: 3
  metrics:
    - type: "Resource"
      resource:
        name: "cpu"
        targetAverageUtilization: 80
ingress.yaml
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: my-ingress-addr
    kubernetes.io/ingress.class: "gce"
    kubernetes.io/tls-acme: "true"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    acme.cert-manager.io/http01-edit-in-place: "true"
    kubernetes.io/ingress.allow-http: "false"
spec:
  tls:
    - hosts:
        - my-domain.com
      secretName: mydomain-com-tls
  rules:
    - host: my-domain.com
      http:
        paths:
          - path: /admin/v1/*
            backend:
              serviceName: admin-service
              servicePort: 443
status:
  loadBalancer:
    ingress:
      - ip: XXX.YYY.WWW.ZZZ
Reading this document from GCP, I understood that the load balancer is compatible with self-signed certificates.
I would appreciate any insight or new directions you guys can provide.
Thanks in advance.
EDIT 1: I've added the ingress YAML file here, which may help give a better understanding of the issue.
EDIT 2: I've updated the deployment YAML with the solution I found for the liveness and readiness probes (scheme).
EDIT 3: I've found the solution for the GCP health checks using an annotation on the Service declaration. I will put all the details in the answer to my own question.
Here is what I found on how to fix the issue.
After reading a lot of documentation related to Kubernetes and GCP, I found a GCP document explaining the use of annotations on the Service declaration. Take a look at lines 7-8.
---
apiVersion: v1
kind: Service
metadata:
  name: admin-service
  namespace: default
  annotations:
    cloud.google.com/app-protocols: '{"https":"HTTPS"}'
spec:
  type: NodePort
  selector:
    app: iteam-admin
  ports:
    - port: 443
      targetPort: 8443
      name: https
      protocol: TCP
This hints GCP to create the backend service and health check using HTTPS, and everything works as expected.
Reference: https://cloud.google.com/kubernetes-engine/docs/concepts/ingress-xlb#https_tls_between_load_balancer_and_your_application

GKE - Using Google managed certificate (mcrt) for SSL connection to Extensible service proxy (ESP)

I am currently trying to set up multiple Cloud Endpoints with my API services running inside of a GKE cluster. I am using an Ingress to expose the ESP to the internet and I have issued a managed certificate to access the proxy using HTTPS. This is the configuration of my ingress:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: mw-ingress
  annotations:
    networking.gke.io/managed-certificates: mw-cert
    kubernetes.io/ingress.global-static-ip-name: mw-static-ip
spec:
  backend:
    serviceName: frontend-service
    servicePort: 80
  rules:
    - http:
        paths:
          - path: /auth/api/*
            backend:
              serviceName: auth-service
              servicePort: 8083
And this is the deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: auth
  name: auth
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: auth
    spec:
      volumes:
        - name: cloud-endpoints-credentials-volume
          secret:
            secretName: cloud-endpoints-secret
      containers:
        - name: auth-service
          image: eu.gcr.io/my-project/auth-service
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8083
              protocol: TCP
        - name: esp
          image: gcr.io/endpoints-release/endpoints-runtime:1
          args: [
            "--backend=127.0.0.1:8083",
            "--http_port=8084",
            "--service=auth-service.endpoints.my-project.cloud.goog",
            "--rollout_strategy=managed",
            "--service_account_key=/etc/nginx/creds/cloudendpoint.json",
            "-z", "healthz"
          ]
          ports:
            - containerPort: 8084
          volumeMounts:
            - name: cloud-endpoints-credentials-volume
              mountPath: /etc/nginx/creds
              readOnly: true
Up to this point everything is working fine.
However, I cannot seem to find a way to enable SSL on the ESP. The official documentation says to create a secret from the certificate files. However, as Google provisions the certificate, I have no idea how to create a secret from it. All of the hints and comments I could find in other sources use self-signed certificates and/or cert-manager, like this: https://github.com/GoogleCloudPlatform/endpoints-samples/issues/52#issuecomment-454387373
They mount a volume containing that secret inside of the deployment. When I just try to add the flag "-ssl_port=443" to the list of arguments on the ESP, I obviously get the following error during deployment because the certificate is not there: nginx: [emerg] BIO_new_file("/etc/nginx/ssl/nginx.crt") failed (SSL: error:02000002:system library:OPENSSL_internal:No such file or directory:fopen('/etc/nginx/ssl/nginx.crt','r') error:1100006e:BIO routines:OPENSSL_internal:NO_SUCH_FILE)
Has anybody used managed certificates in combination with the ESP before, and does anyone have an idea of how to mount the certificate or create a secret?
I ran into the same issue. The solution was to upload my cert as a secret and mount the secret into the ESP container at the location it expects. According to the documentation, the ESP container is hard-coded to look for the certs at a specific file path and with a specific naming convention.
https://cloud.google.com/endpoints/docs/openapi/specify-proxy-startup-options?hl=tr
- name: esp
  image: gcr.io/endpoints-release/endpoints-runtime:1
  volumeMounts:
    - mountPath: /etc/nginx/ssl
      name: test-ssl
      readOnly: true
.
.
.
volumes:
  - name: test-ssl
    projected:
      sources:
        - secret:
            name: test-ssl
            items:
              - key: dev.crt
                path: nginx.crt
              - key: dev.key
                path: nginx.key
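For completeness, the Secret referenced above can be created from the certificate and key files with kubectl (a sketch; the file names and paths are illustrative and must match the keys used in the projected volume):
kubectl create secret generic test-ssl \
  --from-file=dev.crt=./dev.crt \
  --from-file=dev.key=./dev.key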

Kubernetes: How to use https communication between pods

I have two Pods and they are in the same Kubernetes cluster, and Pod1 should communicate with Pod2 over HTTPS.
I use the internal domain name backend-srv.area.cluster.local.
But how do I generate a certificate and integrate it into Pod2 (Apache)?
Your certificates should be generated and passed to Apache via a Kubernetes Secret resource:
apiVersion: v1
kind: Secret
metadata:
  name: apache-secret
data:
  cacerts: your_super_long_string_with_certificate
In your pod YAML configuration you're going to use that secret:
volumes:
  - name: certs
    secret:
      secretName: apache-secret
      items:
        - key: cacerts
          path: cacerts
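For example, a self-signed certificate for the internal service name could be generated with openssl and loaded into that Secret (a sketch only; the file names are illustrative, and Apache will also need the matching private key, which can be added as another key in the same Secret):
# generate a self-signed certificate for the internal DNS name
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout tls.key -out tls.crt \
  -subj "/CN=backend-srv.area.cluster.local"

# create the Secret used in the volume above
kubectl create secret generic apache-secret \
  --from-file=cacerts=tls.crt \
  --from-file=tls.key=tls.key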
I suggest you use a Service to connect to your pods:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: apache
  name: apache
spec:
  externalTrafficPolicy: Cluster
  ports:
    - name: apache
      port: 80
      targetPort: 80
      nodePort: 30080
  selector:
    app: apache
  type: NodePort
Make the proper adjustments to my examples.