I have a Kubernetes cluster with multiple tenants (in different namespaces). I'd like to deploy an independent Istio Gateway object into each tenant, which I seem to be able to do. However, setting up TLS requires a K8s secret that contains the TLS key/cert. The docs indicate that the "secret must be named istio-ingressgateway-certs in the istio-system namespace". This would seem to indicate that I can only have one TLS secret per cluster. Maybe I'm not reading this correctly. Is there a way to configure independent Istio Gateways in their own namespaces, with their own TLS secrets? How might I go about doing that?
Here is the doc that I'm referencing.
https://istio.io/docs/tasks/traffic-management/ingress/secure-ingress-mount/
Any thoughts are much appreciated.
As described in the Istio documentation, it's possible.
In this section you will configure an ingress gateway for multiple hosts, httpbin.example.com and bookinfo.com.
So you need to create private keys and certificates, in this example for bookinfo and httpbin, and update istio-ingressgateway.
I created them both and they exist.
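For reference, here is a rough sketch of how those secrets could be created and mounted (the certificate/key file names are placeholders; the secret names and mount paths follow the pattern visible in the ls output below, and the patch is only one way to get the extra secret into the gateway pod):

# Create the TLS secrets in istio-system (file names are placeholders)
kubectl -n istio-system create secret tls istio-ingressgateway-certs \
  --cert=httpbin.example.com.crt --key=httpbin.example.com.key
kubectl -n istio-system create secret tls istio-ingressgateway-bookinfo-certs \
  --cert=bookinfo.com.crt --key=bookinfo.com.key

# The default istio-ingressgateway-certs secret is already mounted by the
# gateway; an additional secret needs an explicit volume/volumeMount, e.g.:
kubectl -n istio-system patch deployment istio-ingressgateway --patch '
spec:
  template:
    spec:
      containers:
      - name: istio-proxy
        volumeMounts:
        - name: ingressgateway-bookinfo-certs
          mountPath: /etc/istio/ingressgateway-bookinfo-certs
          readOnly: true
      volumes:
      - name: ingressgateway-bookinfo-certs
        secret:
          secretName: istio-ingressgateway-bookinfo-certs
          optional: true
'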
bookinfo certs and gateway
kubectl exec -it -n istio-system $(kubectl -n istio-system get pods -l istio=ingressgateway -o jsonpath='{.items[0].metadata.name}') -- ls -al /etc/istio/ingressgateway-bookinfo-certs
lrwxrwxrwx 1 root root 14 Jan 3 10:12 tls.crt -> ..data/tls.crt
lrwxrwxrwx 1 root root 14 Jan 3 10:12 tls.key -> ..data/tls.key
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: bookinfo-gateway
spec:
  selector:
    istio: ingressgateway # use istio default ingress gateway
  servers:
  - port:
      number: 443
      name: https-bookinfo
      protocol: HTTPS
    tls:
      mode: SIMPLE
      serverCertificate: /etc/istio/ingressgateway-bookinfo-certs/tls.crt
      privateKey: /etc/istio/ingressgateway-bookinfo-certs/tls.key
    hosts:
    - "bookinfo.com"
httpbin certs and gateway
kubectl exec -it -n istio-system $(kubectl -n istio-system get pods -l istio=ingressgateway -o jsonpath='{.items[0].metadata.name}') -- ls -al /etc/istio/ingressgateway-certs
lrwxrwxrwx 1 root root 14 Jan 3 10:07 tls.crt -> ..data/tls.crt
lrwxrwxrwx 1 root root 14 Jan 3 10:07 tls.key -> ..data/tls.key
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: httpbin-gateway
spec:
  selector:
    istio: ingressgateway # use istio default ingress gateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
      privateKey: /etc/istio/ingressgateway-certs/tls.key
    hosts:
    - "httpbin.example.com"
I haven't made a full reproduction to check that they both work, but if this doesn't work for you I will try to do that and update this answer.
This link might be helpful.
I'm trying to configure TLS (Let's Encrypt) on a multi-tenant GKE + Istio setup.
I mainly followed this guide -> Full Isolation in Multi-Tenant SAAS with Kubernetes & Istio for setting up the multi-tenancy in GKE+Istio, which I was able to successfully pull off. I'm able to deploy simple apps in their separate namespaces, accessible through their respective subdomains.
I then tried to move forward and set up TLS with Let's Encrypt. For this I mainly followed a different guide, which can be found here -> istio-gke. Unfortunately, following this guide didn't produce the result I wanted. When I was done with it, Let's Encrypt wasn't even issuing certificates for my deployment or domain.
I then tried to follow yet another guide -> istio-gateway-tls-setup. Here I managed to get Let's Encrypt to issue a certificate for my domain, but when I tested it with openssl or other online SSL checkers, they say that I'm still not communicating securely.
Below are the results when I describe the configuration of my certificate, issuer & gateway:
Certificate: kubectl -n istio-system describe certificate istio-gateway
Issuer: kubectl -n istio-system describe issuer letsencrypt-prod
Gateway: kubectl -n istio-system describe gateway istio-gateway
And here are the dry-run results for my helm install <tenant>:
install.go:173: [debug] Original chart version: ""
install.go:190: [debug] CHART PATH: /home/cjcabero/projects/aqt-ott-msging-dev/gke-setup/helmchart
NAME: tenanta
LAST DEPLOYED: Wed Feb 17 21:15:08 2021
NAMESPACE: default
STATUS: pending-install
REVISION: 1
TEST SUITE: None
USER-SUPPLIED VALUES:
{}
COMPUTED VALUES:
frontend:
  image:
    pullPolicy: IfNotPresent
    repository: paulbouwer/hello-kubernetes
    tag: "1.8"
  ports:
    containerPort: 8080
  service:
    name: http
    port: 80
    type: ClusterIP
HOOKS:
MANIFEST:
---
# Source: helmchart/templates/namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: tenanta
  labels:
    istio-injection: enabled
---
# Source: helmchart/templates/frontend-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend
  namespace: tenanta
spec:
  type: ClusterIP
  ports:
    - name: http
      port: 80
      targetPort: 8080
  selector:
    app: frontend
---
# Source: helmchart/templates/frontend-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  namespace: tenanta
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: hello-kubernetes
          image: paulbouwer/hello-kubernetes:1.8
          ports:
            - containerPort: 8080
          env:
            - name: MESSAGE
              value: Hello tenanta
---
# Source: helmchart/templates/virtualservice.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: tenanta-frontend-ingress
  namespace: istio-system
spec:
  hosts:
    - tenanta.cjcabero.dev
  gateways:
    - istio-gateway
  http:
    - route:
        - destination:
            host: frontend.tenanta.svc.cluster.local
            port:
              number: 80
I don't understand how, even though Let's Encrypt seems to be able to issue the certificate for my domain, the connection is still not secure.
Google Domains even shows that a certificate was issued for the domain in its Transparency Report.
Anyway, I'm not sure if this helps, but I also tried to check the domain with an online SSL checker and here are the results -> https://check-your-website.server-daten.de/?q=cjcabero.dev.
By the way, I used Istio on GKE, which results in Istio v1.4.10 and Kubernetes v1.18.15-gke.1100.
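For a local check of which certificate the gateway is actually serving (instead of an online checker), something like this could help; a minimal sketch, assuming the domain resolves to the ingress gateway's external IP:

openssl s_client -connect tenanta.cjcabero.dev:443 -servername tenanta.cjcabero.dev </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer -dates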
This is my k3d cluster creation command:
$ k3d cluster create arxius \
--agents 3 \
--k3s-server-arg --disable=traefik \
-p "8888:80#loadbalancer" -p "9000:9000#loadbalancer" \
--volume ${HOME}/.k3d/registries.yaml:/etc/rancher/k3s/registries.yaml
Here are my nodes:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c83f2f746621 rancher/k3d-proxy:v3.0.1 "/bin/sh -c nginx-pr…" 2 weeks ago Up 21 minutes 0.0.0.0:9000->9000/tcp, 0.0.0.0:8888->80/tcp, 0.0.0.0:45195->6443/tcp k3d-arxius-serverlb
0ed525443da2 rancher/k3s:v1.18.6-k3s1 "/bin/k3s agent" 2 weeks ago Up 21 minutes k3d-arxius-agent-2
561a0a51e6d7 rancher/k3s:v1.18.6-k3s1 "/bin/k3s agent" 2 weeks ago Up 21 minutes k3d-arxius-agent-1
fc131df35105 rancher/k3s:v1.18.6-k3s1 "/bin/k3s agent" 2 weeks ago Up 21 minutes k3d-arxius-agent-0
4cfceabad5af rancher/k3s:v1.18.6-k3s1 "/bin/k3s server --d…" 2 weeks ago Up 21 minutes k3d-arxius-server-0
873a4f157251 registry:2 "/entrypoint.sh /etc…" 3 months ago Up About an hour 0.0.0.0:5000->5000/tcp registry.localhost
I've installed Traefik using the default Helm installation command:
$ helm install traefik traefik/traefik
After that, an IngressRoute is also installed in order to reach the dashboard:
Name:         traefik-dashboard
Namespace:    traefik
Labels:       app.kubernetes.io/instance=traefik
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=traefik
              helm.sh/chart=traefik-9.1.1
Annotations:  helm.sh/hook: post-install,post-upgrade
API Version:  traefik.containo.us/v1alpha1
Kind:         IngressRoute
Metadata:
  Creation Timestamp:  2020-12-09T19:07:41Z
  Generation:          1
  Managed Fields:
    API Version:  traefik.containo.us/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:helm.sh/hook:
        f:labels:
          .:
          f:app.kubernetes.io/instance:
          f:app.kubernetes.io/managed-by:
          f:app.kubernetes.io/name:
          f:helm.sh/chart:
      f:spec:
        .:
        f:entryPoints:
        f:routes:
    Manager:         Go-http-client
    Operation:       Update
    Time:            2020-12-09T19:07:41Z
  Resource Version:  141805
  Self Link:         /apis/traefik.containo.us/v1alpha1/namespaces/traefik/ingressroutes/traefik-dashboard
  UID:               1cbcd5ec-d967-440c-ad21-e41a59ca1ba8
Spec:
  Entry Points:
    traefik
  Routes:
    Kind:   Rule
    Match:  PathPrefix(`/dashboard`) || PathPrefix(`/api`)
    Services:
      Kind:  TraefikService
      Name:  api@internal
Events:  <none>
As you can see:
Match: PathPrefix(`/dashboard`) || PathPrefix(`/api`)
I'm trying to reach the dashboard. Nevertheless:
Details are not shown.
I've also tried to launch a curl command:
curl 'http://localhost:9000/api/overview'
curl: (52) Empty reply from server
Any ideas?
First, the default configuration of the traefik helm chart (in version 9.1.1) sets up the entryPoint traefik on port 9000 but does not expose it automatically. So, if you check the service created for you, you will see that it only maps the web and websecure entrypoints.
Check this snippet from kubectl get svc traefik -o yaml
spec:
  clusterIP: xx.xx.xx.xx
  externalTrafficPolicy: Cluster
  ports:
  - name: web
    nodePort: 30388
    port: 80
    protocol: TCP
    targetPort: web
  - name: websecure
    nodePort: 31115
    port: 443
    protocol: TCP
    targetPort: websecure
  selector:
    app.kubernetes.io/instance: traefik
    app.kubernetes.io/name: traefik
  sessionAffinity: None
  type: LoadBalancer
As explained in the docs, there are two ways to reach your dashboard. Either you start a port-forward to your local machine for port 9000, or you expose the dashboard via an IngressRoute on another entrypoint.
Please be aware that you still need to port-forward even though your k3d proxy already binds to port 9000. That binding is only a reservation in case some load-balanced service wants to be exposed on that external port; at the moment it is not used and is also not necessary for either solution. You still need to port-forward to the traefik pod. After establishing the port-forward, you can access the dashboard at http://localhost:9000/dashboard/ (note the trailing slash, which is needed for the PathPrefix rule).
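A minimal sketch of that port-forward, assuming the release is named traefik and was installed into the traefik namespace (as the describe output above suggests):

kubectl -n traefik port-forward deployment/traefik 9000:9000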
The other solution, exposing the dashboard on another entrypoint, requires no port-forward, but you need to take care of a proper domain name (DNS entry + host rule) and of not exposing it to the whole world, e.g. by adding an auth middleware (see the sketch after the example below).
See the changes highlighted below:
# dashboard.yaml
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: dashboard
spec:
  entryPoints:
    - web  # <-- using the web entrypoint, not the traefik (9000) one
  routes:  # v-- adding a host rule
    - match: Host(`traefik.localhost`) && (PathPrefix(`/dashboard`) || PathPrefix(`/api`))
      kind: Rule
      services:
        - name: api@internal
          kind: TraefikService
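If you go this route, a basic-auth middleware could look roughly like the following; a sketch, where dashboard-users is a placeholder secret holding htpasswd-style entries:

apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: dashboard-auth
spec:
  basicAuth:
    secret: dashboard-users  # placeholder secret with a "users" key in htpasswd format

The route in the IngressRoute above would then also have to reference it, e.g. with a middlewares entry naming dashboard-auth under that route.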
I have a K8s cluster (v1.12.8-gke.10) in GKE and have a nginx ingress with hosts rules. I am trying to enable TLS using cert-manager for ingress routes. I am using a selfsign cluster issuer. But, when I access the site over HTTPS, I am still getting the default K8s certificate. (The certificate is only valid for the following names: kubernetes, kubernetes.default, kubernetes.default.svc, kubernetes.default.svc.cluster.local)
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  namespace: test
  name: test
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/ingress.allow-http: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /
    certmanager.k8s.io/cluster-issuer: selfsign
spec:
  tls:
  - secretName: test
    hosts:
    - test.example.com
  rules:
  - host: test.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: test
          servicePort: 80
I have checked the following and they are working fine:
A cluster issuer named "selfsign"
A valid self-signed certificate backed by a secret "test"
A healthy and running nginx ingress deployment
A healthy and running ingress service of type load-balancer
I think it's an issue with the ClusterIssuer.
Have a look at my ClusterIssuer and compare:
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: prod
spec:
  acme:
    # The ACME server URL
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: it-support@something.com
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: prod
    # Enable the HTTP-01 challenge provider
    http01: {}
Check that you are using the right URL to get production-grade certificates:
server: https://acme-v02.api.letsencrypt.org/directory
If your server URL is something like this:
server: https://acme-staging-v02.api.letsencrypt.org/directory
then you are requesting a staging certificate, which can cause this kind of error.
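One way to check which certificate was actually stored (a sketch, using the secret name test from the Ingress above and assuming the standard tls.crt key; staging certificates typically show a "Fake LE" issuer):

kubectl -n test get secret test -o jsonpath='{.data.tls\.crt}' | base64 -d \
  | openssl x509 -noout -subject -issuer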
I've followed the tutorial from Digital Ocean and was able to enable TLS using cert-manager for ingress routes using Helm, Tiller, Let's Encrypt and the Nginx Ingress controller in GKE.
Instead of host test-example.com, I used my own domain name and spun up dummy backend services (echo1 and echo2) for testing purposes.
After following the tutorial, to verify that HTTPS is working correctly, try to curl the host:
$ curl test.example.com
You should see a 308 HTTP response (Permanent Redirect), which indicates that HTTP requests are being redirected to HTTPS.
On the other hand, try running curl on:
$ curl https://test.example.com
This should show you the site's response.
You can run the previous commands with the verbose -v flag to inspect the certificate handshake and verify the certificate information.
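For example, a quick check of the served certificate (a sketch; curl prints the TLS details on stderr, so it is merged into the pipe):

curl -sv https://test.example.com -o /dev/null 2>&1 | grep -E 'subject:|issuer:|expire'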
I can't create a wildcard SSL certificate with cert-manager. I added my domain to Cloudflare, but cert-manager can't verify the ACME account. How do I resolve this problem?
I want a wildcard certificate for my domain that I can use with any deployment. How could I do that?
I found the error, but I don't know how to resolve it: my cluster's DNS can't resolve acme-v02.api.letsencrypt.org.
My k8s version is:
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.3-k3s.1", GitCommit:"8343999292c55c807be4406fcaa9f047e8751ffd", GitTreeState:"clean", BuildDate:"2019-06-12T04:56+00:00Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
Error log:
I0716 13:06:11.712878 1 controller.go:153] cert-manager/controller/issuers "level"=0 "msg"="syncing item" "key"="default/issuer-letsencrypt"
I0716 13:06:11.713218 1 setup.go:162] cert-manager/controller/issuers "level"=0 "msg"="ACME server URL host and ACME private key registration host differ. Re-checking ACME account registration" "related_resource_kind"="Secret" "related_resource_name"="issuer-letsencrypt" "related_resource_namespace"="default" "resource_kind"="Issuer" "resource_name"="issuer-letsencrypt" "resource_namespace"="default"
I0716 13:06:11.713245 1 logger.go:88] Calling GetAccount
E0716 13:06:16.714911 1 setup.go:172] cert-manager/controller/issuers "msg"="failed to verify ACME account" "error"="Get https://acme-v02.api.letsencrypt.org/directory: dial tcp: i/o timeout" "related_resource_kind"="Secret" "related_resource_name"="issuer-letsencrypt" "related_resource_namespace"="default" "resource_kind"="Issuer" "resource_name"="issuer-letsencrypt" "resource_namespace"="default"
I0716 13:06:16.715527 1 sync.go:76] cert-manager/controller/issuers "level"=0 "msg"="Error initializing issuer: Get https://acme-v02.api.letsencrypt.org/directory: dial tcp: i/o timeout" "resource_kind"="Issuer" "resource_name"="issuer-letsencrypt" "resource_namespace"="default"
E0716 13:06:16.715609 1 controller.go:155] cert-manager/controller/issuers "msg"="re-queuing item due to error processing" "error"="Get https://acme-v02.api.letsencrypt.org/directory: dial tcp: i/o timeout" "key"="default/issuer-letsencrypt"
My Issuer:
apiVersion: certmanager.k8s.io/v1alpha1
kind: Issuer
metadata:
  name: issuer-letsencrypt
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: yusufkaan142@gmail.com
    privateKeySecretRef:
      name: issuer-letsencrypt
    dns01:
      providers:
      - name: cf-dns
        cloudflare:
          email: mail@gmail.com
          apiKeySecretRef:
            name: cloudflare-api-key
            key: api-key.txt
Secret:
apiVersion: v1
kind: Secret
metadata:
  name: cloudflare-api-key
  namespace: cert-manager
type: Opaque
data:
  api-key.txt: base64encoded
My Certificate:
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: wilcard-theykk-net
  namespace: cert-manager
spec:
  secretName: wilcard-theykk-net
  issuerRef:
    name: issuer-letsencrypt
    kind: Issuer
  commonName: '*.example.net'
  dnsNames:
  - '*.example.net'
  acme:
    config:
    - dns01:
        provider: cf-dns
      domains:
      - '*.example.net'
      - 'example.net'
DNS ConfigMap for k8s:
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
  name: kube-dns
  namespace: kube-system
data:
  upstreamNameservers: |
    ["1.1.1.1","8.8.8.8"]
I would start by debugging DNS resolution within your K8s cluster:
Spin up a container with basic network tools on board:
kubectl run -i -t busybox --image=radial/busyboxplus:curl --restart=Never
From within the busybox container, check the /etc/resolv.conf file and ensure that you can resolve the Kubernetes DNS service kube-dns:
$ cat /etc/resolv.conf
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local c.org-int.internal google.internal
options ndots:5
Make a lookup request for kubernetes.default, which should return an answer from the DNS nameserver without any issues:
$ nslookup kubernetes.default
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: kubernetes.default
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
Since you've defined upstreamNameservers in the corresponding kube-dns ConfigMap, check whether you can reach the upstream nameservers 1.1.1.1 and 8.8.8.8 from within a Pod.
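For example, from inside the busybox pod started above (a sketch; the second nslookup queries the upstream nameserver directly, bypassing cluster DNS):

nslookup acme-v02.api.letsencrypt.org
nslookup acme-v02.api.letsencrypt.org 1.1.1.1
ping -c 3 1.1.1.1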
Check the DNS pod logs for any suspicious events in each container (kubedns, dnsmasq, sidecar):
kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name | head -1) -c kubedns
kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name | head -1) -c dnsmasq
kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name | head -1) -c sidecar
If all the preceding steps look fine, then DNS discovery is working properly, and you can also inspect the Cloudflare DNS firewall configuration to rule out potential restrictions. You can find more information about troubleshooting DNS issues in the official K8s documentation.
I set up a new Kubernetes cluster on GKE using the nginx-ingress controller. TLS is not working; it's using the fake certificates.
There is a lot of configuration detail so I made a repo - https://github.com/jobevers/test_ssl_ingress
In short, the steps were:
create a new cluster without GKE's load balancer
create a tls secret with my key and cert
create an nginx-ingress deployment / pod
create an ingress controller
The nginx-ingress config comes from https://zihao.me/post/cheap-out-google-container-engine-load-balancer/ (and looks very similar to a lot of the examples in the ingress-nginx repo).
My ingress.yaml is nearly identical to the example one
When I run curl, I get
$ curl -kv https://35.196.134.52
[...]
* common name: Kubernetes Ingress Controller Fake Certificate (does not match '35.196.134.52')
[...]
* issuer: O=Acme Co,CN=Kubernetes Ingress Controller Fake Certificate
[...]
which shows that I'm still using the default certificates.
How am I supposed to get it using mine?
Ingress definition
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ssl-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
  - secretName: tls-secret
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: demo-echo-service
          servicePort: 80
Creating the secret:
kubectl create secret tls tls-secret --key tls/privkey.pem --cert tls/fullchain.pem
Debugging further, the certificate is being found and exists on the server:
$ kubectl -n kube-system exec -it $(kubectl -n kube-system get pods | grep ingress | head -1 | cut -f 1 -d " ") -- ls -1 /ingress-controller/ssl/
default-fake-certificate-full-chain.pem
default-fake-certificate.pem
default-tls-secret-full-chain.pem
default-tls-secret.pem
And, from the log, I see
kubectl -n kube-system log -f $(kubectl -n kube-system get pods | grep ingress | head -1 | cut -f 1 -d " ")
[...]
I1013 17:21:45.423998 6 queue.go:111] syncing default/test-ssl-ingress
I1013 17:21:45.424009 6 backend_ssl.go:40] starting syncing of secret default/tls-secret
I1013 17:21:45.424135 6 ssl.go:60] Creating temp file /ingress-controller/ssl/default-tls-secret.pem236555242 for Keypair: default-tls-secret.pem
I1013 17:21:45.424946 6 ssl.go:118] parsing ssl certificate extensions
I1013 17:21:45.743635 6 backend_ssl.go:102] found 'tls.crt' and 'tls.key', configuring default/tls-secret as a TLS Secret (CN: [...])
[...]
But, looking at nginx.conf, it's still using the fake certs:
$ kubectl -n kube-system exec -it $(kubectl -n kube-system get pods | grep ingress | head -1 | cut -f 1 -d " ") -- cat /etc/nginx/nginx.conf | grep ssl_cert
ssl_certificate /ingress-controller/ssl/default-fake-certificate.pem;
ssl_certificate_key /ingress-controller/ssl/default-fake-certificate.pem;
It turns out that the ingress definition needs to look like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ssl-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
  - hosts:
    - app.example.com
    secretName: tls-secret
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: demo-echo-service
          servicePort: 80
The host entry under rules needs to match one of the hosts entries under tls.
I just faced this issue as well with v0.30.0, and it turns out that having an ingress config like this, without explicit hostnames, is OK:
spec:
  tls:
  - secretName: ssl-certificate
On my side, the problem was that I had an annotation on the ingress with an int64 value that was not parsed correctly, and below it was the kubernetes.io/ingress.class definition, so essentially nginx did not match the ingress to the controller, which was stated correctly in the logs:
ignoring add for ingress <ingressname> based on annotation kubernetes.io/ingress.class with value
So using strings in the annotations fixed the problem.
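A sketch of what that means (the first annotation name here is just illustrative; the point is that annotation values must be strings, so numeric-looking values need quotes):

metadata:
  annotations:
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"  # quoted string, not a bare 3600
    kubernetes.io/ingress.class: "nginx"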
You need to add the root CA certificate to the trusted authorities in places such as Chrome, Firefox, and the server's certificate pool.
Create a directory called /usr/share/ca-certificates/extras
Change the extension of the .pem file to .crt and copy it to the directory you created
Run sudo dpkg-reconfigure ca-certificates
In the window that opens, first press Enter, then select the file you added from the list with the space key and press Enter again
Your computer will now automatically recognize other certificates you have generated with this certificate.
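Roughly, on a Debian/Ubuntu host (a sketch; rootCA.pem is a placeholder file name):

sudo mkdir -p /usr/share/ca-certificates/extras
sudo cp rootCA.pem /usr/share/ca-certificates/extras/rootCA.crt  # placeholder file name
sudo dpkg-reconfigure ca-certificates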
I found that to use a wildcard host for TLS, both the tls host name and the rules host name need to use the wildcard, for example:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ssl-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
  - hosts:
    - "*.example.com"
    secretName: tls-secret
  rules:
  - host: "*.example.com"
    http:
      paths:
      - path: /
        backend:
          serviceName: demo-echo-service
          servicePort: 80