I am running a bare-metal Kubernetes cluster with the NGINX ingress controller and MetalLB, and some hostnames mapped to the external IP provided by MetalLB.
I have created an nginx deployment, exposed it via a Service, and created an Ingress with the hostname.
I have created a self-signed certificate with openssl:
openssl req -x509 -newkey rsa:4096 -sha256 -nodes -keyout tls.key -out tls.crt -subj "/CN=fake.example.com" -days 365
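As a side note, modern browsers ignore the CN and require a subjectAltName, so a CN-only self-signed cert will be rejected even once TLS itself works. A variant of the command above that embeds a SAN (a sketch; -addext needs OpenSSL 1.1.1 or newer):

```shell
# Same self-signed cert, but with a SAN matching the hostname
openssl req -x509 -newkey rsa:4096 -sha256 -nodes \
  -keyout tls.key -out tls.crt \
  -subj "/CN=fake.example.com" \
  -addext "subjectAltName=DNS:fake.example.com" \
  -days 365

# Confirm the SAN made it into the certificate
openssl x509 -in tls.crt -noout -ext subjectAltName
```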
Then created a secret in the correct namespace:
kubectl -n demo create secret tls fake-self-secret --cert=tls.crt --key=tls.key
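Before wiring the secret into an Ingress, it can be worth a sanity check that the cert and key actually pair up; the public key derived from each must be identical (a sketch):

```shell
# The two hashes must match, otherwise the controller will reject the pair
openssl x509 -in tls.crt -noout -pubkey | sha256sum
openssl pkey -in tls.key -pubout | sha256sum
```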
Then created the Ingress:
apiVersion: v1
items:
- apiVersion: extensions/v1beta1
  kind: Ingress
  metadata:
    annotations:
      kubernetes.io/ingress.class: nginx
      nginx.ingress.kubernetes.io/ssl-redirect: "false"
    name: demo-ingress
    namespace: demo
  spec:
    rules:
    - host: fake.example.com
      http:
        paths:
        - backend:
            serviceName: nginx
            servicePort: 80
          path: /
    tls:
    - hosts:
      - fake.example.com
      secretName: fake-self-secret
HTTP works (because of the ssl-redirect: "false" annotation), but HTTPS returns SSL_ERROR_RX_RECORD_TOO_LONG. In the nginx ingress controller log I see something like:
"\x16\x03\x01\x00\xA8\x01\x00\x00\xA4\x03\x03*\x22\xA8\x8F\x07q\xAD\x98\xC1!\
openssl s_client -connect fake.example.com:443 -servername fake.example.com -crlf
140027703674768:error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol:s23_clnt.c:794:
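For what it's worth, both SSL_ERROR_RX_RECORD_TOO_LONG and the s_client "unknown protocol" error usually mean the listener on port 443 is answering with plain HTTP rather than TLS, i.e. the handshake never happens. The symptom can be reproduced locally (a sketch, assuming python3 and openssl are installed):

```shell
# Serve plain HTTP on a port, then try to speak TLS to it
python3 -m http.server 18443 >/dev/null 2>&1 &
SRV=$!
sleep 1
# The handshake fails because the server answers with an HTTP response, not TLS
openssl s_client -connect 127.0.0.1:18443 </dev/null 2>&1 | head -n 2
kill $SRV
```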
The nginx ingress-controller version is 0.30 with the default configuration; the ssl-protocols enabled in the ConfigMap are TLSv1 TLSv1.1 TLSv1.2 TLSv1.3.
Any help / new ideas are welcome :)
I have switched from the Kubernetes nginx ingress controller to the NGINX Ingress Controller (nginx/nginx-ingress:1.7.0), and the config works.
Related
I'm using Kubernetes to host my app, and I want to automatically generate and renew Let's Encrypt certificates using cert-manager.
My project is open-source and all Kubernetes configs are publicly available. The domain I'm requesting a certificate for is pychat.org, and it is managed by Cloudflare. Kubernetes is already set up, the domain points to the load-balancer IP address, and it returns index.html correctly.
So I'm following the guide:
Installing the cert-manager CRDs and cert-manager itself:
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.8.0/cert-manager.yaml
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm template cert-manager jetstack/cert-manager --namespace cert-manager --version v1.8.0 | kubectl apply -f -
Defining the Cloudflare API token in cf-secret.yaml:
apiVersion: v1
kind: Secret
metadata:
  name: cloudflare-api-token-secret
type: Opaque
stringData:
  api-token: cf-token-copied-from-api
Defining the issuer and certificate in cert-manager.yaml:
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    email: deathangel908@gmail.com
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - dns01:
        cloudflare:
          email: deathangel908@gmail.com
          apiTokenSecretRef:
            name: cloudflare-api-token-secret
            key: api-token
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: pychat-domain
  namespace: pychat
spec:
  secretName: pychat-tls
  issuerRef:
    name: letsencrypt-prod
  duration: 2160h # 90d
  renewBefore: 720h # renew 30d before the certificate expires
  dnsNames:
  - "pychat.org"
  - "*.pychat.org"
3. If I check the certificate, it seems to be generated correctly:
kubectl get secret pychat-tls -n default -o yaml
apiVersion: v1
data:
  tls.crt: LS0tLS1CRUdJTiB...
  tls.key: LS0tLS1CRUdJTiBSU0E...
metadata:
  annotations:
    cert-manager.io/alt-names: '*.pychat.org,pychat.org'
    cert-manager.io/certificate-name: pychat-domain
    cert-manager.io/common-name: pychat.org
    cert-manager.io/ip-sans: ""
    cert-manager.io/issuer-group: ""
    cert-manager.io/issuer-kind: Issuer
    cert-manager.io/issuer-name: letsencrypt-prod
    cert-manager.io/uri-sans: ""
  creationTimestamp: "2022-05-21T22:44:22Z"
  name: pychat-tls
  namespace: default
  resourceVersion: "1800"
  uid: f38c228b-b3a6-4649-aaf6-d9727685569c
type: kubernetes.io/tls
echo 'LS0tLS1CRUdJTiB...' | base64 -d > lol.cert
openssl x509 -in ./lol.cert -noout -text
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            04:b4:24:ae:61:c9:24:b6:50:5d:c2:50:0c:28:0f:c1:d5:17
        Signature Algorithm: sha256WithRSAEncryption
        Issuer: C = US, O = Let's Encrypt, CN = R3
        Validity
            Not Before: May 21 21:46:14 2022 GMT
            Not After : Aug 19 21:46:13 2022 GMT
        Subject: CN = pychat.org
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                RSA Public-Key: (2048 bit)
                Modulus:
                    00:c7:7f:08:....
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature, Key Encipherment
            X509v3 Extended Key Usage:
                TLS Web Server Authentication, TLS Web Client Authentication
            X509v3 Basic Constraints: critical
                CA:FALSE
            X509v3 Subject Key Identifier:
                38:41:8D:F9:5A...
            X509v3 Authority Key Identifier:
                keyid:14:2E:B3..
            Authority Information Access:
                OCSP - URI:http://r3.o.lencr.org
                CA Issuers - URI:http://r3.i.lencr.org/
            X509v3 Subject Alternative Name:
                DNS:*.pychat.org, DNS:pychat.org
            X509v3 Certificate Policies:
                Policy: 2.23.140.1.2.1
                Policy: 1.3.6.1.4.1.44947.1.1.1
                  CPS: http://cps.letsencrypt.org
            CT Precertificate SCTs:
                Signed Certificate Timestamp:
                    Version   : v1 (0x0)
                    Log ID    : 46:A5:55:..
                    Extensions: none
                    Signature : ecdsa-with-SHA256
                                30:45:02:21:00:...
                Signed Certificate Timestamp:
                    Version   : v1 (0x0)
                    Log ID    : 6F:53:76:A...
                    Timestamp : May 21 22:46:14.722 2022 GMT
                    Extensions: none
                    Signature : ecdsa-with-SHA256
                                30:46:02:...
    Signature Algorithm: sha256WithRSAEncryption
         6b:21:da:3a:ea:d8:...
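Rather than copy-pasting the base64 blob by hand, the same inspection can be done in one pipeline (a sketch; the backslash escapes the dot inside the jsonpath key):

```shell
# Pull the cert straight out of the secret and print its subject and validity
kubectl -n default get secret pychat-tls -o jsonpath='{.data.tls\.crt}' \
  | base64 -d \
  | openssl x509 -noout -subject -dates
```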
If I check ingress, it shows this:
$ kubectl describe ingress ingress
Name: ingress
Labels: <none>
Namespace: pychat
Address: 194.195.247.104
Default backend: frontend-service:80 (10.2.0.15:80)
TLS:
pychat-tls terminates pychat.org
Rules:
Host Path Backends
---- ---- --------
pychat.org
/api backend-service:8888 (10.2.0.16:8888,10.2.0.19:8888)
/ws backend-service:8888 (10.2.0.16:8888,10.2.0.19:8888)
/ frontend-service:80 (10.2.0.15:80)
Annotations: cert-manager.io/issuer: letsencrypt-prod
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal CreateCertificate 15m cert-manager-ingress-shim Successfully created Certificate "pychat-tls"
Normal Sync 15m (x2 over 15m) nginx-ingress-controller Scheduled for sync
But if I make a request, it returns the self-signed Kubernetes default certificate:
$ curl -vk https://pychat.org/ 2>&1 | grep -e subject: -e issuer:
* subject: O=Acme Co; CN=Kubernetes Ingress Controller Fake Certificate
* issuer: O=Acme Co; CN=Kubernetes Ingress Controller Fake Certificate
The server is hosted on Linode and uses its load balancer if it matters:
helm repo add stable https://charts.helm.sh/stable
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install nginx-ingress ingress-nginx/ingress-nginx
The ingress.yaml configuration looks like this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/issuer: "letsencrypt-prod"
  name: ingress
  namespace: pychat
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - pychat.org
    secretName: pychat-tls
  defaultBackend:
    service:
      name: frontend-service
      port:
        number: 80
  rules:
  - host: pychat.org
    http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: backend-service
            port:
              number: 8888
      - path: /ws
        pathType: Prefix
        backend:
          service:
            name: backend-service
            port:
              number: 8888
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend-service
            port:
              number: 80
Found the issue; the guides just don't tell you much about it:
All the resources you create should be in the same namespace as your app. The cloudflare-api-token-secret secret, the letsencrypt-prod issuer, and the pychat-domain certificate should all have metadata -> namespace set to your app's namespace, rather than the cert-manager or default one.
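Concretely, the fix amounts to pinning each resource to the app's namespace, e.g. (a sketch, assuming the app runs in the pychat namespace):

```yaml
# Every cert-manager resource lives next to the app, not in cert-manager/default
apiVersion: v1
kind: Secret
metadata:
  name: cloudflare-api-token-secret
  namespace: pychat   # same namespace as the Ingress
type: Opaque
stringData:
  api-token: cf-token-copied-from-api
---
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: letsencrypt-prod
  namespace: pychat   # Issuer is namespaced; it must sit with the Certificate
```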
Further tip: to debug this issue, I took a look at the pod logs:
kubectl get all -n cert-manager
kubectl logs pod/cert-manager-id-from-top -n cert-manager
I obtained an intermediate SSL certificate from SSL.com recently. I'm running some services in AKS (Azure Kubernetes Service). Earlier I was using Let's Encrypt with cert-manager, but I want to use SSL.com as the CA going forward. So basically, I obtained chained.crt and the private.key.
The chained.crt consists of 4 certificates, like below.
-----BEGIN CERTIFICATE-----
abc
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
def
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
ghi
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
jkl
-----END CERTIFICATE-----
The first step was, I created a secret as below. The content I added in tls.crt and tls.key was base64-encoded data.
cat chained.crt | base64 | tr -d '\n'
cat private.key | base64 | tr -d '\n'
apiVersion: v1
kind: Secret
metadata:
  name: ca-key-pair
  namespace: cert
data:
  tls.crt: <crt>
  tls.key: <pvt>
Then eventually I created the Issuer by referring to the secret I created above.
apiVersion: cert-manager.io/v1beta1
kind: Issuer
metadata:
  name: my-issuer
  namespace: cert
spec:
  ca:
    secretName: ca-key-pair
The issue I'm having here is that when I create the Issuer, it gives an error like this:
Status:
  Conditions:
    Last Transition Time:  2022-01-27T16:09:02Z
    Message:               Error getting keypair for CA issuer: certificate is not a CA
    Reason:                ErrInvalidKeyPair
    Status:                False
    Type:                  Ready
Events:
  Type     Reason             Age                From          Message
  ----     ------             ---                ----          -------
  Warning  ErrInvalidKeyPair  18s (x2 over 18s)  cert-manager  Error getting keypair for CA issuer: certificate is not a CA
I searched and found "How do I add an intermediate SSL certificate to Kubernetes ingress TLS configuration?" and followed the things mentioned there too, but I'm still getting the same error.
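For reference, one way to see why a ca Issuer complains (a sketch, assuming a POSIX shell with awk and OpenSSL 1.1.1+) is to split the bundle and check basicConstraints on each certificate; a ca Issuer needs a certificate with CA:TRUE, while the leaf at the top of a typical chained.crt has CA:FALSE:

```shell
# Split chained.crt into one file per certificate: cert-1.pem, cert-2.pem, ...
awk '/BEGIN CERTIFICATE/{n++} {print > ("cert-" n ".pem")}' chained.crt

# Print the CA flag of each cert in the bundle
for f in cert-*.pem; do
  printf '%s: ' "$f"
  openssl x509 -in "$f" -noout -ext basicConstraints | tr -d '\n'
  echo
done
```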
Perfect! After spending more time on this, I was lucky enough to make this work. In this case, you don't need to create an Issuer or ClusterIssuer at all.
First, create a TLS secret from your private.key and certificate.crt:
kubectl create secret tls ssl-secret --key private.key --cert certificate.crt
After creating the secret, you can refer to it directly in the Ingress.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: frontend-app-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/proxy-body-size: 8m
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  tls:
  - hosts:
    - <domain-name>
    secretName: ssl-secret
  rules:
  - host: <domain-name>
    http:
      paths:
      - backend:
          serviceName: backend
          servicePort: 80
        path: /(.*)
Then verify that everything's working. The above process worked for me.
Hi team, I have followed this link to configure cert-manager for my Istio setup, but I am still not able to access the app through the Istio ingress.
My manifest files look like this:
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: test-cert
  namespace: testing
spec:
  secretName: test-cert
  dnsNames:
  - "example.com"
  issuerRef:
    name: test-letsencrypt
    kind: ClusterIssuer
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: test-letsencrypt
  namespace: testing
spec:
  acme:
    email: abc@example.com
    privateKeySecretRef:
      name: testing-letsencrypt-private-key
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    solvers:
    - http01:
        ingress:
          class: istio
      selector: {}
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  annotations:
    certmanager.k8s.io/acme-challenge-type: http01
    certmanager.k8s.io/cluster-issuer: test-letsencrypt
  name: test-gateway
  namespace: testing
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    hosts:
    - "example.com"
    tls:
      mode: SIMPLE
      credentialName: test-cert
Can anyone help me with what I am missing here?
Error from browser :
Secure Connection Failed
An error occurred during a connection to skydeck-test.asteria.co.in. PR_CONNECT_RESET_ERROR
The page you are trying to view cannot be shown because the authenticity of the received data could not be verified.
Please contact the web site owners to inform them of this problem.
Learn more…
These are a few logs that may be helpful:
Normal Generated 5m13s cert-manager Stored new private key in temporary Secret resource "test-cert-sthkc"
Normal Requested 5m13s cert-manager Created new CertificateRequest resource "test-cert-htxcr"
Normal Issuing 4m33s cert-manager The certificate has been successfully issued
samirparhi@Samirs-Mac ~ % k get certificate -n testing
NAME READY SECRET AGE
test-cert True test-cert 19m
Note: this namespace (testing) has Istio sidecar injection enabled, and all HTTP requests work, but when I try to set up HTTPS it fails.
I encountered the same problem when my certificate was not authenticated by a trusted third party but was instead signed by me. I had to add an exception to my browser in order to access the site, so in the end it is simply a money issue.
I was also able to add my certificate to the /etc/ssl directory of the client machine to connect without problems.
I was also able to add certificates by using TLS secrets and referencing them in my virtual service configuration. You can try that too.
Examples:
TLS Secret:
kubectl create -n istio-system secret tls my-tls-secret --key=www.example.com.key --cert=www.example.com.crt
I assumed that you already have your certificate and its key, but in case you need it:
Certificate creation:
openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -subj '/O=My Company Inc./CN=example.com' -keyout example.com.key -out example.com.crt
openssl req -out www.example.com.csr -newkey rsa:2048 -nodes -keyout www.example.com.key -subj "/CN=www.example.com/O=World Wide Example organization"
openssl x509 -req -days 365 -CA example.com.crt -CAkey example.com.key -set_serial 0 -in www.example.com.csr -out www.example.com.crt
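To confirm the www certificate really chains back to the organisation certificate created in the first command, a quick check (a sketch):

```shell
# Expect: www.example.com.crt: OK
openssl verify -CAfile example.com.crt www.example.com.crt
```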
Just don't forget to fill the -subj fields in a reasonable manner; they are the working factor of authenticity when it comes to SSL certs, as I understand it. For example, the first line of certificate creation creates a key and certificate for your organisation, which is not approved by any authority to be added to Mozilla's or Chrome's or the OS's SSL database.
That is why you get the "Untrusted certificate" message. For that reason, you can simply create a key, create your DNS records in a trusted third party's DNS zone and database, and, by paying them, use their trusted organisation certificates to authenticate your own site.
Gateway:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: mygateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: my-tls-secret # must be the same as secret
    hosts:
    - www.example.com
Hope it helps.
Feel free to share "app" details.
I am trying to create an API that serves HTTPS traffic publicly and is reachable by an IP address (not a domain), using a GKE cluster. The Docker images have been tested locally; they were capable of serving HTTPS on their own, but as far as I've come to realize, this is not necessary for the setup I am imagining.
So what I've come up with so far is a Kubernetes Service exposing its 8443 port and an Ingress load balancer mapping to that port, using self-signed certificates created as in a tutorial (the basic-ingress-secret referred to in the template). The only thing I have skipped is the domain binding, given that I don't own a domain. I hoped it would bind the certificate to the external IP, but this is unfortunately not the case (I have tried to put an IP in the CN of the certificate, as some users have suggested).
This is my yaml for service:
apiVersion: v1
kind: Service
metadata:
  name: some-node
spec:
  selector:
    app: some
  ports:
  - protocol: "TCP"
    port: 8443
    targetPort: 8443
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: some-node-deploy
spec:
  selector:
    matchLabels:
      app: some
  replicas: 3
  template:
    metadata:
      labels:
        app: some
    spec:
      containers:
      - name: some-container
        image: "gcr.io/some-27417/some:latest"
This is my yaml for Ingress:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: basic-ingress
  annotations:
    kubernetes.io/ingress.allow-http: "false"
spec:
  tls:
  - secretName: basic-ingress-secret
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: some-node
          servicePort: 8443
It is way simpler than the documentation explains.
1.- Create the self-signed certs
openssl req -newkey rsa:2048 -nodes -keyout tls.key -out tls.csr
openssl x509 -in tls.csr -out tls.crt -req -signkey tls.key
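The two commands above will prompt interactively for the subject fields; a non-interactive equivalent (a sketch, with a throwaway subject) is:

```shell
# Create key + CSR without prompts, then self-sign the CSR
openssl req -newkey rsa:2048 -nodes -keyout tls.key -out tls.csr -subj "/CN=example.local"
openssl x509 -in tls.csr -out tls.crt -req -signkey tls.key -days 365
```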
2.- Create the secret
kubectl create secret tls basic-ingress-secret --cert tls.crt --key tls.key
3.- Create the Ingress object
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: basic-ingress
  annotations:
    kubernetes.io/ingress.allow-http: "false"
spec:
  tls:
  - secretName: basic-ingress-secret
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: some-node
          servicePort: 8443
Note: to make this work on GKE, your service must be of type NodePort.
$ kubectl describe svc some-node
Name: some-node
Namespace: default
Labels: run=nginx
Annotations: <none>
Selector: run=nginx
Type: NodePort
IP: 10.60.6.214
Port: <unset> 8443/TCP
TargetPort: 80/TCP
NodePort: <unset> 30250/TCP
Endpoints: 10.56.0.17:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
I have found a final solution that suits my current needs.
The issue with the setup above was that I didn't have a nodePort value set and the SSL certificate was not working properly, so I purchased a domain, secured a static IP for the Ingress load balancer using gcloud compute addresses create some-static-ip --global, and pointed the domain to that IP. I then created a self-signed SSL certificate again, following the same tutorial and using my new domain.
The final yaml for the service:
apiVersion: v1
kind: Service
metadata:
  name: some-node
spec:
  selector:
    app: some
  ports:
  - port: 80
    targetPort: 30041
    nodePort: 30041
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: some-node-deploy
spec:
  selector:
    matchLabels:
      app: some
  replicas: 3
  template:
    metadata:
      labels:
        app: some
    spec:
      containers:
      - name: some-container
        image: "gcr.io/some-project/some:v1"
The final yaml for the LB:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: basic-ingress
  annotations:
    kubernetes.io/ingress.class: "gce"
    kubernetes.io/ingress.global-static-ip-name: "some-static-ip"
    kubernetes.io/ingress.allow-http: "false"
spec:
  tls:
  - secretName: basic-ingress-secret
  rules:
  - host: my.domain.com
    http:
      paths:
      - path: /some/*
        backend:
          serviceName: some-node
          servicePort: 80
The service now serves HTTPS traffic (port 443 only) on my.domain.com/some/* (placeholder domain and path). It is a simple HTTPS service setup that would only require purchasing a CA-issued SSL certificate and a proper scaling configuration to be fully production-ready from a DevOps standpoint, unless someone has found serious drawbacks to this setup.
I have Kubernetes installed on Ubuntu 19.10.
I've set up ingress-nginx and can access my test service using HTTP.
However, I get "Connection refused" when I try to access it via HTTPS.
[Edit] I'm trying to get HTTPS to terminate at the ingress and pass unencrypted traffic to my service, the same way HTTP does. I've implemented the below based on many examples I've seen, but with little luck.
Yaml
kind: Service
apiVersion: v1
metadata:
  name: messagemanager-service
  namespace: default
  labels:
    name: messagemanager-service
spec:
  type: NodePort
  selector:
    app: messagemanager
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
    nodePort: 31212
    name: http
  externalIPs:
  - 192.168.0.210
---
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
  name: messagemanager
  labels:
    app: messagemanager
    version: v1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: messagemanager
  template:
    metadata:
      labels:
        app: messagemanager
        version: v1
    spec:
      containers:
      - name: messagemanager
        image: test/messagemanager:1.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
          protocol: TCP
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: messagemanager-ingress
  annotations:
    nginx.ingress.kubernetes.io/ssl-passthrough: false
    ingress.kubernetes.io/rewrite-target: /
spec:
  tls:
  - secretName: tls-secret
  rules:
  - http:
      paths:
      - path: /message
        backend:
          serviceName: messagemanager-service
          servicePort: 8080
https test
curl -kL https://192.168.0.210/message -verbose
* Trying 192.168.0.210:443...
* TCP_NODELAY set
* connect to 192.168.0.210 port 443 failed: Connection refused
* Failed to connect to 192.168.0.210 port 443: Connection refused
* Closing connection 0
curl: (7) Failed to connect to 192.168.0.210 port 443: Connection refused
http test
curl -kL http://192.168.0.210/message -verbose
* Trying 192.168.0.210:80...
* TCP_NODELAY set
* Connected to 192.168.0.210 (192.168.0.210) port 80 (#0)
> GET /message HTTP/1.1
> Host: 192.168.0.210
> User-Agent: curl/7.65.3
> Accept: */*
> Referer: rbose
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Content-Type: text/plain;charset=UTF-8
< Date: Fri, 24 Apr 2020 18:44:07 GMT
< connection: keep-alive
< content-length: 50
<
* Connection #0 to host 192.168.0.210 left intact
$ kubectl -n ingress-nginx get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.105.92.236 <pending> 80:31752/TCP,443:32035/TCP 2d
ingress-nginx-controller-admission ClusterIP 10.100.223.87 <none> 443/TCP 2d
$ kubectl get ingress -o wide
NAME CLASS HOSTS ADDRESS PORTS AGE
messagemanager-ingress <none> * 80, 443 37m
key creation
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=nginxsvc/O=nginxsvc"
kubectl create secret tls tls-secret --key tls.key --cert tls.crt
$ kubectl describe ingress
Name: messagemanager-ingress
Namespace: default
Address:
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
TLS:
tls-secret terminates
Rules:
Host Path Backends
---- ---- --------
*
/message messagemanager-service:8080 ()
Annotations:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal CREATE 107s nginx-ingress-controller Ingress default/messagemanager-ingress
I was under the assumption that TLS would terminate at the ingress and the request would be passed on to the service as HTTP.
I had to add the external IPs in the service to get HTTP to work.
Am I missing something similar for HTTPS?
Any help and guidance is appreciated.
Thanks
Mark
I've reproduced your scenario in my lab, and after a few changes to your ingress it's working as you described.
In my lab I used an nginx image that serves a default landing page on port 80; with the Ingress rule below, it's possible to serve it on both port 80 and 443.
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nginx
  labels:
    app: nginx
    version: v1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
        version: v1
    spec:
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
          protocol: TCP
---
kind: Service
apiVersion: v1
metadata:
  name: nginx-service
  namespace: default
  labels:
    name: nginx-service
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 31000
    name: http
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: nginx
  labels:
    app: nginx
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  tls:
  - secretName: tls-secret
  rules:
  - http:
      paths:
      - path: /nginx
        backend:
          serviceName: nginx-service
          servicePort: 80
The only difference between my ingress and yours is that I removed nginx.ingress.kubernetes.io/ssl-passthrough: false. In the documentation we can read:
note SSL Passthrough is disabled by default
So there is no need for you to specify that.
I used the same secret as you:
$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=nginxsvc/O=nginxsvc"
$ kubectl create secret tls tls-secret --key tls.key --cert tls.crt
From your question I have the impression that you are trying to reach your ingress through the IP 192.168.0.210. This is your Service IP, not your Ingress IP.
If you are using Cloud managed Kubernetes you have to run the following command to find your Ingress IP:
$ kubectl get ingresses nginx
NAME HOSTS ADDRESS PORTS AGE
nginx * 34.89.108.48 80, 443 6m32s
If you are running on bare metal without any load-balancer solution such as MetalLB, you will see that your ingress-nginx service's EXTERNAL-IP stays Pending forever.
$ kubectl get service -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-1587980954-controller LoadBalancer 10.110.188.236 <pending> 80:31024/TCP,443:30039/TCP 23s
You can do the same thing as you did with your service and add an externalIP manually:
kubectl get service -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-1587980954-controller LoadBalancer 10.110.188.236 10.156.0.24 80:31024/TCP,443:30039/TCP 9m14s
After this change, your ingress will have the same IP as you defined in your Ingress Service:
$ kubectl get ingress nginx
NAME CLASS HOSTS ADDRESS PORTS AGE
nginx <none> * 10.156.0.24 80, 443 118s
$ curl -kL https://10.156.0.24/nginx --verbose
* Trying 10.156.0.24...
* TCP_NODELAY set
* Connected to 10.156.0.24 (10.156.0.24) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: /etc/ssl/certs
* TLSv1.2 (OUT), TLS header, Certificate Status (22):
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Client hello (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS change cipher, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
* ALPN, server accepted to use h2
* Server certificate:
* subject: O=Acme Co; CN=Kubernetes Ingress Controller Fake Certificate
* start date: Apr 27 09:49:19 2020 GMT
* expire date: Apr 27 09:49:19 2021 GMT
* issuer: O=Acme Co; CN=Kubernetes Ingress Controller Fake Certificate
* SSL certificate verify result: unable to get local issuer certificate (20), continuing anyway.
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x560cee14fe90)
> GET /nginx HTTP/1.1
> Host: 10.156.0.24
> User-Agent: curl/7.52.1
> Accept: */*
>
* Connection state changed (MAX_CONCURRENT_STREAMS updated)!
< HTTP/2 200
< server: nginx/1.17.10
< date: Mon, 27 Apr 2020 10:01:29 GMT
< content-type: text/html
< content-length: 612
< vary: Accept-Encoding
< last-modified: Tue, 14 Apr 2020 14:19:26 GMT
< etag: "5e95c66e-264"
< accept-ranges: bytes
< strict-transport-security: max-age=15724800; includeSubDomains
<
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
nginx.org.<br/>
Commercial support is available at
nginx.com.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
* Curl_http_done: called premature == 0
* Connection #0 to host 10.156.0.24 left intact
EDIT:
There does not seem to be a way to manually set the "external IPs" for
the ingress as can be done in the service. If you know of one please
let me know :-). Looks like my best bet is to try MetalLB.
MetalLB would be the best option for production. If you are running this for a lab only, you have the option to take your node IP (the same one you get by running kubectl get nodes -o wide) and attach it to your NGINX ingress controller.
Adding your node IP to your NGINX ingress controller
spec:
  externalIPs:
  - 192.168.0.210
Create a file called ingress-nginx-svc-patch.yaml and paste the contents above.
Next apply the changes with the following command:
kubectl patch service ingress-nginx-controller -n kube-system --patch "$(cat ingress-nginx-svc-patch.yaml)"
And as result:
$ kubectl get service -n kube-system ingress-nginx-controller
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.97.0.243 192.168.0.210 80:31409/TCP,443:30341/TCP 39m
I'm no expert, but any time I've seen a service handle both HTTP and HTTPS traffic, it has specified two ports in the Service YAML, one for HTTP and one for HTTPS. Apparently there are ways to get around this; reading here might be a good start.
For two ports, look at the official k8s example here.
I was struggling with the same situation for a while and was about to start using MetalLB, but after running kubectl -n ingress-nginx get svc I realised ingress-nginx had created its own NodePort and I could access the cluster through that.
I'm not an expert, and it's probably not ideal for production, but for a lab I don't see why not.