I am using an AWS NLB, so SSL termination should happen on the Argo CD (1.7.8) side. However, no matter what I do, Argo CD always uses a self-signed certificate.
➜ curl -vvI https://argocd-dev.example.com
* Trying 54.18.49.47:443...
* Connected to argocd-dev.example.com (54.18.49.47) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/pki/tls/certs/ca-bundle.crt
CApath: none
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (OUT), TLS alert, unknown CA (560):
* SSL certificate problem: self signed certificate
* Closing connection 0
curl: (60) SSL certificate problem: self signed certificate
More details here: https://curl.haxx.se/docs/sslcerts.html
curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.
This is my Ingress:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: argocd-server-ingress
namespace: argocd
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/proxy-body-size: 100m
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
rules:
- host: argocd-dev.example.com
http:
paths:
- backend:
serviceName: argocd-server
servicePort: https
This is how I start argocd-server:
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app.kubernetes.io/component: server
app.kubernetes.io/name: argocd-server
app.kubernetes.io/part-of: argocd
name: argocd-server
spec:
selector:
matchLabels:
app.kubernetes.io/name: argocd-server
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
template:
metadata:
labels:
app.kubernetes.io/name: argocd-server
spec:
containers:
- command:
- argocd-server
- --staticassets
- /shared/app
- --loglevel
- debug
- --client-certificate
- /var/ssl-cert/tls.crt
- --client-key
- /var/ssl-cert/tls.key
image: argoproj/argocd:v1.7.8
imagePullPolicy: Always
name: argocd-server
ports:
- containerPort: 8080
- containerPort: 8083
readinessProbe:
httpGet:
path: /healthz
port: 8080
initialDelaySeconds: 3
periodSeconds: 30
volumeMounts:
- mountPath: /app/config/ssh
name: ssh-known-hosts
- mountPath: /app/config/tls
name: tls-certs
- mountPath: /var/ssl-cert
name: ssl-cert
readOnly: true
serviceAccountName: argocd-server
volumes:
- emptyDir: {}
name: static-files
- configMap:
name: argocd-ssh-known-hosts-cm
name: ssh-known-hosts
- configMap:
name: argocd-tls-certs-cm
name: tls-certs
- name: ssl-cert
secret:
secretName: tls-secret
You should take a look at https://argoproj.github.io/argo-cd/operator-manual/ingress/
Argo CD requires some unusual configuration.
Either you need to start Argo CD in HTTP (insecure) mode if your load balancer is doing the SSL termination, or you need to pass your certificate secret into the Kubernetes Ingress.
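For the insecure option, a minimal sketch of the container command (not your exact manifest) would be:
containers:
  - command:
      - argocd-server
      - --staticassets
      - /shared/app
      - --insecure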
Argo CD expects the certificate and key in the tls.crt and tls.key keys of the argocd-secret Secret: https://github.com/argoproj/argo-cd/blob/master/docs/operator-manual/argocd-secret.yaml#L11
Restart is not required - a new certificate should be used as soon as the secret is updated.
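One way to load your certificate into that Secret (a sketch, assuming the certificate and key sit in local files named tls.crt and tls.key):
kubectl -n argocd patch secret argocd-secret \
  --patch "{\"data\": {\"tls.crt\": \"$(base64 -w0 tls.crt)\", \"tls.key\": \"$(base64 -w0 tls.key)\"}}"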
Related
I have created a GKE Cluster 1.18.17-gke.1901 and I have installed Istio 1.9.5 on it. My Ingress Gateway Service is of type: LoadBalancer.
I am trying to implement MUTUAL TLS mode in my istio-ingressgateway. The Gateway configuration looks like this:
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
name: mutual-domain
namespace: test
spec:
selector:
istio: ingressgateway
servers:
- hosts:
- mutual.domain.com
port:
name: mutual-domain-https
number: 443
protocol: HTTPS
tls:
credentialName: mutual-secret
minProtocolVersion: TLSV1_2
mode: MUTUAL
I have also set up a corresponding VirtualService and DestinationRule.
Now, whenever I try to connect to https://mutual.domain.com I get the following error:
* Trying 100.50.76.97...
* TCP_NODELAY set
* Connected to mutual.domain.com (100.50.76.97) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/cert.pem
CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* LibreSSL SSL_connect: SSL_ERROR_SYSCALL in connection to mutual.domain.com:443
* Closing connection 0
curl: (35) LibreSSL SSL_connect: SSL_ERROR_SYSCALL in connection to mutual.domain.com:443
If I change tls: mode: to SIMPLE, I am able to reach the service via the domain name, but when it's MUTUAL the error above shows up.
The mutual-secret is a TLS-type Kubernetes Secret and it contains tls.crt and tls.key:
$ kubectl describe secret mutual-secret -n istio-system
Name: mutual-secret
Namespace: istio-system
Labels: <none>
Annotations: <none>
Type: kubernetes.io/tls
Data
====
tls.crt: 4585 bytes
tls.key: 1674 bytes
Is there something missing? Why can't I access my service in MUTUAL mode but the same secret works for SIMPLE mode?
Assuming you are following this.
It seems you are missing ca.crt in your secret. Create a new secret with tls.crt, tls.key and ca.crt, and try again.
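A sketch of creating it (the file names here are placeholders; the important part is the three keys):
kubectl -n istio-system delete secret mutual-secret
kubectl -n istio-system create secret generic mutual-secret \
  --from-file=tls.crt=server.crt \
  --from-file=tls.key=server.key \
  --from-file=ca.crt=ca.crt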
The ERR_BAD_SSL_CLIENT_AUTH_CERT error mentioned in the comments is Chrome/Chromium specific. It means the browser does not recognise the Certificate Authority.
Add your CA certificate (.pem file) to Chrome/Chromium; you can follow this. Hopefully it will solve your problem.
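Also note that in MUTUAL mode the client has to present a certificate signed by that CA; a hedged curl example (file names are placeholders):
curl -v https://mutual.domain.com \
  --cacert ca.crt --cert client.crt --key client.key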
I'm trying to set up Istio (v1.7.3) on AKS (v1.16.13) so that, for all HTTPS requests within my domain, the TLS handshake is performed transparently by the egress gateway.
I ended up with something like this (abc.mydomain.com is an external URL so that’s why I created a ServiceEntry for it):
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: istio-egressgateway
spec:
selector:
istio: egressgateway
servers:
- port:
number: 443
name: https
protocol: HTTPS
hosts:
- "*.mydomain.com"
tls:
mode: PASSTHROUGH
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: egressgateway-for-mydomain
spec:
host: istio-egressgateway.istio-system.svc.cluster.local
subsets:
- name: mydomain
trafficPolicy:
tls:
mode: SIMPLE
caCertificates: /etc/istio/mydomain-ca-certs/mydomain.crt
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: direct-mydomain-through-egress-gateway
spec:
hosts:
- "*.mydomain.com"
gateways:
- mesh
- istio-egressgateway
tls:
- match:
- gateways:
- mesh
port: 443
sniHosts:
- "*.mydomain.com"
route:
- destination:
host: istio-egressgateway.istio-system.svc.cluster.local
subset: mydomain
port:
number: 443
weight: 100
- match:
- gateways:
- istio-egressgateway
port: 443
sniHosts:
- "*.mydomain.com"
route:
- destination:
host: abc.mydomain.com
port:
number: 443
weight: 100
---
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
name: www-mydomain
spec:
hosts:
- abc.mydomain.com
ports:
- number: 443
name: https
protocol: HTTPS
resolution: DNS
I have mounted my certificate in the egress gateway and verified with: kubectl exec -n istio-system "$(kubectl -n istio-system get pods -l istio=egressgateway -o jsonpath='{.items[0].metadata.name}')" -- ls -al /etc/istio/mydomain-ca-certs
I’m getting the following when invoking curl -vvI https://abc.mydomain.com from one of the pods running in another namespace:
curl: (35) OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to abc.mydomain.com:443
I've also tried what's described here (Trust custom Root CA on Egress Gateway) but I'm getting the same error as above.
Any idea what I might be doing wrong?
UPDATE1
Here is an output of istioctl proxy-status (egress rds is STALE):
istio-egressgateway-695dc4fc7c-p5p42.istio-system SYNCED SYNCED SYNCED STALE istiod-5c6b7b5b8f-csggg 1.7.3 41d
istio-ingressgateway-5689f7c67-j54m7.istio-system SYNCED SYNCED SYNCED SYNCED istiod-5c6b7b5b8f-csggg 1.7.3 118d
Output of curl -vvI https://abc.mydomain.com:
* Expire in 0 ms for 1 (transfer 0x55ce54104f50)
* Trying 10.223.24.254...
* TCP_NODELAY set
* Expire in 200 ms for 4 (transfer 0x55ce54104f50)
* Connected to abc.mydomain.com (10.223.24.254) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: none
CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to abc.mydomain.com:443
* Closing connection 0
curl: (35) OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to abc.mydomain.com:443
Output of openssl s_client -connect abc.mydomain.com:443
CONNECTED(00000003)
write:errno=104
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 0 bytes and written 328 bytes
Verification: OK
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
Early data was not sent
Verify return code: 0 (ok)
---
I am trying to create an API that serves HTTPS traffic publicly and is reachable by an IP address (not a domain), using a GKE cluster. The Docker images have been tested locally; they were capable of serving HTTPS on their own, but as far as I've come to realize, this is not necessary for the setup I am imagining.
So what I've come up with so far is to have a Kubernetes Service exposing its 8443 port and an Ingress load balancer mapping to that port, using self-signed certificates created following this tutorial (the basic-ingress-secret referenced in the template). The only thing I have skipped is the domain binding, given that I am not in possession of a domain. I hoped it would bind the certificate to the external IP, but unfortunately this is not the case (I have tried attaching the IP as the CN of the certificate, as some users have suggested here).
This is my YAML for the Service:
apiVersion: v1
kind: Service
metadata:
name: some-node
spec:
selector:
app: some
ports:
- protocol: "TCP"
port: 8443
targetPort: 8443
type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: some-node-deploy
spec:
selector:
matchLabels:
app: some
replicas: 3
template:
metadata:
labels:
app: some
spec:
containers:
- name: some-container
image: "gcr.io/some-27417/some:latest"
This is my YAML for the Ingress:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: basic-ingress
annotations:
kubernetes.io/ingress.allow-http: "false"
spec:
tls:
- secretName: basic-ingress-secret
rules:
- http:
paths:
- path: /
backend:
serviceName: some-node
servicePort: 8443
It is way simpler than the documentation explains.
1.- Create the self-signed certs
openssl req -newkey rsa:2048 -nodes -keyout tls.key -out tls.csr
openssl x509 -in tls.csr -out tls.crt -req -signkey tls.key
2.- Create the secret
kubectl create secret tls basic-ingress-secret --cert tls.crt --key tls.key
3.- Create the Ingress object
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: basic-ingress
annotations:
kubernetes.io/ingress.allow-http: "false"
spec:
tls:
- secretName: basic-ingress-secret
rules:
- http:
paths:
- path: /
backend:
serviceName: some-node
servicePort: 8443
Note: To make it work on GKE, your Service must be of type NodePort:
$ kubectl describe svc some-node
Name: some-node
Namespace: default
Labels: run=nginx
Annotations: <none>
Selector: run=nginx
Type: NodePort
IP: 10.60.6.214
Port: <unset> 8443/TCP
TargetPort: 80/TCP
NodePort: <unset> 30250/TCP
Endpoints: 10.56.0.17:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
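Once the Ingress is created, GKE provisions an external HTTP(S) load balancer, which can take a few minutes; a quick check (assuming the Ingress name from above):
kubectl get ingress basic-ingress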
I have found a final solution that suits my current needs.
The issue with the setup above was that I didn't have a nodePort: value set and that the SSL certificate was not working properly, so I purchased a domain, secured a static IP for the Ingress load balancer using gcloud compute addresses create some-static-ip --global, and pointed that domain to the IP. I then created a self-signed SSL certificate again, following this tutorial and using my new domain.
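For reference, the gcloud commands involved look roughly like this (the second one prints the reserved IP to point the domain at):
gcloud compute addresses create some-static-ip --global
gcloud compute addresses describe some-static-ip --global --format='value(address)'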
The final YAML for the Service:
apiVersion: v1
kind: Service
metadata:
name: some-node
spec:
selector:
app: some
ports:
- port: 80
targetPort: 30041
nodePort: 30041
type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: some-node-deploy
spec:
selector:
matchLabels:
app: some
replicas: 3
template:
metadata:
labels:
app: some
spec:
containers:
- name: some-container
image: "gcr.io/some-project/some:v1"
The final YAML for the Ingress (load balancer):
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: basic-ingress
annotations:
kubernetes.io/ingress.class: "gce"
kubernetes.io/ingress.global-static-ip-name: "some-static-ip"
kubernetes.io/ingress.allow-http: "false"
spec:
tls:
- secretName: basic-ingress-secret
rules:
- host: my.domain.com
http:
paths:
- path: /some/*
backend:
serviceName: some-node
servicePort: 80
The service now serves HTTPS traffic (port 443 only) on my.domain.com/some/* (placeholder domain and path). It is a simple HTTPS service setup that would only require purchasing a CA-issued SSL certificate and a proper scaling configuration to be fully production-ready, from a DevOps standpoint, unless someone has found some serious drawbacks to this setup.
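For reference, one way to confirm which certificate the Ingress load balancer is actually serving (a sketch; replace my.domain.com with the real host):
openssl s_client -connect my.domain.com:443 -servername my.domain.com </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer -dates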
I have Kubernetes installed on Ubuntu 19.10.
I've set up ingress-nginx and can access my test service over HTTP.
However, I get a "Connection refused" when I try to access it via HTTPS.
[Edit] I'm trying to get HTTPS to terminate at the ingress and pass unencrypted traffic to my service, the same way HTTP does. I've implemented the below based on many examples I've seen, but with little luck.
YAML:
kind: Service
apiVersion: v1
metadata:
name: messagemanager-service
namespace: default
labels:
name: messagemanager-service
spec:
type: NodePort
selector:
app: messagemanager
ports:
- port: 80
protocol: TCP
targetPort: 8080
nodePort: 31212
name: http
externalIPs:
- 192.168.0.210
---
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
name: messagemanager
labels:
app: messagemanager
version: v1
spec:
replicas: 3
selector:
matchLabels:
app: messagemanager
template:
metadata:
labels:
app: messagemanager
version: v1
spec:
containers:
- name: messagemanager
image: test/messagemanager:1.0
imagePullPolicy: IfNotPresent
ports:
- containerPort: 8080
protocol: TCP
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: messagemanager-ingress
annotations:
nginx.ingress.kubernetes.io/ssl-passthrough: false
ingress.kubernetes.io/rewrite-target: /
spec:
tls:
- secretName: tls-secret
rules:
- http:
paths:
- path: /message
backend:
serviceName: messagemanager-service
servicePort: 8080
HTTPS test:
curl -kL https://192.168.0.210/message -verbose
* Trying 192.168.0.210:443...
* TCP_NODELAY set
* connect to 192.168.0.210 port 443 failed: Connection refused
* Failed to connect to 192.168.0.210 port 443: Connection refused
* Closing connection 0
curl: (7) Failed to connect to 192.168.0.210 port 443: Connection refused
HTTP test:
curl -kL http://192.168.0.210/message -verbose
* Trying 192.168.0.210:80...
* TCP_NODELAY set
* Connected to 192.168.0.210 (192.168.0.210) port 80 (#0)
> GET /message HTTP/1.1
> Host: 192.168.0.210
> User-Agent: curl/7.65.3
> Accept: */*
> Referer: rbose
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Content-Type: text/plain;charset=UTF-8
< Date: Fri, 24 Apr 2020 18:44:07 GMT
< connection: keep-alive
< content-length: 50
<
* Connection #0 to host 192.168.0.210 left intact
$ kubectl -n ingress-nginx get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.105.92.236 <pending> 80:31752/TCP,443:32035/TCP 2d
ingress-nginx-controller-admission ClusterIP 10.100.223.87 <none> 443/TCP 2d
$ kubectl get ingress -o wide
NAME CLASS HOSTS ADDRESS PORTS AGE
messagemanager-ingress <none> * 80, 443 37m
Key creation:
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=nginxsvc/O=nginxsvc"
kubectl create secret tls tls-secret --key tls.key --cert tls.crt
$ kubectl describe ingress
Name: messagemanager-ingress
Namespace: default
Address:
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
TLS:
tls-secret terminates
Rules:
Host Path Backends
---- ---- --------
*
/message messagemanager-service:8080 ()
Annotations: Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal CREATE 107s nginx-ingress-controller Ingress default/messagemanager-ingress
I was under the assumption that TLS would terminate in the ingress and the request would be passed on to the service as http.
I had to add the external IPs in the service to get HTTP to work.
Am I missing something similar for HTTPS?
Any help and guidance is appreciated.
Thanks
Mark
I've reproduced your scenario in my lab, and after a few changes to your Ingress it's working as you described.
In my lab I used an nginx image that serves a default landing page on port 80, and with this Ingress rule it's possible to serve it on both port 80 and 443.
kind: Deployment
apiVersion: apps/v1
metadata:
name: nginx
labels:
app: nginx
version: v1
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
version: v1
spec:
containers:
- name: nginx
image: nginx
imagePullPolicy: IfNotPresent
ports:
- containerPort: 80
protocol: TCP
---
kind: Service
apiVersion: v1
metadata:
name: nginx-service
namespace: default
labels:
name: nginx-service
spec:
type: NodePort
selector:
app: nginx
ports:
- port: 80
protocol: TCP
targetPort: 80
nodePort: 31000
name: http
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: nginx
labels:
app: nginx
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
tls:
- secretName: tls-secret
rules:
- http:
paths:
- path: /nginx
backend:
serviceName: nginx-service
servicePort: 80
The only difference between my ingress and yours is that I removed nginx.ingress.kubernetes.io/ssl-passthrough: false. In the documentation we can read:
note SSL Passthrough is disabled by default
So there is no need for you to specify that.
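For completeness: if you ever do want passthrough, the annotation alone is not enough; the controller itself has to be started with the --enable-ssl-passthrough flag. A sketch of the controller args (your controller Deployment name and args may differ):
containers:
  - name: controller
    args:
      - /nginx-ingress-controller
      - --enable-ssl-passthrough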
I used the same secret as you:
$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=nginxsvc/O=nginxsvc"
$ kubectl create secret tls tls-secret --key tls.key --cert tls.crt
In your question I have the impression that you are trying to reach your ingress through the IP 192.168.0.210. This is your service IP and not your Ingress IP.
If you are using Cloud managed Kubernetes you have to run the following command to find your Ingress IP:
$ kubectl get ingresses nginx
NAME HOSTS ADDRESS PORTS AGE
nginx * 34.89.108.48 80, 443 6m32s
If you are running on bare metal without any LoadBalancer solution such as MetalLB, you will see that your ingress-nginx service's EXTERNAL-IP stays <pending> forever.
$ kubectl get service -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-1587980954-controller LoadBalancer 10.110.188.236 <pending> 80:31024/TCP,443:30039/TCP 23s
You can do the same thing as you did with your service and add an externalIP manually:
kubectl get service -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-1587980954-controller LoadBalancer 10.110.188.236 10.156.0.24 80:31024/TCP,443:30039/TCP 9m14s
After this change, your ingress will have the same IP as you defined in your Ingress Service:
$ kubectl get ingress nginx
NAME CLASS HOSTS ADDRESS PORTS AGE
nginx <none> * 10.156.0.24 80, 443 118s
$ curl -kL https://10.156.0.24/nginx --verbose
* Trying 10.156.0.24...
* TCP_NODELAY set
* Connected to 10.156.0.24 (10.156.0.24) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:#STRENGTH
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: /etc/ssl/certs
* TLSv1.2 (OUT), TLS header, Certificate Status (22):
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Client hello (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS change cipher, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
* ALPN, server accepted to use h2
* Server certificate:
* subject: O=Acme Co; CN=Kubernetes Ingress Controller Fake Certificate
* start date: Apr 27 09:49:19 2020 GMT
* expire date: Apr 27 09:49:19 2021 GMT
* issuer: O=Acme Co; CN=Kubernetes Ingress Controller Fake Certificate
* SSL certificate verify result: unable to get local issuer certificate (20), continuing anyway.
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x560cee14fe90)
> GET /nginx HTTP/1.1
> Host: 10.156.0.24
> User-Agent: curl/7.52.1
> Accept: */*
>
* Connection state changed (MAX_CONCURRENT_STREAMS updated)!
< HTTP/2 200
< server: nginx/1.17.10
< date: Mon, 27 Apr 2020 10:01:29 GMT
< content-type: text/html
< content-length: 612
< vary: Accept-Encoding
< last-modified: Tue, 14 Apr 2020 14:19:26 GMT
< etag: "5e95c66e-264"
< accept-ranges: bytes
< strict-transport-security: max-age=15724800; includeSubDomains
<
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
nginx.org.<br/>
Commercial support is available at
nginx.com.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
* Curl_http_done: called premature == 0
* Connection #0 to host 10.156.0.24 left intact
EDIT:
There does not seem to be a way to manually set the "external IPs" for
the ingress as can be done in the service. If you know of one please
let me know :-). Looks like my best bet is to try MetalLB.
MetalLB would be the best option for production. If you are running it for a lab only, you have the option to take your node's IP (the same one you get by running kubectl get nodes -o wide) and attach it to your NGINX ingress controller.
Adding your node IP to your NGINX ingress controller
spec:
externalIPs:
- 192.168.0.210
Create a file called ingress-nginx-svc-patch.yaml and paste the contents above.
Next apply the changes with the following command:
kubectl patch service ingress-nginx-controller -n kube-system --patch "$(cat ingress-nginx-svc-patch.yaml)"
And as result:
$ kubectl get service -n kube-system ingress-nginx-controller
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.97.0.243 192.168.0.210 80:31409/TCP,443:30341/TCP 39m
I'm no expert, but any time I've seen a service handle both HTTP and HTTPS traffic, it has specified two ports in the Service YAML, one for HTTP and one for HTTPS. Apparently there are ways to get around this; reading here might be a good start.
For two ports, look at the official k8s example here.
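A minimal sketch of such a two-port Service (not taken from the question; it assumes a backend container listening on 8080 for HTTP and 8443 for HTTPS):
kind: Service
apiVersion: v1
metadata:
  name: messagemanager-service
spec:
  type: NodePort
  selector:
    app: messagemanager
  ports:
    - name: http
      port: 80
      targetPort: 8080
    - name: https
      port: 443
      targetPort: 8443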
I was struggling with the same situation for a while and was about to start using MetalLB, but after running kubectl -n ingress-nginx get svc I realised that ingress-nginx had created its own nodePorts, and I could access the cluster through those.
I'm not an expert, and it's probably not ideal for production, but otherwise I don't see why not.
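In other words (a sketch, using the nodePorts and node IP from the question's own output):
kubectl -n ingress-nginx get svc ingress-nginx-controller
# 80:31752/TCP,443:32035/TCP -> HTTPS is reachable on nodePort 32035
curl -k https://192.168.0.210:32035/message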
I installed the Kubernetes NGINX Ingress controller in my Kubernetes cluster.
I deployed everything on AWS EC2 instances, with a Classic Load Balancer in front of the Ingress controller. I am able to access the service on the HTTP port but not via HTTPS.
I have a valid domain purchased from GoDaddy, and I got an AWS SSL certificate from Certificate Manager.
The load balancer listeners are configured as below.
I modified the Ingress NGINX Service (added the certificate ARN):
kind: Service
apiVersion: v1
metadata:
name: ingress-nginx
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
annotations:
# Enable PROXY protocol
service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
# Ensure the ELB idle timeout is less than nginx keep-alive timeout. By default,
# NGINX keep-alive is set to 75s. If using WebSockets, the value will need to be
# increased to '3600' to avoid any potential issues.
service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "60"
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-east-2:297483230626:certificate/ffe5a2b3-ceff-4ef2-bf13-8da5b4212121"
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
spec:
type: LoadBalancer
selector:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
ports:
- name: http
port: 80
protocol: TCP
targetPort: http
- name: https
port: 443
protocol: TCP
targetPort: https
Ingress rules
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: practice-ingress
namespace: practice
annotations:
nginx.ingress.kubernetes.io/ssl-redirect: "false"
nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
rules:
- host: kdhut.com
http:
paths:
- backend:
serviceName: customer-service
servicePort: 9090
path: /customer
- backend:
serviceName: prac-service
servicePort: 8000
path: /prac
I am able to access the service over HTTP, but HTTPS is not working.
I tried curl:
curl -v https://kdhut.com -H 'Host: kdhut.com'
* Rebuilt URL to: https://kdhut.com/
* Trying 3.12.176.17...
* TCP_NODELAY set
* Connected to kdhut.com (3.12.176.17) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Client hello (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
* ALPN, server did not agree to a protocol
* Server certificate:
* subject: CN=kdhut.com
* start date: Mar 20 00:00:00 2020 GMT
* expire date: Apr 20 12:00:00 2021 GMT
* subjectAltName: host "kdhut.com" matched cert's "kdhut.com"
* issuer: C=US; O=Amazon; OU=Server CA 1B; CN=Amazon
* SSL certificate verify ok.
> GET / HTTP/1.1
> Host: kdhut.com
> User-Agent: curl/7.58.0
> Accept: */*
I think that's an issue with the AWS load balancer. I ran into something a while back with an AWS NLB and found this link for a workaround/hack:
Workaround
HTH
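Not the linked workaround itself, but one thing worth double-checking since the Service above enables PROXY protocol on the ELB: the ingress-nginx ConfigMap also has to be told to expect PROXY protocol, otherwise nginx cannot parse the forwarded requests correctly. A sketch (the ConfigMap name depends on how ingress-nginx was installed):
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
data:
  use-proxy-protocol: "true"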