I am running nginx as a load balancer in OpenShift. For this I've created a ConfigMap and a Deployment, exposed it as a Service of type LoadBalancer, and created the corresponding Route.
Please note that SSL is also set up in its configuration. When I allocate a public IP to it, I get a "connection refused" error.
The Route seems to be correct, but it is not working as intended.
Config map file
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-conf
  namespace: nginx-ingress
data:
  nginx.conf: |-
    user nginx;
    worker_processes 10;
    events {
      worker_connections 10240;
    }
    http {
      server {
        listen 80;
        server_name localhost;
        location / {
          root /usr/share/nginx/html;   # root path for file
          index index.html index.htm;
        }
      }
    }
  default.conf: |-
    # file mounted externally
    server {
      listen 80;
      server_name localhost;
      listen 443 ssl default_server;
      listen [::]:443 ssl default_server;
      ssl_certificate /etc/letsencrypt/server.crt;
      ssl_certificate_key /etc/letsencrypt/server.key;
      ssl_trusted_certificate /etc/letsencrypt/rootCA.pem;
      root /usr/share/nginx/html;
      index index.html index.htm;
      location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
      }
      error_page 404 /404.html;
      error_page 500 502 503 504 /50x.html;
      location = /50x.html {
        root /usr/share/nginx/html;
      }
    }
  index.html: |-
    <!DOCTYPE html>
    <html>
    <head>
    <title>Welcome to nginx!</title>
    <style>
    body {
      width: 35em;
      margin: 0 auto;
      font-family: Tahoma, Verdana, Arial, sans-serif;
    }
    </style>
    </head>
    <body>
    <h1>Welcome to nginx!</h1>
    <p>If you see this page, the nginx web server is successfully installed and
    working. Further configuration is required.</p>
    <p>For online documentation and support please refer to
    nginx.org.<br/>
    Commercial support is available at
    nginx.com.</p>
    <p><em>Thank you for using nginx. Response from ingress/proxy.</em></p>
    </body>
    </html>
Deployment file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: nginx-ingress
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      nodeSelector:
        compute1: worker1
      volumes:
        - name: nginx-conf
          configMap:
            name: nginx-conf
            items:
              - key: nginx.conf
                path: nginx.conf
              - key: default.conf
                path: default.conf
              - key: index.html
                path: index.html
        - name: ca-pem
          configMap:
            name: ca-pem
        - name: ca-crt
          configMap:
            name: ca-crt
        - name: ca-key
          configMap:
            name: ca-key
      containers:
        - name: nginx-alpine-perl
          image: docker.io/library/nginx@sha256:51212c2cc0070084b2061106d5711df55e8aedfc6091c6f96fabeff3e083f355
          ports:
            - containerPort: 80
            - containerPort: 443
          securityContext:
            allowPrivilegeEscalation: false
            #runAsUser: 0
          volumeMounts:
            - name: nginx-conf
              mountPath: /etc/nginx
              #subPath: nginx.conf
              readOnly: true
            - name: nginx-conf
              mountPath: /etc/nginx/conf.d
              readOnly: true
            - name: nginx-conf
              mountPath: /usr/share/nginx/html
              #subPath: nginx.conf
              readOnly: true
            - name: ca-pem
              mountPath: /etc/letsencrypt/rootCA.pem
              subPath: rootCA.pem
              readOnly: false
            - name: ca-crt
              mountPath: /etc/letsencrypt/server.crt
              subPath: server.crt
              readOnly: false
            - name: ca-key
              mountPath: /etc/letsencrypt/server.key
              subPath: server.key
              readOnly: true
Svc file
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer
  ports:
    - port: 80
      name: http
      protocol: TCP
      targetPort: 80
      #nodePort: 30008
    - port: 443
      name: https
      protocol: TCP
      targetPort: 443
      #nodePort: 30009
  selector:
    app: nginx
status:
  loadBalancer:
    ingress:
      - ip: <Public IP>
Route file
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  annotations:
    openshift.io/host.generated: "true"
  name: nginx-ingress
  namespace: nginx-ingress
  selfLink: /apis/route.openshift.io/v1/namespaces/nginx-ingress/routes/nginx-ingress
spec:
  host: <Some url: nginx-ingress.app>
  to:
    kind: Service
    name: nginx
    weight: 100
  wildcardPolicy: None
Hard to tell from your config what is breaking here. As a sensible debugging step, validate whether the problem lies with the Ingress Controller or with the routing from your load balancer's public IP:
Run curl from a Pod, directly against the Ingress Controller Service, with the external URL as the Host header:
curl -v -H "Host: <your external URL>" http://nginx.default
If it works, you know it's the routing, e.g. cluster network configuration. If it fails, it must be the Ingress Controller, e.g. Openshift Route, Service or Pod configuration.
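For example, a throwaway curl pod works for this check (just a sketch; it assumes the nginx Service ends up in the nginx-ingress namespace next to the Deployment and that the curlimages/curl image is pullable from your cluster):
kubectl run curl-debug -n nginx-ingress --rm -it --restart=Never --image=curlimages/curl -- sh
# from inside the pod: hit the Service directly, then with the Route's host header
curl -v http://nginx.nginx-ingress.svc.cluster.local/
curl -vk https://nginx.nginx-ingress.svc.cluster.local/
curl -v -H "Host: <Some url: nginx-ingress.app>" http://nginx.nginx-ingress.svc.cluster.local/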
Related
I am using kube-prometheus-stack to monitor my system in GCP. Due to new requirements, all my ingresses need to be secured with TLS. As a first step I wanted to make the Grafana web page available via HTTPS.
I created a TLS secret and updated my values.yaml. After helm upgrade everything seems to work fine, but the page is still only available via HTTP.
Hope you can support me here.
grafana:
  enabled: true
  namespaceOverride: ""
  ## Deploy default dashboards.
  ##
  defaultDashboardsEnabled: true
  adminPassword: prom-operator
  ingress:
    ## If true, Grafana Ingress will be created
    ##
    enabled: true
    ## Annotations for Grafana Ingress
    ##
    # annotations: {
    #   kubernetes.io/ingress.class: gce-internal
    #   kubernetes.io/tls-acme: "true"
    # }
    ## Labels to be added to the Ingress
    ##
    labels: {}
    ## Hostnames.
    ## Must be provided if Ingress is enabled.
    ##
    # hosts:
    #   - grafana.domain.com
    hosts: []
    ## Path for grafana ingress
    # path: /*
    ## TLS configuration for grafana Ingress
    ## Secret must be manually created in the namespace
    ##
    tls:
      - secretName: monitoring-tls-secret
      # hosts:
      #   - grafana.example.com
In the meantime I decided to create the ingress a different way.
I created an SSL certificate and tried to use that instead.
When starting up I get the failure below, which is strange, as kubernetes.io/ingress.allow-http is configured.
kubectl describe ingress monitoring-cl2-grafana -n monitoring-cl2
Name: monitoring-cl2-grafana
Namespace: monitoring-cl2
Address: x.x.x.x
Default backend: default-http-backend:80 (y.y.y.y:8080)
Rules:
Host Path Backends
---- ---- --------
*
/* monitoring-cl2-grafana:80 (<deleted>)
Annotations: ingress.gcp.kubernetes.io/pre-shared-cert: monitoring-ssl
ingress.kubernetes.io/backends:
{"k8s1-613c3440-kube-system-default-http-backend-80-240d1018":"HEALTHY","k8s1-613c3440-mtx-monitoring--mtx-monitoring-cl2-gra-8-f146f2b2":...
ingress.kubernetes.io/https-forwarding-rule: k8s2-fs-3s1rnwzg-monitoring--monitoring-cl2-gr-hgx28ojy
ingress.kubernetes.io/https-target-proxy: k8s2-ts-3s1rnwzg-monitoring--monitoring-cl2-gr-hgx28ojy
ingress.kubernetes.io/ssl-cert: monitoring-ssl
ingress.kubernetes.io/url-map: k8s2-um-3s1rnwzg-monitoring--monitoring-cl2-gr-hgx28ojy
kubernetes.io/ingress.allow-http: false
kubernetes.io/ingress.class: gce-internal
kubernetes.io/ingress.global-static-ip-name: grafana-cl2
meta.helm.sh/release-name: monitoring-cl2
meta.helm.sh/release-namespace: monitoring-cl2
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning Sync 34m (x12 over 35m) loadbalancer-controller Error syncing to GCP: error running load balancer syncing routine: loadbalancer 3s1rnwzg-mtx-monitoring--monitoring-cl2-gr-hgx28ojy does not exist: error invalid internal ingress https config
Warning WillNotConfigureFrontend 26m (x18 over 35m) loadbalancer-controller gce-internal Ingress class does not currently support both HTTP and HTTPS served on the same IP (kubernetes.io/ingress.allow-http must be false when using HTTPS).
Normal Sync 3m34s loadbalancer-controller TargetProxy "k8s2-ts-3s1rnwzg-monitoring--monitoring-cl2-gr-hgx28ojy" certs updated
Normal Sync 3m29s (x9 over 35m) loadbalancer-controller Scheduled for sync
grafana:
  image:
    repository: europe-west3-docker.pkg.dev/<deleted info>/grafana
    tag: 7.5.5
    sha: ""
  sidecar:
    image:
      repository: europe-west3-docker.pkg.dev/<deleted info>/prometheus/k8s-sidecar
      tag: 1.10.7
      sha: ""
    imagePullPolicy: IfNotPresent
  service:
    enabled: true
    type: NodePort
    annotations: {
      cloud.google.com/neg: '{"ingress": true}'
    }
    labels: {}
    portName: service
  ingress:
    enabled: true
    path: /*
    pathType: ImplementationSpecific
    annotations: {
      ingress.gcp.kubernetes.io/pre-shared-cert: "monitoring-ssl",
      kubernetes.io/ingress.allow-http: "false",
      kubernetes.io/ingress.class: "gce-internal",
      kubernetes.io/ingress.global-static-ip-name: "grafana-cl2"
    }
WORKING NOW WITH FOLLOWING CONFIG
grafana:
  image:
    repository: europe-west3-docker.pkg.dev/del/mtx-monitoring/prometheus/grafana
    tag: 7.5.5
    sha: ""
  sidecar:
    image:
      repository: europe-west3-docker.pkg.dev/del/mtx-monitoring/prometheus/k8s-sidecar
      tag: 1.10.7
      sha: ""
    imagePullPolicy: IfNotPresent
  service:
    enabled: true
    type: NodePort
    # port: 80
    # targetPort: 3000
    annotations: {
      cloud.google.com/neg: '{"ingress": true}'
    }
    labels: {}
    portName: service
  ingress:
    enabled: true
    path: /*
    pathType: ImplementationSpecific
    annotations: {
      ingress.gcp.kubernetes.io/pre-shared-cert: "monitoring-ssl",
      kubernetes.io/ingress.allow-http: "false",
      kubernetes.io/ingress.class: "gce-internal",
      kubernetes.io/ingress.global-static-ip-name: "grafana-cl2"
    }
    spec:
      rules:
        - host: grafana.monitoring.com
          http:
            paths:
              - backend:
                  service:
                    name: mtx-monitoring-cl2-grafana
                    port:
                      number: 80
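Once the load balancer finishes syncing, the HTTPS endpoint can be checked from a machine on the internal network, for example (a sketch; grafana.monitoring.com comes from the config above, and <internal static IP> stands for whatever address the grafana-cl2 static IP resolves to):
curl -vk --resolve grafana.monitoring.com:443:<internal static IP> https://grafana.monitoring.com/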
I'm running Traefik 1.7.3 on a single-node Kubernetes cluster and I'm trying to get the real user IP from the X-Forwarded-For header, but what I get instead is X-Forwarded-For: 10.244.0.1, which is an IP inside my k8s cluster.
Here's my Traefik deployment and service:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: traefik-conf
data:
  traefik.toml: |
    # traefik.toml
    debug = true
    logLevel = "DEBUG"
    defaultEntryPoints = ["http","https"]
    [entryPoints]
      [entryPoints.http]
      address = ":80"
      compress = true
        [entryPoints.http.forwardedHeaders]
        trustedIPs = [ "0.0.0.0/0" ]
      entryPoint = "https"
      [entryPoints.https]
      address = ":443"
      compress = true
        [entryPoints.https.forwardedHeaders]
        trustedIPs = [ "0.0.0.0/0" ]
        [entryPoints.https.tls]
    [acme]
    email = "xxxx"
    storage = "/acme/acme.json"
    entryPoint = "https"
    onHostRule = true
    #caServer = "https://acme-staging-v02.api.letsencrypt.org/directory"
    acmeLogging = true
      [[acme.domains]]
      main = "xxxx"
      [acme.dnsChallenge]
      provider = "route53"
      delayBeforeCheck = 0
    [persistence]
    enabled = true
    existingClaim = "pvc0"
    annotations = {}
    accessMode = "ReadWriteOnce"
    size = "1Gi"
    [kubernetes]
    namespaces = ["default"]
    [accessLog]
    filePath = "/acme/access.log"
      [accessLog.fields]
      defaultMode = "keep"
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: traefik-ingress-controller
  namespace: default
  labels:
    k8s-app: traefik-ingress-lb
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: traefik-ingress-lb
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
        name: traefik-ingress-lb
    spec:
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      containers:
        - image: traefik
          name: traefik-ingress-lb
          env:
            - name: AWS_ACCESS_KEY_ID
              value: xxxx
            - name: AWS_SECRET_ACCESS_KEY
              value: xxxx
            - name: AWS_REGION
              value: us-west-2
            - name: AWS_HOSTED_ZONE_ID
              value: xxxx
          ports:
            - name: http
              containerPort: 80
            - name: admin
              containerPort: 8080
          args:
            - --api
            - --kubernetes
            - --configfile=/config/traefik.toml
          volumeMounts:
            - mountPath: /config
              name: config
            - mountPath: /acme
              name: acme
      volumes:
        - name: config
          configMap:
            name: traefik-conf
        - name: acme
          persistentVolumeClaim:
            claimName: "pvc0"
---
kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
  namespace: default
spec:
  externalIPs:
    - x.x.x.x
  externalTrafficPolicy: Local
  selector:
    k8s-app: traefik-ingress-lb
  ports:
    - protocol: TCP
      port: 80
      name: web
    - protocol: TCP
      port: 443
      name: https
    - protocol: TCP
      port: 8080
      name: admin
  type: NodePort
And here's my ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: headers-test
  namespace: default
  annotations:
    ingress.kubernetes.io/proxy-body-size: 500m
    kubernetes.io/ingress.class: traefik
spec:
  rules:
    - host: xxxx
      http:
        paths:
          - path: /
            backend:
              serviceName: headers-test
              servicePort: 8080
I'd read that I only needed to add [entryPoints.http.forwardedHeaders] and a list of trustedIPs but that doesn't seem to work. Am I missing something?
If you use NodePort for the Traefik Ingress Service, you will have to set service.spec.externalTrafficPolicy to "Local". Otherwise you will have SNAT when your connection enters the K8s cluster. This SNAT is necessary to forward the incoming connection to your pod if it is not running on the same node.
But be aware that with service.spec.externalTrafficPolicy set to "Local", only the node on which the Traefik pod runs will accept requests on ports 80, 443, and 8080; there is no forwarding to the pod from the other nodes anymore. This can result in odd delays when connecting to your service. To avoid that, your Traefik would need to run in an HA setup (DaemonSet). Just keep in mind that you need a K/V store for a distributed Traefik setup to make Let's Encrypt work well.
If the service.spec.externalTrafficPolicy setting does not resolve your problem, you might also need to configure the Kubernetes overlay network not to do any SNAT.
service.spec.externalTrafficPolicy is nicely explained here:
https://kubernetes.io/docs/tutorials/services/source-ip/
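For example, against the Service from the question this could look like the following (a sketch; traefik-ingress-service and the default namespace are taken from the manifests above):
# set externalTrafficPolicy to Local on the existing Service
kubectl patch svc traefik-ingress-service -n default -p '{"spec":{"externalTrafficPolicy":"Local"}}'
# confirm the setting took effect
kubectl get svc traefik-ingress-service -n default -o jsonpath='{.spec.externalTrafficPolicy}'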
I'm setting up an instance of ghost and I'm trying to secure the /ghost path with client cert verification.
I've got an initial ingress up and running that serves the site quite happily with the path specified as /.
I'm trying to add a second ingress (that's mostly the same) for the /ghost path. If I do this and add the annotations for basic auth, everything seems to work: if I browse to /ghost I am prompted for credentials from the basic-auth secret, and if I browse to any other URL it is served without auth.
I then switched to client cert verification based on this example: https://github.com/kubernetes/ingress-nginx/tree/master/docs/examples/auth/client-certs
When I try this, either the whole site or none of the site is secured, rather than the path-based separation I got with basic-auth. Looking at the nginx.conf from the running pod, the proxy_set_header ssl-client-verify, proxy_set_header ssl-client-subject-dn and proxy_set_header ssl-client-issuer-dn elements are added under both the root / path and the /ghost path. I've tried removing those (from the root only) and copying the config directly back to the pod, but no luck there either.
I'm pulling nginx-ingress (Chart version 0.23.0) in as a dependency via Helm
Ingress definition for / location - this one works
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    certmanager.k8s.io/cluster-issuer: letsencrypt-staging
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: "true"
  labels:
    app: my-app
    chart: my-app-0.1.1
    heritage: Tiller
    release: my-app
  name: my-app
  namespace: default
spec:
  rules:
    - host: example.com
      http:
        paths:
          - backend:
              serviceName: my-app
              servicePort: http
            path: /
  tls:
    - hosts:
        - example.com
      secretName: mysite-tls
Ingress definition for /ghost location - this one doesn't work
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/auth-tls-verify-client: "on"
    nginx.ingress.kubernetes.io/auth-tls-secret: "default/auth-tls-chain"
    nginx.ingress.kubernetes.io/auth-tls-verify-depth: "1"
    nginx.ingress.kubernetes.io/auth-tls-error-page: "http://www.example.com/error-cert.html"
    nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream: "false"
    kubernetes.io/ingress.class: "nginx"
  labels:
    app: my-app
    chart: my-app-0.1.1
    heritage: Tiller
    release: my-app
  name: my-app-secure
  namespace: default
spec:
  rules:
    - host: example.com
      http:
        paths:
          - backend:
              serviceName: my-app
              servicePort: http
            path: /ghost
  tls:
    - hosts:
        - example.com
      secretName: mysite-tls
You need a '*' on the path in your second ingress if you want to serve all the pages under /ghost securely; if you also want just /ghost, you need another rule. Something like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/auth-tls-verify-client: "on"
    nginx.ingress.kubernetes.io/auth-tls-secret: "default/auth-tls-chain"
    nginx.ingress.kubernetes.io/auth-tls-verify-depth: "1"
    nginx.ingress.kubernetes.io/auth-tls-error-page: "http://www.example.com/error-cert.html"
    nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream: "false"
    kubernetes.io/ingress.class: "nginx"
  labels:
    app: my-app
    chart: my-app-0.1.1
    heritage: Tiller
    release: my-app
  name: my-app-secure
  namespace: default
spec:
  rules:
    - host: example.com
      http:
        paths:
          - backend:
              serviceName: my-app
              servicePort: http
            path: /ghost
          - backend:
              serviceName: my-app
              servicePort: http
            path: /ghost/*
  tls:
    - hosts:
        - example.com
      secretName: mysite-tls
However, if you want something like / unsecured and /ghost secured, I believe you won't be able to do it. For example, if you are using nginx, this is a limitation of nginx itself: when you configure a server {} block with TLS in nginx, it looks something like this:
server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate example.com.crt;
    ssl_certificate_key example.com.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ...
}
The ingress controller creates paths like this:
server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate example.com.crt;
    ssl_certificate_key example.com.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ...
    location / {
        ...
    }
    location /ghost {
        ...
    }
}
So when you configure another server {} block with the same hostname and with no SSL it will override the first one.
You could do it with different - host: rules in your ingress, for example ghost.example.com with TLS and main.example.com without TLS. Then your nginx.conf would have separate server {} blocks.
You can always shell into the ingress controller pod to check the configs, for example:
$ kubectl exec -it nginx-ingress-controller-xxxxxxxxx-xxxxx bash
www-data@nginx-ingress-controller-6bd7c597cb-8kzjh:/etc/nginx$ cat nginx.conf
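The fully rendered configuration can also be dumped without an interactive shell via nginx -T (a sketch; substitute your actual controller pod name and namespace):
kubectl exec -n default nginx-ingress-controller-xxxxxxxxx-xxxxx -- nginx -T | grep -A 30 'location /ghost'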
You could add a location snippet, e.g. via the configuration-snippet annotation:
annotations:
  nginx.ingress.kubernetes.io/configuration-snippet: |
    if ($uri ~* "^/ghost") {
      set $ban_info "location";
    }
    if ($ssl_client_s_dn !~ "CN=ok_client") {
      set $ban_info "${ban_info}+no_client";
    }
    if ($ban_info = "location+no_client") {
      return 403;
    }
I've been struggling for a while trying to get HTTPS access to my Elasticsearch cluster in Kubernetes.
I think the problem is that Kubernetes doesn't like the TLS certificate I'm trying to use, which is why it's not passing it all the way through to the browser.
Everything else seems to work, since when I accept the Kubernetes Ingress Controller Fake Certificate, the requests go through as expected.
In my attempt to do this I've set up:
The cluster itself
An nginx-ingress controller
An ingress resource
Here's the related yaml:
Cluster:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2018-08-03T03:20:47Z
  labels:
    run: my-es
  name: my-es
  namespace: default
  resourceVersion: "3159488"
  selfLink: /api/v1/namespaces/default/services/my-es
  uid: 373047e0-96cc-11e8-932b-42010a800043
spec:
  clusterIP: 10.63.241.39
  ports:
    - name: http
      port: 8080
      protocol: TCP
      targetPort: 9200
  selector:
    run: my-es
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
The ingress resource
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/cors-allow-methods: PUT, GET, POST, OPTIONS
    nginx.ingress.kubernetes.io/cors-origins: http://localhost:3425 https://mydomain.ca
      https://myOtherDomain.ca
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /
  creationTimestamp: 2018-08-12T08:44:29Z
  generation: 16
  name: es-ingress
  namespace: default
  resourceVersion: "3159625"
  selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/es-ingress
  uid: ece0071d-9e0b-11e8-8a45-42001a8000fc
spec:
  rules:
    - http:
        paths:
          - backend:
              serviceName: my-es
              servicePort: 8080
            path: /
  tls:
    - hosts:
        - mydomain.ca
      secretName: my-tls-secret
status:
  loadBalancer:
    ingress:
      - ip: 130.211.179.225
The nginx-ingress controller:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2018-08-12T00:41:32Z
  labels:
    app: nginx-ingress
    chart: nginx-ingress-0.23.0
    component: controller
    heritage: Tiller
    release: nginx-ingress
  name: nginx-ingress-controller
  namespace: default
  resourceVersion: "2781955"
  selfLink: /api/v1/namespaces/default/services/nginx-ingress-controller
  uid: 755ee4b8-9dc8-11e8-85a4-4201a08000fc
spec:
  clusterIP: 10.63.250.256
  externalTrafficPolicy: Cluster
  ports:
    - name: http
      nodePort: 32084
      port: 80
      protocol: TCP
      targetPort: http
    - name: https
      nodePort: 31182
      port: 443
      protocol: TCP
      targetPort: https
  selector:
    app: nginx-ingress
    component: controller
    release: nginx-ingress
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
      - ip: 35.212.6.131
I feel like I'm missing something basic, because it doesn't seem like it should be this hard to expose something this simple...
To get my certificate, I just requested one for mydomain.ca from GoDaddy.
Do I need to somehow get a certificate using my ingress resource's cluster IP as the common name?
It doesn't seem possible to verify ownership of an IP.
I've seen people mention ways for Kubernetes to automatically create certificates for ingress resources, but those seem to be self-signed.
Here are some logs from the nginx-controller:
This one is talking about a PEM with the tls-secret, but it's only a warning.
{
  insertId: "1kvvhm7g1q7e0ej"
  labels: {
    compute.googleapis.com/resource_name: "fluentd-gcp-v2.0.17-5b82n"
    container.googleapis.com/namespace_name: "default"
    container.googleapis.com/pod_name: "nginx-ingress-controller-58f57fc597-zl25s"
    container.googleapis.com/stream: "stderr"
  }
  logName: "projects/project-7d320/logs/nginx-ingress-controller"
  receiveTimestamp: "2018-08-14T02:58:42.135388365Z"
  resource: {
    labels: {
      cluster_name: "my-elasticsearch-cluster"
      container_name: "nginx-ingress-controller"
      instance_id: "2341889542400230234"
      namespace_id: "default"
      pod_id: "nginx-ingress-controller-58f57fc597-zl25s"
      project_id: "project-7d320"
      zone: "us-central1-a"
    }
    type: "container"
  }
  severity: "WARNING"
  textPayload: "error obtaining PEM from secret default/my-tls-cert: error retrieving secret default/my-tls-cert: secret default/my-tls-cert was not found"
  timestamp: "2018-08-14T02:58:37Z"
}
I have a few occurrences of this handshake error, which may be a result of the last warning...
{
  insertId: "148t6rfg1xmz978"
  labels: {
    compute.googleapis.com/resource_name: "fluentd-gcp-v2.0.17-5b82n"
    container.googleapis.com/namespace_name: "default"
    container.googleapis.com/pod_name: "nginx-ingress-controller-58f57fc597-zl25s"
    container.googleapis.com/stream: "stderr"
  }
  logName: "projects/project-7d320/logs/nginx-ingress-controller"
  receiveTimestamp: "2018-08-14T15:55:52.438035706Z"
  resource: {
    labels: {
      cluster_name: "my-elasticsearch-cluster"
      container_name: "nginx-ingress-controller"
      instance_id: "2341889542400230234"
      namespace_id: "default"
      pod_id: "nginx-ingress-controller-58f57fc597-zl25s"
      project_id: "project-7d320"
      zone: "us-central1-a"
    }
    type: "container"
  }
  severity: "ERROR"
  textPayload: "2018/08/14 15:55:50 [crit] 1548#1548: *860 SSL_do_handshake() failed (SSL: error:1417D18C:SSL routines:tls_process_client_hello:version too low) while SSL handshaking, client: 127.0.0.1, server: 0.0.0.0:442"
  timestamp: "2018-08-14T15:55:50Z"
}
The above logs make it seem like my tls secret isn't working, but when I run kubectl describe ingress, it says my secret terminates.
aaronmw@project-7d320:~$ kubectl describe ing
Name:             es-ingress
Namespace:        default
Address:          130.221.179.212
Default backend:  default-http-backend:80 (10.61.3.7:8080)
TLS:
  my-tls-secret terminates mydomain.ca
Rules:
  Host  Path  Backends
  ----  ----  --------
  *
        /     my-es:8080 (<none>)
Annotations:
Events:  <none>
I figured it out!
What I ended up doing was adding a default ssl certificate to my nginx-ingress controller on creation using the following command
helm install --name nginx-ingress --set controller.extraArgs.default-ssl-certificate=default/search-tls-secret stable/nginx-ingress
Once I had that, it was passing the cert as expected, but I still had the wrong cert as the CN didn't match my load balancer IP.
So what I did was:
Make my load balancer IP static
Add an A record to my domain, to map a subdomain to that IP
Re-key my cert to match that new subdomain
And I'm in business!
Thanks to @Crou, whose comment reminded me to look at the logs and got me on the right track.
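For anyone trying the same flag: the default/search-tls-secret referenced in the helm command has to exist up front. A minimal sketch of creating it (the file names here are placeholders for your re-keyed certificate and key):
kubectl create secret tls search-tls-secret --cert=fullchain.crt --key=private.key -n default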
I'm currently trying to create an ingress, following the SSL-termination approach, which should allow me to connect to a service both via HTTP and HTTPS.
I managed to create a working ingress for HTTP, and partly for HTTPS, but not both together.
Here's my config.
Ingress Controller: Deployment & Service
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-ingress-controller
spec:
  replicas: 1
  revisionHistoryLimit: 3
  template:
    metadata:
      labels:
        k8s-app: nginx-ingress-lb
    spec:
      containers:
        - args:
            - /nginx-ingress-controller
            - "--default-backend-service=$(POD_NAMESPACE)/default-http-backend"
          env:
            <!-- default config omitted -->
          image: "quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0-beta.17"
          imagePullPolicy: Always
          livenessProbe:
            <!-- omitted -->
          name: nginx-ingress-controller
          ports:
            - containerPort: 80
              name: http
              protocol: TCP
            - containerPort: 443
              name: https
              protocol: TCP
          volumeMounts:
            - mountPath: /etc/nginx-ssl/tls
              name: tls-vol
      terminationGracePeriodSeconds: 60
      volumes:
        - name: tls-vol
          secret:
            secretName: tls-test-project-secret
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: http
      nodePort: 31115
    - name: https
      port: 443
      targetPort: https
      nodePort: 31116
  selector:
    k8s-app: nginx-ingress-lb
Ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    ingress.kubernetes.io/secure-backends: "false"
    # modified this to false for http & https-scenario
    ingress.kubernetes.io/ssl-redirect: "true"
    # modified this to false for http & https-scenario
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    ingress.kubernetes.io/add-base-url: "true"
spec:
  tls:
    - hosts:
        - author.k8s-test
      secretName: tls-test-project-secret
  rules:
    - host: author.k8s-test
      http:
        paths:
          - path: /
            backend:
              serviceName: cms-author
              servicePort: 8080
Backend - Service
apiVersion: v1
kind: Service
metadata:
  name: cms-author
spec:
  selector:
    run: cms-author
  ports:
    - name: http
      protocol: TCP
      port: 8080
      targetPort: 8080
Backend-Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cms-author
spec:
  selector:
    matchLabels:
      run: cms-author
  replicas: 1
  template:
    metadata:
      labels:
        run: cms-author
    spec:
      containers:
        - name: cms-author
          image: <someDockerRegistryUrl>/magnolia:kube-dev
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
I have several issues. When following the HTTPS-only scenario, I can reach the application via the ingress HTTPS nodePort, but I can't log in, as the following request goes via HTTP instead of HTTPS. If I manually put https before the URL in the browser, it works again and any further request goes via HTTPS, but I don't know why :(
The final setup (supporting HTTP and HTTPS) is not working at all: if I try to access the app via the HTTP nodePort of the Ingress, it always redirects to SSL, even though in this scenario I configured ssl-redirect to false.
I have read many posts on GitHub dealing with this, but none of them worked for me.
I've changed the nginx-controller images from gce_containers to quay.io, also not working.
I've tried some older versions, also not working.
Deploy the nginx ingress controller from the official kubernetes charts repo https://github.com/kubernetes/charts/tree/master/stable/nginx-ingress by setting the helm arguments controller.service.targetPorts.https and controller.service.nodePorts.https. Once they are set, the appropriate NodePort (443) will be configured by helm.
Helm uses the YAML files in https://github.com/kubernetes/charts/tree/master/stable/nginx-ingress/templates.
Along with the nginx ingress controller, you'll need an ingress resource too. Refer https://github.com/nginxinc/kubernetes-ingress/tree/master/examples/complete-example for examples.
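A sketch of what that can look like with Helm 2 (the release name and the 31116 nodePort are taken from the question's Service; double-check the exact keys against the chart's values.yaml):
helm install stable/nginx-ingress --name nginx-ingress \
  --set controller.service.type=NodePort \
  --set controller.service.nodePorts.https=31116 \
  --set controller.service.targetPorts.https=https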