I'm setting up an instance of ghost and I'm trying to secure the /ghost path with client cert verification.
I've got an initial ingress up and running that serves the site quite happily with the path specified as /.
I'm trying to add a second ingress (that's mostly the same) for the /ghost path. If I do this and add the annotations for basic auth, everything seems to work, i.e. if I browse to /ghost I am prompted for the credentials in the basic-auth secret, and if I browse to any other URL it is served without auth.
I then switched to client cert verification based on this example: https://github.com/kubernetes/ingress-nginx/tree/master/docs/examples/auth/client-certs
When I try this, either the whole site or none of it is secured, rather than the path-based separation I got with basic-auth. Looking at the nginx.conf from the running pod, the proxy_set_header ssl-client-verify, proxy_set_header ssl-client-subject-dn and proxy_set_header ssl-client-issuer-dn elements are added under both the root / path and the /ghost path. I've tried removing those (from the root only) and copying the config directly back to the pod, but no luck there either.
I'm pulling nginx-ingress (Chart version 0.23.0) in as a dependency via Helm
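For what it's worth, this is roughly how I've been checking the rendered config inside the controller pod (the label selector and pod name are just placeholders for my setup):

kubectl get pods -l app=nginx-ingress
kubectl exec -it <nginx-ingress-controller-pod> -- grep -n "ssl-client-verify" /etc/nginx/nginx.conf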
Ingress definition for / location - this one works
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    certmanager.k8s.io/cluster-issuer: letsencrypt-staging
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: "true"
  labels:
    app: my-app
    chart: my-app-0.1.1
    heritage: Tiller
    release: my-app
  name: my-app
  namespace: default
spec:
  rules:
  - host: example.com
    http:
      paths:
      - backend:
          serviceName: my-app
          servicePort: http
        path: /
  tls:
  - hosts:
    - example.com
    secretName: mysite-tls
Ingress definition for /ghost location - this one doesn't work
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/auth-tls-verify-client: "on"
    nginx.ingress.kubernetes.io/auth-tls-secret: "default/auth-tls-chain"
    nginx.ingress.kubernetes.io/auth-tls-verify-depth: "1"
    nginx.ingress.kubernetes.io/auth-tls-error-page: "http://www.example.com/error-cert.html"
    nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream: "false"
    kubernetes.io/ingress.class: "nginx"
  labels:
    app: my-app
    chart: my-app-0.1.1
    heritage: Tiller
    release: my-app
  name: my-app-secure
  namespace: default
spec:
  rules:
  - host: example.com
    http:
      paths:
      - backend:
          serviceName: my-app
          servicePort: http
        path: /ghost
  tls:
  - hosts:
    - example.com
    secretName: mysite-tls
You need a '*' on the path in your second ingress if you want to serve all the pages under /ghost securely, and if you also want just /ghost itself you need another rule. Something like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/auth-tls-verify-client: "on"
    nginx.ingress.kubernetes.io/auth-tls-secret: "default/auth-tls-chain"
    nginx.ingress.kubernetes.io/auth-tls-verify-depth: "1"
    nginx.ingress.kubernetes.io/auth-tls-error-page: "http://www.example.com/error-cert.html"
    nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream: "false"
    kubernetes.io/ingress.class: "nginx"
  labels:
    app: my-app
    chart: my-app-0.1.1
    heritage: Tiller
    release: my-app
  name: my-app-secure
  namespace: default
spec:
  rules:
  - host: example.com
    http:
      paths:
      - backend:
          serviceName: my-app
          servicePort: http
        path: /ghost
      - backend:
          serviceName: my-app
          servicePort: http
        path: /ghost/*
  tls:
  - hosts:
    - example.com
    secretName: mysite-tls
However, if you want something like / unsecured and /ghost secured, I believe you won't be able to do it. If you are using nginx, this is a limitation of nginx itself: when you configure a server {} block with TLS in nginx, it looks something like this:
server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate example.com.crt;
    ssl_certificate_key example.com.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ...
}
The ingress controller creates paths like this:
server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate example.com.crt;
    ssl_certificate_key example.com.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ...
    location / {
        ...
    }
    location /ghost {
        ...
    }
}
So when you configure another server {} block with the same hostname and no SSL, it will override the first one.
You could do it with different - host: rules in your ingress, for example ghost.example.com with TLS and main.example.com without TLS. That way your nginx.conf would have separate server {} blocks.
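For example, a rough sketch of splitting the hosts into two Ingresses (the hostnames follow the example above; reusing the my-app service and the auth-tls/TLS secret names from your question is just for illustration):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app-ghost
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/auth-tls-verify-client: "on"
    nginx.ingress.kubernetes.io/auth-tls-secret: "default/auth-tls-chain"
spec:
  rules:
  - host: ghost.example.com
    http:
      paths:
      - backend:
          serviceName: my-app
          servicePort: http
        path: /ghost
  tls:
  - hosts:
    - ghost.example.com
    secretName: mysite-tls
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app-main
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: main.example.com
    http:
      paths:
      - backend:
          serviceName: my-app
          servicePort: http
        path: /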
You can always shell into the ingress controller pod to check the configs, for example:
$ kubectl exec -it nginx-ingress-controller-xxxxxxxxx-xxxxx bash
www-data@nginx-ingress-controller-6bd7c597cb-8kzjh:/etc/nginx$ cat nginx.conf
You could add a location-snippet annotation, for example:
annotations:
  nginx.ingress.kubernetes.io/location-snippet: |
    if ($uri ~ "^/ghost") {
      set $ban_info "location";
    }
    if ($ssl_client_s_dn !~ "CN=ok_client") {
      set $ban_info "${ban_info}+no_client";
    }
    if ($ban_info = "location+no_client") {
      return 403;
    }
Related
I am running nginx as a load balancer in OpenShift. For this I've created a ConfigMap and a Deployment, exposed it as a Service of type LoadBalancer, and created a relevant Route.
Please note that SSL is also set up in its configuration. When I allocate a public IP to it, I get a connection refused error.
The route seems to be correct, but it is not working as intended.
ConfigMap file
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-conf
  namespace: nginx-ingress
data:
  nginx.conf: |-
    user nginx;
    worker_processes 10;
    events {
      worker_connections 10240;
    }
    http {
      server {
        listen 80;
        server_name localhost;
        location / {
          root /usr/share/nginx/html; # root path for file
          index index.html index.htm;
        }
      }
    }
  default.conf: |-
    # file mounted externally
    server {
      listen 80;
      server_name localhost;
      listen 443 ssl default_server;
      listen [::]:443 ssl default_server;
      ssl_certificate /etc/letsencrypt/server.crt;
      ssl_certificate_key /etc/letsencrypt/server.key;
      ssl_trusted_certificate /etc/letsencrypt/rootCA.pem;
      root /usr/share/nginx/html;
      index index.html index.htm;
      location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
      }
      error_page 404 /404.html;
      error_page 500 502 503 504 /50x.html;
      location = /50x.html {
        root /usr/share/nginx/html;
      }
    }
  index.html: |-
    <!DOCTYPE html>
    <html>
    <head>
    <title>Welcome to nginx!</title>
    <style>
    body {
      width: 35em;
      margin: 0 auto;
      font-family: Tahoma, Verdana, Arial, sans-serif;
    }
    </style>
    </head>
    <body>
    <h1>Welcome to nginx!</h1>
    <p>If you see this page, the nginx web server is successfully installed and
    working. Further configuration is required.</p>
    <p>For online documentation and support please refer to
    nginx.org.<br/>
    Commercial support is available at
    nginx.com.</p>
    <p><em>Thank you for using nginx. Response from ingress/proxy.</em></p>
    </body>
    </html>
Deployment file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: nginx-ingress
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      nodeSelector:
        compute1: worker1
      volumes:
      - name: nginx-conf
        configMap:
          name: nginx-conf
          items:
          - key: nginx.conf
            path: nginx.conf
          - key: default.conf
            path: default.conf
          - key: index.html
            path: index.html
      - name: ca-pem
        configMap:
          name: ca-pem
      - name: ca-crt
        configMap:
          name: ca-crt
      - name: ca-key
        configMap:
          name: ca-key
      containers:
      - name: nginx-alpine-perl
        image: docker.io/library/nginx@sha256:51212c2cc0070084b2061106d5711df55e8aedfc6091c6f96fabeff3e083f355
        ports:
        - containerPort: 80
        - containerPort: 443
        securityContext:
          allowPrivilegeEscalation: false
          #runAsUser: 0
        volumeMounts:
        - name: nginx-conf
          mountPath: /etc/nginx
          #subPath: nginx.conf
          readOnly: true
        - name: nginx-conf
          mountPath: /etc/nginx/conf.d
          readOnly: true
        - name: nginx-conf
          mountPath: /usr/share/nginx/html
          #subPath: nginx.conf
          readOnly: true
        - name: ca-pem
          mountPath: /etc/letsencrypt/rootCA.pem
          subPath: rootCA.pem
          readOnly: false
        - name: ca-crt
          mountPath: /etc/letsencrypt/server.crt
          subPath: server.crt
          readOnly: false
        - name: ca-key
          mountPath: /etc/letsencrypt/server.key
          subPath: server.key
          readOnly: true
Service file
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer
  ports:
  - port: 80
    name: http
    protocol: TCP
    targetPort: 80
    #nodePort: 30008
  - port: 443
    name: https
    protocol: TCP
    targetPort: 443
    #nodePort: 30009
  selector:
    app: nginx
status:
  loadBalancer:
    ingress:
    - ip: <Public IP>
Route file
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  annotations:
    openshift.io/host.generated: "true"
  name: nginx-ingress
  namespace: nginx-ingress
  selfLink: /apis/route.openshift.io/v1/namespaces/nginx-ingress/routes/nginx-ingress
spec:
  host: <Some url: nginx-ingress.app>
  to:
    kind: Service
    name: nginx
    weight: 100
  wildcardPolicy: None
Hard to tell from your config what is breaking here. As a sensible debugging step, validate whether the problem is with the Ingress Controller or with the routing from your LoadBalancer's public IP:
Run curl from a Pod directly against the Ingress Controller Service, using the external URL as the Host header:
curl -v -H "Host: <your external URL>" http://nginx.default
If it works, you know it's the routing, e.g. the cluster network configuration. If it fails, it must be the Ingress Controller, e.g. the OpenShift Route, Service, or Pod configuration.
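If you don't have a Pod with curl handy, a throwaway Pod works too; a minimal sketch, assuming the curlimages/curl image and the nginx.default service name from the command above (adjust the namespace to wherever your nginx Service actually lives):

kubectl run curl-debug --rm -it --restart=Never --image=curlimages/curl --command -- \
  curl -v -H "Host: <your external URL>" http://nginx.default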
I have two services in Kubernetes which are exposed through the nginx controller. Service a wants to invoke content on domain b, but at the same time both services need to be authenticated through Google using the oauth2-proxy service.
So I have managed to enable CORS, and a can invoke b without any issues. But the problem is that when I include the authentication as well, I constantly get:
Access to manifest at 'https://accounts.google.com/o/oauth2/auth?access_type=offline&approval_prompt=force&client_id=<obscured>.apps.googleusercontent.com&redirect_uri=https%3A%2F%2Fa.example.com%2Foauth2%2Fcallback&response_type=code&scope=profile+email&state=<obscured>manifest.json' (redirected from 'https://a.example.com/manifest.json') from origin 'https://a.example.com' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource.
Here are the ingresses:
Service a
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: "true"
    nginx.ingress.kubernetes.io/auth-url: "https://$host/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://$host/oauth2/start?rd=$escaped_request_uri"
    nginx.ingress.kubernetes.io/whitelist-source-range: 0.0.0.0/0
  name: a
spec:
  rules:
  - host: a.example.com
    http:
      paths:
      - backend:
          service:
            name: a-svc
            port:
              number: 8080
        path: /
        pathType: Prefix
  tls:
  - hosts:
    - a.example.com
    secretName: a-tls
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/whitelist-source-range: 0.0.0.0/0
  labels:
    k8s-app: oauth2-proxy
  name: a-oauth
spec:
  rules:
  - host: a.example.com
    http:
      paths:
      - backend:
          service:
            name: oauth2-proxy
            port:
              number: 4180
        path: /oauth2
        pathType: Prefix
  tls:
  - hosts:
    - a.example.com
    secretName: a-oauth-tls
Service b
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: "true"
    nginx.ingress.kubernetes.io/auth-url: "https://$host/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://$host/oauth2/start?rd=$escaped_request_uri"
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/whitelist-source-range: 0.0.0.0/0
  name: b
spec:
  rules:
  - host: b.example.com
    http:
      paths:
      - backend:
          service:
            name: b-svc
            port:
              number: 8080
        path: /
        pathType: Prefix
  tls:
  - hosts:
    - b.example.com
    secretName: b-tls
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/auth-url: "https://$host/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://$host/oauth2/start?rd=$escaped_request_uri"
    nginx.ingress.kubernetes.io/whitelist-source-range: 0.0.0.0/0
  labels:
    k8s-app: oauth2-proxy
  name: b-oauth
spec:
  rules:
  - host: b.example.com
    http:
      paths:
      - backend:
          service:
            name: oauth2-proxy
            port:
              number: 4180
        path: /oauth2
        pathType: Prefix
  tls:
  - hosts:
    - b.example.com
    secretName: b-oauth-tls
Obviously, there is only one difference between these two, and that is the CORS annotation nginx.ingress.kubernetes.io/enable-cors: "true" in the service b ingresses.
I am not sure what is causing the issue, but I am guessing that the authentication done against Google in service a is not being passed along with the CORS request, so that service b could also be authenticated with the same token/credentials.
What am I doing wrong and how can I resolve this?
Based on the documentation, it looks like you are lacking some of the annotations needed for CORS.
Try adding a fragment like this to the service's Ingress configuration:
nginx.ingress.kubernetes.io/enable-cors: "true"
nginx.ingress.kubernetes.io/cors-allow-methods: "PUT, GET, POST, OPTIONS"
nginx.ingress.kubernetes.io/cors-allow-credentials: "true"
nginx.ingress.kubernetes.io/configuration-snippet: |
  more_set_headers "Access-Control-Allow-Origin: $http_origin";
But it can also be a problem with CORS at the kube-apiserver level. In that case you should add the following line:
--cors-allowed-origins=["http://*"]
to /etc/default/kube-apiserver or to the /etc/kubernetes/manifests/kube-apiserver.yaml file (depending on the location of your kube-apiserver configuration file).
After that, restart kube-apiserver.
See also this similar question.
So, it turns out that the whole Kubernetes and nginx config was correct; the solution was to make use of the saved cookie on the client side when invoking a CORS request to the second service.
Essentially, this was already answered here: Set cookies for cross origin requests
Excerpt from the answer:
Front-end (client): Set the XMLHttpRequest.withCredentials flag to true, this can be achieved in different ways depending on the request-response library used:
jQuery 1.5.1 xhrFields: {withCredentials: true}
ES6 fetch() credentials: 'include'
axios: withCredentials: true
Greetings fellow humans,
I am trying to route all traffic incoming to my cluster with the following annotation:
nginx.ingress.kubernetes.io/auth-url: http://my-auth-service/
I followed the tutorials, and I still have not managed to route every request to my auth module. I am following a master-minion strategy. When I check the generated nginx config, the annotation is not found.
I also tried something like this in one of my minion ingress files:
auth_request /auth;
auth_request_set $auth_service $upstream_http_auth_service;
proxy_pass $request_uri;
proxy_set_header X-foo-Token $auth_service;
I am using the following ingress controller version:
Image: nginx/nginx-ingress:1.8.1
Ports: 80/TCP, 443/TCP, 9113/TCP, 8081/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP, 0/TCP
Samples:
Master ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: master-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.org/mergeable-ingress-type: "master"
    kubernetes.io/ingress.global-static-ip-name: <cluster-ip>
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    # nginx.ingress.kubernetes.io/auth-url: http://my-auth-service/
spec:
  tls:
  - hosts:
    - app.myurl.com
    secretName: secret-tls
  rules:
  - host: app.myurl.com
Minion ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: pod-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.org/mergeable-ingress-type: "minion"
    # nginx.ingress.kubernetes.io/auth-url: http://my-auth-service/
    # nginx.ingress.kubernetes.io/auth-snippet: |
    #   auth_request /new-auth-service;
    #   auth_request_set $new_auth_service $upstream_http_new_auth_service;
    #   proxy_pass $request_uri
    #   proxy_set_header X-foo-Token $new_auth_service;
    nginx.org/rewrites: "serviceName={{ .Values.serviceName }} rewrite=/"
spec:
  rules:
  - host: {{ .Values.clusterHost }}
    http:
      paths:
      - path: /{{ .Values.serviceName }}/
        backend:
          serviceName: {{ .Values.serviceName }}
          servicePort: 80
So I was able to make it work. First of all, the URLs provided by matt-j helped me a lot in figuring out a solution.
It turns out that I was using nginx-stable for my ingress controller; as suggested in the documentation here, I needed to use the new ingress controller version. I followed the instructions for a full reset, since I am working in my staging environment (for production I might go with a zero-downtime approach). Once it was installed, I ran into a known issue related to the webhooks; a similar error can be seen here. Basically, one solution for overcoming this error is to delete the validatingwebhookconfigurations. Finally I applied the ingress config and made some adjustments to use the proper annotations, which did the magic.
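For reference, the webhook cleanup amounted to something like this (the object name below is the one an ingress-nginx install typically creates; yours may differ, so list first):

kubectl get validatingwebhookconfigurations
kubectl delete validatingwebhookconfiguration ingress-nginx-admission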
NOTE: I ran into an issue with forwarding the auth request to my internal cluster service; to fix that I am using the in-cluster FQDN of the Kubernetes service.
NOTE 2: I removed the master-minion concept, since the merging in kubernetes/ingress-nginx happens automatically; more info here.
Here are the fixed samples:
MAIN INGRESS
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: master-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    kubernetes.io/ingress.global-static-ip-name: <PUBLIC IP>
spec:
  rules:
  - host: domain.com
CHILD INGRESS
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: {{ .Values.serviceName }}-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/auth-url: http://<SERVICE NAME>.<NAMESPACE>.svc.cluster.local # using the internal FQDN
spec:
  rules:
  - host: {{ .Values.clusterHost }}
    http:
      paths:
      - path: /{{ .Values.serviceName }}(/|$)(.*)
        backend:
          serviceName: {{ .Values.serviceName }}
          servicePort: 80
I'm following the guide and I'm able to access my website via HTTP and HTTPS; however, the redirect is not working for me. Any ideas on what might be wrong?
# IngressRoute
---
kind: IngressRoute
apiVersion: traefik.containo.us/v1alpha1
metadata:
  name: whoami
  namespace: default
spec:
  entryPoints:
    web:
      address: :80
      http:
        redirections:
          entryPoint:
            to: websecure
            scheme: https
            permanent: true
    websecure:
      address: :443
  routes:
  - match: Host(`hello.mydomain.io`)
    kind: Rule
    services:
    - name: whoami
      port: 80
  tls: {}
I use Docker Compose, so this might not be spot on for you, but my suggestion is to add a scheme redirect middleware to your dynamic config file.
http:
  middlewares:
    https_redirect:
      redirectScheme:
        scheme: https
        permanent: true
Or just add the middleware to your service if you don't have access to the Traefik configs.
I prefer the dynamic config, because then you can register it on any service as required using https_redirect@file.
You do need a router per entrypoint though, using this method, and you register the middleware only on the HTTP router.
I'm sure there are other, better ways, but if you need some apps automatically redirecting and some not, this is the best solution I've found thus far.
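As a rough sketch of the "router per entrypoint" idea in a dynamic (file provider) config; the router names and the whoami backend URL here are made up for illustration:

http:
  routers:
    whoami-http:
      entryPoints:
      - web
      rule: "Host(`hello.mydomain.io`)"
      middlewares:
      - https_redirect   # redirect registered only on the HTTP router
      service: whoami
    whoami-https:
      entryPoints:
      - websecure
      rule: "Host(`hello.mydomain.io`)"
      tls: {}
      service: whoami
  services:
    whoami:
      loadBalancer:
        servers:
        - url: "http://whoami:80"
  middlewares:
    https_redirect:
      redirectScheme:
        scheme: https
        permanent: true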
I have a working configuration which uses Ingress objects instead of IngressRoute, but I hope this will help some people. Here is the working configuration:
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: redirect
  namespace: some_namespace
spec:
  redirectScheme:
    scheme: https
    permanent: true
and
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: wordpress
  namespace: your_app_namespace
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.ingress.kubernetes.io/router.middlewares: some_namespace-redirect@kubernetescrd
spec:
  tls:
  - secretName: your_certificate
    hosts:
    - www.your_website.com
  rules:
  - host: www.your_website.com
    http:
      paths:
      - path: /
        backend:
          service:
            name: your_service
            port:
              number: 80
        pathType: ImplementationSpecific
So the trick is to:
define a Middleware object (in any namespace you want, though it may as well be in the same one as your app)
reference it in traefik.ingress.kubernetes.io/router.middlewares with the syntax <NAMESPACE>-<NAME>@kubernetescrd (where NAMESPACE and NAME are those of the Middleware object)
I have a service that has HTTP Basic Auth. In front of it I have an nginx Ingress, which also has basic auth. How can I attach an Authorization header with the credentials after signing in with the Ingress, to achieve single sign-on?
This is the configuration of my Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/auth-realm: Authentication Required
    nginx.ingress.kubernetes.io/auth-secret: kibana-user-basic-auth
    nginx.ingress.kubernetes.io/auth-type: basic
  name: kibana-user
  namespace: {{.Release.Namespace}}
spec:
  tls:
  - secretName: kibana-tls
    hosts:
    - {{.Values.ingress.user.host}}
  rules:
  - host: {{.Values.ingress.user.host}}
    http:
      paths:
      - backend:
          serviceName: kibana-logging
          servicePort: {{ .Values.kibana.service.internalPort }}
        path: /
You could use the annotation nginx.ingress.kubernetes.io/configuration-snippet: proxy_set_header Authorization $http_authorization; to forward the Authorization header to the backend service.
The Ingress resource should look like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/auth-realm: Authentication Required
    nginx.ingress.kubernetes.io/auth-secret: kibana-user-basic-auth
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/configuration-snippet: "proxy_set_header Authorization $http_authorization;"
  name: kibana-user
  namespace: {{.Release.Namespace}}
spec:
  tls:
  - secretName: kibana-tls
    hosts:
    - {{.Values.ingress.user.host}}
  rules:
  - host: {{.Values.ingress.user.host}}
    http:
      paths:
      - backend:
          serviceName: kibana-logging
          servicePort: {{ .Values.kibana.service.internalPort }}
        path: /
I guess that you can propagate the Authorization header with the nginx.ingress.kubernetes.io/auth-response-headers annotation:
nginx.ingress.kubernetes.io/auth-response-headers: Authorization
Alternatively, you can achieve the same thing by applying proxy_set_header inside the target Ingress location via the configuration-snippet annotation, as described here:
annotations:
  nginx.ingress.kubernetes.io/configuration-snippet: |
    proxy_set_header Authorization "Basic base64 encode value";
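The "base64 encode value" placeholder above is simply base64 of user:password; for example (the credentials here are made up):

echo -n 'myuser:mypassword' | base64
# bXl1c2VyOm15cGFzc3dvcmQ=  ->  proxy_set_header Authorization "Basic bXl1c2VyOm15cGFzc3dvcmQ=";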