Traefik catch-all route for SSL that redirects to 443? - ssl

I'm trying to set up the ACME client for my Traefik server, and I want to create a catch-all route that redirects port 80 to port 443 and also provisions an SSL certificate. This is my config so far:
entryPoints:
  web:
    address: ":80"
  web-secure:
    address: ":443"
providers:
  docker: true
api:
  dashboard: true
  insecure: true
http:
  routers:
    catchall:
      rule: HostSNI(`gateway.dogma.net`)
      tls:
        certResolver: private
certificatesResolvers:
  private:
    acme:
      email: "#####" # redacted
      storage: "acme.json"
      caServer: "https://ca.dogma.net:9000/acme/acme/directory"
      httpChallenge:
        entryPoint: web
When I set up my containers with a PathPrefix(`/nameofthecontainer`) routing rule, I am not redirected to port 443 and I don't get an SSL certificate.
I've already set up my step-ca certificate authority, and my DNS points to it via the URL ca.dogma.net.
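In Traefik v2, the usual place for a global HTTP-to-HTTPS redirect is the entry-point level of the static configuration rather than a router rule; also note that `HostSNI` matchers belong to TCP routers, while HTTP routers use `Host`. A sketch under those assumptions, reusing the entry-point names from the config above:

```yaml
entryPoints:
  web:
    address: ":80"
    http:
      redirections:
        entryPoint:
          to: web-secure    # redirect everything on :80 to the :443 entry point
          scheme: https
          permanent: true
  web-secure:
    address: ":443"
```

With this in place, any request hitting port 80 is redirected before routing happens, so no catch-all router is needed for the redirect itself.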

Related

Inconsistent behaviour while achieving stickiness using Kong Ingress controller

I am using Kong ingress controller on EKS.
High level flow:
NLB → Kong ingress controller and proxy(running in the same pod) → k8s service → backend pods
I am trying to achieve stickiness using hash_on cookies configuration on upstream.
I am using session and hmac_auth plugin for generating session/cookie.
1st request from the client: The first time the client sends a request to the NLB, the NLB forwards the traffic to the Kong ingress controller, and from there it goes to one of the backend pods. Since this is the first request, Kong generates a cookie and sends it back in the response to the client.
2nd request from the client: The second time, the client includes the cookie it received in the response to the 1st request. When this request reaches Kong, it forwards the request to a different pod than the one it chose for the first request.
On the 3rd, 4th, ..., nth request, Kong forwards the request to the same pod it chose for the 2nd request.
How can we achieve stickiness for every request?
My expectation was that the first time Kong receives a request from a client, it would generate a cookie containing some detail specific to the pod it sends the traffic to; whenever the same client sends another request with that cookie, Kong should use it to forward the request to the same pod as the first time. But this is not happening: I get stickiness from the 2nd request onward, but not between the 1st and 2nd requests.
Ingress resource used for defining path:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    konghq.com/strip-path: "true"
  name: kong-ingress-bk-srvs
  namespace: default
spec:
  ingressClassName: kong
  rules:
    - http:
        paths:
          - backend:
              service:
                name: httpserver-service-cip
                port:
                  number: 8084
            path: /api/v1/serverservice
            pathType: Prefix
          - backend:
              service:
                name: httpserver-service-cip-health
                port:
                  number: 8084
            path: /api/v1/healthservice
            pathType: Prefix
Upstream config:
apiVersion: configuration.konghq.com/v1
kind: KongIngress
metadata:
  name: stickiness-upstream
upstream:
  hash_on: cookie
  hash_on_cookie: my-test-cookie
  hash_on_cookie_path: /
Session plugin:
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: session-plugin
config:
  cookie_path: /
  cookie_name: my-test-cookie
  storage: cookie
  cookie_secure: false
  cookie_httponly: false
  cookie_samesite: None
plugin: session
HMAC plugin:
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: hmac-plugin
config:
  validate_request_body: true
  enforce_headers:
    - date
    - request-line
    - digest
  algorithms:
    - hmac-sha512
plugin: hmac-auth
Consumer:
apiVersion: configuration.konghq.com/v1
kind: KongConsumer
metadata:
  name: kong-consumer
  annotations:
    kubernetes.io/ingress.class: kong
username: consumer-user-3
custom_id: consumer-id-3
credentials:
  - kong-cred
Pod service config (ingress backend service):
apiVersion: v1
kind: Service
metadata:
  annotations:
    konghq.com/override: stickiness-upstream
    konghq.com/plugins: session-plugin,hmac-plugin
  creationTimestamp: "2023-02-04T16:44:00Z"
  labels:
    app: httpserver
  name: httpserver-service-cip
  namespace: default
  resourceVersion: "6729057"
  uid: 481b7d8c-1f07-4293-809c-3b4b7dca41e0
spec:
  clusterIP: 10.101.99.87
  clusterIPs:
    - 10.101.99.87
  internalTrafficPolicy: Cluster
  ipFamilies:
    - IPv4
  ipFamilyPolicy: SingleStack
  ports:
    - name: comm-port
      port: 8085
      protocol: TCP
      targetPort: 8085
    - name: dur-port
      port: 8084
      protocol: TCP
      targetPort: 8084
  selector:
    app: httpserver
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10000
  type: ClusterIP
status:
  loadBalancer: {}
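One detail worth double-checking here (an assumption on my part, not something stated in the question): with `hash_on: cookie`, Kong can only hash the first request consistently once the hash cookie exists, and that cookie is the one Kong itself generates in the first response. If the session plugin writes a cookie with the same name (`my-test-cookie` above), the two mechanisms can interfere. A sketch that keeps the hash cookie separate from the session cookie:

```yaml
apiVersion: configuration.konghq.com/v1
kind: KongIngress
metadata:
  name: stickiness-upstream
upstream:
  hash_on: cookie
  # hypothetical name, deliberately distinct from the session plugin's cookie_name
  hash_on_cookie: kong-sticky-cookie
  hash_on_cookie_path: /
```

With distinct names, the session plugin manages authentication state while Kong's own `hash_on_cookie` cookie alone drives upstream selection.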

Traefik 2.0 301 Redirect Not Working Load Balancing

I have the below config file that I am using to load balance my application. HTTPS is working fine, but there are some parts of the application that still produce a 301/302 response, and the application breaks. I was able to find a workaround with HAProxy, but I cannot seem to make it work with Traefik. As you can tell from the config, I have tried a few things, and it either breaks altogether or doesn't work.
http:
  #region routers
  routers:
    app-rtr:
      entryPoints:
        - https
        - http
      # https:
      #   address: ":443"
      # http:
      #   address: ":80"
      # http:
      #   redirections:
      #     entryPoint:
      #       to: https
      #       schema: https
      rule: "Host(`services.domain.io`)"
      middlewares:
        # - redirect
        # - https-redirectscheme
        - middlewares-compress
      tls:
        certResolver: "cf"
        domains:
          - main: "services.neotericservices.io"
      service: app
  #endregion
  #region services
  services:
    app:
      loadBalancer:
        healthCheck:
          path: /health
          interval: "10s"
          timeout: "5s"
          scheme: http
        sticky:
          cookie: {}
        servers:
          - url: "http://172.16.9.90:16005"
          - url: "http://172.16.9.90:16006"
          - url: "http://172.16.9.91:16009"
          - url: "http://172.16.9.91:16010"
          - url: "http://172.16.9.93:16007"
          - url: "http://172.16.9.93:16008"
        passHostHeader: true
  #endregion
  middlewares:
    redirect:
      redirectRegex:
        regex: "^http://*(services.domain.io)(.*)"
        replacement: "https://$1$2"
        permanent: true
    https-redirectscheme:
      redirectScheme:
        scheme: https
        permanent: true
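For comparison, Traefik v2 normally does the HTTP-to-HTTPS redirect at the entry-point level of the static configuration (the commented-out attempt above is close, but the key is `scheme`, not `schema`, and `redirections` belongs under the entry point, not the router). A sketch, assuming entry points named `http` and `https` as in the router above:

```yaml
entryPoints:
  http:
    address: ":80"
    http:
      redirections:
        entryPoint:
          to: https       # the name of the HTTPS entry point
          scheme: https   # note: "scheme", not "schema"
          permanent: true
  https:
    address: ":443"
```

Because the redirect happens before routing, the `redirect` and `https-redirectscheme` middlewares in the dynamic config become unnecessary for plain scheme redirection.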

HTTPS redirect not working for default backend of nginx-ingress-controller

I'm having trouble getting an automatic HTTP -> HTTPS redirect for the default backend of the NGINX ingress controller for Kubernetes, where the controller is behind an AWS Classic ELB; is it possible?
According to the guide, it seems that HSTS is enabled by default:
HTTP Strict Transport Security
HTTP Strict Transport Security (HSTS) is an opt-in security enhancement specified through the use of a special response header. Once a supported browser receives this header that browser will prevent any communications from being sent over HTTP to the specified domain and will instead send all communications over HTTPS.
HSTS is enabled by default.
And redirecting HTTP -> HTTPS is enabled
Server-side HTTPS enforcement through redirect
By default the controller redirects HTTP clients to the HTTPS port 443 using a 308 Permanent Redirect response if TLS is enabled for that Ingress.
However, when I deploy the controller as configured below and navigate to http://<ELB>.elb.amazonaws.com, I am unable to get any response (curl reports "Empty reply from server"). What I would expect instead is a 308 redirect to HTTPS followed by a 404.
This question is similar: Redirection from http to https not working for custom backend service in Kubernetes Nginx Ingress Controller. But they resolved it by deploying a custom backend and specifying TLS on the ingress resource. I am trying to avoid deploying a custom backend and simply want to use the default, so that solution is not applicable in my case.
I've shared my deployment files on gist and have copied them here as well:
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx-sit
  labels:
    app.kubernetes.io/name: ingress-nginx-sit
    app.kubernetes.io/part-of: ingress-nginx-sit
spec:
  minReadySeconds: 2
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: '50%'
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx-sit
      app.kubernetes.io/part-of: ingress-nginx-sit
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx-sit
        app.kubernetes.io/part-of: ingress-nginx-sit
      annotations:
        prometheus.io/port: '10254'
        prometheus.io/scrape: 'true'
    spec:
      serviceAccountName: nginx-ingress-serviceaccount
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.23.0
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --annotations-prefix=nginx.ingress.kubernetes.io
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --ingress-class=$(POD_NAMESPACE)
            - --election-id=leader
            - --watch-namespace=$(POD_NAMESPACE)
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 33
            runAsUser: 33
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx-sit
  labels:
    app.kubernetes.io/name: ingress-nginx-sit
    app.kubernetes.io/part-of: ingress-nginx-sit
data:
  hsts: "true"
  ssl-redirect: "true"
  use-proxy-protocol: "false"
  use-forwarded-headers: "true"
  enable-access-log-for-default-backend: "true"
  enable-owasp-modsecurity-crs: "true"
  proxy-real-ip-cidr: "10.0.0.0/24,10.0.1.0/24" # restrict this to the IP addresses of ELB
---
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx-sit
  labels:
    app.kubernetes.io/name: ingress-nginx-sit
    app.kubernetes.io/part-of: ingress-nginx-sit
  annotations:
    # replace with the correct value of the generated certificate in the AWS console
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:<region>:<account>:certificate/<id>"
    # Specify the ssl policy to apply to the ELB
    service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy: "ELBSecurityPolicy-TLS-1-2-2017-01"
    # the backend instances are HTTP
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
    # Terminate ssl on https port
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "*"
    # Ensure the ELB idle timeout is less than nginx keep-alive timeout. By default,
    # NGINX keep-alive is set to 75s. If using WebSockets, the value will need to be
    # increased to '3600' to avoid any potential issues.
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "60"
    # Security group used for the load balancer.
    service.beta.kubernetes.io/aws-load-balancer-extra-security-groups: "sg-xxxxx"
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx-sit
    app.kubernetes.io/part-of: ingress-nginx-sit
  loadBalancerSourceRanges:
    # Restrict allowed source IP ranges
    - "192.168.1.1/16"
  ports:
    - name: http
      port: 80
      targetPort: http
      # The range of valid ports is 30000-32767
      nodePort: 30080
    - name: https
      port: 443
      targetPort: http
      # The range of valid ports is 30000-32767
      nodePort: 30443
I think I found the problem.
For some reason, the default server has force_ssl_redirect set to false when determining whether it should redirect the incoming request to HTTPS.
In cat /etc/nginx/nginx.conf, notice that the rewrite_by_lua_block passes force_ssl_redirect = false:
...
## start server _
server {
    server_name _ ;
    listen 80 default_server reuseport backlog=511;
    set $proxy_upstream_name "-";
    set $pass_access_scheme $scheme;
    set $pass_server_port $server_port;
    set $best_http_host $http_host;
    set $pass_port $pass_server_port;
    listen 443 default_server reuseport backlog=511 ssl http2;
    # PEM sha: 601213c2dd57a30b689e1ccdfaa291bf9cc264c3
    ssl_certificate /etc/ingress-controller/ssl/default-fake-certificate.pem;
    ssl_certificate_key /etc/ingress-controller/ssl/default-fake-certificate.pem;
    ssl_certificate_by_lua_block {
        certificate.call()
    }
    location / {
        set $namespace "";
        set $ingress_name "";
        set $service_name "";
        set $service_port "0";
        set $location_path "/";
        rewrite_by_lua_block {
            lua_ingress.rewrite({
                force_ssl_redirect = false,
                use_port_in_redirects = false,
            })
            balancer.rewrite()
            plugins.run()
        }
...
Then, the Lua code only redirects when both force_ssl_redirect is set and redirect_to_https() returns true:
cat /etc/nginx/lua/lua_ingress.lua
...
if location_config.force_ssl_redirect and redirect_to_https() then
    local uri = string_format("https://%s%s", redirect_host(), ngx.var.request_uri)
    if location_config.use_port_in_redirects then
        uri = string_format("https://%s:%s%s", redirect_host(), config.listen_ports.https, ngx.var.request_uri)
    end
    ngx_redirect(uri, config.http_redirect_code)
end
...
From what I can tell the force_ssl_redirect setting is only controlled at the Ingress resource level through the annotation nginx.ingress.kubernetes.io/force-ssl-redirect: "true". Because I don't have an ingress rule setup (this is meant to be the default server for requests that don't match any ingress), I have no way of changing this setting.
So what I determined I had to do was define my own custom server snippet on a different port with force_ssl_redirect set to true, and then point the Service load balancer at that custom server instead of the default. Specifically:
Added to the ConfigMap:
...
http-snippet: |
  server {
    server_name _ ;
    listen 8080 default_server reuseport backlog=511;
    set $proxy_upstream_name "-";
    set $pass_access_scheme $scheme;
    set $pass_server_port $server_port;
    set $best_http_host $http_host;
    set $pass_port $pass_server_port;
    server_tokens off;
    location / {
      rewrite_by_lua_block {
        lua_ingress.rewrite({
          force_ssl_redirect = true,
          use_port_in_redirects = false,
        })
        balancer.rewrite()
        plugins.run()
      }
    }
    location /healthz {
      access_log off;
      return 200;
    }
  }
server-snippet: |
  more_set_headers "Strict-Transport-Security: max-age=31536000; includeSubDomains; preload";
Note I also added the server-snippet to enable HSTS correctly. I think because the traffic from the ELB to NGINX is HTTP, not HTTPS, the HSTS headers were not being added correctly by default.
Added to the DaemonSet:
...
ports:
  - name: http
    containerPort: 80
  - name: http-redirect
    containerPort: 8080
...
Modified the Service:
...
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
...
ports:
  - name: http
    port: 80
    targetPort: http-redirect
    # The range of valid ports is 30000-32767
    nodePort: 30080
  - name: https
    port: 443
    targetPort: http
    # The range of valid ports is 30000-32767
    nodePort: 30443
...
And now things seem to be working. I've updated the Gist so it includes the full configuration that I am using.

Issues obtaining ssl certificate

BACKGROUND
I am trying to set up the Traefik dashboard to be accessible at sub.domain.com and to secure it automatically via a Let's Encrypt SSL certificate. Using the configuration files below, I am successful in setting up the container and making the dashboard accessible via https://sub.domain.com.
I have multiple A records pointing to the same IP, which is a VPS:
sub.domain.com
server1.domain.com
PROBLEM
Upon loading the dashboard page I get an untrusted certificate error.
LOGS & CONFIGS
Examining the Traefik dashboard certificate shows it's a Traefik self-signed cert.
Looking at the container logs, I can see the following:
time="2018-01-23T04:47:53Z" level=info msg="Generating ACME Account..."
time="2018-01-23T04:48:11Z" level=debug msg="Building ACME client..."
time="2018-01-23T04:48:11Z" level=info msg=Register...
time="2018-01-23T04:48:12Z" level=debug msg=AgreeToTOS...
time="2018-01-23T04:48:12Z" level=info msg="Preparing server traefik &{Network: Address::8080 TLS:<nil> Redirect:<nil> Auth:<nil> WhitelistSourceRange:[] Compress:false ProxyProtocol:<nil> ForwardedHeaders:0xc4202a2940} with readTimeout=0s writeTimeout=0s idleTimeout=3m0s"
time="2018-01-23T04:48:12Z" level=info msg="Retrieving ACME certificates..."
time="2018-01-23T04:48:12Z" level=info msg="Retrieved ACME certificates"
time="2018-01-23T04:48:12Z" level=info msg="Starting provider *docker.Provider {"Watch":true,"Filename":"","Constraints":null,"Trace":false,"DebugLogGeneratedTemplate":false,"Endpoint":"unix:///var/run/docker.sock","Domain":"bendwyer.net","TLS":null,"ExposedByDefault":false,"UseBindPortIP":false,"SwarmMode":false}"
time="2018-01-23T04:48:12Z" level=info msg="Starting server on :443"
time="2018-01-23T04:48:12Z" level=info msg="Starting server on :8080"
time="2018-01-23T04:48:12Z" level=info msg="Testing certificate renew..."
Checking acme.json I can see that the file has been populated with Let's Encrypt information, but the certificate sections are blank.
traefik.toml
defaultEntryPoints = ["http", "https"]
debug = true

[entryPoints]
  [entryPoints.http]
  address = ":80"
    [entryPoints.http.redirect]
    entryPoint = "https"
  [entryPoints.https]
  address = ":443"
    [entryPoints.https.tls]

[acme]
email = "name#domain.com"
storage = "acme.json"
entryPoint = "https"
  [acme.httpChallenge]
  entryPoint = "http"
  OnHostRule = true

[docker]
endpoint = "unix:///var/run/docker.sock"
domain = "domain.com"
watch = true
exposedbydefault = false
docker-compose.yml
version: '2'
services:
  traefik:
    image: traefik:1.5-alpine
    command: --web
    ports:
      - "80:80"
      - "443:443"
    restart: always
    labels:
      - "traefik.enable=true"
      - "traefik.backend=sub"
      - "traefik.frontend.rule=Host:sub.domain.com"
      - "traefik.port=8080"
      - "traefik.frontend.auth.basic=user:htpasswd"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
      - "./traefik.toml:/traefik.toml"
      - "./acme.json:/acme.json"
    container_name: traefik
    networks:
      - default
You must change your configuration like this:
[acme]
email = "name#domain.com"
storage = "acme.json"
entryPoint = "https"
OnHostRule = true # <-----------
  [acme.httpChallenge]
  entryPoint = "http"

Cannot add SSL certificate on ELB using Ansible

I'm trying to create an Elastic Load Balancer and use an existing SSL certificate to secure it, as follows:
---
- name: Setting up Elastic Load Balancer
  hosts: local
  connection: local
  gather_facts: False
  vars_files:
    - vars/global_vars.yml
  tasks:
    - local_action:
        name: "TestLoadbalancer"
        module: ec2_elb_lb
        state: present
        region: 'us-east-1'
        zones:
          - us-east-1c
        listeners:
          - protocol: http
            load_balancer_port: 80
            instance_port: 80
        listeners:
          - protocol: ssl
            load_balancer_port: 443
            instance_protocol: tcp
            instance_port: 7286
            ssl_certificate_id: "arn:aws:iam::xxxxxx:server-certificate/LB_cert"
    - local_action:
        name: "TestLoadbalancer"
        module: ec2_elb_lb
        state: present
        region: 'us-east-1'
        zones:
          - us-east-1c
        listeners:
          - protocol: http
            load_balancer_port: 80
            instance_port: 80
        health_check:
          ping_protocol: http
          ping_port: 80
          ping_path: "/"
          response_timeout: 5
          interval: 30
          unhealthy_threshold: 2
          healthy_threshold: 10
But it is not adding the SSL/TCP listener.
The other listener is added and is visible in the console: HTTP/80.
Why is the SSL one missing? Am I missing any required parameters?
You are adding multiple keys with the name "listeners":
listeners:
  - protocol: http
    ...
listeners:
  - protocol: ssl
    ...
But glancing at the example in the documentation, it should be:
listeners:
  - protocol: http
    ...
  - protocol: ssl
    ...
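Putting that together, a merged version of the first task might look like this (a sketch; the values are copied from the question and not verified against a live account). In YAML, a mapping may only contain a key once, so the second `listeners:` key silently replaces the first; a single list keeps both:

```yaml
- local_action:
    module: ec2_elb_lb
    name: "TestLoadbalancer"
    state: present
    region: 'us-east-1'
    zones:
      - us-east-1c
    listeners:
      # both listeners under one "listeners" key
      - protocol: http
        load_balancer_port: 80
        instance_port: 80
      - protocol: ssl
        load_balancer_port: 443
        instance_protocol: tcp
        instance_port: 7286
        ssl_certificate_id: "arn:aws:iam::xxxxxx:server-certificate/LB_cert"
```

With a single `listeners` key, both listeners survive YAML parsing and should both appear on the ELB.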