I am testing Traefik 2.0 in Azure Service Fabric, and I am getting duplicate entries for HTTP routers, services, and middlewares. I am sharing the traefik.yaml and dyn.yaml files that I used. Please let me know if any additional configuration is required.
traefik.yaml
global:
  checkNewVersion: false
  sendAnonymousUsage: false
defaultEntryPoints: "web"
entryPoints:
  web:
    address: ":443"
    http:
      middlewares:
        - test-errorpage@file
ping:
  entryPoint: "web"
api:
  dashboard: true
log:
  level: DEBUG
  filePath: "log/traefik.log"
  format: json
serversTransport:
  insecureSkipVerify: true
providers:
  file:
    directory: dyn.yaml
    watch: true
dyn.yaml
http:
  routers:
    dashboard:
      rule: PathPrefix(`/api`) || PathPrefix(`/dashboard`)
      service: api@internal
  middlewares:
    test-errorpage:
      errors:
        status:
          - "400"
          - "402-599"
        service: serviceError
        query: "/Maintenance"
  services:
    serviceError:
      loadBalancer:
        sticky:
          cookie:
            httpOnly: true
            name: stickycookie
            sameSite: none
            secure: true
        servers:
          - url: https://demo.com/Maintenance/
tls:
  certificates:
    - certFile: "./traefik.crt"
      keyFile: "./traefik.key"
    - certFile: ./sfc.crt
      keyFile: ./sfc.key
  options:
    foobar:
      minVersion: VersionTLS12
      cipherSuites:
        - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
        - TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
        - TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305
        - TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
        - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
Dashboard Screenshot
Pinger ServiceManifest
<Extensions>
  <Extension Name="traefik">
    <Labels xmlns="http://schemas.microsoft.com/2015/03/fabact-no-schema">
      <Label Key="traefik.enable">true</Label>
      <Label Key="traefik.http.PingerEndpoint0">true</Label>
      <Label Key="traefik.http.PingerEndpoint0.router.rule">PathPrefix(`/pinger`)</Label>
      <Label Key="traefik.http.PingerEndpoint0.middlewares.1.stripPrefix.prefixes">/pinger</Label>
      <Label Key="traefik.http.PingerEndpoint0.router.tls.options">foobar</Label>
      <Label Key="traefik.http.PingerEndpoint0.service.loadbalancer.healthcheck.path">/</Label>
      <Label Key="traefik.http.PingerEndpoint0.service.loadbalancer.healthcheck.interval">10s</Label>
      <Label Key="traefik.http.PingerEndpoint0.service.loadbalancer.healthcheck.scheme">http</Label>
    </Labels>
  </Extension>
</Extensions>
Related
I deployed MinIO and its console in K8s, using a ClusterIP service to expose ports 9000 and 5000.
Traefik listens on ports 80 and 5000, forwarding requests to the minio service (ClusterIP).
Requesting the console through port 5000 works fine.
Requesting the console through port 80 shows the console page, but the requests return 404 in the browser.
apiVersion: v1
kind: Service
metadata:
  namespace: {{ .Release.Namespace }}
  name: minio-headless
  labels:
    app: minio-headless
spec:
  type: ClusterIP
  clusterIP: None
  ports:
    - name: server
      port: 9000
      targetPort: 9000
    - name: console
      port: 5000
      targetPort: 5000
  selector:
    app: minio
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: ingress-route-minio
  namespace: {{ .Release.Namespace }}
spec:
  entryPoints:
    - minio
    - web
  routes:
    - kind: Rule
      match: Host(`minio-console.{{ .Release.Namespace }}.k8s.zszc`)
      priority: 10
      services:
        - kind: Service
          name: minio-headless
          namespace: {{ .Release.Namespace }}
          port: 5000
          responseForwarding:
            flushInterval: 1ms
          scheme: http
          strategy: RoundRobin
          weight: 10
traefik access log
{
  "ClientAddr": "192.168.4.250:55485",
  "ClientHost": "192.168.4.250",
  "ClientPort": "55485",
  "ClientUsername": "-",
  "DownstreamContentSize": 19,
  "DownstreamStatus": 404,
  "Duration": 688075,
  "OriginContentSize": 19,
  "OriginDuration": 169976,
  "OriginStatus": 404,
  "Overhead": 518099,
  "RequestAddr": "minio-console.etb-0-0-1.k8s.zszc",
  "RequestContentSize": 0,
  "RequestCount": 1018,
  "RequestHost": "minio-console.etb-0-0-1.k8s.zszc",
  "RequestMethod": "GET",
  "RequestPath": "/api/v1/login",
  "RequestPort": "-",
  "RequestProtocol": "HTTP/1.1",
  "RequestScheme": "http",
  "RetryAttempts": 0,
  "RouterName": "traefik-traefik-dashboard-6e26dcbaf28841493448@kubernetescrd",
  "StartLocal": "2023-01-27T13:20:06.337540015Z",
  "StartUTC": "2023-01-27T13:20:06.337540015Z",
  "entryPointName": "web",
  "level": "info",
  "msg": "",
  "time": "2023-01-27T13:20:06Z"
}
It looks to me like the request for /api is conflicting with rules for the Traefik dashboard. If you look at the access log in your question, we see:
"RouterName": "traefik-traefik-dashboard-6e26dcbaf28841493448#kubernetescrd",
If you have installed Traefik from the Helm chart, it installs an IngressRoute with the following rules:
- kind: Rule
  match: PathPrefix(`/dashboard`) || PathPrefix(`/api`)
  services:
    - kind: TraefikService
      name: api@internal
In theory those are bound only to the traefik entrypoint, but it looks like you may have customized your entrypoint configuration.
Take a look at the IngressRoute resource for your Traefik dashboard and ensure that it's not sharing an entrypoint with minio.
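For reference, a dashboard IngressRoute pinned to the dedicated traefik entrypoint might look like the sketch below; the metadata name and namespace are assumptions, and the point is only that its entryPoints list must not include the entrypoint serving minio traffic (web here):
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  # name and namespace are assumptions for illustration
  name: traefik-dashboard
  namespace: traefik
spec:
  entryPoints:
    - traefik   # dedicated entrypoint, not `web` or `minio`
  routes:
    - kind: Rule
      match: PathPrefix(`/dashboard`) || PathPrefix(`/api`)
      services:
        - kind: TraefikService
          name: api@internal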
We are using an nginx ingress controller as the ELB on AWS.
For the server we have a Node.js (Express) app on Kubernetes.
We also store the session in Redis.
On the client side, the client never receives the cookies from the server.
Please help with a configuration that solves this bug.
The app.js:
app.enable('trust proxy');
app.use(session({
  // store: new RedisStore({ client: redisClient }),
  secret: 'To-change!',
  domain: sessionDomain,
  resave: true,
  proxy: true,
  saveUninitialized: true,
  cookie: {
    sameSite: 'none',
    secure: true,
    httpOnly: true,
    maxAge: 3600 * 1000 * 24
  }
}));
The ingress.yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: basic-routing
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
    kubernetes.io/ingress.class: nginx
spec:
  ingressClassName: nginx
  rules:
    - host: my-domain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp-webapp
                port:
                  number: 80
Please help me solve this case.
I am using kube-prometheus-stack to monitor my system on GCP. Due to new requirements, all my ingresses need to be secured with TLS. As a first step I wanted to make the Grafana web page available via HTTPS.
I created a TLS secret and updated my values.yaml. After helm upgrade everything seems to work fine, but the page is still available via HTTP only.
I hope you can support me here.
grafana:
  enabled: true
  namespaceOverride: ""
  ## Deploy default dashboards.
  ##
  defaultDashboardsEnabled: true
  adminPassword: prom-operator
  ingress:
    ## If true, Grafana Ingress will be created
    ##
    enabled: true
    ## Annotations for Grafana Ingress
    ##
    # annotations: {
    #   kubernetes.io/ingress.class: gce-internal
    #   kubernetes.io/tls-acme: "true"
    # }
    ## Labels to be added to the Ingress
    ##
    labels: {}
    ## Hostnames.
    ## Must be provided if Ingress is enabled.
    ##
    # hosts:
    #   - grafana.domain.com
    hosts: []
    ## Path for Grafana ingress
    # path: /*
    ## TLS configuration for Grafana Ingress
    ## Secret must be manually created in the namespace
    ##
    tls:
      - secretName: monitoring-tls-secret
      # hosts:
      #   - grafana.example.com
In the meantime I decided to create the ingress a different way: I created an SSL certificate resource and tried to use that instead (see the sketch below).
When starting up I get the failure shown further down, which is strange, as kubernetes.io/ingress.allow-http is configured.
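For context, the certificate resource was created roughly like this (a sketch only; the file names and region are placeholders, and gce-internal load balancers need a regional certificate):
gcloud compute ssl-certificates create monitoring-ssl \
  --certificate=monitoring.crt \
  --private-key=monitoring.key \
  --region=europe-west3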
kubectl describe ingress monitoring-cl2-grafana -n monitoring-cl2
Name:             monitoring-cl2-grafana
Namespace:        monitoring-cl2
Address:          x.x.x.x
Default backend:  default-http-backend:80 (y.y.y.y:8080)
Rules:
  Host        Path  Backends
  ----        ----  --------
  *
              /*    monitoring-cl2-grafana:80 (<deleted>)
Annotations:  ingress.gcp.kubernetes.io/pre-shared-cert: monitoring-ssl
              ingress.kubernetes.io/backends:
                {"k8s1-613c3440-kube-system-default-http-backend-80-240d1018":"HEALTHY","k8s1-613c3440-mtx-monitoring--mtx-monitoring-cl2-gra-8-f146f2b2":...
              ingress.kubernetes.io/https-forwarding-rule: k8s2-fs-3s1rnwzg-monitoring--monitoring-cl2-gr-hgx28ojy
              ingress.kubernetes.io/https-target-proxy: k8s2-ts-3s1rnwzg-monitoring--monitoring-cl2-gr-hgx28ojy
              ingress.kubernetes.io/ssl-cert: monitoring-ssl
              ingress.kubernetes.io/url-map: k8s2-um-3s1rnwzg-monitoring--monitoring-cl2-gr-hgx28ojy
              kubernetes.io/ingress.allow-http: false
              kubernetes.io/ingress.class: gce-internal
              kubernetes.io/ingress.global-static-ip-name: grafana-cl2
              meta.helm.sh/release-name: monitoring-cl2
              meta.helm.sh/release-namespace: monitoring-cl2
Events:
  Type     Reason                    Age                  From                     Message
  ----     ------                    ---                  ----                     -------
  Warning  Sync                      34m (x12 over 35m)   loadbalancer-controller  Error syncing to GCP: error running load balancer syncing routine: loadbalancer 3s1rnwzg-mtx-monitoring--monitoring-cl2-gr-hgx28ojy does not exist: error invalid internal ingress https config
  Warning  WillNotConfigureFrontend  26m (x18 over 35m)   loadbalancer-controller  gce-internal Ingress class does not currently support both HTTP and HTTPS served on the same IP (kubernetes.io/ingress.allow-http must be false when using HTTPS).
  Normal   Sync                      3m34s                loadbalancer-controller  TargetProxy "k8s2-ts-3s1rnwzg-monitoring--monitoring-cl2-gr-hgx28ojy" certs updated
  Normal   Sync                      3m29s (x9 over 35m)  loadbalancer-controller  Scheduled for sync
grafana:
  image:
    repository: europe-west3-docker.pkg.dev/<deleted info>/grafana
    tag: 7.5.5
    sha: ""
  sidecar:
    image:
      repository: europe-west3-docker.pkg.dev/<deleted info>/prometheus/k8s-sidecar
      tag: 1.10.7
      sha: ""
    imagePullPolicy: IfNotPresent
  service:
    enabled: true
    type: NodePort
    annotations: {
      cloud.google.com/neg: '{"ingress": true}'
    }
    labels: {}
    portName: service
  ingress:
    enabled: true
    path: /*
    pathType: ImplementationSpecific
    annotations: {
      ingress.gcp.kubernetes.io/pre-shared-cert: "monitoring-ssl",
      kubernetes.io/ingress.allow-http: "false",
      kubernetes.io/ingress.class: "gce-internal",
      kubernetes.io/ingress.global-static-ip-name: "grafana-cl2"
    }
WORKING NOW WITH THE FOLLOWING CONFIG:
grafana:
  image:
    repository: europe-west3-docker.pkg.dev/del/mtx-monitoring/prometheus/grafana
    tag: 7.5.5
    sha: ""
  sidecar:
    image:
      repository: europe-west3-docker.pkg.dev/del/mtx-monitoring/prometheus/k8s-sidecar
      tag: 1.10.7
      sha: ""
    imagePullPolicy: IfNotPresent
  service:
    enabled: true
    type: NodePort
    # port: 80
    # targetPort: 3000
    annotations: {
      cloud.google.com/neg: '{"ingress": true}'
    }
    labels: {}
    portName: service
  ingress:
    enabled: true
    path: /*
    pathType: ImplementationSpecific
    annotations: {
      ingress.gcp.kubernetes.io/pre-shared-cert: "monitoring-ssl",
      kubernetes.io/ingress.allow-http: "false",
      kubernetes.io/ingress.class: "gce-internal",
      kubernetes.io/ingress.global-static-ip-name: "grafana-cl2"
    }
spec:
  rules:
    - host: grafana.monitoring.com
      http:
        paths:
          - backend:
              service:
                name: mtx-monitoring-cl2-grafana
                port:
                  number: 80
I have two services in Kubernetes which are exposed through the nginx ingress controller. Service a wants to invoke content on domain b, but at the same time both services need to be authenticated through Google using the oauth2-proxy service.
So I have managed to enable CORS, and a can invoke b without any issues. But the problem is that when I include the authentication as well, I constantly get:
Access to manifest at 'https://accounts.google.com/o/oauth2/auth?access_type=offline&approval_prompt=force&client_id=<obscured>.apps.googleusercontent.com&redirect_uri=https%3A%2F%2Fa.example.com%2Foauth2%2Fcallback&response_type=code&scope=profile+email&state=<obscured>manifest.json' (redirected from 'https://a.example.com/manifest.json') from origin 'https://a.example.com' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource.
Here are the ingresses:
Service a
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: "true"
    nginx.ingress.kubernetes.io/auth-url: "https://$host/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://$host/oauth2/start?rd=$escaped_request_uri"
    nginx.ingress.kubernetes.io/whitelist-source-range: 0.0.0.0/0
  name: a
spec:
  rules:
    - host: a.example.com
      http:
        paths:
          - backend:
              service:
                name: a-svc
                port:
                  number: 8080
            path: /
            pathType: Prefix
  tls:
    - hosts:
        - a.example.com
      secretName: a-tls
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/whitelist-source-range: 0.0.0.0/0
  labels:
    k8s-app: oauth2-proxy
  name: a-oauth
spec:
  rules:
    - host: a.example.com
      http:
        paths:
          - backend:
              service:
                name: oauth2-proxy
                port:
                  number: 4180
            path: /oauth2
            pathType: Prefix
  tls:
    - hosts:
        - a.example.com
      secretName: a-oauth-tls
Service b
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: "true"
    nginx.ingress.kubernetes.io/auth-url: "https://$host/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://$host/oauth2/start?rd=$escaped_request_uri"
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/whitelist-source-range: 0.0.0.0/0
  name: b
spec:
  rules:
    - host: b.example.com
      http:
        paths:
          - backend:
              service:
                name: b-svc
                port:
                  number: 8080
            path: /
            pathType: Prefix
  tls:
    - hosts:
        - b.example.com
      secretName: b-tls
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/auth-url: "https://$host/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://$host/oauth2/start?rd=$escaped_request_uri"
    nginx.ingress.kubernetes.io/whitelist-source-range: 0.0.0.0/0
  labels:
    k8s-app: oauth2-proxy
  name: b-oauth
spec:
  rules:
    - host: b.example.com
      http:
        paths:
          - backend:
              service:
                name: oauth2-proxy
                port:
                  number: 4180
            path: /oauth2
            pathType: Prefix
  tls:
    - hosts:
        - b.example.com
      secretName: b-oauth-tls
Obviously, there is only one difference between these two, and that is the CORS annotation nginx.ingress.kubernetes.io/enable-cors: "true" in the service b ingresses.
I am not sure what is causing the issue, but I am guessing that the authentication done against Google in service a is not being passed on the CORS request, so that service b could also be authenticated with the same token/credentials.
What am I doing wrong and how can I resolve this?
Based on the documentation, it looks like you are missing some of the annotations needed here.
Try adding a fragment like this to the service's ingress configuration:
nginx.ingress.kubernetes.io/enable-cors: "true"
nginx.ingress.kubernetes.io/cors-allow-methods: "PUT, GET, POST, OPTIONS"
nginx.ingress.kubernetes.io/cors-allow-credentials: "true"
nginx.ingress.kubernetes.io/configuration-snippet: |
  more_set_headers "Access-Control-Allow-Origin: $http_origin";
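For placement, these annotations go under metadata.annotations of the ingress for service b; a minimal sketch reusing the names from the question:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: b
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-methods: "PUT, GET, POST, OPTIONS"
    nginx.ingress.kubernetes.io/cors-allow-credentials: "true"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      more_set_headers "Access-Control-Allow-Origin: $http_origin";
spec:
  rules:
    - host: b.example.com
      http:
        paths:
          - backend:
              service:
                name: b-svc
                port:
                  number: 8080
            path: /
            pathType: Prefix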
But it can also be a problem with CORS in the cluster itself. In that case you should add the following line:
--cors-allowed-origins=["http://*"]
to the /etc/default/kube-apiserver or /etc/kubernetes/manifests/kube-apiserver.yaml file (depending on the location of your kube-apiserver configuration file).
After that, restart the kube-apiserver.
See also this similar question.
So, it turns out that the whole Kubernetes and nginx config was correct, and the solution was to make use of the saved cookie on the client side when invoking a CORS request to the second service.
Essentially, this was already answered here: Set cookies for cross origin requests
Excerpt from the answer:
Front-end (client): set the XMLHttpRequest.withCredentials flag to true; this can be achieved in different ways depending on the request-response library used:
jQuery 1.5.1: xhrFields: {withCredentials: true}
ES6 fetch(): credentials: 'include'
axios: withCredentials: true
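As a concrete illustration, a cross-origin call from service a's front end could look like this minimal sketch (the endpoint path is hypothetical):
// Cross-origin request from https://a.example.com to service b.
// Without `credentials: 'include'`, the browser will not attach the
// oauth2-proxy session cookie, and the auth check on b will fail.
fetch('https://b.example.com/some/endpoint', { // hypothetical endpoint
  method: 'GET',
  credentials: 'include' // send cookies with the cross-origin request
})
  .then((response) => response.json())
  .then((data) => console.log(data))
  .catch((error) => console.error(error));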
Middlewares are not being detected, and therefore paths are not being stripped, resulting in 404s from the backend API.
The middleware exists in the k8s apps namespace:
$ kubectl get -n apps middlewares
NAME AGE
traefik-middlewares-backend-users-service 1d
Configuration for the middleware and ingress route:
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  annotations:
    kubernetes.io/ingress.class: traefik
  name: apps-services
  namespace: apps
spec:
  entryPoints:
    - web
  routes:
    - kind: Rule
      match: Host(`example.com`) && PathPrefix(`/users/`)
      middlewares:
        - name: traefik-middlewares-backend-users-service
      priority: 0
      services:
        - name: backend-users-service
          port: 8080
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: traefik-middlewares-backend-users-service
  namespace: apps
spec:
  stripPrefix:
    prefixes:
      - /users
Static configuration
global:
  checkNewVersion: true
  sendAnonymousUsage: true
entryPoints:
  http:
    address: ":80"
  traefik:
    address: ":8080"
providers:
  providersThrottleDuration: 2s
  kubernetesIngress: {}
api:
  # TODO: make this secure later
  insecure: true
ping:
  entryPoint: http
log: {}
The Traefik dashboard shows no middlewares.
Spring Boot 404 page. The route is on example.com/actuator/health.
The /users prefix is not being stripped. This worked perfectly fine for me in Traefik v1.
Note: actual domain has been replaced with example.com and domain.com in the examples.
To get this working, I had to:
Add the Kubernetes CRD provider with the namespaces where the custom k8s CRDs for Traefik v2 exist
Add the TLSOption resource definition
Update the cluster role for Traefik to have permissions for listing and watching the new v2 resources (see the RBAC sketch after this list)
Make sure all namespaces with the new resources are configured
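For the cluster role step, the additions were along these lines (a sketch rather than the exact manifest; the rule covers the Traefik v2 custom resources):
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: traefik-ingress-controller
rules:
  # allow Traefik to discover its v2 custom resources
  - apiGroups:
      - traefik.containo.us
    resources:
      - middlewares
      - ingressroutes
      - ingressroutetcps
      - tlsoptions
      - traefikservices
    verbs:
      - get
      - list
      - watch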
Traefik Static Configuration File
providers:
  providersThrottleDuration: 2s
  kubernetesCRD:
    namespaces:
      - apps
      - traefik
TLSOption CRD
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: tlsoptions.traefik.containo.us
spec:
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: TLSOption
    plural: tlsoptions
    singular: tlsoption
  scope: Namespaced
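With the CRD registered, a concrete TLSOption object can then be created in one of the watched namespaces; a minimal hypothetical instance:
apiVersion: traefik.containo.us/v1alpha1
kind: TLSOption
metadata:
  name: default        # name and namespace are hypothetical
  namespace: apps
spec:
  minVersion: VersionTLS12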
Updated Static Configuration for Traefik
global:
  checkNewVersion: true
  sendAnonymousUsage: true
entryPoints:
  http:
    address: ":80"
  traefik:
    address: ":8080"
providers:
  providersThrottleDuration: 2s
  kubernetesCRD:
    namespaces:
      - apps
      - traefik
api:
  # TODO: make this secure later
  insecure: true
ping:
  entryPoint: http
log: {}