For a .NET Core application, I need the internal IP address of the nginx ingress so that my application can trust the proxy and process its forwarded headers.
This is done with the following code in my application:
forwardedHeadersOptions.KnownProxies.Add(IPAddress.Parse("10.244.0.16"));
Now it is hard-coded. But how can I get this IP address into an environment variable for my container?
It seems like the given IP address is the endpoint of the ingress-nginx service in the ingress-nginx namespace:
❯ kubectl describe service ingress-nginx -n ingress-nginx
Name: ingress-nginx
Namespace: ingress-nginx
Labels: app.kubernetes.io/name=ingress-nginx
app.kubernetes.io/part-of=ingress-nginx
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app.kubernetes.io/name":"ingress-nginx","app.kubernetes.io/par...
Selector: app.kubernetes.io/name=ingress-nginx,app.kubernetes.io/part-of=ingress-nginx
Type: LoadBalancer
IP: 10.0.91.124
LoadBalancer Ingress: 40.127.224.177
Port: http 80/TCP
TargetPort: http/TCP
NodePort: http 30756/TCP
Endpoints: 10.244.0.16:80
Port: https 443/TCP
TargetPort: https/TCP
NodePort: https 31719/TCP
Endpoints: 10.244.0.16:443
Session Affinity: None
External Traffic Policy: Local
HealthCheck NodePort: 32003
Events: <none>
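As a side note, the same endpoint address can be read straight from the Endpoints object; a sketch (this only confirms where the IP comes from, it does not by itself put it into an environment variable):

kubectl get endpoints ingress-nginx -n ingress-nginx -o jsonpath='{.subsets[0].addresses[0].ip}'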
FYI: this is my deployment:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: uwgazon-web
spec:
replicas: 1
paused: true
template:
metadata:
labels:
app: uwgazon-web
spec:
containers:
- name: uwgazon-web
image: uwgazon/web
ports:
- containerPort: 80
resources:
requests:
memory: "128Mi"
cpu: "250m"
limits:
memory: "256Mi"
cpu: "500m"
env:
- name: UWGAZON_RECAPTCHA__SITEKEY
valueFrom:
secretKeyRef:
name: uwgazon-recaptcha
key: client-id
- name: UWGAZON_RECAPTCHA__SERVERKEY
valueFrom:
secretKeyRef:
name: uwgazon-recaptcha
key: client-secret
- name: UWGAZON_MAILGUN__BASEADDRESS
valueFrom:
secretKeyRef:
name: uwgazon-mailgun
key: base-address
- name: UWGAZON_APPLICATIONINSIGHTS__INSTRUMENTATIONKEY
valueFrom:
secretKeyRef:
name: uwgazon-appinsights
key: instrumentationkey
- name: APPINSIGHTS_INSTRUMENTATIONKEY
valueFrom:
secretKeyRef:
name: uwgazon-appinsights
key: instrumentationkey
- name: UWGAZON_MAILGUN__APIKEY
valueFrom:
secretKeyRef:
name: uwgazon-mailgun
key: api-key
- name: UWGAZON_MAILGUN__TOADDRESS
valueFrom:
secretKeyRef:
name: uwgazon-mailgun
key: to-address
- name: UWGAZON_BLOG__NAME
valueFrom:
configMapKeyRef:
name: uwgazon-config
key: sitename
- name: UWGAZON_BLOG__OWNER
valueFrom:
configMapKeyRef:
name: uwgazon-config
key: owner
- name: UWGAZON_BLOG__DESCRIPTION
valueFrom:
configMapKeyRef:
name: uwgazon-config
key: description
And my ingress configuration:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: uwgazon-web-ingress
annotations:
cert-manager.io/issuer: "uwgazon-tls-issuer"
spec:
tls:
- hosts:
- uwgazon.sdsoftware.be
secretName: uwgazon-sdsoftware-be-tls
rules:
- host: uwgazon.sdsoftware.be
http:
paths:
- backend:
serviceName: uwgazon-web
servicePort: 80
I found the solution to this, specific to ASP.NET Core.
First of all, you MUST whitelist the proxy, otherwise the forwarded headers middleware will not work.
I found out you can actually whitelist an entire network, so that you trust everything inside your cluster. My cluster's pod network falls inside the 10.0.0.0/8 range (subnet mask 255.0.0.0). Trusting it can be done with the following code:
services.Configure<ForwardedHeadersOptions>(forwardedHeadersOptions =>
{
    forwardedHeadersOptions.ForwardedHeaders = ForwardedHeaders.XForwardedFor | ForwardedHeaders.XForwardedProto;
    // Trust the whole 10.0.0.0/8 range instead of a single, ever-changing pod IP.
    forwardedHeadersOptions.KnownNetworks.Add(new IPNetwork(IPAddress.Parse("10.0.0.0"), 8));
});
// Note: these options only take effect when the forwarded headers middleware actually runs,
// e.g. via app.UseForwardedHeaders() in the request pipeline.
Related
I'm trying to deploy an ASP.NET Core API and make it available from outside the cluster through an ingress. I have followed the steps mentioned in the learn page. All the steps are working fine; however, I'm unable to access my ingress on the route /api/opportunities/. Below I'm describing my K8s files, might I be missing something?
apiVersion: apps/v1
kind: Deployment
metadata:
name: opportunities-api
spec:
replicas: 1
selector:
matchLabels:
component: opportunities-api
template:
metadata:
labels:
component: opportunities-api
spec:
containers:
- name: opportunities-api
image: mycontainer.azurecr.io/opportunities-api:{BUILD_NO}
imagePullPolicy: Always
ports:
- containerPort: 80
apiVersion: v1
kind: Service
metadata:
name: opportunities-api
spec:
ports:
- port : 80
protocol: TCP
targetPort: 80
selector:
component: opportunities-api
type: ClusterIP
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: opportunities-api
annotations:
nginx.ingress.kubernetes.io/ssl-redirect: "false"
nginx.ingress.kubernetes.io/use-regex: "true"
nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
ingressClassName: nginx
rules:
- http:
paths:
- path: /api(/|$)(.*)
pathType: Prefix
backend:
service:
name: opportunities-api
port:
number: 80
I see that the host field is missing in the above ingress YAML. Did you try adding .spec.rules.host to the ingress as below to see if it helps?
As per the ingress-nginx documentation, this is one of its restrictions.
Also, if the AKS version is >= 1.24, can you check what value is set for the annotation service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path on the ingress controller Service? It should be /healthz, as discussed in "AKS Ingress-Nginx ingress controller failing to route by host" (a sketch of that Service annotation follows the example below).
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: opportunities-api
annotations:
nginx.ingress.kubernetes.io/ssl-redirect: "false"
nginx.ingress.kubernetes.io/use-regex: "true"
nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
ingressClassName: nginx
rules:
- host: abc.com #your host name here
http:
paths:
- path: /api(/|$)(.*)
pathType: Prefix
backend:
service:
name: opportunities-api
port:
number: 80
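For reference, a minimal sketch of that annotation on the ingress-nginx controller Service (the Service name ingress-nginx-controller and its namespace are assumptions based on the default install; adjust to your setup):

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
  annotations:
    # AKS >= 1.24: the Azure load balancer should probe this path instead of /
    service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path: /healthz
spec:
  type: LoadBalancer
  ...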
I'm trying to build ALB -> Kubernetes -> Dex using the AWS Load Balancer Controller. As a result, I have an ALB that correctly registers the instances in the target group, but the instances are reported as unhealthy.
The Load Balancer Controller uses 31845 (the Service's NodePort) as the health check port. I also tried port 5556, but the targets are still unhealthy.
So I assume that setting is correct, but I'm not sure.
Another possibility is that the Dex container isn't set up correctly.
And yet another possibility: I have configured everything the wrong way.
Has anyone already configured Dex this way who could give me a hint?
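From what I understand (an assumption on my part, not verified): because the ingress below uses target-type: instance and healthcheck-port: traffic-port, the ALB probes the Service's NodePort (31845 here), which forwards to Dex on 5556 at the healthcheck-path / set in the annotations. A sketch of pointing the probe at a path Dex actually serves, assuming the Dex version in use exposes /healthz on its web port:

alb.ingress.kubernetes.io/healthcheck-port: traffic-port
alb.ingress.kubernetes.io/healthcheck-path: /healthz   # assumption: verify the health endpoint for your Dex version
alb.ingress.kubernetes.io/success-codes: '200'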
Dex service
apiVersion: v1
kind: Service
metadata:
name: dex
...
spec:
ports:
- name: http
protocol: TCP
appProtocol: http
port: 5556
targetPort: http
nodePort: 31845
...
selector:
app.kubernetes.io/instance: dex
app.kubernetes.io/name: dex
clusterIP: 172.20.97.132
clusterIPs:
- 172.20.97.132
type: NodePort
sessionAffinity: None
externalTrafficPolicy: Cluster
ipFamilies:
- IPv4
ipFamilyPolicy: SingleStack
internalTrafficPolicy: Cluster
DEX pod
containerStatuses:
- name: dex
state:
running:
startedAt: '2022-09-19T17:41:43Z'
...
containers:
- name: dex
image: ghcr.io/dexidp/dex:v2.34.0
args:
- dex
- serve
- '--web-http-addr'
- 0.0.0.0:5556
- '--telemetry-addr'
- 0.0.0.0:5558
- /etc/dex/config.yaml
ports:
- name: http
containerPort: 5556
protocol: TCP
- name: telemetry
containerPort: 5558
protocol: TCP
env:
- name: ARGO_WORKFLOWS_SSO_CLIENT_SECRET
Load Balancer Controller ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ${name_http_ingress}
namespace: ${namespace}
labels:
app.kubernetes.io/component: server
app.kubernetes.io/instance: argo-cd
app.kubernetes.io/part-of: argocd
app.kubernetes.io/name: argocd-server
annotations:
alb.ingress.kubernetes.io/ingress.class: alb
alb.ingress.kubernetes.io/backend-protocol: HTTP
alb.ingress.kubernetes.io/backend-protocol-version: HTTP1
alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
alb.ingress.kubernetes.io/healthcheck-port: traffic-port
alb.ingress.kubernetes.io/healthcheck-path: /
alb.ingress.kubernetes.io/healthcheck-timeout-seconds: '10'
alb.ingress.kubernetes.io/unhealthy-threshold-count: '3'
alb.ingress.kubernetes.io/success-codes: 200,301,302,307
alb.ingress.kubernetes.io/conditions.argogrpc: >-
[{"field":"http-header","httpHeaderConfig":{"httpHeaderName": "Content-Type", "values":["^application/grpc.*$"]}}]
alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
alb.ingress.kubernetes.io/actions.ssl-redirect: >-
{"type":"redirect","redirectConfig":{"port":"443","protocol":"HTTPS","statusCode":"HTTP_301"}}
# external-dns.alpha.kubernetes.io/hostname: ${domain_name_public}
alb.ingress.kubernetes.io/certificate-arn: ${domain_certificate}
# alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-TLS-1-2-Ext-2018-06
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/load-balancer-name: ${name_http_ingress}
alb.ingress.kubernetes.io/target-type: instance
# alb.ingress.kubernetes.io/target-type: ip # require to enable sticky sessions ,stickiness.enabled=true,stickiness.lb_cookie.duration_seconds=60
alb.ingress.kubernetes.io/target-group-attributes: load_balancing.algorithm.type=least_outstanding_requests
alb.ingress.kubernetes.io/target-node-labels: ${tolerations_key}=${tolerations_value}
alb.ingress.kubernetes.io/tags: Environment=${tags_env},Restricted=false,Customer=customer,Project=ops,Name=${name_http_ingress}
alb.ingress.kubernetes.io/load-balancer-attributes: routing.http2.enabled=true,idle_timeout.timeout_seconds=180
spec:
ingressClassName: alb
tls:
- hosts:
- ${domain_name_public}
- ${domain_name_public_dex}
rules:
- host: ${domain_name_public}
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: ssl-redirect
port:
name: use-annotation
- host: ${domain_name_public_dex}
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: ssl-redirect
port:
name: use-annotation
- host: ${domain_name_public_dex}
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: dex
port:
number: 5556
With the objects below, this worked fine getting a staging cert from Let's Encrypt, but now that I've cut over to prod it does not work and has been hanging for more than 12 hours.
The public IP is the correct one (I can navigate to it via https://website.gg and get the expected cert errors, same with ping etc.).
All subdomains are correctly routed to this same public/external IP.
The certificate issuance has been hanging for 12 hours and seems to have no chance of resolving.
Relevant objects:
Certificate:
apiVersion: cert-manager.io/v1alpha3
kind: Certificate
metadata:
creationTimestamp: "2022-01-21T06:01:42Z"
generation: 1
labels:
app.kubernetes.io/managed-by: Helm
name: mysecret-tls
namespace: mine
ownerReferences:
- apiVersion: extensions/v1beta1
blockOwnerDeletion: true
controller: true
kind: Ingress
name: mine-api-ingress
uid: fbbfdbac-5480-469e-8230-6d018d5ef151
resourceVersion: "11962900"
uid: 2710c043-7f95-4576-982f-03b702038105
spec:
dnsNames:
- website.gg
- '*.website.gg'
issuerRef:
group: cert-manager.io
kind: ClusterIssuer
name: letsencrypt-prod
secretName: mysecret-tls
status:
conditions:
- lastTransitionTime: "2022-01-21T06:01:42Z"
message: Waiting for CertificateRequest "mysecret-tls-3406177836" to complete
reason: InProgress
status: "False"
type: Ready
ClusterIssuer:
apiVersion: cert-manager.io/v1alpha3
kind: ClusterIssuer
metadata:
annotations:
meta.helm.sh/release-name: mine-api
meta.helm.sh/release-namespace: default
creationTimestamp: "2022-01-17T03:43:44Z"
generation: 1
labels:
app.kubernetes.io/managed-by: Helm
name: letsencrypt-prod
resourceVersion: "10876278"
uid: 27b34bfe-16e7-4d51-9229-ad12abb52df6
spec:
acme:
email: myemail#email.com
privateKeySecretRef:
name: letsencrypt-prod
server: https://acme-v02.api.letsencrypt.org/directory
solvers:
- http01:
ingress:
class: nginx
status:
acme:
lastRegisteredEmail: myemail#email.com
uri: https://acme-v02.api.letsencrypt.org/acme/acct/367378310
conditions:
- lastTransitionTime: "2022-01-17T03:43:44Z"
message: The ACME account was registered with the ACME server
reason: ACMEAccountRegistered
status: "True"
type: Ready
CertificateRequest:
Issuer Ref:
Group: cert-manager.io
Kind: ClusterIssuer
Name: letsencrypt-prod
Status:
Conditions:
Last Transition Time: 2022-01-21T06:01:44Z
Message: Waiting on certificate issuance from order mine/mysecret-tls-3406177836-1310394537: "pending"
Reason: Pending
Status: False
Type: Ready
Events: <none>
Ingress:
...
metadata:
annotations:
cert-manager.io/cluster-issuer: letsencrypt-prod
...
spec:
rules:
- http:
paths:
- backend:
service:
name: mine-api-service
port:
number: 80
path: /api/(.*)
pathType: Prefix
- backend:
service:
name: mine-bot-service
port:
number: 80
path: /bot/(.*)
pathType: Prefix
- backend:
service:
name: mine-ui-service
port:
number: 80
path: /(.*)
pathType: Prefix
tls:
- hosts:
- website.gg
- '*.website.gg'
secretName: mysecret-tls
status:
loadBalancer:
ingress:
- hostname: a605c42f1adc44435bb1cab34ee9bf3a
ip: {my-public-ip}
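For debugging, the pending order referenced above can be followed down the chain of resources cert-manager creates; a sketch of the commands, using the names from the CertificateRequest status:

kubectl describe certificaterequest mysecret-tls-3406177836 -n mine
kubectl describe order mysecret-tls-3406177836-1310394537 -n mine
kubectl get challenges -n mine
kubectl describe challenge -n mine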
I'm working on a canary deployment strategy.
I use the Service Mesh Interface (SMI), after installing Traefik Mesh.
When starting the application for the first time with the command
kubectl apply -f applications.yaml
it should deploy the entire application, i.e. 4 replicas, but it deploys only 20% (1 replica) of the application,
and the rollout goes into a progressing state with an error:
TrafficRoutingErro: the server could not find the requested resource (post trafficsplits.splits.smi-spec.io)
TrafficSplitNotCreated: Unable to create traffic Split 'demo-traefficsplit'
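That error suggests the API server does not have a TrafficSplit resource matching what the rollout controller is posting, so a first check (as a sketch) is whether Traefik Mesh actually installed the SMI CRDs and at which versions:

kubectl get crd | grep smi-spec.io
kubectl api-resources --api-group=split.smi-spec.io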
Here is my manifest:
argocd-rollout.yaml
---
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
name: demo
labels:
app: demo
spec:
strategy:
canary:
steps:
- setWeight: 20
- pause:
duration: "1m"
- setWeight: 50
- pause:
duration: "2m"
canaryService: demo-canary
stableService: demo
trafficRouting:
smi:
rootService: demo-smi
trafficSplitName: demo-trafficsplit
replicas: 4
revisionHistoryLimit: 2
selector:
matchLabels:
app: demo
version: blue
template:
metadata:
labels:
app: demo
version: blue
spec:
containers:
- name: demo
image: argoproj/rollouts-demo:blue
imagePullPolicy: Always
ports:
- name: web
containerPort: 8080
resources:
requests:
memory: "64Mi"
cpu: "100m"
limits:
memory: "128Mi"
cpu: "140m"
---
apiVersion: split.smi-spec.io/v1alpha3
kind: TrafficSplit
metadata:
name: demo-trafficsplit
spec:
service: demo-smi # controller uses the stableService if Rollout does not specify the rootService field
backends:
- service: demo
weight: 10
- service: demo-canary
weight: 90
---
apiVersion: v1
kind: Service
metadata:
name: demo-smi
spec:
ports:
- port: 80
targetPort: 8080
selector:
app: demo
version: blue
type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
name: demo
spec:
ports:
- port: 80
targetPort: 8080
selector:
app: demo
version: blue
type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
name: demo-canary
spec:
ports:
- port: 80
targetPort: 8080
selector:
app: demo
version: blue
type: ClusterIP
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: rollout-ing
spec:
entryPoints:
- websecure
routes:
- kind: Rule
match: Host(`mycompagny.com`)
services:
- name: demo-smi
port: 80
tls:
certResolver: myresolver
applications.yaml
apiVersion: v1
kind: Namespace
metadata:
name: net
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: rollout
namespace: argocd
spec:
project: default
source:
repoURL: git#github.com:telemaqueHQ/DevOps.git
targetRevision: master
path: gitOps/test/argocd
destination:
server: https://kubernetes.default.svc
namespace: net
syncPolicy:
automated:
prune: true
selfHeal: true
I have problems with Kubernetes. I have been trying to deploy my service for two days now, but I'm doing something wrong.
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {
},
"status": "Failure",
"message": "forbidden: User \"system:anonymous\" cannot get path \"/\": No policy matched.",
"reason": "Forbidden",
"details": {
},
"code": 403
}
Does anybody know what the problem could be?
Here is also my YAML file:
# Certificate
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
name: ${APP_NAME}
spec:
secretName: ${APP_NAME}-cert
dnsNames:
- ${URL}
- www.${URL}
acme:
config:
- domains:
- ${URL}
- www.${URL}
http01:
ingressClass: nginx
issuerRef:
name: ${CERT_ISSUER}
kind: ClusterIssuer
---
# Ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ${APP_NAME}
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/ssl-redirect: 'true'
nginx.ingress.kubernetes.io/from-to-www-redirect: 'true'
spec:
tls:
- secretName: ${APP_NAME}-cert
hosts:
- ${URL}
- www.${URL}
rules:
- host: ${URL}
http:
paths:
- backend:
serviceName: ${APP_NAME}-service
servicePort: 80
---
# Service
apiVersion: v1
kind: Service
metadata:
name: ${APP_NAME}-service
labels:
app: ${CI_PROJECT_NAME}
spec:
selector:
name: ${APP_NAME}
app: ${CI_PROJECT_NAME}
ports:
- name: http
port: 80
targetPort: http
---
# Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
name: ${APP_NAME}
labels:
app: ${CI_PROJECT_NAME}
spec:
replicas: ${REPLICAS}
revisionHistoryLimit: 0
selector:
matchLabels:
app: ${CI_PROJECT_NAME}
template:
metadata:
labels:
name: ${APP_NAME}
app: ${CI_PROJECT_NAME}
spec:
containers:
- name: webapp
image: eu.gcr.io/my-site/my-site.com:latest
imagePullPolicy: Always
ports:
- name: http
containerPort: 80
env:
- name: COMMIT_SHA
value: ${CI_COMMIT_SHA}
livenessProbe:
tcpSocket:
port: 80
initialDelaySeconds: 30
timeoutSeconds: 1
readinessProbe:
tcpSocket:
port: 80
initialDelaySeconds: 5
timeoutSeconds: 1
resources:
requests:
memory: '16Mi'
limits:
memory: '64Mi'
imagePullSecrets:
- name: ${REGISTRY_PULL_SECRET}
Can anybody help me with this? I'm stuck and I have no idea what the problem could be. This is also my first Kubernetes project.
"message": "forbidden: User \"system:anonymous\" cannot get path \"/\": No policy matched.",
... means just what it says: your request to the Kubernetes API was not authenticated (that's the system:anonymous part), and your RBAC configuration does not allow the anonymous user to make any requests to the API.
No one here is going to be able to help you straighten out that problem, because fixing that depends on a horrific number of variables. Perhaps ask your cluster administrator to provide you with the correct credentials.
I have explained it in this post. You will need a ServiceAccount, a ClusterRole and a RoleBinding. You can find an explanation in this article or, as Matthew L Daniel mentioned, in the Kubernetes documentation.
If you still have problems, provide the method/tutorial you used to deploy the cluster (as "Gitlab Kubernetes integration" does not say much about the method you used).
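Not from the linked post, but a minimal sketch of those three objects for a CI deploy account (the name gitlab-deploy and the default namespace are assumptions; tighten the rules to what your pipeline actually needs):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: gitlab-deploy
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: gitlab-deploy
rules:
- apiGroups: ["", "apps", "extensions"]
  resources: ["deployments", "services", "pods", "secrets", "configmaps", "ingresses"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: gitlab-deploy
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: gitlab-deploy
subjects:
- kind: ServiceAccount
  name: gitlab-deploy
  namespace: default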