DEX and Amazon ALB Load Balancer Controller and Argo Workflows - aws-application-load-balancer

I'm trying to build ALB -> Kube -> Dex using the AWS Load Balancer Controller. As a result, I have an ALB that correctly binds the instances into the target group, but the instance is Unhealthy.
The Load Balancer Controller uses 31845 (the NodePort) as the health check port. I tried port 5556 as well, but the target is still unhealthy.
So I assume that setting is correct, but I'm not sure.
Another possibility is that the Dex container isn't set up correctly.
And yet another is that I configured everything the wrong way.
Has anyone already configured Dex this way and can give me a hint?
Dex service
apiVersion: v1
kind: Service
metadata:
  name: dex
  ...
spec:
  ports:
    - name: http
      protocol: TCP
      appProtocol: http
      port: 5556
      targetPort: http
      nodePort: 31845
  ...
  selector:
    app.kubernetes.io/instance: dex
    app.kubernetes.io/name: dex
  clusterIP: 172.20.97.132
  clusterIPs:
    - 172.20.97.132
  type: NodePort
  sessionAffinity: None
  externalTrafficPolicy: Cluster
  ipFamilies:
    - IPv4
  ipFamilyPolicy: SingleStack
  internalTrafficPolicy: Cluster
DEX pod
containerStatuses:
  - name: dex
    state:
      running:
        startedAt: '2022-09-19T17:41:43Z'
...
containers:
  - name: dex
    image: ghcr.io/dexidp/dex:v2.34.0
    args:
      - dex
      - serve
      - '--web-http-addr'
      - 0.0.0.0:5556
      - '--telemetry-addr'
      - 0.0.0.0:5558
      - /etc/dex/config.yaml
    ports:
      - name: http
        containerPort: 5556
        protocol: TCP
      - name: telemetry
        containerPort: 5558
        protocol: TCP
    env:
      - name: ARGO_WORKFLOWS_SSO_CLIENT_SECRET
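One way to narrow this down is to probe Dex directly, bypassing the ALB and the NodePort entirely. A minimal sketch, assuming kubectl access to the namespace Dex runs in (Dex serves a built-in health endpoint at /healthz):

# forward the Dex service port to localhost (bypasses ALB and NodePort)
kubectl port-forward svc/dex 5556:5556
# in a second shell: a 200 here means the container itself is healthy,
# and the problem is in the ALB/target-group path rather than in Dex
curl -v http://localhost:5556/healthz

If that succeeds, the container and Service wiring are fine and the health check configuration on the ALB side becomes the prime suspect.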
Load Balancer Controller Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ${name_http_ingress}
  namespace: ${namespace}
  labels:
    app.kubernetes.io/component: server
    app.kubernetes.io/instance: argo-cd
    app.kubernetes.io/part-of: argocd
    app.kubernetes.io/name: argocd-server
  annotations:
    alb.ingress.kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/backend-protocol: HTTP
    alb.ingress.kubernetes.io/backend-protocol-version: HTTP1
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
    alb.ingress.kubernetes.io/healthcheck-port: traffic-port
    alb.ingress.kubernetes.io/healthcheck-path: /
    alb.ingress.kubernetes.io/healthcheck-timeout-seconds: '10'
    alb.ingress.kubernetes.io/unhealthy-threshold-count: '3'
    alb.ingress.kubernetes.io/success-codes: 200,301,302,307
    alb.ingress.kubernetes.io/conditions.argogrpc: >-
      [{"field":"http-header","httpHeaderConfig":{"httpHeaderName": "Content-Type", "values":["^application/grpc.*$"]}}]
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
    alb.ingress.kubernetes.io/actions.ssl-redirect: >-
      {"type":"redirect","redirectConfig":{"port":"443","protocol":"HTTPS","statusCode":"HTTP_301"}}
    # external-dns.alpha.kubernetes.io/hostname: ${domain_name_public}
    alb.ingress.kubernetes.io/certificate-arn: ${domain_certificate}
    # alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-TLS-1-2-Ext-2018-06
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/load-balancer-name: ${name_http_ingress}
    alb.ingress.kubernetes.io/target-type: instance
    # alb.ingress.kubernetes.io/target-type: ip # require to enable sticky sessions ,stickiness.enabled=true,stickiness.lb_cookie.duration_seconds=60
    alb.ingress.kubernetes.io/target-group-attributes: load_balancing.algorithm.type=least_outstanding_requests
    alb.ingress.kubernetes.io/target-node-labels: ${tolerations_key}=${tolerations_value}
    alb.ingress.kubernetes.io/tags: Environment=${tags_env},Restricted=false,Customer=customer,Project=ops,Name=${name_http_ingress}
    alb.ingress.kubernetes.io/load-balancer-attributes: routing.http2.enabled=true,idle_timeout.timeout_seconds=180
spec:
  ingressClassName: alb
  tls:
    - hosts:
        - ${domain_name_public}
        - ${domain_name_public_dex}
  rules:
    - host: ${domain_name_public}
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: ssl-redirect
                port:
                  name: use-annotation
    - host: ${domain_name_public_dex}
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: ssl-redirect
                port:
                  name: use-annotation
    - host: ${domain_name_public_dex}
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: dex
                port:
                  number: 5556
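One detail worth checking, as an assumption rather than a confirmed fix: the health check path above is /, which Dex typically answers with a 404 (not in the success-codes list), while its dedicated health endpoint is /healthz. A sketch of the annotation change:

# probe Dex's health endpoint instead of the root path
alb.ingress.kubernetes.io/healthcheck-path: /healthz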

Related

ingress in AKS for API

I'm trying to deploy an ASP.NET Core API and make it available from outside the cluster through an ingress. I have followed the steps mentioned in the learn page. All the steps are working fine; however, I'm unable to access my ingress on the route /api/opportunities/. Below I'm describing my K8S files. Might I be missing something?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: opportunities-api
spec:
  replicas: 1
  selector:
    matchLabels:
      component: opportunities-api
  template:
    metadata:
      labels:
        component: opportunities-api
    spec:
      containers:
        - name: opportunities-api
          image: mycontainer.azurecr.io/opportunities-api:{BUILD_NO}
          imagePullPolicy: Always
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: opportunities-api
spec:
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80
  selector:
    component: opportunities-api
  type: ClusterIP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: opportunities-api
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
    - http:
        paths:
          - path: /api(/|$)(.*)
            pathType: Prefix
            backend:
              service:
                name: opportunities-api
                port:
                  number: 80
I see that the host field is missing in the above ingress YAML. Did you try adding .spec.rules.host to the ingress YAML as below, to see if it helps?
As per the nginx documentation, it is one of the restrictions.
Also, if the AKS version is >= 1.24, check what value is set for the annotation service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path on the ingress controller service. It should be /healthz, as discussed in AKS Ingress-Nginx ingress controller failing to route by host.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: opportunities-api
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
    - host: abc.com # your host name here
      http:
        paths:
          - path: /api(/|$)(.*)
            pathType: Prefix
            backend:
              service:
                name: opportunities-api
                port:
                  number: 80
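To make the health probe check above concrete: the annotation goes on the ingress controller's LoadBalancer Service, not on the Ingress. A minimal sketch, assuming the ingress-nginx Helm chart's default service name and namespace (adjust both to your deployment):

apiVersion: v1
kind: Service
metadata:
  # name/namespace assumed from ingress-nginx chart defaults
  name: ingress-nginx-controller
  namespace: ingress-nginx
  annotations:
    # On AKS >= 1.24 the Azure LB probes "/" by default, which the
    # controller answers with 404; point the probe at /healthz instead.
    service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path: /healthz
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: https
      port: 443
      targetPort: https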

ingress controller path based routing for apache applications deployed on kubernetes

I have a Tomcat image with SampleWebApp.war deployed in conf/webapps.
I am deploying this image in a pod on a Kubernetes cluster.
I want to expose a ClusterIP service pointing to the Tomcat application through an ingress controller.
I can't use "/" in my ingress for redirection, as another application is already using the same host and path "/".
I tried giving the path as "tomcat", but it is not accessible when I try to open the UI in a browser.
Below are my YAMLs. Can someone suggest what can be done here?
Deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcatinfra
  namespace: tomcat
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tomcatinfra
  template:
    metadata:
      name: tomcatinfra
      labels:
        app: tomcatinfra
    spec:
      containers:
        - image: saravak/tomcat8
          name: tomcatapp
Service.yaml
kind: Service
apiVersion: v1
metadata:
  name: tomcat-service
  namespace: tomcat
spec:
  type: ClusterIP
  selector:
    app: tomcatinfra
  ports:
    - protocol: TCP
      port: 3000
      targetPort: 8080
Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: tomcat
  namespace: tomcat
spec:
  rules:
    - host: build.com
      http:
        paths:
          - backend:
              serviceName: tomcat-service
              servicePort: 8080
            path: /tomcat
            pathType: ImplementationSpecific
Try adding the ingress class annotation:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - host: "foo.bar.com"
      http:
        paths:
          - pathType: Prefix
            path: /tomcat
            backend:
              service:
                name: service1
                port:
                  number: 80
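Two more things worth checking, as assumptions from the manifests above rather than confirmed fixes: the question's ingress targets servicePort 8080 while tomcat-service exposes port 3000 (mapping 3000 -> 8080), and a WAR deployed as SampleWebApp.war is served under the /SampleWebApp context, so the /tomcat prefix needs a rewrite (the app would then be at build.com/tomcat/SampleWebApp/). A sketch combining both, assuming the NGINX ingress controller:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tomcat
  namespace: tomcat
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/use-regex: "true"
    # strip the /tomcat prefix before proxying to the backend
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
    - host: build.com
      http:
        paths:
          - path: /tomcat(/|$)(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: tomcat-service
                port:
                  number: 3000 # the Service port, not the container port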

Can't get kubernetes to pass my tls certificate to browsers

I've been struggling for a while trying to get HTTPS access to my Elasticsearch cluster in Kubernetes.
I think the problem is that Kubernetes doesn't like the TLS certificate I'm trying to use, which is why it's not passing it all the way through to the browser.
Everything else seems to work, since when I accept the Kubernetes Ingress Controller Fake Certificate, the requests go through as expected.
In my attempt to do this I've set up:
The cluster itself
An nginx-ingress controller
An ingress resource
Here's the related yaml:
Cluster:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2018-08-03T03:20:47Z
  labels:
    run: my-es
  name: my-es
  namespace: default
  resourceVersion: "3159488"
  selfLink: /api/v1/namespaces/default/services/my-es
  uid: 373047e0-96cc-11e8-932b-42010a800043
spec:
  clusterIP: 10.63.241.39
  ports:
    - name: http
      port: 8080
      protocol: TCP
      targetPort: 9200
  selector:
    run: my-es
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
The ingress resource
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/cors-allow-methods: PUT, GET, POST, OPTIONS
    nginx.ingress.kubernetes.io/cors-origins: http://localhost:3425 https://mydomain.ca https://myOtherDomain.ca
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /
  creationTimestamp: 2018-08-12T08:44:29Z
  generation: 16
  name: es-ingress
  namespace: default
  resourceVersion: "3159625"
  selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/es-ingress
  uid: ece0071d-9e0b-11e8-8a45-42001a8000fc
spec:
  rules:
    - http:
        paths:
          - backend:
              serviceName: my-es
              servicePort: 8080
            path: /
  tls:
    - hosts:
        - mydomain.ca
      secretName: my-tls-secret
status:
  loadBalancer:
    ingress:
      - ip: 130.211.179.225
The nginx-ingress controller:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2018-08-12T00:41:32Z
  labels:
    app: nginx-ingress
    chart: nginx-ingress-0.23.0
    component: controller
    heritage: Tiller
    release: nginx-ingress
  name: nginx-ingress-controller
  namespace: default
  resourceVersion: "2781955"
  selfLink: /api/v1/namespaces/default/services/nginx-ingress-controller
  uid: 755ee4b8-9dc8-11e8-85a4-4201a08000fc
spec:
  clusterIP: 10.63.250.256
  externalTrafficPolicy: Cluster
  ports:
    - name: http
      nodePort: 32084
      port: 80
      protocol: TCP
      targetPort: http
    - name: https
      nodePort: 31182
      port: 443
      protocol: TCP
      targetPort: https
  selector:
    app: nginx-ingress
    component: controller
    release: nginx-ingress
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
      - ip: 35.212.6.131
I feel like I'm missing something basic, because it doesn't seem like it should be this hard to expose something this simple...
To get my certificate, I just requested one for mydomain.ca from godaddy.
Do I need to somehow get a certificate using my ingress resource's cluster IP as the common name?
It doesn't seem possible to verify ownership of an IP.
I've seen people mention ways for Kubernetes to automatically create certificates for ingress resources, but those seem to be self signed.
Here are some logs from the nginx-controller:
This one is talking about a PEM with the tls-secret, but it's only a warning.
{
  insertId: "1kvvhm7g1q7e0ej"
  labels: {
    compute.googleapis.com/resource_name: "fluentd-gcp-v2.0.17-5b82n"
    container.googleapis.com/namespace_name: "default"
    container.googleapis.com/pod_name: "nginx-ingress-controller-58f57fc597-zl25s"
    container.googleapis.com/stream: "stderr"
  }
  logName: "projects/project-7d320/logs/nginx-ingress-controller"
  receiveTimestamp: "2018-08-14T02:58:42.135388365Z"
  resource: {
    labels: {
      cluster_name: "my-elasticsearch-cluster"
      container_name: "nginx-ingress-controller"
      instance_id: "2341889542400230234"
      namespace_id: "default"
      pod_id: "nginx-ingress-controller-58f57fc597-zl25s"
      project_id: "project-7d320"
      zone: "us-central1-a"
    }
    type: "container"
  }
  severity: "WARNING"
  textPayload: "error obtaining PEM from secret default/my-tls-cert: error retrieving secret default/my-tls-cert: secret default/my-tls-cert was not found"
  timestamp: "2018-08-14T02:58:37Z"
}
I have a few occurrences of this handshake error, which may be a result of the last warning...
{
  insertId: "148t6rfg1xmz978"
  labels: {
    compute.googleapis.com/resource_name: "fluentd-gcp-v2.0.17-5b82n"
    container.googleapis.com/namespace_name: "default"
    container.googleapis.com/pod_name: "nginx-ingress-controller-58f57fc597-zl25s"
    container.googleapis.com/stream: "stderr"
  }
  logName: "projects/project-7d320/logs/nginx-ingress-controller"
  receiveTimestamp: "2018-08-14T15:55:52.438035706Z"
  resource: {
    labels: {
      cluster_name: "my-elasticsearch-cluster"
      container_name: "nginx-ingress-controller"
      instance_id: "2341889542400230234"
      namespace_id: "default"
      pod_id: "nginx-ingress-controller-58f57fc597-zl25s"
      project_id: "project-7d320"
      zone: "us-central1-a"
    }
    type: "container"
  }
  severity: "ERROR"
  textPayload: "2018/08/14 15:55:50 [crit] 1548#1548: *860 SSL_do_handshake() failed (SSL: error:1417D18C:SSL routines:tls_process_client_hello:version too low) while SSL handshaking, client: 127.0.0.1, server: 0.0.0.0:442"
  timestamp: "2018-08-14T15:55:50Z"
}
The above logs make it seem like my TLS secret isn't working, but when I run kubectl describe ingress, it says my secret terminates.
aaronmw@project-7d320:~$ kubectl describe ing
Name:             es-ingress
Namespace:        default
Address:          130.221.179.212
Default backend:  default-http-backend:80 (10.61.3.7:8080)
TLS:
  my-tls-secret terminates mydomain.ca
Rules:
  Host  Path  Backends
  ----  ----  --------
  *
        /     my-es:8080 (<none>)
Annotations:
Events:  <none>
I figured it out!
What I ended up doing was adding a default ssl certificate to my nginx-ingress controller on creation using the following command
helm install --name nginx-ingress --set controller.extraArgs.default-ssl-certificate=default/search-tls-secret stable/nginx-ingress
Once I had that, it was passing the cert as expected, but I still had the wrong cert as the CN didn't match my load balancer IP.
So what I did was:
Make my load balancer IP static
Add an A record to my domain, to map a subdomain to that IP
Re-key my cert to match that new subdomain
And I'm in business!
Thanks to @Crou, whose comment reminded me to look at the logs and got me on the right track.
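For reference, the default certificate referenced by that helm flag has to exist as a TLS secret beforehand. A minimal sketch, with hypothetical key/cert file names:

# create the secret the controller flag points at (default/search-tls-secret);
# the .key/.crt file names below are illustrative
kubectl create secret tls search-tls-secret \
  --key mydomain.key --cert mydomain.crt --namespace default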

Apache Ignite activating cluster takes a long time

I am trying to set up a cluster of Apache Ignite with persistence enabled. I am trying to start the cluster on Azure Kubernetes with 10 nodes. The problem is that the cluster activation seems to get stuck, but I am able to activate a cluster with 3 nodes in less than 5 minutes.
Here is the configuration I am using to start the cluster:
apiVersion: v1
kind: Service
metadata:
  name: ignite-main
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
  labels:
    main: ignite-main
spec:
  type: LoadBalancer
  externalTrafficPolicy: Cluster
  ports:
    - port: 10800 # JDBC port
      targetPort: 10800
      name: jdbc
    - port: 11211 # Activating the baseline (port)
      targetPort: 11211
      name: control
    - port: 8080 # REST port
      targetPort: 8080
      name: rest
  selector:
    main: ignite-main
---
#########################################
# Ignite service configuration
#########################################
# Service for discovery of ignite nodes
apiVersion: v1
kind: Service
metadata:
  name: ignite
  labels:
    app: ignite
spec:
  clusterIP: None
  # externalTrafficPolicy: Cluster
  ports:
    # - port: 9042 # custom value.
    #   name: discovery
    - port: 47500
      name: discovery
    - port: 47100
      name: communication
    - port: 11211
      name: control
  selector:
    app: ignite
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ignite-cluster
  labels:
    app: ignite
    main: ignite-main
spec:
  selector:
    matchLabels:
      app: ignite
      main: ignite-main
  replicas: 5
  template:
    metadata:
      labels:
        app: ignite
        main: ignite-main
    spec:
      volumes:
        - name: ignite-storage
          persistentVolumeClaim:
            claimName: ignite-volume-claim # Must be equal to the PersistentVolumeClaim created before.
      containers:
        - name: ignite-node
          image: ignite.azurecr.io/apacheignite/ignite:2.7.0-SNAPSHOT
          env:
            - name: OPTION_LIBS
              value: ignite-kubernetes
            - name: CONFIG_URI
              value: https://file-location
            - name: IGNITE_H2_DEBUG_CONSOLE
              value: 'true'
            - name: IGNITE_QUIET
              value: 'false'
            - name: java.net.preferIPv4Stack
              value: 'true'
            - name: JVM_OPTS
              value: -server -Xms10g -Xmx10g -XX:+AlwaysPreTouch -XX:+UseG1GC -XX:+ScavengeBeforeFullGC -XX:+DisableExplicitGC
          ports:
            - containerPort: 47100 # communication SPI port number.
            - containerPort: 47500 # discovery SPI port number.
            - containerPort: 49112 # JMX port number.
            - containerPort: 8080  # REST port number.
            - containerPort: 10800 # SQL port number.
            - containerPort: 11211 # Activating the baseline (port)
      imagePullSecrets:
        - name: docker-cred
I was trying to activate the cluster remotely by providing the --host parameter, like:
./control.sh --host x.x.x.x --activate
Instead, I tried activating the cluster by logging into one of the Kubernetes nodes and activating it from there. The detailed steps are mentioned here.
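A minimal sketch of that in-cluster activation, assuming kubectl access to the namespace; the pod name placeholder and the control.sh path are assumptions based on the stock Apache Ignite image layout, not taken from the question:

# find an Ignite pod to exec into
kubectl get pods -l app=ignite
# run control.sh inside the pod (path assumed from the stock Ignite image)
kubectl exec -it <ignite-pod-name> -- /opt/ignite/apache-ignite/bin/control.sh --activate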

Trouble configuring http(s) for an nginx-ingress

I'm currently trying to create an ingress, following the SSL-termination approach, which allows me to connect to a service both via HTTP and HTTPS.
I managed to create a working ingress for HTTP, and partly for HTTPS, but not both together.
Here's my config:
Ingress Controller: Deployment & Service
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-ingress-controller
spec:
  replicas: 1
  revisionHistoryLimit: 3
  template:
    metadata:
      labels:
        k8s-app: nginx-ingress-lb
    spec:
      containers:
        - args:
            - /nginx-ingress-controller
            - "--default-backend-service=$(POD_NAMESPACE)/default-http-backend"
          env:
            <!-- default-config omitted -->
          image: "quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0-beta.17"
          imagePullPolicy: Always
          livenessProbe:
            <!-- omitted -->
          name: nginx-ingress-controller
          ports:
            - containerPort: 80
              name: http
              protocol: TCP
            - containerPort: 443
              name: https
              protocol: TCP
          volumeMounts:
            - mountPath: /etc/nginx-ssl/tls
              name: tls-vol
      terminationGracePeriodSeconds: 60
      volumes:
        - name: tls-vol
          secret:
            secretName: tls-test-project-secret
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: http
      nodePort: 31115
    - name: https
      port: 443
      targetPort: https
      nodePort: 31116
  selector:
    k8s-app: nginx-ingress-lb
Ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    ingress.kubernetes.io/secure-backends: "false"
    # modified this to false for http & https-scenario
    ingress.kubernetes.io/ssl-redirect: "true"
    # modified this to false for http & https-scenario
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    ingress.kubernetes.io/add-base-url: "true"
spec:
  tls:
    - hosts:
        - author.k8s-test
      secretName: tls-test-project-secret
  rules:
    - host: author.k8s-test
      http:
        paths:
          - path: /
            backend:
              serviceName: cms-author
              servicePort: 8080
Backend - Service
apiVersion: v1
kind: Service
metadata:
  name: cms-author
spec:
  selector:
    run: cms-author
  ports:
    - name: http
      protocol: TCP
      port: 8080
      targetPort: 8080
Backend-Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cms-author
spec:
  selector:
    matchLabels:
      run: cms-author
  replicas: 1
  template:
    metadata:
      labels:
        run: cms-author
    spec:
      containers:
        - name: cms-author
          image: <someDockerRegistryUrl>/magnolia:kube-dev
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
I have several issues. When following the https-only scenario, I can reach the application via the ingress HTTPS nodePort, but I can't log in, as the following request goes via HTTP instead of HTTPS. If I manually put https before the URL in the browser, it works again and any further request goes via HTTPS, but I don't know why :(
The final setup (supporting both HTTP and HTTPS) is completely not working: if I try to access the app via the HTTP nodePort of the ingress, it always redirects to SSL, even though in this scenario I configured ssl-redirect to false.
I have read many posts on GitHub dealing with this, but none of them worked for me.
I've changed the nginx-controller images from gce_containers to quay.io, also not working.
I've tried some older versions, also not working.
Deploy the nginx ingress controller from the official kubernetes charts repo https://github.com/kubernetes/charts/tree/master/stable/nginx-ingress by setting the helm arguments controller.service.targetPorts.https and controller.service.nodePorts.https. Once they are set, the appropriate NodePort (443) will be configured by helm.
Helm uses the YAML files in https://github.com/kubernetes/charts/tree/master/stable/nginx-ingress/templates.
Along with the nginx ingress controller, you'll need an ingress resource too. Refer to https://github.com/nginxinc/kubernetes-ingress/tree/master/examples/complete-example for examples.
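A minimal sketch of that helm invocation, assuming the Helm 2 CLI of that era and reusing the nodePort values from the Service in the question (adjust names and ports to your setup):

# install the controller with explicit https target/node ports so the
# chart wires up the NodePort 443 handling described above
helm install stable/nginx-ingress --name nginx-ingress \
  --set controller.service.type=NodePort \
  --set controller.service.targetPorts.https=https \
  --set controller.service.nodePorts.http=31115 \
  --set controller.service.nodePorts.https=31116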