Expose every pod of a Redis cluster in Kubernetes

I'm trying to set up a Redis cluster in Kubernetes. The major requirement is that all nodes of the Redis cluster have to be reachable from outside of Kubernetes, so clients can connect to every node directly. But I have no idea how to configure the Service that way.
Here is the basic config of the cluster right now. It's fine for services inside k8s, but there is no full access from outside.
apiVersion: v1
kind: ConfigMap
metadata:
name: redis-cluster
labels:
app: redis-cluster
data:
redis.conf: |+
cluster-enabled yes
cluster-require-full-coverage no
cluster-node-timeout 15000
cluster-config-file /data/nodes.conf
cluster-migration-barrier 1
appendonly no
protected-mode no
---
apiVersion: v1
kind: Service
metadata:
annotations:
service.alpha.kubernetes.io/tolerate-unready-endpoints: "false"
name: redis-cluster
labels:
app: redis-cluster
spec:
type: NodePort
ports:
- port: 6379
targetPort: 6379
name: client
- port: 16379
targetPort: 16379
name: gossip
selector:
app: redis-cluster
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: redis-cluster
labels:
app: redis-cluster
spec:
serviceName: redis-cluster
replicas: 6
template:
metadata:
labels:
app: redis-cluster
spec:
hostNetwork: true
containers:
- name: redis-cluster
image: redis:4.0.10
ports:
- containerPort: 6379
name: client
- containerPort: 16379
name: gossip
command: ["redis-server"]
args: ["/conf/redis.conf"]
readinessProbe:
exec:
command:
- sh
- -c
- "redis-cli -h $(hostname) ping"
initialDelaySeconds: 15
timeoutSeconds: 5
livenessProbe:
exec:
command:
- sh
- -c
- "redis-cli -h $(hostname) ping"
initialDelaySeconds: 20
periodSeconds: 3
volumeMounts:
- name: conf
mountPath: /conf
readOnly: false
volumes:
- name: conf
configMap:
name: redis-cluster
items:
- key: redis.conf
path: redis.conf

Given:
spec:
hostNetwork: true
containers:
- name: redis-cluster
ports:
- containerPort: 6379
name: client
It appears that your StatefulSet is misconfigured, since if hostNetwork is true, you have to provide hostPort, and that value must match containerPort, according to the ContainerPort docs:
hostPort integer - Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort.
https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.11/#containerport-v1-core
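To make that concrete, here is a minimal sketch of the ports section with hostPort added (values reused from the question's manifest; adjust to your setup):

spec:
  hostNetwork: true
  containers:
  - name: redis-cluster
    image: redis:4.0.10
    ports:
    # With hostNetwork: true, each hostPort must equal its containerPort,
    # so the pod's ports are served directly on the node's interfaces.
    - containerPort: 6379
      hostPort: 6379
      name: client
    - containerPort: 16379
      hostPort: 16379
      name: gossip

Keep in mind that with hostNetwork only one such pod can run per node, because the ports are claimed on the host itself.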

Related

Dex and Amazon ALB Load Balancer Controller and Argo Workflows

I'm trying to build ALB -> Kube -> Dex using the AWS Load Balancer Controller. As a result, I have an ALB that correctly binds the instances into the target group, but the instances are Unhealthy.
The Load Balancer Controller uses 31845 as the health check port. I tried port 5556, but it is still unhealthy.
So I assume that setting is correct, but I'm not sure.
Another possibility is that the Dex container isn't set up correctly.
And yet another possibility: I configured everything the wrong way.
Has anyone already configured Dex this way and can give me a hint?
Dex service
apiVersion: v1
kind: Service
metadata:
name: dex
...
spec:
ports:
- name: http
protocol: TCP
appProtocol: http
port: 5556
targetPort: http
nodePort: 31845
...
selector:
app.kubernetes.io/instance: dex
app.kubernetes.io/name: dex
clusterIP: 172.20.97.132
clusterIPs:
- 172.20.97.132
type: NodePort
sessionAffinity: None
externalTrafficPolicy: Cluster
ipFamilies:
- IPv4
ipFamilyPolicy: SingleStack
internalTrafficPolicy: Cluster
DEX pod
containerStatuses:
- name: dex
state:
running:
startedAt: '2022-09-19T17:41:43Z'
...
containers:
- name: dex
image: ghcr.io/dexidp/dex:v2.34.0
args:
- dex
- serve
- '--web-http-addr'
- 0.0.0.0:5556
- '--telemetry-addr'
- 0.0.0.0:5558
- /etc/dex/config.yaml
ports:
- name: http
containerPort: 5556
protocol: TCP
- name: telemetry
containerPort: 5558
protocol: TCP
env:
- name: ARGO_WORKFLOWS_SSO_CLIENT_SECRET
Load Balancer Controller Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ${name_http_ingress}
namespace: ${namespace}
labels:
app.kubernetes.io/component: server
app.kubernetes.io/instance: argo-cd
app.kubernetes.io/part-of: argocd
app.kubernetes.io/name: argocd-server
annotations:
alb.ingress.kubernetes.io/ingress.class: alb
alb.ingress.kubernetes.io/backend-protocol: HTTP
alb.ingress.kubernetes.io/backend-protocol-version: HTTP1
alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
alb.ingress.kubernetes.io/healthcheck-port: traffic-port
alb.ingress.kubernetes.io/healthcheck-path: /
alb.ingress.kubernetes.io/healthcheck-timeout-seconds: '10'
alb.ingress.kubernetes.io/unhealthy-threshold-count: '3'
alb.ingress.kubernetes.io/success-codes: 200,301,302,307
alb.ingress.kubernetes.io/conditions.argogrpc: >-
[{"field":"http-header","httpHeaderConfig":{"httpHeaderName": "Content-Type", "values":["^application/grpc.*$"]}}]
alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
alb.ingress.kubernetes.io/actions.ssl-redirect: >-
{"type":"redirect","redirectConfig":{"port":"443","protocol":"HTTPS","statusCode":"HTTP_301"}}
# external-dns.alpha.kubernetes.io/hostname: ${domain_name_public}
alb.ingress.kubernetes.io/certificate-arn: ${domain_certificate}
# alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-TLS-1-2-Ext-2018-06
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/load-balancer-name: ${name_http_ingress}
alb.ingress.kubernetes.io/target-type: instance
# alb.ingress.kubernetes.io/target-type: ip # require to enable sticky sessions ,stickiness.enabled=true,stickiness.lb_cookie.duration_seconds=60
alb.ingress.kubernetes.io/target-group-attributes: load_balancing.algorithm.type=least_outstanding_requests
alb.ingress.kubernetes.io/target-node-labels: ${tolerations_key}=${tolerations_value}
alb.ingress.kubernetes.io/tags: Environment=${tags_env},Restricted=false,Customer=customer,Project=ops,Name=${name_http_ingress}
alb.ingress.kubernetes.io/load-balancer-attributes: routing.http2.enabled=true,idle_timeout.timeout_seconds=180
spec:
ingressClassName: alb
tls:
- hosts:
- ${domain_name_public}
- ${domain_name_public_dex}
rules:
- host: ${domain_name_public}
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: ssl-redirect
port:
name: use-annotation
- host: ${domain_name_public_dex}
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: ssl-redirect
port:
name: use-annotation
- host: ${domain_name_public_dex}
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: dex
port:
number: 5556

Why do I get "backend - 404 error" when trying to deploy a TLS Ingress in Kubernetes, with no errors in the events

I'm trying to deploy a simple Ingress, and it works as an Ingress without the secure (TLS) part, but when I include the TLS cert it always returns "backend - 404 error".
I already installed cert-manager and ingress-nginx and checked that the installation is OK.
EDIT: I have detailed all the steps I'm doing.
EDIT 2: I updated cert-manager to version v1.5.4.
These were the steps:
1.- Install the nginx ingress controller for my IP
helm install bitnami/nginx-ingress-controller --set controller.service.loadBalancerIP="[MY-STATIC-IP]",rbac.create=true --generate-name
2.- Apply deployment and service (app.yaml)
apiVersion: apps/v1
kind: Deployment
metadata:
name: taxisbahiadeploy
labels:
type: endpoints-app
spec:
replicas: 1
selector:
matchLabels:
app: taxisbahiadeploy
template:
metadata:
labels:
app: taxisbahiadeploy
spec:
containers:
- name: taxisbahiadeploy
image: gcr.io/google-samples/hello-app:1.0
imagePullPolicy: Always
ports:
- containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
name: taxisbahia
spec:
ports:
- port: 8080
targetPort: 8080
selector:
app: taxisbahiadeploy
3.- Configure Let's Encrypt
kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.5.4/cert-manager.crds.yaml
kubectl create namespace cert-manager
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install \
cert-manager \
--namespace cert-manager \
--version v1.5.4 \
jetstack/cert-manager
4.- Apply the Issuers (issuer.yaml)
apiVersion: cert-manager.io/v1alpha2
kind: Issuer
metadata:
name: letsencrypt-staging
spec:
acme:
server: https://acme-staging-v02.api.letsencrypt.org/directory
email: 'fco#ggggg.com'
privateKeySecretRef:
name: letsencrypt-staging
solvers:
- http01:
ingress:
class: nginx
---
apiVersion: cert-manager.io/v1alpha2
kind: Issuer
metadata:
name: letsencrypt-prod
spec:
acme:
server: https://acme-v02.api.letsencrypt.org/directory
email: 'fco#ggggg.com'
privateKeySecretRef:
name: letsencrypt-prod
solvers:
- http01:
ingress:
class: nginx
5.- Final step: this is the Ingress where it fails (ingress-tls.yaml)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: esp-ingress
annotations:
kubernetes.io/ingress.class: nginx
cert-manager.io/issuer: "letsencrypt-staging"
spec:
tls:
- hosts:
- domain.com
secretName: esp-tls
rules:
- host: domain.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: taxisbahia
port:
number: 8080
I think the hosts entry in your TLS section should look like the following; check your host:
spec:
tls:
- hosts:
- example.example.com
secretName: quickstart-example-tls
Reference : https://cert-manager.io/docs/tutorials/acme/ingress/
First of all, make sure that you are actually visiting https://yourapp.com.
I had the same issue, but then I realized I was actually using HTTP, which is no longer available after TLS is added.
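If you still want plain HTTP to answer after the TLS section is added, a minimal sketch (assuming the NGINX ingress controller installed in step 1) is to disable the automatic HTTPS redirect on the Ingress:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: esp-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/issuer: "letsencrypt-staging"
    # the controller redirects HTTP to HTTPS by default once tls: is present;
    # this annotation turns that redirect off so plain HTTP keeps working
    nginx.ingress.kubernetes.io/ssl-redirect: "false"

The rest of the spec stays the same as in step 5.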

Can't get real user's IP from X-Forwarded-For

I'm running Traefik 1.7.3 on a single-node Kubernetes cluster and I'm trying to get the real user IP from the X-Forwarded-For header, but what I get instead is X-Forwarded-For: 10.244.0.1, which is an IP inside my k8s cluster.
Here's my Traefik deployment and service:
---
apiVersion: v1
kind: ConfigMap
metadata:
name: traefik-conf
data:
traefik.toml: |
# traefik.toml
debug = true
logLevel = "DEBUG"
defaultEntryPoints = ["http","https"]
[entryPoints]
[entryPoints.http]
address = ":80"
compress = true
[entryPoints.http.forwardedHeaders]
trustedIPs = [ "0.0.0.0/0" ]
entryPoint = "https"
[entryPoints.https]
address = ":443"
compress = true
[entryPoints.https.forwardedHeaders]
trustedIPs = [ "0.0.0.0/0" ]
[entryPoints.https.tls]
[acme]
email = "xxxx"
storage = "/acme/acme.json"
entryPoint = "https"
onHostRule = true
#caServer = "https://acme-staging-v02.api.letsencrypt.org/directory"
acmeLogging = true
[[acme.domains]]
main = "xxxx"
[acme.dnsChallenge]
provider = "route53"
delayBeforeCheck = 0
[persistence]
enabled = true
existingClaim = "pvc0"
annotations = {}
accessMode = "ReadWriteOnce"
size = "1Gi"
[kubernetes]
namespaces = ["default"]
[accessLog]
filePath = "/acme/access.log"
[accessLog.fields]
defaultMode = "keep"
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
name: traefik-ingress-controller
namespace: default
labels:
k8s-app: traefik-ingress-lb
spec:
replicas: 1
selector:
matchLabels:
k8s-app: traefik-ingress-lb
template:
metadata:
labels:
k8s-app: traefik-ingress-lb
name: traefik-ingress-lb
spec:
serviceAccountName: traefik-ingress-controller
terminationGracePeriodSeconds: 60
containers:
- image: traefik
name: traefik-ingress-lb
env:
- name: AWS_ACCESS_KEY_ID
value: xxxx
- name: AWS_SECRET_ACCESS_KEY
value: xxxx
- name: AWS_REGION
value: us-west-2
- name: AWS_HOSTED_ZONE_ID
value: xxxx
ports:
- name: http
containerPort: 80
- name: admin
containerPort: 8080
args:
- --api
- --kubernetes
- --configfile=/config/traefik.toml
volumeMounts:
- mountPath: /config
name: config
- mountPath: /acme
name: acme
volumes:
- name: config
configMap:
name: traefik-conf
- name: acme
persistentVolumeClaim:
claimName: "pvc0"
---
kind: Service
apiVersion: v1
metadata:
name: traefik-ingress-service
namespace: default
spec:
externalIPs:
- x.x.x.x
externalTrafficPolicy: Local
selector:
k8s-app: traefik-ingress-lb
ports:
- protocol: TCP
port: 80
name: web
- protocol: TCP
port: 443
name: https
- protocol: TCP
port: 8080
name: admin
type: NodePort
And here's my ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: headers-test
namespace: default
annotations:
ingress.kubernetes.io/proxy-body-size: 500m
kubernetes.io/ingress.class: traefik
spec:
rules:
- host: xxxx
http:
paths:
- path: /
backend:
serviceName: headers-test
servicePort: 8080
I'd read that I only needed to add [entryPoints.http.forwardedHeaders] and a list of trustedIPs but that doesn't seem to work. Am I missing something?
If you use NodePort for the Traefik ingress Service, you will have to set service.spec.externalTrafficPolicy to "Local". Otherwise you will get SNAT when your connection enters the K8s cluster. This SNAT is necessary to forward the incoming connection to your pod if it is not running on the same node.
But be aware that with service.spec.externalTrafficPolicy set to "Local", only the node on which the Traefik pod is running will accept requests on 80, 443 and 8080. There is no forwarding to the pod from the other nodes anymore. This can result in odd delays when connecting to your service. To avoid that, your Traefik would need to run in an HA setup (DaemonSet). Just keep in mind that you need a K/V store for a distributed Traefik setup to make Let's Encrypt work well.
If the service.spec.externalTrafficPolicy setting does not resolve your problem by itself, you might also need to configure the Kubernetes overlay network to not do any SNAT.
service.spec.externalTrafficPolicy is nicely explained here:
https://kubernetes.io/docs/tutorials/services/source-ip/
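For reference, the part of the Service the answer is talking about, as a minimal sketch (the question's manifest already sets this field):

kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
spec:
  type: NodePort
  # "Local" keeps the client source IP: traffic is only handled on nodes that
  # run a Traefik pod, so kube-proxy does not SNAT the connection to reach it.
  externalTrafficPolicy: Local
  selector:
    k8s-app: traefik-ingress-lb
  ports:
  - protocol: TCP
    port: 80
    name: web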

Apache Ignite activating cluster takes a long time

I am trying to set up an Apache Ignite cluster with persistence enabled, starting it on Azure Kubernetes with 10 nodes. The problem is that cluster activation seems to get stuck, whereas I am able to activate a 3-node cluster in less than 5 minutes.
Here is the configuration I am using to start the cluster:
apiVersion: v1
kind: Service
metadata:
name: ignite-main
annotations:
service.beta.kubernetes.io/azure-load-balancer-internal: "true"
labels:
main: ignite-main
spec:
type: LoadBalancer
externalTrafficPolicy: Cluster
ports:
- port: 10800 # JDBC port
targetPort: 10800
name: jdbc
- port: 11211 # Activating the baseline (port)
targetPort: 11211
name: control
- port: 8080 # REST port
targetPort: 8080
name: rest
selector:
main: ignite-main
---
#########################################
# Ignite service configuration
#########################################
# Service for discovery of ignite nodes
apiVersion: v1
kind: Service
metadata:
name: ignite
labels:
app: ignite
spec:
clusterIP: None
# externalTrafficPolicy: Cluster
ports:
# - port: 9042 # custom value.
# name: discovery
- port: 47500
name: discovery
- port: 47100
name: communication
- port: 11211
name: control
selector:
app: ignite
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: ignite-cluster
labels:
app: ignite
main: ignite-main
spec:
selector:
matchLabels:
app: ignite
main: ignite-main
replicas: 5
template:
metadata:
labels:
app: ignite
main: ignite-main
spec:
volumes:
- name: ignite-storage
persistentVolumeClaim:
claimName: ignite-volume-claim # Must be equal to the PersistentVolumeClaim created before.
containers:
- name: ignite-node
image: ignite.azurecr.io/apacheignite/ignite:2.7.0-SNAPSHOT
env:
- name: OPTION_LIBS
value: ignite-kubernetes
- name: CONFIG_URI
value: https://file-location
- name: IGNITE_H2_DEBUG_CONSOLE
value: 'true'
- name: IGNITE_QUIET
value: 'false'
- name: java.net.preferIPv4Stack
value: 'true'
- name: JVM_OPTS
value: -server -Xms10g -Xmx10g -XX:+AlwaysPreTouch -XX:+UseG1GC -XX:+ScavengeBeforeFullGC -XX:+DisableExplicitGC
ports:
- containerPort: 47100 # communication SPI port number.
- containerPort: 47500 # discovery SPI port number.
- containerPort: 49112 # JMX port number.
- containerPort: 8080 # REST port number.
- containerPort: 10800 # SQL port number.
- containerPort: 11211 # Activating the baseline (port)
imagePullSecrets:
- name: docker-cred
I was trying to activate the cluster remotely by providing the --host parameter, like:
./control.sh --host x.x.x.x --activate
Instead, I tried activating the cluster by logging into one of the Kubernetes nodes and activating from there. The detailed steps are mentioned here

Trouble configuring HTTP(S) for an nginx-ingress

I'm currently trying to create an ingress following the SSL-termination approach, which allows me to connect to a service via both HTTP and HTTPS.
I managed to create a working ingress for HTTP, and partly for HTTPS, but not both together.
Here's my config.
Ingress Controller: Deployment & Service
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: nginx-ingress-controller
spec:
replicas: 1
revisionHistoryLimit: 3
template:
metadata:
labels:
k8s-app: nginx-ingress-lb
spec:
containers:
- args:
- /nginx-ingress-controller
- "--default-backend-service=$(POD_NAMESPACE)/default-http-backend"
env:
<!-- default config omitted -->
image: "quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0-beta.17"
imagePullPolicy: Always
livenessProbe:
<!-- omitted -->
name: nginx-ingress-controller
ports:
- containerPort: 80
name: http
protocol: TCP
- containerPort: 443
name: https
protocol: TCP
volumeMounts:
- mountPath: /etc/nginx-ssl/tls
name: tls-vol
terminationGracePeriodSeconds: 60
volumes:
- name: tls-vol
secret:
secretName: tls-test-project-secret
---
apiVersion: v1
kind: Service
metadata:
name: nginx-ingress
spec:
type: NodePort
ports:
- name: http
port: 80
targetPort: http
nodePort: 31115
- name: https
port: 443
targetPort: https
nodePort: 31116
selector:
k8s-app: nginx-ingress-lb
Ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: nginx-ingress
annotations:
kubernetes.io/ingress.class: "nginx"
ingress.kubernetes.io/secure-backends: "false"
# modified this to false for http & https-scenario
ingress.kubernetes.io/ssl-redirect: "true"
# modified this to false for http & https-scenario
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
ingress.kubernetes.io/add-base-url: "true"
spec:
tls:
- hosts:
- author.k8s-test
secretName: tls-test-project-secret
rules:
- host: author.k8s-test
http:
paths:
- path: /
backend:
serviceName: cms-author
servicePort: 8080
Backend - Service
apiVersion: v1
kind: Service
metadata:
name: cms-author
spec:
selector:
run: cms-author
ports:
- name: http
protocol: TCP
port: 8080
targetPort: 8080
Backend-Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
name: cms-author
spec:
selector:
matchLabels:
run: cms-author
replicas: 1
template:
metadata:
labels:
run: cms-author
spec:
containers:
- name: cms-author
image: <someDockerRegistryUrl>/magnolia:kube-dev
imagePullPolicy: Always
ports:
- containerPort: 8080
I have several issues. When following the HTTPS-only scenario, I can reach the application via the ingress HTTPS nodePort, but I can't log in, as the following request goes via HTTP instead of HTTPS. If I manually put https:// in front of the URL in the browser, it works again and any further request goes via HTTPS, but I don't know why :(
The final setup (supporting both HTTP and HTTPS) is not working at all: if I try to access the app via the HTTP nodePort of the ingress, it always redirects to SSL, even though in this scenario I configured ssl-redirect to false.
I have read many posts on GitHub dealing with this, but none of them worked for me.
I've changed the nginx-controller image from gce_containers to quay.io; also not working.
I've tried some older versions; also not working.
Deploy the nginx ingress controller from the official kubernetes charts repo https://github.com/kubernetes/charts/tree/master/stable/nginx-ingress by setting the helm arguments controller.service.targetPorts.https and controller.service.nodePorts.https. Once they are set, the appropriate NodePort (443) will be configured by helm.
Helm uses the YAML files in https://github.com/kubernetes/charts/tree/master/stable/nginx-ingress/templates.
Along with the nginx ingress controller, you'll need an ingress resource too. Refer https://github.com/nginxinc/kubernetes-ingress/tree/master/examples/complete-example for examples.
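A hedged sketch of the corresponding Helm values for the controller service (the keys are the ones named above; the NodePort numbers are simply reused from the question):

controller:
  service:
    type: NodePort
    # map the service's http/https ports to the controller's named container ports
    targetPorts:
      http: http
      https: https
    # pin the NodePorts instead of letting Kubernetes assign random ones
    nodePorts:
      http: 31115
      https: 31116

The same values can be passed on the command line with --set, e.g. --set controller.service.nodePorts.https=31116.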