I am trying to set up an Apache Ignite cluster with persistence enabled, running on Azure Kubernetes with 10 nodes. The problem is that cluster activation seems to get stuck, although I am able to activate a 3-node cluster in less than 5 minutes.
Here is the configuration I am using to start the cluster:
apiVersion: v1
kind: Service
metadata:
name: ignite-main
annotations:
service.beta.kubernetes.io/azure-load-balancer-internal: "true"
labels:
main: ignite-main
spec:
type: LoadBalancer
externalTrafficPolicy: Cluster
ports:
- port: 10800 # JDBC port
targetPort: 10800
name: jdbc
- port: 11211 # Activating the baseline (port)
targetPort: 11211
name: control
- port: 8080 # REST port
targetPort: 8080
name: rest
selector:
main: ignite-main
---
#########################################
# Ignite service configuration
#########################################
# Service for discovery of ignite nodes
apiVersion: v1
kind: Service
metadata:
name: ignite
labels:
app: ignite
spec:
clusterIP: None
# externalTrafficPolicy: Cluster
ports:
# - port: 9042 # custom value.
# name: discovery
- port: 47500
name: discovery
- port: 47100
name: communication
- port: 11211
name: control
selector:
app: ignite
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: ignite-cluster
labels:
app: ignite
main: ignite-main
spec:
selector:
matchLabels:
app: ignite
main: ignite-main
replicas: 5
template:
metadata:
labels:
app: ignite
main: ignite-main
spec:
volumes:
- name: ignite-storage
persistentVolumeClaim:
claimName: ignite-volume-claim # Must be equal to the PersistentVolumeClaim created before.
containers:
- name: ignite-node
image: ignite.azurecr.io/apacheignite/ignite:2.7.0-SNAPSHOT
env:
- name: OPTION_LIBS
value: ignite-kubernetes
- name: CONFIG_URI
value: https://file-location
- name: IGNITE_H2_DEBUG_CONSOLE
value: 'true'
- name: IGNITE_QUIET
value: 'false'
- name: java.net.preferIPv4Stack
value: 'true'
- name: JVM_OPTS
value: -server -Xms10g -Xmx10g -XX:+AlwaysPreTouch -XX:+UseG1GC -XX:+ScavengeBeforeFullGC -XX:+DisableExplicitGC
ports:
- containerPort: 47100 # communication SPI port number.
- containerPort: 47500 # discovery SPI port number.
- containerPort: 49112 # JMX port number.
- containerPort: 8080 # REST port number.
- containerPort: 10800 # SQL port number.
- containerPort: 11211 # Activating the baseline (port)
imagePullSecrets:
- name: docker-cred
I was trying to activate the cluster remotely by providing the --host parameter, like:
./control.sh --host x.x.x.x --activate
Instead, I tried activating the cluster by logging into one of the Kubernetes nodes and activating from there. The detailed steps are mentioned here
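For comparison, here is a minimal sketch of in-cluster activation that avoids the LoadBalancer entirely. The pod name is a placeholder, the control.sh path assumes the layout of the official apacheignite/ignite image, and the REST call assumes the ignite-rest-http module is enabled (the manifest above exposes the 8080 REST port):
# Pick any Ignite pod
kubectl get pods -l app=ignite
# Check that all server nodes actually joined the topology before activating
kubectl port-forward <ignite-pod> 8080:8080 &
curl "http://localhost:8080/ignite?cmd=top"
# Activate from inside the pod; control.sh talks to localhost:11211 by default
kubectl exec -it <ignite-pod> -- /opt/ignite/apache-ignite/bin/control.sh --activate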
I'm trying to build ALB -> Kube -> Dex using the Load Balancer Controller. As a result, I have an ALB that correctly binds the instances into the target group, but the instances are reported Unhealthy.
The Load Balancer Controller uses 31845 (the NodePort) as the health check port. I tried port 5556 as well, but the targets are still unhealthy.
So I assume that setting is correct, but I'm not sure. Another possibility is that the Dex container isn't set up correctly, and yet another is that I configured everything the wrong way.
Has anyone already configured Dex this way and can point me in the right direction?
Dex service
apiVersion: v1
kind: Service
metadata:
name: dex
...
spec:
ports:
- name: http
protocol: TCP
appProtocol: http
port: 5556
targetPort: http
nodePort: 31845
...
selector:
app.kubernetes.io/instance: dex
app.kubernetes.io/name: dex
clusterIP: 172.20.97.132
clusterIPs:
- 172.20.97.132
type: NodePort
sessionAffinity: None
externalTrafficPolicy: Cluster
ipFamilies:
- IPv4
ipFamilyPolicy: SingleStack
internalTrafficPolicy: Cluster
DEX pod
containerStatuses:
- name: dex
state:
running:
startedAt: '2022-09-19T17:41:43Z'
...
containers:
- name: dex
image: ghcr.io/dexidp/dex:v2.34.0
args:
- dex
- serve
- '--web-http-addr'
- 0.0.0.0:5556
- '--telemetry-addr'
- 0.0.0.0:5558
- /etc/dex/config.yaml
ports:
- name: http
containerPort: 5556
protocol: TCP
- name: telemetry
containerPort: 5558
protocol: TCP
env:
- name: ARGO_WORKFLOWS_SSO_CLIENT_SECRET
Load Balancer Controller Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ${name_http_ingress}
namespace: ${namespace}
labels:
app.kubernetes.io/component: server
app.kubernetes.io/instance: argo-cd
app.kubernetes.io/part-of: argocd
app.kubernetes.io/name: argocd-server
annotations:
alb.ingress.kubernetes.io/ingress.class: alb
alb.ingress.kubernetes.io/backend-protocol: HTTP
alb.ingress.kubernetes.io/backend-protocol-version: HTTP1
alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
alb.ingress.kubernetes.io/healthcheck-port: traffic-port
alb.ingress.kubernetes.io/healthcheck-path: /
alb.ingress.kubernetes.io/healthcheck-timeout-seconds: '10'
alb.ingress.kubernetes.io/unhealthy-threshold-count: '3'
alb.ingress.kubernetes.io/success-codes: 200,301,302,307
alb.ingress.kubernetes.io/conditions.argogrpc: >-
[{"field":"http-header","httpHeaderConfig":{"httpHeaderName": "Content-Type", "values":["^application/grpc.*$"]}}]
alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
alb.ingress.kubernetes.io/actions.ssl-redirect: >-
{"type":"redirect","redirectConfig":{"port":"443","protocol":"HTTPS","statusCode":"HTTP_301"}}
# external-dns.alpha.kubernetes.io/hostname: ${domain_name_public}
alb.ingress.kubernetes.io/certificate-arn: ${domain_certificate}
# alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-TLS-1-2-Ext-2018-06
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/load-balancer-name: ${name_http_ingress}
alb.ingress.kubernetes.io/target-type: instance
# alb.ingress.kubernetes.io/target-type: ip # require to enable sticky sessions ,stickiness.enabled=true,stickiness.lb_cookie.duration_seconds=60
alb.ingress.kubernetes.io/target-group-attributes: load_balancing.algorithm.type=least_outstanding_requests
alb.ingress.kubernetes.io/target-node-labels: ${tolerations_key}=${tolerations_value}
alb.ingress.kubernetes.io/tags: Environment=${tags_env},Restricted=false,Customer=customer,Project=ops,Name=${name_http_ingress}
alb.ingress.kubernetes.io/load-balancer-attributes: routing.http2.enabled=true,idle_timeout.timeout_seconds=180
spec:
ingressClassName: alb
tls:
- hosts:
- ${domain_name_public}
- ${domain_name_public_dex}
rules:
- host: ${domain_name_public}
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: ssl-redirect
port:
name: use-annotation
- host: ${domain_name_public_dex}
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: ssl-redirect
port:
name: use-annotation
- host: ${domain_name_public_dex}
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: dex
port:
number: 5556
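One way to narrow this down is to reproduce the ALB health check by hand from inside the VPC, against the same NodePort the target group probes. This is only a debugging sketch: the node IP is a placeholder, and /healthz assumes Dex's built-in health endpoint responds there, which is worth verifying for your Dex version:
# What the ALB sees with healthcheck-path: /
curl -v http://<node-ip>:31845/
# Dex's health endpoint, in case the root path returns a non-2xx code
curl -v http://<node-ip>:31845/healthz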
I have an OpenShift 3 cluster containing the following two containers: selenium-hub and selenium-node-chrome. Please see the deployment and service YAML files attached below.
Hub Deployment:
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
labels:
app: selenium-hub
selenium-hub: master
name: selenium-hub
spec:
replicas: 1
selector:
type: selenium-hub
template:
metadata:
labels:
type: selenium-hub
name: selenium-hub
spec:
containers:
- image: 'selenium/hub:latest'
imagePullPolicy: IfNotPresent
name: master
ports:
- containerPort: 4444
protocol: TCP
- containerPort: 4442
protocol: TCP
- containerPort: 4443
protocol: TCP
triggers:
- type: ConfigChange
Hub Service:
apiVersion: v1
kind: Service
metadata:
labels:
app: selenium-hub
selenium-hub: master
name: selenium-hub
spec:
ports:
- name: selenium-hub
port: 4444
protocol: TCP
targetPort: 4444
- name: publish
port: 4442
protocol: TCP
targetPort: 4442
- name: subscribe
port: 4443
protocol: TCP
targetPort: 4443
selector:
type: selenium-hub
type: ClusterIP
Node Deployment:
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
labels:
app: selenium-node-chrome
name: selenium-node-chrome
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
browser: chrome
template:
metadata:
labels:
app: node-chrome
browser: chrome
name: selenium-node-chrome-master
spec:
containers:
- env:
- name: SE_EVENT_BUS_HOST
value: selenium-hub
- name: SE_EVENT_BUS_PUBLISH_PORT
value: '4442'
- name: SE_EVENT_BUS_SUBSCRIBE_PORT
value: '4443'
- name: SE_NODE_HOST
value: node-chrome
- name: SE_NODE_PORT
value: '5555'
image: 'selenium/node-chrome:4.0.0-20211102'
imagePullPolicy: IfNotPresent
name: master
ports:
- containerPort: 5555
protocol: TCP
triggers:
- type: ConfigChange
Node Service:
apiVersion: v1
kind: Service
metadata:
labels:
app: selenium-node-chrome
name: selenium-node-chrome
spec:
ports:
- name: node-port
port: 5555
protocol: TCP
targetPort: 5555
- name: node-port-grid
port: 4444
protocol: TCP
targetPort: 4444
selector:
browser: chrome
type: ClusterIP
My Issue:
The hub and the node are starting, but the node just keeps sending the registration event, and the hub is logging some info which I don't really understand. Please see the logs attached below.
Node Log:
Setting up SE_NODE_GRID_URL...
Selenium Grid Node configuration:
[events]
publish = "tcp://selenium-hub:4442"
subscribe = "tcp://selenium-hub:4443"
[server]
host = "node-chrome"
port = "5555"
[node]
session-timeout = "300"
override-max-sessions = false
detect-drivers = false
max-sessions = 1
[[node.driver-configuration]]
display-name = "chrome"
stereotype = '{"browserName": "chrome", "browserVersion": "95.0", "platformName": "Linux"}'
max-sessions = 1
Starting Selenium Grid Node...
11:34:31.635 INFO [LoggingOptions.configureLogEncoding] - Using the system default encoding
11:34:31.643 INFO [OpenTelemetryTracer.createTracer] - Using OpenTelemetry for tracing
11:34:31.774 INFO [UnboundZmqEventBus.<init>] - Connecting to tcp://selenium-hub:4442 and tcp://selenium-hub:4443
11:34:31.843 INFO [UnboundZmqEventBus.<init>] - Sockets created
11:34:32.854 INFO [UnboundZmqEventBus.<init>] - Event bus ready
11:34:33.018 INFO [NodeServer.createHandlers] - Reporting self as: http://node-chrome:5555
11:34:33.044 INFO [NodeOptions.getSessionFactories] - Detected 1 available processors
11:34:33.115 INFO [NodeOptions.report] - Adding chrome for {"browserVersion": "95.0","browserName": "chrome","platformName": "Linux","se:vncEnabled": true} 1 times
11:34:33.130 INFO [Node.<init>] - Binding additional locator mechanisms: name, relative, id
11:34:33.471 INFO [NodeServer$1.start] - Starting registration process for node id 2832e819-cf31-4bd9-afcc-cd2b27578d58
11:34:33.473 INFO [NodeServer.execute] - Started Selenium node 4.0.0 (revision 3a21814679): http://node-chrome:5555
11:34:33.476 INFO [NodeServer$1.lambda$start$1] - Sending registration event...
11:34:43.479 INFO [NodeServer$1.lambda$start$1] - Sending registration event...
11:34:53.481 INFO [NodeServer$1.lambda$start$1] - Sending registration event...
Hub Log:
2021-12-07 11:14:22,663 INFO spawned: 'selenium-grid-hub' with pid 11
2021-12-07 11:14:23,664 INFO success: selenium-grid-hub entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
11:14:23.953 INFO [LoggingOptions.configureLogEncoding] - Using the system default encoding
11:14:23.961 INFO [OpenTelemetryTracer.createTracer] - Using OpenTelemetry for tracing
11:14:24.136 INFO [BoundZmqEventBus.<init>] - XPUB binding to [binding to tcp://*:4442, advertising as tcp://XXXXXXX:4442], XSUB binding to [binding to tcp://*:4443, advertising as tcp://XXXXXX:4443]
11:14:24.246 INFO [UnboundZmqEventBus.<init>] - Connecting to tcp://XXXXXX:4442 and tcp://XXXXXXX:4443
11:14:24.275 INFO [UnboundZmqEventBus.<init>] - Sockets created
11:14:25.278 INFO [UnboundZmqEventBus.<init>] - Event bus ready
11:14:26.232 INFO [Hub.execute] - Started Selenium Hub 4.1.0 (revision 87802e897b): http://XXXXXXX:4444
11:14:46.965 INFO [Node.<init>] - Binding additional locator mechanisms: name, relative, id
11:15:46.916 INFO [Node.<init>] - Binding additional locator mechanisms: relative, name, id
11:17:52.377 INFO [Node.<init>] - Binding additional locator mechanisms: relative, id, name
Can anyone tell me why the hub won't register the node?
If you need any further information, let me know.
Thanks a lot
So, a bit late, but I had this same issue. The docker-compose example gave me selenium-hub as the host, which is correct in that scenario, as it points towards the container defined by the selenium-hub service.
However, in Kubernetes, the inter-pod communication needs to go via a Service. There are multiple kinds of Service, but in order to access it from inside the cluster, it's easiest in this case to use a ClusterIP (docs here for more info).
The way I resolved it was to have a Service for both the ports that the event bus uses:
bus-publisher (port 4442)
bus-subscription (port 4443)
In a manifest yaml, this looks like:
apiVersion: v1
kind: Service
metadata:
labels:
app-name: selenium
name: bus-sub
namespace: selenium
spec:
ports:
- port: 4443
protocol: TCP
targetPort: 4443
selector:
app: selenium-hub
type: ClusterIP
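For completeness, the publisher side follows the same pattern. This is a sketch; the name bus-pub is mine, mirroring the bus-sub Service above with the publish port 4442:
apiVersion: v1
kind: Service
metadata:
  labels:
    app-name: selenium
  name: bus-pub
  namespace: selenium
spec:
  ports:
  - port: 4442
    protocol: TCP
    targetPort: 4442
  selector:
    app: selenium-hub
  type: ClusterIP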
You didn't expose ports 4443 and 4442 from the hub container (see the ports section of spec.containers).
You are on the same machine, so I think you don't need to use the SE_NODE_HOST environment variable in the node deployment; use only these variables:
SE_EVENT_BUS_HOST=selenium-hub
SE_EVENT_BUS_PUBLISH_PORT=4442
SE_EVENT_BUS_SUBSCRIBE_PORT=4443
If you aren't on the same VM, you need to configure the node deployment correctly by using these environment variables:
SE_EVENT_BUS_HOST=<ip-of-hub-machine>
SE_EVENT_BUS_PUBLISH_PORT=4442
SE_EVENT_BUS_SUBSCRIBE_PORT=4443
SE_NODE_HOST=<ip-of-node-machine>
Please don't add unused environment variables like SE_NODE_PORT: the Selenium image doesn't support environment variables beyond the ones documented in the 'docker-selenium' project on GitHub: https://github.com/SeleniumHQ/docker-selenium.
If you really want to use your own variables, create your own Selenium image (I don't recommend that). I succeeded with what I described above.
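Expressed as a YAML fragment for the node deployment's container (the same-machine case from above), that would look like:
env:
- name: SE_EVENT_BUS_HOST
  value: selenium-hub
- name: SE_EVENT_BUS_PUBLISH_PORT
  value: '4442'
- name: SE_EVENT_BUS_SUBSCRIBE_PORT
  value: '4443'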
I'm running Traefik 1.7.3 on a single node Kubernetes cluster and I'm trying to get the real user IP from the X-Forwarded-For header but what I get instead is X-Forwarded-For: 10.244.0.1 which is an IP in my k8s cluster.
Here's my Traefik deployment and service:
---
apiVersion: v1
kind: ConfigMap
metadata:
name: traefik-conf
data:
traefik.toml: |
# traefik.toml
debug = true
logLevel = "DEBUG"
defaultEntryPoints = ["http","https"]
[entryPoints]
[entryPoints.http]
address = ":80"
compress = true
[entryPoints.http.forwardedHeaders]
trustedIPs = [ "0.0.0.0/0" ]
entryPoint = "https"
[entryPoints.https]
address = ":443"
compress = true
[entryPoints.https.forwardedHeaders]
trustedIPs = [ "0.0.0.0/0" ]
[entryPoints.https.tls]
[acme]
email = "xxxx"
storage = "/acme/acme.json"
entryPoint = "https"
onHostRule = true
#caServer = "https://acme-staging-v02.api.letsencrypt.org/directory"
acmeLogging = true
[[acme.domains]]
main = "xxxx"
[acme.dnsChallenge]
provider = "route53"
delayBeforeCheck = 0
[persistence]
enabled = true
existingClaim = "pvc0"
annotations = {}
accessMode = "ReadWriteOnce"
size = "1Gi"
[kubernetes]
namespaces = ["default"]
[accessLog]
filePath = "/acme/access.log"
[accessLog.fields]
defaultMode = "keep"
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
name: traefik-ingress-controller
namespace: default
labels:
k8s-app: traefik-ingress-lb
spec:
replicas: 1
selector:
matchLabels:
k8s-app: traefik-ingress-lb
template:
metadata:
labels:
k8s-app: traefik-ingress-lb
name: traefik-ingress-lb
spec:
serviceAccountName: traefik-ingress-controller
terminationGracePeriodSeconds: 60
containers:
- image: traefik
name: traefik-ingress-lb
env:
- name: AWS_ACCESS_KEY_ID
value: xxxx
- name: AWS_SECRET_ACCESS_KEY
value: xxxx
- name: AWS_REGION
value: us-west-2
- name: AWS_HOSTED_ZONE_ID
value: xxxx
ports:
- name: http
containerPort: 80
- name: admin
containerPort: 8080
args:
- --api
- --kubernetes
- --configfile=/config/traefik.toml
volumeMounts:
- mountPath: /config
name: config
- mountPath: /acme
name: acme
volumes:
- name: config
configMap:
name: traefik-conf
- name: acme
persistentVolumeClaim:
claimName: "pvc0"
---
kind: Service
apiVersion: v1
metadata:
name: traefik-ingress-service
namespace: default
spec:
externalIPs:
- x.x.x.x
externalTrafficPolicy: Local
selector:
k8s-app: traefik-ingress-lb
ports:
- protocol: TCP
port: 80
name: web
- protocol: TCP
port: 443
name: https
- protocol: TCP
port: 8080
name: admin
type: NodePort
And here's my ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: headers-test
namespace: default
annotations:
ingress.kubernetes.io/proxy-body-size: 500m
kubernetes.io/ingress.class: traefik
spec:
rules:
- host: xxxx
http:
paths:
- path: /
backend:
serviceName: headers-test
servicePort: 8080
I'd read that I only needed to add [entryPoints.http.forwardedHeaders] and a list of trustedIPs but that doesn't seem to work. Am I missing something?
If you use NodePort for the Traefik ingress Service, you will have to set service.spec.externalTrafficPolicy to "Local". Otherwise you will get SNAT when your connection enters the K8s cluster. This SNAT is necessary to forward the incoming connection to your pod if it is not running on the same node.
But be aware that with service.spec.externalTrafficPolicy set to "Local", only the node on which the Traefik pod is running will accept requests on 80, 443, and 8080. There is no forwarding to the pod from the other nodes anymore, which can result in odd delays when connecting to your service. To avoid that, your Traefik would need to run in an HA setup (a DaemonSet). Just keep in mind that you need a K/V store for a distributed Traefik setup to make Let's Encrypt work well.
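A minimal sketch of that DaemonSet variant, reusing the pod template from the Deployment above (shortened; the hostPort bindings are my assumption for exposing 80/443 directly on every node, everything else is unchanged):
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
  name: traefik-ingress-controller
  namespace: default
  labels:
    k8s-app: traefik-ingress-lb
spec:
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
    spec:
      serviceAccountName: traefik-ingress-controller
      containers:
      - image: traefik
        name: traefik-ingress-lb
        args:
        - --api
        - --kubernetes
        - --configfile=/config/traefik.toml
        ports:
        - name: http
          containerPort: 80
          hostPort: 80
        - name: https
          containerPort: 443
          hostPort: 443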
If the service.spec.externalTrafficPolicy setting does not resolve your problem, you might also need to configure the Kubernetes overlay network to not do any SNAT.
service.spec.externalTrafficPolicy is nicely explained here:
https://kubernetes.io/docs/tutorials/services/source-ip/
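The manifest above already sets externalTrafficPolicy: Local; for reference, the same setting can also be applied to a live Service without re-applying the whole manifest (service name taken from the manifest above):
kubectl patch svc traefik-ingress-service -n default \
  -p '{"spec":{"externalTrafficPolicy":"Local"}}'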
I'm trying to set up a Redis cluster in Kubernetes. The major requirement is that all nodes of the Redis cluster have to be reachable from outside Kubernetes, so that clients can connect to every node directly. But I have no idea how to configure the service that way.
Here is the basic config of the cluster right now. It works for services inside k8s, but there is no full access from outside.
apiVersion: v1
kind: ConfigMap
metadata:
name: redis-cluster
labels:
app: redis-cluster
data:
redis.conf: |+
cluster-enabled yes
cluster-require-full-coverage no
cluster-node-timeout 15000
cluster-config-file /data/nodes.conf
cluster-migration-barrier 1
appendonly no
protected-mode no
---
apiVersion: v1
kind: Service
metadata:
annotations:
service.alpha.kubernetes.io/tolerate-unready-endpoints: "false"
name: redis-cluster
labels:
app: redis-cluster
spec:
type: NodePort
ports:
- port: 6379
targetPort: 6379
name: client
- port: 16379
targetPort: 16379
name: gossip
selector:
app: redis-cluster
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: redis-cluster
labels:
app: redis-cluster
spec:
serviceName: redis-cluster
replicas: 6
template:
metadata:
labels:
app: redis-cluster
spec:
hostNetwork: true
containers:
- name: redis-cluster
image: redis:4.0.10
ports:
- containerPort: 6379
name: client
- containerPort: 16379
name: gossip
command: ["redis-server"]
args: ["/conf/redis.conf"]
readinessProbe:
exec:
command:
- sh
- -c
- "redis-cli -h $(hostname) ping"
initialDelaySeconds: 15
timeoutSeconds: 5
livenessProbe:
exec:
command:
- sh
- -c
- "redis-cli -h $(hostname) ping"
initialDelaySeconds: 20
periodSeconds: 3
volumeMounts:
- name: conf
mountPath: /conf
readOnly: false
volumes:
- name: conf
configMap:
name: redis-cluster
items:
- key: redis.conf
path: redis.conf
Given:
spec:
hostNetwork: true
containers:
- name: redis-cluster
ports:
- containerPort: 6379
name: client
It appears that your StatefulSet is misconfigured, since if hostNetwork is true, you have to provide hostPort, and that value should match containerPort, according to the PodSpec docs:
hostPort integer - Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort.
https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.11/#containerport-v1-core
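Under that reading, the fix is a ports section like the following sketch in the StatefulSet's container spec, with hostPort values mirroring the containerPorts as the API requires when hostNetwork is true:
ports:
- containerPort: 6379
  hostPort: 6379
  name: client
- containerPort: 16379
  hostPort: 16379
  name: gossip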
I'm currently trying to create an ingress following the SSL-termination approach, which should allow me to connect to a service via both HTTP and HTTPS.
I managed to create a working ingress for HTTP, and partly for HTTPS, but not both together.
Here's my config:
Ingress Controller: Deployment & Service
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: nginx-ingress-controller
spec:
replicas: 1
revisionHistoryLimit: 3
template:
metadata:
labels:
k8s-app: nginx-ingress-lb
spec:
containers:
- args:
- /nginx-ingress-controller
- "--default-backend-service=$(POD_NAMESPACE)/default-http-backend"
env:
<!-- default-config omitted -->
image: "quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0-beta.17"
imagePullPolicy: Always
livenessProbe:
<!-- omitted -->
name: nginx-ingress-controller
ports:
- containerPort: 80
name: http
protocol: TCP
- containerPort: 443
name: https
protocol: TCP
volumeMounts:
- mountPath: /etc/nginx-ssl/tls
name: tls-vol
terminationGracePeriodSeconds: 60
volumes:
- name: tls-vol
secret:
secretName: tls-test-project-secret
---
apiVersion: v1
kind: Service
metadata:
name: nginx-ingress
spec:
type: NodePort
ports:
- name: http
port: 80
targetPort: http
nodePort: 31115
- name: https
port: 443
targetPort: https
nodePort: 31116
selector:
k8s-app: nginx-ingress-lb
Ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: nginx-ingress
annotations:
kubernetes.io/ingress.class: "nginx"
ingress.kubernetes.io/secure-backends: "false"
# modified this to false for http & https-scenario
ingress.kubernetes.io/ssl-redirect: "true"
# modified this to false for http & https-scenario
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
ingress.kubernetes.io/add-base-url: "true"
spec:
tls:
- hosts:
- author.k8s-test
secretName: tls-test-project-secret
rules:
- host: author.k8s-test
http:
paths:
- path: /
backend:
serviceName: cms-author
servicePort: 8080
Backend - Service
apiVersion: v1
kind: Service
metadata:
name: cms-author
spec:
selector:
run: cms-author
ports:
- name: http
protocol: TCP
port: 8080
targetPort: 8080
Backend-Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
name: cms-author
spec:
selector:
matchLabels:
run: cms-author
replicas: 1
template:
metadata:
labels:
run: cms-author
spec:
containers:
- name: cms-author
image: <someDockerRegistryUrl>/magnolia:kube-dev
imagePullPolicy: Always
ports:
- containerPort: 8080
I have several issues. When following the HTTPS-only scenario, I can reach the application via the ingress HTTPS nodePort, but I can't log in, as the following request goes via HTTP instead of HTTPS. If I manually put https in front of the URL in the browser, it works again and any further request goes via HTTPS, but I don't know why :(
The final setup (supporting both HTTP and HTTPS) is completely broken: if I try to access the app via the HTTP nodePort of the ingress, it always redirects to SSL, even though in this scenario I configured ssl-redirect to false.
I have read many posts on GitHub dealing with this, but none of them worked for me.
I've changed the nginx-controller images from gce_containers to quay.io; also not working.
I've tried some older versions; also not working.
Deploy the nginx ingress controller from the official Kubernetes charts repo, https://github.com/kubernetes/charts/tree/master/stable/nginx-ingress, by setting the Helm arguments controller.service.targetPorts.https and controller.service.nodePorts.https. Once they are set, the appropriate NodePort (443) will be configured by Helm.
Helm uses the YAML files in https://github.com/kubernetes/charts/tree/master/stable/nginx-ingress/templates.
Along with the nginx ingress controller, you'll need an ingress resource too. Refer to https://github.com/nginxinc/kubernetes-ingress/tree/master/examples/complete-example for examples.
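A sketch of the corresponding Helm (v2-era) invocation. The release name is arbitrary, and the nodePort value 31116 is only an example copied from the Service above; adjust both to your setup:
helm install stable/nginx-ingress --name nginx-ingress \
  --set controller.service.targetPorts.https=https \
  --set controller.service.nodePorts.https=31116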