Canary deployment strategy using Argo Rollouts and the Service Mesh Interface (Traefik Mesh)

I'm working on a canary deployment strategy. I use the Service Mesh Interface (SMI) traffic routing, after installing Traefik Mesh.
When starting the application for the first time with the command
kubectl apply -f applications.yaml
it should deploy the entire application, i.e. 4 replicas, but it deploys only 20% (1 replica) of the application,
and the rollout goes into a Progressing state with these errors:
TrafficRoutingErro: the server could not find the requested resource (post trafficsplits.splits.smi-spec.io)
TrafficSplitNotCreated: Unable to create traffic Split 'demo-traefficsplit'
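For reference, whether the SMI TrafficSplit CRD is actually registered in the cluster can be checked with standard kubectl commands (generic diagnostics, not part of the error output above):

kubectl get crd | grep smi-spec.io
kubectl api-resources --api-group=split.smi-spec.io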
Here are my manifests:
argocd-rollout.yaml
---
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: demo
  labels:
    app: demo
spec:
  strategy:
    canary:
      steps:
      - setWeight: 20
      - pause:
          duration: "1m"
      - setWeight: 50
      - pause:
          duration: "2m"
      canaryService: demo-canary
      stableService: demo
      trafficRouting:
        smi:
          rootService: demo-smi
          trafficSplitName: demo-trafficsplit
  replicas: 4
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      app: demo
      version: blue
  template:
    metadata:
      labels:
        app: demo
        version: blue
    spec:
      containers:
      - name: demo
        image: argoproj/rollouts-demo:blue
        imagePullPolicy: Always
        ports:
        - name: web
          containerPort: 8080
        resources:
          requests:
            memory: "64Mi"
            cpu: "100m"
          limits:
            memory: "128Mi"
            cpu: "140m"
---
apiVersion: split.smi-spec.io/v1alpha3
kind: TrafficSplit
metadata:
  name: demo-trafficsplit
spec:
  service: demo-smi # controller uses the stableService if Rollout does not specify the rootService field
  backends:
  - service: demo
    weight: 10
  - service: demo-canary
    weight: 90
---
apiVersion: v1
kind: Service
metadata:
  name: demo-smi
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: demo
    version: blue
  type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
  name: demo
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: demo
    version: blue
  type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
  name: demo-canary
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: demo
    version: blue
  type: ClusterIP
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: rollout-ing
spec:
  entryPoints:
    - websecure
  routes:
  - kind: Rule
    match: Host(`mycompagny.com`)
    services:
    - name: demo-smi
      port: 80
  tls:
    certResolver: myresolver
applications.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: net
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: rollout
  namespace: argocd
spec:
  project: default
  source:
    repoURL: git#github.com:telemaqueHQ/DevOps.git
    targetRevision: master
    path: gitOps/test/argocd
  destination:
    server: https://kubernetes.default.svc
    namespace: net
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
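For completeness, once the Application has synced, the rollout can be followed with the Argo Rollouts kubectl plugin (assuming the plugin is installed; the name and namespace match the manifests above):

kubectl argo rollouts get rollout demo -n net --watch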

Related

ingress in AKS for API

I'm trying to deploy an ASP.NET Core API and make it available from outside the cluster through an Ingress. I have followed the steps mentioned in the Learn page. All the steps are working fine; however, I'm unable to access my Ingress on the route /api/opportunities/. Below are my K8s files; might I be missing something?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: opportunities-api
spec:
  replicas: 1
  selector:
    matchLabels:
      component: opportunities-api
  template:
    metadata:
      labels:
        component: opportunities-api
    spec:
      containers:
      - name: opportunities-api
        image: mycontainer.azurecr.io/opportunities-api:{BUILD_NO}
        imagePullPolicy: Always
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: opportunities-api
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    component: opportunities-api
  type: ClusterIP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: opportunities-api
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /api(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: opportunities-api
            port:
              number: 80
I see that the host field is missing in the above Ingress YAML. Did you try adding .spec.rules.host to the Ingress, as below, to see if it helps?
As per the ingress-nginx documentation, this is one of its restrictions.
Also, if you are on AKS v1.24 or later, check what value is set for the annotation service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path on the ingress controller Service. It should be /healthz, as discussed in "AKS Ingress-Nginx ingress controller failing to route by host".
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: opportunities-api
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
  - host: abc.com # your host name here
    http:
      paths:
      - path: /api(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: opportunities-api
            port:
              number: 80
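For reference, the health-probe annotation mentioned above lives on the ingress controller's LoadBalancer Service, not on the Ingress itself. A minimal sketch, assuming a default ingress-nginx install (the Service name, namespace, and labels are assumptions and depend on how the controller was deployed):

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller   # hypothetical name; use whatever your install created
  namespace: ingress-nginx
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path: /healthz
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/component: controller
  ports:
  - name: http
    port: 80
    targetPort: http
  - name: https
    port: 443
    targetPort: https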

ingress controller path based routing for apache applications deployed on kubernetes

I have a Tomcat image with SampleWebApp.war deployed in conf/webapps, and I am deploying this image in a pod on a Kubernetes cluster.
I want to expose a ClusterIP Service pointing to the Tomcat application through an ingress controller.
I can't use "/" as the path in my Ingress because another application is already using the same host and the path "/".
I tried setting the path to "tomcat", but the UI is not accessible from the web.
Below are my YAMLs. Can someone suggest what can be done here?
Deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcatinfra
  namespace: tomcat
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tomcatinfra
  template:
    metadata:
      name: tomcatinfra
      labels:
        app: tomcatinfra
    spec:
      containers:
      - image: saravak/tomcat8
        name: tomcatapp
Service.yaml
kind: Service
apiVersion: v1
metadata:
  name: tomcat-service
  namespace: tomcat
spec:
  type: ClusterIP
  selector:
    app: tomcatinfra
  ports:
  - protocol: TCP
    port: 3000
    targetPort: 8080
Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: tomcat
  namespace: tomcat
spec:
  rules:
  - host: build.com
    http:
      paths:
      - backend:
          serviceName: tomcat-service
          servicePort: 8080
        path: /tomcat
        pathType: ImplementationSpecific
Try adding the ingress class annotation:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: "foo.bar.com"
    http:
      paths:
      - pathType: Prefix
        path: /tomcat
        backend:
          service:
            name: service1
            port:
              number: 80
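If it still does not open in the browser after that, a quick check from outside the cluster is to send a request with the expected Host header (placeholder address; substitute your ingress controller's external IP):

curl -v -H "Host: build.com" http://<ingress-controller-ip>/tomcat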

Simple front end application on EKS via AppMesh

I kindly ask for help finding the problem with my configuration.
It is based on the AWS App Mesh workshop example, just rewritten for a different HTTP container.
Right now, after applying this, everything is up, but when hitting the NLB I get "no healthy upstream".
I have checked the logs and see only 503 errors on my gateway ingress; requests are not reaching my pod at all. Where did I make a mistake in my configuration?
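For context, the 503s were visible in the gateway's Envoy logs, which can be pulled with plain kubectl (the deployment and container names match the manifests below):

kubectl -n shared logs deployment/ingress-gw -c envoy --tail=100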
apiVersion: appmesh.k8s.aws/v1beta2
kind: VirtualGateway
metadata:
  name: ingress-gw
  namespace: shared
spec:
  namespaceSelector:
    matchLabels:
      gateway: shared-gw
  podSelector:
    matchLabels:
      app: ingress-gw
  listeners:
    - portMapping:
        port: 8088
        protocol: http
  logging:
    accessLog:
      file:
        path: /dev/stdout
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpd-echo-deployment
  namespace: shared
  labels:
    app: httpd-echo1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: httpd-echo1
  template:
    metadata:
      labels:
        app: httpd-echo1
      annotations:
        appmesh.k8s.aws/mesh: shared-mesh
    spec:
      containers:
        - name: httpd
          image: hashicorp/http-echo
          args:
            - "-text=test"
          ports:
            - containerPort: 5678
---
apiVersion: v1
kind: Service
metadata:
  namespace: shared
  name: httpd-echo-service
  labels:
    app: httpd-echo1
spec:
  ports:
    - name: "http"
      port: 5678
      targetPort: 5678
  selector:
    app: httpd-echo1
---
apiVersion: appmesh.k8s.aws/v1beta2
kind: VirtualNode
metadata:
  name: shared-virtual-node-1
  namespace: shared
spec:
  podSelector:
    matchLabels:
      app: httpd-echo1
  listeners:
    - portMapping:
        port: 5678
        protocol: http
      healthCheck:
        protocol: http
        path: '/'
        healthyThreshold: 5
        unhealthyThreshold: 5
        timeoutMillis: 2000
        intervalMillis: 5000
  serviceDiscovery:
    dns:
      hostname: httpd-echo1.test.com
  logging:
    accessLog:
      file:
        path: /dev/stdout
---
apiVersion: appmesh.k8s.aws/v1beta2
kind: VirtualService
metadata:
  name: shared-virtual-service-1
  namespace: shared
spec:
  awsName: httpd-echo1.test.com
  provider:
    virtualNode:
      virtualNodeRef:
        name: shared-virtual-node-1
---
apiVersion: appmesh.k8s.aws/v1beta2
kind: GatewayRoute
metadata:
  name: shared-gw-route-1
  namespace: shared
spec:
  httpRoute:
    match:
      prefix: "/"
    action:
      target:
        virtualService:
          virtualServiceRef:
            name: shared-virtual-service-1
---
apiVersion: v1
kind: Service
metadata:
  name: ingress-gw
  namespace: shared
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    service.beta.kubernetes.io/aws-load-balancer-subnets: subnet-1,subnet-2,subnet-3
    service.beta.kubernetes.io/aws-load-balancer-internal: "false"
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 8088
      name: http
  selector:
    app: ingress-gw
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ingress-gw
  namespace: shared
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ingress-gw
  template:
    metadata:
      labels:
        app: ingress-gw
    spec:
      containers:
        - name: envoy
          image: 422531588944.dkr.ecr.eu-south-1.amazonaws.com/aws-appmesh-envoy:v1.16.1.1-prod
          ports:
            - containerPort: 8088
The example I tried to follow: https://github.com/aws-containers/eks-app-mesh-polyglot-demo/tree/cf15e0d8e10c019d332f5378d132a8d620131df8/deployment
I tried to reproduce the same on my side and it worked fine. There are a couple of configuration changes I made to the above YAML.
Added the gateway label "gateway: shared-gw" to the VirtualGateway. Make sure that you have this label on the namespace as well.
Corrected the DNS hostname; this should be your application's ClusterIP Service name:
serviceDiscovery:
  dns:
    hostname: httpd-echo1.shared.svc.cluster.local
Also, ensure that your load balancer is active and that the target group listener for this LB is showing a healthy status.
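A quick way to grab the NLB hostname and confirm the gateway Service has endpoints (generic kubectl; names match the YAML below):

kubectl -n shared get svc ingress-gw
kubectl -n shared get endpoints ingress-gw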
I am adding the updated YAML below. You can try this and see if it works.
---
apiVersion: v1
kind: Namespace
metadata:
  name: shared
  labels:
    mesh: shared-mesh
    gateway: ingress-gw
    appmesh.k8s.aws/sidecarInjectorWebhook: enabled
---
apiVersion: appmesh.k8s.aws/v1beta2
kind: Mesh
metadata:
  name: shared-mesh
spec:
  namespaceSelector:
    matchLabels:
      mesh: shared-mesh
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpd-echo1
  namespace: shared
  labels:
    app: httpd-echo1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: httpd-echo1
  template:
    metadata:
      labels:
        app: httpd-echo1
      annotations:
        appmesh.k8s.aws/mesh: shared-mesh
    spec:
      containers:
        - name: httpd
          image: hashicorp/http-echo
          args:
            - "-text=test"
          ports:
            - containerPort: 5678
---
apiVersion: v1
kind: Service
metadata:
  namespace: shared
  name: httpd-echo1
  labels:
    app: httpd-echo1
spec:
  ports:
    - name: "http"
      port: 5678
      targetPort: 5678
  selector:
    app: httpd-echo1
---
apiVersion: appmesh.k8s.aws/v1beta2
kind: VirtualNode
metadata:
  name: shared-virtual-node-1
  namespace: shared
spec:
  podSelector:
    matchLabels:
      app: httpd-echo1
  listeners:
    - portMapping:
        port: 5678
        protocol: http
      healthCheck:
        protocol: http
        path: '/'
        healthyThreshold: 5
        unhealthyThreshold: 5
        timeoutMillis: 2000
        intervalMillis: 5000
  serviceDiscovery:
    dns:
      hostname: httpd-echo1.shared.svc.cluster.local
  logging:
    accessLog:
      file:
        path: /dev/stdout
---
apiVersion: appmesh.k8s.aws/v1beta2
kind: VirtualService
metadata:
  name: shared-virtual-service-1
  namespace: shared
spec:
  awsName: httpd-echo1.shared.svc.cluster.local
  provider:
    virtualNode:
      virtualNodeRef:
        name: shared-virtual-node-1
---
apiVersion: appmesh.k8s.aws/v1beta2
kind: VirtualGateway
metadata:
  name: ingress-gw
  namespace: shared
spec:
  namespaceSelector:
    matchLabels:
      gateway: ingress-gw
  podSelector:
    matchLabels:
      app: ingress-gw
  listeners:
    - portMapping:
        port: 8088
        protocol: http
  logging:
    accessLog:
      file:
        path: /dev/stdout
---
apiVersion: v1
kind: Service
metadata:
  name: ingress-gw
  namespace: shared
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 8088
      name: http
  selector:
    app: ingress-gw
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ingress-gw
  namespace: shared
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ingress-gw
  template:
    metadata:
      labels:
        app: ingress-gw
    spec:
      containers:
        - name: envoy
          image: 422531588944.dkr.ecr.eu-south-1.amazonaws.com/aws-appmesh-envoy:v1.16.1.1-prod
          ports:
            - containerPort: 8088
---
apiVersion: appmesh.k8s.aws/v1beta2
kind: GatewayRoute
metadata:
  name: shared-gw-route-1
  namespace: shared
spec:
  httpRoute:
    match:
      prefix: "/"
    action:
      target:
        virtualService:
          virtualServiceRef:
            name: shared-virtual-service-1
---

Traefik 2.0 redirect middleware with Google Kubernetes Engine

I'm trying to test and implement Traefik's HTTPS redirect feature in my Kubernetes cluster per Traefik's documentation: https://docs.traefik.io/middlewares/overview/. Here are the definitions of the IngressRoute and Middleware:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: ingressroutetls
  namespace: default
spec:
  entryPoints:
    - web
    - websecure
  routes:
  - match: Host(`your.domain.name`) && Host(`www.your.domain.name`)
    kind: Rule
    services:
    - name: traefik-dashboard
      port: 8080
    middlewares:
    - name: redirectscheme
  tls:
    secretName: cloud-tls
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: redirectscheme
spec:
  redirectScheme:
    scheme: https
However, https://your.domain.name works, while http://your.domain.name gives me a 404 page not found.
Does anyone know what I have misconfigured?
This worked for me:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: a3-ing
  namespace: default
spec:
  entryPoints:
    - websecure
  routes:
  - match: Host(`example.com`)
    kind: Rule
    services:
    - name: whoami
      port: 80
  tls:
    certResolver: default
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: a3-ing-red
  namespace: default
spec:
  entryPoints:
    - web
  routes:
  - match: Host(`example.com`)
    middlewares:
    - name: test-redirectscheme
    kind: Rule
    services:
    - name: whoami
      port: 80
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: a3-ing-www
  namespace: default
spec:
  entryPoints:
    - websecure
  routes:
  - match: Host(`www.example.com`)
    kind: Rule
    services:
    - name: whoami
      port: 80
  tls:
    certResolver: default
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: a3-ing-www-red
  namespace: default
spec:
  entryPoints:
    - web
  routes:
  - match: Host(`www.example.com`)
    kind: Rule
    middlewares:
    - name: test-redirectscheme
    services:
    - name: whoami
      port: 80
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: test-redirectscheme
  namespace: default
spec:
  redirectScheme:
    scheme: https
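As an alternative to a per-host redirect IngressRoute, Traefik v2.2+ can also perform the HTTP-to-HTTPS redirect at the entryPoint level in the static configuration, which avoids duplicating routes; a minimal sketch (e.g. traefik.yml), assuming the usual web/websecure entry points:

entryPoints:
  web:
    address: ":80"
    http:
      redirections:
        entryPoint:
          to: websecure
          scheme: https
  websecure:
    address: ":443"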

Kubernetes Ingress-controller ssl passthrough without termination

I use Kubernetes 1.11.4 for SSL passthrough without termination to the worker nodes. There is one deployment with two pods on each worker node, and I run the ingress controller as a DaemonSet.
##TODO Set up custom default backend
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses/status
    verbs:
      - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      nodeSelector:
        role: edge-router
      serviceAccountName: nginx-ingress-serviceaccount
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.21.0
          args:
            - /nginx-ingress-controller
            - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
            # - --enable-ssl-passthrough=true
            # - --enable-access-log=true
            # - --enable-dynamic-certificates=true
          securityContext:
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 33
            runAsUser: 33
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
---
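One thing worth noting for the passthrough setup: the nginx.ingress.kubernetes.io/ssl-passthrough annotation used further down only takes effect when the controller itself is started with the SSL passthrough flag, which is commented out in the args above. Enabled, the relevant part of the container args would look like this (only the flags concerned are shown):

          args:
            - /nginx-ingress-controller
            # ... other flags as above ...
            - --enable-ssl-passthrough=true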
I set up the controller on a separate worker node that is switched to drain mode. This is the code for the deployment and service:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: meteo
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: meteo
  template:
    metadata:
      labels:
        app: meteo
    spec:
      containers:
      - name: meteo
        image: devprofi/meteo:v11
        ports:
        - containerPort: 443
      imagePullSecrets:
      - name: meteo-secret
---
apiVersion: v1
kind: Service
metadata:
  name: meteo-svc
  namespace: default
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  - port: 443
    targetPort: 443
    protocol: TCP
    name: https
  selector:
    app: meteo
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    # kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    # nginx.ingress.kubernetes.io/secure-backends: "true"
    # nginx.ingress.kubernetes.io/enable-access-log: "true"
    # nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
  name: meteo-ingress
  namespace: default
spec:
  rules:
  - host: meteotravel.ru
    http:
      paths:
      - path: /
        backend:
          serviceName: meteo-svc
          servicePort: 443
  tls:
  - hosts:
    - meteotravel.ru
    # secretName: cafe-secret
This is the default backend:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: default-http-backend
  labels:
    k8s-app: default-http-backend
  namespace: ingress-nginx
spec:
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend
        # Any image is permissible as long as:
        # 1. It serves a 404 page at /
        # 2. It serves 200 on a /healthz endpoint
        image: gcr.io/google_containers/defaultbackend:1.0
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        ports:
        - containerPort: 8080
        # resources:
        #   limits:
        #     cpu: 100m
        #     memory: 200Mi
        #   requests:
        #     cpu: 100m
        #     memory: 200Mi
---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: ingress-nginx
  labels:
    k8s-app: default-http-backend
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    k8s-app: default-http-backend
And this is the NodePort Service for the ingress controller, because I use a bare-metal cluster:
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP
  - name: https
    port: 443
    targetPort: 443
    protocol: TCP
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
But it works strangely: it overloads the single node that acts as the router (where the ingress controller DaemonSet pod runs), while the worker nodes are barely used.
How can I offload the router node and spread the load onto the worker nodes?
In the diagram from Weave Scope I can see that the ingress controller is overloaded.
With ab -n 1000000 -k -c 5000 I get results that I think are far too slow. It would be great if it worked as a simple TCP proxy with a minimum of resources.