Deploy the same deployment several times in Kubernetes - redis

I intend to deploy my online service that depends on Redis servers in Kubernetes. So far I have:
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: "redis"
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis
        ports:
        - containerPort: 6379
          protocol: TCP
I can also expose redis as a service with:
apiVersion: v1
kind: Service
metadata:
  name: redis
  labels:
    app: redis
spec:
  ports:
  - port: 6379
    protocol: TCP
  selector:
    app: redis
With this, I can run a single pod with a Redis server and expose it.
However, my application needs to contact several Redis servers (the number can be configured but ideally should not change live). It does care which Redis server it talks to, so I can't just use replicas behind a single Service: the Service abstraction would not let me know which instance I am talking to. I understand that the need for my application to know this hinders scalability, and I am happy to lose some flexibility as a result.
I thought of deploying the same Deployment several times and declaring a Service for each, but I could not find anything to do that in the Kubernetes documentation. I could of course copy-paste my Deployment and Service YAML files and add a suffix to each, but this seems silly and way too manual (and frankly too complex).
Is there anything in Kubernetes to help me achieve my goal?

You should look into pet sets. A pet set gets a unique, but determinable, name per instance, like redis-0, redis-1, redis-2.
They are also treated as pets, as opposed to Deployment pods, which are treated as cattle.
Be advised that pet sets are harder to upgrade, delete and handle in general, but that's the price of getting the reliability and determinism.
Your deployment as a pet set:
apiVersion: apps/v1alpha1
kind: PetSet
metadata:
  name: redis
spec:
  serviceName: "redis"
  replicas: 3
  template:
    metadata:
      labels:
        app: redis
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"
    spec:
      terminationGracePeriodSeconds: 0
      containers:
      - name: redis
        image: redis
        ports:
        - containerPort: 6379
          protocol: TCP
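The serviceName field above must point at a headless Service (clusterIP: None); that is what gives each pet a stable DNS entry such as redis-0.redis. A minimal sketch, reusing the Service from the question:
apiVersion: v1
kind: Service
metadata:
  name: redis
  labels:
    app: redis
spec:
  clusterIP: None   # headless: no virtual IP, each pet gets its own DNS record
  ports:
  - port: 6379
    protocol: TCP
  selector:
    app: redis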
Also consider using volumes to make it easier to access the data in the pods.
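For example, a volumeClaimTemplates section (added under spec:) gives each pet its own PersistentVolumeClaim. This is only a sketch; the claim name and size are illustrative and it assumes a default StorageClass exists:
volumeClaimTemplates:
- metadata:
    name: redis-data
  spec:
    accessModes: [ "ReadWriteOnce" ]
    resources:
      requests:
        storage: 1Gi
The redis container would then mount it with a matching volumeMounts entry (name: redis-data, mountPath: /data).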

Related

How an IngressRoute is hooked up to Traefik's ingress controller

I am learning Traefik and IngressRoute. The thing that confuses me the most is how the two parts are connected together.
After deploying traefik and my own service, I can simply create the following ingressroute to make it work:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: my-service-ir
  namespace: my-service-ns
spec:
  entryPoints:
  - web
  routes:
  - match: Path(`/`)
    kind: Rule
    services:
    - name: my-service
      port: 8000
But the IngressRoute shares nothing with Traefik: it is not in the same namespace, there is no selector, etc. It seems to me that the IngressRoute can magically find Traefik and be applied to it. I am curious what happens behind the scenes.
Thanks
When you deploy Traefik in the Kubernetes cluster, you use the rbac-k8s manifests like here. If you use Helm, these are all present under the hood.
Together with those manifests, the CustomResourceDefinitions create the new resource types, i.e. IngressRoute, over here.
They are applied at cluster level, as you can see in the linked ClusterRole. This gives Traefik cluster-level privileges, which is why you don't see anything special in the namespace.
You can check out the sample task here, which will shed some more light on the matter.
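In other words, Traefik itself is started with the Kubernetes CRD provider enabled, so it watches IngressRoute objects in every namespace, and the RBAC above is what allows that watch. A rough sketch of the relevant part of the Traefik Deployment (the namespace and image tag are placeholders):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: traefik
  namespace: traefik-system        # placeholder namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: traefik
  template:
    metadata:
      labels:
        app: traefik
    spec:
      serviceAccountName: traefik   # bound to the ClusterRole from the RBAC manifests
      containers:
      - name: traefik
        image: traefik:v2.4          # placeholder tag
        args:
        - --entryPoints.web.address=:80
        - --providers.kubernetescrd  # enables watching IngressRoute CRDs cluster-wide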

How to configure Traefik UDP Ingress?

My UDP setup doesn't work.
In the Traefik pod,
--entryPoints.udp.address=:4001/udp
is added. The port is listening, and the Traefik UI shows a udp entrypoint on port 4001. So the UDP entrypoint on 4001 is working.
I have applied this CRD:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteUDP
metadata:
  name: udp
spec:
  entryPoints:
  - udp
  routes:
  - services:
    - name: udp
      port: 4001
The Kubernetes Service:
apiVersion: v1
kind: Service
metadata:
  name: udp
spec:
  selector:
    app: udp-server
  ports:
  - protocol: UDP
    port: 4001
    targetPort: 4001
I got this error on the Traefik UI:
NAME: default-udp-0#kubernetescrd
ENTRYPOINTS: udp
SERVICE:
ERRORS: the udp service "default-udp-0#kubernetescrd" does not exist
What did I do wrong? Or is it a bug?
traefik version 2.3.1
I ran into the same trouble using k3s/Rancher and Traefik 2.x. The problem was that configuring only the command-line switch showed a working entrypoint in the Traefik dashboard, but it just did not work.
In k3s the solution is to provide a traefik-config.yaml beside the traefik.yaml; traefik.yaml is always recreated on a restart of k3s.
Putting the file at /var/lib/rancher/k3s/server/manifests/traefik-config.yaml keeps the changes persistent.
What is missing is the entrypoint declaration. You may assume this is done as well by the command-line switch, but it is not.
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: traefik
  namespace: kube-system
spec:
  valuesContent: |-
    additionalArguments:
      - "--entryPoints.udp.address=:55000/udp"
    entryPoints:
      udp:
        address: ':55000/udp'
Before going further, check the helm-install jobs in the kube-system namespace. If one of the two helm-install jobs errors out, Traefik won't work.
In case everything worked as above and you still have trouble, one option is simply to expose the UDP traffic as a normal Kubernetes LoadBalancer Service, like this example, which I tested successfully:
apiVersion: v1
kind: Service
metadata:
  name: nginx-udp-ingress-demo-svc-udp
spec:
  selector:
    app: nginx-udp-ingress-demo
  ports:
  - protocol: UDP
    port: 55000
    targetPort: 55000
  type: LoadBalancer
The type: LoadBalancer entry will start a pod on the Kubernetes nodes that sends incoming UDP/55000 traffic to the Service.
This worked for me on a k3s cluster, but it is not the native Traefik solution asked for in the question; it is more of a workaround that makes things work in the first place.
I found a source that seems to cover the native Traefik solution at https://github.com/traefik/traefik/blob/master/docs/content/routing/providers/kubernetes-crd.md.
It appears to have a working solution, but the explanation is very slim and it shows just the manifests. I needed to test this out and come back.
This worked on my system.
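For reference, the native approach from those docs boils down to an IngressRouteUDP that points at a UDP entrypoint declared in the static configuration. A minimal sketch; the service name here is a placeholder for whatever ClusterIP Service fronts the UDP pods:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteUDP
metadata:
  name: udp-demo
spec:
  entryPoints:
  - udp                        # must match an entrypoint such as :55000/udp
  routes:
  - services:
    - name: udp-server-svc     # placeholder Service name
      port: 55000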

How to share an AWS NLB between two EKS Services?

We have a cross-AZ deployment in an EKS cluster inside an AWS region where every AZ is independent, meaning that components do not talk to components that are not in the same AZ.
We are using Contour as our ingress and have different DaemonSets, one for each AZ. As a result, we also have a different Service defined for every DaemonSet.
When deploying the Services to EKS, two different NLBs are created.
We would like to have only one NLB that will be shared between the Services.
The question is: can this be achieved, and if so, how?
Yes, you should be able to do this by using an appropriate selector in your Service.
In each DaemonSet that you use, you have set labels in the Pod template for the pods.
E.g.
template:
  metadata:
    labels:
      app: contour
      az: az-1
and
template:
  metadata:
    labels:
      app: contour
      az: az-2
Now, in your LoadBalancer Service, you need to use a selector that matches the Pods in both of your DaemonSets, e.g. app: contour.
Example Service
apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
spec:
  selector:
    app: contour # this needs to match the Pods in all your DaemonSets
  ports:
  - protocol: TCP
    port: 80
  type: LoadBalancer

ElastiCache Redis Cluster and Istio

I'm trying to connect to my ElastiCache Redis Cluster 5.0 from within a container in EKS that has Istio as a sidecar proxy, but I constantly get a MOVED error-loop.
I have 1 shard with 2 replicas, and I have added a ServiceEntry and a VirtualService for each of the shards plus the configuration endpoint.
Example config used for Istio routing:
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: redis-test-cluster
spec:
  hosts:
  - redis-cluster-test.XXXX.clustercfg.euw1.cache.amazonaws.com
  ports:
  - number: 6379
    name: tcp
    protocol: TCP
  resolution: NONE
  location: MESH_EXTERNAL
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: redis-test-cluster
spec:
  hosts:
  - redis-cluster-test.XXXX.clustercfg.euw1.cache.amazonaws.com
  http:
  - timeout: 30s
    route:
    - destination:
        host: redis-cluster-test.XXXX.clustercfg.euw1.cache.amazonaws.com
Note that the Redis protocol is not HTTP, so you cannot use an http VirtualService.
To control egress access for a TCP protocol like Redis, check the Egress Control for TLS section of the Consuming External MongoDB Services blog post.
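Purely as an illustration (this is not from the linked post), TCP traffic would be routed with a tcp section instead of http in the VirtualService, reusing the host and port from the question:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: redis-test-cluster
spec:
  hosts:
  - redis-cluster-test.XXXX.clustercfg.euw1.cache.amazonaws.com
  tcp:
  - match:
    - port: 6379
    route:
    - destination:
        host: redis-cluster-test.XXXX.clustercfg.euw1.cache.amazonaws.com
        port:
          number: 6379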

Kubernetes can't connect redis on Cluster-IP of service

I have this setup on Google Cloud:
Pod and service with (php) web app
Pod and service with mysql server
Pod and service with redis server
The Kubernetes configuration files for the MySQL server and the Redis server are almost identical; the only differences are the name, the port and the image.
I can connect to the MySQL server from the web app, but I can't connect to the Redis server.
More precisely, I can't reach the Redis server from the web app on its Service CLUSTER-IP, but I can reach it on its pod IP.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: launcher.gcr.io/google/redis4
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
        env:
---
apiVersion: v1
kind: Service
metadata:
  name: redis
  labels:
    app: redis
    role: master
    tier: backend
spec:
  selector:
    app: redis
    role: master
    tier: backend
  ports:
  - port: 6379
    targetPort: 6379
The deployment spec is missing some labels so the service is not selecting it.
Current deployment spec:
metadata:
  labels:
    app: redis
include the other labels required by the service:
metadata:
  labels:
    app: redis
    role: master
    tier: backend
Or, depending on how you want to look at it, the Service spec is trying to match labels that don't exist, so you can change the Service selector from:
selector:
  app: redis
  role: master
  tier: backend
to:
selector:
  app: redis
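For completeness, the Service from the question with the reduced selector would look like this:
apiVersion: v1
kind: Service
metadata:
  name: redis
  labels:
    app: redis
    role: master
    tier: backend
spec:
  selector:
    app: redis          # the only label actually present on the Deployment's pods
  ports:
  - port: 6379
    targetPort: 6379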