How to tweak official Helm charts to prevent deploying some of the resources - redis

I want to deploy Redis in standalone mode, and I believe I don't need the master Service resource. How can I change the default values.yaml to prevent deploying the master Service?
Here is the output of kubectl get all -n redis-ns:
NAME READY STATUS RESTARTS AGE
pod/go-redis-master-0 0/1 Running 0 20s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/go-redis-headless ClusterIP None <none> 6379/TCP 20s
service/go-redis-master ClusterIP 10.97.160.122 <none> 6379/TCP 20s
NAME READY AGE
statefulset.apps/go-redis-master 0/1 20s
Recap: I want to change the Helm default values so the go-redis-master Service is not deployed. I only need the one headless Service.

Change these defaults in the Redis Helm chart's values.yaml:
architecture: replication => architecture: standalone
auth.enabled: true => auth.enabled: false
auth.sentinel: true => auth.sentinel: false
Then remove the helm-app/templates/master/service.yaml file so the chart deploys Redis with only the headless Service.
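Collected into a single override file, the value changes above look roughly like this (key paths assumed to follow the Bitnami Redis chart layout; verify against your chart's values.yaml):

```yaml
# values override sketch -- key paths assumed from the Bitnami Redis chart
architecture: standalone   # single node, no replication
auth:
  enabled: false           # disable password auth
  sentinel: false          # no Sentinel auth either
```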
Deploy with the command below:
$ helm install go-redis -f helm-folder/redis/values.yaml helm-folder/redis --namespace redis-ns

Related

Not able to access the RabbitMQ cluster which is set up using the RabbitMQ Cluster Operator

I have an AWS instance where I have minikube installed. I have also added the RabbitMQ Cluster Operator to it, and then started a RabbitMQ cluster with 3 nodes. I can see the 3 pods, and their logs show no errors. The service for RabbitMQ is started as a LoadBalancer. When I list the URLs for the service, I get the RabbitMQ, RabbitMQ management UI and Prometheus endpoints on their ports. No external IP is generated for the service, so I used the patch command to assign one.
My issue is that the RabbitMQ cluster is running fine with no errors, but I am not able to access it using the public IP of the AWS instance, so other services cannot send messages to it.
Here are all the files --
clientq.yml file
apiVersion: rabbitmq.com/v1beta1
kind: RabbitmqCluster
metadata:
  name: clientq
spec:
  replicas: 3
  image: rabbitmq:3.9-management
  resources:
    requests:
      cpu: 500m
      memory: 1Gi
    limits:
      cpu: 1
      memory: 2Gi
  rabbitmq:
    additionalConfig: |
      log.console.level = info
      channel_max = 1700
      default_user = guest
      default_pass = guest
      default_user_tags.administrator = true
  service:
    type: LoadBalancer
The full setup:
kubectl get all
NAME READY STATUS RESTARTS AGE
pod/clientq-server-0 1/1 Running 0 11m
pod/clientq-server-1 1/1 Running 0 11m
pod/clientq-server-2 1/1 Running 0 11m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
service/clientq LoadBalancer 10.108.225.186 12.27.54.12 5672:31063/TCP,15672:31340/TCP,15692:30972/TCP
service/clientq-nodes ClusterIP None <none> 4369/TCP,25672/TCP
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP
NAME READY AGE
statefulset.apps/clientq-server 3/3 11m
NAME ALLREPLICASREADY RECONCILESUCCESS AGE
rabbitmqcluster.rabbitmq.com/clientq True True 11m
Here 12.27.54.12 is the public IP of the instance, which I patched in using:
kubectl patch svc clientq -n default -p '{"spec": {"type": "LoadBalancer", "externalIPs":["12.27.54.12"]}}'
The URLs for the service are:
minikube service clientq --url
http://192.168.49.2:31063
http://192.168.49.2:31340
http://192.168.49.2:30972
I am able to curl these from the instance itself, but I am not able to access them from the public IP of the instance. Did I miss something, or is there a way to expose these ports? Please let me know.
I have enabled all ports for inbound and outbound traffic.

AWS EKS with Fargate pod status pending due to PersistentVolumeClaim not found

I have deployed EKS cluster with Fargate and alb-ingress-access using the following command:
eksctl create cluster --name fargate-cluster --version 1.17 --region us-east-2 --fargate --alb-ingress-access
A Fargate namespace has also been created.
The application being deployed has four containers namely mysql, nginx, redis and web.
The YAML files have been applied to the correct namespace.
The issue I am having is that after applying the YAML files, when I check the pod status I see the following:
NAMESPACE NAME READY STATUS RESTARTS AGE
flipkicks flipkicksdb-7669b44bbb-xww26 0/1 Pending 0 112m
flipkicks flipkicksredis-74bbf9bd8c-p59hb 1/1 Running 0 112m
flipkicks nginx-5b46fd5977-9d8wk 0/1 Pending 0 112m
flipkicks web-56666f5d8-64w4d 1/1 Running 0 112m
MySQL and Nginx pods go into pending status. The deployment YAML for both have the following volumeMounts values:
MYSQL
volumeMounts:
- mountPath: /var/lib/mysql
  name: mysql-db
NGINX
volumeMounts:
- mountPath: "/etc/nginx/conf.d"
  name: nginx-conf
- mountPath: "/var/www/html"
  name: admin-panel
The output from the events part of the kubectl describe command for both pods is:
MYSQL
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling <unknown> fargate-scheduler Pod not supported on Fargate: volumes not supported: mysql-db not supported because: PVC mysql-db not bound
NGINX
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling <unknown> fargate-scheduler Pod not supported on Fargate: volumes not supported: admin-panel is of an unsupported volume Type
Would really appreciate any help in understanding this problem and how to resolve it.
Since your NGINX and MySQL pods require volumeMounts, you will need a PersistentVolumeClaim, which is a request for storage satisfied by a PersistentVolume resource. Your pods can then use the claim as a volume; for more info see Kubernetes Persistent Volumes.
EKS Fargate did not support persistent storage until Aug 17, 2020, when support for the AWS EFS CSI driver was introduced.
You will need to deploy the AWS EFS CSI driver and update your manifests to create the PersistentVolume and PersistentVolumeClaim and have your pods use the claim as a volume. I would suggest starting with the Amazon EFS CSI driver guide to deploy the CSI driver into your EKS Fargate cluster, then update your manifests to match the examples provided there.
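As a rough sketch, an EFS-backed PersistentVolume/PersistentVolumeClaim pair for the mysql-db volume might look like the following (the filesystem ID fs-12345678, the storage class name efs-sc and the sizes are placeholders; EFS does not enforce capacity, but the field is required):

```yaml
# EFS-backed PV/PVC sketch -- fs-12345678 and efs-sc are placeholders
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-db-pv
spec:
  capacity:
    storage: 5Gi              # required field; not enforced by EFS
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-12345678 # your EFS filesystem ID
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-db              # name referenced by the pod's volumes section
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi
```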

Can I have Redis available in my DDEV container?

I use DDEV as a development environment for a TYPO3 project. I want to have Redis server available (for cache).
How can I achieve that?
In order to have Redis available for TYPO3 you need to have:
Redis server
To create a Redis server for your project, just create a file .ddev/docker-compose.redis.yaml with the following content:
# ddev redis recipe file
#
version: '3.6'
services:
  redis:
    container_name: ddev-${DDEV_SITENAME}-redis
    image: redis:4
    restart: always
    ports:
      - 6379
    labels:
      com.ddev.site-name: ${DDEV_SITENAME}
      com.ddev.approot: $DDEV_APPROOT
    environment:
      - VIRTUAL_HOST=$DDEV_HOSTNAME
      - HTTP_EXPOSE=6379
    volumes: []
  web:
    links:
      - redis:$DDEV_HOSTNAME
Configure your application to use Redis
Use redis as the host, and port 6379.
FYI: as of DDEV v1.1.0 (released 15 Aug), PHP-Redis is included in the web container.
https://www.drud.com/ddev-local/ddev-v1-1-0/
"More services! We’ve added PHP-Redis to the web container. We heard repeatedly that not having Redis was a major hurdle for people who wanted to use DDEV. We hope this helps!"
You can get redis with ddev get drud/ddev-redis. There's also ddev get drud/ddev-redis-commander for use with the ddev redis service.
https://ddev.readthedocs.io/en/latest/users/extend/additional-services/

How to configure kubernetes selenium container to reach external net?

I tried the Selenium example with minikube on Windows:
https://github.com/kubernetes/kubernetes/tree/master/examples/selenium
Inside the container I can't install selenium. What should I do?
pip install selenium
cmd:
kubectl run selenium-hub --image selenium/hub:2.53.1 --port 4444
kubectl expose deployment selenium-hub --type=NodePort
kubectl run selenium-node-chrome --image selenium/node-chrome:2.53.1 --env="HUB_PORT_4444_TCP_ADDR=selenium-hub" --env="HUB_PORT_4444_TCP_PORT=4444"
kubectl scale deployment selenium-node-chrome --replicas=4
kubectl run selenium-python --image=google/python-hello
kubectl exec --stdin=true --tty=true selenium-python-6479976d89-ww7jv bash
display:
PS C:\Program Files\Docker Toolbox\dockerfiles> kubectl get pods
NAME READY STATUS RESTARTS AGE
selenium-hub-5ffc6ff7db-gwq95 1/1 Running 0 15m
selenium-node-chrome-8659b47488-brwb4 1/1 Running 0 8m
selenium-node-chrome-8659b47488-dnrwr 1/1 Running 0 8m
selenium-node-chrome-8659b47488-hwvvk 1/1 Running 0 11m
selenium-node-chrome-8659b47488-t8g59 1/1 Running 0 8m
selenium-python-6479976d89-ww7jv 1/1 Running 0 6m
PS C:\Program Files\Docker Toolbox\dockerfiles> kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 17m
selenium-hub NodePort 10.0.0.230 <none> 4444:32469/TCP 16m
PS C:\Program Files\Docker Toolbox\dockerfiles> kubectl exec --stdin=true --tty=true selenium-python-6479976d89-ww7jv bash
root@selenium-python-6479976d89-ww7jv:/app# ping yahoo.com
ping: unknown host yahoo.com
It looks like your pod cannot resolve DNS. Test whether your cluster has a working kube-dns in the kube-system namespace. If it is there and operational, check whether it correctly resolves names when queried directly by pod IP, and verify that your containers have the correct content in /etc/resolv.conf when started.
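To check resolution from inside the cluster, one option is a throwaway debugging pod (the image below is the dnsutils image used in the Kubernetes DNS-debugging documentation; the pod name is arbitrary):

```yaml
# Minimal DNS-debugging pod sketch
apiVersion: v1
kind: Pod
metadata:
  name: dnsutils
spec:
  containers:
  - name: dnsutils
    image: registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3
    command: ["sleep", "infinity"]   # keep the pod alive for kubectl exec
  restartPolicy: Never
```

Then running kubectl exec -it dnsutils -- nslookup kubernetes.default should succeed if kube-dns is healthy.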
You can avoid this problem by providing a ConfigMap that configures kube-dns with custom stub domains and upstream nameservers:
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  stubDomains: |
    {"acme.local": ["1.2.3.4"]}
  upstreamNameservers: |
    ["8.8.8.8"]
See more details in the Kubernetes reference documentation.

Kubernetes Redis Cluster issue

I'm trying to create a Redis cluster using Kubernetes on CentOS. I have my Kubernetes master running on one host and Kubernetes nodes on 2 different hosts.
etcdctl get /kube-centos/network/config
{ "Network": "172.30.0.0/16", "SubnetLen": 24, "Backend": { "Type": "vxlan" } }
Here is my replication controller
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  replicas: 6
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: redis
        command:
        - "redis-server"
        args:
        - "/redis-master/redis.conf"
        ports:
        - containerPort: 6379
        volumeMounts:
        - mountPath: /redis-master
          name: config
        - mountPath: /redis-master-data
          name: data
      volumes:
      - name: data
        emptyDir: {}
      - name: config
        configMap:
          name: redis-config
          items:
          - key: redis-config
            path: redis.conf
kubectl create -f rc.yaml
NAME READY STATUS RESTARTS AGE IP NODE
redis-master-149tt 1/1 Running 0 8s 172.30.96.4 centos-minion-1
redis-master-14j0k 1/1 Running 0 8s 172.30.79.3 centos-minion-2
redis-master-3wgdt 1/1 Running 0 8s 172.30.96.3 centos-minion-1
redis-master-84jtv 1/1 Running 0 8s 172.30.96.2 centos-minion-1
redis-master-fw3rs 1/1 Running 0 8s 172.30.79.4 centos-minion-2
redis-master-llg9n 1/1 Running 0 8s 172.30.79.2 centos-minion-2
Redis config file used:
appendonly yes
cluster-enabled yes
cluster-config-file /redis-master/nodes.conf
cluster-node-timeout 5000
dir /redis-master
port 6379
I used the following command to create the kubernetes service.
kubectl expose rc redis-master --name=redis-service --port=6379 --target-port=6379 --type=NodePort
Name: redis-service
Namespace: default
Labels: app=redis
role=master
tier=backend
Selector: app=redis,role=master,tier=backend
Type: NodePort
IP: 10.254.229.114
Port: <unset> 6379/TCP
NodePort: <unset> 30894/TCP
Endpoints: 172.30.79.2:6379,172.30.79.3:6379,172.30.79.4:6379 + 3 more...
Session Affinity: None
No events.
Now I have all the pods and service up and running. I'm using redis-trib pod to create redis cluster.
kubectl exec -it redis-trib bash
./redis-trib.rb create --replicas 1 172.30.79.2:6379 172.30.79.3:6379 172.30.79.4:6379 172.30.96.2:6379 172.30.96.3:6379 172.30.96.4:6379
Redis Cluster created as expected with the below message.
[OK] All 16384 slots covered.
Now I should be able to access my redis-cluster on kubernetes node IP(192.168.240.116) and nodePort(30894) from any host within my network. Everything works as expected when I execute the below command from one of the kubernetes node.
redis-cli -p 30894 -h 192.168.240.116 -c
192.168.240.116:30894> set foo bar
-> Redirected to slot [12182] located at 172.30.79.4:6379
OK
172.30.79.4:6379>
When I run the same command from a different (non-Kubernetes) node within the same network, I see a connection timed out error.
redis-cli -c -p 30894 -h 192.168.240.116
192.168.240.116:30894> set foo bar
-> Redirected to slot [12182] located at 172.30.79.4:6379
Could not connect to Redis at 172.30.79.4:6379: Connection timed out
Is it not possible to access the redis-cluster outside the kubernetes cluster network when exposed using NodePort service type?
Also I cannot use LoadBalancer service type as I'm not hosting it on cloud.
I have been stuck on this issue for quite a while. Can someone suggest what approach I should use to access my redis-cluster from outside my network?
Thanks
Running ./redis-trib.rb create --replicas 1 172.30.79.2:6379 172.30.79.3:6379 172.30.79.4:6379 172.30.96.2:6379 172.30.96.3:6379 172.30.96.4:6379 doesn't make sense with this setup.
Port 6379 is only accessible from outside through the service you brought up, never directly on the pod IPs, which is what the cluster redirect makes your client try. That's why you run into issues with this setup.
What you can do is expose each pod with its own service and add one additional cluster service to load-balance external requests, as shown in the example repository from Kelsey Hightower. This way the pods can communicate through the internally exposed ports, and (external) clients can use the load-balanced cluster port. The implication is that each pod then requires its own ReplicaSet (or Deployment). There's a long talk from Kelsey explaining the setup, available on YouTube / Slideshare.
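A minimal sketch of the per-pod Service idea (all names, labels and ports here are illustrative; each Redis pod would need a unique label, e.g. set by its own Deployment):

```yaml
# One Service per Redis node -- the selector must match exactly one pod
apiVersion: v1
kind: Service
metadata:
  name: redis-node-1
spec:
  type: NodePort
  selector:
    app: redis
    node: "1"            # unique per-pod label, set by that pod's own Deployment
  ports:
  - port: 6379
    targetPort: 6379
    nodePort: 30001      # fixed, externally reachable port for this node
```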
An alternative would be to use a single redis master as shown in other examples.