Kubernetes Redis Cluster issue

I'm trying to create a Redis cluster using Kubernetes on CentOS. I have my Kubernetes master running on one host and Kubernetes nodes on two other hosts.
etcdctl get /kube-centos/network/config
{ "Network": "172.30.0.0/16", "SubnetLen": 24, "Backend": { "Type": "vxlan" } }
Here is my replication controller
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  replicas: 6
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: redis
        command:
        - "redis-server"
        args:
        - "/redis-master/redis.conf"
        ports:
        - containerPort: 6379
        volumeMounts:
        - mountPath: /redis-master
          name: config
        - mountPath: /redis-master-data
          name: data
      volumes:
      - name: data
        emptyDir: {}
      - name: config
        configMap:
          name: redis-config
          items:
          - key: redis-config
            path: redis.conf
kubectl create -f rc.yaml
NAME READY STATUS RESTARTS AGE IP NODE
redis-master-149tt 1/1 Running 0 8s 172.30.96.4 centos-minion-1
redis-master-14j0k 1/1 Running 0 8s 172.30.79.3 centos-minion-2
redis-master-3wgdt 1/1 Running 0 8s 172.30.96.3 centos-minion-1
redis-master-84jtv 1/1 Running 0 8s 172.30.96.2 centos-minion-1
redis-master-fw3rs 1/1 Running 0 8s 172.30.79.4 centos-minion-2
redis-master-llg9n 1/1 Running 0 8s 172.30.79.2 centos-minion-2
The Redis config file used:
appendonly yes
cluster-enabled yes
cluster-config-file /redis-master/nodes.conf
cluster-node-timeout 5000
dir /redis-master
port 6379
I used the following command to create the Kubernetes service:
kubectl expose rc redis-master --name=redis-service --port=6379 --target-port=6379 --type=NodePort
Name: redis-service
Namespace: default
Labels: app=redis
role=master
tier=backend
Selector: app=redis,role=master,tier=backend
Type: NodePort
IP: 10.254.229.114
Port: <unset> 6379/TCP
NodePort: <unset> 30894/TCP
Endpoints: 172.30.79.2:6379,172.30.79.3:6379,172.30.79.4:6379 + 3 more...
Session Affinity: None
No events.
Now I have all the pods and the service up and running. I'm using a redis-trib pod to create the Redis cluster.
kubectl exec -it redis-trib bash
./redis-trib.rb create --replicas 1 172.30.79.2:6379 172.30.79.3:6379 172.30.79.4:6379 172.30.96.2:6379 172.30.96.3:6379 172.30.96.4:6379
The Redis cluster was created as expected, with the message below:
[OK] All 16384 slots covered.
Now I should be able to access my Redis cluster on the Kubernetes node IP (192.168.240.116) and NodePort (30894) from any host within my network. Everything works as expected when I execute the command below from one of the Kubernetes nodes:
redis-cli -p 30894 -h 192.168.240.116 -c
192.168.240.116:30894> set foo bar
-> Redirected to slot [12182] located at 172.30.79.4:6379
OK
172.30.79.4:6379>
When I run the same command from a different (non-Kubernetes) node within the same network, I see a connection timed out error:
redis-cli -c -p 30894 -h 192.168.240.116
192.168.240.116:30894> set foo bar
-> Redirected to slot [12182] located at 172.30.79.4:6379
Could not connect to Redis at 172.30.79.4:6379: Connection timed out
Is it not possible to access the Redis cluster from outside the Kubernetes cluster network when it is exposed using the NodePort service type?
Also, I cannot use the LoadBalancer service type, as I'm not hosting on a cloud provider.
I have been stuck on this issue for quite a while. Can someone suggest what approach I should use to access my Redis cluster from outside my network?
Thanks

Running ./redis-trib.rb create --replicas 1 172.30.79.2:6379 172.30.79.3:6379 172.30.79.4:6379 172.30.96.2:6379 172.30.96.3:6379 172.30.96.4:6379 doesn't make sense with this setup.
Port 6379 is only reachable through the service you brought up, never directly on the pod IPs as you are trying. The cluster redirects clients to pod IPs (as in the "Redirected to slot ... located at 172.30.79.4:6379" output above), and those IPs are not routable from outside the cluster; that's why your setup runs into trouble.
What you can do is expose each pod with its own service and add one additional cluster service to load-balance external requests, as shown in the example repository from Kelsey Hightower. This way the pods can communicate through the internally exposed ports, and (external) clients can use the load-balanced cluster port. The implication is that each pod then requires its own ReplicaSet (or Deployment). There's a long talk by Kelsey explaining the setup, available on YouTube / Slideshare.
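A minimal sketch of that per-pod service idea, assuming each pod carries a unique label such as node: "1" (the names redis-node-1 and redis-cluster-external are illustrative, not taken from Kelsey's repository):

# One Service per Redis pod, giving each node a stable, routable address.
apiVersion: v1
kind: Service
metadata:
  name: redis-node-1
spec:
  selector:
    app: redis
    node: "1"
  ports:
  - port: 6379
    targetPort: 6379
---
# One additional NodePort Service load-balancing external client
# requests across all Redis pods.
apiVersion: v1
kind: Service
metadata:
  name: redis-cluster-external
spec:
  type: NodePort
  selector:
    app: redis
  ports:
  - port: 6379
    targetPort: 6379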
An alternative would be to use a single Redis master, as shown in other examples.
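With a single, non-clustered master there are no MOVED redirects to pod IPs, so a plain NodePort service is reachable from outside the cluster. A minimal redis.conf sketch for that case, derived from the config above with cluster mode removed:

# Single-master configuration: cluster mode disabled, so clients are
# never redirected to an unroutable pod IP.
appendonly yes
dir /redis-master
port 6379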

Related

Not able to access the RabbitMQ cluster which was set up using the RabbitMQ cluster operator

I have an AWS instance where I have minikube installed. I have also added the RabbitMQ cluster operator to it, and then started a RabbitMQ cluster with 3 nodes. I can see the 3 pods, and their logs show no errors. The RabbitMQ service is started as a LoadBalancer. When I list the URLs for the service, I get the RabbitMQ, RabbitMQ management UI, and Prometheus ports. The external IP is not generated for the service, so I used the patch command to assign one.
My issue is that the RabbitMQ cluster is running fine with no errors, but I am not able to access it using the public IP of the AWS instance, so other services cannot send messages to it.
Here are all the files.
clientq.yml:
apiVersion: rabbitmq.com/v1beta1
kind: RabbitmqCluster
metadata:
  name: clientq
spec:
  replicas: 3
  image: rabbitmq:3.9-management
  resources:
    requests:
      cpu: 500m
      memory: 1Gi
    limits:
      cpu: 1
      memory: 2Gi
  rabbitmq:
    additionalConfig: |
      log.console.level = info
      channel_max = 1700
      default_user = guest
      default_pass = guest
      default_user_tags.administrator = true
  service:
    type: LoadBalancer
The full setup:
kubectl get all
NAME READY STATUS RESTARTS AGE
pod/clientq-server-0 1/1 Running 0 11m
pod/clientq-server-1 1/1 Running 0 11m
pod/clientq-server-2 1/1 Running 0 11m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
service/clientq LoadBalancer 10.108.225.186 12.27.54.12 5672:31063/TCP,15672:31340/TCP,15692:30972/TCP
service/clientq-nodes ClusterIP None <none> 4369/TCP,25672/TCP
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP
NAME READY AGE
statefulset.apps/clientq-server 3/3 11m
NAME ALLREPLICASREADY RECONCILESUCCESS AGE
rabbitmqcluster.rabbitmq.com/clientq True True 11m
Here 12.27.54.12 is the public IP of my instance, which I patched in using:
kubectl patch svc clientq -n default -p '{"spec": {"type": "LoadBalancer", "externalIPs":["12.27.54.12"]}}'
The URLs for the service are:
minikube service clientq --url
http://192.168.49.2:31063
http://192.168.49.2:31340
http://192.168.49.2:30972
I am able to curl these from the instance itself, but I am not able to access them via the public IP of the instance. Did I miss something, or is there a way to expose these ports? Please let me know.
I have enabled all ports for inbound and outbound traffic.
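One possible workaround, assuming 192.168.49.2 is minikube's internal network address and is only routable from the instance itself, is to forward the service port on all interfaces of the instance so that external clients can reach it via the public IP:

# Hypothetical workaround: forward the AMQP port on every interface of
# the AWS instance; clients can then connect to 12.27.54.12:5672.
kubectl port-forward --address 0.0.0.0 service/clientq 5672:5672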

How to set up a RabbitMQ service with GitHub Actions?

I am trying to set up GitHub Actions CI for an app that is using RabbitMQ.
The RabbitMQ container is started using:
services:
  rabbitmq:
    image: rabbitmq:3-management
    ports:
      - 5672:5672
But now I need to configure it with something like rabbitmqctl add_user user password.
How can this be done? Should I be using a rabbitmq container here at all?
As this is using the rabbitmq Docker image, you can configure user credentials by passing in the RABBITMQ_DEFAULT_USER and RABBITMQ_DEFAULT_PASS environment variables.
rabbitmq:
  image: rabbitmq
  env:
    RABBITMQ_DEFAULT_USER: craiga
    RABBITMQ_DEFAULT_PASS: security_is_important
  ports:
    - 5672:5672
If you have trouble connecting to RabbitMQ, try with a dynamic port.
Use this:
jobs:
  test:
    runs-on: ubuntu-latest
    services:
      rabbitmq:
        image: rabbitmq:3.8
        env:
          RABBITMQ_DEFAULT_USER: guest
          RABBITMQ_DEFAULT_PASS: guest
        ports:
          - 5672
    steps:
      - name: Run Tests
        run: |
          python manage.py test
        env:
          RABBITMQ_HOST: 127.0.0.1
          RABBITMQ_PORT: ${{ job.services.rabbitmq.ports['5672'] }}
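Because only the container port 5672 is listed, the runner maps it to a random free port on the host; the expression job.services.rabbitmq.ports['5672'] then resolves to whatever host port Docker assigned, which is why the test reads RABBITMQ_PORT from it instead of hard-coding 5672.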

How to configure a Kubernetes Selenium container to reach the external network?

I tried the Selenium example on minikube under Windows:
https://github.com/kubernetes/kubernetes/tree/master/examples/selenium
Inside the container I can't install Selenium (pip install selenium fails); what should I do?
Commands:
kubectl run selenium-hub --image selenium/hub:2.53.1 --port 4444
kubectl expose deployment selenium-hub --type=NodePort
kubectl run selenium-node-chrome --image selenium/node-chrome:2.53.1 --env="HUB_PORT_4444_TCP_ADDR=selenium-hub" --env="HUB_PORT_4444_TCP_PORT=4444"
kubectl scale deployment selenium-node-chrome --replicas=4
kubectl run selenium-python --image=google/python-hello
kubectl exec --stdin=true --tty=true selenium-python-6479976d89-ww7jv bash
Output:
PS C:\Program Files\Docker Toolbox\dockerfiles> kubectl get pods
NAME READY STATUS RESTARTS AGE
selenium-hub-5ffc6ff7db-gwq95 1/1 Running 0 15m
selenium-node-chrome-8659b47488-brwb4 1/1 Running 0 8m
selenium-node-chrome-8659b47488-dnrwr 1/1 Running 0 8m
selenium-node-chrome-8659b47488-hwvvk 1/1 Running 0 11m
selenium-node-chrome-8659b47488-t8g59 1/1 Running 0 8m
selenium-python-6479976d89-ww7jv 1/1 Running 0 6m
PS C:\Program Files\Docker Toolbox\dockerfiles> kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 17m
selenium-hub NodePort 10.0.0.230 <none> 4444:32469/TCP 16m
PS C:\Program Files\Docker Toolbox\dockerfiles> kubectl exec --stdin=true --tty=true selenium-python-6479976d89-ww7jv bash
root@selenium-python-6479976d89-ww7jv:/app# ping yahoo.com
ping: unknown host yahoo.com
It looks like your pod cannot resolve DNS. You need to test whether your cluster has a working kube-dns in the kube-system namespace. If it is there and operational, check that it correctly resolves names when queried directly by its IP, and verify that your containers have the correct content in /etc/resolv.conf when started. A sketch of these checks follows.
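A sketch of those checks, assuming kube-dns runs under the usual k8s-app=kube-dns label and that the DNS service IP is 10.0.0.10 (a guess based on the 10.0.0.1 cluster IP above; confirm with kubectl get svc -n kube-system):

# 1. Check that kube-dns is running.
kubectl get pods -n kube-system -l k8s-app=kube-dns

# 2. Query the DNS service directly from the affected pod
#    (requires nslookup inside the image).
kubectl exec -it selenium-python-6479976d89-ww7jv -- nslookup yahoo.com 10.0.0.10

# 3. Inspect the resolver configuration inside the pod.
kubectl exec -it selenium-python-6479976d89-ww7jv -- cat /etc/resolv.conf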
You can also avoid this problem by providing a ConfigMap that configures kube-dns with custom DNS settings:
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  stubDomains: |
    {"acme.local": ["1.2.3.4"]}
  upstreamNameservers: |
    ["8.8.8.8"]
See more details in the Kubernetes reference docs.

Routing to a different container in Docker using Zuul not working

I have 2 microservices (Spring Boot apps) running in different Docker containers, configured behind a Zuul API gateway. Routing to the other container is not working. Container 1 is running on port 8030, and container 2 is also running on port 8030.
Below is the Zuul configuration in application.yml:
server:
  port: 8030

# TODO: figure out why I need this here and in bootstrap.yml
spring:
  application:
    name: zuul server

endpoints:
  restart:
    enabled: true
  shutdown:
    enabled: true
  health:
    sensitive: false

zuul:
  routes:
    zuultest:
      url: http://localhost:8080
      stripPrefix: false

ribbon:
  eureka:
    enabled: false
When accessing it through localhost:8030/zuultest/test, I am getting the exception:
2016-09-19 09:10:14.597 INFO 1 --- [nio-8030-exec-3] hello.SimpleFilter : GET request to http://localhost:8030/zuultest/test
2016-09-19 09:10:14.600 WARN 1 --- [nio-8030-exec-3] o.s.c.n.z.filters.post.SendErrorFilter : Error during filtering
Can anyone tell me why I am getting this?
You can use the links option in docker-compose.yml to link the two containers:
demo1:
  image: <demo1 image name>
  links:
    - demo2
demo2:
  image: <demo2 image name>
Then in the zuul:routes:url configuration you can use the container name, demo2, instead of its IP.
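The route might then look like this (port 8080 is carried over from the original configuration and is an assumption about what demo2 actually listens on):

zuul:
  routes:
    zuultest:
      url: http://demo2:8080
      stripPrefix: false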
How did you start the 2 containers? Both cannot be exposed on the same port on the Docker host.
docker run --name serviceA -p 8030:8030 ...
docker run --name serviceB -p 8031:8030 ...
Without this, if you are calling localhost:8030, you are calling the host (not the container), and you are not getting a response.
You need to map each container's port to a different host port when you start them, and call localhost with the right exposed port.

Deploy/run a Redis service using Ansible and Docker

I'm using the Ansible docker module to set up a Redis service (see the Ansible playbook below).
- hosts: redis
  roles:
    - role: angstwad.docker_ubuntu
      sudo: true
  tasks:
    - name: data container
      sudo: true
      docker:
        name: redis-data
        image: busybox
        state: started
        volumes:
          - /data/redis
    - name: redis container
      sudo: true
      docker:
        name: redis-service
        image: redis:3
        command: redis-server --appendonly yes
        state: started
        expose: 6379
        volumes_from:
          - redis-data
After provisioning, the redis-service container is up, but when I try to connect to Redis using redis-cli I get the following error:
vagrant@dev1:~$ redis-cli
Could not connect to Redis at 127.0.0.1:6379: Connection refused
NOTE: redis-service seems to be up and running:
vagrant@dev1:~$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3e8f27b14479 redis:3 "/entrypoint.sh redis" 12 minutes ago Up 12 minutes 6379/tcp redis-service
vagrant@dev1:~$ docker logs 3e8f27b14479
...
1:M 02 Sep 15:41:16.532 * The server is now ready to accept connections on port 6379
Do you have any idea what might be causing the problem?
I finally found the problem: the ports attribute must be set too (not only expose):
- hosts: redis
  roles:
    - role: angstwad.docker_ubuntu
      sudo: true
  tasks:
    - name: data container
      sudo: true
      docker:
        name: redis-data
        image: busybox
        state: started
        volumes:
          - /data/redis
    - name: redis container
      sudo: true
      docker:
        name: redis-service
        image: redis:3
        command: redis-server --appendonly yes
        state: started
        expose: 6379
        ports:
          - 6379:6379
        volumes_from:
          - redis-data
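After re-provisioning, the mapping can be verified from the host (assuming redis-cli is installed there):

vagrant@dev1:~$ redis-cli -h 127.0.0.1 -p 6379 ping
PONG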