I'm using the Ansible docker module to set up a Redis service (see the Ansible playbook below):
- hosts: redis
  roles:
    - role: angstwad.docker_ubuntu
      sudo: true
  tasks:
    - name: data container
      sudo: true
      docker:
        name: redis-data
        image: busybox
        state: started
        volumes:
          - /data/redis
    - name: redis container
      sudo: true
      docker:
        name: redis-service
        image: redis:3
        command: redis-server --appendonly yes
        state: started
        expose: 6379
        volumes_from:
          - redis-data
After provisioning, the redis-service container is up, but when I try to connect to Redis using redis-cli I get the following error:
vagrant@dev1:~$ redis-cli
Could not connect to Redis at 127.0.0.1:6379: Connection refused
NOTE: redis-service seems to be up and running:
vagrant@dev1:~$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3e8f27b14479 redis:3 "/entrypoint.sh redis" 12 minutes ago Up 12 minutes 6379/tcp redis-service
vagrant@dev1:~$ docker logs 3e8f27b14479
...
1:M 02 Sep 15:41:16.532 * The server is now ready to accept connections on port 6379
Do you have any idea of what might cause the problem?
I finally found the problem: the ports attribute must be set as well (not only expose):
- hosts: redis
  roles:
    - role: angstwad.docker_ubuntu
      sudo: true
  tasks:
    - name: data container
      sudo: true
      docker:
        name: redis-data
        image: busybox
        state: started
        volumes:
          - /data/redis
    - name: redis container
      sudo: true
      docker:
        name: redis-service
        image: redis:3
        command: redis-server --appendonly yes
        state: started
        expose: 6379
        ports:
          - 6379:6379
        volumes_from:
          - redis-data
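With the port published to the host, a quick sanity check from the VM should now succeed (a sketch; 127.0.0.1:6379 is the mapping configured above and PONG is the expected reply):
vagrant@dev1:~$ redis-cli -h 127.0.0.1 -p 6379 ping
PONG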
I tried the following code
---
- name: Stop ssh
  service:
    name: sshd
    state: stopped

- name: Start ssh
  service:
    name: sshd
    state: started
and it failed saying it could not find sshd.
I even tried the following code
- name: "Stop ssh"
service:
name: ssh
state: stopped
- name: "start ssh"
service:
name: ssh
state: started
Since I am not supposed to use restarted or with_items, I have to stop and start in separate tasks, but I still could not stop and start the service.
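For reference, a minimal sketch of those two tasks with privilege escalation added; the become: true lines and the assumption of a Debian/Ubuntu target (where the unit is named ssh, versus sshd on RHEL/CentOS) are mine, not from the original question:
---
# Sketch only: the service name differs by distribution (ssh on Debian/Ubuntu, sshd on RHEL/CentOS),
# and stopping sshd over an SSH connection will drop that very connection.
- name: Stop ssh
  become: true              # assumption: root is needed to manage the service
  service:
    name: ssh
    state: stopped

- name: Start ssh
  become: true
  service:
    name: ssh
    state: started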
I am trying to set up GitHub Actions CI for an app that is using RabbitMQ.
The RabbitMQ container is started using:
services:
  rabbitmq:
    image: rabbitmq:3-management
    ports:
      - 5672:5672
But now I need to configure it with something like rabbitmqctl add_user user password.
How can that be done? Should I be using the rabbitmq container here at all?
As this is using the rabbitmq Docker image, you can configure user credentials by passing in the RABBITMQ_DEFAULT_USER and RABBITMQ_DEFAULT_PASS environment variables.
rabbitmq:
  image: rabbitmq
  env:
    RABBITMQ_DEFAULT_USER: craiga
    RABBITMQ_DEFAULT_PASS: security_is_important
  ports:
    - 5672:5672
If you have trouble connecting to RabbitMQ, try with a dynamic port.
Use this:
jobs:
  test:
    runs-on: ubuntu-latest
    services:
      rabbitmq:
        image: rabbitmq:3.8
        env:
          RABBITMQ_DEFAULT_USER: guest
          RABBITMQ_DEFAULT_PASS: guest
        ports:
          - 5672
    steps:
      - name: Run Tests
        run: |
          python manage.py test
        env:
          RABBITMQ_HOST: 127.0.0.1
          RABBITMQ_PORT: ${{ job.services.rabbitmq.ports['5672'] }}
I'm using Ansible to run a command against multiple servers at once. I want to ignore any hosts that fail with the "SSH Error: data could not be sent to remote host \"1.2.3.4\". Make sure this host can be reached over ssh" error, because some of the hosts in the list will be offline. How can I do this? Is there a default option in Ansible to ignore offline hosts without failing the playbook? Is there an option to do this in a single Ansible CLI argument outside of a playbook?
Update: I am aware that ignore_unreachable: true works for Ansible 2.7 or greater, but I am working in an Ansible 2.6.1 environment.
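For readers on Ansible 2.7 or later, a minimal sketch of that built-in option (not applicable to the 2.6.1 environment described above; the play itself is only an illustration):
- hosts: all
  gather_facts: no
  ignore_unreachable: true   # play-level keyword available from Ansible 2.7
  tasks:
    - name: run a command, skipping hosts that cannot be reached
      command: date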
I found a good solution here. You ping each host locally to see if you can connect and then run commands against the hosts that passed:
---
- hosts: all
  connection: local
  gather_facts: no
  tasks:
    - block:
        - name: determine hosts that are up
          wait_for_connection:
            timeout: 5
          vars:
            ansible_connection: ssh
        - name: add devices with connectivity to the "running_hosts" group
          group_by:
            key: "running_hosts"
      rescue:
        - debug: msg="cannot connect to {{ inventory_hostname }}"

- hosts: running_hosts
  gather_facts: no
  tasks:
    - command: date
With the current version of Ansible (2.8) something like this is possible:
- name: identify reachable hosts
  hosts: all
  gather_facts: false
  ignore_errors: true
  ignore_unreachable: true
  tasks:
    - block:
        - name: this does nothing
          shell: exit 1
          register: result
      always:
        - add_host:
            name: "{{ inventory_hostname }}"
            group: reachable

- name: Converge
  hosts: reachable
  gather_facts: false
  tasks:
    - debug: msg="{{ inventory_hostname }} is reachable"
I'm trying to create a Redis cluster using Kubernetes on CentOS. I have my Kubernetes master running on one host and Kubernetes minions on two different hosts.
etcdctl get /kube-centos/network/config
{ "Network": "172.30.0.0/16", "SubnetLen": 24, "Backend": { "Type": "vxlan" } }
Here is my replication controller
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  replicas: 6
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
        - name: master
          image: redis
          command:
            - "redis-server"
          args:
            - "/redis-master/redis.conf"
          ports:
            - containerPort: 6379
          volumeMounts:
            - mountPath: /redis-master
              name: config
            - mountPath: /redis-master-data
              name: data
      volumes:
        - name: data
          emptyDir: {}
        - name: config
          configMap:
            name: redis-config
            items:
              - key: redis-config
                path: redis.conf
kubectl create -f rc.yaml
NAME READY STATUS RESTARTS AGE IP NODE
redis-master-149tt 1/1 Running 0 8s 172.30.96.4 centos-minion-1
redis-master-14j0k 1/1 Running 0 8s 172.30.79.3 centos-minion-2
redis-master-3wgdt 1/1 Running 0 8s 172.30.96.3 centos-minion-1
redis-master-84jtv 1/1 Running 0 8s 172.30.96.2 centos-minion-1
redis-master-fw3rs 1/1 Running 0 8s 172.30.79.4 centos-minion-2
redis-master-llg9n 1/1 Running 0 8s 172.30.79.2 centos-minion-2
Redis config file used:
appendonly yes
cluster-enabled yes
cluster-config-file /redis-master/nodes.conf
cluster-node-timeout 5000
dir /redis-master
port 6379
I used the following command to create the Kubernetes service:
kubectl expose rc redis-master --name=redis-service --port=6379 --target-port=6379 --type=NodePort
Name: redis-service
Namespace: default
Labels: app=redis
role=master
tier=backend
Selector: app=redis,role=master,tier=backend
Type: NodePort
IP: 10.254.229.114
Port: <unset> 6379/TCP
NodePort: <unset> 30894/TCP
Endpoints: 172.30.79.2:6379,172.30.79.3:6379,172.30.79.4:6379 + 3 more...
Session Affinity: None
No events.
Now I have all the pods and the service up and running. I'm using a redis-trib pod to create the Redis cluster.
kubectl exec -it redis-trib bash
./redis-trib.rb create --replicas 1 172.30.79.2:6379 172.30.79.3:6379 172.30.79.4:6379 172.30.96.2:6379 172.30.96.3:6379 172.30.96.4:6379
The Redis cluster was created as expected, with the message below.
[OK] All 16384 slots covered.
Now I should be able to access my Redis cluster on the Kubernetes node IP (192.168.240.116) and NodePort (30894) from any host within my network. Everything works as expected when I execute the command below from one of the Kubernetes nodes.
redis-cli -p 30894 -h 192.168.240.116 -c
192.168.240.116:30894> set foo bar
-> Redirected to slot [12182] located at 172.30.79.4:6379
OK
172.30.79.4:6379>
When I run the same command from a different (non-Kubernetes) node within the same network, I get a connection timed out error.
redis-cli -c -p 30894 -h 192.168.240.116
192.168.240.116:30894> set foo bar
-> Redirected to slot [12182] located at 172.30.79.4:6379
Could not connect to Redis at 172.30.79.4:6379: Connection timed out
Is it not possible to access the Redis cluster from outside the Kubernetes cluster network when it is exposed using the NodePort service type?
Also, I cannot use the LoadBalancer service type as I'm not hosting it in the cloud.
I have been stuck with this issue for quite a while. Can someone suggest what approach I should use to access my Redis cluster from outside my network?
Thanks
Running ./redis-trib.rb create --replicas 1 172.30.79.2:6379 172.30.79.3:6379 172.30.79.4:6379 172.30.96.2:6379 172.30.96.3:6379 172.30.96.4:6379 doesn't make sense with this setup.
Port 6379 is only accessible through the service you brought up, never directly on the pod IPs as you are trying to do. That's why you run into issues with this setup.
What you can do is expose each pod with its own service and have one additional cluster service to load-balance external requests, as shown in the example repository from Kelsey Hightower and sketched below. This way the pods can communicate through the internally exposed ports and (external) clients can use the load-balanced cluster port. The implication is that each pod then requires its own ReplicaSet (or Deployment). There's a long talk available on YouTube from Kelsey explaining the setup - YouTube / Slideshare.
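For illustration, a minimal sketch of one such per-pod Service; the name redis-0 and the instance: redis-0 selector label are hypothetical and would have to be applied to exactly one pod (or its own Deployment), they are not part of the setup above:
apiVersion: v1
kind: Service
metadata:
  name: redis-0                # hypothetical: one Service per Redis pod/Deployment
spec:
  selector:
    app: redis
    instance: redis-0          # hypothetical label carried by exactly one pod
  ports:
    - port: 6379
      targetPort: 6379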
An alternative would be to use a single redis master as shown in other examples.
I have an issue with my docker-compose configuration that I cannot pinpoint: redis won't start.
My docker-compose.yml:
web:
  build: ./web
  links:
    - db
    - redis
  ports:
    - "8080:8080"
db:
  image: mysql
  ports:
    - "3307:3306"
  environment:
    MYSQL_ROOT_PASSWORD: root
    MYSQL_DATABASE: bignibou_dev
redis:
  build: ./redis
  ports:
    - "63790:6379"
My ./web/Dockerfile:
FROM java:8
ADD ./bignibou-server-1.0.jar /app/bignibou-server-1.0.jar
ADD ./spring-cloud.properties /app/spring-cloud.properties
ENV SPRING_CLOUD_PROPERTIESFILE=/app/spring-cloud.properties
ENV SPRING_PROFILES_ACTIVE=cloud
ENV SPRING_CLOUD_APP_NAME=bignibou
ENV CLEARDB_DATABASE_URL=mysql://root:root@localhost:3307/bignibou_dev
ENV REDISCLOUD_URL=redis://dummy:dummy@localhost:63790
ENV DYNO=dummy
EXPOSE 8080
ENTRYPOINT [ "java", "-jar", "/app/bignibou-server-1.0.jar" ]
My ./redis/Dockerfile:
FROM redis
COPY redis.conf /usr/local/etc/redis/redis.conf
EXPOSE 6379
ENTRYPOINT [ "redis-server", "/usr/local/etc/redis/redis.conf" ]
When I run sudo docker-compose up, redis is not started by Docker although mysql/db starts properly.
Can anyone please help?
Instead of localhost, use your Redis service name, which in your case is redis. Note also that from inside the Compose network you connect to the container's internal port (6379), not the host-mapped 63790, so the connection URL becomes:
redis://dummy:dummy@redis:6379
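A minimal sketch of the same idea wired through docker-compose instead of hard-coding the URL in the web image; the environment override shown here is an assumption, not part of the original setup:
web:
  build: ./web
  links:
    - db
    - redis
  ports:
    - "8080:8080"
  environment:
    # assumption: override the Dockerfile ENV with the in-network service name and internal port
    REDISCLOUD_URL: redis://dummy:dummy@redis:6379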