When trying to create a cluster with redis-cli as follows
redis-cli --cluster create
a prompt comes up asking for confirmation of the configuration.
Is there a way to script this (preferably in ansible) and run it non-interactively?
I am aware of this topic; however, it addresses data manipulation, which is not the scope of this question.
--cluster-yes is the correct option!
EDIT:
Leonardo Oliveira's answer rightly points out that the --cluster-yes option avoids the prompt. For example:
redis-cli --cluster create host1:6379 host2:6379 host3:6379 --cluster-yes
As of the then-current Redis version (5.0.5), the --cluster help output doesn't appear to list a flag under create that silences or auto-answers the interactive question:
$ redis-cli --cluster help
Cluster Manager Commands:
  create         host1:port1 ... hostN:portN
                 --cluster-replicas <arg>
  check          host:port
                 --cluster-search-multiple-owners
  info           host:port
  fix            host:port
                 --cluster-search-multiple-owners
  reshard        host:port
                 --cluster-from <arg>
                 --cluster-to <arg>
                 --cluster-slots <arg>
                 --cluster-yes
                 --cluster-timeout <arg>
                 --cluster-pipeline <arg>
                 --cluster-replace
  rebalance      host:port
                 --cluster-weight <node1=w1...nodeN=wN>
                 --cluster-use-empty-masters
                 --cluster-timeout <arg>
                 --cluster-simulate
                 --cluster-pipeline <arg>
                 --cluster-threshold <arg>
                 --cluster-replace
  add-node       new_host:new_port existing_host:existing_port
                 --cluster-slave
                 --cluster-master-id <arg>
  del-node       host:port node_id
  call           host:port command arg arg .. arg
  set-timeout    host:port milliseconds
  import         host:port
                 --cluster-from <arg>
                 --cluster-copy
                 --cluster-replace
  help
By piping echo into the command you can auto-answer the prompt:
echo "yes" | redis-cli --cluster create host1:6379 host2:6379 host3:6379
The default Ansible Redis module only supports a few commands and not --cluster, so you would have to implement your own logic with command/shell tasks:
- name: Create cluster
  shell: echo "yes" | redis-cli --cluster create host1:6379 host2:6379 host3:6379
  run_once: true
  when: not cluster_setup_done
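Since a shell task is not idempotent by itself, one hedged way to guard the create (shown here in plain shell rather than Ansible) is to check the cluster state first. This sketch assumes host1:6379 is reachable and that CLUSTER INFO reports cluster_state:ok once the cluster has been formed:

# Only attempt the create when the node does not already report a formed cluster (a sketch, not the definitive approach).
if ! redis-cli -h host1 -p 6379 cluster info | grep -q 'cluster_state:ok'; then
  echo "yes" | redis-cli --cluster create host1:6379 host2:6379 host3:6379
fi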
Well, I don't know about the Ansible part, but the official Redis site does provide a way to create a cluster with a script in a slightly interactive mode.
Creating a Redis Cluster using the create-cluster script (please refer to the docs for more details):
If you don't want to create a Redis Cluster by configuring and executing individual instances manually as explained above, there is a much simpler system (but you'll not learn the same amount of operational details).
Just check the utils/create-cluster directory in the Redis distribution. There is a script called create-cluster inside (same name as the directory it is contained in); it's a simple bash script. In order to start a 6-node cluster with 3 masters and 3 slaves, just type the following commands:
create-cluster start
create-cluster create
Reply yes in step 2 when the redis-cli utility asks you to accept the cluster layout.
You can now interact with the cluster; the first node will start at port 30001 by default. When you are done, stop the cluster with:
create-cluster stop
Please read the README inside this directory for more information on how to run the script.
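To run step 2 non-interactively as well, the same echo trick shown earlier should work here too; a hedged sketch, assuming you run it from the utils/create-cluster directory with the defaults described above:

cd utils/create-cluster
./create-cluster start
echo "yes" | ./create-cluster create   # auto-answer the cluster layout prompt
# ...use the cluster (first node on port 30001 by default)...
./create-cluster stop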
You can try implementing this if it helps you in any way!
Cheers :)
Related
We're using redis-cluster extensively in our production environment. We currently have a 30-node cluster (15 masters, 15 slaves).
We're trying to grow the cluster; for that we've created new servers and joined them to the cluster. So far all is well.
Next, we're trying to reshard slots to the new masters. We wrote a script that does this using the redis-trib reshard command.
However - the migration fails midway (but not very far from the start) with this error:
[ERR] Calling MIGRATE: ERR Target instance replied with error: BUSYKEY Target key name already exists.
This happens sporadically; at times it manages to move some slots before failing, at times it fails on the first slot.
Each such failure requires a manual fixing operation which makes the reshard operation very hard to manage.
We have not found any concrete example of this, nor any idea on how to prevent it other than a downtime migration, which we are trying to avoid.
Versions:
redis server 4.0.2
redis-trib 3.3.3 (downgraded from 4.0.2 following this issue: redis cluster reshard [ERR] Calling MIGRATE: ERR Syntax error)
Our next step is to upgrade to latest redis (4.0.11), even though we didn't find any indication in the release notes of this issue.
Hoping to hear we're doing something wrong and how to fix it. Or is redis-cluster not built for live resharding?
Thanks
I have faced a problem like this while working on redis-cluster support for our own project. I found an issue with the redis-trib reshard command: it works fine only if no keys are stored in the slots being migrated from one master to another.
But Redis 5 (still in development, not stable yet) has its own redis-cli that, I think, has no problem with the reshard command. It only happens with versions lower than 5.
If you look at the official Redis docs, say the sections on cluster live reconfiguration and cluster resharding, you'll find what they do internally to reshard.
So I solved the problem by doing those steps myself with a bash script instead of running the redis-trib reshard command.
Suppose you want to reshard some slots from one master node to another. We'll call the node that currently owns the hash slot the source node, and the node where we want to migrate it the destination node.
For each slot do the following steps:
Remember that the order of these steps is important here according to redis official docs.
Send CLUSTER SETSLOT <slot> IMPORTING <source-node-id> to destination node to set the slot to importing state.
Send CLUSTER SETSLOT <slot> MIGRATING <destination-node-id> to source node to set the slot to migrating state.
Get keys from the source node with CLUSTER GETKEYSINSLOT command and move them into the destination node using the following MIGRATE command.
MIGRATE target_host target_port key target_database_id timeout
In Redis Cluster there is no need to specify a database other than 0, but MIGRATE is a general command that can be used for other tasks not involving Redis Cluster.
When the migration process is finally finished, use CLUSTER SETSLOT <slot> NODE <destination-node-id> in both source node and destination node in order to set the slot to their normal state again. The same command is usually sent to all other nodes to avoid waiting for the natural propagation of the new configuration across the cluster.
A simple example bash script to do this is also given here:
source-ip: 172.17.0.5. source-id: 1f70a5107e0042a7d33a9efaf88dbdfecd78076a
destination-ip: 172.17.0.4. destination-id: 7e428bae84697a3882ecad19bd0d13ac7ee97d02
another master ip: 172.17.0.7
# Move slots 0-5460 from the source master (172.17.0.5) to the destination master (172.17.0.4).
for i in `seq 0 5460`; do
    # Put the slot into importing/migrating state on the two nodes involved.
    redis-cli -c -h 172.17.0.4 cluster setslot ${i} importing 1f70a5107e0042a7d33a9efaf88dbdfecd78076a
    redis-cli -c -h 172.17.0.5 cluster setslot ${i} migrating 7e428bae84697a3882ecad19bd0d13ac7ee97d02
    # Move the keys of this slot, one at a time, until the slot is empty.
    while true; do
        key=`redis-cli -c -h 172.17.0.5 cluster getkeysinslot ${i} 1`
        if [ -z "$key" ]; then
            echo "there are no keys in slot ${i}"
            break
        fi
        redis-cli -h 172.17.0.5 migrate 172.17.0.4 6379 ${key} 0 5000
    done
    # Assign the slot to the destination node on the involved masters so the new layout propagates.
    redis-cli -c -h 172.17.0.5 cluster setslot ${i} node 7e428bae84697a3882ecad19bd0d13ac7ee97d02
    redis-cli -c -h 172.17.0.4 cluster setslot ${i} node 7e428bae84697a3882ecad19bd0d13ac7ee97d02
    redis-cli -c -h 172.17.0.7 cluster setslot ${i} node 7e428bae84697a3882ecad19bd0d13ac7ee97d02
done
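If a key already exists on the destination (the BUSYKEY error from the question above), MIGRATE accepts a REPLACE modifier (available since Redis 3.0) that overwrites the existing key. A hedged variant of the inner migrate line, using the same illustrative addresses:

redis-cli -h 172.17.0.5 migrate 172.17.0.4 6379 ${key} 0 5000 replace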
Today I started learning Ansible, and the first thing I came across while trying to run the ping module on a remote server was:
192.168.1.100 | UNREACHABLE! => {
"changed": false,
"msg": "(u'192.168.1.100', <paramiko.rsakey.RSAKey object at 0x103c8d250>, <paramiko.rsakey.RSAKey object at 0x103c62f50>)",
"unreachable": true
}
So I manually set up the SSH key. I think I ran into this because no writeup or tutorial explains this step, why the authors don't need it, or whether they set it up manually before writing the tutorial or recording the video.
So I think it would be great if we could automate this step too.
If SSH keys haven't been set up, you can always prompt for an SSH password:
-k, --ask-pass ask for connection password
I use these commands for setting up keys on CentOS 6.8 under the root account:
cat ~/.ssh/id_rsa.pub | ssh ${user}@${1} -o StrictHostKeyChecking=no 'mkdir .ssh > /dev/null 2>&1; restorecon -R /root/; cat >> .ssh/authorized_keys'
ansible $1 -u $user -i etc/ansible/${hosts} -m raw -a "yum -y install python-simplejson"
ansible $1 -u $user -i etc/ansible/${hosts} -m yum -a "name=libselinux-python state=latest"
${1} is the first parameter passed to the script and should be the machine name.
I set ${user} elsewhere, but you could make it a parameter also.
${hosts} is my hosts file, and it has a default, but can be overridden with a parameter.
The restorecon command is to appease selinux. I just hardcoded it to run against the /root/ directory, and I can't remember exactly why. If you run this to setup a non-root user, I think that command is nonsense.
I think those installs, python-simplejson and libselinux-python, are needed.
This will spam the authorized_keys files with duplicate entries if you run it repeatedly. There are probably better ways, but this is my quick and dirty run once script.
I made some slight variations in the script for CentOS 7 and Ubuntu.
Not sure what types of servers these are, but nearly all Ansible tutorials cover the fact that Ansible uses SSH and you need SSH access to use it.
Depending on how you are provisioning the server in the first place you may be able to inject an ssh key on first boot, but if you are starting with password-only login you can use the --ask-pass flag when running Playbooks. You could then have your first play use the authorized_key module to set up your key on the server.
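As a hedged sketch of that bootstrap step (an ad-hoc command rather than a playbook; the inventory file, user, and key path are assumptions), you can push your public key over password authentication once and use key-based SSH from then on:

# Prompts for the SSH password (-k), then installs the local public key for root on every host in the inventory.
ansible all -i hosts -u root -k -m authorized_key -a "user=root key=\"{{ lookup('file', '~/.ssh/id_rsa.pub') }}\""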
I googled it and found two solutions:
CLUSTER FORGET (http://redis.io/commands/cluster-forget)
redis-trib.rb del-node
I think CLUSTER FORGET is the right way to do it.
But I really want to know the details about redis-trib.rb del-node.
Can someone explain the difference between them?
redis-trib.rb is a ruby utility script that antirez (lead redis developer) built as a reference implementation of building administrative tools on top of the basic redis cluster commands.
Under the hood, redis-trib uses CLUSTER FORGET to implement its own administrative del-node command. https://github.com/antirez/redis/blob/unstable/src/redis-trib.rb#L1374
Redis-trib is a lot friendlier to work with. If you're doing CLUSTER FORGET you'd need to loop over and send that command to every other node in the system, while del-node will automate that process for you.
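A hedged sketch of the loop that del-node automates for you (the hosts and node ID are illustrative; note that CLUSTER FORGET is sent to every remaining node, not to the node being removed):

for node in 192.168.0.211 192.168.0.212 192.168.0.213; do
  redis-cli -h "$node" -p 6379 cluster forget 650e3746968e6b7c7e357f06adbde5b3b92fcceb
done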
As of Redis 6.2.3
WARNING: redis-trib.rb is not longer available!
You should use redis-cli instead.
All commands and features belonging to redis-trib.rb have been moved
to redis-cli.
In order to use them you should call redis-cli with the --cluster
option followed by the subcommand name, arguments and options.
Use the following syntax:
redis-cli --cluster SUBCOMMAND [ARGUMENTS] [OPTIONS]
Example:
redis-cli --cluster info 127.0.0.1:6382
~$ redis-cli
127.0.0.1:6379> CLUSTER HELP
127.0.0.1:6379> CLUSTER NODES
127.0.0.1:6379> CLUSTER FORGET <node-id>
src/redis-trib.rb del-node 192.168.0.211:6379 650e3746968e6b7c7e357f06adbde5b3b92fcceb
Note:
192.168.0.211:6379 is any node in the cluster.
650e3746968e6b7c7e357f06adbde5b3b92fcceb is the node ID of the node you want to remove. You can get this ID from the CLUSTER NODES command.
Per the Redis docs, you should use:
redis-cli --cluster del-node 127.0.0.1:7000 <node-id>
where the first argument is one of your cluster nodes and <node-id> is the ID of the node you want to remove.
Check 'Removing a node' in the following article:
https://redis.io/docs/manual/scaling/
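A hedged end-to-end sketch combining the two steps (the address and node ID are the illustrative values used earlier in this thread):

redis-cli -h 127.0.0.1 -p 7000 cluster nodes   # look up the ID of the node you want to remove
redis-cli --cluster del-node 127.0.0.1:7000 650e3746968e6b7c7e357f06adbde5b3b92fcceb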
I'm trying to create a Redis Docker image based on redis:3. However, I need to load a small amount of test data into the image (it is used for testing, both on developer machines and on the CI server).
My initial attempt looks like:
FROM redis:3.0.3
MAINTAINER Howard M. Lewis Ship
RUN redis-cli sadd production-stores 5555
But this fails with the error:
Step 2 : RUN redis-cli sadd production-stores 5555
---> Running in 60fb98c133c0
Could not connect to Redis at 127.0.0.1:6379: Connection refused
The command '/bin/sh -c redis-cli sadd production-stores 5555' returned a non-zero code: 1
I'm wondering if there's a trick that allows me, in a Dockerfile, to connect to the server started via the CMD/ENTRYPOINT of the Dockerfile. My guess right now is that the RUN occurs before the command is started.
Alternatively, is there a Redis command or trick that would allow me to load some data into its database?
I suspect one approach would be to mount the Redis data directory in a volume exposed to the Docker host; this is not ideal as:
I would need an extra non-Dockerfile command to load the data
The loaded data would not be shared in the image, which is to be used by others on my team
I'm on OS X and am using docker-machine, so the volumes get trickier
The error says the Redis service is not running when you run redis-cli.
Add a new line to start it first. Let me know whether this fixes your issue.
FROM redis:3.0.3
MAINTAINER Howard M. Lewis Ship
RUN /usr/sbin/redis-server /etc/redis.conf
RUN redis-cli sadd production-stores 5555
Updates:
Reviewing the official redis image, it documents the way to use redis-cli:
# via redis-cli
$ docker run -it --link some-redis:redis --rm redis sh -c 'exec redis-cli -h "$REDIS_PORT_6379_TCP_ADDR" -p "$REDIS_PORT_6379_TCP_PORT"'
So I tested the image and ran it as below:
# start redis container
$ docker run --name some-redis -d redis
# link redis container and run redis-cli command
$ docker run -it --link some-redis:redis --rm redis redis-cli -h redis
redis:6379> PING
PONG
redis:6379> exit
# Add the specified members to the set stored at key
$ docker run -it --link some-redis:redis --rm redis redis-cli -h redis sadd production-stores 5555
(integer) 1
$
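To check later that the seeded data is really there, the same linked redis-cli pattern can read the set back; a hedged example following the commands above:

docker run -it --link some-redis:redis --rm redis redis-cli -h redis smembers production-stores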
I created a Django application which runs inside a Docker container. I needed to create a thread inside the Django application, so I used Celery, with Redis as the Celery database.
If I install redis in the docker image (Ubuntu 14.04):
RUN apt-get update && apt-get -y install redis-server
RUN pip install redis
The Redis server is not launched: the Django application throws an exception because the connection is refused on port 6379. If I manually start Redis, it works.
If I start the Redis server with the following command, it hangs:
RUN redis-server
If I try to tweak the previous line, it does not work either:
RUN nohup redis-server &
So my question is: is there a way to start Redis in the background and have it restart when the Docker container is restarted?
The Docker "last command" is already used with:
CMD uwsgi --http 0.0.0.0:8000 --module mymodule.wsgi
RUN commands only add new image layers; they are not executed at runtime, only at build time of the image.
Use CMD instead. You can combine multiple commands by externalizing them into a shell script which is invoked by CMD:
CMD start.sh
In the start.sh script you write the following:
#!/bin/bash
# Start Redis in the background, then run uwsgi as the container's foreground process.
nohup redis-server &
uwsgi --http 0.0.0.0:8000 --module mymodule.wsgi
When you run a Docker container, there is always a single top level process. When you fire up your laptop, that top level process is an "init" script, systemd or the like. A docker image has an ENTRYPOINT directive. This is the top level process that runs in your docker container, with anything else you want to run being a child of that. In order to run Django, a Celery Worker, and Redis all inside a single Docker container, you would have to run a process that starts all three of them as child processes. As explained by Milan, you could set up a Supervisor configuration to do it, and launch supervisor as your parent process.
Another option is to actually boot the init system. This will get you very close to what you want since it will basically run things as though you had a full scale virtual machine. However, you lose many of the benefits of containerization by doing that :)
The simplest way altogether is to run several containers using Docker-compose. A container for Django, one for your Celery worker, and another for Redis (and one for your data store as well?) is pretty easy to set up that way. For example...
# docker-compose.yml
web:
  image: myapp
  command: uwsgi --http 0.0.0.0:8000 --module mymodule.wsgi
  links:
    - redis
    - mysql
celeryd:
  image: myapp
  command: celery worker -A myapp.celery
  links:
    - redis
    - mysql
redis:
  image: redis
mysql:
  image: mysql
This would give you four containers for your four top-level processes. redis and mysql would be exposed with the DNS names "redis" and "mysql" inside your app containers, so instead of pointing at "localhost" you'd point at "redis".
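For instance, a hedged illustration of what that means for the settings inside the web and celeryd containers (the exact variable names depend on your application and are assumptions here):

# Point Celery/Django at the compose service names instead of localhost.
export CELERY_BROKER_URL="redis://redis:6379/0"
export DATABASE_HOST="mysql"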
There is a lot of good info in the Docker Compose docs.
Use supervisord, which would control both processes. The conf file might look like this:
...
[program:redis]
command= /usr/bin/redis-server /srv/redis/redis.conf
stdout_logfile=/var/log/supervisor/redis-server.log
stderr_logfile=/var/log/supervisor/redis-server_err.log
autorestart=true
[program:nginx]
command=/usr/sbin/nginx
stdout_events_enabled=true
stderr_events_enabled=true
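To make supervisord the container's top-level process as described above, it has to stay in the foreground; a minimal, hedged invocation (the config path is an assumption):

# Run supervisord in the foreground so Docker keeps the container alive.
supervisord --nodaemon -c /etc/supervisord.conf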