Redis slave-to-master connection fails; slave logs show: Unable to connect to MASTER: Permission denied

I have followed the instructions on how to set up a Redis master/slave setup, but once I am done I cannot work out why the servers are unable to see one another.
This is the second build I have put together and I am stuck at the same spot. I could really use some help; I have never worked with Redis before and could use some guidance.
Using CentOS 7.
When I check the Redis slave logs I get the following:
[20671] 12 Jan 15:48:02.369 * Connecting to MASTER 10.10.10.10:6379
[20671] 12 Jan 15:48:02.369 # Unable to connect to MASTER: Permission denied
The config files use the exact same password on both master and slave.
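For reference, the relevant directives look roughly like the sketch below (the IP and password are placeholders, not the real values):
# slave's redis.conf (sketch)
slaveof 10.10.10.10 6379
masterauth MySecretPass
requirepass MySecretPass
# master's redis.conf (sketch)
requirepass MySecretPass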
Just to test, I also gave full permissions on the Redis working directory and its files and folders.
I tested the ports and they are reachable.
I also get the following when I run INFO on the Redis slave:
# Replication
role:slave
master_host:10.10.10.11
master_port:6379
master_link_status:down
master_last_io_seconds_ago:-1
master_sync_in_progress:0
slave_repl_offset:1
master_link_down_since_seconds:1452631759
slave_priority:100
slave_read_only:1
connected_slaves:0
master_repl_offset:0
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0
INFO from MASTER NODE:
# Replication
role:master
connected_slaves:0
master_repl_offset:0
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0
Both servers are running on CentOS 7.

I had this same issue when setting up a Redis cluster on CentOS 7 at AWS, and it was in fact due to SELinux being enabled. You can verify that this is your issue by checking the contents of /var/log/audit/audit.log.
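For example, a quick way to look for Redis-related denials (assuming the audit daemon is running; run as root):
grep 'avc.*denied' /var/log/audit/audit.log | grep redis
# or, with the audit tools installed:
ausearch -m avc -c redis-server --start recent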
To allow Redis replication with SELinux enabled, run the following commands as root to update the security policy. As you will likely be using Sentinel to manage the cluster, the necessary policies for Sentinel masters and slaves are included as well.
Folder for the new policy files:
mkdir -p ~/.selinux
Redis Replication Policy
Allow data replication to slaves; include this on the master as well, since it may become a slave at some point.
cat <<SELINUX > ~/.selinux/redis_repl.te
# create new
module redis_repl 1.0;
require {
type redis_port_t;
type redis_t;
class tcp_socket name_connect;
}
#============= redis_t ==============
allow redis_t redis_port_t:tcp_socket name_connect;
SELINUX
checkmodule -m -M -o ~/.selinux/redis_repl.mod ~/.selinux/redis_repl.te
semodule_package --outfile ~/.selinux/redis_repl.pp --module ~/.selinux/redis_repl.mod
semodule -i ~/.selinux/redis_repl.pp
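Optionally, confirm the module is now loaded:
semodule -l | grep redis_repl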
Redis Sentinel Master/Slave Policy, all Redis nodes
Allow Sentinel HA traffic on the Redis master/slave nodes
cat <<SELINUX > ~/.selinux/redis_ha.te
# create new
module redis_ha 1.0;
require {
type etc_t;
type redis_t;
class file write;
}
#============= redis_t ==============
allow redis_t etc_t:file write;
SELINUX
checkmodule -m -M -o ~/.selinux/redis_ha.mod ~/.selinux/redis_ha.te
semodule_package --outfile ~/.selinux/redis_ha.pp --module ~/.selinux/redis_ha.mod
semodule -i ~/.selinux/redis_ha.pp
Redis Sentinel Server Policy, all Sentinel nodes
Allow Sentinel HA traffic from the Sentinel nodes.
Note that you may need to change the Sentinel port if you aren't using the 26379 default.
# Allow Sentinel Port
semanage port -a -t redis_port_t -p tcp 26379
# Allow Sentinel Server
cat <<SELINUX > ~/.selinux/redis_sentinel.te
# create new
module redis_sentinel 1.0;
require {
type redis_port_t;
type etc_t;
type redis_t;
class tcp_socket name_connect;
class file write;
}
#============= redis_t ==============
allow redis_t redis_port_t:tcp_socket name_connect;
allow redis_t etc_t:file write;
SELINUX
checkmodule -m -M -o ~/.selinux/redis_sentinel.mod ~/.selinux/redis_sentinel.te
semodule_package --outfile ~/.selinux/redis_sentinel.pp --module ~/.selinux/redis_sentinel.mod
semodule -i ~/.selinux/redis_sentinel.pp
Restart Redis and Sentinel
service redis restart
service redis-sentinel restart
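Once both services are back up, you can confirm from the slave that the replication link came up (the password is a placeholder):
redis-cli -a yourpassword info replication | grep master_link_status
# expected output once replication is healthy:
# master_link_status:up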

To @otaviofcs's point, you're likely running into an SELinux issue. If you look in /var/log/audit/audit.log, I suspect you'll see a lot of logging that looks like this:
type=AVC msg=audit(1465349491.812:28458): avc: denied { name_connect } for pid=30676 comm="redis-server" dest=6379 scontext=system_u:system_r:redis_t:s0 tcontext=system_u:object_r:redis_port_t:s0 tclass=tcp_socket
If so, you can either dive into the bowels of SELinux policy management or take the easy road: set SELinux targeted policy to permissive:
setenforce permissive
Note that you'll need to set the same in /etc/selinux/config by changing the line with SELINUX= to SELINUX=permissive.
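For example, one way to do both steps (the sed expression is just one option for editing the config):
setenforce permissive                                              # effective immediately, lost on reboot
sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config     # persist across reboots
getenforce                                                         # should now report Permissive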

two "new experience points"
The config is in the 2 ends of the conecction,
to add "personalized" port you can use semanage
sudo semanage port -a -t redis_port_t -p tcp 8014
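You can verify the port was added to the redis_port_t type with:
sudo semanage port -l | grep redis_port_t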

How can I extend a Redis database with the redisgraph.so module?

I am unable to import the RedisGraph module redisgraph.so into my Redis database.
I successfully compiled redisgraph.so from sources.
redisgraph.so execution rights are set for everyone.
I tried:
$ redis-cli
> shutdown ((stop redis-server))
$ redis-server --loadmodule pathto/redisgraph.so
((System replies:))
# oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
# Redis version=4.0.9, bits=64, commit=00000000, modified=0, pid=2407, just started
# Configuration loaded
* Increased maximum number of open files to 10032 (it was originally set to 1024).
# Creating Server TCP listening socket *:6379: bind: Address already in use
$ redis-cli
> module list
(empty list or set)
> module load pathto/redisgraph.so
(error) ERR Error loading the extension. Please check the server logs.
((log file says: *no permission*))
The Redis database works fine as a key-value store,
but I fail to extend it with graph functionality.
So far I am unable to issue commands like "GRAPH.QUERY" (Redis replies "unknown command").
I have no idea why redis-server seems to ignore the import command, or why redis-cli complains about permission rights.
The error indicates that you already have a running process bound to the same port (probably another redis-server).
Also, you'd be better off using redisgraph with the latest Redis version (i.e. v5).
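A rough way to sort that out might look like this (the module path is a placeholder, and stopping the packaged service assumes that is what holds port 6379):
sudo ss -lntp | grep 6379                     # see which process is already bound to 6379
sudo systemctl stop redis                     # stop the packaged server if that's the one holding the port
redis-server --loadmodule /path/to/redisgraph.so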
It's better to have Redis managed by systemd; you can configure it as follows:
Update the supervised directive in /etc/redis/redis.conf to use systemd by setting supervised systemd.
Create a Redis systemd unit file at /etc/systemd/system/redis.service and set the Unit, Service and Install directives:
[Unit]
Description=Redis In-Memory Data Store
After=network.target
[Service]
User=redis
Group=redis
ExecStart=/usr/local/bin/redis-server /etc/redis/redis.conf
ExecStop=/usr/local/bin/redis-cli shutdown
Restart=always
[Install]
WantedBy=multi-user.target
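After saving the unit file, reload systemd so it picks up the new unit:
sudo systemctl daemon-reload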
Then start Redis and check its status:
sudo systemctl start redis
sudo systemctl status redis
Assuming all of these tests worked and you would like Redis to start automatically when your server boots, enable the systemd service:
sudo systemctl enable redis
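With Redis under systemd like this, you can also load the RedisGraph module from the config file instead of the command line (the module path below is a placeholder):
# in /etc/redis/redis.conf
loadmodule /path/to/redisgraph.so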

Redis Sentinel Authentication

I have 3 servers with Redis and Sentinel running.
All instances have the following in their configuration:
requirepass XXX
masterauth XXX
I can connect to the Redis server with redis-cli, but when I try to connect to the Sentinel I cannot authenticate.
root@ip-:/usr/lib/nagios/plugins# redis-cli -p 26379
127.0.0.1:26379> AUTH xxx
(error) ERR unknown command 'AUTH'
127.0.0.1:26379>
If I use the same command but with the Redis port, it works.
Thanks
best
You have to set up authentication for the Sentinels too, i.e. requirepass <password> in sentinel.conf. Be careful, as not every client supports this setup.
Also, you need to set sentinel auth-pass <master-name> <password> in that file so that the Sentinels can administer the password-protected Redis servers. (But I'm guessing you already did that.)
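Putting it together, a minimal sketch of the relevant sentinel.conf lines (the master name mymaster is an assumption, XXX is the placeholder password from the question, and requirepass in Sentinel needs a version that supports it):
requirepass XXX
sentinel auth-pass mymaster XXX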

Running multiple instances of Redis on CentOS

I want to run multiple instances of Redis on CentOS 7.
Can anyone point me to a proper link or post the steps here?
I googled but did not find any relevant information.
You can run multiple instances of Redis on a single machine by using different ports. If that is what you need, follow the steps below.
After installing the first Redis instance, it listens on localhost:6379 by default.
For the second instance, create a new working directory.
The default Redis instance uses /var/lib/redis as its working directory; the dumped memory content is saved there as dump.rdb unless you change it. To avoid runtime conflicts, we need to create a new working directory.
mkdir -p /var/lib/redis2/
chown redis /var/lib/redis2/
chgrp redis /var/lib/redis2/
Generate configurations
Create a new configuration file by copying /etc/redis/redis.conf
cp /etc/redis/redis.conf /etc/redis/redis2.conf
chown redis /etc/redis/redis2.conf
Edit the following settings to avoid conflicts:
logfile "/var/log/redis/redis2.log"
dir "/var/lib/redis2"
pidfile "/var/run/redis/redis2.pid"
port 6380
Create service file
cp /usr/lib/systemd/system/redis.service /usr/lib/systemd/system/redis2.service
Modify the settings under the [Service] section:
[Service]
ExecStart=/usr/bin/redis-server /etc/redis/redis2.conf --daemonize no
ExecStop=/usr/bin/redis-shutdown redis2
Set it to start at boot:
systemctl enable redis2
Start the second Redis instance:
service redis2 start
Check Status
lsof -i:6379
lsof -i:6380
By following this you can start two Redis servers. If you want more, repeat the steps again.
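To confirm both instances respond:
redis-cli -p 6379 ping
redis-cli -p 6380 ping
# both should answer PONG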
If it is set to --daemonize no, Redis will crash when data is inserted.
ExecStart=/usr/bin/redis-server /etc/redis2.conf --daemonize no
Should be changed to:
ExecStart=/usr/bin/redis-server /etc/redis2.conf --supervised systemd
My Redis is 5.0.7.
FYI.
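For reference, the [Service] pairing I would expect with supervised systemd looks like this (the Type=notify line is my assumption, mirroring upstream Redis unit files, not something stated above):
[Service]
Type=notify
ExecStart=/usr/bin/redis-server /etc/redis2.conf --supervised systemd
ExecStop=/usr/bin/redis-shutdown redis2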

How can I run redis on a single server on different ports?

I'm using kue, which uses node_redis, but I'm also already using node_redis for my sessions, so I'd like to keep one Redis server on a specific port, say the default 6379, and have kue use a second one listening on port 1234.
How would I be able to do this? I found this article which talks about something similar, but I don't really want to have to create an init script to do this.
Launch redis-server and supply a different argument for 'port' which can be done on the command-line:
edd@max:~$ redis-server -h
Usage: ./redis-server [/path/to/redis.conf] [options]
./redis-server - (read config from stdin)
./redis-server -v or --version
./redis-server -h or --help
./redis-server --test-memory <megabytes>
Examples:
./redis-server (run the server with default conf)
./redis-server /etc/redis/6379.conf
./redis-server --port 7777
./redis-server --port 7777 --slaveof 127.0.0.1 8888
./redis-server /etc/myredis.conf --loglevel verbose
Sentinel mode:
./redis-server /etc/sentinel.conf --sentinel
edd@max:~$
You can do this from, say, /etc/rc.local as well so that this happens at startup.
But maybe you can also rethink your approach. Redis is so good at handling writes that you may just get by with a second database?
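If you go the second-database route, clients on the same server just select different database indexes (whether your libraries let you pick the database is worth checking); for example:
redis-cli -n 0 set session:example 1     # sessions in database 0
redis-cli -n 1 set jobs:example 1        # job data in database 1 (keys here are only illustrative)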
Very easy command:
echo "port 4000" | redis-server -
echo "port 4001" | redis-server -

Redis replication through ssh not starting

Migrating a Rails app infrastructure bit by bit from one hosting zone to another (with only the public internet linking them), I need to migrate a Redis instance from one side to the other.
Rather than dumping the data (which is small anyway: just queues), I'd prefer to use a master-slave setup to ensure that nothing is lost and that we don't have any downtime.
Internet says SSH is my friend.
Old1 is the old server hosting the primary Redis server. Redis there is bound to a private network IP of the server.
New1 is the new server hosting the new Redis server.
On New1 I set up the SSH tunnel / port forwarding:
ssh -L 7380:<private_old1_ip>:6379 username@old1.publicname.ex
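(A backgrounded variant of the same tunnel, using the standard OpenSSH -f and -N flags, would be:)
ssh -f -N -L 7380:<private_old1_ip>:6379 username@old1.publicname.ex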
Still on New1, I check that I can connect to both instances:
redis-cli -p 6379
redis-cli -p 7380
On both instances, INFO works.
On New1:Redis, through redis-cli, I set up slave mode:
slaveof localhost 7380
Now here is what INFO says on New1:Redis:
# Replication
role:slave
master_host:localhost
master_port:7380
master_link_status:down
master_last_io_seconds_ago:-1
master_sync_in_progress:0
master_link_down_since_seconds:1399544048
slave_priority:100
slave_read_only:1
connected_slaves:0
While on Old1:Redis (still through the tunnel):
# Replication
role:master
connected_slaves:1
slave0:<private_old1_ip>,6379,online
So the tunnel is up, and working.
There is plenty of memory available.
The slave appears connected on the Old1 side (but it shows up with the Old1 IP).
The slave says the master link status is down and that no sync is in progress.
What am I missing?
The problem was quite simple, in fact:
The log destination was set to /dev/null, so nothing was appearing in /var/log.
The storage directory was set to ./, which obviously doesn't play well with the daemon settings and was causing permission errors on synchronisation.
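In other words, pointing logfile and dir at real locations in the slave's redis.conf and restarting resolved it (the paths below are just examples):
logfile /var/log/redis/redis-server.log
dir /var/lib/redis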