Redis-server is in slave mode on startup

I'm trying to create a one-master one-slave redis system.
I noticed that my master node is starting as a slave:
1) "slave"
2) "no"
3) (integer) 0
4) "connect"
5) (integer) -1
Why is this? Is this how it should work?
I tried stopping the slave node and then starting the master, but it still acts as a slave unless I run redis-cli slaveof no one.
My redis.conf:
bind 0.0.0.0 ::1
protected-mode yes
port 6379
tcp-backlog 511
timeout 0
tcp-keepalive 300
daemonize yes
supervised no
pidfile /var/run/redis/redis-server.pid
loglevel notice
logfile /var/log/redis/redis-server.log
databases 16
always-show-logo yes
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
dir /var/lib/redis
replica-serve-stale-data yes
replica-read-only no
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-disable-tcp-nodelay no
replica-priority 100
lazyfree-lazy-eviction no
lazyfree-lazy-expire no
lazyfree-lazy-server-del no
replica-lazy-flush no
appendonly no
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
aof-use-rdb-preamble yes
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
stream-node-max-bytes 4096
stream-node-max-entries 100
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit replica 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
dynamic-hz yes
aof-rewrite-incremental-fsync yes
rdb-save-incremental-fsync yes
maxmemory 200mb
maxmemory-policy allkeys-lfu
replicaof no one
slaveof no one
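The last two lines of this config deserve a close look. If I remember the config-file parser correctly, the no one form is only special-cased by the redis-cli SLAVEOF/REPLICAOF command; in redis.conf the directive is documented as replicaof <masterip> <masterport>, so those lines may be read literally as host "no", port 0. That would match the ROLE-style reply shown above, whose fields for a replica are: role, master host, master port, link state, replication offset. A sketch that lists every replication directive the parser will see (it uses a throwaway copy of the tail of the config above; point CONF at the real file, e.g. /etc/redis/redis.conf, instead):

```shell
#!/bin/sh
# Sketch: list every replication directive the config parser will see.
# A throwaway copy of the tail of the redis.conf above is used here;
# point CONF at the real file instead.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
maxmemory 200mb
maxmemory-policy allkeys-lfu
replicaof no one
slaveof no one
EOF
# Any match here is taken literally as "<masterip> <masterport>":
grep -nEi '^[[:space:]]*(replicaof|slaveof)[[:space:]]' "$CONF"
rm -f "$CONF"
```

If the real file matches, deleting those lines (or commenting them out) before restarting is worth trying.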

Related

Redis sentinel slave error link state on slave

I'm trying to set up replication using Redis Sentinel. On the dev server with Redis 3 everything works fine, but in production with Redis 5 I ran into a problem. First I started replication using replicaof in the slave's config, then I configured Sentinel:
sentinel down-after-milliseconds mymaster 15000
sentinel failover-timeout mymaster 20000
Sentinel discovered the master, but did not pick up the slave, so I tried adding the slave to Sentinel manually:
sentinel known-slave mymaster SLAVE-IP 6379
After that change I restarted Sentinel, then tried to promote the slave and retire the old master, but it failed because master-link-status = err and:
SENTINEL failover mymaster
(error) NOGOODSLAVE No suitable slave to promote
Without Sentinel, replication between the Redis instances works fine.
Redis slave config:
protected-mode yes
port 6379
tcp-backlog 511
timeout 0
tcp-keepalive 300
daemonize yes
supervised no
pidfile "/var/run/redis/redis-server.pid"
loglevel notice
logfile "/var/log/redis/redis-server.log"
databases 16
always-show-logo yes
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename "dump.rdb"
dir "/var/lib/redis"
replica-serve-stale-data yes
replica-read-only no
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-disable-tcp-nodelay no
replica-priority 100
maxmemory 2000mb
maxmemory-policy allkeys-lru
lazyfree-lazy-eviction no
lazyfree-lazy-expire no
lazyfree-lazy-server-del no
replica-lazy-flush no
appendonly no
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
aof-use-rdb-preamble yes
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
stream-node-max-bytes 4096
stream-node-max-entries 100
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit replica 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
dynamic-hz yes
aof-rewrite-incremental-fsync yes
rdb-save-incremental-fsync yes
Redis master config:
protected-mode yes
port 6379
tcp-backlog 511
timeout 0
tcp-keepalive 60
daemonize yes
supervised no
pidfile "/var/run/redis/redis-server.pid"
loglevel notice
logfile "/var/log/redis/redis-server.log"
databases 16
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename "dump.rdb"
dir "/var/lib/redis"
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-disable-tcp-nodelay no
replica-priority 100
maxmemory 4000mb
maxmemory-policy allkeys-lru
appendonly no
appendfilename "appendonly.aof"
appendfsync no
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit replica 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes
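For reference, a minimal sentinel.conf for a one-master/one-slave pair might look like the sketch below (MASTER-IP is a placeholder; the quorum of 2 assumes at least three Sentinels). Sentinel discovers the replicas on its own by periodically sending INFO to the master, so sentinel known-slave lines normally do not need to be written by hand:

```
port 26379
sentinel monitor mymaster MASTER-IP 6379 2
sentinel down-after-milliseconds mymaster 15000
sentinel failover-timeout mymaster 20000
```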
I had a similar issue. What I did was send a SENTINEL RESET command to all Sentinel instances, one after the other, waiting at least 30 seconds between instances. That fixed the master-link-status and let the failover happen.
$ src/redis-cli -h 192.168.1.153 -p 26379
> SENTINEL RESET mymaster
(wait 30 seconds)
$ src/redis-cli -h 192.168.1.154 -p 26379
> SENTINEL RESET mymaster
(wait 30 seconds)
$ src/redis-cli -h 192.168.1.155 -p 26379
> SENTINEL RESET mymaster
> sentinel slaves mymaster
2) "192.168.1.155:6379"
31) "master-link-status"
32) "ok"
2) "192.168.1.154:6379"
31) "master-link-status"
32) "ok"
-> so that seems fine.
192.168.1.155:26379> sentinel failover mymaster
OK
-> finally!

Redis Cluster - can't establish connection Redis master-slave

Hello, I'm trying to create a 6-node Redis Cluster with 3 master and 3 slave nodes. I'm using redis-server version 5.0.5.
In the master nodes I have this configuration:
# Redis configuration file example.
bind 192.168.77.21
protected-mode no
port 6379
tcp-backlog 511
timeout 0
tcp-keepalive 60
################################# GENERAL #####################################
daemonize no
supervised no
pidfile /var/run/redis_6379.pid
loglevel notice
logfile /var/log/redis/redis.log
databases 16
always-show-logo yes
################################ SNAPSHOTTING ################################
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
dir /var/lib/redis
################################# REPLICATION #################################
replica-serve-stale-data yes
replica-read-only yes
repl-disable-tcp-nodelay no
################################## SECURITY ###################################
requirepass somepassword
maxmemory 512mb
maxmemory-policy noeviction
lazyfree-lazy-eviction no
lazyfree-lazy-expire no
lazyfree-lazy-server-del no
replica-lazy-flush no
############################## APPEND ONLY MODE ###############################
appendonly no
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
aof-use-rdb-preamble yes
################################ LUA SCRIPTING ###############################
lua-time-limit 5000
################################ REDIS CLUSTER ###############################
cluster-enabled yes
cluster-config-file nodes-6379.conf
cluster-node-timeout 5000
################################## SLOW LOG ###################################
slowlog-log-slower-than 10000
slowlog-max-len 128
################################ LATENCY MONITOR ##############################
latency-monitor-threshold 0
############################# EVENT NOTIFICATION ##############################
notify-keyspace-events ""
############################### ADVANCED CONFIG ###############################
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
stream-node-max-bytes 4096
stream-node-max-entries 100
activerehashing yes
hz 10
dynamic-hz yes
aof-rewrite-incremental-fsync yes
rdb-save-incremental-fsync yes
In the slave nodes I have this configuration:
# Redis configuration file example.
bind 192.168.77.22
protected-mode no
port 6379
tcp-backlog 511
timeout 0
tcp-keepalive 60
################################# GENERAL #####################################
daemonize no
supervised no
pidfile /var/run/redis_6379.pid
loglevel notice
logfile /var/log/redis/redis.log
databases 16
always-show-logo yes
################################ SNAPSHOTTING ################################
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
dir /var/lib/redis
################################# REPLICATION #################################
replicaof 192.168.77.21 6379
masterauth somepassword
replica-serve-stale-data yes
replica-read-only yes
# WARNING: DISKLESS REPLICATION IS EXPERIMENTAL CURRENTLY
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-disable-tcp-nodelay no
replica-priority 100
################################## SECURITY ###################################
requirepass somepassword
############################## MEMORY MANAGEMENT ################################
maxmemory 512mb
############################# LAZY FREEING ####################################
lazyfree-lazy-eviction no
lazyfree-lazy-expire no
lazyfree-lazy-server-del no
replica-lazy-flush no
############################## APPEND ONLY MODE ###############################
appendonly no
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
aof-use-rdb-preamble yes
################################ LUA SCRIPTING ###############################
lua-time-limit 5000
################################ REDIS CLUSTER ###############################
cluster-enabled yes
cluster-config-file nodes-6379.conf
cluster-node-timeout 5000
################################## SLOW LOG ###################################
slowlog-log-slower-than 10000
slowlog-max-len 128
################################ LATENCY MONITOR ##############################
latency-monitor-threshold 0
############################# EVENT NOTIFICATION ##############################
notify-keyspace-events ""
############################### ADVANCED CONFIG ###############################
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
stream-node-max-bytes 4096
stream-node-max-entries 100
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit replica 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
dynamic-hz yes
aof-rewrite-incremental-fsync yes
rdb-save-incremental-fsync yes
After all that, when I try to start the Redis service using systemctl start redis, it fails with the following error:
Oct 04 14:17:57 node2 systemd[1]: Starting Redis persistent key-value database...
Oct 04 14:17:57 node2 systemd[1]: redis.service: main process exited, code=exited, status=1/FAILURE
Oct 04 14:17:57 node2 redis-shutdown[30834]: Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
Oct 04 14:17:57 node2 redis-shutdown[30834]: Could not connect to Redis at 192.168.77.22:6379: Connection refused
Oct 04 14:17:57 node2 systemd[1]: redis.service: control process exited, code=exited status=1
Oct 04 14:17:57 node2 systemd[1]: Failed to start Redis persistent key-value database.
Oct 04 14:17:57 node2 systemd[1]: Unit redis.service entered failed state.
Oct 04 14:17:57 node2 systemd[1]: redis.service failed.
Any suggestions or solutions?
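Note that the "Connection refused" line comes from redis-shutdown, the stop helper, not from the server itself; the actual reason for status=1/FAILURE should be at the end of /var/log/redis/redis.log. One likely candidate, if the startup checks in 5.0 are as I remember: cluster mode refuses a replicaof/slaveof directive in the config file, because in a Redis Cluster replicas are assigned with redis-cli --cluster ... --cluster-replicas or CLUSTER REPLICATE, not via replicaof. A sketch of that sanity check (using a throwaway copy of the two relevant lines from the slave config above; point CONF at the real file on node2 instead):

```shell
#!/bin/sh
# Sketch: flag config directives that conflict with cluster mode.
# A throwaway copy of the two relevant lines is used here;
# point CONF at the real /etc/redis/redis.conf on node2 instead.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
replicaof 192.168.77.21 6379
cluster-enabled yes
EOF
if grep -qiE '^cluster-enabled[[:space:]]+yes' "$CONF" \
   && grep -qiE '^(replicaof|slaveof)[[:space:]]' "$CONF"; then
    echo "conflict: replicaof/slaveof is not allowed when cluster-enabled is yes"
fi
rm -f "$CONF"
```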

Redis master becoming slave of another master in docker environment

I have configured redis-sentinel with one master and two slaves.
Let's call this setup of three machines a cluster.
I have a lot of clusters running on a lot of Docker containers.
At runtime I manage the IPs in the redis.conf and sentinel.conf files.
My problem is:
The master node of cluster-1 somehow became a slave of the master of cluster-2.
On the cluster-1 master node I killed the Redis and Sentinel services, removed slaveof <cluster-2 master ip> 6379, and restarted the Redis service with the edited conf file.
The moment I start the Redis service, it again becomes a slave of the cluster-2 master.
I tried slaveof no one from inside redis-cli, but within seconds the node turns into a slave again.
All this happens without even starting the Sentinel service.
What is happening here? Are there other entries I would have to delete?
redis.conf
bind 0.0.0.0
protected-mode no
port 6379
tcp-backlog 511
timeout 0
tcp-keepalive 300
daemonize no
supervised no
pidfile "/var/run/redis_6379.pid"
loglevel notice
logfile "/var/log/redis.log"
databases 16
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename "dump.rdb"
dir "/"
slave-serve-stale-data yes
slave-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-disable-tcp-nodelay no
slave-priority 100
appendonly no
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes
slaveof 192.168.60.38 6379 #this comes back again and again
For anyone who faces the same problem:
Sentinel is designed to automatically discover other Sentinels and replicas, and here the cluster-1 and cluster-2 Sentinels were all able to reach each other.
The cluster-1 Sentinel became disloyal (pun intended) and rewrote the cluster-1 Redis configuration, making it a slave of the cluster-2 master.
Possible solutions:
1. Use a unique password for each Redis setup (requirepass in the Redis configuration, with a matching masterauth on the replicas).
2. Block traffic between the different Redis clusters.
3. Don't use Sentinel at all.
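A sketch of the password option, where cluster1-secret is a placeholder unique to cluster-1. The same password has to appear in three places so that clients, replication, and Sentinel can all still authenticate:

```
# redis.conf on every node of cluster-1
requirepass cluster1-secret
masterauth  cluster1-secret

# sentinel.conf on every cluster-1 Sentinel
sentinel auth-pass mymaster cluster1-secret
```

Since a cluster-2 Sentinel does not know cluster-1's password, it can no longer reconfigure cluster-1's nodes.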

Redis takes a lot of time at a particular time every hour

I am running a Redis Cluster with 6 nodes: 1 master per node and 1 slave per master. The dataset is around 60-80 GB per node. We are experiencing a peculiar problem: at the 16th minute of every hour, Redis responds very slowly. The usual response time is < 5 ms, but at the 16th minute the average response time shoots above 15 s. Is there any background process in Redis that runs hourly?
My redis.conf is as below:-
port 23000
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 20000
cluster-slave-validity-factor 1
logfile redis.log
loglevel notice
slowlog-log-slower-than 1000
slowlog-max-len 64
latency-monitor-threshold 100
maxmemory-policy volatile-ttl
slave-read-only no
protected-mode no
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error no
rdbchecksum yes
dbfilename dump.rdb
dir /data/redis/redis-master
appendonly yes
appendfilename redis-staging-ao.aof
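There is no hourly cron inside Redis itself, but with these save points (save 900 1 / save 300 10 / save 60 10000) plus appendonly yes, a BGSAVE or AOF rewrite that forks a 60-80 GB process is a plausible suspect: the fork and the copy-on-write traffic that follows can stall the server. A few redis-cli probes (assuming the node listens on 23000 as configured and the server is running) to correlate the 16th-minute stall with persistence activity:

```
$ redis-cli -p 23000 info persistence | grep -E 'in_progress|rdb_last_bgsave_status|aof_last_bgrewrite_status'
$ redis-cli -p 23000 info stats | grep latest_fork_usec
$ redis-cli -p 23000 slowlog get 10
$ redis-cli -p 23000 latency latest
```

If rdb_bgsave_in_progress or aof_rewrite_in_progress flips to 1 at the 16th minute, or latest_fork_usec is large, persistence is the place to look.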

Production considerations for Spring Cloud, Spring Data Redis & Eureka

I have a Spring Cloud microservices application spanning 4 server types: a security gateway, two UI servers, and a REST API server. Each of these will run on its own VM in the production environment: 4 instances of the REST server and 2 instances of each other server.
The system is expected to serve around 30,000 users.
The service discovery is provided by Eureka. I have two Eureka servers for failover.
The shared HTTP session is provided by Spring Session & Spring Data Redis, using the @EnableRedisHttpSession annotation on the participating servers.
I decided to go with a 3 VMs setup for Redis ("Example 2: basic setup with three boxes" at this URL: http://redis.io/topics/sentinel).
Each VM will run a Redis server and a Redis sentinel process (one of the Redis servers will be the master, two instances will be slaves)
This all works great on development machines and System Test machines, mostly running all processes on the same server.
I am now moving towards running performance testing on production-like environments, with multiple VMs. I would like some feedback and recommendations from developers who already have similar Spring Cloud setups in production:
What edge cases should I look for?
Are there any recommended configuration settings? My setup is shown below.
Are there configuration settings that might work well in testing environments but become serious issues in production environments?
In my specific scenario, I would also like a solution that purges old data from Redis, since it only exists to store session information. If for some reason Spring did not clean up the session data on session expiration (for example, because the server was killed abruptly), I would like some cleanup of the really old data. I read about the LRU/caching mechanism in Redis, but it does not seem to offer any guarantee with regard to time; it only kicks in when a certain data size is reached.
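Time-based cleanup is what per-key TTLs give you, independently of the size-triggered maxmemory/LRU mechanism: Spring Session sets an expiration on its session keys, and Redis removes expired keys both lazily on access and via a periodic active-expiry cycle, so orphaned sessions eventually disappear even if the server that owned them died. The mechanism can be seen with plain redis-cli (the key name and the 1800-second TTL are illustrative):

```
$ redis-cli
127.0.0.1:6379> set session:demo payload EX 1800
OK
127.0.0.1:6379> ttl session:demo
(integer) 1800
```

After 1800 seconds the key is gone with no application involvement.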
Here is a configuration of my master Redis server. The slaves are pretty much the same, just different ports and indicating they are slaveof the master:
daemonize no
port 6379
dbfilename "dump6379.rdb"
dir "/Users/odedia/Work/Redis/6379"
pidfile "/Users/odedia/Work/Redis/redis6379.pid"
#logfile "/Users/odedia/Work/Redis/redis6379.log"
tcp-backlog 511
timeout 0
tcp-keepalive 60
loglevel notice
databases 16
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
slave-serve-stale-data yes
slave-read-only no
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-disable-tcp-nodelay no
slave-priority 100
appendonly no
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events "gxE"
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-entries 512
list-max-ziplist-value 64
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes
Here is a Redis sentinel configuration:
port 5000
sentinel monitor mymaster 127.0.0.1 6379 2
sentinel down-after-milliseconds mymaster 5000
sentinel failover-timeout mymaster 5000
sentinel config-epoch mymaster 59
And here is the application.yml for the Eureka server:
server:
  port: 1111
eureka:
  instance:
    hostname: localhost
  client:
    serviceUrl:
      defaultZone: https://${eureka.instance.hostname}:${server.port}/eureka/
    registerWithEureka: false # Don't register yourself with yourself...
    fetchRegistry: false
  server:
    waitTimeInMsWhenSyncEmpty: 0
spring:
  application:
    name: eureka
And here is the application.yml for the gateway server, which is responsible for the Zuul-based routing:
# Spring properties
spring:
  application:
    name: gateway-server # Service registers under this name
  redis:
    sentinel:
      master: mymaster
      nodes: 127.0.0.1:5000,127.0.0.1:5001,127.0.0.1:5002
server:
  port: 8080
security:
  sessions: ALWAYS
zuul:
  retryable: true # Always retry before failing
  routes:
    ui1-server: /ui1/**
    ui2-server: /ui2/**
    api-resource-server: /rest/**
# Discovery Server Access
eureka:
  client:
    serviceUrl:
      defaultZone: https://localhost:1111/eureka/
  instance:
    hostname: localhost
    metadataMap:
      instanceId: ${spring.application.name}:${spring.application.instance_id:${random.value}}
hystrix:
  command:
    default:
      execution:
        isolation:
          strategy: THREAD
          thread:
            timeoutInMilliseconds: 40000 # Timeout after this time in milliseconds
ribbon:
  ConnectTimeout: 5000 # try to connect to the endpoint for 5 seconds
  ReadTimeout: 50000 # try to get a response after a successful connection for 50 seconds
  # Max number of retries on the same server (excluding the first try)
  maxAutoRetries: 1
  # Max number of next servers to retry (excluding the first server)
  MaxAutoRetriesNextServer: 2
I wrote an article based on my experience in production with Spring Data Redis; it is available here for those interested:
https://medium.com/@odedia/production-considerations-for-spring-session-redis-in-cloud-native-environments-bd6aee3b7d34