Redis reaching max clients immediately on startup

We're having an issue where we hit the maximum number of clients immediately after starting Redis. When issuing a MONITOR command, we see thousands of INFO commands being issued from our master server.
Connections seem to baseline around 9000 most of the time. This will occasionally drop to a more normal value for our server for a couple of seconds, then immediately spike back to ~9000 connections.
Any time Redis gets busy during the normal business day, we hit our max connections and our services start failing.
When I run the MONITOR command, this is a sample of what I see:
1551452385.425215 [0 192.168.100.161:54068] "info"
1551452385.425556 [0 192.168.100.161:54066] "info"
1551452385.425891 [0 192.168.100.161:54071] "info"
1551452385.426242 [0 192.168.100.161:54069] "info"
1551452385.426587 [0 192.168.100.161:54070] "info"
1551452385.426933 [0 192.168.100.161:54072] "info"
1551452385.427281 [0 192.168.100.161:54074] "info"
1551452385.427625 [0 192.168.100.161:54075] "info"
1551452385.427972 [0 192.168.100.161:54076] "info"
1551452385.428316 [0 192.168.100.161:54077] "info"
1551452385.428670 [0 192.168.100.161:54078] "info"
1551452385.429011 [0 192.168.100.161:54079] "info"
1551452385.429359 [0 192.168.100.161:54080] "info"
1551452385.429706 [0 192.168.100.161:54081] "info"
1551452385.430051 [0 192.168.100.161:54082] "info"
1551452385.430398 [0 192.168.100.161:54083] "info"
1551452385.430741 [0 192.168.100.161:54084] "info"
1551452385.431086 [0 192.168.100.161:54085] "info"
1551452385.431454 [0 192.168.100.161:54086] "info"
1551452385.431792 [0 192.168.100.161:54087] "info"
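A quick way to see which clients are flooding the server is to tally the MONITOR output per source address and command. This is a hypothetical helper (the sample lines are the ones above; in practice you would pipe `redis-cli monitor` to a file and read that):

```python
import re
from collections import Counter

# Sample MONITOR lines as captured above; in practice read them from a
# file produced by `redis-cli monitor > monitor.log`.
lines = [
    '1551452385.425215 [0 192.168.100.161:54068] "info"',
    '1551452385.425556 [0 192.168.100.161:54066] "info"',
    '1551452385.425891 [0 192.168.100.161:54071] "info"',
]

# MONITOR format: <timestamp> [<db> <ip>:<port>] "<command>" ...
pattern = re.compile(r'\[\d+ ([\d.]+):\d+\] "(\w+)"')

per_ip = Counter()
for line in lines:
    m = pattern.search(line)
    if m:
        ip, cmd = m.groups()
        per_ip[(ip, cmd.lower())] += 1

for (ip, cmd), n in per_ip.most_common():
    print(f"{ip} {cmd}: {n}")
```

Every distinct source port from the same IP is a separate connection, so thousands of short-lived clients each issuing a single INFO usually points at a health check or monitoring agent reconnecting in a loop rather than at application traffic.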
Our redis.conf is below.
daemonize yes
pidfile "/var/run/redis/redis.pid"
port 6379
tcp-backlog 2048
unixsocket "/tmp/redis.sock"
unixsocketperm 777
timeout 90
tcp-keepalive 30
loglevel notice
logfile "/var/log/redis/redis.log"
databases 16
save 900 1
save 300 10
save 60 10000
rdbcompression yes
rdbchecksum yes
dbfilename "dump.rdb"
dir "/var/lib/redis"
slave-serve-stale-data yes
repl-ping-slave-period 5
maxclients 10208
slave-read-only yes
repl-disable-tcp-nodelay no
maxmemory-policy noeviction
appendonly yes
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
lua-time-limit 15000
slowlog-log-slower-than 10000
slowlog-max-len 1024
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-entries 512
list-max-ziplist-value 64
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes
slave-priority 1
slaveof 192.168.100.161 6379
Our INFO output is below.
# Server
redis_version:3.0.5
redis_git_sha1:00000000
redis_git_dirty:0
redis_build_id:cfc7e460e931db7b
redis_mode:standalone
os:Linux 2.6.32-573.8.1.el6.x86_64 x86_64
arch_bits:64
multiplexing_api:epoll
gcc_version:4.4.7
process_id:14289
run_id:e809f42198d0a568cc3394cee322a20c069ed682
tcp_port:6379
uptime_in_seconds:35562
uptime_in_days:0
hz:10
lru_clock:7947917
config_file:/etc/redis.conf
# Clients
connected_clients:9399
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:0
# Memory
used_memory:5357045872
used_memory_human:4.99G
used_memory_rss:5606625280
used_memory_peak:5664138480
used_memory_peak_human:5.28G
used_memory_lua:36864
mem_fragmentation_ratio:1.05
mem_allocator:jemalloc-3.6.0
# Persistence
loading:0
rdb_changes_since_last_save:42
rdb_bgsave_in_progress:0
rdb_last_save_time:1551451644
rdb_last_bgsave_status:ok
rdb_last_bgsave_time_sec:23
rdb_current_bgsave_time_sec:-1
aof_enabled:1
aof_rewrite_in_progress:0
aof_rewrite_scheduled:0
aof_last_rewrite_time_sec:22
aof_current_rewrite_time_sec:-1
aof_last_bgrewrite_status:ok
aof_last_write_status:ok
aof_current_size:4453038609
aof_base_size:4448482140
aof_pending_rewrite:0
aof_buffer_length:0
aof_rewrite_buffer_length:0
aof_pending_bio_fsync:0
aof_delayed_fsync:1
# Stats
total_connections_received:3677782
total_commands_processed:4176358
instantaneous_ops_per_sec:12
total_net_input_bytes:6261124496
total_net_output_bytes:11824027791
instantaneous_input_kbps:1.50
instantaneous_output_kbps:6.87
rejected_connections:3662459
sync_full:2
sync_partial_ok:0
sync_partial_err:0
expired_keys:0
evicted_keys:0
keyspace_hits:13406
keyspace_misses:10
pubsub_channels:1
pubsub_patterns:0
latest_fork_usec:104081
migrate_cached_sockets:0
# Replication
role:slave
master_host:192.168.100.161
master_port:6379
master_link_status:up
master_last_io_seconds_ago:0
master_sync_in_progress:0
slave_repl_offset:26797222
slave_priority:1
slave_read_only:1
connected_slaves:0
master_repl_offset:0
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:7529949
repl_backlog_histlen:10344
# CPU
used_cpu_sys:326.54
used_cpu_user:1835.05
used_cpu_sys_children:303.96
used_cpu_user_children:2131.36
# Cluster
cluster_enabled:0
# Keyspace
db0:keys=4182233,expires=1822,avg_ttl=1565571347
db4:keys=1,expires=0,avg_ttl=0
db9:keys=9957,expires=0,avg_ttl=0
db15:keys=386985,expires=0,avg_ttl=0

Related

FATAL: Data file was created with a Redis server configured to handle more than 1 databases. Exiting

I have one Redis instance and my redis.conf file is:
# masterauth [password]
# requirepass [password]
bind 0.0.0.0
protected-mode no
# port 6379
tcp-backlog 511
timeout 0
tcp-keepalive 300
pidfile /var/run/redis_6380.pid
loglevel verbose
#databases 1
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
dir /data
appendonly no
appendfilename "appendonly.aof"
# appendfsync always
appendfsync everysec
# appendfsync no
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
aof-use-rdb-preamble no
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes
maxmemory 1000mb
maxmemory-policy volatile-ttl
port 6379
databases 1
requirepass [password]
masterauth [password]
but my Redis cannot start!
Logs:
22:C 19 Aug 2021 07:39:51.385 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
22:C 19 Aug 2021 07:39:51.385 # Redis version=5.0.7, bits=64, commit=00000000, modified=0, pid=22, just started
22:C 19 Aug 2021 07:39:51.385 # Configuration loaded
22:M 19 Aug 2021 07:39:51.387 * Running mode=standalone, port=6379.
22:M 19 Aug 2021 07:39:51.387 # Server initialized
22:M 19 Aug 2021 07:39:51.468 # FATAL: Data file was created with a Redis server configured to handle more than 1 databases. Exiting
The error is:
22:M 19 Aug 2021 07:39:51.468 # FATAL: Data file was created with a Redis server configured to handle more than 1 databases. Exiting
What am I doing wrong?
I solved the problem!
I had deployed in k8s using a PVC. I don't know why, but after removing the PVC and binding a new one, the error was fixed.
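For context, the error means the dump.rdb found on the volume contains keys in a database index above 0 (the previous server ran with more than one database; Redis defaults to 16), so it cannot be loaded by a server configured with `databases 1`. Replacing the PVC worked because it discarded that old dump. Keeping the old data would instead require matching the setting, e.g. (sketch, assuming the previous server used the default count):

```
# redis.conf — match the databases count the RDB file was created with
databases 16
```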

Redis Sentinel node cannot sync after failover

We have set up Redis with Sentinel high availability using 3 nodes. Suppose the first node is master: when we reboot the first node, failover happens and the second node becomes master; up to this point everything is OK. But when the first node comes back, it cannot sync with the new master, and we saw that no "masterauth" is set in its config.
Here is the error log and the config generated by CONFIG REWRITE:
1182:S 29 May 2021 13:49:42.713 * Reconnecting to MASTER 192.168.1.2:6379 after failure
1182:S 29 May 2021 13:49:42.716 * MASTER <-> REPLICA sync started
1182:S 29 May 2021 13:49:42.716 * Non blocking connect for SYNC fired the event.
1182:S 29 May 2021 13:49:42.717 * Master replied to PING, replication can continue...
1182:S 29 May 2021 13:49:42.717 * (Non critical) Master does not understand REPLCONF listening-port: -NOAUTH Authentication required.
1182:S 29 May 2021 13:49:42.717 * (Non critical) Master does not understand REPLCONF capa: -NOAUTH Authentication required.
1182:S 29 May 2021 13:49:42.717 * Partial resynchronization not possible (no cached master)
1182:S 29 May 2021 13:49:42.718 # Unexpected reply to PSYNC from master: -NOAUTH Authentication required.
1182:S 29 May 2021 13:49:42.718 * Retrying with SYNC...
# Generated by CONFIG REWRITE
save 3600 1
save 300 100
save 60 10000
user default on #eb5fbb922a75775721db681c49600c069cf686765eeebaa6e18fad195812140d ~* &* +@all
replicaof 192.168.1.2 6379
What is the problem?
Config sample:
bind 127.0.0.1 -::1 192.168.1.3
protected-mode yes
port 6379
tcp-backlog 511
timeout 0
tcp-keepalive 300
daemonize yes
supervised systemd
pidfile "/var/run/redis_6379.pid"
loglevel notice
logfile ""
databases 16
always-show-logo no
set-proc-title yes
proc-title-template "{title} {listen-addr} {server-mode}"
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename "dump.rdb"
rdb-del-sync-files no
dir "/"
replicaof 192.168.1.2 6379
masterauth "redis"
replica-serve-stale-data yes
replica-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-diskless-load disabled
repl-disable-tcp-nodelay no
replica-priority 100
acllog-max-len 128
requirepass "redis"
lazyfree-lazy-eviction no
lazyfree-lazy-expire no
lazyfree-lazy-server-del no
replica-lazy-flush no
lazyfree-lazy-user-del no
lazyfree-lazy-user-flush no
oom-score-adj no
oom-score-adj-values 0 200 800
disable-thp yes
appendonly no
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
aof-use-rdb-preamble yes
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
stream-node-max-bytes 4kb
stream-node-max-entries 100
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit replica 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
dynamic-hz yes
aof-rewrite-incremental-fsync yes
rdb-save-incremental-fsync yes
jemalloc-bg-thread yes
For those who may run into the same problem: the problem was a Redis misconfiguration. After the third deployment we set the parameters carefully, and no problem was found.
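For anyone debugging the same symptom: the -NOAUTH replies during PSYNC, together with the CONFIG REWRITE output above missing masterauth, suggest the rewritten config lost the credential, so the returning node could not authenticate against the new master. Making sure the password is set persistently on every node avoids this; a sketch (the master name `mymaster` is a placeholder):

```
# In every node's redis.conf
requirepass "redis"
masterauth "redis"

# In every sentinel.conf, so Sentinel itself can authenticate
# against the monitored instances
sentinel auth-pass mymaster redis
```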

Redis cluster TPS too low, only 8

This is the benchmark result:
C:\Users\LG520\Desktop> redisbench -cluster=true -a 192.168.1.61:6380,192.168.1.61:6381,192.168.1.61:6382 -c 10 -n 100 -d 1000
2020/12/22 14:43:50 Go...
2020/12/22 14:43:50 # BENCHMARK CLUSTER (192.168.1.61:6380,192.168.1.61:6381,192.168.1.61:6382, db:0)
2020/12/22 14:43:50 * Clients Number: 10, Testing Times: 100, Data Size(B): 1000
2020/12/22 14:43:50 * Total Times: 1000, Total Size(B): 1000000
2020/12/22 14:46:13 # BENCHMARK DONE
2020/12/22 14:46:13 * TIMES: 1000, DUR(s): 143.547, TPS(Hz): 6
I built a Redis cluster, but the redisbench result is far too low.
This is the cluster info:
[root@SZFT-LINUX chen]# ./redis-6.0.6/src/redis-cli -c -p 6380
127.0.0.1:6380> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:2616
cluster_stats_messages_pong_sent:3260
cluster_stats_messages_sent:5876
cluster_stats_messages_ping_received:3255
cluster_stats_messages_pong_received:2616
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:5876
127.0.0.1:6380>
127.0.0.1:6380> cluster nodes
c12b3dbe5dbfe23a8bf0c180cbcdd6aaec98c4aa 192.168.1.61:6382@16382 master - 0 1608621050071 3 connected 10923-16383
3adf356189ddc44547b662b4f5f05f85f2cf016b 192.168.1.61:6385@16385 slave 8af6ca7a04368dd2cd7f40b76f3ac43fc0741812 0 1608621048057 2 connected
4a92459e43eff69aa6a0f603e13310b1a679b98d 192.168.1.61:6380@16380 myself,master - 0 1608621049000 1 connected 0-5460
72c20f23d93d87f75d78df4fa19e7cfa7a6f392e 192.168.1.61:6383@16383 slave c12b3dbe5dbfe23a8bf0c180cbcdd6aaec98c4aa 0 1608621048000 3 connected
fd16d8cd8226d3e6ee8854f642f82159c97eaa48 192.168.1.61:6384@16384 slave 4a92459e43eff69aa6a0f603e13310b1a679b98d 0 1608621047049 1 connected
8af6ca7a04368dd2cd7f40b76f3ac43fc0741812 192.168.1.61:6381@16381 master - 0 1608621049060 2 connected 5461-10922
127.0.0.1:6380>
Redis version: 6.0.6
I first built it in Docker (I thought the low TPS was due to Docker); now I have built it on CentOS 7 and get the same result.
This is one of the redis.conf files (6 in total):
port 6383
#dbfilename dump.rdb
#save 300 10
save ""
appendonly yes
appendfilename appendonly.aof
# appendfsync always
appendfsync everysec
# appendfsync no
dir /home/chen/redis-hd/node6383/data
maxmemory 2G
logfile /home/chen/redis-hd/node6383/data/redis.log
protected-mode no
maxmemory-policy allkeys-lru
# bind 127.0.0.1
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 15000
cluster-slave-validity-factor 10
cluster-migration-barrier 1
cluster-require-full-coverage yes
cluster-announce-ip 192.168.1.61
no-appendfsync-on-rewrite yes
I tested a single-node Redis, and the TPS was 2000.
Why is the Redis cluster's TPS lower than a single node's?
Any help would be greatly appreciated!
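The arithmetic in the benchmark log is worth spelling out, because it points away from the cluster itself: 1000 operations across 10 sequential clients in 143.5 s is roughly 1.4 s per operation, orders of magnitude above any Redis server-side cost, and looks more like per-request connection setup, DNS lookups, or timeouts in the client. A plain-arithmetic sanity check:

```python
# Numbers taken from the benchmark log above
total_ops = 1000        # 10 clients * 100 ops each
duration_s = 143.547    # reported DUR(s)
clients = 10

tps = total_ops / duration_s        # ~7, matching the reported 6
per_op_latency_s = clients / tps    # each client runs its ops sequentially
print(f"TPS ~ {tps:.1f}, per-op latency ~ {per_op_latency_s:.2f}s")

# On a healthy LAN (~1 ms round trip) the same 10 sequential clients
# would sustain roughly clients / rtt operations per second:
expected_tps = clients / 0.001
print(f"Expected TPS at 1 ms RTT ~ {expected_tps:.0f}")
```

If a single node reaches 2000 TPS from the same client machine, the gap is likely in how the cluster client handles connections or MOVED redirects, not in Redis itself.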

Redis keys being deleted automatically

I am using Redis from Amazon ElastiCache. When I create keys, they get deleted automatically at random intervals ranging from 1 to 40 seconds:
**************:6379> set testkey 1
OK
**************:6379> get testkey
"1"
**************:6379> get testkey
"1"
**************:6379> get testkey
"1"
**************:6379> get testkey
(nil)
Even if I set an expire, it is still not honoring that time:
**************:6379> set testkey 1
OK
**************:6379> expire testkey 1000
(integer) 1
**************:6379> ttl testkey
(integer) 996
**************:6379> ttl testkey
(integer) 994
**************:6379> ttl testkey
(integer) -2
**************:6379> get testkey
(nil)
I tried searching through articles but could not find a solid solution. Please help me or point me in the right direction.
My INFO ALL output:
# Server
redis_version:4.0.10
redis_git_sha1:0
redis_git_dirty:0
redis_build_id:0
redis_mode:standalone
os:Amazon ElastiCache
arch_bits:64
multiplexing_api:epoll
atomicvar_api:atomic-builtin
gcc_version:0.0.0
process_id:1
run_id:9b47409883d74bd6226f6da83049f0299306942f
tcp_port:6379
uptime_in_seconds:1532242
uptime_in_days:17
hz:10
lru_clock:8988158
executable:-
config_file:-
# Clients
connected_clients:1584
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:0
# Memory
used_memory:41694472
used_memory_human:39.76M
used_memory_rss:45117440
used_memory_rss_human:43.03M
used_memory_peak:46522760
used_memory_peak_human:44.37M
used_memory_peak_perc:89.62%
used_memory_overhead:33041108
used_memory_startup:3662144
used_memory_dataset:8653364
used_memory_dataset_perc:22.75%
used_memory_lua:37888
used_memory_lua_human:37.00K
maxmemory:436469760
maxmemory_human:416.25M
maxmemory_policy:volatile-lru
mem_fragmentation_ratio:1.08
mem_allocator:jemalloc-4.0.3
active_defrag_running:0
lazyfree_pending_objects:0
# Persistence
loading:0
rdb_changes_since_last_save:54915489
rdb_bgsave_in_progress:0
rdb_last_save_time:1534182572
rdb_last_bgsave_status:ok
rdb_last_bgsave_time_sec:-1
rdb_current_bgsave_time_sec:-1
rdb_last_cow_size:0
aof_enabled:0
aof_rewrite_in_progress:0
aof_rewrite_scheduled:0
aof_last_rewrite_time_sec:-1
aof_current_rewrite_time_sec:-1
aof_last_bgrewrite_status:ok
aof_last_write_status:ok
aof_last_cow_size:0
# Stats
total_connections_received:6594931
total_commands_processed:311024303
instantaneous_ops_per_sec:345
total_net_input_bytes:47103888444
total_net_output_bytes:1706056764081
instantaneous_input_kbps:20.91
instantaneous_output_kbps:2093.84
rejected_connections:0
sync_full:0
sync_partial_ok:0
sync_partial_err:0
expired_keys:2573
expired_stale_perc:0.00
expired_time_cap_reached_count:0
evicted_keys:0
keyspace_hits:23866292
keyspace_misses:234233574
pubsub_channels:0
pubsub_patterns:0
latest_fork_usec:0
migrate_cached_sockets:0
active_defrag_hits:0
active_defrag_misses:0
active_defrag_key_hits:0
active_defrag_key_misses:0
# Replication
role:master
connected_slaves:0
master_replid:ab5f0fbbecf06195be44983dbde289e2d0725335
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:0
second_repl_offset:-1
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0
# CPU
used_cpu_sys:8175.90
used_cpu_user:5509.23
used_cpu_sys_children:0.00
used_cpu_user_children:0.00
# Commandstats
cmdstat_ping:calls=434117,usec=366264,usec_per_call=0.84
cmdstat_set:calls=6641,usec=23175,usec_per_call=3.49
cmdstat_config:calls=3,usec=55,usec_per_call=18.33
cmdstat_del:calls=20684265,usec=38010326,usec_per_call=1.84
cmdstat_keys:calls=1,usec=34,usec_per_call=34.00
cmdstat_exists:calls=458,usec=899,usec_per_call=1.96
cmdstat_expire:calls=4229654,usec=9412184,usec_per_call=2.23
cmdstat_flushdb:calls=27478,usec=14170960,usec_per_call=515.72
cmdstat_get:calls=248088801,usec=1086400958,usec_per_call=4.38
cmdstat_setex:calls=20257389,usec=63289845,usec_per_call=3.12
cmdstat_ttl:calls=2202549,usec=3262291,usec_per_call=1.48
cmdstat_getset:calls=7808523,usec=25766044,usec_per_call=3.30
cmdstat_select:calls=6594457,usec=6533380,usec_per_call=0.99
cmdstat_info:calls=689967,usec=219565932,usec_per_call=318.23
# SSL
ssl_enabled:no
ssl_connections_to_previous_certificate:0
ssl_connections_to_current_certificate:0
ssl_current_certificate_not_before_date:(null)
ssl_current_certificate_not_after_date:(null)
ssl_current_certificate_serial:0
# Cluster
cluster_enabled:0
# Keyspace
db0:keys=4604,expires=4604,avg_ttl=172095914
We are using the Laravel framework, and for some reason we were running artisan cache:clear every minute, as pointed out by @himanshu gupta.
I removed the cron job and everything is back to normal.
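In hindsight, the Commandstats section above already pointed at the culprit: cmdstat_flushdb shows 27,478 calls against a keyspace of only ~4,600 keys. A small sketch that scans INFO commandstats output for destructive commands (the excerpt string reuses the values above):

```python
import re

# Excerpt from the INFO Commandstats output above
commandstats = """\
cmdstat_set:calls=6641,usec=23175,usec_per_call=3.49
cmdstat_del:calls=20684265,usec=38010326,usec_per_call=1.84
cmdstat_flushdb:calls=27478,usec=14170960,usec_per_call=515.72
cmdstat_get:calls=248088801,usec=1086400958,usec_per_call=4.38
"""

# Commands that remove data wholesale or per key
DESTRUCTIVE = {"flushdb", "flushall", "del", "unlink"}

calls = {}
for m in re.finditer(r"cmdstat_(\w+):calls=(\d+)", commandstats):
    calls[m.group(1)] = int(m.group(2))

suspicious = {cmd: n for cmd, n in calls.items() if cmd in DESTRUCTIVE and n > 0}
print(suspicious)
```

Anything that wipes whole databases (FLUSHDB/FLUSHALL) on a shared ElastiCache endpoint looks exactly like keys "randomly disappearing" to every other client of that instance.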

Redis used_memory is larger than used_memory_rss

The Redis master has about 90 keys, and the longest key is about 46 bytes, yet the master shows about 3 GB of memory usage. Here is the master's INFO output:
# Server
redis_version:3.2.8
redis_git_sha1:00000000
redis_git_dirty:0
redis_build_id:b45e9949f92f30de
redis_mode:standalone
os:Linux 3.10.0-327.36.2.el7.ppc64 ppc64
arch_bits:64
multiplexing_api:epoll
gcc_version:4.8.5
process_id:150358
run_id:acfc6247d94cf0c62a98694adf35e3ff9f1c0d9d
tcp_port:6379
uptime_in_seconds:3539
uptime_in_days:0
hz:10
lru_clock:14518804
executable:/home/redis/redis-3.2.8/config/redis-server
config_file:/home/redis/redis-3.2.8/config/server_6379.conf
# Clients
connected_clients:37
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:0
# Memory
used_memory:3223126336
used_memory_human:3.00G
used_memory_rss:19988480
used_memory_rss_human:19.06M
used_memory_peak:3223657672
used_memory_peak_human:3.00G
total_system_memory:1071411167232
total_system_memory_human:997.83G
used_memory_lua:37888
used_memory_lua_human:37.00K
maxmemory:0
maxmemory_human:0B
maxmemory_policy:noeviction
mem_fragmentation_ratio:0.01
mem_allocator:jemalloc-4.0.3
# Persistence
loading:0
rdb_changes_since_last_save:143046
rdb_bgsave_in_progress:0
rdb_last_save_time:1776122944
rdb_last_bgsave_status:ok
rdb_last_bgsave_time_sec:-1
rdb_current_bgsave_time_sec:-1
aof_enabled:1
aof_rewrite_in_progress:0
aof_rewrite_scheduled:0
aof_last_rewrite_time_sec:-1
aof_current_rewrite_time_sec:-1
aof_last_bgrewrite_status:ok
aof_last_write_status:ok
aof_current_size:9266469
aof_base_size:0
aof_pending_rewrite:0
aof_buffer_length:0
aof_rewrite_buffer_length:0
aof_pending_bio_fsync:0
aof_delayed_fsync:0
# Stats
total_connections_received:78
total_commands_processed:309390
instantaneous_ops_per_sec:126
total_net_input_bytes:21927610
total_net_output_bytes:62716490
instantaneous_input_kbps:8.79
instantaneous_output_kbps:12.20
rejected_connections:0
sync_full:2
sync_partial_ok:0
sync_partial_err:0
expired_keys:0
evicted_keys:0
keyspace_hits:47603
keyspace_misses:47731
pubsub_channels:1
pubsub_patterns:0
latest_fork_usec:206
migrate_cached_sockets:0
# Replication
role:master
connected_slaves:2
slave0:ip=10.124.152.8,port=6379,state=online,offset=9995541,lag=1
slave1:ip=10.124.152.7,port=6379,state=online,offset=9997441,lag=1
master_repl_offset:9998557
repl_backlog_active:1
repl_backlog_size:3221225472
repl_backlog_first_byte_offset:2
repl_backlog_histlen:9998556
# CPU
used_cpu_sys:7.61
used_cpu_user:3.37
used_cpu_sys_children:0.00
used_cpu_user_children:0.00
# Cluster
cluster_enabled:0
# Keyspace
db0:keys=90,expires=0,avg_ttl=0
And the slave's INFO output:
# Memory
used_memory:761448
used_memory_human:743.60K
used_memory_rss:7536640
used_memory_rss_human:7.19M
used_memory_peak:823488
used_memory_peak_human:804.19K
total_system_memory:1071411167232
total_system_memory_human:997.83G
used_memory_lua:37888
used_memory_lua_human:37.00K
maxmemory:0
maxmemory_human:0B
maxmemory_policy:noeviction
mem_fragmentation_ratio:9.90
mem_allocator:jemalloc-4.0.3
127.0.0.1:6379> info
# Server
redis_version:3.2.8
redis_git_sha1:00000000
redis_git_dirty:0
redis_build_id:17a905ed68c0b83
redis_mode:standalone
os:Linux 3.10.0-327.36.2.el7.ppc64 ppc64
arch_bits:64
multiplexing_api:epoll
gcc_version:4.8.5
process_id:151704
run_id:2df76a29acc2910fff7e1ea77203caf0758b23dd
tcp_port:6379
uptime_in_seconds:3673
uptime_in_days:0
hz:10
lru_clock:14518975
executable:/home/redis/redis-3.2.8/config/redis-server
config_file:/home/redis/redis-3.2.8/config/server_6379.conf
# Clients
connected_clients:8
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:0
# Memory
used_memory:803360
used_memory_human:784.53K
used_memory_rss:7536640
used_memory_rss_human:7.19M
used_memory_peak:844336
used_memory_peak_human:824.55K
total_system_memory:1071411167232
total_system_memory_human:997.83G
used_memory_lua:37888
used_memory_lua_human:37.00K
maxmemory:0
maxmemory_human:0B
maxmemory_policy:noeviction
mem_fragmentation_ratio:9.38
mem_allocator:jemalloc-4.0.3
# Persistence
loading:0
rdb_changes_since_last_save:150387
rdb_bgsave_in_progress:0
rdb_last_save_time:1776122982
rdb_last_bgsave_status:ok
rdb_last_bgsave_time_sec:-1
rdb_current_bgsave_time_sec:-1
aof_enabled:1
aof_rewrite_in_progress:0
aof_rewrite_scheduled:0
aof_last_rewrite_time_sec:0
aof_current_rewrite_time_sec:-1
aof_last_bgrewrite_status:ok
aof_last_write_status:ok
aof_current_size:9743688
aof_base_size:0
aof_pending_rewrite:0
aof_buffer_length:0
aof_rewrite_buffer_length:0
aof_pending_bio_fsync:0
aof_delayed_fsync:0
# Stats
total_connections_received:7
total_commands_processed:173861
instantaneous_ops_per_sec:58
total_net_input_bytes:11409472
total_net_output_bytes:13484792
instantaneous_input_kbps:3.73
instantaneous_output_kbps:6.38
rejected_connections:0
sync_full:0
sync_partial_ok:0
sync_partial_err:0
expired_keys:0
evicted_keys:0
keyspace_hits:0
keyspace_misses:0
pubsub_channels:1
pubsub_patterns:0
latest_fork_usec:203
migrate_cached_sockets:0
# Replication
role:slave
master_host:10.124.152.9
master_port:6379
master_link_status:up
master_last_io_seconds_ago:0
master_sync_in_progress:0
slave_repl_offset:10502941
slave_priority:100
slave_read_only:1
connected_slaves:0
master_repl_offset:0
repl_backlog_active:0
repl_backlog_size:3221225472
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0
# CPU
used_cpu_sys:4.19
used_cpu_user:1.72
used_cpu_sys_children:0.00
used_cpu_user_children:0.00
# Cluster
cluster_enabled:0
# Keyspace
db0:keys=90,expires=0,avg_ttl=0
What is the cause of this situation?
From the Redis documentation:
Redis will not always free up (return) memory to the OS when keys are removed. This is not something special about Redis, but it is how most malloc() implementations work. For example if you fill an instance with 5GB worth of data, and then remove the equivalent of 2GB of data, the Resident Set Size (also known as the RSS, which is the number of memory pages consumed by the process) will probably still be around 5GB, even if Redis will claim that the user memory is around 3GB. This happens because the underlying allocator can't easily release the memory. For example often most of the removed keys were allocated in the same pages as the other keys that still exist.
used_memory being smaller than used_memory_rss indicates memory fragmentation.
used_memory being bigger than used_memory_rss means that your physical RAM has run out and part of your Redis data resides in disk swap space.
But your problem may lie somewhere far beyond our imagination. 90 keys of 46 bytes definitely won't produce used_memory_human:3.00G! Stranger still, the master's INFO output says total_system_memory_human:997.83G, which means your server's total physical memory is 997.83G!
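As a quick check of the two rules above, applied to the numbers in this INFO output (plain arithmetic):

```python
# Master values from the INFO output above
used_memory = 3223126336      # ~3.00G, what Redis thinks it has allocated
used_memory_rss = 19988480    # ~19.06M, what the OS actually sees resident

ratio = used_memory_rss / used_memory
print(f"mem_fragmentation_ratio ~ {ratio:.2f}")  # matches the reported 0.01
```

A ratio this far below 1 would normally mean the dataset is swapped out, but 19 MB of RSS cannot hide a genuine 3 GB dataset, so the used_memory accounting itself is the suspect. One concrete candidate visible in the same INFO output: repl_backlog_active:1 with repl_backlog_size:3221225472 (3 GB) — the replication backlog is allocated by Redis and counted in used_memory, and it matches the reported 3.00G almost exactly.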