Redis master has about 90 keys. The longest key is about 46 bytes. But the master shows 3GB of memory usage. Here is the master's INFO output:
# Server
redis_version:3.2.8
redis_git_sha1:00000000
redis_git_dirty:0
redis_build_id:b45e9949f92f30de
redis_mode:standalone
os:Linux 3.10.0-327.36.2.el7.ppc64 ppc64
arch_bits:64
multiplexing_api:epoll
gcc_version:4.8.5
process_id:150358
run_id:acfc6247d94cf0c62a98694adf35e3ff9f1c0d9d
tcp_port:6379
uptime_in_seconds:3539
uptime_in_days:0
hz:10
lru_clock:14518804
executable:/home/redis/redis-3.2.8/config/redis-server
config_file:/home/redis/redis-3.2.8/config/server_6379.conf
# Clients
connected_clients:37
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:0
# Memory
used_memory:3223126336
used_memory_human:3.00G
used_memory_rss:19988480
used_memory_rss_human:19.06M
used_memory_peak:3223657672
used_memory_peak_human:3.00G
total_system_memory:1071411167232
total_system_memory_human:997.83G
used_memory_lua:37888
used_memory_lua_human:37.00K
maxmemory:0
maxmemory_human:0B
maxmemory_policy:noeviction
mem_fragmentation_ratio:0.01
mem_allocator:jemalloc-4.0.3
# Persistence
loading:0
rdb_changes_since_last_save:143046
rdb_bgsave_in_progress:0
rdb_last_save_time:1776122944
rdb_last_bgsave_status:ok
rdb_last_bgsave_time_sec:-1
rdb_current_bgsave_time_sec:-1
aof_enabled:1
aof_rewrite_in_progress:0
aof_rewrite_scheduled:0
aof_last_rewrite_time_sec:-1
aof_current_rewrite_time_sec:-1
aof_last_bgrewrite_status:ok
aof_last_write_status:ok
aof_current_size:9266469
aof_base_size:0
aof_pending_rewrite:0
aof_buffer_length:0
aof_rewrite_buffer_length:0
aof_pending_bio_fsync:0
aof_delayed_fsync:0
# Stats
total_connections_received:78
total_commands_processed:309390
instantaneous_ops_per_sec:126
total_net_input_bytes:21927610
total_net_output_bytes:62716490
instantaneous_input_kbps:8.79
instantaneous_output_kbps:12.20
rejected_connections:0
sync_full:2
sync_partial_ok:0
sync_partial_err:0
expired_keys:0
evicted_keys:0
keyspace_hits:47603
keyspace_misses:47731
pubsub_channels:1
pubsub_patterns:0
latest_fork_usec:206
migrate_cached_sockets:0
# Replication
role:master
connected_slaves:2
slave0:ip=10.124.152.8,port=6379,state=online,offset=9995541,lag=1
slave1:ip=10.124.152.7,port=6379,state=online,offset=9997441,lag=1
master_repl_offset:9998557
repl_backlog_active:1
repl_backlog_size:3221225472
repl_backlog_first_byte_offset:2
repl_backlog_histlen:9998556
# CPU
used_cpu_sys:7.61
used_cpu_user:3.37
used_cpu_sys_children:0.00
used_cpu_user_children:0.00
# Cluster
cluster_enabled:0
# Keyspace
db0:keys=90,expires=0,avg_ttl=0
And here is the slave's INFO output:
# Memory
used_memory:761448
used_memory_human:743.60K
used_memory_rss:7536640
used_memory_rss_human:7.19M
used_memory_peak:823488
used_memory_peak_human:804.19K
total_system_memory:1071411167232
total_system_memory_human:997.83G
used_memory_lua:37888
used_memory_lua_human:37.00K
maxmemory:0
maxmemory_human:0B
maxmemory_policy:noeviction
mem_fragmentation_ratio:9.90
mem_allocator:jemalloc-4.0.3
127.0.0.1:6379> info
# Server
redis_version:3.2.8
redis_git_sha1:00000000
redis_git_dirty:0
redis_build_id:17a905ed68c0b83
redis_mode:standalone
os:Linux 3.10.0-327.36.2.el7.ppc64 ppc64
arch_bits:64
multiplexing_api:epoll
gcc_version:4.8.5
process_id:151704
run_id:2df76a29acc2910fff7e1ea77203caf0758b23dd
tcp_port:6379
uptime_in_seconds:3673
uptime_in_days:0
hz:10
lru_clock:14518975
executable:/home/redis/redis-3.2.8/config/redis-server
config_file:/home/redis/redis-3.2.8/config/server_6379.conf
# Clients
connected_clients:8
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:0
# Memory
used_memory:803360
used_memory_human:784.53K
used_memory_rss:7536640
used_memory_rss_human:7.19M
used_memory_peak:844336
used_memory_peak_human:824.55K
total_system_memory:1071411167232
total_system_memory_human:997.83G
used_memory_lua:37888
used_memory_lua_human:37.00K
maxmemory:0
maxmemory_human:0B
maxmemory_policy:noeviction
mem_fragmentation_ratio:9.38
mem_allocator:jemalloc-4.0.3
# Persistence
loading:0
rdb_changes_since_last_save:150387
rdb_bgsave_in_progress:0
rdb_last_save_time:1776122982
rdb_last_bgsave_status:ok
rdb_last_bgsave_time_sec:-1
rdb_current_bgsave_time_sec:-1
aof_enabled:1
aof_rewrite_in_progress:0
aof_rewrite_scheduled:0
aof_last_rewrite_time_sec:0
aof_current_rewrite_time_sec:-1
aof_last_bgrewrite_status:ok
aof_last_write_status:ok
aof_current_size:9743688
aof_base_size:0
aof_pending_rewrite:0
aof_buffer_length:0
aof_rewrite_buffer_length:0
aof_pending_bio_fsync:0
aof_delayed_fsync:0
# Stats
total_connections_received:7
total_commands_processed:173861
instantaneous_ops_per_sec:58
total_net_input_bytes:11409472
total_net_output_bytes:13484792
instantaneous_input_kbps:3.73
instantaneous_output_kbps:6.38
rejected_connections:0
sync_full:0
sync_partial_ok:0
sync_partial_err:0
expired_keys:0
evicted_keys:0
keyspace_hits:0
keyspace_misses:0
pubsub_channels:1
pubsub_patterns:0
latest_fork_usec:203
migrate_cached_sockets:0
# Replication
role:slave
master_host:10.124.152.9
master_port:6379
master_link_status:up
master_last_io_seconds_ago:0
master_sync_in_progress:0
slave_repl_offset:10502941
slave_priority:100
slave_read_only:1
connected_slaves:0
master_repl_offset:0
repl_backlog_active:0
repl_backlog_size:3221225472
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0
# CPU
used_cpu_sys:4.19
used_cpu_user:1.72
used_cpu_sys_children:0.00
used_cpu_user_children:0.00
# Cluster
cluster_enabled:0
# Keyspace
db0:keys=90,expires=0,avg_ttl=0
What is the cause of this situation?
From the Redis docs:
Redis will not always free up (return) memory to the OS when keys are removed. This is not something special about Redis, but it is how most malloc() implementations work. For example if you fill an instance with 5GB worth of data, and then remove the equivalent of 2GB of data, the Resident Set Size (also known as the RSS, which is the number of memory pages consumed by the process) will probably still be around 5GB, even if Redis will claim that the user memory is around 3GB. This happens because the underlying allocator can't easily release the memory. For example often most of the removed keys were allocated in the same pages as the other keys that still exist.
used_memory being smaller than used_memory_rss indicates memory fragmentation.
used_memory being bigger than used_memory_rss indicates that your physical RAM has run out and part of your Redis data resides in disk swap space.
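Redis derives mem_fragmentation_ratio directly from these two fields: it is used_memory_rss divided by used_memory. A minimal sketch (plain Python, no server required; the sample values are copied from the master's INFO output above):

```python
# Compute the fragmentation ratio from a raw INFO dump.
# Field names are real INFO fields; values copied from the master above.
def parse_info(text):
    """Parse 'key:value' lines of an INFO dump into a dict."""
    fields = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip section headers and blank lines
        key, _, value = line.partition(":")
        fields[key] = value
    return fields

sample = """# Memory
used_memory:3223126336
used_memory_rss:19988480
"""

info = parse_info(sample)
ratio = int(info["used_memory_rss"]) / int(info["used_memory"])
# ratio ~= 0.006: RSS is a tiny fraction of what Redis thinks it uses,
# which INFO rounds up to the mem_fragmentation_ratio:0.01 shown above.
```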
But your problem may lie somewhere far beyond our imagination. 90 keys of up to 46 bytes DEFINITELY won't make used_memory_human:3.00G! What's more strange, the master's INFO output says total_system_memory_human:997.83G, which means your server's total physical memory is 997.83G!
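One figure in the master's INFO above deserves a closer look, though: repl_backlog_size:3221225472 is exactly 3 GiB, and since the replication backlog buffer is allocated by Redis it is counted in used_memory, so an oversized backlog could by itself account for the 3.00G. Plain arithmetic on the posted values:

```python
# Values copied verbatim from the master INFO above.
repl_backlog_size = 3221225472   # repl_backlog_size (and repl_backlog_active:1)
used_memory = 3223126336         # used_memory

assert repl_backlog_size == 3 * 1024**3  # exactly 3 GiB
backlog_share = repl_backlog_size / used_memory  # > 99.9% of used_memory
```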
Checking my SLOWLOG in Redis 4.0.9, I found this in db no. 1:
1) 1) (integer) 5194
2) (integer) 1538107771
3) (integer) 140185
4) 1) "SETEX"
2) "okurl:/en/7055756"
3) "3600"
4) "1"
5) "172.20.100.4:24784"
6) ""
I would like to know what could possibly cause this. INFO output:
# Server
redis_version:4.0.9
redis_git_sha1:00000000
redis_git_dirty:0
redis_build_id:2fd2284b20453690
redis_mode:standalone
os:FreeBSD 11.2-PRERELEASE amd64
arch_bits:64
multiplexing_api:kqueue
atomicvar_api:atomic-builtin
gcc_version:4.2.1
process_id:40220
run_id:ec7f3e8144a681f0efca5e980ccdf39b8a8fdb71
tcp_port:6379
uptime_in_seconds:81007
uptime_in_days:0
hz:10
lru_clock:11384428
executable:/usr/local/bin/redis-server
config_file:/usr/local/etc/redis-sessions.conf
# Clients
connected_clients:429
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:0
# Memory
used_memory:5749433062
used_memory_human:5.35G
used_memory_rss:5746385347
used_memory_rss_human:5.35G
used_memory_peak:30520735162
used_memory_peak_human:28.42G
used_memory_peak_perc:18.84%
used_memory_overhead:1280922380
used_memory_startup:1055278
used_memory_dataset:4468510682
used_memory_dataset_perc:77.74%
total_system_memory:0
total_system_memory_human:0B
used_memory_lua:142448640
used_memory_lua_human:135.85M
maxmemory:0
maxmemory_human:0B
maxmemory_policy:noeviction
mem_fragmentation_ratio:1.00
mem_allocator:libc
active_defrag_running:0
lazyfree_pending_objects:0
# Persistence
loading:0
rdb_changes_since_last_save:28274431
rdb_bgsave_in_progress:0
rdb_last_save_time:1538107772
rdb_last_bgsave_status:ok
rdb_last_bgsave_time_sec:35
rdb_current_bgsave_time_sec:-1
rdb_last_cow_size:0
aof_enabled:0
# Stats
total_connections_received:232817201
total_commands_processed:1640805022
instantaneous_ops_per_sec:22264
total_net_input_bytes:116006836822
total_net_output_bytes:56698168188
instantaneous_input_kbps:1470.51
instantaneous_output_kbps:639.40
rejected_connections:0
sync_full:0
sync_partial_ok:0
sync_partial_err:0
expired_keys:28093034
expired_stale_perc:29.33
expired_time_cap_reached_count:0
evicted_keys:0
keyspace_hits:399489929
keyspace_misses:93415724
pubsub_channels:0
pubsub_patterns:0
latest_fork_usec:645655
migrate_cached_sockets:0
slave_expires_tracked_keys:0
active_defrag_hits:0
active_defrag_misses:0
active_defrag_key_hits:0
active_defrag_key_misses:0
# Keyspace
db0:keys=8517091,expires=8516507,avg_ttl=14897082
db1:keys=4938456,expires=4805686,avg_ttl=1455726323
Maybe it happens just when Redis is writing the dump file to the filesystem, so it slows down? Or could some other blocking command be running? (I am not using KEYS, but are there other commands that could cause this slowdown?)
I am not using KEYS, but are there other commands that could cause this slowdown?
Commands in the slowlog get there if their respective execution time is long - other commands do not affect that (e.g. KEYS or anything else).
Checking my SLOWLOG in Redis 4.0.9, I found this
Checking the SLOWLOG is a good practice that should be done periodically. That said, a single slow command in it is probably an outlier that's due to any number of reasons. I wouldn't worry about it unless it is pathological.
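For reference, the fields of the entry quoted in the question decode as follows (a plain-Python annotation of the same values, no server needed; Redis >= 4.0 appends the client address and client name):

```python
# The SLOWLOG GET entry from the question, with each field annotated.
entry = [
    5194,                  # 1) unique, monotonically increasing entry id
    1538107771,            # 2) unix timestamp when the command ran
    140185,                # 3) execution time in microseconds (excludes I/O)
    ["SETEX", "okurl:/en/7055756", "3600", "1"],  # 4) command and arguments
    "172.20.100.4:24784",  # 5) client address (Redis >= 4.0)
    "",                    # 6) client name, empty here (Redis >= 4.0)
]

entry_id, ts, usec, cmd, addr, client_name = entry
millis = usec / 1000  # ~140 ms for a SETEX: slow, but a one-off outlier
```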
I am using Redis from Amazon ElastiCache. When I create keys, they get deleted automatically at random intervals, ranging from 1 to 40 seconds:
**************:6379> set testkey 1
OK
**************:6379> get testkey
"1"
**************:6379> get testkey
"1"
**************:6379> get testkey
"1"
**************:6379> get testkey
(nil)
Even if I set an expire, it's still not honoring that time:
**************:6379> set testkey 1
OK
**************:6379> expire testkey 1000
(integer) 1
**************:6379> ttl testkey
(integer) 996
**************:6379> ttl testkey
(integer) 994
**************:6379> ttl testkey
(integer) -2
**************:6379> get testkey
(nil)
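Incidentally, the (integer) -2 reply above is TTL's way of reporting that the key no longer exists at all, as opposed to -1, which would mean the key exists without an expire. A tiny reference sketch of the reply semantics:

```python
# What the integer replies of the TTL command mean, per the Redis docs.
def describe_ttl(reply):
    """Interpret the integer reply of TTL for a given key."""
    if reply == -2:
        return "key does not exist"
    if reply == -1:
        return "key exists but has no associated expire"
    return f"key expires in {reply} seconds"
```

So the transcript shows the key itself vanishing, not the expire misbehaving.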
I tried searching through articles but could not find a solid solution. Please help me or point me in the right direction.
My INFO ALL output
# Server
redis_version:4.0.10
redis_git_sha1:0
redis_git_dirty:0
redis_build_id:0
redis_mode:standalone
os:Amazon ElastiCache
arch_bits:64
multiplexing_api:epoll
atomicvar_api:atomic-builtin
gcc_version:0.0.0
process_id:1
run_id:9b47409883d74bd6226f6da83049f0299306942f
tcp_port:6379
uptime_in_seconds:1532242
uptime_in_days:17
hz:10
lru_clock:8988158
executable:-
config_file:-
# Clients
connected_clients:1584
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:0
# Memory
used_memory:41694472
used_memory_human:39.76M
used_memory_rss:45117440
used_memory_rss_human:43.03M
used_memory_peak:46522760
used_memory_peak_human:44.37M
used_memory_peak_perc:89.62%
used_memory_overhead:33041108
used_memory_startup:3662144
used_memory_dataset:8653364
used_memory_dataset_perc:22.75%
used_memory_lua:37888
used_memory_lua_human:37.00K
maxmemory:436469760
maxmemory_human:416.25M
maxmemory_policy:volatile-lru
mem_fragmentation_ratio:1.08
mem_allocator:jemalloc-4.0.3
active_defrag_running:0
lazyfree_pending_objects:0
# Persistence
loading:0
rdb_changes_since_last_save:54915489
rdb_bgsave_in_progress:0
rdb_last_save_time:1534182572
rdb_last_bgsave_status:ok
rdb_last_bgsave_time_sec:-1
rdb_current_bgsave_time_sec:-1
rdb_last_cow_size:0
aof_enabled:0
aof_rewrite_in_progress:0
aof_rewrite_scheduled:0
aof_last_rewrite_time_sec:-1
aof_current_rewrite_time_sec:-1
aof_last_bgrewrite_status:ok
aof_last_write_status:ok
aof_last_cow_size:0
# Stats
total_connections_received:6594931
total_commands_processed:311024303
instantaneous_ops_per_sec:345
total_net_input_bytes:47103888444
total_net_output_bytes:1706056764081
instantaneous_input_kbps:20.91
instantaneous_output_kbps:2093.84
rejected_connections:0
sync_full:0
sync_partial_ok:0
sync_partial_err:0
expired_keys:2573
expired_stale_perc:0.00
expired_time_cap_reached_count:0
evicted_keys:0
keyspace_hits:23866292
keyspace_misses:234233574
pubsub_channels:0
pubsub_patterns:0
latest_fork_usec:0
migrate_cached_sockets:0
active_defrag_hits:0
active_defrag_misses:0
active_defrag_key_hits:0
active_defrag_key_misses:0
# Replication
role:master
connected_slaves:0
master_replid:ab5f0fbbecf06195be44983dbde289e2d0725335
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:0
second_repl_offset:-1
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0
# CPU
used_cpu_sys:8175.90
used_cpu_user:5509.23
used_cpu_sys_children:0.00
used_cpu_user_children:0.00
# Commandstats
cmdstat_ping:calls=434117,usec=366264,usec_per_call=0.84
cmdstat_set:calls=6641,usec=23175,usec_per_call=3.49
cmdstat_config:calls=3,usec=55,usec_per_call=18.33
cmdstat_del:calls=20684265,usec=38010326,usec_per_call=1.84
cmdstat_keys:calls=1,usec=34,usec_per_call=34.00
cmdstat_exists:calls=458,usec=899,usec_per_call=1.96
cmdstat_expire:calls=4229654,usec=9412184,usec_per_call=2.23
cmdstat_flushdb:calls=27478,usec=14170960,usec_per_call=515.72
cmdstat_get:calls=248088801,usec=1086400958,usec_per_call=4.38
cmdstat_setex:calls=20257389,usec=63289845,usec_per_call=3.12
cmdstat_ttl:calls=2202549,usec=3262291,usec_per_call=1.48
cmdstat_getset:calls=7808523,usec=25766044,usec_per_call=3.30
cmdstat_select:calls=6594457,usec=6533380,usec_per_call=0.99
cmdstat_info:calls=689967,usec=219565932,usec_per_call=318.23
# SSL
ssl_enabled:no
ssl_connections_to_previous_certificate:0
ssl_connections_to_current_certificate:0
ssl_current_certificate_not_before_date:(null)
ssl_current_certificate_not_after_date:(null)
ssl_current_certificate_serial:0
# Cluster
cluster_enabled:0
# Keyspace
db0:keys=4604,expires=4604,avg_ttl=172095914
We are using the Laravel framework, and for some reason we were running artisan cache:clear every minute, as pointed out by @himanshu gupta.
I removed the cron job and everything is back to normal.
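The Commandstats section of the INFO output above actually records the culprit: cmdstat_flushdb:calls=27478, i.e. the database was flushed tens of thousands of times. A small sketch of spotting that (plain Python over the INFO text, no server needed):

```python
# Scan Commandstats lines for commands that wipe data wholesale.
# The cmdstat_ line below is copied verbatim from the INFO output above.
def parse_cmdstat(line):
    """Parse one 'cmdstat_<cmd>:calls=..,usec=..,usec_per_call=..' line."""
    name, _, rest = line.partition(":")
    cmd = name[len("cmdstat_"):]
    stats = dict(kv.split("=") for kv in rest.split(","))
    return cmd, int(stats["calls"])

WIPE_COMMANDS = {"flushdb", "flushall"}

cmd, calls = parse_cmdstat(
    "cmdstat_flushdb:calls=27478,usec=14170960,usec_per_call=515.72"
)
suspicious = cmd in WIPE_COMMANDS and calls > 0  # 27478 FLUSHDB calls!
```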
I have an issue with Redis which appears only in production and I am not able to reproduce it locally.
I have 11 servers that send data to Redis and each one increments members of an hash map (each server has its own hash map).
At random times the hash maps disappear and I see all the counts starting from 0 again.
Note that:
keys are not expiring: neither expiration nor TTL is set on any key;
keys are not evicted: maxmemory is not set and maxmemory-policy is no-eviction anyway;
Redis never has memory problems because it's on a server with 15GB of free RAM and it never crashes anyway;
INFO reports 13 connected clients which makes sense: 11 servers + 1 monitoring application that I have locally + the connection used to get the output of the INFO command.
I don't know where to look anymore.
Here is the output of the INFO command:
# Server
redis_version:3.2.6
redis_git_sha1:00000000
redis_git_dirty:0
redis_build_id:5a93b34a97c2cde8
redis_mode:standalone
os:Linux 4.9.0-6-amd64 x86_64
arch_bits:64
multiplexing_api:epoll
gcc_version:6.3.0
process_id:1394
run_id:ff6063b446dab8248fe9db118d2993a9de4252c8
tcp_port:6379
uptime_in_seconds:186923
uptime_in_days:2
hz:10
lru_clock:2982223
executable:/usr/bin/redis-server
config_file:/etc/redis/redis.conf
# Clients
connected_clients:13
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:0
# Memory
used_memory:1067336
used_memory_human:1.02M
used_memory_rss:3784704
used_memory_rss_human:3.61M
used_memory_peak:1471928
used_memory_peak_human:1.40M
total_system_memory:27401003008
total_system_memory_human:25.52G
used_memory_lua:37888
used_memory_lua_human:37.00K
maxmemory:0
maxmemory_human:0B
maxmemory_policy:noeviction
mem_fragmentation_ratio:3.55
mem_allocator:jemalloc-3.6.0
# Persistence
loading:0
rdb_changes_since_last_save:13854793
rdb_bgsave_in_progress:0
rdb_last_save_time:1529530575
rdb_last_bgsave_status:err
rdb_last_bgsave_time_sec:0
rdb_current_bgsave_time_sec:-1
aof_enabled:0
aof_rewrite_in_progress:0
aof_rewrite_scheduled:0
aof_last_rewrite_time_sec:-1
aof_current_rewrite_time_sec:-1
aof_last_bgrewrite_status:ok
aof_last_write_status:ok
# Stats
total_connections_received:185
total_commands_processed:19637940
instantaneous_ops_per_sec:121
total_net_input_bytes:752885632
total_net_output_bytes:1197081334
instantaneous_input_kbps:4.61
instantaneous_output_kbps:9.27
rejected_connections:0
sync_full:0
sync_partial_ok:0
sync_partial_err:0
expired_keys:0
evicted_keys:0
keyspace_hits:1333722
keyspace_misses:120814
pubsub_channels:0
pubsub_patterns:0
latest_fork_usec:185
migrate_cached_sockets:0
# Replication
role:master
connected_slaves:0
master_repl_offset:0
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0
# CPU
used_cpu_sys:262.40
used_cpu_user:207.39
used_cpu_sys_children:0.00
used_cpu_user_children:0.00
# Cluster
cluster_enabled:0
# Keyspace
db0:keys=14,expires=0,avg_ttl=0
Very likely the Redis server is restarting and you are losing data because Redis isn't able to save to disk.
The RDB last save status is an error, so the dump file was never created. Also, AOF is disabled. If Redis restarts, it will start with all data wiped out.
Check your logs - very likely Redis doesn't have permission to write to disk. I'm also sure you will see entries suggesting Redis is restarting.
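The diagnosis can be read straight from the INFO fields. A minimal "would a restart lose data?" check (plain Python; the field names are real INFO fields, the values are copied from the output above):

```python
# Values copied from the INFO output above: RDB saves failing, AOF off.
info = {
    "rdb_last_bgsave_status": "err",
    "rdb_changes_since_last_save": 13854793,  # writes never persisted
    "aof_enabled": 0,
}

def restart_loses_data(info):
    """True when RDB background saves are failing and AOF is disabled."""
    rdb_failing = info["rdb_last_bgsave_status"] != "ok"
    aof_off = info["aof_enabled"] == 0
    return rdb_failing and aof_off
```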
Currently, my 8GB RAM server is using up 5.33GB for Redis (Other parts of the server take up about 1.6GB, so even immediately after rebooting the server, I'm already at ~7GB RAM [88%]). Redis's memory usage continues to grow until it is eventually killed by Ubuntu's OOM, causing a flurry of errors for my node application.
I've attached the Redis INFO output at the bottom of this post. I had originally thought there might be too many keys in Redis, but the Redis FAQ (http://redis.io/topics/faq) says 1 million keys take roughly 100MB. We have about 2 million (~200MB - nowhere near 5GB), so this couldn't possibly be the issue.
My questions are:
- Where is redis consuming all of this memory? The keyspace doesn't take up much at all.
- What can I do to stop it from continuously consuming more memory?
Thanks!
# Server
redis_version:2.8.6
redis_git_sha1:00000000
redis_git_dirty:0
redis_build_id:f73a208b84b18824
redis_mode:standalone
os:Linux 3.2.0-55-virtual x86_64
arch_bits:64
multiplexing_api:epoll
gcc_version:4.6.3
process_id:1286
run_id:6d3daee5341a549dfaca63706c40c44086198317
tcp_port:6379
uptime_in_seconds:1390
uptime_in_days:0
hz:10
lru_clock:771223
config_file:/etc/redis/redis.conf
# Clients
connected_clients:198
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:72
# Memory
used_memory:5720230408
used_memory_human:5.33G
used_memory_rss:5826732032
used_memory_peak:5732485800
used_memory_peak_human:5.34G
used_memory_lua:33792
mem_fragmentation_ratio:1.02
mem_allocator:jemalloc-3.5.0
# Persistence
loading:0
rdb_changes_since_last_save:94
rdb_bgsave_in_progress:0
rdb_last_save_time:1412804004
rdb_last_bgsave_status:ok
rdb_last_bgsave_time_sec:40
rdb_current_bgsave_time_sec:-1
aof_enabled:0
aof_rewrite_in_progress:0
aof_rewrite_scheduled:0
aof_last_rewrite_time_sec:-1
aof_current_rewrite_time_sec:-1
aof_last_bgrewrite_status:ok
aof_last_write_status:ok
# Stats
total_connections_received:382
total_commands_processed:36936
instantaneous_ops_per_sec:0
rejected_connections:0
sync_full:0
sync_partial_ok:0
sync_partial_err:0
expired_keys:0
evicted_keys:0
keyspace_hits:2421
keyspace_misses:1
pubsub_channels:1
pubsub_patterns:9
latest_fork_usec:1361869
# Replication
role:master
connected_slaves:0
master_repl_offset:0
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0
# CPU
used_cpu_sys:15.95
used_cpu_user:101.34
used_cpu_sys_children:12.55
used_cpu_user_children:146.17
# Keyspace
db0:keys=2082234,expires=1162351,avg_ttl=306635722644
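A back-of-the-envelope check on the "1 million keys ~ 100MB" rule of thumb, using the numbers from this INFO output (plain arithmetic, no server needed):

```python
# Rough average footprint per key, from the INFO output above.
# The ~100 bytes/key figure assumes small keys and small values;
# large values (e.g. big hashes) blow well past it.
used_memory = 5720230408   # used_memory
keys = 2082234             # db0:keys=...
avg_bytes_per_key = used_memory / keys  # roughly 2.7 KB per key
```

At ~2.7 KB per key on average, the values themselves, not Redis overhead, account for the 5.33 GB; the next step is finding the large ones (e.g. with redis-cli --bigkeys).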
Thanks for the response, Itamar. I was under the false impression (and really didn't think it through) that the keys and values would all be roughly the same size. It turns out there were some hashes stored by kue that were over 10KB each, and we had hundreds of thousands of them. Removing those worked.
Thanks again.
I'm getting "OOM command not allowed" when trying to set a key.
maxmemory is set to 500M with maxmemory-policy "volatile-lru", and I'm setting a TTL for each key sent to Redis.
The INFO command returns: used_memory_human:809.22M
If maxmemory is set to 500M, how did I reach 809M?
INFO does not show any keyspaces - how is that possible?
KEYS * returns "(empty list or set)"; I've tried changing the db number, but still no keys are found.
Here is info command output:
redis-cli -p 6380
redis 127.0.0.1:6380> info
# Server
redis_version:2.6.4
redis_git_sha1:00000000
redis_git_dirty:0
redis_mode:standalone
os:Linux 2.6.32-358.14.1.el6.x86_64 x86_64
arch_bits:64
multiplexing_api:epoll
gcc_version:4.4.7
process_id:28291
run_id:229a2ee688bdbf677eaed24620102e7060725350
tcp_port:6380
uptime_in_seconds:1492488
uptime_in_days:17
lru_clock:1429357
# Clients
connected_clients:1
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:0
# Memory
used_memory:848529904
used_memory_human:809.22M
used_memory_rss:863551488
used_memory_peak:848529192
used_memory_peak_human:809.22M
used_memory_lua:31744
mem_fragmentation_ratio:1.02
mem_allocator:jemalloc-3.0.0
# Persistence
loading:0
rdb_changes_since_last_save:0
rdb_bgsave_in_progress:0
rdb_last_save_time:1375949883
rdb_last_bgsave_status:ok
rdb_last_bgsave_time_sec:-1
rdb_current_bgsave_time_sec:-1
aof_enabled:0
aof_rewrite_in_progress:0
aof_rewrite_scheduled:0
aof_last_rewrite_time_sec:-1
aof_current_rewrite_time_sec:-1
aof_last_bgrewrite_status:ok
# Stats
total_connections_received:3
total_commands_processed:8
instantaneous_ops_per_sec:0
rejected_connections:0
expired_keys:0
evicted_keys:0
keyspace_hits:0
keyspace_misses:0
pubsub_channels:0
pubsub_patterns:0
latest_fork_usec:0
# Replication
role:master
connected_slaves:0
# CPU
used_cpu_sys:18577.25
used_cpu_user:1376055.38
used_cpu_sys_children:0.00
used_cpu_user_children:0.00
# Keyspace
redis 127.0.0.1:6380>
Redis' maxmemory volatile-lru policy can fail to free enough memory if the maxmemory limit is already used by the non-volatile keys.
Any chance you changed the number of databases? If you use a very large number, the initial memory usage may be high.
In our case, maxmemory was set to a high amount, then someone on the team changed it to a lower amount after data had already been stored.
My problem was that old data wasn't being released, which caused the Redis db to get jammed up quickly.
In Python, I cleared the cache server by running:
import redis

red = redis.StrictRedis(...)  # connection details elided
red.flushdb()
And then limited the TTL to 24h by saving the file with "ex":
red.set(<FILENAME>, png, ex=(60*60*24))  # 86400 seconds = 24 hours
Memory is controlled in the config, so your instance is limited as it says. You can either look in your redis.conf or, from the CLI tool, issue CONFIG GET maxmemory to see the limit.
If you manage this Redis instance, you'll need to consult and adjust the config file, usually found at /etc/redis.conf or /etc/redis/redis.conf.
If you are using a Redis provider, you will need to contact them about increasing your limit.
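For reference, the two settings involved look like this in redis.conf (the values below are illustrative, not taken from the question):

```conf
# redis.conf - illustrative values, adjust to your capacity
maxmemory 500mb
maxmemory-policy volatile-lru
```

At runtime the same values can be read with CONFIG GET maxmemory and changed with CONFIG SET maxmemory 1gb; note that CONFIG SET does not persist across restarts unless written back to the config file.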
To debug this issue, check what actions were performed against Redis, whether manually via redis-cli or from code.
It might be that you ran KEYS * with very little memory left to accommodate the memory consumed by that command, which leads to the cache service being throttled.
Code changes might also affect key insertion (duplicate vs. unique data in the db), leading to overall memory being exceeded in the system.