"OOM command not allowed when used memory > 'maxmemory'" for an Amazon ElastiCache Redis - redis

I'm getting "OOM command not allowed when used memory > 'maxmemory'" error from time to time when trying to insert into an Elasticache redis node.
I went from a self-managed redis instance (maxmemory = 12Go, maxmemory-policy = allkeys-lru) to an Elasticache redis one (r5.large, i.e. maxmemory = 14 Go, maxmemory-policy = allkeys-lru).
However, after the migration of keys I'm getting "OOM command not allowed when used memory > 'maxmemory'" error from time to time that I don't manage to understand.
I've checked what they recommend here: https://aws.amazon.com/premiumsupport/knowledge-center/oom-command-not-allowed-redis/ to solve the problem, but so far:
I have a TTL on all keys
I'm already in allkeys-lru
When I look at the node's freeable memory, I have about 7 GB available
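For reference, this is roughly how those points can be re-checked (a sketch; the endpoint is a placeholder, and since ElastiCache blocks the CONFIG command, the policy has to be read from INFO):
HOST=my-node.xxxxxx.cache.amazonaws.com   # placeholder endpoint
# confirm the effective limit and eviction policy
redis-cli -h "$HOST" INFO memory | grep -E 'maxmemory|policy'
# sample some keys and flag any without a TTL (TTL returns -1 when no expiry is set)
redis-cli -h "$HOST" --scan | head -n 1000 | while read -r key; do
  [ "$(redis-cli -h "$HOST" TTL "$key")" = "-1" ] && echo "no TTL: $key"
done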
Here is the output of INFO memory:
# Memory
used_memory:10526693040
used_memory_human:9.80G
used_memory_rss:11520012288
used_memory_rss_human:10.73G
used_memory_peak:10560011952
used_memory_peak_human:9.83G
used_memory_peak_perc:99.68%
used_memory_overhead:201133315
used_memory_startup:4203584
used_memory_dataset:10325559725
used_memory_dataset_perc:98.13%
allocator_allocated:10527575720
allocator_active:11510194176
allocator_resident:11667750912
used_memory_lua:37888
used_memory_lua_human:37.00K
used_memory_scripts:0
used_memory_scripts_human:0B
number_of_cached_scripts:0
maxmemory:10527885773
maxmemory_human:9.80G
maxmemory_policy:allkeys-lru
allocator_frag_ratio:1.09
allocator_frag_bytes:982618456
allocator_rss_ratio:1.01
allocator_rss_bytes:157556736
rss_overhead_ratio:0.99
rss_overhead_bytes:-147738624
mem_fragmentation_ratio:1.09
mem_fragmentation_bytes:993361528
mem_not_counted_for_evict:0
mem_replication_backlog:1048576
mem_clients_slaves:0
mem_clients_normal:153411
mem_aof_buffer:0
mem_allocator:jemalloc-5.1.0
active_defrag_running:0
lazyfree_pending_objects:0
If you have any clue as to how to solve this, I'd love to hear it.
Thanks!

Related

Cannot add values to Redis cluster - The cluster is down

I have 4 nodes; 3 are masters and 1 of them is a slave. I am trying to add a simple string with set foo bar, but whenever I do it, I get this error:
(error) CLUSTERDOWN The cluster is down
Below is my cluster info
127.0.0.1:7000> cluster info
cluster_state:fail
cluster_slots_assigned:11
cluster_slots_ok:11
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:4
cluster_size:3
cluster_current_epoch:3
cluster_my_epoch:3
cluster_stats_messages_sent:9262
cluster_stats_messages_received:9160
I am using Redis-x64-3.0.503. Please let me know how to solve this.
Cluster Nodes:
87982f22cf8fb12c1247a74a2c26cdd1b84a3b88 192.168.2.32:7000 slave bc1c56ef4598fb4ef9d26c804c5fabd462415f71 1492000375488 1492000374508 3 connected
9527ba919a8dcfaeb33d25ef805e0f679276dc8d 192.168.2.35:7000 master - 1492000375488 1492000374508 2 connected 16380
ff999fd6cbe0e843d1a49f7bbac1cb759c3a2d47 192.168.2.33:7000 master - 1492000375488 1492000374508 0 connected 16381
bc1c56ef4598fb4ef9d26c804c5fabd462415f71 127.0.0.1:7000 myself,master - 0 0 3 connected 1-8 16383
Just to add to and simplify what @neuront said.
Redis stores data in hash slots. To understand this, you need to know how hashmaps or hash tables work. For our purposes here, Redis has a constant of 16384 slots to assign and distribute across all the master servers.
Now if we look at the node configuration you posted and refer it with redis documentation, you'll see the end numbers mean the slots assigned to the masters.
In your case this is what it looks like
... slave ... connected
... master ... connected 16380
... master ... connected 16381
... master ... connected 1-8 16383
So all the machines are connected and form the cluster, but not all the hash slots are assigned to store information. It should have been something like this:
... slave ... connected
... master ... connected 0-5460
... master ... connected 5461-10922
... master ... connected 10923-16383
You see, now we are assigning the full range of hash slots, as the documentation says:
slot: A hash slot number or range. Starting from argument number 9, but there may be up to 16384 entries in total (limit never reached). This is the list of hash slots served by this node. If the entry is just a number, is parsed as such. If it is a range, it is in the form start-end, and means that the node is responsible for all the hash slots from start to end including the start and end values.
Specifically in your case, when you stored some data with the key foo, it must have been assigned to a slot not registered in the cluster.
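You can ask a node directly which slot a key maps to (Redis computes HASH_SLOT = CRC16(key) mod 16384). For example, foo hashes to a slot far outside the 1-8 / 16380 / 16381 / 16383 set your cluster actually serves:
127.0.0.1:7000> CLUSTER KEYSLOT foo
(integer) 12182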
Since you're on Windows, you'll have to set up the distribution manually. For that you'll have to do something like this (this is for Linux; translate it to a Windows command):
for slot in {0..5400}; do redis-cli -h master1 -p 6379 CLUSTER ADDSLOTS $slot; done;
taken from this article
Hope it helped.
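For completeness, the same idea extended to cover all 16384 slots across three masters (a sketch; master1/master2/master3 are placeholder host names, adjust to your topology):
for slot in {0..5460}; do redis-cli -h master1 -p 7000 CLUSTER ADDSLOTS $slot; done;
for slot in {5461..10922}; do redis-cli -h master2 -p 7000 CLUSTER ADDSLOTS $slot; done;
for slot in {10923..16383}; do redis-cli -h master3 -p 7000 CLUSTER ADDSLOTS $slot; done;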
Only 11 slots were assigned, so your cluster is down, just like the message tells you. The assigned slots are 16380 at 192.168.2.35:7000, 16381 at 192.168.2.33:7000, and 1-8 plus 16383 at 127.0.0.1:7000.
Of course the direct cause is that you need to assign all 16384 slots (0-16383) to the cluster, but I think this was itself caused by a configuration mistake.
You have a node with the localhost address 127.0.0.1:7000. However, from its own point of view 192.168.2.33:7000 is also 127.0.0.1:7000, and so is 192.168.2.35:7000. This localhost address problem means a node cannot tell itself apart from another node, and I think this causes the chaos.
I suggest you reset all the nodes with the CLUSTER RESET command and re-create the cluster again, making sure you use their 192.168.*.* addresses this time.
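Something along these lines should do it (a sketch using the addresses from the question; CLUSTER RESET exists since Redis 3.0 and refuses masters that still hold keys, hence the FLUSHALL first):
# on each node: empty it, then wipe its cluster state
redis-cli -h 192.168.2.32 -p 7000 FLUSHALL
redis-cli -h 192.168.2.32 -p 7000 CLUSTER RESET HARD
# ...repeat for 192.168.2.33, 192.168.2.35 and the node currently shown as 127.0.0.1:7000
# then introduce the nodes to each other via their LAN addresses
redis-cli -h 192.168.2.32 -p 7000 CLUSTER MEET 192.168.2.33 7000
redis-cli -h 192.168.2.32 -p 7000 CLUSTER MEET 192.168.2.35 7000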
@user1829319 Here are the Windows equivalents for adding slots:
for /l %s in (0, 1, 8191) do redis-cli -h 127.0.0.1 -p 6379 CLUSTER ADDSLOTS %s
for /l %s in (8192, 1, 16383) do redis-cli -h 127.0.0.1 -p 6380 CLUSTER ADDSLOTS %s
You should recreate your cluster by doing FLUSHALL and CLUSTER RESET on each node, and during the next cluster setup verify that all slots have been assigned to the masters using CLUSTER SLOTS.

Cassandra service cannot start on CentOS 6.4, starting process is mysteriously killed

Cassandra version: 2.2
OS: CentOS 6.4
I just followed the documentation here and ran:
[user@centOS conf]$ sudo yum install dsc22
[user@centOS conf]$ sudo service cassandra start
Starting Cassandra: OK
But when I check the process status with "ps", no cassandra process is running: the starting process is mysteriously killed. (I ran the same thing on my Mac and it all works fine, and on CentOS typing 'cassandra' directly on the command line also works. Update: I tried "yum install dsc20" and that works fine too...)
'/var/log/cassandra/cassandra.log' looks like the below: no error message, no warning. Does anyone have any idea about this? Thanks a lot.
tail: /var/log/cassandra/cassandra.log: file truncated
CompilerOracle: inline org/apache/cassandra/db/AbstractNativeCell.compareTo (Lorg/apache/cassandra/db/composites/Composite;)I
CompilerOracle: inline org/apache/cassandra/db/composites/AbstractSimpleCellNameType.compareUnsigned (Lorg/apache/cassandra/db/composites/Composite;Lorg/apache/cassandra/db/composites/Composite;)I
CompilerOracle: inline org/apache/cassandra/io/util/Memory.checkBounds (JJ)V
CompilerOracle: inline org/apache/cassandra/io/util/SafeMemory.checkBounds (JJ)V
CompilerOracle: inline org/apache/cassandra/utils/AsymmetricOrdering.selectBoundary (Lorg/apache/cassandra/utils/AsymmetricOrdering/Op;II)I
CompilerOracle: inline org/apache/cassandra/utils/AsymmetricOrdering.strictnessOfLessThan (Lorg/apache/cassandra/utils/AsymmetricOrdering/Op;)I
CompilerOracle: inline org/apache/cassandra/utils/ByteBufferUtil.compare (Ljava/nio/ByteBuffer;[B)I
CompilerOracle: inline org/apache/cassandra/utils/ByteBufferUtil.compare ([BLjava/nio/ByteBuffer;)I
CompilerOracle: inline org/apache/cassandra/utils/ByteBufferUtil.compareUnsigned (Ljava/nio/ByteBuffer;Ljava/nio/ByteBuffer;)I
CompilerOracle: inline org/apache/cassandra/utils/FastByteOperations$UnsafeOperations.compareTo (Ljava/lang/Object;JILjava/lang/Object;JI)I
CompilerOracle: inline org/apache/cassandra/utils/FastByteOperations$UnsafeOperations.compareTo (Ljava/lang/Object;JILjava/nio/ByteBuffer;)I
CompilerOracle: inline org/apache/cassandra/utils/FastByteOperations$UnsafeOperations.compareTo (Ljava/nio/ByteBuffer;Ljava/nio/ByteBuffer;)I
INFO 23:47:52 Loading settings from file:/etc/cassandra/default.conf/cassandra.yaml
INFO 23:47:52 Node configuration:[authenticator=AllowAllAuthenticator; authorizer=AllowAllAuthorizer; auto_snapshot=true; batch_size_fail_threshold_in_kb=50; batch_size_warn_threshold_in_kb=5; batchlog_replay_throttle_in_kb=1024; cas_contention_timeout_in_ms=1000; client_encryption_options=<REDACTED>; cluster_name=Test Cluster; column_index_size_in_kb=64; commit_failure_policy=stop; commitlog_directory=/data/cassandra/commitlog; commitlog_segment_size_in_mb=32; commitlog_sync=periodic; commitlog_sync_period_in_ms=10000; compaction_large_partition_warning_threshold_mb=100; compaction_throughput_mb_per_sec=16; concurrent_counter_writes=32; concurrent_reads=32; concurrent_writes=32; counter_cache_save_period=7200; counter_cache_size_in_mb=null; counter_write_request_timeout_in_ms=5000; cross_node_timeout=false; data_file_directories=[/data/cassandra/data]; disk_failure_policy=stop; dynamic_snitch_badness_threshold=0.1; dynamic_snitch_reset_interval_in_ms=600000; dynamic_snitch_update_interval_in_ms=100; enable_user_defined_functions=false; endpoint_snitch=SimpleSnitch; hinted_handoff_enabled=true; hinted_handoff_throttle_in_kb=1024; incremental_backups=false; index_summary_capacity_in_mb=null; index_summary_resize_interval_in_minutes=60; inter_dc_tcp_nodelay=false; internode_compression=all; key_cache_save_period=14400; key_cache_size_in_mb=null; listen_address=localhost; max_hint_window_in_ms=10800000; max_hints_delivery_threads=2; memtable_allocation_type=heap_buffers; native_transport_port=9042; num_tokens=256; partitioner=org.apache.cassandra.dht.Murmur3Partitioner; permissions_validity_in_ms=2000; range_request_timeout_in_ms=10000; read_request_timeout_in_ms=5000; request_scheduler=org.apache.cassandra.scheduler.NoScheduler; request_timeout_in_ms=10000; role_manager=CassandraRoleManager; roles_validity_in_ms=2000; row_cache_save_period=0; row_cache_size_in_mb=0; rpc_address=localhost; rpc_keepalive=true; rpc_port=9160; rpc_server_type=sync; saved_caches_directory=/data/cassandra/saved_caches; seed_provider=[{class_name=org.apache.cassandra.locator.SimpleSeedProvider, parameters=[{seeds=127.0.0.1}]}]; server_encryption_options=<REDACTED>; snapshot_before_compaction=false; ssl_storage_port=7001; sstable_preemptive_open_interval_in_mb=50; start_native_transport=true; start_rpc=false; storage_port=7000; thrift_framed_transport_size_in_mb=15; tombstone_failure_threshold=100000; tombstone_warn_threshold=1000; tracetype_query_ttl=86400; tracetype_repair_ttl=604800; trickle_fsync=false; trickle_fsync_interval_in_kb=10240; truncate_request_timeout_in_ms=60000; windows_timer_interval=1; write_request_timeout_in_ms=2000]
INFO 23:47:52 DiskAccessMode 'auto' determined to be mmap, indexAccessMode is mmap
INFO 23:47:52 Global memtable on-heap threshold is enabled at 121MB
INFO 23:47:52 Global memtable off-heap threshold is enabled at 121MB
INFO 23:47:52 Loading settings from file:/etc/cassandra/default.conf/cassandra.yaml
INFO 23:47:52 Node configuration:[authenticator=AllowAllAuthenticator; authorizer=AllowAllAuthorizer; auto_snapshot=true; batch_size_fail_threshold_in_kb=50; batch_size_warn_threshold_in_kb=5; batchlog_replay_throttle_in_kb=1024; cas_contention_timeout_in_ms=1000; client_encryption_options=<REDACTED>; cluster_name=Test Cluster; column_index_size_in_kb=64; commit_failure_policy=stop; commitlog_directory=/data/cassandra/commitlog; commitlog_segment_size_in_mb=32; commitlog_sync=periodic; commitlog_sync_period_in_ms=10000; compaction_large_partition_warning_threshold_mb=100; compaction_throughput_mb_per_sec=16; concurrent_counter_writes=32; concurrent_reads=32; concurrent_writes=32; counter_cache_save_period=7200; counter_cache_size_in_mb=null; counter_write_request_timeout_in_ms=5000; cross_node_timeout=false; data_file_directories=[/data/cassandra/data]; disk_failure_policy=stop; dynamic_snitch_badness_threshold=0.1; dynamic_snitch_reset_interval_in_ms=600000; dynamic_snitch_update_interval_in_ms=100; enable_user_defined_functions=false; endpoint_snitch=SimpleSnitch; hinted_handoff_enabled=true; hinted_handoff_throttle_in_kb=1024; incremental_backups=false; index_summary_capacity_in_mb=null; index_summary_resize_interval_in_minutes=60; inter_dc_tcp_nodelay=false; internode_compression=all; key_cache_save_period=14400; key_cache_size_in_mb=null; listen_address=localhost; max_hint_window_in_ms=10800000; max_hints_delivery_threads=2; memtable_allocation_type=heap_buffers; native_transport_port=9042; num_tokens=256; partitioner=org.apache.cassandra.dht.Murmur3Partitioner; permissions_validity_in_ms=2000; range_request_timeout_in_ms=10000; read_request_timeout_in_ms=5000; request_scheduler=org.apache.cassandra.scheduler.NoScheduler; request_timeout_in_ms=10000; role_manager=CassandraRoleManager; roles_validity_in_ms=2000; row_cache_save_period=0; row_cache_size_in_mb=0; rpc_address=localhost; rpc_keepalive=true; rpc_port=9160; rpc_server_type=sync; saved_caches_directory=/data/cassandra/saved_caches; seed_provider=[{class_name=org.apache.cassandra.locator.SimpleSeedProvider, parameters=[{seeds=127.0.0.1}]}]; server_encryption_options=<REDACTED>; snapshot_before_compaction=false; ssl_storage_port=7001; sstable_preemptive_open_interval_in_mb=50; start_native_transport=true; start_rpc=false; storage_port=7000; thrift_framed_transport_size_in_mb=15; tombstone_failure_threshold=100000; tombstone_warn_threshold=1000; tracetype_query_ttl=86400; tracetype_repair_ttl=604800; trickle_fsync=false; trickle_fsync_interval_in_kb=10240; truncate_request_timeout_in_ms=60000; windows_timer_interval=1; write_request_timeout_in_ms=2000]
INFO 23:47:52 Hostname: centos-vm.com
INFO 23:47:52 JVM vendor/version: OpenJDK 64-Bit Server VM/1.7.0_45
INFO 23:47:52 Heap size: 509214720/509214720
INFO 23:47:52 Code Cache Non-heap memory: init = 2555904(2496K) used = 636288(621K) committed = 2555904(2496K) max = 50331648(49152K)
INFO 23:47:52 Par Eden Space Heap memory: init = 104071168(101632K) used = 50207568(49030K) committed = 104071168(101632K) max = 104071168(101632K)
INFO 23:47:52 Par Survivor Space Heap memory: init = 12976128(12672K) used = 0(0K) committed = 12976128(12672K) max = 12976128(12672K)
INFO 23:47:52 CMS Old Gen Heap memory: init = 392167424(382976K) used = 0(0K) committed = 392167424(382976K) max = 392167424(382976K)
INFO 23:47:52 CMS Perm Gen Non-heap memory: init = 21757952(21248K) used = 12843096(12542K) committed = 21757952(21248K) max = 174063616(169984K)
INFO 23:47:52 Classpath: /etc/cassandra/conf:/usr/share/cassandra/lib/airline-0.6.jar:/usr/share/cassandra/lib/antlr-runtime-3.5.2.jar:/usr/share/cassandra/lib/cassandra-driver-core-2.2.0-rc2-SNAPSHOT-20150617-shaded.jar:/usr/share/cassandra/lib/commons-cli-1.1.jar:/usr/share/cassandra/lib/commons-codec-1.2.jar:/usr/share/cassandra/lib/commons-lang3-3.1.jar:/usr/share/cassandra/lib/commons-math3-3.2.jar:/usr/share/cassandra/lib/compress-lzf-0.8.4.jar:/usr/share/cassandra/lib/concurrentlinkedhashmap-lru-1.4.jar:/usr/share/cassandra/lib/crc32ex-0.1.1.jar:/usr/share/cassandra/lib/disruptor-3.0.1.jar:/usr/share/cassandra/lib/ecj-4.4.2.jar:/usr/share/cassandra/lib/guava-16.0.jar:/usr/share/cassandra/lib/high-scale-lib-1.0.6.jar:/usr/share/cassandra/lib/jackson-core-asl-1.9.2.jar:/usr/share/cassandra/lib/jackson-mapper-asl-1.9.2.jar:/usr/share/cassandra/lib/jamm-0.3.0.jar:/usr/share/cassandra/lib/javax.inject.jar:/usr/share/cassandra/lib/jbcrypt-0.3m.jar:/usr/share/cassandra/lib/jcl-over-slf4j-1.7.7.jar:/usr/share/cassandra/lib/jna-4.0.0.jar:/usr/share/cassandra/lib/joda-time-2.4.jar:/usr/share/cassandra/lib/json-simple-1.1.jar:/usr/share/cassandra/lib/libthrift-0.9.2.jar:/usr/share/cassandra/lib/log4j-over-slf4j-1.7.7.jar:/usr/share/cassandra/lib/logback-classic-1.1.3.jar:/usr/share/cassandra/lib/logback-core-1.1.3.jar:/usr/share/cassandra/lib/lz4-1.3.0.jar:/usr/share/cassandra/lib/metrics-core-3.1.0.jar:/usr/share/cassandra/lib/metrics-logback-3.1.0.jar:/usr/share/cassandra/lib/netty-all-4.0.23.Final.jar:/usr/share/cassandra/lib/ohc-core-0.3.4.jar:/usr/share/cassandra/lib/ohc-core-j8-0.3.4.jar:/usr/share/cassandra/lib/reporter-config3-3.0.0.jar:/usr/share/cassandra/lib/reporter-config-base-3.0.0.jar:/usr/share/cassandra/lib/sigar-1.6.4.jar:/usr/share/cassandra/lib/slf4j-api-1.7.7.jar:/usr/share/cassandra/lib/snakeyaml-1.11.jar:/usr/share/cassandra/lib/snappy-java-1.0.4.1.jar:/usr/share/cassandra/lib/snappy-java-1.1.1.7.jar:/usr/share/cassandra/lib/ST4-4.0.8.jar:/usr/share/cassandra/lib/stream-2.5.2.jar:/usr/share/cassandra/lib/super-csv-2.1.0.jar:/usr/share/cassandra/lib/thrift-server-0.3.7.jar:/usr/share/cassandra/apache-cassandra-2.2.0.jar:/usr/share/cassandra/apache-cassandra-thrift-2.2.0.jar:/usr/share/cassandra/stress.jar::/usr/share/cassandra/lib/jamm-0.3.0.jar
It turned out to be a memory problem. The starting process wasn't killed so mysteriously after all: it was killed by the kernel (the OOM killer).
I'm running it on a virtual machine with 1 GB of memory.
When I run "ps -ef | grep cassandra" and then "dmesg | egrep -i -B100 'cassandra-pid'", I got
Out of memory: Kill process 9450 (java) score 218 or sacrifice child
Killed process 9450, UID 0, (java) total-vm:1155600kB, anon-rss:797292kB, file-rss:100956kB
This confirmed it was a memory issue. I then reduced the memory allocated to Cassandra and everything worked fine.
In cassandra-env.sh, change:
half_system_memory_in_mb=`expr $system_memory_in_mb / 2`
quarter_system_memory_in_mb=`expr $half_system_memory_in_mb / 2`
to be:
half_system_memory_in_mb="300"
quarter_system_memory_in_mb="300"
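Alternatively, cassandra-env.sh lets you pin the heap explicitly instead of editing the auto-sizing math; setting something like the following in that file should have the same effect (the sizes here are just a guess for a 1 GB VM, and the two variables must be set together):
MAX_HEAP_SIZE="300M"
HEAP_NEWSIZE="75M"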
You can check by typing the command netstat -tunlp at the prompt; there you will find the process ID listening on port 7199. You then have to kill it if you want to stop Cassandra.
netstat -tunlp shows the process ID for each listening port.
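Put together (a sketch; 7199 is Cassandra's JMX port, and the PID appears in the last column of the netstat output):
netstat -tunlp | grep 7199
kill <pid-from-last-column>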

Redis - why is redis-server decreasing in memory?

I'm running Redis on Windows and have noticed that the memory usage of redis-server.exe decreases over time. When I start Redis, it reads from a dump file and loads all of the hashed key values into memory, up to about 1.4 GB. However, over time, the amount of memory that redis-server.exe occupies decreases. I have seen it go down to less than 100 MB.
The only reason I can see for this happening is that keys are expiring and leaving memory; however, I have set Redis up so that they never expire. I have also made sure that I have allocated enough memory.
Some of my settings include:
maxmemory 2gb
maxmemory-policy noeviction
hash-max-zipmap-entries 512
hash-max-zipmap-value 64
activerehashing no
If it's of interest, when I first loaded the keys into Redis, I did it through Python like so:
r.hset(key, field, value)  # r is a redis-py client, e.g. redis.Redis()
Any help would be appreciated. I want the keys to be there forever.
This is my output from the INFO command right after I first run it:
redis 127.0.0.1:6379> INFO
redis_version:2.4.6
redis_git_sha1:26cdd13a
redis_git_dirty:0
arch_bits:64
multiplexing_api:winsock2
gcc_version:4.6.1
process_id:9092
uptime_in_seconds:69
uptime_in_days:0
lru_clock:248011
used_cpu_sys:3.34
used_cpu_user:10.06
used_cpu_sys_children:0.00
used_cpu_user_children:0.00
connected_clients:1
connected_slaves:0
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:0
used_memory:1129560232
used_memory_human:1.05G
used_memory_rss:1129560232
used_memory_peak:1129560144
used_memory_peak_human:1.05G
mem_fragmentation_ratio:1.00
mem_allocator:libc
loading:0
aof_enabled:0
changes_since_last_save:0
bgsave_in_progress:0
last_save_time:1386600366
bgrewriteaof_in_progress:0
total_connections_received:1
total_commands_processed:0
expired_keys:0
evicted_keys:0
keyspace_hits:0
keyspace_misses:0
pubsub_channels:0
pubsub_patterns:0
latest_fork_usec:0
vm_enabled:0
role:master
db0:keys=4007989,expires=0
After running it again once I noticed in Windows Task Manager that the memory had decreased, there are not many differences:
uptime_in_seconds:4412 (from 69)
lru_clock:248445 (from 248011)
used_cpu_sys:4.59 (from 3.34)
used_cpu_user:10.25 (from 10.06)
used_memory:1129561240 (from 1129560232)
used_memory_human:1.05G (same!)
used_memory_rss:1129561240 (from 1129560232)
used_memory_peak:1129568960 (from 1129560144)
used_memory_peak_human:1.05G (same!)
mem_fragmentation_ratio:1.00 (same!)
last_save_time:1386600366 (same!)
total_connections_received:4 (from 1)
total_commands_processed:10 (from 0)
expired_keys:0 (same!)
evicted_keys:0 (same!)
keyspace_hits:0 (same!)
keyspace_misses:2 (from 0)
The lookups are taking longer when the memory size is lower. What is going on here?
What version of Redis are you using?
Do you have a cron job of some sort that removes keys? (Do a grep for the DEL command on your codebase, just to be sure.)
Redis usually runs as a single process that manages the in-memory data. However, when the data is persisted to the RDB file, a second process is started to save all the data. During that process, you can see memory use increase by up to double the size of your data set.
I am familiar with how this is done on Linux, but I don't know the details of the Windows port, so maybe the differences in size you are seeing are because of this second process that is launched periodically? You can easily test whether this is the case by issuing a BGSAVE command in Redis. This starts the synchronization of data to the RDB file in the background, so you can check whether the memory usage pattern is the one you observed.
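For example (a sketch, using the field names that appear in your own INFO output; on Windows, use findstr instead of grep):
redis-cli BGSAVE
redis-cli INFO | grep -E "bgsave_in_progress|used_memory_rss"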
If that is the case, then you already know what is going on :)
good luck

How do I view my Redis database current_size?

I am aware of redis-cli, and the info and config commands. However, they do not have anything that states the size of the current database. How could I figure this out?
Use the INFO command; full details here: http://redis.io/commands/info
sample output:
redis-cli
redis 127.0.0.1:6379> info
redis_version:2.4.11
redis_git_sha1:00000000
redis_git_dirty:0
arch_bits:64
multiplexing_api:kqueue
gcc_version:4.2.1
process_id:300
uptime_in_seconds:1389779
uptime_in_days:16
lru_clock:1854465
used_cpu_sys:59.86
used_cpu_user:73.02
used_cpu_sys_children:0.15
used_cpu_user_children:0.11
connected_clients:1
connected_slaves:0
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:0
used_memory:1329424
used_memory_human:1.27M
used_memory_rss:2285568
used_memory_peak:1595680
used_memory_peak_human:1.52M
mem_fragmentation_ratio:1.72
mem_allocator:libc
loading:0
aof_enabled:0
changes_since_last_save:0
bgsave_in_progress:0
last_save_time:1360719404
bgrewriteaof_in_progress:0
total_connections_received:221
total_commands_processed:29926
expired_keys:2
evicted_keys:0
keyspace_hits:1678
keyspace_misses:3
pubsub_channels:0
pubsub_patterns:0
latest_fork_usec:379
vm_enabled:0
role:master
db0:keys=23,expires=0
You can use the following command to list the databases for which some keys are defined:
INFO keyspace
# Keyspace
db0:keys=6002,expires=0,avg_ttl=0
db9:keys=20953296,expires=0,avg_ttl=0
db10:keys=1,expires=0,avg_ttl=0
You can also use SELECT 0 (or SELECT 1, or whichever db you want to check). After selecting the db, use the DBSIZE command to display the size of the selected db:
Select 9
OK
dbsize
(integer) 20953296
To list overall information about your Redis instance, type INFO; to view only the memory section, just type:
INFO Memory
# Memory
used_memory:1259920
used_memory_human:1.20M
used_memory_rss:1227000
used_memory_peak:2406152
used_memory_peak_human:2.29M
used_memory_lua:36864
mem_fragmentation_ratio:0.97
mem_allocator:dlmalloc-2.8
redis-cli info memory | grep 'used_memory.*human';
Sample output:
used_memory_human:20.66M
used_memory_rss_human:24.26M
used_memory_peak_human:46.14M
used_memory_lua_human:37.00K
used_memory_scripts_human:0B
You can also do this with a one-line command:
redis-cli -p 6379 -a password info| egrep "used_memory_human|total_system_memory_human"
This will display the total memory used by Redis and the total RAM on the server.
Use the DBSIZE command to get the number of keys in the database.
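Combining DBSIZE with used_memory from INFO gives a rough average-memory-per-key figure (a sketch; the number includes all of Redis's internal overhead, not just your payloads):
keys=$(redis-cli --raw DBSIZE)
mem=$(redis-cli INFO memory | grep '^used_memory:' | cut -d: -f2 | tr -d '\r')
echo "$((mem / keys)) bytes per key (approximate)"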

Reset Redis "used_memory_peak" stat

I'm using Redis (2.4.2) and with the INFO command I can read stats about my Redis server.
There are many stats, including some about how much memory is used. One of them, "used_memory_peak", seems to hold the maximum amount of memory Redis has ever taken.
I've deleted a bunch of keys, and I'd like to reset this stat since it affects the scale of my Munin graphs.
There is a CONFIG RESETSTAT command, but it doesn't seem to affect this particular stat.
Any idea how I could do this without having to export/delete/import my dataset?
EDIT:
According to @antirez himself (issue 369 on GitHub), this is intended behavior, but the feature could be improved to be more useful in a future release.
The implementation of CONFIG RESETSTAT is quite simple:
} else if (!strcasecmp(c->argv[1]->ptr,"resetstat")) {
    if (c->argc != 2) goto badarity;
    server.stat_keyspace_hits = 0;
    server.stat_keyspace_misses = 0;
    server.stat_numcommands = 0;
    server.stat_numconnections = 0;
    server.stat_expiredkeys = 0;
    addReply(c,shared.ok);
So it does not initialize the server.stat_peak_memory field used to store the maximum amount of memory ever used by Redis. I don't know if it is a bug or a feature.
Here is a hack to reset the value without having to stop Redis. The idea is to use gdb in batch mode to just change the value of the variable (which is part of a static structure). Normally Redis is compiled with debugging symbols.
# Here we have plenty of things in this instance
> ./redis-cli info | grep peak
used_memory_peak:1363052184
used_memory_peak_human:1.27G
# Let's do some cleaning: everything is wiped out
# don't do this in production !!!
> ./redis-cli flushdb
OK
# Again the same values, while some memory has been freed
> ./redis-cli info | grep peak
used_memory_peak:1363052184
used_memory_peak_human:1.27G
# Here is the magic command: reset the parameter with gdb (output and warnings to be ignored)
> gdb -batch -n -ex 'set variable server.stat_peak_memory = 0' ./redis-server `pidof redis-server`
Missing separate debuginfo for /lib64/libm.so.6
Missing separate debuginfo for /lib64/libdl.so.2
Missing separate debuginfo for /lib64/libpthread.so.0
[Thread debugging using libthread_db enabled]
[New Thread 0x41001940 (LWP 22837)]
[New Thread 0x40800940 (LWP 22836)]
Missing separate debuginfo for /lib64/libc.so.6
Missing separate debuginfo for /lib64/ld-linux-x86-64.so.2
warning: no loadable sections found in added symbol-file system-supplied DSO at 0x7ffff51ff000
0x00002af0b5eef218 in epoll_wait () from /lib64/libc.so.6
# And now, result is different: great !
> ./redis-cli info | grep peak
used_memory_peak:718768
used_memory_peak_human:701.92K
This is a hack: think twice before applying this trick on a production instance.
Simple trick to clear the peak memory stat:
Step 1:
/home/logproc/redis/bin/redis-cli BGREWRITEAOF
Wait till it finishes rewriting the AOF file.
Step 2:
Restart the Redis db.
Done. That's it.