Redis hash structure occupies more memory in cluster mode

Hash:
./redis-cli -c -p 7000 hlen 0
(integer) 7746812
./redis-cli -c -p 7000 hlen 1
(integer) 7746812
./redis-cli -c -p 7000 hlen 2
(integer) 7746812
./redis-cli -c -p 7000 hlen 3
(integer) 7746812
./redis-cli -c -p 7000 hlen 4
(integer) 7746812
./redis-cli -c -p 7000 hlen 5
(integer) 0
Memory for each hash:
./redis-cli -c -p 7000 keys '*'
1) "3"
./redis-cli -c -p 7000 memory usage 3
(integer) 415715543
./redis-cli -c -p 7001 keys '*'
1) "2"
2) "1"
Memory usage for each key:
./redis-cli -c -p 7001 memory usage 1
(integer) 415715543
./redis-cli -c -p 7001 memory usage 2
(integer) 415715543
./redis-cli -c -p 7002 memory usage 0
(integer) 415715543
./redis-cli -c -p 7002 memory usage 4
(integer) 415715543
Memory usage cluster level:
./redis-cli -c -p 7001 info memory
# Memory
used_memory:1004513344
used_memory_human:957.98M
used_memory_rss:1030799360
used_memory_rss_human:983.05M
used_memory_peak:1004615496
used_memory_peak_human:958.08M
used_memory_peak_perc:99.99%
used_memory_overhead:2568042
used_memory_startup:1449576
used_memory_dataset:1001945302
used_memory_dataset_perc:99.89%
allocator_allocated:1004619400
allocator_active:1004859392
allocator_resident:1022844928
total_system_memory:75798228992
total_system_memory_human:70.59G
used_memory_lua:37888
used_memory_lua_human:37.00K
used_memory_scripts:0
used_memory_scripts_human:0B
number_of_cached_scripts:0
maxmemory:0
maxmemory_human:0B
maxmemory_policy:noeviction
allocator_frag_ratio:1.00
allocator_frag_bytes:239992
allocator_rss_ratio:1.02
allocator_rss_bytes:17985536
rss_overhead_ratio:1.01
rss_overhead_bytes:7954432
mem_fragmentation_ratio:1.03
mem_fragmentation_bytes:26347944
mem_not_counted_for_evict:3162
mem_replication_backlog:1048576
mem_clients_slaves:16922
mem_clients_normal:49694
mem_aof_buffer:3162
mem_allocator:jemalloc-5.1.0
active_defrag_running:0
lazyfree_pending_objects:0
The same holds for node 7002.
And node 7000, which has only one hash, reports about 480MB.
Question:
Each hash takes about 415MB (415,715,543 bytes), so why is used memory 480MB for the node with one hash but 958MB for the node with two hashes (2 × 415,715,543 ≈ 831MB)?
I also printed the list of keys in the same cluster; the calculations do not tally.
What am I missing here? Kindly advise.
Nor is it allocator fragmentation: I ran MEMORY PURGE, and after that the memory remained the same.

Redis maintains internal structures that occupy memory in addition to the key names and values themselves; this is reported as "memory overhead" in Redis.
This is the reason for the difference between the per-hash memory and the node-level memory.
We can use the ziplist encoding to make hashes memory efficient, as sketched below.
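For reference, a hash stays in the compact ziplist encoding only while it is under both size thresholds (the stock defaults are 128 entries and 64 bytes per value), so a hash with ~7.7 million fields like the ones above will always use the plain hashtable encoding. A minimal sketch of inspecting and tuning this, reusing key "3" from the question:
./redis-cli -c -p 7000 object encoding 3
./redis-cli -c -p 7000 config set hash-max-ziplist-entries 512
./redis-cli -c -p 7000 config set hash-max-ziplist-value 64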

Related

Benchmarking on Redis gets low performance when connection number raise to merely 5000

Environment:
Redis on a single machine (standalone mode) with 512GB mem and 128 cores.
Benchmark procedure:
run redis-benchmark -h xx -p xx -c 5000 -n 1000000 -t set,get; the result is like:
run redis-benchmark -h xx -p xx -c 1700 -n 1000000 -t set,get 3 times on the same server (splitting the 5000 connections across 3 processes), and the average result is roughly like:
run redis-benchmark -h xx -p xx -c 1700 -n 1000000 -t set,get only once, and the result is:
I've tried adding the -P pipeline option, and it makes no big difference compared with the above results. I'm wondering why it suffers a performance penalty with 5000 connections in a single redis-benchmark process, and how I could benchmark the real capability of the current Redis instance? Thanks!
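For reference, a sketch of the three-process split described above, with the same placeholder host/port as in the question; each backgrounded redis-benchmark opens its own ~1700 connections:
for i in 1 2 3; do
  redis-benchmark -h xx -p xx -c 1700 -n 1000000 -t set,get &
done
wait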

Redis wrong "used_memory_human" on Sentinel Slave

A slave node reports:
redis-cli -p 7001 info memory | grep used_memory
used_memory:10741368904
used_memory_human:10.00G
With no keys:
redis-cli -p 7001 info Keyspace
# Keyspace
Its master reports the correct size:
# Memory
used_memory:4963584
used_memory_human:4.73M
Persistence files have all the same size on every server:
-rw-r--r-- 1 root root 178 Feb 28 17:43 /var/lib/redis/dump_7001.rdb
Actions taken: on the slave, the .rdb file was deleted and Redis restarted. The master and slave synced OK, but the difference is still reported.
A failover was performed, but that didn't solve it either.
jira@fr4redistaskmp03:~$ redis-cli -h redistaskmp01 -p 7001 info memory | grep used_memory_human
used_memory_human:4.73M
jira@fr4redistaskmp03:~$ redis-cli -h redistaskmp02 -p 7001 info memory | grep used_memory_human
used_memory_human:4.77M
jira@fr4redistaskmp03:~$ redis-cli -h redistaskmp03 -p 7001 info memory | grep used_memory_human
used_memory_human:10.00G
jira@fr4redistaskmp03:~$ redis-cli -h redistaskmp03 -p 7001 info replication
# Replication
role:slave
master_host:172.25.8.17
master_port:7001
Any idea?
The complete output:
The bad one:
redistaskmp03:~$ redis-cli -p 7001 info memory
# Memory
used_memory:10741368904
used_memory_human:10.00G
used_memory_rss:6864896
used_memory_rss_human:6.55M
used_memory_peak:10741430888
used_memory_peak_human:10.00G
used_memory_peak_perc:100.00%
used_memory_overhead:10741311316
used_memory_startup:3658312
used_memory_dataset:57588
used_memory_dataset_perc:0.00%
allocator_allocated:10741442744
allocator_active:10741768192
allocator_resident:10751008768
total_system_memory:135206285312
total_system_memory_human:125.92G
used_memory_lua:37888
used_memory_lua_human:37.00K
used_memory_scripts:0
used_memory_scripts_human:0B
number_of_cached_scripts:0
maxmemory:10737418240
maxmemory_human:10.00G
maxmemory_policy:noeviction
allocator_frag_ratio:1.00
allocator_frag_bytes:325448
allocator_rss_ratio:1.00
allocator_rss_bytes:9240576
rss_overhead_ratio:0.00
rss_overhead_bytes:-10744143872
mem_fragmentation_ratio:0.00
mem_fragmentation_bytes:-10734483056
mem_not_counted_for_evict:0
mem_replication_backlog:10737418240
mem_clients_slaves:0
mem_clients_normal:234764
mem_aof_buffer:0
mem_allocator:jemalloc-5.1.0
active_defrag_running:0
lazyfree_pending_objects:0
The good one:
# Memory
used_memory:4963584
used_memory_human:4.73M
used_memory_rss:4591616
used_memory_rss_human:4.38M
used_memory_peak:5168376
used_memory_peak_human:4.93M
used_memory_peak_perc:96.04%
used_memory_overhead:4908920
used_memory_startup:3658352
used_memory_dataset:54664
used_memory_dataset_perc:4.19%
allocator_allocated:4993456
allocator_active:5312512
allocator_resident:10412032
total_system_memory:135206285312
total_system_memory_human:125.92G
used_memory_lua:37888
used_memory_lua_human:37.00K
used_memory_scripts:0
used_memory_scripts_human:0B
number_of_cached_scripts:0
maxmemory:10737418240
maxmemory_human:10.00G
maxmemory_policy:noeviction
allocator_frag_ratio:1.06
allocator_frag_bytes:319056
allocator_rss_ratio:1.96
allocator_rss_bytes:5099520
rss_overhead_ratio:0.44
rss_overhead_bytes:-5820416
mem_fragmentation_ratio:0.94
mem_fragmentation_bytes:-309056
mem_not_counted_for_evict:0
mem_replication_backlog:1048576
mem_clients_slaves:0
mem_clients_normal:201992
mem_aof_buffer:0
mem_allocator:jemalloc-5.1.0
active_defrag_running:0
lazyfree_pending_objects:0
redis_version:5.0.4
Maybe a bug?
Solved.
The problem was "repl-backlog-size 10gb" in the .conf file: note mem_replication_backlog:10737418240 (exactly 10GB) in the bad node's output above, which accounts for essentially all of its used_memory_overhead.
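A sketch of correcting this at runtime without a restart (1mb is the Redis default for repl-backlog-size; CONFIG REWRITE persists the new value back to the .conf file):
redis-cli -p 7001 CONFIG SET repl-backlog-size 1mb
redis-cli -p 7001 CONFIG REWRITE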

unable to delete keys with prefix from redis

How does one delete keys with a certain prefix from Redis 5+?
I've tried the following, yet it didn't work for me:
root@1acb94e11aa2:/data# redis-cli --version
redis-cli 5.0.4
root@1acb94e11aa2:/data# redis-cli -n 9 KEYS ISO:* | wc -l
935
root@1acb94e11aa2:/data# redis-cli -n 9 KEYS ISO:* | xargs -0 redis-cli -n 9 DEL
(integer) 0
root@1acb94e11aa2:/data# redis-cli -n 9 KEYS ISO:* | wc -l
935
root@1acb94e11aa2:/data# redis-cli -n 9 --scan --pattern ISO:* | xargs -0 redis-cli -n 9 unlink
(integer) 0
root@1acb94e11aa2:/data#
Please advise.
As long as your key names do not include spaces, you should be able to run this:
$ redis-cli -n 9 --scan --pattern "ISO:*" | xargs -n 1 redis-cli -n 9 UNLINK
EDIT: if they do include spaces, you can do:
$ redis-cli -n 9 --scan --pattern "ISO:*" | xargs -n 1 -d "\n" redis-cli -n 9 UNLINK
(Your original attempts returned (integer) 0 because xargs -0 expects NUL-delimited input, while redis-cli emits newline-delimited key names, so all the keys were glued into a single nonexistent key.)
FIX:
root@1acb94e11aa2:/data# redis-cli -n 9 KEYS ISO:* | xargs -d "\n" redis-cli -n 9 del
(integer) 262
root@1acb94e11aa2:/data#
root@1acb94e11aa2:/data# redis-cli -n 9 KEYS ISO:*
(empty list or set)
root@1acb94e11aa2:/data#

redis keys command delete data but scan command does not

redis-cli -s /data/redis/redis.sock --scan --pattern "*abcd|6128*" | xargs -L 100 redis-cli -s /data/redis/redis.sock DEL
The above command is not deleting data from Redis and gives the following output:
(integer) 0
while the KEYS command works perfectly:
redis-cli -s /data/redis/redis.sock KEYS 'abcd|6291*' | xargs redis-cli -s /data/redis/redis.sock DEL;
Is there something wrong that I am doing?
Try xargs with -L 1 instead. Worked for me.
redis-cli -s /data/redis/redis.sock --scan --pattern "*abcd|6128*" | xargs -L 1 redis-cli -s /data/redis/redis.sock DEL
BTW, KEYS should be avoided in production environments as it is a blocking command.
SCAN only returns a handful of keys per call (about 10 per iteration by default) along with a cursor; you keep calling SCAN with the returned cursor until it comes back as 0, at which point all keys have been visited. More details are in the documentation: http://redis.io/commands/scan
KEYS, on the other hand, returns all matching keys in the db in a single pass. It is also blocking, due to Redis's single-threaded architecture, which can hurt the performance of other clients.
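For reference, a minimal sketch of that cursor loop in plain bash, using the socket and pattern from the question; redis-cli prints the next cursor on the first line and the matching keys after it:
cursor=0
while :; do
  reply=$(redis-cli -s /data/redis/redis.sock SCAN "$cursor" MATCH "*abcd|6128*" COUNT 100)
  cursor=$(echo "$reply" | head -n 1)   # first line is the next cursor
  echo "$reply" | tail -n +2            # remaining lines are the keys
  [ "$cursor" = "0" ] && break          # cursor 0 means the scan is complete
done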

Copy all keys from one db to another in redis

Instead of moving, I want to copy all my keys from one particular db to another.
Is that possible in Redis, and if yes, then how?
If you can't use MIGRATE COPY because of your Redis version (2.6), you might want to copy each key separately, which takes longer but doesn't require you to log in to the machines themselves, and allows you to move data from one database to another.
Here's how I copy all keys from one database to another (but without preserving TTLs):
#set connection data accordingly
source_host=localhost
source_port=6379
source_db=0
target_host=localhost
target_port=6379
target_db=1
#copy all keys without preserving ttl!
redis-cli -h $source_host -p $source_port -n $source_db keys \* | while read key; do
  echo "Copying $key"
  redis-cli --raw -h $source_host -p $source_port -n $source_db DUMP "$key" \
    | head -c -1 \
    | redis-cli -x -h $target_host -p $target_port -n $target_db RESTORE "$key" 0
done
Keys are not going to be overwritten; to overwrite them, delete those keys before copying, simply flush the whole target database before starting, or use RESTORE's REPLACE modifier as sketched below.
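If overwriting is what you want and the target runs Redis 3.0 or newer, RESTORE accepts a REPLACE modifier; the last line of the loop above would then read (same variables as the script):
    | redis-cli -x -h $target_host -p $target_port -n $target_db RESTORE "$key" 0 REPLACE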
The following copies all keys from database number 0 to database number 1 on localhost:
redis-cli --scan | xargs redis-cli migrate localhost 6379 '' 1 0 copy keys
If you use the same server/port you will get a timeout error, but the keys seem to copy successfully anyway. GitHub Redis issue #1903
redis-cli -a $source_password -p $source_port -h $source_ip -n $dbname keys \* | while read key; do
  echo "Copying $key"
  redis-cli --raw -a $source_password -h $source_ip -p $source_port -n $dbname DUMP "$key" | head -c -1 | redis-cli -x -a $destination_password -h $destination_IP -p $destination_port RESTORE "$key" 0
done
Latest solution:
Use the RIOT open-source command line tool provided by Redislabs to copy the data.
Reference: https://developer.redis.com/riot/riot-redis/cookbook.html#_performing_migration
GitHub project link: https://github.com/redis-developer/riot
How to install: https://developer.redis.com/riot/riot-redis/
# Source Redis db
SH=test1-redis.com
SP=6379
# Target Redis db
TH=test1-redis.com
TP=6379
# Copy from db0 to db1 (standalone Redis db, or cluster mode disabled)
riot-redis -h $SH -p $SP --db 0 replicate -h $TH -p $TP --db 1 --batch 10000 \
--scan-count 10000 \
--threads 4 \
--reader-threads 4 \
--reader-batch 500 \
--reader-queue 2000 \
--reader-pool 4
RIOT is quicker, supports multithreading, and works well for cross-environment Redis data copies (AWS ElastiCache, Redis OSS, and Redislabs).
Not directly. I would suggest using the always convenient redis-rdb-tools package (from Sripathi Krishnan) to extract the data from a normal rdb dump and reinject it into another instance.
See https://github.com/sripathikrishnan/redis-rdb-tools
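A sketch of that workflow, assuming redis-rdb-tools is installed and using a hypothetical dump path: the rdb tool can emit the raw Redis protocol, which redis-cli can mass-insert into the target db:
rdb --command protocol /var/lib/redis/dump.rdb | redis-cli -n 1 --pipe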
As far as I understand, you need to copy keys from one particular DB (e.g. 5) to another particular DB, say 10. If that is the case, you can use the redis database dumper (https://github.com/r043v/rdd). Although per the documentation it has a switch (-d) to select a database for the operation, that didn't work for me, so this is what I did:
1.) Edit the rdd.c file and look for the int main(int argc, char *argv[]) function
2.) Change the DB as per your requirement
3.) Compile the source with make
4.) Dump all keys using ./rdd -o "save.rdd"
5.) Edit the rdd.c file again and change the DB
6.) Run make again
7.) Import by using ./rdd "save.rdd" -o insert -s "IP" -p "Port"
I know this is old, but for those of you coming here from Google:
I just published a command line interface utility to npm and github that allows you to copy keys that match a given pattern (even *) from one Redis database to another.
You can find the utility here:
https://www.npmjs.com/package/redis-utils-cli
Try using DUMP to first dump all the keys, and then RESTORE them.
If migrating keys within the same Redis instance, you can use the internal MOVE command for that (with pipelining for more speed):
#!/bin/bash
#set connection data accordingly
source_host=localhost
source_port=6379
source_db=4
target_db=0
total=$(redis-cli -h $source_host -p $source_port -n $source_db keys \* | sed 's/^/MOVE /g' | sed 's/$/ '$target_db'/g' | wc -c)
#move all keys (MOVE relocates keys; they are removed from the source db)
time redis-cli -h $source_host -p $source_port -n $source_db keys \* | \
  sed 's/^/MOVE /g' | sed 's/$/ '$target_db'/g' | \
  pv -s $total | \
  redis-cli -h $source_host -p $source_port -n $source_db >/dev/null
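For a single key, the pipeline above reduces to one command (hypothetical key name):
redis-cli -h localhost -p 6379 -n 4 MOVE somekey 0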