I have a Redis cluster with maxmemory_human: 6.05G and used_memory_human: 4.62M.
I want to fill used_memory_human up with dump data so that I end up with about 2G of used_memory_human.
How could I do that?
There's a built-in debug command for that.
debug populate 2000000 testkey 1000
This will create 2 million string keys with 1 KB values each.
> debug populate 2000000 testkey 1000
OK
(2.52s)
> scan 0
1) "65536"
2) 1) "testkey:1637732"
2) "testkey:510112"
3) "testkey:1313139"
4) "testkey:34729"
5) "testkey:734989"
6) "testkey:996052"
7) "testkey:223126"
8) "testkey:1578003"
9) "testkey:1335698"
10) "testkey:1151100"
> info memory
# Memory
used_memory:2185489192
used_memory_human:2.04G
used_memory_rss:2247540736
used_memory_rss_human:2.09G
used_memory_peak:2185571088
used_memory_peak_human:2.04G
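If you need to reach a specific used_memory target such as 2G rather than guessing a key count, you could populate in batches and check INFO memory between calls. A rough redis-py sketch, assuming a single node on localhost and an arbitrary batch size and key prefix:

import redis

r = redis.Redis(host="localhost", port=6379)

TARGET_BYTES = 2 * 1024 ** 3   # stop once used_memory reaches ~2G
BATCH = 100_000                # keys created per DEBUG POPULATE call
batch_no = 0

while r.info("memory")["used_memory"] < TARGET_BYTES:
    # DEBUG POPULATE <count> [prefix] [size]; vary the prefix so each
    # batch creates new keys instead of overwriting the previous ones
    r.execute_command("DEBUG", "POPULATE", BATCH, f"testkey:{batch_no}:", 1000)
    batch_no += 1

print(r.info("memory")["used_memory_human"])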
Populate
eval "for i=0,(1024*1024*20) do redis.call('set','testData:'..i,'1234567890') end" 0
used_memory_human:1.81G
Clean
eval "for i=0,(1024*1024*20) do redis.call('del','testData:'..i) end" 0
used_memory_human:574.41K
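Note that a single EVAL like the one above runs atomically, so the server is blocked for other clients while all 20M iterations run. If that matters, the same test data can be written from a client in pipelined batches instead; a minimal redis-py sketch with an arbitrary batch size:

import redis

r = redis.Redis(host="localhost", port=6379)

TOTAL = 1024 * 1024 * 20       # same number of testData:* keys as the script
BATCH = 10_000

for start in range(0, TOTAL, BATCH):
    pipe = r.pipeline(transaction=False)   # plain pipelining, no MULTI/EXEC
    for i in range(start, min(start + BATCH, TOTAL)):
        pipe.set(f"testData:{i}", "1234567890")
    pipe.execute()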
Related
In Redis, we can store a counter value and increment it, like:
127.0.0.1:6379> set _inc 0
OK
127.0.0.1:6379> INCR _inc
(integer) 1
127.0.0.1:6379> INCR _inc
(integer) 2
127.0.0.1:6379> get _inc
"2"
Or we can save items like:
item:UNIQUE-ID
item:UNI-QUE-ID
But how can we save items with an incrementing numeric ID, like:
item:1
item:2
item:3
item:4
...
So far I have found a solution using a Lua script:
127.0.0.1:6379> eval 'return redis.call("set", "item:" .. redis.call("incr","itemNCounter"), "item value")' 0
OK
...
127.0.0.1:6379> keys item:*
1) "item:10"
2) "item:14"
3) "item:13"
4) "item:6"
5) "item:15"
6) "item:9"
7) "item:4"
8) "item:1"
9) "item:5"
10) "item:3"
11) "item:12"
12) "item:7"
13) "item:8"
14) "item:11"
15) "item:2"
Question: Is there a way to do this without running a Lua script, or another reliable method?
I expect that there is a Redis command for this.
Question: Is there a way to do this without running a Lua script, or another reliable method?
No, there isn't. However, EVAL has been supported since Redis version 2.6, and Lua scripts are first-class citizens in Redis.
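If the counter bump and the SET do not strictly need to happen atomically, the same pattern can also be driven from a client with two commands, since INCR already hands out each ID exactly once even under concurrent writers. A minimal redis-py sketch using the same key names as the script above:

import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def add_item(value):
    item_id = r.incr("itemNCounter")   # unique, monotonically increasing ID
    key = f"item:{item_id}"
    r.set(key, value)                  # not atomic with the INCR, unlike EVAL
    return key

print(add_item("item value"))          # e.g. "item:16"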
I am trying to parse the Redis slowlog into a file in CSV format (comma, colon, or space as the delimiter), but I am not sure how to do that.
If I run redis-cli -h <ip> -p 6379 slowlog get 2, I get the output below:
1) 1) (integer) 472141
   2) (integer) 1625198930
   3) (integer) 11243
   4) 1) "ZADD"
      2) "key111111"
      3) "16251.8488247"
      4) "abcd"
   5) "1.2.3.4:89"
   6) ""
2) 1) (integer) 37214
   2) (integer) 1525198930
   3) (integer) 1024
   4) 1) "ZADD"
      2) "key2"
      3) "1625.8"
   5) "1.2.3.4:89"
   6) "client-name"
Note that item 4) of each log entry may contain a different number of arguments, e.g. item 4) of log entry 1) has 4 arguments while item 4) of log entry 2) has 3; and item 6) can be a string like client-name or can be empty.
If I run the command using the shell script below:
results=$(redis-cli -h <ip> -p $port slowlog get 2)
echo $results
I get the output below:
472141 1625198930 11243 ZADD key111111 16251.8488247 abcd 1.2.3.4:89 37214 1525198930 1024 ZADD key2 1625.8 1.2.3.4:89 client-name
As you can see, the output of the command becomes one long run of words, and it is hard to figure out which words belong to the same log entry. What I want is a CSV file like:
472141 1625198930 11243 ZADD key111111 16251.8488247 abcd 1.2.3.4:89
37214 1525198930 1024 ZADD key2 1625.8 1.2.3.4:89 client-name
Is there any way I can parse the Redis slowlog into a CSV file like this? Any script (Python, shell, etc.) or existing code is welcome.
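One approach that avoids parsing redis-cli's text output entirely is to read the slowlog through a client library and let the csv module handle the quoting. A rough redis-py sketch; the field names follow redis-py's slowlog_get(), and the client address/name fields may be missing on older library versions, hence the .get() calls:

import csv
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

entries = r.slowlog_get(128)   # same data as SLOWLOG GET 128

with open("slowlog.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["id", "timestamp", "duration_us", "command",
                     "client_address", "client_name"])
    for e in entries:
        writer.writerow([
            e["id"],
            e["start_time"],
            e["duration"],
            e["command"],                  # all arguments, space-joined
            e.get("client_address", ""),   # present only in newer redis-py
            e.get("client_name", ""),
        ])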
I am moving an existing scheduling data set to Redis. The data has schedules and users, which is a many-to-many relationship.
I store the full list of schedules in a scored zset where the score is the timestamp of the schedule date. I store it like this so I can easily find all schedules that have elapsed and act on those schedules.
I also need the ability to find all schedules that belong to a user, so each user has their own zset containing duplicate information.
So the data may look like this:
s_1000: [ (100, "{..}"), (101, "{..}") ] # the schedules key
us_abc: [ (100, "{..}"), ] # a users schedules key
us_efg: [ (100, "{..}"), ] # another users schedules key
An actual record looks like this:
"{\"di\":10000,\"ci\":10000,\"si\":10000,\"p\":\"M14IB5A2830TE4KSSEGY0ZDX37V93FYX\",\"sse\":false}"
I've shortened the keys, and could even remove them altogether along with the JSON formatting for a really minimal payload, but all the data needs to be there.
This string alone is only 85 chars. Because there is a copy of each record, that is 170 chars in total for this record. The key for this would be us_M14IB5A2830TE4KSSEGY0ZDX37V93FYX_YYMMDD, for another 42 chars. In total, I'm seeing only 255 bytes necessary to store this data.
I've inserted 100k records just like this one, in the way I've described. By my count, that should only require about 25 MB, but I'm seeing it take well over 200 MB to store.
The memory usage reported for that payload is 344 bytes (×100k = 33 MB).
The memory usage reported for the schedules key is 18,108,652 bytes (18 MB).
The schedules key's memory usage looks correct.
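Those per-key figures can be reproduced with the MEMORY USAGE command; a minimal redis-py sketch, with the key names assumed from the example above:

import redis

r = redis.Redis(host="localhost", port=6379)

# one user's schedule zset (a single small member)
print(r.memory_usage("us_abc"))

# the big schedules zset; SAMPLES 0 measures every member instead of a sample
print(r.memory_usage("s_1000", samples=0))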
Here are the memory stats:
memory stats
1) "peak.allocated"
2) (integer) 3343080744
3) "total.allocated"
4) (integer) 201656296
5) "startup.allocated"
6) (integer) 3668896
7) "replication.backlog"
8) (integer) 0
9) "clients.slaves"
10) (integer) 0
11) "clients.normal"
12) (integer) 1189794
13) "aof.buffer"
14) (integer) 0
15) "lua.caches"
16) (integer) 0
17) "db.0"
18) 1) "overhead.hashtable.main"
2) (integer) 5850304
3) "overhead.hashtable.expires"
4) (integer) 4249632
19) "overhead.total"
20) (integer) 14958626
21) "keys.count"
22) (integer) 100036
23) "keys.bytes-per-key"
24) (integer) 1979
25) "dataset.bytes"
26) (integer) 186697670
27) "dataset.percentage"
28) "94.297752380371094"
29) "peak.percentage"
30) "6.0320491790771484"
31) "allocator.allocated"
32) (integer) 202111512
33) "allocator.active"
34) (integer) 204464128
35) "allocator.resident"
36) (integer) 289804288
37) "allocator-fragmentation.ratio"
38) "1.011640191078186"
39) "allocator-fragmentation.bytes"
40) (integer) 2352616
41) "allocator-rss.ratio"
42) "1.4173845052719116"
43) "allocator-rss.bytes"
44) (integer) 85340160
45) "rss-overhead.ratio"
46) "0.98278516530990601"
47) "rss-overhead.bytes"
48) (integer) -4988928
49) "fragmentation"
50) "1.4126673936843872"
51) "fragmentation.bytes"
52) (integer) 83200072
It looks like each key uses a whopping 1,979 bytes.
Why does each key use 344 bytes? Is it possible to tell Redis to use only 1 byte per character?
Why does Redis use so many bytes per key?
Is there a way I can structure my data better so I don't blow out Redis on such a small amount of data? (I need hundreds of millions of records.)
I am using a Redis cluster with 3 masters and 3 slaves as a MySQL cache, and the client is Redisson with the @Cacheable annotation. But I found some slow log entries for the CLUSTER NODES command, like:
3) 1) (integer) 4
   2) (integer) 1573033128
   3) (integer) 10955
   4) 1) "CLUSTER"
      2) "NODES"
   5) "192.168.110.102:57172"
   6) ""
4) 1) (integer) 3
   2) (integer) 1573032928
   3) (integer) 10120
   4) 1) "CLUSTER"
      2) "NODES"
   5) "192.168.110.90:59456"
   6) ""
So, I want to know: what is the problem?
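For context, the durations above (10955 and 10120 microseconds) are only just over Redis's default slowlog-log-slower-than threshold of 10000 microseconds, which is why these CLUSTER NODES calls show up at all. A small redis-py sketch to inspect (or raise) the threshold on a node; the host and the new value are only examples:

import redis

# connect to the node whose slowlog you are looking at (address assumed)
r = redis.Redis(host="localhost", port=6379, decode_responses=True)

print(r.config_get("slowlog-log-slower-than"))   # e.g. {'slowlog-log-slower-than': '10000'}

# Optionally raise the threshold so only clearly slow commands are logged:
# r.config_set("slowlog-log-slower-than", 20000)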
I have got the following slow query log entries in Redis. I have disabled writing to disk, so the database is purely in-memory. I am not able to understand why these two queries are slow.
FYI:
I have 462,698 hashes in total, with the key pattern key:<numeric_number>.
1) 1) (integer) 34
   2) (integer) 1364981115
   3) (integer) 10112
   4) 1) "HMGET"
      2) "key:123"
      3) "is_working"
6) 1) (integer) 29
   2) (integer) 1364923711
   3) (integer) 87705
   4) 1) "HMSET"
      2) "key:538771"
      3) "status_app"
      4) ".. (122246 more bytes)"