In Redis, it is advised not to use the KEYS command. Why is that? Is it only because its time complexity is O(N), or is there another reason?
Yes.
The time complexity is very bad. Note that the N in O(N) refers to the total number of keys in the database, not the number of keys matched by the filter pattern, so it can be a really big number for a production database.
And even worse: since Redis executes commands one at a time (command processing is single-threaded), everything else has to wait for that KEYS call to complete.
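This is why SCAN (available since Redis 2.8) is the usual replacement: it iterates the keyspace in small batches, so other clients get served between calls. A minimal sketch via redis-cli (the 'user:*' pattern and the COUNT value are just example assumptions):
$ redis-cli --scan --pattern 'user:*'
$ redis-cli SCAN 0 MATCH 'user:*' COUNT 1000
# --scan drives the cursor for you; the raw SCAN form returns a cursor
# to pass to the next call. COUNT is a batch-size hint, not a guarantee.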
I did the following experiment to demonstrate how dangerous the KEYS command is.
While one KEYS command runs, other KEYS commands wait for their turn to run. A single run of the KEYS command has 2 phases: first Redis collects the matching keys, then it sends them to the client.
$ time src/redis-cli keys "*" | wc -l
1450832
real 0m17.943s
user 0m8.341s
$ src/redis-cli
127.0.0.1:6379> slowlog get
1) 1) (integer) 0
2) (integer) 1621437661
3) (integer) 8321405
4) 1) "keys"
2) "*"
So, the command was running on Redis for 8s and the output was then piped to the 'wc' command. Redis finished with the command in 8s, but 'wc' needed the data for 17s to complete the counting, so the memory buffers had to exist for at least 17s. Now, let's imagine clients on the network, where this data has to travel to the clients as well. If we have 10 KEYS commands, they will run on Redis one by one; when the first one finishes and the next one runs, the results of the first command have to be stored in memory until its client consumes them. That all takes memory, so I can imagine a situation where the 5th client is running its KEYS command while we still need to keep the data for the first client, because it has not yet been transferred over the network.
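While a big reply is being drained, you can watch these buffers from another connection with CLIENT LIST: its omem field reports the output buffer bytes held per client (filtering on cmd=keys assumes KEYS was that client's last command):
$ src/redis-cli client list | grep cmd=keys
# inspect the omem= field to see which connection holds the memory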
Let's test it out.
Scenario: take a Redis DB of 200M size (on a box with 1000M physical memory) and check how much memory one execution of KEYS takes, and how long it takes when done over the network. Then simulate 5 of the same KEYS commands running and see whether it kills Redis.
$ src/redis-cli info memory
used_memory_human:214.17M
total_system_memory_human:926.08M
When run from the same node:
$ time src/redis-cli keys "*" | wc -l
1450832
real 0m17.702s
user 0m8.278s
$ free -m
total used free shared buff/cache available
Mem: 926 301 236 24 388 542
Mem: 926 336 200 24 388 507
Mem: 926 368 168 24 388 475
Mem: 926 445 91 24 388 398
Mem: 926 480 52 24 393 363
Mem: 926 491 35 24 399 352
-> looks like it consumed about 190M for the KEYS command
-> so Redis is busy with the command for 8s, but the memory stays consumed for 17s
-> running just one KEYS command merely blocks Redis for 8s; it does not cause an OOM
Let's run 2 KEYS commands at (almost) the same time (they will run one after another on the server anyway)
$ time src/redis-cli keys "*" | wc -l &
$ time src/redis-cli keys "*" | wc -l &
$ free -m
total used free shared buff/cache available
Mem: 926 300 430 24 194 546
Mem: 926 370 361 24 194 477
Mem: 926 454 276 24 194 393
Mem: 926 589 141 24 194 258
Mem: 926 693 37 24 194 154
-> now we used 392M of memory for 26s, while Redis was blocked for 17s
-> but we still have a running Redis
Let's run 3 KEYS commands at (almost) the same time (they will run one after another on the server anyway)
$ time src/redis-cli keys "*" | wc -l &
$ time src/redis-cli keys "*" | wc -l &
$ time src/redis-cli keys "*" | wc -l &
$ free -m
total used free shared buff/cache available
Mem: 926 299 474 23 152 549
Mem: 926 385 388 23 152 463
Mem: 926 512 261 23 152 336
Mem: 926 573 200 23 152 275
Mem: 926 711 61 23 152 136
Mem: 926 842 21 21 62 17
-> now we used 532M of memory for 36s, while Redis was blocked for 26s
-> but we still have a running Redis
Let's run 4 KEYS commands at (almost) the same time (they will run one after another on the server anyway)
$ time src/redis-cli keys "*" | wc -l &
$ time src/redis-cli keys "*" | wc -l &
$ time src/redis-cli keys "*" | wc -l &
$ time src/redis-cli keys "*" | wc -l &
-> that kills Redis
Nothing in the Redis logs:
2251:C 19 May 16:03:05.355 * DB saved on disk
2251:C 19 May 16:03:05.379 * RDB: 2 MB of memory used by copy-on-write
1853:M 19 May 16:03:05.432 * Background saving terminated with success
In /var/log/messages
May 19 16:08:01 consumer2 kernel: [454881.744017] redis-cli invoked oom-killer: gfp_mask=0x6200ca(GFP_HIGHUSER_MOVABLE), nodemask=(null), order=0, oom_score_adj=0
May 19 16:08:01 consumer2 kernel: [454881.744180] [<8023bdb8>] (oom_kill_process) from [<8023c6e8>] (out_of_memory+0x134/0x36c)
Conclusion:
We can kill a healthy Redis instance that consumes only 200M of RAM, on a host with 70% of its RAM free, just by issuing 4 KEYS commands that run one after another. This happens because the results have to stay buffered in memory even after Redis has finished executing each command.
One cannot protect Redis against this behavior with maxmemory, because the memory usage is not the result of SET commands; it lives in the client output buffers.
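What can help is the client-output-buffer-limit directive, which is unlimited ("normal 0 0 0") for regular clients by default. A hedged sketch, with arbitrary untuned limits: disconnect a normal client whose output buffer exceeds 100mb, or stays above 50mb for 10 seconds:
client-output-buffer-limit normal 100mb 50mb 10
# or at runtime (values in bytes):
127.0.0.1:6379> config set client-output-buffer-limit "normal 104857600 52428800 10"
Note this protects the server by killing greedy clients; the KEYS call itself still blocks Redis while it runs.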
Related
I have an intermittent lag on the web applications I am serving from Apache on a Debian box. Apache and MySQL check out. I am far from fully utilizing the box CPU/Memory. Still there is an intermittent lag. My theory is there is a network rate limit needing to be tweaked. Stats below.
Apache Server Status
Current Time: Tuesday, 02-Jun-2020 14:36:53 EDT
Restart Time: Monday, 01-Jun-2020 01:00:03 EDT
Parent Server Config. Generation: 1
Parent Server MPM Generation: 0
Server uptime: 1 day 13 hours 36 minutes 50 seconds
Server load: 2.95 3.23 3.09
Total accesses: 1213060 - Total Traffic: 22.0 GB - Total Duration: 32311929295
CPU Usage: u396.94 s164.31 cu2065.15 cs789.27 - 2.52% CPU load
8.96 requests/sec - 170.5 kB/second - 19.0 kB/request - 26636.7 ms/request
296 requests currently being processed, 66 idle workers
WR.WWWW.KWW_W._W_KWWWWWWKWWWWW_WWWWK_WK_WWW_WW_RWWWWWKCWWWWWW._W
_WW_R_W_.__K_WWWW__WWWWWWKKWWWWWWKWWWW_W____WWWWWWWW_WWW_KWWWWWW
WWWWWWWW_.WWWWWK_WWW_WWKWWWWWWKWWKWK_WWWWWRKWWW.WW_KKWKWWWKW_WWW
WW.W_.K._WWWK_WW_K_K._WW..WWWWWWW_.W_WWWW_W_W.W_WWWW_.WWKWK_WKWW
_W_WWWW_W.WWWWWW.WWWW_K__..W.WW_WWWWWWWWKRW_WWW_C.W_KW_WWW_KW.._
..WWWWWWWCWWW.WWW_WKKWWWW_._WWW.....WWW.W_W.W._.KW...W...WWW.WWW
W..W..K..WW_.W._................W..._W.W.....K.W.K_...R..K...W.W
...W..W.............................................
top
top - 14:31:14 up 79 days, 21:39, 3 users, load average: 2.26, 2.57, 2.86
Tasks: 717 total, 1 running, 716 sleeping, 0 stopped, 0 zombie
%Cpu(s): 3.3 us, 0.7 sy, 0.2 ni, 95.7 id, 0.0 wa, 0.0 hi, 0.1 si, 0.0 st
MiB Mem : 64365.1 total, 539.8 free, 8847.0 used, 54978.4 buff/cache
MiB Swap: 65477.0 total, 63810.0 free, 1667.0 used. 54580.5 avail Mem
ss -s
Total: 1934
TCP: 2362 (estab 1233, closed 1105, orphaned 2, timewait 1104)
Transport Total IP IPv6
RAW 0 0 0
UDP 0 0 0
TCP 1257 430 827
INET 1257 430 827
FRAG 0 0 0
ulimit -n
1024
ss -ntu | awk '{print $5}' | cut -d: -f1 | sort | uniq -c | sort -n
1 Local
6 192.XXX.XXX.XXX
100 127.0.0.1
340 10.0.0.XX
866 [
ss -ntu | awk '{print $6}' | cut -d: -f1 | sort | uniq -c | sort -n
..........
This lists the number of connections per IP. Besides 127.0.0.1 and [ (truncated IPv6 addresses), there are 2 IPs with over 50 connections.
74 104.xxx.xxx.xxx
91 12.xxx.xxx.xxx
MySQL
No processes running more than a second. Number of processes well within limits.
I do not know what stats would be relevant beyond these in diagnosing network rate limiting issues. Any pointers would be appreciated.
EDITED
CPU
lscpu https://pastebin.com/Jha6F7J8
Apache Config
apachectl -t -D DUMP_RUN_CFG https://pastebin.com/i1L2hnjH
Mysql
SHOW GLOBAL STATUS https://pastebin.com/aQX4D01k
SHOW GLOBAL VARIABLES https://pastebin.com/L8EfmHfn
SHOW FULL PROCESSLIST https://pastebin.com/GtqK2tET
mysqltuner https://pastebin.com/GLhhKA9q
Optional Very Helpful Information
top -bn1 https://pastebin.com/r94vpXe6
iostat -xm 5 3 https://pastebin.com/R8YLK3QU
ulimit -a https://pastebin.com/KUC3wqxU
Dorothy, your system is very busy with activity. Not knowing the frequency and duration of the intermittent hangs puts us at a disadvantage. One possible cause is com_drop_table, which had 3,318 uses in your 83 days of uptime. Another possible cause is the volume of data read and written: it appears innodb_data_written was 484TB in 83 days, yet MySQLTuner reports only 800K of data in 10 tables. Our General Log Analysis could likely identify the cause of this high activity. These suggestions are a starting effort; more analysis and changes should follow.
From your OS command prompt,
ulimit -n 96000 would enable many more Open Files (handles) above today's 1024 limit.
This is a dynamic operation in Linux and does not require an OS restart to take effect.
For this change to persist across OS stop/start, the following URL can be used as a guide.
Please use 96000, not the 500000 from their example documentation.
https://glassonionblog.wordpress.com/2013/01/27/increase-ulimit-and-file-descriptors-limit/
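For example, a persistent nofile limit usually lands in /etc/security/limits.conf; a minimal sketch (the apache user name is an assumption - use the account that actually runs your services):
# /etc/security/limits.conf
apache  soft  nofile  96000
apache  hard  nofile  96000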
Rate Per Second = RPS
Suggestions to consider for your my.cnf [mysqld] section
innodb_io_capacity=1900 # from 200 if you have SSD, 900 if you have magnetic storage to improve IOPS
net_buffer_length=32K # from 16K to reduce malloc operations
innodb_lru_scan_depth=100 # from 1024 to conserve 90% of CPU cycles used for function
key_cache_segments=16 # from 0 to reduce mutex contention with MyISAM opens
key_cache_division_limit=50 # from 100 for Hot/Warm storage to reduce key_page_reads RPS of 18
aria_pagecache_division_limit=50 # from 100 for Hot/Warm storage to reduce aria_pagecache_reads RPS of 5K
read_rnd_buffer_size=64K # from 256K to reduce handler_read_rnd_next RPS of 27,707
These changes should reduce elapsed time to complete most queries.
Additional areas to consider include the use of Slow Query Log analysis to find where an index could avoid a table scan. MySQLTuner reported more than 4 million joins performed without indexes. Our FAQ page includes information on how you could find the tables needing indexes to avoid scans. Let us know how these suggestions work for you.
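If the Slow Query Log is not yet enabled, a hedged starting point for the my.cnf [mysqld] section (the log path and the 2-second threshold are assumptions to adjust):
slow_query_log=1
slow_query_log_file=/var/log/mysql/slow.log
long_query_time=2 # seconds; lower it once you see the capture volume
log_queries_not_using_indexes=1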
Skype Talk works very well if you have the flexibility to use that form of communication.
I have ssh access to a list of ~20 machines. I need to find the load status for all of them in a list. The program 'top' does a good job giving info on the machine status in its header.
Example:
top - 13:29:53 up 107 days, 20:13, 47 users, load average: 3.80, 3.74, 3.62
Tasks: 794 total, 2 running, 787 sleeping, 3 stopped, 2 zombie
Cpu(s): 2.6%us, 0.8%sy, 0.0%ni, 84.7%id, 11.9%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 99055876k total, 47947572k used, 51108304k free, 697684k buffers
Swap: 26148860k total, 17145136k used, 9003724k free, 35844820k cached
Today I manually ssh into each machine, run 'top', copy the data, and store it. I was wondering if this task could be automated. I found out that ssh can take a unix command as an argument to be executed on the remote machine. But how do I capture the output from 'top'? Or is there a batch tool giving the same header output? It would be great to have just one script that builds the table for me.
Thanks,
Gert
For Ubuntu:
[12:15 AM] borlaze@mac: /tmp $ ssh USER@HOST 'top -b -n 1 | head -n 5' >123.txt
[12:15 AM] borlaze@mac: /tmp $ cat 123.txt
top - 00:16:06 up 35 days, 10:58, 1 user, load average: 0,34, 0,36, 0,29
Tasks: 277 total, 1 running, 274 sleeping, 0 stopped, 2 zombie
%Cpu(s): 7,1 us, 5,7 sy, 0,0 ni, 87,0 id, 0,1 wa, 0,0 hi, 0,0 si, 0,0 st
KiB Mem : 24671340 total, 1066056 free, 12822724 used, 10782560 buff/cache
KiB Swap: 16756732 total, 16094308 free, 662424 used. 11208916 avail Mem
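To cover all ~20 machines, the same one-liner can be looped over a host list; a minimal sketch (hosts.txt with one host per line and passwordless key-based login are assumptions):
$ for h in $(cat hosts.txt); do echo "== $h =="; ssh "$h" 'top -b -n 1 | head -n 5'; done > loads.txt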
One of my redis servers is repeatedly going down today without any overt, diagnosable cause. My users all end up getting Error 111 connecting to unix socket: /var/run/redis/redis2.sock. Connection refused errors.
Looking into the logs at /var/log/redis, the last few lines capture nothing more nefarious than a scheduled backup:
[8248] 09 Mar 07:48:17.090 * 10 changes in 21600 seconds. Saving...
[8248] 09 Mar 07:48:17.374 * Background saving started by pid 47613
[47613] 09 Mar 07:51:02.257 * DB saved on disk
[47613] 09 Mar 07:51:02.486 * RDB: 526 MB of memory used by copy-on-write
[8248] 09 Mar 07:51:02.920 * Background saving terminated with success
The pid file still exists too, which implies the server wasn't formally shut down and Redis was still daemonized?
I logged into my system and did sudo service redis-server restart twice to get it up and running. Apart from these logs, how else can I diagnose what might have gone wrong?
Update: I noticed that at the time of the first crash, disk swapping started taking place. This hasn't happened before. Moreover, cat /proc/sys/vm/swappiness confirms swappiness is set to 2.
free -m shows (after normal operation):
total used free shared buffers cached
Mem: 28136 27015 1120 305 80 6586
-/+ buffers/cache: 20349 7787
Swap: 1023 991 32
free -m shows (after the redis server goes down):
total used free shared buffers cached
Mem: 28136 8770 19365 305 60 441
-/+ buffers/cache: 8268 19868
Swap: 1023 1022 1
This sounds like the work of the OS's OOM killer - you can verify or discredit that hypothesis by reviewing /var/log/syslog.
In this case, the persistence job's overhead triggered the killer. You need to provision for that by setting maxmemory and allocating enough RAM to accommodate persistence's requirements, including copy-on-write (COW) during background saves.
Note that free isn't useful after the fact - you need to monitor your resources continuously.
As for swap, if you don't care about latency then you can certainly rely on it.
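A quick way to check the OOM-killer hypothesis is to grep the kernel messages, e.g.:
$ grep -iE 'oom|out of memory' /var/log/syslog
$ dmesg | grep -i oom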
I have a 20GB+ rdb dump in production.
I suspect there's a specific set of keys bloating it.
I'd like to have a way to always spot the first 100 biggest objects, either from static dump analysis or by asking the server itself, which by the way has over 7M objects.
Dump analysis tools like rdbtools are not helpful in this (I think) really common use case!
I was thinking to write a script and iterate the whole keyset with "redis-cli debug object", but I have the feeling there must be some tool I'm missing.
An option was added to redis-cli: redis-cli --bigkeys
Sample output based on https://gist.github.com/michael-grunder/9257326
$ ./redis-cli --bigkeys
# Press ctrl+c when you have had enough of it... :)
# You can use -i 0.1 to sleep 0.1 sec every 100 sampled keys
# in order to reduce server load (usually not needed).
Biggest string so far: day:uv:483:1201737600, size: 2
Biggest string so far: day:pv:2013:1315267200, size: 3
Biggest string so far: day:pv:3:1290297600, size: 5
Biggest zset so far: day:topref:2734:1289433600, size: 3
Biggest zset so far: day:topkw:2236:1318723200, size: 7
Biggest zset so far: day:topref:651:1320364800, size: 20
Biggest string so far: uid:3467:auth, size: 32
Biggest set so far: uid:3029:allowed, size: 1
Biggest list so far: last:175, size: 51
-------- summary -------
Sampled 329 keys in the keyspace!
Total key length in bytes is 15172 (avg len 46.12)
Biggest list found 'day:uv:483:1201737600' has 5235597 items
Biggest set found 'day:uvx:555:1201737600' has 47 members
Biggest hash found 'day:uvy:131:1201737600' has 2888 fields
Biggest zset found 'day:uvz:777:1201737600' has 1000 members
0 strings with 0 bytes (00.00% of keys, avg size 0.00)
19 lists with 5236744 items (05.78% of keys, avg size 275618.11)
50 sets with 112 members (15.20% of keys, avg size 2.24)
250 hashs with 6915 fields (75.99% of keys, avg size 27.66)
10 zsets with 1294 members (03.04% of keys, avg size 129.40)
redis-rdb-tools does have a memory report that does exactly what you need. It generates a CSV file with memory used by every key. You can then sort it and find the Top x keys.
There is also an experimental memory profiler that started to do what you need. It's not yet complete, and so isn't documented. But you can try it - https://github.com/sripathikrishnan/redis-rdb-tools/tree/master/rdbtools/cli. And of course, I'd encourage you to contribute as well!
Disclaimer: I am the author of this tool.
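For reference, the memory report is generated like this; a sketch where the dump path is a placeholder (the CSV's 4th column is size_in_bytes per the tool's docs - verify on your version, and note keys containing commas would break the naive column split):
$ rdb -c memory /var/redis/6379/dump.rdb > memory_report.csv
$ sort -t, -k4 -n -r memory_report.csv | head -100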
I am pretty new to bash scripting. I came up with this:
for line in $(redis-cli keys '*' | awk '{print $1}'); do echo "$(redis-cli DEBUG OBJECT "$line" | awk '{print $5}' | sed 's/serializedlength://g') $line"; done | sort -h
This script:
Lists all the keys with redis-cli keys "*"
Gets each key's size with redis-cli DEBUG OBJECT
Sorts the output by the size prepended to each key name
This may be very slow because bash loops through every single Redis key. With 7M keys, you may want to cache the output of the KEYS command to a file first.
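A variant of the same idea that avoids blocking the server with KEYS, using SCAN through redis-cli (a sketch assuming key names contain no spaces; note DEBUG OBJECT's serializedlength is the serialized size, not the exact RAM usage):
$ redis-cli --scan | while read -r key; do echo "$(redis-cli DEBUG OBJECT "$key" | awk '{print $5}' | sed 's/serializedlength://') $key"; done | sort -n | tail -100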
If you have keys that follow the pattern "A:B" or "A:B:*", I wrote a tool that analyzes existing content and also monitors things such as hit rate, number of gets/sets, network traffic, lifetime, etc. The output is similar to the one below.
https://github.com/alexdicianu/redis_toolkit
$ ./redis-toolkit report -type memory -name NAME
+----------------------------------------+----------+-----------+----------+
| KEY | NR KEYS | SIZE (MB) | SIZE (%) |
+----------------------------------------+----------+-----------+----------+
| posts:* | 500 | 0.56 | 2.79 |
| post_meta:* | 440 | 18.48 | 92.78 |
| terms:* | 192 | 0.12 | 0.63 |
| options:* | 109 | 0.52 | 2.59 |
Try redis-memory-analyzer (RMA) - a console tool that scans the Redis key space in real time and aggregates memory usage statistics by key pattern. You can use this tool on production servers without a maintenance window. It shows detailed statistics about each key pattern in your Redis server.
You can also scan the Redis DB by all or selected Redis types such as "string", "hash", "list", "set" and "zset". Matching by pattern is also supported.
RMA also tries to discern key names by pattern: for example, if you have keys like 'user:100' and 'user:101', the application will pick out the common pattern 'user:*' in its output, so you can analyze the most memory-hungry data in your instance.
I am using Redis for caching, but recently I ran into a problem with the amount of memory used - I had to restart my server since all RAM had been consumed.
It's not the biggest machine, but how should I configure Redis to avoid the same problem again?
free -m
total used free shared buffers cached
Mem: 240 222 17 0 6 38
-/+ buffers/cache: 177 62
Swap: 255 46 209
I have changed the following settings:
timeout 60
databases 1
save 300 1
save 60 100
maxmemory 104857600
top
top - 14:15:28 up 1:19, 1 user, load average: 0.00, 0.00, 0.00
Tasks: 49 total, 1 running, 48 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 245956k total, 228420k used, 17536k free, 6916k buffers
Swap: 262136k total, 47628k used, 214508k free, 39540k cached
you can use the "maxmemory" directive in the config file: when this amount of memory is exceeded then Redis will expire earlier keys having already an expire set (the keys that would expire sooner are the first that will be removed).
Unlike memcached, Redis is meant to be a database, so by default it won't automatically remove old values to make room for new ones.
You have to explicitly set the expire time for each key/value, and even then you could overflow if you create key/values faster than they expire.
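That said, newer Redis versions (2.2 and later) can behave like memcached for pure caching workloads by pairing maxmemory with an eviction policy; a minimal redis.conf sketch (the 100mb figure is an arbitrary assumption - leave headroom for background saves and client buffers):
maxmemory 100mb
maxmemory-policy allkeys-lru
# allkeys-lru may evict any key, least recently used first, no TTLs needed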
Use Redis virtual memory in Redis 2.0:
http://antirez.com/post/redis-virtual-memory-story.html