Redis CLUSTER NODES showing in slowlog - redis

I am using a Redis cluster with 3 masters and 3 slaves as a MySQL cache, and the client is Redisson with the @Cacheable annotation. But I found some slow log entries for the CLUSTER NODES command, like:
3) 1) (integer) 4
2) (integer) 1573033128
3) (integer) 10955
4) 1) "CLUSTER"
2) "NODES"
5) "192.168.110.102:57172"
6) ""
4) 1) (integer) 3
2) (integer) 1573032928
3) (integer) 10120
4) 1) "CLUSTER"
2) "NODES"
5) "192.168.110.90:59456"
6) ""
So I want to know: what is the problem?
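For anyone debugging the same pattern, here is a minimal sketch (assuming redis-py and a reachable cluster node; host and port are placeholders) that prints the configured slowlog threshold and lists the slowlog entries for CLUSTER commands together with the client address that issued them:

import redis

# Replace host/port with one of your own cluster nodes.
r = redis.Redis(host="localhost", port=6379)

# Threshold in microseconds above which commands are logged.
print(r.config_get("slowlog-log-slower-than"))

for entry in r.slowlog_get(32):
    if entry["command"].startswith(b"CLUSTER"):
        # client_address is only present on newer Redis/redis-py versions.
        print(entry["duration"], entry.get("client_address"))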

Related

Redis search with index giving inconsistent results

We are not able to get consistent search results from Redis.
Redis server v=7.0.7 sha=00000000:0 malloc=jemalloc-5.2.1 bits=64 build=2260280010e18db8
root@redis01:~# redis-cli
127.0.0.1:6379> info modules
module:name=ReJSON,ver=20008,api=1,filters=0,usedby=[search],using=[],options=[handle-io-errors]
module:name=search,ver=20405,api=1,filters=0,usedby=[],using=[ReJSON],options=[]
We have exactly 5022786 key/value pairs, and the same number of entries is in our 'idx36' index. There is no incoming traffic, so this dataset of 5022786 entries remains constant all the time.
127.0.0.1:6379> info keyspace
db0:keys=5022786,expires=5022786,avg_ttl=257039122
127.0.0.1:6379> ft.info idx36
1) index_name
2) idx36
3) index_options
4) 1) "NOFREQS"
5) index_definition
6) 1) key_type
2) HASH
3) prefixes
4) 1) 36|
5) default_score
6) "1"
7) attributes
8) 1) 1) identifier
2) CheckIn
3) attribute
4) CheckIn
5) type
6) NUMERIC
2) 1) identifier
2) HotelCode
3) attribute
4) HotelCode
5) type
6) TAG
7) SEPARATOR
8)
9) num_docs
10) "5022786"
11) max_doc_id
12) "12729866"
13) num_terms
14) "0"
15) num_records
16) "1.8446744073526942e+19"
17) inverted_sz_mb
18) "162.96730041503906"
19) vector_index_sz_mb
20) "0"
21) total_inverted_index_blocks
22) "17965200"
23) offset_vectors_sz_mb
24) "0"
25) doc_table_size_mb
26) "1843.12548828125"
27) sortable_values_size_mb
28) "0"
29) key_table_size_mb
30) "194.14376831054688"
31) records_per_doc_avg
32) "3672612012032"
33) bytes_per_record_avg
34) "9.2636185181071973e-12"
35) offsets_per_term_avg
36) "0"
37) offset_bits_per_record_avg
38) "-nan"
39) hash_indexing_failures
40) "333006"
41) indexing
42) "0"
43) percent_indexed
44) "1"
45) gc_stats
46) 1) bytes_collected
2) "264321009"
3) total_ms_run
4) "1576438"
5) total_cycles
6) "5"
7) average_cycle_time_ms
8) "315287.59999999998"
9) last_run_time_ms
10) "706929"
11) gc_numeric_trees_missed
12) "0"
13) gc_blocks_denied
14) "23251"
47) cursor_stats
48) 1) global_idle
2) (integer) 0
3) global_total
4) (integer) 0
5) index_capacity
6) (integer) 128
7) index_total
8) (integer) 0
This index has a tag called 'HotelCode' and a numeric field called 'CheckIn'.
Now we try to search all entries that contain the string 'AO-B0' within 'HotelCode' (should be all):
127.0.0.1:6379> ft.search idx36 '(#HotelCode:{AO\-B0})' limit 0 0
1) (integer) 2708499
And now try to search all entries that don't contain the string 'AO-B0' within 'HotelCode' (should be 0):
127.0.0.1:6379> ft.search idx36 '-(#HotelCode:{AO\-B0})' limit 0 0
1) (integer) 0
But these two searches don't add up to the total number of entries. Even if I'm wrong and not all entries contain the 'AO-B0' string, if I repeat the first search the result changes every time:
127.0.0.1:6379> ft.search idx36 '(#HotelCode:{AO\-B0})' limit 0 0
1) (integer) 2615799
(0.50s)
127.0.0.1:6379> ft.search idx36 '(#HotelCode:{AO\-B0})' limit 0 0
1) (integer) 2442799
(0.50s)
127.0.0.1:6379> ft.search idx36 '(#HotelCode:{AO\-B0})' limit 0 0
1) (integer) 2626299
(0.50s)
127.0.0.1:6379> ft.search idx36 '(#HotelCode:{AO\-B0})' limit 0 0
1) (integer) 2694899
(0.50s)
127.0.0.1:6379> ft.search idx36 '(#HotelCode:{AO\-B0})' limit 0 0
1) (integer) 2516699
(0.50s)
If I now try this less restrictive search, I should get more entries ... but no:
127.0.0.1:6379> ft.search idx36 '#HotelCode:{AO\-B*}' limit 0 0
1) (integer) 1806899
Maybe I'm doing something wrong ... if someone can point me in the right direction ...
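A minimal sketch (assuming redis-py >= 4.x, which bundles the RediSearch commands, and a local instance) that repeats the counting query with the standard @field:{tag} syntax, so the drifting totals can be reproduced outside redis-cli:

import redis
from redis.commands.search.query import Query

r = redis.Redis(host="localhost", port=6379)
q = Query("@HotelCode:{AO\\-B0}").paging(0, 0)  # LIMIT 0 0: return only the count

for _ in range(3):
    # On a stable index this should print the same total every time.
    print(r.ft("idx36").search(q).total)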

Redis increment keyN or key:N

In Redis, a counter value can be stored and incremented, like:
127.0.0.1:6379> set _inc 0
OK
127.0.0.1:6379> INCR _inc
(integer) 1
127.0.0.1:6379> INCR _inc
(integer) 2
127.0.0.1:6379> get _inc
"2"
or we can save items like
item:UNIQUE-ID
item:UNI-QUE-ID
But how can I save items with an auto-incrementing numeric ID, like:
item:1
item:2
item:3
item:4
...
So far I have found a solution with a Lua script:
127.0.0.1:6379> eval 'return redis.call("set", "item:" .. redis.call("incr","itemNCounter"), "item value")' 0
OK
...
127.0.0.1:6379> keys item:*
1) "item:10"
2) "item:14"
3) "item:13"
4) "item:6"
5) "item:15"
6) "item:9"
7) "item:4"
8) "item:1"
9) "item:5"
10) "item:3"
11) "item:12"
12) "item:7"
13) "item:8"
14) "item:11"
15) "item:2"
Question: Is there a way to do this without running a Lua script, or another reliable method?
I expect that there would be a Redis command for it.
Question: Is there a way to do this without running a Lua script, or another reliable method?
No, there isn't. However, EVAL has been supported since Redis 2.6, and Lua scripts are first-class citizens in Redis.
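As a sketch of the non-Lua alternative (assuming redis-py and a local instance): INCR on the counter key is atomic and hands out unique IDs even with concurrent clients, and a plain SET then stores the item under item:<N>. What the two separate commands lack, and what the EVAL version above provides, is atomicity of the pair.

import redis

r = redis.Redis()

item_id = r.incr("itemNCounter")        # atomic, returns the new counter value
r.set(f"item:{item_id}", "item value")  # store the item under item:<N>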

How to parse redis slowlog

I am trying to parse the Redis slowlog into a file in CSV format (comma, colon or space as delimiter), but I am not sure how to do that.
If I run redis-cli -h <ip> -p 6379 slowlog get 2, I get the output below:
1) 1) (integer) 472141
2) (integer) 1625198930
3) (integer) 11243
4) 1) "ZADD"
2) "key111111"
3) "16251.8488247"
4) "abcd"
5) "1.2.3.4:89"
6) ""
2) 1) (integer) 37214
2) (integer) 1525198930
3) (integer) 1024
4) 1) "ZADD"
2) "key2"
3) "1625.8"
5) "1.2.3.4:89"
6) "client-name"
Note that item 4) of each log entry may contain a different number of arguments, e.g. item 4) of log entry 1) has 4 arguments, while item 4) of log entry 2) has 3 arguments; and item 6) can be a string like client-name or can be empty.
If I run the command using the shell script below:
results=$(redis-cli -h <ip> -p $port slowlog get 2)
echo $results
I get the following output:
472141 1625198930 11243 ZADD key111111 16251.8488247 abcd 1.2.3.4:89 37214 1525198930 1024 ZADD key2 1625.8 1.2.3.4:89 client-name
As you can see, the output of the command becomes a stream of words, and it is hard to figure out which group of words belongs to the same log entry. What I want is a CSV file like:
472141 1625198930 11243 ZADD key111111 16251.8488247 abcd 1.2.3.4:89
37214 1525198930 1024 ZADD key2 1625.8 1.2.3.4:89 client-name
Is there any way I can parse the Redis slowlog into a CSV file like this? Any script (Python, shell, etc.) is welcome, and any existing code is welcome.
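A minimal sketch in Python (assuming redis-py; the client_address/client_name fields are only returned by newer Redis and redis-py versions, hence the .get() calls) that writes one space-delimited row per slowlog entry:

import csv
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

with open("slowlog.csv", "w", newline="") as f:
    writer = csv.writer(f, delimiter=" ")
    for entry in r.slowlog_get(128):
        writer.writerow([
            entry["id"],
            entry["start_time"],
            entry["duration"],
            *entry["command"].split(),       # the command and all of its arguments
            entry.get("client_address", ""),
            entry.get("client_name", ""),
        ])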

Strange order with redis list

We used a Redis list in a Spring web application with the LPUSH/RPOP commands and expected it to behave as a FIFO queue. But it didn't! It pops a seemingly random element from the list, as you can see from the following LRANGE output.
127.0.0.1:6379> lrange word_getprice_queue:2520e2df-6771-4ee0-8cea-f6b2c68019b3 0 -1
1) "ef682e35-aea8-4cb4-bd32-26b52d7943e0"
2) "83f4ff87-0a8e-4631-8f2c-7785b298077b"
3) "99fdb591-d2ed-4bef-b85e-38ee42dbe8ef"
4) "9527ca7e-b6e7-4d7f-93c2-d3b59bb1aacc"
5) "4ad6a66e-c727-4373-8e81-82e330adba92"
6) "23f201b4-02c6-4385-9080-bd0a6b21bdc8"
7) "3c9b6876-e3ba-481a-8012-f0b364830bfd"
8) "0c00e8f6-5de4-4685-bee1-cec4eca4b546"
9) "bb6b87b0-05e9-4a8b-9617-060f32963f68"
10) "1048e02f-0bbd-4130-b94e-ab658d77d7c6"
127.0.0.1:6379> lrange word_getprice_queue:2520e2df-6771-4ee0-8cea-f6b2c68019b3 0 -1
1) "3c9b6876-e3ba-481a-8012-f0b364830bfd"
2) "0c00e8f6-5de4-4685-bee1-cec4eca4b546"
3) "bb6b87b0-05e9-4a8b-9617-060f32963f68"
4) "1048e02f-0bbd-4130-b94e-ab658d77d7c6"
5) "ef682e35-aea8-4cb4-bd32-26b52d7943e0"
6) "83f4ff87-0a8e-4631-8f2c-7785b298077b"
7) "99fdb591-d2ed-4bef-b85e-38ee42dbe8ef"
8) "9527ca7e-b6e7-4d7f-93c2-d3b59bb1aacc"
9) "4ad6a66e-c727-4373-8e81-82e330adba92"
127.0.0.1:6379> lrange word_getprice_queue:2520e2df-6771-4ee0-8cea-f6b2c68019b3 0 -1
1) "bb6b87b0-05e9-4a8b-9617-060f32963f68"
2) "1048e02f-0bbd-4130-b94e-ab658d77d7c6"
3) "ef682e35-aea8-4cb4-bd32-26b52d7943e0"
4) "83f4ff87-0a8e-4631-8f2c-7785b298077b"
5) "99fdb591-d2ed-4bef-b85e-38ee42dbe8ef"
6) "9527ca7e-b6e7-4d7f-93c2-d3b59bb1aacc"
7) "3c9b6876-e3ba-481a-8012-f0b364830bfd"
8) "0c00e8f6-5de4-4685-bee1-cec4eca4b546"
I've updated Redis to 3.0.7 and Jedis to 2.4.2, but with no luck.
Usage of the Redis list
TaskPusherGetPriceWord, TaskGet5Price and TaskPusherWordCount are three Spring scheduled tasks. TaskPusherGetPriceWord pushes words into the queue, TaskGet5Price pops those words from the queue, and TaskPusherWordCount just empties the queue if something happens. These are all the calls that manipulate the queue in the project.
<task:scheduled-tasks>
<task:scheduled ref="taskPusherGetPriceWord" method="doTask" fixed-delay="300000" />
</task:scheduled-tasks>
<task:scheduled-tasks>
<task:scheduled ref="taskGet5Price" method="doTask" fixed-delay="5" />
</task:scheduled-tasks>
<task:scheduled-tasks>
<task:scheduled ref="taskPusherWordCount" method="doTask" fixed-delay="60000" />
</task:scheduled-tasks>
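For reference, a minimal sketch (assuming redis-py and a local instance; the key name is a made-up demo key) of the expected behaviour: LPUSH adds at the head and RPOP removes from the tail, so a single producer/consumer pair sees strict FIFO order. If the observed order still changes between LRANGE calls, some other code path is writing to or clearing the same key in between.

import redis

r = redis.Redis(decode_responses=True)
key = "word_getprice_queue:demo"   # hypothetical demo key

r.delete(key)
for word in ["w1", "w2", "w3"]:
    r.lpush(key, word)             # producer side

print([r.rpop(key) for _ in range(3)])   # consumer side -> ['w1', 'w2', 'w3']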

Why are these statements slow in Redis?

I have got the following slow query log in Redis. I have disabled writing to disk, so the database is an in-memory database. I am not able to understand why these two queries are slow.
FYI:
I have a total of 462698 hashes, with the key pattern key:<numeric_number>.
1) 1) (integer) 34
2) (integer) 1364981115
3) (integer) 10112
4) 1) "HMGET"
2) "key:123"
3) "is_working"
6) 1) (integer) 29
2) (integer) 1364923711
3) (integer) 87705
4) 1) "HMSET"
2) "key:538771"
3) "status_app"
4) ".. (122246 more bytes)"