We used a Redis list in a Spring web application with the LPUSH/RPOP commands and expected it to behave as a FIFO queue, but it didn't: elements appear to be popped from random positions in the list. You can see that in the following LRANGE output.
127.0.0.1:6379> lrange word_getprice_queue:2520e2df-6771-4ee0-8cea-f6b2c68019b3 0 -1
1) "ef682e35-aea8-4cb4-bd32-26b52d7943e0"
2) "83f4ff87-0a8e-4631-8f2c-7785b298077b"
3) "99fdb591-d2ed-4bef-b85e-38ee42dbe8ef"
4) "9527ca7e-b6e7-4d7f-93c2-d3b59bb1aacc"
5) "4ad6a66e-c727-4373-8e81-82e330adba92"
6) "23f201b4-02c6-4385-9080-bd0a6b21bdc8"
7) "3c9b6876-e3ba-481a-8012-f0b364830bfd"
8) "0c00e8f6-5de4-4685-bee1-cec4eca4b546"
9) "bb6b87b0-05e9-4a8b-9617-060f32963f68"
10) "1048e02f-0bbd-4130-b94e-ab658d77d7c6"
127.0.0.1:6379> lrange word_getprice_queue:2520e2df-6771-4ee0-8cea-f6b2c68019b3 0 -1
1) "3c9b6876-e3ba-481a-8012-f0b364830bfd"
2) "0c00e8f6-5de4-4685-bee1-cec4eca4b546"
3) "bb6b87b0-05e9-4a8b-9617-060f32963f68"
4) "1048e02f-0bbd-4130-b94e-ab658d77d7c6"
5) "ef682e35-aea8-4cb4-bd32-26b52d7943e0"
6) "83f4ff87-0a8e-4631-8f2c-7785b298077b"
7) "99fdb591-d2ed-4bef-b85e-38ee42dbe8ef"
8) "9527ca7e-b6e7-4d7f-93c2-d3b59bb1aacc"
9) "4ad6a66e-c727-4373-8e81-82e330adba92"
127.0.0.1:6379> lrange word_getprice_queue:2520e2df-6771-4ee0-8cea-f6b2c68019b3 0 -1
1) "bb6b87b0-05e9-4a8b-9617-060f32963f68"
2) "1048e02f-0bbd-4130-b94e-ab658d77d7c6"
3) "ef682e35-aea8-4cb4-bd32-26b52d7943e0"
4) "83f4ff87-0a8e-4631-8f2c-7785b298077b"
5) "99fdb591-d2ed-4bef-b85e-38ee42dbe8ef"
6) "9527ca7e-b6e7-4d7f-93c2-d3b59bb1aacc"
7) "3c9b6876-e3ba-481a-8012-f0b364830bfd"
8) "0c00e8f6-5de4-4685-bee1-cec4eca4b546"
I've updated Redis to 3.0.7 and Jedis to 2.4.2, but had no luck.
Usage of the Redis list
TaskPusherGetPriceWord, TaskGet5Price, and TaskPusherWordCount are three Spring scheduled tasks. TaskPusherGetPriceWord pushes words into the queue, TaskGet5Price pops those words from the queue, and TaskPusherWordCount empties the queue under certain conditions. You can see all the calls that manipulate the queue in the project in the picture above.
<task:scheduled-tasks>
<task:scheduled ref="taskPusherGetPriceWord" method="doTask" fixed-delay="300000" />
</task:scheduled-tasks>
<task:scheduled-tasks>
<task:scheduled ref="taskGet5Price" method="doTask" fixed-delay="5" />
</task:scheduled-tasks>
<task:scheduled-tasks>
<task:scheduled ref="taskPusherWordCount" method="doTask" fixed-delay="60000" />
</task:scheduled-tasks>
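For reference, LPUSH plus RPOP is the standard FIFO pattern: LPUSH prepends to the head of the list and RPOP removes from the tail, so as long as only those two commands touch the key, elements come out in insertion order. Here is a pure-Python stand-in for those semantics (no Redis server needed; the FifoQueue class is made up for this illustration):

```python
from collections import deque

class FifoQueue:
    """In-memory model of Redis LPUSH/RPOP semantics.

    LPUSH prepends to the head of the list; RPOP removes from the
    tail, so the oldest element is popped first (FIFO) as long as
    only these two commands manipulate the list.
    """
    def __init__(self):
        self._items = deque()

    def lpush(self, value):
        self._items.appendleft(value)   # new elements go to the head
        return len(self._items)         # LPUSH returns the new length

    def rpop(self):
        # RPOP returns nil (None here) when the list is empty
        return self._items.pop() if self._items else None
```

If the members of the real list change their relative order between two LRANGE calls, something other than plain LPUSH/RPOP must be touching the key (for example, popped items being re-pushed), since these two commands alone preserve insertion order.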
In Redis, we can store a counter and increment its value with INCR, like:
127.0.0.1:6379> set _inc 0
OK
127.0.0.1:6379> INCR _inc
(integer) 1
127.0.0.1:6379> INCR _inc
(integer) 2
127.0.0.1:6379> get _inc
"2"
or we can save items under arbitrary unique IDs, like:
item:UNIQUE-ID
item:UNI-QUE-ID
But how do we save items with auto-incrementing numeric IDs, like:
item:1
item:2
item:3
item:4
...
So far, the only solution I have found uses a Lua script:
127.0.0.1:6379> eval 'return redis.call("set", "item:" .. redis.call("incr","itemNCounter"), "item value")' 0
OK
...
127.0.0.1:6379> keys item:*
1) "item:10"
2) "item:14"
3) "item:13"
4) "item:6"
5) "item:15"
6) "item:9"
7) "item:4"
8) "item:1"
9) "item:5"
10) "item:3"
11) "item:12"
12) "item:7"
13) "item:8"
14) "item:11"
15) "item:2"
Question: Is there a way to do this without running a Lua script, or another reliable method?
I expected there would be a Redis command for it.
Question: Is there a way to do this without running a Lua script, or another reliable method?
No, there isn't. However, EVAL has been supported since Redis 2.6, and Lua scripts are first-class citizens in Redis.
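For illustration, the pattern the script implements is simply "INCR a counter key, then SET item:&lt;n&gt;"; the Lua wrapper exists to make those two steps execute atomically on the server. A minimal in-memory sketch of the pattern (the ItemStore class and its method names are made up for this example; in real Redis the atomicity comes from the script, not from Python):

```python
import itertools

class ItemStore:
    """In-memory stand-in for the Redis pattern:
    INCR "itemNCounter", then SET "item:<n>" to the value.
    A single method plays the role the Lua script plays in Redis,
    namely making the two steps one atomic operation."""
    def __init__(self):
        self._counter = itertools.count(1)  # models INCR starting at 1
        self._data = {}

    def add_item(self, value):
        n = next(self._counter)
        key = f"item:{n}"       # keys come out as item:1, item:2, ...
        self._data[key] = value
        return key
```

Done as two separate client calls (INCR, then SET) the pattern still works, but two concurrent clients could interleave between the calls; the script avoids that window.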
I am trying to parse the Redis slowlog into a file in CSV format (with comma, colon, or space as the delimiter), but I am not sure how to do that.
If I run redis-cli -h <ip> -p 6379 slowlog get 2, I get below output:
1) 1) (integer) 472141
   2) (integer) 1625198930
   3) (integer) 11243
   4) 1) "ZADD"
      2) "key111111"
      3) "16251.8488247"
      4) "abcd"
   5) "1.2.3.4:89"
   6) ""
2) 1) (integer) 37214
   2) (integer) 1525198930
   3) (integer) 1024
   4) 1) "ZADD"
      2) "key2"
      3) "1625.8"
   5) "1.2.3.4:89"
   6) "client-name"
Note that item 4) of each log entry may contain a different number of arguments (e.g. item 4) of log entry 1) has 4 arguments, while item 4) of log entry 2) has 3), and item 6) can be a string like client-name or can be empty.
If I run the command using below shell script:
results=$(redis-cli -h <ip> -p $port slowlog get 2)
echo $results
I get below output:
472141 1625198930 11243 ZADD key111111 16251.8488247 abcd 1.2.3.4:89 37214 1525198930 1024 ZADD key2 1625.8 1.2.3.4:89 client-name
As you can see, the output collapses into a stream of words, and it is hard to tell which words belong to the same log entry. What I want is a CSV file like:
472141 1625198930 11243 ZADD key111111 16251.8488247 abcd 1.2.3.4:89
37214 1525198930 1024 ZADD key2 1625.8 1.2.3.4:89 client-name
Is there any way I can parse the Redis slowlog into a CSV file like this? Any script (Python, shell, etc.) and any existing code are welcome.
A Redis sorted set (ZSET) stores (member, score) pairs ordered by score, while a Redis SET is an unordered collection of unique strings.
What I need is a method that returns the members of a sorted set matching a pattern, as ZRANGEBYLEX does, but for members with different scores.
Is it possible at all with Redis?
Well, it seems I did not know about the SCAN family of commands. ZSCAN solves this issue, but at cost O(N), where N is the number of items in the sorted set, because it iterates over the whole set.
Example of elements in the set:
127.0.0.1:6379> zrange l 0 -1 WITHSCORES
1) "foodgood:irene"
2) "1"
3) "foodgood:pep"
4) "1"
5) "sleep:irene"
6) "1"
7) "sleep:juan"
8) "1"
9) "sleep:pep"
10) "1"
11) "sleep:toni"
12) "1"
13) "foodgood:juan"
14) "2"
Now run ZSCAN for members with the prefix foodgood::
127.0.0.1:6379> ZSCAN l 0 match foodgood:*
1) "0"
2) 1) "foodgood:irene"
2) "1"
3) "foodgood:pep"
4) "1"
5) "foodgood:juan"
6) "2"
The first returned value is the cursor; when it is "0", the collection has been completely explored.
What I would have liked is for this to be O(log(N) + M), where M is the number of elements retrieved, similar to a binary search tree.
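For comparison, the O(log(N) + M) prefix lookup asked for here is exactly what a sorted array (or balanced BST) provides, and it is what ZRANGEBYLEX gives you when all members share the same score. A pure-Python sketch using the stdlib bisect module (the function name and data are illustrative):

```python
import bisect

def prefix_range(sorted_members, prefix):
    """Return all members starting with `prefix` in O(log N + M):
    two binary searches locate the range, then one slice extracts it."""
    lo = bisect.bisect_left(sorted_members, prefix)
    # "\uffff" sorts after any character likely to appear in a member,
    # so this bounds the half-open range of strings with that prefix
    hi = bisect.bisect_left(sorted_members, prefix + "\uffff")
    return sorted_members[lo:hi]

members = sorted([
    "foodgood:irene", "foodgood:juan", "foodgood:pep",
    "sleep:irene", "sleep:juan", "sleep:pep", "sleep:toni",
])
```

The practical Redis equivalent is to maintain a second sorted set whose members all have score 0 and query it with ZRANGEBYLEX, keeping the scores in the original set.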
I am using a Redis cluster with 3 masters and 3 slaves as a MySQL cache, and the client is Redisson with the @Cacheable annotation. But I found some slow-log entries for the CLUSTER NODES command, like:
3) 1) (integer) 4
   2) (integer) 1573033128
   3) (integer) 10955
   4) 1) "CLUSTER"
      2) "NODES"
   5) "192.168.110.102:57172"
   6) ""
4) 1) (integer) 3
   2) (integer) 1573032928
   3) (integer) 10120
   4) 1) "CLUSTER"
      2) "NODES"
   5) "192.168.110.90:59456"
   6) ""
So, I want to know: what was the problem?
I have got the following slow-query log entries in Redis. I have disabled writing to disk, so the database is purely in-memory. I am not able to understand why these two queries are slow.
FYI
I have 462698 hashes in total, with the key pattern key:<numeric_number>.
1) 1) (integer) 34
   2) (integer) 1364981115
   3) (integer) 10112
   4) 1) "HMGET"
      2) "key:123"
      3) "is_working"
6) 1) (integer) 29
   2) (integer) 1364923711
   3) (integer) 87705
   4) 1) "HMSET"
      2) "key:538771"
      3) "status_app"
      4) ".. (122246 more bytes)"