I have some records in a Redis list:
1) "one"
2) "two"
3) "three"
4) "four"
5) "five"
6) "six"
7) "seven"
8) "eight"
9) "nine"
10) "ten"
11) "eleven"
12) "twelve"
13) "thirteen"
14) "fourteen"
15) "fifteen"
16) "sixteen"
17) "seventeen"
18) "eighteen"
19) "nineteen"
I have to get the first 10 values from the list:
LRANGE keyname 0 9
and the last 10 values from the list:
LRANGE keyname -10 -1
or, say, the middle 10 values from the list:
LRANGE keyname (n/2) (n/2)+9
and 10 random values from it:
SRANDMEMBER keyname 10
So, in order to perform all of these operations (LRANGE works on lists, while SRANDMEMBER works on sets), which data type should I use in Redis?
I am currently doing this:
LRANGE keyname randomNumber randomNumber+10
but it is not completely random.
EDIT:
I want to perform both operations on my data in Redis:
get a range of data (like LRANGE) and get random data (like SRANDMEMBER).
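For what it's worth, the four access patterns can be sketched with a plain Python list standing in for the Redis LIST (the index arithmetic is the point here, not any particular client library; the data is the sample from the question):

```python
import random

# A Python list stands in for the Redis LIST "keyname".
values = ["one", "two", "three", "four", "five", "six", "seven", "eight",
          "nine", "ten", "eleven", "twelve", "thirteen", "fourteen",
          "fifteen", "sixteen", "seventeen", "eighteen", "nineteen"]
n = len(values)

first_10 = values[0:10]                    # LRANGE keyname 0 9
last_10 = values[n - 10:n]                 # LRANGE keyname -10 -1
middle_10 = values[n // 2 - 5:n // 2 + 5]  # 10 values around the midpoint
random_10 = random.sample(values, 10)      # 10 distinct random picks
```

A LIST alone has no random-sample command; one workaround is to draw distinct random indices in the application and fetch each with LINDEX, or to mirror the same members in a SET and call SRANDMEMBER on the mirror.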
We are not able to get consistent search results from Redis.
Redis server v=7.0.7 sha=00000000:0 malloc=jemalloc-5.2.1 bits=64 build=2260280010e18db8
root@redis01:~# redis-cli
127.0.0.1:6379> info modules
module:name=ReJSON,ver=20008,api=1,filters=0,usedby=[search],using=[],options=[handle-io-errors]
module:name=search,ver=20405,api=1,filters=0,usedby=[],using=[ReJSON],options=[]
We have exactly 5022786 key/value pairs, and the same number of entries is in our 'idx36' index. There is no incoming traffic, so this dataset of 5022786 entries remains constant at all times.
127.0.0.1:6379> info keyspace
db0:keys=5022786,expires=5022786,avg_ttl=257039122
127.0.0.1:6379> ft.info idx36
 1) index_name
 2) idx36
 3) index_options
 4) 1) "NOFREQS"
 5) index_definition
 6) 1) key_type
    2) HASH
    3) prefixes
    4) 1) 36|
    5) default_score
    6) "1"
 7) attributes
 8) 1) 1) identifier
       2) CheckIn
       3) attribute
       4) CheckIn
       5) type
       6) NUMERIC
    2) 1) identifier
       2) HotelCode
       3) attribute
       4) HotelCode
       5) type
       6) TAG
       7) SEPARATOR
       8)
 9) num_docs
10) "5022786"
11) max_doc_id
12) "12729866"
13) num_terms
14) "0"
15) num_records
16) "1.8446744073526942e+19"
17) inverted_sz_mb
18) "162.96730041503906"
19) vector_index_sz_mb
20) "0"
21) total_inverted_index_blocks
22) "17965200"
23) offset_vectors_sz_mb
24) "0"
25) doc_table_size_mb
26) "1843.12548828125"
27) sortable_values_size_mb
28) "0"
29) key_table_size_mb
30) "194.14376831054688"
31) records_per_doc_avg
32) "3672612012032"
33) bytes_per_record_avg
34) "9.2636185181071973e-12"
35) offsets_per_term_avg
36) "0"
37) offset_bits_per_record_avg
38) "-nan"
39) hash_indexing_failures
40) "333006"
41) indexing
42) "0"
43) percent_indexed
44) "1"
45) gc_stats
46) 1) bytes_collected
    2) "264321009"
    3) total_ms_run
    4) "1576438"
    5) total_cycles
    6) "5"
    7) average_cycle_time_ms
    8) "315287.59999999998"
    9) last_run_time_ms
   10) "706929"
   11) gc_numeric_trees_missed
   12) "0"
   13) gc_blocks_denied
   14) "23251"
47) cursor_stats
48) 1) global_idle
    2) (integer) 0
    3) global_total
    4) (integer) 0
    5) index_capacity
    6) (integer) 128
    7) index_total
    8) (integer) 0
This index has a tag called 'HotelCode' and a numeric field called 'CheckIn'.
Now we try to search all entries that contain the string 'AO-B0' within 'HotelCode' (should be all):
127.0.0.1:6379> ft.search idx36 '(#HotelCode:{AO\-B0})' limit 0 0
1) (integer) 2708499
And now try to search all entries that don't contain the string 'AO-B0' within 'HotelCode' (should be 0):
127.0.0.1:6379> ft.search idx36 '-(#HotelCode:{AO\-B0})' limit 0 0
1) (integer) 0
But these two searches don't sum to the total number of entries. And even if I'm wrong and not all entries contain the 'AO-B0' string, if I repeat the first search the result changes every time:
127.0.0.1:6379> ft.search idx36 '(#HotelCode:{AO\-B0})' limit 0 0
1) (integer) 2615799
(0.50s)
127.0.0.1:6379> ft.search idx36 '(#HotelCode:{AO\-B0})' limit 0 0
1) (integer) 2442799
(0.50s)
127.0.0.1:6379> ft.search idx36 '(#HotelCode:{AO\-B0})' limit 0 0
1) (integer) 2626299
(0.50s)
127.0.0.1:6379> ft.search idx36 '(#HotelCode:{AO\-B0})' limit 0 0
1) (integer) 2694899
(0.50s)
127.0.0.1:6379> ft.search idx36 '(#HotelCode:{AO\-B0})' limit 0 0
1) (integer) 2516699
(0.50s)
If I now try this less restrictive search, I should get more entries ... but no:
127.0.0.1:6379> ft.search idx36 '#HotelCode:{AO\-B*}' limit 0 0
1) (integer) 1806899
Maybe I'm doing something wrong ... could someone point me in the right direction?
The Redis ZSET (a sorted set of (member, score) pairs) sorts the set by score.
A Redis SET is an unordered collection of unique strings.
What I need is a method that returns the members of a sorted set matching a pattern, as ZRANGEBYLEX does, but for members with different scores.
Is it possible at all with Redis?
Well, it seems I did not know about the SCAN family of commands. ZSCAN solves this issue, although at O(N) cost, where N is the number of items in the sorted set, because it iterates over the whole set.
Example of the elements in the set:
127.0.0.1:6379> zrange l 0 -1 WITHSCORES
1) "foodgood:irene"
2) "1"
3) "foodgood:pep"
4) "1"
5) "sleep:irene"
6) "1"
7) "sleep:juan"
8) "1"
9) "sleep:pep"
10) "1"
11) "sleep:toni"
12) "1"
13) "foodgood:juan"
14) "2"
Now ZSCAN for those with prefix foodgood:
127.0.0.1:6379> ZSCAN l 0 match foodgood:*
1) "0"
2) 1) "foodgood:irene"
   2) "1"
   3) "foodgood:pep"
   4) "1"
   5) "foodgood:juan"
   6) "2"
The first returned value is the cursor; when it is "0", the collection has been completely explored.
What I would have liked is for this to be O(log(N)+M), where M is the number of elements retrieved, as with a binary search tree.
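The O(log(N)+M) prefix lookup being asked for (what ZRANGEBYLEX can do when all members share the same score) can be sketched over a plain sorted list with binary search; the members and the prefix below are taken from the example above:

```python
import bisect

# Members from the ZSCAN example above, kept in sorted order.
members = sorted(["foodgood:irene", "foodgood:pep", "sleep:irene",
                  "sleep:juan", "sleep:pep", "sleep:toni", "foodgood:juan"])

def prefix_range(sorted_members, prefix):
    # Binary-search the first member >= prefix, then the first member
    # past the prefix block: O(log N) to locate, O(M) to copy out.
    lo = bisect.bisect_left(sorted_members, prefix)
    hi = bisect.bisect_right(sorted_members, prefix + "\xff")
    return sorted_members[lo:hi]

print(prefix_range(members, "foodgood:"))
# ['foodgood:irene', 'foodgood:juan', 'foodgood:pep']
```

This is essentially why the common workaround is to give every member the same score and encode the sort key into the member itself, so ZRANGEBYLEX applies.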
I need to store the lowest score for each member I add to the set, but when I do a ZADD, Redis overwrites the score with the new value even if the new score is higher.
ZADD myorderset 1 'one' 2 'two' 3 'three'
(integer) 3
ZRANGE myorderset 0 -1 WITHSCORES
1) "one"
2) "1"
3) "two"
4) "2"
5) "three"
6) "3"
ZADD myorderset 5 'three'
(integer) 0
ZRANGE myorderset 0 -1 WITHSCORES
1) "one"
2) "1"
3) "two"
4) "2"
5) "three"
6) "5"
In the example, I need the member 'three' not to be updated, since the new score (5) is higher than the existing one (3). Is there a way to do this natively, or do I need to write a Lua script?
I've been researching the ZADD modifiers (XX, NX, CH), but none of them does what I need.
Thank you very much!
A Lua script for this compare-and-set use case would be the simplest and most idiomatic solution:
127.0.0.1:6379> ZADD myorderset 1 'one' 2 'two' 3 'three'
(integer) 3
127.0.0.1:6379> EVAL "local s = redis.call('ZSCORE', KEYS[1], ARGV[2]) if not s or tonumber(s) > tonumber(ARGV[1]) then redis.call('ZADD', KEYS[1], ARGV[1], ARGV[2]) end" 1 myorderset 5 'three'
(nil)
127.0.0.1:6379> ZRANGE myorderset 0 -1 WITHSCORES
1) "one"
2) "1"
3) "two"
4) "2"
5) "three"
6) "3"
127.0.0.1:6379> EVAL "local s = redis.call('ZSCORE', KEYS[1], ARGV[2]) if not s or tonumber(s) > tonumber(ARGV[1]) then redis.call('ZADD', KEYS[1], ARGV[1], ARGV[2]) end" 1 myorderset 2 'three'
(nil)
127.0.0.1:6379> ZRANGE myorderset 0 -1 WITHSCORES
1) "one"
2) "1"
3) "three"
4) "2"
5) "two"
6) "2"
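The check-then-set logic the script performs can be sketched in plain Python (a dict stands in for the sorted set; this shows the semantics only, not server code):

```python
def zadd_if_lower(zset: dict, score: float, member: str) -> bool:
    """Add member, or update it only when the new score is lower.

    Mirrors the Lua script: ZSCORE first; ZADD only when the member is
    missing or its current score is higher. Returns True when modified.
    """
    current = zset.get(member)
    if current is None or current > score:
        zset[member] = score
        return True
    return False

s = {"one": 1, "two": 2, "three": 3}
zadd_if_lower(s, 5, "three")   # ignored: 5 > 3
zadd_if_lower(s, 2, "three")   # applied: 2 < 3
print(s["three"])              # 2
```

Note that ZSCORE returns the score as a string inside Lua, which is why the script must convert with tonumber before comparing; a bare string comparison would misorder multi-digit scores.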
There is no single command that does both; before Redis 6.2 there is no command option either (6.2 added the GT/LT flags, so ZADD key LT score member now updates only when the new score is lower). You can use a combination of ZSCORE and ZADD in a Lua script. Alternatively (it does look over-engineered) you may use ZUNIONSTORE with the aggregate option MIN. From the documentation:
With the AGGREGATE option, it is possible to specify how the results of the union are aggregated. This option defaults to SUM, where the score of an element is summed across the inputs where it exists. When this option is set to either MIN or MAX, the resulting set will contain the minimum or maximum score of an element across the inputs where it exists.
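The MIN aggregation described above can be sketched as follows (dicts stand in for the sorted sets, with the data from the transcript below):

```python
def zunionstore_min(*zsets: dict) -> dict:
    # Union of all members; each member keeps its minimum score
    # across the inputs it appears in (AGGREGATE MIN).
    out = {}
    for zset in zsets:
        for member, score in zset.items():
            out[member] = min(out.get(member, score), score)
    return out

base = {"a": 1, "b": 2, "c": 5}
new = {"c": 3}
print(zunionstore_min(base, new))   # {'a': 1, 'b': 2, 'c': 3}
```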
127.0.0.1:6379> ZADD base 1 a 2 b 5 c
(integer) 3
127.0.0.1:6379> ZADD new 3 c
(integer) 1
127.0.0.1:6379> ZUNIONSTORE base 2 base new AGGREGATE MIN
(integer) 3
127.0.0.1:6379> DEL new
(integer) 1
127.0.0.1:6379> ZRANGE base 0 -1 WITHSCORES
1) "a"
2) "1"
3) "b"
4) "2"
5) "c"
6) "3"
127.0.0.1:6379> ZADD new 5 b
(integer) 1
127.0.0.1:6379> ZUNIONSTORE base 2 base new AGGREGATE MIN
(integer) 3
127.0.0.1:6379> DEL new
(integer) 1
127.0.0.1:6379> ZRANGE base 0 -1 WITHSCORES
1) "a"
2) "1"
3) "b"
4) "2"
5) "c"
6) "3"
127.0.0.1:6379>
If you prefer:
You may generate the new set name with a random string at the application level.
Put an EXPIRE on this new set; then there is no need to DEL the new key manually after the ZUNIONSTORE, as it will eventually expire.
It can all be done inside MULTI/EXEC as a single transaction.
I have this keys list:
redis 127.0.0.1:6379> keys *
1) "r:fd:g1:1377550557255"
2) "r:fd:g1:1377550561240"
3) "r:fd:g1:1377550561561"
4) "r:fd:g1:1377550562300"
5) "r:fd:g1:1377550558977"
6) "r:fd:g1:1377550561344"
7) "r:fd:g1:1377550561832"
8) "r:fd:g1:1377550560344"
9) "r:fd:g1:1377550559978"
10) "r:fd:g1:1377550557777"
11) "r:fd:g1:1377550554258"
12) "r:fd:g1:1377550556772"
13) "r:fd:g1:1377550559649"
14) "r:fd:g1:1377550555460"
15) "r:fd:g1:1377550560895"
16) "r:fd:g1:1377550559139"
17) "r:fd:g1:1377550556595"
18) "r:fd:g1:1377550557634"
How can I get only those keys whose timestamp is greater than 1377550561300?
You can't do this with KEYS.
But you can use a sorted set and write the timestamps as scores; then you'll be able to use
ZRANGEBYSCORE:
ZRANGEBYSCORE key (1377550561300 +inf
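The suggested layout can be sketched like this (a dict stands in for the sorted set: the former key name becomes the member and its timestamp the score; sample keys taken from the listing above):

```python
# Timestamps become scores; the former key names become members.
entries = {
    "r:fd:g1:1377550557255": 1377550557255,
    "r:fd:g1:1377550561561": 1377550561561,
    "r:fd:g1:1377550562300": 1377550562300,
    "r:fd:g1:1377550558977": 1377550558977,
}

# ZRANGEBYSCORE key (1377550561300 +inf  -- "(" makes the bound exclusive
cutoff = 1377550561300
newer = sorted(m for m, ts in entries.items() if ts > cutoff)
print(newer)
# ['r:fd:g1:1377550561561', 'r:fd:g1:1377550562300']
```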
I have got the following slow query log in Redis. I have disabled writing to disk, so the database is purely in-memory. I am not able to understand why these two queries are slow.
FYI:
I have 462698 hashes in total, with the key pattern key:<numeric_number>.
1) 1) (integer) 34
   2) (integer) 1364981115
   3) (integer) 10112
   4) 1) "HMGET"
      2) "key:123"
      3) "is_working"
6) 1) (integer) 29
   2) (integer) 1364923711
   3) (integer) 87705
   4) 1) "HMSET"
      2) "key:538771"
      3) "status_app"
      4) ".. (122246 more bytes)"
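Each slow-log entry is a 4-tuple: id, unix timestamp, execution time in microseconds, and the command with its arguments. A small sketch decoding the two entries above (values copied from the log; the truncated HMSET payload stays truncated):

```python
# Fields per entry: id, unix timestamp, microseconds spent, command+args.
slowlog = [
    (34, 1364981115, 10112, ["HMGET", "key:123", "is_working"]),
    (29, 1364923711, 87705, ["HMSET", "key:538771", "status_app", "..."]),
]

for entry_id, ts, micros, cmd in slowlog:
    # The 87705 us for the HMSET hints at its ~122 KB argument payload.
    print(f"#{entry_id}: {cmd[0]} {cmd[1]} took {micros / 1000:.1f} ms")
```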