Redis: fan out news feeds in list or sorted set?

I'm caching fan-out news feeds with Redis in the following way:
each feed activity is a key/value, like activity:id where the value is a JSON string of the data.
each news feed is currently a list, the key is feed:user:user_id and the list contains the keys of the relevant activities.
to retrieve a news feed I use for example: 'sort feed:user:user_id by nosort get * limit 0 40'
I'm considering changing the feed to a sorted set where the score is the activity's timestamp, this way the feed is always sorted by time.
I read http://arindam.quora.com/Redis-sorted-sets-and-lists-Pertaining-to-Newsfeed which recommends using lists because of the time complexity of sorted sets, but if I keep using lists I have to take care of the insert order:
inserting a past story requires iterating through the list to find the right index to push to (which can cause new problems in distributed environments).
should I keep using lists or go for sorted sets?
is there a way to retrieve the news feed instantly from a sorted set (like with the sort ... get * command for a list), or does it have to be ZRANGE and then iterating through the results to get each value?

Yes, sorted sets are very fast and powerful. They seem a much better match for your requirements than SORT operations. The time complexity is often misunderstood. O(log(N)) is very fast, and scales just fine. We use it for tens of millions of members in one sorted set. Retrieval and insertion is sub-millisecond.
Use ZRANGEBYSCORE key min max WITHSCORES [LIMIT offset count] to get your results.
Depending on how you store the timestamps as 'scores', ZREVRANGEBYSCORE might be better.
A small remark about the timestamps: sorted set scores that must not lose integer precision should use 15 digits or fewer, so the score has to stay in the range -999999999999999 to 999999999999999. Note: this limit exists because the score is a double-precision float, which can only represent integers of up to about 15-16 decimal digits exactly.
I therefore recommend this format, converted to Zulu time: -20140313122802 for second precision. You may add one digit for 100 ms precision, but no more if you want no loss of precision. It's still a float64, so some loss of precision could be fine in other scenarios, but your case fits in the 'perfect precision' range, so that's what I recommend.
If your data expires within 10 years, you can also skip the first three digits (CCY of CCYY) to achieve 0.0001-second precision.
I suggest negative scores here, so you can use the simpler ZRANGEBYSCORE instead of the REV one. You can use -inf as the start score (minus infinity) and LIMIT 0 100 to get the top 100 results.
Two sorted set members (or 'keys', but that's ambiguous since the sorted set is itself stored at a key) may share a score; that's no problem: members with an identical score are returned in lexicographical order.
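To make the negative-score layout concrete, here is a minimal sketch as a Lua script; the key and member names are illustrative, not from the question:
-- KEYS[1] = the feed sorted set; ARGV[1] = the negated CCYYMMDDhhmmss timestamp,
-- ARGV[2] = the activity key to add (e.g. 'activity:1').
redis.call('zadd', KEYS[1], ARGV[1], ARGV[2])   -- e.g. score -20140313122802
-- newest first without ZREVRANGEBYSCORE, because newer timestamps are more negative
return redis.call('zrangebyscore', KEYS[1], '-inf', '+inf', 'limit', 0, 100)
Inserting an older story is then just another ZADD; the sorted set keeps itself ordered by score.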
Hope this helps, TW
Edit after chat
The OP wanted to collect data (indexed by a ZSET) from different keys (GET/SET or HGET/HSET keys). SORT ... GET can do that kind of 'join' for a list, but ZRANGEBYSCORE can't.
The preferred way of doing this, is a simple Lua script. The Lua script is executed on the server. In the example below I use EVAL for simplicity, in production you would use SCRIPT EXISTS, SCRIPT LOAD and EVALSHA. Most client libraries have some bookkeeping logic built-in, so you don't upload the script each time.
Here's an example.lua:
-- Fetch up to ARGV[3] members (with scores) from the sorted set KEYS[1],
-- then resolve each member (an activity key) to its JSON value with GET.
-- min, max and count are passed as ARGV because they are not key names.
local r = {}
local zkey = KEYS[1]
local a = redis.call('zrangebyscore', zkey, ARGV[1], ARGV[2], 'withscores', 'limit', 0, ARGV[3])
for i = 1, #a, 2 do
  r[i] = a[i + 1]                      -- the score (timestamp)
  r[i + 1] = redis.call('get', a[i])   -- the activity JSON
end
return r
You use it like this (raw example, not coded for performance):
redis-cli -p 14322 set activity:1 act1JSON
redis-cli -p 14322 set activity:2 act2JSON
redis-cli -p 14322 zadd feed 1 activity:1
redis-cli -p 14322 zadd feed 2 activity:2
redis-cli -p 14322 eval "$(cat example.lua)" 1 feed '-inf' '+inf' 100
Result:
1) "1"
2) "act1JSON"
3) "2"
4) "act2JSON"

Related

Redis: How do I count the elements in a stream in a certain range?

Business Objective
I'm creating a dashboard that will depend on some time series and I'll use Redis to implement it. I'm new to Redis and I'm trying to use Redis Streams to count the elements in a stream.
XADD conversation:9:chat_messages * id 2583 user_type Bot
XADD conversation:9:chat_messages * id 732016 user_type User
XADD conversation:9:chat_messages * id 732017 user_type Staff
XRANGE conversation:9:chat_messages - +
I'm aware that I can get the total count of the elements using the XLEN command like this:
XLEN conversation:9:chat_messages
but I want to also know the elements in a period, for example:
XLEN conversation:9:chat_messages 1579551316273 1579551321872
I know I can use Lua to count those elements, but I want a REALLY fast way to achieve this, and I assume a built-in Redis command would be the fastest way.
Is there any way to achieve this with a straightforward Redis command? Or do I have to write a Lua script to do it?
Additional information
I'm limited by AWS ElastiCache to Redis 5.0.6 only; I cannot install modules such as RedisTimeSeries. I'd like to use that module, but it's not possible at the moment.
While the Redis Stream data structure doesn't support this, you can use a Sorted Set alongside it for keeping track of message ranges.
Basically, for each message ID you get from XADD - e.g. "1579551316273-0" - you need to do a ZADD conversation:9:ids 0 1579551316273-0. Then, you can use ZLEXCOUNT to get the "length" of a range.
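As a sketch of that bookkeeping in a Lua script (key names follow the question; doing the XADD inside the script keeps the mirror ZADD atomic, and on Redis 5, which replicates script effects by default, using the auto-generated ID inside a script is fine):
-- KEYS[1] = the stream (conversation:9:chat_messages),
-- KEYS[2] = the index zset (conversation:9:ids),
-- ARGV    = the field/value pairs for XADD.
local id = redis.call('xadd', KEYS[1], '*', unpack(ARGV))
redis.call('zadd', KEYS[2], 0, id)   -- score 0 for every member, so ordering is lexicographic
return id
Counting a period then becomes ZLEXCOUNT conversation:9:ids [1579551316273 (1579551321873; lexicographic order matches numeric order here because the millisecond parts of current IDs all have 13 digits, and the '(' bound excludes IDs from the next millisecond onward.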
Sorry, there is no built-in command to achieve this.
Your best option with Redis Streams is a Lua script. It is O(N), with N being the number of elements counted, instead of the O(log N) you would get if such a command existed.
-- Count the entries in stream KEYS[1] between IDs ARGV[1] and ARGV[2] (inclusive).
local T = redis.call('XRANGE', KEYS[1], ARGV[1], ARGV[2])
local count = 0
for _ in pairs(T) do count = count + 1 end
-- (XRANGE returns a flat array of entries, so 'return #T' would work as well)
return count
Note that the difference between O(N) and O(log N) is significant for a large N, but for a chat application tracked per conversation it won't make much of a difference for chats with hundreds or even thousands of entries, once you account for the total command time including the round-trip time, which takes most of it. The Lua script above already removes the network payload and client-side processing time.
You can switch to sorted sets if you really want O(log N) and you don't need consumer groups and other Stream features. See 'How to store in Redis sorted set with server-side timestamp as score?' if you want to use the Redis server timestamp atomically.
Then you can use ZCOUNT, which is O(log N).
If you do need Stream features, then you would need to keep the sorted set as a secondary index.
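If you go the secondary-index route, a minimal sketch of the indexing step might look like this (key names are illustrative; the score is the millisecond part of the ID that XADD returned):
-- KEYS[1] = the index zset (e.g. conversation:9:ids).
-- ARGV[1] = a stream ID previously returned by XADD, e.g. '1579551316273-0'.
local ms = string.match(ARGV[1], '^(%d+)')          -- millisecond part of the ID
redis.call('zadd', KEYS[1], tonumber(ms), ARGV[1])  -- timestamp as score
return ms
Counting a period is then ZCOUNT conversation:9:ids 1579551316273 1579551321872, which is O(log N).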

How to limit count of items in Redis sorted sets

In my case I upload a lot of records to a Redis sorted set, but I need to store only the 10 highest-scored items. I don't have the ability to influence the data that is uploaded (to sort and limit it before uploading).
Currently I just execute
ZREMRANGEBYRANK key 0 -11
after the upload finishes, but such an approach doesn't look optimal (it's slow, and it would be better if Redis could handle that itself).
So does Redis provide something out of the box to limit count of items in sorted sets?
Nope, Redis does not provide any such functionality apart from ZREMRANGEBYRANK.
There is a similar problem of keeping a Redis list at a constant size, say only 10 elements, while elements are being pushed from the left using LPUSH.
The solution lies in optimizing the pruning operation:
Truncate your sorted set once in a while, not every time.
Methods:
Run ZREMRANGEBYRANK with 1/5 probability on every insert, using a random integer.
Use a Redis pipeline or Lua scripting to achieve this; this even saves the two network calls that would otherwise happen on roughly every fifth insert.
This is optimal enough for the purpose mentioned.
Algorithm example:
ZADD key score1 member1
random_int = some random number between 1 and 5
if random_int == 1: # trim the sorted set with 1/5 probability
ZREMRANGEBYRANK key 0 -11
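If you would rather keep the trim server-side and deterministic, a small Lua sketch (the slack threshold of 20 is an arbitrary choice, not from the question) that adds a member and prunes in the same round trip:
-- KEYS[1] = the sorted set; ARGV[1] = score, ARGV[2] = member.
redis.call('zadd', KEYS[1], ARGV[1], ARGV[2])
-- only prune once the set has grown past a slack threshold,
-- so the trim doesn't run on every insert
if redis.call('zcard', KEYS[1]) > 20 then
  redis.call('zremrangebyrank', KEYS[1], 0, -11)   -- keep the 10 highest-scored members
end
return redis.call('zcard', KEYS[1])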

Expire geospatial items in Redis

There are proposals for sorted set item expiration in Redis (see https://groups.google.com/d/msg/redis-db/rXXMCLNkNSs/Bcbd5Ae12qQJ and https://quickleft.com/blog/how-to-create-and-expire-list-items-in-redis/). I tried the worker approach to expire geospatial items with the ZREMRANGEBYSCORE and ZREMRANGEBYRANK commands, unsuccessfully (nothing was removed).
I succeeded using ZREMRANGEBYLEX.
Is there a way to work with a geospatial item's score as something other than a string?
Update:
For example, if the time to live (TTL) of an item is 30 sec, I add it as:
geoadd 1 -8.616021 41.154503 30
Now, suppose the worker executes after 40 sec. I was expecting that
zremrangebyscore 1 0 40
would do the job, but it does not;
ZREMRANGEBYLEX 1 [0 [40
does. Why this behavior? Does that mean the score of a geospatial item supports only lexicographical operations?
Sorted Sets have elements (strings), and every element has a score (floating-point). Geosets use the score to encode a coordinate.
Redis doesn't expire members in a Sorted Set (or a Geoset). You have to remove them yourself if that is required.
In your case, you'll need to keep two Sorted Sets - one as your GeoSet and one for managing TTLs as scores.
For example, assuming your member is called 'foo', to add it:
ZADD ttls 30 foo
GEOADD elems -8.616021 41.154503 foo
To manually expire, first find the due members with a call like ZRANGEBYSCORE ttls -inf <now>, and then remove them from both Sets.
Tip: it is preferable to use a timestamp as score instead of seconds.
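A sketch of that manual sweep as a Lua script (the key names 'ttls' and 'elems' follow the example above; ARGV[1] is the current timestamp you pass in):
-- KEYS[1] = the TTL zset, KEYS[2] = the GeoSet, ARGV[1] = now.
local expired = redis.call('zrangebyscore', KEYS[1], '-inf', ARGV[1])
for _, member in ipairs(expired) do
  redis.call('zrem', KEYS[1], member)
  redis.call('zrem', KEYS[2], member)   -- a GeoSet is a Sorted Set, so ZREM works on it
end
return #expired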

How to combine multi-field values and sorted time-ranges using Redis

I am trying to insert time-based records with multiple fields in the values (with TTL enabled).
For the multiple fields the best way to do it via Redis is using HSET:
HSET user:32 name "johns" timecreated "3333311232" address "somewhere"
I also want to read those values via a time range,
for example return all history records (for user 32, say) which were inserted in the last day.
The best fit for that would be storing via ZADD using scores (but this time I lose the hash-map structure for easy retrieval):
ZADD user:32 3333311232 "name=johns,timecreated=3333311232,address=somewhere"
On top of that, I want to add a TTL for each record.
Any idea how I could optimize my design?
I could split it into two keys, but that will require two queries when reading:
ZADD user:32 3333311232 "user:32:3333311232"
HMSET user:32:3333311232 name "johns" timecreated "3333311232" address "somewhere"
Then to retrieve I'll need:
//some range
ZRANGEBYSCORE user:32 3333311232 333331123
result: 1389772850
now to get all information: HGETALL user:32:1389772850
What do you think?
Thank you,
ray.
The two methods you describe are the two common approaches. If you store the entire object in the ZSET, you would typically store it as a JSON string. If you don't need "random" access to the object, that's a valid approach.
I usually go for the other approach: a ZSET combined with hashes. The two queries are not a big deal; you could even abstract them away with a Lua script - see EVAL.
Regarding the TTL, while you cannot expire individual ZSET values, you could expire the hash, and use keyspace notifications to listen for the expired event, and remove the corresponding value from the ZSET.
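For the write side, a minimal sketch under that split design (the key names follow the question; the TTL argument is an assumption):
-- KEYS[1] = the index zset (e.g. user:32), KEYS[2] = the detail hash (e.g. user:32:3333311232).
-- ARGV[1] = timestamp score, ARGV[2] = name, ARGV[3] = address, ARGV[4] = TTL in seconds.
redis.call('zadd', KEYS[1], ARGV[1], KEYS[2])
redis.call('hset', KEYS[2], 'name', ARGV[2], 'timecreated', ARGV[1], 'address', ARGV[3])
redis.call('expire', KEYS[2], ARGV[4])   -- only the hash expires; clean the zset via keyspace notifications
return redis.call('ttl', KEYS[2])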
Let me know if you need some more specifics.

Redis - Sorted set, find item by property value

In Redis I store objects in a sorted set.
In my solution, it's important to be able to run a ranged query by dates, so I store the items with the score being the timestamp of each item, for example:
# Score Value
0 1443476076 {"Id":"92","Ref":"7ADT","DTime":1443476076,"ATime":1443901554,"ExTime":0,"SPName":"7ADT33CFSAU6","StPName":"7ADT33CFSAU6"}
1 1443482969 {"Id":"11","Ref":"DAJT","DTime":1443482969,"ATime":1443901326,"ExTime":0,"SPName":"DAJTJTT4T02O","StPName":"DAJTJTT4T02O"}
However, in other situations I need to find a single item in the set based on its ID.
I know I can't just query this data structure as if it were a NoSQL db, but I tried using ZSCAN, which didn't work:
ZSCAN MySet 0 MATCH Id:92 count 1
It returns; "empty list or set"
Maybe I need to serialize it differently?
I have serialized using Json.Net.
How, if possible, can I achieve this: using dates as scores and still being able to look up an item by its ID?
Many thanks,
Lars
Edit:
I assume it's not possible, but any thoughts or input are welcome:
Ref: http://openmymind.net/2011/11/8/Redis-Zero-To-Master-In-30-Minutes-Part-1/
In Redis, data can only be queried by its key. Even if we use a hash, we can't say get me the keys wherever the field race is equal to sayan.
Edit 2:
I tried to do:
ZSCAN MySet 0 MATCH *87*
127.0.0.1:6379> ZSCAN MySet 0 MATCH *87*
1) "192"
2) 1) "{\"Id\":\"64\",\"Ref\":\"XQH4\",\"DTime\":1443837798,\"ATime\":1444187707,\"ExTime\":0,\"SPName\":\"XQH4BPGW47FM\",\"StPName\":\"XQH4BPGW47FM\"}"
2) "1443837798"
3) "{\"Id\":\"87\",\"Ref\":\"5CY6\",\"DTime\":1443519199,\"ATime\":1444172326,\"ExTime\":0,\"SPName\":\"5CY6DHP23RXB\",\"StPName\":\"5CY6DHP23RXB\"}"
4) "1443519199"
And it finds the desired item, but it also finds another one with an occurrence of 87 in its ATime property. Having longer, more unique IDs might make this workable, but I would still have to filter the results in code to find the one with the exact value in its Id property.
Still open for suggestions.
I think it's very simple.
Solution 1 (inferior, not recommended)
Your ZSCAN MySet 0 MATCH Id:92 count 1 didn't work because the stored string is "{\"Id\":\"92\"..., not "{\"Id:92\".... The string has been serialized into another format, so try a glob pattern like MATCH *Id\":\"92* to match the JSON-serialized data in Redis. I'm not familiar with Json.Net, so the exact string is left for you to discover.
By the way, did ZSCAN MySet 0 MATCH Id:92 count 1 actually return a cursor? I suspect you used ZSCAN in the wrong way.
Solution 2 (better, strongly recommended)
ZSCAN is fine when your sorted set is not large and you know how to save network round-trip time with Redis' Lua scripting, but it still makes the "look up by ID" operation O(N). Therefore, a better solution is to change your data model in the following way:
Change the sorted set
from
# Score Value
0 1443476076 {"Id":"92","Ref":"7ADT","DTime":1443476076,"ATime":1443901554,"ExTime":0,"SPName":"7ADT33CFSAU6","StPName":"7ADT33CFSAU6"}
1 1443482969 {"Id":"11","Ref":"DAJT","DTime":1443482969,"ATime":1443901326,"ExTime":0,"SPName":"DAJTJTT4T02O","StPName":"DAJTJTT4T02O"}
to
# Score Value
0 1443476076 Id:92
1 1443482969 Id:11
Move the rest of the detailed data into a separate set of hash-type keys:
# Key field-value field-value ...
0 Id:92 Ref-7ADT DTime-1443476076 ...
1 Id:11 Ref-DAJT DTime-1443482969 ...
Then you locate an item by ID by doing HGETALL Id:92. For the ranged query by date, you do ZRANGEBYSCORE sortedset mindate maxdate and then HGETALL every returned ID one by one. You'd better use Lua to wrap these commands into one call, and it will still be super fast!
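For example, a minimal sketch of that wrapper (key and field layout as described above):
-- KEYS[1] = the index zset; ARGV[1] = mindate, ARGV[2] = maxdate.
local ids = redis.call('zrangebyscore', KEYS[1], ARGV[1], ARGV[2])
local out = {}
for i, id in ipairs(ids) do
  out[i] = redis.call('hgetall', id)   -- e.g. id = 'Id:92'
end
return out
The reply is an array of HGETALL results, one per matching ID.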
Data in a NoSQL database needs to be organized in a redundant way like the above. This may make some common operations involve more than one command and round trip, but that can be tackled with Redis' Lua feature. I strongly recommend Redis' Lua scripting, because it wraps the commands into one network round trip, executed entirely on the server side, atomically and very fast!
Reply if there's anything that is unclear.