Redis ZRANGEBYLEX command complexity

According to the documentation for the ZRANGEBYLEX command, if you store keys in a sorted set with zero scores, they can later be retrieved in lexicographical order, and the complexity of the ZRANGEBYLEX operation is O(log(N)+M), where N is the total number of elements and M is the size of the result set. The documentation says something about how strings are compared, but nothing about the structure in which the elements are stored.
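For concreteness, the usage in question looks like this (a minimal sketch, assuming a local Redis server and the redis-py client, neither of which is specified above):

```python
import redis

# Members stored with a zero score, then retrieved lexicographically.
r = redis.Redis(decode_responses=True)

r.delete('zlex')
r.zadd('zlex', {'apple': 0, 'banana': 0, 'cherry': 0})
# '[' makes the range endpoints inclusive; 'cherry' sorts after 'c'
# and is therefore excluded.
print(r.zrangebylex('zlex', '[a', '[c'))   # ['apple', 'banana']
```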
However, after some experiments and reading the source code, it looks like the ZRANGEBYLEX operation performs a linear search, matching every element in the ziplist against the request. If so, the complexity is larger than described above - about O(N) - because every element in the ziplist is scanned.
After debugging with gdb, it is clear that the ZRANGEBYLEX command is implemented in the genericZrangebylexCommand function. Control flow continues at eptr = zzlFirstInLexRange(zl,&range);, so the main work of retrieving elements is performed in the zzlFirstInLexRange function. The naming and the subsequent control flow indicate that a ziplist structure is used, and that all comparisons with the input operands are done sequentially, element by element.
Inspecting memory after inserting well-known keys into the Redis store shows that the ZSET elements really are stored in a ziplist - a byte-by-byte comparison against a reference layout confirms it.
So the question is: how can the documentation be wrong and claim logarithmic complexity where a linear one appears? Or does the ZRANGEBYLEX command work slightly differently? Thanks in advance.

How can the documentation be wrong and claim logarithmic complexity where a linear one appears?
The documentation has been wrong on more than a few occasions, but it is an ongoing open source effort that you can contribute to via the repository (https://github.com/antirez/redis-doc).
Or does the ZRANGEBYLEX command work slightly differently?
Your conclusion is correct in the sense that Sorted Set search operations, whether lexicographical or not, exhibit linear time complexity when Ziplists are used for encoding them.
However.
Ziplists are an optimization that trades CPU for memory, meaning it is meant for use on small sets (i.e. low N values). It is controlled via configuration (see the zset-max-ziplist-entries and zset-max-ziplist-value directives), and once the data grows above the specified thresholds, the ziplist encoding is converted to a skip list.
Because ziplists are small (little Ns), their complexity can be assumed to be constant, i.e. O(1). On the other hand, due to their nature, skip lists exhibit logarithmic search time. IMO that means that the documentation's integrity remains intact, as it provides the worst case complexity.
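You can observe the conversion yourself. A minimal sketch, assuming a local server and the redis-py client (note that recent Redis versions report the small encoding as listpack rather than ziplist, and expose the threshold under a listpack-named alias as well):

```python
import redis

r = redis.Redis(decode_responses=True)

r.delete('lexset')
r.config_set('zset-max-ziplist-entries', 128)

# Small set: stored in the compact, linearly scanned encoding.
r.zadd('lexset', {f'member:{i:04d}': 0 for i in range(10)})
print(r.object('encoding', 'lexset'))   # 'ziplist' (or 'listpack' on newer servers)

# Grow past the threshold: Redis converts the encoding to a skip list.
r.zadd('lexset', {f'member:{i:04d}': 0 for i in range(200)})
print(r.object('encoding', 'lexset'))   # 'skiplist'
```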

Related

Redis PFADD to check an exists-in-set query

I have a requirement to process multiple records from a queue. But due to some external issues the items may sporadically occur multiple times.
I need to process each item only once.
What I planned was to PFADD every record into Redis (as an md5sum) and then check whether that returns success: if it shows no increment, the record is a duplicate; otherwise, process the record.
This seems pretty straightforward, but I am getting too many false positives while using PFADD.
Is there a better way to do this ?
Being the probabilistic data structure that it is, Redis' HyperLogLog exhibits a 0.81% standard error. You can reduce (but never eliminate) the probability of false positives by using multiple HLLs, each counting the value of a different hash function on your record.
Also note that if you're using a single HLL there's no real need to hash the record - just PFADD as is.
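A small sketch of that idea, assuming redis-py against a local server; note that the return value of PFADD is itself probabilistic, which is exactly where the false positives come from:

```python
import redis

r = redis.Redis(decode_responses=True)
r.delete('seen')

# PFADD returns 1 only when the HLL's internal registers changed.
print(r.pfadd('seen', 'record-1'))  # 1: first sighting (usually)
print(r.pfadd('seen', 'record-1'))  # 0: duplicate
print(r.pfadd('seen', 'record-2'))  # usually 1, but can be 0: a false positive
```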
Alternatively, use a Redis Set to keep all the identifiers/hashes/records and have 100%-accurate membership tests with SISMEMBER. This approach requires more (RAM) resources as you're storing each processed element, but unless your queue is really huge that shouldn't be a problem for a modest Redis instance. To keep memory consumption under control, switch between Sets according to the date and set an expiry on the Set keys (another approach is to use a single Sorted Set and manually remove old items from it by keeping their timestamp in the score).
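A sketch of that date-rotated Set approach, under the same redis-py assumption (the key prefix and the seven-day window are illustrative, not from the answer):

```python
import datetime
import redis

r = redis.Redis(decode_responses=True)

def is_duplicate(record_id: str) -> bool:
    # Note: this only deduplicates within one day's Set; catching a
    # duplicate that arrives on a later day would need an extra
    # SISMEMBER check against the previous days' keys.
    key = 'processed:' + datetime.date.today().isoformat()
    # SADD returns how many members were actually added, so it doubles
    # as a 100%-accurate membership test.
    if r.sadd(key, record_id) == 0:
        return True
    r.expire(key, 7 * 86400)  # expire the whole Set after a week
    return False
```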
In general, in distributed systems you have to choose between processing items either:
at most once
at least once
Processing something exactly once would be convenient; however, this is generally impossible.
That being said there could be acceptable workarounds for your specific use case, and as you suggest storing the items already processed could be an acceptable solution.
Be aware though that PFADD uses a HyperLogLog, which is fast and scales but only approximates the count of items, so in this case I do not think it is what you want.
However, if you are fine with a small probability of error, the most appropriate data structure here would be a Bloom filter (as described here for Redis), which can be implemented in a very memory-efficient way.
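A tiny Bloom filter can be sketched over a Redis bitmap with SETBIT/GETBIT (an illustration of the approach, not the linked implementation; the filter size and hash count below are arbitrary assumptions):

```python
import hashlib
import redis

r = redis.Redis()
M = 2 ** 20   # bits in the filter
K = 4         # number of hash functions

def _positions(item: bytes):
    # Derive K independent bit positions from one cryptographic hash.
    for i in range(K):
        h = hashlib.sha256(bytes([i]) + item).digest()
        yield int.from_bytes(h[:8], 'big') % M

def bloom_add(key: str, item: bytes) -> None:
    for pos in _positions(item):
        r.setbit(key, pos, 1)

def bloom_maybe_contains(key: str, item: bytes) -> bool:
    # False means definitely not seen; True means probably seen.
    return all(r.getbit(key, pos) for pos in _positions(item))
```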
A simple, efficient, and recommended solution would be to use a plain Redis key (for instance a hash) storing a boolean-like value ("0", "1" or "true", "false"), for instance with HSET, or SET with the NX option. You could also put it under a namespace if you wish to. It has the added benefit that keys can be expired.
It would save you from using a set (not the SET command, but rather the SINTER, SUNION commands), which doesn't necessarily work well with Redis Cluster if you want to scale to more than one node. SISMEMBER is still fine though (but lacks some features of hashes, such as time to live).
If you use a hash, I would also advise you to pick a hash function that has a lower chance of collisions than md5 (a collision means that two different objects end up with the same hash).
An alternative approach to hashing would be to assign a UUID to every item when putting it in the queue (or a squuid if you want to embed some time information).
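A minimal sketch of the SET ... NX variant, assuming redis-py; the key prefix, the TTL, and the choice of sha256 (per the collision advice above) are illustrative:

```python
import hashlib
import redis

r = redis.Redis(decode_responses=True)

def try_claim(record: bytes, ttl_seconds: int = 86400) -> bool:
    digest = hashlib.sha256(record).hexdigest()  # fewer collisions than md5
    # SET key value NX succeeds only for the first writer, so exactly one
    # consumer "claims" a given record; EX keeps memory bounded.
    return r.set('dedup:' + digest, '1', nx=True, ex=ttl_seconds) is True
```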

How does Redis check if an element is a member of a set or not?

I have read online that Redis can say whether an element is a member of a set in O(1) time. I want to know how Redis does this. What algorithm does Redis use to achieve this?
A Redis Set is implemented internally in one of two ways: an intset or a hashtable. The intset is a special optimization for integer-only sets and uses the intsetSearch function to search the set. This function, however, uses a binary search, so it's technically O(log N). However, since the cardinality of intsets is capped at a constant (the set-max-intset-entries configuration directive), we can assume O(1) accurately reflects the complexity here.
A hashtable is used for a lot of things in Redis, including the implementation of Sets. It uses a hash function on the key to map it into a table (array) of entries - checking whether the hashed key value is in the array is straightforwardly done in O(1) in dictFind. The elements under each hashed key are stored as a linked list, so traversing it is technically O(N), but given the hash function's extremely low probability of collisions (hmm, need some sort of citation here?) these lists are extremely short, so we can safely assume it is effectively O(1).
Because of the above, SISMEMBER's claim of being O(1) in terms of computational complexity is valid.
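Both encodings are easy to observe (a sketch assuming redis-py and a local server; recent Redis versions may report listpack instead of hashtable for small string sets):

```python
import redis

r = redis.Redis(decode_responses=True)
r.delete('nums', 'strs')

r.sadd('nums', 1, 2, 3)
print(r.object('encoding', 'nums'))   # 'intset': integers only, binary-searched
print(r.sismember('nums', 2))         # True

r.sadd('strs', 'a', 'b', 'c')
print(r.object('encoding', 'strs'))   # 'hashtable' (or 'listpack' on newer servers)
print(r.sismember('strs', 'b'))       # True, via a hash lookup
```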

Using Redis sorted sets, what is the time complexity of certain special operations?

The Redis documentation states that insert and update operations on sorted sets are O(log(n)).
On this question they specify more details about the underlying data structure, the skip list.
However, there are a few special cases that depend on the Redis implementation, with which I'm not familiar:
1. Adding at the head or tail of the sorted set will probably not be an O(log(n)) operation but O(1), right? This question seems to agree, with reservations.
2. Updating the score of a member, even if the ordering doesn't change, is still O(log(n)), either because you take the element out and insert it again with the slightly different score, or because you have to check that the ordering doesn't change, in which case the difference between an insert and a score update is only a constant number of operations. Right? I really hope I'm wrong in this case.
Any insights will be most welcome.
Note: skip lists are used once the sorted set grows above a certain size (the zset-max-ziplist-entries directive); below that size, a ziplist is used.
Re. the 1st question - I believe it would still be O(log(n)): a skip list is searched like a balanced tree, so there's no assurance about where the head/tail insertion points are.
Re. the 2nd question - according to the source, changing the score is implemented by removing and re-adding the member: https://github.com/antirez/redis/blob/209f266cc534471daa03501b2802f08e4fca4fe6/src/t_zset.c#L1233 & https://github.com/antirez/redis/blob/209f266cc534471daa03501b2802f08e4fca4fe6/src/t_zset.c#L1272
In a skip list, when you insert a new element at the head or tail, you still need to update O(log n) levels: the previous head or tail node can have O(log n) levels, each with pointers that need to be updated.
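A toy skip list insert in Python illustrates the point; this is a sketch of the classic algorithm, not Redis' actual t_zset.c code (the 0.25 promotion probability matches the constant Redis uses):

```python
import random

MAX_LEVEL = 16
P = 0.25  # probability of promoting a node one level up

class Node:
    def __init__(self, score, level):
        self.score = score
        self.forward = [None] * level

class SkipList:
    def __init__(self):
        self.head = Node(float('-inf'), MAX_LEVEL)
        self.level = 1

    def random_level(self):
        lvl = 1
        while random.random() < P and lvl < MAX_LEVEL:
            lvl += 1
        return lvl

    def insert(self, score):
        update = [self.head] * MAX_LEVEL
        x = self.head
        # Descend from the top level: ~O(log n) steps even when the
        # insertion point turns out to be the head or the tail.
        for i in range(self.level - 1, -1, -1):
            while x.forward[i] and x.forward[i].score < score:
                x = x.forward[i]
            update[i] = x
        lvl = self.random_level()
        self.level = max(self.level, lvl)
        node = Node(score, lvl)
        for i in range(lvl):  # splice in at every one of the node's levels
            node.forward[i] = update[i].forward[i]
            update[i].forward[i] = node
```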
Already answered by #itamar-haber

Why is there no LINCR for Redis?

I wonder why Redis has no command to increment an element of a list.
You can increment a key's value with INCR, you can use HINCRBY to increment a field in a hash, and you can use ZINCRBY to increment the score of a member of a sorted set. But not an element of a list.
This puzzles me. Why not?
What was the thinking behind this decision? If lists are "not supposed to be used like this", then why? Do they work in a very different way from sets? Then what's the big difference?
The big difference is that there is no way to efficiently access a given item in a Redis list. Lists are implemented as doubly linked lists (for big lists) or completely serialized (the ziplist optimization, for small lists). By comparison, hashes and sorted sets are implemented using a hash table, which allows O(1) amortized complexity for item access.
So if such an increment command existed for lists, its complexity would be O(N). Not very attractive for just an increment.
Note that if you need such a feature, you can easily implement it yourself with a server-side Lua script by calling LINDEX and LSET, as sketched below.
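A hedged sketch of that workaround via EVAL from redis-py; the script name, key, and index are illustrative, and the operation is still O(N) to reach the index, per the answer above:

```python
import redis

r = redis.Redis(decode_responses=True)

# Atomic server-side "LINCRBY": read the element, add, write it back.
LINCRBY = """
local v = redis.call('LINDEX', KEYS[1], ARGV[1])
if not v then return false end
v = tonumber(v) + tonumber(ARGV[2])
redis.call('LSET', KEYS[1], ARGV[1], v)
return v
"""

r.delete('mylist')
r.rpush('mylist', 10, 20, 30)
print(r.eval(LINCRBY, 1, 'mylist', 1, 5))  # increments index 1: 20 -> 25
```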

Redis Internals - LRU Implementation For Sampling

Does anyone know about the internals of Redis' LRU-based eviction/deletion?
How does Redis ensure that the older (less used) keys are deleted first (in case we do not have volatile keys and we are not setting TTL expiration)?
I know for sure that Redis has a configuration parameter, maxmemory-samples, that governs the sample size it uses for removing keys - so if you set a sample size of 10, it samples 10 keys and removes the oldest among them.
What I don't know is whether it samples these keys completely randomly, or whether it somehow has a mechanism that allows it to sample from an equivalent of an "older / less used" generation?
This is what I found at antirez.com/post/redis-as-LRU-cache.html: the whole point of using a "sample three" algorithm is to save memory. I think this is much more valuable than precision, especially since these randomized algorithms are rarely well understood. An example: sampling with just three objects will expire 666 objects out of a dataset of 999 with an error rate of only 14% compared to the perfect LRU algorithm. And in the 14% of the remaining there are hardly elements that are in the range of very used elements. So the memory gain will pay for the precision without doubts.
So although Redis samples randomly (meaning this is not actual LRU but an approximation of it), the accuracy is relatively high, and increasing the sampling size increases it further. However, if someone needs exact LRU (with zero tolerance for error), Redis may not be the right choice.
Architecture, as they say, is about tradeoffs, so use this (Redis LRU) approach to trade off accuracy for raw performance.
Since v3.0.0 (2014) the LRU algorithm uses a pool of 15 keys, populated with the best candidates out of the different samplings of N keys (where N is defined by maxmemory-samples).
Every time a key needs to be evicted, N new keys are selected randomly and checked against the pool. If they're better candidates (older keys), they're added to it, while the worst candidates (the most recent keys) are taken out, keeping the pool at a constant size of 15 keys.
At the end of the round, the best eviction candidate is selected from the pool.
Source: code and comments in the evict.c file of the Redis source code
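A toy sketch of that pool logic in Python (illustrative only, not the actual evict.c code; it also glosses over stale timestamps for re-accessed keys):

```python
import random

POOL_SIZE = 15   # pool size quoted in the answer above
SAMPLES = 5      # stand-in for the maxmemory-samples directive

def evict_one(store: dict, last_access: dict, pool: list) -> str:
    # Drop pool entries whose keys were already evicted or deleted.
    pool[:] = [(t, k) for (t, k) in pool if k in store]
    # Sample N random keys and merge them into the candidate pool.
    sampled = random.sample(list(store), min(SAMPLES, len(store)))
    pool.extend((last_access[k], k) for k in sampled)
    pool.sort()                 # oldest access times first
    del pool[POOL_SIZE:]        # keep the pool at a constant size
    _, victim = pool.pop(0)     # best candidate = least recently used
    del store[victim]
    return victim
```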