What happens when Int64 max value is exceeded with Redis INCR

Simple enough: I am using Redis INCR to ensure atomic increments of a counter. The counter has an indeterminate start value less than Int64.MaxValue.
Does Redis reset the value when it reaches Int64.MaxValue, or does it throw an error?
I read the documentation but it does not say what happens, and I really do want to maintain the atomic nature at rollover.

It will throw an error. A small experiment for your use case, setting the value to Int64.MaxValue (2^63 - 1) and incrementing it:
127.0.0.1:6379> set value 9223372036854775807
OK
127.0.0.1:6379> incr value
(error) ERR increment or decrement would overflow
127.0.0.1:6379>
Redis stores such values as signed 64-bit integers, so the maximum is 2^63 - 1, and it returns an error if an increment would exceed that limit. Depending on the command, the message is either an "out of range" error or an "overflow" error.
On error, you can catch that exception and reset the value in your application logic.
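The application-side handling could look like the following Python sketch. With redis-py you would catch redis.exceptions.ResponseError around the INCR call; here a minimal in-memory stand-in (FakeCounter) mimics the overflow so the rollover logic can be shown end to end. incr_with_rollover and FakeCounter are illustrative names, not part of any library.

```python
INT64_MAX = 2**63 - 1  # 9223372036854775807, the largest value INCR can reach

class FakeCounter:
    """In-memory stand-in for a Redis client, mimicking INCR's
    signed 64-bit overflow error (illustrative only)."""
    def __init__(self):
        self.store = {}

    def set(self, key, value):
        self.store[key] = int(value)

    def incr(self, key):
        value = self.store.get(key, 0) + 1
        if value > INT64_MAX:
            # redis-py would raise redis.exceptions.ResponseError here
            raise OverflowError("increment or decrement would overflow")
        self.store[key] = value
        return value

def incr_with_rollover(client, key):
    """Increment, resetting to 0 when the counter would overflow."""
    try:
        return client.incr(key)
    except OverflowError:  # with redis-py: redis.exceptions.ResponseError
        client.set(key, 0)
        return 0

client = FakeCounter()
client.set("value", INT64_MAX)
print(incr_with_rollover(client, "value"))  # rolls over to 0
```

Note that the catch-and-reset pair is not atomic: another client could increment between the error and the SET, so under heavy contention you would wrap the reset in a Lua script instead.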

Related

In Redis, how can I guarantee fetching N items from a list in a multi-client environment?

Assume that there is a key K in Redis that is holding a list of values.
Many producer clients are adding elements to this list, one by one using LPUSH or RPUSH.
On the other hand, another set of consumer clients are popping elements from the list, though with a restriction: consumers will only attempt to pop N items if the list contains at least N items. This ensures that a consumer will hold N items in hand after the popping process finishes.
If the list contains fewer than N number of items, consumers shouldn't even attempt to pop elements from the list at all, because they won't have at least N items at the end.
If there is only one consumer client, it can simply run the LLEN command to check whether the list contains at least N items, then pop N of them using LPOP/RPOP.
However, if there are many consumer clients, there can be a race condition: after reading LLEN >= N, they can pop items from the list simultaneously. We might then end up in a state where each consumer pops fewer than N elements and no items are left in the list.
Using a separate locking system seems to be one way to tackle this issue, but I was curious if this type of operation can be done only using Redis commands, such as Multi/Exec/Watch etc.
I checked the MULTI/EXEC approach, and it seems transactions do not support rollback. Also, every command executed inside a MULTI/EXEC transaction returns 'QUEUED', so I cannot tell whether the N LPOP calls I queue in the transaction will all return elements.
So all you need is an atomic way to check the list length and pop conditionally.
This is what Lua scripts are for; see the EVAL command.
Here is a Lua script to get you started:
local n = tonumber(ARGV[1])
if redis.call('LLEN', KEYS[1]) >= n then
    local res = {}
    for i = 1, n do
        res[i] = redis.call('LPOP', KEYS[1])
    end
    return res
else
    return false
end
Use as
EVAL "local n = tonumber(ARGV[1]) if redis.call('LLEN', KEYS[1]) >= n then local res = {} for i = 1, n do res[i] = redis.call('LPOP', KEYS[1]) end return res else return false end" 1 list 3
This pops exactly ARGV[1] elements (the number after the key name) from the list, but only if the list has at least that many elements; otherwise it pops nothing and returns false.
Lua scripts run atomically, so there is no race condition between clients.
As the OP pointed out in the comments, there is a risk of data loss, e.g. if power fails between the LPOPs and the script returning its result. You can use RPOPLPUSH instead of LPOP, storing the popped elements on a temporary list. Then you also need some tracking, deletion, and recovery logic. Note that your client could also die, leaving some elements unprocessed.
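The RPOPLPUSH pattern can be sketched like this in Python, with in-memory deques standing in for the two Redis lists. reliable_pop and ack are illustrative names, not Redis commands; the point is that a popped element is never held only in client memory.

```python
from collections import deque

def reliable_pop(main, processing):
    """Move one element from the main list to a processing list,
    mimicking RPOPLPUSH: a crash between pop and processing leaves
    the element recoverable on `processing` rather than lost."""
    if main:
        item = main.pop()            # like RPOP from the main list
        processing.appendleft(item)  # like LPUSH onto the processing list
        return item
    return None

def ack(processing, item):
    """After successful processing, remove the element, like LREM."""
    processing.remove(item)

main = deque(["a", "b", "c"])
processing = deque()
item = reliable_pop(main, processing)  # "c" moves onto `processing`
ack(processing, item)                  # processing is empty again
```

A recovery task would periodically re-queue elements that have sat on the processing list too long, covering the dead-client case mentioned above.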
You may want to take a look at Redis Streams. This data structure is ideal for distributing load among many clients. When used with Consumer Groups, it has a pending entries list (PEL) that acts as that temporary list.
Clients then issue an XACK to remove elements from the PEL once processed, so you are also protected from client failures.
Redis Streams are very useful for solving the complex problem you are trying to solve, and there is a free course about them.
You could use a prefetcher.
Instead of each consumer greedily picking items from the queue, which leads to the problem of 'water, water everywhere, but not a drop to drink', you could have a prefetcher that builds a packet of size N (6 in this example). When the prefetcher has a full packet, it places it in a separate packet queue (another Redis key holding a list of packets) and pops the items from the main queue in a single transaction. Essentially, what you wrote:
If there is only 1 Consumer client, the client can simply run LLEN
command to check if the list contains at least N items, and subtract N
using LPOP/RPOP.
If the prefetcher doesn't have a full packet, it does nothing and keeps waiting for the main queue size to reach 6.
On the consumer side, they will just query the prefetched packets queue and pop the top packet and go. It is always 1 pre-built packet (size=6 items). If there are no packets available, they wait.
On the producer side, no changes are required. They can keep inserting into the main queue.
BTW, there can be more than one prefetcher task running concurrently and they can synchronize access to the main queue between themselves.
Implementing a scalable prefetcher
Prefetcher implementation could be described using a buffet table analogy. Think of the main queue as a restaurant buffet table where guests can pick up their food and leave. Etiquette demands that the guests follow a queue system and wait for their turn. Prefetchers also would follow something analogous. Here's the algorithm:
Algorithm Prefetch
Begin
    while true
        check = main queue has 6 items or more  // this is a queue read; no locks required
        if check == true
            obtain an exclusive lock on the main queue
            if lock successful
                begin a transaction
                    create a packet and fill it with the top 6 items
                    popped from the queue
                    add the packet to the prefetch queue
                    if packet added to prefetch queue successfully
                        commit the transaction
                    else
                        rollback the transaction
                    end if
                release the lock
            else
                // someone else holds the exclusive lock; just wait
                sleep for xx millisecs
            end if
        end if
    end while
End
I am just showing an infinite polling loop here for simplicity, but this could be implemented with a pub/sub pattern through Redis keyspace notifications. The prefetcher then just waits for a notification that the main queue key has received an LPUSH, and executes the logic inside the while loop body above.
There are other ways you could do this. But this should give you some ideas.
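One pass of the prefetcher algorithm above can be sketched in Python. In-memory deques stand in for the two Redis lists and a threading.Lock for a distributed lock, so this is a sketch of the control flow, not a Redis implementation; prefetch_once is an illustrative name.

```python
import threading
from collections import deque

PACKET_SIZE = 6

def prefetch_once(main_queue, packet_queue, lock):
    """One pass of the prefetcher loop: under an exclusive lock,
    move a full packet from the main queue to the packet queue."""
    with lock:
        # re-check the length while holding the lock, as in the algorithm
        if len(main_queue) >= PACKET_SIZE:
            packet = [main_queue.popleft() for _ in range(PACKET_SIZE)]
            packet_queue.append(packet)
            return packet
    return None  # not enough items yet; caller sleeps or awaits a notification

main_queue = deque(range(8))
packet_queue = deque()
lock = threading.Lock()
prefetch_once(main_queue, packet_queue, lock)  # moves items 0..5 into one packet
```

Consumers then pop whole packets from packet_queue; because each packet is placed atomically, no consumer can ever observe a partial packet.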

TYPE option does not work for REDIS SCAN command

There is a SCAN command in Redis. It has a TYPE option which returns only the keys matching a given type. When I run the set of commands provided in the example at https://redis.io/commands/scan#the-type-option, the last command, SCAN 0 TYPE zset, fails with ERR syntax error.
I have prepared objects with the list and zset types, but neither of them works; I always get the error, even when I try variations of my own.
My question is: does SCAN actually support the TYPE option? I found this issue, https://github.com/antirez/redis/issues/3323, but it is not closed, even though the Redis docs describe the option.
Redis version:
redis> INFO
# Server
redis_version:5.0.5
redis> RPUSH list_object "list_element"
redis> TYPE list_object
list
redis> ZADD zset_object 1 "zset_element"
redis> TYPE zset_object
zset
redis> SCAN 0 TYPE zset
ERR syntax error
redis> SCAN 0 type list
ERR syntax error
The code for the TYPE option is still in the unstable branch and has not been released in a stable version of Redis yet. For now, you cannot use that option: you have to wait for a release that supports this feature, or take the risk of running the unstable branch.
However, you can also achieve this goal on the client side:
Use the SCAN command to iterate the key space
For each key, call the TYPE command and filter on the client side.
In order to make this operation faster, you can wrap the logic into a Lua script.
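The client-side filter can be sketched in Python. With redis-py you would iterate client.scan_iter() and call client.type(key); here keys_of_type accepts any object exposing those two methods, and a small in-memory FakeClient (an illustrative stand-in, not a real library class) is used so the sketch runs on its own.

```python
def keys_of_type(client, wanted):
    """Client-side equivalent of SCAN ... TYPE: iterate all keys and
    keep only those whose TYPE matches. `client` is any object with
    redis-py-style scan_iter() and type() methods."""
    return [key for key in client.scan_iter() if client.type(key) == wanted]

class FakeClient:
    """Minimal in-memory stand-in mapping key -> type name."""
    def __init__(self, types):
        self.types = types
    def scan_iter(self):
        return iter(self.types)
    def type(self, key):
        return self.types[key]

client = FakeClient({"list_object": "list", "zset_object": "zset"})
print(keys_of_type(client, "zset"))  # ['zset_object']
```

Note this issues one TYPE round trip per key, which is why the answer suggests wrapping the loop in a Lua script when the keyspace is large.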
UPDATE
Redis 6.0 already supports this feature.

Is there a good way to pop members from a Redis Sorted Set?

Is there a good way to pop members from a Redis Sorted Set, just like the LPOP API does for the List?
What I have figured out for popping members from a Sorted Set is using ZRANGE + ZREM; however, that is not thread-safe and needs a distributed lock when multiple threads on different hosts access the set at the same time.
Please suggest a better way to pop members from the Sorted Set, if there is one.
In Redis 5.0 or above, you can use [B]ZPOP{MIN|MAX} key [count] for this scenario.
The MIN version takes the item(s) with the lowest scores; MAX takes the item(s) with the highest scores. count defaults to 1, and the B prefix blocks until the data is available.
ZPOPMIN
ZPOPMAX
BZPOPMIN
BZPOPMAX
Before Redis 5.0, you can write a Lua script to do the job: wrap these two commands in a single script. Redis ensures that a Lua script runs atomically.
local key = KEYS[1]
local result = redis.call('ZRANGE', key, 0, 0)
local member = result[1]
if member then
    redis.call('ZREM', key, member)
    return member
else
    return nil
end

ZREM on Redis Sorted Set

What will happen if 2 workers call ZREM on the same element of a sorted set at the same time? Will it return true to the worker which actually removes the element and false to the other to indicate it doesn't exist or will it return true to both? In other words is ZREM atomic internally?
Redis is (mostly) single-threaded, so all its operations are atomic, and ZREM is no exception. Your question, however, is actually about doing a "ZPOP" atomically, and there are two possible ways to do that.
Option 1: WATCH/MULTI/EXEC
In pseudo code, this is how an optimistic transaction would look:
:start
WATCH somekey
member = ZREVRANGE somekey 0 0
MULTI
ZREM somekey member
if not EXEC goto :start // or quit trying
Option 2: Lua script
zpop.lua:
local member = redis.call('ZREVRANGE', KEYS[1], 0, 0)[1]
if not member then return 0 end
return redis.call('ZREM', KEYS[1], member)
redis-cli --eval zpop.lua somekey
Note - The Importance of Atomicity
In case you decide not to use these mechanisms that ensure atomicity, you'll be running into issues sooner than you think. Here's a possible scenario:
Process A Redis Server Process B
ZREVRANGE ------------>
<------------- foo
<--------- ZADD +inf bar
OK --------->
ZREM foo -------------->
<-------------- 1
In the above example, after A fetches foo, B inserts bar with an absurdly high score so it becomes the top element in the set. A, however, will continue and remove the previously-at-the-top foo.

Return value of PFADD in Redis

According to Redis documentation on PFADD command:
Return value
Integer reply, specifically:
1 if at least 1 HyperLogLog internal register was altered. 0 otherwise.
Can anyone explain the following two points?
Does this mean PFADD will return "1" only if the counter was really incremented by 1? Is it guaranteed that after running PFADD, the new PFCOUNT will be PFCOUNT(before) + the output of PFADD? In other words, can a single-threaded client keep track of the count using only the output of PFADD?
When PFADD returns "0" or "1", do they translate to a "cache hit" and a "cache miss" respectively?
Does this mean PFADD will return "1" if the counter was really incremented by 1?
No.
The return value is purely boolean, i.e. it only indicates whether or not the underlying HyperLogLog was modified.
Is it guaranteed that after running PFADD, the new PFCOUNT will be PFCOUNT(before) + output of PFADD?
No, since the output of PFADD does not represent a count (see above).
That being said, you may want to use the output of PFADD as a trigger to call
PFCOUNT again, as explained by antirez in the original blog post:
This is interesting for the user since as we add elements the
probability of an element actually modifying some register decreases.
The fact that the API is able to provide hints about the fact that a
new cardinality is available allows for programs that continuously add
elements and retrieve the approximated cardinality only when a new one
is available.
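The refresh-on-change pattern antirez describes can be sketched in Python. ToyHLL is a toy exact-counting stand-in (an ordinary set), used only because its add() has the same return contract as PFADD: 1 when the structure changed, 0 otherwise. A real HyperLogLog is approximate, so a return of 1 does not mean the estimate grew by exactly 1.

```python
class ToyHLL:
    """Toy exact-counting stand-in for a HyperLogLog (illustrative).
    Only the return-value contract matters: pfadd() returns 1 when
    the structure changed, 0 otherwise."""
    def __init__(self):
        self._items = set()

    def pfadd(self, element):
        before = len(self._items)
        self._items.add(element)
        return 1 if len(self._items) != before else 0

    def pfcount(self):
        return len(self._items)

hll = ToyHLL()
cached = 0
for element in ["a", "b", "a", "c", "b"]:
    if hll.pfadd(element):        # refresh the cached cardinality only
        cached = hll.pfcount()    # when PFADD reports a change
print(cached)  # 3
```

With a real Redis HyperLogLog, this pattern saves the extra PFCOUNT round trip on the (increasingly common) adds that do not alter any register.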
Finally:
When PFADD returns "0" or "1", do they translate to a "cache hit" and a "cache miss" respectively?
No. As detailed above, it only indicates that a new cardinality is available.