I am trying to use the Redis rate-limiting pattern specified at https://redis.io/commands/incr under "Pattern: Rate limiter 1". But how can I scale this when I want to do rate limiting across multiple servers? Say I have a service deployed on 5 servers behind a load balancer, and I want the total requests per API key across all 5 servers to stay under x/sec. With the Redis pattern I mentioned, the problem is that if the rate limiter itself runs on multiple servers, two different requests hitting two different rate-limiter servers can both "get key" at the same time and read the same value before either updates it, which can allow extra requests through. How can I handle this? I could obviously put the GET in a MULTI block, but I think that would slow things down a lot.
INCR replies with the updated value, so it can be used as both a write and a read command.
FUNCTION LIMIT_API_CALL(ip)
    ts = CURRENT_UNIX_TIME()
    keyname = ip+":"+ts
    MULTI
        INCR(keyname)
        EXPIRE(keyname,10)
    EXEC
    current = RESPONSE_OF_INCR_WITHIN_MULTI
    IF current > 10 THEN
        ERROR "too many requests per second"
    ELSE
        PERFORM_API_CALL()
    END
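Because the INCR, EXPIRE, and limit check can also run entirely server-side, the same pattern ports to a small Lua script that stays atomic even with many app servers sharing one Redis. A minimal sketch, with the key built by the caller and the limit passed as an argument (both illustrative):

-- KEYS[1] = "<api key>:<current unix second>", built by the caller
-- ARGV[1] = allowed requests per second, ARGV[2] = key TTL in seconds
local current = redis.call('INCR', KEYS[1])
if current == 1 then
    redis.call('EXPIRE', KEYS[1], ARGV[2])
end
if current > tonumber(ARGV[1]) then
    return 0 -- too many requests this second
end
return 1 -- allowed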
You need to run a Lua script that will check rate limiting and increase/decrease/reset the counter(s).
You can find a simple example in the Laravel framework here:
https://github.com/laravel/framework/blob/8.x/src/Illuminate/Redis/Limiters/DurationLimiter.php
/**
 * Get the Lua script for acquiring a lock.
 *
 * KEYS[1] - The limiter name
 * ARGV[1] - Current time in microseconds
 * ARGV[2] - Current time in seconds
 * ARGV[3] - Duration of the bucket
 * ARGV[4] - Allowed number of tasks
 *
 * @return string
 */
protected function luaScript()
{
    return <<<'LUA'
local function reset()
    redis.call('HMSET', KEYS[1], 'start', ARGV[2], 'end', ARGV[2] + ARGV[3], 'count', 1)
    return redis.call('EXPIRE', KEYS[1], ARGV[3] * 2)
end

if redis.call('EXISTS', KEYS[1]) == 0 then
    return {reset(), ARGV[2] + ARGV[3], ARGV[4] - 1}
end

if ARGV[1] >= redis.call('HGET', KEYS[1], 'start') and ARGV[1] <= redis.call('HGET', KEYS[1], 'end') then
    return {
        tonumber(redis.call('HINCRBY', KEYS[1], 'count', 1)) <= tonumber(ARGV[4]),
        redis.call('HGET', KEYS[1], 'end'),
        ARGV[4] - redis.call('HGET', KEYS[1], 'count')
    }
end

return {reset(), ARGV[2] + ARGV[3], ARGV[4] - 1}
LUA;
}
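For illustration, invoking this script directly (outside Laravel) would look roughly as follows; the argument order comes from the docblock above, and the key name and numbers are made up:

EVAL "<the script above>" 1 limiter:my-api 1619028226000000 1619028226 60 100

The reply mirrors the script's return value: whether the call was allowed, when the current bucket ends, and how many calls remain in it.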
The accepted answer is incorrect. It leads to an inaccurate counter value. Let's look at the following example:
5 clients execute 5 concurrent requests to Redis. The current state of the counter is 10, and the limit is also 10.
The 5 concurrent requests will increment the counter to 15 while declining each of the requests. Instead, the value should remain 10 to reflect the correct number of times clients were actually "allowed".
Solution:
We effectively need to combine two separate atomic operations into a single atomic operation. That's where a Lua script comes in: it essentially extends Redis itself with another code path that executes "a get, and then a set" atomically. And it can do so because Redis executes scripts single-threaded.
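A minimal sketch of such a script, which only increments when the counter is still under the limit, so concurrent rejections can never push the stored value past 10 (key name, limit, and window are illustrative):

-- KEYS[1] = counter key, ARGV[1] = limit, ARGV[2] = window length in seconds
local current = tonumber(redis.call('GET', KEYS[1]) or '0')
if current >= tonumber(ARGV[1]) then
    return 0 -- rejected; the stored counter stays at the limit
end
current = redis.call('INCR', KEYS[1])
if current == 1 then
    redis.call('EXPIRE', KEYS[1], ARGV[2])
end
return current -- allowed; this is the new count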
I have a Lua script that fetches a SET data type and iterates through its members. This set contains a list of other SET data types (which I'll call the secondary sets for clarity).
For context, the secondary SETs are lists of cache keys. The majority of cache keys within these secondary sets reference cache items that no longer exist, and these lists grow exponentially in size, so my goal is to check each cache key within each secondary set to see if it has expired and, if so, remove it from the secondary set, thus reducing the size and conserving memory.
local pending = redis.call('SMEMBERS', KEYS[1])

for i, key in ipairs(pending) do
    redis.call('SREM', KEYS[1], key)
    local keys = redis.call('SMEMBERS', key)
    local expired = {}

    for j, taggedKey in ipairs(keys) do
        local ttl = redis.call('ttl', taggedKey)
        if ttl == -2 then
            table.insert(expired, taggedKey)
        end
    end

    if #expired > 0 then
        redis.call('SREM', key, unpack(expired))
    end
end
The script above works perfectly until one of the secondary set keys belongs to a different hash slot; the error I receive is:
Lua script attempted to access keys of different hash slots
Looking through the docs, I noticed that Redis allows this to be bypassed with the EVAL flag allow-cross-slot-keys, so following that example I updated my script to the following:
#!lua flags=allow-cross-slot-keys
local pending = redis.call('SMEMBERS', KEYS[1])

for i, key in ipairs(pending) do
    redis.call('SREM', KEYS[1], key)
    local keys = redis.call('SMEMBERS', key)
    local expired = {}

    for j, taggedKey in ipairs(keys) do
        local ttl = redis.call('ttl', taggedKey)
        if ttl == -2 then
            table.insert(expired, taggedKey)
        end
    end

    if #expired > 0 then
        redis.call('SREM', key, unpack(expired))
    end
end
I'm now strangely left with the error:
"ERR Error compiling script (new function): user_script:1: unexpected symbol near '#'"
Any help is appreciated; I have reached out to the pocket burner that is Redis Enterprise but am still awaiting a response.
For awareness, this is to clean up the shoddy Laravel Redis implementation where they create these sets to manage tagged cache but never clean them up. Over time they amount to gigabytes of wasted space, and if your eviction policy is allkeys-lfu then all your real cache will be pushed out in favour of this messy garbage Laravel leaves behind, leaving you with a worthless caching system or thousands of dollars out of pocket to increase RAM.
Edit: It would seem we're on Redis 6.2, and those flags are Redis 7+. It's unlikely there is a solution suitable for 6.2, but if there is, please let me know.
Let's suppose we have a list of tasks to be executed and some workers that pop items from that list.
If a worker crashes unexpectedly before finishing the execution of the task then that task is lost.
What kind of mechanism could prevent that so we can reprocess abandoned tasks?
You need to use a ZSET to solve this issue.

Pop operation:
- Add the item to the ZSET, with its expiry time as the score
- Remove it from the list

Ack operation:
- Remove the item from the ZSET

Worker:
- Run a scheduled worker that moves items from the ZSET back to the list once they have expired (a sketch follows after the links below)
You can read in detail how I did this in Rqueue: https://medium.com/@sonus21/introducing-rqueue-redis-queue-d344f5c36e1b
GitHub code: https://github.com/sonus21/rqueue
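For illustration, the scheduled-worker step could be a Lua script along these lines; the key names and time argument are assumptions, not Rqueue's actual implementation:

-- KEYS[1] = in-flight ZSET, KEYS[2] = task list
-- ARGV[1] = current time; members scored at or below it are expired
local expired = redis.call('ZRANGEBYSCORE', KEYS[1], '-inf', ARGV[1])
for _, task in ipairs(expired) do
    redis.call('ZREM', KEYS[1], task)
    redis.call('RPUSH', KEYS[2], task)
end
return #expired -- number of tasks moved back for reprocessing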
There is no EXPIRE for set or zset members, and there is no atomic operation to pop from a zset and push to a list. So I ended up writing this Lua script, which runs atomically.
First I add a task to the executing-tasks zset with a timestamp score ((new Date()).valueOf() in JavaScript):
ZADD executing-tasks 1619028226766 <task id>
Then I run the script:
EVAL [THE SCRIPT] 2 executing-tasks tasks 1619028196766
If the task is more than 30 seconds old, it will be sent to the tasks list. If not, it will be sent back to the executing-tasks zset.
Here is the script
local source = KEYS[1]
local destination = KEYS[2]
local min_score = tonumber(ARGV[1])

-- ZPOPMIN returns a flat { member, score } array, or an empty array
local popped = redis.call('zpopmin', source)

if #popped > 0 then
    local id = popped[1]
    -- the score comes back as a string, so convert before comparing
    local score = tonumber(popped[2])
    if score < min_score then
        redis.call('rpush', destination, id)
        return { "RESTORED", id }
    else
        redis.call('zadd', source, score, id)
        return { "SENT_BACK", id }
    end
end

return { "NOTHING_DONE" }
I've been playing around with Redis to keep track of the rate limit of an external API in a distributed system. I've decided to create a key for each route where a limit is present. The value of the key is how many requests I can still make until the limit resets, and the reset is done by setting the TTL of the key to when the limit will reset.
For that I wrote the following Lua script:
if redis.call("EXISTS", KEYS[1]) == 1 then
    local remaining = redis.call("DECR", KEYS[1])
    if remaining < 0 then
        local pttl = redis.call("PTTL", KEYS[1])
        if pttl > 0 then
            --[[
            -- We would exceed the limit if we were to do a call now, so send back that a limit exists (1),
            -- how much we would have exceeded the rate limit if we were to ignore it (remaining),
            -- and how long we need to wait in ms until we can try again (pttl)
            ]]
            return {1, remaining, pttl}
        elseif pttl == -1 then
            -- The key expired the instant after we checked that it existed, so delete it and say there is no rate limit
            redis.call("DEL", KEYS[1])
            return {0}
        elseif pttl == -2 then
            -- The key expired the instant after we decremented it. So just send back that there is no limit
            return {0}
        end
    else
        -- Great, we have a rate limit, but we did not exceed it yet
        return {1, remaining}
    end
else
    return {0}
end
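For illustration, the key would first be seeded from the external API's rate-limit response headers, and the script then guards every call; the key name and numbers below are made up:

SET ratelimit:some-route 100 PX 60000
EVAL "<the script above>" 1 ratelimit:some-route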
Since a watched key can expire in the middle of a MULTI transaction without aborting it, I assume the same is the case for Lua scripts. Therefore I put in the cases for when the TTL is -1 or -2.
After I wrote that script, I looked a bit more in depth at the EVAL command page and found out that a Lua script has to be a pure function.
In there it says:
The script must always evaluate the same Redis write commands with the same arguments given the same input data set. Operations performed by the script cannot depend on any hidden (non-explicit) information or state that may change as script execution proceeds or between different executions of the script, nor can it depend on any external input from I/O devices.
With this description I'm not sure if my function is a pure function or not.
After Itamar's answer I wanted to confirm that for myself, so I wrote a little Lua script to test it. The script creates a key with a 10ms TTL and checks the TTL until it's less than 0:
redis.call("SET", KEYS[1], "someVal","PX", 10)
local tmp = redis.call("PTTL", KEYS[1])
while tmp >= 0
do
tmp = redis.call("PTTL", KEYS[1])
redis.log(redis.LOG_WARNING, "PTTL:" .. tmp)
end
return 0
When I ran this script it never terminated; it just went on spamming my logs until I killed the Redis server. However, time doesn't stand still while the script runs: the TTL keeps counting down, it just stops at 0 instead of the key being removed.
So the key ages, it just never expires.
Since a watched key can expire in the middle of a MULTI transaction without aborting it, I assume the same is the case for Lua scripts. Therefore I put in the cases for when the TTL is -1 or -2.
AFAIR that isn't the case w/ Lua scripts - time kinda stops (in terms of TTL at least) when the script's running.
With this description I'm not sure if my function is a pure function or not.
Your script's great (without actually trying to understand what it does), don't worry :)
When I run the info command in redis-cli against a redis 3.2.4 server, it shows me this for expires:
expires=223518
However, when I then run a keys * command and ask for the ttl for each key, and only print out keys with a ttl > 0, I only see a couple hundred.
I thought that expires is a count of the number of expiring keys, but I am not even within an order of magnitude of that number.
Can someone clarify exactly what expires is meant to convey? Does this include both to-be-expired and previously expired but not yet evicted keys?
Update:
Here is how I counted the number of keys expiring:
task count_tmp_keys: :environment do
  redis = Redis.new(timeout: 100)
  keys = redis.keys '*'
  ct_expiring = 0

  keys.each do |k|
    ttl = redis.ttl(k)
    if ttl > 0
      ct_expiring += 1
      puts "Expiring: #{k}; ttl is #{ttl}; total: #{ct_expiring}"
      STDOUT.flush
    end
  end

  puts "Total expiring: #{ct_expiring}"
  puts "Done at #{Time.now}"
end
When I ran this script, it showed a total of 78 expiring keys.
When I run info, it says db0:keys=10237963,expires=224098,avg_ttl=0
Because 224098 is so much larger than 78, I am very confused. Is there perhaps a better way for me to obtain a list of all 225k expiring keys?
Also, how is it that my average ttl is 0? Wouldn't you expect it to be nonzero?
UPDATE
I have new information and a simple, 100% repro of this situation locally!
To repro: set up two Redis processes locally on your laptop. Make one a slave of the other. On the slave process, set the following:
config set slave-serve-stale-data yes
config set slave-read-only no
Now, connect to the slave (not the master) and run:
set foo 1
expire foo 10
After 10 seconds, you will no longer be able to access foo, but the info command will still show that you have 1 key expiring with an average ttl of 0.
Can someone explain this behavior?
expires contains the number of existing keys with a TTL that will expire, not including already-expired keys.
Example (extra information from the info command omitted for brevity):
127.0.0.1:6379> flushall
OK
127.0.0.1:6379> SETEX mykey1 1000 "1"
OK
127.0.0.1:6379> SETEX mykey2 1000 "2"
OK
127.0.0.1:6379> SETEX mykey3 1000 "3"
OK
127.0.0.1:6379> info
# Keyspace
db0:keys=3,expires=3,avg_ttl=992766
127.0.0.1:6379> SETEX mykey4 1 "4"
OK
127.0.0.1:6379> SETEX mykey5 1 "5"
OK
127.0.0.1:6379> info
# Keyspace
db0:keys=3,expires=3,avg_ttl=969898
127.0.0.1:6379> keys *
1) "mykey2"
2) "mykey3"
3) "mykey1"
127.0.0.1:6379>
Given that in your situation you are asking about key expiry on slaves, per https://github.com/antirez/redis/issues/2861:
keys on a slave are not actively expired, and thus the avg_ttl is
never calculated
And per https://groups.google.com/forum/#!topic/redis-db/NFTpdmpOPnc:
avg_ttl is never initialized on a slave and thus it can be what ever
arbitrary value resides in memory at that place.
Thus, it is to be expected that the info command behaves differently on slaves.
The expires field just reports the number of keys that will expire, not the time. Here is the source code from 3.2.4:
long long keys, vkeys;

keys = dictSize(server.db[j].dict);
vkeys = dictSize(server.db[j].expires);
if (keys || vkeys) {
    info = sdscatprintf(info,
        "db%d:keys=%lld,expires=%lld,avg_ttl=%lld\r\n",
        j, keys, vkeys, server.db[j].avg_ttl);
}
It just calculates the size of server.db[j].expires (note: j is the database index).
The function below deletes keys returned by SMEMBERS; they are not passed as EVAL arguments. Is this proper in a Redis Cluster?
def ClearLock():
    key = 'Server:' + str(localIP) + ':UserLock'
    script = '''
    local keys = redis.call('smembers', KEYS[1])
    local count = 0
    for k, v in pairs(keys) do
        redis.call('del', v) -- DEL is the actual command name; 'delete' does not exist
        count = count + 1
    end
    redis.call('del', KEYS[1])
    return count
    '''
    ret = redisObj.eval(script, 1, key)
You're right to be worried about using keys that aren't passed as EVAL arguments.
Redis Cluster won't guarantee that those keys are present on the node that's running the Lua script, and some of those delete commands will fail as a result.
One thing you can do is mark all those keys with a common hash tag. This gives you the guarantee that, whenever node rebalancing isn't in progress, keys with the same hash tag will be present on the same node. See the section on hash tags in the Redis Cluster spec: http://redis.io/topics/cluster-spec
(When you are doing cluster node rebalancing this script can still fail, so you'll need to figure out how you want to handle that.)
Perhaps use the local IP as the hash tag for all entries in that set. The main key could become:
key = 'Server:{' + str(localIP) + '}:UserLock'
Adding the {} around the IP in the string makes Redis read it as the hash tag.
You would also need to add that same hash tag, {<localIP>}, as part of the key for all entries you are going to later delete as part of this operation.
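To convince yourself that two keys sharing a tag land in the same slot, compare them with CLUSTER KEYSLOT (the IP and member key here are illustrative). Only the text between { and } is hashed, so both commands return the same slot number:

CLUSTER KEYSLOT Server:{10.0.0.5}:UserLock
CLUSTER KEYSLOT UserLock:{10.0.0.5}:user123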