Redis Lua script eval flags: user_script:1: unexpected symbol near '#'

I have a Lua script that fetches a SET data type and iterates through its members. This set contains a list of other SET data types (which I'll call the secondary sets for clarity).
For context, the secondary SETs are lists of cache keys. The majority of the cache keys within these secondary sets reference cache items that no longer exist, and the lists grow exponentially in size. My goal is to check each cache key within each secondary set to see if it has expired and, if so, remove it from the secondary set, thus reducing the size and conserving memory.
local pending = redis.call('SMEMBERS', KEYS[1])
for i, key in ipairs(pending) do
    redis.call('SREM', KEYS[1], key)
    local keys = redis.call('SMEMBERS', key)
    local expired = {}
    for i, taggedKey in ipairs(keys) do
        local ttl = redis.call('ttl', taggedKey)
        if ttl == -2 then
            table.insert(expired, taggedKey)
        end
    end
    if #expired > 0 then
        redis.call('SREM', key, unpack(expired))
    end
end
The script above works perfectly, until one of the secondary set keys maps to a different hash slot; the error I receive is:
Lua script attempted to access keys of different hash slots
Looking through the docs, I noticed that Redis allows this to be bypassed with the EVAL script flag allow-cross-slot-keys, so following that example I updated my script to the following:
#!lua flags=allow-cross-slot-keys
local pending = redis.call('SMEMBERS', KEYS[1])
for i, key in ipairs(pending) do
    redis.call('SREM', KEYS[1], key)
    local keys = redis.call('SMEMBERS', key)
    local expired = {}
    for i, taggedKey in ipairs(keys) do
        local ttl = redis.call('ttl', taggedKey)
        if ttl == -2 then
            table.insert(expired, taggedKey)
        end
    end
    if #expired > 0 then
        redis.call('SREM', key, unpack(expired))
    end
end
I'm now strangely left with the error:
"ERR Error compiling script (new function): user_script:1: unexpected symbol near '#'"
Any help appreciated; I have reached out to the pocket burner that is Redis Enterprise but am still awaiting a response.
For awareness, this is to clean up the shoddy Laravel Redis implementation, where these sets are created to manage tagged cache but never cleaned up. Over time they amount to gigabytes of wasted space, and if your eviction policy is allkeys-lfu then all of your cache will be pushed out in favour of this messy garbage Laravel leaves behind, leaving you with a worthless caching system or thousands of dollars out of pocket to increase RAM.
Edit: It would seem we're on Redis 6.2, and those script flags are Redis 7+, so it's unlikely there is a solution suitable for 6.2, but if there is please let me know.

Related

lua setting an expiration time using the redis command with 'setex' appears as a permanent key in openresty

Setting an expiration time with the redis setex command appears to leave a permanent key when I do this in OpenResty with a Lua script.
The Lua script is as follows:
local function ip_frequency(ip, red)
    local limit_num = 50
    local key = "limit:frequency:" .. ip
    local resp, err = red:get(key)
    if resp == nil then
        local ok, err = red:setex(key, 2, 1)
        if not ok then
            return false
        end
    end
    if type(resp) == "string" then
        if tonumber(resp) > limit_num then
            return false
        end
    end
    ok, err = red:incr(key)
    if not ok then
        return false
    end
    return true
end
When the openresty program has run for some time, some permanent keys appear in redis. From this function you can see that I never set a key without an expiration time, but it happens anyway. Why is that? Please help me answer this question. Thank you!
The software version is as follows:
openresty: 1.17.8.2
redis: 6.0+
centos: 8.0+
Openresty connects to the redis database and uses its functionality. Such usage of redis commands from lua or another language is not atomic. For the redis server it means: [redis:get, pause, redis:setex] or [redis:get, pause, redis:incr]. During the pause period a lot of things can happen, even if it is only 1 ms, such as cleanup of 'dead' keys.
And this is what could happen with your code:
local resp, err = red:get(key)
You get a valid key value which is smaller than limit_num.
ok, err = red:incr(key)
Redis checks whether the key is still valid, and removes it because it has reached its ttl.
Redis then sees that there is no such key, so it creates the key with value=0 (and no ttl) and increments it.
So at this point you have a permanent key. If you want to avoid permanent keys, use something like red:setex(key, 2, tonumber(resp) + 1) instead of red:incr(key).
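For illustration, here is a minimal sketch of the function with that change applied (assuming the same lua-resty-redis client object red; note that resty.redis normally returns ngx.null rather than nil for a missing key, so the sketch checks for both):
local function ip_frequency(ip, red)
    local limit_num = 50
    local key = "limit:frequency:" .. ip
    local resp, err = red:get(key)
    -- key missing or already expired: open a fresh 2-second window
    if resp == nil or resp == ngx.null then
        local ok, err = red:setex(key, 2, 1)
        return ok and true or false
    end
    local count = tonumber(resp)
    if count ~= nil and count > limit_num then
        return false
    end
    -- overwrite value and TTL together instead of calling incr, so an expiry
    -- between the get and this call can never leave a key without a TTL
    local ok, err = red:setex(key, 2, (count or 0) + 1)
    return ok and true or false
end
This is still not atomic (two concurrent requests can race and lose an increment), but the worst case is an undercounted window rather than a counter key that never expires; making it fully atomic would mean moving the logic into a server-side Lua script.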

using .net StackExchange.Redis with "wait" isn't working as expected

Doing an R/W test with a redis cluster (servers): 1 master + 2 slaves. The following is the key WRITE code:
var trans = redisDatabase.CreateTransaction();
Task<bool> setResult = trans.StringSetAsync(key, serializedValue, TimeSpan.FromSeconds(10));
Task<RedisResult> waitResult = trans.ExecuteAsync("wait", 3, 10000);
trans.Execute();
trans.WaitAll(setResult, waitResult);
using the following as the connection string:
[server1 ip]:6379,[server2 ip]:6379,[server3 ip]:6379,ssl=False,abortConnect=False
running 100 threads which do 1000 loops of the following steps:
generate a GUID as key and random as value of 1024 bytes
writing the key (using the above code)
retrieve the key using "var stringValue = redisDatabase.StringGet(key, CommandFlags.PreferSlave);"
compare the two values and print an error if they differ.
Running this test a few times generates several errors. I'm trying to understand why, as the "wait" operation (with 10 seconds!) should have guaranteed the write to all slaves before returning.
Any idea?
WAIT isn't supported by SE.Redis, as explained by its prolific author in "Stackexchange.redis lacks the WAIT support".
What about improving consistency guarantees by adding in some "check, write, read" iterations?
1. SET a new key value pair (master node)
2. Read it (set CommandFlags to DemandReplica)
3. Not there yet? Wait and try X times.
4.a) Not there yet? SET again; go back to (3) or give up.
4.b) There? You're "done"
Won't be perfect, but it should reduce the probability of losing a SET.

Is this redis lua script that deals with key expire race conditions a pure function?

I've been playing around with redis to keep track of the rate limit of an external api in a distributed system. I've decided to create a key for each route where a limit is present. The value of the key is how many requests I can still make until the limit resets, and the reset is handled by setting the TTL of the key to when the limit will reset.
For that I wrote the following lua script:
if redis.call("EXISTS", KEYS[1]) == 1 then
local remaining = redis.call("DECR", KEYS[1])
if remaining < 0 then
local pttl = redis.call("PTTL", KEYS[1])
if pttl > 0 then
--[[
-- We would exceed the limit if we were to do a call now, so let's send back that a limit exists (1)
-- Also let's send back how much we would have exceeded the ratelimit if we were to ignore it (ramaning)
-- and how long we need to wait in ms untill we can try again (pttl)
]]
return {1, remaining, pttl}
elseif pttl == -1 then
-- The key expired the instant after we checked that it existed, so delete it and say there is no ratelimit
redis.call("DEL", KEYS[1])
return {0}
elseif pttl == -2 then
-- The key expired the instant after we decreased it by one. So let's just send back that there is no limit
return {0}
end
else
-- Great we have a ratelimit, but we did not exceed it yet.
return {1, remaining}
end
else
return {0}
end
Since a watched key can expire in the middle of a MULTI transaction without aborting it, I assume the same is the case for lua scripts. Therefore I put in the cases for when the TTL is -1 or -2.
After I wrote that script I looked a bit more in depth at the eval command page and found out that a lua script has to be a pure function.
In there it says
The script must always evaluate the same Redis write commands with
the same arguments given the same input data set. Operations performed
by the script cannot depend on any hidden (non-explicit) information
or state that may change as script execution proceeds or between
different executions of the script, nor can it depend on any external
input from I/O devices.
With this description I'm not sure if my function is a pure function or not.
After Itamar's answer I wanted to confirm that for myself, so I wrote a little lua script to test it. The script creates a key with a 10 ms TTL and checks the TTL until it's less than 0:
redis.call("SET", KEYS[1], "someVal","PX", 10)
local tmp = redis.call("PTTL", KEYS[1])
while tmp >= 0
do
tmp = redis.call("PTTL", KEYS[1])
redis.log(redis.LOG_WARNING, "PTTL:" .. tmp)
end
return 0
When I ran this script it never terminated. It just went on to spam my logs until I killed the redis server. However, time doesn't stand still while the script runs; the TTL simply stops counting down once it reaches 0.
So the key ages, it just never expires.
Since a watched key can expire in the middle of a MULTI transaction without aborting it, I assume the same is the case for lua scripts. Therefore I put in the cases for when the TTL is -1 or -2.
AFAIR that isn't the case w/ Lua scripts - time kinda stops (in terms of TTL at least) when the script's running.
With this description I'm not sure if my function is a pure function or not.
Your script's great (without actually trying to understand what it does), don't worry :)

redis cluster: delete keys from smembers in lua script

The function below deletes the keys returned by smembers; they are not passed as eval arguments. Is this proper in a redis cluster?
def ClearLock():
    key = 'Server:' + str(localIP) + ':UserLock'
    script = '''
        local keys = redis.call('smembers', KEYS[1])
        local count = 0
        for k, v in pairs(keys) do
            redis.call('del', v)
            count = count + 1
        end
        redis.call('del', KEYS[1])
        return count
    '''
    ret = redisObj.eval(script, 1, key)
You're right to be worried about using keys that aren't passed as eval arguments.
Redis Cluster won't guarantee that those keys are present in the node that's running the lua script, and some of those delete commands will fail as a result.
One thing you can do is mark all those keys with a common hash tag. This gives you the guarantee that, whenever node rebalancing isn't in progress, keys with the same hash tag will be present on the same node. See the section on hash tags in the redis cluster spec: http://redis.io/topics/cluster-spec
(When you are doing cluster node rebalancing this script can still fail, so you'll need to figure out how you want to handle that.)
Perhaps add the local ip for all entries in that set as the hash tag. The main key could become:
key = 'Server:{' + str(localIP) + '}:UserLock'
Adding the {} around the ip in the string will have redis read this as the hashtag.
You would also need to add that same hash tag, {localIP}, as part of the key for every entry you are going to later delete as part of this operation.
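For example (the IP and key names below are made up), you can verify with CLUSTER KEYSLOT that a tagged member key hashes to the same slot as the main set, because only the text between { and } is used for hashing; both commands return the same slot number:
>CLUSTER KEYSLOT Server:{10.1.2.3}:UserLock
>CLUSTER KEYSLOT {10.1.2.3}:Lock:User:42
As long as every member added to the set carries the same {localIP} tag, the script above only ever touches keys in a single slot.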

my redis keys do not expire

My redis server does not delete keys when the time-to-live reaches 0.
Here is a sample code:
redis-cli
>SET mykey "ismykey"
>EXPIRE mykey 20
#check TTL
>TTL mykey
>(integer) 17
> ...
>TTL mykey
>(integer) -1
#mykey should have expired:
>EXISTS mykey
>(integer) 1
>#oh still there, check its value
>GET mykey
>"ismykey"
If I check the INFO returned by redis, it says 0 keys were expired.
Any idea?
thanks.
Since you're doing a '...' it's hard to say for sure, but I'd say you're setting mykey during that part, which will effectively remove the expiration.
From the EXPIRE manual
The timeout is cleared only when the key is removed using the DEL
command or overwritten using the SET or GETSET commands
Also, regarding the -1 reply from TTL
Return value
Integer reply: TTL in seconds or -1 when key does not
exist or does not have a timeout.
EDIT: Note that this behaviour changed in Redis 2.8
Starting with Redis 2.8 the return value in case of error changed:
The command returns -2 if the key does not exist.
The command returns -1 if the key exists but has no associated expire.
In other words, if your key exists, it would seem to be persistent, ie not have any expiration set.
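For reference, on a 2.8+ server the two cases are easy to tell apart in redis-cli:
>SET mykey "ismykey"
>TTL mykey
>(integer) -1
#key exists but has no expire set
>TTL nosuchkey
>(integer) -2
#key does not exist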
EDIT: It seems I can reproduce this if I create the key on a Redis slave server: the slave will not delete the key without input from the master, since normally you would not create keys locally on a slave. Is this the case here?
However while the slaves connected to a master will not expire keys
independently (but will wait for the DEL coming from the master),
they'll still take the full state of the expires existing in the
dataset, so when a slave is elected to a master it will be able to
expire the keys independently, fully acting as a master.