This integration test uses ReactiveRedisTemplate list operations to:
push a List to the cache using the ReactiveRedisTemplate list operation leftPushAll(), and
retrieve the List as a Flux from the cache using the ReactiveRedisTemplate list operation range(), reading all elements from index 0 to the last index of the list.
@Test
public void givenList_whenLeftPushAndRange_thenPopCatalog() {
    // catalog is a List<Catalog> of size 367
    Mono<Long> lPush = reactiveListOps.leftPushAll(LIST_NAME, catalog).log("Pushed");
    StepVerifier.create(lPush).expectNext(367L).verifyComplete();

    Mono<Long> sz = reactiveListOps.size(LIST_NAME);
    Long listsz = sz.block();
    assert listsz != null;
    Assert.assertEquals(catalog.size(), listsz.intValue());

    // range() takes an inclusive end index, so the last element sits at listsz - 1
    Flux<Catalog> catalogFlux =
            reactiveListOps
                    .range(LIST_NAME, 0, listsz - 1)
                    .doOnNext(c -> System.out.println(c.getOffset()))
                    .log("Fetched Catalog");

    List<Catalog> catalogList = catalogFlux.collectList().block();
    assert catalogList != null;
    Assert.assertEquals(catalog.size(), catalogList.size());
}
This test works fine. My question is: how are the expiry and TTL of these List objects stored in the cache controlled?
The reason I ask: on my local Redis server, I notice they remain in the cache for a number of hours, but when I run this code against a Redis server hosted on AWS, the List objects only seem to remain in the cache for about 30 minutes.
Is there a configuration property that can be used to control the TTL of list objects?
Thank you; ideally I'd like to have more control over how long these objects remain in the cache.
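(A note for anyone else hitting this: Redis tracks TTL per key, not per list element, and leftPushAll() sets none, so without an explicit EXPIRE the key lives until the server deletes or evicts it; a managed Redis on AWS may be evicting under its maxmemory policy, which could explain the 30-minute behaviour. A minimal sketch of taking explicit control follows, assuming reactiveRedisTemplate is the template that produced reactiveListOps; that bean name is my assumption.)
// Sketch only: chain an explicit TTL after the push; Duration is java.time.Duration.
Mono<Boolean> pushWithTtl =
        reactiveListOps
                .leftPushAll(LIST_NAME, catalog)
                .then(reactiveRedisTemplate.expire(LIST_NAME, Duration.ofHours(6))); // hypothetical 6-hour TTL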
Related
I have a Lua script that fetches a SET data type and iterates through its members. This set contains a list of other SET data types (which I'll call the secondary sets for clarity).
For context, the secondary SETs are lists of cache keys. The majority of the cache keys within these secondary sets reference cache items that no longer exist, and these sets grow exponentially in size. My goal is to check each cache key within each secondary set to see whether it has expired and, if so, remove it from the secondary set, thus reducing the set's size and conserving memory.
local pending = redis.call('SMEMBERS', KEYS[1])
for i, key in ipairs(pending) do
redis.call('SREM', KEYS[1], key)
local keys = redis.call('SMEMBERS', key)
local expired = {}
for i, taggedKey in ipairs(keys) do
local ttl = redis.call('ttl', taggedKey)
if ttl == -2 then
table.insert(expired, taggedKey)
end
end
if #expired > 0 then
redis.call('SREM', key, unpack(expired))
end
end
The script above works perfectly until one of the secondary set keys maps to a different hash slot; the error I receive is:
Lua script attempted to access keys of different hash slots
Looking through the docs, I noticed that Redis allows this to be bypassed with the allow-cross-slot-keys script flag, so following that example I updated my script to the following:
#!lua flags=allow-cross-slot-keys
local pending = redis.call('SMEMBERS', KEYS[1])
for i, key in ipairs(pending) do
redis.call('SREM', KEYS[1], key)
local keys = redis.call('SMEMBERS', key)
local expired = {}
for i, taggedKey in ipairs(keys) do
local ttl = redis.call('ttl', taggedKey)
if ttl == -2 then
table.insert(expired, taggedKey)
end
end
if #expired > 0 then
redis.call('SREM', key, unpack(expired))
end
end
I'm now strangely left with the error:
"ERR Error compiling script (new function): user_script:1: unexpected symbol near '#'"
Any help is appreciated; I have reached out to the pocket burner that is Redis Enterprise but am still awaiting a response.
For awareness, this is to clean up the shoddy Laravel Redis implementation, where Laravel creates these sets to manage its tagged cache but never cleans them up. Over time they amount to gigabytes of wasted space, and if your eviction policy is allkeys-lfu, all your real cache will be pushed out in favour of this messy garbage Laravel leaves behind, leaving you with a worthless caching system or thousands of dollars out of pocket to increase RAM.
Edit: It would seem we're on Redis 6.2, and script flags are Redis 7+, so it's unlikely there is a solution suitable for 6.2, but if there is, please let me know.
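(One possibility for 6.2 is to drop the Lua script and drive the same loop from the client: every command is then single-key, so the cluster client routes each one to the right slot and the cross-slot check never fires. You lose atomicity, but the cleanup is idempotent, so re-running after a failure is safe. A rough sketch in Java with Jedis, purely for illustration since the app here is Laravel/PHP; the host and the pending-set key name are placeholders for KEYS[1]:)
import java.util.ArrayList;
import java.util.List;
import redis.clients.jedis.HostAndPort;
import redis.clients.jedis.JedisCluster;

public class TagCleanup {
    // Sketch: same logic as the Lua script above, one single-key command at a time.
    static void clearStaleTags(JedisCluster cluster, String pendingKey) {
        for (String setKey : cluster.smembers(pendingKey)) {
            cluster.srem(pendingKey, setKey);
            List<String> expired = new ArrayList<>();
            for (String taggedKey : cluster.smembers(setKey)) {
                if (cluster.ttl(taggedKey) == -2) { // -2 means the key no longer exists
                    expired.add(taggedKey);
                }
            }
            if (!expired.isEmpty()) {
                cluster.srem(setKey, expired.toArray(new String[0]));
            }
        }
    }

    public static void main(String[] args) {
        try (JedisCluster cluster = new JedisCluster(new HostAndPort("redis-host", 6379))) { // placeholder host
            clearStaleTags(cluster, "pending-tag-sets"); // placeholder key standing in for KEYS[1]
        }
    }
}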
Let's suppose we have a list of tasks to be executed and some workers that pop items from that list.
If a worker crashes unexpectedly before finishing the execution of the task then that task is lost.
What kind of mechanism could prevent that so we can reprocess abandoned tasks?
You need to use a ZSET to solve this issue.
Pop operation:
Add the item to the ZSET with an expiry time as the score, then remove it from the list.
Ack operation:
Remove the item from the ZSET.
Worker:
Run a scheduled worker that moves items from the ZSET back to the list once they have expired. (See the sketch after the links below.)
Read in detail how I did it in Rqueue: https://medium.com/@sonus21/introducing-rqueue-redis-queue-d344f5c36e1b
GitHub code: https://github.com/sonus21/rqueue
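(To make the three operations concrete, here is a rough Java/Jedis sketch of the pattern described above. It is my own illustration, not Rqueue's code; the key names and the 30-second visibility timeout are assumptions.)
import redis.clients.jedis.Jedis;

public class ReliableQueue {
    static final long VISIBILITY_TIMEOUT_MS = 30_000; // assumed 30-second window

    // Pop: take a task from the list and record its deadline in the ZSET.
    // (A crash between the two calls can still lose the task; a Lua script,
    // like the one in the next answer, closes that window.)
    static String pop(Jedis jedis) {
        String task = jedis.rpop("tasks");
        if (task != null) {
            jedis.zadd("executing-tasks", System.currentTimeMillis() + VISIBILITY_TIMEOUT_MS, task);
        }
        return task;
    }

    // Ack: the task finished, so drop it from the in-flight ZSET.
    static void ack(Jedis jedis, String task) {
        jedis.zrem("executing-tasks", task);
    }

    // Scheduled worker: requeue tasks whose deadline has passed.
    static void requeueExpired(Jedis jedis) {
        for (String task : jedis.zrangeByScore("executing-tasks", 0, System.currentTimeMillis())) {
            jedis.zrem("executing-tasks", task);
            jedis.lpush("tasks", task);
        }
    }
}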
There is no EXPIRE for set or zset members, and there is no atomic operation to pop from a zset and push to a list, so I ended up writing this Lua script, which runs atomically.
First I add a task to the executing-tasks zset with a timestamp score ((new Date()).valueOf() in JavaScript):
ZADD executing-tasks 1619028226766 <task-id>
Then I run the script:
EVAL [THE SCRIPT] 2 executing-tasks tasks 1619028196766
If the task is more than 30 seconds old, it is sent to the tasks list; if not, it is sent back to the executing-tasks zset.
Here is the script
local source = KEYS[1]
local destination = KEYS[2]
-- convert to numbers: ARGV values and zpopmin scores arrive as strings,
-- and comparing them with '<' as strings is a lexicographic comparison
local min_score = tonumber(ARGV[1])
local popped = redis.call('zpopmin', source)
if #popped > 0 then
  local id = popped[1]
  local score = tonumber(popped[2])
  if score < min_score then
    -- deadline passed: the worker presumably died, so requeue the task
    redis.call('rpush', destination, id)
    return { "RESTORED", id }
  else
    -- still within its window: put it back untouched
    redis.call('zadd', source, score, id)
    return { "SENT_BACK", id }
  end
end
return { "NOTHING_DONE" }
After enabling subscription conflation in my regions, I saw an incrementing negative number (-XXXXX) in the Queue Size field of the Member Client Table on the GemFire Pulse website. Why would a negative number appear in the Queue Size field?
GemFire Version : 9.8.6
Number of Regions : 1
1 Client Application updating regions every 0.5 seconds (Caching Proxy)
1 Client Application reading data from regions (Caching Proxy - Register interest for all keys)
1 Locator and 1 Cache Server in the same virtual machine
Queue Size: "The size of the queue used by the server to send events in case of a subscription-enabled client or a client that has continuous queries running on the server." [https://gemfire.docs.pivotal.io/910/geode/developing/events/tune_client_message_tracking_timeout.html]
Additional Discovery
Pulse website: negative number in Queue Size.
JConsole (showClientQueueDetail): numVoidRemovals = 4486
@ClientCacheApplication(locators = {
        @ClientCacheApplication.Locator(host = "192.168.208.20", port = 10311) }, name = "Reading-Testing", subscriptionEnabled = true)
@EnableEntityDefinedRegions(basePackageClasses = Person.class, clientRegionShortcut = ClientRegionShortcut.CACHING_PROXY, poolName = "SecondPool")
@EnableGemfireRepositories(basePackageClasses = PersonRepository.class)
@EnablePdx
@Import({ GemfireCommonPool.class })
public class PersonDataAccess {
    ....
}
@Configuration
public class GemfireCommonPool {
    @Bean("SecondPool")
    public Pool init() {
        PoolFactory poolFactory = PoolManager.createFactory();
        poolFactory.setPingInterval(8000);
        poolFactory.setRetryAttempts(-1);
        poolFactory.setMaxConnections(-1);
        poolFactory.setReadTimeout(30000);
        poolFactory.addLocator("192.168.208.20", 10311);
        poolFactory.setSubscriptionEnabled(true);
        return poolFactory.create("SecondPool");
    }
}
Additional Discovery 2
When I remove the poolName field in @EnableEntityDefinedRegions, the Pulse website no longer displays a negative number for the queue size. However, showClientQueueDetail still displays a negative queue size.
Is this my coding error or a conflation issue?
Thank you so much.
I'm doing an R/W test with a Redis cluster (servers): 1 master + 2 slaves. The following is the key WRITE code:
var trans = redisDatabase.CreateTransaction();
Task<bool> setResult = trans.StringSetAsync(key, serializedValue, TimeSpan.FromSeconds(10));
// Note: this asks WAIT to confirm 3 replicas, but the cluster has only 2 slaves,
// so the command can only ever complete via its 10-second timeout.
Task<RedisResult> waitResult = trans.ExecuteAsync("wait", 3, 10000);
trans.Execute();
trans.WaitAll(setResult, waitResult);
using the following as the connection string:
[server1 ip]:6379,[server2 ip]:6379,[server3 ip]:6379,ssl=False,abortConnect=False
running 100 threads, each doing 1000 loops of the following steps:
generate a GUID as the key and a random 1024-byte value
write the key (using the above code)
retrieve the key using "var stringValue = redisDatabase.StringGet(key, CommandFlags.PreferSlave);"
compare the two values and print an error if they differ.
Running this test a few times generates several errors. I'm trying to understand why, as the WAIT (with a 10-second timeout!) should have guaranteed the write reached all slaves before returning.
Any idea?
WAIT isn't supported by SE.Redis, as explained by its prolific author in Stackexchange.redis lacks the "WAIT" support.
What about improving consistency guarantees by adding some "check, write, read" iterations? (A rough sketch follows the steps.)
1) SET a new key-value pair (master node).
2) Read it (with CommandFlags set to DemandReplica).
3) Not there yet? Wait and try X times.
4.a) Still not there? SET again and go back to (3), or give up.
4.b) There? You're "done".
It won't be perfect, but it should reduce the probability of losing a SET.
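(StackExchange.Redis is C#, but the loop itself is client-agnostic. Here is a rough sketch of the same idea in Java with Lettuce, where ReadFrom.REPLICA plays the role of DemandReplica; the host, retry counts, and backoff are placeholder assumptions.)
import io.lettuce.core.ReadFrom;
import io.lettuce.core.cluster.RedisClusterClient;
import io.lettuce.core.cluster.api.StatefulRedisClusterConnection;
import io.lettuce.core.cluster.api.sync.RedisAdvancedClusterCommands;

public class VerifiedSet {
    // Returns true once a replica serves back the value we wrote to the master.
    static boolean setAndVerify(RedisAdvancedClusterCommands<String, String> redis,
                                String key, String value) throws InterruptedException {
        for (int attempt = 0; attempt < 5; attempt++) {    // step 4.a: re-SET up to 5 times
            redis.set(key, value);                         // step 1: SET (always routed to the master)
            for (int read = 0; read < 3; read++) {         // steps 2-3: poll a replica
                if (value.equals(redis.get(key))) {
                    return true;                           // step 4.b: replica caught up
                }
                Thread.sleep(20);                          // small backoff before re-reading
            }
        }
        return false;                                      // gave up
    }

    public static void main(String[] args) throws InterruptedException {
        RedisClusterClient client = RedisClusterClient.create("redis://server1:6379"); // placeholder host
        try (StatefulRedisClusterConnection<String, String> conn = client.connect()) {
            conn.setReadFrom(ReadFrom.REPLICA);            // GETs go to replicas, SETs to the master
            System.out.println(setAndVerify(conn.sync(), "some-key", "some-value"));
        }
        client.shutdown();
    }
}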
The function below deletes keys fetched via SMEMBERS; they are not passed as EVAL arguments. Is this proper in a Redis cluster?
def ClearLock():
    key = 'Server:' + str(localIP) + ':UserLock'
    script = '''
    local keys = redis.call('smembers', KEYS[1])
    local count = 0
    for _, v in ipairs(keys) do
        redis.call('del', v)  -- the command is DEL ('delete' is not a Redis command)
        count = count + 1
    end
    redis.call('del', KEYS[1])
    return count
    '''
    ret = redisObj.eval(script, 1, key)
You're right to be worried about using keys that aren't passed as EVAL arguments.
Redis Cluster won't guarantee that those keys are present on the node that's running the Lua script, and some of those DEL commands will fail as a result.
One thing you can do is mark all those keys with a common hash tag. This gives you the guarantee that, whenever cluster rebalancing isn't in progress, keys with the same hash tag will be present on the same node. See the section on hash tags in the Redis Cluster spec: http://redis.io/topics/cluster-spec
(When cluster rebalancing is in progress, this script can still fail, so you'll need to figure out how you want to handle that.)
Perhaps add the local IP for all entries in that set as the hash tag. The main key could become:
key = 'Server:{' + str(localIP) + '}:UserLock'
Adding the {} around the IP in the string makes Redis read it as the hash tag.
You would also need to add that same hash tag, {<localIP>}, as part of the key for every entry you are going to later delete as part of this operation.
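(For instance, a small Java sketch of the key construction only, since the shape is the same in any client; the names here are invented:)
// Illustration: both the set key and every member key share the same {hash tag},
// so Redis Cluster assigns them to one slot and the script can touch them all.
String tag = "{" + localIP + "}";                    // e.g. "{10.0.0.7}" (invented IP)
String lockSetKey = "Server:" + tag + ":UserLock";   // the key holding the set
String memberKey  = tag + ":UserLock:" + userId;     // hypothetical pattern for keys stored in the set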