How to do a fuzzy scan in Redis?

Trying to obtain all values from Redis.
The key is:
key: mission:step:{missionId}:{yyMMdd} field:{userId} value:{progress}
I would like to get three columns: mission_id, user_id, and progress, and put them into Hive/MySQL. The values of mission_id are different every day.
How to achieve this? Any help is appreciated.

You can, but it will be a slow operation. The Redis KEYS command lists all keys that match a glob-style pattern.
But it is an O(N) operation, as it scans every key in Redis to match it against the pattern.

You can use the SCAN command instead, which is safer than KEYS in a production environment.
Since these commands allow for incremental iteration, returning only a small number of elements per call, they can be used in production without the downside of commands like KEYS or SMEMBERS that may block the server for a long time (even several seconds) when called against big collections of keys or elements.
You can use SCAN 0 MATCH your-key-pattern in a loop: each reply contains the cursor to pass to the next call, and you keep iterating until the returned cursor is 0.
SCAN is a cursor based iterator. This means that at every call of the command, the server returns an updated cursor that the user needs to use as the cursor argument in the next call.
An iteration starts when the cursor is set to 0, and terminates when the cursor returned by the server is 0.
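For illustration, here is a rough C# sketch of that loop using StackExchange.Redis, assuming each mission:step:{missionId}:{yyMMdd} key is a HASH of {userId} -> {progress}; the connection string, date handling, and CSV output are placeholders, and KeysAsync drives the SCAN cursor for you:

using System;
using System.Linq;
using System.Threading.Tasks;
using StackExchange.Redis;

class MissionExport
{
    static async Task Main()
    {
        var mux = await ConnectionMultiplexer.ConnectAsync("localhost:6379");
        IDatabase db = mux.GetDatabase();
        IServer server = mux.GetServer(mux.GetEndPoints().First());

        string date = DateTime.UtcNow.ToString("yyMMdd");

        // SCAN for today's mission keys, e.g. mission:step:42:240101
        await foreach (RedisKey key in server.KeysAsync(pattern: $"mission:step:*:{date}", pageSize: 500))
        {
            string missionId = key.ToString().Split(':')[2];

            // each hash maps {userId} -> {progress}
            foreach (HashEntry entry in await db.HashGetAllAsync(key))
            {
                // one mission_id, user_id, progress row for Hive/MySQL
                Console.WriteLine($"{missionId},{entry.Name},{entry.Value}");
            }
        }
    }
}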

Related

Efficiently delete RedisKeys in bulk via wildcard pattern

Problem:
I need to efficiently delete keys from my Redis Cache using a wildcard pattern. I don't need atomicity; eventual consistency is acceptable.
Tech stack:
.NET 6 (async all the way through)
StackExchange.Redis 2.6.66
Redis Server 6.2.6
I currently have ~500k keys in Redis.
I'm not able to use RedisJSON for various reasons
Example:
I store the following 3 STRING types with keys:
dailynote:getitemsforuser:region:sw:user:123
dailynote:getitemsforuser:region:fl:user:123
dailynote:getitemsforuser:region:sw:user:456
...
where each STRING stores JSON like so:
> dump dailynote:getitemsforuser:region:fl:user:123
"{\"Name\":\"john\",\"Age\":22}"
The original solution used the KeysAsync method to retrieve the list of keys to delete via a wildcard pattern. Since the Redis server is 6.x, SCAN is used internally by KeysAsync in the StackExchange.Redis NuGet package.
The original implementation used the wildcard pattern dailynote:getitemsforuser:region:*. As one would expect, this didn't scale well and we started seeing RedisTimeoutExceptions.
I'm aware of the "avoid this in PROD if you can" and have seen Marc Gravell respond to a couple other questions/issues on SO and StackExchange.Redis GitHub. The only potential alternative I could think of is to use a Redis SET to "track" each RedisKey and then retrieve the list of values from the SET (which are the keys I need to remove). Then delete the SET as well as the returned keys.
Potential Solution?:
Create a Redis SET with a key of dailynote:getitemsforuser with a value which is the key of the form dailynote:getitemsforuser:region:XX...
The SET would look like:
dailynote:getitemsforuser (KEY)
dailynote:getitemsforuser:region:sw:user:123 (VALUE)
dailynote:getitemsforuser:region:fl:user:123 (VALUE)
dailynote:getitemsforuser:region:sw:user:456 (VALUE)
...
I would still have each individual STRING type as well:
dailynote:getitemsforuser:region:sw:user:123
dailynote:getitemsforuser:region:fl:user:123
dailynote:getitemsforuser:region:sw:user:456
...
When it is time to do the "wildcard" remove, I get the members of the dailynote:getitemsforuser SET, then call RemoveAsync, passing the members of the set as the RedisKey[]. Then I call RemoveAsync with the key of the SET (dailynote:getitemsforuser).
I'm looking for feedback on how viable of a solution this is, alternative ideas, gotchas, and suggestions for improvement. TIA
UPDATE
Added my solution I went with below...
The big problem with both KEYS and SCAN with Redis is that they require a complete scan of the massive hash table that stores every Redis key. Even if you use a pattern, it still needs to check each entry in that hash table to see if it matches.
Assuming you are calling SADD when you are also setting the value in your key—and thus avoiding the call to SCAN—this should work. It is worth noting that calls to SMEMBERS to get all the members of a Set can also cause issues if the Set is big. Redis—being single-threaded—will block while all the members are returned. You can mitigate this by using SSCAN instead. StackExchange.Redis might do this already. I'm not sure.
You might also be able to write a Lua script that reads the Set and UNLINKs all the keys atomically. This would reduce network but could tie Redis up if this takes too long.
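For what it's worth, a hedged sketch of that Lua option (the tracking-set key is passed in by the caller and UNLINK needs Redis 4+; the script still occupies the server for its whole run if the SET is large):

using System.Threading.Tasks;
using StackExchange.Redis;

class LuaUnlinkSketch
{
    // SMEMBERS the tracking SET, UNLINK every member, then UNLINK the SET itself.
    const string Script = @"
        local members = redis.call('SMEMBERS', KEYS[1])
        for i = 1, #members do
            redis.call('UNLINK', members[i])
        end
        redis.call('UNLINK', KEYS[1])
        return #members";

    static async Task<long> DeleteTrackedAsync(IDatabase db, RedisKey trackingSet)
    {
        RedisResult result = await db.ScriptEvaluateAsync(Script, new[] { trackingSet });
        return (long)result; // number of tracked keys removed
    }
}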
I ended up using the solution I suggested above where I use a Redis SET with a known/fixed key to "track" each of the necessary keys.
When a key that needs to be tracked is added, I call StackExchange.Redis.IDatabase.SetAddAsync (SADD) while calling StackExchange.Redis.IDatabase.HashSetAsync (HSET) for adding the "tracked" key (along with its TTL).
When it is time to remove the "tracked" keys, I first call StackExchange.Redis.IDatabase.SetScanAsync (SSCAN) with a page size of 250, iterate over the IAsyncEnumerable, and call StackExchange.Redis.IDatabase.KeyDeleteAsync (DEL) on chunks of the SET's members. I then call StackExchange.Redis.IDatabase.KeyDeleteAsync on the key of the SET itself.
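Roughly, the tracking/cleanup pattern looks like this in StackExchange.Redis (the key names, TTL, chunk size, and page size are placeholders, not the actual production values; the write side is shown with StringSetAsync for the STRING case from the question, but the same SADD applies if you store hashes via HashSetAsync):

using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using StackExchange.Redis;

class TrackedKeyCleanup
{
    const string TrackingSet = "dailynote:getitemsforuser";

    // On write: store the value and register its key in the tracking SET.
    static async Task AddTrackedAsync(IDatabase db, string key, string json)
    {
        await db.StringSetAsync(key, json, expiry: TimeSpan.FromHours(24)); // TTL is a placeholder
        await db.SetAddAsync(TrackingSet, key);                             // SADD
    }

    // On "wildcard" delete: SSCAN the tracking SET, delete members in chunks,
    // then delete the SET itself.
    static async Task RemoveTrackedAsync(IDatabase db, int chunkSize = 250)
    {
        var chunk = new List<RedisKey>(chunkSize);
        await foreach (RedisValue member in db.SetScanAsync(TrackingSet, pageSize: 250)) // SSCAN
        {
            chunk.Add((string)member);
            if (chunk.Count == chunkSize)
            {
                await db.KeyDeleteAsync(chunk.ToArray()); // DEL on this chunk of keys
                chunk.Clear();
            }
        }
        if (chunk.Count > 0)
            await db.KeyDeleteAsync(chunk.ToArray());

        await db.KeyDeleteAsync((RedisKey)TrackingSet);   // finally drop the SET itself
    }
}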
Hope this helps someone else.

Use set or just create keys in redis to check existence?

I can think of two ways of checking existence using redis:
Use the whole database as a 'set', and just SET a key and check existence by GETting it (or using EXISTS, as mentioned in the comment by @Sergio Tulentsev)
Use SADD to add all members to a key and check existence by SISMEMBER
Which one is better? Will it be a problem, compared to the same amount of keys in a single set, if I choose the first method and the number of keys in a database gets larger?
In fact, besides these two methods, you can also use the HASH data structure with the HEXISTS command (I'll call this the third solution).
All these solutions are fast enough, and it's NOT a problem if you have a large SET, HASH, or keyspace.
So, which one should we use? It depends on lots of things...
Does the key have a value?
Keys in both the first and the third solutions can have a value, while the second solution CANNOT.
So if there's no value for each key, I'd prefer the second solution, i.e. the SET solution. Otherwise, you have to use the first or third solution.
Does the value have structure?
If the value is NOT a raw string but a data structure, e.g. a LIST or SET, you have to use the first solution, since a HASH's value CAN only be a raw string.
Do you need to do set operations?
If you need to do intersection, union or diff operations on multiple data sets, you should use the second solution. Redis has built-in commands for these operations, although they might be slow commands.
Memory efficiency consideration
Redis uses a more memory-efficient encoding for small SETs and HASHes. So when you have lots of small data sets, using the second or the third solution can save a lot of memory. See this for details.
UPDATE
Do you need to set TTL for these keys?
As @dizzyf points out in the comment, if you need to set a TTL for these keys, you have to use the first solution, because items of a HASH or SET DO NOT have an expiration property. You can only set a TTL for the entire HASH or SET, NOT for their elements.
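To make the three options concrete, here is a minimal StackExchange.Redis sketch (the key and member names are made up for illustration):

using System;
using System.Threading.Tasks;
using StackExchange.Redis;

class ExistenceChecks
{
    static async Task Main()
    {
        var mux = await ConnectionMultiplexer.ConnectAsync("localhost:6379");
        IDatabase db = mux.GetDatabase();

        // 1) one key per item: SET user:123 ... / EXISTS user:123 (per-key TTL works here)
        await db.StringSetAsync("user:123", "1", expiry: TimeSpan.FromDays(1));
        bool viaKey = await db.KeyExistsAsync("user:123");

        // 2) one SET holding all members: SADD users 123 / SISMEMBER users 123
        await db.SetAddAsync("users", "123");
        bool viaSet = await db.SetContainsAsync("users", "123");

        // 3) one HASH: HSET users:h 123 <value> / HEXISTS users:h 123
        await db.HashSetAsync("users:h", "123", "some value");
        bool viaHash = await db.HashExistsAsync("users:h", "123");

        Console.WriteLine($"{viaKey} {viaSet} {viaHash}");
    }
}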

Redis scan match performance with large number of keys?

Can't find any info about Redis SCAN with MATCH.
Does it mean that if I have 500,000 keys it will iterate over all of them one by one and check whether they match the pattern? Or does it have some other clever trick to pull only the relevant keys?
If it actually scans them all, is that efficient performance-wise?
Thanks
SCAN is basically an alternative to the KEYS command, which is blocking. It returns a cursor, and with that cursor you need to scan again, and the process continues. Duplicates are also possible, so you need to handle them in the app logic, which means that even with only 1 million keys and 10,000 items per scan, it can take more than the 100 calls you would expect.
So it's a trade-off: instead of KEYS, which is a blocking command but quick, you can use SCAN, which is slower in comparison but will not block the server in a production environment and still achieves what you need.
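A minimal sketch of that cursor loop in C# with StackExchange.Redis (the connection string, pattern, and COUNT hint are placeholders), including a HashSet to absorb the duplicates mentioned above:

using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using StackExchange.Redis;

class ScanLoopSketch
{
    static async Task Main()
    {
        var mux = await ConnectionMultiplexer.ConnectAsync("localhost:6379");
        IDatabase db = mux.GetDatabase();

        var seen = new HashSet<string>();   // SCAN may return the same key more than once
        long cursor = 0;
        do
        {
            // SCAN <cursor> MATCH <pattern> COUNT <hint>
            RedisResult reply = await db.ExecuteAsync(
                "SCAN", cursor, "MATCH", "user:*", "COUNT", 1000);

            var parts = (RedisResult[])reply;   // [0] = next cursor, [1] = keys in this page
            cursor = (long)parts[0];
            foreach (var key in (string[])parts[1])
                seen.Add(key);
        }
        while (cursor != 0);                    // iteration ends when the cursor returns to 0

        Console.WriteLine($"Matched {seen.Count} distinct keys");
    }
}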
Hope this helps

ServiceStack.Redis SearchKeys

I am using the ServiceStack.Redis client on C#.
I added about 5 million records of type1 using the following pattern a::name::1 and 11 million records of type2 using the pattern b::RecId::1.
Now I am using the Redis typed client as client = redis.As<String>. I want to retrieve all the keys of type2. I am using the following pattern:
var keys = client.SearchKeys("b::RecID::*");
But it takes forever (approximately 3-5 mins) to retrieve the keys.
Is there any faster and more efficient way to do this?
You should work hard to avoid the need to scan the keyspace. KEYS is literally a server stopper, but even if you have SCAN available: don't do that. Now, you could choose to keep the keys of things you have available in a set somewhere, but there is no SRANGE etc - in Redis 2.x you'd have to use SMEMBERS, which is still going to need to return a few million records - but at least they will all be available. In later server versions, you have access to SCAN (think: KEYS) and SSCAN (think: SMEMBERS), but ultimately you simply have the issue of wanting millions of rows, which is never free.
If possible, you could mitigate the impact by using a master/slave pair, and running the expensive operations on the slave. At least other clients will be able to do something while you're killing the server.
The keys command in Redis is slow (well, not slow, but time consuming). It also blocks your server from accepting any other command while it's running.
If you really want to iterate over all of your keys, take a look at the SCAN command instead, although I have no idea how ServiceStack exposes it.
You can use the SCAN command and do a loop search, where each iteration is restricted to a smaller number of keys. For a complete example, refer to this article: http://blog.bossma.cn/csharp/nservicekit-redis-support-scan-solution/
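If you are on a ServiceStack.Redis version that exposes SCAN, the loop can look roughly like this (ScanAllKeys streams pages of SCAN results lazily; the pool address, pattern, and page size are placeholders):

using System;
using ServiceStack.Redis;

class ScanKeysSketch
{
    static void Main()
    {
        using var manager = new RedisManagerPool("localhost:6379");
        using var redis = manager.GetClient();

        // Iterates SCAN ... MATCH b::RecId::* COUNT 1000 page by page
        // instead of issuing a single blocking KEYS call.
        foreach (var key in redis.ScanAllKeys("b::RecId::*", pageSize: 1000))
        {
            Console.WriteLine(key);
        }
    }
}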

paging through entries in redis hash

I couldn't find a way to "page" through redis hashes (doc).
I've got ~5 million hash entries in one Redis db. I am trying to iterate through all of them without having to resort to building a list of entry keys.
Can this be achieved?
Since all the Redis hash commands require the key element, you need to store your set of keys to page through your hashes.
See my answer to this question for an example of key iteration by using extra sets.
There is no way to avoid storing extra sets (or lists) and still iterate on a huge number of keys. The KEYS command is not an option.
I had exactly the same requirement for Redis hash pagination, and yes, it is possible to page through a Redis hash using the HSCAN command. Detailed documentation is available under SCAN.
Usage: HSCAN <your key/hash name> <cursor-id> COUNT <page-size>
The cursor id to pass initially is 0; the reply contains a new cursor id plus up to page-size entries. The returned cursor id needs to be passed in the subsequent call to fetch the next page, and iteration ends when the cursor returns 0.
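For example, with StackExchange.Redis the paging is wrapped by HashScanAsync, which issues HSCAN <key> <cursor> COUNT <page-size> for you (the hash name and page size below are placeholders):

using System;
using System.Threading.Tasks;
using StackExchange.Redis;

class HashPagingSketch
{
    static async Task Main()
    {
        var mux = await ConnectionMultiplexer.ConnectAsync("localhost:6379");
        IDatabase db = mux.GetDatabase();

        // Streams the hash in pages of ~500 fields without blocking the server
        // the way HGETALL on a multi-million-entry hash would.
        await foreach (HashEntry entry in db.HashScanAsync("myhash", pageSize: 500))
        {
            Console.WriteLine($"{entry.Name} = {entry.Value}");
        }
    }
}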