How to remove Redis keys efficiently on the fly - redis

I have a Redis 2.8.3 service where I am storing data in sets (created with SADD) such as:
Customers (set)
.....Custname (set)
.........application (set)
..............time (set)
...................detail (hash)
Once each detail hash has been processed, it is removed using SREM; then, if SCARD shows the parent set (time, and then application) is empty, it too is removed using SREM.
Although this is working, it appears to leave the keys for each removed item behind, so keys such as 'Customer:Custname:application:time' are left lying around.
What is the most efficient way to remove the set members and remove the corresponding keys at the same time?

A Lua script would be the best choice here. The script looks like:
if redis.call('SREM', KEYS[1], ARGV[1]) == 1 then
    if redis.call('SCARD', KEYS[1]) == 0 then
        redis.call('DEL', KEYS[1])
    end
end
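To make the control flow concrete, the same logic can be mirrored in plain Python. This is an in-memory sketch only, not a Redis client call; `db` stands in for the keyspace:

```python
# In-memory sketch of the SREM -> SCARD -> DEL sequence in the script above.
# `db` models the keyspace: a dict mapping key names to Python sets.
def srem_then_cleanup(db, key, member):
    s = db.get(key)
    if s is None or member not in s:
        return 0                      # SREM removed nothing
    s.remove(member)                  # SREM key member -> 1
    if len(s) == 0:                   # SCARD key == 0
        del db[key]                   # DEL key: drop the now-empty set
    return 1
```

Running the real thing as a Lua script via EVAL gives the same all-or-nothing behavior, but server-side.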

Related

Redis using members of sorted list to delete external keys

Using SORT one can sort a set and fetch external keys via the GET component of the query.
By way of example:
If the external key/values are defined as various keys using the pattern itemkey:<somestring>,
and a list holds the members, then issuing the command sort <list key> by nosort get itemkey:* returns the values of the referenced keys.
I would like to be able to sort through the list and delete these individual keys, but it appears that sort <key> by nosort del itemkey:* is not supported.
Any suggestions on how to get the list of values stored in a set and then delete the external keys?
Obviously I can do this with two commands, first getting the list of values and then iterating through the list calling delete, but this is not desirable as I require an atomic operation.
To ensure atomicity one can use either transactions or Redis Lua scripts. For efficiency I decided to go with a script; this way the entire script completes before the next Redis action/request is processed.
In the code snippet below I used loadScript (a wrapper around SCRIPT LOAD) to store the script on the Redis side, reducing traffic with every call; the response from loadScript is then used as the identifier for Jedis's evalsha command.
Using Scala (Note Jedis is a Java library, hence the .asJava):
val scriptClearIndexRecipe = """local names = redis.call('SORT', KEYS[1]);
                               |for i, k in ipairs(names) do
                               |    redis.call('DEL', "index:recipe:"..k)
                               |end;
                               |redis.call('DEL', KEYS[1]);
                               |return 0;""".stripMargin

def loadScript(script: String): String = client.scriptLoad(script)

def eval(luaSHA: String, keys: List[String], args: List[String]): AnyRef =
  client.evalsha(luaSHA, keys.asJava, args.asJava)
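For comparison, a hypothetical Python (redis-py) version of the same idea. The itemkey: prefix comes from the question; register_script wraps SCRIPT LOAD and EVALSHA, and BY nosort matches the question's usage (skip sorting, just return the members):

```python
# Lua body mirroring the Scala snippet above, but with the question's
# itemkey: prefix; BY nosort makes SORT return members unsorted.
CLEAR_LIST_AND_KEYS = """
local names = redis.call('SORT', KEYS[1], 'BY', 'nosort')
for _, k in ipairs(names) do
    redis.call('DEL', 'itemkey:' .. k)
end
redis.call('DEL', KEYS[1])
return 0
"""

def clear_list_and_external_keys(client, list_key):
    # register_script loads the script once, then EVALSHAs it on each call
    script = client.register_script(CLEAR_LIST_AND_KEYS)
    return script(keys=[list_key])
```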

Redis: Querying based on matching key pattern

I am new to Redis. I tried to figure out this problem by going through the Redis documentation, but no luck. Here are the details.
Let's say I inserted strings like below:
SET category:1 "Men"
SET category:2 "Women"
SET category:3 "Kids"
SET category:4 "Home"
SET category:5 "shoes" ...
Now I want to get all the values by querying with keys that follow a certain pattern, in this case category:*:
GET category:*
Is there any way to get all categories like this?
Use SCAN. SCAN is the only safe way to iterate through the keys in a Redis database. SCAN will chunk out a portion of the keyspace and return a cursor (always the first result) and any values it found in that chunk. You start with a cursor of 0.
> SCAN 0 MATCH "category:*"
1) "1904"
2) (empty list or set)
Then you pass that cursor back into the SCAN command with the same pattern:
> SCAN 1904 MATCH "category:*"
1) "0"
2) 1) "category:3"
2) "category:2"
3) "category:4"
4) "category:1"
In this case the cursor returned is 0, which is the signal that the SCAN command has completed. The second response is an array with the keys found. Note that you need to run the SCAN command in a loop; each call may return none, some, or all of the keys that match the pattern.
After you get the keys, you'll need to retrieve the values as normal (GET).
Just a note: from the look of how your data is structured, you're likely using an inappropriate data type. The categories would be better organized into a hash (e.g. HSET categories 1 "Men", then you can use HGETALL).
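The SCAN loop described above can be sketched as follows. This assumes redis-py, whose client.scan returns a (next_cursor, keys) pair:

```python
# Minimal SCAN loop: keep feeding the returned cursor back in until it is 0.
def scan_keys(client, pattern):
    keys = []
    cursor = 0
    while True:
        cursor, chunk = client.scan(cursor=cursor, match=pattern)
        keys.extend(chunk)        # a chunk may be empty or partial
        if cursor == 0:           # cursor 0 means iteration is complete
            return keys
```

redis-py also provides scan_iter, which hides the cursor bookkeeping entirely.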

redis how to autogenerate next key number

I'm crash coursing right now in redis, using 'the little redis book'.
What's not clear to me is how I can autogenerate key values.
So for example, the book uses this set statement:
set users:9001 '{"id":9001, "email":"leto@dune.gov"}'
How can i set things up so that the system keeps track of the next available id? In this case... 9002?
I know there is a INCR function... But I don't know how to incorporate both of these functions together.
So for example, let's say i do this using the redis-cli:
set mykey 1
set users:mykey '{"id":mykey, "email":"leto@dune.gov"}'
This works on the command line, but I need a way to do this programmatically. I'm thinking I would:
get mykey
INCR mykey
set users:mykey ....
Does this seem right? Is there another way to do this? Also, how do I do this programmatically using phpredis?
Thanks.
Yes, that is the right way to do it, but with one small change to your approach: when you do INCR, Redis returns the incremented value, so you can use it directly in the next command. It is simply:
counter = INCR mykey
set users:<counter> ...
So here you start from index 1, i.e. users:1, users:2 and so on.
Hope this is clear.
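In Python (redis-py) the same pattern looks like the sketch below; phpredis exposes the same incr and set methods. The users:next_id counter key and the JSON payload are illustrative choices, not anything prescribed by Redis:

```python
import json

# INCR is atomic, so concurrent writers each receive a distinct id.
def create_user(client, email):
    uid = client.incr('users:next_id')      # returns the new value directly
    payload = json.dumps({'id': uid, 'email': email})
    client.set('users:%d' % uid, payload)
    return uid
```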

Redis: How to increment hash key when adding data?

I'm iterating through data and dumping some to a Redis DB. Here's an example:
hmset id:1 username "bsmith1" department "accounting"
How can I increment the unique ID on the fly and then use that during the next hmset command? This seems like an obvious ask but I can't quite find the answer.
Use another key, a String, for storing the last ID. Before calling HMSET, call INCR on that key to obtain the next ID. Wrap the two commands in a MULTI/EXEC block or a Lua script to ensure the atomicity of the transaction.
Like Itamar mentions, you can store your index/counter in a separate key. In this example I've chosen the name index for that key.
Python 3
KEY_INDEX = 'index'
r = redis.from_url(host)

def store_user(user):
    # INCR creates the key if it doesn't exist and returns the new value,
    # so no separate GET (which could race under concurrency) is needed
    index = r.incr(KEY_INDEX)
    result = r.set('user::%d' % index, user)
    ...
Note that user::<index> is an arbitrary key chosen by me. You can use whatever you want.
If you have multiple machines writing to the same DB you probably want to use pipelines.
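On the Lua option from the first answer: inside a MULTI/EXEC block you cannot read INCR's reply to build the hash key, so a script is the simpler fully atomic route. A hedged sketch with redis-py (the last_id counter key is an illustrative name):

```python
# Atomically INCR the counter and create the hash in one server-side step.
NEXT_ID_AND_HMSET = """
local id = redis.call('INCR', KEYS[1])
redis.call('HMSET', 'id:' .. id, 'username', ARGV[1], 'department', ARGV[2])
return id
"""

def add_user(client, username, department):
    # register_script handles SCRIPT LOAD / EVALSHA under the hood
    script = client.register_script(NEXT_ID_AND_HMSET)
    return script(keys=['last_id'], args=[username, department])
```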

How to retrieve all hash values from a list in Redis?

In Redis, to store an array of objects we should use hash for the object and add its key to a list:
HMSET concept:unique_id name "concept"
...
LPUSH concepts concept:unique_id
...
I want to retrieve all hash values (or objects) in the list, but the list contains only hash keys, so a two-step command is necessary, right? This is how I'm doing it in Python:
def get_concepts():
    keys = r.lrange("concepts", 0, -1)   # step 1: read the hash keys
    pipe = r.pipeline()
    for key in keys:                     # step 2: fetch each hash
        pipe.hgetall(key)
    return pipe.execute()
Is it necessary to iterate and fetch each individual item? Can it be more optimized?
You can use the SORT command to do this:
SORT concepts BY nosort GET concept:*->name GET concept:*->some_key
Where * will expand to each item in the list.
Add LIMIT offset count for pagination.
Note that you have to enumerate each field in the hash (each field you want to fetch).
Another option is to use the new (in redis 2.6) EVAL command to execute a Lua script in the redis server, which could do what you are suggesting, but server side.
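In redis-py, the SORT answer above maps onto the client's sort method, which accepts a list of GET patterns. The field names below are the examples from the answer:

```python
# Build one GET pattern per hash field and issue a single SORT command,
# equivalent to: SORT concepts BY nosort GET concept:*->name GET concept:*->some_key
def get_concept_fields(client, fields):
    patterns = ['concept:*->%s' % f for f in fields]
    return client.sort('concepts', by='nosort', get=patterns)
```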