Is Redis LPOP / RPOP operation atomic?

I am trying to build a FIFO queue in Redis, but I am worried about concurrency. What if 2 clients try to do an RPOP operation simultaneously?
If RPOP/LPOP is not atomic, how can I achieve atomicity using MULTI/EXEC?

Is Redis LPOP / RPOP operation atomic?
Yes, both LPOP and RPOP are atomic.
What if 2 clients try to do an RPOP operation simultaneously?
If the size of the LIST is equal to or greater than 2, both clients get a different item. If the LIST has only one item, one client gets the item and the other gets a null reply. If the LIST is empty, both clients get a null reply.
Another Solution
You can also use BLPOP or BRPOP to implement the FIFO. These two commands are also atomic and will block when the LIST is empty. See the docs for details.
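For reference, a minimal sketch of this pattern in Python, assuming the redis-py client and a locally running Redis instance (the key name "jobs" is just an example):

import redis

r = redis.Redis()  # assumes a local Redis instance

QUEUE = "jobs"  # hypothetical key name

# Producer: push onto the tail of the list.
r.rpush(QUEUE, "job-1")

# Consumer: LPOP is atomic, so two consumers can never receive the same item.
item = r.lpop(QUEUE)  # returns the item, or None if the list is empty

# Blocking variant: BLPOP waits (here up to 5 seconds) when the list is empty.
blocked = r.blpop(QUEUE, timeout=5)  # returns (key, value), or None on timeout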

Related

Redis LRANGE Pop Atomicity

I have a Redis data store in which unique keys are stored. My app server will send multiple requests to Redis to get some 100 keys from the start, and I am planning to use the LRANGE command for that.
But my requirement is that each request should receive a unique set of keys, which means that once one request has fetched 100 keys from Redis, those keys should never be returned to any future request.
I saw that Redis operations are atomic, so can I assume that if multiple requests from the app server reach Redis at the same time, then, because Redis is single-threaded, it will execute LRANGE mylist 0 100 and only process the next request once that completes (i.e. once the 100 keys have been taken and removed from the list)? In other words, is the atomicity built in?
Is it ever possible, under any circumstance, that two requests get the same 100 keys?
It sounds like the command you actually want is LPOP, since LRANGE doesn't remove anything from the list.
LPOP mylist 101
(Note that LRANGE mylist 0 100 spans 101 elements, hence the count of 101; the count argument to LPOP requires Redis 6.2 or later.)
And, yes, this command is atomic, so no two clients will receive the same elements.
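As a rough illustration (not part of the original answer), the batch pop with the redis-py client might look like this, assuming Redis 6.2+ and a client version that supports the count argument:

import redis

r = redis.Redis()  # assumes a local Redis instance

# One atomic command removes and returns the whole batch, so concurrent
# callers can never receive the same elements.
batch = r.lpop("mylist", 101)  # list of up to 101 elements, or None if empty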

Check-and-increment a counter in Redis

I have an operation that needs to be done N times and no more. The operation is performed by many, possibly parallel, processes that receive requests. A process has to check whether the counter has exceeded N and, if not, increment the counter and execute the operation. I suppose I could use a Redis counter for that.
However, if I just GET and then INCR a value, I might run into a race condition that results in the operation being done more than N times. How do I perform some kind of test-and-incr operation against Redis?
I know I could use WATCH, but that's an optimistic lock. I expect there will be very many collisions each second, which would result in a lot of failures. Maybe I could just wrap the plain GET and INCR in some kind of external mutex, but I am not sure whether that would perform well enough.
You can use Redis INCR safely here.
INCR returns the value after the increment.
Check the value returned by INCR (there is no need for a separate GET) and decide whether to perform the operation based on that value.
The only caveat is that the threshold you check against is N+1: once INCR returns more than N you skip the operation, so the counter and the INCR calls may run past N even though the operation itself is limited to N.
For example, if we want the operation to happen only 3 times and INCR returns 4 after an increment, you stop doing further work, because the operation has already happened 3 times.
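A minimal sketch of this pattern in Python with redis-py (the key name and limit are illustrative, not from the question):

import redis

r = redis.Redis()  # assumes a local Redis instance

COUNTER = "op:counter"  # hypothetical key name
N = 3                   # allow the operation at most N times

def try_operation():
    count = r.incr(COUNTER)  # atomic; concurrent processes each see a distinct value
    if count <= N:
        # ... perform the operation here ...
        return True
    return False  # counter has passed N, so skip the operation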

How many keys can be deleted in a single redis del command?

I want to delete multiple redis keys using a single delete command on redis client.
Is there any limit in the number of keys to be deleted?
I will be using DEL key1 key2 ....
There's no hard limit on the number of keys, but the query buffer limit does provide a bound. Connections are closed when the buffer hits 1 GB, so practically speaking this is somewhat difficult to hit.
Docs:
https://redis.io/topics/clients
However! You may want to take into consideration that Redis is single-threaded: a time-consuming command will block all other commands until it completes. Depending on your use case, this may be a good reason to chunk your deletes into groups of, say, 1000 at a time, because it allows other commands to squeeze in between. (Whether or not the blocking is tolerable is something you'll need to determine based on your specific scenario.)
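If you go the chunking route, a sketch with the redis-py client might look like this (the chunk size of 1000 is just an example):

import redis

r = redis.Redis()  # assumes a local Redis instance

def delete_in_chunks(keys, chunk_size=1000):
    # Each DEL stays short, so other commands can run in between chunks.
    for i in range(0, len(keys), chunk_size):
        r.delete(*keys[i:i + chunk_size])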

Compare Redis commands: MULTI and MGET

There are two systems sharing a Redis database: one system only reads from Redis, the other updates it.
The read system is so busy that Redis can't keep up. To reduce the number of requests to Redis I found MGET, but I also found MULTI.
I'm sure MGET will reduce the number of requests, but will MULTI do the same? I think MULTI forces the Redis server to keep some state about the transaction and collect the commands of that transaction from the client one by one, so the total number of requests sent stays the same and only the results are returned together. Is that right?
So if I just read keyA, keyB, keyC in MULTI while the other (writing) system changes keyB's value, what will happen?
Short answer: you should use MGET.
MULTI is used for transactions, and it won't reduce the number of requests. Also, the MULTI command might be deprecated in the future, since there's a better choice: Lua scripting.
So if I just read keyA, keyB, keyC in MULTI while the other write system changes keyB's value, what will happen?
Since MULTI (with EXEC) ensures the transaction runs atomically, all three GET commands (read operations) execute together. If the update happens before your transaction executes, you'll get the new value; otherwise, you'll get the old value.
By the way, there's another option to reduce round trips: pipelining. In your case, however, MGET should be the best option.
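To make the difference concrete, a small redis-py sketch (key names are illustrative): both MGET and a pipeline cost one round trip, whereas three separate GETs cost three:

import redis

r = redis.Redis()  # assumes a local Redis instance

# MGET: a single command, a single round trip.
values = r.mget("keyA", "keyB", "keyC")

# Pipelining: several commands sent in one round trip; transaction=False
# sends them without MULTI/EXEC, which is enough for plain reads.
pipe = r.pipeline(transaction=False)
pipe.get("keyA")
pipe.get("keyB")
pipe.get("keyC")
values = pipe.execute()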

How to set expiry for every item in a Redis queue

I am using Jedis, a Redis Java client. I have a queue of string items, and as usual I am using LPUSH, LPOP, RPUSH and RPOP for the necessary operations. But I would like to set an expiry for each individual item in the queue. Is that possible?
This is not possible in Redis: by design, individual list items cannot expire, for the sake of keeping Redis simple and fast.
You can either store an expiry value along with the string in the list, or store a separate list of expiry times so your application knows whether an item has expired.
There is also an alternative solution discussed here: store the values in a sorted set with expiry timestamps as scores and select only those members whose scores are greater than a given timestamp. (This of course leaves it up to your app to clear the expired elements from the set.)
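A sketch of that sorted-set alternative with the redis-py client (the key name and TTL are illustrative; the question uses Jedis, which exposes the same commands):

import time
import redis

r = redis.Redis()  # assumes a local Redis instance

KEY = "queue:items"  # hypothetical key name
TTL = 60             # per-item lifetime in seconds

def add(item, ttl=TTL):
    # Score each member with its expiry timestamp.
    r.zadd(KEY, {item: time.time() + ttl})

def live_items():
    # Only members whose expiry lies in the future are still "alive".
    return r.zrangebyscore(KEY, time.time(), "+inf")

def purge_expired():
    # The application is responsible for removing expired members.
    r.zremrangebyscore(KEY, "-inf", time.time())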