Redis: increment several fields in several hsets

I have data for several users in Redis, e.g.
hset - user111; field - dayssincelogin .....
I want to periodically update dayssincelogin for all users. One way to do it is:
KEYS user*
HINCRBY ${key from above} dayssincelogin 1
Is it possible to do this in a single call? If not, what's the most optimal way? I'm using Redis Cluster and a Java client.

You can't do multiple increments in one command, but you can batch your commands together for performance gains.
Use Redis pipelining or Lua scripting.
As @mp911de suggested, use Lua scripting via EVAL; Jedis does support it through jedis.eval().
You can also use pipelining to execute your bulk commands faster; read up on Redis pipelining for more information.
Here is sample code for Jedis pipelining:
Pipeline p = jedis.pipelined();
p.multi();                                    // queue MULTI: the commands below run as one transaction
Response<Long> r1 = p.hincrBy("a", "f1", -1);
Response<Long> r2 = p.hincrBy("a", "f1", -2);
Response<List<Object>> r3 = p.exec();         // queue EXEC
List<Object> result = p.syncAndReturnAll();   // flush the pipeline and read all replies
Edit: Redis Cluster allows multi-key operations only when all keys hash to the same slot. You should arrange your keys to ensure that data affinity, e.g. key1.{foo} and key5678.{foo} will reside on the same node because they share the {foo} hash tag.
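For the original question, here is a minimal sketch of the SCAN + pipeline approach with Jedis. It assumes a single-node connection on localhost; on a cluster you would have to run the scan against each master node, since SCAN is per-node. The key pattern and field name come from the question.
import java.util.List;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.Pipeline;
import redis.clients.jedis.ScanParams;
import redis.clients.jedis.ScanResult;

public class BumpDaysSinceLogin {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // SCAN instead of KEYS: KEYS blocks the server on large keyspaces.
            ScanParams params = new ScanParams().match("user*").count(500);
            String cursor = ScanParams.SCAN_POINTER_START;
            do {
                ScanResult<String> batch = jedis.scan(cursor, params);
                List<String> keys = batch.getResult();
                if (!keys.isEmpty()) {
                    // One round trip per batch instead of one per user.
                    Pipeline p = jedis.pipelined();
                    for (String key : keys) {
                        p.hincrBy(key, "dayssincelogin", 1);
                    }
                    p.sync();
                }
                cursor = batch.getCursor();
            } while (!ScanParams.SCAN_POINTER_START.equals(cursor));
        }
    }
}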

Related

Redis Cluster transactions support

Is there any support for transactions in a Redis Cluster Python client? Obviously the keys participating in the transaction must be mapped to the same slot using hash tags.
For example, how can the following commands be performed atomically?
redis.delete("key1.{123}")
redis.set("key2.{123}", 1)
You can use pipelines with transaction=True (the default):
pipe = redis.pipeline(transaction=True)  # queued commands are wrapped in MULTI/EXEC
pipe.delete("key1.{123}")
pipe.set("key2.{123}", 1)
pipe.execute()
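For a Java equivalent, here is a hedged sketch with plain Jedis. It assumes you connect directly to the node that owns the {123} slot, which works because both keys share that hash tag (as far as I know, JedisCluster itself does not expose MULTI).
import redis.clients.jedis.Jedis;
import redis.clients.jedis.Transaction;

public class AtomicPair {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // Both keys hash to the same slot thanks to the {123} tag,
            // so they can legally appear in one MULTI/EXEC block.
            Transaction t = jedis.multi();
            t.del("key1.{123}");
            t.set("key2.{123}", "1");
            t.exec();
        }
    }
}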

Sometimes redis keys are not deleted

I store the results of a database query (a list) in Redis to speed up loading data on the site. When a CMS user performs any operation (creates, deletes, or edits an article), the keys in Redis are dropped and fresh data is loaded from the database.
But sometimes one or two users' operations fail to drop their keys, and stale data remains in Redis. The network was available and nothing was turned off. Why does this happen? Are there any typical causes I should know about?
Do I need to lock the database so that there are no concurrent connections? But Redis is single-threaded anyway. What am I doing wrong? The function that drops the keys is very simple:
function articlesRedisDrop($redis, $prefix)
{
    // $redis must be passed in (or imported via `global`);
    // it is not visible inside the function scope otherwise.
    $keys = $redis->keys($prefix."*");
    foreach($keys as $key)
    {
        $redis->del($key);
    }
}
My guess is that this is an atomicity problem: after $redis->keys($prefix."*") returns and before $redis->del($key) runs, another connection can add fresh data to Redis, leaving the cache in an inconsistent state.
You can try combining the KEYS and DEL operations into a single Lua script:
-- delete every key that starts with the given prefix, atomically
local keys = redis.call("keys", string.format("%s*", KEYS[1]))
for _, key in pairs(keys) do
    redis.call("del", key)
end
Then run the script with the EVAL command, passing the prefix as a parameter. If you run into performance problems with the KEYS command, try SCAN instead, or store all of the prefixed keys in a set and then read and delete them from there.
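If you are calling this from Java, here is a hedged sketch of running that script through Jedis (the "articles:" prefix is made up, and a return is added so the caller sees how many keys were dropped):
import redis.clients.jedis.Jedis;

public class DropByPrefix {
    public static void main(String[] args) {
        String script =
            "local keys = redis.call('keys', string.format('%s*', KEYS[1])) " +
            "for _, key in pairs(keys) do redis.call('del', key) end " +
            "return #keys";
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // numkeys = 1: the prefix is passed as KEYS[1], as in the answer above.
            Object deleted = jedis.eval(script, 1, "articles:");
            System.out.println("deleted " + deleted + " keys");
        }
    }
}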

Redis multiple calls vs lua script

I have the following use case:
Set the key with a value.
Get the key if it already exists; otherwise set it with an expiry.
Basically, I am trying to do a SET with NX and GET. Here is the Lua script I came up with:
local v = redis.call('GET', KEYS[1])
if v then
    return v
end
redis.call('SETEX', KEYS[1], ARGV[1], ARGV[2])
I am slightly confused whether I should use the above Lua script or execute two separate commands, a GET first and then a SET.
Are there any pros or cons to the Lua script, or would two separate commands be better?
Yes, you should use the script.
If you use two separate Redis commands then you'll end up with a race condition: another process might set the value after your GET and before your SETEX, causing you to overwrite it. Your logic requires this sequence of commands to be atomic, and the best way to do that in Redis is with a Lua script.
It would be possible to achieve this without the script by using MULTI and WATCH, but the Lua script is much more straightforward.
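For reference, here is a hedged sketch of invoking that script from Jedis; the key name, TTL, and value are placeholders, and a return is added at the end so the caller also gets the value on the set path:
import java.util.Arrays;
import java.util.Collections;
import redis.clients.jedis.Jedis;

public class GetOrSetEx {
    public static void main(String[] args) {
        String script =
            "local v = redis.call('GET', KEYS[1]) " +
            "if v then return v end " +
            "redis.call('SETEX', KEYS[1], ARGV[1], ARGV[2]) " +
            "return ARGV[2]";
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // Returns the existing value, or atomically sets and returns the new one.
            Object v = jedis.eval(script,
                    Collections.singletonList("mykey"),
                    Arrays.asList("60", "default-value"));
            System.out.println(v);
        }
    }
}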

Using redis as an LRU cache for postgres

I have a Postgres 9.3 DB and I want to use Redis to cache calls to the DB (basically like memcached). I followed these docs, which means I have configured Redis to work as an LRU cache. But I am unsure what to do next. How do I tell Redis to track calls to the DB and cache their output? How can I tell it's working?
In pseudo code:
see if redis has the record by 'record_type:record_id'
if so return the result
if not then query postgres for the record_id in the record_type table
store the result in redis by 'record_type:record_id'
return the result
This might have to be a custom adapter for the query engine that you are using.
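As a sketch of that pseudocode in Java with Jedis: loadFromPostgres is a hypothetical stand-in for your actual JDBC/DAO call, and the one-hour TTL is arbitrary.
import redis.clients.jedis.Jedis;

public class CacheAside {
    // Hypothetical placeholder for the real database lookup.
    static String loadFromPostgres(String recordType, long recordId) {
        return "row-for-" + recordType + ":" + recordId;
    }

    static String fetchRecord(Jedis jedis, String recordType, long recordId) {
        String cacheKey = recordType + ":" + recordId;
        String cached = jedis.get(cacheKey);
        if (cached != null) {
            return cached;                        // cache hit
        }
        String fromDb = loadFromPostgres(recordType, recordId);
        if (fromDb != null) {
            // With maxmemory + an LRU policy, Redis evicts cold entries for you;
            // the TTL here is just an extra staleness bound.
            jedis.setex(cacheKey, 3600, fromDb);
        }
        return fromDb;
    }
}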

Redis: delete all keys that are in one list

I have a list A in Redis containing the values
K1, K2, K3
I want to delete all keys in Redis that match the values in this list.
Is there a way to do this in one command, or with pipelining?
You can fetch your list on the client side and then pipe DEL commands to the server. There is no other way to accomplish this, as the Lua scripting feature is missing for the moment. With it, you could run the task on the server without fetching the whole list to the client.
Yes, you can do that using EVAL and Lua (since Redis 2.6). Note that A is a list, so the members come from LRANGE rather than SMEMBERS, and the key should be passed as KEYS[1]:
eval "redis.call('del', unpack(redis.call('lrange', KEYS[1], 0, -1)))" 1 A
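And a hedged Jedis version of the same idea, guarding the empty-list case (DEL with no arguments is an error):
import redis.clients.jedis.Jedis;

public class DeleteListedKeys {
    public static void main(String[] args) {
        String script =
            "local members = redis.call('lrange', KEYS[1], 0, -1) " +
            "if #members == 0 then return 0 end " +
            "return redis.call('del', unpack(members))";
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            Object deleted = jedis.eval(script, 1, "A");
            System.out.println("deleted " + deleted + " keys");
        }
    }
}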